WorldWideScience

Sample records for standard curve method

  1. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    Science.gov (United States)

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  2. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

    Abstract Background Currently real-time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data and processing may notably influence final results. The data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real-time PCR. Results We designed a procedure for data processing in relative real-time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from coordinates of points where the threshold line crosses fluorescence plots obtained after the noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced to results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit to the routine laboratory practice. Different options are discussed for aggregation of data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that
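
    A minimal Python sketch of steps (III)-(V) above, assuming a linear standard curve CP = a + b·log10(N0); the slope, intercept, and amplification data are hypothetical, and the replicate variance is traced to the final quantity by first-order error propagation as the abstract describes.

```python
import numpy as np

def crossing_point(cycles, fluor, threshold):
    """Linearly interpolate the fractional cycle where fluorescence crosses the threshold."""
    i = np.nonzero(fluor >= threshold)[0][0]      # first reading at or above threshold
    f0, f1 = fluor[i - 1], fluor[i]
    return cycles[i - 1] + (threshold - f0) / (f1 - f0)

# Hypothetical standard curve: CP = a + b*log10(N0), fitted to dilution standards.
a, b = 38.0, -3.32                                # slope near -3.32 implies ~100% efficiency

def quantify(cp_replicates, a, b):
    """Mean concentration and variance propagated from CP replicates (law of error propagation)."""
    cp = np.asarray(cp_replicates, dtype=float)
    cp_mean, cp_var = cp.mean(), cp.var(ddof=1) / len(cp)
    n = 10.0 ** ((cp_mean - a) / b)
    # d(log10 N)/dCP = 1/b and dN/d(log10 N) = N*ln(10): first-order propagation
    n_var = (n * np.log(10.0) / b) ** 2 * cp_var
    return n, n_var

cycles = np.arange(1, 41, dtype=float)
fluor = 1.0 / (1.0 + np.exp(-(cycles - 24.0) / 1.5))   # toy normalized amplification plot
cp = crossing_point(cycles, fluor, threshold=0.2)
print(quantify([cp, cp + 0.15, cp - 0.1], a, b))
```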

  3. Construction of the World Health Organization child growth standards: Selection of methods for attained growth curves

    NARCIS (Netherlands)

    Borghi, E.; Onis, M. de; Garza, C.; Broeck, J. van den; Frongillo, E.A.; Grummer-Strawn, L.; Buuren, S. van; Pan, H.; Molinari, L.; Martorell, R.; Onyango, A.W.; Martines, J.C.; Pinol, A.; Siyam, A.; Victoria, C.G.; Bhan, M.K.; Araújo, C.L.; Lartey, A.; Owusu, W.B.; Bhandari, N.; Norum, K.R.; Bjoerneboe, G.-E.Aa.; Mohamed, A.J.; Dewey, K.G.; Belbase, K.; Chumlea, C.; Cole, T.; Shrimpton, R.; Albernaz, E.; Tomasi, E.; Cássia Fossati da Silveira, R. de; Nader, G.; Sagoe-Moses, I.; Gomez, V.; Sagoe-Moses, C.; Taneja, S.; Rongsen, T.; Chetia, J.; Sharma, P.; Bahl, R.; Baerug, A.; Tufte, E.; Alasfoor, D.; Prakash, N.S.; Mabry, R.M.; Al Rajab, H.J.; Helmi, S.A.; Nommsen-Rivers, L.A.; Cohen, R.J.; Heinig, M.J.

    2006-01-01

    The World Health Organization (WHO), in collaboration with a number of research institutions worldwide, is developing new child growth standards. As part of a broad consultative process for selecting the best statistical methods, WHO convened a group of statisticians and child growth experts to

  4. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights the data with both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''chi-squared matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
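
    A sketch of the idea, not the original VA02A-based program: scipy's least_squares stands in for the unconstrained minimizer, the standard masses enter the parameter vector alongside the calibration constants, and both residual blocks are weighted by their respective errors. The calibration form and all numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: nominal masses (mg) of gravimetric standards with 0.2% uncertainty,
# and measured XRF responses with known system (counting) errors.
m_nom = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
sig_m = 0.002 * m_nom
rng = np.random.default_rng(0)
resp = 5000.0 * m_nom / (1.0 + 0.3 * m_nom) + rng.normal(0, 20, m_nom.size)
sig_r = np.full(m_nom.size, 20.0)

def residuals(p):
    # p = [c0, c1, m_1..m_n]: calibration parameters plus the 'true' standard masses
    c0, c1 = p[0], p[1]
    m = p[2:]
    model = c0 * m / (1.0 + c1 * m)              # assumed saturating calibration form
    return np.concatenate([(resp - model) / sig_r,   # system-error-weighted misfit
                           (m_nom - m) / sig_m])     # mass-error-weighted misfit

p0 = np.concatenate([[4000.0, 0.1], m_nom])
fit = least_squares(residuals, p0)
J = fit.jac
cov = np.linalg.inv(J.T @ J)       # error estimates from the curvature of the chi-squared matrix
print("c0, c1 =", fit.x[:2], "sigma =", np.sqrt(np.diag(cov))[:2])
```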

  5. Incorporating experience curves in appliance standards analysis

    International Nuclear Information System (INIS)

    Desroches, Louis-Benoit; Garbesi, Karina; Kantner, Colleen; Van Buskirk, Robert; Yang, Hung-Chia

    2013-01-01

    There exists considerable evidence that manufacturing costs and consumer prices of residential appliances have decreased in real terms over the last several decades. This phenomenon is generally attributable to manufacturing efficiency gained with cumulative experience producing a certain good, and is modeled by an empirical experience curve. The technical analyses conducted in support of U.S. energy conservation standards for residential appliances and commercial equipment have, until recently, assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. This assumption does not reflect real market price dynamics. Using price data from the Bureau of Labor Statistics, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These experience curves were incorporated into recent energy conservation standards analyses for these products. Including experience curves increases the national consumer net present value of potential standard levels. In some cases a potential standard level exhibits a net benefit when considering experience, whereas without experience it exhibits a net cost. These results highlight the importance of modeling more representative market prices. Highlights: Past appliance standards analyses have assumed constant equipment prices; there is considerable evidence of consistent real price declines; we incorporate experience curves for several large appliances into the analysis; the revised analyses demonstrate larger net present values of potential standards; past standards analyses may therefore have undervalued benefits.
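
    A minimal sketch of fitting an empirical experience curve P = P0·X^(-b) to hypothetical price and cumulative-production data; the learning rate per doubling of production follows from the fitted exponent.

```python
import numpy as np

# Hypothetical series: cumulative shipments (millions of units) and real price index.
cum_prod = np.array([10., 20., 40., 80., 160., 320.])
price = np.array([100., 91., 83., 76., 69., 63.])

# Fit P = P0 * X**(-b) by linear regression in log-log space.
slope, log_p0 = np.polyfit(np.log(cum_prod), np.log(price), 1)
b = -slope
learning_rate = 1.0 - 2.0 ** (-b)   # fractional price drop per doubling of production
print(f"b = {b:.3f}, price falls {learning_rate:.1%} per doubling")
```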

  6. Incorporating Experience Curves in Appliance Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery; Kantner, Colleen; Lekov, Alex; Meyers, Stephen; Rosenquist, Gregory; Buskirk, Robert Van; Yang, Hung-Chia; Desroches, Louis-Benoit

    2011-10-31

    The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as the experience effect, which is modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards analyses for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when experience is incorporated. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.

  7. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer procedure is described for fitting radioimmunological standard curves by means of an exponential model on a desk-top calculator. This method has been applied to a variety of radioassays and the results are in accordance with those obtained by more sophisticated models. (Original in French)

  8. A Novel Reverse-Transcriptase Real-Time PCR Method for Quantification of Viable Vibrio Parahaemolyticus in Raw Shrimp Based on a Rapid Construction of Standard Curve Method

    OpenAIRE

    Mengtong Jin; Haiquan Liu; Wenshuo Sun; Qin Li; Zhaohuan Zhang; Jibing Li; Yingjie Pan; Yong Zhao

    2015-01-01

    Vibrio parahaemolyticus is an important pathogen that causes foodborne illness associated with seafood. Therefore, rapid and reliable methods to detect and quantify the total viable V. parahaemolyticus in seafood are needed. In this study, an RNA-based real-time reverse-transcriptase PCR (RT-qPCR) assay without an enrichment step was developed for detection and quantification of the total viable V. parahaemolyticus in shrimp. RNA standards with the target segments were synthesized in vitro with T7 RNA p...

  9. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, the cubic B-spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  10. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights the data with both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''chi-squared matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s

  11. Method of construction spatial transition curve

    Directory of Open Access Journals (Sweden)

    S.V. Didanov

    2013-04-01

    Purpose. The performance of rail transport (speed of rolling stock, traffic safety, etc.) depends largely on the quality of the track. A special role here is played by the transition curve, which ensures a smooth transition from a linear to a circular section of the route. The article deals with modeling a spatial transition curve based on a parabolic distribution of curvature and torsion, continuing research conducted by the authors on the spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The system is solved numerically using the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are calculated by finding the unknown coefficients of the parabolic distribution of curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised, and on this basis software for geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane is developed. Practical value. The resulting curve can be applied in any sector of the economy where it is necessary to ensure a smooth transition from a linear to a circular section of a curved spatial bypass. Examples include transition curves in the construction of railway lines, roads, pipes, profiles, flat sections of the working blades of turbines and compressors, ships, planes, cars, etc.

  12. Comparison of power curve monitoring methods

    Directory of Open Access Journals (Sweden)

    Cambron Philippe

    2017-01-01

    Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In the past years, important work has been conducted on PCM. Various methodologies have been proposed, each with interesting results. However, it is difficult to compare these methods because each was developed using its own data set. The objective of this work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies and that the effectiveness of the control chart depends on the type of shift observed.

  13. Standard gestational birth weight ranges and Curve in Yaounde ...

    African Journals Online (AJOL)

    The aim of this study was to establish standard ranges and curve of mean gestational birth weights validated by ultrasonography for the Cameroonian population in Yaoundé. This cross sectional study was carried out in the Obstetrics & Gynaecology units of 4 major hospitals in the metropolis between March 5 and ...

  14. Mathematics of quantitative kinetic PCR and the application of standard curves.

    Science.gov (United States)

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
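
    A small sketch of the standard-curve mathematics discussed here, using hypothetical dilution data: the amplification efficiency follows from the fitted slope via E = 10^(-1/slope) - 1, and unknowns are read off the regression line.

```python
import numpy as np

# Hypothetical dilution series: known target amounts (copies) and observed threshold cycles.
n0 = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
ct = np.array([14.1, 17.5, 20.8, 24.2, 27.6])

slope, intercept = np.polyfit(np.log10(n0), ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0     # E = 1 means perfect doubling each cycle
print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")

def absolute_quantity(ct_unknown):
    """Read an unknown off the standard curve: log10(N0) = (Ct - intercept)/slope."""
    return 10.0 ** ((ct_unknown - intercept) / slope)

print(absolute_quantity(22.0))
```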

  15. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  16. SCINFI II A program to calculate the standardization curve in liquid scintillation counting

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    1985-07-01

    A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for ³H and the polynomial or tabulated relation between counting efficiency and figure of merit for both ³H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem radionuclide is ¹⁴C. The results of four different computation methods are compared. (Author) 17 refs.
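
    A sketch of the free parameter idea, with hypothetical efficiency-versus-figure-of-merit relations standing in for the polynomial or tabulated input SCINFI II expects: a measured ³H efficiency is inverted to a figure of merit, which then yields the problem nuclide's efficiency at the same quench.

```python
import numpy as np

# Hypothetical tabulated relations: counting efficiency versus figure of merit M
# for tritium and for the problem nuclide (e.g. 14C). Stand-ins, not real data.
M = np.linspace(1.0, 10.0, 50)
eff_h3 = 1.0 - np.exp(-0.35 * M)
eff_c14 = 1.0 - np.exp(-1.2 * M)

def problem_efficiency(eff_h3_measured):
    """Translate a measured 3H efficiency into the problem nuclide's efficiency
    through the common figure of merit (free parameter method)."""
    m = np.interp(eff_h3_measured, eff_h3, M)   # invert eff_3H(M); eff_h3 is monotonic
    return np.interp(m, M, eff_c14)

# A quench standardization curve for 3H then maps point-by-point to the problem nuclide:
for e3 in (0.2, 0.35, 0.5):
    print(f"3H eff {e3:.2f} -> 14C eff {problem_efficiency(e3):.3f}")
```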

  17. SCINFI II A program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1985-01-01

    A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for ³H and the polynomial or tabulated relation between counting efficiency and figure of merit for both ³H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem radionuclide is ¹⁴C. The results of four different computation methods are compared. (Author) 17 refs

  18. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    Science.gov (United States)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for assessing the reliability of wheel-set assembly. In the past, most production enterprises mainly used manual inspection methods to judge assembly quality, and cases of misjudgment occurred. For this reason, research on the standard was carried out, and automatic judgment of the press-fit curve was analyzed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.

  19. Implementation of the Master Curve method in ProSACC

    International Nuclear Information System (INIS)

    Feilitzen, Carl von; Sattari-Far, Iradj

    2012-03-01

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent, and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of the fracture toughness in the transition region relieves the overconservatism that has been observed in using the ASME-KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessment of components containing crack-like defects and for defect tolerance analysis. The program allows assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the
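
    As background, a sketch of the Master Curve shape itself, using the ASTM E1921 median for 1T specimens, K_Jc(med) = 30 + 70·exp(0.019(T - T0)) MPa·√m, and its three-parameter Weibull tolerance bounds; the T0 value is hypothetical and this is not the ProSACC implementation.

```python
import numpy as np

def master_curve_median(T, T0):
    """ASTM E1921 median fracture toughness (MPa*sqrt(m)) for 1T specimens."""
    return 30.0 + 70.0 * np.exp(0.019 * (T - T0))

def tolerance_bound(T, T0, p):
    """Toughness at cumulative failure probability p, from the three-parameter
    Weibull form with K_min = 20 and shape 4 used by the Master Curve method."""
    k0 = 20.0 + (master_curve_median(T, T0) - 20.0) / np.log(2.0) ** 0.25
    return 20.0 + (k0 - 20.0) * (-np.log(1.0 - p)) ** 0.25

T0 = -60.0                     # reference temperature (deg C), material-specific
for T in (-100.0, -60.0, -20.0):
    print(T, master_curve_median(T, T0), tolerance_bound(T, T0, 0.05))
```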

  20. Implementation of the Master Curve method in ProSACC

    Energy Technology Data Exchange (ETDEWEB)

    Feilitzen, Carl von; Sattari-Far, Iradj [Inspecta Technology AB, Stockholm (Sweden)

    2012-03-15

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent, and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of the fracture toughness in the transition region relieves the overconservatism that has been observed in using the ASME-KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessment of components containing crack-like defects and for defect tolerance analysis. The program allows assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption in this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the

  1. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    Science.gov (United States)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain the performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared with experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with measurements from an experimental setup conforming to the AMCA fan performance testing standard.

  2. MATHEMATICAL METHODS TO DETERMINE THE INTERSECTION CURVES OF THE CYLINDERS

    Directory of Open Access Journals (Sweden)

    POPA Carmen

    2010-07-01

    The aim of this paper is to establish the intersection curves between cylinders by using the Mathematica program. This is achieved by deriving the curve equations and introducing them into the Mathematica program. The paper discusses three right cylinders and a fourth inclined at 45 degrees. The intersection curves can also be obtained by using the classical methods of descriptive geometry.
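
    A short numerical sketch of the same kind of intersection (two right circular cylinders with perpendicular axes), done in Python rather than Mathematica; the radii are hypothetical.

```python
import numpy as np

# Cylinders x^2 + y^2 = r1^2 (axis along z) and y^2 + z^2 = r2^2 (axis along x).
# Parametrize the first by angle t, then solve the second for z (real where r2^2 >= y^2).
r1, r2 = 1.0, 1.5
t = np.linspace(0.0, 2.0 * np.pi, 400)
x, y = r1 * np.cos(t), r1 * np.sin(t)
z2 = r2 ** 2 - y ** 2
mask = z2 >= 0.0                       # keep only points where the curve is real

# Two symmetric branches of the intersection curve:
upper = np.column_stack([x[mask], y[mask], np.sqrt(z2[mask])])
lower = np.column_stack([x[mask], y[mask], -np.sqrt(z2[mask])])
print(upper[:3])
```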

  3. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearizing exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power series expansion.

  4. CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method

    International Nuclear Information System (INIS)

    Olson, D.G.

    1992-01-01

    1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine that produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma-ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made, starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV, and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem - Maxima of: 2000 data points in the calibration curve output (LSFIT); 30 input data points; 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two
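
    A rough sketch of the LSFIT scheme under stated assumptions: a least squares polynomial in log-log space above 120 keV, a spline joining it to the low-energy points, and the fractional difference printed as the quality-of-fit measure. The efficiency data are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical efficiency calibration points: energy (keV), measured efficiency.
energy = np.array([30., 60., 88., 122., 165., 392., 662., 898., 1173., 1332., 1836.])
eff = np.array([0.05, 0.21, 0.30, 0.33, 0.30, 0.17, 0.11, 0.085, 0.068, 0.061, 0.047])

hi = energy >= 120.0
# Least squares polynomial in log-log space for the high-energy branch.
coeff = np.polyfit(np.log(energy[hi]), np.log(eff[hi]), deg=2)
poly = lambda e: np.exp(np.polyval(coeff, np.log(e)))

# Spline joins the low-energy points to the fitted high-energy branch.
knots_e = np.concatenate([energy[~hi], energy[hi]])
knots_y = np.concatenate([eff[~hi], poly(energy[hi])])
spline = CubicSpline(np.log(knots_e), np.log(knots_y))
efficiency = lambda e: np.exp(spline(np.log(e))) if e < 120.0 else poly(e)

# Quality of fit: fractional difference between measured and fitted values.
for e, y in zip(energy, eff):
    f = efficiency(e)
    print(f"{e:7.1f} keV  measured {y:.3f}  fitted {f:.3f}  frac diff {(f - y) / y:+.2%}")
```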

  5. Methods for predicting isochronous stress-strain curves

    International Nuclear Information System (INIS)

    Kiyoshige, Masanori; Shimizu, Shigeki; Satoh, Keisuke.

    1976-01-01

    Isochronous stress-strain curves show the relation between stress and total strain at a given temperature, with time as a parameter; they are drawn up from creep test results at various stress levels at a fixed temperature. The concept of isochronous stress-strain curves was proposed by McVetty in the 1930s and has been used for the design of aero-engines. Recently, the high-temperature characteristics of materials have been presented as isochronous stress-strain curves in design guides for nuclear equipment and structures operating in the high-temperature creep region. It is prescribed that these curves be used as criteria for determining design stress intensity or as data for analyzing the superposed effects of creep and fatigue. For isochronous stress-strain curves used in the design of nuclear equipment with very long service life, it is impractical to determine the curves directly from the results of long-time creep tests; accordingly, a method of predicting long-time stress-strain curves from short-time creep test results must be established. Two methods were studied: one proposed by the authors, which uses creep constitutive equations taking the first and second creep stages into account, and one using the Larson-Miller parameter. Both methods were found to be reliable for the prediction. (Kako, I.)
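
    A worked sketch of the Larson-Miller extrapolation mentioned above: the parameter LMP = T(C + log10 t), computed for a hypothetical accelerated test, is used to infer the equivalent life at a lower service temperature; C = 20 is a common but material-dependent choice.

```python
import numpy as np

C = 20.0   # Larson-Miller constant, commonly taken near 20 for steels

def lmp(T_kelvin, t_hours):
    """Larson-Miller parameter: an equivalent time-temperature measure for creep."""
    return T_kelvin * (C + np.log10(t_hours))

# Hypothetical short-time rupture datum: 100 h at 873 K at a given stress level.
P = lmp(873.0, 100.0)

# Time at service temperature giving the same parameter value (same creep damage):
T_service = 823.0
t_service = 10.0 ** (P / T_service - C)
print(f"LMP = {P:.0f}; equivalent life at {T_service} K: {t_service:.0f} h")
```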

  6. Qualitative Comparison of Contraction-Based Curve Skeletonization Methods

    NARCIS (Netherlands)

    Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.

    2013-01-01

    In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint

  7. Scinfi, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, SCINFI, written in BASIC, was developed to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for ³H and the polynomial relations between counting efficiency and figure of merit for both ³H and the problem nuclide (e.g. ¹⁴C). The program is applied to the computation of the efficiency-quench standardization curve for ¹⁴C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of ¹⁴C standardized samples. (author)

  8. SCINFI, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, SCINFI, written in BASIC, was developed to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for ³H and the polynomial relations between counting efficiency and figure of merit for both ³H and the problem nuclide (e.g. ¹⁴C). The program is applied to the computation of the efficiency-quench standardization curve for ¹⁴C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of ¹⁴C standardized samples. (Author)

  9. herd levels and standard lactation curves for south african jersey

    African Journals Online (AJOL)

    [Fragment of a flattened data table (protein yield statistics from Table 1) omitted.] According to the standard deviations in Table 1, much more variation exists for 305-day yields of Holstein cows in comparison with Jersey cows, resulting in upper limits of herd levels ranging from 3487.7 kg to more than 11 219.2 kg for adjusted 305-day milk yield, ...

  10. Construction of molecular potential energy curves by an optimization method

    Science.gov (United States)

    Wang, J.; Blake, A. J.; McCoy, D. G.; Torop, L.

    1991-01-01

    A technique for determining the potential energy curves of diatomic molecules from measurements of diffuse or continuum spectra is presented. It is based on a numerical procedure which minimizes the difference between the calculated spectra and the experimental measurements, and can be used in cases where other techniques, such as the conventional RKR method, are not applicable. With the aid of suitable spectral data, the associated dipole electronic transition moments can be obtained simultaneously. The method is illustrated by modeling the "longest band" of molecular oxygen to extract the E ³Σu⁻ and B ³Σu⁻ potential curves in analytical form.

  11. A non-iterative method for fitting decay curves with background

    International Nuclear Information System (INIS)

    Mukoyama, T.

    1982-01-01

    A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by the use of the difference equation and parameters are determined by the standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
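
    A minimal sketch of the linearization described: for equally spaced points, y(i+1) = a·y(i) + c with a = exp(-λΔt) and c = B(1 - a), so a standard linear least-squares fit on successive pairs recovers λ and B without iteration. The data below are pseudo-experimental, as in the paper.

```python
import numpy as np

# Pseudo-experimental decay curve with constant background: y = A*exp(-lam*t) + B
rng = np.random.default_rng(1)
dt, lam, A, B = 1.0, 0.25, 1000.0, 50.0
t = np.arange(0.0, 30.0, dt)
y = A * np.exp(-lam * t) + B + rng.normal(0, 5, t.size)

# For equal spacing, y[i+1] = a*y[i] + c with a = exp(-lam*dt) and c = B*(1 - a):
# estimate a and c by ordinary linear least squares on successive pairs.
a, c = np.polyfit(y[:-1], y[1:], 1)
lam_est = -np.log(a) / dt
B_est = c / (1.0 - a)

# With lam and B fixed, A follows from one more linear least-squares step.
w = np.exp(-lam_est * t)
A_est = np.sum((y - B_est) * w) / np.sum(w ** 2)
print(lam_est, A_est, B_est)
```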

  12. Waist Circumferences of Chilean Students: Comparison of the CDC-2012 Standard and Proposed Percentile Curves

    Directory of Open Access Journals (Sweden)

    Rossana Gómez-Campos

    2015-07-01

    The measurement of waist circumference (WC) is considered to be an important means to control overweight and obesity in children and adolescents. The objectives of the study were to (a) compare the WC measurements of Chilean students with the international CDC-2012 standard and other international standards, and (b) propose a specific measurement value for the WC of Chilean students based on age and sex. A total of 3892 students (6 to 18 years old) were assessed. Weight, height, body mass index (BMI), and WC were measured. WC was compared with the CDC-2012 international standard. Percentiles were constructed based on the LMS method. Chilean males had a greater WC during infancy. Subsequently, in late adolescence, males showed values lower than those of the international standards. Chilean females demonstrated values similar to the standards until the age of 12. Subsequently, females showed lower values. The 85th and 95th percentiles were adopted as cutoff points for evaluating overweight and obesity based on age and sex. The WC of Chilean students differs from the CDC-2012 curves. The regional norms proposed are a means to identify children and adolescents with a high risk of suffering from overweight and obesity disorders.
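
    A sketch of how LMS-based centiles and z-scores are evaluated once the age- and sex-specific L, M, S values are known (Cole's formulas); the L, M, S numbers below are illustrative only, not the Chilean reference values.

```python
import numpy as np
from scipy.stats import norm

def lms_zscore(x, L, M, S):
    """Cole's LMS z-score: z = ((x/M)**L - 1)/(L*S), with the L -> 0 limit ln(x/M)/S."""
    if abs(L) < 1e-8:
        return np.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def lms_centile(p, L, M, S):
    """Measurement value at centile p for given L, M, S."""
    z = norm.ppf(p)
    if abs(L) < 1e-8:
        return M * np.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Illustrative L, M, S for one age/sex group; waist circumference in cm.
L, M, S = -1.6, 58.0, 0.08
print(lms_centile(0.85, L, M, S), lms_centile(0.95, L, M, S))   # 85th and 95th cutoffs
print(lms_zscore(66.0, L, M, S))
```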

  13. Waist Circumferences of Chilean Students: Comparison of the CDC-2012 Standard and Proposed Percentile Curves

    Science.gov (United States)

    Gómez-Campos, Rossana; Lee Andruske, Cinthya; Hespanhol, Jefferson; Sulla Torres, Jose; Arruda, Miguel; Luarte-Rocha, Cristian; Cossio-Bolaños, Marco Antonio

    2015-01-01

    The measurement of waist circumference (WC) is considered to be an important means to control overweight and obesity in children and adolescents. The objectives of the study were to (a) compare the WC measurements of Chilean students with the international CDC-2012 standard and other international standards, and (b) propose a specific measurement value for the WC of Chilean students based on age and sex. A total of 3892 students (6 to 18 years old) were assessed. Weight, height, body mass index (BMI), and WC were measured. WC was compared with the CDC-2012 international standard. Percentiles were constructed based on the LMS method. Chilean males had a greater WC during infancy. Subsequently, in late adolescence, males showed values lower than those of the international standards. Chilean females demonstrated values similar to the standards until the age of 12. Subsequently, females showed lower values. The 85th and 95th percentiles were adopted as cutoff points for evaluating overweight and obesity based on age and sex. The WC of Chilean students differs from the CDC-2012 curves. The regional norms proposed are a means to identify children and adolescents with a high risk of suffering from overweight and obesity disorders. PMID:26184250

  14. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
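
    A sketch of a two-term Gaussian fit with RMSE and R² as goodness-of-fit measures, in the spirit of the study; the radiation profile below is synthetic, not the UTP data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model, analogous to a 'gauss2' fit type."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

# Hypothetical hourly global solar radiation profile (W/m^2) over one day.
hour = np.arange(7.0, 20.0, 0.5)
rad = (900 * np.exp(-((hour - 13.0) / 3.0) ** 2)
       + 20 * np.random.default_rng(2).normal(size=hour.size))

p0 = [800, 12, 3, 100, 15, 2]                     # rough initial guesses
popt, _ = curve_fit(gauss2, hour, rad, p0=p0, maxfev=10000)

resid = rad - gauss2(hour, *popt)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1.0 - np.sum(resid ** 2) / np.sum((rad - rad.mean()) ** 2)
print(f"RMSE = {rmse:.1f} W/m^2, R^2 = {r2:.4f}")
```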

  15. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  16. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicate that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  17. Wind turbine performance: Methods and criteria for reliability of measured power curves

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, D.A. [Advanced Wind Turbines Inc., Seattle, WA (United States)

    1996-12-31

    In order to evaluate the performance of prototype turbines, and to quantify incremental changes in performance through field testing, Advanced Wind Turbines (AWT) has been developing methods and requirements for power curve measurement. In this paper, field test data is used to illustrate several issues and trends which have resulted from this work. Averaging and binning processes, data hours per wind-speed bin, wind turbulence levels, and anemometry methods are all shown to have significant impacts on the resulting power curves. Criteria are given by which the AWT power curves show a high degree of repeatability, and these criteria are compared and contrasted with current published standards for power curve measurement. 6 refs., 5 figs., 5 tabs.
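
    A minimal method-of-bins sketch related to the binning and data-hours issues discussed above: 10-minute records are averaged per wind-speed bin, and bins are kept only if they accumulate enough data hours. The bin width, data-hours threshold, and turbine data are all hypothetical.

```python
import numpy as np

def binned_power_curve(wind, power, width=0.5, min_hours=0.5, avg_minutes=10.0):
    """Method-of-bins power curve: mean power per wind-speed bin, keeping only
    bins with enough accumulated data (records * averaging period >= min_hours)."""
    edges = np.arange(0.0, wind.max() + width, width)
    idx = np.digitize(wind, edges)
    curve = []
    for i in range(1, len(edges)):
        sel = idx == i
        hours = sel.sum() * avg_minutes / 60.0
        if hours >= min_hours:
            curve.append((edges[i - 1] + width / 2, power[sel].mean(), hours))
    return np.array(curve)   # columns: bin-center speed, mean power, data hours

# Hypothetical 10-minute records for a small turbine:
rng = np.random.default_rng(3)
wind = rng.weibull(2.0, 5000) * 8.0
power = np.clip(275 * (wind / 12.0) ** 3, 0, 275) + rng.normal(0, 8, wind.size)
print(binned_power_curve(wind, power)[:5])
```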

  18. THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING

    Directory of Open Access Journals (Sweden)

    M.T. Adithia

    2015-01-01

    The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, then the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peaks are significant. We conclude that using the Gaussian curve fitting method, the subjective qualification of peak significance can be made objective, so that better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak size, especially width and height, on the significance of a particular peak.

  19. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  20. Standardized Percentile Curves of Body Mass Index of Northeast Iranian Children Aged 25 to 60 Months

    Science.gov (United States)

    Emdadi, Maryam; Safarian, Mohammad; Doosti, Hassan

    2011-01-01

    Objective Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during early important months of life. Racial differences necessitate using local growth charts. This study aimed to provide standardized growth curves of body mass index (BMI) for children living in northeast Iran. Methods A total of 23730 apparently healthy boys and girls aged 25 to 60 months were recruited over 20 days from those attending community clinics for routine health checks. Anthropometric measurements were done by trained health staff using WHO methodology. The LMSP method with maximum penalized likelihood, Generalized Additive Models, the Box-Cox power exponential (BCPE) distribution, the Akaike Information Criterion and the Generalized Akaike Information Criterion with penalty equal to 3 [GAIC(3)], and worm plots and Q-tests as goodness-of-fit tests were used to construct the centile reference charts. Findings The BMI centile curves for boys and girls aged 25 to 60 months were drawn utilizing a population of children living in northeast Iran. Conclusion The results of the current study demonstrate the possibility of preparing local growth charts and their importance in evaluating children's growth. Also their differences, relative to those prepared by global references, reflect the necessity of preparing local charts in future studies using longitudinal data. PMID:23056770

  1. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    Science.gov (United States)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. Cesium beam standards have been decreasing in cost by about 2.3% per year since 1966, and hydrogen masers by about 0.8% per year since 1978, relative to the National Aeronautics and Space Administration inflation index.

  2. Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS

    Science.gov (United States)

    Templin, R. J.

    1985-03-01

    Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double multiple stream tube theory and the uncertainties that remain in further developing adequate methods are considered. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of models of dynamic stall, are underlined. Wind tunnel tests of blade airbrake configurations are summarized.

  3. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Introduction: In order to implement watershed practices to decrease the effects of soil erosion, the sediment output of a watershed needs to be estimated. The sediment rating curve is the most conventional tool used to estimate sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using rating curves. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, uncertainty in suspended sediment rating curves was estimated at four stations, including Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of correlation (R²). In the GLUE methodology, different parameter sets were sampled randomly from a priori probability distributions. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated many times (100000 to 400000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral parameter sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
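
    A sketch of the bootstrap branch only, under stated assumptions: paired discharge-concentration observations are resampled with replacement, a log-log rating curve is refitted each time, and percentile confidence intervals are taken over the predictions. All data are synthetic.

```python
import numpy as np

# Hypothetical paired observations: discharge Q (m^3/s) and suspended sediment conc. C (mg/L).
rng = np.random.default_rng(4)
Q = 10 ** rng.uniform(0, 2, 80)
C = 5.0 * Q ** 1.3 * 10 ** rng.normal(0, 0.15, Q.size)   # noisy power-law relation

def fit_rating(q, c):
    """Fit log10(C) = a + b*log10(Q), the conventional sediment rating curve."""
    b, a = np.polyfit(np.log10(q), np.log10(c), 1)
    return a, b

B = 2000                                       # number of bootstrap resamples
q_new = 50.0                                   # discharge at which to predict concentration
preds = np.empty(B)
for k in range(B):
    i = rng.integers(0, Q.size, Q.size)        # resample pairs with replacement
    a, b = fit_rating(Q[i], C[i])
    preds[k] = 10 ** (a + b * np.log10(q_new))

lo, hi = np.percentile(preds, [2.5, 97.5])
print(f"95% bootstrap CI for C at Q={q_new}: [{lo:.1f}, {hi:.1f}] mg/L")
```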

  4. Analysis of RIA standard curve by log-logistic and cubic log-logit models

    International Nuclear Information System (INIS)

    Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo

    1981-01-01

    In order to improve goodness-of-fit in RIA standard curve analysis, programs for computing log-logistic and cubic log-logit fits were written in BASIC using a personal computer P-6060 (Olivetti). The iterative least squares method based on Taylor series expansion was applied for non-linear estimation of the logistic and log-logistic fits. Here ''log-logistic'' represents Y = (a - d)/(1 + (log(X)/c)^b) + d. As weights, either 1, 1/var(Y) or 1/σ² were used in the logistic or log-logistic fits, and either Y²(1 - Y)², Y²(1 - Y)²/var(Y), or Y²(1 - Y)²/σ² were used in the quadratic or cubic log-logit fits. The term var(Y) represents the square of the pure error, and σ² represents the estimated variance calculated using the equation log(σ² + 1) = log(A) + J·log(Y). As indicators of goodness-of-fit, MSL/Se², CMD% and WRV (see text) were used. Better regression was obtained for alpha-fetoprotein by log-logistic than by logistic fitting. The cortisol standard curve was much better fitted with cubic log-logit than with quadratic log-logit. The predicted precision of the AFP standard curve was below 5% with log-logistic analysis instead of 8% with logistic analysis. The predicted precision obtained using cubic log-logit was about five times lower than that with quadratic log-logit. The importance of selecting good models in RIA data processing is stressed, in conjunction with the intrinsic precision of the radioimmunoassay system indicated by the predicted precision. (author)
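
    A sketch of fitting the four-parameter logistic family discussed here (the paper's log-logistic variant substitutes log(X) for X in the same form); the dose-response data and starting values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic; the log-logistic variant replaces x by log(x)."""
    return (a - d) / (1.0 + (x / c) ** b) + d

# Hypothetical RIA standard curve: dose (ng/mL) vs bound fraction (B/B0).
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
y = np.array([0.95, 0.88, 0.72, 0.48, 0.26, 0.12, 0.05])

# Bounds keep b and c positive so the power term stays well-defined.
popt, pcov = curve_fit(four_pl, dose, y, p0=[1.0, 1.0, 3.0, 0.0],
                       bounds=([0.5, 0.1, 0.01, -0.2], [1.5, 5.0, 100.0, 0.2]))
a, b, c, d = popt
print("a, b, c, d =", np.round(popt, 3))

def dose_of(y_obs):
    """Dose read-off for an unknown: invert the fitted curve."""
    return c * ((a - d) / (y_obs - d) - 1.0) ** (1.0 / b)

print(dose_of(0.5))
```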

  5. Standardization of 57Co using different methods of LNMRI

    International Nuclear Information System (INIS)

    Rezende, E.A.; Lopes, R.T.; Silva, C.J. da; Poledna, R.; Silva, R.L. da; Tauhata, L.

    2015-01-01

    The activity of a ⁵⁷Co solution was determined using four different LNMRI measurement methods. The solution was standardized by the live-timed anti-coincidence method and the sum-peak method. The efficiency curve and standard-sample comparison methods were also used in this comparison. The results and their measurement uncertainties demonstrate the equivalence of these methods. As an additional contribution, the gamma emission probabilities of ⁵⁷Co were also determined. (author)

  6. Experimental Method for Plotting S-N Curve with a Small Number of Specimens

    Directory of Open Access Journals (Sweden)

    Strzelecki Przemysław

    2016-12-01

    The study presents two approaches to plotting an S-N curve based on experimental results. The first approach is commonly used by researchers and presented in detail in many studies and standard documents. The model uses a linear regression whose parameters are estimated by the least squares method; a staircase method is used for the unlimited fatigue life criterion. The second model combines the S-N curve defined as a straight line with a model of the random occurrence of the fatigue limit, and a maximum likelihood method is used to estimate the S-N curve parameters. Fatigue data for C45+C steel obtained in a torsional bending test were used to compare the estimated S-N curves. For pseudo-random numbers generated using the Mersenne Twister algorithm, the S-N curve estimated from 10 experimental results with the second model predicts the fatigue life within a scatter band of factor 3. The result is a good approximation, especially considering the time required to plot the S-N curve.
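
    A sketch of the first (standard) approach for the finite-life region: a Basquin-type line fitted by least squares in log-log coordinates, with a factor-3 scatter-band check like the one quoted above. The fatigue data are invented.

```python
import numpy as np

# Hypothetical finite-life fatigue data: stress amplitude S (MPa), cycles to failure N.
S = np.array([400., 380., 360., 340., 320., 300.])
N = np.array([4.1e4, 7.9e4, 1.6e5, 3.4e5, 7.2e5, 1.5e6])

# Basquin-type S-N line: log10(N) = A + B*log10(S), parameters by least squares.
B, A = np.polyfit(np.log10(S), np.log10(N), 1)

def predicted_life(s):
    return 10.0 ** (A + B * np.log10(s))

# Scatter-band check: does each observed life fall within a factor of 3 of prediction?
ratio = N / predicted_life(S)
print("B =", round(B, 2), "all within factor 3:", np.all((ratio > 1 / 3) & (ratio < 3)))
```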

  7. A preliminary study on method of saturated curve

    International Nuclear Information System (INIS)

    Cao Liguo; Chen Yan; Ao Qi; Li Huijuan

    1987-01-01

    Determining the absorption coefficient of a sample directly, with matrix effect correction, is an effective method. The absorption coefficient is calculated from the relation between the characteristic X-ray intensity and the thickness of the sample (the saturated curve). The method directly reflects the features of the sample and, under certain conditions, the correction of the enhancement effect. It differs from the usual method, in which the determination of the absorption coefficient is based on measuring the absorption of X-rays penetrating the sample. The sensitivity factor KI₀ is discussed. The determination of KI₀ by experiment and the quasi-absolute measurement of the absorption coefficient μ are proposed. Experimental results with correction under different conditions are shown

  8. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry; von Glehn, Ingrid; Macdonald, Colin B.; Marz, Thomas

    2013-01-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.

  9. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.

  10. The method of covariant symbols in curved space-time

    International Nuclear Information System (INIS)

    Salcedo, L.L.

    2007-01-01

    Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)

  11. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  12. New method of safety assessment for pressure vessel of nuclear power plant--brief introduction of master curve approach

    International Nuclear Information System (INIS)

    Yang Wendou

    2011-01-01

    The new Master Curve Method has been described as a revolutionary advance in the assessment of reactor pressure vessel integrity in the USA. This paper explains the origin, basis and standardization of the Master Curve, starting from the reactor pressure-temperature limit curve that assures the safety of a nuclear power plant. In view of the characteristics of brittle fracture, which is highly sensitive to microstructure, the theory and test method of the Master Curve, as well as its statistical behaviour, which can be modeled with a Weibull distribution, are described in this paper. The meaning, advantages, application and importance of the Master Curve, and its relation to nuclear power safety, follow from the Weibull-model fitting formula for the fracture toughness database. (author)
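
    As a rough illustration, the sketch below (Python) evaluates the widely quoted master curve form K_med = 30 + 70*exp[0.019(T - T0)] and the associated three-parameter Weibull failure probability with threshold 20 MPa*sqrt(m) and shape 4; the reference temperature T0 is an illustrative assumption.

        import numpy as np

        T0 = -60.0                        # reference temperature, deg C (illustrative)
        T = np.linspace(-150.0, 0.0, 7)

        # median fracture toughness of the master curve, MPa*sqrt(m)
        k_med = 30.0 + 70.0 * np.exp(0.019 * (T - T0))

        def failure_prob(k_jc, k_med_val, k_min=20.0):
            # three-parameter Weibull with fixed threshold k_min and shape 4
            k0 = k_min + (k_med_val - k_min) / np.log(2.0) ** 0.25
            return 1.0 - np.exp(-(((k_jc - k_min) / (k0 - k_min)) ** 4))

        for temp, km in zip(T, k_med):
            print(f"T = {temp:6.1f} C  K_med = {km:6.1f}  P_f(100) = {failure_prob(100.0, km):.2f}")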

  13. Validation of curve-fitting method for blood retention of 99mTc-GSA. Comparison with blood sampling method

    International Nuclear Information System (INIS)

    Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa

    1997-01-01

    We investigated a curve-fitting method for the rate of blood retention of 99mTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99mTc-GSA scanning. After normalization of the y-intercept to 100 percent, a biexponential regression curve fitted to the precordial time-activity curve provided the percent injected dose (%ID) of 99mTc-GSA in the blood without blood sampling. The discrepancy between the %ID obtained by the curve-fitting method and that obtained from multiple blood samples was minimal in normal volunteers (3.1±2.1%; mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13, vs. 31.9±2.8%, n=7) and correlated with the plasma retention rate for indocyanine green (r=-0.869). These results indicate that the curve-fitting method reliably estimates the blood retention of 99mTc-GSA and could be a substitute for the blood sampling method. (author)
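
    A minimal sketch (Python) of the fitting step described above: a biexponential is fitted to hypothetical precordial counts, the y-intercept is normalized to 100%, and the %ID at 15 min is read off the fitted curve; the data and initial guesses are illustrative assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, a1, k1, a2, k2):
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        t = np.array([1, 2, 4, 6, 8, 10, 12, 15, 20], float)             # minutes
        counts = np.array([88, 78, 63, 53, 46, 41, 37, 33, 28], float)   # precordial counts

        popt, _ = curve_fit(biexp, t, counts, p0=[60, 0.3, 40, 0.02])
        scale = 100.0 / (popt[0] + popt[2])    # normalise the y-intercept to 100 %ID
        pid_15 = scale * biexp(15.0, *popt)    # %ID remaining in blood at 15 min
        print(f"%ID at 15 min = {pid_15:.1f}")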

  14. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-11-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results is presented: The radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach tests methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance. 2 references, 2 figures

  16. Semiclassical methods in curved spacetime and black hole thermodynamics

    International Nuclear Information System (INIS)

    Camblong, Horacio E.; Ordonez, Carlos R.

    2005-01-01

    Improved semiclassical techniques are developed and applied to a treatment of a real scalar field in a D-dimensional gravitational background. This analysis, leading to a derivation of the thermodynamics of black holes, is based on the simultaneous use of (i) a near-horizon description of the scalar field in terms of conformal quantum mechanics; (ii) a novel generalized WKB framework; and (iii) curved-spacetime phase-space methods. In addition, this improved semiclassical approach is shown to be asymptotically exact in the presence of hierarchical expansions of a near-horizon type. Most importantly, this analysis further supports the claim that the thermodynamics of black holes is induced by their near-horizon conformal invariance

  17. Elastic-plastic fracture assessment using a J-R curve by direct method

    International Nuclear Information System (INIS)

    Asta, E.P.

    1996-01-01

    In elastic-plastic evaluation methods based on the J integral and tearing modulus procedures, an essential input is the material fracture resistance (J-R) curve. To simplify J-R determination, a direct method based on load versus load-point displacement records from single-specimen tests may be employed. This procedure has advantages such as avoiding the accuracy problems of crack-growth measuring devices and reducing testing time. This paper presents a structural integrity assessment approach for ductile fracture using the J-R curve obtained by a direct method from small single-specimen fracture tests. The direct J-R method was implemented in a computational program based on theoretical elastic-plastic expressions. A comparative evaluation between the direct-method J resistance curves and those obtained by the standard testing methodology on typical pressure vessel steels has been made. The J-R curves estimated from the direct method show acceptable agreement with those from the standard methodology, and the approach proposed in this study is reliable enough for engineering determinations. (orig.)
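
    For reference, a commonly used power-law representation of a J-R curve, J = C*(Δa)^m, can be fitted in log-log space as sketched below (Python; the data pairs are illustrative assumptions, not the paper's measurements).

        import numpy as np

        # crack extension (mm) and J-integral (kJ/m^2) pairs, illustrative
        da = np.array([0.15, 0.3, 0.5, 0.8, 1.2, 1.8])
        j = np.array([180.0, 260.0, 340.0, 430.0, 530.0, 650.0])

        # fit the power-law form J = C * da**m in log-log space
        m, log_c = np.polyfit(np.log(da), np.log(j), 1)
        c = np.exp(log_c)
        print(f"J-R curve: J = {c:.0f} * da^{m:.2f}")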

  18. Growth curves and the international standard: How children's growth reflects challenging conditions in rural Timor-Leste.

    Science.gov (United States)

    Spencer, Phoebe R; Sanders, Katherine A; Judge, Debra S

    2018-02-01

    Population-specific growth references are important in understanding local growth variation, especially in developing countries where child growth is poor and the need for effective health interventions is high. In this article, we use mixed longitudinal data to calculate the first growth curves for rural East Timorese children to identify where, during development, deviation from the international standards occurs. Over an eight-year period, 1,245 children from two ecologically distinct rural areas of Timor-Leste were measured a total of 4,904 times. We compared growth to the World Health Organization (WHO) standards using z-scores, and modeled height and weight velocity using the SuperImposition by Translation And Rotation (SITAR) method. Using the Generalized Additive Model for Location, Scale and Shape (GAMLSS) method, we created the first growth curves for rural Timorese children for height, weight and body mass index (BMI). Relative to the WHO standards, children show early-life growth faltering, and stunting throughout childhood and adolescence. The median height and weight for this population tracks below the WHO fifth centile. Males have poorer growth than females in both z-BMI (p = .001) and z-height-for-age (p = .018) and, unlike females, continue to grow into adulthood. This is the most comprehensive investigation to date of rural Timorese children's growth, and the growth curves created may potentially be used to identify future secular trends in growth as the country develops. We show significant deviation from the international standard that becomes most pronounced at adolescence, similar to the growth of other Asian populations. Males and females show different growth responses to challenging conditions in this population. © 2017 Wiley Periodicals, Inc.
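
    The z-scores used for such comparisons are commonly computed with Cole's LMS (lambda-mu-sigma) formula, which underlies growth references of this kind; a minimal sketch follows (Python), where the LMS triplet is an illustrative assumption, not a value from the WHO tables.

        import numpy as np

        def lms_zscore(x, L, M, S):
            # Cole's LMS z-score: z = ((x/M)**L - 1) / (L*S), or log form when L ~ 0
            if abs(L) < 1e-8:
                return np.log(x / M) / S
            return ((x / M) ** L - 1.0) / (L * S)

        # illustrative LMS values for height-for-age at one age point
        L, M, S = 1.0, 96.1, 0.038
        print(lms_zscore(88.0, L, M, S))   # a child 88 cm tall -> z of about -2.2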

  19. NEW CONCEPTS AND TEST METHODS OF CURVE PROFILE AREA DENSITY IN SURFACE: ESTIMATION OF AREAL DENSITY ON CURVED SPATIAL SURFACE

    OpenAIRE

    Hong Shen

    2011-01-01

    The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relied on intersection reference), curve profile intersection density in surface (or curve intercept intersection density relied on intersection of containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of Y phase in the unit containing surface area, S...

  20. DAG expression: high-throughput gene expression analysis of real-time PCR data using standard curves for relative quantification.

    Directory of Open Access Journals (Sweden)

    María Ballester

    BACKGROUND: Real-time quantitative PCR (qPCR) is still the gold-standard technique for gene-expression quantification. Recent technological advances of this method allow for high-throughput gene-expression analysis, without the limitations of sample space and reagent used. However, non-commercial and user-friendly software for the management and analysis of these data is not available. RESULTS: The recently developed commercial microarrays allow for the drawing of standard curves of multiple assays using the same n-fold diluted samples. Data Analysis Gene (DAG) Expression software has been developed to perform high-throughput gene-expression data analysis using standard curves for relative quantification and one or multiple reference genes for sample normalization. We discuss the application of DAG Expression in the analysis of data from an experiment performed with Fluidigm technology, in which 48 genes and 115 samples were measured. Furthermore, the quality of our analysis was tested and compared with other available methods. CONCLUSIONS: DAG Expression is a freely available software that permits the automated analysis and visualization of high-throughput qPCR. A detailed manual and a demo-experiment are provided within the DAG Expression software at http://www.dagexpression.com/dage.zip.
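
    A minimal sketch of standard-curve relative quantification as the abstract describes it (Python): fit Ct against log10(quantity) for a dilution series, invert the line for unknowns, and normalize by a reference gene. For brevity the sketch reuses one curve for both genes; in practice each assay has its own standard curve, and all numbers are illustrative.

        import numpy as np

        # serial-dilution standards: known relative quantity and measured Ct
        qty = np.array([1.0, 0.2, 0.04, 0.008, 0.0016])
        ct = np.array([18.1, 20.5, 22.8, 25.2, 27.6])

        slope, intercept = np.polyfit(np.log10(qty), ct, 1)   # Ct = slope*log10(Q) + b
        eff = 10.0 ** (-1.0 / slope) - 1.0                    # amplification efficiency

        def rel_qty(ct_obs):
            return 10.0 ** ((ct_obs - intercept) / slope)

        # normalise the target quantity by a reference gene
        target, reference = rel_qty(23.4), rel_qty(21.0)
        print(f"efficiency = {eff:.2f}, normalised expression = {target / reference:.3f}")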

  1. Stage discharge curve for Guillemard Bridge streamflow station based on rating curve method using historical flood event data

    International Nuclear Information System (INIS)

    Ros, F C; Sidek, L M; Desa, M N; Arifin, K; Tosaka, H

    2013-01-01

    Stage-discharge curves serve many purposes, from water-quality and flood-modelling studies to the projection of climate-change scenarios. Because the river bed often changes with the annual monsoon seasons, which sometimes bring massive floods, the capacity of the river changes and shifting control occurs. This study uses historical flood event data from 1960 to 2009 to derive the stage-discharge curve of the Guillemard Bridge station on Sg. Kelantan. Regression analysis was performed to check the quality of the data and examine the correlation between the two variables, Q and H. The mean values of the two variables were then used to find 'a', the difference between the zero gauge height and the level of zero flow, together with K and 'n', to fit the rating curve equation and finally plot the stage-discharge rating curve. Regression analysis of the historical flood data indicates that 91 percent of the original uncertainty is explained by the analysis, with a standard error of 0.085.
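
    A minimal sketch of the rating-curve fit Q = K(H - a)^n (Python): scan the zero-flow offset 'a' and regress in log-log space for K and n; the stage-discharge pairs are illustrative, not the Guillemard Bridge record.

        import numpy as np

        h = np.array([2.1, 3.0, 4.2, 5.5, 7.0, 9.4])          # stage H (m), illustrative
        q = np.array([95., 210., 430., 740., 1180., 2100.])   # discharge Q (m^3/s)

        best = None
        for a in np.linspace(0.0, 1.5, 151):                  # scan the offset 'a'
            x, y = np.log(h - a), np.log(q)
            n, log_k = np.polyfit(x, y, 1)                    # ln Q = ln K + n ln(H - a)
            r2 = np.corrcoef(x, y)[0, 1] ** 2
            if best is None or r2 > best[0]:
                best = (r2, a, np.exp(log_k), n)

        r2, a, k, n = best
        print(f"Q = {k:.1f}*(H - {a:.2f})^{n:.2f}   (R^2 = {r2:.4f})")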

  2. Corrections for hysteresis curves for rare earth magnet materials measured by open magnetic circuit methods

    International Nuclear Information System (INIS)

    Nakagawa, Yasuaki

    1996-01-01

    The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of the demagnetizing field must be taken into account. Various rare earth magnet materials, such as sintered or bonded Sm-Co and Nd-Fe-B, were provided by a number of manufacturers. Hysteresis curves were measured for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although the application of this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when data obtained by the open magnetic circuit method are used as industrial standards. (author)
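
    A minimal sketch of the demagnetizing-field (shearing) correction the abstract refers to, H_int = H_app - N*M (Python, SI units); the mean demagnetizing factor and the data are illustrative assumptions.

        import numpy as np

        def shear_correction(h_applied, m, n_d):
            # internal field of an open-circuit sample: H_int = H_app - N*M
            # h_applied, m in A/m; n_d is the mean demagnetizing factor (SI)
            return h_applied - n_d * m

        # illustrative branch of a hysteresis loop for a short cylinder (N ~ 0.3 assumed)
        h_app = np.array([-2e6, -1e6, 0.0, 1e6, 2e6])
        m = np.array([-9e5, -8e5, 6e5, 8.5e5, 9e5])
        print(shear_correction(h_app, m, 0.3))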

  3. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Directory of Open Access Journals (Sweden)

    Gaetano Luglio

    2015-06-01

    Conclusion: Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon.

  4. A simple method for one-loop renormalization in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2013-08-01

    We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.

  5. Functional methods for arbitrary densities in curved spacetime

    International Nuclear Information System (INIS)

    Basler, M.

    1993-01-01

    This paper gives an introduction to the technique of functional differentiation and integration in curved spacetime, applied to examples from quantum field theory. Special attention is drawn to the choice of the functional integral measure. Following a suggestion by Toms, fields are chosen as arbitrary scalar, spinorial or vectorial densities. The technique developed by Toms for a pure quadratic Lagrangian is extended to the calculation of the generating functional with external sources. Included are two examples of interacting theories, a self-interacting scalar field and a Yang-Mills theory. For these theories the complete set of Feynman graphs depending on the weight of the variables is derived. (orig.)

  6. Fitting methods for constructing energy-dependent efficiency curves and their application to ionization chamber measurements

    International Nuclear Information System (INIS)

    Svec, A.; Schrader, H.

    2002-01-01

    An ionization chamber without and with an iron liner (absorber) was calibrated with a set of radionuclide activity standards of the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber in a defined measurement geometry without and with the liner, and radionuclide efficiencies were calculated. A programmed calculation with an analytical efficiency function and the nonlinear regression algorithm of Microsoft (MS) Excel was used for the fitting. Efficiencies from the bremsstrahlung of pure beta-particle emitters were calibrated to a 10% accuracy level. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. The method yields differences between experimental and calculated radionuclide efficiencies of the order of a few percent for most photon-emitting radionuclides.
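
    A minimal sketch of the two steps involved (Python): fit an analytical efficiency curve, here a polynomial in ln E fitted to ln efficiency as one common choice (an assumption, not necessarily the SMU function), then sum line efficiencies weighted by emission probabilities to get a radionuclide efficiency; the calibration points are illustrative.

        import numpy as np

        # calibration points: photon energy (keV) and measured efficiency
        e = np.array([60., 122., 344., 662., 1173., 1332.])
        eff = np.array([4.1e-3, 5.6e-3, 3.9e-3, 2.6e-3, 1.8e-3, 1.65e-3])

        coef = np.polyfit(np.log(e), np.log(eff), 3)   # ln(eff) as cubic in ln(E)

        def eff_curve(energy):
            return np.exp(np.polyval(coef, np.log(energy)))

        # radionuclide efficiency = emission-probability-weighted sum over its lines,
        # e.g. for 60Co (two gammas, both with ~100% emission probability)
        lines = {1173.2: 0.999, 1332.5: 1.000}
        eff_co60 = sum(p * eff_curve(en) for en, p in lines.items())
        print(f"60Co efficiency = {eff_co60:.2e}")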

  7. Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares

    Directory of Open Access Journals (Sweden)

    Surajit Ghosh Dastidar

    2006-04-01

    A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data are fed into the program from an external disk file in Microsoft Excel format. The output contains the sample size, the tolerance limit, a list of the initial as well as the final estimates of the parameters, standard errors, the values of the Gauss-Newton normal equations GN1, GN2 and GN3, the number of iterations, the variance (σ²), the Durbin-Watson statistic, goodness-of-fit measures such as R² and the D value, the covariance matrix and the residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).
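
    A compact Gauss-Newton implementation of the Gompertz fit is sketched below (Python rather than Scilab, synthetic data); there is no damping, so poor starting values may need a Levenberg-Marquardt-style safeguard.

        import numpy as np

        def gompertz(t, a, b, c):
            # Gompertz growth curve y = a*exp(-b*exp(-c*t))
            return a * np.exp(-b * np.exp(-c * t))

        def jacobian(t, a, b, c):
            e = np.exp(-c * t)
            g = np.exp(-b * e)
            return np.column_stack((g, -a * e * g, a * b * t * e * g))

        t = np.arange(1.0, 16.0)
        rng = np.random.default_rng(1)
        y = gompertz(t, 100.0, 5.0, 0.35) * (1.0 + 0.02 * rng.standard_normal(t.size))

        beta = np.array([y.max(), 2.0, 0.2])    # rough initial estimates
        for _ in range(100):                    # Gauss-Newton iterations
            r = y - gompertz(t, *beta)
            J = jacobian(t, *beta)
            step, *_ = np.linalg.lstsq(J, r, rcond=None)
            beta += step
            if np.linalg.norm(step) < 1e-9 * np.linalg.norm(beta):
                break
        print("a, b, c =", beta)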

  9. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfying quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  10. Status on the selection and development of an embrittlement trend curve to use in ASTM standard guide E900

    International Nuclear Information System (INIS)

    Kirk, M.; Brian Hall, J.; Server, W.; Lucon, E.; Erickson, M.; Stoller, R.

    2015-01-01

    ASTM E900-07, Standard Guide for Predicting Radiation-Induced Transition Temperature Shift in Reactor Vessel Materials, includes an embrittlement trend curve. The trend curve can be used to predict the effect of neutron irradiation on the embrittlement of ferritic pressure vessel steels, as quantified by the shift in the Charpy V-Notch transition curve at 41 Joules of absorbed energy (ΔT41J). The current E900 trend curve was first adopted in the 2002 revision. In 2011 ASTM Subcommittee E10.02 undertook an extensive effort to evaluate the adequacy of the E900 trend curve for continued use. This paper summarizes the current status of this effort, which has produced a trend curve calibrated using a database of over 1800 ΔT41J values from the light water reactor surveillance programs in thirteen countries. (authors)

  11. The 1-loop effective potential for the Standard Model in curved spacetime

    Science.gov (United States)

    Markkanen, Tommi; Nurmi, Sami; Rajantie, Arttu; Stopyra, Stephen

    2018-06-01

    The renormalisation group improved Standard Model effective potential in an arbitrary curved spacetime is computed to one loop order in perturbation theory. The loop corrections are computed in the ultraviolet limit, which makes them independent of the choice of the vacuum state and allows the derivation of the complete set of β-functions. The potential depends on the spacetime curvature through the direct non-minimal Higgs-curvature coupling, curvature contributions to the loop diagrams, and through the curvature dependence of the renormalisation scale. Together, these lead to significant curvature dependence, which needs to be taken into account in cosmological applications, which is demonstrated with the example of vacuum stability in de Sitter space.

  12. A Novel Method for Detecting and Computing Univolatility Curves in Ternary Mixtures

    DEFF Research Database (Denmark)

    Shcherbakov, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens

    2017-01-01

    Residue curve maps (RCMs) and univolatility curves are crucial tools for analysis and design of distillation processes. Even in the case of ternary mixtures, the topology of these maps is highly non-trivial. We propose a novel method allowing detection and computation of univolatility curves...... of the generalized univolatility and unidistribution curves in the three dimensional composition – temperature state space lead to a simple and efficient algorithm of computation of the univolatility curves. Two peculiar ternary systems, namely diethylamine – chloroform – methanol and hexane – benzene...

  13. Automated pavement horizontal curve measurement methods based on inertial measurement unit and 3D profiling data

    Directory of Open Access Journals (Sweden)

    Wenting Luo

    2016-04-01

    A pavement horizontal curve is designed to serve as a transition between straight segments, and its presence may cause a series of driving-related safety issues for motorists. Since traditional methods for curve geometry investigation are recognized to be time-consuming, labor-intensive, and inaccurate, this study attempts to develop a method that can automatically conduct horizontal curve identification and measurement at the network level. The digital highway data vehicle (DHDV) was utilized for data collection, in which the three Euler angles, driving speed, and acceleration of the survey vehicle were measured with an inertial measurement unit (IMU). The 3D profiling data used for cross-slope calibration were obtained with PaveVision3D Ultra technology at 1 mm resolution. In this study, curve identification was based on the variation of the heading angle, and the curve radius was calculated with a kinematic method, a geometry method, and a lateral acceleration method. In order to verify the accuracy of the three methods, an analysis of variance (ANOVA) test was applied using the curve radius measured by a field test as the control variable. Based on the measured curve radius, a curve safety analysis model was used to predict the crash rates and safe driving speeds at horizontal curves. Finally, a case study on a 4.35 km road segment demonstrated that the proposed method can efficiently conduct network-level analysis.
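
    Two of the radius formulas named above are simple enough to sketch directly (Python): the kinematic method R = v/omega (speed over IMU yaw rate) and the lateral-acceleration method R = v^2/a_lat; the numbers are illustrative.

        import numpy as np

        v = 22.2            # vehicle speed along the curve, m/s (about 80 km/h)
        yaw_rate = 0.0493   # heading change rate from the IMU, rad/s
        a_lat = 1.10        # measured lateral acceleration, m/s^2

        r_kinematic = v / yaw_rate     # kinematic method: R = v / omega
        r_lateral = v ** 2 / a_lat     # lateral-acceleration method: R = v^2 / a_lat
        print(f"R (kinematic) = {r_kinematic:.0f} m, R (lateral accel.) = {r_lateral:.0f} m")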

  14. An extended L-curve method for choosing a regularization parameter in electrical resistance tomography

    International Nuclear Information System (INIS)

    Xu, Yanbin; Pei, Yang; Dong, Feng

    2016-01-01

    The L-curve method is a popular regularization parameter choice method for the ill-posed inverse problem of electrical resistance tomography (ERT). However, the method cannot always determine a proper parameter for all situations. An investigation into the situations where the L-curve method fails shows that a new corner point appears on the L-curve, and the parameter corresponding to this new corner point yields a satisfactory reconstructed solution. Thus an extended L-curve method, which determines the regularization parameter associated with either the global corner or the new corner, is proposed. Furthermore, two strategies are provided to determine the new corner: one is based on the second-order differential of the L-curve, and the other is based on the curvature of the L-curve. The proposed method is examined by both numerical simulations and experimental tests, and the results indicate that the extended method can handle the parameter choice problem even in cases where the typical L-curve method fails. Finally, in order to reduce the running time of the method, the extended method is combined with a projection method based on the Krylov subspace, which boosts the extended L-curve method. The results verify that the speed of the extended L-curve method is distinctly improved. The proposed method extends the application of the L-curve to the choice of regularization parameter with an acceptable running time and can also be used in other kinds of tomography. (paper)
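
    For orientation, the sketch below (Python) locates an L-curve corner as the point of maximum curvature on the (log residual norm, log solution norm) curve for a synthetic Tikhonov problem; this illustrates only the basic curvature strategy, not the authors' extended method.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 40))
        A[:, 20:] *= 0.01                    # make the problem ill-conditioned
        x_true = rng.standard_normal(40)
        b = A @ x_true + 0.01 * rng.standard_normal(60)

        lams = np.logspace(-6, 1, 60)
        res, sol = [], []
        for lam in lams:
            # Tikhonov solution: x = argmin ||Ax - b||^2 + lam^2 ||x||^2
            x = np.linalg.solve(A.T @ A + lam**2 * np.eye(40), A.T @ b)
            res.append(np.linalg.norm(A @ x - b))
            sol.append(np.linalg.norm(x))

        rho, eta = np.log(res), np.log(sol)
        d_rho, d_eta = np.gradient(rho), np.gradient(eta)
        dd_rho, dd_eta = np.gradient(d_rho), np.gradient(d_eta)
        kappa = (d_rho * dd_eta - dd_rho * d_eta) / (d_rho**2 + d_eta**2) ** 1.5
        print("corner at lambda =", lams[np.argmax(kappa)])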

  15. An adaptive-binning method for generating constant-uncertainty/constant-significance light curves with Fermi-LAT data

    International Nuclear Information System (INIS)

    Lott, B.; Escande, L.; Larsson, S.; Ballet, J.

    2012-01-01

    Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in each interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
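
    As a crude counts-only proxy for the idea (the real method uses the LAT likelihood analysis), the sketch below (Python) grows each bin until the Poisson relative uncertainty sqrt(N)/N falls below a target; all details are illustrative assumptions.

        import numpy as np

        def adaptive_bins(event_times, rel_unc=0.15):
            # group arrival times into bins with sqrt(N)/N <= rel_unc
            n_min = int(np.ceil(1.0 / rel_unc ** 2))
            bins, start, n = [], event_times[0], 0
            for t in event_times:
                n += 1
                if n >= n_min:
                    bins.append((start, t, n))
                    start, n = t, 0
            return bins

        rng = np.random.default_rng(3)
        # mock photon arrival times with a rate change halfway through
        gaps = rng.exponential(scale=np.r_[np.full(250, 0.5), np.full(250, 2.0)])
        times = np.cumsum(gaps)
        print(adaptive_bins(times)[:3])   # bins are shorter where the source is brighter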

  16. High cycle fatigue test and regression methods of S-N curve

    International Nuclear Information System (INIS)

    Kim, D. W.; Park, J. Y.; Kim, W. G.; Yoon, J. H.

    2011-11-01

    The fatigue design curves in the ASME Boiler and Pressure Vessel Code, Section III, are based on the assumption that fatigue life is infinite after 10^6 cycles. This is because standard fatigue testing equipment of past decades was limited in speed to less than 200 cycles per second. Traditional servo-hydraulic machines work at frequencies of about 50 Hz. Servo-hydraulic machines working at 1000 Hz have been developed since 1997; these machines allow high frequencies, with displacements of up to ±0.1 mm and dynamic loads of ±20 kN guaranteed. The frequency of resonant fatigue test machines is 50-250 Hz. Various forced-vibration-based systems work at 500 Hz or 1.8 kHz. Rotating bending machines allow testing frequencies of 0.1-200 Hz. The main advantage of ultrasonic fatigue testing at 20 kHz is that very high cycle counts can be reached in practical testing times. Although the S-N curve is determined by experiment, the fatigue strength corresponding to a given fatigue life should be determined by a statistical method that considers the scatter of fatigue properties. In this report, the statistical methods for evaluating fatigue test data are investigated.
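
    A minimal sketch of one common statistical treatment (Python): a Basquin-type log-log regression of life on stress with a simple mean-minus-two-sigma lower bound; the data and the two-sigma choice are illustrative assumptions, not the procedures evaluated in the report.

        import numpy as np

        s = np.array([500., 450., 400., 360., 320., 290.])           # stress amplitude, MPa
        n_f = np.array([3.2e4, 8.1e4, 2.3e5, 6.0e5, 1.9e6, 5.5e6])   # cycles to failure

        # Basquin-type fit: log10(N) = a + b*log10(S)
        b, a = np.polyfit(np.log10(s), np.log10(n_f), 1)
        resid = np.log10(n_f) - (a + b * np.log10(s))
        sigma = resid.std(ddof=2)

        def design_n(stress):
            # lower-bound design curve at mean minus 2*sigma in log10(N)
            return 10.0 ** (a + b * np.log10(stress) - 2.0 * sigma)

        print(f"log10(N) = {a:.2f} + {b:.2f} log10(S); design N at 400 MPa = {design_n(400.):.2e}")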

  17. Evaluation methods for neutron cross section standards

    International Nuclear Information System (INIS)

    Bhat, M.R.

    1980-01-01

    Methods used to evaluate the neutron cross section standards are reviewed and their relative merits, assessed. These include phase-shift analysis, R-matrix fit, and a number of other methods by Poenitz, Bhat, Kon'shin and the Bayesian or generalized least-squares procedures. The problems involved in adopting these methods for future cross section standards evaluations are considered, and the prospects for their use, discussed. 115 references, 5 figures, 3 tables

  19. Standard setting: comparison of two methods.

    Science.gov (United States)

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  20. Solving eigenvalue problems on curved surfaces using the Closest Point Method

    KAUST Repository

    Macdonald, Colin B.

    2011-06-01

    Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.

  1. Test of the nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1995-01-01

    The present work formulates an experimental approach for searching for the proposed nonexponential deviations from the decay curve and describes an attempt to test them in the case of 52 V. Theoretical descriptions of the decay processes are formulated in clarified form. A continuous kinetic function (CKF) method is described for the analysis of experimental data, and the CKF for the purely exponential case is taken as the standard for comparison between theoretical and experimental data. The degree of agreement is defined by a factor of goodness. Typical oscillatory deviations of the 52 V decay were observed over a wide range of time. The proposed deviation, related to interaction between the decay products and the environment, is examined. A complex type of decay is discussed. (authors). 10 refs., 4 figs., 2 tabs

  2. Methods for extracting dose response curves from radiation therapy data. I. A unified approach

    International Nuclear Information System (INIS)

    Herring, D.F.

    1980-01-01

    This paper discusses an approach to fitting models to radiation therapy data in order to extract dose response curves for tumor local control and normal tissue damage. The approach is based on the method of maximum likelihood and is illustrated by several examples. A general linear logistic equation which leads to the Ellis nominal standard dose (NSD) equation is discussed; the fit of this equation to experimental data for mouse foot skin reactions produced by fractionated irradiation is described. A logistic equation based on the concept that normal tissue reactions are associated with the surviving fraction of cells is also discussed, and the fit of this equation to the same set of mouse foot skin reaction data is also described. These two examples illustrate the importance of choosing a model based on underlying mechanisms when one seeks to attach biological significance to a model's parameters
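
    A minimal sketch of a maximum-likelihood fit of a linear-logistic dose-response model, echoing the paper's approach in spirit (Python); the data are synthetic and the parametrization is an assumption.

        import numpy as np
        from scipy.optimize import minimize

        # dose (Gy) and tumor local-control outcome (1 = controlled), synthetic
        dose = np.array([40., 45., 50., 55., 60., 65., 70.] * 4)
        rng = np.random.default_rng(7)
        p_true = 1.0 / (1.0 + np.exp(-12.0 * (np.log(dose) - np.log(55.0))))
        ctrl = (rng.random(dose.size) < p_true).astype(float)

        def nll(params):
            # negative log-likelihood of P(control) = logistic(a + b*ln(dose))
            a, b = params
            p = 1.0 / (1.0 + np.exp(-(a + b * np.log(dose))))
            p = np.clip(p, 1e-12, 1.0 - 1e-12)
            return -np.sum(ctrl * np.log(p) + (1.0 - ctrl) * np.log(1.0 - p))

        # rough initial guess near the expected scale of the parameters
        fit = minimize(nll, x0=np.array([-40.0, 10.0]), method="Nelder-Mead")
        a, b = fit.x
        print(f"TCD50 = {np.exp(-a / b):.1f} Gy")   # dose giving 50% control probability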

  4. On the Shadow Simplex Method for curved polyhedra

    NARCIS (Netherlands)

    D.N. Dadush (Daniel); N. Hähnle

    2016-01-01

    We study the simplex method over polyhedra satisfying certain “discrete curvature” lower bounds, which enforce that the boundary always meets vertices at sharp angles. Motivated by linear programs with totally unimodular constraint matrices, recent results of Bonifas et al. (Discrete

  5. Finite-difference time-domain modeling of curved material interfaces by using boundary condition equations method

    International Nuclear Information System (INIS)

    Lu Jia; Zhou Huaichun

    2016-01-01

    To deal with the staircase approximation problem in the standard finite-difference time-domain (FDTD) simulation, the two-dimensional boundary condition equations (BCE) method is proposed in this paper. In the BCE method, the standard FDTD algorithm can be used as usual, and the curved surface is treated by adding the boundary condition equations. Thus, while maintaining the simplicity and computational efficiency of the standard FDTD algorithm, the BCE method can solve the staircase approximation problem. The BCE method is validated by analyzing near field and far field scattering properties of the PEC and dielectric cylinders. The results show that the BCE method can maintain a second-order accuracy by eliminating the staircase approximation errors. Moreover, the results of the BCE method show good accuracy for cylinder scattering cases with different permittivities. (paper)

  6. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    International Nuclear Information System (INIS)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-01-01

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials

  8. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Science.gov (United States)

    Luglio, Gaetano; De Palma, Giovanni Domenico; Tarquini, Rachele; Giglio, Mariano Cesare; Sollazzo, Viviana; Esposito, Emanuela; Spadarella, Emanuela; Peltrini, Roberto; Liccardo, Filomena; Bucci, Luigi

    2015-01-01

    Background: Despite the proven benefits, laparoscopic colorectal surgery is still underutilized among surgeons. A steep learning curve is one of the causes of its limited adoption. The aim of the study is to determine the feasibility and morbidity rate after laparoscopic colorectal surgery in a single-institution "learning curve" experience, implementing a well standardized operative technique and recovery protocol. Methods: The first 50 patients treated laparoscopically were included. All the procedures were performed by a trainee surgeon, supervised by a consultant surgeon, according to the principle of complete mesocolic excision with central vascular ligation or TME. Patients underwent a fast-track recovery programme. Recovery parameters, short-term outcomes, morbidity and mortality were assessed. Results: Types of resection: 20 left-side resections, 8 right-side resections, 14 low anterior resection/TME, 5 total colectomy and IRA, 3 total panproctocolectomy and pouch. Mean operative time: 227 min; mean number of lymph nodes: 18.7. Conversion rate: 8%. Mean time to flatus: 1.3 days; mean time to solid stool: 2.3 days. Mean length of hospital stay: 7.2 days. Overall morbidity: 24%; major morbidity (Dindo-Clavien III): 4%. No anastomotic leak, no mortality, no 30-day readmission. Conclusion: Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short-term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon. PMID:25859386

  9. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    International Nuclear Information System (INIS)

    Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang

    2016-01-01

    Melting simulation methods are of crucial importance for determining the melting temperatures of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)

  10. Reactor Section standard analytical methods. Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Sowden, D.

    1954-07-01

    The Standard Analytical Methods manual was prepared for the purpose of consolidating and standardizing all current analytical methods and procedures used in the Reactor Section for routine chemical analyses. All procedures are established in accordance with accepted practice and the general analytical methods specified by the Engineering Department. These procedures are specifically adapted to the requirements of the water treatment process and related operations. The methods included in this manual are organized alphabetically within the following five sections, which correspond to the various phases of the analytical control program in which these analyses are to be used: water analyses, essential material analyses, cotton plug analyses, boiler water analyses, and miscellaneous control analyses.

  11. Statistical inference methods for two crossing survival curves: a comparison of methods.

    Science.gov (United States)

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
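
    For reference, the sketch below (Python) computes the plain log-rank statistic, the test the survey above found to be widely used even when curves cross; implementing the adaptive Neyman or two-stage procedures is beyond a sketch, and the survival data here are illustrative.

        import numpy as np

        def logrank(time1, event1, time2, event2):
            # two-sample log-rank chi-square statistic (1 degree of freedom)
            times = np.concatenate([time1, time2])
            events = np.concatenate([event1, event2])
            group = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
            o_minus_e, var = 0.0, 0.0
            for t in np.unique(times[events == 1]):       # each distinct event time
                at_risk = times >= t
                n = at_risk.sum()
                n1 = (at_risk & (group == 0)).sum()
                d = ((times == t) & (events == 1)).sum()
                d1 = ((times == t) & (events == 1) & (group == 0)).sum()
                o_minus_e += d1 - d * n1 / n              # observed minus expected
                if n > 1:                                 # hypergeometric variance
                    var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
            return o_minus_e ** 2 / var

        t1 = np.array([5., 8., 12., 20., 24., 30.]); e1 = np.array([1, 1, 1, 0, 1, 0])
        t2 = np.array([4., 9., 15., 18., 26., 34.]); e2 = np.array([1, 0, 1, 1, 0, 1])
        print(f"chi-square = {logrank(t1, e1, t2, e2):.2f}")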

  12. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    Science.gov (United States)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
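
    A minimal sketch of the circle-fitting step (Python): an algebraic (Kasa) least-squares circle fit to points in the Nyquist plane, followed by the central-angle sweep from which dispersion and attenuation are read off; the choice of the Kasa fit and the synthetic data are assumptions, not the authors' exact procedure.

        import numpy as np

        def fit_circle(x, y):
            # Kasa fit: x^2 + y^2 + D*x + E*y + F = 0 solved by linear least squares
            A = np.column_stack((x, y, np.ones_like(x)))
            rhs = -(x**2 + y**2)
            (d, e, f), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            cx, cy = -d / 2.0, -e / 2.0
            return cx, cy, np.sqrt(cx**2 + cy**2 - f)

        # points on a mode "circle" in the complex fk-spectrum plane, plus noise
        theta = np.linspace(0.3, 5.5, 40)
        rng = np.random.default_rng(5)
        x = 1.0 + 2.0 * np.cos(theta) + 0.02 * rng.standard_normal(theta.size)
        y = -0.5 + 2.0 * np.sin(theta) + 0.02 * rng.standard_normal(theta.size)

        cx, cy, r = fit_circle(x, y)
        phi = np.unwrap(np.arctan2(y - cy, x - cx))   # central-angle sweep
        print(f"centre = ({cx:.2f}, {cy:.2f}), radius = {r:.2f}")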

  13. Application of numerical methods in spectroscopy : fitting of the curve of thermoluminescence

    International Nuclear Information System (INIS)

    RANDRIAMANALINA, S.

    1999-01-01

    The method of nonlinear least squares is one of the mathematical tools widely employed in spectroscopy; it is used for the determination of the parameters of a model. On the other hand, the spline function is among the fitting functions that introduce the smallest error, and it is used for the calculation of the area under a curve. We present an application of these methods, with the details of the corresponding algorithms, to the fitting of the thermoluminescence curve.
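
    A minimal sketch of the spline step (Python): a smoothing-spline fit to a synthetic glow curve and the area under it; the peak shape and smoothing level are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # glow curve: temperature (deg C) vs thermoluminescence intensity (a.u.)
        temp = np.linspace(50., 350., 61)
        rng = np.random.default_rng(2)
        glow = 100.0 * np.exp(-0.5 * ((temp - 210.0) / 35.0) ** 2) \
               + rng.normal(0., 2., temp.size)

        spl = UnivariateSpline(temp, glow, s=temp.size * 4.0)   # smoothing spline fit
        area = spl.integral(temp[0], temp[-1])                  # area under the glow peak
        print(f"area under curve = {area:.0f} (a.u. * deg C)")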

  14. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    Directory of Open Access Journals (Sweden)

    Tatsuhiro Gotanda

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber: the dosimetry process of creating a density-absorbed dose calibration curve is time-consuming. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. The simplified method was performed using Gafchromic EBT3 film, which has a low energy dependence, and a step-shaped Al filter, and it was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods were approximately similar straight lines, with gradients of -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time than the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
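
    A minimal sketch of a straight-line calibration and its inversion (Python); the units and readings are illustrative, chosen only to echo the order of magnitude of the reported gradients.

        import numpy as np

        dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])            # absorbed dose (mGy)
        signal = np.array([1000., 968., 935., 903., 870., 838.])   # film scan value (a.u.)

        slope, intercept = np.polyfit(dose, signal, 1)   # straight-line calibration
        print(f"gradient = {slope:.1f} per mGy")         # negative, as in the study

        def dose_from_signal(s):
            return (s - intercept) / slope               # invert the calibration line

        print(f"signal 900 -> dose {dose_from_signal(900.0):.2f} mGy")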

  15. Standard Test Method for Sandwich Corrosion Test

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method defines the procedure for evaluating the corrosivity of aircraft maintenance chemicals, when present between faying surfaces (sandwich) of aluminum alloys commonly used for aircraft structures. This test method is intended to be used in the qualification and approval of compounds employed in aircraft maintenance operations. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Specific hazard statements appear in Section 9.

  16. Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles

    Science.gov (United States)

    Yu, Yi-Kuo

    2018-03-01

    Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to use curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches in applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate curved areal elements that are not extremely small, we have provided enough detail to include the higher order corrections that are needed for better accuracy when slightly larger surface elements are used.

  17. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components; it is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure, which is further used to create a weighted graph for deciding the merging of curve segments. Unlike existing approaches, which utilize local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problems of curve structure reconstruction from point clouds, as well as the vectorization of simple line drawing images by reconstructing the drawn lines.

  18. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    Science.gov (United States)

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences

  19. A method of non-destructive quantitative analysis of the ancient ceramics with curved surface

    International Nuclear Information System (INIS)

    He Wenquan; Xiong Yingfei

    2002-01-01

    Generally the surface of the sample should be smooth and flat in XRF analysis, but ancient ceramics can hardly match this condition. Two simple approaches are put forward, based on the fundamental parameter method and the empirical correction method of XRF analysis, so that the analysis of small samples or samples with curved surfaces can be easily completed

  20. Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method ...

    Indian Academy of Sciences (India)

    The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found that there is an interesting period of 370 days, which appears in both low/hard and high/soft states.

  1. Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method

    Indian Academy of Sciences (India)

    The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found that there is an interesting period of 370 days, which appears in both low/hard and high/soft states. That period may be ...
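
    A small sketch of the Jurkevich statistic under common conventions (the bin count and normalization are choices of this sketch, not taken from the records above): fold the data on trial periods, sum the within-phase-bin variances, and look for deep minima.

    import numpy as np

    def jurkevich(t, x, periods, n_bins=10):
        total = np.sum((x - x.mean()) ** 2)
        stats = []
        for p in periods:
            phase = (t % p) / p
            bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
            v = 0.0
            for b in range(n_bins):
                grp = x[bins == b]
                if grp.size > 1:
                    v += np.sum((grp - grp.mean()) ** 2)
            stats.append(v / total)   # ~1 off-period, dips near a true period
        return np.array(stats)

    # Hypothetical unevenly sampled sinusoid with a 370-day period.
    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 5000.0, 400))
    x = np.sin(2.0 * np.pi * t / 370.0) + rng.normal(0.0, 0.3, t.size)
    periods = np.linspace(100.0, 600.0, 501)
    print(periods[np.argmin(jurkevich(t, x, periods))])   # expect ~370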

  2. Purohit's spectrophotometric method for determination of stability constants of complexes using Job's curves

    International Nuclear Information System (INIS)

    Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.

    1999-01-01

    A spectrophotometric method for the determination of stability constants making use of Job's curves has been developed. Using this method, the stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, the values of the stability constants were also determined using Harvey and Manning's method. The values of the stability constants determined by the two methods compare well. This new method has been named Purohit's method. (author)

  3. A new method for measuring coronary artery diameters with CT spatial profile curves

    International Nuclear Information System (INIS)

    Shimamoto, Ryoichi; Suzuki, Jun-ichi; Yamazaki, Tadashi; Tsuji, Taeko; Ohmoto, Yuki; Morita, Toshihiro; Yamashita, Hiroshi; Honye, Junko; Nagai, Ryozo; Akahane, Masaaki; Ohtomo, Kuni

    2007-01-01

    Purpose: Coronary artery vascular edge recognition on computed tomography (CT) angiograms is influenced by window parameters. A noninvasive method for vascular edge recognition independent of the window setting, using multi-detector row CT, was contrived, and its feasibility and accuracy were estimated by intravascular ultrasound (IVUS). Methods: Multi-detector row CT was performed to obtain 29 CT spatial profile curves by setting a line cursor across short-axis coronary angiograms processed by multi-planar reconstruction. IVUS was also performed to determine the reference coronary diameter. The IVUS diameter was fitted horizontally between two points on the upward and downward slopes of the profile curves, and the Hounsfield number was measured at the fitted level to test seven candidate indexes for the definition of the intravascular coronary diameter. The best index from the curves should show the best agreement with the IVUS diameter. Results: Of the seven candidates, the agreement was best (16 ± 11%) when the two ratios of the Hounsfield number at the level of the IVUS diameter over that at the peak of the profile curves were used, with water and with fat as the background tissue. These edge definitions were achieved by cutting the horizontal distance by the curves at the level defined by a ratio of 0.41 for a water background and 0.57 for a fat background. Conclusions: Vascular edge recognition of the coronary artery with CT spatial profile curves was feasible, and the contrived method could define the coronary diameter with reasonable agreement

  4. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is to feed the fitting residual back to the overlapping peaks and perform multiple curve fitting passes to obtain a lower residual. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and the concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and the concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
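
    One plausible reading of the error-compensation idea, sketched on invented two-peak data: each pass refits one Gaussian peak against the data minus the current estimate of the other, so the residual is repeatedly fed back into the fit. Only the 321-327 nm window echoes the abstract; peak positions, widths and noise are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, a, mu, sigma):
        return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    x = np.linspace(321.0, 327.0, 300)
    rng = np.random.default_rng(2)
    y = (gauss(x, 1.0, 323.5, 0.4) + gauss(x, 0.7, 324.4, 0.4)
         + rng.normal(0.0, 0.02, x.size))

    p1 = np.array([0.8, 323.3, 0.5])   # initial guesses for the Cu-like peak
    p2 = np.array([0.5, 324.6, 0.5])   # initial guesses for the Fe-like peak
    for _ in range(5):                 # error-compensation passes
        p1, _ = curve_fit(gauss, x, y - gauss(x, *p2), p0=p1)
        p2, _ = curve_fit(gauss, x, y - gauss(x, *p1), p0=p2)

    residual = y - gauss(x, *p1) - gauss(x, *p2)
    print(p1, p2, np.sum(residual**2))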

  5. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...

  6. Residual stress measurement by X-ray diffraction with the Gaussian curve method and its automation

    International Nuclear Information System (INIS)

    Kurita, M.

    1987-01-01

    An X-ray technique with the Gaussian curve method and its automation are described for rapid and nondestructive measurement of residual stress. A simplified equation for measuring the stress by the Gaussian curve method is derived, because in its previous form this method required laborious calculation. The residual stress can be measured in a few minutes, depending on the material, using an automated X-ray stress analyzer with a microcomputer which was developed in the laboratory. The residual stress distribution of a partially induction hardened and tempered (at 280 °C) steel bar was measured with the Gaussian curve method. A sharp residual tensile stress peak of 182 MPa appeared right outside the hardened region, at which fatigue failure is liable to occur
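
    A hedged sketch of the central computation (values invented): near its top a Gaussian diffraction profile makes ln(intensity) parabolic in the angle, so the peak position follows from the parabola vertex by non-iterative least squares; the full method then obtains the stress from the shift of this peak position with specimen tilt.

    import numpy as np

    def gaussian_peak_position(two_theta, counts):
        """Fit a parabola to ln(counts) and return the vertex angle."""
        a, b, c = np.polyfit(two_theta, np.log(counts), 2)
        return -b / (2.0 * a)

    # Hypothetical background-subtracted profile around a 156 degree peak.
    two_theta = np.linspace(155.0, 157.0, 15)
    counts = 5000.0 * np.exp(-0.5 * ((two_theta - 156.07) / 0.35) ** 2)
    print(gaussian_peak_position(two_theta, counts))   # ~156.07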

  7. Standard methods for analysis of phosphorus-32

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    Methods are described for the determination of the radiochemical purity and the absolute disintegration rate of 32P radioisotope preparations. The 32P activity is determined by β counting, and other low-energy β radioactive contaminants are determined by aluminum-absorption curve data. Any γ-radioactive contaminants are determined by γ counting. Routine chemical testing is used to establish the chemical characteristics. The presence or absence of heavy metals is established by spot tests; free acid is determined by use of a pH meter; total solids are determined gravimetrically by evaporation and ignition at a temperature sufficient to evaporate the mineral acids, HCl and HNO3; and nonvolatile matter, defined as that material which does not evaporate or ignite at a temperature sufficient to convert C to CO or CO2, is determined gravimetrically after such ignition

  8. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions

    NARCIS (Netherlands)

    López, S.; France, J.; Odongo, N.E.; McBride, R.A.; Kebreab, E.; Alzahal, O.; McBride, B.W.; Dijkstra, J.

    2015-01-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records

  9. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    The thermoluminescence dosimeter (TLD), especially of LiF:Mg,Ti material, is one of the most practical personal dosimeters known to date. Dose measurement under 100 uGy using a TLD reader is very difficult at a high precision level, and software analysis is used to improve the precision of the TLD reader. The objective of the research is to compare three TL glow curve analysis methods for doses in the range from 5 to 250 uGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between preselected temperature limits, and the background signal is estimated by a second readout following the first readout. The second method is deconvolution: the glow curve is separated into four peaks mathematically, dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility six times over manual analysis for a dose of 20 uGy, and reduces the minimum measurable dose (MMD) to 10 uGy, compared with 60 uGy for manual analysis and 20 uGy for the peak 5 area method. In linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose response curve over the entire dose range
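
    A rough sketch of the third approach, with Gaussian component shapes and all peak parameters invented for illustration (real LiF glow peaks follow glow-peak kinetics rather than Gaussians): deconvolve the curve into four components and take the summed area of the upper three as the dose signal.

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(T, a, mu, s):
        return a * np.exp(-0.5 * ((T - mu) / s) ** 2)

    def four_peaks(T, *p):            # p = (a1, mu1, s1, ..., a4, mu4, s4)
        return sum(gauss(T, *p[3 * i:3 * i + 3]) for i in range(4))

    T = np.linspace(320.0, 550.0, 200)
    true = [30, 370, 12, 50, 400, 14, 80, 430, 15, 120, 470, 18]
    rng = np.random.default_rng(3)
    y = four_peaks(T, *true) + rng.normal(0.0, 2.0, T.size)

    p0 = [20, 365, 10, 40, 395, 12, 60, 425, 14, 100, 465, 16]
    p, _ = curve_fit(four_peaks, T, y, p0=p0, maxfev=20000)

    # Dose signal: summed areas of the three hottest peaks (cf. peaks 3-5).
    areas = [p[3 * i] * p[3 * i + 2] * np.sqrt(2.0 * np.pi) for i in (1, 2, 3)]
    print(sum(areas))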

  10. METHOD TO DEVELOP THE DOUBLE-CURVED SURFACE OF THE ROOF

    Directory of Open Access Journals (Sweden)

    JURCO Ancuta Nadia

    2017-05-01

    This work presents two methods for determining the development of a double-curved surface. The aim of this paper is to show a comparative study between methods for determining the sheet metal requirements for a complex roof cover shape. The first part of the paper presents the basic sketch and information about the roof shape and some well-known buildings which have a complex roof shape. The second part of the paper shows two methods for determining the development of a spherical roof. The graphical method is the first method used for developing the spherical shape; it uses a poly-cylindrical approximation to develop the double-curved surface. The second method is accomplished by using dedicated CAD software.

  11. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    Science.gov (United States)

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
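
    As a sketch of the approach for one of the six functions (all yields below are synthetic, not the study's records), a Gompertz curve can be fitted to cumulative milk yield, with daily yield, peak day and the 305-d total read off the fitted curve and its first derivative.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, W_f, b, c):
        """Cumulative yield W_f * exp(-b * exp(-c * t))."""
        return W_f * np.exp(-b * np.exp(-c * t))

    def daily_yield(t, W_f, b, c):
        """First derivative of the Gompertz curve = yield per day."""
        return W_f * b * c * np.exp(-c * t) * np.exp(-b * np.exp(-c * t))

    dim = np.arange(1.0, 306.0)                     # days in milk
    rng = np.random.default_rng(4)
    cum = gompertz(dim, 9000.0, 3.0, 0.012) + rng.normal(0.0, 30.0, dim.size)

    p, _ = curve_fit(gompertz, dim, cum, p0=[8000.0, 2.0, 0.01])
    print(p, dim[np.argmax(daily_yield(dim, *p))], gompertz(305.0, *p))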

  12. The nuclear fluctuation width and the method of maxima in excitation curves

    International Nuclear Information System (INIS)

    Burjan, V.

    1988-01-01

    The method of counting maxima of excitation curves in the region of the occurrence of nuclear cross section fluctuations is extended to the case of the more realistic maxima defined as a sequence of five points, instead of the simpler and commonly used case of a sequence of three points of an excitation curve. The dependence of the coefficient b^(5)(κ), relating the number of five-point maxima to the mean level width Γ of the compound nucleus, on the relative distance κ of the excitation curve points is calculated. The influence of the random background on the coefficient b^(5)(κ) is discussed, and a comparison with the properties of the three-point coefficient b^(3)(κ) is made, also in connection with the contribution of the random background. The calculated values of b^(5)(κ) are well reproduced by the data obtained from the analysis of artificial excitation curves. (orig.)

  13. A method to enhance the curve negotiation performance of HTS Maglev

    Science.gov (United States)

    Che, T.; Gou, Y. F.; Deng, Z. G.; Zheng, J.; Zheng, B. T.; Chen, P.

    2015-09-01

    High temperature superconducting (HTS) Maglev has attracted more and more attention due to its special self-stable characteristic, and much work has been done to achieve its actual application, but research on curve negotiation has not been systematic and comprehensive. In this paper, we focused on the change of the lateral displacements of the Maglev vehicle when going through curves at different velocities, and studied the change of the electromagnetic forces through experimental methods. Experimental results show that setting an appropriate initial eccentric distance (ED), which is the distance between the center of the bulk unit and the center of the permanent magnet guideway (PMG), when cooling the bulks is favorable for the Maglev system's curve negotiation. This work provides useful suggestions for improving the curve negotiation performance of the HTS Maglev system.

  14. Creep curve modeling of hastelloy-X alloy by using the theta projection method

    International Nuclear Information System (INIS)

    Woo Gon, Kim; Woo-Seog, Ryu; Jong-Hwa, Chang; Song-Nan, Yin

    2007-01-01

    To model the creep curves of Hastelloy-X alloy, which is being considered as a candidate material for VHTR (Very High Temperature gas-cooled Reactor) components, full creep curves were obtained by constant-load creep tests at different stress levels at 950 °C. Using the experimental creep data, the creep curves were modeled by applying the Theta projection method. A number of computing passes of nonlinear least squares fitting (NLSF) analysis were carried out to establish the suitability of the four Theta parameters. The results showed that the Θ1 and Θ2 parameters could not be optimized well, showing large errors when fitting the full creep curves, whereas the Θ3 and Θ4 parameters were optimized without such errors. Accordingly, to find a suitable cutoff strain criterion, the NLSF analysis was performed with various cutoff strains for all the creep curves. An optimum cutoff strain for defining the four Theta parameters accurately was found to be 3%. At the 3% cutoff strain, the predicted curves coincided well with the experimental ones. The variation of the four Theta parameters as a function of stress showed good linearity, and the creep curves were modeled well for the low stress levels. The predicted minimum creep rate showed good agreement with the experimental data. Also, for design usage of Hastelloy-X alloy, the plot of log stress versus log time to 1% strain was predicted, and the creep rate curves with time and a cutoff strain at 950 °C were constructed numerically for a wide range of stresses by using the Theta projection method. (authors)
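
    A minimal sketch of a Theta projection fit on invented data, using the standard form strain(t) = Θ1(1 − exp(−Θ2·t)) + Θ3(exp(Θ4·t) − 1); the minimum creep rate is then read off the analytic derivative of the fitted curve.

    import numpy as np
    from scipy.optimize import curve_fit

    def theta_projection(t, th1, th2, th3, th4):
        return th1 * (1.0 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1.0)

    t = np.linspace(0.0, 500.0, 80)                 # time (h), illustrative
    rng = np.random.default_rng(5)
    strain = (theta_projection(t, 0.01, 0.05, 0.002, 0.006)
              + rng.normal(0.0, 2e-4, t.size))

    p, _ = curve_fit(theta_projection, t, strain,
                     p0=[0.005, 0.02, 0.001, 0.004], maxfev=10000)

    # Creep rate from the analytic derivative; its minimum approximates
    # the minimum creep rate of the modeled curve.
    rate = p[0] * p[1] * np.exp(-p[1] * t) + p[2] * p[3] * np.exp(p[3] * t)
    print(p, rate.min())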

  15. Mixed gamma emitting gas standard and method

    International Nuclear Information System (INIS)

    McFarland, R.C.; McFarland, P.A.

    1986-01-01

    The invention in one aspect pertains to a method of calibrating gamma spectroscopy systems for gas counting in a variety of counting containers comprising withdrawing a precision volume of a mixed gamma-emitting gas standard from a precision volume vial and delivering the withdrawn precision volume of the gas standard to the interior of a gas counting container. Another aspect of the invention pertains to a mixed gamma-emitting gas standard, comprising a precision spherical vial of predetermined volume, multiple mixed emitting gas components enclosed within the vial, and means for withdrawing from the vial a predetermined amount of the components wherein the gas standard is used to calibrate a gamma spectrometer system for gas counting over a wide energy range without the use of additional standards. A third aspect comprehends a gamma spectrometer calibration system for gas counting, comprising a precision volume spherical glass vial for receiving mixed multiisotope gas components, and two tubular arms extending from the vial. A ground glass stopcock is positioned on each arm, and the outer end of one arm is provided with a rubber septum port

  16. Wright meets Markowitz: How standard portfolio theory changes when assets are technologies following experience curves

    OpenAIRE

    Way, Rupert; Lafond, François; Farmer, J. Doyne; Lillo, Fabrizio; Panchenko, Valentyn

    2017-01-01

    This paper considers how to optimally allocate investments in a portfolio of competing technologies. We introduce a simple model representing the underlying trade-off - between investing enough effort in any one project to spur rapid progress, and diversifying effort over many projects simultaneously to hedge against failure. We use stochastic experience curves to model the idea that investing more in a technology reduces its unit costs, and we use a mean-variance objective function to unders...

  17. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application to the determination of halide ions (Cl-, Br-, I-) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the method of linearization permits the determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 x 10^-5 M). 3 refs., 2 figs., 3 tabs

  18. Fitness analysis method for magnesium in drinking water with atomic absorption using quadratic curve calibration

    Directory of Open Access Journals (Sweden)

    Esteban Pérez-López

    2014-11-01

    Because of the importance of quantitative chemical analysis in research, quality control, the sale of services and other areas of interest, and because some instrumental analysis methods are limited to quantification with a linear calibration curve (sometimes because of the short linear dynamic range of the analyte, and sometimes by the limits of the technique itself), there is a need to investigate the suitability of quadratic curves for analytical quantification, seeking to demonstrate that they are a valid calculation model for chemical analysis instruments. To this end, an analysis method based on atomic absorption spectroscopy was taken, in particular the determination of magnesium in a sample of drinking water from the Tacares sector of northern Grecia, employing a nonlinear calibration curve with specifically quadratic behavior, and the test results were compared with those obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for the determination in question, with full confidence, since the concentrations are very similar and, by hypothesis testing, can be considered equal.
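
    A small sketch of quadratic calibration with inverse prediction on invented magnesium standards: fit A = a·C² + b·C + c, then recover the sample concentration as the physically meaningful root of the inverted parabola.

    import numpy as np

    conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])              # mg/L Mg standards
    absb = np.array([0.002, 0.108, 0.209, 0.303, 0.391, 0.472])  # absorbances

    a, b, c = np.polyfit(conc, absb, 2)             # A = a*C^2 + b*C + c

    def concentration(A):
        """Invert a*C^2 + b*C + (c - A) = 0 and take the physical root."""
        roots = np.roots([a, b, c - A])
        real = roots[np.isreal(roots)].real
        return real[real >= 0].min()

    print(concentration(0.250))                     # sample absorbance 0.250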

  19. A simple preparation of calibration curve standards of 134Cs and 137Cs by serial dilution of a standard reference material

    International Nuclear Information System (INIS)

    Labrecque, J.J.; Rosales, P.A.

    1990-01-01

    Two sets of calibration standards for 134Cs and 137Cs were prepared by serial dilution of a natural matrix standard reference material, IAEA-154 whey powder. The first set was intended to screen imported milk powders suspected to be contaminated with 134Cs and 137Cs; their concentrations ranged from 40 to 400 Bq/kg. The other set of calibration standards was prepared to measure the environmental levels of 137Cs in commercial Venezuelan milk powders; their concentrations ranged from 3 to 10 Bq/kg of 137Cs. The accuracy of these calibration curves was checked with the IAEA-152 and A-14 milk powders, and the measured values were in good agreement with the certified values. Finally, it is shown that these preparation techniques using serial dilution of a standard reference material are simple, rapid, precise, accurate and cost-effective. (author) 5 refs.; 5 figs.; 3 tabs

  20. Arterial pressure measurement: Is the envelope curve of the oscillometric method influenced by arterial stiffness?

    International Nuclear Information System (INIS)

    Gelido, G; Angiletta, S; Pujalte, A; Quiroga, P; Cornes, P; Craiem, D

    2007-01-01

    Measurement of peripheral arterial pressure using the oscillometric method is commonly used by professionals as well as by patients in their homes. This non-invasive automatic method is fast and efficient, and the required equipment is affordable and low cost. The measurement method consists of obtaining parameters from a calibrated decreasing curve that is modulated by the heart beats which appear when arterial pressure reaches the cuff pressure. Diastolic, mean and systolic pressures are obtained by calculating particular instants from the heart beat envelope curve. In this article we analyze the envelope of this amplified curve to find out if its morphology is related to arterial stiffness in patients. We found, in 33 volunteers, that the envelope waveform width correlates with systolic pressure (r=0.4, p<0.05), with pulse pressure (r=0.6, p<0.05) and with pulse pressure normalized to systolic pressure (r=0.6, p<0.05). We believe that the morphology of the heart beat envelope curve obtained with the oscillometric method for peripheral pressure measurement depends on arterial stiffness and can be used to enhance pressure measurements

  1. Statistical reexamination of analytical method on the observed electron spin (or nuclear) resonance curves

    International Nuclear Information System (INIS)

    Kim, J.W.

    1980-01-01

    Observed magnetic resonance curves are statistically reexamined. Typical models of resonance lines are the Lorentzian and Gaussian distribution functions. In the case of metallic, alloy or intermetallic compound samples, the observed resonance lines are superpositions of the absorption line and the dispersion line. Methods for analyzing superposed resonance lines are demonstrated. (author)

  2. An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method

    Science.gov (United States)

    Wyse, Adam E.; Babcock, Ben

    2017-01-01

    This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…
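
    For orientation only (the ratings and scores below are invented), the Hofstee computation intersects the line through (minimum cut score, maximum fail rate) and (maximum cut score, minimum fail rate) with the empirical fail-rate curve; when the segment never crosses the curve, the cut score is undefined, which is the situation the article examines.

    import numpy as np

    scores = np.array([42, 55, 61, 63, 70, 74, 78, 81, 88, 93], float)
    k_min, k_max = 60.0, 75.0    # panelists' min/max acceptable cut scores
    f_max, f_min = 0.40, 0.10    # panelists' max/min acceptable fail rates

    def fail_rate(cut):
        return np.mean(scores < cut)

    def hofstee_line(cut):
        return f_max + (f_min - f_max) * (cut - k_min) / (k_max - k_min)

    cuts = np.linspace(k_min, k_max, 1501)
    diff = np.array([fail_rate(c) - hofstee_line(c) for c in cuts])
    crossings = np.nonzero(np.diff(np.sign(diff)))[0]
    if crossings.size:
        print("cut score ~", cuts[crossings[0]])
    else:
        print("undefined: the Hofstee segment does not cross the curve")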

  3. A new method for curve fitting to the data with low statistics not using the chi2-method

    International Nuclear Information System (INIS)

    Awaya, T.

    1979-01-01

    A new method which does not use the χ²-fitting method is investigated in order to fit a theoretical curve to data with low statistics. The method is compared with the usual and modified χ²-fitting methods. The analyses are done for data which are generated by computers. It is concluded that the new method gives good results in all the cases. (Auth.)

  4. The strategy curve. A method for representing and interpreting generator bidding strategies

    International Nuclear Information System (INIS)

    Lucas, N.; Taylor, P.

    1995-01-01

    The pool is the novel trading arrangement at the heart of the privatized electricity market in England and Wales. This central role in the new system makes it crucial that it is seen to function efficiently. Unfortunately, it is governed by a set of complex rules, which leads to a lack of transparency, and this makes monitoring of its operation difficult. This paper seeks to provide a method for illuminating one aspect of the pool, that of generator bidding behaviour. We introduce the concept of a strategy curve, which is a concise device for representing generator bidding strategies. This curve has the appealing characteristic of directly revealing any deviation in the bid price of a genset from the costs of generating electricity. After a brief discussion about what constitutes price and cost in this context we present a number of strategy curves for different days and provide some interpretation of their form, based in part on our earlier work with game theory. (author)

  5. Determination of Dispersion Curves for Composite Materials with the Use of Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Barski Marek

    2017-06-01

    Elastic waves used in Structural Health Monitoring systems have a strongly dispersive character. It is therefore necessary to determine the appropriate dispersion curves in order to properly interpret the received dynamic response of an analyzed structure. The shape of the dispersion curves as well as the number of wave modes depend on the mechanical properties of the layers and the frequency of the excited signal. In the current work, a relatively new approach is utilized, namely the stiffness matrix method. In contrast to the transfer matrix method or the global matrix method, this algorithm is considered numerically unconditionally stable and as effective as the transfer matrix approach. However, it will be demonstrated that in the case of hybrid composites, where the mechanical properties of particular layers differ significantly, obtaining results can be difficult. The theoretical relationships are presented for a composite plate of arbitrary stacking sequence and an arbitrary direction of elastic wave propagation. As a numerical example, the dispersion curves are estimated for a lamina made of carbon fibers and epoxy resin. It is assumed that the elastic waves travel parallel, perpendicular, and at an arbitrary angle to the fibers in the lamina. Next, the dispersion curves are determined for the laminate [0°, 90°, 0°, 90°, 0°, 90°, 0°, 90°] and the hybrid [Al, 90°, 0°, 90°, 0°, 90°, 0°], where Al is the aluminum alloy PA38 and the rest of the layers are made of carbon fibers and epoxy resin.

  6. A new method for testing pile by single-impact energy and P-S curve

    Science.gov (United States)

    Xu, Zhao-Yong; Duan, Yong-Kang; Wang, Bin; Hu, Yi-Li; Yang, Run-Hai; Xu, Jun; Zhao, Jin-Ming

    2004-11-01

    By studying the pile-formula and stress-wave methods (e.g., the CASE method), the authors propose a new method for testing piles using the single-impact energy and P-S curves. The vibration and wave figures are recorded, and the dynamic and static displacements are measured by different transducers near the top of the pile when the pile is impacted by a heavy hammer or micro-rocket. By observing the transformation coefficient of the driving energy (total energy), the consumed energy of wave motion and vibration, and so on, the vertical bearing capacity of a single pile is measured and calculated. Then, using the vibration wave diagram, the dynamic relation curve between the force (P) and the displacement (S) is calculated and the yield points are determined. Using the static-loading test, the dynamic results are checked and the relative constants of the dynamic-static P-S curves are determined. Then the subsidence quantity corresponding to the bearing capacity is determined. Moreover, the quality of the formed pile body can be judged from the form of the P-S curves.

  7. SCINFI, a program to calculate the standardization curve in liquid scintillation counting; SCINFI, un programa para calcular la curva de calibracion eficiencia-extincion en centelleo liquido

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    1984-07-01

    A code, SCINFI, written in BASIC, was developed to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem nuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (Author)

  8. Determination of the saturation curve of a primary standard for low energy X-ray beams

    International Nuclear Information System (INIS)

    Cardoso, Ricardo de Souza; Poledna, Roberto; Peixoto, Jose Guilherme P.

    2003-01-01

    The free-air chamber is well recognized as the primary standard for the measurement of air kerma, owing to its ability to perform absolute measurements of that quantity according to its definition. The Institute for Radioprotection and Dosimetry (IRD), Brazil, therefore used a cylindrical free-air ionization chamber for its implementation. Initially, a mechanical characterization was performed for its verification as a primary standard. This paper gives a detailed description of the operating point of 2000 V found for that chamber and of its saturation coefficient

  9. Prediction Method for the Complete Characteristic Curves of a Francis Pump-Turbine

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2018-02-01

    Complete characteristic curves of a pump-turbine are essential for simulating hydraulic transients and designing pumped storage power plants, but are often unavailable in the preliminary design stage. To solve this issue, a prediction method for the complete characteristics of a Francis pump-turbine was proposed. First, based on the Euler equations and the velocity triangles at the runner, a mathematical model describing the complete characteristics of a Francis pump-turbine was derived. According to multiple sets of measured complete characteristic curves, explicit expressions for the characteristic parameters of characteristic operating point sets (COPs), as functions of specific speed and guide vane opening, were then developed to determine the undetermined coefficients in the mathematical model. Ultimately, by combining the mathematical model with the regression analysis of COPs, the complete characteristic curves for an arbitrary specific speed were predicted. Moreover, a case study shows that the predicted characteristic curves are in good agreement with the measured data. The results obtained by 1D numerical simulation of the hydraulic transient process using the predicted characteristics deviate little from those obtained using the measured characteristics. This method is effective and sufficient for a priori simulations before the measured characteristics become available, and provides important support for the preliminary design of pumped storage power plants.

  10. A novel method of calculating the energy deposition curve of nanosecond pulsed surface dielectric barrier discharge

    International Nuclear Information System (INIS)

    He, Kun; Wang, Xinying; Lu, Jiayu; Cui, Quansheng; Pang, Lei; Di, Dongxu; Zhang, Qiaogen

    2015-01-01

    Obtaining the energy deposition curve is very important in the fields to which nanosecond pulse dielectric barrier discharges (NPDBDs) are applied; it helps the understanding of the discharge physics and of fast gas heating. In this paper, an equivalent circuit model composed of three capacitances is introduced and a method of calculating the energy deposition curve is proposed for a nanosecond pulse surface dielectric barrier discharge (NPSDBD) plasma actuator. The capacitance Cd and the energy deposition curve ER are determined by mathematically proving that the mapping from Cd to ER is bijective and numerically searching for the Cd that makes ER a monotonically non-decreasing function. It is found that the value of the capacitance Cd varies with the amplitude of the applied pulse voltage, due to the change of the discharge area, and depends on the polarity of the applied voltage. The bijectiveness of the mapping from Cd to ER in nanosecond pulse volumetric dielectric barrier discharge (NPVDBD) is demonstrated and the feasibility of applying the new method to NPVDBD is validated. This preliminarily shows a high possibility of developing a unified approach to calculate the energy deposition curve in NPDBD. (paper)

  11. Learning curve for robotic-assisted surgery for rectal cancer: use of the cumulative sum method.

    Science.gov (United States)

    Yamaguchi, Tomohiro; Kinugasa, Yusuke; Shiomi, Akio; Sato, Sumito; Yamakawa, Yushi; Kagawa, Hiroyasu; Tomioka, Hiroyuki; Mori, Keita

    2015-07-01

    Few data are available to assess the learning curve for robotic-assisted surgery for rectal cancer. The aim of the present study was to evaluate the learning curve for robotic-assisted surgery for rectal cancer by a surgeon at a single institute. From December 2011 to August 2013, a total of 80 consecutive patients who underwent robotic-assisted surgery for rectal cancer performed by the same surgeon were included in this study. The learning curve was analyzed using the cumulative sum method. This method was used for all 80 cases, taking into account operative time. Operative procedures included anterior resections in 6 patients, low anterior resections in 46 patients, intersphincteric resections in 22 patients, and abdominoperineal resections in 6 patients. Lateral lymph node dissection was performed in 28 patients. Median operative time was 280 min (range 135-683 min), and median blood loss was 17 mL (range 0-690 mL). No postoperative complications of Clavien-Dindo classification Grade III or IV were encountered. We arranged operative times and calculated cumulative sum values, allowing differentiation of three phases: phase I, Cases 1-25; phase II, Cases 26-50; and phase III, Cases 51-80. Our data suggested three phases of the learning curve in robotic-assisted surgery for rectal cancer. The first 25 cases formed the learning phase.
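
    A minimal sketch of the cumulative sum computation on invented operative times: CUSUM_i = sum over the first i cases of (time_j − mean time), whose slope changes mark the boundaries between learning phases.

    import numpy as np

    op_times = np.array([420, 390, 380, 360, 345, 330, 300, 290,
                         280, 270, 265, 260, 258, 255, 250, 252], float)

    cusum = np.cumsum(op_times - op_times.mean())

    # A rising CUSUM means slower-than-average cases (learning phase);
    # a falling CUSUM means faster-than-average cases (proficiency).
    for i, v in enumerate(cusum, start=1):
        print(f"case {i:2d}: CUSUM = {v:8.1f}")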

  12. Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.

    Science.gov (United States)

    Chong, See Yenn; Todd, Michael D

    2018-05-01

    Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. One Curve Embedded Full-Bridge MMC Modeling Method with Detailed Representation of IGBT Characteristics

    Science.gov (United States)

    Hongyang, Yu; Zhengang, Lu; Xi, Yang

    2017-05-01

    The Modular Multilevel Converter (MMC) is more and more widely used in high voltage DC transmission systems and high power motor drive systems, and it is a major topological structure for high power AC-DC converters. Due to the large number of modules, the complex control algorithm, and the high power application background, the MMC model used for simulation should be as accurate as possible in simulating the details of how the MMC works for dynamic testing of the MMC controller. But so far, there is no simple simulation MMC model which can reproduce the switching dynamic process. In this paper, a one-curve-embedded full-bridge MMC modeling method with detailed representation of IGBT characteristics is proposed. This method is based on switching-curve look-up and simple circuit calculation, and it is simple to implement. Based on simulation comparison tests under Matlab/Simulink, the proposed method is shown to be correct.

  14. About the method of approximation of a simple closed plane curve with a sharp edge

    Directory of Open Access Journals (Sweden)

    Zelenyy A.S.

    2017-02-01

    As noted in the article, the problem of interpolating a simple plane curve initially arose in the simulation of subsonic flow around a body, with the subsequent calculation of the velocity potential using the vortex panel method. However, as it turned out, the practical importance of this method is much wider. The algorithm can be successfully applied in any task that requires a discrete set of points describing an arbitrary curve: the potential function method, flow around a body with a sharp trailing edge (airfoil, liquid drop, etc.), curves whose analytic expression is very difficult to obtain, the creation of fonts and logos, and some tasks in architecture and the garment industry.

  15. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    Science.gov (United States)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).

  16. Monitoring pulmonary function with superimposed pulmonary gas exchange curves from standard analyzers.

    Science.gov (United States)

    Zar, Harvey A; Noe, Frances E; Szalados, James E; Goodrich, Michael D; Busby, Michael G

    2002-01-01

    A repetitive graphic display of the single breath pulmonary function can indicate changes in cardiac and pulmonary physiology brought on by clinical events. Parallel advances in computer technology and monitoring make real-time, single breath pulmonary function clinically practicable. We describe a system built from a commercially available airway gas monitor and off the shelf computer and data-acquisition hardware. Analog data for gas flow rate, O2, and CO2 concentrations are introduced into a computer through an analog-to-digital conversion board. Oxygen uptake (VO2) and carbon dioxide output (VCO2) are calculated for each breath. Inspired minus expired concentrations for O2 and CO2 are displayed simultaneously with the expired gas flow rate curve for each breath. Dead-space and alveolar ventilation are calculated for each breath and readily appreciated from the display. Graphs illustrating the function of the system are presented for the following clinical scenarios; upper airway obstruction, bronchospasm, bronchopleural fistula, pulmonary perfusion changes and inadequate oxygen delivery. This paper describes a real-time, single breath pulmonary monitoring system that displays three parameters graphed against time: expired flow rate, oxygen uptake and carbon dioxide production. This system allows for early and rapid recognition of treatable conditions that may lead to adverse events without any additional patient measurements or invasive procedures. Monitoring systems similar to the one described in this paper may lead to a higher level of patient safety without any additional patient risk.

  17. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research. (general)

  18. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research.
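
    As a generic illustration of the S-curve idea (the yearly counts are invented placeholders, not the paper's IPv4 statistics), a logistic curve can be fitted to a growth series and its carrying capacity read off as the finite growth limit.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """S curve with carrying capacity K, rate r and midpoint t0."""
        return K / (1.0 + np.exp(-r * (t - t0)))

    years = np.arange(2000.0, 2011.0)
    counts = np.array([0.08, 0.13, 0.20, 0.31, 0.47, 0.68,
                       0.95, 1.25, 1.55, 1.80, 1.98])   # x 10^8, invented

    p, _ = curve_fit(logistic, years, counts, p0=[2.5, 0.5, 2006.0])
    print("growth limit K ~", p[0], "x 10^8")
    print("forecast for 2015:", logistic(2015.0, *p))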

  19. Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves

    Science.gov (United States)

    Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley

    2016-09-01

    Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, allowing for a finalized outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.

  20. Feasibility of the correlation curves method in calorimeters of different types

    OpenAIRE

    Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.

    2014-01-01

    The development of cascade processes in calorimeters of different types is simulated for the implementation of energy measurement by the correlation curves method. A heterogeneous calorimeter has significant transient effects, associated with the difference in the critical energy between the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to the rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...

  1. A method for the rapid generation of nonsequential light-response curves of chlorophyll fluorescence.

    Science.gov (United States)

    Serôdio, João; Ezequiel, João; Frommlet, Jörg; Laviale, Martin; Lavaud, Johann

    2013-11-01

    Light-response curves (LCs) of chlorophyll fluorescence are widely used in plant physiology. Most commonly, LCs are generated sequentially, exposing the same sample to a sequence of distinct actinic light intensities. These measurements are not independent, as the response to each new light level is affected by the light exposure history experienced during previous steps of the LC, an issue particularly relevant in the case of the popular rapid light curves. In this work, we demonstrate the proof of concept of a new method for the rapid generation of LCs from nonsequential, temporally independent fluorescence measurements. The method is based on the combined use of sample illumination with digitally controlled, spatially separated beams of actinic light and a fluorescence imaging system. It allows the generation of a whole LC, including a large number of actinic light steps and adequate replication, within the time required for a single measurement (and therefore named "single-pulse light curve"). This method is illustrated for the generation of LCs of photosystem II quantum yield, relative electron transport rate, and nonphotochemical quenching on intact plant leaves exhibiting distinct light responses. This approach makes it also possible to easily characterize the integrated dynamic light response of a sample by combining the measurement of LCs (actinic light intensity is varied while measuring time is fixed) with induction/relaxation kinetics (actinic light intensity is fixed and the response is followed over time), describing both how the response to light varies with time and how the response kinetics varies with light intensity.

  2. A neural network driving curve generation method for the heavy-haul train

    Directory of Open Access Journals (Sweden)

    Youneng Huang

    2016-05-01

    The heavy-haul train has a series of characteristics, such as its locomotive traction properties, greater train length, and nonlinear train pipe pressure during braking. When the train is running on a continuous long and steep downgrade railway line, its safety is ensured by cycle braking, which places high demands on the driving skills of the driver. In this article, a driving curve generation method for the heavy-haul train based on a neural network is proposed. First, in order to describe the nonlinear characteristics of train braking, the neural network model is constructed and trained on practical driving data. In the neural network model, various nonlinear neurons are interconnected to process and transmit information. The target values of train braking pressure reduction and release time are obtained by modeling the braking process. The equation of train motion is then solved to obtain the driving curve. Finally, in four typical operation scenarios, the curve data generated by the method are compared with corresponding practical data from the Shuohuang heavy-haul railway line; the results show that the method is effective.

  3. Standard-Setting Methods as Measurement Processes

    Science.gov (United States)

    Nichols, Paul; Twing, Jon; Mueller, Canda D.; O'Malley, Kimberly

    2010-01-01

    Some writers in the measurement literature have been skeptical of the meaningfulness of achievement standards and described the standard-setting process as blatantly arbitrary. We argue that standard setting is more appropriately conceived of as a measurement process similar to student assessment. The construct being measured is the panelists'…

  4. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. the investigation of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air-entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that for the Cresswell and Paydar (1996) method there was no significant difference between case 1 and case 2, although the RMSE in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
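
    A minimal sketch of the two-point idea, assuming the Tyler and Wheatcraft (1990) fractal form θ = θs(ψa/ψ)^(3−D) with a known saturated water content θs; the closed-form solution and all sample values below are illustrative, not the study's optimization procedure.

    ```python
    # Minimal sketch: estimate fractal dimension D and air-entry value psi_a
    # from two (matric potential, water content) points, assuming the model
    # theta = theta_s * (psi_a / psi) ** (3 - D) and known theta_s.
    import numpy as np

    def fit_two_points(psi1, th1, psi2, th2, theta_s):
        m = (np.log(th1) - np.log(th2)) / (np.log(psi2) - np.log(psi1))  # m = 3-D
        D = 3.0 - m
        psi_a = psi1 * (th1 / theta_s) ** (1.0 / m)   # from the first point
        return D, psi_a

    def swrc(psi, theta_s, D, psi_a):
        return theta_s * (psi_a / np.asarray(psi)) ** (3.0 - D)

    # Illustrative values: psi in kPa, volumetric water contents.
    D, psi_a = fit_two_points(33.0, 0.25, 1500.0, 0.12, theta_s=0.45)
    print(D, psi_a, swrc([100.0, 500.0], 0.45, D, psi_a))
    ```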

  5. A three-parameter Langmuir-type model for fitting standard curves of sandwich enzyme immunoassays with special attention to the α-fetoprotein assay

    NARCIS (Netherlands)

    Kortlandt, W.; Endeman, H.J.; Hoeke, J.O.O.

    In a simplified approach to the reaction kinetics of enzyme-linked immunoassays, a Langmuir-type equation y = [ax/(b + x)] + c was derived. This model proved to be superior to logit-log and semilog models in the curve-fitting of standard curves. An assay for α-fetoprotein developed in our laboratory
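
    A minimal sketch of fitting the quoted three-parameter model y = ax/(b + x) + c to a standard curve by non-linear least squares; the concentrations and responses are made-up illustrative values.

    ```python
    # Minimal sketch: fit the Langmuir-type standard curve and invert it to
    # read an unknown concentration off the fitted curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(x, a, b, c):
        return a * x / (b + x) + c

    x = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # standard concentrations
    y = np.array([0.05, 0.32, 0.55, 0.95, 1.25, 1.48])  # measured responses
    (a, b, c), _ = curve_fit(langmuir, x, y, p0=(1.5, 20.0, 0.05))

    def invert(yq):
        # Solve y = a*x/(b+x) + c for x.
        return b * (yq - c) / (a - (yq - c))

    print(a, b, c, invert(0.8))
    ```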

  6. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

    For the engineering design of large-scale grid-connected photovoltaic power stations, and for the research and development of many simulation and analysis systems, it is necessary to draw the operating characteristic curves of photovoltaic array units by computer; a segmented non-linear interpolation algorithm is proposed for this purpose. Taking component performance parameters as the main design basis, the computer can obtain five PV module performance curves. Combined with the series and parallel connection of the PV array, computer drawing of the performance curve of the PV array unit can be realized. The specific data can also be fed into PV development software, improving its operation in practical applications.

  7. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) method to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs over a wide range, obviating problems concerning quasi-degeneracy.

  8. Cleanup standards and pathways analysis methods

    International Nuclear Information System (INIS)

    Devgun, J.S.

    1993-01-01

    Remediation of a radioactively contaminated site requires that certain regulatory criteria be met before the site can be released for unrestricted future use. Since the ultimate objective of remediation is to protect public health and safety, residual radioactivity levels remaining at a site after cleanup must be below certain preset limits or meet acceptable dose or risk criteria. Release of a decontaminated site requires proof that the radiological data obtained from the site meet the regulatory criteria for such a release. Typically, release criteria consist of a composite of acceptance limits that depend on the radionuclides, the media in which they are present, and federal and local regulations. In recent years, the US Department of Energy (DOE) has developed a pathways analysis model to determine site-specific soil activity concentration guidelines for radionuclides that do not have established generic acceptance limits. The DOE pathways analysis computer code (developed by Argonne National Laboratory for the DOE) is called RESRAD (Gilbert et al. 1989). Similar efforts have been initiated by the US Nuclear Regulatory Commission (NRC) to develop and use dose-related criteria based on generic pathways analyses rather than simplistic numerical limits on residual radioactivity. The focus of this paper is radionuclide-contaminated soil. Cleanup standards are reviewed, pathways analysis methods are described, and an example is presented in which RESRAD was used to derive cleanup guidelines.

  9. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, and computer numerical control. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps, or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fit any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
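
    A minimal sketch of the two-step idea under simplifying assumptions: coarse knots from recursive bisection against a straight-line fit tolerance, followed by a least-squares B-spline on those knots (SciPy). The bisection criterion stands in for the paper's procedure and the continuity-level optimization is omitted.

    ```python
    # Minimal sketch: bisect until each segment fits a line within tolerance,
    # then use the split points as interior knots of a least-squares spline.
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    def coarse_knots(x, y, tol):
        p = np.polyfit(x, y, 1)
        if np.abs(np.polyval(p, x) - y).max() <= tol or len(x) < 8:
            return []
        mid = len(x) // 2
        return (coarse_knots(x[:mid + 1], y[:mid + 1], tol)
                + [x[mid]]
                + coarse_knots(x[mid:], y[mid:], tol))

    x = np.linspace(0.0, 4.0, 400)
    y = np.sin(2.0 * x) + 0.25 * np.sign(x - 2.0)   # curve with a jump
    t = coarse_knots(x, y, tol=0.05)                # step 1: coarse knots
    spline = LSQUnivariateSpline(x, y, t, k=3)      # step 2: LSQ spline fit
    print(len(t), float(np.abs(spline(x) - y).max()))
    ```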

  10. An Efficient Method for Detection of Outliers in Tracer Curves Derived from Dynamic Contrast-Enhanced Imaging

    Directory of Open Access Journals (Sweden)

    Linning Ye

    2018-01-01

    The presence of outliers in tracer concentration-time curves derived from dynamic contrast-enhanced imaging can adversely affect the analysis of the tracer curves by model-fitting. A computationally efficient method for detecting outliers in tracer concentration-time curves is presented in this study. The proposed method is based on a piecewise linear model and implemented using a robust clustering algorithm. The method is noniterative and all the parameters are automatically estimated. To compare the proposed method with existing Gaussian model based and robust regression-based methods, simulation studies were performed by simulating tracer concentration-time curves using the generalized Tofts model and kinetic parameters derived from different tissue types. Results show that the proposed method and the robust regression-based method achieve better detection performance than the Gaussian model based method. Compared with the robust regression-based method, the proposed method can achieve similar detection performance with much faster computation speed.
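
    A minimal sketch of residual-based outlier flagging on a tracer curve, assuming a coarse piecewise-linear baseline through block medians and a robust MAD threshold; this stands in for the paper's clustering algorithm, and all parameters are illustrative.

    ```python
    # Minimal sketch: flag points whose robust residual from a coarse
    # piecewise-linear baseline is large.
    import numpy as np

    def flag_outliers(t, c, n_segments=24, k=4.0):
        edges = np.linspace(t[0], t[-1], n_segments + 1)
        mids, meds = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            sel = (t >= lo) & (t <= hi)
            if sel.any():
                mids.append(t[sel].mean())
                meds.append(np.median(c[sel]))
        resid = c - np.interp(t, mids, meds)       # piecewise-linear baseline
        mad = np.median(np.abs(resid - np.median(resid)))
        return np.abs(resid) > k * 1.4826 * mad    # robust z-score threshold

    t = np.linspace(0.0, 5.0, 120)
    c = 3.0 * t * np.exp(-t) + 0.02 * np.random.randn(t.size)  # toy tracer curve
    c[[30, 70]] += 0.5                                          # injected spikes
    print(np.where(flag_outliers(t, c))[0])
    ```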

  11. Assessment of Estimation Methods for Stage-Discharge Rating Curve in Rippled Bed Rivers

    Directory of Open Access Journals (Sweden)

    P. Maleki

    2016-02-01

    in a flume located at the hydraulic laboratory of Shahrekord University, Iran. Bass (1993) [reported in Joep (1999)] determined an empirical relation between median grain size, D50, and equilibrium ripple length, l: l = 75.4(log D50) + 197 (Eq. 1), where l and D50 are both given in millimeters. Raudkivi (1997) [reported in Joep (1999)] proposed another empirical relation to estimate the ripple length, with D50 given in millimeters: l = 245(D50)^0.35 (Eq. 2). Flemming (1988) [reported in Joep (1999)] derived an empirical relation between mean ripple length and ripple height based on a large dataset: hm = 0.0677 l^0.8098 (Eq. 3), where hm is the mean ripple height (m) and l is the mean ripple length (m). Ikeda and Asaeda (1983) investigated the characteristics of flow over ripples; they found that there are separation areas and vortices in the lee of ripples, and that maximum turbulent diffusion occurs in these areas. Materials and Methods: In this research, the effects of two different types of ripples on the hydraulic characteristics of flow were experimentally studied in a flume located at the hydraulic laboratory of Shahrekord University, Iran. The flume is 0.4 m wide and deep and 12 m long. In total, 48 tests with slopes varying from 0.0005 to 0.003 and discharges of 10 to 40 L/s were conducted. Velocity and shear stress were measured using an Acoustic Doppler Velocimeter (ADV). Two different types of ripples (parallel and flake ripples) were used. The stage-discharge rating curve was then estimated in different ways, such as Einstein-Barbarossa, Shen, and White et al. Results and Discussion: Statistical methods were used to evaluate the test results. The White method had the maximum values of α, RMSE, and average absolute error among the methods, and the Einstein method underestimated the fitted discharge. Evaluation of stage-discharge rating curve methods based on the results obtained from this research showed that the Shen method had the highest accuracy for developing the

  12. Trajectory Optimization of Spray Painting Robot for Complex Curved Surface Based on Exponential Mean Bézier Method

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2017-01-01

    Automated tool trajectory planning for spray painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new method of trajectory optimization for spray painting robots based on the exponential mean Bézier method. The definition and three theorems of exponential mean Bézier curves are discussed. Then a spatial painting path generation method based on exponential mean Bézier curves is developed, and a new simple algorithm for trajectory optimization on complex curved surfaces is introduced, with a golden section method adopted to calculate the optimal values. The experimental results illustrate that the exponential mean Bézier curves enhanced the flexibility of path planning, and that the trajectory optimization algorithm achieved satisfactory performance. This method can also be extended to other applications.

  13. Lung function in North American Indian children: reference standards for spirometry, maximal expiratory flow volume curves, and peak expiratory flow.

    Science.gov (United States)

    Wall, M A; Olson, D; Bonn, B A; Creelman, T; Buist, A S

    1982-02-01

    Reference standards of lung function were determined in 176 healthy North American Indian children (94 girls, 82 boys) 7 to 18 yr of age. Spirometry, maximal expiratory flow volume curves, and peak expiratory flow rate were measured using techniques and equipment recommended by the American Thoracic Society. Standing height was found to be an accurate predictor of lung function, and prediction equations for each lung function variable are presented using standing height as the independent variable. Lung volumes and expiratory flow rates in North American Indian children were similar to those previously reported for white and Mexican-American children but were greater than those in black children. In both boys and girls, lung function increased in a curvilinear fashion with height. Volume-adjusted maximal expiratory flow rates after expiring 50 or 75% of FVC tended to decrease in both sexes as age and height increased. Our maximal expiratory flow volume curve data suggest that as North American Indian children grow, lung volume increases at a slightly faster rate than airway size does.

  14. An Advanced Encryption Standard Powered Mutual Authentication Protocol Based on Elliptic Curve Cryptography for RFID, Proven on WISP

    Directory of Open Access Journals (Sweden)

    Alaauldin Ibrahim

    2017-01-01

    Information in patients’ medical histories is subject to various security and privacy concerns. Meanwhile, any modification or error in a patient’s medical data may cause serious or even fatal harm. To protect and transfer this valuable and sensitive information in a secure manner, radio-frequency identification (RFID) technology has been widely adopted in healthcare systems and is being deployed in many hospitals. In this paper, we propose a mutual authentication protocol for RFID tags based on elliptic curve cryptography and the Advanced Encryption Standard. Unlike existing authentication protocols, which only send the tag ID securely, the proposed protocol can also send the valuable data stored in the tag in encrypted form. The proposed protocol is not simply a theoretical construct; it has been coded and tested on an experimental RFID tag. The proposed scheme achieves mutual authentication in just two steps and satisfies all the essential security requirements of RFID-based healthcare systems.
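
    A minimal sketch of the general ECC-plus-AES combination (not the paper's two-step protocol or message flow), assuming the pyca/cryptography package: an ECDH shared secret is derived into an AES-GCM key that encrypts the tag data.

    ```python
    # Minimal sketch: ECDH key agreement feeding an AES-GCM channel.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    reader_key = ec.generate_private_key(ec.SECP256R1())
    tag_key = ec.generate_private_key(ec.SECP256R1())

    # Both sides derive the same AES key from the ECDH shared secret.
    shared = tag_key.exchange(ec.ECDH(), reader_key.public_key())
    aes_key = HKDF(algorithm=hashes.SHA256(), length=16,
                   salt=None, info=b"rfid-session").derive(shared)

    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"tag data record", None)

    shared2 = reader_key.exchange(ec.ECDH(), tag_key.public_key())
    aes_key2 = HKDF(algorithm=hashes.SHA256(), length=16,
                    salt=None, info=b"rfid-session").derive(shared2)
    print(AESGCM(aes_key2).decrypt(nonce, ciphertext, None))
    ```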

  15. Development and Evaluation of a Novel Curved Biopsy Device for CT-Guided Biopsy of Lesions Unreachable Using Standard Straight Needle Trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Hagen, Maximilian Franz, E-mail: mschulze@ukaachen.de; Pfeffer, Jochen; Zimmermann, Markus; Liebl, Martin [University Hospital RWTH Aachen, Department of Diagnostic and Interventional Radiology (Germany); Stillfried, Saskia Freifrau von [University Hospital RWTH Aachen, Department of Pathology (Germany); Kuhl, Christiane; Bruners, Philipp; Isfort, Peter [University Hospital RWTH Aachen, Department of Diagnostic and Interventional Radiology (Germany)

    2017-06-15

    Purpose: To evaluate the feasibility of a novel curved CT-guided biopsy needle prototype with shape memory to access otherwise inaccessible biopsy targets. Methods and Materials: A biopsy needle curved by 90° with a specific radius was designed. It was manufactured using nitinol to acquire shape memory and encased in a straight guiding trocar to be driven out for access to otherwise inaccessible targets. Fifty CT-guided punctures were conducted in a biopsy phantom and 10 CT-guided punctures in a swine corpse. Biopsies from porcine liver and muscle tissue were separately obtained using the biopsy device, and histological examination was performed subsequently. Results: Mean time for placement of the trocar and deployment of the inner biopsy needle was ~205 ± 69 and ~93 ± 58 s, respectively, with a mean of ~4.5 ± 1.3 steps to reach an adequate biopsy position. Mean distance from the tip of the needle to the target was ~0.7 ± 0.8 mm. CT-guided punctures in the swine corpse took longer and required more biopsy steps (~574 ± 107 and ~380 ± 148 s, 8 ± 2.6 steps). Histology demonstrated appropriate tissue samples in nine out of ten cases (90%). Conclusions: Targets that were otherwise inaccessible via standard straight needle trajectories could be successfully reached with the curved biopsy needle prototype. Shape memory and the preformed shape with specific radius of the curved needle simplify target accessibility with a low risk of injuring adjacent structures.

  16. A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers

    International Nuclear Information System (INIS)

    Vijay, Samudra; DeCarolis, Joseph F.; Srivastava, Ravi K.

    2010-01-01

    This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology cost and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NOx control configurations on a large subset of the existing coal-fired, utility-owned boilers in the US. The resultant data are used to create technology-specific marginal abatement cost curves (MACCs) and also serve as input to an integer linear program, which minimizes system-wide control costs by finding the optimal distribution of NOx controls across the modeled boilers under an emission constraint. The result is a single optimized MACC that accounts for detailed, boiler-specific information related to NOx retrofits. Because the resultant MACCs do not take into account regional differences in air-quality standards or pre-existing NOx controls, the results should not be interpreted as a policy prescription. The general method as well as the NOx-specific results presented here should be of significant value to modelers and policy analysts who must estimate the costs of pollution reduction.

  17. An information preserving method for producing full coverage CoRoT light curves

    Directory of Open Access Journals (Sweden)

    Pascual-Granado J.

    2015-01-01

    Invalid flux measurements, caused mainly by the South Atlantic Anomaly crossings of the CoRoT satellite, introduce aliases in the periodogram and wrong amplitudes. It has been demonstrated that replacing such invalid data with a linear interpolation is not harmless. On the other hand, using power spectrum estimators for unevenly sampled time series is not only less computationally efficient but also leads to difficulties in the interpretation of the results. Therefore, even when the gaps are rather small and the duty cycle is high enough, the use of gap-filling methods is a gain in frequency analysis. However, the method must preserve the information contained in the time series. In this work we give a short description of an information preserving method (MIARMA) and show some results of applying it to CoRoT seismo light curves. The method is implemented as the second step of a pipeline for CoRoT data analysis.

  18. A simple method for determining the critical point of the soil water retention curve

    DEFF Research Database (Denmark)

    Chen, Chong; Hu, Kelin; Ren, Tusheng

    2017-01-01

    The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated… A fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from… the fixed tangent line method was 0.007 g g–1, which was slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was more negative initially but became less negative at clay contents above ∼30%. Increasing the silt contents resulted in more negative ψc values…

  19. A method for the measurement of dispersion curves of circumferential guided waves radiating from curved shells: experimental validation and application to a femoral neck mimicking phantom

    Science.gov (United States)

    Nauleau, Pierre; Minonzio, Jean-Gabriel; Chekroun, Mathieu; Cassereau, Didier; Laugier, Pascal; Prada, Claire; Grimal, Quentin

    2016-07-01

    Our long-term goal is to develop an ultrasonic method to characterize the thickness, stiffness and porosity of the cortical shell of the femoral neck, which could enhance hip fracture risk prediction. To this purpose, we proposed to adapt a technique based on the measurement of guided waves. We previously evidenced the feasibility of measuring circumferential guided waves in a bone-mimicking phantom of a circular cross-section of even thickness. The goal of this study is to investigate the impact of the complex geometry of the femoral neck on the measurement of guided waves. Two phantoms of an elliptical cross-section and one phantom of a realistic cross-section were investigated. A 128-element array was used to record the inter-element response matrix of these waveguides. This experiment was simulated using a custom-made hybrid code. The response matrices were analyzed using a technique based on the physics of wave propagation. This method yields portions of dispersion curves of the waveguides which were compared to reference dispersion curves. For the elliptical phantoms, three portions of dispersion curves were determined with a good agreement between experiment, simulation and theory. The method was thus validated. The characteristic dimensions of the shell were found to influence the identification of the circumferential wave signals. The method was then applied to the signals backscattered by the superior half of constant thickness of the realistic phantom. A cut-off frequency and some portions of modes were measured, with a good agreement with the theoretical curves of a plate waveguide. We also observed that the method cannot be applied directly to the signals backscattered by the lower half of varying thicknesses of the phantom. The proposed approach could then be considered to evaluate the properties of the superior part of the femoral neck, which is known to be a clinically relevant site.

  20. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    Science.gov (United States)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
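
    For reference, the classical event-based SCS-CN relation that the paper re-examines can be written in a few lines; the conventional initial abstraction Ia = 0.2S is assumed and depths are in millimeters.

    ```python
    # Minimal sketch of the classical SCS-CN runoff equation.
    def scs_cn_runoff(p_mm, cn):
        s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
        ia = 0.2 * s                    # conventional initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(75.0, cn=80))   # runoff depth (mm) for a 75 mm storm
    ```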

  1. A New Processing Method Combined with BP Neural Network for Francis Turbine Synthetic Characteristic Curve Research

    Directory of Open Access Journals (Sweden)

    Junyi Li

    2017-01-01

    A BP (backpropagation) neural network method is employed to address a problem in the current processing of hydroturbine synthetic characteristic curves: most studies are concerned only with data in the high-efficiency, large guide vane opening area, which can hardly meet the requirements of transition process research, especially in large fluctuation situations. The principle of the proposed method is to convert the nonlinear characteristics of the turbine into torque and flow characteristics, which can be used directly for real-time simulation based on the neural network. Results show that the obtained sample data can be successfully extended to cover wider working areas under different operation conditions. Another major contribution of this paper is the resampling technique proposed to overcome the limitations of sample period simulation. In addition, a detailed analysis of improvements to the iteration convergence of the pressure loop is presented, leading to better iterative convergence during the head pressure calculation. Actual applications verify that the methods proposed in this paper give better simulation results, closer to the field data, and provide a new perspective for hydroturbine synthetic characteristic curve fitting and modeling.

  2. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    The percentage depth dose (PDD) of electron beams represents an important item of data in radiation therapy, since it describes their dosimetric properties. Accurate transport theory, and the Monte Carlo method, have shown obvious differences between the dose distribution of the electron beams of a clinical accelerator in a water phantom and the dose distribution of monoenergetic electrons of the accelerator's nominal energy in water. In radiotherapy, the electron spectra should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on the way radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. Three principal approaches exist to obtain electron energy spectra from the central PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction, and an optimal alternative to the other options. (author)
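
    A minimal sketch of the inverse-reconstruction idea, assuming precomputed monoenergetic depth-dose curves: spectral weights are found by annealing so that their weighted sum matches the measured PDD. The Gaussian "monoenergetic" curves and SciPy's dual_annealing are illustrative stand-ins for Monte Carlo data and the authors' classical annealing scheme.

    ```python
    # Minimal sketch: recover spectral weights w so that M @ w matches the
    # measured depth-dose curve, using simulated annealing.
    import numpy as np
    from scipy.optimize import dual_annealing

    z = np.linspace(0.0, 6.0, 60)                      # depth (cm)
    energies = np.array([6.0, 9.0, 12.0])              # nominal energies (MeV)
    # Toy "monoenergetic" PDD curves; real ones would come from Monte Carlo.
    M = np.array([np.exp(-((z - e / 3.0) ** 2)) for e in energies]).T

    w_true = np.array([0.2, 0.5, 0.3])
    pdd_meas = M @ w_true + 0.002 * np.random.randn(z.size)

    def misfit(w):
        return float(np.sum((M @ w - pdd_meas) ** 2))

    res = dual_annealing(misfit, bounds=[(0.0, 1.0)] * 3, seed=1)
    print(res.x / res.x.sum())                         # recovered spectrum
    ```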

  3. [Determination of six main components in compound theophylline tablet by convolution curve method after prior separation by column partition chromatography]

    Science.gov (United States)

    Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)

    1993-01-01

    On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of compound theophylline tablet were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The corresponding average recovery and relative standard deviation of the six components were as follows: 101.6%, 1.46% for caffeine; 99.7%, 0.10% for phenacetin; 100.9%, 1.31% for phenobarbitone; 100.2%, 0.81% for theophylline; 99.9%, 0.81% for theobromine; and 100.8%, 0.48% for aminopyrine.

  4. Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Roesen, Hanne Ravn; Wolf, Torben K.

    2013-01-01

    In current design practice, the stiff large-diameter monopile is a widely used foundation for offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. The current design guidances apply the p-y curve method with formulations for the curves based on slender piles. However, the behaviour of stiff monopiles during lateral loading is not fully understood. In this paper, a case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for extraction of p-y curves, which are used in an evaluation of the traditional curves...

  5. Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil

    DEFF Research Database (Denmark)

    Wolf, Torben K.; Rasmussen, Kristian L.; Hansen, Mette

    In current design practice, the stiff large-diameter monopile is a widely used foundation for offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. The current design guidances apply the p-y curve method with formulations for the curves based on slender piles. However, the behaviour of stiff monopiles during lateral loading is not fully understood. In this paper, a case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for extraction of p-y curves, which are used in an evaluation of the traditional curves...

  6. Thermoluminescence glow curve analysis and CGCD method for erbium doped CaZrO{sub 3} phosphor

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, Ratnesh, E-mail: 31rati@gmail.com [Department of Physics, Bhilai Institute of Technology, Raipur, 493661 (India); Chopra, Seema [Department Physics, G.D Goenka Public School (India)

    2016-05-06

    The manuscript reports the synthesis and thermoluminescence study of CaZrO{sub 3} phosphor doped with a fixed concentration of Er{sup 3+} (1 mol%). The phosphors were prepared by a modified solid state reaction method. The powder sample was characterized by thermoluminescence (TL) glow curve analysis; in the TL glow curve, the optimized concentration was 1 mol% for the UV-irradiated sample. The kinetic parameters were calculated by the computerized glow curve deconvolution (CGCD) technique. Trapping parameters give information on dosimetric loss in the prepared phosphor and on its usability in environmental and personal monitoring. CGCD is an advanced tool for the analysis of complicated TL glow curves.
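
    A minimal sketch of a single first-order (Randall–Wilkins) glow peak, the kind of component a CGCD fit sums and adjusts by least squares; the trap parameters and heating rate below are illustrative.

    ```python
    # Minimal sketch: evaluate one first-order glow peak I(T) by numerically
    # integrating the Boltzmann factor over the heating ramp.
    import numpy as np

    k_B = 8.617e-5                    # Boltzmann constant (eV/K)

    def first_order_peak(T, E, s, beta, n0=1.0):
        boltz = np.exp(-E / (k_B * T))
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (boltz[1:] + boltz[:-1]) * np.diff(T))))  # trapezoid rule
        return n0 * s * boltz * np.exp(-(s / beta) * integral)

    T = np.linspace(300.0, 600.0, 600)                 # temperature (K)
    I = first_order_peak(T, E=1.0, s=1e12, beta=1.0)   # eV, 1/s, K/s
    print(T[I.argmax()])                               # peak temperature
    ```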

  7. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    Science.gov (United States)

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
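
    A minimal sketch of the underlying brute-force idea in Python rather than Excel: fold the fragmented light curve at trial periods and keep the period that minimizes the phase-binned dispersion; the synthetic data and binning choices are illustrative.

    ```python
    # Minimal sketch: phase-dispersion period search on a gappy light curve.
    import numpy as np

    def phase_dispersion(t, mag, period, n_bins=10):
        phase = (t / period) % 1.0
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        total = 0.0
        for b in range(n_bins):
            sel = bins == b
            if sel.sum() > 1:
                total += mag[sel].var() * sel.sum()
        return total / len(t)

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 30.0, 150))           # fragmented sampling
    mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 5.3) \
          + 0.02 * rng.standard_normal(150)

    periods = np.linspace(2.0, 10.0, 4000)
    scores = [phase_dispersion(t, mag, p) for p in periods]
    print(periods[int(np.argmin(scores))])             # best period, ~5.3
    ```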

  8. Estimating Aquifer Transmissivity Using the Recession-Curve-Displacement Method in Tanzania’s Kilombero Valley

    Directory of Open Access Journals (Sweden)

    William Senkondo

    2017-12-01

    Information on aquifer processes and characteristics across scales has long been a cornerstone of understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint) of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T) from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min) and for one of the two sub-basins tested (0.16 m2/min). On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.

  9. Standardization of methods of maxillofacial roentgenology

    International Nuclear Information System (INIS)

    Rabukhina, N.A.; Arzhantsev, A.P.; Chikirdin, Eh.G.; Tombak, M.I.; Stavitskij, R.V.; Vasil'ev, Yu.D.

    1989-01-01

    Typical errors in dental roentgenography, reproduced in experiment, indicate that considerable disproportional distortions of the images of anatomical structures that are decisive for radiodiagnosis may occur in such cases. Standardization of intraoral roentgenography is based on a strict position of the patient's head and on the angle of inclination and alignment of the tube. Specialized R3-1 film should be used.

  10. SiFTO: An Empirical Method for Fitting SN Ia Light Curves

    Science.gov (United States)

    Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.

    2008-07-01

    We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.

  11. Modelling the Influence of Ground Surface Relief on Electric Sounding Curves Using the Integral Equations Method

    Directory of Open Access Journals (Sweden)

    Balgaisha Mukanova

    2017-01-01

    The problem of electrical sounding of a medium with ground surface relief is modelled using the integral equations method. This numerical method is based on a triangulation of the computational domain that is adapted to the shape of the relief and the measuring line. The numerical algorithm is tested by comparing the results with the known solution for horizontally layered media with two layers. Calculations are also performed to verify the fulfilment of the “reciprocity principle” for 4-electrode installations in our numerical model. Simulations are then performed for a two-layered medium with surface relief. The quantitative influences of the relief, the resistivity ratios of the contacting media, and the depth of the second layer on the apparent resistivity curves are established.

  12. Efficient method for finding square roots for elliptic curves over OEF

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2009-01-01

    Elliptic curve cryptosystems, like other public-key encryption schemes, require computing square roots modulo a prime number. The arithmetic operations in elliptic curve schemes over Optimal Extension Fields (OEF) can be efficiently computed...
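
    For illustration, the generic Tonelli–Shanks routine for square roots modulo an odd prime, the prime-field operation the record refers to; it is a textbook algorithm, not the OEF-specific method of the paper.

    ```python
    # Minimal sketch: Tonelli–Shanks square root modulo an odd prime p.
    def mod_sqrt(n, p):
        n %= p
        assert pow(n, (p - 1) // 2, p) == 1, "n must be a quadratic residue mod p"
        if p % 4 == 3:                       # easy case
            return pow(n, (p + 1) // 4, p)
        q, s = p - 1, 0                      # write p - 1 = q * 2**s, q odd
        while q % 2 == 0:
            q //= 2
            s += 1
        z = 2                                # find a quadratic non-residue z
        while pow(z, (p - 1) // 2, p) != p - 1:
            z += 1
        m, c = s, pow(z, q, p)
        t, r = pow(n, q, p), pow(n, (q + 1) // 2, p)
        while t != 1:
            i, t2 = 0, t                     # least i with t**(2**i) == 1
            while t2 != 1:
                t2 = t2 * t2 % p
                i += 1
            b = pow(c, 1 << (m - i - 1), p)
            m, c = i, b * b % p
            t, r = t * c % p, r * b % p
        return r

    print(mod_sqrt(10, 13))                  # 7, since 7*7 = 49 = 10 (mod 13)
    ```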

  13. Antibody reaction methods in safety standards

    International Nuclear Information System (INIS)

    Shubik, V.M.; Sirasdinov, V.G.; Zasedatelev, A.A.; Kal'nitskij, S.A.; Livshits, R.E.

    1978-01-01

    Results are presented of determinations of autoantibodies in white rats chronically administered the radionuclides {sup 137}Cs, {sup 226}Ra, and {sup 90}Sr, which show different distribution patterns in the body. Autoantibody production is found to increase when the absorbed doses are close to, or exceed seven- to tenfold, the maximum permissible values. The results obtained point to the desirability of autoantibody determination in studies aimed at setting hygienic standards for the absorption of radioactive substances.

  14. Analysis and Extension of the Percentile Method, Estimating a Noise Curve from a Single Image

    Directory of Open Access Journals (Sweden)

    Miguel Colom

    2013-12-01

    Given a white Gaussian noise signal on a sampling grid, its variance can be estimated from a small block sample. However, in natural images we observe the combination of the geometry of the scene being photographed and the added noise. In this case, estimating the standard deviation of the noise directly from block samples is not reliable, since the measured standard deviation is not explained just by the noise but also by the geometry of the image. The Percentile method tries to estimate the standard deviation of the noise from blocks of a high-passed version of the image and a small p-percentile of these standard deviations. The idea behind this is that edges and textures in a block of the image increase the observed standard deviation but never make it decrease. Therefore, a small percentile (0.5%, for example) in the list of standard deviations of the blocks is less likely to be affected by edges and textures than a higher percentile (50%, for example). The 0.5%-percentile is empirically proven to be adequate for most natural, medical and microscopy images. The Percentile method is adapted to signal-dependent noise, which is realistic with the Poisson noise model obtained by a CCD device in a digital camera.
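
    A minimal sketch of the Percentile idea: high-pass the image, collect block standard deviations, and take a low percentile so that blocks contaminated by edges or texture are ignored; the high-pass filter and parameters here are illustrative.

    ```python
    # Minimal sketch: noise std from a low percentile of block stds.
    import numpy as np

    def percentile_noise_std(img, block=8, p=0.5):
        # Simple high-pass: subtract the average of the four nearest neighbors.
        hp = img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        h, w = hp.shape
        stds = [hp[i:i + block, j:j + block].std()
                for i in range(0, h - block, block)
                for j in range(0, w - block, block)]
        # Var(hp) = (1 + 4/16) * sigma^2 for white noise; undo the filter gain.
        return np.percentile(stds, p) / np.sqrt(1.25)

    rng = np.random.default_rng(0)
    clean = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
    noisy = clean + rng.normal(0.0, 0.05, clean.shape)
    print(percentile_noise_std(noisy))       # should be close to 0.05
    ```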

  15. DEVELOPING A METHOD TO IDENTIFY HORIZONTAL CURVE SEGMENTS WITH HIGH CRASH OCCURRENCES USING THE HAF ALGORITHM

    Science.gov (United States)

    2018-04-01

    Crashes occur every day on Utah's highways. Curves can be particularly dangerous, as they require driver focus due to potentially unseen hazards. Often, crashes occur on curves due to poor curve geometry, a lack of warning signs, or poor surface con...

  16. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    Cardoso, Vanderlei

    2002-01-01

    The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)

  17. A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model

    Directory of Open Access Journals (Sweden)

    Yulei Song

    2016-10-01

    The prediction of evacuation demand curves is a crucial step in disaster evacuation planning, which directly affects the performance of the evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and summarize them into four kinds: individual characteristics, social influence, geographic location, and warning degree. In view of the social contagion of decision making, a method based on the Susceptible-Infective (SI) model is proposed to formulate disaster evacuation demand curves that address both social influence and the other factors' effects. The disaster event of the "Tianjin Explosions" is used as a case study to illustrate the modeling results influenced by the four factors and to perform sensitivity analyses of the key parameters of the model. Some interesting phenomena are found and discussed, which is meaningful for authorities making specific evacuation plans. For example, due to the lower social influence in isolated communities, extra actions might be taken to accelerate the evacuation process in those communities.
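
    A minimal sketch of the SI-type demand curve, assuming evacuation decisions spread logistically with a social-influence rate β (the other factors would modulate β and the initial fraction); all parameter values are illustrative.

    ```python
    # Minimal sketch: cumulative evacuee share from dI/dt = beta*I*(1 - I),
    # whose closed-form solution is the logistic curve.
    import numpy as np

    def evacuated_fraction(t, beta, i0=0.01):
        return 1.0 / (1.0 + (1.0 / i0 - 1.0) * np.exp(-beta * t))

    t = np.linspace(0.0, 48.0, 49)                       # hours after warning
    demand = np.diff(evacuated_fraction(t, beta=0.25))   # hourly departures
    print(demand.argmax())                 # hour of peak evacuation demand
    ```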

  18. Miscellaneous standard methods for Apis mellifera research

    DEFF Research Database (Denmark)

    Human, Hannelie; Brodschneider, Robert; Dietemann, Vincent

    2013-01-01

    A variety of methods are used in honey bee research and differ depending on the level at which the research is conducted. On an individual level, the handling of individual honey bees, including the queen, larvae and pupae, is required. There are different methods for immobilising, killing an...

  19. [Determination of the daily changes curve of nitrogen oxides in the atmosphere by digital imaging colorimetry method].

    Science.gov (United States)

    Yang, Chuan-Xiao; Sun, Xiang-Ying; Liu, Bin

    2009-06-01

    From the digital images of the red complex which resulted from the interaction of nitrite with N-(1-naphthyl)ethylenediamine dihydrochloride and p-aminobenzene sulfonic acid, it could be seen that the solution color obviously deepened with increasing concentration of nitrite ion. The JPEG format of the digital images was transformed into gray-scale format by Origin 7.0 software, and the gray values were measured with Scion Image software. The gray values of the digital image also obviously increased with increasing concentration of nitrite ion. Thus a novel digital imaging colorimetric (DIC) method to determine nitrogen oxides (NO(x)) contents in air was developed. Based on the red, green and blue (RGB) tricolor theory, the principle of the digital imaging colorimetric method and the factors influencing digital imaging were discussed. The present method was successfully applied to the determination of the daily changes curve of nitrogen oxides in the atmosphere and of NO2- in synthetic samples, with recoveries of 97.3%-104.0% and a relative standard deviation (RSD) of less than 5.0%. The results of the determination were consistent with those obtained by the spectrophotometric method.

  20. Standardized Methods for Detection of Poliovirus Antibodies.

    Science.gov (United States)

    Weldon, William C; Oberste, M Steven; Pallansch, Mark A

    2016-01-01

    Testing for neutralizing antibodies against polioviruses has been an established gold standard for assessing individual protection from disease, population immunity, vaccine efficacy studies, and other vaccine clinical trials. Detecting poliovirus specific IgM and IgA in sera and mucosal specimens has been proposed for evaluating the status of population mucosal immunity. More recently, there has been a renewed interest in using dried blood spot cards as a medium for sample collection to enhance surveillance of poliovirus immunity. Here, we describe the modified poliovirus microneutralization assay, poliovirus capture IgM and IgA ELISA assays, and dried blood spot polio serology procedures for the detection of antibodies against poliovirus serotypes 1, 2, and 3.

  1. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    International Nuclear Information System (INIS)

    Winkler, A W; Zagar, B G

    2013-01-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives. (paper)

  2. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    Science.gov (United States)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  3. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented as an Excel Visual Basic for Applications (VBA) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continue the development of methods and algorithms for the generation of MRCs, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R 2 , while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRCs using the trigonometry approach is implemented in a spreadsheet tool (MRCTools v3.0, written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
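
    A minimal Python re-sketch of the horizontal-translation idea (the paper's implementation is Excel VBA): each succeeding recession segment is shifted in time so that its vertex lands on the curve defined by the preceding data.

    ```python
    # Minimal sketch: build a master recession curve by horizontal translation.
    import numpy as np

    def master_recession_curve(segments):
        t_m, q_m = list(segments[0][0]), list(segments[0][1])
        for t, q in segments[1:]:
            # Interpolate the time at which the master reaches this segment's
            # vertex (its highest value), then translate the segment there.
            order = np.argsort(q_m)
            t_hit = np.interp(q[0], np.asarray(q_m)[order],
                              np.asarray(t_m)[order])
            shift = t_hit - t[0]
            t_m.extend(t + shift)
            q_m.extend(q)
        return np.array(t_m), np.array(q_m)

    seg1 = (np.arange(10.0), 100.0 * np.exp(-0.1 * np.arange(10.0)))
    seg2 = (np.arange(8.0), 70.0 * np.exp(-0.1 * np.arange(8.0)))
    t_mrc, q_mrc = master_recession_curve([seg1, seg2])
    print(t_mrc.min(), t_mrc.max(), q_mrc.max())
    ```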

  4. Improving runoff risk estimates: Formulating runoff as a bivariate process using the SCS curve number method

    Science.gov (United States)

    Shaw, Stephen B.; Walter, M. Todd

    2009-03-01

    The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.

  5. Folding to Curved Surfaces: A Generalized Design Method and Mechanics of Origami-based Cylindrical Structures

    Science.gov (United States)

    Wang, Fei; Gong, Haoran; Chen, Xi; Chen, C. Q.

    2016-09-01

    Origami structures enrich the field of mechanical metamaterials with the ability to convert morphologically and systematically between two-dimensional (2D) thin sheets and three-dimensional (3D) spatial structures. In this study, an in-plane design method is proposed to approximate curved surfaces of interest with generalized Miura-ori units. Using this method, two combination types of crease lines are unified in one reprogrammable procedure, generating multiple types of cylindrical structures. Structural completeness conditions of the finite-thickness counterparts to the two types are also proposed. As an example of the design method, the kinematics and elastic properties of an origami-based circular cylindrical shell are analysed. The concept of Poisson’s ratio is extended to the cylindrical structures, demonstrating their auxetic property. An analytical model of rigid plates linked by elastic hinges, consistent with numerical simulations, is employed to describe the mechanical response of the structures. Under particular load patterns, the circular shells display novel mechanical behaviour such as snap-through and limiting folding positions. By analysing the geometry and mechanics of the origami structures, we extend the design space of mechanical metamaterials and provide a basis for their practical applications in science and engineering.

  6. Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method.

    Science.gov (United States)

    Bruno, Thomas J; Allen, Samuel

    2013-01-01

    One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method on product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC.

  7. Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method

    Science.gov (United States)

    Bruno, Thomas J; Allen, Samuel

    2013-01-01

    One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method on product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC. PMID:26401423

  8. Effects of different premature chromosome condensation methods on the dose-effect curve of 60Co γ-rays

    International Nuclear Information System (INIS)

    Guo Yicao; Yang Haoxian; Yang Yuhua; Li Xi'na; Huang Weixu; Zheng Qiaoling

    2012-01-01

    Objective: To study the effect of the traditional and improved premature chromosome condensation (PCC) methods on the dose-effect curve of 60Co γ-rays, in order to choose a rapid and accurate biological dose estimation method for accident emergencies. Methods: Cubital venous blood was collected from 3 healthy males (aged 23 to 28 years) and irradiated with 0, 1.0, 5.0, 10.0, 15.0 and 20.0 Gy of 60Co γ-rays (absorbed dose rate: 0.635 Gy/min). The dose-effect relationships were observed for two incubation times (50 hours and 60 hours) with both the traditional and the improved method. The dose-effect curves were then used to verify an exposure of 10.0 Gy (absorbed dose rate: 0.670 Gy/min). Results: (1) With the traditional method and 50-hour culture, the difference in PCC cell counts between 15.0 Gy and 20.0 Gy was not statistically significant, whereas the differences were significant for the traditional method with 60-hour culture and for the improved method (50-hour and 60-hour culture); the latter 3 culture methods were used to construct dose curves. (2) For these 3 culture methods, the correlation coefficients between PCC rings and exposure dose were very close (all above 0.996, P < 0.05), and the regression lines almost overlapped. (3) When the 3 dose-effect curves were used to estimate the verification irradiation (10.0 Gy), the error was less than or equal to 8%, within the allowable range for biological experiments (15%). Conclusion: The dose-effect curves of the 3 culture methods can be applied to biological dose estimation for high-dose ionizing radiation injury. The improved method with 50-hour culture gives the fastest estimate and should be regarded as the first choice in accident emergencies. (authors)
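    As an illustration of the curve-fitting step such studies rely on, the sketch below fits a straight-line dose-effect curve to PCC-ring yields and inverts it to estimate an unknown dose. It is a minimal Python sketch with invented calibration numbers, not the authors' procedure.

```python
# Hypothetical dose estimation from a PCC-ring calibration curve.
import numpy as np

dose_gy = np.array([1.0, 5.0, 10.0, 15.0, 20.0])           # calibration doses (Gy)
rings_per_cell = np.array([0.05, 0.31, 0.64, 0.98, 1.30])  # assumed PCC-ring yields

# Least-squares straight line: yield = a * dose + b
a, b = np.polyfit(dose_gy, rings_per_cell, 1)

def estimate_dose(observed_yield):
    """Invert the calibration line to estimate absorbed dose (Gy)."""
    return (observed_yield - b) / a

print(f"estimated dose: {estimate_dose(0.66):.1f} Gy")  # ~10 Gy for this made-up yield
```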

  9. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    Science.gov (United States)

    Miranda Guedes, Rui

    2018-02-01

    Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves with short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. Higher reduction of the required number of test specimens to obtain the master curve is achieved by the SSM technique, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per each stress level to produce a set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
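    A minimal sketch of the horizontal-shifting idea behind TSSP/SSM master curves, assuming a toy power-law creep model and a grid search for the shift factor (this is not the analytical SSM data-processing method proposed in the paper):

```python
# Superpose an accelerated (higher-stress) creep curve onto a reference curve
# by searching the horizontal shift factor that minimizes the mismatch.
import numpy as np

def creep(t, stress, J0=1.0, k=0.02, n=0.25):
    # Toy viscoelastic response; higher stress accelerates creep.
    return J0 + k * (t * stress) ** n

t = np.logspace(0, 3, 200)   # short-term test window (s)
ref = creep(t, 1.0)          # reference stress level
acc = creep(t, 4.0)          # accelerated test at higher stress

best_a, best_err = None, np.inf
for a in np.logspace(0, 2, 400):          # candidate shift factors
    mask = t * a <= t[-1]                 # overlap region after shifting
    if mask.sum() < 10:
        continue
    pred = np.interp(t[mask] * a, t, ref)
    err = np.mean((acc[mask] - pred) ** 2)
    if err < best_err:
        best_a, best_err = a, err

print(f"shift factor a_sigma ~ {best_a:.2f}")   # ~4 for this toy model
# Re-plotting the accelerated curve against t * a_sigma extends the master curve.
```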

  10. An alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film

    International Nuclear Information System (INIS)

    Awad, M.M.

    2014-01-01

    The S-shaped curve was observed by Yilbas and Bin Mansoor (2013). In this study, an alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film is presented, based on the analytical prediction method introduced by Bejan and Lorente in 2011 and 2012. The Bejan and Lorente method is based on a two-mechanism flow of fast “invasion” by convection and slow “consolidation” by diffusion.

  11. Providing the physical basis of SCS curve number method and its proportionality relationship from Richards' equation

    Science.gov (United States)

    Hooshyar, M.; Wang, D.

    2016-12-01

    The empirical proportionality relationship, which indicates that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration-excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse-textured soils.
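    For context, the proportionality relationship grounds the familiar SCS-CN runoff equation, Q = (P - Ia)^2 / (P - Ia + S) with Ia = λS; a small sketch in its standard textbook form (units in mm), not code from the paper:

```python
# Standard SCS-CN rainfall partitioning.
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Cumulative surface runoff (mm) for cumulative rainfall p_mm."""
    s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
    ia = lam * s                    # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(60.0, cn=80))   # ~20 mm of runoff from 60 mm of rain on CN = 80
```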

  12. A Method of Timbre-Shape Synthesis Based On Summation of Spherical Curves

    DEFF Research Database (Denmark)

    Putnam, Lance Jonathan

    2014-01-01

    It is well-known that there is a rich correspondence between sound and visual curves, perhaps most widely explored through direct input of sound into an oscilloscope. However, there have been relatively few proposals on how to translate sound into three-dimensional curves. We present a novel meth...

  13. Applicability of the θ projection method to creep curves of Ni-22Cr-18Fe-9Mo alloy

    International Nuclear Information System (INIS)

    Kurata, Yuji; Utsumi, Hirokazu

    1998-01-01

    Applicability of the θ projection method has been examined for constant-load creep test results at 800 and 1000 °C on Ni-22Cr-18Fe-9Mo alloy in the solution-treated and aged conditions. The results obtained are as follows: (1) Normal type creep curves obtained at 1000 °C for aged Ni-22Cr-18Fe-9Mo alloy are fitted using the θ projection method with four θ parameters. Stress dependence of θ parameters can be expressed in terms of simple equations. (2) The θ projection method with four θ parameters cannot be applied to the remaining creep curves where most of the life is occupied by a tertiary creep stage. Therefore, the θ projection method consisting of only the tertiary creep component with two θ parameters was applied. The creep curves can be fitted using this method. (3) If the θ projection method with four θ or two θ parameters is applied to creep curves in accordance with creep curve shapes, creep rupture time can be predicted in terms of formulation of stress and/or temperature dependence of θ parameters. (author)

  14. Development of standard testing methods for nuclear-waste forms

    International Nuclear Information System (INIS)

    Mendel, J.E.; Nelson, R.D.

    1981-11-01

    Standard test methods for waste package component development and design, safety analyses, and licensing are being developed for the Nuclear Waste Materials Handbook. This paper mainly describes the testing methods for obtaining waste form materials data.

  15. Standard methods for sampling North American freshwater fishes

    Science.gov (United States)

    Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

    This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.

  16. Comparison of catchment grouping methods for flow duration curve estimation at ungauged sites in France

    Directory of Open Access Journals (Sweden)

    E. Sauquet

    2011-08-01

    The study aims at estimating flow duration curves (FDC) at ungauged sites in France and quantifying the associated uncertainties using a large dataset of 1080 FDCs. The interpolation procedure focuses here on 15 percentiles standardised by the mean annual flow, which is assumed to be known at each site. In particular, this paper discusses the impact of different catchment grouping procedures on the estimation of percentiles by regional regression models.

    In a first step, five parsimonious FDC parametric models are tested to approximate FDCs at gauged sites. The results show that the model based on the expansion of Empirical Orthogonal Functions (EOF) outperforms the other tested models. In the EOF model, each FDC is interpreted as a linear combination of regional amplitude functions with spatially variable weighting factors corresponding to the parameters of the model. In this approach, only one amplitude function is required to obtain a satisfactory fit with most of the observed curves. Thus, the considered model requires only two parameters to be applicable at ungauged locations.

    Secondly, homogeneous regions are derived according to hydrological response on the one hand, and geological, climatic and topographic characteristics on the other hand. Hydrological similarity is assessed through two simple indicators: the concavity index (IC), representing the shape of the dimensionless FDC, and the seasonality ratio (SR), which is the ratio of summer and winter median flows. These variables are used as homogeneity criteria in three different methods for grouping catchments: (i) according to an a priori classification of French Hydro-EcoRegions (HERs), (ii) by applying regression tree clustering and (iii) by using neighbourhoods obtained by canonical correlation analysis.

    Finally, considering all the data, and subsequently for each group obtained through the tested grouping techniques, we derive regression models between
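    The percentile-based description above starts from empirical flow duration curves; a minimal sketch of computing FDC percentiles standardised by the mean annual flow, with simulated daily discharges standing in for gauged data:

```python
# Empirical flow duration curve standardised by the mean flow.
import numpy as np

rng = np.random.default_rng(4)
q = rng.lognormal(mean=1.0, sigma=0.8, size=3650)   # 10 years of daily flow (simulated)
q_std = q / q.mean()                                # standardise by mean annual flow

exceedance = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95])
percentiles = np.quantile(q_std, 1.0 - exceedance)  # flow exceeded p% of the time

for p, v in zip(exceedance, percentiles):
    print(f"Q{int(p * 100):02d} = {v:.2f} x mean flow")
```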

  17. Recrystallization curve study of zircaloy-4 with DRX line width method

    International Nuclear Information System (INIS)

    Juarez, G; Buioli, C; Samper, R; Vizcaino, P

    2012-01-01

    X-ray diffraction peak broadening analysis is a method that allows the characterization of plastic deformation in metals. The technique complements transmission electron microscopy (TEM) in determining dislocation densities, so together the two techniques may cover a wide range in the analysis of metal deformation. The study of zirconium alloys is of continuing interest in the nuclear industry, since these materials present the best combination of good mechanical properties, corrosion behaviour and low neutron cross section. Two factors must be taken into account in applying the method developed for this purpose: the characteristic anisotropy of hexagonal metals and the strong texture that these alloys acquire during the manufacturing process. In order to assess the recrystallization curve of Zircaloy-4, a powder of this alloy was produced by filing. Fractions of the powder were then subjected to thermal treatments at different temperatures for the same time. Since the powder has a random crystallographic orientation, the texture effect practically disappears; this is why the Williamson-Hall method may be easily used, producing good fits and confident values of the diffraction domain size and the accumulated deformation. The temperatures selected for the thermal treatments were 1000, 700, 600, 500, 420, 300 and 200 °C for 2 h. As a result of these annealings, powders in different recovery stages were obtained (completely recrystallized, partially recrystallized and non-recrystallized structures with different levels of stress relief). The obtained values were also compared with those of the non-annealed powder. The microstructural evolution through the annealings was followed by optical microscopy. (author)
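    A hedged sketch of the Williamson-Hall step mentioned above: fit β·cosθ against 4·sinθ to separate diffraction-domain size from microstrain. The wavelength is Cu Kα; peak positions and breadths below are invented for illustration, not Zircaloy-4 data.

```python
# Williamson-Hall analysis: beta*cos(theta) = K*lambda/D + 4*strain*sin(theta).
import numpy as np

lam = 0.15406   # X-ray wavelength, Cu K-alpha (nm)
K = 0.9         # Scherrer constant (assumed)
two_theta = np.radians([32.0, 34.8, 36.5, 48.0, 57.0])  # assumed peak positions
beta = np.radians([0.25, 0.26, 0.27, 0.33, 0.40])       # assumed breadths (deg -> rad)

theta = two_theta / 2.0
x = 4.0 * np.sin(theta)
y = beta * np.cos(theta)

slope, intercept = np.polyfit(x, y, 1)
domain_size = K * lam / intercept   # diffraction-domain size (nm)
microstrain = slope                 # accumulated deformation (dimensionless)
print(f"domain size ~ {domain_size:.0f} nm, microstrain ~ {microstrain:.4f}")
```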

  18. Application of the Advanced Distillation Curve Method to Fuels for Advanced Combustion Engine Gasolines

    KAUST Repository

    Burger, Jessica L.

    2015-07-16

    Incremental but fundamental changes are currently being made to fuel composition and combustion strategies to diversify energy feedstocks, decrease pollution, and increase engine efficiency. The increase in parameter space (by having many variables in play simultaneously) makes it difficult at best to propose strategic changes to engine and fuel design by use of conventional build-and-test methodology. To make changes in the most time- and cost-effective manner, it is imperative that new computational tools and surrogate fuels are developed. Currently, sets of fuels are being characterized by industry groups, such as the Coordinating Research Council (CRC) and other entities, so that researchers in different laboratories have access to fuels with consistent properties. In this work, six gasolines (FACE A, C, F, G, I, and J) are characterized by the advanced distillation curve (ADC) method to determine the composition and enthalpy of combustion in various distillate volume fractions. Tracking the composition and enthalpy of distillate fractions provides valuable information for determining structure property relationships, and moreover, it provides the basis for the development of equations of state that can describe the thermodynamic properties of these complex mixtures and lead to development of surrogate fuels composed of major hydrocarbon classes found in target fuels.

  19. An external standard method for quantification of human cytomegalovirus by PCR

    International Nuclear Information System (INIS)

    Rongsen, Shen; Liren, Ma; Fengqi, Zhou; Qingliang, Luo

    1997-01-01

    An external standard method for PCR quantification of HCMV was reported. [α-32P]dATP was used as a tracer. The 32P-labelled specific amplification product was separated by agarose gel electrophoresis. A gel piece containing the specific product band was excised and counted in a plastic scintillation counter. The distribution of [α-32P]dATP in the electrophoretic gel plate and the effect of separation between the 32P-labelled specific product and free [α-32P]dATP were observed. A standard curve for quantification of HCMV by PCR was established, and detection results for quality-control templates were presented. The external standard method and the electrophoresis separation effect were appraised. The results showed that the method could be used for relative quantification of HCMV. (author)
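    A minimal sketch of external-standard quantification of this kind: build a standard curve from counts measured for known template amounts, then interpolate an unknown sample. All numbers are hypothetical, not the authors' data.

```python
# Standard-curve interpolation in log-log space.
import numpy as np

copies = np.array([1e2, 1e3, 1e4, 1e5])                 # known template copies
counts = np.array([410.0, 3550.0, 30100.0, 261000.0])   # assumed tracer counts (cpm)

# Linear fit: log10(counts) = m * log10(copies) + c
m, c = np.polyfit(np.log10(copies), np.log10(counts), 1)

def quantify(sample_counts):
    """Estimate template copies from measured counts via the standard curve."""
    return 10 ** ((np.log10(sample_counts) - c) / m)

print(f"{quantify(12000.0):.0f} copies")   # unknown sample
```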

  20. Radioligand assays - methods and applications. IV. Uniform regression of hyperbolic and linear radioimmunoassay calibration curves

    Energy Technology Data Exchange (ETDEWEB)

    Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)

    1980-10-01

    In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described, providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is an excellent agreement between fitted and experimentally obtained calibration curves having a different degree of curvature.
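    Because the function is linear in its coefficients, it can be fitted by ordinary linear least squares in the basis (y, 1, 1/y); a short sketch with invented calibration points:

```python
# Fit x = a1*y + a0 + a(-1)/y by linear least squares.
import numpy as np

y = np.array([8200.0, 6900.0, 5100.0, 3400.0, 2100.0, 1200.0])  # bound activity (cpm)
x = np.array([0.5, 1.0, 2.5, 6.0, 14.0, 30.0])                  # antigen conc. (ng/mL)

A = np.column_stack([y, np.ones_like(y), 1.0 / y])
(a1, a0, am1), *_ = np.linalg.lstsq(A, x, rcond=None)

def concentration(y_obs):
    """Read antigen concentration off the fitted inverse calibration curve."""
    return a1 * y_obs + a0 + am1 / y_obs

print(f"{concentration(4000.0):.2f} ng/mL")
```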

  1. Assessment of p-y curves from numerical methods for a non-slender monopile in cohesionless soil

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, L. B.; Ravn Roesen, H. [Aalborg Univ. Dept. of Civil Engineering, Aalborg (Denmark); Hansen, Mette; Kirk Wolf, T. [COWI, Kgs. Lyngby (Denmark); Lange Rasmussen, K. [Niras, Aalborg (Denmark)

    2013-06-15

    In current design practice the monopile is a widely used foundation for offshore wind turbines. Wind and waves subject the monopile to considerable lateral loads. The behaviour of monopiles under lateral loading is not fully understood, and the current design guidance applies the p-y curve method in a Winkler model approach. The p-y curve method was originally developed for the piles used in the oil and gas industry, which are much more slender than monopile foundations. In recent years 3D finite element analysis (FEA) has become a tool for investigating complex geotechnical situations, such as the laterally loaded monopile. In this paper a 3D FEA is conducted as the basis for extracting p-y curves and evaluating the traditional curves. Two different methods are applied to create the data points used for the p-y curves. First, a force producing a response similar to that seen in the ULS situation is applied stepwise, creating the most realistic soil response; this method, however, does not generate sufficient data points around the rotation point of the pile. Therefore, a forced horizontal displacement of the entire pile is also applied, whereby displacements are created over the entire length of the pile. The response is extracted from the interface and from the nearby soil elements, respectively, to investigate the influence this has on the computed curves. p-y curves are obtained near the rotation point by evaluating the soil response during a prescribed displacement, but this response is not in clear agreement with the response during an applied load. Two different material models are applied, and it is found that the material model has a significant influence on the stiffness of the evaluated p-y curves. The p-y curves evaluated by means of FEA are compared to the conventional p-y curve formulation, which provides a much stiffer response. It is found that the best response is computed by implementing the Hardening Soil model and

  2. S-curve networks and an approximate method for estimating degree distributions of complex networks

    OpenAIRE

    Guo, Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics from China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. The results have some reference value for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based o...

  3. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    Science.gov (United States)

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.

  4. High resolution melting curve analysis, a rapid and affordable method for mutation analysis in childhood acute myeloid leukemia

    Directory of Open Access Journals (Sweden)

    Yin eLiu

    2014-09-01

    Background: Molecular genetic alterations with prognostic significance have been described in childhood acute myeloid leukemia (AML). The aim of this study was to establish cost-effective techniques to detect mutations of FMS-like tyrosine kinase 3 (FLT3), Nucleophosmin 1 (NPM1), and a partial tandem duplication within the mixed lineage leukemia (MLL-PTD) genes in childhood AML. Procedure: Ninety-nine children with newly diagnosed AML were included in this study. We developed a fluorescent dye SYTO-82 based high resolution melting curve (HRM) analysis to detect FLT3 internal tandem duplication (FLT3-ITD), FLT3 tyrosine kinase domain (FLT3-TKD) and NPM1 mutations. MLL-PTD was screened by real-time quantitative PCR. Results: The HRM methodology correlated well with gold-standard Sanger sequencing at lower cost. Among the 99 patients studied, the FLT3-ITD mutation was associated with significantly worse event-free survival (EFS). Patients with the NPM1 mutation had significantly better EFS and overall survival. However, HRM was not sensitive enough for minimal residual disease monitoring. Conclusions: HRM was a rapid and efficient method for screening of FLT3 and NPM1 gene mutations. It was both affordable and accurate, especially in resource-underprivileged regions. Our results indicated that HRM could be a useful clinical tool for rapid and cost-effective screening of the FLT3 and NPM1 mutations in AML patients.
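    A minimal sketch of the melting-curve step that underlies HRM analysis: smooth a fluorescence-versus-temperature trace and locate the melting temperature as the peak of -dF/dT. The sigmoid trace below is simulated, not study data.

```python
# Derivative melting-curve analysis on a simulated fluorescence trace.
import numpy as np

temps = np.linspace(75.0, 95.0, 201)                     # temperature ramp (deg C)
tm_true = 84.3
fluor = 1.0 / (1.0 + np.exp((temps - tm_true) / 0.5))    # simulated melt trace
fluor += np.random.default_rng(0).normal(0, 0.002, temps.size)

# Simple moving-average smoothing before differentiation.
kernel = np.ones(7) / 7.0
smooth = np.convolve(fluor, kernel, mode="same")

neg_dfdt = -np.gradient(smooth, temps)                   # derivative melt curve
tm_est = temps[np.argmax(neg_dfdt[5:-5]) + 5]            # skip convolution edges
print(f"estimated Tm ~ {tm_est:.1f} deg C")
```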

  5. Standard test methods for rockwell hardness of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...

  6. Standard test methods for rockwell hardness of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...

  7. Standard methods for sampling freshwater fishes: opportunities for international collaboration

    OpenAIRE

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, T. Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D.S.; Lester, Nigel P.; Porath, Mark; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by...

  8. Unconditional and Conditional Standards Using Cognitive Function Curves for the Modified Mini-Mental State Exam: Cross-Sectional and Longitudinal Analyses in Older Chinese Adults in Singapore.

    Science.gov (United States)

    Cheung, Yin Bun; Xu, Ying; Feng, Lei; Feng, Liang; Nyunt, Ma Shwe Zin; Chong, Mei Sian; Lim, Wee Shiong; Lee, Tih Shih; Yap, Philip; Yap, Keng Bee; Ng, Tze Pin

    2015-09-01

    The conventional practice of assessing cognitive status and monitoring change over time in older adults using normative values of the Mini-Mental State Exam (MMSE) based on age bands is imprecise. Moreover, population-based normative data on changes in MMSE score over time are scarce and crude because they do not include age- and education-specific norms. This study aims to develop unconditional standards for assessing current cognitive status and conditional standards that take prior MMSE score into account for assessing longitudinal change, with percentile curves as smooth functions of age. Cross-sectional and longitudinal data of a modified version of the MMSE for 2,026 older Chinese adults from the Singapore Longitudinal Aging Study, aged 55-84, in Singapore were used to estimate quantile regression coefficients and create unconditional standards and conditional standards. We presented MMSE percentile curves as a smooth function of age in education strata, for unconditional and conditional standards, based on quantile regression coefficient estimates. We found the 5th and 10th percentiles were more strongly associated with age and education than were higher percentiles. Model diagnostics demonstrated the accuracy of the standards. The development and use of unconditional and conditional standards should facilitate cognitive assessment in clinical practice and deserve further studies.
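    A small sketch of the quantile-regression step behind such percentile curves, assuming the statsmodels package and synthetic data in place of the cohort:

```python
# Fit low and median percentiles of a cognitive score as functions of age.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(55, 84, n)
score = 28 - 0.08 * (age - 55) + rng.normal(0, 1.5, n)  # synthetic MMSE-like score
df = pd.DataFrame({"age": age, "score": score})

for q in (0.05, 0.10, 0.50):
    fit = smf.quantreg("score ~ age", df).fit(q=q)
    # Intercept and age slope define a smooth percentile line at quantile q.
    print(q, round(fit.params["Intercept"], 2), round(fit.params["age"], 3))
```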

  9. Inferring Lévy walks from curved trajectories: A rescaling method

    Science.gov (United States)

    Tromer, R. M.; Barbosa, M. B.; Bartumeus, F.; Catalan, J.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.

    2015-08-01

    An important problem in the study of anomalous diffusion and transport concerns the proper analysis of trajectory data. The analysis and inference of Lévy walk patterns from empirical or simulated trajectories of particles in two- and three-dimensional spaces (2D and 3D) is much more difficult than in 1D because path curvature is nonexistent in 1D but quite common in higher dimensions. Recently, a new method for detecting Lévy walks, which considers 1D projections of 2D or 3D trajectory data, has been proposed by Humphries et al. The key new idea is to exploit the fact that the 1D projection of a high-dimensional Lévy walk is itself a Lévy walk. Here, we ask whether or not this projection method is powerful enough to cleanly distinguish a 2D Lévy walk with added curvature from a simple Markovian correlated random walk. We study the especially challenging case in which both 2D walks have exactly identical probability density functions (pdf) of step sizes as well as of turning angles between successive steps. Our approach extends the original projection method by introducing a rescaling of the projected data. Upon projection and coarse-graining, the renormalized pdf for the travel distances between successive turnings is seen to possess a fat tail when there is an underlying Lévy process. We exploit this effect to infer a Lévy walk process in the original high-dimensional curved trajectory. In contrast, no fat tail appears when a (Markovian) correlated random walk is analyzed in this way. We show that this procedure works extremely well in clearly identifying a Lévy walk even when there is noise from curvature. The present protocol may be useful in realistic contexts involving ongoing debates on the presence (or not) of Lévy walks related to animal movement on land (2D) and in air and oceans (3D).

  10. IPR CURVE CALCULATING FOR A WELL PRODUCING BY INTERMITTENT GAS-LIFT METHOD

    Directory of Open Access Journals (Sweden)

    Zoran Mršić

    2009-12-01

    The master's degree thesis of Mršić (2009) presents a detailed procedure for calculating the inflow performance curve for intermittent gas lift, based entirely on data measured at the surface. This article explains the approach of that research and the essence of the results and observations acquired during the study. To evaluate the proposed method of calculating the average bottomhole flowing pressure (BHFP) as the key parameter of the inflow performance calculation, downhole pressure surveys were conducted in three producing wells at the Šandrovac and Bilogora oil fields: Šandrovac-75α, Bilogora-52 and Šandrovac-34. The absolute differences between measured and calculated values of average BHFP for the first two wells were Δp = 0.64 bar and Δp = 0.06 bar, with calculated relative errors of εr = 0.072 and εr = 0.0038, respectively. Due to a gas-lift valve malfunction in well Šandrovac-34, noticed during the downhole pressure survey, the calculated BHFP for that well cannot be considered correct for comparison with the measured value. The measured data also revealed the actual values of certain intermittent gas-lift parameters that are usually assumed from experience or calculated using empirical equations given in the literature. A significant difference was noticed for the parameter t2, the length of the minimum-pressure period: the measured values ranged from 10.74 min up to 16 min, while the empirical equation gives values in the range of 1.23 min up to 1.75 min. Based on the measured values of this parameter, a new empirical equation was established (the paper is published in Croatian).

  11. Determination of trace elements in standard reference materials by the k₀-standardization method

    International Nuclear Information System (INIS)

    Smodis, B.; Jacimovic, R.; Stegnar, P.; Jovanovic, S.

    1990-01-01

    The k₀-standardization method is suitable for routine multielement determinations by reactor neutron activation analysis (NAA). Investigation of NIST standard reference materials SRM 1571 Orchard Leaves, SRM 1572 Citrus Leaves, and SRM 1573 Tomato Leaves showed the systematic error of the 12 certified elements determined to be less than 8%. Thirty-four elements were determined in the NIST proposed SRM 1515 Apple Leaves.

  12. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
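    A hedged sketch of this kind of inversion (not the MINORIM code): model the magnetization curve as a linear combination of Langevin functions over a grid of candidate moments and recover non-negative weights with NNLS. Reduced units and simulated data are used for illustration; the real analysis works in SI quantities.

```python
# Recover a moment (size) distribution from a magnetization curve via NNLS.
import numpy as np
from scipy.optimize import nnls

H = np.linspace(0.01, 10.0, 60)          # reduced field grid
moments = np.logspace(-1, 1, 25)         # candidate dipole moments (reduced)

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

# Kernel matrix: column j = magnetization of particles with moment m_j.
K = np.column_stack([m * langevin(m * H) for m in moments])

# Simulated bimodal sample (log-normal-like weights on the moment grid).
w_true = np.exp(-0.5 * ((np.log(moments) - np.log(0.5)) / 0.3) ** 2) \
       + 0.5 * np.exp(-0.5 * ((np.log(moments) - np.log(5.0)) / 0.3) ** 2)
M = K @ w_true

w_est, _ = nnls(K, M)                    # enforced-positive number densities
print(np.round(w_est, 2))                # recovers the two modes
```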

  13. A computerized glow curve analysis (GCA) method for WinREMS thermoluminescent dosimeter data using MATLAB

    International Nuclear Information System (INIS)

    Harvey, John A.; Rodrigues, Miesher L.; Kearfott, Kimberlee J.

    2011-01-01

    A computerized glow curve analysis (GCA) program for handling of thermoluminescence data originating from WinREMS is presented. The MATLAB program fits the glow peaks using the first-order kinetics model. Tested materials are LiF:Mg,Ti, CaF2:Dy, CaF2:Tm, CaF2:Mn, LiF:Mg,Cu,P, and CaSO4:Dy, with most having an average figure of merit (FOM) of 1.3% or less, and CaSO4:Dy 2.2% or less. Output is a list of fit parameters, peak areas, and graphs for each fit, evaluating each glow curve in 1.5 s or less. - Highlights: → Robust algorithm for performing thermoluminescent dosimeter glow curve analysis. → Written in MATLAB so readily implemented on variety of computers. → Usage of figure of merit demonstrated for six different materials.
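    A minimal sketch of single-peak first-order glow-curve fitting in the same spirit (not the WinREMS/MATLAB program): the widely used closed-form first-order peak approximation of Kitis et al. is fitted to a simulated glow curve, and a figure of merit, 100·Σ|data − fit|/Σfit, is computed.

```python
# Fit a first-order thermoluminescence glow peak and report the FOM.
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5   # Boltzmann constant (eV/K)

def first_order_peak(T, Im, E, Tm):
    """Kitis et al. closed-form first-order peak: intensity vs temperature (K)."""
    d = (T - Tm) / Tm
    arg = (E / (kB * T)) * d
    return Im * np.exp(1.0 + arg
                       - (T / Tm) ** 2 * np.exp(arg) * (1.0 - 2.0 * kB * T / E)
                       - 2.0 * kB * Tm / E)

T = np.linspace(350.0, 550.0, 300)   # readout temperatures (K)
data = first_order_peak(T, 1000.0, 1.25, 460.0)
data += np.random.default_rng(2).normal(0, 5.0, T.size)  # simulated noise

popt, _ = curve_fit(first_order_peak, T, data, p0=[900.0, 1.0, 450.0])
fit = first_order_peak(T, *popt)
fom = 100.0 * np.abs(data - fit).sum() / fit.sum()
print(popt, f"FOM = {fom:.2f}%")
```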

  14. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for the evaluation of the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained from a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with great curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author)

  15. Fitness of the analysis method of magnesium in drinking water using atomic absorption with quadratic calibration curve

    International Nuclear Information System (INIS)

    Perez-Lopez, Esteban

    2014-01-01

    Quantitative chemical analysis is important in research, as well as in quality control, the sale of services and other areas of interest. Some instrumental analysis methods for quantification with a linear calibration curve have limitations, because of the short linear dynamic range of the analyte or, sometimes, of the technique itself. There was therefore a need to investigate the suitability of quadratic calibration curves for analytical quantification, with the aim of demonstrating that they are a valid calculation model for chemical analysis instruments. The analysis method is based on the technique of atomic absorption spectroscopy, in particular the determination of magnesium in a drinking water sample from the Tacares sector North of Grecia. A nonlinear calibration curve, specifically a curve with quadratic behaviour, was used and compared with the test results obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for the determination in question, since the concentrations were very similar and, according to the hypothesis testing used, can be considered equal. (author)
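    A short sketch of quantification with a quadratic calibration curve: fit absorbance = c₂C² + c₁C + c₀ to the standards and invert it for an unknown by keeping the root inside the calibrated range. The standards below are illustrative, not the study's data.

```python
# Quadratic calibration and inversion for an unknown absorbance.
import numpy as np

conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])                # Mg standards (mg/L)
absb = np.array([0.002, 0.090, 0.171, 0.310, 0.425, 0.520])    # absorbance

c2, c1, c0 = np.polyfit(conc, absb, 2)   # highest power first

def invert(a_obs):
    """Solve the quadratic for concentration; keep the in-range root."""
    roots = np.roots([c2, c1, c0 - a_obs])
    real = roots[np.isreal(roots)].real
    in_range = real[(real >= conc.min()) & (real <= conc.max())]
    return float(in_range[0])

print(f"{invert(0.250):.3f} mg/L")   # ~0.30 mg/L for this made-up reading
```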

  16. Standard test method for galling resistance of material couples

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers a laboratory test that ranks the galling resistance of material couples using a quantitative measure. Bare metals, alloys, nonmetallic materials, coatings, and surface modified materials may be evaluated by this test method. 1.2 This test method is not designed for evaluating the galling resistance of material couples sliding under lubricated conditions, because galling usually will not occur under lubricated sliding conditions using this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  17. Analytical chemistry methods for boron carbide absorber material. [Standard]

    Energy Technology Data Exchange (ETDEWEB)

    DELVIN WL

    1977-07-01

    This standard provides analytical chemistry methods for the analysis of boron carbide powder and pellets for the following: total C and B, B isotopic composition, soluble C and B, fluoride, chloride, metallic impurities, gas content, water, nitrogen, and oxygen. (DLC)

  18. Evaluation of pyrolysis curves for volatile elements in aqueous standards and carbon-containing matrices in electrothermal vaporization inductively coupled plasma mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Silva, A.F. [Delft University of Technology, Faculty of Applied Sciences, DelftChemTech, Julianalaan 136, 2628 BL Delft (Netherlands); Universidade Federal de Santa Catarina, Departamento de Quimica, 88040-900 Florianopolis, SC (Brazil); Welz, B. [Universidade Federal de Santa Catarina, Departamento de Quimica, 88040-900 Florianopolis, SC (Brazil); Loos-Vollebregt, M.T.C. de [Delft University of Technology, Faculty of Applied Sciences, DelftChemTech, Julianalaan 136, 2628 BL Delft (Netherlands)], E-mail: m.t.c.deloos-vollebregt@tudelft.nl

    2008-07-15

    Pyrolysis curves in electrothermal atomic absorption spectrometry (ET AAS) and electrothermal vaporization inductively coupled plasma mass spectrometry (ETV-ICP-MS) have been compared for As, Se and Pb in lobster hepatopancreas certified reference material using Pd/Mg as the modifier. The ET AAS pyrolysis curves confirm that the analytes are not lost from the graphite furnace up to a pyrolysis temperature of 800 °C. Nevertheless, a downward slope of the pyrolysis curve was observed for these elements in the biological material using ETV-ICP-MS. This could be related to a gain of sensitivity at low pyrolysis temperatures due to the matrix, which can act as carrier and/or promote changes in the plasma ionization equilibrium. Experiments with the addition of ascorbic acid to the aqueous standards confirmed that the higher intensities obtained in ETV-ICP-MS are related to the presence of organic compounds in the slurry. Pyrolysis curves for As, Se and Pb in coal and coal fly ash were also investigated using the same Pd/Mg modifier. Carbon intensities were measured in all samples using different pyrolysis temperatures. It was observed that pyrolysis curves for the three analytes in all slurry samples were similar to the corresponding graphs that show the carbon intensity for the same slurries for pyrolysis temperatures from 200 °C up to 1000 °C.

  19. Test of nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1993-01-01

    The present work is aimed at formulating an experimental approach to test proposed descriptions of nonexponential decay in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified form. The continuous kinetic function (CKF) method is used for the analysis of experimental data, and the CKF for the purely exponential case is considered as a standard for comparison between theoretical and experimental data. The degree of agreement is defined by a factor of goodness. Typical deviations with oscillatory behaviour were observed in the decay of 52V over a wide range of time. The proposed deviation, related to interaction between the decay products and the environment, is investigated, and a complex type of decay is discussed. (author). 10 refs, 2 tabs, 5 figs

  20. Standard test methods for characterizing duplex grain sizes

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 These test methods provide simple guidelines for deciding whether a duplex grain size exists. The test methods separate duplex grain sizes into one of two distinct classes, then into specific types within those classes, and provide systems for grain size characterization of each type. 1.2 Units—The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns associated with its use. It is the responsibility of the user of this standard to consult appropriate safety and health practices and determine the applicability of regulatory limitations prior to its use.

  1. A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers

    Science.gov (United States)

    This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology costs and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NO...
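    A toy sketch of how such an abatement supply curve is assembled: sort candidate retrofit measures by cost per ton removed and accumulate the reductions. The measures and numbers below are invented, not CUECost output.

```python
# Build a simple abatement supply (marginal cost) curve from candidate measures.
measures = [
    ("LNB",    120_000.0,  800.0),   # (name, annual cost $, tons removed) - assumed
    ("SNCR",   300_000.0, 1500.0),
    ("SCR",    900_000.0, 3600.0),
    ("reburn", 250_000.0,  900.0),
]

measures.sort(key=lambda m: m[1] / m[2])   # $/ton, cheapest first
cumulative = 0.0
for name, cost, tons in measures:
    cumulative += tons
    print(f"{name:8s} {cost / tons:8.0f} $/ton   cumulative {cumulative:6.0f} t")
```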

  2. Production of Curved Precast Concrete Elements for Shell Structures and Free-form Architecture using the Flexible Mould Method

    NARCIS (Netherlands)

    Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.

    2014-01-01

    Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double-curved elements without the need for milling two expensive

  3. New methods for deriving cometary secular light curves: C/1995 O1 (Hale-Bopp) revisited

    Science.gov (United States)

    Womack, Maria; Lastra, Nathan; Harrington, Olga; Curtis, Anthony; Wierzchos, Kacper; Ruffini, Nicholas; Charles, Mentzer; Rabson, David; Cox, Timothy; Rivera, Isabel; Micciche, Anthony

    2017-10-01

    We present an algorithm for reducing scatter and increasing precision in a comet light curve. As a demonstration, we processed apparent magnitudes of comet Hale-Bopp from 16 highly experienced observers (archived with the International Comet Quarterly), correcting for distance from Earth and phase angle. Different observers tend to agree on the difference in magnitudes of an object at different distances, but the magnitude reported by one observer is shifted relative to that of another for an object at a fixed distance. We estimated the shifts using a self-consistent statistical approach, leading to a sharper light curve and improving the precision of the measured slopes. The final secular light curve for comet Hale-Bopp ranges from -7 au (pre-perihelion) to +8 au (post-perihelion) and is the best secular light curve produced to date for this “great” comet. We discuss Hale-Bopp’s light curve evolution and possibly related physical implications, and the potential usefulness of this light curve for comparisons with other future bright comets. We also assess the appropriateness of using secular light curves to characterize dust production rates in Hale-Bopp and other dust-rich comets. M.W. acknowledges support from NSF grant AST-1615917.
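    A simplified sketch of the shift-estimation idea: model each reported magnitude as a shared light-curve value plus a per-observer offset and estimate the offsets self-consistently by alternation (the paper's actual statistical approach may differ). All data are simulated.

```python
# Estimate per-observer magnitude offsets by alternating least squares.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_t = 5, 40
t = np.linspace(0, 1, n_t)
true_curve = 6.0 - 3.0 * t                      # brightening comet (mag)
true_off = rng.normal(0, 0.3, n_obs)            # observer-dependent shifts

mags = true_curve + true_off[:, None] + rng.normal(0, 0.05, (n_obs, n_t))

offsets = np.zeros(n_obs)
for _ in range(20):                             # alternate curve and offsets
    shared = (mags - offsets[:, None]).mean(axis=0)
    offsets = (mags - shared).mean(axis=1)
    offsets -= offsets.mean()                   # fix the arbitrary zero point

print(np.round(offsets - (true_off - true_off.mean()), 3))  # residuals near 0
```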

  4. Double-curved precast concrete elements : Research into technical viability of the flexible mould method

    NARCIS (Netherlands)

    Schipper, H.R.

    2015-01-01

    The production of precast concrete elements with complex, double-curved geometry is expensive due to the high costs of the necessary moulds and the limited possibilities for mould reuse. Currently, CNC-milled foam moulds are the solution most often applied in projects, offering good aesthetic

  5. TWO METHODS OF ESTIMATING SEMIPARAMETRIC COMPONENT IN THE ENVIRONMENTAL KUZNET'S CURVE (EKC)

    OpenAIRE

    Paudel, Krishna P.; Zapata, Hector O.

    2004-01-01

    This study compares parametric and semiparametric smoothing techniques to estimate the environmental Kuznets curve. The ad hoc functional form, where income is related either as a square or a cubic function to environmental quality, is relaxed in search of a better nonlinear fit to the pollution-income relationship for panel data.

  6. Atlas of stress-strain curves

    CERN Document Server

    2002-01-01

    The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...

  7. Standard Practice for Optical Distortion and Deviation of Transparent Parts Using the Double-Exposure Method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This photographic practice determines the optical distortion and deviation of a line of sight through a simple transparent part, such as a commercial aircraft windshield or a cabin window. This practice applies to essentially flat or nearly flat parts and may not be suitable for highly curved materials. 1.2 Test Method F 801 addresses optical deviation (angular deviation) and Test Method F 2156 addresses optical distortion using grid line slope. These test methods should be used instead of Practice F 733 whenever practical. 1.3 This standard does not purport to address the safety concerns associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. Standard test method for isotopic analysis of uranium hexafluoride by double standard single-collector gas mass spectrometer method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This is a quantitative test method applicable to determining the mass percent of uranium isotopes in uranium hexafluoride (UF6) samples with 235U concentrations between 0.1 and 5.0 mass %. 1.2 This test method may be applicable for the entire range of 235U concentrations for which adequate standards are available. 1.3 This test method is for analysis by a gas magnetic sector mass spectrometer with a single collector using interpolation to determine the isotopic concentration of an unknown sample between two characterized UF6 standards. 1.4 This test method is to replace the existing test method currently published in Test Methods C761 and is used in the nuclear fuel cycle for UF6 isotopic analyses. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appro...

  9. Estimating Composite Curve Number Using an Improved SCS-CN Method with Remotely Sensed Variables in Guangzhou, China

    OpenAIRE

    Fan, Fenglei; Deng, Yingbin; Hu, Xuefei; Weng, Qihao

    2013-01-01

    The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapid growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is on...

  10. Method of making stepped photographic density standards of radiographic photographs

    International Nuclear Information System (INIS)

    Borovin, I.V.; Kondina, M.A.

    1987-01-01

    In industrial radiography practice the need often arises for a prompt evaluation of the photographic density of an x-ray film. A method of making stepped photographic density standards for industrial radiography by contact printing from a negative is described. The method is intended for industrial radiation flaw detection laboratories not having specialized sensitometric equipment

  11. [Modified Delphi method in the constitution of school sanitation standard].

    Science.gov (United States)

    Yin, Xunqiang; Liang, Ying; Tan, Hongzhuan; Gong, Wenjie; Deng, Jing; Luo, Jiayou; Di, Xiaokang; Wu, Yue

    2012-11-01

    To constitute a school sanitation standard using a modified Delphi method, and to explore the feasibility and advantages of the Delphi method in the constitution of school sanitation standards. Two rounds of expert consultation were adopted in this study. The data were analyzed with SPSS 15.0 to screen indices of the school sanitation standard. Thirty-two experts completed the 2 rounds of consultation. The average length of expert service was (24.69 ± 8.53) years. The authority coefficient was 0.729 ± 0.172. The expert positive coefficient was 94.12% (32/34) in the first round and 100% (32/32) in the second round. The harmonious coefficients of importance, feasibility and rationality in the second round were 0.493 (P < 0.05). The modified Delphi method is a rapid, effective and feasible method in this field.

  12. The phase curve survey of the irregular saturnian satellites: A possible method of physical classification

    Science.gov (United States)

    Bauer, James M.; Grav, Tommy; Buratti, Bonnie J.; Hicks, Michael D.

    2006-09-01

    During its 2005 January opposition, the saturnian system could be viewed at an unusually low phase angle. We surveyed a subset of Saturn's irregular satellites to obtain their true opposition magnitudes, or nearly so, down to phase angle values of 0.01°. Combining our data taken at the Palomar 200-inch and Cerro Tololo Inter-American Observatory's 4-m Blanco telescope with those in the literature, we present the first phase curves for nearly half the irregular satellites originally reported by Gladman et al. [2001. Nature 412, 163-166], including Paaliaq (SXX), Siarnaq (SXXIX), Tarvos (SXXI), Ijiraq (SXXII), Albiorix (SXVI), and additionally Phoebe's narrowest angle brightness measured to date. We find centaur-like steepness in the phase curves or opposition surges in most cases with the notable exception of three, Albiorix and Tarvos, which are suspected to be of similar origin based on dynamical arguments, and Siarnaq.

  13. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to d...

  14. Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods

    Science.gov (United States)

    Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.

    2018-06-01

    We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson-Cousins BVRI and Sloan g′r′i′z′ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.

  15. Marginal abatement cost curves for policy recommendation – A method for energy system analysis

    International Nuclear Information System (INIS)

    Tomaschek, Jan

    2015-01-01

    The transport sector is seen as one of the key factors for driving future energy consumption and greenhouse gas (GHG) emissions. In order to rank possible measures marginal abatement cost curves have become a tool to graphically represent the relationship between abatement costs and emission reduction. This paper demonstrates how to derive marginal abatement cost curves for well-to-wheel GHG emissions of the transport sector considering the full energy provision chain and the interlinkages and interdependencies within the energy system. Presented marginal abatement cost curves visualize substitution effects between measures for different marginal mitigation costs. The analysis makes use of an application of the energy system model generator TIMES for South Africa (TIMES-GEECO). For the example of Gauteng province, this study exemplary shows that the transport sector is not the first sector to address for cost-efficient reduction of GHG emissions. However, the analysis also demonstrates that several options are available to mitigate transport related GHG emissions at comparable low marginal abatement costs. This methodology can be transferred to other economic sectors as well as to other regions in the world to derive cost-efficient GHG reduction strategies

  16. Statistical methods for evaluating the attainment of cleanup standards

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, R.O.; Simpson, J.C.

    1992-12-01

    This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.

  17. Development of test practice requirements for a standard method on fracture toughness testing in the transition range

    International Nuclear Information System (INIS)

    McCabe, D.E.; Zerbst, U.; Heerens, J.

    1993-01-01

    This report covers the resolution of several issues that are relevant to the ductile-to-brittle transition range of structural steels. One of these issues was to compare a statistically based weakest-link method with constraint data adjustment methods for modeling specimen size effects on fracture toughness. Another was to explore the concept of a universal transition temperature curve shape (Master Curve). Data from a Materials Properties Council round robin activity were used to test the proposals empirically. The findings of this study are incorporated in an activity for the development of a draft standard test procedure ''Test Practice for Fracture Toughness in the Transition Range''. (orig.)

  18. Toward a standard method for determination of waterborne radon

    International Nuclear Information System (INIS)

    Vitz, E.

    1990-01-01

    When the USEPA specifies the maximum contaminant level (MCL) for any contaminant, a standard method for analysis must be simultaneously stipulated. Promulgation of the proposed MCL and standard method for radon in drinking water is expected by early next year, but a six-month comment period and revision will precede final enactment. The standard method for radon in drinking water will probably specify that either the Lucas cell technique or liquid scintillation spectrometry be used. This paper reports results which support a standard method with the following features: samples should be collected by an explicitly stated technique to control degassing, in glass vials with or without scintillation cocktail, and possibly in duplicate; samples should be measured by liquid scintillation spectrometry in a specified energy window, in a glass vial with particular types of cocktails; radium standards should be prepared with controlled quench levels and specified levels of carriers, but radium-free controls prepared by a specified method should be used in interlaboratory comparison studies.

  19. Influence of experimental methods on crossing in magnetic force-gap hysteresis curve of HTS maglev system

    Energy Technology Data Exchange (ETDEWEB)

    Lu Yiyun, E-mail: luyiyun6666@vip.sohu.co [Luoyang Institute of Science and Technology, Luoyang, Henan 471023 (China); Qin Yujie; Dang Qiaohong [Luoyang Institute of Science and Technology, Luoyang, Henan 471023 (China); Wang Jiasu [Applied Superconductivity Laboratory, Southwest Jiaotong University, P.O. Box 152, Chengdu, Sichuan 610031 (China)

    2010-12-01

    The crossing in the magnetic levitation force-gap hysteresis curve of a melt-processed high-temperature superconductor (HTS) versus a NdFeB permanent magnet (PM) was experimentally studied. One HTS bulk and one PM were used in the experiments. Four experimental methods were employed, combining high/low speed of movement of the PM with/without heat insulation materials (HIM) enclosed. Experimental results show that the crossing of the levitation force-gap curve is related to the experimental method. A crossing occurs in the magnetic force-gap curve when the PM approaches and departs from the sample at either high or low speed without HIM enclosed. When the PM is enclosed with HIM during the measurement procedures, there is no crossing in the force-gap curve, no matter whether the speed of movement of the PM is high or low. It was found experimentally that the maximum magnitude of the levitation force of the HTS increases with the moving speed of the PM. The results are interpreted based on Maxwell theories and flux flow-creep models of HTS.

  20. Influence of experimental methods on crossing in magnetic force-gap hysteresis curve of HTS maglev system

    International Nuclear Information System (INIS)

    Lu Yiyun; Qin Yujie; Dang Qiaohong; Wang Jiasu

    2010-01-01

    The crossing in the magnetic levitation force-gap hysteresis curve of a melt-processed high-temperature superconductor (HTS) versus a NdFeB permanent magnet (PM) was experimentally studied. One HTS bulk and one PM were used in the experiments. Four experimental methods were employed, combining high/low speed of movement of the PM with/without heat insulation materials (HIM) enclosed. Experimental results show that the crossing of the levitation force-gap curve is related to the experimental method. A crossing occurs in the magnetic force-gap curve when the PM approaches and departs from the sample at either high or low speed without HIM enclosed. When the PM is enclosed with HIM during the measurement procedures, there is no crossing in the force-gap curve, no matter whether the speed of movement of the PM is high or low. It was found experimentally that the maximum magnitude of the levitation force of the HTS increases with the moving speed of the PM. The results are interpreted based on Maxwell theories and flux flow-creep models of HTS.

  1. Standardized methods for photography in procedural dermatology using simple equipment.

    Science.gov (United States)

    Hexsel, Doris; Hexsel, Camile L; Dal'Forno, Taciana; Schilling de Souza, Juliana; Silva, Aline F; Siega, Carolina

    2017-04-01

    Photography is an important tool in dermatology. Reproducing the settings of before photos after interventions allows more accurate evaluation of treatment outcomes. In this article, we describe standardized methods and tips for obtaining photographs, both for clinical practice and for research in procedural dermatology, using common equipment. Standards for the studio, cameras, photographer, patients, and framing are presented in this article. © 2017 The International Society of Dermatology.

  2. Standard Test Method for Abrasive Wear Resistance of Cemented Carbides

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers the determination of abrasive wear resistance of cemented carbides. 1.2 The values stated in inch-pound units are to be regarded as the standard. The SI equivalents of inch-pound units are in parentheses and may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  3. Design of a rotary dielectric elastomer actuator using a topology optimization method based on pairs of curves

    Science.gov (United States)

    Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin

    2018-05-01

    Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the fields of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators and rotational actuators, have been designed using experience-based design methods. This paper proposes a new method for the design of DE actuators by using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator has been designed using this optimization method. Finally, experiments and comparisons between several DE actuators have been made to verify the optimized result.

  4. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation. This is the basis of radiotherapy techniques. Institutes that work with the application of high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the necessary information for planning brachytherapy in patients. The objective of this work is to develop, using Monte Carlo techniques, a computer program - ISODOSE - which allows the determination of isodose curves around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent in analytic solutions, such as the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and of the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed a quite detailed drawing of the isodose curves around linear sources. (author)
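
    A minimal sketch of the core idea, assuming a simple exponential-attenuation point kernel and an illustrative linear source: the relative dose at each grid point is estimated by Monte Carlo averaging of the kernel over random emission positions along the source, and isodose curves are then level sets of the resulting grid. All names and parameter values below are invented for illustration; this is not the ISODOSE program itself.

        import numpy as np

        # Minimal sketch: Monte Carlo estimate of relative dose around a linear
        # source on a 2D grid, using an exponential-attenuation point kernel.
        # All parameters (source length, attenuation coefficient, grid) are
        # illustrative assumptions, not values from the work described above.

        MU = 0.1          # effective linear attenuation coefficient (1/cm), assumed
        HALF_LEN = 2.0    # source half-length (cm), assumed
        N_SAMPLES = 2000  # Monte Carlo samples per grid point

        def dose_at(x, y, rng):
            # Average the point-source kernel over random positions on the line source.
            zs = rng.uniform(-HALF_LEN, HALF_LEN, N_SAMPLES)
            r = np.sqrt(x**2 + (y - zs)**2)
            r = np.maximum(r, 1e-3)                # avoid the singularity at r = 0
            return np.mean(np.exp(-MU * r) / r**2)

        rng = np.random.default_rng(0)
        xs = np.linspace(0.1, 5.0, 50)
        ys = np.linspace(-5.0, 5.0, 100)
        dose = np.array([[dose_at(x, y, rng) for x in xs] for y in ys])
        print("relative dose range: %.3e .. %.3e" % (dose.min(), dose.max()))

        # Isodose curves are then level sets of this grid, e.g. with
        # matplotlib.pyplot.contour(xs, ys, dose, levels=...).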

  5. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  6. Evaluation of methods for characterizing the melting curves of a high temperature cobalt-carbon fixed point to define and determine its melting temperature

    Science.gov (United States)

    Lowe, David; Machin, Graham

    2012-06-01

    The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve should be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting takes place over typically some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches is or could be used: some are purely practical, determining the point of inflection (POI) of the melting curve; some attempt to extrapolate to the liquidus temperature just at the end of melting; and one method claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification, which has not previously been applied to eutectic melting. The different methods have been applied to cobalt-carbon melting curves that were obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study, it is recommended that the POI continue to be used as a pragmatic measure of temperature but that, where required, a specified-limits approach be used to define and determine the melting temperature.
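
    The point-of-inflection determination lends itself to a short numerical sketch: smooth the measured melting curve and take the steepest point of the melt as the POI. The curve below is synthetic and all constants are assumptions, not data from the study.

        import numpy as np

        # Minimal sketch: locating the point of inflection (POI) of a melting
        # curve.  The data here are synthetic; real curves would come from
        # radiation thermometry of the fixed-point cell.

        t = np.linspace(0.0, 100.0, 1001)                 # time (s), assumed
        T = 1597.0 + 0.3 * np.tanh((t - 50.0) / 8.0)      # synthetic melting curve (K)
        T += np.random.default_rng(1).normal(0.0, 0.002, t.size)  # measurement noise

        # Smooth with a simple moving average before differentiating.
        win = 21
        kernel = np.ones(win) / win
        T_smooth = np.convolve(T, kernel, mode="same")

        dTdt = np.gradient(T_smooth, t)
        poi_index = np.argmax(dTdt[win:-win]) + win       # steepest part of the melt
        print("POI temperature: %.4f K at t = %.1f s" % (T_smooth[poi_index], t[poi_index]))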

  7. A direct method for determining complete positive and negative capillary pressure curves for reservoir rock using the centrifuge

    Energy Technology Data Exchange (ETDEWEB)

    Spinler, E.A.; Baldwin, B.A. [Phillips Petroleum Co., Bartlesville, OK (United States)

    1997-08-01

    A method is being developed for direct experimental determination of capillary pressure curves from saturation distributions produced during centrifuging fluids in a rock plug. A free water level is positioned along the length of the plugs to enable simultaneous determination of both positive and negative capillary pressures. Octadecane as the oil phase is solidified by temperature reduction while centrifuging to prevent fluid redistribution upon removal from the centrifuge. The water saturation is then measured via magnetic resonance imaging. The saturation profile within the plug and the calculation of pressures for each point of the saturation profile allows for a complete capillary pressure curve to be determined from one experiment. Centrifuging under oil with a free water level into a 100 percent water saturated plug results in the development of a primary drainage capillary pressure curve. Centrifuging similarly at an initial water saturation in the plug results in the development of an imbibition capillary pressure curve. Examples of these measurements are presented for Berea sandstone and chalk rocks.

  8. Standard Test Method for Thermal Oxidative Resistance of Carbon Fibers

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1982-01-01

    1.1 This test method covers the apparatus and procedure for the determination of the weight loss of carbon fibers, exposed to ambient hot air, as a means of characterizing their oxidative resistance. 1.2 The values stated in SI units are to be regarded as standard. The values given in parentheses are mathematical conversions to inch-pound units which are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard information, see Section 8.

  9. Standardization of Tc-99 by three liquid scintillation counting methods

    International Nuclear Information System (INIS)

    Wyngaardt, W.M. van; Staden, M.J. van; Lubbe, J.; Simpson, B.R.S.

    2014-01-01

    The NMISA participated in the international key comparison of the pure beta-emitter Technetium-99, CCRI(II)-K2.Tc-99. The comparison solution was standardized using three methods, namely the TDCR efficiency calculation method, the CIEMAT/NIST efficiency tracing method and the 4π(LS)β–γ coincidence tracing method with Co-60 as tracer. Excellent agreement between results obtained with the three methods confirmed the applicability of the beta spectral shape given by the latest (2011) DDEP evaluation of Tc-99 decay data, rather than the earlier (2004) evaluation. - Highlights: • Activity concentration of Tc-99 solution measured using three LSC methods. • Methods used are TDCR, CNET and 4π(LS)β–γ coincidence tracing. • Beta spectral shape confirmed by agreement between three methods

  10. A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making

    OpenAIRE

    Vickers Andrew; Hozo Iztok; Tsalatsanis Athanasios; Djulbegovic Benjamin

    2010-01-01

    Abstract Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. ...

  11. Using LMS Method in Smoothing Reference Centile Curves for Lipid Profile of Iranian Children and Adolescents: A CASPIAN Study

    Directory of Open Access Journals (Sweden)

    M Hoseini

    2012-05-01

    Full Text Available

    Background and Objectives: LMS is a general method for fitting smooth reference centile curves in the medical sciences. Such curves describe the distribution of a measurement as it changes according to some covariate, such as age or time. The method summarizes the changing distribution by three parameters: the mean (M), the coefficient of variation (S) and the Box-Cox power (L), which accounts for skewness. Applying maximum penalized likelihood and spline functions, the three curves are estimated and fitted with optimum smoothness. This study was conducted to provide the percentiles of the lipid profile of Iranian children and adolescents by the LMS method.

    Methods: Smoothed reference centile curves of four groups of lipids (triglycerides, total, LDL- and HDL-cholesterol) were developed from the data of 4824 Iranian school students, aged 6-18 years, living in six cities (Tabriz, Rasht, Gorgan, Mashad, Yazd and Tehran-Firouzkouh) in Iran. Demographic and laboratory data were taken from the national study of the surveillance and prevention of non-communicable diseases from childhood (CASPIAN Study). After data management, data of 4824 students were included in the statistical analysis, which was conducted by the modified LMS method proposed by Cole. The curves were developed with degrees of freedom from four to ten, and tools such as deviance, Q tests and detrended Q-Q plots were used to assess the goodness of fit of the models.

    Results: All tools confirmed the model, and the LMS method was found to be an appropriate method for smoothing reference centiles. The method revealed the distributional features of the variables, serving as an objective tool to determine their relative importance.

    Conclusion: This study showed that the triglycerides level is higher and
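
    As a worked illustration of the three-parameter summary described above, the following sketch evaluates the standard LMS centile formula, in which the 100*alpha-th centile at a given age equals M*(1 + L*S*z_alpha)^(1/L), or M*exp(S*z_alpha) when L = 0. The L, M and S values are invented, not the fitted Iranian reference values.

        import numpy as np
        from scipy.stats import norm

        # Minimal sketch of the LMS centile formula (Cole): given fitted
        # L (Box-Cox power), M (median) and S (coefficient of variation) at an
        # age, compute any centile of interest.

        def lms_centile(L, M, S, alpha):
            z = norm.ppf(alpha)
            if abs(L) < 1e-9:
                return M * np.exp(S * z)
            return M * (1.0 + L * S * z) ** (1.0 / L)

        # Hypothetical triglyceride parameters at one age, for illustration only:
        L, M, S = -1.2, 90.0, 0.35
        for alpha in (0.05, 0.50, 0.95):
            print("P%02d = %.1f" % (100 * alpha, lms_centile(L, M, S, alpha)))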

  12. Determining the spill flow discharge of combined sewer overflows using rating curves based on computational fluid dynamics instead of the standard weir equation.

    Science.gov (United States)

    Fach, S; Sitzenfrei, R; Rauch, W

    2009-01-01

    It is state of the art to evaluate and optimise sewer systems with urban drainage models. Since spill flow data are essential in the calibration process of conceptual models, it is important to enhance the quality of such data. A widespread approach is to calculate the spill flow volume by using standard weir equations together with measured water levels. However, these equations are only applicable to combined sewer overflow (CSO) structures whose weir constructions correspond to the standard weir layout. The objective of this work is to outline an alternative approach to obtaining spill flow discharge data, based on measurements with a sonic depth finder. The idea is to determine the relation between water level and rate of spill flow by running a detailed 3D computational fluid dynamics (CFD) model. Two real-world CSO structures were chosen due to their complex structure, especially with respect to the weir construction. In a first step, the simulation results were analysed to identify flow conditions for discrete steady states. It is shown that the flow conditions in the CSO structure change once the spill flow pipe acts as a controlled outflow, and therefore the spill flow discharge cannot be described with a standard weir equation. In a second step, the CFD results were used to derive rating curves which can be easily applied in everyday practice. The rating curves are therefore developed on the basis of the standard weir equation and the equation for orifice-type outlets. Because the intersection of the two equations is not known, the coefficients of discharge are regressed from the CFD simulation results. Furthermore, the rating curves regressed from the CFD simulation results are compared with the standard weir equation by using historic water levels and hydrographs generated with a hydrodynamic model. The uncertainties resulting from the widespread use of the standard weir equation are demonstrated.
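
    A minimal sketch of the rating-curve idea, assuming a rectangular sharp-crested weir law at low heads and an orifice-type law once the outflow is controlled: each discharge coefficient is regressed by least squares from (head, discharge) pairs that a CFD model would supply. All geometry and the sample points are invented for illustration.

        import numpy as np

        # Weir-type law at low heads, orifice-type law at high heads, with
        # coefficients of discharge regressed from hypothetical CFD output.

        G = 9.81
        B = 2.0          # weir crest width (m), assumed
        A_OUT = 0.5      # outlet cross-section (m^2), assumed

        def q_weir(h, cd):
            return cd * (2.0 / 3.0) * B * np.sqrt(2.0 * G) * np.maximum(h, 0.0) ** 1.5

        def q_orifice(h, cd):
            return cd * A_OUT * np.sqrt(2.0 * G * np.maximum(h, 0.0))

        # Hypothetical CFD results (head above crest in m, spill flow in m^3/s);
        # the first four points are assumed weir-controlled, the rest orifice-controlled.
        h_cfd = np.array([0.05, 0.10, 0.15, 0.25, 0.40, 0.60])
        q_cfd = np.array([0.04, 0.12, 0.22, 0.55, 1.10, 1.45])

        # Q is linear in Cd, so the least-squares fit is a simple projection.
        cd_weir = np.sum(q_cfd[:4] * q_weir(h_cfd[:4], 1.0)) / np.sum(q_weir(h_cfd[:4], 1.0) ** 2)
        cd_orif = np.sum(q_cfd[4:] * q_orifice(h_cfd[4:], 1.0)) / np.sum(q_orifice(h_cfd[4:], 1.0) ** 2)
        print("Cd(weir) = %.2f, Cd(orifice) = %.2f" % (cd_weir, cd_orif))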

  13. Hazard curve evaluation method development for a forest fire as an external hazard on nuclear power plants

    International Nuclear Information System (INIS)

    Okano, Yasushi; Yamano, Hidemasa

    2016-01-01

    A method to obtain a hazard curve of a forest fire was developed. The method has four steps: a logic tree formulation, a response surface evaluation, a Monte Carlo simulation, and an annual exceedance frequency calculation. The logic tree consists of domains for 'forest fire breakout and spread conditions', 'weather conditions', 'vegetation conditions', and 'forest fire simulation conditions.' Condition parameters of the logic boxes are treated as static if they are stable during a forest fire or insensitive to the fire intensity; non-static parameters are variables whose frequency/probability is assigned based on existing databases or evaluations. Response surfaces of the reaction intensity and the fireline intensity were prepared by interpolating outputs from a number of forest fire propagation simulations with the fire area simulator (FARSITE). The Monte Carlo simulation was performed such that each sample represented a set of variable parameters of the logic boxes, and a corresponding intensity was evaluated from the response surface. The hazard curve, i.e. the annual exceedance frequency of the intensity, was then calculated from the histogram of the Monte Carlo simulation outputs. The new method was applied to evaluate hazard curves of the reaction intensity and the fireline intensity for a typical location around a sodium-cooled fast reactor in Japan. (author)
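
    The Monte Carlo and exceedance-frequency steps can be sketched compactly. The response surface below is a hypothetical stand-in (wind speed and fuel load standing in for the weather and vegetation logic-tree boxes), as is the assumed annual breakout frequency.

        import numpy as np

        # Minimal sketch of the hazard-curve step: draw variable parameters,
        # evaluate an intensity from an (invented) response surface, and turn
        # the resulting distribution into an annual exceedance frequency.

        rng = np.random.default_rng(42)
        N = 100_000
        F_FIRE = 1e-2   # assumed annual frequency of a forest fire breakout

        # Hypothetical response surface: intensity as a function of wind speed
        # and fuel load; the functional form is illustrative only.
        wind = rng.weibull(2.0, N) * 6.0            # m/s
        fuel = rng.uniform(0.5, 2.5, N)             # kg/m^2
        intensity = 50.0 * fuel * (1.0 + 0.4 * wind) ** 1.5   # fireline intensity proxy

        # Annual exceedance frequency of each intensity level:
        levels = np.logspace(1, 4, 60)
        exceed = np.array([(intensity > x).mean() for x in levels]) * F_FIRE
        for x, f in zip(levels[::15], exceed[::15]):
            print("I > %8.1f : %.2e / yr" % (x, f))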

  14. Robust steganographic method utilizing properties of MJPEG compression standard

    Directory of Open Access Journals (Sweden)

    Jakub Oravec

    2015-06-01

    Full Text Available This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and further encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by swapping the positions of transform coefficients computed by the Discrete Cosine Transform. The article discusses the possibilities, the techniques used, and the advantages and drawbacks of the chosen solution. The results are presented at the end of the article.
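
    A minimal sketch of coefficient-swapping steganography in the DCT domain, in the spirit of the method described: one bit per 8x8 block is encoded in the ordering of a pair of mid-frequency coefficients. The coefficient pair and the omission of JPEG quantization are simplifications for illustration, not details from the article.

        import numpy as np
        from scipy.fftpack import dct, idct

        P1, P2 = (2, 1), (1, 2)   # arbitrary mid-frequency coefficient pair

        def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
        def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

        def embed_bit(block, bit):
            c = dct2(block.astype(float))
            a, b = c[P1], c[P2]
            # Encode the bit in the ordering of the two coefficients.
            if (bit == 1) != (a > b):
                c[P1], c[P2] = b, a
            return idct2(c)

        def extract_bit(block):
            c = dct2(block.astype(float))
            return int(c[P1] > c[P2])

        block = np.random.default_rng(3).integers(0, 256, (8, 8))
        stego = embed_bit(block, 1)
        print("recovered bit:", extract_bit(stego))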

  15. Standard methods for rearing and selection of Apis mellifera queens

    DEFF Research Database (Denmark)

    Büchler, Ralph; Andonov, Sreten; Bienefeld, Kaspar

    2013-01-01

    Here we cover a wide range of methods currently in use and recommended in modern queen rearing, selection and breeding. The recommendations are meant to equally serve as standards for both scientific and practical beekeeping purposes. The basic conditions and different management techniques for q...

  16. Standard test method for dynamic tear testing of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1983-01-01

    1.1 This test method covers the dynamic tear (DT) test using specimens that are 3/16 in. to 5/8 in. (5 mm to 16 mm) inclusive in thickness. 1.2 This test method is applicable to materials with a minimum thickness of 3/16 in. (5 mm). 1.3 The pressed-knife procedure described for sharpening the notch tip generally limits this test method to materials with a hardness level less than 36 HRC. Note 1—The designation 36 HRC is a Rockwell hardness number of 36 on Rockwell C scale as defined in Test Methods E 18. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  17. Evaluating the Capacity of Standard Investment Appraisal Methods

    NARCIS (Netherlands)

    M.M. Akalu

    2002-01-01

    The survey findings indicate the existence of a gap between the theory and practice of capital budgeting. Standard appraisal methods have shown a wide project value discrepancy, which is beyond and above the contingency limit. In addition, the research has found a growing trend in the use

  18. Strain- and stress-based forming limit curves for DP 590 steel sheet using Marciniak-Kuczynski method

    Science.gov (United States)

    Kumar, Gautam; Maji, Kuntal

    2018-04-01

    This article deals with the prediction of strain- and stress-based forming limit curves for advanced high strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely Von Mises, Hill's 48 and Yld2000-2d, and two hardening laws, i.e., the Hollomon power law and the Swift hardening law, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of the imperfection factor and the initial groove angle on the prediction of the FLC were also investigated. It was observed that the FLCs shifted upward with increasing imperfection factor. The initial groove angle was found to have a significant effect on the limit strains on the left side of the FLC, and an insignificant effect on the right side for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side. The numerically predicted FLCs considering the different combinations of yield criteria and hardening laws were compared with published experimental FLCs for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law was in better correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. The theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than strain-based forming limit curves.

  19. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstations to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the 9 previously developed features to form a new feature space for FP classification. With IRB approval, CTPA scans of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as the reference standard for the UM and PIOPED sets, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted by the CROP method are useful for FP reduction.
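
    The optimal-path step is classical Dijkstra. A minimal sketch on a toy vessel graph follows, assuming the graph is given as an adjacency list with edge costs; in the paper the nodes and costs come from the segmented vasculature.

        import heapq

        def dijkstra_path(graph, start, goal):
            # graph: {node: [(neighbor, edge_cost), ...]}
            dist, prev = {start: 0.0}, {}
            heap = [(0.0, start)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == goal:
                    break
                if d > dist.get(u, float("inf")):
                    continue  # stale heap entry
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            # Walk the predecessor chain back from the goal.
            path, node = [goal], goal
            while node != start:
                node = prev[node]
                path.append(node)
            return path[::-1]

        toy_graph = {"trunk": [("a", 1.0), ("b", 2.5)],
                     "a": [("candidate", 2.0)], "b": [("candidate", 0.2)]}
        print(dijkstra_path(toy_graph, "trunk", "candidate"))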

  20. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)

    2017-03-20

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
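
    A generic smoothly broken power law can be fitted along these lines. The functional form below is a common variant chosen for illustration, not necessarily the exact function derived in the paper, and the light-curve data are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        # Smoothly broken power law: rises as x^a1, declines as x^a2, with
        # break time tb and smoothness s.  All parameters here are invented.
        def broken_power_law(t, A, t0, tb, a1, a2, s):
            x = np.maximum(t - t0, 1e-6) / tb
            return A * x**a1 / (1.0 + x**(s * (a1 - a2)))**(1.0 / s)

        rng = np.random.default_rng(7)
        t = np.linspace(1.0, 40.0, 80)
        true = broken_power_law(t, 10.0, 0.0, 18.0, 2.0, -1.5, 2.0)
        flux = true * (1.0 + rng.normal(0.0, 0.02, t.size))   # synthetic photometry

        p0 = (8.0, 0.5, 15.0, 2.2, -1.0, 2.0)                 # rough initial guess
        popt, pcov = curve_fit(broken_power_law, t, flux, p0=p0, maxfev=20000)
        print("fitted rise index a1 = %.2f, break time tb = %.1f d" % (popt[3], popt[2]))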

  1. Radioactive standards and calibration methods for contamination monitoring instruments

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-06-01

    Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, together with radiation dose monitoring. For implementation of proper contamination monitoring, radiation measuring instruments should not only be suitable for the purpose of the monitoring, but should also be well calibrated for the quantities to be measured. In the calibration of contamination monitoring instruments, suitable reference activities need to be used. They are supplied in different forms, such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. In this paper, the concepts of calibration for contamination monitoring instruments, reference sources, methods for determining reference quantities and practical calibration methods for contamination monitoring instruments are described, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)

  2. The new fabrication method of standard surface sources

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Yasushi E-mail: yss.sato@aist.go.jp; Hino, Yoshio; Yamada, Takahiro; Matsumoto, Mikio

    2004-04-01

    We developed a new fabrication method for standard surface sources that uses an inkjet printer with inks into which a radioactive material has been mixed, printing on a sheet of paper. Three printed test patterns were prepared: (1) 100 mm x 100 mm uniformity test patterns, (2) positional-resolution test patterns with different widths and intervals of straight lines, and (3) logarithmic intensity test patterns with different radioactive intensities. The results revealed that the fabricated standard surface sources had high uniformity, high positional resolution, arbitrary shapes and a broad intensity range.

  3. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Mafalda; Seery, David [Astronomy Centre, University of Sussex, Brighton BN1 9QH (United Kingdom); Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk [Department of Theoretical Physics, University of the Basque Country, UPV/EHU, 48040 Bilbao (Spain)

    2015-12-01

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

  4. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    International Nuclear Information System (INIS)

    Dias, Mafalda; Seery, David; Frazer, Jonathan

    2015-01-01

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development

  5. Transient finite element magnetic field calculation method in the anisotropic magnetic material based on the measured magnetization curves

    International Nuclear Information System (INIS)

    Jesenik, M.; Gorican, V.; Trlep, M.; Hamler, A.; Stumberger, B.

    2006-01-01

    Many magnetic materials are anisotropic. In the 3D finite element method calculation, the anisotropy of the material is taken into account. The anisotropic magnetic material is described with magnetization curves for different magnetization directions. A 3D transient calculation of the rotational magnetic field in the circular sample of a round rotational single sheet tester, considering eddy currents, is made and compared with measurements to verify the correctness of the method and to analyze the magnetic field in the sample.

  6. Provincial carbon intensity abatement potential estimation in China: A PSO–GA-optimized multi-factor environmental learning curve method

    International Nuclear Information System (INIS)

    Yu, Shiwei; Zhang, Junjie; Zheng, Shuhong; Sun, Han

    2015-01-01

    This study aims to estimate carbon intensity abatement potential in China at the regional level by proposing a particle swarm optimization–genetic algorithm (PSO–GA) multivariate environmental learning curve estimation method. The model uses two independent variables, namely, per capita gross domestic product (GDP) and the proportion of the tertiary industry in GDP, to construct carbon intensity learning curves (CILCs), i.e., CO2 emissions per unit of GDP, of 30 provinces in China. Instead of the traditional ordinary least squares (OLS) method, a PSO–GA intelligent optimization algorithm is used to optimize the coefficients of a learning curve. The carbon intensity abatement potentials of the 30 Chinese provinces are estimated via PSO–GA under the business-as-usual scenario. The estimation reveals the following results. (1) For most provinces, the abatement potentials from improving a unit of the proportion of the tertiary industry in GDP are higher than the potentials from raising a unit of per capita GDP. (2) The average potential of the 30 provinces in 2020 will be 37.6% relative to the 2005 emission level. The potentials of Jiangsu, Tianjin, Shandong, Beijing, and Heilongjiang are over 60%. Ningxia is the only province without intensity abatement potential. (3) The total carbon intensity in China weighted by the GDP shares of the 30 provinces will decline by 39.4% in 2020 compared with that in 2005. This decline cannot achieve the 40%–45% carbon intensity reduction target set by the Chinese government. Additional mitigation policies should be developed to uncover the potentials of Ningxia and Inner Mongolia. In addition, the simulation accuracy of the CILCs optimized by PSO–GA is higher than that of the CILCs optimized by the traditional OLS method. - Highlights: • A PSO–GA-optimized multi-factor environmental learning curve method is proposed. • The carbon intensity abatement potentials of the 30 Chinese provinces are estimated by
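
    A minimal particle swarm sketch of the coefficient-fitting step, assuming a two-factor learning curve of the form CI = a * x1^b * x2^c with invented data; the GA coupling and any constraints used in the study are omitted for brevity.

        import numpy as np

        rng = np.random.default_rng(5)
        x1 = rng.uniform(1.0, 10.0, 40)          # per-capita GDP (illustrative units)
        x2 = rng.uniform(20.0, 60.0, 40)         # tertiary share of GDP (%), assumed
        ci = 5.0 * x1**-0.4 * x2**-0.2 * (1 + rng.normal(0, 0.03, 40))

        def sse(p):
            a, b, c = p
            return np.sum((ci - a * x1**b * x2**c) ** 2)

        # Plain global-best PSO over the three curve coefficients.
        n, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
        pos = rng.uniform([0.1, -1, -1], [10, 1, 1], (n, dim))
        vel = np.zeros((n, dim))
        pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(300):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos += vel
            f = np.array([sse(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        print("fitted (a, b, c):", np.round(gbest, 3))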

  7. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  8. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    Science.gov (United States)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter behaviour makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
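
    The SCS-CN relation at the heart of the procedure is compact enough to state directly; a sketch with the usual initial-abstraction ratio of 0.2 follows (P, Q and S in mm).

        # SCS-CN rainfall excess: Q = (P - Ia)^2 / (P - Ia + S), with Ia = 0.2*S
        # and S = 25400/CN - 254 (metric form).  Example values are illustrative.

        def scs_cn_runoff(p_mm, cn):
            s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
            ia = 0.2 * s                  # initial abstraction (mm)
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        # e.g. a 60 mm storm on a catchment with CN = 80:
        print("%.1f mm of rainfall excess" % scs_cn_runoff(60.0, 80))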

  9. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can produce high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations grow rapidly with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
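
    A coarse-to-fine sketch of the multiresolution idea, using a plain multiplicative-update NMF as a stand-in for the constrained pure-component factorization: factors computed on a coarsened wavelength grid are interpolated to initialize the next finer level. Data sizes and the resolution schedule are illustrative assumptions.

        import numpy as np

        def nmf(D, k, iters=200, W0=None, H0=None, rng=None):
            # Plain multiplicative-update NMF: D ~ W @ H, all entries nonnegative.
            rng = rng if rng is not None else np.random.default_rng(0)
            m, n = D.shape
            W = W0 if W0 is not None else rng.random((m, k))
            H = H0 if H0 is not None else rng.random((k, n))
            for _ in range(iters):
                H *= (W.T @ D) / (W.T @ W @ H + 1e-12)
                W *= (D @ H.T) / (W @ H @ H.T + 1e-12)
            return W, H

        rng = np.random.default_rng(1)
        D = np.abs(rng.random((50, 2)) @ rng.random((2, 4000)))   # toy spectra

        for step in (8, 2, 1):                    # resolutions: coarse -> full
            Dc = D[:, ::step]
            if step == 8:
                W, H = nmf(Dc, 2, rng=rng)
            else:
                # Interpolate the previous spectral factor onto the finer grid.
                xf = np.arange(Dc.shape[1])
                xc = np.arange(H.shape[1]) * prev_step / step
                H = np.vstack([np.interp(xf, xc, h) for h in H])
                W, H = nmf(Dc, 2, W0=W, H0=H, rng=rng)
            prev_step = step
        print("final factors:", W.shape, H.shape)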

  10. Structural modeling of age specific fertility curves in Peninsular Malaysia: An approach of Lee Carter method

    Science.gov (United States)

    Hanafiah, Hazlenah; Jemain, Abdul Aziz

    2013-11-01

    In recent years, the study of fertility has been receiving much attention among researchers, following fears of fertility decline driven by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study, as it is an established and widely used model for analysing demographic aspects. A singular value decomposition approach is incorporated with an ARIMA model to estimate age specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. The fertility index is then forecast using a random walk with drift and utilised to predict future age specific fertility. Results indicate that the proposed model provides a relatively good and reasonable data fit. In addition, there is an apparent and continuous decline in the age specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.
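
    A minimal sketch of the Lee-Carter decomposition with an SVD and a random-walk-with-drift forecast, on synthetic data standing in for the Peninsular Malaysia rates: log f(x,t) = a_x + b_x * k_t, with a_x the mean age profile and (b_x, k_t) from the first singular triplet.

        import numpy as np

        rng = np.random.default_rng(2)
        ages, years = 25, 50
        true_k = np.cumsum(rng.normal(-0.5, 0.3, years))        # drifting index
        true_b = np.abs(rng.random(ages)); true_b /= true_b.sum()
        log_f = -3.0 + np.outer(true_b, true_k) + rng.normal(0, 0.02, (ages, years))

        a_x = log_f.mean(axis=1)
        U, s, Vt = np.linalg.svd(log_f - a_x[:, None], full_matrices=False)
        b_x = U[:, 0] / U[:, 0].sum()           # usual normalization: sum(b_x) = 1
        k_t = s[0] * Vt[0] * U[:, 0].sum()      # rescale k_t to compensate

        # Forecast k_t with a random walk with drift, as in the study:
        drift = (k_t[-1] - k_t[0]) / (years - 1)
        k_future = k_t[-1] + drift * np.arange(1, 11)
        print("k_t forecast at horizon 10:", np.round(k_future[-1], 2))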

  11. Legislation, standards and methods for mercury emissions control

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-04-15

    Mercury is an element of growing global concern. The United Nations Environment Programme plans to finalise and ratify a new global legally-binding convention on mercury by 2013. Canada already has legislation on mercury emissions from coal-fired utilities and the USA has recently released the new Mercury and Air Toxics Standard. Although other countries may not have mercury-specific legislation as such, many have legislation which results in significant co-benefit mercury reduction due to the installation of effective flue-gas cleaning technologies. This report reviews the current situation and trends in mercury emission legislation and, where possible, discusses the actions that will be taken under proposed or impending standards globally and regionally. The report also reviews the methods currently applied for mercury control and for mercury emission measurement with emphasis on the methodologies most appropriate for compliance. Examples of the methods of mercury control currently deployed in the USA, Canada and elsewhere are included.

  12. Standard test method for liquid impingement erosion using rotating apparatus

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers tests in which solid specimens are eroded or otherwise damaged by repeated discrete impacts of liquid drops or jets. Among the collateral forms of damage considered are degradation of optical properties of window materials, and penetration, separation, or destruction of coatings. The objective of the tests may be to determine the resistance to erosion or other damage of the materials or coatings under test, or to investigate the damage mechanisms and the effect of test variables. Because of the specialized nature of these tests and the desire in many cases to simulate to some degree the expected service environment, the specification of a standard apparatus is not deemed practicable. This test method gives guidance in setting up a test, and specifies test and analysis procedures and reporting requirements that can be followed even with quite widely differing materials, test facilities, and test conditions. It also provides a standardized scale of erosion resistance numbers applicab...

  13. Gas measuring apparatus with standardization means, and method therefor

    International Nuclear Information System (INIS)

    Typpo, P.M.

    1980-01-01

    An apparatus and a method for standardizing a gas measuring device are described. The device has a source capable of emitting a beam of radiation aligned to impinge on a detector. A housing means encloses the beam. The housing means has a plurality of apertures permitting the gas to enter the housing means, to intercept the beam, and to exit from the housing means. The device further comprises means for closing the apertures and means for purging said gas from the housing means.

  14. Comparison of Standard and Fast Charging Methods for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Petr Chlebis

    2014-01-01

    Full Text Available This paper describes a comparison of standard and fast charging methods used in the field of electric vehicles, together with a comparison of their efficiency in terms of electrical energy consumption. The comparison was performed on a three-phase buck converter designed for an EV fast charging station. The results were obtained by both mathematical and simulation methods. A laboratory model of the entire physical application, which will later be used to verify the simulation results, is currently under construction.

  15. Modified Spectral Fatigue Methods for S-N Curves With MIL-HDBK-5J Coefficients

    Science.gov (United States)

    Irvine, Tom; Larsen, Curtis

    2016-01-01

    The rainflow method is used for counting fatigue cycles from a stress response time history, where the fatigue cycles are stress reversals. The rainflow method allows the application of Palmgren-Miner's rule in order to assess the fatigue life of a structure subject to complex loading. The fatigue damage may also be calculated from a stress response power spectral density (PSD) using the semi-empirical Dirlik, Single Moment, Zhao-Baker and other spectral methods. These methods effectively assume that the PSD has a corresponding time history which is stationary with a normal distribution. This paper shows how the probability density function for rainflow stress cycles can be extracted from each of the spectral methods. This extraction allows for the application of the MIL-HDBK-5J fatigue coefficients in the cumulative damage summation. A numerical example is given in this paper for the stress response of a beam undergoing random base excitation, where the excitation is applied separately by a time history and by its corresponding PSD. The fatigue calculation is performed in the time domain, as well as in the frequency domain via the modified spectral methods. The comparison of results shows that the modified spectral methods give results comparable to those of the time domain rainflow counting method.
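
    The final damage summation can be sketched briefly. The S-N form and coefficients below are assumptions of a MIL-HDBK-5-like log-log type, and the cycle histogram is invented; in practice the cycle counts come from rainflow counting or from the spectral probability density functions discussed above.

        import numpy as np

        A1, A2 = 9.65, 2.85          # assumed S-N coefficients (S in ksi)

        def cycles_to_failure(s_ksi):
            # Assumed S-N curve of the form log10(N) = A1 - A2 * log10(S).
            return 10.0 ** (A1 - A2 * np.log10(s_ksi))

        # Hypothetical counted cycles: (stress amplitude in ksi, number of cycles)
        histogram = [(10.0, 5e5), (20.0, 4e4), (35.0, 2e3), (50.0, 50.0)]

        # Palmgren-Miner's rule: sum the damage fractions over all stress bins.
        damage = sum(n / cycles_to_failure(s) for s, n in histogram)
        print("Miner damage index D = %.3f (failure predicted at D >= 1)" % damage)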

  16. Hot Spots Detection of Operating PV Arrays through IR Thermal Image Using Method Based on Curve Fitting of Gray Histogram

    Directory of Open Access Journals (Sweden)

    Jiang Lin

    2016-01-01

    Full Text Available The overall efficiency of PV arrays is affected by hot spots, which should be detected and diagnosed by applying appropriate monitoring techniques. The method of using IR thermal images to detect hot spots has been studied as a direct, noncontact, nondestructive technique. However, IR thermal images suffer from relatively high stochastic noise and non-uniformity clutter, so conventional image processing methods are not effective. This paper proposes a method to detect hot spots based on curve fitting of the gray histogram. MATLAB simulation results show that the method proposed in the paper is effective in detecting hot spots while suppressing the noise generated during image acquisition.
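
    A minimal sketch of the histogram-fitting idea, assuming the background gray levels are approximately Gaussian: fit the gray histogram of the IR image and flag pixels far above the fitted background as hot-spot candidates. The image is synthetic and the 4-sigma threshold is an arbitrary illustrative choice.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, a, mu, sigma):
            return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        rng = np.random.default_rng(4)
        img = rng.normal(120.0, 8.0, (200, 200))      # background gray levels
        img[50:55, 80:85] += 60.0                     # implanted hot spot

        hist, edges = np.histogram(img.ravel(), bins=256, range=(0, 255))
        centers = 0.5 * (edges[:-1] + edges[1:])
        p0 = (hist.max(), centers[np.argmax(hist)], 10.0)
        (a, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0)

        mask = img > mu + 4.0 * sigma                 # hot-spot candidate pixels
        print("hot-spot pixels found: %d" % mask.sum())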

  17. Photon and proton activation analysis of iron and steel standards using the internal standard method coupled with the standard addition method

    International Nuclear Information System (INIS)

    Masumoto, K.; Hara, M.; Hasegawa, D.; Iino, E.; Yagi, M.

    1997-01-01

    The internal standard method coupled with the standard addition method has been applied to photon activation analysis and proton activation analysis of minor elements and trace impurities in various types of iron and steel samples issued by the Iron and Steel Institute of Japan (ISIJ). Samples and standard-addition samples were first dissolved so as to mix homogeneously with an internal standard and the elements to be determined, and then solidified as a silica gel to give a similar matrix composition and geometry. Cerium and yttrium were used as internal standards in photon and proton activation, respectively. In photon activation, a 20 MeV electron beam was used for bremsstrahlung irradiation to reduce the matrix activity and nuclear interference reactions, and the results were compared with those of 30 MeV irradiation. In proton activation, iron was removed by the MIBK extraction method after dissolving the samples, to reduce the radioactivity of 56Co produced from iron via the 56Fe(p,n)56Co reaction. The results of proton and photon activation analysis were in good agreement with the standard values of ISIJ. (author)

  18. Standard test method for determination of resistance to stable crack extension under low-constraint conditions

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 This standard covers the determination of the resistance to stable crack extension in metallic materials in terms of the critical crack-tip-opening angle (CTOAc), ψc and/or the crack-opening displacement (COD), δ5 resistance curve (1). This method applies specifically to fatigue pre-cracked specimens that exhibit low constraint (crack-length-to-thickness and un-cracked ligament-to-thickness ratios greater than or equal to 4) and that are tested under slowly increasing remote applied displacement. The recommended specimens are the compact-tension, C(T), and middle-crack-tension, M(T), specimens. The fracture resistance determined in accordance with this standard is measured as ψc (critical CTOA value) and/or δ5 (critical COD resistance curve) as a function of crack extension. Both fracture resistance parameters are characterized using either a single-specimen or multiple-specimen procedures. These fracture quantities are determined under the opening mode (Mode I) of loading. Influences of environment a...

  19. An endogenous standard, radioisotopic ratio method in NAA

    International Nuclear Information System (INIS)

    Byrne, A.R.; Dermelj, M.

    1997-01-01

    A derivative form of NAA is proposed which is based on the use of an endogenous internal standard of already-known concentration in the sample. If a comparator with a known ratio of the determinand and the endogenous standard is co-irradiated with the sample, the determinand concentration is derived in terms of the endogenous standard concentration and the activity ratios of the two induced nuclides in the sample and comparator. As well as eliminating the sample mass and greatly reducing errors caused by pulse pile-up and geometrical differences, it was shown that in the radiochemical mode, if the endogenous standard is chosen so that the induced activity is radioisotopic with that from the determinand, the radiochemical yield is also eliminated and the risk of non-achievement of isotopic exchange is greatly reduced. The method is demonstrated, with good results, on reference materials for the determination of I, Mn and Ni. The advantages and disadvantages of this approach are discussed. It is suggested that it may find application in quality control and in extending the range of certified elements in reference materials. (author)

  20. Accurate determination of arsenic in arsenobetaine standard solutions of BCR-626 and NMIJ CRM 7901-a by neutron activation analysis coupled with internal standard method.

    Science.gov (United States)

    Miura, Tsutomu; Chiba, Koichi; Kuroiwa, Takayoshi; Narukawa, Tomohiro; Hioki, Akiharu; Matsue, Hideaki

    2010-09-15

    Neutron activation analysis (NAA) coupled with an internal standard method was applied to the determination of As in certified reference material (CRM) arsenobetaine (AB) standard solutions to verify their certified values. Gold was used as an internal standard to compensate for the difference in neutron exposure within an irradiation capsule and to improve the sample-to-sample repeatability. Application of the internal standard method also significantly improved the linearity of the calibration curve up to 1 microgram of As. The analytical reliability of the proposed method was evaluated by k(0)-standardization NAA. The analytical results for As in the AB standard solutions BCR-626 and NMIJ CRM 7901-a were (499±55) mg kg⁻¹ (k=2) and (10.16±0.15) mg kg⁻¹ (k=2), respectively. These values were found to be 15-20% higher than the certified values. The between-bottle variation of BCR-626 was much larger than the expanded uncertainty of the certified value, whereas that of NMIJ CRM 7901-a was almost negligible. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  1. Anatomical curve identification

    Science.gov (United States)

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods, but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  2. Standard Test Method for Normal Spectral Emittance at Elevated Temperatures

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1972-01-01

    1.1 This test method describes a highly accurate technique for measuring the normal spectral emittance of electrically conducting materials or materials with electrically conducting substrates, in the temperature range from 600 to 1400 K, and at wavelengths from 1 to 35 μm. 1.2 The test method requires expensive equipment and rather elaborate precautions, but produces data that are accurate to within a few percent. It is suitable for research laboratories where the highest precision and accuracy are desired, but is not recommended for routine production or acceptance testing. However, because of its high accuracy this test method can be used as a referee method to be applied to production and acceptance testing in cases of dispute. 1.3 The values stated in SI units are to be regarded as the standard. The values in parentheses are for information only. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and to determine the applicability of regulatory limitations prior to use.

  3. Finite element method for one-dimensional rill erosion simulation on a curved slope

    Directory of Open Access Journals (Sweden)

    Lijuan Yan

    2015-03-01

    Rill erosion models are important for hillslope soil erosion prediction and for land use planning. The development and use of rill erosion models have attracted increasing attention. The purpose of this research was to develop mathematical models with computer simulation procedures to simulate and predict rill erosion. The finite element method is known as an efficient tool in many applications other than rill soil erosion. In this study, the hydrodynamic and sediment continuity model equations for a rill erosion system were solved by the Galerkin finite element method and Visual C++ procedures. The simulated results are compared with spatially and temporally measured rill erosion processes under different conditions. The results indicate that the one-dimensional linear finite element method produced excellent predictions of rill erosion processes. Therefore, this study supplies a tool for further development of a dynamic soil erosion prediction model.

  4. Standardization method for alpha and beta surface sources

    Energy Technology Data Exchange (ETDEWEB)

    Sahagia, M; Grigorescu, E L; Razdolescu, A C; Ivan, C [Institute of Physics and Nuclear Engineering, Institute of Atomic Physics, PO Box MG-6, R-76900 Bucharest, (Romania)

    1994-01-01

    The installation and method of standardization of large surface alpha and beta sources are presented. A multiwire, flow-type proportional counter and the associated electronics are used. The counter is placed in a lead shield. The response of the system in (s⁻¹/Bq) or (s⁻¹/(particle·s⁻¹)) was determined for ²⁴¹Am, ²³⁹Pu, ¹⁴⁷Pm, ²⁰⁴Tl, ⁹⁰(Sr+Y) and ¹³⁷Cs using standard sources with different dimensions, from a few mm² to 180 × 220 mm². The system was legally attested for expanded uncertainties of +7%. (Author).

  5. Evaluation of diastolic phase by left ventricular volume curve using s2-gated equilibrium method among radioisotope angiography

    International Nuclear Information System (INIS)

    Watanabe, Yoshirou; Sakai, Akira; Inada, Mitsuo; Shiraishi, Tomokuni; Kobayashi, Akitoshi

    1982-01-01

    The S2-gated (second heart sound) method was designed by the authors. In 6 normal subjects and 16 patients (old myocardial infarction 12 cases, hypertension 2 cases and aortic regurgitation 2 cases), radioisotope (RI) angiography using the S2-gated equilibrium method was performed. In the RI angiography, ⁹⁹ᵐTc-human serum albumin (HSA) 555 MBq (15 mCi) as tracer, a PDP11/34 minicomputer and a PCG/ECG synchronizer (Metro Inst.) were used. Left ventricular (LV) volume curves were then obtained by the S2-gated and the electrocardiogram (ECG) R wave-gated methods. Using the LV volume curve, left ventricular ejection fraction (EF), mean ejection rate (mER, s⁻¹), mean filling rate (mFR, s⁻¹) and rapid filling fraction (RFF) were calculated. mFR indicated the mean filling rate during the rapid filling phase. RFF was defined as the fraction of the stroke volume filled during the rapid filling phase. The S2-gated method was reliable in the evaluation of the early diastolic phase, compared with the ECG-gated method. There was a difference between RFF in the normal group and in the myocardial infarction (MI) group (p < 0.005). RFF in the 2 groups was correlated with EF (r = 0.82, p < 0.01). RFF was useful in evaluating MI cases who had normal EF values. The comparison of mER by the ECG-gated method with mFR by the S2-gated method was useful in evaluating MI cases who had normal mER values. mFR was remarkably lower than mER in the MI group, but was approximately equal to mER in the normal group. In conclusion, the evaluation using RFF and mFR by the S2-gated method was useful in MI cases who had normal systolic phase indices. (author)

  6. KBr-LiBr and KBr-LiBr doped with Ti mixed single crystals grown by the Czochralski method and glow curve studies

    International Nuclear Information System (INIS)

    Faripour, H.; Faripour, N.

    2003-01-01

    Mixed single crystals of pure KBr-LiBr and of KBr-LiBr with a Ti dopant were grown by the Czochralski method. Because of the difference between the lattice parameters of KBr and LiBr, the growth speed of the crystals was relatively low, and they were annealed under special temperature conditions, which produced some cleavages. They were exposed to β radiation and the glow curve was analysed for each crystal. Analysis of the glow curves showed that the Ti impurity lowered the appearance temperature of the main glow peak.

  7. Reflector construction by sound path curves - A method of manual reflector evaluation in the field

    International Nuclear Information System (INIS)

    Siciliano, F.; Heumuller, R.

    1985-01-01

    In order to describe the time-of-flight behavior of various reflectors we have set up models and derived from them analytical and graphic approaches to reflector reconstruction. In the course of this work, maximum achievable accuracy and possible simplifications were investigated. The aim of the time-of-flight reconstruction method is to determine the points of a reflector on the basis of a sound path function (sound path as a function of the probe index position). This method can only be used on materials which are isotropic in terms of sound velocity, since it relies on time of flight being converted into sound path. This paper deals only with two-dimensional reconstruction; in other words, all statements relate to the plane of incidence. The method is based on the fact that the geometrical locus of the points equidistant from a certain probe index position is a circle. If circles with radii equal to the associated sound paths are drawn for various search unit positions, the points of intersection of the circles are the desired reflector points
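
    The circle construction described above is straightforward to compute. The sketch below is illustrative, not from the paper; it assumes a 2-D plane of incidence with probe positions on the scanning surface, and all names are placeholders.

        import math

        def reflector_points(p0, r0, p1, r1):
            """Intersections of two circles centred at probe index positions p0 and p1,
            with radii equal to the measured sound paths r0 and r1. Returns 0, 1 or 2
            candidate reflector points; the physical one lies inside the material."""
            dx, dy = p1[0] - p0[0], p1[1] - p0[1]
            d = math.hypot(dx, dy)
            if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
                return []                             # inconsistent sound paths
            a = (r0**2 - r1**2 + d**2) / (2 * d)      # distance from p0 to the chord
            h = math.sqrt(max(r0**2 - a**2, 0.0))     # half chord length
            mx, my = p0[0] + a * dx / d, p0[1] + a * dy / d
            return list({(mx + h * dy / d, my - h * dx / d),
                         (mx - h * dy / d, my + h * dx / d)})

        # Two probe positions on the surface (y = 0) with sound paths of 5 units:
        print(reflector_points((0.0, 0.0), 5.0, (6.0, 0.0), 5.0))   # (3.0, +/-4.0)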

  8. Standard CMMI℠ Appraisal Method for Process Improvement (SCAMPI℠), Version 1.1: Method Definition Document

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard CMMI Appraisal Method for Process Improvement (SCAMPI℠) is designed to provide benchmark quality ratings relative to Capability Maturity Model® Integration (CMMI℠) models...

  9. Cutibacterium acnes molecular typing: time to standardize the method.

    Science.gov (United States)

    Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S

    2018-03-12

    The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing different typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level. We first propose to use multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogeny investigation including numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between one article and another, and also the interpretation of clinical data.

  10. A standardized method for beam design in neutron capture therapy

    International Nuclear Information System (INIS)

    Storr, G.J.; Harrington, B.V.

    1993-01-01

    A desirable end point for a given beam design for Neutron Capture Therapy (NCT) should be a quantitative description of tumour control probability and normal tissue damage. Achieving this goal will ultimately rely on data from NCT human clinical trials. Traditional descriptions of beam designs have used a variety of assessment methods to quantify proposed or installed beam designs. These methods include measurement and calculation of "free field" parameters, such as neutron and gamma flux intensities and energy spectra, and figures-of-merit in tissue equivalent phantoms. The authors propose here a standardized method for beam design in NCT. This method would allow all proposed and existing NCT beam facilities to be compared equally. The traditional approach to determining a quantitative description of tumour control probability and normal tissue damage in NCT research may be described by the following path: beam design → dosimetry → macroscopic effects → microscopic effects. Methods exist that allow neutron and gamma fluxes and their energy dependence to be calculated and measured to good accuracy. By using this information and intermediate dosimetric quantities such as kerma factors for neutrons and gammas, the macroscopic effect (absorbed dose) in geometries of tissue or tissue-equivalent materials can be calculated. After this stage, for NCT the data begin to become more sparse and in some areas ambiguous. Uncertainties in the Relative Biological Effectiveness (RBE) of some NCT dose components mean that beam designs based on assumptions considered valid a few years ago may have to be reassessed. A standard method is therefore useful for comparing different NCT facilities.

  11. PIV Measurement of Pulsatile Flows in 3D Curved Tubes Using Refractive Index Matching Method

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Hyeon Ji; Ji, Ho Seong; Kim, Kyung Chun [Pusan Nat’l Univ., Busan (Korea, Republic of)

    2016-08-15

    Three-dimensional models of stenosed blood vessels were prepared using a 3D printer. The models included a straight pipe with an axisymmetric stenosis and a pipe bent 10° from the center of the stenosis. A refractive index matching method was utilized to measure accurate velocity fields inside the 3D tubes. Three different pulsatile flows were generated and controlled by changing the rotational speed of the peristaltic pump. Unsteady velocity fields were measured by a time-resolved particle image velocimetry method. Periodic shedding of vortices occurred, and their motion depended on the region of maximum velocity. The sizes, positions and symmetry of the vortices are influenced by the mean Reynolds number and the tube geometry. In the case of the bent pipe, a recirculation zone observed downstream of the stenosis could explain the possibility of blood clot formation and adhesion from a hemodynamic point of view.

  13. Receiver operating characteristic (ROC) curves: review of methods with applications in diagnostic medicine

    Science.gov (United States)

    Obuchowski, Nancy A.; Bullen, Jennifer A.

    2018-04-01

    Receiver operating characteristic (ROC) analysis is a tool used to describe the discrimination accuracy of a diagnostic test or prediction model. While sensitivity and specificity are the basic metrics of accuracy, they have many limitations when characterizing test accuracy, particularly when comparing the accuracies of competing tests. In this article we review the basic study design features of ROC studies, illustrate sample size calculations, present statistical methods for measuring and comparing accuracy, and highlight commonly used ROC software. We include descriptions of multi-reader ROC study design and analysis, address frequently seen problems of verification and location bias, discuss clustered data, and provide strategies for testing endpoints in ROC studies. The methods are illustrated with a study of transmission ultrasound for diagnosing breast lesions.
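
    As a concrete complement, an empirical ROC curve and its area under the curve (AUC) can be computed in a few lines. The sketch below uses plain NumPy and illustrative data, not data from the transmission ultrasound study.

        import numpy as np

        def roc_curve_points(scores, labels):
            """Empirical ROC curve: sweep the decision threshold over the observed
            scores and record (false positive rate, true positive rate) pairs."""
            order = np.argsort(-np.asarray(scores))
            labels = np.asarray(labels)[order]
            tpr = np.cumsum(labels) / labels.sum()            # sensitivity
            fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # 1 - specificity
            return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

        def auc(fpr, tpr):
            """Area under the empirical ROC curve by trapezoidal integration."""
            return float(np.trapz(tpr, fpr))

        scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]   # diagnostic test outputs
        labels = [1, 1, 0, 1, 0, 0]                # 1 = diseased, 0 = healthy
        fpr, tpr = roc_curve_points(scores, labels)
        print(auc(fpr, tpr))                       # ~0.89 for this toy data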

  14. Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image

    Directory of Open Access Journals (Sweden)

    Miguel Colom

    2016-12-01

    In the article 'Image Noise Level Estimation by Principal Component Analysis', S. Pyatykh, J. Hesser, and L. Zheng propose a new method to estimate the variance of the noise in an image from the eigenvalues of the covariance matrix of the overlapping blocks of the noisy image. Instead of using all the patches of the noisy image, the authors propose an iterative strategy to adaptively choose the optimal set containing the patches with lowest variance. Although the method measures uniform Gaussian noise, it can be easily adapted to deal with signal-dependent noise, which is realistic with the Poisson noise model obtained by a CMOS or CCD device in a digital camera.
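
    The core eigenvalue computation can be sketched as follows. This simplified version uses all patches rather than the adaptive low-variance subset the authors propose, and the data are synthetic.

        import numpy as np

        def noise_std_pca(image, block=5):
            """Rough noise estimate: the smallest eigenvalue of the covariance
            matrix of overlapping block x block patches approximates the noise
            variance when the clean patches lie in a lower-dimensional subspace."""
            H, W = image.shape
            patches = np.stack([image[i:H - block + 1 + i, j:W - block + 1 + j].ravel()
                                for i in range(block) for j in range(block)], axis=1)
            eigvals = np.linalg.eigvalsh(np.cov(patches, rowvar=False))
            return np.sqrt(max(eigvals[0], 0.0))    # eigvalsh sorts ascending

        rng = np.random.default_rng(0)
        clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth synthetic image
        noisy = clean + rng.normal(0, 5.0, clean.shape)     # known sigma = 5
        print(noise_std_pca(noisy))                          # close to 5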

  15. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    Science.gov (United States)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but it is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed; it includes the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is applied here to a real case study and a sensitivity analysis concerning the remaining parameters is presented; the results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
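
    The event-scale bookkeeping that CN4GA borrows from the SCS-CN method can be sketched as follows; the Ia = 0.2 S convention and all numbers are illustrative assumptions, not values from the study.

        def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
            """SCS-CN event totals: potential retention S, initial abstraction Ia,
            and direct runoff Q = (P - Ia)^2 / (P - Ia + S) for P > Ia.
            Uses the common Ia = 0.2 S convention; all depths in millimetres."""
            S = 25400.0 / CN - 254.0       # metric form of S = 1000/CN - 10 (inches)
            Ia = ia_ratio * S
            Q = (P_mm - Ia) ** 2 / (P_mm - Ia + S) if P_mm > Ia else 0.0
            return S, Ia, Q

        S, Ia, Q = scs_cn_runoff(P_mm=60.0, CN=75)
        print(S, Ia, Q)   # Ia and the total loss P - Q are what constrain Green-Ampt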

  16. Gold Nanoparticle-Aptamer-Based LSPR Sensing of Ochratoxin A at a Widened Detection Range by Double Calibration Curve Method.

    Science.gov (United States)

    Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin

    2018-01-01

    Ochratoxin A (OTA) is a mycotoxin generated by the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides the OTA aptamer, OTA itself at high concentration can also adsorb on the surface of gold nanoparticles (AuNPs) and further inhibit AuNP salt-induced aggregation. We herein report a new OTA assay applying the localized surface plasmon resonance effect of AuNPs and their aggregates. Because the result obtained from a single linear calibration curve is not reliable, we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind to the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods, due to their different binding strengths.

  17. A new method of testing pile using dynamic P-S-curve made by amplitude of wave train

    Science.gov (United States)

    Hu, Yi-Li; Xu, Jun; Duan, Yong-Kong; Xu, Zhao-Yong; Yang, Run-Hai; Zhao, Jin-Ming

    2004-11-01

    A new method of detecting the vertical bearing capacity of a single pile at high strain is discussed in this paper. A heavy hammer or a small rocket is used to strike the pile top, and detectors are used to record vibration graphs. An expression of higher degree for the strain (deformation force) is introduced. It is shown theoretically that the displacement, velocity and acceleration cannot be obtained by simply integrating the acceleration and differentiating the velocity when large displacement and high strain exist, namely when the pile generates a whole slip relative to the soil body; that is, there are non-linear relations between them. It is deduced accordingly that the force P and displacement S can be calculated from the amplitude of the wave train and the (dynamic) P-S curve drawn so as to determine the yield points. Further, a method of determining the vertical bearing capacity of a single pile is discussed. A static load test is used to check the result of the dynamic test and to determine the correlation constants of the dynamic-static P(Q)-S curve.

  18. AGAPEROS Searches for microlensing in the LMC with the Pixel Method; 1, Data treatment and pixel light curves production

    CERN Document Server

    Melchior, A.-L.; Ansari, R.; Aubourg, E.; Baillon, P.; Bareyre, P.; Bauer, F.; Beaulieu, J.-Ph.; Bouquet, A.; Brehin, S.; Cavalier, F.; Char, S.; Couchot, F.; Coutures, C.; Ferlet, R.; Fernandez, J.; Gaucherel, C.; Giraud-Heraud, Y.; Glicenstein, J.-F.; Goldman, B.; Gondolo, P.; Gros, M.; Guibert, J.; Gry, C.; Hardin, D.; Kaplan, J.; de Kat, J.; Lachieze-Rey, M.; Laurent, B.; Lesquoy, E.; Magneville, Ch.; Mansoux, B.; Marquette, J.-B.; Maurice, E.; Milsztajn, A.; Moniez, M.; Moreau, O.; Moscoso, L.; Palanque-Delabrouille, N.; Perdereau, O.; Prevot, L.; Renault, C.; Queinnec, F.; Rich, J.; Spiro, M.; Vigroux, L.; Zylberajch, S.; Vidal-Madjar, A.; Magneville, Ch.

    1999-01-01

    The presence and abundance of MAssive Compact Halo Objects (MACHOs) towards the Large Magellanic Cloud (LMC) can be studied with microlensing searches. The 10 events detected by the EROS and MACHO groups suggest that objects of 0.5 M☉ could fill 50% of the dark halo. This preferred mass is quite surprising, and increasing the presently small statistics is a crucial issue. Additional microlensing of stars too dim to be resolved in crowded fields should be detectable using the Pixel Method. We present here an application of this method to the EROS 91-92 data (one tenth of the whole existing data set). We emphasize the data treatment required for monitoring pixel fluxes. Geometric and photometric alignments are performed on each image. Seeing correction and error estimates are discussed. The 3.6" x 3.6" super-pixel light curves thus produced are very stable over the 120-day time span. Fluctuations at a level of 1.8% of the flux in blue and 1.3% in red are measured on the pixel light curves. This level of stabil...

  19. Slicing Method for curved façade and window extraction from point clouds

    Science.gov (United States)

    Iman Zolanvari, S. M.; Laefer, Debra F.

    2016-09-01

    Laser scanning technology is a fast and reliable method to survey structures. However, the automatic conversion of such data into solid models for computation remains a major challenge, especially where non-rectilinear features are present. Since openings and the overall dimensions of buildings are the most critical elements in computational models for structural analysis, this article introduces the Slicing Method as a new, computationally efficient method for extracting overall façade and window boundary points, reconstructing a façade into a geometry compatible with computational modelling. After finding a principal plane, the technique slices a façade into limited portions, with each slice representing a unique, imaginary section passing through a building. This is done along a façade's principal axes to segregate window and door openings from structural portions of the load-bearing masonry walls. The method detects each opening area's boundaries, as well as the overall boundary of the façade, in part by using a one-dimensional projection to accelerate processing. Slices were optimised as 14.3 slices per vertical metre of building and 25 slices per horizontal metre of building, irrespective of building configuration or complexity. The proposed procedure was validated by its application to three highly decorative, historic brick buildings. Accuracy in excess of 93% was achieved with no manual intervention on highly complex buildings and nearly 100% on simple ones. Furthermore, computational times were less than 3 sec for data sets up to 2.6 million points, while similar existing approaches required more than 16 hr for such datasets.

  20. SRF cavity alignment detection method using beam-induced HOM with curved beam orbit

    Science.gov (United States)

    Hattori, Ayaka; Hayano, Hitoshi

    2017-09-01

    We have developed a method to obtain the mechanical centers of nine-cell superconducting radio frequency (SRF) cavities from localized dipole modes, which are among the higher order modes (HOMs) induced by low-energy beams. It is to be noted that low-energy beams, which are used as alignment probes, are easy to bend in the fringe fields of accelerator cavities. The estimation of the beam orbit is important because only information about the beam positions measured by beam position monitors outside the cavities is available. In this case, the alignment information about the cavities can be obtained by optimizing the parameters of the acceleration components over the beam orbit simulation to consistently represent the beam position monitor readings measured at every beam sweep. We discuss details of the orbit estimation method, and estimate the mechanical center of the localized modes through experiments performed at the STF accelerator. The mechanical center is determined as (x, y) = (0.44 ± 0.56 mm, -1.95 ± 0.40 mm). We also discuss the error and the applicable range of this method.

  1. Standardized Method for High-throughput Sterilization of Arabidopsis Seeds.

    Science.gov (United States)

    Lindsey, Benson E; Rivero, Luz; Calhoun, Chistopher S; Grotewold, Erich; Brkljacic, Jelena

    2017-10-17

    Arabidopsis thaliana (Arabidopsis) seedlings often need to be grown on sterile media. This requires prior seed sterilization to prevent the growth of microbial contaminants present on the seed surface. Currently, Arabidopsis seeds are sterilized using two distinct sterilization techniques in conditions that differ slightly between labs and have not been standardized, often resulting in only partially effective sterilization or in excessive seed mortality. Most of these methods are also not easily scalable to a large number of seed lines of diverse genotypes. As technologies for high-throughput analysis of Arabidopsis continue to proliferate, standardized techniques for sterilizing large numbers of seeds of different genotypes are becoming essential for conducting these types of experiments. The response of a number of Arabidopsis lines to two different sterilization techniques was evaluated based on seed germination rate and the level of seed contamination with microbes and other pathogens. The treatments included different concentrations of sterilizing agents and times of exposure, combined to determine optimal conditions for Arabidopsis seed sterilization. Optimized protocols have been developed for two different sterilization methods: bleach (liquid-phase) and chlorine (Cl2) gas (vapor-phase), both resulting in high seed germination rates and minimal microbial contamination. The utility of these protocols was illustrated through the testing of both wild type and mutant seeds with a range of germination potentials. Our results show that seeds can be effectively sterilized using either method without excessive seed mortality, although detrimental effects of sterilization were observed for seeds with lower than optimal germination potential. In addition, an equation was developed to enable researchers to apply the standardized chlorine gas sterilization conditions to airtight containers of different sizes. The protocols described here allow easy, efficient, and

  2. Analysis of Indonesian educational system standard with KSIM cross-impact method

    Science.gov (United States)

    Arridjal, F.; Aldila, D.; Bustamam, A.

    2017-07-01

    The results of the Programme for International Student Assessment (PISA) in 2012 show that Indonesia ranked 64th out of 65 countries in mean mathematics score. In the 2013 Learning Curve mapping, Indonesia is included in the 10th category of countries with the lowest performance on the cognitive skills aspect, i.e. 37th out of 40 countries. Competency is built on 3 aspects, one of them being the cognitive aspect. The poor mapping results on the cognitive aspect reflect the low competency of graduates, the output of the Indonesian National Education System (INES). INES adopts the concept of Eight Educational System Standards (EESS), one of which is the graduate competency standard, connected directly with Indonesia's students. This research aims to model INES using KSIM cross-impact. Linear regression models of the EESS are constructed using the national accreditation data of senior high schools in Indonesia. The results are then interpreted as impact values in the construction of the KSIM cross-impact of INES. The construction is used to analyse the interaction of the EESS and to run numerical simulations of possible public policies in the education sector, i.e. stimulating the growth of the education staff, content, process and infrastructure standards. All public policy simulations were done with 2 methods, i.e. the multiplier impact method and the constant intervention method. The numerical simulation results show that stimulating the growth of the content standard in the KSIM cross-impact construction of the EESS is the best public policy option to maximize the growth of the graduate competency standard.

  3. Estimating Composite Curve Number Using an Improved SCS-CN Method with Remotely Sensed Variables in Guangzhou, China

    Directory of Open Access Journals (Sweden)

    Qihao Weng

    2013-03-01

    The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable which is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN using remote sensing variables, namely vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using Linear Spectral Mixture Analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring once every five years on average) and 81 mm (once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
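
    As an illustration of the composite-CN step, a per-pixel fraction-weighted average can be sketched as below; the per-cover CN values are placeholders, not the values used in the study, which depend on vegetation type, hydrologic soil group and condition.

        def composite_cn(f_veg, f_imp, f_soil, cn_veg=55.0, cn_imp=98.0, cn_soil=80.0):
            """Per-pixel composite curve number as a V-I-S fraction-weighted average.
            f_veg, f_imp, f_soil are the vegetation, impervious-surface and soil
            fractions from spectral mixture analysis; CN values are illustrative."""
            f_sum = f_veg + f_imp + f_soil
            return (f_veg * cn_veg + f_imp * cn_imp + f_soil * cn_soil) / f_sum

        # One pixel that is 30 % vegetation, 60 % impervious, 10 % bare soil:
        print(composite_cn(0.3, 0.6, 0.1))   # ~83.3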

  4. Standardization of 18F by coincidence and LSC methods

    International Nuclear Information System (INIS)

    Roteta, Miguel; Garcia-Torano, Eduardo; Rodriguez Barquero, Leonor

    2006-01-01

    The nuclide ¹⁸F disintegrates to ¹⁸O by β⁺ emission (96.86%) and electron capture (3.14%) with a half-life of 1.8288 h. It is widely used in nuclear medicine for positron emission tomography (PET). A radioactive solution of this nuclide has been standardized by two techniques: coincidence measurements with a pressurized proportional counter and liquid scintillation counting using the CIEMAT/NIST method. One ampoule containing a solution calibrated in activity was sent for measurement at the International Reference System maintained by the BIPM. Results are in excellent agreement with SIR values

  5. Alternative wind power modeling methods using chronological and load duration curve production cost models

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M R

    1996-04-01

    Because wind power is an intermittent resource, capturing its temporal variation is an important issue in the context of utility production cost modeling. Many production cost models use a method that creates a cumulative probability distribution outside the time domain. The purpose of this report is to examine two production cost models that represent the two major model types: chronological and load duration curve models. This report is part of the ongoing research undertaken by the Wind Technology Division of the National Renewable Energy Laboratory in utility modeling and wind system integration.

  6. Wavelength selection method with standard deviation: application to pulse oximetry.

    Science.gov (United States)

    Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija

    2011-07-01

    Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue, within the therapeutic window. One significant shortcoming of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to the subject's health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that allows low noise sensitivity. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with standard deviation minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
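
    The standard deviation map is simple to form from repeated spectra. The sketch below uses synthetic data and illustrative names, not the authors' measurements: rows are spectra acquired over time, columns are wavelengths, and the minima of the map mark temporally stable wavelengths.

        import numpy as np

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(600, 1000, 200)             # nm
        base = np.exp(-((wavelengths - 760) / 80.0) ** 2)     # static absorption shape
        temporal = rng.normal(0, 0.02, (50, 1)) * np.sin(wavelengths / 40.0)
        spectra = base + temporal + rng.normal(0, 0.005, (50, 200))

        std_map = spectra.std(axis=0)              # temporal std at each wavelength
        stable = wavelengths[np.argsort(std_map)[:2]]
        print("two least temporally noisy wavelengths:", stable)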

  7. Identification of Dynamic Flow Stress Curves Using the Virtual Fields Methods: Theoretical Feasibility Analysis

    Science.gov (United States)

    Leem, Dohyun; Kim, Jin-Hwan; Barlat, Frédéric; Song, Jung Han; Lee, Myoung-Gyu

    2018-03-01

    An inverse approach based on the virtual fields method (VFM) is presented to identify material hardening parameters under dynamic deformation. This dynamic-VFM (D-VFM) method does not require load information for the parameter identification. Instead, it utilizes acceleration fields in the specimen's gage region. To investigate the feasibility of the proposed inverse approach for dynamic deformation, virtual experiments using dynamic finite element simulations were conducted. The simulations could provide all the necessary data for the identification, such as displacement, strain, and acceleration fields. The accuracy of the identification results was evaluated by changing several parameters, such as specimen geometry, velocity, and traction boundary conditions. The analysis clearly shows that the D-VFM, which utilizes acceleration fields, can be a good alternative to the conventional identification procedure that uses load information. Also, it was found that proper deformation conditions are required to generate sufficient acceleration fields during dynamic deformation and enhance the identification accuracy of the D-VFM.

  8. Calibration curves for on-line leakage detection using radiotracer injection method

    Directory of Open Access Journals (Sweden)

    Ayoub Khatooni

    2017-11-01

    One of the most important requirements for industrial pipelines is leakage detection. In this paper, detection of a leak and determination of its amount using the radioactive tracer injection method have been simulated with the Monte Carlo MCNP code. The detector array included two NaI(Tl) detectors, located before and after the position of interest, which measure the gamma rays emitted by the radioactive tracer. After calibration of the radiation detectors, the amount of leakage can be calculated from the difference in the detector counts. Also, the effects of the pipe material, thickness and diameter, the crystal dimensions, the type of fluid, and the activity and type of the tracer (²⁴Na, ⁸²Br, ¹³¹I, ⁹⁹ᵐTc, ¹¹³ᵐIn) on the detectable amount of leakage have been investigated. According to the results, for example, a leakage of more than 0.007% by volume of the inlet fluid can be detected by the presented method for an iron pipe with an outer diameter of 4 inches and a wall thickness of 0.5 cm, petrol as the fluid inside the pipe, a 3 × 3 inch detector, and ²⁴Na with an activity of 100 mCi.

  9. A MACHINE-LEARNING METHOD TO INFER FUNDAMENTAL STELLAR PARAMETERS FROM PHOTOMETRIC LIGHT CURVES

    International Nuclear Information System (INIS)

    Miller, A. A.; Bloom, J. S.; Richards, J. W.; Starr, D. L.; Lee, Y. S.; Butler, N. R.; Tokarz, S.; Smith, N.; Eisner, J. A.

    2015-01-01

    A fundamental challenge for wide-field imaging surveys is obtaining follow-up spectroscopic observations: there are >10⁹ photometrically cataloged sources, yet modern spectroscopic surveys are limited to ∼a few × 10⁶ targets. As we approach the Large Synoptic Survey Telescope era, new algorithmic solutions are required to cope with the data deluge. Here we report the development of a machine-learning framework capable of inferring fundamental stellar parameters (T_eff, log g, and [Fe/H]) using photometric-brightness variations and color alone. A training set is constructed from a systematic spectroscopic survey of variables with Hectospec/Multi-Mirror Telescope. In sum, the training set includes ∼9000 spectra, for which stellar parameters are measured using the SEGUE Stellar Parameters Pipeline (SSPP). We employed the random forest algorithm to perform a non-parametric regression that predicts T_eff, log g, and [Fe/H] from photometric time-domain observations. Our final optimized model produces a cross-validated rms error (RMSE) of 165 K, 0.39 dex, and 0.33 dex for T_eff, log g, and [Fe/H], respectively. Examining the subset of sources for which the SSPP measurements are most reliable, the RMSE reduces to 125 K, 0.37 dex, and 0.27 dex, respectively, comparable to what is achievable via low-resolution spectroscopy. For variable stars this represents a ≈12%-20% improvement in RMSE relative to models trained with single-epoch photometric colors. As an application of our method, we estimate stellar parameters for ∼54,000 known variables. We argue that this method may convert photometric time-domain surveys into pseudo-spectrographic engines, enabling the construction of extremely detailed maps of the Milky Way, its structure, and history.
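
    The regression step can be sketched with scikit-learn's random forest (assuming that library is available); the features and targets below are synthetic stand-ins for the real photometric features and SSPP labels.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Synthetic stand-ins: rows are stars, columns are photometric features
        # (e.g. colors, variability amplitudes); targets are Teff, log g, [Fe/H].
        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 8))
        y = np.column_stack([
            5500 + 400 * X[:, 0] + 50 * rng.normal(size=2000),    # Teff (K)
            3.5 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000),    # log g (dex)
            -0.5 + 0.4 * X[:, 2] + 0.1 * rng.normal(size=2000)])  # [Fe/H] (dex)

        model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
        model.fit(X, y)                  # multi-output regression, one model for all three
        print(model.oob_score_)          # out-of-bag R^2, a quick internal cross-check
        print(model.predict(X[:3]))      # predicted (Teff, log g, [Fe/H]) triples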

  10. Standard Test Method for Measuring Binocular Disparity in Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the amount of binocular disparity that is induced by transparent parts such as aircraft windscreens, canopies, HUD combining glasses, visors, or goggles. This test method may be applied to parts of any size, shape, or thickness, individually or in combination, so as to determine the contribution of each transparent part to the overall binocular disparity present in the total “viewing system” being used by a human operator. 1.2 This test method represents one of several techniques that are available for measuring binocular disparity, but is the only technique that yields a quantitative figure of merit that can be related to operator visual performance. 1.3 This test method employs apparatus currently being used in the measurement of optical angular deviation under Method F 801. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not con...

  11. Wind Turbine Power Curves Incorporating Turbulence Intensity

    DEFF Research Database (Denmark)

    Sørensen, Emil Hedevang Lohse

    2014-01-01

    The performance of a wind turbine in terms of power production (the power curve) is important to the wind energy industry. The current IEC-61400-12-1 standard for power curve evaluation recognizes only the mean wind speed at hub height and the air density as relevant to the power production... The model and method are parsimonious in the sense that only a single function (the zero-turbulence power curve) and a single auxiliary parameter (the equivalent turbulence factor) are needed to predict the mean power at any desired turbulence intensity. The method requires only ten minute statistics...
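
    The prediction idea can be sketched as follows: average a zero-turbulence power curve over a wind speed distribution whose spread is set by the turbulence intensity. The power curve, the constants, and the Gaussian assumption are illustrative, and the equivalent turbulence factor of the cited method is omitted.

        import numpy as np

        def zero_turbulence_power(v):
            """Illustrative zero-turbulence power curve (kW): cubic between cut-in
            and rated speed, flat at rated power, zero outside the operating range."""
            return np.where((v >= 3) & (v < 12), 3000 * ((v - 3) / 9.0) ** 3,
                            np.where((v >= 12) & (v < 25), 3000.0, 0.0))

        def mean_power(u_mean, ti, n=4001):
            """Mean power at ten-minute mean speed u_mean and turbulence intensity
            ti, averaging the zero-turbulence curve over a Gaussian speed
            distribution with standard deviation ti * u_mean."""
            v = np.linspace(0, 40, n)
            sigma = max(ti * u_mean, 1e-6)
            pdf = np.exp(-0.5 * ((v - u_mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
            return np.trapz(zero_turbulence_power(v) * pdf, v)

        print(mean_power(10.0, 0.05), mean_power(10.0, 0.20))   # turbulence shifts mean power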

  12. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    Science.gov (United States)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
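
    The scaling step reads naturally as code. In the sketch below the per-layer time fractions are supplied directly rather than derived from the paper's closed-form average-path expression; all names and values are illustrative.

        import numpy as np

        def scale_reflectance(t_ns, R0, mu_a_per_mm, time_fractions, v_mm_per_ns=0.214):
            """Scale a zero-absorption time-resolved reflectance curve R0(t) to a
            model with layer absorptions mu_a (per mm). time_fractions[i] is the
            fraction of time a photon detected at t spends in layer i; v is an
            assumed speed of light in tissue (c / n with n ~ 1.4)."""
            mu_eff = np.dot(time_fractions, mu_a_per_mm)        # weighted absorption
            return R0 * np.exp(-mu_eff * v_mm_per_ns * t_ns)    # Beer-Lambert factor

        t = np.linspace(0.05, 2.0, 40)                 # ns
        R0 = t ** -1.5 * np.exp(-0.5 / t)              # toy zero-absorption curve
        print(scale_reflectance(t, R0, mu_a_per_mm=[0.01, 0.02],
                                time_fractions=[0.3, 0.7])[:5])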

  13. Probing the A1 to L10 transformation in FeCuPt using the first order reversal curve method

    Directory of Open Access Journals (Sweden)

    Dustin A. Gilbert

    2014-08-01

    The A1-L10 phase transformation has been investigated in (001) FeCuPt thin films prepared by atomic-scale multilayer sputtering and rapid thermal annealing (RTA). Traditional x-ray diffraction is not always applicable in generating a true order parameter, due to non-ideal crystallinity of the A1 phase. Using the first-order reversal curve (FORC) method, the A1 and L10 phases are deconvoluted into two distinct features in the FORC distribution, whose relative intensities change with the RTA temperature. The L10 ordering takes place via a nucleation-and-growth mode. A magnetization-based phase fraction is extracted, providing a quantitative measure of the L10 phase homogeneity.

  14. 41Ca standardization by the CIEMAT/NIST LSC method

    International Nuclear Information System (INIS)

    Rodriguez Barquero, L.; Los Arcos, J.M.

    1996-01-01

    A procedure for the liquid scintillation counting standardization of the electron-capture nuclide ⁴¹Ca has been successfully developed and applied with ⁴¹CaCl₂ and ⁴¹Ca-(HDEHP)ₙ samples synthesized in the laboratory from ⁴¹CaCO₃ supplied by Oak Ridge National Laboratory. Six scintillators were tested: the organic samples were stable in toluene-alcohol, Ultima-Gold™ and HiSafe III™ for 30 d, whereas the inorganic samples were only stable in toluene-alcohol and HiSafe III™ for the same period of time. Despite the low counting efficiencies (1%-13%) due to the very low energy, less than 3.6 keV, of the X-rays and Auger electrons of ⁴¹Ca, the stable samples were standardized by the CIEMAT/NIST method to a combined uncertainty of 2.4% over a range of figures of merit of 1.75 to 7.25 (³H equivalent efficiency of 40% to 7%). (orig.)

  15. Standardized methods for Grand Canyon fisheries research 2015

    Science.gov (United States)

    Persons, William R.; Ward, David L.; Avery, Luke A.

    2013-01-01

    This document presents protocols and guidelines to persons sampling fishes in the Grand Canyon, to help ensure consistency in fish handling, fish tagging, and data collection among different projects and organizations. Most such research and monitoring projects are conducted under the general umbrella of the Glen Canyon Dam Adaptive Management Program and include studies by the U.S. Geological Survey (USGS), U.S. Fish and Wildlife Service (FWS), National Park Service (NPS), the Arizona Game and Fish Department (AGFD), various universities, and private contractors. This document is intended to provide guidance to fieldworkers regarding protocols that may vary from year to year depending on specific projects and objectives. We also provide herein documentation of standard methods used in the Grand Canyon that can be cited in scientific publications, as well as a summary of changes in protocols since the document was first created in 2002.

  16. Noise reduction methods in the analysis of near infrared lunar occultation light curves for high angular resolution measurements

    International Nuclear Information System (INIS)

    Baug Tapas; Chandrasekhar Thyagarajan

    2013-01-01

    A lunar occultation (LO) technique in the near-infrared (NIR) provides angular resolution down to milliarcseconds for an occulted source, even with ground-based 1 m class telescopes. LO observations are limited to brighter objects because they require a high signal-to-noise ratio (S/N ∼40) for proper extraction of angular diameter values. Hence, methods to improve the S/N ratio by reducing noise using Fourier and wavelet transforms have been explored in this study. A sample of 54 NIR LO light curves observed with the IR camera at Mt Abu Observatory has been used. It is seen that both Fourier and wavelet methods have shown an improvement in S/N compared to the original data. However, the application of wavelet transforms causes a slight smoothing of the fringes and results in a higher value for angular diameter. Fourier transforms which reduce discrete noise frequencies do not distort the fringe. The Fourier transform method seems to be effective in improving the S/N, as well as improving the model fit, particularly in the fainter regime of our sample. These methods also provide a better model fit for brighter sources in some cases, though there may not be a significant improvement in S/N

  17. Comparative assessment of cyclic J-R curve determination by different methods in a pressure vessel steel

    Energy Technology Data Exchange (ETDEWEB)

    Chowdhury, Tamshuk, E-mail: tamshuk@gmail.com [Deep Sea Technologies, National Institute of Ocean Technology, Chennai, 600100 (India); Sivaprasad, S.; Bar, H.N.; Tarafder, S. [Fatigue & Fracture Group, Materials Science and Technology Division, CSIR-National Metallurgical Laboratory, Jamshedpur, 831007 (India); Bandyopadhyay, N.R. [School of Materials Science and Engineering, Indian Institute of Engineering, Science and Technology, Shibpur, Howrah, 711103 (India)

    2016-04-15

    The cyclic J-R behaviour of a reactor pressure vessel steel has been examined using different methods available in the literature, to identify the method best suited to cyclic fracture problems. The crack opening point was determined by a moving average method. The η factor was experimentally determined for cyclic loading conditions and found to be similar to the ASTM value. The analyses showed that adopting a procedure analogous to the ASTM standard for monotonic fracture is reasonable for cyclic fracture problems, and makes the comparison with monotonic fracture results straightforward. - Highlights: • Different methods of cyclic J-R evaluation compared. • A moving average method for the closure point proposed. • The η factor for cyclic J experimentally validated. • Method 1 is easier, provides a lower bound and a direct comparison to monotonic fracture.

  18. Standard test method for measurement of soil resistivity using the two-electrode soil box method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers the equipment and a procedure for the measurement of soil resistivity, for samples removed from the ground, for use in the control of corrosion of buried structures. 1.2 Procedures allow for this test method to be used in the field or in the laboratory. 1.3 The test method procedures are for the resistivity measurement of soil samples in the saturated condition and in the as-received condition. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. Soil resistivity values are reported in ohm-centimeter. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and to determine the applicability of regulatory limitations prior to use.

  19. A new electric method for non-invasive continuous monitoring of stroke volume and ventricular volume-time curves

    Directory of Open Access Journals (Sweden)

    Konings Maurits K

    2012-08-01

    Background: In this paper a new non-invasive, operator-free, continuous ventricular stroke volume monitoring device (Hemodynamic Cardiac Profiler, HCP) is presented, that measures the average stroke volume (SV) for each period of 20 seconds, as well as ventricular volume-time curves for each cardiac cycle, using a new electric method (Ventricular Field Recognition) with six independent electrode pairs distributed over the frontal thoracic skin. In contrast to existing non-invasive electric methods, our method does not use the algorithms of impedance or bioreactance cardiography. Instead, our method is based on specific 2D spatial patterns on the thoracic skin, representing the distribution, over the thorax, of changes in the applied current field caused by cardiac volume changes during the cardiac cycle. Since total heart volume variation during the cardiac cycle is a poor indicator for ventricular stroke volume, our HCP separates atrial filling effects from ventricular filling effects, and retrieves the volume changes of only the ventricles. Methods: Ex-vivo experiments on a post-mortem human heart have been performed to measure the effects of increasing the blood volume inside the ventricles in isolation, leaving the atrial volume invariant (which cannot be done in-vivo). These effects have been measured as a specific 2D pattern of voltage changes on the thoracic skin. Furthermore, a working prototype of the HCP has been developed that uses these ex-vivo results in an algorithm to decompose voltage changes, measured in-vivo by the HCP on the thoracic skin of a human volunteer, into an atrial component and a ventricular component, in almost real-time (with a delay of maximally 39 seconds). The HCP prototype has been tested in-vivo on 7 human volunteers, using G-suit inflation and deflation to provoke stroke volume changes, and LVot Doppler as a reference technique. Results: The ex-vivo measurements showed that ventricular filling

  20. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    International Nuclear Information System (INIS)

    Alhossen, I; Bugarin, F; Segonds, S; Villeneuve-Faure, C; Baudoin, F

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is studied using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential are investigated. To reach this goal, the EFDC is fitted to a law described by four parameters, called logistic law, and the influence of the electrode parameters on the law parameters has been investigated. Then, two methods are applied—Sobol’s method and the factorial design of experiment—to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not the result of the superposition of the contribution of each electrode parameter, but that it exhibits a strong contribution from electrode parameter interaction. Furthermore, thanks to these results, a matricial model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, and this is promising for charge investigation using an EFDC. (paper)
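
    The Sobol part of such an analysis can be sketched with the SALib package (assumed available); the stand-in function and bounds below are illustrative, with an interaction term mirroring the interaction effects reported above.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Stand-in for one logistic-law parameter as a function of the electrode
        # factors (width, depth, applied potential). Bounds are illustrative.
        problem = {"num_vars": 3,
                   "names": ["width", "depth", "potential"],
                   "bounds": [[1.0, 10.0], [0.1, 2.0], [1.0, 20.0]]}

        def logistic_parameter(x):
            w, d, v = x[:, 0], x[:, 1], x[:, 2]
            return 0.5 * w + 2.0 * d + 0.1 * v + 0.8 * w * d   # interaction term w*d

        X = saltelli.sample(problem, 1024)       # Saltelli sampling scheme
        Si = sobol.analyze(problem, logistic_parameter(X))
        print(Si["S1"])   # first-order indices
        print(Si["ST"])   # total indices; ST > S1 signals interaction effects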

  2. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures

    Science.gov (United States)

    Yehia, Ali M.

    2013-05-01

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. The specificity of the method was investigated and relative standard deviations were less than 1.5%. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.

  3. Method of using a nuclear magnetic resonance spectroscopy standard

    Science.gov (United States)

    Spicer, Leonard D.; Bennett, Dennis W.; Davis, Jon F.

    1985-01-01

    (CH₃)₃SiNSO is produced by the reaction of ((CH₃)₃Si)₂NH with SO₂. Also produced in the reaction are ((CH₃)₃Si)₂O and a new solid compound [NH₄][(CH₃)₃SiOSO₂]. Both (CH₃)₃SiNSO and [NH₄][(CH₃)₃SiOSO₂] have fluorescent properties. The reaction of the subject invention is used in a method of measuring the concentration of SO₂ pollutants in gases. By the method, a sample of gas is bubbled through a solution of ((CH₃)₃Si)₂NH, whereby any SO₂ present in the gas will react to produce the two fluorescent products. The measured fluorescence of these products can then be used to calculate the concentration of SO₂ in the original gas sample. The solid product [NH₄][(CH₃)₃SiOSO₂] may be used as a standard in solid state NMR spectroscopy, wherein the resonance peaks of either ¹H, ¹³C, ¹⁵N, or ²⁹Si may be used as a reference.

  4. Reducing matrix effect error in EDXRF: Comparative study of using standard and standard less methods for stainless steel samples

    International Nuclear Information System (INIS)

    Meor Yusoff Meor Sulaiman; Masliana Muhammad; Wilfred, P.

    2013-01-01

    Even though EDXRF analysis has major advantages in the analysis of stainless steel samples, such as simultaneous determination of the minor elements, analysis without sample preparation and non-destructive analysis, the matrix effects arising from inter-element interactions can make the final quantitative result inaccurate. The paper presents a comparative quantitative analysis using standard and standardless methods in the determination of these elements. The standard method was carried out by plotting regression calibration graphs of the elements of interest using BCS certified stainless steel standards. Different calibration plots were developed based on the available certified standards; these stainless steel grades include low alloy steel, austenitic, ferritic and high speed. The standardless method, on the other hand, uses mathematical modelling with a matrix-effect correction derived from the Lucas-Tooth and Price model. Further improvement in the accuracy of the standardless method was achieved by including pure elements in the development of the model. Discrepancy tests were then carried out for these quantitative methods on different certified samples, and the results show that the high speed method is the most reliable for determining Ni and the standardless method for Mn. (Author)

  5. Discrete curved ray-tracing method for radiative transfer in an absorbing-emitting semitransparent slab with variable spatial refractive index

    International Nuclear Information System (INIS)

    Liu, L.H.

    2004-01-01

    A discrete curved ray-tracing method is developed to analyze radiative transfer in a one-dimensional absorbing-emitting semitransparent slab with variable spatial refractive index. The curved ray trajectory is locally treated as a straight line, so the complicated and time-consuming computation of the ray trajectory is cut down. A problem of radiative equilibrium with a linearly varying spatial refractive index is taken as an example to examine the accuracy of the proposed method. The temperature distributions are determined by the proposed method and compared with data in the references, which were obtained by other methods. The results show that the discrete curved ray-tracing method has good accuracy in solving radiative transfer in a one-dimensional semitransparent slab with variable spatial refractive index.

  6. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterised from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood function formulation and some priors from expert knowledge. However, the presented inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model by a surrogate one in order to speed up the model evaluation and make the problem computationally feasible. Least squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic method.
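
    A minimal sketch of the surrogate-accelerated Bayesian idea, assuming a toy one-parameter forward model, a uniform prior, a Gaussian likelihood, and scikit-learn's SVR as a stand-in for the least squares support vector regression used in the paper:

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    # Toy "expensive" forward model: defect parameter -> scattered signal amplitude.
    def forward(theta):
        return 0.5 + 0.3 * theta + 0.1 * theta ** 2

    # Train a cheap surrogate on a handful of forward-model runs
    # (sklearn's SVR stands in for the least squares SVR named in the paper).
    theta_train = np.linspace(0.0, 3.0, 60).reshape(-1, 1)
    surrogate = SVR(kernel="rbf", C=100.0, epsilon=1e-3).fit(
        theta_train, forward(theta_train).ravel())

    # One synthetic noisy measurement.
    theta_true, sigma = 1.2, 0.05
    y_obs = forward(theta_true) + rng.normal(0.0, sigma)

    def log_post(theta):
        if not 0.0 <= theta <= 3.0:                  # uniform prior on [0, 3]
            return -np.inf
        pred = surrogate.predict([[theta]])[0]       # surrogate replaces the real model
        return -0.5 * ((y_obs - pred) / sigma) ** 2  # Gaussian log-likelihood

    # Random-walk Metropolis sampling of the surrogate posterior.
    theta, lp, samples = 1.5, log_post(1.5), []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.1)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    print(f"posterior mean ~ {np.mean(samples[1000:]):.2f} (true value {theta_true})")
    ```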

  7. An analytical solution of Richards' equation providing the physical basis of SCS curve number method and its proportionality relationship

    Science.gov (United States)

    Hooshyar, Milad; Wang, Dingbao

    2016-08-01

    The empirical proportionality relationship, which states that the ratios of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration-excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in a shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and a hydrostatic soil moisture profile between the no-flux boundary and the water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse-textured soils.
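
    For reference, the proportionality hypothesis and the runoff equation it implies can be written compactly in the standard SCS-CN notation (P rainfall depth, Ia initial abstraction, F cumulative infiltration, S potential retention, Q runoff):

    ```latex
    \frac{F}{S} = \frac{Q}{P - I_a},
    \qquad F = (P - I_a) - Q
    \quad\Longrightarrow\quad
    Q = \frac{(P - I_a)^{2}}{P - I_a + S}, \qquad P > I_a .
    ```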

  8. Vacuum expectation value of the stress tensor in an arbitrary curved background: The covariant point-separation method

    International Nuclear Information System (INIS)

    Christensen, S.M.

    1976-01-01

    A method known as covariant geodesic point separation is developed to calculate the vacuum expectation value of the stress tensor for a massive scalar field in an arbitrary gravitational field. The vacuum expectation value will diverge because the stress-tensor operator is constructed from products of field operators evaluated at the same space-time point. To remedy this problem, one of the field operators is taken to a nearby point. The resultant vacuum expectation value is finite and may be expressed in terms of the Hadamard elementary function. This function is calculated using a curved-space generalization of Schwinger's proper-time method for calculating the Feynman Green's function. The expression for the Hadamard function is written in terms of the biscalar of geodetic interval which gives a measure of the square of the geodesic distance between the separated points. Next, using a covariant expansion in terms of the tangent to the geodesic, the stress tensor may be expanded in powers of the length of the geodesic. Covariant expressions for each divergent term and for certain terms in the finite portion of the vacuum expectation value of the stress tensor are found. The properties, uses, and limitations of the results are discussed

  9. Three-dimensional topography of the gingival line of young adult maxillary teeth: curve averaging using reverse-engineering methods.

    Science.gov (United States)

    Park, Young-Seok; Chang, Mi-Sook; Lee, Seung-Pyo

    2011-01-01

    This study attempted to establish three-dimensional average curves of the gingival line of maxillary teeth using reconstructed virtual models to utilize as guides for dental implant restorations. Virtual models from 100 full-mouth dental stone cast sets were prepared with a three-dimensional scanner and special reconstruction software. Marginal gingival lines were defined by transforming the boundary points to the NURBS (nonuniform rational B-spline) curve. Using an iterative closest point algorithm, the sample models were aligned and the gingival curves were isolated. Each curve was tessellated by 200 points using a uniform interval. The 200 tessellated points of each sample model were averaged according to the index of each model. In a pilot experiment, regression and fitting analysis of one obtained average curve was performed to depict it as mathematical formulae. The three-dimensional average curves of six maxillary anterior teeth, two maxillary right premolars, and a maxillary right first molar were obtained, and their dimensions were measured. Average curves of the gingival lines of young people were investigated. It is proposed that dentists apply these data to implant platforms or abutment designs to achieve ideal esthetics. The curves obtained in the present study may be incorporated as a basis for implant component design to improve the biologic nature and related esthetics of restorations.
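
    The averaging step (resample every aligned curve at 200 arc-length-uniform points, then average index-wise across samples) can be sketched as follows; the curves below are synthetic stand-ins, not the study's scan data:

    ```python
    import numpy as np

    def resample(curve, n=200):
        """Resample a 3-D polyline at n points, uniformly spaced by arc length."""
        seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        t = np.linspace(0.0, s[-1], n)
        return np.column_stack([np.interp(t, s, curve[:, k]) for k in range(3)])

    # Synthetic stand-ins for gingival-line curves already registered by ICP.
    rng = np.random.default_rng(1)
    u = np.linspace(0.0, np.pi, 120)
    samples = [np.column_stack([np.cos(u), np.sin(u), 0.1 * np.sin(2 * u)])
               + rng.normal(0.0, 0.01, (120, 3)) for _ in range(100)]

    # Tessellate each curve by 200 points and average point-by-point.
    stack = np.stack([resample(c) for c in samples])   # shape (100, 200, 3)
    average_curve = stack.mean(axis=0)                 # shape (200, 3)
    print(average_curve.shape)
    ```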

  10. A Method for Developing Standard Patient Education Program.

    Science.gov (United States)

    Lura, Carolina Bryne; Hauch, Sophie Misser Pallesgaard; Gøeg, Kirstine Rosenbeck; Pape-Haugaard, Louise

    2018-01-01

    In Denmark, patients treated at haematology outpatient departments are instructed to self-manage the collection of blood samples from a central venous catheter (CVC). However, this is a complex and risky procedure, which can jeopardize patient safety. The aim of the study was to suggest a method for developing standard digital patient education programs for teaching patients self-administration of blood samples drawn from a CVC. The Design Science Research Paradigm was used to develop a digital patient education program, called PAVIOSY, to increase patient safety during execution of the blood sample collection procedure by using videos for teaching as well as procedural support. A step-by-step guide was developed and used as the basis for making the videos. Quality assurance through evaluation with a nurse was conducted on both the step-by-step guide and the videos. The quality assurance evaluation of the videos showed that: 1) errors in the order of the procedure can be determined by reviewing the videos even when the guide was followed; and 2) videos can be used to identify errors in the procedure, important for patient safety, which are not identifiable in a written script. To ensure correct clinical content of the educational patient system, health professionals must be engaged early in the content development and design phase.

  11. [Precautions of physical performance requirements and test methods during product standard drafting process of medical devices].

    Science.gov (United States)

    Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong

    2009-09-01

    The major idea of this article is to discuss standardization and normalization of product standards for medical devices. It analyzes problems related to physical performance requirements and test methods during the product standard drafting process and makes corresponding suggestions.

  12. Status of sennosides content in various Indian herbal formulations: Method standardization by HPTLC

    Directory of Open Access Journals (Sweden)

    Md.Wasim Aktar

    2008-12-01

    Full Text Available Several poly-herbal formulations containing senna (Cassia angustifolia) leaves are available in the Indian market for the treatment of constipation. The purgative effect of senna is due to the presence of two unique hydroxyanthracene glycosides, sennosides A and B. An HPTLC method for the quantitative analysis of sennosides A and B present in the formulations has been developed. Methanol extracts of the formulations were analyzed on silica gel 60 GF254 HPTLC plates with spot visualization under UV and scanning at 350 nm in absorption/reflection mode. Calibration curves were found to be linear in the range 200-1000 ng. The correlation coefficients were found to be 0.991 for sennoside A and 0.997 for sennoside B. The average recovery rate was 95% for sennoside A and 97% for sennoside B, showing the reliability and reproducibility of the method. The limits of detection and quantification were determined as 0.05 and 0.25 μg/g, respectively. The validity of the method with respect to analysis was confirmed by comparing the UV spectra of the herbal formulations with that of the standard within the same Rf window. The analysis revealed a significant variation in sennosides content.

  13. Status of sennosides content in various Indian herbal formulations: Method standardization by HPTLC

    Directory of Open Access Journals (Sweden)

    Md. Wasim Aktar

    2008-06-01

    Full Text Available Several poly-herbal formulations containing senna (Cassia angustifolia) leaves are available in the Indian market for the treatment of constipation. The purgative effect of senna is due to the presence of two unique hydroxyanthracene glycosides, sennosides A and B. An HPTLC method for the quantitative analysis of sennosides A and B present in the formulations has been developed. Methanol extracts of the formulations were analyzed on silica gel 60 GF254 HPTLC plates with spot visualization under UV and scanning at 350 nm in absorption/reflection mode. Calibration curves were found to be linear in the range 200-1000 ng. The correlation coefficients were found to be 0.991 for sennoside A and 0.997 for sennoside B. The average recovery rate was 95% for sennoside A and 97% for sennoside B, showing the reliability and reproducibility of the method. The limits of detection and quantification were determined as 0.05 and 0.25 μg/g, respectively. The validity of the method with respect to analysis was confirmed by comparing the UV spectra of the herbal formulations with that of the standard within the same Rf window. The analysis revealed a significant variation in sennosides content.

  14. Simulating Supernova Light Curves

    International Nuclear Information System (INIS)

    Even, Wesley Paul; Dolence, Joshua C.

    2016-01-01

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.

  15. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  16. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency, and the image is then scaled in accordance with this curve.

  17. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency, and the image is then scaled in accordance with this curve.

  18. Computing daily mean streamflow at ungaged locations in Iowa by using the Flow Anywhere and Flow Duration Curve Transfer statistical methods

    Science.gov (United States)

    Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.

    2012-01-01

    -mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and

  19. SEVEN NEW BINARIES DISCOVERED IN THE KEPLER LIGHT CURVES THROUGH THE BEER METHOD CONFIRMED BY RADIAL-VELOCITY OBSERVATIONS

    International Nuclear Information System (INIS)

    Faigler, S.; Mazeh, T.; Tal-Or, L.; Quinn, S. N.; Latham, D. W.

    2012-01-01

    We present seven newly discovered non-eclipsing short-period binary systems with low-mass companions, identified by the recently introduced BEER algorithm, applied to the publicly available 138-day photometric light curves obtained by the Kepler mission. The detection is based on the beaming effect (sometimes called Doppler boosting), which increases (decreases) the brightness of any light source approaching (receding from) the observer, enabling a prediction of the stellar Doppler radial-velocity (RV) modulation from its precise photometry. The BEER algorithm identifies the BEaming periodic modulation, with a combination of the well-known Ellipsoidal and Reflection/heating periodic effects, induced by short-period companions. The seven detections were confirmed by spectroscopic RV follow-up observations, indicating minimum secondary masses in the range 0.07-0.4 M☉. The binaries discovered establish for the first time the feasibility of the BEER algorithm as a new detection method for short-period non-eclipsing binaries, with the potential to detect in the near future non-transiting brown-dwarf secondaries, or even massive planets.

  20. Measurement of Internal Friction for Tungsten by the Curve Vibrating Method with Variation of Voltage and Temperature

    Directory of Open Access Journals (Sweden)

    Elin Yusibani

    2013-12-01

    Full Text Available The curved vibrating wire method (CVM) has been widely applied to measure gas viscosity. A fine tungsten wire is bent into a semi-circular shape of 50 mm diameter and arranged symmetrically in a magnetic field of about 0.2 T. The frequency domain is used for calculating the viscosity as the response to forced oscillation of the wire. Internal friction is one of the parameters in the CVM that has to be measured beforehand. The internal friction coefficient of the wire material, which is the inverse of the quality factor, has to be measured under vacuum conditions. The term involving internal friction actually represents the effective resistance to motion due to all non-viscous damping phenomena, including internal friction and magnetic damping. The internal friction measurements show that, at different induced voltages and elevated temperatures under vacuum conditions, the internal friction of tungsten is around 1-4 × 10⁻⁴.

  1. Synthesis, optical characterization, and size distribution determination by curve resolution methods of water-soluble CdSe quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Calink Indiara do Livramento; Carvalho, Melissa Souza; Raphael, Ellen; Ferrari, Jefferson Luis; Schiavon, Marco Antonio, E-mail: schiavon@ufsj.edu.br [Universidade Federal de Sao Joao del-Rei (UFSJ), MG (Brazil). Grupo de Pesquisa em Quimica de Materiais; Dantas, Clecio [Universidade Estadual do Maranhao (LQCINMETRIA/UEMA), Caxias, MA (Brazil). Lab. de Quimica Computacional Inorganica e Quimiometria

    2016-11-15

    In this work, a colloidal approach was applied to synthesize water-soluble CdSe quantum dots (QDs) bearing a surface ligand such as thioglycolic acid (TGA), 3-mercaptopropionic acid (MPA), glutathione (GSH), or thioglycerol (TGH). The synthesized material was characterized by X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), UV-visible spectroscopy (UV-Vis), and fluorescence spectroscopy (PL). Additionally, a comparative study of the optical properties of the different CdSe QDs was performed, demonstrating how the surface ligand affected crystal growth. The particle sizes were calculated from a polynomial function that correlates particle size with the position of the fluorescence maximum. Curve resolution methods (EFA and MCR-ALS) were employed to decompose a series of fluorescence spectra in order to investigate the CdSe QDs size distribution and determine the number of fractions with different particle sizes. The results for the MPA-capped CdSe sample showed only two main fractions with different particle sizes, with maximum emission at 642 and 686 nm. The diameters calculated from these emission maxima were, respectively, 2.74 and 3.05 nm. (author)

  2. Study of adsorption states in ZnO—Ag gas-sensitive ceramics using the ECTV curves method

    Directory of Open Access Journals (Sweden)

    Lyashkov A. Yu.

    2013-12-01

    Full Text Available The ZnO-Ag ceramic system was proposed as a material for semiconductor sensors of ethanol vapors quite a long time ago. The main goal of this work was to study the surface electron states of this system and their relation to the electric properties of the material. The Ag2O doping level was varied in the range of 0.1-2.0 mass %. Increasing the Ag doping shifts the Fermi level down (closer to the valence band). The paper presents research results on the electrical properties of ZnO-Ag ceramics obtained using the method of thermal vacuum curves of electrical conductivity. Changes in the electrical properties during heating in vacuum in the temperature range of 300-800 K were obtained and discussed. Increasing Tvac leads to removal of oxygen from the surface of the samples. The oxygen is adsorbed in the form of O2– and O– ions and acts as an acceptor for ZnO. This results in the lowering of the inter-crystallite potential barriers in the ceramic. The surface electron states (SES) above the Fermi level are virtually uncharged. The increase of the conductivity is caused by desorption of oxygen from the SES located below the Fermi level of the semiconductor. The model allows evaluating the depth of the Fermi level in inhomogeneous semiconductor materials.

  3. The History of Infant Formula: Quality, Safety, and Standard Methods.

    Science.gov (United States)

    Wargo, Wayne F

    2016-01-01

    Food-related laws and regulations have existed since ancient times. Egyptian scrolls prescribed the labeling needed for certain foods. In ancient Athens, beer and wines were inspected for purity and soundness, and the Romans had a well-organized state food control system to protect consumers from fraud or bad produce. In Europe during the Middle Ages, individual countries passed laws concerning the quality and safety of eggs, sausages, cheese, beer, wine, and bread; some of these laws still exist today. But more modern dietary guidelines and food regulations have their origins in the latter half of the 19th century when the first general food laws were adopted and basic food control systems were implemented to monitor compliance. Around this time, science and food chemistry began to provide the tools to determine "purity" of food based primarily on chemical composition and to determine whether it had been adulterated in any way. Since the key chemical components of mammalian milk were first understood, infant formulas have steadily advanced in complexity as manufacturers attempt to close the compositional gap with human breast milk. To verify these compositional innovations and ensure product quality and safety, infant formula has become one of the most regulated foods in the world. The present paper examines the historical development of nutritional alternatives to breastfeeding, focusing on efforts undertaken to ensure the quality and safety from antiquity to present day. The impact of commercial infant formulas on global regulations is addressed, along with the resulting need for harmonized, fit-for-purpose, voluntary consensus standard methods.

  4. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    Science.gov (United States)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used due to their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between the battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled under different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation and the testing time can therefore be largely reduced. The method shows great potential for practical application, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
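
    A minimal sketch of the smoothing step, assuming synthetic charge data and scipy's gaussian_filter1d; the peak position stands in for the features of interest that the paper regresses against capacity:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Synthetic charging data: voltage (V) vs. accumulated capacity (Ah).
    v = np.linspace(3.4, 4.2, 400)
    q = 2.0 / (1.0 + np.exp(-(v - 3.7) / 0.05))               # idealized plateau
    q += np.random.default_rng(2).normal(0.0, 0.002, v.size)  # sensor noise

    # Incremental capacity dQ/dV is very noise-sensitive ...
    ic_raw = np.gradient(q, v)
    # ... so smooth it with a Gaussian filter before feature extraction.
    ic_smooth = gaussian_filter1d(ic_raw, sigma=5)

    # Feature of interest (FOI): voltage position of the main IC peak.
    v_peak = v[np.argmax(ic_smooth)]
    print(f"IC peak at {v_peak:.3f} V")
    ```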

  5. ExSTA: External Standard Addition Method for Accurate High-Throughput Quantitation in Targeted Proteomics Experiments.

    Science.gov (United States)

    Mohammed, Yassene; Pan, Jingxi; Zhang, Suping; Han, Jun; Borchers, Christoph H

    2018-03-01

    Targeted proteomics using MRM with stable-isotope-labeled internal-standard (SIS) peptides is the current method of choice for protein quantitation in complex biological matrices. Better quantitation can be achieved with the internal standard-addition method, where successive increments of synthesized natural form (NAT) of the endogenous analyte are added to each sample, a response curve is generated, and the endogenous concentration is determined at the x-intercept. Internal NAT-addition, however, requires multiple analyses of each sample, resulting in increased sample consumption and analysis time. To compare the following three methods, an MRM assay for 34 high-to-moderate abundance human plasma proteins is used: classical internal SIS-addition, internal NAT-addition, and external NAT-addition-generated in buffer using NAT and SIS peptides. Using endogenous-free chicken plasma, the accuracy is also evaluated. The internal NAT-addition outperforms the other two in precision and accuracy. However, the curves derived by internal vs. external NAT-addition differ by only ≈3.8% in slope, providing comparable accuracies and precision with good CV values. While the internal NAT-addition method may be "ideal", this new external NAT-addition can be used to determine the concentration of high-to-moderate abundance endogenous plasma proteins, providing a robust and cost-effective alternative for clinical analyses or other high-throughput applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
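
    The x-intercept logic of the NAT-addition response curve reduces to a linear regression; a toy sketch with hypothetical readings (not the paper's data):

    ```python
    import numpy as np

    # Hypothetical spiked amounts of synthetic NAT peptide (fmol) and MRM responses.
    added = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
    response = np.array([15.2, 25.1, 35.3, 54.8, 95.0])

    slope, intercept = np.polyfit(added, response, 1)

    # The response line crosses zero at a negative added amount; its magnitude
    # is the endogenous amount already present in the sample.
    endogenous = intercept / slope
    print(f"estimated endogenous amount ~ {endogenous:.1f} fmol")
    ```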

  6. INTEGRATION OF SATELLITE RAINFALL DATA AND CURVE NUMBER METHOD FOR RUNOFF ESTIMATION UNDER SEMI-ARID WADI SYSTEM

    Directory of Open Access Journals (Sweden)

    E. O. Adam

    2017-11-01

    Full Text Available Arid and semi-arid catchments in drylands generally require especially effective management, as scarcity of the resources and information needed to support studies and investigations is their common characteristic. Hydrology is one of the most important elements in the management of resources, and a deep understanding of hydrological responses is the key to better planning and land management. Surface runoff quantification of such ungauged semi-arid catchments is considered among the important challenges. The 7586 km² catchment under investigation is located in a semi-arid region in central Sudan, where the mean annual rainfall is around 250 mm and represents the ultimate source for water supply. The objective is to parameterize the hydrological characteristics of the catchment and estimate surface runoff using methods and hydrological models that suit the nature of such ungauged catchments with scarce geospatial information. In order to produce spatial runoff estimates, satellite rainfall data were used. Remote sensing and GIS were incorporated in the investigations and in the generation of landcover and soil information. A five-day rainfall event (50.2 mm) was used for the SCS-CN model, which is considered suitable for this catchment, as the SCS curve number (CN) method is widely used for estimating infiltration characteristics depending on landcover and soil properties. Runoff depths of 3.6, 15.7 and 29.7 mm were estimated for the three different Antecedent Moisture Conditions (AMC-I, AMC-II and AMC-III). The estimated runoff depths for AMC-II and AMC-III indicate the possibility of having small artificial surface reservoirs that could provide water for domestic and small household agricultural use.
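
    A minimal sketch of the SCS-CN calculation in SI units, using the common AMC conversion formulas (e.g., Chow et al.); the curve number below is illustrative, not the one derived for this catchment:

    ```python
    # SCS-CN runoff depth (mm) for a given rainfall depth and curve number.
    def runoff_scs_cn(p_mm, cn):
        s = 25400.0 / cn - 254.0        # potential maximum retention (mm)
        ia = 0.2 * s                    # initial abstraction (standard 20% of S)
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    # Common conversions of an average-condition CN to dry/wet conditions.
    def cn_for_amc(cn2, amc):
        if amc == 1:   # AMC-I, dry
            return 4.2 * cn2 / (10.0 - 0.058 * cn2)
        if amc == 3:   # AMC-III, wet
            return 23.0 * cn2 / (10.0 + 0.13 * cn2)
        return cn2     # AMC-II, average

    p_event = 50.2                      # five-day event rainfall (mm)
    cn2 = 80.0                          # hypothetical composite curve number
    for amc in (1, 2, 3):
        q = runoff_scs_cn(p_event, cn_for_amc(cn2, amc))
        print(f"AMC-{amc}: runoff depth ~ {q:.1f} mm")
    ```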

  7. Use of the cumulative sum method (CUSUM) to assess the learning curves of ultrasound-guided continuous femoral nerve block.

    Science.gov (United States)

    Kollmann-Camaiora, A; Brogly, N; Alsina, E; Gilsanz, F

    2017-10-01

    Although ultrasound is a basic competence for anaesthesia residents (AR), few data are available on the learning process. This prospective observational study aims to assess the learning process of ultrasound-guided continuous femoral nerve block and to determine the number of procedures a resident needs to perform in order to reach proficiency, using the cumulative sum (CUSUM) method. We recruited 19 AR without previous experience. Learning curves were constructed using the CUSUM method for ultrasound-guided continuous femoral nerve block, considering two success criteria: a decrease in pain score of >2 on a [0-10] scale after 15 minutes, and the time required to perform the block. We analysed data from 17 AR for a total of 237 ultrasound-guided continuous femoral nerve blocks. 8/17 AR became proficient for pain relief; however, all the AR who did more than 12 blocks (8/8) became proficient. As for performance time, 5/17 AR achieved the objective of 12 minutes; however, all the AR who did more than 20 blocks (4/4) achieved it. The number of procedures needed to achieve proficiency seems to be 12; however, it takes more procedures to reduce performance time. The CUSUM methodology could be useful in training programs to allow early interventions in case of repeated failures and to develop competence-based curricula. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.
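
    A minimal sketch of a CUSUM learning curve for binary outcomes, using the standard log-likelihood-ratio increments; the failure rates, error levels, and outcome sequence below are hypothetical, not taken from this study:

    ```python
    import numpy as np

    # Acceptable (p0) and unacceptable (p1) failure rates, type I/II error rates.
    p0, p1, alpha, beta = 0.20, 0.40, 0.10, 0.10

    P = np.log(p1 / p0)
    Q = np.log((1 - p0) / (1 - p1))
    s = Q / (P + Q)                            # per-attempt reference level
    h = np.log((1 - beta) / alpha) / (P + Q)   # decision limit

    # Hypothetical block outcomes for one trainee: 1 = failure, 0 = success.
    outcomes = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]

    cusum, trace = 0.0, []
    for x in outcomes:
        cusum = max(0.0, cusum + (x - s))  # failures add (1 - s), successes subtract s
        trace.append(cusum)

    print(f"h = {h:.2f}, trace = {[round(v, 2) for v in trace]}")
    # A curve that flattens and stays below h suggests the trainee is proficient.
    ```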

  8. A new curved fault model and method development for asperities of the 1703 Genroku and 1923 Kanto earthquakes

    Science.gov (United States)

    Kobayashi, R.; Koketsu, K.

    2008-12-01

    Great earthquakes along the Sagami trough, where the Philippine Sea slab is subducting, have repeatedly occurred. The 1703 Genroku and 1923 (Taisho) Kanto earthquakes (M 8.2 and M 7.9, respectively) are known as typical ones and caused severe damage in the metropolitan area. The recurrence periods of Genroku- and Taisho-type earthquakes inferred from studies of wave-cut terraces are about 200-400 and 2000 years, respectively (e.g., Earthquake Research Committee, 2004). We have inferred the source process of the 1923 Kanto earthquake from geodetic, teleseismic, and strong motion data (Kobayashi and Koketsu, 2005). Two asperities of the 1923 Kanto earthquake are located around the western part of Kanagawa prefecture (the base of the Izu peninsula) and around the Miura peninsula. After we adopted an updated fault plane model, based on a recent model of the Philippine Sea slab, the asperity around the Miura peninsula moved to the north (Sato et al., 2005). We have also investigated the slip distribution of the 1703 Genroku earthquake. We used crustal uplift and subsidence data investigated by Shishikura (2003) and inferred the slip distribution using the same fault geometry as for the 1923 Kanto earthquake. The slip peak of 16 m is located in the southern part of the Boso peninsula. The shape of the upper surface of the Philippine Sea slab is important for constraining the extent of the asperities. Sato et al. (2005) presented the shape in the inland part, but with less information for the oceanic part except for the Tokyo bay. Kimura (2006) and Takeda et al. (2007) presented the shape in the oceanic part. In this study, we compiled these slab models and planned to reanalyze the slip distributions of the 1703 and 1923 earthquakes. We developed a new curved fault plane on the plate boundary between the Philippine Sea slab and the inland plate. The curved fault plane was divided into 56 triangular subfaults. Point sources for the Green's function calculations are located at centroids

  9. Determination of Impurities in Aluminum Alloy by INAA Single Comparator Method (K0-Standardization Method)

    International Nuclear Information System (INIS)

    Sarheel, A.; Khamis, I.; Somel, N.

    2007-01-01

    Multielement determination by k0-based INAA using the k0-IAEA program has been performed at the Syrian Atomic Energy Commission on alloys. Concentrations of Cu, Zn, Fe, Ni, Sn and Ti, in addition to aluminum, were determined in an aluminum alloy, and Ni, Cr and Mo were determined in dental alloys using the INAA k0-standardization method. Al-0.1%Au, Ni and Zn certified reference materials were analyzed to assess the suitability and accuracy of the method. Elements were determined in reference materials and samples after short and long irradiations, according to element half-lives.

  10. Standard test method for distribution coefficients of inorganic species by the batch method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of distribution coefficients of chemical species to quantify uptake onto solid materials by a batch sorption technique. It is a laboratory method primarily intended to assess sorption of dissolved ionic species subject to migration through pores and interstices of site specific geomedia. It may also be applied to other materials such as manufactured adsorption media and construction materials. Application of the results to long-term field behavior is not addressed in this method. Distribution coefficients for radionuclides in selected geomedia are commonly determined for the purpose of assessing potential migratory behavior of contaminants in the subsurface of contaminated sites and waste disposal facilities. This test method is also applicable to studies for parametric studies of the variables and mechanisms which contribute to the measured distribution coefficient. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement a...

  11. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the number of knots ultimately required. This paper provides a novel initial knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, namely the chord length (arc length) and bending degree (curvature), contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with an intensively uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots required. (paper)
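
    A simplified sketch of the idea for an explicit curve y = f(x), assuming scipy's LSQUnivariateSpline and a crude chord-length-plus-curvature feature measure in place of the paper's exact feature integral:

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # Synthetic noisy sampling points along y = f(x).
    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 10.0, 500)
    y = np.sin(x) + 0.05 * x**2 + rng.normal(0.0, 0.01, x.size)

    # Feature measure: chord length plus a curvature-like second-difference term.
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    chord = np.hypot(np.diff(x), np.diff(y))
    bend = np.abs(d2y[1:])
    feature = chord / chord.sum() + bend / bend.sum()
    F = np.concatenate([[0.0], np.cumsum(feature)])   # monotone feature integral

    # Place interior knots at constant increments of the feature integral.
    n_knots = 12
    targets = np.linspace(0.0, F[-1], n_knots + 2)[1:-1]
    knots = np.interp(targets, F, x)

    spline = LSQUnivariateSpline(x, y, knots)
    rms = np.sqrt(np.mean((spline(x) - y) ** 2))
    print(f"{len(knots)} interior knots, RMS residual = {rms:.4f}")
    ```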

  12. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural network based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes with several...

  13. 29 CFR 1630.7 - Standards, criteria, or methods of administration.

    Science.gov (United States)

    2010-07-01

    ... Standards, criteria, or methods of administration. It is unlawful for a covered entity to use standards, criteria, or methods of administration, which are not job-related and consistent with business necessity...

  14. A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making

    Directory of Open Access Journals (Sweden)

    Vickers Andrew

    2010-09-01

    Full Text Available Abstract Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. Methods First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. Results We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed it in terms of probability of disease. Conclusions We present a novel method for eliciting decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may

  15. Standard test method for determining atmospheric chloride deposition rate by wet candle method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers a wet candle device and its use in measuring atmospheric chloride deposition (amount of chloride salts deposited from the atmosphere on a given area per unit time). 1.2 Data on atmospheric chloride deposition can be useful in classifying the corrosivity of a specific area, such as an atmospheric test site. Caution must be exercised, however, to take into consideration the season because airborne chlorides vary widely between seasons. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  16. A regret theory approach to decision curve analysis: a novel method for eliciting decision makers' preferences and decision-making.

    Science.gov (United States)

    Tsalatsanis, Athanasios; Hozo, Iztok; Vickers, Andrew; Djulbegovic, Benjamin

    2010-09-16

    Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed it in terms of probability of disease. We present a novel method for eliciting decision maker's preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision-maker, particularly
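
    Since the Net Expected Regret Difference is shown to be equivalent to net benefit, the quantity being traded off can be illustrated with the standard net-benefit formula NB(pt) = TP/n - (FP/n) * pt/(1 - pt); the outcomes and risks below are simulated, not from the paper:

    ```python
    import numpy as np

    def net_benefit(y_true, risk, pt):
        """Net benefit of 'treat if predicted risk >= pt' at threshold pt."""
        y_true = np.asarray(y_true)
        treat = np.asarray(risk) >= pt
        n = y_true.size
        tp = np.sum(treat & (y_true == 1)) / n
        fp = np.sum(treat & (y_true == 0)) / n
        return tp - fp * pt / (1.0 - pt)

    # Hypothetical outcomes and model-predicted risks.
    rng = np.random.default_rng(4)
    y = rng.integers(0, 2, 1000)
    risk = np.clip(0.5 * y + rng.normal(0.3, 0.2, 1000), 0.0, 1.0)

    for pt in (0.1, 0.3, 0.5):
        nb_model = net_benefit(y, risk, pt)
        nb_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)   # "treat all" reference
        print(f"pt={pt:.1f}: model NB={nb_model:.3f}, treat-all NB={nb_all:.3f}")
    ```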

  17. Water exchange method for colonoscopy: learning curve of an experienced colonoscopist in a U.S. community practice setting.

    Science.gov (United States)

    Fischer, Leonard S; Lumsden, Antoinette; Leung, Felix W

    2012-07-01

    Water exchange colonoscopy has been reported to reduce examination discomfort and to provide salvage cleansing in unsedated or minimally sedated patients. The prolonged insertion time and perceived difficulty of insertion associated with water exchange have been cited as a barrier to its widespread use. To assess the feasibility of learning and using the water exchange method of colonoscopy in a U.S. community practice setting, a quality improvement program was conducted in nonacademic community endoscopy centers on patients undergoing sedated diagnostic, surveillance, or screening colonoscopy. After direct coaching by a knowledgeable trainer, an experienced colonoscopist initiated colonoscopy using the water method. Whenever >5 min elapsed without advancing the colonoscope, conversion to air insufflation was made to ensure timely completion of the examination. The main outcome measurement was the water method intention-to-treat (ITT) cecal intubation rate (CIR). Female patients had a significantly higher rate of past abdominal surgery and a significantly lower ITT CIR. The ITT CIR showed a progressive increase over time in both males and females, to 85-90%. Mean insertion time was maintained at 9 to 10 min. The overall CIR was 99%. Use of water exchange did not preclude cecal intubation upon conversion to usual air insufflation in sedated patients examined by an experienced colonoscopist. With practice, the ITT CIR increased over time in both male and female patients. Larger volumes of water exchanged were associated with higher ITT CIR and better quality scores of bowel preparation. The data suggest that learning water exchange by a busy colonoscopist in a community practice setting is feasible and that outcomes conform to accepted quality standards.

  18. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  19. Comparative evaluation of different methods of setting hygienic standards

    International Nuclear Information System (INIS)

    Ramzaev, P.V.; Rodionova, L.F.; Mashneva, N.I.

    1978-01-01

    Long-term experiments were carried out on white mice and rats to study the relative importance of various procedures used in setting hygienic standards for exposure to adverse factors. A variety of radionuclides and chemical substances were tested and the sensitivities to them of various indices of the bodily state were determined. For each index, statistically significant minimal effective concentrations of substances were established

  20. A method for developing standard patient education program

    DEFF Research Database (Denmark)

    Lura, Carolina Bryne; Hauch, Sophie Misser Pallesgaard; Gøeg, Kirstine Rosenbeck

    2018-01-01

    The aim of the study was to suggest a method for developing standard digital patient education programs for patients in self-administration of blood samples drawn from CVC. The Design Science Research Paradigm was used to develop a digital patient education program, called PAVIOSY, to increase patient safety during execution of the blood sample collection procedure... To ensure correct clinical content of the educational patient system, health professionals must be engaged early in the content development and design phase.

  1. The Impact of Student Teaching Experience on Pre-Service Teachers' Readiness for Technology Integration: A Mixed Methods Study with Growth Curve Modeling

    Science.gov (United States)

    Sun, Yan; Strobel, Johannes; Newby, Timothy J.

    2017-01-01

    Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…

  2. Standard test methods for arsenic in uranium hexafluoride

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 These test methods are applicable to the determination of total arsenic in uranium hexafluoride (UF6) by atomic absorption spectrometry. Two test methods are given: Test Method A—Arsine Generation-Atomic Absorption (Sections 5-10), and Test Method B—Graphite Furnace Atomic Absorption (Appendix X1). 1.2 The test methods are equivalent. The limit of detection for each test method is 0.1 μg As/g U when using a sample containing 0.5 to 1.0 g U. Test Method B does not have the complete collection details for precision and bias data thus the method appears as an appendix. 1.3 Test Method A covers the measurement of arsenic in uranyl fluoride (UO2F2) solutions by converting arsenic to arsine and measuring the arsine vapor by flame atomic absorption spectrometry. 1.4 Test Method B utilizes a solvent extraction to remove the uranium from the UO2F2 solution prior to measurement of the arsenic by graphite furnace atomic absorption spectrometry. 1.5 Both insoluble and soluble arsenic are measured when UF6 is...

  3. Robust numerical methods for boundary-layer equations for a model problem of flow over a symmetric curved surface

    NARCIS (Netherlands)

    A.R. Ansari; B. Hossain; B. Koren (Barry); G.I. Shishkin (Gregori)

    2007-01-01

    We investigate the model problem of flow of a viscous incompressible fluid past a symmetric curved surface when the flow is parallel to its axis. This problem is known to exhibit boundary layers. As the problem does not have solutions in closed form, it is modelled by boundary-layer

  4. Carbon Lorenz Curves

    NARCIS (Netherlands)

    Groot, L.F.M.|info:eu-repo/dai/nl/073642398

    2008-01-01

    The purpose of this paper is twofold. First, it shows that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across
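
    A minimal sketch of applying these tools to emissions, assuming hypothetical per-capita figures; the Gini index is computed as one minus twice the area under the Lorenz curve:

    ```python
    import numpy as np

    def lorenz_gini(emissions):
        """Lorenz curve points and Gini index for per-capita emissions."""
        x = np.sort(np.asarray(emissions, dtype=float))
        lorenz = np.concatenate([[0.0], np.cumsum(x) / x.sum()])  # L(0)=0 ... L(1)=1
        pop = np.linspace(0.0, 1.0, lorenz.size)                  # cumulative population share
        area = np.sum((lorenz[1:] + lorenz[:-1]) / 2.0 * np.diff(pop))
        return pop, lorenz, 1.0 - 2.0 * area

    # Hypothetical per-capita CO2 emissions (t) for a handful of countries.
    pop, lorenz, gini = lorenz_gini([0.3, 1.1, 2.5, 4.0, 6.8, 9.5, 16.0])
    print(f"Gini index of per-capita emissions ~ {gini:.2f}")
    ```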

  5. Standardization of Tc-99 by two methods and participation in the CCRI(II)-K2.Tc-99 comparison.

    Science.gov (United States)

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished within the participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time in IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracer. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracer method was a standard ⁶⁰Co solution. The sources were prepared from the mixed ⁶⁰Co + ⁹⁹Tc solution, and a general extrapolation curve of the type N_β(⁹⁹Tc)/M(⁹⁹Tc) = f[1 − ε(⁶⁰Co)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  6. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method, belonging to the differential category, for determining end points from potentiometric titration curves is presented. It uses a preprocessing step to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. End points calculated from selected experimental titration curves are also compared between the new method and methods of the equivalence point category, such as Gran or Fortuin.
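
    A sketch of the inverse parabolic interpolation step on a synthetic titration curve: estimate the first derivative at midpoints, then place the end point at the vertex of the parabola through the three points around the derivative maximum:

    ```python
    import numpy as np

    # Hypothetical titration data: titrant volume (mL) vs. potential (mV).
    v = np.linspace(0.0, 20.0, 81)
    e = 350.0 + 120.0 * np.tanh((v - 10.37) / 0.6)   # sigmoid, end point near 10.37 mL

    # First-derivative estimates at segment midpoints.
    dv = np.diff(v)
    d = np.diff(e) / dv
    vm = v[:-1] + dv / 2.0

    # Vertex of the parabola through the three points around the derivative maximum.
    i = np.argmax(d)
    x0, x1, x2 = vm[i - 1], vm[i], vm[i + 1]
    y0, y1, y2 = d[i - 1], d[i], d[i + 1]
    num = (x1 - x0) ** 2 * (y1 - y2) - (x1 - x2) ** 2 * (y1 - y0)
    den = (x1 - x0) * (y1 - y2) - (x1 - x2) * (y1 - y0)
    end_point = x1 - 0.5 * num / den
    print(f"estimated end point ~ {end_point:.3f} mL")
    ```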

  7. Standard test method for creep-fatigue testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the determination of mechanical properties pertaining to creep-fatigue deformation or crack formation in nominally homogeneous materials, or both by the use of test specimens subjected to uniaxial forces under isothermal conditions. It concerns fatigue testing at strain rates or with cycles involving sufficiently long hold times to be responsible for the cyclic deformation response and cycles to crack formation to be affected by creep (and oxidation). It is intended as a test method for fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. The cyclic conditions responsible for creep-fatigue deformation and cracking vary with material and with temperature for a given material. 1.2 The use of this test method is limited to specimens and does not cover testing of full-scale components, structures, or consumer products. 1.3 This test method is primarily ...

  8. Standard Test Methods for Constituent Content of Composite Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 These test methods determine the constituent content of composite materials by one of two approaches. Method I physically removes the matrix by digestion or ignition by one of seven procedures, leaving the reinforcement essentially unaffected and thus allowing calculation of reinforcement or matrix content (by weight or volume) as well as percent void volume. Method II, applicable only to laminate materials of known fiber areal weight, calculates reinforcement or matrix content (by weight or volume), and the cured ply thickness, based on the measured thickness of the laminate. Method II is not applicable to the measurement of void volume. 1.1.1 These test methods are primarily intended for two-part composite material systems. However, special provisions can be made to extend these test methods to filled material systems with more than two constituents, though not all test results can be determined in every case. 1.1.2 The procedures contained within have been designed to be particularly effective for ce...

  9. Primary standardization of C-14 by means of CIEMAT/NIST, TDCR and 4πβ-γ methods

    International Nuclear Information System (INIS)

    Kuznetsova, Maria

    2016-01-01

    In this work, the primary standardization of a ¹⁴C solution, which emits beta particles with maximum energy 156 keV, was made by means of three different methods: the CIEMAT/NIST and TDCR (Triple to Double Coincidence Ratio) methods in liquid scintillation systems, and the tracing method in the 4πβ-γ coincidence system. A TRICARB LSC (Liquid Scintillation Counting) system, equipped with two photomultiplier tubes, was used for the CIEMAT/NIST method, using a ³H standard that emits beta particles with maximum energy of 18.7 keV as efficiency tracer. A HIDEX 300SL LSC system, equipped with three photomultiplier tubes, was used for the TDCR method. Samples of ¹⁴C and ³H for the liquid scintillation systems were prepared using three commercial scintillation cocktails, UltimaGold, Optiphase Hisafe3 and InstaGel-Plus, in order to compare their performance in the measurements. All samples were prepared with 15 mL of scintillator, in glass vials with low potassium concentration. Known aliquots of the radioactive solution were dropped onto the scintillation cocktails. In order to obtain the quenching parameter curve, a nitromethane carrier solution and 1 mL of distilled water were used. For measurements in the 4πβ-γ system, ⁶⁰Co was used as the beta-gamma emitter. The software coincidence system (SCS) was applied and the beta efficiency was changed by electronic discrimination. The behavior of the extrapolation curve was predicted with the code ESQUEMA, using the Monte Carlo technique. The ¹⁴C activities obtained by the three methods applied in this work were compared, and the results were in agreement within the experimental uncertainty. (author)

  10. Standard Method for Analyzing Gases in Titanium and Titanium Alloys. Standard Method for the Chemical Analysis of Titanium Alloys.

    Science.gov (United States)

    1982-10-28

    form a non-soluble complex. After filtering and burning, the non-pure molybdenum trioxide is weighed. Ammonia water is used to dissolve the molybdenum...niobium and tantalum should use the methyl alcohol distillation - curcumin absorption luminosity method for determination. II. The Methyl Alcohol...Distillation - Curcumin Absorption Luminosity Method 1. Summary of Method In a phosphorus sulfate medium, boron and methyl alcohol produce methyl borate

  11. Standard guide for three methods of assessing buried steel tanks

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1998-01-01

    1.1 This guide covers procedures to be implemented prior to the application of cathodic protection for evaluating the suitability of a tank for upgrading by cathodic protection alone. 1.2 Three procedures are described and identified as Methods A, B, and C. 1.2.1 Method A—Noninvasive with primary emphasis on statistical and electrochemical analysis of external site environment corrosion data. 1.2.2 Method B—Invasive ultrasonic thickness testing with external corrosion evaluation. 1.2.3 Method C—Invasive permanently recorded visual inspection and evaluation including external corrosion assessment. 1.3 This guide presents the methodology and the procedures utilizing site and tank specific data for determining a tank's condition and the suitability for such tanks to be upgraded with cathodic protection. 1.4 The tank's condition shall be assessed using Method A, B, or C. Prior to assessing the tank, a preliminary site survey shall be performed pursuant to Section 8 and the tank shall be tightness test...

  12. Methods for Prediction of Steel Temperature Curve in the Whole Process of a Localized Fire in Large Spaces

    Directory of Open Access Journals (Sweden)

    Zhang Guowei

    2014-01-01

    Full Text Available Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified to a t² fire with a fire growth coefficient of 0.0346 kW/s². FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1 500 m² to 10 000 m², based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in a localized fire is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net heat flux from the flame to the steel. The steel temperature curve for the whole process of a localized fire can thus be accurately predicted. These conclusions could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
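
    As a concrete illustration of the t² design-fire model used above, the sketch below computes the heat release rate Q(t) = αt² with the reported growth coefficient α = 0.0346 kW/s², capped at an assumed peak of 2 MW (the lower bound of the 2-25 MW range studied); the cap value and time grid are illustrative, not taken from the paper.

```python
import numpy as np

# t-squared design fire: Q(t) = alpha * t^2, capped at a steady peak value.
# alpha = 0.0346 kW/s^2 is the growth coefficient reported in the abstract;
# q_max = 2000 kW (2 MW) is an assumed cap for illustration.
def heat_release_rate(t, alpha=0.0346, q_max=2000.0):
    """Heat release rate in kW at time t (seconds) for a capped t^2 fire."""
    return np.minimum(alpha * t ** 2, q_max)

t = np.arange(0.0, 600.0, 10.0)        # time grid, s
q = heat_release_rate(t)               # kW
t_peak = np.sqrt(2000.0 / 0.0346)      # ~240 s to reach the 2 MW cap
```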

  13. Trace element analysis of water using radioisotope induced X-ray fluorescence (Cd-109) and a preconcentration-internal standard method

    International Nuclear Information System (INIS)

    Alvarez, M.; Cano, W.

    1986-10-01

    Radioisotope-induced X-ray fluorescence using Cd-109 was used for the determination of iron, nickel, copper, zinc, lead and mercury in water. These metals were concentrated by precipitation with the chelating agent APDC. The precipitate formed was filtered using a membrane filter. Cobalt was added as an internal standard. Minimum detection limits, sensitivities and calibration curve linearities were obtained to establish the limits of the method. The usefulness of the method is illustrated by analysing synthetic standard solutions. As an application, analytical results are given for water from a highly polluted river area. (Author)

  14. Measuring fuel moisture content in Alaska: standard methods and procedures.

    Science.gov (United States)

    Rodney A. Norum; Melanie. Miller

    1984-01-01

    Methods and procedures are given for collecting and processing living and dead plant materials for the purpose of determining their water content. Wild-land fuels in Alaska are emphasized, but the methodology is applicable elsewhere. Guides are given for determining the number of samples needed to attain a chosen precision. Detailed procedures are presented for...

  15. Deformation of two-phase aggregates using standard numerical methods

    Science.gov (United States)

    Duretz, Thibault; Yamato, Philippe; Schmalholz, Stefan M.

    2013-04-01

    Geodynamic problems often involve the large deformation of material encompassing material boundaries. In geophysical fluids, such boundaries often coincide with a discontinuity in the viscosity (or effective viscosity) field and consequently in the pressure field. Here, we employ popular implementations of the finite difference and finite element methods for solving viscous flow problems. On one hand, we implemented a finite difference method coupled with a Lagrangian marker-in-cell technique to represent the deforming fluid. Thanks to its Eulerian nature, this method has limited geometric flexibility but is characterized by a light and stable discretization. On the other hand, we employ the Lagrangian finite element method, which offers full geometric flexibility at the cost of a relatively heavier discretization. In order to test the accuracy of the finite difference scheme, we ran large-strain simple shear deformation of aggregates containing either weak or strong circular inclusions (10⁶ viscosity ratio). The results, obtained for different grid resolutions, are compared to Lagrangian finite element results, which are considered the reference solution. The comparison is then used to establish up to which strain finite difference simulations can be run, given the nature of the inclusions (dimensions, viscosity) and the resolution of the Eulerian mesh.

  16. Standard Test Method for Contamination Outgassing Characteristics of Spacecraft Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers a technique for generating data to characterize the kinetics of the release of outgassing products from materials. This technique will determine both the total mass flux evolved by a material when exposed to a vacuum environment and the deposition of this flux on surfaces held at various specified temperatures. 1.2 This test method describes the test apparatus and related operating procedures for evaluating the total mass flux that is evolved from a material being subjected to temperatures that are between 298 and 398 K. Pressures external to the sample effusion cell are less than 7 × 10⁻³ Pa (5 × 10⁻⁵ torr). Deposition rates are measured during material outgassing tests. A test procedure for collecting data and a test method for processing and presenting the collected data are included. 1.3 This test method can be used to produce the data necessary to support mathematical models used for the prediction of molecular contaminant generation, migration, and deposition. 1.4 Al...

  17. Standard methods for research on apis mellifera gut symbionts

    Science.gov (United States)

    Gut microbes can play an important role in digestion, disease resistance, and the general health of animals, but little is known about the biology of gut symbionts in Apis mellifera. This paper is part of a series on honey bee research methods, providing protocols for studying gut symbionts. We desc...

  18. Standard methods for virus research in Apis mellifera

    NARCIS (Netherlands)

    Miranda, J.R.; Bailey, L.; Ball, B.V.; Blanchard, P.; Budge, G.E.; Chejanovsky, N.; Chen, Y.P.; Gauthier, L.; Genersch, E.; Graaf, de D.C.; Ribiere, M.; Ryabov, E.; Smet, de L.; Steen, van der J.J.M.

    2013-01-01

    Honey bee virus research is an enormously broad area, ranging from subcellular molecular biology through physiology and behaviour, to individual and colony-level symptoms, transmission and epidemiology. The research methods used in virology are therefore equally diverse. This article covers those

  19. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

    International Nuclear Information System (INIS)

    Jeong, W. D.; Kang, D. I.; Jeong, K. S.

    2003-01-01

    This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA) and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far for analyzing diagnosis error probability are suggested as part of the standard method. A comprehensive application study was also performed to evaluate the suitability of the proposed rules

  20. Standard epidemiological methods to understand and improve Apis mellifera health

    OpenAIRE

    Lengerich, Eugene; Spleen, Angela; Dainat, Benjamin; Cresswell, James; Baylis, Kathy; Nguyen, Bach Kim; Soroker, Victoria; Underwood, Robyn; Human, Hannelie; Le Conte, Yves; Saegerman, Claude

    2013-01-01

    In this paper, we describe the use of epidemiological methods to understand and reduce honey bee morbidity and mortality. Essential terms are presented and defined and we also give examples for their use. Defining such terms as disease, population, sensitivity, and specificity, provides a framework for epidemiological comparisons. The term population, in particular, is quite complex for an organism like the honey bee because one can view “epidemiological unit” as individual bees, colonies, ap...

  1. Standard Test Methods for Determining Mechanical Integrity of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 These test methods cover procedures for determining the ability of photovoltaic modules to withstand the mechanical loads, stresses and deflections used to simulate, on an accelerated basis, high wind conditions, heavy snow and ice accumulation, and non-planar installation effects. 1.1.1 A static load test to 2400 Pa is used to simulate wind loads on both module surfaces. 1.1.2 A static load test to 5400 Pa is used to simulate heavy snow and ice accumulation on the module front surface. 1.1.3 A twist test is used to simulate the non-planar mounting of a photovoltaic module by subjecting it to a twist angle of 1.2°. 1.1.4 A cyclic load test of 10 000 cycles duration and peak loading to 1440 Pa is used to simulate dynamic wind or other flexural loading. Such loading might occur during shipment or after installation at a particular location. 1.2 These test methods define photovoltaic test specimens and mounting methods, and specify parameters that must be recorded and reported. 1.3 Any individual mech...

  2. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    Science.gov (United States)

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers conducting economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods, 1) the least squares method and 2) the graphical method; and two recently proposed methods, by 3) Hoyle and Henley and 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their ability to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, larger biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty estimate compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their marked overestimation. When the Weibull distribution was used for the fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.
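
    To make the traditional least-squares approach concrete, here is a minimal sketch (not the Hoyle and Henley or Guyot et al. algorithms, which additionally exploit numbers at risk): a Weibull survivor function is fitted to hypothetical points digitized from a published Kaplan-Meier curve, and the mean survival time follows from the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

# Hypothetical (time, survival probability) pairs read off a published
# Kaplan-Meier curve; invented for illustration.
times = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
surv  = np.array([0.95, 0.85, 0.65, 0.48, 0.35, 0.26])

def weibull_survival(t, scale, shape):
    """Weibull survivor function S(t) = exp(-(t/scale)^shape)."""
    return np.exp(-(t / scale) ** shape)

# Least-squares fit of the parametric curve to the digitized points.
(scale, shape), _ = curve_fit(weibull_survival, times, surv, p0=(3.0, 1.0))

# Mean of a Weibull distribution: scale * Gamma(1 + 1/shape).
mean_survival = scale * gamma(1.0 + 1.0 / shape)
```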

  3. TL glow ratios at different temperature intervals of integration in thermoluminescence method. Comparison of Japanese standard (MHLW notified) method with CEN standard methods

    International Nuclear Information System (INIS)

    Todoriki, Setsuko; Saito, Kimie; Tsujimoto, Yuka

    2008-01-01

    The effect of the integration temperature intervals of TL intensities on the TL glow ratio was examined by comparing the notified method of the Ministry of Health, Labour and Welfare (MHLW method) with EN 1788. Two kinds of un-irradiated geological standard rock and three kinds of spices (black pepper, turmeric, and oregano) irradiated at 0.3 kGy or 1.0 kGy were subjected to TL analysis. Although the TL glow ratio exceeded 0.1 in the andesite according to the calculation of the MHLW notified method (integration interval: 70-490 °C), the maxima of the first glow curve were observed at 300 °C or more, which was attributed to the influence of natural radioactivity and could be distinguished from food irradiation. When the integration interval was set to 166-227 °C according to EN 1788, the TL glow ratios became remarkably smaller than 0.1, and the evaluation of the un-irradiated sample became clearer. For spices, the TL glow ratios by the MHLW notified method fell below 0.1 in un-irradiated samples and exceeded 0.1 in irradiated ones. Moreover, Glow 1 maximum temperatures of the irradiated samples were observed in the range of 168-196 °C, and those of un-irradiated samples were 258 °C or more. Therefore, all samples were correctly judged by the criteria of the MHLW method. However, with the temperature range of integration defined by EN 1788, the TL glow ratio of un-irradiated samples became remarkably small compared with that of the MHLW method, and the discrimination of irradiated from non-irradiated samples became clearer. (author)

  4. Business transactions and standards. Towards a system of concepts and a method for early problem identification in standard implementation projects

    NARCIS (Netherlands)

    Rukanova, B.D.

    2005-01-01

    To summarize, with respect to research question one we constructed a system of concepts, while in answer to research question two we proposed a method for applying this system of concepts in practice to identify potential problems in the early stages of standard implementation projects.

  5. Dose-response curve for blood exposed to gamma-neutron mixed field by conventional cytogenetic method

    International Nuclear Information System (INIS)

    Brandao, Jose Odinilson de C.; Souza, Priscilla L.G.; Santos, Joelan A.L.; Vilela, Eudice C.; Lima, Fabiana F.; Calixto, Merilane S.; Santos, Neide

    2009-01-01

    There is increasing concern that airline crew members (about one million worldwide) are exposed to measurable neutron doses. Historically, cytogenetic biodosimetry assays have been based on quantifying asymmetrical chromosome alterations (dicentrics, centric rings and acentric fragments) in mitogen-stimulated T-lymphocytes in their first mitosis after radiation exposure. Increased levels of chromosome damage in peripheral blood lymphocytes are a sensitive indicator of radiation exposure, and they are routinely exploited for assessing radiation absorbed dose after accidental or occupational exposure. Since radiological accidents are not common, not all nations consider it economically justified to maintain biodosimetry competence. However, dependable access to biological dosimetry capabilities is critical in the event of an accident. In this paper the dose-response curve was measured for the induction of chromosomal alterations in peripheral blood lymphocytes after chronic exposure in vitro to a neutron-gamma mixed field. Blood was obtained from one healthy donor and exposed to two neutron-gamma mixed fields from ²⁴¹AmBe sources (20 Ci) at the Neutron Calibration Laboratory (NCL-CRCN/NE-PE, Brazil). The evaluated absorbed doses were 0.2 Gy, 1.0 Gy and 2.5 Gy. Dicentric chromosomes were scored at metaphase following colcemid accumulation, and 1000 well-spread metaphase figures, stained with 5% Giemsa, were analyzed for the presence of dicentrics by two experienced scorers. Our preliminary results showed a linear dependence between radiation absorbed dose and dicentric chromosome frequency. The dose-response curve described in this paper will contribute to the construction of the calibration curve that will be used in our laboratory for biological dosimetry. (author)
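
    A minimal sketch of how such a calibration curve is typically fitted, assuming the linear dose-response reported for the mixed field; the dicentric yields below are invented for illustration, not the study's data.

```python
import numpy as np

doses  = np.array([0.2, 1.0, 2.5])     # absorbed dose, Gy (as in the study)
yields = np.array([0.01, 0.06, 0.16])  # dicentrics per cell (illustrative)

# Linear model Y = c + alpha * D, the usual form for high-LET exposures;
# np.polyfit returns [alpha, c] for a degree-1 fit.
alpha, c = np.polyfit(doses, yields, 1)

def estimate_dose(observed_yield):
    """Invert the calibration curve for biological dosimetry."""
    return (observed_yield - c) / alpha
```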

  6. Dose-response curve for blood exposed to gamma-neutron mixed field by conventional cytogenetic method

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Jose Odinilson de C.; Souza, Priscilla L.G.; Santos, Joelan A.L.; Vilela, Eudice C.; Lima, Fabiana F., E-mail: jodinilson@cnen.gov.b, E-mail: fflima@cnen.gov.b, E-mail: jasantos@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Calixto, Merilane S.; Santos, Neide, E-mail: santos_neide@yahoo.com.b [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Dept. de Genetica

    2009-07-01

    There is increasing concern that airline crew members (about one million worldwide) are exposed to measurable neutron doses. Historically, cytogenetic biodosimetry assays have been based on quantifying asymmetrical chromosome alterations (dicentrics, centric rings and acentric fragments) in mitogen-stimulated T-lymphocytes in their first mitosis after radiation exposure. Increased levels of chromosome damage in peripheral blood lymphocytes are a sensitive indicator of radiation exposure, and they are routinely exploited for assessing radiation absorbed dose after accidental or occupational exposure. Since radiological accidents are not common, not all nations consider it economically justified to maintain biodosimetry competence. However, dependable access to biological dosimetry capabilities is critical in the event of an accident. In this paper the dose-response curve was measured for the induction of chromosomal alterations in peripheral blood lymphocytes after chronic exposure in vitro to a neutron-gamma mixed field. Blood was obtained from one healthy donor and exposed to two neutron-gamma mixed fields from ²⁴¹AmBe sources (20 Ci) at the Neutron Calibration Laboratory (NCL-CRCN/NE-PE, Brazil). The evaluated absorbed doses were 0.2 Gy, 1.0 Gy and 2.5 Gy. Dicentric chromosomes were scored at metaphase following colcemid accumulation, and 1000 well-spread metaphase figures, stained with 5% Giemsa, were analyzed for the presence of dicentrics by two experienced scorers. Our preliminary results showed a linear dependence between radiation absorbed dose and dicentric chromosome frequency. The dose-response curve described in this paper will contribute to the construction of the calibration curve that will be used in our laboratory for biological dosimetry. (author)

  7. Standard test method for measurement of fatigue crack growth rates

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    1.1 This test method covers the determination of fatigue crack growth rates from near-threshold to Kmax controlled instability. Results are expressed in terms of the crack-tip stress-intensity factor range (ΔK), defined by the theory of linear elasticity. 1.2 Several different test procedures are provided, the optimum test procedure being primarily dependent on the magnitude of the fatigue crack growth rate to be measured. 1.3 Materials that can be tested by this test method are not limited by thickness or by strength so long as specimens are of sufficient thickness to preclude buckling and of sufficient planar size to remain predominantly elastic during testing. 1.4 A range of specimen sizes with proportional planar dimensions is provided, but size is variable to be adjusted for yield strength and applied force. Specimen thickness may be varied independent of planar size. 1.5 The details of the various specimens and test configurations are shown in Annex A1-Annex A3. Specimen configurations other than t...

  8. Standardized method for reproducing the sequential X-rays flap

    International Nuclear Information System (INIS)

    Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia

    2009-01-01

    A method is validated to standardize the taking, developing and analysis of bite-wing radiographs acquired sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions through time. A radiographic positioner (XCP®) was modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment relative to the XCP® ring and its reorientation during the sequential radiographic process. Sixteen subjects aged 4 to 40 years were studied, for a total of 32 registries. Two radiographs of the same block of teeth were taken sequentially for each subject, at least 30 minutes apart, before placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements along the X and Y axes were performed for both radiographs and then compared. The intraclass correlation index (ICI) showed statistically significant agreement between the measurements (mm) obtained on the X and Y axes for both sequential series of radiographs (p = 0.01). The measures of central tendency and dispersion showed that differences between the two measurements were typically nil (mode 0.000; S = 0.083 and 0.109) and that the probability of occurrence of differing values was lower than expected. (author)

  9. Standard test method for creep-fatigue crack growth testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of creep-fatigue crack growth properties of nominally homogeneous materials by use of pre-cracked compact type, C(T), test specimens subjected to uniaxial cyclic forces. It concerns fatigue cycling with sufficiently long loading/unloading rates or hold-times, or both, to cause creep deformation at the crack tip, with the creep deformation being responsible for enhanced crack growth per loading cycle. It is intended as a guide for creep-fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. Therefore, this method requires testing of at least two specimens that yield overlapping crack growth rate data. The cyclic conditions responsible for creep-fatigue deformation and enhanced crack growth vary with material and with temperature for a given material. The effects of environment such as time-dependent oxidation in enhancing the crack growth ra...

  10. Rapid Determination of Appropriate Source Models for Tsunami Early Warning using a Depth Dependent Rigidity Curve: Method and Numerical Tests

    Science.gov (United States)

    Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.

    2017-12-01

    Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean-bottom pressure gauges such as the NOAA DART system or the DONET and S-NET systems in Japan. However, for early warning of near-field tsunamis, it is essential to determine appropriate source models using seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate significantly large tsunamis. In this paper, we develop a technique to determine appropriate source models from which tsunami inundation along the coast can be numerically computed. The technique is tested on four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off Central America. In this study, fault parameters were estimated from the W-phase inversion, then the fault length and width were determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 × 10¹⁰ N/m². The tsunami numerical simulation was carried out and compared with the observed tsunami. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one. For the 2004 El Astillero earthquake, the computed tsunami was overestimated. In order to solve this problem, we constructed a depth-dependent rigidity curve, similar to that suggested by Bilek and Lay (1999). The curve, with a central depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using these new slip amounts, the tsunami numerical simulation was carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed ones. The tsunamis from the other three earthquakes were also reasonably well explained
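
    The slip calculation described here follows directly from the definition of the seismic moment; the sketch below contrasts the constant-rigidity slip with the slip implied by a lower rigidity at shallow depth. The fault dimensions and the low-rigidity value are placeholders, not the study's numbers.

```python
def moment_from_mw(mw):
    """Seismic moment M0 in N*m from moment magnitude: M0 = 10^(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def average_slip(mw, length_m, width_m, rigidity_pa):
    """Average fault slip D = M0 / (mu * L * W)."""
    return moment_from_mw(mw) / (rigidity_pa * length_m * width_m)

# Mw 7.7 event with the standard rigidity of 3.5e10 N/m^2 ...
slip_standard = average_slip(7.7, 100e3, 50e3, 3.5e10)
# ... versus a shallow, tsunami-earthquake-like rigidity (assumed 1e10 N/m^2),
# which yields a ~3.5x larger slip and hence a much larger computed tsunami.
slip_shallow = average_slip(7.7, 100e3, 50e3, 1.0e10)
```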

  11. Standard methods for sampling and sample preparation for gamma spectroscopy

    International Nuclear Information System (INIS)

    Taskaeva, M.; Taskaev, E.; Nikolov, P.

    1993-01-01

    The strategy for sampling and sample preparation is outlined: the necessary number of samples; analysis and treatment of the results received; the quantity of analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and decisions on the basis of the results received. This strategy was tested in gamma spectroscopic analysis of radionuclide contamination of the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their preparation with ASTM C 999. After preparation the samples were sealed hermetically and measured. (author)

  12. On the calculation of complete dissociation curves of closed-shell pseudo-one-dimensional systems via the complete active space method of increments

    Energy Technology Data Exchange (ETDEWEB)

    Fertitta, E.; Paulus, B. [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Barcza, G.; Legeza, Ö. [Strongly Correlated Systems “Lendület” Research Group, Wigner Research Centre for Physics, P.O. Box 49, Budapest (Hungary)

    2015-09-21

    The method of increments (MoI) has been employed using the complete active space formalism in order to calculate the dissociation curve of beryllium ring-shaped clusters Beₙ of different sizes. Benchmarks obtained through different quantum chemical methods, including the ab initio density matrix renormalization group, were used to verify the validity of the MoI truncation, which showed reliable behavior for the whole dissociation curve. Moreover, we investigated the size dependence of the correlation energy at different interatomic distances in order to extrapolate the values for the periodic chain and to discuss the transition from a metal-like to an insulator-like behavior of the wave function through quantum chemical considerations.

  13. Geometrically nonlinear dynamic analysis of doubly curved isotropic shells resting on elastic foundation by a combination of harmonic differential quadrature-finite difference methods

    International Nuclear Information System (INIS)

    Civalek, Oemer

    2005-01-01

    The nonlinear dynamic response of doubly curved shallow shells resting on a Winkler-Pasternak elastic foundation has been studied for step and sinusoidal loadings. Dynamic analogues of von Kármán-Donnell-type shell equations are used. Clamped immovable and simply supported immovable boundary conditions are considered. The governing nonlinear partial differential equations of the shell are discretized in the space and time domains using the harmonic differential quadrature (HDQ) and finite difference (FD) methods, respectively. The accuracy of the proposed HDQ-FD coupled methodology is demonstrated by numerical examples. The shear parameter G of the Pasternak foundation and the stiffness parameter K of the Winkler foundation have been found to have a significant influence on the dynamic response of the shell. It is concluded from the present study that the HDQ-FD methodology is a simple, efficient, and accurate method for the nonlinear analysis of doubly curved shallow shells resting on a two-parameter elastic foundation

  14. Light mirror reflection combined with heating/cooling curves as a method of studying phase transitions in transparent and opaque petroleum products: Apparatus and theory

    International Nuclear Information System (INIS)

    Shishkin, Yu.L.

    2007-01-01

    A portable, lightweight, low-cost apparatus, 'Phasafot', and a method for determining the pour and cloud points of petroleum products, as well as the precipitation and melting temperatures of paraffins, in transparent (diesel fuels), semi-transparent (lube oils) and opaque (crude oils) samples are described. The method consists of illuminating the surface of a sample with an oblique light beam and registering the intensity of specularly reflected light while heating/cooling the sample in the temperature range of its structural transitions. The mirror reflection of a light beam from an ideally smooth liquid surface falls in intensity when the surface becomes rough (dim) due to crystal formation. Simultaneous recording of the temperature ramp curve and the mirror reflection curve enables determination of the beginning and end of crystallization of paraffins in both transparent and opaque petroleum products. Besides, their rheological properties can be accurately determined by rocking or tilting the instrument while monitoring the sample movement via its mirror reflection

  15. An evaluation of the effect of greenhouse gas accounting methods on a marginal abatement cost curve for Irish agricultural greenhouse gas emissions

    International Nuclear Information System (INIS)

    O’Brien, Donal; Shalloo, Laurence; Crosson, Paul; Donnellan, Trevor; Farrelly, Niall; Finnan, John; Hanrahan, Kevin; Lalor, Stan; Lanigan, Gary; Thorne, Fiona; Schulte, Rogier

    2014-01-01

    Highlights: • Improving productivity was the most effective strategy to reduce emissions and costs. • The accounting methods disagreed on the total abatement potential of mitigation measures. • Thus, it may be difficult to convince farmers to adopt certain abatement measures. • Domestic offsetting and consumption-based accounting are options for overcoming current methodological issues. - Abstract: Marginal abatement cost curve (MACC) analysis allows the evaluation of strategies to reduce agricultural greenhouse gas (GHG) emissions relative to some reference scenario and encompasses their costs or benefits. A popular approach to quantifying the potential to abate national agricultural emissions is the Intergovernmental Panel on Climate Change guidelines for national GHG inventories (IPCC-NI method). This methodology is the standard for assessing compliance with binding national GHG reduction targets and uses a sector-based framework to attribute emissions. There is, however, an alternative to the IPCC-NI method, known as life cycle assessment (LCA), which is the preferred method for assessing the GHG intensity of food production (kg of GHG/unit of food). The purpose of this study was to compare the effect of using the IPCC-NI and LCA methodologies when completing a MACC analysis of national agricultural GHG emissions. The MACC was applied to the Irish agricultural sector, and mitigation measures were constrained only by the biophysical environment. The reference scenario chosen assumed that the 2020 growth targets set by the Irish agricultural industry would be achieved. The comparison of methodologies showed that only 1.1 Mt of the annual GHG abatement potential achievable at zero or negative cost could be attributed to the agricultural sector using the IPCC-NI method, which was only 44% of the zero or negative cost abatement potential attributed to the sector using the LCA method. The difference between methodologies was because the IPCC-NI method attributes the

  16. Effects of tripolar TENS on slow and fast motoneurons: a preliminary study using H-reflex recovery curve method.

    Science.gov (United States)

    Simorgh, L; Torkaman, G; Firoozabadi, S M

    2008-01-01

    This study examined the effect of tripolar TENS of the vertebral column on the activity of slow and fast motoneurons in 10 healthy non-athlete women aged 22.7 ± 2.21 years. H-reflex recovery curves of the soleus (slow) and gastrocnemius (fast) muscles were recorded before and after applying tripolar TENS. For recording this curve, rectangular paired stimuli were applied to the tibial nerve (interstimulus intervals (ISI) of 40-520, frequency of 0.2 Hz, pulse width of 600 µs). Our findings showed that maximum H-reflex recovery in the gastrocnemius muscle appeared at the shorter ISIs, while in the soleus muscle it appeared at the longer ISIs, and its amplitude slightly decreased after applying tripolar TENS. It is suggested that tripolar TENS excites not only the skin but also Ia and Ib afferents in the dorsal column. A synaptic interaction of these afferents in the spinal cord causes the inhibition of type I motoneurons and facilitation of type II motoneurons. This effect can be used in muscle tone modulation.

  17. Calculation of isodose curves from initial neutron radiation of a hypothetical nuclear explosion using Monte Carlo Method

    International Nuclear Information System (INIS)

    Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R.; Silva, Ademir X.

    2015-01-01

    Nuclear explosions are usually described in terms of their total yield and the associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation', covering everything else. The initial nuclear radiation can further be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work presents isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modeling in a radiation protection context. The isodose curves are related to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 KT. The neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated in the form of the neutron ambient dose equivalent, H*(10). (author)

  18. Calculation of isodose curves from initial neutron radiation of a hypothetical nuclear explosion using Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, Marcos P.C.; Rebello, Wilson F.; Andrade, Edson R., E-mail: rebello@ime.eb.br, E-mail: daltongirao@yahoo.com.br [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Secao de Engenharia Nuclear; Silva, Ademir X., E-mail: ademir@nuclear.ufrj.br [Corrdenacao dos Programas de Pos-Graduacao em Egenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    Nuclear explosions are usually described in terms of their total yield and the associated shock wave, thermal radiation and nuclear radiation effects. The nuclear radiation produced in such events has several components, consisting mainly of alpha and beta particles, neutrinos, X-rays, neutrons and gamma rays. For practical purposes, the radiation from a nuclear explosion is divided into 'initial nuclear radiation', referring to what is emitted within one minute after the detonation, and 'residual nuclear radiation', covering everything else. The initial nuclear radiation can further be split between 'instantaneous' or 'prompt' radiation, which involves neutrons and gamma rays from fission and from interactions between neutrons and nuclei of surrounding materials, and 'delayed' radiation, comprising emissions from the decay of fission products and from interactions of neutrons with nuclei of the air. This work presents isodose curve calculations at ground level by Monte Carlo simulation, allowing risk assessment and consequence modeling in a radiation protection context. The isodose curves are related to neutrons produced by the prompt nuclear radiation from a hypothetical nuclear explosion with a total yield of 20 KT. The neutron fluence and emission spectrum were based on data available in the literature. Doses were calculated in the form of the neutron ambient dose equivalent, H*(10). (author)

  19. Retinoblastoma: Achieving new standards with methods of chemotherapy

    Directory of Open Access Journals (Sweden)

    Swathi Kaliki

    2015-01-01

    Full Text Available The management of retinoblastoma (RB has dramatically changed over the past two decades from previous radiotherapy methods to current chemotherapy strategies. RB is a remarkably chemotherapy-sensitive tumor. Chemotherapy is currently used as a first-line approach for children with this malignancy and can be delivered by intravenous, intra-arterial, periocular, and intravitreal routes. The choice of route for chemotherapy administration depends upon the tumor laterality and tumor staging. Intravenous chemotherapy (IVC is used most often in bilateral cases, orbital RB, and as an adjuvant treatment in high-risk RB. Intra-arterial chemotherapy (IAC is used in cases with group C or D RB and selected cases of group E tumor. Periocular chemotherapy is used as an adjunct treatment in eyes with group D and E RB and those with persistent/recurrent vitreous seeds. Intravitreal chemotherapy is reserved for eyes with persistent/recurrent vitreous seeds. In this review, we describe the various forms of chemotherapy used in the management of RB. A database search was performed on PubMed, using the terms "RB," and "treatment," "chemotherapy," "systemic chemotherapy," "IVC," "IAC," "periocular chemotherapy," or "intravitreal chemotherapy." Relevant English language articles were extracted, reviewed, and referenced appropriately.

  20. Standard test methods for bend testing of material for ductility

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 These test methods cover bend testing for ductility of materials. Included in the procedures are four conditions of constraint on the bent portion of the specimen: a guided-bend test using a mandrel or plunger of defined dimensions to force the mid-length of the specimen between two supports separated by a defined space; a semi-guided bend test in which the specimen is bent, while in contact with a mandrel, through a specified angle or to a specified inside radius (r) of curvature, measured while under the bending force; a free-bend test in which the ends of the specimen are brought toward each other, but in which no transverse force is applied to the bend itself and there is no contact of the concave inside surface of the bend with other material; a bend and flatten test, in which a transverse force is applied to the bend such that the legs make contact with each other over the length of the specimen. 1.2 After bending, the convex surface of the bend is examined for evidence of a crack or surface irregu...

  1. Signature Curves Statistics of DNA Supercoils

    OpenAIRE

    Shakiban, Cheri; Lloyd, Peter

    2004-01-01

    In this paper we describe the Euclidean signature curves for two dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...
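
    As a sketch of the discrete numerical methods the paper refers to, the following approximates the curvature κ at each vertex of a closed plane curve from the circumscribed circle of consecutive point triples; the Euclidean signature then pairs κ with its arc-length derivative κ_s. The ellipse test curve is illustrative only, not data from the paper.

```python
import numpy as np

def discrete_curvature(points):
    """Signed curvature at each vertex of a closed plane curve, from the
    circumcircle of (previous, current, next) points: kappa = 4A / (a*b*c),
    where A is the signed triangle area and a, b, c are the side lengths."""
    prev_p = np.roll(points, 1, axis=0)
    next_p = np.roll(points, -1, axis=0)
    a = np.linalg.norm(points - prev_p, axis=1)
    b = np.linalg.norm(next_p - points, axis=1)
    c = np.linalg.norm(next_p - prev_p, axis=1)
    # Twice the signed triangle area via the 2-D cross product.
    cross = ((points[:, 0] - prev_p[:, 0]) * (next_p[:, 1] - prev_p[:, 1])
             - (points[:, 1] - prev_p[:, 1]) * (next_p[:, 0] - prev_p[:, 0]))
    return 2.0 * cross / (a * b * c)

t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
ellipse = np.column_stack([2.0 * np.cos(t), np.sin(t)])
kappa = discrete_curvature(ellipse)   # constant for a circle, varying here
```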

  2. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    Science.gov (United States)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve were divided into three categories, simple approximations, artificial neural network-based approaches and continuum damage mechanics models, and their accuracy was assessed in strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. The simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed an inconsistency in estimation of the fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting early stages of crack initiation. This model requires more experimental data for calibration than approaches using simple approximations. As a result of the different theories underlying the analyzed methods, the different approaches have different strengths and weaknesses. However, it was found that the group of parametric equations categorized as simple approximations are the easiest for practical use, with their applicability having already been verified for a broad range of materials.
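
    One classic member of the 'simple approximations' category is Manson's universal slopes method, which builds the strain-life curve from monotonic tensile properties alone; a minimal sketch follows, with material constants assumed for illustration and not necessarily matching the exact variants examined in the paper.

```python
import numpy as np

def universal_slopes_strain_range(n_f, sigma_u, e_mod, reduction_area):
    """Total strain range by Manson's universal slopes method:
    delta_eps = 3.5*(Su/E)*Nf^-0.12 + ef^0.6 * Nf^-0.6,
    where ef = ln(1/(1-RA)) is the true fracture ductility."""
    eps_f = np.log(1.0 / (1.0 - reduction_area))
    return (3.5 * (sigma_u / e_mod) * n_f ** -0.12
            + eps_f ** 0.6 * n_f ** -0.6)

# Assumed monotonic properties for a high-strength steel (illustrative).
n_f = np.logspace(2, 6, 50)                        # cycles to failure
d_eps = universal_slopes_strain_range(n_f, sigma_u=900.0,
                                      e_mod=210e3,  # both in MPa
                                      reduction_area=0.6)
strain_amplitude = d_eps / 2.0                     # the strain-life curve
```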

  3. Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations

    International Nuclear Information System (INIS)

    Martens, Hans-Juergen von

    2010-01-01

    The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry, or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41), is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s²). The relative deviations between the amplitude measurement results of the different interferometer methods, applied simultaneously, were less than 1% in all cases.

  4. Comparison of two standardized methods of methacholine inhalation challenge in young adults

    DEFF Research Database (Denmark)

    Siersted, H C; Walker, C M; O'Shaughnessy, A D

    2000-01-01

    In the European Community Respiratory Health Study (ECRHS), airway responsiveness to methacholine was determined using the Mefar dosimeter protocol. Elsewhere, the 2-min tidal breathing method has become the preferred standardized method. The relationship between measurements of responsiveness by...

  5. Treatment of thoraco-lumbar curves in adolescent females affected by idiopathic scoliosis with a progressive action short brace (PASB: assessment of results according to the SRS committee on bracing and nonoperative management standardization criteria

    Directory of Open Access Journals (Sweden)

    Perisano Carlo

    2009-09-01

    Full Text Available Abstract Background The effectiveness of conservative treatment of scoliosis is controversial. Some studies suggest that bracing is effective in stopping curve progression, whilst others do not report such an effect. The purpose of the present study was to assess the effectiveness of the Progressive Action Short Brace (PASB) in the correction of thoraco-lumbar curves, in agreement with the Scoliosis Research Society (SRS) Committee on Bracing and Nonoperative Management standardisation criteria. Methods Fifty adolescent females (mean age 11.8 ± 0.5 years) with a thoraco-lumbar curve and a pre-treatment Risser score ranging from 0 to 2 were enrolled. The minimum duration of follow-up was 24 months (mean: 55.4 ± 44.5 months). Antero-posterior radiographs were used to estimate the curve magnitude (CM) and the torsion of the apical vertebra (TA) at 5 time points: beginning of treatment (t1), one year after the beginning of treatment (t2), intermediate time between t1 and t4 (t3), end of weaning (t4), 2-year minimum follow-up from t4 (t5). Three outcomes were distinguished: curve correction, curve stabilisation and curve progression. The Kruskal-Wallis and Spearman rank correlation tests were used as statistical tests. Results The mean CM value was 29.30 ± 5.16 SD at t1 and 14.67 ± 7.65 SD at t5. TA was 12.70 ± 6.14 SD at t1 and 8.95 ± 5.82 at t5. The variation between measures of Cobb and Perdriolle degrees at t1-t5, and between CM t5-t1 and TA t5-t1, were significantly different. Curve correction was accomplished in 94% of patients, whereas curve stabilisation was obtained in 6% of patients. Conclusion The PASB, due to its peculiar biomechanical action on vertebral modelling, is highly effective in correcting thoraco-lumbar curves.

  6. Neutron activation analysis of reference materials by the k₀ standardization and relative methods

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, M C; Martinho, E [LNETI/ICEN, Sacavem (Portugal)

    1989-04-15

    Instrumental neutron activation analysis with the k₀-standardization method was applied to eight geological, environmental and biological reference materials, including leaves, blood, fish, sediments, soils and limestone. To a first approximation, the results were normally distributed around the certified values with a standard deviation of 10%. Results obtained by using the relative method, based on well-characterized multi-element standards, for IAEA CRM Soil-7 are also reported.

  7. 42 CFR 440.260 - Methods and standards to assure quality of services.

    Science.gov (United States)

    2010-10-01

    42 Public Health 4 2010-10-01 Methods and standards to assure quality of services. 440.260 Section 440.260 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH... and Limits Applicable to All Services § 440.260 Methods and standards to assure quality of services...

  8. 40 CFR 1043.50 - Approval of methods to meet Tier 1 retrofit NOX standards.

    Science.gov (United States)

    2010-07-01

    1043.50 Section 1043.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... SUBJECT TO THE MARPOL PROTOCOL § 1043.50 Approval of methods to meet Tier 1 retrofit NOX standards. Regulation 13 of Annex VI provides for certification of Approved Methods, which are retrofit procedures that...

  9. An inverse method based on finite element model to derive the plastic flow properties from non-standard tensile specimens of Eurofer97 steel

    Directory of Open Access Journals (Sweden)

    S. Knitel

    2016-12-01

    Full Text Available A new inverse method was developed to derive the plastic flow properties of non-standard disk tensile specimens, which were designed to fit the irradiation rods used for spallation irradiations in the SINQ (Schweizer Spallations Neutronen Quelle) target at the Paul Scherrer Institute. The inverse method, which makes use of MATLAB and the finite element code ABAQUS, is based upon the reconstruction of the load-displacement curve by a succession of connected small linear segments. To do so, the experimental engineering stress/strain curve is divided into an elastic and a plastic section, and the plastic section is further divided into small segments. Each segment is then used to determine an associated pair of true stress/plastic strain values representing the constitutive behavior. The main advantage of the method is that it does not rely on a hypothetical analytical expression of the constitutive behavior. To account for the stress/strain gradients that develop in the non-standard specimen, the stress and strain were weighted over the volume of the deforming elements. The method was validated with tensile tests carried out at room temperature on non-standard flat disk tensile specimens as well as on standard cylindrical specimens made of the reduced-activation tempered martensitic steel Eurofer97. While the two specimen geometries differed significantly in terms of deformation localization during necking, the same true stress/strain curve was deduced from the inverse method. The potential and usefulness of the inverse method are outlined for irradiated materials that suffer from a large reduction in uniform elongation.
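
    For context, the analytic part of such a reconstruction (before necking, where the segment-wise finite element iteration takes over) is the standard engineering-to-true conversion; a minimal sketch, assuming volume constancy:

```python
import numpy as np

def true_stress_strain(eng_strain, eng_stress):
    """Engineering-to-true conversion, valid up to the onset of necking
    (uniform elongation), assuming constant volume during plastic flow."""
    true_strain = np.log(1.0 + eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)
    return true_strain, true_stress

def true_plastic_strain(true_strain, true_stress, e_mod):
    """Subtract the elastic part to obtain the plastic flow curve."""
    return true_strain - true_stress / e_mod
```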

  10. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare the different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters, with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
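
    A minimal sketch of two ingredients discussed above: the non-parametric AUC as a normalized Mann-Whitney statistic, and one of the many possible CI constructions (a percentile bootstrap; the paper compares 29 approaches, of which this is only one). The sample data are drawn from a binormal model for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def auc_mann_whitney(neg, pos):
    """AUC = P(score_pos > score_neg), with ties counted as 1/2."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_ci(neg, pos, n_boot=2000, level=0.95):
    """Percentile bootstrap CI, resampling each group with replacement."""
    stats = np.empty(n_boot)
    for i in range(n_boot):
        stats[i] = auc_mann_whitney(rng.choice(neg, neg.size),
                                    rng.choice(pos, pos.size))
    lo, hi = np.quantile(stats, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

neg = rng.normal(0.0, 1.0, 20)    # non-diseased scores (small sample)
pos = rng.normal(1.0, 1.0, 20)    # diseased scores
point_estimate = auc_mann_whitney(neg, pos)
ci_low, ci_high = bootstrap_ci(neg, pos)
```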

  11. Receiver-operating characteristic curves and likelihood ratios: improvements over traditional methods for the evaluation and application of veterinary clinical pathology tests

    DEFF Research Database (Denmark)

    Gardner, Ian A.; Greiner, Matthias

    2006-01-01

    Receiver-operating characteristic (ROC) curves provide a cutoff-independent method for the evaluation of continuous or ordinal tests used in clinical pathology laboratories. The area under the curve is a useful overall measure of test accuracy and can be used to compare different tests (or different equipment) used by the same tester, as well as the accuracy of different diagnosticians that use the same test material. To date, ROC analysis has not been widely used in veterinary clinical pathology studies, although it should be considered a useful complement to estimates of sensitivity and specificity in test evaluation studies. In addition, calculation of likelihood ratios can potentially improve the clinical utility of such studies because likelihood ratios provide an indication of how the post-test probability changes as a function of the magnitude of the test results. For ordinal test

  12. Standardization of a sulfur quantitative analysis method by X ray fluorescence in a leaching solution for bio-available sulfates in soil

    International Nuclear Information System (INIS)

    Morales S, E.; Aguilar S, E.

    1989-11-01

    A method for the analysis of bio-available sulfates in soils is described. A Ca(H₂PO₄)₂ leaching solution was used to treat the soil samples. A standard Na₂SO₄ solution was used for preparing the calibration curve, and the fundamental parameters method was also employed. An Am-241 (100 mCi) source and a Si(Li) detector were used. Analysis could be done in 5 minutes; good reproducibility (5%) and accuracy (5%) were obtained. The method is very competitive with conventional nephelometry, where good and reproducible suspensions are difficult to obtain. (author)
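
    A minimal sketch of the calibration-curve step: a straight line fitted to hypothetical intensity readings of the Na₂SO₄ standards, then inverted to predict an unknown; all numbers are invented for illustration.

```python
import numpy as np

# Hypothetical calibration points: sulfate concentration vs. measured
# sulfur fluorescence intensity (values invented for illustration).
conc      = np.array([0.0, 10.0, 25.0, 50.0, 100.0])   # mg/L
intensity = np.array([1.2, 8.5, 20.1, 39.8, 80.3])     # counts/s

# Straight-line calibration curve I = m*C + b fitted by least squares.
m, b = np.polyfit(conc, intensity, 1)

def concentration_from_intensity(i_sample):
    """Inverse prediction of an unknown's concentration from its intensity."""
    return (i_sample - b) / m
```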

  13. Separate base usages of genes located on the leading and lagging strands in Chlamydia muridarum revealed by the Z curve method

    Directory of Open Access Journals (Sweden)

    Yu Xiu-Juan

    2007-10-01

    Full Text Available Abstract Background The nucleotide compositional asymmetry between the leading and lagging strands in bacterial genomes has been the subject of intensive study in the past few years. It is interesting to note that almost all bacterial genomes exhibit the same kind of base asymmetry. This work aims to investigate the strand biases in the Chlamydia muridarum genome and show the potential of the Z curve method for quantitatively differentiating genes on the leading and lagging strands. Results The occurrence frequencies of bases in protein-coding genes of the C. muridarum genome were analyzed by the Z curve method. It was found that genes located on the two strands of replication have distinct base usages in the C. muridarum genome. According to their positions in the 9-D space spanned by the variables u1-u9 of the Z curve method, the K-means clustering algorithm can assign about 94% of genes to the correct strands, which is a few percent higher than the proportion correctly classified by K-means based on the RSCU. The base usage and codon usage analyses show that genes on the leading strand have more G than C and more T than A, particularly at the third codon position, while for genes on the lagging strand the bias is reversed. The y components of the Z curves for the complete chromosome sequences show that the excesses of G over C and T over A are more remarkable in the C. muridarum genome than in other bacterial genomes without separate base and/or codon usages. Furthermore, for the genomes of Borrelia burgdorferi, Treponema pallidum, Chlamydia muridarum and Chlamydia trachomatis, in which distinct base and/or codon usages have been observed, closer phylogenetic distances are found compared with other bacterial genomes. Conclusion The nature of the strand biases of base composition in C. muridarum is similar to that in
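
    A minimal sketch of the Z curve transform itself (the gene-level variables u1-u9 used for clustering are derived from the same three base-composition axes): cumulative x, y, z components of a toy sequence; the y component is the one whose genome-scale plot reveals the leading-strand G-over-C and T-over-A excess described above.

```python
import numpy as np

def z_curve(seq):
    """Cumulative Z curve components of a DNA sequence:
    x = (A+G)-(C+T)  purine vs. pyrimidine,
    y = (A+C)-(G+T)  amino vs. keto,
    z = (A+T)-(G+C)  weak vs. strong hydrogen bonding."""
    seq = seq.upper()
    a = np.cumsum([base == 'A' for base in seq])
    c = np.cumsum([base == 'C' for base in seq])
    g = np.cumsum([base == 'G' for base in seq])
    t = np.cumsum([base == 'T' for base in seq])
    return (a + g) - (c + t), (a + c) - (g + t), (a + t) - (g + c)

x, y, z = z_curve("ATGGCGTACGTTAGCAT")   # toy sequence, illustrative only
```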

  14. Rare earths analysis of rock samples by instrumental neutron activation analysis, internal standard method

    International Nuclear Information System (INIS)

    Silachyov, I.

    2016-01-01

    The application of instrumental neutron activation analysis to the determination of long-lived rare earth elements (REE) in rock samples is considered in this work. Two different methods are statistically compared: the well-established external standard method, carried out using standard reference materials, and the internal standard method (ISM), using Fe, determined through X-ray fluorescence analysis, as the element-comparator. The ISM proved to be the more precise method for a wide range of REE contents and can be recommended for routine practice. (author)

  15. Development of A Standard Method for Human Reliability Analysis (HRA) of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kang, Dae Il; Jung, Won Dea; Kim, Jae Whan

    2005-12-01

    As the demand for risk-informed regulation and applications increases, the quality and reliability of probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME and ANS PSA standards to ensure PSA quality, the standard HRA method was developed to meet the ASME and ANS HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria, to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method, and several case studies were undertaken interactively to verify the usability and applicability of the standard method.

  16. Development of A Standard Method for Human Reliability Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jung, Won Dea; Kang, Dae Il; Kim, Jae Whan

    2005-12-01

    As the demand for risk-informed regulation and applications increases, the quality and reliability of probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study progressed as follows: assessing the quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME PSA standard to ensure PSA quality, the standard HRA method was developed to meet the ASME HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria, to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method, and several case studies were undertaken interactively to verify the usability and applicability of the standard method.

  17. DETECTION OF MICROVASCULAR COMPLICATIONS OF TYPE 2 DIABETES BY EZSCAN AND ITS COMPARISON WITH STANDARD SCREENING METHODS

    Directory of Open Access Journals (Sweden)

    Sarita Bajaj

    2016-08-01

    BACKGROUND EZSCAN is a new, noninvasive technique to detect sudomotor dysfunction, and thus neuropathy, in diabetes patients at an early stage. It further predicts the chances of development of other microvascular complications. In this study, we evaluated EZSCAN for the detection of microvascular complications in Type 2 diabetes patients and compared the accuracy of EZSCAN with standard screening methods. MATERIALS AND METHODS 104 known diabetes patients, 56 males and 48 females, were studied. All cases underwent the EZSCAN test, a nerve conduction study (NCS), a vibration perception threshold (VPT) test, a monofilament test, fundus examination and a urine Micral test. The results of EZSCAN were compared with the standard screening methods, and the data were analysed with appropriate statistical tests within the different groups. RESULTS Mean age of the subjects was 53.5 ± 11.4 years. For the detection of diabetic neuropathy, the sensitivity and specificity of EZSCAN were 77.0% and 95.3%, respectively; the odds ratio (OR) was 68.82 with p < 0.0001, and the AUC of the ROC curve was 0.930. Sensitivity and specificity for the detection of nephropathy were 67.1% and 94.1%, respectively (OR = 32.69, p < 0.0001; AUC = 0.926). Sensitivity for the detection of retinopathy was 90% and specificity 70.3% (OR = 21.27, p < 0.0001; AUC = 0.920). CONCLUSION The results of the EZSCAN test compared well with the standard screening methods for the detection of microvascular complications of diabetes, and EZSCAN can be used as a simple, noninvasive and quick method to detect such complications.
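
    For reference, the reported sensitivity, specificity and odds ratio all derive from a 2x2 contingency table of test result versus reference diagnosis. The sketch below is a generic illustration with hypothetical counts, not the study data.

      import numpy as np

      # Hypothetical 2x2 table: rows = EZSCAN positive/negative,
      # columns = reference-standard positive/negative.
      tp, fp = 47, 2
      fn, tn = 14, 41

      sensitivity = tp / (tp + fn)          # true positive rate
      specificity = tn / (tn + fp)          # true negative rate
      odds_ratio = (tp * tn) / (fp * fn)    # cross-product ratio

      print(f"sensitivity={sensitivity:.3f} "
            f"specificity={specificity:.3f} OR={odds_ratio:.1f}")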

  18. Method for determining scan timing based on analysis of formation process of the time-density curve

    International Nuclear Information System (INIS)

    Yamaguchi, Isao; Ishida, Tomokazu; Kidoya, Eiji; Higashimura, Kyoji; Suzuki, Masayuki

    2005-01-01

    Strict determination of scan timing is needed for dynamic multi-phase scanning and 3D-CT angiography (3D-CTA) with multi-detector row CT (MDCT). In the present study, the contrast media arrival time (T_AR) was measured in the abdominal aorta at the bifurcation of the celiac artery to account for circulatory differences between patients. In addition, we analyzed the formation process of the time-density curve (TDC) and examined the factors that affect the time to peak aortic enhancement (T_PA). Mean T_AR was 15.57 ± 3.75 s. TDCs were plotted for each injection duration. The rising portions of the TDCs were superimposed on one another, with the TDCs of longer injection durations stacking on top of the shorter ones. The rise angle was approximately constant for each flow rate. The rise time (T_R) showed a good correlation with the injection duration (T_ID): T_R = 1.01 T_ID (R² = 0.994) in the phantom study and T_R = 0.94 T_ID − 0.60 (R² = 0.988) in the clinical study. In conclusion, for the selection of optimal scan timing it is useful to determine T_R at a given point and to measure the time from T_AR. (author)

  1. Area-under-the-curve monitoring of cyclosporine therapy: Performance of different assay methods and their target concentrations

    International Nuclear Information System (INIS)

    Grevel, J.; Napoli, K.L.; Gibbons, S.; Kahan, B.D.

    1990-01-01

    The measurement of areas under the concentration-time curve (AUC) was recently introduced as an alternative to trough-level monitoring of cyclosporine therapy. The AUC is divided by the oral dosing interval to calculate an average concentration; all measurements are performed at clinical steady state. The initial evaluation of AUC monitoring showed advantages over trough-level monitoring with concentrations of cyclosporine measured in serum by the polyclonal radioimmunoassay of Sandoz. This assay technique is no longer available, and the following assays were performed in parallel during up to 173 AUC determinations in 51 consecutive renal transplant patients: the polyclonal fluorescence polarization immunoassay of Abbott in serum, specific and nonspecific monoclonal radioimmunoassays using ³H and ¹²⁵I tracers in serum and whole blood, and high-performance liquid chromatography in whole blood. Both trough levels and average concentrations at steady state measured by these different techniques were significantly correlated with the oral dose. The best correlation (r² = 0.54) was shown by average concentrations measured in whole blood by the specific monoclonal radioimmunoassay of Sandoz (³H tracer). This monitoring technique was also associated with the smallest absolute error between repeated observations in the same patient, whether the oral dose rate remained the same or was changed. Both allegedly specific monoclonal radioimmunoassays (with ³H and ¹²⁵I tracers) measured significantly higher concentrations than liquid chromatography
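
    The quantity being monitored is easy to state concretely: integrate the concentration-time profile over one dosing interval and divide by that interval. Below is a minimal sketch with made-up sample times and concentrations, using the trapezoidal rule; it illustrates the calculation, not the study's sampling protocol.

      import numpy as np

      # Hypothetical steady-state samples over one 12-h oral dosing interval:
      t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])       # hours after dose
      c = np.array([180., 520., 610., 430., 260., 190.])  # ng/mL

      # Trapezoidal AUC over the interval (ng*h/mL), then the average level.
      auc = np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0
      tau = t[-1] - t[0]            # dosing interval, h
      c_avg = auc / tau             # average concentration, ng/mL

      print(f"AUC={auc:.0f} ng*h/mL, average concentration={c_avg:.0f} ng/mL")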

  2. ECM using Edwards curves

    NARCIS (Netherlands)

    Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C.P.

    2013-01-01

    This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the
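
    EECM-MPFQ itself is a heavily optimized C implementation built on Edwards-curve arithmetic; none of that is reproduced here. As a rough illustration of the elliptic-curve method itself, the sketch below is a bare-bones, stage-1-only textbook ECM in Python on short Weierstrass curves (so it shows the method, not the paper's improvements): a factor of n is revealed when a point addition needs a modular inverse that does not exist.

      import math, random

      def ecm_stage1(n, B1=1000, tries=50):
          """Try to find a nontrivial factor of n (naive textbook ECM)."""
          for _ in range(tries):
              # Random curve y^2 = x^3 + a*x + b (mod n) through a random point P.
              x, y, a = (random.randrange(n) for _ in range(3))
              b = (y * y - x * x * x - a * x) % n
              P = (x, y)

              def add(P, Q):
                  # Curve addition; raises ValueError(g) when an inverse
                  # fails, which exposes a factor g of n.
                  if P is None: return Q
                  if Q is None: return P
                  (x1, y1), (x2, y2) = P, Q
                  if x1 == x2 and (y1 + y2) % n == 0:
                      return None                       # point at infinity
                  if P == Q:
                      num, den = (3 * x1 * x1 + a) % n, (2 * y1) % n
                  else:
                      num, den = (y2 - y1) % n, (x2 - x1) % n
                  g = math.gcd(den, n)
                  if g > 1:
                      raise ValueError(g)
                  s = num * pow(den, -1, n) % n
                  x3 = (s * s - x1 - x2) % n
                  return (x3, (s * (x1 - x3) - y1) % n)

              def mul(k, P):
                  R = None
                  while k:
                      if k & 1: R = add(R, P)
                      P = add(P, P)
                      k >>= 1
                  return R

              try:
                  # Stage 1: compute (B1!)*P by multiplying by 2, 3, ..., B1.
                  for k in range(2, B1):
                      P = mul(k, P)
                      if P is None:
                          break
              except ValueError as e:
                  g = e.args[0]
                  if g != n:
                      return g
          return None

      print(ecm_stage1(9511 * 9973))    # demo on a small semiprime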

  3. Standardization and validation of a novel and simple method to assess lumbar dural sac size

    International Nuclear Information System (INIS)

    Daniels, M.L.A.; Lowe, J.R.; Roy, P.; Patrone, M.V.; Conyers, J.M.; Fine, J.P.; Knowles, M.R.; Birchard, K.R.

    2015-01-01

    Aim: To develop and validate a simple, reproducible method to assess dural sac size using standard imaging technology. Materials and methods: This study was institutional review board-approved. Two readers, blinded to the diagnoses, measured anterior–posterior (AP) and transverse (TR) dural sac diameter (DSD), and AP vertebral body diameter (VBD) of the lumbar vertebrae using MRI images from 53 control patients with pre-existing MRI examinations, 19 prospectively MRI-imaged healthy controls, and 24 patients with Marfan syndrome with prior MRI or CT lumbar spine imaging. Statistical analysis utilized linear and logistic regression, Pearson correlation, and receiver operating characteristic (ROC) curves. Results: AP-DSD and TR-DSD measurements were reproducible between two readers (r = 0.91 and 0.87, respectively). DSD (L1–L5) was not different between male and female controls in the AP or TR plane (p = 0.43; p = 0.40, respectively), and did not vary by age (p = 0.62; p = 0.25) or height (p = 0.64; p = 0.32). AP-VBD was greater in males versus females (p = 1.5 × 10⁻⁸), resulting in a smaller dural sac ratio (DSR = DSD/VBD) in males (p = 5.8 × 10⁻⁶). Marfan patients had larger AP-DSDs and TR-DSDs than controls (p = 5.9 × 10⁻⁹; p = 6.5 × 10⁻⁹, respectively). Compared to DSR, AP-DSD and TR-DSD better discriminate Marfan from control subjects based on area under the curve (AUC) values from unadjusted ROCs (AP-DSD p < 0.01; TR-DSD p = 0.04). Conclusion: Individual vertebrae and L1–L5 (average) AP-DSD and TR-DSD measurements are simple, reliable, and reproducible for quantitating dural sac size without needing to control for gender, age, or height. - Highlights: • DSD (L1–L5) does not differ in the AP or TR plane by gender, height, or age. • AP- and TR-DSD measures correlate well between readers with different experience. • Height is positively correlated to AP-VBD in both males and females. • Varying

  4. An Analytical Method for Deriving Reservoir Operation Curves to Maximize Social Benefits from Multiple Uses of Water in the Willamette River Basin

    Science.gov (United States)

    Moore, K. M.; Jaeger, W. K.; Jones, J. A.

    2013-12-01

    A central characteristic of large river basins in the western US is the spatial and temporal disjunction between the supply of and demand for water. Water sources are typically concentrated in forested mountain regions distant from municipal and agricultural water users, while precipitation is super-abundant in winter and deficient in summer. To cope with these disparities, systems of reservoirs have been constructed throughout the West. These reservoir systems are managed to serve two main competing purposes: to control flooding during winter and spring, and to store spring runoff and deliver it to populated, agricultural valleys during the summer. The reservoirs also provide additional benefits, including recreation, hydropower and instream flows for stream ecology. Since the storage capacity of the reservoirs cannot be used for both flood control and storage at the same time, these uses are traded-off during spring, as the most important, or dominant use of the reservoir, shifts from buffering floods to storing water for summer use. This tradeoff is expressed in the operations rule curve, which specifies the maximum level to which a reservoir can be filled throughout the year, apart from real-time flood operations. These rule curves were often established at the time a reservoir was built. However, climate change and human impacts may be altering the timing and amplitude of flood events and water scarcity is expected to intensify with anticipated changes in climate, land cover and population. These changes imply that reservoir management using current rule curves may not match future societal values for the diverse uses of water from reservoirs. Despite a broad literature on mathematical optimization for reservoir operation, these methods are not often used because they 1) simplify the hydrologic system, raising doubts about the real-world applicability of the solutions, 2) exhibit perfect foresight and assume stationarity, whereas reservoir operators face

  5. IMPROVING MANAGEMENT ACCOUNTING AND COST CALCULATION IN DAIRY INDUSTRY USING STANDARD COST METHOD

    Directory of Open Access Journals (Sweden)

    Bogdănoiu Cristiana-Luminiţa

    2013-04-01

    This paper discusses issues related to the improvement of management accounting in the dairy industry by implementing the standard cost method. The methods used today do not give managers the information they need to conduct production activities effectively, which is why we turned to the standard cost method: it responds to the need of managers, and of all economic entities, to achieve production efficiency. The method allows operative control of how manpower and material resources are consumed, by tracking deviations distinctly, permanently and completely during the activity rather than at the end of the reporting period. Successful implementation of the standard method depends on the accuracy with which the standards are developed; it promotes consistent anticipated calculation of production costs as well as the determination, tracking and control of deviations from them, increases the practical value of accounting information, and improves the business.

  6. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    Science.gov (United States)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation, and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method, which is considered the reference. However, that method requires several days or weeks to produce a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery; the entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained at several current rates with measurements made by the potentiometric method.
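
    As a reference for the heat balance involved (a standard battery thermal-modelling form under one common sign convention, with current I positive on charge; this is not the paper's specific model), the measured heat rate splits into an irreversible term and a reversible, entropy-driven term that changes sign between charge and discharge:

      \dot{q} = I\,(U - U_{ocv}) + I\,T\,\frac{\partial U_{ocv}}{\partial T},
      \qquad \Delta S = n\,F\,\frac{\partial U_{ocv}}{\partial T}

    Halving the difference between the heats measured while charging and discharging at the same current therefore isolates the reversible term, from which the entropy-variation follows.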

  7. Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures

    Science.gov (United States)

    2016-08-10

    [Indexed front matter only: USARIEM Technical Report T16-14, "Method for Estimating Evaporative Potential (IM/CLO) from ASTM Standard Single Wind Velocity Measures," Adam W. Potter, Biophysics and Biomedical Modeling Division, U.S. Army Research Institute of Environmental Medicine.]

  8. Standardization of waste acceptance test methods by the Materials Characterization Center

    International Nuclear Information System (INIS)

    Slate, S.C.

    1985-01-01

    This paper describes the role of standardized test methods in demonstrating the acceptability of high-level waste (HLW) forms for disposal. Key waste acceptance tests are standardized by the Materials Characterization Center (MCC), which the US Department of Energy (DOE) has established as the central agency in the United States for the standardization of test methods for nuclear waste materials. This paper describes the basic three-step process that is used to show that waste is acceptable for disposal and discusses how standardized tests are used in this process. Several of the key test methods and their areas of application are described. Finally, future plans are discussed for using standardized tests to show waste acceptance. 9 refs., 1 tab

  9. Application of the Fourier pseudospectral time-domain method in orthogonal curvilinear coordinates for near-rigid moderately curved surfaces.

    Science.gov (United States)

    Hornikx, Maarten; Dragna, Didier

    2015-07-01

    The Fourier pseudospectral time-domain method is an efficient wave-based method to model sound propagation in inhomogeneous media. One of the limitations of the method for atmospheric sound propagation purposes is its restriction to a Cartesian grid, confining it to staircase-like geometries. A transform from the physical coordinate system to a curvilinear coordinate system has been applied to solve more arbitrary geometries. For applicability of this method near the boundaries, the acoustic velocity variables are solved for their curvilinear components. The performance of the curvilinear Fourier pseudospectral method is investigated in the free field and for outdoor sound propagation over an impedance strip for various types of shapes. Accuracy is shown to be related to the maximum grid stretching ratio and the deformation of the boundary shape, and computational efficiency is reduced relative to the smallest grid cell in the physical domain. The applicability of the curvilinear Fourier pseudospectral time-domain method is demonstrated by investigating the effect of sound propagation over a hill in a nocturnal boundary layer. With the proposed method, accurate and efficient results for sound propagation over smoothly varying ground surfaces with high impedances can be obtained.
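
    The core building block of any Fourier pseudospectral scheme is the spectral derivative: transform, multiply the Fourier coefficients by ik, transform back. The sketch below is a generic 1-D illustration of that step (not the authors' curvilinear solver), showing the spectral accuracy that motivates the method.

      import numpy as np

      # 1-D periodic grid and a smooth test field.
      N = 64
      x = 2 * np.pi * np.arange(N) / N
      u = np.sin(3 * x) * np.exp(np.cos(x))

      # Spectral derivative: FFT, multiply by i*k, inverse FFT.
      k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
      du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

      # Compare with the exact derivative.
      exact = (3 * np.cos(3 * x) - np.sin(x) * np.sin(3 * x)) * np.exp(np.cos(x))
      print("max error:", np.max(np.abs(du - exact)))   # ~1e-13 for N = 64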

  10. A note on families of fragility curves

    International Nuclear Information System (INIS)

    Kaplan, S.; Bier, V.M.; Bley, D.C.

    1989-01-01

    In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, had never been proven. The paper proves this equivalence hypothesis mathematically. Moreover, the authors show that this equivalence hypothesis between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve
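
    A sketch of the identity presumably at play, in standard lognormal fragility notation (median capacity A_m, randomness dispersion β_R, uncertainty dispersion β_U; this is the textbook form, not quoted from the paper): averaging the family over the uncertainty variable collapses it to a single lognormal curve with the composite dispersion,

      \int_{-\infty}^{\infty} \Phi\!\left(\frac{\ln(a/A_m) + \beta_U\,z}{\beta_R}\right)\varphi(z)\,dz
      \;=\; \Phi\!\left(\frac{\ln(a/A_m)}{\sqrt{\beta_R^{2} + \beta_U^{2}}}\right)

    i.e., the mean curve equals the composite curve with \beta_C = \sqrt{\beta_R^2 + \beta_U^2}. The underlying normal-distribution identity is E[\Phi(\mu + \sigma Z)] = \Phi(\mu/\sqrt{1+\sigma^2}) for Z a standard normal variable.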

  11. A new probability density function for spatial distribution of soil water storage capacity leads to SCS curve number method

    OpenAIRE

    Wang, Dingbao

    2018-01-01

    Following the Budyko framework, soil wetting ratio (the ratio between soil wetting and precipitation) as a function of soil storage index (the ratio between soil wetting capacity and precipitation) is derived from the SCS-CN method and the VIC type of model. For the SCS-CN method, soil wetting ratio approaches one when soil storage index approaches infinity, due to the limitation of the SCS-CN method in which the initial soil moisture condition is not explicitly represented. However, for the ...
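
    For context, the classical SCS-CN relations behind the discussion above are (standard formulation; the paper's Budyko-style derivation is not reproduced here):

      Q = \frac{(P - I_a)^2}{(P - I_a) + S} \quad (P > I_a), \qquad
      I_a = 0.2\,S, \qquad S = \frac{25400}{CN} - 254 \ \ \text{(mm)}

    where Q is direct runoff, P is storm rainfall, I_a is the initial abstraction and S is the potential maximum retention set by the curve number CN.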

  12. An innovation on high-grade CNC machine tools for a B-spline curve method of high-speed interpolation arithmetic

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    A novel high-speed interpolation algorithm based on the B-spline curve method for high-grade CNC machine tools is introduced. In the CNC systems of high-grade machine tools, handling the data points is troublesome and the control precision is weak; the method was developed to solve this problem. Simulation of specific examples in Matlab 7.0 showed that the interpolation error is significantly reduced, the control precision is markedly improved, and the real-time requirements of high-speed, high-accuracy interpolation are satisfied.

  13. Developing content standards for teaching research skills using a delphi method

    NARCIS (Netherlands)

    Schaaf, M.F. van der; Stokking, K.M.; Verloop, N.

    2005-01-01

    The increased attention for teacher assessment and current educational reforms ask for procedures to develop adequate content standards. For the development of content standards on teaching research skills, a Delphi method based on stakeholders’ judgments has been designed and tested. In three

  14. Evaluation of the Ross fast solution of Richards' equation in unfavourable conditions for standard finite element methods

    International Nuclear Information System (INIS)

    Crevoisier, D.; Voltz, M.; Chanzy, A.

    2009-01-01

    Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins: 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results on a more extensive set of problems, including some that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solutes transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of the numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988:3:1-15] proved in every tested scenario to be more robust numerically, and the computing-time/accuracy compromise is particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D. (authors)

  15. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN is a simple and valuable approach to estimate the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), which distributes in time the information provided by the SCS-CN method so as to estimate sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.
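
    For reference, the Green-Ampt infiltration-capacity law used inside the mixed procedure is, in its standard form (the calibration of K_s against the SCS-CN net-rainfall volume is the paper's step and is not shown here):

      f(t) = K_s\left(1 + \frac{\psi_f\,\Delta\theta}{F(t)}\right)

    where f is the infiltration capacity, F is the cumulative infiltration, K_s is the saturated hydraulic conductivity, \psi_f is the wetting-front suction head and \Delta\theta is the initial soil-moisture deficit.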

  16. Evaluation of diagnostic tests when there is no gold standard. A review of methods

    NARCIS (Netherlands)

    Rutjes, A. W. S.; Reitsma, J. B.; Coomarasamy, A.; Khan, K. S.; Bossuyt, P. M. M.

    2007-01-01

    OBJECTIVE: To generate a classification of methods to evaluate medical tests when there is no gold standard. METHODS: Multiple search strategies were employed to obtain an overview of the different methods described in the literature, including searches of electronic databases, contacting experts

  17. Carbon Lorenz Curves

    Energy Technology Data Exchange (ETDEWEB)

    Groot, L. [Utrecht University, Utrecht School of Economics, Janskerkhof 12, 3512 BL Utrecht (Netherlands)

    2008-11-15

    The purpose of this paper is twofold. First, it shows that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini index, can successfully be applied to the measurement of inequality in carbon emissions and the equity of abatement policies across countries. These tools allow policy-makers and the general public to grasp at a single glance the impact of conventional distribution rules, such as equal caps or grandfathering, or of more sophisticated ones, on the distribution of greenhouse gas emissions. Second, using the Samuelson rule for the optimal provision of a public good, the Pareto-optimal distribution of carbon emissions is compared with the distribution that results if countries follow Nash-Cournot abatement strategies. It is shown that the Pareto-optimal distribution under the Samuelson rule can be approximated by the equal cap division, represented by the diagonal in the Lorenz curve diagram.
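
    A minimal sketch of the bookkeeping implied here, on hypothetical per-country data rather than the paper's dataset: sort countries by per-capita emissions, accumulate population and emission shares to form the Lorenz curve, and take the area-based Gini.

      import numpy as np

      # Hypothetical data: population (millions) and CO2 emissions (Mt).
      pop = np.array([1400., 1380., 330., 210., 67.])
      co2 = np.array([10500., 2600., 4700., 470., 310.])

      # Sort countries by per-capita emissions, then build the Lorenz curve.
      order = np.argsort(co2 / pop)
      p = np.cumsum(pop[order]) / pop.sum()     # cumulative population share
      e = np.cumsum(co2[order]) / co2.sum()     # cumulative emission share

      # Gini = 1 - 2 * area under the Lorenz curve (prepend the origin).
      p0, e0 = np.insert(p, 0, 0.0), np.insert(e, 0, 0.0)
      area = np.sum((e0[1:] + e0[:-1]) * np.diff(p0)) / 2.0
      print(f"carbon Gini index: {1.0 - 2.0 * area:.3f}")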

  18. Stability and non-standard finite difference method of the generalized Chua's circuit

    KAUST Repository

    Radwan, Ahmed G.; Moaddy, K.; Momani, Shaher M.

    2011-01-01

    In this paper, we develop a framework to obtain approximate numerical solutions of the fractional-order Chua's circuit with Memristor using a non-standard finite difference method. Chaotic response is obtained with fractional-order elements as well

  19. Analysis and Comparison of Thickness and Bending Measurements from Fabric Touch Tester (FTT) and Standard Methods

    Directory of Open Access Journals (Sweden)

    Musa Atiyyah Binti Haji

    2018-03-01

    The Fabric Touch Tester (FTT) is a relatively new device from SDL Atlas to determine the touch properties of fabrics. It simultaneously measures 13 touch-related fabric physical properties in four modules, which include bending and thickness measurements. This study comparatively analyzes the thickness and bending measurements made by the FTT and by the common standard methods used in the textile industry. The results obtained with the FTT for 11 different fabrics were compared with those of the standard methods. Despite the different measurement principles, a good correlation was found between the two methods for the assessment of both thickness and bending. As the FTT is a new tool for textile comfort measurement and no standard yet exists, these findings are essential to determine the reliability of the measurements and how they relate to the well-established standard methods.

  20. Standard test method for radiochemical determination of uranium isotopes in urine by alpha spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 This test method is applicable to the determination of uranium in urine at levels of detection dependent on sample size, count time, detector background, and tracer yield. It is designed as a screening tool for detection of possible exposure of occupational workers. 1.2 This test method is designed for 50 mL of urine. This test method does not address the sampling protocol or sample preservation methods associated with its use. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  1. STANDARDIZATION AND VALIDATION OF METHODS FOR ENUMERATION OF FECAL COLIFORM AND SALMONELLA IN BIOSOLIDS

    Science.gov (United States)

    Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then evaluated by testi...

  2. Using commercial simulators for determining flash distillation curves for petroleum fractions

    Directory of Open Access Journals (Sweden)

    Eleonora Erdmann

    2008-01-01

    This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used for implementing a stationary model for flash distillation; this model was adjusted by using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of many types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required for obtaining an EFV curve. Any commercial simulator able to model petroleum can be used for the simulation (the HYSYS and CHEMCAD simulators were used here). Several types of petroleum and fractions were experimentally analysed for evaluating the proposed method; this data was then put into a process simulator (according to the proposed method) to estimate the corresponding EFV curves. HYSYS- and CHEMCAD-estimated curves were compared to those produced by two traditional estimation methods (Edmister's and Maxwell's). The simulation-estimated curves were close to the average Edmister and Maxwell curves in all cases. The proposed method has several advantages: it avoids the need for experimentally obtaining an EFV curve, it does not depend on the type of experimental curve used to fit the model, and it enables estimation at several pressures by using just one experimental curve as data.

  3. A Standardized Method for 4D Ultrasound-Guided Peripheral Nerve Blockade and Catheter Placement

    Directory of Open Access Journals (Sweden)

    N. J. Clendenen

    2014-01-01

    We present a standardized method for using four-dimensional ultrasound (4D US) guidance for peripheral nerve blocks. 4D US allows for needle tracking in multiple planes simultaneously and accurate measurement of the local anesthetic volume surrounding the nerve following injection. Additionally, the morphology and proximity of local anesthetic spread around the target nerve is clearly seen with the described technique. This method provides additional spatial information in real time compared to standard two-dimensional ultrasound.

  4. Choice of standard materials in the method of β-ray testing of new materials' mass thickness

    International Nuclear Information System (INIS)

    Chen Zhong

    2007-01-01

    To establish the standard mass thickness for beta-ray mass-thickness testing, this paper uses Monte Carlo calculations to obtain the relations between the transmission rate of beta rays of different energies and the mass thickness of different materials. The results prove that, in the beta-ray mass-thickness testing method, it is viable to choose as standard materials those whose elemental compositions are close to that of the tested material. (authors)
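
    The physics behind such a test is often approximated by an exponential attenuation law; the sketch below is that textbook approximation (not the paper's Monte Carlo calculation), with an illustrative mass attenuation coefficient.

      import numpy as np

      def transmission(x_m, mu_m):
          """Approximate beta transmission through an absorber.

          x_m  : mass thickness in g/cm^2
          mu_m : mass attenuation coefficient in cm^2/g (illustrative;
                 in practice it depends on the beta end-point energy)
          """
          return np.exp(-mu_m * x_m)

      # Example: energetic betas with mu_m ~ 5 cm^2/g (rough figure), so a
      # mass thickness of 0.1 g/cm^2 transmits about 61% of the particles.
      print(transmission(0.1, 5.0))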

  5. Standard test method for drop-weight tear tests of ferritic steels

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers drop-weight tear tests (DWTT) on ferritic steels with thicknesses between 3.18 and 19.1 mm. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  6. Standard Test Method for Measuring Heat Flux Using a Water-Cooled Calorimeter

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers the measurement of a steady heat flux to a given water-cooled surface by means of a system energy balance. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  7. Standard test method for uranium analysis in natural and waste water by X-ray fluorescence

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2004-01-01

    1.1 This test method applies for the determination of trace uranium content in waste water. It covers concentrations of U between 0.05 mg/L and 2 mg/L. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. [Standard sample preparation method for quick determination of trace elements in plastic].

    Science.gov (United States)

    Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa

    2011-08-01

    A reference sample containing heavy metals at known concentrations in electronic-information-product plastic was prepared by the masterbatch method; its repeatability and precision were determined, and a reference-sample preparation procedure was established. X-ray fluorescence (XRF) spectroscopy was used to determine the repeatability and uncertainty in the analysis of the heavy metals and bromine in the sample, and working curves and measurement methods for the reference sample were worked out. The results showed that the method exhibits a very good linear relationship in the 200-2000 mg·kg⁻¹ concentration range for Hg, Pb, Cr and Br, and in the 20-200 mg·kg⁻¹ range for Cd, and that the repeatability over six replicate analyses is good. In testing the circuit boards ICB288G and ICB288 from the Mitsubishi Heavy Industry Company, the results agreed with the recommended values.

  9. Analysis of Cine-Psychometric Visual Memory Data by the Tucker Generalized Learning Curve Method: Final Report.

    Science.gov (United States)

    Reid, J. C.; Seibert, Warren F.

    The analysis of previously obtained data concerning short-term visual memory and cognition by a method suggested by Tucker is proposed. Although interesting individual differences undoubtedly exist in people's ability and capacity to process short-term visual information, studies have not generally examined these differences. In fact, conventional…

  10. An ecological method to understand agricultural standardization in peach orchard ecosystems.

    Science.gov (United States)

    Wan, Nian-Feng; Zhang, Ming-Yi; Jiang, Jie-Xian; Ji, Xiang-Yun; Hao-Zhang

    2016-02-22

    While the worldwide standardization of agricultural production has been advocated and recommended, relatively little research has focused on the ecological significance of such a shift. The ecological concerns stemming from the standardization of agricultural production may require new methodology. In this study, we concentrated on how ecological two-sidedness and ecological processes affect the standardization of agricultural production, which was divided into three phases (pre-, mid- and post-production), considering both the positive and negative effects of agricultural processes. We constructed evaluation indicator systems for the pre-, mid- and post-production phases and present a Standardization of Green Production Index (SGPI) based on the Full Permutation Polygon Synthetic Indicator (FPPSI) method, which we used to assess the superiority of three methods of standardized production for peaches. The values of SGPI for pre-, mid- and post-production were 0.121 (Level IV, "Excellent" standard), 0.379 (Level III, "Good" standard), and 0.769 × 10⁻² (Level IV, "Excellent" standard), respectively. We aimed to explore the integrated application of ecological two-sidedness and ecological processes in agricultural production. Our results are of use to decision-makers and ecologists focusing on eco-agriculture and to farmers who hope to implement standardized agricultural production practices.

  11. The Standardization Method of Address Information for POIs from Internet Based on Positional Relation

    Directory of Open Access Journals (Sweden)

    WANG Yong

    2016-05-01

    As points of interest (POI) on the internet widely exhibit incomplete addresses and inconsistent literal expressions, a fast standardization method for the address information of network POIs based on spatial constraints is proposed. Based on an extensible address-expression model, the address information of each POI is first segmented and extracted, and the address elements are updated by matching against the address tree layer by layer. Then, by defining four types of positional relations, corresponding sets are selected from a standard POI library as candidates for the enrichment and amendment of non-standard addresses. Finally, fast standardized processing of POI address information is achieved by backtracking the address elements at the minimum granularity. The experiments in this paper prove that address standardization can be realized by this method with high accuracy, in order to build the address database.

  12. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method

    International Nuclear Information System (INIS)

    Cacais, F.L.; Delgado, J.U.; Loayza, V.M.

    2016-01-01

    In preparing solutions for the production of radionuclide metrology standards it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the method by elimination is validated by the Monte Carlo method. The results obtained by both uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)
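
    A generic illustration of what such a Monte Carlo validation looks like (hypothetical numbers, not the authors' weighing model): propagate the balance uncertainties through the mass difference of an elimination weighing and compare the Monte Carlo spread with the GUM quadrature result.

      import numpy as np

      rng = np.random.default_rng(1)
      N = 1_000_000

      # Hypothetical elimination weighing: dispensed mass = m_before - m_after,
      # each reading with a standard uncertainty of 0.02 mg (normal model).
      m_before = rng.normal(5.43210, 0.00002, N)   # g
      m_after  = rng.normal(5.10987, 0.00002, N)   # g
      m_disp = m_before - m_after

      u_mc = m_disp.std(ddof=1)            # Monte Carlo uncertainty
      u_gum = np.hypot(0.00002, 0.00002)   # GUM: quadrature of the two terms

      print(f"u(MC) = {u_mc*1e3:.4f} mg, u(GUM) = {u_gum*1e3:.4f} mg")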

  13. Standard Test Method for Gel Time of Carbon Fiber-Epoxy Prepreg

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This test method covers the determination of gel time of carbon fiber-epoxy tape and sheet. The test method is suitable for the measurement of gel time of resin systems having either high or low viscosity. 1.2 The values stated in SI units are to be regarded as standard. The values in parentheses are for reference only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  14. The standard deviation method: data analysis by classical means and by neural networks

    International Nuclear Information System (INIS)

    Bugmann, G.; Stockar, U. von; Lister, J.B.

    1989-08-01

    The Standard Deviation Method is a method for determining particle size which can be used, for instance, to determine air-bubble sizes in a fermentation bio-reactor. The transmission coefficient of an ultrasound beam through a gassy liquid is measured repetitively. Due to the displacements and random positions of the bubbles, the measurements show a scatter whose standard deviation is dependent on the bubble-size. The precise relationship between the measured standard deviation, the transmission and the particle size has been obtained from a set of computer-simulated data. (author) 9 figs., 5 refs

  15. Standard test method for linear thermal expansion of glaze frits and ceramic whiteware materials by the interferometric method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1995-01-01

    1.1 This test method covers the interferometric determination of linear thermal expansion of premelted glaze frits and fired ceramic whiteware materials at temperatures lower than 1000°C (1830°F). 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  16. An automatic method to analyze the Capacity-Voltage and Current-Voltage curves of a sensor

    CERN Document Server

    AUTHOR|(CDS)2261553

    2017-01-01

    An automatic method to perform capacitance-versus-voltage analysis for all kinds of silicon sensors is provided. It successfully calculates the depletion voltage for unirradiated and irradiated sensors, and for measurements with outliers or reaching breakdown. It is built in C++ using ROOT trees, with a skeleton analogous to TRICS, in which the data as well as the results of the fits are saved for further analysis.
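
    A common way to extract the depletion voltage automatically is to fit two straight lines to the log-log plot of 1/C² versus bias voltage (rising branch and plateau) and take their intersection. The sketch below is a generic version of that idea on synthetic data; it illustrates the technique, not the thesis code.

      import numpy as np

      # Synthetic C-V curve: C ~ 1/sqrt(V) below depletion (~120 V), flat above.
      v = np.linspace(5, 300, 60)
      v_dep_true = 120.0
      c = np.where(v < v_dep_true, 30e-12 / np.sqrt(v / v_dep_true), 30e-12)
      c *= 1 + 0.01 * np.random.default_rng(2).normal(size=v.size)

      x, y = np.log(v), np.log(1.0 / c**2)

      # Fit a line to the rising branch and another to the plateau,
      # then intersect them to estimate the depletion voltage.
      lo, hi = x < np.log(60), x > np.log(200)
      a1, b1 = np.polyfit(x[lo], y[lo], 1)
      a2, b2 = np.polyfit(x[hi], y[hi], 1)
      v_dep = np.exp((b2 - b1) / (a1 - a2))
      print(f"estimated depletion voltage: {v_dep:.1f} V")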

  17. Standard Test Method for Wet Insulation Integrity Testing of Photovoltaic Arrays

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers a procedure to determine the insulation resistance of a photovoltaic (PV) array (or its component strings), that is, the electrical resistance between the array's internal electrical components and its exposed, electrically conductive, non-current-carrying parts and surfaces of the array. 1.2 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  18. A standardized method for sampling and extraction methods for quantifying microplastics in beach sand.

    Science.gov (United States)

    Besley, Aiken; Vijver, Martina G; Behrens, Paul; Bosker, Thijs

    2017-01-15

    Microplastics are ubiquitous in the environment, are frequently ingested by organisms, and may potentially cause harm. A range of studies have found significant levels of microplastics in beach sand. However, there is a considerable amount of methodological variability among these studies. Methodological variation currently limits comparisons as there is no standard procedure for sampling or extraction of microplastics. We identify key sampling and extraction procedures across the literature through a detailed review. We find that sampling depth, sampling location, number of repeat extractions, and settling times are the critical parameters of variation. Next, using a case-study we determine whether and to what extent these differences impact study outcomes. By investigating the common practices identified in the literature with the case-study, we provide a standard operating procedure for sampling and extracting microplastics from beach sand. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Improvement of precision method of spectrophotometry with inner standardization and its use in plutonium solutions analysis

    International Nuclear Information System (INIS)

    Stepanov, A.V.; Stepanov, D.A.; Nikitina, S.A.; Gogoleva, T.D.; Grigor'eva, M.G.; Bulyanitsa, L.S.; Panteleev, Yu.A.; Pevtsova, E.V.; Domkin, V.D.; Pen'kin, M.V.

    2006-01-01

    A precision spectrophotometry method with inner standardization is used for the analysis of pure Pu solutions. The spectrophotometer and the spectrophotometric method of analysis were improved to decrease the random component of the relative error of the method. The influence of U and Np impurities and of corrosion products on the systematic component of the error of the method, and the effect of fluoride ion on the completeness of Pu oxidation during sample preparation, are studied

  20. Theoretical Understanding of the Relations of Melting-point Determination Methods from the Gibbs Thermodynamic Surface and Applications to Melting Curves of Lower Mantle Minerals

    Science.gov (United States)

    Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.

    2016-12-01

    The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulations faces a substantial hysteresis problem. To overcome the hysteresis encountered in atomic simulations, a few independently founded melting-point determination methods are available nowadays, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations of these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Combining this with experimental data and/or a previous melting-point determination method, we apply the model to derive the high-pressure melting curves of several lower-mantle minerals with less computational effort than using previous methods alone. In this way, some polyatomic minerals at extreme pressures that were almost unsolvable before can now be calculated fully from first principles.

  1. Dose response curve for micronucleus of cytokinesis-block method in human lymphocytes after 60Co-gamma ray exposure

    International Nuclear Information System (INIS)

    Gao Jinsheng; Zheng Siying; Cai Feng

    1993-08-01

    The cytokinesis-block micronucleus technique has been proposed as a new method to measure chromosome damage in cytogenetics. Cytokinesis is blocked by using cytochalasin B (Cyt-B), and micronuclei are scored in cytokinesis-blocked (CB) cells. This can easily be done owing to the appearance of binucleate cells, which accumulate in large numbers when 3.0 μg/ml cytochalasin B is added at 44 hours and scoring is performed at 72 hours. The results show that the optimum concentration of Cyt-B is 3.0 μg/ml and that Cyt-B itself does not increase the micronucleus frequency. Above the micronucleus frequency of normal individuals in vivo, there is an approximately linear relationship between the frequency of induced micronuclei and the irradiation dose: Y = 0.36 D + 2.74 (r² = 0.995, P < 0.01). Because the cytokinesis-block method is simple and reliable, it is effective for assaying chromosome damage caused by genotoxic materials

  2. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

    We study Lagrangian points on smooth holomorphic curves in TP¹ equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP¹ with the space LE³ of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E³, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E³, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E³ where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  3. Determination of metal impurities in MOX powder by direct current arc atomic emission spectroscopy. Application of standard addition method for direct analysis of powder sample

    International Nuclear Information System (INIS)

    Furuse, Takahiro; Taguchi, Shigeo; Kuno, Takehiko; Surugaya, Naoki

    2016-12-01

    Metal impurities in MOX powder obtained from uranium and plutonium recovered from the reprocessing of spent nuclear fuel have to be determined for its characterization. Direct current arc atomic emission spectroscopy (DCA-AES) is one of the useful methods for the direct analysis of powder samples without dissolving the analyte into aqueous solution. However, the selection of a standard material that can overcome concerns such as matrix matching is quite important for creating adequate calibration curves for DCA-AES. In this study, we apply the standard addition method, using certified U₃O₈ containing known amounts of metal impurities, to avoid the matrix problems. The proposed method provides good results for the determination of Fe, Cr and Ni contained in MOX samples at a significant quantity level. (author)
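
    The arithmetic of the standard addition method is worth making explicit: measure the signal at several known added amounts of analyte, fit a straight line, and extrapolate to zero signal; the unknown concentration is the magnitude of the x-intercept. The sketch below uses made-up numbers, not the paper's measurements.

      import numpy as np

      # Hypothetical data: added analyte concentration (ug/g) vs. signal.
      added = np.array([0.0, 10.0, 20.0, 40.0])
      signal = np.array([12.1, 18.0, 24.2, 36.1])

      slope, intercept = np.polyfit(added, signal, 1)
      c_unknown = intercept / slope     # |x-intercept| of the fitted line
      print(f"estimated concentration in the sample: {c_unknown:.1f} ug/g")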

  4. Development of Mini-Compact Tension Test Method for Determining Fracture Toughness Master Curves for Reactor Pressure Vessel Steels

    Energy Technology Data Exchange (ETDEWEB)

    Sokolov, Mikhail A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-05-01

    Small specimens are playing the key role in evaluating properties of irradiated materials. The use of small specimens provides several advantages. Typically, only a small volume of material can be irradiated in a reactor at desirable conditions in terms of temperature, neutron flux, and neutron dose. A small volume of irradiated material may also allow for easier handling of specimens. Smaller specimens reduce the amount of radioactive material, minimizing personnel exposures and waste disposal. However, use of small specimens imposes a variety of challenges as well. These challenges are associated with proper accounting for size effects and transferability of small specimen data to the real structures of interest. Any fracture toughness specimen that can be made out of the broken halves of standard Charpy specimens may have exceptional utility for evaluation of reactor pressure vessels (RPVs) since it would allow one to determine and monitor directly actual fracture toughness instead of requiring indirect predictions using correlations established with impact data. The Charpy V-notch specimen is the most commonly used specimen geometry in surveillance programs. Validation of the mini compact tension specimen (mini-CT) geometry has been performed on previously well characterized Midland beltline Linde 80 (WF-70) weld in the unirradiated condition. It was shown that the fracture toughness transition temperature, To, measured by these Mini-CT specimens is almost the same as To value that was derived from various larger fracture toughness specimens. Moreover, an International collaborative program has been established to extend the assessment and validation efforts to irradiated Linde 80 weld metal. The program is underway and involves the Oak Ridge National Laboratory (ORNL), Central Research Institute for Electrical Power Industry (CRIEPI), and Electric Power Research Institute (EPRI). The irradiated Mini-CT specimens from broken halves of previously tested Charpy
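
    For context, the "master curve" of the title is, in the well-known ASTM E1921 form (standard formulation, not quoted from this report), the median fracture toughness as a function of temperature relative to the reference temperature T_0:

      K_{Jc,\mathrm{med}}(T) = 30 + 70\,\exp\!\left[0.019\,(T - T_0)\right]
      \quad \text{MPa}\sqrt{\text{m}}, \ T \ \text{in} \ ^\circ\mathrm{C}

    so measuring T_0 with mini-CT specimens fixes the entire toughness-temperature curve for the material.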

  5. Using the computerized glow curve deconvolution method and the R package tgcd to determine thermoluminescence kinetic parameters of chilli powder samples with the GOK and OTOR models

    Energy Technology Data Exchange (ETDEWEB)

    Sang, Nguyen Duy, E-mail: ndsang@ctu.edu.vn [College of Rural Development, Can Tho University, Can Tho 270000 (Viet Nam); Faculty of Physics and Engineering Physics, University of Science, Ho Chi Minh 700000 (Viet Nam); Van Hung, Nguyen [Nuclear Research Institute, VAEI, Dalat 670000 (Viet Nam); Van Hung, Tran; Hien, Nguyen Quoc [Research and Development Center for Radiation Technology, VAEI, Ho Chi Minh 700000 (Viet Nam)

    2017-03-01

    Highlights: • TL analysis aims to calculate the kinetic parameters of chilli powder. • The kinetic parameters differ with radiation dose. • The kinetic parameters differ depending on whether the GOK or the OTOR model is applied. • The software R is applied for the first time to TL glow curve analysis of chilli powder. - Abstract: The kinetic parameters of thermoluminescence (TL) glow peaks of chilli powder irradiated by gamma rays at doses of 0, 4 and 8 kGy have been calculated and estimated from the TL glow curve data by the computerized glow curve deconvolution (CGCD) method using the R package tgcd. The kinetic parameters of the TL glow peaks (i.e., activation energy (E), order of kinetics (b), trapping and recombination probability coefficients (R), and frequency factor (s)) are fitted with the general-order kinetics (GOK) model and with the one trap-one recombination (OTOR) model. The kinetic parameters of the chilli powder differ with sample storage time, radiation dose, and choice of model (GOK versus OTOR). Samples stored for a shorter period have smaller kinetic parameter values than samples stored for a longer period. Comparing the kinetic parameter values of the three samples shows that the values for the non-irradiated samples are lowest, while the values for the 4 kGy irradiated samples are greater than those for the 8 kGy irradiated samples.
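
    For orientation, the GOK model describes a glow peak under linear heating by I(T) = n0·s·exp(−E/kT)·[1 + (b−1)(s/β)∫exp(−E/kT′)dT′]^(−b/(b−1)). A minimal fitting sketch in this spirit (not the tgcd implementation; the heating rate, parameter values and synthetic "data" are assumptions):

```python
# Minimal GOK glow-peak fit (a sketch, not the tgcd implementation).
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import curve_fit

K_B = 8.617e-5   # Boltzmann constant, eV/K
BETA = 2.0       # linear heating rate, K/s (assumed)

def gok_peak(T, n0, E, s, b):
    """General-order kinetics intensity I(T) for one peak, linear heating."""
    boltz = np.exp(-E / (K_B * T))
    integ = cumulative_trapezoid(boltz, T, initial=0.0)
    return n0 * s * boltz * (1.0 + (b - 1.0) * (s / BETA) * integ) ** (-b / (b - 1.0))

T = np.linspace(350.0, 550.0, 400)           # temperature grid, K
I_obs = gok_peak(T, 1e7, 1.1, 1e12, 1.5)     # synthetic glow peak ("data")
# Initial guesses set to the true values here, since the demo data are exact.
popt, pcov = curve_fit(gok_peak, T, I_obs, p0=(1e7, 1.1, 1e12, 1.5))
print(dict(zip(("n0", "E", "s", "b"), popt)))
```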

  6. Method of moving frames to solve time-dependent Maxwell's equations on anisotropic curved surfaces: Applications to invisible cloak and ELF propagation

    Science.gov (United States)

    Chun, Sehun

    2017-07-01

    Applying the method of moving frames to Maxwell's equations yields two important advancements for scientific computing. The first is the use of upwind flux for anisotropic materials in Maxwell's equations, especially in the context of discontinuous Galerkin (DG) methods. Upwind flux has previously been available only for isotropic materials, because of the difficulty of satisfying the Rankine-Hugoniot conditions in anisotropic media. The second is the numerical solution of Maxwell's equations on curved surfaces without the metric tensor and composite meshes. For numerical validation, spectral convergence is displayed for both two-dimensional anisotropic media and isotropic spheres. In the first application, invisible two-dimensional metamaterial cloaks are simulated on a relatively coarse mesh using both the lossless Drude model and the piecewise-parameterized layered model. In the second application, extremely low frequency propagation on various surfaces such as spheres, irregular surfaces, and non-convex surfaces is demonstrated.

  7. Using commercial simulators for determining flash distillation curves for petroleum fractions

    OpenAIRE

    Eleonora Erdmann; Demetrio Humana; Samuel Franco Domínguez; Lorgio Mercado Fuentes

    2010-01-01

    This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used to implement a stationary model for flash distillation; this model was adjusted by using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of several types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required...

  8. The X-ray Power Density Spectrum of the Seyfert 2 Galaxy NGC 4945: Analysis and Application of the Method of Light Curve Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Martin; /SLAC

    2010-12-16

    The study of the power density spectrum (PDS) of fluctuations in the X-ray flux from active galactic nuclei (AGN) complements spectral studies in giving us a view into the processes operating in accreting compact objects. An important line of investigation is the comparison of the PDS from AGN with those from galactic black hole binaries; a related area of focus is the scaling relation between time scales for the variability and the black hole mass. The PDS of AGN is traditionally modeled using segments of power laws joined together at so-called break frequencies; associations of the break time scales, i.e., the inverses of the break frequencies, with time scales of physical processes thought to operate in these sources are then sought. I analyze the Method of Light Curve Simulations that is commonly used to characterize the PDS in AGN with a view to making the method as sensitive as possible to the shape of the PDS. I identify several weaknesses in the current implementation of the method and propose alternatives that can substitute for some of the key steps in the method. I focus on the complications introduced by uneven sampling in the light curve, the development of a fit statistic that is better matched to the distributions of power in the PDS, and the statistical evaluation of the fit between the observed data and the model for the PDS. Using archival data on one AGN, NGC 3516, I validate my changes against previously reported results. I also report new results on the PDS in NGC 4945, a Seyfert 2 galaxy with a well-determined black hole mass. This source provides an opportunity to investigate whether the PDS of Seyfert 1 and Seyfert 2 galaxies differ. It is also an attractive object for placement on the black hole mass-break time scale relation. Unfortunately, with the available data on NGC 4945, significant uncertainties on the break frequency in its PDS remain.
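
    The light-curve simulation step in analyses of this kind is commonly based on the Timmer and König (1995) prescription, which draws random Fourier amplitudes whose variance follows the model PDS and inverse-transforms them into a synthetic light curve. A minimal sketch (the broken power-law parameters are illustrative, not those fitted to NGC 4945):

```python
# Timmer & Koenig (1995): simulate an evenly sampled light curve whose
# power density spectrum follows a model broken power law.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 4096, 100.0                           # samples and time step (s)
freqs = np.fft.rfftfreq(n, dt)[1:]            # positive frequencies

f_break = 1e-5                                # break frequency, Hz (assumed)
pds = np.where(freqs <= f_break,
               (freqs / f_break) ** -1.0,     # slope below the break
               (freqs / f_break) ** -2.0)     # steeper slope above it

# Gaussian real/imaginary parts with variance proportional to the PDS.
re = rng.standard_normal(freqs.size) * np.sqrt(pds / 2.0)
im = rng.standard_normal(freqs.size) * np.sqrt(pds / 2.0)
light_curve = np.fft.irfft(np.concatenate(([0.0], re + 1j * im)), n)
```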

  9. [A method for inducing standardized spiral fractures of the tibia in the animal experiment].

    Science.gov (United States)

    Seibold, R; Schlegel, U; Cordey, J

    1995-07-01

    A method for the deliberate weakening of cortical bone has been developed on the basis of an already established technique for creating butterfly fractures. It makes it possible to create the same type of fracture, i.e., a spiral fracture, every time. The fracturing process is recorded as a force-strain curve. The results of the in vitro investigations form a basis for the preparation of experimental tasks aimed at demonstrating internal fixation techniques and their influence on the vascularity of the bone in simulated fractures. Animal protection law requires that this fracture model must not fail in animal experiments.

  10. Standard Test Methods for Solar Energy Transmittance and Reflectance (Terrestrial) of Sheet Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1971-01-01

    1.1 These test methods cover the measurement of solar energy transmittance and reflectance (terrestrial) of materials in sheet form. Method A, using a spectrophotometer, is applicable for both transmittance and reflectance and is the referee method. Method B is applicable only for measurement of transmittance using a pyranometer in an enclosure and the sun as the energy source. Specimens for Method A are limited in size by the geometry of the spectrophotometer while Method B requires a specimen 0.61 m2 (2 ft2). For the materials studied by the drafting task group, both test methods give essentially equivalent results. 1.2 This standard does not purport to address all of the safety problems, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
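
    Conceptually, the spectrophotometric route (Method A) reduces a measured transmittance spectrum to a single solar value by weighting it with a standard solar spectral irradiance and integrating over wavelength. A schematic calculation (the wavelength grid, irradiance weights and transmittances below are placeholders, not the standard's tables):

```python
# Solar-weighted transmittance: weight the measured spectral
# transmittance by a solar spectral irradiance and integrate.
import numpy as np
from scipy.integrate import trapezoid

wl  = np.array([400., 600., 800., 1000., 1500., 2000.])  # wavelength, nm
irr = np.array([1.2, 1.6, 1.1, 0.75, 0.30, 0.10])        # irradiance, W/(m^2 nm)
tau = np.array([0.88, 0.90, 0.89, 0.87, 0.82, 0.78])     # measured transmittance

tau_solar = trapezoid(tau * irr, wl) / trapezoid(irr, wl)
print(f"solar transmittance ~ {tau_solar:.3f}")
```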

  11. Addressing Next Generation Science Standards: A Method for Supporting Classroom Teachers

    Science.gov (United States)

    Pellien, Tamara; Rothenburger, Lisa

    2014-01-01

    The Next Generation Science Standards (NGSS) will define science education for the foreseeable future, yet many educators struggle to see the bridge between current practice and future practices. The inquiry-based methods used by Extension professionals (Kress, 2006) can serve as a guide for classroom educators. Described herein is a method of…

  12. Establishing Upper Limits for Item Ratings for the Angoff Method: Are Resulting Standards More 'Realistic'?

    Science.gov (United States)

    Reid, Jerry B.

    This report investigates an area of uncertainty in using the Angoff method for setting standards, namely whether or not a judge's conceptualizations of borderline group performance are realistic. Ratings are usually made with reference to the performance of this hypothetical group; the Angoff method's success is therefore dependent on this point.…
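
    In an Angoff study, each judge estimates, item by item, the probability that a borderline (minimally competent) examinee would answer correctly, and the cut score is the sum of those ratings averaged across judges. A toy illustration (ratings invented):

```python
# Toy Angoff computation: rows = judges, columns = items; each entry is
# a judge's estimated probability that a borderline examinee answers
# the item correctly. All values are invented.
import numpy as np

ratings = np.array([
    [0.6, 0.8, 0.4, 0.7],
    [0.5, 0.9, 0.5, 0.6],
    [0.7, 0.7, 0.3, 0.8],
])
cut_score = ratings.mean(axis=0).sum()   # expected raw score at the borderline
print(f"Angoff cut score: {cut_score:.2f} of {ratings.shape[1]} items")
```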

  13. Another Look at the Method of Y-Standardization in Logit and Probit Models

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    2015-01-01

    This paper takes another look at the derivation of the method of Y-standardization used in sociological analysis involving comparisons of coefficients across logit or probit models. It shows that the method can be derived under less restrictive assumptions than hitherto suggested. Rather than...
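
    For context, Y-standardization rescales a logit or probit coefficient by the estimated standard deviation of the latent outcome y*, whose residual variance is fixed by the link. Schematically (background form, not this paper's derivation):

```latex
% Y-standardized coefficient in a logit model (for probit, replace
% \pi^2/3 with 1):
b^{\,y\text{-std}} = \frac{b}{\hat{\sigma}_{y^{*}}},
\qquad
\hat{\sigma}_{y^{*}}^{2}
  = \widehat{\mathrm{Var}}(\mathbf{x}'\hat{\boldsymbol{\beta}}) + \pi^{2}/3
```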

  14. Standardization of electron-capture and complex beta-gamma radionuclides by the efficiency extrapolation method

    International Nuclear Information System (INIS)

    Grigorescu, L.

    1976-07-01

    The efficiency extrapolation method was improved by establishing ''linearity conditions'' for the discrimination on the gamma channel of the coincidence equipment. These conditions were proved to eliminate the systematic error of the method. A control procedure for the fulfilment of the linearity conditions and for estimation of the residual systematic error is given. For low-energy gamma transitions an ''equivalent scheme principle'' was established, which allows for a correct application of the method. Solutions of Cs-134, Co-57, Ba-133 and Zn-65 were standardized with an ''effective standard deviation'' of 0.3-0.7 per cent. For Zn-65 ''special linearity conditions'' were applied. (author)
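
    The relations underlying the technique (standard 4πβ-γ coincidence counting, stated here for orientation): with source activity N₀ and channel efficiencies εβ and εγ,

```latex
N_\beta = N_0 \varepsilon_\beta , \qquad
N_\gamma = N_0 \varepsilon_\gamma , \qquad
N_c = N_0 \varepsilon_\beta \varepsilon_\gamma
\quad\Longrightarrow\quad
\frac{N_\beta N_\gamma}{N_c} = N_0 .
```

    In the extrapolation variant, NβNγ/Nc is plotted against (1−εβ)/εβ as the beta efficiency is varied experimentally, and N₀ is read off at the intercept (1−εβ)/εβ → 0; the ''linearity conditions'' above concern the validity of that linear extrapolation.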

  15. The development and standardization of testing methods for genetically modified organisms and their derived products.

    Science.gov (United States)

    Zhang, Dabing; Guo, Jinchao

    2011-07-01

    As the worldwide commercialization of genetically modified organisms (GMOs) increases and consumers are concerned about the safety of GMOs, many countries and regions are issuing labeling regulations for GMOs and their products. Analytical methods and their standardization for GM ingredients in foods and feed are essential for the implementation of labeling regulations. To date, GMO testing methods have mainly been based on the inserted DNA sequences and the newly produced proteins in GMOs. This paper presents an overview of GMO testing methods as well as their standardization. © 2011 Institute of Botany, Chinese Academy of Sciences.

  16. Design of a new torque standard machine based on a torque generation method using electromagnetic force

    International Nuclear Information System (INIS)

    Nishino, Atsuhiro; Ueda, Kazunaga; Fujii, Kenichi

    2017-01-01

    To allow the application of torque standards in various industries, we have been developing torque standard machines based on a lever-deadweight system, i.e. a torque generation method using gravity. However, this method is not suitable for extending the low end of the torque range, because of limitations on the sizes of the weights and moment arms. In this study, the working principle of a torque generation method using an electromagnetic force was investigated by reference to the watt balance experiments used for the redefinition of the kilogram. Applying this principle to a rotating coordinate system, an electromagnetic-force type torque standard machine was designed and prototyped. It was experimentally demonstrated that SI-traceable torque could be generated by converting electrical power to mechanical power. Thus, for the first time, SI-traceable torque was successfully realized using a method other than one based on the force of gravity. (paper)
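
    The rotational analogue of the watt-balance principle can be sketched as follows (schematic reasoning, not the machine's published equations): a velocity-mode measurement calibrates the coil's flux gradient, a torque-mode measurement then uses it, and the flux gradient cancels, equating electrical and mechanical power:

```latex
% velocity mode:  U = \frac{d\Phi}{d\theta}\,\omega
% torque mode:    \tau = I\,\frac{d\Phi}{d\theta}
% eliminating d\Phi/d\theta:
\tau\,\omega = U I \quad\Longrightarrow\quad \tau = \frac{U I}{\omega}
```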

  17. A Mapmark method of standard setting as implemented for the National Assessment Governing Board.

    Science.gov (United States)

    Schulz, E Matthew; Mitzel, Howard C

    2011-01-01

    This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and for the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.

  18. Melting curve analysis after T allele enrichment (MelcaTle) as a highly sensitive and reliable method for detecting the JAK2V617F mutation.

    Directory of Open Access Journals (Sweden)

    Soji Morishita

    Full Text Available Detection of the JAK2V617F mutation is essential for diagnosing patients with classical myeloproliferative neoplasms (MPNs). However, detection of the low-frequency JAK2V617F mutation is a challenging task due to the necessity of discriminating between true-positive and false-positive results. Here, we have developed a highly sensitive and accurate assay for the detection of JAK2V617F and named it melting curve analysis after T allele enrichment (MelcaTle). MelcaTle comprises three steps: (1) two cycles of JAK2V617F allele enrichment by PCR amplification followed by BsaXI digestion, (2) selective amplification of the JAK2V617F allele in the presence of a bridged nucleic acid (BNA) probe, and (3) a melting curve assay using a BODIPY-FL-labeled oligonucleotide. Using this assay, we successfully detected nearly a single copy of the JAK2V617F allele, without false-positive signals, using 10 ng of genomic DNA standard. Furthermore, MelcaTle showed no positive signals in 90 assays screening healthy individuals for JAK2V617F. When applying MelcaTle to 27 patients who were initially classified as JAK2V617F-positive on the basis of allele-specific PCR analysis and were thus suspected of having MPNs, we found that two of the patients were actually JAK2V617F-negative. A more careful clinical data analysis revealed that these two patients had developed transient erythrocytosis of unknown etiology but not polycythemia vera, a subtype of MPNs. These findings indicate that the newly developed MelcaTle assay should markedly improve the diagnosis of JAK2V617F-positive MPNs.

  19. Standardization of 32P activity determination method in soil-root cores for root distribution studies

    International Nuclear Information System (INIS)

    Sharma, R.B.; Ghildyal, B.P.

    1976-01-01

    The root distribution of wheat variety UP 301 was obtained by determining the 32P activity in soil-root cores by two methods, viz. ignition and triacid digestion. Root distribution obtained by these two methods was compared with that from the standard root-core washing procedure. The percent error in root distribution as determined by the triacid digestion method was within ±2.1 to ±9.0, as against ±5.5 to ±21.2 for the ignition method. Thus the triacid digestion method proved better than the ignition method. (author)

  20. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers; Metodos de ajuste de curvas de eficiencia obtidas por meio de espectrometros de HPGe

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Vanderlei

    2002-07-01

    The present work describes several methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure for giving a complete description of the partial uncertainties involved. (author)
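
    A common simple variant of such a fit is a low-order polynomial in log(efficiency) versus log(energy), keeping the parameter covariance matrix for uncertainty propagation. A minimal sketch (the energies and efficiencies are illustrative, not the author's data):

```python
# Polynomial fit of ln(efficiency) vs ln(energy) for an HPGe detector,
# retaining the covariance matrix of the fitted parameters.
import numpy as np

E = np.array([122., 344., 662., 1173., 1332.])           # keV (assumed)
eff = np.array([0.012, 0.0062, 0.0035, 0.0021, 0.0019])  # peak efficiency

x, y = np.log(E), np.log(eff)
coef, cov = np.polyfit(x, y, 2, cov=True)                # quadratic in log-log

def eff_interp(energy_kev):
    """Interpolated efficiency at an arbitrary gamma-ray energy."""
    return np.exp(np.polyval(coef, np.log(energy_kev)))

print(eff_interp(900.0), np.sqrt(np.diag(cov)))          # value, param. sigmas
```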

  1. A New Method for Re-Analyzing Evaluation Bias: Piecewise Growth Curve Modeling Reveals an Asymmetry in the Evaluation of Pro and Con Arguments.

    Directory of Open Access Journals (Sweden)

    Jens Jirschitzka

    Full Text Available In four studies we tested a new methodological approach to the investigation of evaluation bias. The use of piecewise growth curve modeling allowed us to investigate the impact of people's attitudes on their persuasiveness ratings of pro- and con-arguments, measured over the whole range of the arguments' polarity from an extreme con to an extreme pro position. Moreover, this method provided the opportunity to test specific hypotheses about the course of the evaluation bias within certain polarity ranges. We conducted two field studies with users of an existing online information portal as participants (Studies 1a and 2a), and two Internet laboratory studies with mostly student participants (Studies 1b and 2b). In each of these studies we presented pro- and con-arguments, either on the topic of MOOCs (massive open online courses; Studies 1a and 1b) or on the topic of M-learning (mobile learning; Studies 2a and 2b). Our results indicate that using piecewise growth curve models is more appropriate than simpler approaches. An important finding of our studies was an asymmetry of the evaluation bias toward pro- or con-arguments: the evaluation bias appeared over the whole polarity range of pro-arguments and increased with increasingly extreme polarity. This clear-cut pattern appeared only on the pro-argument side. For the con-arguments, in contrast, the evaluation bias did not show such a systematic picture.
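
    As a schematic of the statistical idea: a piecewise (segmented) regression lets the slope of the rating-polarity relation change at a knot, here placed at neutral polarity. A toy sketch with invented data (the studies themselves use growth curve models with random effects, not this simplified least-squares version):

```python
# Piecewise linear fit with a known knot at polarity 0: the slope may
# differ for con (<0) and pro (>0) arguments. Data are invented.
import numpy as np

rng = np.random.default_rng(1)
polarity = np.linspace(-3, 3, 120)                     # con ... pro
rating = (3 + 0.1 * polarity + 0.5 * np.maximum(polarity, 0)
          + rng.normal(0, 0.3, polarity.size))

# Design matrix: intercept, base slope, extra slope beyond the knot.
X = np.column_stack([np.ones_like(polarity), polarity,
                     np.maximum(polarity, 0.0)])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
print("slope (con side):", beta[1], "slope (pro side):", beta[1] + beta[2])
```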

  2. ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual

    Energy Technology Data Exchange (ETDEWEB)

    Goel, Supriya [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Eley, Charles [Eley and Associates, Hobe Sound, FL (United States)

    2017-09-29

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.

  3. Standard test method for measurement of web/roller friction characteristics

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers the simulation of a roller/web transport tribosystem and the measurement of the static and kinetic coefficient of friction of the web/roller couple when sliding occurs between the two. The objective of this test method is to provide users with web/roller friction information that can be used for process control, design calculations, and for any other function where web/roller friction needs to be known. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
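
    Tests of this type commonly recover the coefficient of friction from the tension ratio across the wrapped roller via the classical capstan (belt-friction) relation; stated schematically as background, not as a formula quoted from the standard:

```latex
% Web wrapped over a non-rotating roller with wrap angle \theta (rad);
% T_2 is the tight-side tension, T_1 the slack-side tension.
\frac{T_2}{T_1} = e^{\mu\theta}
\quad\Longrightarrow\quad
\mu = \frac{1}{\theta}\,\ln\frac{T_2}{T_1}
```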

  4. Standard Test Method for Bond Strength of Ceramic Tile to Portland Cement Paste

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers the determination of the ability of glazed ceramic wall tile, ceramic mosaic tile, quarry tile, and pavers to be bonded to portland cement paste. This test method includes both face-mounted and back-mounted tile. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  5. Standard test method for the radiochemical determination of americium-241 in soil by alpha spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This method covers the determination of americium–241 in soil by means of chemical separations and alpha spectrometry. It is designed to analyze up to ten grams of soil or other sample matrices that contain up to 30 mg of combined rare earths. This method allows the determination of americium–241 concentrations from ambient levels to applicable standards. The values stated in SI units are to be regarded as standard. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific precaution statements, see Section 10.
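
    The final activity computation in alpha-spectrometric soil methods generally has the same shape regardless of laboratory: net peak counts scaled by counting time, detector efficiency, chemical yield (typically from an added tracer such as 243Am), and sample mass. An illustrative sketch (all numbers invented, not the standard's worked example):

```python
# Generic alpha-spectrometry activity-concentration calculation.
net_counts = 1250          # counts in the 241Am peak region (assumed)
live_time = 60000.0        # counting time, s (assumed)
eff = 0.25                 # detector counting efficiency (assumed)
chem_yield = 0.82          # chemical yield from tracer recovery (assumed)
mass = 10.0                # g of soil analyzed (assumed)

activity = net_counts / (live_time * eff * chem_yield)   # Bq
print(f"241Am: {activity / mass:.2e} Bq/g soil")
```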

  6. Standard Test Method for Measuring Heat-Transfer Rate Using a Thermal Capacitance (Slug) Calorimeter

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method describes the measurement of heat transfer rate using a thermal capacitance-type calorimeter which assumes one-dimensional heat conduction into a cylindrical piece of material (slug) with known physical properties. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. Note 1—For information see Test Methods E 285, E 422, E 458, E 459, and E 511.
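
    The working relation of a slug calorimeter is worth stating: for one-dimensional conduction into a slug of known mass and specific heat, the absorbed heat flux follows from the slope of the temperature-time response. Schematically (the background form, without the standard's correction terms):

```latex
% Thermal-capacitance (slug) calorimeter: M slug mass, c_p specific
% heat, A heated face area, dT/dt slope of the linear response region.
\dot{q} = \frac{M c_p}{A}\,\frac{dT}{dt}
```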

  7. Using Peano Curves to Construct Laplacians on Fractals

    Science.gov (United States)

    Molitor, Denali; Ott, Nadia; Strichartz, Robert

    2015-12-01

    We describe a new method to construct Laplacians on fractals using a Peano curve from the circle onto the fractal, extending an idea that has been used in the case of certain Julia sets. The Peano curve allows us to visualize eigenfunctions of the Laplacian by graphing the pullback to the circle. We study in detail three fractals: the pentagasket, the octagasket and the magic carpet. We also use the method for two nonfractal self-similar sets, the torus and the equilateral triangle, obtaining appealing new visualizations of eigenfunctions on the triangle. In contrast to the many familiar pictures of approximations to standard Peano curves, which do not show self-intersections, our approximations to the Peano curves have self-intersections that play a vital role in constructing graph approximations to the fractal with explicit graph Laplacians that give the fractal Laplacian in the limit.

  8. Standard test method for determination of breaking strength of ceramic tiles by three-point loading

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2001-01-01

    1.1 This test method covers the determination of breaking strength of ceramic tiles by three-point loading. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
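
    For reference, the generic three-point flexure relation behind such breaking-strength tests (a background formula, not necessarily the exact reporting form used in the standard): for a specimen of width b and thickness d failing at center load F over a support span L,

```latex
\sigma \;=\; \frac{3\,F\,L}{2\,b\,d^{2}}
```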

  9. Standard Test Method for Preparing Aircraft Cleaning Compounds, Liquid Type, Water Base, for Storage Stability Testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers the determination of the stability in storage, of liquid, water-base chemical cleaning compounds, used to clean the exterior surfaces of aircraft. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  10. Field theoretical finite element method to provide theoretical calibration curves for the electrical direct-current potential crack-monitoring system as applied to a three-dimensional fracture mechanics specimen with surface crack

    International Nuclear Information System (INIS)

    Dietrich, R.

    1984-01-01

    The basic concepts of the finite element method are explained. The results are compared to existing calibration curves for such test piece geometries derived using experimental procedures. (orig./HP) [de]

  11. Standard Test Method for Resin Flow of Carbon Fiber-Epoxy Prepreg

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This test method covers the determination of the amount of resin flow that will take place from prepreg tape or sheet under given conditions of temperature and pressure. 1.2 The values stated in SI units are to be regarded as standard. The values in parentheses are for reference only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  12. Implementation of sum-peak method for standardization of positron emission radionuclides

    International Nuclear Information System (INIS)

    Fragoso, Maria da Conceicao de Farias; Oliveira, Mercia Liane de; Lima, Fernando Roberto de Andrade

    2015-01-01

    Positron Emission Tomography (PET) is being increasingly recognized as an important quantitative imaging tool for diagnosis and assessing response to therapy. As correct dose administration plays a crucial part in nuclear medicine, it is important that the instruments used to assay the activity of the short-lived radionuclides are calibrated accurately, with traceability to the national or international standards. The sum-peak method has been widely used for radionuclide standardization. The purpose of this study was to implement the methodology for standardization of PET radiopharmaceuticals at the Regional Center for Nuclear Sciences of the Northeast (CRCN-NE). (author)
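
    The sum-peak relation itself (the classical Brinkman-type formula for a nuclide emitting two coincident photons, given here for orientation): with A1 and A2 the full-energy peak count rates, A12 the sum-peak count rate, and T the total spectrum count rate, the source activity follows without an external efficiency calibration:

```latex
N_0 \;=\; T \;+\; \frac{A_1\,A_2}{A_{12}}
```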

  13. Standard method of test for atom percent fission in uranium fuel - radiochemical method

    International Nuclear Information System (INIS)

    Anon.

    The determination of the U at. % fission that has occurred in U fuel, from analysis of the ratio of 137Cs to U after irradiation, is described. The method is applicable to high-density, clad U fuels (metal, alloys, or ceramic compounds) in which no separation of U and Cs has occurred. The fuels are best aged for several months after irradiation in order to reduce the 13-day 136Cs activity. The fuel is dissolved and diluted to produce a solution containing a final U concentration of 100 to 1000 mg U/l. The 137Cs concentration is determined by ASTM method E 320, for Radiochemical Determination of Cesium-137 in Nuclear Fuel Solutions, and the U concentration is determined by ASTM method E 267, for Determination of Uranium and Plutonium Concentrations and Isotopic Abundances, or by ASTM method E 318, for Colorimetric Determination of Uranium by Controlled-Potential Coulometry. Calculations are given for correcting the 137Cs concentration for decay during and after irradiation. The accuracy of this method is limited, among other factors, by the experimental errors with which the fission yield and the half-life of 137Cs are known.
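
    The underlying relation is simple (a schematic form, stated for orientation): if N_Cs is the number of 137Cs atoms found (corrected for decay), Y the cumulative 137Cs fission yield, and N_U the number of U atoms measured in the same aliquot, the number of fissions is N_Cs/Y, so

```latex
\text{at.\,\% fission}
\;=\; \frac{N_{\mathrm{Cs}}/Y}{N_{\mathrm{U}} + N_{\mathrm{Cs}}/Y}\times 100
\;\approx\; \frac{N_{\mathrm{Cs}}/Y}{N_{\mathrm{U}}}\times 100
\quad\text{(low burnup)}
```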

  14. Technical note: comparison of 3 methods for analyzing areas under the curve for glucose and nonesterified fatty acids concentrations following epinephrine challenge in dairy cows.

    Science.gov (United States)

    Cardoso, F C; Sears, W; LeBlanc, S J; Drackley, J K

    2011-12-01

    The objective of the study was to compare 3 methods for calculating the area under the curve (AUC) for plasma glucose and nonesterified fatty acids (NEFA) after an intravenous epinephrine (EPI) challenge in dairy cows. Cows were assigned to 1 of 6 dietary niacin treatments in a completely randomized 6 × 6 Latin square with an extra period to measure carryover effects. Periods consisted of a 7-d (d 1 to 7) adaptation period followed by a 7-d (d 8 to 14) measurement period. On d 12, cows received an i.v. infusion of EPI (1.4 μg/kg of BW). Blood was sampled at -45, -30, -20, -10, and -5 min before EPI infusion and 2.5, 5, 10, 15, 20, 30, 45, 60, 90, and 120 min after. The AUC was calculated by incremental area, positive incremental area, and total area using the trapezoidal rule. The 3 methods resulted in different statistical inferences. When comparing the 3 methods for NEFA and glucose response, no significant differences among treatments and no interactions between treatment and AUC method were observed. For glucose and NEFA response, the method was statistically significant. Our results suggest that the positive incremental method and the total area method gave similar results and interpretation but differed from the incremental area method. Furthermore, the 3 methods evaluated can lead to different results and statistical inferences for glucose and NEFA AUC after an EPI challenge. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
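
    The three definitions compared can be made concrete in a few lines (trapezoidal rule throughout; the sampling times mirror the protocol above, while the glucose values are invented):

```python
# Total, incremental, and positive-incremental AUC for a response curve.
import numpy as np
from scipy.integrate import trapezoid

t = np.array([-45, -30, -20, -10, -5, 2.5, 5, 10, 15, 20, 30, 45, 60, 90, 120.])
y = np.array([60, 61, 60, 59, 60, 75, 88, 95, 90, 82, 74, 68, 63, 61, 60.])

baseline = y[t < 0].mean()                     # mean of pre-infusion samples
post = t >= 0
total_auc = trapezoid(y[post], t[post])        # total area
incr = y[post] - baseline
incremental_auc = trapezoid(incr, t[post])     # may include negative area
positive_incremental_auc = trapezoid(np.clip(incr, 0, None), t[post])
print(total_auc, incremental_auc, positive_incremental_auc)
```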

  15. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background Success of metabolomics as a phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability such as systematic error is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find the optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high-resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention-time-region-specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variability of the internal standard compounds and their correlation to metabolites, are first calculated from a study conducted under repeatability conditions. The method can also be used in the analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.

  16. Data Mining Methods Applied to Flight Operations Quality Assurance Data: A Comparison to Standard Statistical Methods

    Science.gov (United States)

    Stolzer, Alan J.; Halford, Carl

    2007-01-01

    In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
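
    The comparison can be reproduced in miniature with standard tooling; a sketch in this spirit (synthetic stand-in data, not FOQA data, and scikit-learn models rather than the study's specific software):

```python
# Linear regression vs. regression tree on a synthetic nonlinear target
# standing in for "fuel flow"; compare held-out correlation coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                     # stand-in flight parameters
y = (3 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] ** 2
     + rng.normal(0, 0.2, 2000))                   # nonlinear "fuel flow"

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), DecisionTreeRegressor(max_depth=8)):
    r = np.corrcoef(model.fit(Xtr, ytr).predict(Xte), yte)[0, 1]
    print(type(model).__name__, f"r = {r:.2f}")
```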

  17. Standard Test Method for Electronic Measurement for Hydrogen Embrittlement From Cadmium-Electroplating Processes

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1996-01-01

    1.1 This test method covers an electronic hydrogen detection instrument procedure for measurement of plating permeability to hydrogen. This method measures a variable related to hydrogen absorbed by steel during plating and to the hydrogen permeability of the plate during post plate baking. A specific application of this method is controlling cadmium-plating processes in which the plate porosity relative to hydrogen is critical, such as cadmium on high-strength steel. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard statement, see Section 8. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only.

  18. Standard test method for determining residual stresses by the hole-drilling strain-gage method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 Residual Stress Determination: 1.1.1 This test method specifies a hole-drilling procedure for determining residual stress profiles near the surface of an isotropic linearly elastic material. The test method is applicable to residual stress profile determinations where in-plane stress gradients are small. The stresses may remain approximately constant with depth (“uniform” stresses) or they may vary significantly with depth (“non-uniform” stresses). The measured workpiece may be “thin” with thickness much less than the diameter of the drilled hole or “thick” with thickness much greater than the diameter of the drilled hole. Only uniform stress measurements are specified for thin workpieces, while both uniform and non-uniform stress measurements are specified for thick workpieces. 1.2 Stress Measurement Range: 1.2.1 The hole-drilling method can identify in-plane residual stresses near the measured surface of the workpiece material. The method gives localized measurements that indicate the...
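
    For orientation, the uniform-stress evaluation in the familiar three-gauge rosette formulation combines the relieved strains ε1, ε2, ε3 with tabulated calibration constants Ā and B̄ (a schematic of the well-known relations, not a restatement of the standard's full procedure):

```latex
\sigma_{\max},\,\sigma_{\min} \;=\;
\frac{\varepsilon_1+\varepsilon_3}{4\bar{A}} \;\mp\;
\frac{1}{4\bar{B}}
\sqrt{(\varepsilon_3-\varepsilon_1)^2
      + (\varepsilon_1+\varepsilon_3-2\varepsilon_2)^2}
```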

  19. Bronchial histamine challenge. A combined interrupter-dosimeter method compared with a standard method

    DEFF Research Database (Denmark)

    Pavlovic, M; Holstein-Rathlou, N H; Madsen, F

    1985-01-01

    We compared the provocative concentration (PC) values obtained by two different methods of performing bronchial histamine challenge. One test was done on an APTA, an apparatus which allows simultaneous provocation with histamine and measurement of airway resistance (Rtot) by the interrupter method…

  20. A method to set up a calibration curve for the instrumented sphere IS100 to control mechanical damage during post-harvesting and handling of oranges

    Directory of Open Access Journals (Sweden)

    Giovanni Carlo Di Renzo

    2009-12-01

    Full Text Available Orange quality strictly depends on variety and on pre-harvest and post-harvest practices. Post-harvest handling in particular is responsible for fruit damage, causing quality deterioration and commercial losses, as underlined by many authors who have studied the influence of individual post-harvest operations on fruit quality. In this article, using an instrumented sphere (IS 100) similar in shape and size to a true orange, the authors show a method for controlling orange damage along the processing line. The results provide fundamental knowledge about the critical damage curve, which defines the incidence of damage during orange processing and packaging. The data show that fruit discharge (from bins or boxes) and the packaging step are the most critical operations for reducing or eliminating fruit collisions and the consequent damage.