WorldWideScience

Sample records for standard curve method

  1. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    Science.gov (United States)

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  2. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

    Full Text Available Abstract Background Currently real time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data and processing may notably influence final results. The data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real time PCR. Results We designed a procedure for data processing in relative real time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from coordinates of points where the threshold line crosses fluorescence plots obtained after the noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicas. (V) The final results are derived from the CPs' means. The CPs' variances are traced to results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for aggregation of data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that
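
    The five-step procedure above ends with CPs being converted to quantities through the standard curve's regression line. A minimal sketch of that final conversion, using hypothetical dilution data and the law of error propagation applied to the CP variance, might look like this:

```python
import math

# Hypothetical standard-curve data: mean crossing points (CPs) for a
# serial dilution of a calibrator, in log10 copies per reaction.
log_conc = [6.0, 5.0, 4.0, 3.0]
cp_std = [15.1, 18.4, 21.8, 25.2]

# Fit CP = slope * log10(conc) + intercept by ordinary least squares.
n = len(log_conc)
mx = sum(log_conc) / n
my = sum(cp_std) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(log_conc, cp_std))
         / sum((x - mx) ** 2 for x in log_conc))
intercept = my - slope * mx

def quantify(cp_mean, cp_var):
    """Convert a sample's mean CP to a log10 quantity; trace the CP
    variance to the result by the law of error propagation."""
    log_q = (cp_mean - intercept) / slope
    var_log_q = cp_var / slope ** 2   # (d log_q / d CP)^2 * var(CP)
    return log_q, var_log_q

log_q, var_log_q = quantify(20.0, 0.04)
```

Because the final quantity is a simple linear transform of the CP mean, the variance propagation reduces to dividing by the squared slope.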

  3. Mathematics of quantitative kinetic PCR and the application of standard curves.

    Science.gov (United States)

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
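
    The slope of such a standard curve also encodes the amplification efficiency; the textbook relation (not a formula quoted from this record) is E = 10^(-1/slope) - 1, with E = 1 meaning perfect doubling each cycle:

```python
# Hypothetical standard-curve slope, in cycles per log10 of target DNA.
slope = -3.322

# Standard relation between slope and amplification efficiency.
efficiency = 10 ** (-1.0 / slope) - 1.0

# A slope near -3.32 corresponds to near-perfect doubling (E close to 1).
```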

  4. Incorporating experience curves in appliance standards analysis

    International Nuclear Information System (INIS)

    Desroches, Louis-Benoit; Garbesi, Karina; Kantner, Colleen; Van Buskirk, Robert; Yang, Hung-Chia

    2013-01-01

    There exists considerable evidence that manufacturing costs and consumer prices of residential appliances have decreased in real terms over the last several decades. This phenomenon is generally attributable to manufacturing efficiency gained with cumulative experience producing a certain good, and is modeled by an empirical experience curve. The technical analyses conducted in support of U.S. energy conservation standards for residential appliances and commercial equipment have, until recently, assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. This assumption does not reflect real market price dynamics. Using price data from the Bureau of Labor Statistics, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These experience curves were incorporated into recent energy conservation standards analyses for these products. Including experience curves increases the national consumer net present value of potential standard levels. In some cases a potential standard level exhibits a net benefit when considering experience, whereas without experience it exhibits a net cost. These results highlight the importance of modeling more representative market prices.
    Highlights:
    ► Past appliance standards analyses have assumed constant equipment prices.
    ► There is considerable evidence of consistent real price declines.
    ► We incorporate experience curves for several large appliances into the analysis.
    ► The revised analyses demonstrate larger net present values of potential standards.
    ► The results imply that past standards analyses may have undervalued benefits.
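
    The experience-curve form used in analyses of this kind is typically the standard learning-curve power law, in which price falls by a fixed fraction (the "learning rate") for each doubling of cumulative production. A sketch under that assumption (the numbers are hypothetical, not from the paper):

```python
import math

def experience_price(p0, x0, x, learning_rate):
    """Price after cumulative production grows from x0 to x, assuming a
    constant fractional price drop per doubling of production."""
    b = -math.log2(1.0 - learning_rate)   # experience exponent
    return p0 * (x / x0) ** (-b)

# Two doublings at a 20% learning rate: price falls to 0.8^2 = 64%.
p = experience_price(1000.0, 1.0e6, 4.0e6, 0.20)
```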

  5. Incorporating Experience Curves in Appliance Standards Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery; Kantner, Colleen; Lekov, Alex; Meyers, Stephen; Rosenquist, Gregory; Buskirk, Robert Van; Yang, Hung-Chia; Desroches, Louis-Benoit

    2011-10-31

    The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.

  6. SCINFI II A program to calculate the standardization curve in liquid scintillation counting

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    1985-07-01

    A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for 3H and the polynomial or tabulated relations between counting efficiency and figure of merit for both 3H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem is 14C. The results of four different computation methods are compared. (Author) 17 refs.
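
    The free parameter method described above can be sketched as follows: the 3H efficiency read from the quench curve is inverted through the 3H efficiency-versus-figure-of-merit relation, and the resulting figure of merit is fed into the corresponding relation for the problem radionuclide. Both relations below are hypothetical monotone stand-ins, not SCINFI's actual polynomials:

```python
def eff_h3(m):
    """Hypothetical 3H counting efficiency vs. figure of merit M."""
    return m / (m + 30.0)

def eff_c14(m):
    """Hypothetical 14C counting efficiency vs. the same M."""
    return m / (m + 3.0)

def c14_efficiency_from_h3(e3, lo=1e-6, hi=1e6):
    """Invert eff_h3 by bisection (it is monotone in M), then evaluate
    the problem radionuclide's efficiency at that figure of merit."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if eff_h3(mid) < e3:
            lo = mid
        else:
            hi = mid
    return eff_c14(0.5 * (lo + hi))

e14 = c14_efficiency_from_h3(0.40)   # a 3H efficiency of 40% gives M = 20
```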

  7. SCINFI II A program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1985-01-01

    A code, SCINFI II, written in BASIC, has been developed to compute the efficiency-quench standardization curve for any beta radionuclide. The free parameter method has been applied. The program requires the standardization curve for 3H and the polynomial or tabulated relations between counting efficiency and figure of merit for both 3H and the problem radionuclide. The program is applied to the computation of the counting efficiency for different values of quench when the problem is 14C. The results of four different computation methods are compared. (Author) 17 refs

  8. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    Science.gov (United States)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises mainly used manual inspection methods to determine assembly quality, and cases of misjudgment occurred. For this reason, research on the standard was carried out, and automatic judgment of the press-fit curve was analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.

  9. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer processing is described for fitting of radio-immunological standard curves by means of an exponential model on a desk-top calculator. This method has been applied to a variety of radioassays and the results are in accordance with those obtained by more sophisticated models [fr
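
    As an illustration of the idea (the model and data here are hypothetical, not those of the paper), a single-exponential standard curve can be fitted on a desk-top-calculator scale by linearizing y = a*exp(-b*x) with a logarithm and applying ordinary least squares:

```python
import math

# Hypothetical RIA standard curve: bound counts vs. dose.
doses = [1.0, 2.0, 4.0, 8.0]
counts = [8000.0, 6400.0, 4100.0, 1700.0]

# Linearize ln(y) = ln(a) - b*x and fit by least squares.
n = len(doses)
ys = [math.log(c) for c in counts]
mx = sum(doses) / n
my = sum(ys) / n
b = -(sum((x - mx) * (y - my) for x, y in zip(doses, ys))
      / sum((x - mx) ** 2 for x in doses))
a = math.exp(my + b * mx)

def dose_from_counts(c):
    """Read an unknown sample's dose off the fitted standard curve."""
    return -math.log(c / a) / b
```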

  10. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    Science.gov (United States)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The performance curves generated by both methods are compared with experimental data obtained in accordance with the AMCA fan performance testing standard.

  11. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''chi-squared matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
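
    One common way to weight "with the system errors and the mass errors in a consistent way" is the effective-variance trick: fold the x (mass) uncertainty into the y weight via the current slope, then iterate the weighted fit. The sketch below uses that generic approach with hypothetical data; it is not the VA02A-based program itself:

```python
# Straight-line calibration y = p + q*x with errors in both x and y.
xs = [0.1, 0.25, 0.5, 0.75, 1.0]             # standard masses, mg
ys = [205.0, 512.0, 1003.0, 1498.0, 2010.0]  # analyzer response
sx = [0.002 * x for x in xs]                 # 0.2% mass uncertainty
sy = [5.0] * len(xs)                         # system measurement error

q = 0.0
for _ in range(20):   # weights depend on the slope, so iterate
    w = [1.0 / (sy[i] ** 2 + (q * sx[i]) ** 2) for i in range(len(xs))]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    q = (sum(w[i] * (xs[i] - mx) * (ys[i] - my) for i in range(len(xs)))
         / sum(w[i] * (xs[i] - mx) ** 2 for i in range(len(xs))))
p = my - q * mx
```

The effective weight 1/(sy² + q²·sx²) is exactly the error a y reading inherits when the x value is itself uncertain, which is why the fit must be repeated until the slope settles.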

  12. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, cubic B-spline and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  13. Scinfi, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, Scinfi, was developed, written in BASIC, to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem radionuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (author)

  14. SCINFI, a program to calculate the standardization curve in liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau Malonda, A.

    1984-01-01

    A code, SCINFI, was developed, written in BASIC, to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem radionuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (Author)

  15. Wind turbine performance: Methods and criteria for reliability of measured power curves

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, D.A. [Advanced Wind Turbines Inc., Seattle, WA (United States)

    1996-12-31

    In order to evaluate the performance of prototype turbines, and to quantify incremental changes in performance through field testing, Advanced Wind Turbines (AWT) has been developing methods and requirements for power curve measurement. In this paper, field test data is used to illustrate several issues and trends which have resulted from this work. Averaging and binning processes, data hours per wind-speed bin, wind turbulence levels, and anemometry methods are all shown to have significant impacts on the resulting power curves. Criteria are given by which the AWT power curves show a high degree of repeatability, and these criteria are compared and contrasted with current published standards for power curve measurement. 6 refs., 5 figs., 5 tabs.

  16. A non-iterative method for fitting decay curves with background

    International Nuclear Information System (INIS)

    Mukoyama, T.

    1982-01-01

    A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by use of a difference equation, and the parameters are determined by standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
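
    The linearization can be sketched directly. For y(t) = A*exp(-lam*t) + B sampled at a uniform step dt, successive readings satisfy y[i+1] = r*y[i] + B*(1 - r) with r = exp(-lam*dt), so a single linear least-squares fit of y[i+1] against y[i] recovers both the decay constant and the background with no iteration (noise-free synthetic data here):

```python
import math

dt, lam_true, A, B = 1.0, 0.25, 1000.0, 50.0
y = [A * math.exp(-lam_true * dt * i) + B for i in range(12)]

# Fit y[i+1] = r*y[i] + c by ordinary least squares (no iteration).
u, v = y[:-1], y[1:]
n = len(u)
mu, mv = sum(u) / n, sum(v) / n
r = (sum((a - mu) * (b - mv) for a, b in zip(u, v))
     / sum((a - mu) ** 2 for a in u))
c = mv - r * mu

lam = -math.log(r) / dt      # recovered decay constant
bkg = c / (1.0 - r)          # recovered constant background
```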

  17. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    Directory of Open Access Journals (Sweden)

    Tatsuhiro Gotanda

    2016-01-01

    Full Text Available Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  18. Method of construction spatial transition curve

    Directory of Open Access Journals (Sweden)

    S.V. Didanov

    2013-04-01

    Full Text Available Purpose. The movement of rail transport (rolling stock speed, traffic safety, etc.) depends largely on the quality of the track. A special role is played by the transition curve, which ensures a smooth transition from a linear to a circular section of the route. The article deals with modeling a spatial transition curve based on parabolic distributions of curvature and torsion, continuing the authors' research on spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclinations of the tangents and the deviations of the curve from the tangent plane at these points. The system of equations for the numerical method is formed from the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are obtained by finding the unknown coefficients of the parabolic distributions of curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised, and on its basis software for geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane. Practical value. The resulting curve can be applied in any sector of the economy where a smooth transition from a linear to a circular section of a spatial curved bypass must be ensured. Examples include transition curves in the construction of railway lines, roads, pipes, profiles, flat sections of the working blades of turbines and compressors, ships, planes, cars, etc.

  19. Experimental Method for Plotting S-N Curve with a Small Number of Specimens

    Directory of Open Access Journals (Sweden)

    Strzelecki Przemysław

    2016-12-01

    Full Text Available The study presents two approaches to plotting an S-N curve based on experimental results. The first approach is commonly used by researchers and presented in detail in many studies and standard documents. The model uses a linear regression whose parameters are estimated by the least squares method. A staircase method is used for an unlimited fatigue life criterion. The second model combines the S-N curve defined as a straight line with the random occurrence of the fatigue limit. A maximum likelihood method is used to estimate the S-N curve parameters. Fatigue data for C45+C steel obtained in the torsional bending test were used to compare the estimated S-N curves. For pseudo-random numbers generated by the Mersenne Twister algorithm, the S-N curve estimated from 10 experimental results with the second model predicts the fatigue life within a scatter band of a factor of 3. The result is a good approximation, especially considering the time required to plot the S-N curve.
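
    The first model above is the familiar log-log straight line fitted by least squares. A minimal sketch with hypothetical fatigue data (not the C45+C results):

```python
import math

S = [400.0, 350.0, 300.0, 250.0]   # stress amplitude, MPa
N = [4.0e4, 1.1e5, 3.9e5, 1.6e6]   # cycles to failure

# Fit log10(N) = a + b * log10(S) by ordinary least squares.
xs = [math.log10(s) for s in S]
ys = [math.log10(n) for n in N]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

def predicted_life(stress):
    """Cycles to failure predicted by the fitted S-N line."""
    return 10.0 ** (a + b * math.log10(stress))
```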

  20. Standardization of 57Co using different methods of LNMRI

    International Nuclear Information System (INIS)

    Rezende, E.A.; Lopes, R.T.; Silva, C.J. da; Poledna, R.; Silva, R.L. da; Tauhata, L.

    2015-01-01

    The activity of a 57Co solution was determined using four different LNMRI measurement methods. The solution was standardized by the live-timed anti-coincidence method and the sum-peak method; the efficiency-curve and standard-sample comparison methods were also used in this comparison. The results and their measurement uncertainties demonstrate the equivalence of these methods. As an additional contribution, the gamma emission probabilities of 57Co were also determined. (author)

  1. Comparison of power curve monitoring methods

    Directory of Open Access Journals (Sweden)

    Cambron Philippe

    2017-01-01

    Full Text Available Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In the past years, important work has been conducted on PCM. Various methodologies have been proposed, each one with interesting results. However, it is difficult to compare these methods because they have been developed using their respective data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes. Each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, have also been covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than other methodologies and that the effectiveness of the control chart depends on the type of shift observed.
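
    A minimal model-based PCM loop in the spirit described above (entirely synthetic: a toy power curve, a binned reference model, and an EWMA control chart on the residuals; none of this is any paper's exact method):

```python
import random

random.seed(1)

def true_power(ws):                 # toy turbine: cubic, capped at rated
    return min(1500.0, 0.8 * ws ** 3)

# Reference period: learn mean power per 1 m/s wind-speed bin.
model, counts = {}, {}
for _ in range(2000):
    ws = random.uniform(3.0, 12.0)
    p = true_power(ws) + random.gauss(0.0, 20.0)
    b = int(ws)
    model[b] = model.get(b, 0.0) + p
    counts[b] = counts.get(b, 0) + 1
for b in model:
    model[b] /= counts[b]

# Monitoring period: EWMA chart on the residuals; a 5% performance
# loss is injected halfway through and should eventually trip the alarm.
ewma, alarm_at = 0.0, None
for i in range(500):
    ws = random.uniform(3.0, 12.0)
    loss = 0.95 if i >= 250 else 1.0
    p = true_power(ws) * loss + random.gauss(0.0, 20.0)
    ewma = 0.9 * ewma + 0.1 * (p - model[int(ws)])
    if alarm_at is None and ewma < -15.0:
        alarm_at = i
```

The time between the injected shift and `alarm_at` is exactly the detection-time metric the abstract uses to compare methods.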

  2. Implementation of the Master Curve method in ProSACC

    Energy Technology Data Exchange (ETDEWEB)

    Feilitzen, Carl von; Sattari-Far, Iradj [Inspecta Technology AB, Stockholm (Sweden)

    2012-03-15

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of fracture toughness in the transition region releases the overconservatism that has been observed when using the ASME-KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption of this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the

  3. Implementation of the Master Curve method in ProSACC

    International Nuclear Information System (INIS)

    Feilitzen, Carl von; Sattari-Far, Iradj

    2012-03-01

    Cleavage fracture toughness data normally display a large amount of statistical scatter in the transition region. The cleavage toughness data in this region are specimen size-dependent and should be treated statistically rather than deterministically. The Master Curve methodology is a procedure for mechanical testing and statistical analysis of the fracture toughness of ferritic steels in the transition region. The methodology accounts for the temperature and size dependence of fracture toughness. Using the Master Curve methodology for evaluation of fracture toughness in the transition region releases the overconservatism that has been observed when using the ASME-KIC curve. One main advantage of the Master Curve methodology is the possibility of using small Charpy-size specimens to determine fracture toughness. A detailed description of the Master Curve methodology is given by Sattari-Far and Wallin [2005]. ProSACC is a suitable program for structural integrity assessments of components containing crack-like defects and for defect tolerance analysis. The program makes it possible to conduct assessments on deterministic or probabilistic grounds. The method utilized in ProSACC is based on the R6 method developed at Nuclear Electric plc, Milne et al [1988]. The basic assumption of this method is that fracture in a cracked body can be described by two parameters, Kr and Lr. The parameter Kr is the ratio between the stress intensity factor and the fracture toughness of the material. The parameter Lr is the ratio between the applied load and the plastic limit load of the structure. The ProSACC assessment results are therefore highly dependent on the fracture toughness value applied in the assessment. In this work, the main options of the Master Curve methodology are implemented in the ProSACC program. Different options for evaluating Master Curve fracture toughness from standard fracture toughness testing data or impact testing data are considered. In addition, the

  4. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''chi-squared matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s

  5. Elastic-plastic fracture assessment using a J-R curve by direct method

    International Nuclear Information System (INIS)

    Asta, E.P.

    1996-01-01

    In elastic-plastic evaluation methods based on J integral and tearing modulus procedures, an essential input is the material fracture resistance (J-R) curve. In order to simplify J-R determination, a direct method based on load versus load-point displacement records of single-specimen tests may be employed. This procedure has advantages such as avoiding the accuracy problems of crack growth measuring devices and reducing testing time. This paper presents a structural integrity assessment approach for ductile fracture using the J-R curve obtained by a direct method from small single-specimen fracture tests. The J-R direct method was carried out by means of a computational program developed from theoretical elastic-plastic expressions. A comparative evaluation between the direct-method J resistance curves and those obtained by the standard testing methodology on typical pressure vessel steels has been made. The J-R curves estimated by the direct method show acceptable agreement with the standard results, and the approach proposed in this study is reliable for engineering determinations. (orig.)

  6. Methods for predicting isochronous stress-strain curves

    International Nuclear Information System (INIS)

    Kiyoshige, Masanori; Shimizu, Shigeki; Satoh, Keisuke.

    1976-01-01

    Isochronous stress-strain curves show the relation between stress and total strain at a given temperature, with time as a parameter; they are drawn up from creep test results at various stress levels at a fixed temperature. The concept of isochronous stress-strain curves was proposed by McVetty in the 1930s and has been used for the design of aero-engines. Recently the high temperature characteristics of materials have been presented as isochronous stress-strain curves in the design guides for nuclear energy equipment and structures used in the high temperature creep region. It is prescribed that these curves be used as criteria for determining the design stress intensity or as data for analyzing the superposed effects of creep and fatigue. For isochronous stress-strain curves used in the design of nuclear energy equipment with very long service life, it is impractical to determine the curves directly from the results of long-time creep tests; accordingly, a method of predicting long-time stress-strain curves from short-time creep test results must be established. The method proposed by the authors, which uses creep constitutive equations taking the first and second creep stages into account, and a method using the Larson-Miller parameter were studied, and both methods were found to be reliable for the prediction. (Kako, I.)

  7. Validation of curve-fitting method for blood retention of 99mTc-GSA. Comparison with blood sampling method

    International Nuclear Information System (INIS)

    Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa

    1997-01-01

    We investigated a curve-fitting method for the rate of blood retention of 99m Tc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99m Tc-GSA scanning. After normalization of the y-intercept to 100 percent, a biexponential regression curve fitted to the precordial time-activity curve provided the percent injected dose (%ID) of 99m Tc-GSA in the blood without blood sampling. The discrepancy between the %ID obtained by the curve-fitting method and that obtained from multiple blood samples was minimal in normal volunteers: 3.1±2.1% (mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13; vs. 31.9±2.8%, n=7, p 99m Tc-GSA and the plasma retention rate for indocyanine green (r=-0.869, p 99m Tc-GSA and could be a substitute for the blood sampling method. (author)
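A biexponential fit of this kind can be sketched with standard tools. The example below is a hypothetical illustration rather than the authors' protocol: it fits a two-exponential clearance model to synthetic precordial count data with `scipy.optimize.curve_fit`, normalizes the fitted y-intercept to 100%, and reads off the %ID at 15 min.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Biexponential clearance model: a1*exp(-k1*t) + a2*exp(-k2*t)."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Hypothetical precordial time-activity data (minutes, arbitrary counts)
t = np.linspace(0.0, 30.0, 61)
rng = np.random.default_rng(0)
counts = biexp(t, 70.0, 0.30, 30.0, 0.02) * rng.normal(1.0, 0.01, t.size)

popt, _ = curve_fit(biexp, t, counts, p0=(60.0, 0.2, 40.0, 0.05), maxfev=10000)

# Normalize the fitted y-intercept (t = 0) to 100 % injected dose
intercept = biexp(0.0, *popt)
pid_15 = 100.0 * biexp(15.0, *popt) / intercept  # %ID remaining at 15 min
```

The same fitted curve can be evaluated at any time point, which is what makes it a substitute for repeated blood sampling.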

  8. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    Science.gov (United States)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. The cesium beam standards have been decreasing in cost by about 2.3% per year since 1966, and hydrogen masers by about 0.8% per year since 1978, relative to the National Aeronautics and Space Administration inflation index.
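The constant-percentage decline reported above translates into a simple compound-decay extrapolation. The sketch below assumes a hypothetical present-day price, not a figure from the study.

```python
def future_cost(current_cost, annual_decline, years):
    """Extrapolate a historical cost trend with a constant fractional
    decline per year: cost_n = cost_0 * (1 - r)**n."""
    return current_cost * (1.0 - annual_decline) ** years

# Hypothetical: a cesium standard costing $50,000 today, declining 2.3 %/yr
cost_in_10y = future_cost(50_000.0, 0.023, 10)
```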

  9. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
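The abstract relies on the SAS MCMC procedure; as a minimal, language-neutral sketch of the same idea, the code below fits a linear growth model with Student-t (heavy-tailed) errors using a hand-rolled random-walk Metropolis sampler on synthetic data. All data values and tuning constants are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical longitudinal data: 20 subjects measured on 5 occasions,
# linear growth with heavy-tailed (Student-t, df=3) errors
t = np.tile(np.arange(5), 20).astype(float)
y = 1.0 + 0.5 * t + 0.3 * rng.standard_t(df=3, size=t.size)

def log_post(theta):
    """Log-posterior: Student-t likelihood (df=3), flat priors."""
    b0, b1, log_s = theta
    resid = y - (b0 + b1 * t)
    return np.sum(stats.t.logpdf(resid, df=3, scale=np.exp(log_s)))

# Random-walk Metropolis over (intercept, slope, log scale)
theta = np.array([0.0, 0.0, 0.0])
lp = log_post(theta)
draws = []
for i in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept step
        theta, lp = prop, lp_prop
    if i >= 5000:                              # discard burn-in
        draws.append(theta)

draws = np.array(draws)
b1_mean = draws[:, 1].mean()   # posterior mean growth rate
```

Explicitly specifying the t likelihood is the point: with heavy-tailed data a normal likelihood would understate uncertainty in exactly the way the abstract warns about.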

  10. Standard gestational birth weight ranges and Curve in Yaounde ...

    African Journals Online (AJOL)

    The aim of this study was to establish standard ranges and curve of mean gestational birth weights validated by ultrasonography for the Cameroonian population in Yaoundé. This cross sectional study was carried out in the Obstetrics & Gynaecology units of 4 major hospitals in the metropolis between March 5 and ...

  11. Growth curves and the international standard: How children's growth reflects challenging conditions in rural Timor-Leste.

    Science.gov (United States)

    Spencer, Phoebe R; Sanders, Katherine A; Judge, Debra S

    2018-02-01

    Population-specific growth references are important in understanding local growth variation, especially in developing countries where child growth is poor and the need for effective health interventions is high. In this article, we use mixed longitudinal data to calculate the first growth curves for rural East Timorese children to identify where, during development, deviation from the international standards occurs. Over an eight-year period, 1,245 children from two ecologically distinct rural areas of Timor-Leste were measured a total of 4,904 times. We compared growth to the World Health Organization (WHO) standards using z-scores, and modeled height and weight velocity using the SuperImposition by Translation And Rotation (SITAR) method. Using the Generalized Additive Model for Location, Scale and Shape (GAMLSS) method, we created the first growth curves for rural Timorese children for height, weight and body mass index (BMI). Relative to the WHO standards, children show early-life growth faltering, and stunting throughout childhood and adolescence. The median height and weight for this population tracks below the WHO fifth centile. Males have poorer growth than females in both z-BMI (p = .001) and z-height-for-age (p = .018) and, unlike females, continue to grow into adulthood. This is the most comprehensive investigation to date of rural Timorese children's growth, and the growth curves created may potentially be used to identify future secular trends in growth as the country develops. We show significant deviation from the international standard that becomes most pronounced at adolescence, similar to the growth of other Asian populations. Males and females show different growth responses to challenging conditions in this population. © 2017 Wiley Periodicals, Inc.

  12. New method of safety assessment for pressure vessel of nuclear power plant--brief introduction of master curve approach

    International Nuclear Information System (INIS)

    Yang Wendou

    2011-01-01

    The new Master Curve Method has been called a revolutionary advance in the assessment of reactor pressure vessel integrity in the USA. This paper explains the origin, basis and standard of the Master Curve, starting from the reactor pressure-temperature limit curve which assures the safety of a nuclear power plant. Given that brittle fracture is highly susceptible to the microstructure, the theory and the test method of the Master Curve, as well as the statistical law which can be modeled using a Weibull distribution, are described in this paper. The meaning, advantage, application and importance of the Master Curve, as well as the relation between the Master Curve and nuclear power safety, are understood from the formula fitted to the fracture toughness database with the Weibull distribution model. (author)

  13. Finite-difference time-domain modeling of curved material interfaces by using boundary condition equations method

    International Nuclear Information System (INIS)

    Lu Jia; Zhou Huaichun

    2016-01-01

    To deal with the staircase approximation problem in the standard finite-difference time-domain (FDTD) simulation, the two-dimensional boundary condition equations (BCE) method is proposed in this paper. In the BCE method, the standard FDTD algorithm can be used as usual, and the curved surface is treated by adding the boundary condition equations. Thus, while maintaining the simplicity and computational efficiency of the standard FDTD algorithm, the BCE method can solve the staircase approximation problem. The BCE method is validated by analyzing near field and far field scattering properties of the PEC and dielectric cylinders. The results show that the BCE method can maintain a second-order accuracy by eliminating the staircase approximation errors. Moreover, the results of the BCE method show good accuracy for cylinder scattering cases with different permittivities. (paper)

  14. DAG expression: high-throughput gene expression analysis of real-time PCR data using standard curves for relative quantification.

    Directory of Open Access Journals (Sweden)

    María Ballester

    Full Text Available BACKGROUND: Real-time quantitative PCR (qPCR is still the gold-standard technique for gene-expression quantification. Recent technological advances of this method allow for the high-throughput gene-expression analysis, without the limitations of sample space and reagent used. However, non-commercial and user-friendly software for the management and analysis of these data is not available. RESULTS: The recently developed commercial microarrays allow for the drawing of standard curves of multiple assays using the same n-fold diluted samples. Data Analysis Gene (DAG Expression software has been developed to perform high-throughput gene-expression data analysis using standard curves for relative quantification and one or multiple reference genes for sample normalization. We discuss the application of DAG Expression in the analysis of data from an experiment performed with Fluidigm technology, in which 48 genes and 115 samples were measured. Furthermore, the quality of our analysis was tested and compared with other available methods. CONCLUSIONS: DAG Expression is a freely available software that permits the automated analysis and visualization of high-throughput qPCR. A detailed manual and a demo-experiment are provided within the DAG Expression software at http://www.dagexpression.com/dage.zip.
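Standard-curve relative quantification as described above can be sketched in a few lines: regress Ct on log quantity for a dilution series, derive the amplification efficiency, and read unknowns off the fitted line. The Ct values below are hypothetical, and the helper `relative_qty` is an illustrative name, not part of the DAG Expression software.

```python
import numpy as np

# Hypothetical standard curve: Ct values for a 10-fold dilution series
log10_qty = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # log10 copies
ct = np.array([15.1, 18.4, 21.8, 25.1, 28.5])     # measured Ct

# Fit Ct = slope * log10(quantity) + intercept
slope, intercept = np.polyfit(log10_qty, ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0         # amplification efficiency

def relative_qty(ct_sample, ct_reference):
    """Relative quantity of a target vs. a reference gene, with both
    Ct values read off the same standard curve."""
    q_sample = 10.0 ** ((ct_sample - intercept) / slope)
    q_ref = 10.0 ** ((ct_reference - intercept) / slope)
    return q_sample / q_ref
```

A slope near -3.32 corresponds to ~100 % amplification efficiency, the usual quality check on a qPCR standard curve.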

  15. Analysis of RIA standard curve by log-logistic and cubic log-logit models

    International Nuclear Information System (INIS)

    Yamada, Hideo; Kuroda, Akira; Yatabe, Tami; Inaba, Taeko; Chiba, Kazuo

    1981-01-01

    In order to improve goodness-of-fit in RIA standard curve analysis, programs for computing log-logistic and cubic log-logit models were written in BASIC using a personal computer P-6060 (Olivetti). The iterative least squares method with Taylor series expansion was applied for non-linear estimation of the logistic and log-logistic models. Here ''log-logistic'' represents Y = (a - d)/(1 + (log(X)/c)^b) + d. As weights, either 1, 1/var(Y) or 1/σ² were used in the logistic or log-logistic fits, and either Y²(1 - Y)², Y²(1 - Y)²/var(Y), or Y²(1 - Y)²/σ² were used in the quadratic or cubic log-logit fits. The term var(Y) represents the squares of pure error, and σ² represents the estimated variance calculated using the equation log(σ² + 1) = log(A) + J log(y). As indicators of goodness-of-fit, MSL/S_e², CMD% and WRV (see text) were used. Better regression was obtained in the case of alpha-fetoprotein by log-logistic than by logistic. The cortisol standard curve was much better fitted with cubic log-logit than with quadratic log-logit. The predicted precision of the AFP standard curve was below 5% with log-logistic instead of 8% with logistic analysis. The predicted precision obtained using cubic log-logit was about five times lower than that with quadratic log-logit. The importance of selecting good models in RIA data processing is stressed in conjunction with the intrinsic precision of the radioimmunoassay system indicated by the predicted precision. (author)
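A minimal sketch of fitting a four-parameter curve of this family and inverting it to read doses follows. Note the caveats: it uses the common 4PL form y = (a - d)/(1 + (x/c)^b) + d (with x rather than log x inside the power), and synthetic dose-response data; neither matches the paper's BASIC implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(x, a, b, c, d):
    """Common 4-parameter logistic: y = (a - d)/(1 + (x/c)**b) + d."""
    return (a - d) / (1.0 + (x / c) ** b) + d

# Hypothetical RIA standard curve: dose vs. bound fraction (B/B0)
dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([0.97, 0.91, 0.76, 0.52, 0.28, 0.13, 0.05])

popt, _ = curve_fit(log_logistic, dose, resp,
                    p0=(1.0, 1.0, 3.0, 0.01),
                    bounds=(0.0, [2.0, 5.0, 100.0, 0.5]))

def dose_from_response(y, a, b, c, d):
    """Invert the fitted curve to read an unknown dose off the standard."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

ed50 = dose_from_response(0.5 * (popt[0] + popt[3]), *popt)  # midpoint dose
```

Inverting the fitted standard curve is exactly how unknown samples are quantified; the "predicted precision" the abstract discusses is how the fit's residual error propagates through this inversion.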

  16. Corrections for hysteresis curves for rare earth magnet materials measured by open magnetic circuit methods

    International Nuclear Information System (INIS)

    Nakagawa, Yasuaki

    1996-01-01

    The methods for testing permanent magnets stipulated in the usual industrial standards are so-called closed magnetic circuit methods, which employ a loop tracer using an iron-core electromagnet. If the coercivity exceeds the highest magnetic field generated by the electromagnet, full hysteresis curves cannot be obtained. In the present work, magnetic fields up to 15 T were generated by a high-power water-cooled magnet, and the magnetization was measured by an induction method with an open magnetic circuit, in which the effect of the demagnetizing field must be taken into account. Various rare earth magnet materials such as sintered or bonded Sm-Co and Nd-Fe-B were provided by a number of manufacturers. Hysteresis curves for cylindrical samples 10 mm in diameter and 2 mm, 3.5 mm, 5 mm, 14 mm or 28 mm in length were measured. Correction for the demagnetizing field is rather difficult because of its non-uniformity. Roughly speaking, a mean demagnetizing factor for soft magnetic materials can be used for the correction, although the application of this factor to hard magnetic materials is hardly justified. Thus the dimensions of the sample should be specified when the data obtained by the open magnetic circuit method are used as industrial standards. (author)
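The mean-demagnetizing-factor correction discussed above amounts to shearing the measured curve by H_int = H_app - N·M (SI units). The sketch below applies it to hypothetical open-circuit data; the factor N = 0.14 is an assumed value for illustration, not one from the paper.

```python
import numpy as np

def correct_hysteresis(H_applied, M, N):
    """Shear correction for an open magnetic circuit:
    internal field H_int = H_applied - N * M (SI; N = demagnetizing factor)."""
    return np.asarray(H_applied) - N * np.asarray(M)

# Hypothetical measured branch (A/m): applied field and magnetization
H_app = np.array([-2.0e6, -1.0e6, 0.0, 1.0e6, 2.0e6])
M = np.array([-8.0e5, -7.5e5, 7.0e5, 7.8e5, 8.0e5])

# Assumed mean demagnetizing factor for a short cylinder
H_int = correct_hysteresis(H_app, M, N=0.14)
```

As the abstract notes, a single mean N is only a rough approximation for hard magnets, since the true demagnetizing field is non-uniform over the sample.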

  17. A simple method for one-loop renormalization in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2013-08-01

    We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.

  18. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes.

  19. High cycle fatigue test and regression methods of S-N curve

    International Nuclear Information System (INIS)

    Kim, D. W.; Park, J. Y.; Kim, W. G.; Yoon, J. H.

    2011-11-01

    The fatigue design curves in the ASME Boiler and Pressure Vessel Code Section III are based on the assumption that fatigue life is infinite after 10^6 cycles. This is because standard fatigue testing equipment in past decades was limited in speed to less than 200 cycles per second. Traditional servo-hydraulic machines work at frequencies up to 50 Hz. Servo-hydraulic machines working at 1000 Hz have been developed since 1997; these machines allow high frequencies, with displacements of up to ±0.1 mm and dynamic loads of ±20 kN guaranteed. The frequency of resonant fatigue test machines is 50-250 Hz. Various forced-vibration-based systems work at 500 Hz or 1.8 kHz, and rotating bending machines allow testing frequencies of 0.1-200 Hz. The main advantage of ultrasonic fatigue testing at 20 kHz is the greatly reduced testing time. Although the S-N curve is determined by experiment, the fatigue strength corresponding to a given fatigue life should be determined by a statistical method that accounts for the scatter of fatigue properties. In this report, the statistical methods for evaluating fatigue test data are investigated.
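A common statistical treatment of S-N data, sketched below on assumed data, is to fit the Basquin relation log10(N) = a + b·log10(S) by least squares and place a design curve a fixed number of standard deviations of log-life below the mean fit. This illustrates the general idea, not the report's specific procedure.

```python
import numpy as np

# Hypothetical high-cycle fatigue data: stress amplitude (MPa), cycles to failure
S = np.array([400.0, 380.0, 360.0, 340.0, 320.0, 300.0])
N = np.array([5.2e4, 1.1e5, 2.3e5, 6.0e5, 1.4e6, 3.9e6])

logS, logN = np.log10(S), np.log10(N)

# Basquin-type fit: log10(N) = a + b * log10(S)
b, a = np.polyfit(logS, logN, 1)
resid = logN - (a + b * logS)
s_logN = resid.std(ddof=2)        # scatter of log-life about the mean fit

def design_life(stress, k=2.0):
    """Design S-N curve: mean log-life minus k standard deviations."""
    return 10.0 ** (a + b * np.log10(stress) - k * s_logN)
```

Shifting by k·s_logN is what turns a best-fit curve into a lower-bound design curve; the choice of k encodes the accepted failure probability.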

  20. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: samsul-ariffin@petronas.com.my, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation will be developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R{sup 2}. The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results as compared with the other fitting methods.
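A two-term Gaussian fit with RMSE and R² goodness-of-fit statistics, as described above, can be sketched as follows; the hourly radiation values are synthetic, not the UTP measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2(x, a1, b1, c1, a2, b2, c2):
    """Two-term Gaussian model, one candidate for daily radiation profiles."""
    return (a1 * np.exp(-((x - b1) / c1) ** 2)
            + a2 * np.exp(-((x - b2) / c2) ** 2))

# Hypothetical hourly global solar radiation (W/m^2) over one day
rng = np.random.default_rng(2)
hour = np.arange(6.0, 20.0)
rad = 900.0 * np.exp(-((hour - 13.0) / 3.5) ** 2) + rng.normal(0.0, 20.0, hour.size)

popt, _ = curve_fit(gauss2, hour, rad, p0=(800, 12, 3, 100, 15, 3), maxfev=20000)

# Goodness-of-fit statistics used to rank candidate models
pred = gauss2(hour, *popt)
rmse = np.sqrt(np.mean((rad - pred) ** 2))
r2 = 1.0 - np.sum((rad - pred) ** 2) / np.sum((rad - rad.mean()) ** 2)
```

Repeating this with other candidate models (e.g. a two-term sine series) and comparing RMSE/R² reproduces the model-selection step the paper describes.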

  1. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation will be developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results as compared with the other fitting methods.

  2. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, a mathematical model of global solar radiation will be developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting method will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results as compared with the other fitting methods.

  3. An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method

    Science.gov (United States)

    Wyse, Adam E.; Babcock, Ben

    2017-01-01

    This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…

  4. An adaptive-binning method for generating constant-uncertainty/constant-significance light curves with Fermi-LAT data

    International Nuclear Information System (INIS)

    Lott, B.; Escande, L.; Larsson, S.; Ballet, J.

    2012-01-01

    Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi-Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in the interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
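The core of adaptive binning, closing each bin once it reaches a fixed number of counts so that every bin has roughly the same Poisson significance, can be sketched as below. This is a simplified illustration on hypothetical event times, not the Fermi-LAT analysis chain.

```python
import numpy as np

def adaptive_bins(times, target_n):
    """Variable-width bin edges such that each bin holds target_n events,
    giving a roughly constant relative uncertainty ~ 1/sqrt(target_n)."""
    edges = [times[0]]
    count = 0
    for t in times:
        count += 1
        if count >= target_n:
            edges.append(t)
            count = 0
    if edges[-1] != times[-1]:
        edges.append(times[-1])   # close the final, partial bin
    return np.array(edges)

# Hypothetical photon arrival times with a flare (denser events) in the middle
rng = np.random.default_rng(3)
quiet = rng.uniform(0.0, 100.0, 200)
flare = rng.uniform(45.0, 55.0, 400)
times = np.sort(np.concatenate([quiet, flare]))

edges = adaptive_bins(times, target_n=50)
widths = np.diff(edges)   # narrow bins during the flare, wide bins otherwise
```

Narrow bins automatically resolve the flare while wide bins keep the quiescent flux significant, which is exactly the information gain the abstract refers to.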

  5. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural networks based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes with several...

  6. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Full Text Available Introduction: In order to implement watershed practices to decrease soil erosion effects, the sediment output of the watershed needs to be estimated. The sediment rating curve is the most conventional tool used to estimate sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using a sediment rating curve. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads by using sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km². In this study uncertainty in suspended sediment rating curves was estimated at four stations, including Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest correlation coefficient (R²). In the GLUE methodology, different parameter sets were sampled randomly from the prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated many times (100000 to 400000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated.
    In the bootstrap methodology, the observed suspended sediment and discharge vectors were resampled with replacement B (set to
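The bootstrap branch described above can be sketched as follows: refit the rating curve log C = log a + b·log Q on B resamples (with replacement) of the gauging pairs and take percentiles of the resulting predictions. All data below are synthetic and B = 2000 is an arbitrary choice, not the study's setting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical gauging data: discharge Q (m^3/s) and sediment concentration C
Q = rng.uniform(5.0, 200.0, 80)
C = 0.05 * Q ** 1.4 * np.exp(rng.normal(0.0, 0.3, Q.size))  # power law + noise

def fit_rating(q, c):
    """Fit log(C) = log(a) + b*log(Q), the standard sediment rating curve."""
    b, log_a = np.polyfit(np.log(q), np.log(c), 1)
    return np.exp(log_a), b

# Bootstrap: resample (Q, C) pairs with replacement B times
B = 2000
preds = np.empty(B)
for i in range(B):
    idx = rng.integers(0, Q.size, Q.size)
    a_i, b_i = fit_rating(Q[idx], C[idx])
    preds[i] = a_i * 100.0 ** b_i        # predicted C at Q = 100 m^3/s

lo, hi = np.percentile(preds, [2.5, 97.5])   # 95 % confidence interval
```

The spread of `preds` reflects parameter uncertainty in the rating curve, the same quantity the GLUE branch bounds with behavioral parameter sets.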

  7. An external standard method for quantification of human cytomegalovirus by PCR

    International Nuclear Information System (INIS)

    Rongsen, Shen; Liren, Ma; Fengqi, Zhou; Qingliang, Luo

    1997-01-01

    An external standard method for PCR quantification of HCMV is reported. [α-³²P]dATP was used as a tracer. The ³²P-labelled specific amplification product was separated by agarose gel electrophoresis. A gel piece containing the specific product band was excised and counted in a plastic scintillation counter. The distribution of [α-³²P]dATP in the electrophoretic gel plate and the effectiveness of the separation between the ³²P-labelled specific product and free [α-³²P]dATP were examined. A standard curve for quantification of HCMV by PCR was established, and detection results for quality-control templates are presented. The external standard method and the electrophoresis separation effect were appraised. The results showed that the method could be used for relative quantification of HCMV. (author)

  8. An extended L-curve method for choosing a regularization parameter in electrical resistance tomography

    International Nuclear Information System (INIS)

    Xu, Yanbin; Pei, Yang; Dong, Feng

    2016-01-01

    The L-curve method is a popular regularization parameter choice method for the ill-posed inverse problem of electrical resistance tomography (ERT). However, the method cannot always determine a proper parameter for all situations. An investigation into those situations where the L-curve method failed shows that a new corner point appears on the L-curve, and the parameter corresponding to the new corner point can yield a satisfactory reconstructed solution. Thus an extended L-curve method, which determines the regularization parameter associated with either the global corner or the new corner, is proposed. Furthermore, two strategies are provided to determine the new corner: one is based on the second-order differential of the L-curve, and the other is based on the curvature of the L-curve. The proposed method is examined by both numerical simulations and experimental tests, and the results indicate that the extended method can handle the parameter choice problem even in cases where the typical L-curve method fails. Finally, in order to reduce the running time of the method, the extended method is combined with a projection method based on the Krylov subspace, which boosts the extended L-curve method. The results verify that the speed of the extended L-curve method is distinctly improved. The proposed method extends the application of the L-curve in the field of regularization parameter choice with an acceptable running time and can also be used in other kinds of tomography. (paper)
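A curvature-based corner search of the kind mentioned above can be sketched by computing the curvature of the (log ρ, log η) curve by finite differences and taking its maximum. The toy spectral Tikhonov problem below is an assumed illustration, not the ERT formulation of the paper.

```python
import numpy as np

def lcurve_corner(rho, eta, lambdas):
    """Regularization parameter at the point of maximum curvature of the
    L-curve (log residual norm rho vs. log solution norm eta)."""
    x, y = np.log(rho), np.log(eta)
    dx, dy = np.gradient(x), np.gradient(y)       # first derivatives
    ddx, ddy = np.gradient(dx), np.gradient(dy)   # second derivatives
    kappa = np.abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
    kappa[[0, -1]] = 0.0   # one-sided differences at the ends are unreliable
    return lambdas[np.argmax(kappa)]

# Toy spectral Tikhonov problem: filtered solution x_i = s_i*b_i/(s_i^2 + lam)
s = np.logspace(0, -6, 20)         # fast-decaying singular values
b = s + 1e-4                       # data coefficients: signal + "noise" floor
lambdas = np.logspace(-10, 0, 200)
rho = np.array([np.linalg.norm(lam * b / (s ** 2 + lam)) for lam in lambdas])
eta = np.array([np.linalg.norm(s * b / (s ** 2 + lam)) for lam in lambdas])

lam_star = lcurve_corner(rho, eta, lambdas)
```

When a second corner appears, as the paper observes for ERT, this global-maximum rule is exactly what needs the proposed extension: one must decide between local curvature maxima rather than take the single largest.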

  9. Qualitative Comparison of Contraction-Based Curve Skeletonization Methods

    NARCIS (Netherlands)

    Sobiecki, André; Yasan, Haluk C.; Jalba, Andrei C.; Telea, Alexandru C.

    2013-01-01

    In recent years, many new methods have been proposed for extracting curve skeletons of 3D shapes, using a mesh-contraction principle. However, it is still unclear how these methods perform with respect to each other, and with respect to earlier voxel-based skeletonization methods, from the viewpoint

  10. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    Full Text Available The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersections are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components, which is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure, which is further used to create a weighted graph for deciding the merging of curve segments. Different from existing approaches, which utilize local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problems of curve structure reconstruction from point clouds, as well as the vectorization of simple line-drawing images by reconstructing the drawn lines.
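Once points have been grouped and ordered (the hard part addressed by the Delaunay/skeleton machinery above), fitting each group reduces to a standard smoothing B-spline fit, sketched below with SciPy's `splprep` on synthetic data.

```python
import numpy as np
from scipy import interpolate

# Hypothetical ordered, noisy planar samples along one self-intersecting curve
u_true = np.linspace(0.0, 2.0 * np.pi, 200)
rng = np.random.default_rng(5)
x = np.cos(u_true) + rng.normal(0.0, 0.01, u_true.size)
y = np.sin(2.0 * u_true) + rng.normal(0.0, 0.01, u_true.size)

# Least-squares smoothing B-spline fit to the ordered point sequence;
# s bounds the sum of squared residuals
tck, u = interpolate.splprep([x, y], s=0.05)
x_fit, y_fit = interpolate.splev(np.linspace(0.0, 1.0, 400), tck)
```

Because the points are ordered along the curve, the self-intersection poses no problem for the spline itself; the paper's contribution is producing that ordering from an unstructured cloud.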

  11. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    International Nuclear Information System (INIS)

    Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang

    2016-01-01

    Melting simulation methods are of crucial importance for determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, demonstrating the robustness and efficiency of the SM method. (paper)

  12. Atlas of stress-strain curves

    CERN Document Server

    2002-01-01

    The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...

  13. Arctic curves in path models from the tangent method

    Science.gov (United States)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  14. MATHEMATICAL METHODS TO DETERMINE THE INTERSECTION CURVES OF THE CYLINDERS

    Directory of Open Access Journals (Sweden)

    POPA Carmen

    2010-07-01

    Full Text Available The aim of this paper is to establish the intersection curves between cylinders by using the Mathematica program. This can be done by introducing the derived curve equations into the Mathematica program. This paper takes into discussion three right cylinders and another one inclined at 45 degrees. The intersection curves can also be obtained by using the classical methods of descriptive geometry.

  15. Waist Circumferences of Chilean Students: Comparison of the CDC-2012 Standard and Proposed Percentile Curves

    Directory of Open Access Journals (Sweden)

    Rossana Gómez-Campos

    2015-07-01

    The measurement of waist circumference (WC) is considered to be an important means to control overweight and obesity in children and adolescents. The objectives of the study were to (a) compare the WC measurements of Chilean students with the international CDC-2012 standard and other international standards, and (b) propose a specific measurement value for the WC of Chilean students based on age and sex. A total of 3892 students (6 to 18 years old) were assessed. Weight, height, body mass index (BMI), and WC were measured. WC was compared with the CDC-2012 international standard. Percentiles were constructed based on the LMS method. Chilean males had a greater WC during infancy. Subsequently, in late adolescence, males showed values lower than those of the international standards. Chilean females demonstrated values similar to the standards until the age of 12. Subsequently, females showed lower values. The 85th and 95th percentiles were adopted as cutoff points for evaluating overweight and obesity based on age and sex. The WC of Chilean students differs from the CDC-2012 curves. The regional norms proposed are a means to identify children and adolescents with a high risk of suffering from overweight and obesity disorders.

  16. Waist Circumferences of Chilean Students: Comparison of the CDC-2012 Standard and Proposed Percentile Curves

    Science.gov (United States)

    Gómez-Campos, Rossana; Lee Andruske, Cinthya; Hespanhol, Jefferson; Sulla Torres, Jose; Arruda, Miguel; Luarte-Rocha, Cristian; Cossio-Bolaños, Marco Antonio

    2015-01-01

    The measurement of waist circumference (WC) is considered to be an important means to control overweight and obesity in children and adolescents. The objectives of the study were to (a) compare the WC measurements of Chilean students with the international CDC-2012 standard and other international standards, and (b) propose a specific measurement value for the WC of Chilean students based on age and sex. A total of 3892 students (6 to 18 years old) were assessed. Weight, height, body mass index (BMI), and WC were measured. WC was compared with the CDC-2012 international standard. Percentiles were constructed based on the LMS method. Chilean males had a greater WC during infancy. Subsequently, in late adolescence, males showed values lower than those of the international standards. Chilean females demonstrated values similar to the standards until the age of 12. Subsequently, females showed lower values. The 85th and 95th percentiles were adopted as cutoff points for evaluating overweight and obesity based on age and sex. The WC of Chilean students differs from the CDC-2012 curves. The regional norms proposed are a means to identify children and adolescents with a high risk of suffering from overweight and obesity disorders. PMID:26184250

  17. CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method

    International Nuclear Information System (INIS)

    Olson, D.G.

    1992-01-01

    1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine which produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem: maxima of 2000 data points in the calibration curve output (LSFIT), 30 input data points, and 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two.

  18. Stage-discharge curve for Guillemard Bridge streamflow station based on rating curve method using historical flood event data

    International Nuclear Information System (INIS)

    Ros, F C; Sidek, L M; Desa, M N; Arifin, K; Tosaka, H

    2013-01-01

    The purpose of stage-discharge curves varies from water quality studies to flood modelling studies, and they can also be used to project climate change scenarios. As the river bed often changes during the annual monsoon seasons, sometimes because of massive floods, the capacity of the river changes and shifting control occurs. This study proposes to use historical flood event data from 1960 to 2009 to calculate the stage-discharge curve of Guillemard Bridge located on Sg. Kelantan. Regression analysis was done to check the quality of the data and examine the correlation between the two variables, Q and H. The mean values of the two variables were then adopted to find 'a', the difference between zero gauge height and the level of zero flow, together with K and 'n', to fit the rating curve equation and finally plot the stage-discharge rating curve. Regression analysis of the historical flood data indicates that 91 percent of the original uncertainty has been explained by the analysis, with a standard error of 0.085.
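The rating curve fit described above, Q = K(H − a)^n, reduces to linear regression after taking logarithms once 'a' is fixed. A minimal Python sketch with synthetic data (illustrative values, not the Guillemard Bridge record):

```python
import math

def fit_rating_curve(stages, discharges, a):
    """Fit Q = K * (H - a)^n by linear regression on logs.

    'a' is the gauge height of zero flow; stages/discharges are
    paired observations with H > a and Q > 0. Returns (K, n)."""
    xs = [math.log(h - a) for h in stages]
    ys = [math.log(q) for q in discharges]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    n = sxy / sxx                 # slope of the log-log line
    K = math.exp(ybar - n * xbar) # exponentiated intercept
    return K, n

# Synthetic check: data generated from Q = 2.5 * (H - 0.3)^1.8
stages = [1.0, 2.0, 3.0, 4.0]
discharges = [2.5 * (h - 0.3) ** 1.8 for h in stages]
K, n = fit_rating_curve(stages, discharges, a=0.3)
```

On noise-free synthetic data the regression recovers K and n exactly; with real gaugings one would also report the standard error of the fit, as the record does.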

  19. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    Science.gov (United States)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  20. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    International Nuclear Information System (INIS)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-01-01

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials

  1. Gompertz: A Scilab Program for Estimating Gompertz Curve Using Gauss-Newton Method of Least Squares

    Directory of Open Access Journals (Sweden)

    Surajit Ghosh Dastidar

    2006-04-01

    A computer program for estimating the Gompertz curve using the Gauss-Newton method of least squares is described in detail. It is based on the estimation technique proposed in Reddy (1985). The program is developed using Scilab (version 3.1.1), a freely available scientific software package that can be downloaded from http://www.scilab.org/. Data is to be fed into the program from an external disk file which should be in Microsoft Excel format. The output will contain sample size, tolerance limit, a list of initial as well as final estimates of the parameters, standard errors, values of the Gauss-Normal equations GN1, GN2 and GN3, number of iterations, variance (σ²), Durbin-Watson statistic, goodness of fit measures such as R², D value, covariance matrix and residuals. It also displays a graphical output of the estimated curve vis-à-vis the observed curve. It is an improved version of the program proposed in Dastidar (2005).
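The Gauss-Newton iteration underlying such a program can be sketched in Python. This is a hedged illustration, not the Scilab code: the Gompertz form y = a·exp(−b·exp(−c·t)), the synthetic data, and the starting values are all invented for the example.

```python
import math

def solve3(A, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= fac * M[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def gauss_newton_gompertz(ts, ys, a, b, c, iters=50):
    """Fit y = a*exp(-b*exp(-c*t)) by Gauss-Newton least squares,
    starting from the initial guess (a, b, c)."""
    for _ in range(iters):
        JTJ = [[0.0] * 3 for _ in range(3)]  # normal equations J^T J dp = J^T r
        JTr = [0.0] * 3
        for t, y in zip(ts, ys):
            e = math.exp(-c * t)
            g = math.exp(-b * e)
            f = a * g
            grad = (g, -a * e * g, a * b * t * e * g)  # df/da, df/db, df/dc
            r = y - f
            for i in range(3):
                JTr[i] += grad[i] * r
                for j in range(3):
                    JTJ[i][j] += grad[i] * grad[j]
        dp = solve3(JTJ, JTr)
        a, b, c = a + dp[0], b + dp[1], c + dp[2]
    return a, b, c

# Synthetic data from y = 10*exp(-2*exp(-0.5*t)), fitted from a nearby guess
ts = [float(t) for t in range(10)]
ys = [10.0 * math.exp(-2.0 * math.exp(-0.5 * t)) for t in ts]
a, b, c = gauss_newton_gompertz(ts, ys, 9.0, 1.8, 0.45)
```

On this zero-residual problem Gauss-Newton converges rapidly from a reasonable starting point; real data would additionally yield the standard errors and goodness-of-fit statistics the program reports.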

  3. Wind Turbine Power Curves Incorporating Turbulence Intensity

    DEFF Research Database (Denmark)

    Sørensen, Emil Hedevang Lohse

    2014-01-01

    The performance of a wind turbine in terms of power production (the power curve) is important to the wind energy industry. The current IEC-61400-12-1 standard for power curve evaluation recognizes only the mean wind speed at hub height and the air density as relevant to the power production. The model and method are parsimonious in the sense that only a single function (the zero-turbulence power curve) and a single auxiliary parameter (the equivalent turbulence factor) are needed to predict the mean power at any desired turbulence intensity. The method requires only ten-minute statistics.

  4. METHOD TO DEVELOP THE DOUBLE-CURVED SURFACE OF THE ROOF

    Directory of Open Access Journals (Sweden)

    JURCO Ancuta Nadia

    2017-05-01

    This work presents two methods for determining the development of a double-curved surface. The aim of this paper is to show a comparative study between methods for determining the sheet metal requirements for a complex roof cover shape. The first part of the paper presents the basic sketch and information about the roof shape, and some well-known buildings which have a complex roof shape. The second part of the paper shows two methods for determining the development of the spherical roof. The graphical method is the first method used for developing the spherical shape; it uses the poly-cylindrical method to develop the double-curved surface. The second method is accomplished by using dedicated CAD software.

  5. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearization of exponential calibration curves are given. The calibration technique and a comparison of the proposed method with piecewise-linear approximation and power series expansion are given.

  6. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    Science.gov (United States)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
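The one-run standard curve idea rests on ordinary calibration-line algebra: fit signal versus spiked amount by least squares, then invert the line to back-calculate the unknown. A Python sketch with a hypothetical four-point curve (the amounts and peak areas are invented, not iDiLeu measurements):

```python
def fit_standard_curve(concs, signals):
    """Least-squares line: signal = slope * conc + intercept."""
    n = len(concs)
    cbar = sum(concs) / n
    sbar = sum(signals) / n
    slope = (sum((c - cbar) * (s - sbar) for c, s in zip(concs, signals))
             / sum((c - cbar) ** 2 for c in concs))
    intercept = sbar - slope * cbar
    return slope, intercept

def quantify(signal, slope, intercept):
    """Invert the calibration line to back-calculate an unknown amount."""
    return (signal - intercept) / slope

# Hypothetical four-point curve (amount spiked vs. peak area)
concs = [10.0, 50.0, 100.0, 200.0]
signals = [1.2e4, 6.0e4, 1.2e5, 2.4e5]
slope, intercept = fit_standard_curve(concs, signals)
unknown = quantify(9.0e4, slope, intercept)
```

With the four labeled standards and the unknown channel all read out from the same LC-MS run, this inversion is what turns the mass-difference labels into an absolute amount.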

  7. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    Science.gov (United States)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
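Fitting a circle to points of a Nyquist plot can be done with the algebraic (Kasa) least-squares fit. The abstract does not state which estimator the authors use, so treat this Python sketch as one plausible implementation of the circle-fitting step, on invented points:

```python
import math

def solve3(A, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= fac * M[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def kasa_circle_fit(points):
    """Algebraic (Kasa) circle fit: minimize sum of
    (x^2 + y^2 + D*x + E*y + F)^2 over D, E, F, then recover
    the centre (-D/2, -E/2) and radius sqrt(D^2/4 + E^2/4 - F)."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            t[i] += row[i] * rhs
            for j in range(3):
                S[i][j] += row[i] * row[j]
    D, E, F = solve3(S, t)
    cx, cy = -D / 2.0, -E / 2.0
    radius = math.sqrt(cx * cx + cy * cy - F)
    return cx, cy, radius

# Points sampled exactly on a circle of centre (1, 2) and radius 3
pts = [(1.0 + 3.0 * math.cos(t), 2.0 + 3.0 * math.sin(t))
       for t in (0.0, 1.0, 2.0, 3.0, 4.0)]
cx, cy, r = kasa_circle_fit(pts)
```

In the MASW setting the fitted points would be (Re, Im) samples of the fk-spectrum near a mode, and the angular sweep around the fitted centre is what yields the dispersion and attenuation estimates.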

  8. Solving eigenvalue problems on curved surfaces using the Closest Point Method

    KAUST Repository

    Macdonald, Colin B.

    2011-06-01

    Eigenvalue problems are fundamental to mathematics and science. We present a simple algorithm for determining eigenvalues and eigenfunctions of the Laplace-Beltrami operator on rather general curved surfaces. Our algorithm, which is based on the Closest Point Method, relies on an embedding of the surface in a higher-dimensional space, where standard Cartesian finite difference and interpolation schemes can be easily applied. We show that there is a one-to-one correspondence between a problem defined in the embedding space and the original surface problem. For open surfaces, we present a simple way to impose Dirichlet and Neumann boundary conditions while maintaining second-order accuracy. Convergence studies and a series of examples demonstrate the effectiveness and generality of our approach. © 2011 Elsevier Inc.

  9. Standardized Percentile Curves of Body Mass Index of Northeast Iranian Children Aged 25 to 60 Months

    Science.gov (United States)

    Emdadi, Maryam; Safarian, Mohammad; Doosti, Hassan

    2011-01-01

    Objective Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during early important months of life. Racial differences necessitate using local growth charts. This study aimed to provide standardized growth curves of body mass index (BMI) for children living in northeast Iran. Methods A total of 23730 apparently healthy boys and girls aged 25 to 60 months were recruited over 20 days from those attending community clinics for routine health checks. Anthropometric measurements were done by trained health staff using WHO methodology. The LMSP method with maximum penalized likelihood, Generalized Additive Models, the Box-Cox power exponential distribution, Akaike Information Criteria and Generalized Akaike Criteria with penalty equal to 3 [GAIC(3)], and Worm plots and Q-tests as goodness-of-fit tests were used to construct the centile reference charts. Findings The BMI centile curves for boys and girls aged 25 to 60 months were drawn utilizing a population of children living in northeast Iran. Conclusion The results of the current study demonstrate the possibility of preparing local growth charts and their importance in evaluating children's growth. Also, their differences relative to those prepared from global references reflect the necessity of preparing local charts in future studies using longitudinal data. PMID:23056770
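Centile curves built with the LMS family reduce, at a given age, to three parameters L, M and S; a measurement is converted to a z-score (and hence a centile) with the standard LMS formula sketched below. The example numbers are hypothetical, not from the Iranian reference:

```python
import math

def lms_zscore(x, L, M, S):
    """Standard LMS transform: z = ((x/M)^L - 1) / (L*S) for L != 0,
    and z = ln(x/M) / S in the limit L -> 0."""
    if abs(L) > 1e-12:
        return ((x / M) ** L - 1.0) / (L * S)
    return math.log(x / M) / S

def zscore_to_centile(z):
    """Normal CDF expressed via erf, as a percentage."""
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical age-specific parameters for BMI: L=-1.6, M=16.0, S=0.08
z = lms_zscore(16.0, -1.6, 16.0, 0.08)   # a child exactly at the median
centile = zscore_to_centile(z)
```

A child whose BMI equals the age-specific median M lands at z = 0, i.e. the 50th centile; the 85th and 95th centile cutoffs used in the waist-circumference records above are read off the same transform in reverse.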

  10. Construction of the World Health Organization child growth standards: Selection of methods for attained growth curves

    NARCIS (Netherlands)

    Borghi, E.; Onis, M. de; Garza, C.; Broeck, J. van den; Frongillo, E.A.; Grummer-Strawn, L.; Buuren, S. van; Pan, H.; Molinari, L.; Martorell, R.; Onyango, A.W.; Martines, J.C.; Pinol, A.; Siyam, A.; Victoria, C.G.; Bhan, M.K.; Araújo, C.L.; Lartey, A.; Owusu, W.B.; Bhandari, N.; Norum, K.R.; Bjoerneboe, G.-E.Aa.; Mohamed, A.J.; Dewey, K.G.; Belbase, K.; Chumlea, C.; Cole, T.; Shrimpton, R.; Albernaz, E.; Tomasi, E.; Cássia Fossati da Silveira, R. de; Nader, G.; Sagoe-Moses, I.; Gomez, V.; Sagoe-Moses, C.; Taneja, S.; Rongsen, T.; Chetia, J.; Sharma, P.; Bahl, R.; Baerug, A.; Tufte, E.; Alasfoor, D.; Prakash, N.S.; Mabry, R.M.; Al Rajab, H.J.; Helmi, S.A.; Nommsen-Rivers, L.A.; Cohen, R.J.; Heinig, M.J.

    2006-01-01

    The World Health Organization (WHO), in collaboration with a number of research institutions worldwide, is developing new child growth standards. As part of a broad consultative process for selecting the best statistical methods, WHO convened a group of statisticians and child growth experts to

  11. A Novel Method for Detecting and Computing Univolatility Curves in Ternary Mixtures

    DEFF Research Database (Denmark)

    Shcherbakov, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens

    2017-01-01

    Residue curve maps (RCMs) and univolatility curves are crucial tools for analysis and design of distillation processes. Even in the case of ternary mixtures, the topology of these maps is highly non-trivial. We propose a novel method allowing detection and computation of univolatility curves … of the generalized univolatility and unidistribution curves in the three dimensional composition – temperature state space lead to a simple and efficient algorithm of computation of the univolatility curves. Two peculiar ternary systems, namely diethylamine – chloroform – methanol and hexane – benzene …

  12. Construction of molecular potential energy curves by an optimization method

    Science.gov (United States)

    Wang, J.; Blake, A. J.; McCoy, D. G.; Torop, L.

    1991-01-01

    A technique for determining the potential energy curves for diatomic molecules from measurements of diffuse or continuum spectra is presented. It is based on a numerical procedure which minimizes the difference between the calculated spectra and the experimental measurements, and can be used in cases where other techniques, such as the conventional RKR method, are not applicable. With the aid of suitable spectral data, the associated dipole electronic transition moments can be simultaneously obtained. The method is illustrated by modeling the "longest band" of molecular oxygen to extract the E³Σu⁻ and B³Σu⁻ potential curves in analytical form.

  13. Symphysis-fundal height curve in the diagnosis of fetal growth deviations

    Directory of Open Access Journals (Sweden)

    Djacyr Magna Cabral Freire

    2010-12-01

    OBJECTIVE: To validate a new symphysis-fundal height curve for screening fetal growth deviations and to compare its performance with the standard curve adopted by the Brazilian Ministry of Health. METHODS: Observational study including a total of 753 low-risk pregnant women with gestational age above 27 weeks, from March to October 2006, in the city of João Pessoa, Northeastern Brazil. Symphysis-fundal height was measured using the standard technique recommended by the Brazilian Ministry of Health. Estimated fetal weight assessed through ultrasound using the Brazilian fetal weight chart for gestational age was the gold standard. In a subsample of 122 women, neonatal weight was measured up to seven days after the estimated fetal weight measurements, and the symphysis-fundal classification was compared with the Lubchenco growth reference curve as gold standard. Sensitivity, specificity, positive and negative predictive values were calculated. The McNemar χ² test was used for comparing the sensitivity of both symphysis-fundal curves studied. RESULTS: The sensitivity of the new curve for detecting small-for-gestational-age fetuses was 51.6%, while that of the Brazilian Ministry of Health reference curve was significantly lower (12.5%). In the subsample using neonatal weight as gold standard, the sensitivity of the new reference curve was 85.7%, while that of the Brazilian Ministry of Health was 42.9% for detecting small for gestational age. CONCLUSIONS: The diagnostic performance of the new curve for detecting small-for-gestational-age fetuses was significantly higher than that of the Brazilian Ministry of Health reference curve.
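The screening metrics reported in that record come from a 2x2 classification table. A minimal Python sketch; the table entries are hypothetical, chosen only so that the sensitivity comes out near the 51.6% quoted above:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    true/false positives and negatives against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical 2x2 table for a symphysis-fundal screening cutoff
m = screening_metrics(tp=16, fp=10, fn=15, tn=81)
```

Comparing two curves on the same subjects, as the study does with McNemar's test, then amounts to comparing the discordant classifications rather than the raw sensitivities.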

  14. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

    MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfying quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.
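The three normalization steps listed in the abstract (background subtraction, total-protein correction, spatial bias) can be caricatured in a few lines. This is an illustrative toy with invented numbers, not the published NormaCurve model, which estimates these corrections jointly within the SuperCurve framework:

```python
def normalize_spot(raw, negative_control, total_protein, spatial_factor=1.0):
    """Toy RPPA normalization in the spirit of NormaCurve:
    subtract the no-primary-antibody background, correct for the
    amount of spotted protein, and divide out an estimated
    spatial bias factor. All argument values are illustrative."""
    corrected = raw - negative_control
    return corrected / (total_protein * spatial_factor)

# Hypothetical spot: raw fluorescence 5200, background 200,
# total-protein stain 2.5 (arbitrary units), no spatial bias
value = normalize_spot(raw=5200.0, negative_control=200.0,
                       total_protein=2.5, spatial_factor=1.0)
```

The point of the real method is that these corrections are fitted from dedicated arrays (negative control, total protein stain) and spatial covariates rather than supplied by hand as here.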

  15. Automated pavement horizontal curve measurement methods based on inertial measurement unit and 3D profiling data

    Directory of Open Access Journals (Sweden)

    Wenting Luo

    2016-04-01

    A pavement horizontal curve is designed to serve as a transition between straight segments, and its presence may cause a series of driving-related safety issues for motorists. As traditional methods for curve geometry investigation are recognized to be time consuming, labor intensive, and inaccurate, this study attempts to develop a method that can automatically conduct horizontal curve identification and measurement at the network level. The digital highway data vehicle (DHDV) was utilized for data collection, in which three Euler angles, driving speed, and acceleration of the survey vehicle were measured with an inertial measurement unit (IMU). The 3D profiling data used for cross slope calibration were obtained with PaveVision3D Ultra technology at 1 mm resolution. In this study, curve identification was based on the variation of the heading angle, and the curve radius was calculated with a kinematic method, a geometry method, and a lateral acceleration method. In order to verify the accuracy of the three methods, an analysis of variance (ANOVA) test was applied using the control variable of curve radius measured by field test. Based on the measured curve radius, a curve safety analysis model was used to predict the crash rates and safe driving speeds at horizontal curves. Finally, a case study on a 4.35 km road segment demonstrated that the proposed method could efficiently conduct network level analysis.
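Two of the radius estimates named above are one-line formulas: the kinematic/lateral-acceleration view uses R = v²/a_lat, and the geometry view treats the segment as a circular arc, so R = L/Δθ for arc length L and heading change Δθ. A Python sketch with made-up numbers (the study's exact formulations may differ):

```python
def radius_kinematic(speed_mps, lateral_accel_mps2):
    """Radius from steady cornering: a_lat = v^2 / R, so R = v^2 / a_lat."""
    return speed_mps ** 2 / lateral_accel_mps2

def radius_heading(arc_length_m, heading_change_rad):
    """Radius of a circular arc: L = R * delta_theta, so R = L / delta_theta."""
    return arc_length_m / heading_change_rad

r1 = radius_kinematic(20.0, 2.0)   # 20 m/s with 2 m/s^2 lateral acceleration
r2 = radius_heading(100.0, 0.5)    # 100 m of arc over 0.5 rad of heading change
```

Both toy inputs describe the same 200 m curve, which is the consistency the study's ANOVA comparison checks on real IMU data.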

  16. A procedure for the improvement in the determination of a TXRF spectrometer sensitivity curve

    International Nuclear Information System (INIS)

    Bennun, Leonardo; Sanhueza, Vilma

    2010-01-01

    A simple procedure is proposed to determine the total reflection X-ray fluorescence (TXRF) spectrometer sensitivity curve; this procedure provides better accuracy and precision than the standard established method. It uses individual pure substances instead of vendor-certified reference calibration standards, which are expensive and offer no independent means to check their quality. This method avoids problems such as uncertainties in the determination of the sensitivity curve according to different standards. It also avoids the need for validation studies between different techniques in order to assure the quality of TXRF results. (author)
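Sensitivity-curve construction from pure single-element standards amounts to computing an elemental sensitivity S_i = I_i/c_i (count rate per concentration) and normalizing to a chosen reference element, a common convention in TXRF calibration. A hedged Python sketch; the elements and count rates are invented, and the paper's full procedure involves more than this ratio:

```python
def relative_sensitivities(intensities, concentrations, reference):
    """Element sensitivities S_i = I_i / c_i from pure-substance
    measurements, normalized to a chosen reference element."""
    S = {el: intensities[el] / concentrations[el] for el in intensities}
    ref = S[reference]
    return {el: s / ref for el, s in S.items()}

# Hypothetical count rates from pure single-element standards (10 mg/L each)
rel = relative_sensitivities(
    intensities={"Fe": 1200.0, "Ni": 1500.0, "Cu": 1800.0},
    concentrations={"Fe": 10.0, "Ni": 10.0, "Cu": 10.0},
    reference="Ni",
)
```

Interpolating such relative sensitivities across atomic number is what produces the smooth sensitivity curve used for quantification with an internal standard.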

  17. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Directory of Open Access Journals (Sweden)

    Gaetano Luglio

    2015-06-01

    Conclusion: Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon.

  18. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied for fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.

  19. Using commercial simulators for determining flash distillation curves for petroleum fractions

    Directory of Open Access Journals (Sweden)

    Eleonora Erdmann

    2008-01-01

    This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used for implementing a stationary model for flash distillation; this model was adjusted by using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of many types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required for obtaining an EFV curve. Any commercial simulator able to model petroleum can be used for the simulation (HYSYS and CHEMCAD simulators were used here). Several types of petroleum and fractions were experimentally analysed for evaluating the proposed method; this data was then put into a process simulator (according to the proposed method) to estimate the corresponding EFV curves. HYSYS- and CHEMCAD-estimated curves were compared to those produced by two traditional estimation methods (Edmister's and Maxwell's methods). Simulation-estimated curves were close to the average Edmister and Maxwell curves in all cases. The proposed method has several advantages: it avoids the need for experimentally obtaining an EFV curve, it does not depend on the type of experimental curve used to fit the model, and it enables estimation at several pressures by using just one experimental curve as data.

  20. Status on the selection and development of an embrittlement trend curve to use in ASTM standard guide E900

    International Nuclear Information System (INIS)

    Kirk, M.; Brian Hall, J.; Server, W.; Lucon, E.; Erickson, M.; Stoller, R.

    2015-01-01

ASTM E900-07, Standard Guide for Predicting Radiation-Induced Transition Temperature Shift in Reactor Vessel Materials, includes an embrittlement trend curve. The trend curve can be used to predict the effect of neutron irradiation on the embrittlement of ferritic pressure vessel steels, as quantified by the shift in the Charpy V-notch transition curve at 41 Joules of absorbed energy (ΔT41J). The current E900 trend curve was first adopted in the 2002 revision. In 2011 ASTM Subcommittee E10.02 undertook an extensive effort to evaluate the adequacy of the E900 trend curve for continued use. This paper summarizes the current status of this effort, which has produced a trend curve calibrated using a database of over 1800 ΔT41J values from the light water reactor surveillance programs in thirteen countries. (authors)

  1. A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers

    International Nuclear Information System (INIS)

    Vijay, Samudra; DeCarolis, Joseph F.; Srivastava, Ravi K.

    2010-01-01

This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology cost and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NOx control configurations on a large subset of the existing coal-fired, utility-owned boilers in the US. The resultant data are used to create technology-specific marginal abatement cost curves (MACCs) and also serve as input to an integer linear program, which minimizes system-wide control costs by finding the optimal distribution of NOx controls across the modeled boilers under an emission constraint. The result is a single optimized MACC that accounts for detailed, boiler-specific information related to NOx retrofits. Because the resultant MACCs do not take into account regional differences in air-quality standards or pre-existing NOx controls, the results should not be interpreted as a policy prescription. The general method as well as the NOx-specific results presented here should be of significant value to modelers and policy analysts who must estimate the costs of pollution reduction.
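
At its core, MACC construction reduces to ranking abatement options by marginal cost and accumulating reductions. A minimal sketch with entirely hypothetical boiler data (the paper's integer linear program additionally enforces one control per boiler and an emission constraint):

```python
def build_macc(options):
    """Rank abatement options by marginal cost ($/ton of NOx removed)
    and accumulate reductions, yielding the MACC as a list of
    (cumulative tons removed, marginal $/ton) steps."""
    ranked = sorted(options, key=lambda o: o[2] / o[3])
    curve, cum = [], 0.0
    for boiler, control, cost, tons in ranked:
        cum += tons
        curve.append((cum, cost / tons))
    return curve

# (boiler, control, annualized cost in $, NOx removed in tons) -- made up
options = [("A", "LNB", 100.0, 10.0),
           ("B", "SCR", 900.0, 30.0),
           ("C", "SNCR", 200.0, 10.0)]
print(build_macc(options))  # [(10.0, 10.0), (20.0, 20.0), (50.0, 30.0)]
```

Reading the curve left to right gives the cheapest path to any cumulative reduction target.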

  2. SCINFI, a program to calculate the standardization curve in liquid scintillation counting; SCINFI, un programa para calcular la curva de calibracion eficiencia-extincion en centelleo liquido

    Energy Technology Data Exchange (ETDEWEB)

    Grau Carles, A.; Grau Malonda, A.

    1984-07-01

A code, SCINFI, written in BASIC, was developed to compute the efficiency-quench standardization curve for any radionuclide. The program requires the standardization curve for 3H and the polynomial relations between counting efficiency and figure of merit for both 3H and the problem nuclide (e.g. 14C). The program is applied to the computation of the efficiency-quench standardization curve for 14C. Five different liquid scintillation spectrometers and two scintillator solutions have been checked. The computation results are compared with the experimental values obtained with a set of 14C standardized samples. (Author)
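
The efficiency-tracing idea behind such a program can be sketched as follows; the exponential efficiency functions and curve values below are invented placeholders, not the SCINFI polynomials:

```python
import numpy as np

def eff_h3(M):    # hypothetical 3H efficiency vs. figure of merit
    return 1.0 - np.exp(-M / 2.0)

def eff_c14(M):   # hypothetical 14C efficiency vs. figure of merit
    return 1.0 - np.exp(-M / 0.4)

def invert(f, target, lo=1e-6, hi=50.0, n=60):
    """Bisection inverse of a monotonically increasing function."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Measured 3H standardization curve: quench indicator -> 3H efficiency.
h3_curve = {10: 0.55, 20: 0.45, 30: 0.35}
# Map each point through the shared figure of merit to a 14C efficiency.
c14_curve = {q: eff_c14(invert(eff_h3, e)) for q, e in h3_curve.items()}
```

Each quench point of the 3H curve is converted to the common figure of merit and then evaluated on the 14C relation, yielding the 14C efficiency-quench curve without 14C quenched standards.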

  3. A simple preparation of calibration curve standards of 134Cs and 137Cs by serial dilution of a standard reference material

    International Nuclear Information System (INIS)

    Labrecque, J.J.; Rosales, P.A.

    1990-01-01

Two sets of calibration standards for 134Cs and 137Cs were prepared by small serial dilutions of a natural matrix standard reference material, IAEA-154 whey powder. The first set was intended to screen imported milk powders which were suspected to be contaminated with 134Cs and 137Cs. Their concentrations ranged from 40 to 400 Bq/kg. The other set of calibration standards was prepared to measure the environmental levels of 137Cs in commercial Venezuelan milk powders. Their concentrations ranged from 3 to 10 Bq/kg of 137Cs. The accuracy of these calibration curves was checked by IAEA-152 and A-14 milk powders. Their measured values were in good agreement with their certified values. Finally, it is shown that these preparation techniques using serial dilution of a standard reference material were simple, rapid, precise, accurate and cost-effective. (author) 5 refs.; 5 figs.; 3 tabs

  4. Application of numerical methods in spectroscopy : fitting of the curve of thermoluminescence

    International Nuclear Information System (INIS)

    RANDRIAMANALINA, S.

    1999-01-01

The method of non-linear least squares is one of the mathematical tools most widely employed in spectroscopy; it is used for the determination of the parameters of a model. On the other hand, the spline function is among the fitting functions that introduce the smallest error; it is used for the calculation of the area under the curve. We present an application of these methods, with details of the corresponding algorithms, to the fitting of the thermoluminescence curve. [fr]
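
A compact non-linear least squares routine of the kind referred to might look like this; the Gaussian peak model is a deliberate simplification of a real first-order glow peak, and all names are illustrative:

```python
import numpy as np

def gauss_newton(f, p0, x, y, n_iter=100):
    """Non-linear least squares by Gauss-Newton iteration with a
    central-difference numerical Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - f(x, p)                      # residuals
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (f(x, p + dp) - f(x, p - dp)) / (2.0 * dp[j])
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

# Synthetic, noise-free "glow peak" modelled here as a Gaussian
peak = lambda x, p: p[0] * np.exp(-((x - p[1]) / p[2]) ** 2)
x = np.linspace(0.0, 10.0, 200)
y = peak(x, np.array([5.0, 4.0, 1.5]))
p_fit = gauss_newton(peak, [4.5, 3.8, 1.3], x, y)
```

With a reasonable starting guess the iteration recovers the generating parameters; the spline-based area computation mentioned in the abstract would then be applied to the fitted curve.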

  5. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

Thermoluminescence dosimeters (TLDs), especially of LiF:Mg,Ti material, are among the most practical personal dosimeters known to date. Measuring doses under 100 uGy with a TLD reader at a high precision level is very difficult, so software analysis is used to improve the reader's precision. The objective of this research is to compare three TL glow curve analysis methods for doses between 5 and 250 uGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between preselected temperature limits, and the background signal is estimated by a second readout following the first. The second method is deconvolution: the glow curve is separated mathematically into four peaks, dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also deconvolution, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 improves reproducibility sixfold over manual analysis at a dose of 20 uGy, and reduces the MMD to 10 uGy, rather than 60 uGy with manual analysis or 20 uGy with the peak 5 area method. As for linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose response curve over the entire dose range.

  6. THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING

    Directory of Open Access Journals (Sweden)

    M.T. Adithia

    2015-01-01

Full Text Available The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, then the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peaks are significant. We conclude that, using the Gaussian curve fitting method, the subjective qualification of peak significance can be made objective, so that better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak size, especially width and height, on the significance of a particular peak.

  7. Using Peano Curves to Construct Laplacians on Fractals

    Science.gov (United States)

    Molitor, Denali; Ott, Nadia; Strichartz, Robert

    2015-12-01

We describe a new method to construct Laplacians on fractals using a Peano curve from the circle onto the fractal, extending an idea that has been used in the case of certain Julia sets. The Peano curve allows us to visualize eigenfunctions of the Laplacian by graphing the pullback to the circle. We study in detail three fractals: the pentagasket, the octagasket and the magic carpet. We also use the method for two nonfractal self-similar sets, the torus and the equilateral triangle, obtaining appealing new visualizations of eigenfunctions on the triangle. In contrast to the many familiar pictures of approximations to standard Peano curves, which do not show self-intersections, our approximations to the Peano curves have self-intersections that play a vital role in constructing graph approximations to the fractal, with explicit graph Laplacians that give the fractal Laplacian in the limit.

  8. Accurate determination of arsenic in arsenobetaine standard solutions of BCR-626 and NMIJ CRM 7901-a by neutron activation analysis coupled with internal standard method.

    Science.gov (United States)

    Miura, Tsutomu; Chiba, Koichi; Kuroiwa, Takayoshi; Narukawa, Tomohiro; Hioki, Akiharu; Matsue, Hideaki

    2010-09-15

Neutron activation analysis (NAA) coupled with an internal standard method was applied for the determination of As in the certified reference materials (CRMs) of arsenobetaine (AB) standard solutions to verify their certified values. Gold was used as an internal standard to compensate for the difference in neutron exposure within an irradiation capsule and to improve the sample-to-sample repeatability. Application of the internal standard method also significantly improved the linearity of the calibration curve up to 1 microg of As. The analytical reliability of the proposed method was evaluated by k(0)-standardization NAA. The analytical results for As in the AB standard solutions BCR-626 and NMIJ CRM 7901-a were (499 +/- 55) mg kg(-1) (k=2) and (10.16 +/- 0.15) mg kg(-1) (k=2), respectively. These values were found to be 15-20% higher than the certified values. The between-bottle variation of BCR-626 was much larger than the expanded uncertainty of the certified value, although that of NMIJ CRM 7901-a was almost negligible. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  9. Anatomical curve identification

    Science.gov (United States)

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three dimensions, the ability to recognise the relevant boundary, and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components; the second by suitable assessment of curvature; and the third by change-point detection. P-spline smoothing is used as an integral part of the methods, but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  10. A method to enhance the curve negotiation performance of HTS Maglev

    Science.gov (United States)

    Che, T.; Gou, Y. F.; Deng, Z. G.; Zheng, J.; Zheng, B. T.; Chen, P.

    2015-09-01

High temperature superconducting (HTS) Maglev has attracted more and more attention due to its special self-stable characteristic, and much work has been done to bring it to practical application, but research on curve negotiation has not been systematic or comprehensive. In this paper, we focused on the change of the lateral displacements of the Maglev vehicle when going through curves at different velocities, and studied the change of the electromagnetic forces through experimental methods. Experimental results show that setting an appropriate initial eccentric distance (ED), the distance between the center of the bulk unit and the center of the permanent magnet guideway (PMG), when cooling the bulks is favorable for the Maglev system's curve negotiation. This work provides useful suggestions for improving the curve negotiation performance of the HTS Maglev system.

  11. Development of test practice requirements for a standard method on fracture toughness testing in the transition range

    International Nuclear Information System (INIS)

    McCabe, D.E.; Zerbst, U.; Heerens, J.

    1993-01-01

This report covers the resolution of several issues that are relevant to the ductile-to-brittle transition range of structural steels. One of these issues was to compare a statistical, weakest-link method against constraint-based data adjustment methods for modeling specimen size effects on fracture toughness. Another was to explore the concept of a universal transition temperature curve shape (Master Curve). Data from a Materials Properties Council round robin activity were used to test the proposals empirically. The findings of this study feed into an activity for the development of a draft standard test procedure ''Test Practice for Fracture Toughness in the Transition Range''. (orig.) [de]
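
For orientation, the Master Curve concept mentioned here expresses the median fracture toughness of a 1T specimen through a single reference temperature T0. A sketch using the familiar ASTM E1921 form, quoted from memory and worth verifying against the standard:

```python
from math import exp

def kjc_median(T, T0):
    """Master Curve median fracture toughness K_Jc(med), in MPa*sqrt(m),
    at temperature T (deg C) for reference temperature T0 (deg C)."""
    return 30.0 + 70.0 * exp(0.019 * (T - T0))

print(kjc_median(-50.0, -50.0))  # 100.0 at T = T0, by construction
```

The single shape parameter T0 is what makes the "universal curve shape" hypothesis testable against multi-laboratory data.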

  12. A new method for measuring coronary artery diameters with CT spatial profile curves

    International Nuclear Information System (INIS)

    Shimamoto, Ryoichi; Suzuki, Jun-ichi; Yamazaki, Tadashi; Tsuji, Taeko; Ohmoto, Yuki; Morita, Toshihiro; Yamashita, Hiroshi; Honye, Junko; Nagai, Ryozo; Akahane, Masaaki; Ohtomo, Kuni

    2007-01-01

Purpose: Coronary artery vascular edge recognition on computed tomography (CT) angiograms is influenced by window parameters. A noninvasive method for vascular edge recognition independent of window setting, using multi-detector row CT, was contrived, and its feasibility and accuracy were estimated by intravascular ultrasound (IVUS). Methods: Multi-detector row CT was performed to obtain 29 CT spatial profile curves by setting a line cursor across short-axis coronary angiograms processed by multi-planar reconstruction. IVUS was also performed to determine the reference coronary diameter. The IVUS diameter was fitted horizontally between two points on the upward and downward slopes of the profile curves, and the Hounsfield number was measured at the fitted level to test seven candidate indexes for the definition of the intravascular coronary diameter. The best index should show the best agreement with the IVUS diameter. Results: Of the seven candidates, agreement was best (16 ± 11%) when the ratios of the Hounsfield number at the level of the IVUS diameter over that at the peak of the profile curve were used, with water and with fat as the background tissue. These edge definitions were achieved by cutting the horizontal distance across the curve at the level defined by a ratio of 0.41 for water background and 0.57 for fat background. Conclusions: Vascular edge recognition of the coronary artery with CT spatial profile curves was feasible, and the contrived method could define the coronary diameter with reasonable agreement.
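
The cut-at-a-fixed-ratio-of-peak edge definition amounts to measuring a chord width on the profile curve. A sketch on a synthetic Gaussian profile (the function name and data are illustrative, not the paper's):

```python
import numpy as np

def width_at_fraction(x, y, frac):
    """Chord width of a profile where it crosses frac * peak, with linear
    interpolation between samples; endpoints must lie below threshold."""
    thr = frac * y.max()
    above = y >= thr
    i = int(np.argmax(above))                      # first sample above
    j = len(y) - 1 - int(np.argmax(above[::-1]))   # last sample above
    xl = np.interp(thr, [y[i - 1], y[i]], [x[i - 1], x[i]])
    xr = np.interp(thr, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return xr - xl

x = np.linspace(-4.0, 4.0, 8001)                   # mm across the vessel
profile = np.exp(-x ** 2)                          # synthetic CT profile
fwhm = width_at_fraction(x, profile, 0.5)          # ~1.665 for this Gaussian
```

In the paper's scheme the fraction would be 0.41 against a water background or 0.57 against fat, rather than the half-maximum used here.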

  13. Effects of different premature chromosome condensation method on dose-curve of 60Co γ-ray

    International Nuclear Information System (INIS)

    Guo Yicao; Yang Haoxian; Yang Yuhua; Li Xi'na; Huang Weixu; Zheng Qiaoling

    2012-01-01

Objective: To study the effects of the traditional and the improved premature chromosome condensation (PCC) methods on the dose-effect curve for 60Co γ-rays, in order to choose a rapid and accurate biological dose estimation method for accident emergencies. Methods: Cubital venous blood was collected from 3 healthy males (23 to 28 years old) and irradiated with 0, 1.0, 5.0, 10.0, 15.0 and 20.0 Gy of 60Co γ-rays (absorbed dose rate: 0.635 Gy/min). The dose-effect relations were observed for two incubation times (50 hours and 60 hours) with the traditional method and with the improved method. The dose-effect curves were then used to verify an exposure of 10.0 Gy (absorbed dose rate: 0.670 Gy/min). Results: (1) With the traditional method and 50-hour culture, the difference in PCC cell counts between 15.0 Gy and 20.0 Gy was not statistically significant, but the differences were significant for the traditional method with 60-hour culture and for the improved method (50-hour and 60-hour culture); these latter 3 culture methods were used to construct dose curves. (2) For these 3 culture methods, the correlation between PCC rings and exposure dose was very strong (correlation coefficients all above 0.996, P 0.05), and the regression lines almost overlap. (3) When the 3 dose-effect curves were used to estimate the verification irradiation (10.0 Gy), the error was less than or equal to 8%, within the allowable range for biological experiments (15%). Conclusion: The dose-effect curves of the 3 culture methods can be applied to biological dose estimation for high-dose ionizing radiation injury. The improved method with 50-hour culture is the fastest and should be regarded as the first choice in accident emergencies. (authors)

  14. Fitting methods for constructing energy-dependent efficiency curves and their application to ionization chamber measurements

    International Nuclear Information System (INIS)

    Svec, A.; Schrader, H.

    2002-01-01

An ionization chamber without and with an iron liner (absorber) was calibrated with a set of radionuclide activity standards from the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber, in a defined measurement geometry, without and with the liner, and radionuclide efficiencies were calculated. A programmed calculation with an analytical efficiency function and the nonlinear regression algorithm of Microsoft (MS) Excel was used for fitting. Efficiencies from the bremsstrahlung of pure beta-particle emitters were calibrated to a 10% accuracy level. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. The method yields differences between experimental and calculated radionuclide efficiencies of the order of a few percent for most photon-emitting radionuclides.

  15. An Efficient Method for Detection of Outliers in Tracer Curves Derived from Dynamic Contrast-Enhanced Imaging

    Directory of Open Access Journals (Sweden)

    Linning Ye

    2018-01-01

Full Text Available The presence of outliers in tracer concentration-time curves derived from dynamic contrast-enhanced imaging can adversely affect the analysis of the tracer curves by model fitting. A computationally efficient method for detecting outliers in tracer concentration-time curves is presented in this study. The proposed method is based on a piecewise linear model and implemented using a robust clustering algorithm. The method is noniterative, and all the parameters are estimated automatically. To compare the proposed method with existing Gaussian-model-based and robust-regression-based methods, simulation studies were performed by simulating tracer concentration-time curves using the generalized Tofts model and kinetic parameters derived from different tissue types. Results show that the proposed method and the robust-regression-based method achieve better detection performance than the Gaussian-model-based method. Compared with the robust-regression-based method, the proposed method achieves similar detection performance with much faster computation.
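
A much-simplified stand-in for such residual-based outlier flagging (a single straight-line fit with a MAD threshold, not the paper's piecewise model or robust clustering) can illustrate the idea:

```python
import numpy as np

def fit_outliers(t, c, k=3.5):
    """Flag points whose residual from a straight-line fit deviates from
    the median residual by more than k robust (MAD-based) standard
    deviations."""
    a, b = np.polyfit(t, c, 1)
    r = c - (a * t + b)
    d = np.abs(r - np.median(r))
    mad = np.median(d)
    return d > k * 1.4826 * mad

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 101)                 # acquisition times
c = 2.0 * t + rng.normal(0.0, 0.05, t.size)     # synthetic tracer curve
c[50] += 5.0                                    # inject one artefact
flags = fit_outliers(t, c)
```

The injected spike is flagged while the noisy but well-behaved samples are not; real tracer curves would need the piecewise model the paper proposes.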

  16. Trajectory Optimization of Spray Painting Robot for Complex Curved Surface Based on Exponential Mean Bézier Method

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2017-01-01

Full Text Available Automated tool trajectory planning for spray painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new method of trajectory optimization for spray painting robots based on the exponential mean Bézier method. The definition and three theorems of exponential mean Bézier curves are discussed. Then a spatial painting path generation method based on exponential mean Bézier curves is developed. A new, simple algorithm for trajectory optimization on complex curved surfaces is introduced, with a golden section search used to calculate the optimal values. The experimental results illustrate that the exponential mean Bézier curves enhance the flexibility of path planning and that the trajectory optimization algorithm achieves satisfactory performance. The method can also be extended to other applications.
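
The golden section search named above is a standard one-dimensional optimizer for unimodal objectives; a minimal sketch:

```python
from math import sqrt

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    invphi = (sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

x_min = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)   # ~2.0
```

Each iteration shrinks the bracketing interval by the golden ratio, so no derivative of the painting-cost objective is needed.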

  17. Study and program implementation of transient curves' piecewise linearization

    International Nuclear Information System (INIS)

    Shi Yang; Zu Hongbiao

    2014-01-01

Background: Transient curves are essential for the stress analysis of related equipment in a nuclear power plant (NPP). The actual operating data or design transient data of an NPP usually consist of a large number of data points with very short time intervals. To simplify the analysis, transient curves are generally piecewise linearized in advance; up to now this has been done manually. Purpose: The aim is to develop a method for the piecewise linearization of transient curves and to implement it by programming. Methods: First, the fitting line for a number of data points is obtained by the least squares method. A segment of the fitting line is closed when the accumulated linearization error exceeds a preset limit as points are added. Linearization of the subsequent data points then restarts from the last point of the preceding curve segment to obtain the next segment in the same way, continuing until the final data point. Finally, junction points are averaged to connect the segments. Results: A computer program named PLTC (Piecewise Linearization for Transient Curves) was implemented and verified by linearizing the standard sine curve and typical transient curves of an NPP. Conclusion: The method and the PLTC program are well suited to the piecewise linearization of transient curves, improving both efficiency and precision. (authors)
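
The segment-growing step can be sketched as follows (illustrative code, not the PLTC program; the junction-point averaging step is omitted):

```python
import numpy as np

def piecewise_linearize(t, v, max_err):
    """Grow each segment point by point; close it at the previous point
    when the least-squares line's maximum deviation exceeds max_err.
    Returns the indices of the segment breakpoints."""
    breaks = [0]
    for i in range(2, len(t)):
        seg = slice(breaks[-1], i + 1)
        a, b = np.polyfit(t[seg], v[seg], 1)
        if np.max(np.abs(v[seg] - (a * t[seg] + b))) > max_err:
            breaks.append(i - 1)
    breaks.append(len(t) - 1)
    return breaks

t = np.linspace(0.0, 2.0, 21)
v = np.minimum(t, 2.0 - t)                   # two exactly linear pieces
print(piecewise_linearize(t, v, 1e-9))       # [0, 10, 20]
```

On this two-piece example the single interior breakpoint lands exactly at the kink; real transients trade the error limit against the number of segments.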

  18. Analysis of Indonesian educational system standard with KSIM cross-impact method

    Science.gov (United States)

    Arridjal, F.; Aldila, D.; Bustamam, A.

    2017-07-01

The results of the Programme for International Student Assessment (PISA) in 2012 show that Indonesia is in 64th position out of 65 countries in mean mathematics score. In the 2013 Learning Curve mapping, Indonesia is included in the category of the 10 countries with the lowest performance on the cognitive skills aspect, i.e. 37th position out of 40 countries. Competency is built from 3 aspects, one of which is the cognitive aspect. The poor mapping results on the cognitive aspect reflect the low competence of graduates, the output of the Indonesian National Education System (INES). INES adopts the concept of Eight Educational System Standards (EESS), one of which is the graduate competency standard, connected directly with Indonesia's students. This research aims to model INES using the KSIM cross-impact method. Linear regression models of the EESS are constructed using the national accreditation data of senior high schools in Indonesia. The results are then interpreted as impact values in the construction of the KSIM cross-impact model of INES. The construction is used to analyse the interaction of the EESS and to simulate possible public policies in the education sector, i.e. stimulating the growth of the education staff, content, process and infrastructure standards. All policy simulations were carried out with 2 methods, i.e. a multiplier impact method and a constant intervention method. The numerical simulation results show that stimulating the growth of the content standard in the KSIM cross-impact construction of the EESS is the best public policy option to maximize the growth of the graduate competency standard.
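
KSIM is Kane's 1972 cross-impact simulation, in which bounded state variables evolve under a signed impact matrix. The update rule below is our recollection of that scheme and should be checked against Kane's paper before use:

```python
import numpy as np

def ksim_step(x, A, dt=0.1):
    """One KSIM update: state variables x in (0, 1); A[i, j] is the
    impact of variable j on variable i (positive = enhancing)."""
    xp = np.abs(A) @ x                      # sum of impact magnitudes
    xs = A @ x                              # signed sum of impacts
    p = (1.0 + 0.5 * dt * (xp - xs)) / (1.0 + 0.5 * dt * (xp + xs))
    return x ** p                           # exponent < 1 pushes x upward

# Two standards: variable 1 enhances variable 0, nothing else interacts.
x0 = np.array([0.5, 0.5])
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
x1 = ksim_step(x0, A)                       # x1[0] > 0.5, x1[1] unchanged
```

Because the state stays in (0, 1), an exponent below 1 raises a variable and an exponent above 1 lowers it, which is how positive and negative cross-impacts act in this scheme.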

  19. Using commercial simulators for determining flash distillation curves for petroleum fractions

    OpenAIRE

    Eleonora Erdmann; Demetrio Humana; Samuel Franco Domínguez; Lorgio Mercado Fuentes

    2010-01-01

This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used to implement a stationary model for flash distillation; this model was adjusted by using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of many types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required...

  20. Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles

    Science.gov (United States)

    Yu, Yi-Kuo

    2018-03-01

Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to use curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches in applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate not extremely small curved areal elements, we have provided enough detail to include the higher order corrections that are needed for better accuracy when slightly larger surface elements are used.

  1. An inverse method based on finite element model to derive the plastic flow properties from non-standard tensile specimens of Eurofer97 steel

    Directory of Open Access Journals (Sweden)

    S. Knitel

    2016-12-01

Full Text Available A new inverse method was developed to derive the plastic flow properties of non-standard disk tensile specimens, which were designed to fit the irradiation rods used for spallation irradiations in the SINQ (Schweizer Spallations-Neutronen-Quelle) target at the Paul Scherrer Institute. The inverse method, which makes use of MATLAB and the finite element code ABAQUS, is based upon the reconstruction of the load-displacement curve by a succession of connected small linear segments. To do so, the experimental engineering stress/strain curve is divided into an elastic and a plastic section, and the plastic section is further divided into small segments. Each segment is then used to determine an associated pair of true stress/plastic strain values, representing the constitutive behavior. The main advantage of the method is that it does not rely on a hypothetical analytical expression for the constitutive behavior. To account for the stress/strain gradients that develop in the non-standard specimen, the stress and strain were weighted over the volume of the deforming elements. The method was validated with tensile tests carried out at room temperature on non-standard flat disk tensile specimens as well as on standard cylindrical specimens made of the reduced-activation tempered martensitic steel Eurofer97. While the two specimen geometries differed significantly in deformation localization during necking, the same true stress/strain curve was deduced from the inverse method. The potential and usefulness of the inverse method are outlined for irradiated materials that suffer a large reduction in uniform elongation.
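
Before necking, engineering and true stress/strain are related by the standard textbook conversion, which FE-based inverse methods like this one extend past the uniform-elongation limit. A sketch of that baseline relation:

```python
import numpy as np

def eng_to_true(eng_strain, eng_stress):
    """Textbook conversion, valid only up to uniform elongation (before
    necking); beyond that, FE-based inverse methods are required."""
    return np.log1p(eng_strain), eng_stress * (1.0 + eng_strain)

true_strain, true_stress = eng_to_true(np.array([0.0, 0.1]),
                                       np.array([100.0, 200.0]))
```

At 10% engineering strain the true strain is ln(1.1) ≈ 0.0953 and the true stress is 10% above the engineering value; it is precisely where this relation breaks down that the paper's segment-by-segment reconstruction takes over.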

  2. Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method ...

    Indian Academy of Sciences (India)

Abstract. The Jurkevich method is a useful method to explore periodicity in unevenly sampled observational data. In this work, we applied the method to the light curve of Cyg X-1 from 1996 to 2012, and found an interesting period of 370 days, which appears in both the low/hard and the high/soft states.
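
The Jurkevich statistic folds the data at a trial period into phase groups and sums the within-group scatter; the true period minimizes the sum. A minimal sketch on synthetic data (illustrative, not the Cyg X-1 light curve):

```python
import numpy as np

def jurkevich_vm2(t, y, period, m=10):
    """Jurkevich statistic: fold at a trial period into m phase groups
    and sum the within-group sums of squared deviations; a deep minimum
    marks a candidate period."""
    groups = (((t / period) % 1.0) * m).astype(int)
    return sum(y[groups == g].size * y[groups == g].var()
               for g in range(m) if y[groups == g].size > 1)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 50.0, 400))    # unevenly sampled epochs
y = np.sin(2.0 * np.pi * t / 2.5)           # true period: 2.5
```

Scanning `jurkevich_vm2` over a grid of trial periods and locating its minima is how the 370-day candidate period would be flagged.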

  3. ExSTA: External Standard Addition Method for Accurate High-Throughput Quantitation in Targeted Proteomics Experiments.

    Science.gov (United States)

    Mohammed, Yassene; Pan, Jingxi; Zhang, Suping; Han, Jun; Borchers, Christoph H

    2018-03-01

    Targeted proteomics using MRM with stable-isotope-labeled internal-standard (SIS) peptides is the current method of choice for protein quantitation in complex biological matrices. Better quantitation can be achieved with the internal standard-addition method, where successive increments of synthesized natural form (NAT) of the endogenous analyte are added to each sample, a response curve is generated, and the endogenous concentration is determined at the x-intercept. Internal NAT-addition, however, requires multiple analyses of each sample, resulting in increased sample consumption and analysis time. To compare the following three methods, an MRM assay for 34 high-to-moderate abundance human plasma proteins is used: classical internal SIS-addition, internal NAT-addition, and external NAT-addition-generated in buffer using NAT and SIS peptides. Using endogenous-free chicken plasma, the accuracy is also evaluated. The internal NAT-addition outperforms the other two in precision and accuracy. However, the curves derived by internal vs. external NAT-addition differ by only ≈3.8% in slope, providing comparable accuracies and precision with good CV values. While the internal NAT-addition method may be "ideal", this new external NAT-addition can be used to determine the concentration of high-to-moderate abundance endogenous plasma proteins, providing a robust and cost-effective alternative for clinical analyses or other high-throughput applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
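
The x-intercept step of any standard-addition scheme reduces to a straight-line fit; a minimal sketch with made-up numbers, not the paper's data:

```python
import numpy as np

def standard_addition_conc(added, response):
    """Fit response vs. amount of NAT standard added; the magnitude of
    the x-intercept (intercept/slope for a positive calibration line)
    estimates the endogenous concentration."""
    slope, intercept = np.polyfit(added, response, 1)
    return intercept / slope

added = np.array([0.0, 1.0, 2.0, 3.0])          # fmol NAT added (made up)
response = 2.0 * (added + 1.5)                  # endogenous level: 1.5 fmol
print(standard_addition_conc(added, response))  # ~1.5
```

In the external variant studied here, the response curve is generated once in buffer from NAT and SIS peptides and reused across samples, rather than re-measured per sample.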

  4. A preliminary study on method of saturated curve

    International Nuclear Information System (INIS)

    Cao Liguo; Chen Yan; Ao Qi; Li Huijuan

    1987-01-01

Determining the absorption coefficient of the sample directly, with matrix effect correction, is an effective method. The absorption coefficient is calculated using the relation between the characteristic X-ray intensity and the thickness of the sample (the saturated curve). The method directly reflects the features of the sample and corrects the enhancement effect under certain conditions. It is not the same as the usual method, in which the determination of the absorption coefficient of the sample is based on measuring the absorption of X-rays penetrating the sample. The sensitivity factor KI0 is discussed. The idea of determining KI0 by experiment and the quasi-absolute measurement of the absorption coefficient μ are proposed. Experimental results with correction under different conditions are shown.

  5. The 1-loop effective potential for the Standard Model in curved spacetime

    Science.gov (United States)

    Markkanen, Tommi; Nurmi, Sami; Rajantie, Arttu; Stopyra, Stephen

    2018-06-01

    The renormalisation group improved Standard Model effective potential in an arbitrary curved spacetime is computed to one-loop order in perturbation theory. The loop corrections are computed in the ultraviolet limit, which makes them independent of the choice of the vacuum state and allows the derivation of the complete set of β-functions. The potential depends on the spacetime curvature through the direct non-minimal Higgs-curvature coupling, through curvature contributions to the loop diagrams, and through the curvature dependence of the renormalisation scale. Together, these lead to a significant curvature dependence that needs to be taken into account in cosmological applications, as demonstrated here with the example of vacuum stability in de Sitter space.

  6. A three-parameter langmuir-type model for fitting standard curves of sandwich enzyme immunoassays with special attention to the α-fetoprotein assay

    NARCIS (Netherlands)

    Kortlandt, W.; Endeman, H.J.; Hoeke, J.O.O.

    In a simplified approach to the reaction kinetics of enzyme-linked immunoassays, a Langmuir-type equation y = [ax/(b + x)] + c was derived. This model proved to be superior to logit-log and semilog models in the curve-fitting of standard curves. An assay for α-fetoprotein developed in our laboratory

  7. Standard Practice for Optical Distortion and Deviation of Transparent Parts Using the Double-Exposure Method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This photographic practice determines the optical distortion and deviation of a line of sight through a simple transparent part, such as a commercial aircraft windshield or a cabin window. This practice applies to essentially flat or nearly flat parts and may not be suitable for highly curved materials. 1.2 Test Method F 801 addresses optical deviation (angular deviation) and Test Method F 2156 addresses optical distortion using grid line slope. These test methods should be used instead of Practice F 733 whenever practical. 1.3 This standard does not purport to address the safety concerns associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. Standard test method for determination of resistance to stable crack extension under low-constraint conditions

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 This standard covers the determination of the resistance to stable crack extension in metallic materials in terms of the critical crack-tip-opening angle (CTOAc), ψc, and/or the crack-opening displacement (COD), δ5, resistance curve (1). This method applies specifically to fatigue pre-cracked specimens that exhibit low constraint (crack-length-to-thickness and un-cracked ligament-to-thickness ratios greater than or equal to 4) and that are tested under slowly increasing remote applied displacement. The recommended specimens are the compact-tension, C(T), and middle-crack-tension, M(T), specimens. The fracture resistance determined in accordance with this standard is measured as ψc (critical CTOA value) and/or δ5 (critical COD resistance curve) as a function of crack extension. Both fracture resistance parameters are characterized using either single-specimen or multiple-specimen procedures. These fracture quantities are determined under the opening mode (Mode I) of loading. Influences of environment a...

  9. Light Curve Periodic Variability of Cyg X-1 using Jurkevich Method

    Indian Academy of Sciences (India)

    The Jurkevich method is a useful method to explore periodicity in the unevenly sampled observational data. In this work, we adopted the method to the light curve of Cyg X-1 from 1996 to 2012, and found that there is an interesting period of 370 days, which appears in both low/hard and high/soft states. That period may be ...

  10. Crack resistance curves determination of tube cladding material

    Energy Technology Data Exchange (ETDEWEB)

    Bertsch, J. [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland)]. E-mail: johannes.bertsch@psi.ch; Hoffelner, W. [Paul Scherrer Institut, CH-5232 Villigen PSI (Switzerland)

    2006-06-30

    Zirconium-based alloys have been used as fuel cladding material in light water reactors for many years. As claddings change their mechanical properties during service, it is essential for the assessment of mechanical integrity to provide parameters for potential rupture behaviour. Usually, fracture mechanics parameters like the fracture toughness K_IC or, for high plastic strains, the J-integral based elastic-plastic fracture toughness J_IC are employed. In claddings with a very small wall thickness, the determination of toughness requires extending the J-concept beyond the limits of the standards. In this paper a new method based on the traditional J approach is presented. Crack resistance curves (J-R curves) were created for unirradiated thin-walled Zircaloy-4 and aluminium cladding tube pieces at room temperature using the single-sample method. The procedure for creating sharp fatigue starter cracks with respect to optical recording was optimized. It is shown that the chosen test method is appropriate for the determination of complete J-R curves, including the values J_0.2 (J at 0.2 mm crack length), J_m (J corresponding to the maximum load) and the slope of the curve.

  11. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    Science.gov (United States)

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two model peptides: Ac-K5-NHMe in a 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, MeOH, and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of the acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences

  12. Creep curve modeling of hastelloy-X alloy by using the theta projection method

    International Nuclear Information System (INIS)

    Woo Gon, Kim; Woo-Seog, Ryu; Jong-Hwa, Chang; Song-Nan, Yin

    2007-01-01

    To model the creep curves of the Hastelloy-X alloy, which is being considered as a candidate material for VHTR (Very High Temperature gas-cooled Reactor) components, full creep curves were obtained by constant-load creep tests at different stress levels at 950 °C. Using the experimental creep data, the creep curves were modeled by applying the Theta projection method. A number of nonlinear least squares fitting (NLSF) analyses were carried out to establish the suitability of the four Theta parameters. The results showed that the Θ1 and Θ2 parameters could not be optimized well, showing large errors when fitting the full creep curves, whereas the Θ3 and Θ4 parameters were optimized without error. To find a suitable cutoff strain criterion, the NLSF analysis was therefore performed with various cutoff strains for all the creep curves; an optimum cutoff strain for defining the four Theta parameters accurately was found to be 3%. At the 3% cutoff strain, the predicted curves coincided well with the experimental ones. The variation of the four Theta parameters as a function of stress showed good linearity, and the creep curves were modeled well for the low stress levels. The predicted minimum creep rate showed good agreement with the experimental data. Also, for design use of the Hastelloy-X alloy, the plot of log stress versus log time to 1% strain was predicted, and the creep rate curves with time and a cutoff strain at 950 °C were constructed numerically for a wide range of stresses using the Theta projection method. (authors)
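    The shape of a Theta-projection creep curve, and the minimum creep rate it implies, can be sketched numerically. The parameter values below are illustrative placeholders, not fitted Hastelloy-X values:

```python
import numpy as np

# Theta projection model: strain(t) = th1*(1 - exp(-th2*t)) + th3*(exp(th4*t) - 1),
# where th1, th2 describe the decaying primary stage and th3, th4 the
# accelerating tertiary stage.
th1, th2, th3, th4 = 0.02, 0.05, 0.001, 0.01    # strain units and 1/h

t = np.linspace(0.0, 400.0, 400001)             # time grid (h)
strain = th1*(1.0 - np.exp(-th2*t)) + th3*(np.exp(th4*t) - 1.0)
rate = th1*th2*np.exp(-th2*t) + th3*th4*np.exp(th4*t)   # d(strain)/dt

# The minimum creep rate falls where the two exponentials balance; setting
# d(rate)/dt = 0 gives t_min = ln(th1*th2^2 / (th3*th4^2)) / (th2 + th4).
t_min_numeric = t[np.argmin(rate)]
t_min_analytic = np.log(th1*th2**2/(th3*th4**2))/(th2 + th4)
print(f"minimum creep rate at t = {t_min_numeric:.2f} h "
      f"(analytic {t_min_analytic:.2f} h)")
```

    Fitting the four Theta parameters to measured curves (the NLSF step in the abstract) would replace the placeholder values with optimized ones; the cutoff-strain criterion controls how much of the tertiary stage enters that fit.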

  13. A new method for testing pile by single-impact energy and P-S curve

    Science.gov (United States)

    Xu, Zhao-Yong; Duan, Yong-Kang; Wang, Bin; Hu, Yi-Li; Yang, Run-Hai; Xu, Jun; Zhao, Jin-Ming

    2004-11-01

    By studying the pile-formula and stress-wave methods (e.g., the CASE method), the authors propose a new method for testing piles using the single-impact energy and P-S curves. The vibration and wave figures are recorded, and the dynamic and static displacements are measured by different transducers near the pile top when the pile is struck by a heavy hammer or micro-rocket. By observing the transformation coefficient of the driving energy (total energy) and the energy consumed by wave motion and vibration, the vertical bearing capacity of a single pile is measured and calculated. Then, using the vibration wave diagram, the dynamic relation curve between the force (P) and the displacement (S) is calculated and the yield points are determined. Using static-loading tests, the dynamic results are checked and the relative constants of the dynamic-static P-S curves are determined; the settlement corresponding to the bearing capacity is then obtained. Moreover, the quality of the pile body can be judged from the form of the P-S curves.

  14. Residual stress measurement by X-ray diffraction with the Gaussian curve method and its automation

    International Nuclear Information System (INIS)

    Kurita, M.

    1987-01-01

    An X-ray technique with the Gaussian curve method and its automation are described for rapid and nondestructive measurement of residual stress. A simplified equation for measuring the stress by the Gaussian curve method is derived, because in its previous form this method required laborious calculation. The residual stress can be measured in a few minutes, depending on the material, using an automated X-ray stress analyzer with a microcomputer which was developed in the laboratory. The residual stress distribution of a partially induction-hardened and tempered (at 280 °C) steel bar was measured with the Gaussian curve method. A sharp residual tensile stress peak of 182 MPa appeared just outside the hardened region, at which fatigue failure is liable to occur.
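    The core of the Gaussian curve method, locating the diffraction peak centre by exploiting the fact that the logarithm of a Gaussian profile is a parabola, can be sketched as follows. The profile below is synthetic, with an arbitrary peak position and width, not a measured steel pattern:

```python
import numpy as np

# Synthetic diffraction profile: a Gaussian peak on the 2-theta axis.
two_theta = np.linspace(155.0, 157.0, 41)
I = 1000.0*np.exp(-(two_theta - 156.08)**2/(2.0*0.15**2))

# Gaussian curve method: ln(I) of a Gaussian is a parabola, so a quadratic
# fit to the upper part of the peak gives the centre at the vertex -b/(2a).
top = I > 0.2*I.max()
a, b, c = np.polyfit(two_theta[top], np.log(I[top]), 2)
centre = -b/(2.0*a)
print(f"peak centre: {centre:.3f} deg")   # -> 156.080 deg
```

    The residual stress then follows from how this fitted centre shifts as the specimen is tilted (the sin²ψ analysis), which is the part the automated analyzer in the abstract carries out.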

  15. A neural network driving curve generation method for the heavy-haul train

    Directory of Open Access Journals (Sweden)

    Youneng Huang

    2016-05-01

    The heavy-haul train has a series of characteristics, such as the locomotive traction properties, the longer length of train, and the nonlinear train pipe pressure during train braking. When the train is running on a continuous long and steep downgrade railway line, the safety of the train is ensured by cycle braking, which puts high demands on the driving skills of the driver. In this article, a driving curve generation method for the heavy-haul train based on a neural network is proposed. First, in order to describe the nonlinear characteristics of train braking, the neural network model is constructed and trained by practical driving data. In the neural network model, various nonlinear neurons are interconnected to work for information processing and transmission. The target value of train braking pressure reduction and release time is achieved by modeling the braking process. The equation of train motion is computed to obtain the driving curve. Finally, in four typical operation scenarios, comparing the curve data generated by the method with corresponding practical data of the Shuohuang heavy-haul railway line, the results show that the method is effective.

  16. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
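    The localized straight-line fit near short circuit, viewed as a statistical regression whose intercept uncertainty can be quantified, can be sketched with ordinary least squares (a simpler stand-in for the objective Bayesian regression of the paper; the I-V points are hypothetical):

```python
import numpy as np

# Hypothetical I-V points measured near V = 0 for a PV device.
V = np.array([-0.02, -0.01, 0.00, 0.01, 0.02, 0.03])      # volts
I = np.array([5.021, 5.012, 5.000, 4.989, 4.981, 4.969])  # amps

# Local straight-line model I = a*V + b; Isc is the intercept b at V = 0.
A = np.vstack([V, np.ones_like(V)]).T
(a, b), res, *_ = np.linalg.lstsq(A, I, rcond=None)

# 1-sigma uncertainty of the intercept from the regression covariance,
# assuming independent, identically distributed measurement noise.
s2 = res[0]/(len(V) - 2)              # residual variance
cov = s2*np.linalg.inv(A.T @ A)
print(f"Isc = {b:.4f} +/- {np.sqrt(cov[1, 1]):.4f} A")
```

    The window-selection issue raised in the abstract corresponds here to choosing which V points enter the fit: a wider window shrinks the formal intercept uncertainty but can introduce model discrepancy once the I-V curve is no longer locally straight.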

  17. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  18. Prediction Method for the Complete Characteristic Curves of a Francis Pump-Turbine

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2018-02-01

    Complete characteristic curves of a pump-turbine are essential for simulating the hydraulic transients and designing pumped storage power plants but are often unavailable in the preliminary design stage. To solve this issue, a prediction method for the complete characteristics of a Francis pump-turbine was proposed. First, based on Euler equations and the velocity triangles at the runners, a mathematical model describing the complete characteristics of a Francis pump-turbine was derived. According to multiple sets of measured complete characteristic curves, explicit expressions for the characteristic parameters of characteristic operating point sets (COPs, as functions of a specific speed and guide vane opening, were then developed to determine the undetermined coefficients in the mathematical model. Ultimately, by combining the mathematical model with the regression analysis of COPs, the complete characteristic curves for an arbitrary specific speed were predicted. Moreover, a case study shows that the predicted characteristic curves are in good agreement with the measured data. The results obtained by 1D numerical simulation of the hydraulic transient process using the predicted characteristics deviate little from the measured characteristics. This method is effective and sufficient for a priori simulations before obtaining the measured characteristics and provides important support for the preliminary design of pumped storage power plants.

  19. A note on families of fragility curves

    International Nuclear Information System (INIS)

    Kaplan, S.; Bier, V.M.; Bley, D.C.

    1989-01-01

    In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, has never been proven. The present paper proves this equivalence hypothesis mathematically. Moreover, the authors show that this equivalence hypothesis between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve.
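    The equivalence in question, that averaging a lognormal fragility family over its epistemic uncertainty reproduces the composite curve with combined beta sqrt(βR² + βU²), can be checked numerically. The capacity median and betas below are arbitrary illustrative values:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5*(1.0 + erf(x/sqrt(2.0)))

# Illustrative fragility parameters: median capacity Am, aleatory beta bR,
# epistemic beta bU (not values from the paper).
Am, bR, bU = 0.6, 0.3, 0.4
a = 0.5                       # ground-motion level of interest
x = np.log(a/Am)

# Mean of the lognormal family: average Phi((x - bU*z)/bR) over z ~ N(0,1).
z = np.linspace(-8.0, 8.0, 4001)
dz = z[1] - z[0]
vals = np.array([Phi((x - bU*zi)/bR) for zi in z])
dens = np.exp(-z**2/2.0)/np.sqrt(2.0*np.pi)
mean_curve = float(np.sum(vals*dens)*dz)

# Composite single curve with combined beta sqrt(bR^2 + bU^2).
composite = Phi(x/sqrt(bR**2 + bU**2))
print(abs(mean_curve - composite))   # agreement to numerical precision
```

    The underlying identity of the standard normal curve is E[Φ((x − βU·Z)/βR)] = Φ(x/√(βR² + βU²)) for Z ~ N(0,1), which is what the quadrature above verifies at a single x.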

  20. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks, and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. A calibration curve between the intensity and concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.

  1. Arterial pressure measurement: Is the envelope curve of the oscillometric method influenced by arterial stiffness?

    International Nuclear Information System (INIS)

    Gelido, G; Angiletta, S; Pujalte, A; Quiroga, P; Cornes, P; Craiem, D

    2007-01-01

    Measurement of peripheral arterial pressure using the oscillometric method is commonly used by professionals as well as by patients in their homes. This non-invasive automatic method is fast and efficient, and the required equipment is affordable. The measurement method consists of obtaining parameters from a calibrated decreasing curve that is modulated by the heart beats which appear when arterial pressure reaches the cuff pressure. Diastolic, mean and systolic pressures are obtained by calculating particular instants on the heart-beat envelope curve. In this article we analyze the envelope of this amplified curve to find out whether its morphology is related to arterial stiffness in patients. We found, in 33 volunteers, that the envelope waveform width correlates with systolic pressure (r=0.4, p<0.05), with pulse pressure (r=0.6, p<0.05) and with pulse pressure normalized to systolic pressure (r=0.6, p<0.05). We believe that the morphology of the heart-beat envelope curve obtained with the oscillometric method for peripheral pressure measurement depends on arterial stiffness and can be used to enhance pressure measurements

  2. Aerodynamic calculational methods for curved-blade Darrieus VAWT WECS

    Science.gov (United States)

    Templin, R. J.

    1985-03-01

    Calculation of aerodynamic performance and load distributions for curved-blade wind turbines is discussed. Double multiple stream tube theory, and the uncertainties that remain in further developing adequate methods are considered. The lack of relevant airfoil data at high Reynolds numbers and high angles of attack, and doubts concerning the accuracy of models of dynamic stall are underlined. Wind tunnel tests of blade airbrake configurations are summarized.

  3. Trace element analysis of water using radioisotope induced X-ray fluorescence (Cd-109) and a preconcentration-internal standard method

    International Nuclear Information System (INIS)

    Alvarez, M.; Cano, W.

    1986-10-01

    Radioisotope-induced X-ray fluorescence using Cd-109 was used for the determination of iron, nickel, copper, zinc, lead and mercury in water. These metals were concentrated by precipitation with the chelating agent APDC, and the precipitate formed was filtered using a membrane filter. Cobalt was added as an internal standard. Minimum detection limits, sensitivities and calibration curve linearities were obtained to establish the limits of the method. The usefulness of the method is illustrated by analysing synthetic standard solutions. As an application, analytical results are given for water from a highly polluted river area. (Author)

  4. Curve Evolution in Subspaces and Exploring the Metameric Class of Histogram of Gradient Orientation based Features using Nonlinear Projection Methods

    DEFF Research Database (Denmark)

    Tatu, Aditya Jayant

    This thesis deals with two unrelated issues, restricting curve evolution to subspaces and computing image patches in the equivalence class of Histogram of Gradient orientation based features using nonlinear projection methods. Curve evolution is a well known method used in various applications like...... tracking interfaces, active contour based segmentation methods and others. It can also be used to study shape spaces, as deforming a shape can be thought of as evolving its boundary curve. During curve evolution a curve traces out a path in the infinite dimensional space of curves. Due to application...... specific requirements like shape priors or a given data model, and due to limitations of the computer, the computed curve evolution forms a path in some finite dimensional subspace of the space of curves. We give methods to restrict the curve evolution to a finite dimensional linear or implicitly defined...

  5. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Science.gov (United States)

    Luglio, Gaetano; De Palma, Giovanni Domenico; Tarquini, Rachele; Giglio, Mariano Cesare; Sollazzo, Viviana; Esposito, Emanuela; Spadarella, Emanuela; Peltrini, Roberto; Liccardo, Filomena; Bucci, Luigi

    2015-01-01

    Background Despite the proven benefits, laparoscopic colorectal surgery is still underutilized among surgeons. A steep learning curve is one of the causes of its limited adoption. The aim of the study is to determine the feasibility and morbidity rate of laparoscopic colorectal surgery in a single-institution, “learning curve” experience, implementing a well-standardized operative technique and recovery protocol. Methods The first 50 patients treated laparoscopically were included. All the procedures were performed by a trainee surgeon, supervised by a consultant surgeon, according to the principle of complete mesocolic excision with central vascular ligation or TME. Patients underwent a fast-track recovery programme. Recovery parameters, short-term outcomes, morbidity and mortality were assessed. Results Types of resection: 20 left-side resections, 8 right-side resections, 14 low anterior resection/TME, 5 total colectomy and IRA, 3 total panproctocolectomy and pouch. Mean operative time: 227 min; mean number of lymph nodes: 18.7. Conversion rate: 8%. Mean time to flatus: 1.3 days; mean time to solid stool: 2.3 days. Mean length of hospital stay: 7.2 days. Overall morbidity: 24%; major morbidity (Dindo–Clavien III): 4%. No anastomotic leak, no mortality, no 30-day readmissions. Conclusion Properly performed laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short-term outcomes, even in a learning-curve setting. Key factors for better outcomes and a shorter learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon. PMID:25859386

  6. Statistical inference methods for two crossing survival curves: a comparison of methods.

    Science.gov (United States)

    Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng

    2015-01-01

    A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.

  7. The nuclear fluctuation width and the method of maxima in excitation curves

    International Nuclear Information System (INIS)

    Burjan, V.

    1988-01-01

    The method of counting maxima of excitation curves in the region of the occurrence of nuclear cross-section fluctuations is extended to the case of the more realistic maxima defined as a sequence of five points, instead of the simpler and commonly used case of a sequence of three points of an excitation curve. The dependence of the coefficient b^(5)(κ), relating the number of five-point maxima to the mean level width Γ of the compound nucleus, on the relative distance κ of excitation curve points is calculated. The influence of the random background on the coefficient b^(5)(κ) is discussed and a comparison with the properties of the three-point coefficient b^(3)(κ) is made, also in connection with the contribution of the random background. The calculated values of b^(5)(κ) are well reproduced by the data obtained from the analysis of artificial excitation curves. (orig.)

  8. Dispersion curve estimation via a spatial covariance method with ultrasonic wavefield imaging.

    Science.gov (United States)

    Chong, See Yenn; Todd, Michael D

    2018-05-01

    Numerous Lamb wave dispersion curve estimation methods have been developed to support damage detection and localization strategies in non-destructive evaluation/structural health monitoring (NDE/SHM) applications. In this paper, the covariance matrix is used to extract features from an ultrasonic wavefield imaging (UWI) scan in order to estimate the phase and group velocities of S0 and A0 modes. A laser ultrasonic interrogation method based on a Q-switched laser scanning system was used to interrogate full-field ultrasonic signals in a 2-mm aluminum plate at five different frequencies. These full-field ultrasonic signals were processed in three-dimensional space-time domain. Then, the time-dependent covariance matrices of the UWI were obtained based on the vector variables in Cartesian and polar coordinate spaces for all time samples. A spatial covariance map was constructed to show spatial correlations within the full wavefield. It was observed that the variances may be used as a feature for S0 and A0 mode properties. The phase velocity and the group velocity were found using a variance map and an enveloped variance map, respectively, at five different frequencies. This facilitated the estimation of Lamb wave dispersion curves. The estimated dispersion curves of the S0 and A0 modes showed good agreement with the theoretical dispersion curves. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Percentile curves for skinfold thickness for Canadian children and youth.

    Science.gov (United States)

    Kuhle, Stefan; Ashley-Martin, Jillian; Maguire, Bryan; Hamilton, David C

    2016-01-01

    Background. Skinfold thickness (SFT) measurements are a reliable and feasible method for assessing body fat in children, but their use and interpretation are hindered by the scarcity of reference values in representative populations of children. The objective of the present study was to develop age- and sex-specific percentile curves for five SFT measures (biceps, triceps, subscapular, suprailiac, medial calf) in a representative population of Canadian children and youth. Methods. We analyzed data from 3,938 children and adolescents between 6 and 19 years of age who participated in the Canadian Health Measures Survey cycles 1 (2007/2009) and 2 (2009/2011). Standardized procedures were used to measure SFT. Age- and sex-specific centiles for SFT were calculated using the GAMLSS method. Results. Percentile curves were materially different in absolute value and shape for boys and girls. Percentile curves in girls steadily increased with age, whereas percentile curves in boys were characterized by a pubertal centered peak. Conclusions. The current study has presented, for the first time, percentile curves for five SFT measures in a representative sample of Canadian children and youth.
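The GAMLSS fit used in the paper is an R-based smoothing method; as a rough, hypothetical analogue, age-binned empirical percentiles already show how such curves are tabulated (the bin edges, percentile levels, and synthetic data below are invented for illustration):

```python
import numpy as np

def empirical_percentile_curves(age, value, age_bins, pcts=(3, 15, 50, 85, 97)):
    """Crude empirical analogue of smoothed percentile curves: the
    requested percentiles of `value` within each age bin. (The paper
    fits smooth distributional curves with GAMLSS; this sketch only
    bins the sample and takes sample percentiles.)"""
    curves = {p: [] for p in pcts}
    for lo, hi in zip(age_bins[:-1], age_bins[1:]):
        in_bin = value[(age >= lo) & (age < hi)]
        for p in pcts:
            curves[p].append(np.percentile(in_bin, p))
    return {p: np.array(v) for p, v in curves.items()}

# Synthetic skinfold-like data: right-skewed, median rising with age.
rng = np.random.default_rng(1)
age = rng.uniform(6, 19, size=5000)
value = 5 + 0.4 * age + rng.lognormal(1.0, 0.4, size=5000)
curves = empirical_percentile_curves(age, value, np.arange(6, 20))
assert np.all(curves[97] >= curves[50])
```

A per-sex split would simply run the same computation on the two subsamples before comparing curve shapes.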

  10. Ensemble Learning Method for Outlier Detection and its Application to Astronomical Light Curves

    Science.gov (United States)

    Nun, Isadora; Protopapas, Pavlos; Sim, Brandon; Chen, Wesley

    2016-09-01

    Outlier detection is necessary for automated data analysis, with specific applications spanning almost every domain from financial markets to epidemiology to fraud detection. We introduce a novel mixture-of-experts outlier detection model, which uses a dynamically trained, weighted network of five distinct outlier detection methods. After dimensionality reduction, individual outlier detection methods score each data point for “outlierness” in this new feature space. Our model then uses dynamically trained parameters to weigh the scores of each method, allowing for a finalized outlier score. We find that the mixture-of-experts model performs, on average, better than any single expert model in identifying both artificially and manually picked outliers. This mixture model is applied to a data set of astronomical light curves, after dimensionality reduction via time series feature extraction. Our model was tested using three fields from the MACHO catalog and generated a list of anomalous candidates. We confirm that the outliers detected using this method belong to rare classes, like Novae, He-burning, and red giant stars; other outlier light curves identified have no available information associated with them. To elucidate their nature, we created a website containing the light-curve data and information about these objects. Users can attempt to classify the light curves, give conjectures about their identities, and sign up for follow-up messages about the progress made on identifying these objects. This user-submitted data can be used to further train our mixture-of-experts model. Our code is publicly available to all who are interested.
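The paper's mixture-of-experts model is dynamically trained; the following simplified sketch combines two hypothetical detectors with fixed weights after rank-normalising their scores, which is only the static skeleton of such an ensemble:

```python
import numpy as np

def zscore_detector(X):
    """Score = largest feature-wise distance from the mean, in std units."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return np.abs(z).max(axis=1)

def knn_detector(X, k=5):
    """Score = distance to the k-th nearest neighbour."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]  # column 0 is the point's zero distance to itself

def ensemble_scores(X, detectors, weights):
    """Weighted combination of rank-normalised detector scores."""
    combined = np.zeros(len(X))
    for det, w in zip(detectors, weights):
        s = det(X)
        ranks = s.argsort().argsort() / (len(s) - 1)  # map scores to [0, 1]
        combined += w * ranks
    return combined

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
X[0] = 8.0  # planted outlier, far from the Gaussian cloud
scores = ensemble_scores(X, [zscore_detector, knn_detector], [0.5, 0.5])
assert scores.argmax() == 0
```

In the paper the weights themselves are learned from labelled outliers rather than fixed, which is what makes the mixture outperform any single expert.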

  11. Measuring the surgical 'learning curve': methods, variables and competency.

    Science.gov (United States)

    Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2014-03-01

    To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.

  12. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    Science.gov (United States)

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
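One common data-depth notion for curves, which this sketch assumes rather than the paper's exact construction, is the modified band depth: a curve is deep (central) if it lies inside the bands spanned by many pairs of curves in the ensemble:

```python
import numpy as np
from itertools import combinations

def modified_band_depth(curves):
    """Modified band depth (MBD): for each curve, the average fraction
    of time points at which it lies inside the band spanned by each
    pair of curves in the ensemble. Deeper = more central."""
    n, m = curves.shape
    depth = np.zeros(n)
    for i, j in combinations(range(n), 2):
        lo = np.minimum(curves[i], curves[j])
        hi = np.maximum(curves[i], curves[j])
        inside = (curves >= lo) & (curves <= hi)   # shape (n, m)
        depth += inside.mean(axis=1)
    return depth / (n * (n - 1) / 2)

# Ensemble of vertically shifted sinusoids; the middle one is deepest.
t = np.linspace(0, 1, 50)
offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ensemble = np.sin(2 * np.pi * t)[None, :] + offsets[:, None]
depth = modified_band_depth(ensemble)
assert depth.argmax() == 2  # the central curve
```

Ranking curves by such a depth is what lets a curve boxplot pick a "median" curve, a central 50% band, and outlying curves, generalizing the ordinary boxplot.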

  13. Development and Evaluation of a Novel Curved Biopsy Device for CT-Guided Biopsy of Lesions Unreachable Using Standard Straight Needle Trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Schulze-Hagen, Maximilian Franz, E-mail: mschulze@ukaachen.de; Pfeffer, Jochen; Zimmermann, Markus; Liebl, Martin [University Hospital RWTH Aachen, Department of Diagnostic and Interventional Radiology (Germany); Stillfried, Saskia Freifrau von [University Hospital RWTH Aachen, Department of Pathology (Germany); Kuhl, Christiane; Bruners, Philipp; Isfort, Peter [University Hospital RWTH Aachen, Department of Diagnostic and Interventional Radiology (Germany)

    2017-06-15

    Purpose: To evaluate the feasibility of a novel curved CT-guided biopsy needle prototype with shape memory to access otherwise inaccessible biopsy targets. Methods and Materials: A biopsy needle curved by 90° with a specific radius was designed. It was manufactured from nitinol to acquire shape memory and encased in a straight guiding trocar from which it is driven out to access otherwise inaccessible targets. Fifty CT-guided punctures were conducted in a biopsy phantom and 10 CT-guided punctures in a swine corpse. Biopsies from porcine liver and muscle tissue were gained separately using the biopsy device, and histological examination was performed subsequently. Results: Mean time for placement of the trocar and deployment of the inner biopsy needle was ~205 ± 69 and ~93 ± 58 s, respectively, with a mean of ~4.5 ± 1.3 steps to reach an adequate biopsy position. Mean distance from the tip of the needle to the target was ~0.7 ± 0.8 mm. CT-guided punctures in the swine corpse took relatively longer and required more biopsy steps (~574 ± 107 and ~380 ± 148 s, 8 ± 2.6 steps). Histology demonstrated appropriate tissue samples in nine out of ten cases (90%). Conclusions: Targets that were otherwise inaccessible via standard straight needle trajectories could be successfully reached with the curved biopsy needle prototype. Shape memory and the preformed shape with specific radius of the curved needle simplify target accessibility with a low risk of injuring adjacent structures.

  14. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
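A plain frequentist version of the localized straight-line fit (the paper itself uses objective Bayesian regression aligned with the GUM) can be sketched as an ordinary least-squares fit of the I-V points near short circuit, with the intercept's standard error serving as the Isc uncertainty:

```python
import numpy as np

def isc_fit_with_uncertainty(v, i):
    """Straight-line fit i = a + b*v near short circuit; returns
    Isc = a (the v = 0 intercept) and its standard error from the
    usual linear-regression formulas."""
    A = np.column_stack([np.ones_like(v), v])
    coef, res, *_ = np.linalg.lstsq(A, i, rcond=None)
    n, p = len(v), 2
    resid = i - A @ coef
    s2 = resid @ resid / (n - p)           # noise variance estimate
    cov = s2 * np.linalg.inv(A.T @ A)      # parameter covariance matrix
    return coef[0], np.sqrt(cov[0, 0])     # Isc and its standard error

# Synthetic I-V points near short circuit: I ≈ 8.2 - 0.01 V plus noise.
rng = np.random.default_rng(3)
v = np.linspace(0.0, 2.0, 20)
i = 8.2 - 0.01 * v + rng.normal(0, 0.002, size=v.size)
isc, isc_se = isc_fit_with_uncertainty(v, i)
assert abs(isc - 8.2) < 0.01 and isc_se > 0
```

The paper's point is precisely that this fit uncertainty shrinks as the window grows while the model-discrepancy error does not, which motivates its evidence-based window selection.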

  16. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry

    2013-09-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.

  17. A volume-based method for denoising on curved surfaces

    KAUST Repository

    Biddle, Harry; von Glehn, Ingrid; Macdonald, Colin B.; Marz, Thomas

    2013-01-01

    We demonstrate a method for removing noise from images or other data on curved surfaces. Our approach relies on in-surface diffusion: we formulate both the Gaussian diffusion and Perona-Malik edge-preserving diffusion equations in a surface-intrinsic way. Using the Closest Point Method, a recent technique for solving partial differential equations (PDEs) on general surfaces, we obtain a very simple algorithm where we merely alternate a time step of the usual Gaussian diffusion (and similarly Perona-Malik) in a small 3D volume containing the surface with an interpolation step. The method uses a closest point function to represent the underlying surface and can treat very general surfaces. Experimental results include image filtering on smooth surfaces, open surfaces, and general triangulated surfaces. © 2013 IEEE.

  18. The method of covariant symbols in curved space-time

    International Nuclear Information System (INIS)

    Salcedo, L.L.

    2007-01-01

    Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)

  19. Fitness analysis method for magnesium in drinking water with atomic absorption using quadratic curve calibration

    Directory of Open Access Journals (Sweden)

    Esteban Pérez-López

    2014-11-01

    Full Text Available Quantitative chemical analysis is important in research, quality control, the sale of services and other areas, yet some instrumental analysis methods restrict quantification to a linear calibration curve, sometimes because of the analyte's short linear dynamic range and sometimes because of limitations of the technique itself. There is therefore a need to examine whether quadratic curves are a valid calculation model for analytical quantification with chemical analysis instruments. As a test case, atomic absorption spectroscopy was used to determine magnesium in a drinking water sample from the Tacares sector of northern Grecia, employing a nonlinear calibration curve with quadratic behavior, and the result was compared with that obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for this determination: the concentrations obtained are very similar and, by hypothesis testing, can be considered equal.
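A minimal sketch of quadratic-curve calibration (with invented absorbance data, not the paper's measurements): fit a second-degree polynomial to the standards, then invert it to read off the concentration of an unknown:

```python
import numpy as np

# Hypothetical Mg calibration standards (mg/L) and absorbances: the
# response flattens at higher concentration, so a straight line fits
# poorly while a quadratic follows the curvature.
conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])
absb = np.array([0.002, 0.105, 0.198, 0.360, 0.492, 0.598])

# Fit absorbance as a quadratic in concentration.
c2, c1, c0 = np.polyfit(conc, absb, 2)

def concentration(a):
    """Invert the calibration: solve c2*x^2 + c1*x + (c0 - a) = 0 and
    keep the root lying on the calibration branch."""
    roots = np.roots([c2, c1, c0 - a])
    real = roots[np.isreal(roots)].real
    return real[(real >= 0) & (real <= conc.max())][0]

x = concentration(0.30)
assert 0.2 < x < 0.4  # between the 0.2 and 0.4 mg/L standards
```

Keeping only the root inside the calibrated range is what makes the quadratic model unambiguous despite having two mathematical solutions.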

  20. Unified approach for estimating the probabilistic design S-N curves of three commonly used fatigue stress-life models

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Wang Jinnuo; Gao Qing

    2001-01-01

    A unified approach, referred to as the general maximum likelihood method, is presented for estimating the probabilistic design S-N curves, and their confidence bounds, of the three commonly used fatigue stress-life models, namely the three-parameter, Langer and Basquin models. The curves are described by a general form of the mean and standard deviation S-N curves of the logarithm of fatigue life. Different from existing methods, i.e., the conventional method and the classical maximum likelihood method, the present approach considers the statistical characteristics of the whole test data set. The parameters of the mean curve are first estimated by the least squares method, and then the parameters of the standard deviation curve are evaluated by a mathematical programming method so as to agree with the maximum likelihood principle. Fit quality of the curves is assessed by the fitted relation coefficient, the total fitted standard error and the confidence bounds. Application to the virtual stress amplitude-crack initiation life data of a nuclear engineering material, Chinese 1Cr18Ni9Ti stainless steel pipe-weld metal, has indicated the validity of the approach for S-N data where both S and N show the character of random variables. Application to the two sets of S-N data of Chinese 45 carbon steel notched specimens (k_t = 2.0) has indicated the validity of the present approach for test results obtained from group fatigue testing and from maximum likelihood fatigue testing, respectively. In these applications it was revealed that, in general, the fit is best for the three-parameter model, slightly inferior for the Langer relation and poor for the Basquin equation. Relative to the existing methods, the present approach gives a better fit. In addition, the possible non-conservative predictions of the existing methods, which result from the influence of local statistical characteristics of the data, are also overcome by the present approach.

  1. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    Science.gov (United States)

    Miranda Guedes, Rui

    2018-02-01

    Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves with short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. Higher reduction of the required number of test specimens to obtain the master curve is achieved by the SSM technique, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per each stress level to produce a set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
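The core of any TSSP/SSM data reduction is finding the horizontal shift in log time that collapses one creep curve onto another. A minimal grid-search sketch (with synthetic curves obeying superposition exactly, not the paper's analytical method):

```python
import numpy as np

def best_log_shift(log_t_ref, y_ref, log_t, y, shifts):
    """Grid-search the horizontal shift (in log10 time) that best
    collapses curve (log_t, y) onto the reference curve, judging the
    overlap region by interpolated squared error."""
    best, best_err = None, np.inf
    for s in shifts:
        lt = log_t + s
        lo, hi = max(lt.min(), log_t_ref.min()), min(lt.max(), log_t_ref.max())
        if lo >= hi:
            continue  # no overlap at this shift
        grid = np.linspace(lo, hi, 50)
        err = np.mean((np.interp(grid, lt, y) -
                       np.interp(grid, log_t_ref, y_ref)) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

# Synthetic creep compliance obeying time-stress superposition:
# the higher-stress curve is the reference curve shifted by one decade.
log_t = np.linspace(0, 3, 40)                 # reference: 1 s .. 1000 s
D = 0.5 + 0.1 * log_t + 0.02 * log_t**2
true_shift = 1.0                              # higher stress accelerates creep
D_hi = 0.5 + 0.1 * (log_t + true_shift) + 0.02 * (log_t + true_shift)**2

shift = best_log_shift(log_t, D, log_t, D_hi, np.linspace(-2, 2, 401))
assert abs(shift - true_shift) < 0.02
```

In an SSM test the segments come from successive stress steps on a single specimen, but the shifting step that assembles the master curve is the same idea.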

  2. Assessment of p-y curves from numerical methods for a non-slender monopile in cohesionless soil

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, L. B.; Ravn Roesen, H. [Aalborg Univ. Dept. of Civil Engineering, Aalborg (Denmark); Hansen, Mette; Kirk Wolf, T. [COWI, Kgs. Lyngby (Denmark); Lange Rasmussen, K. [Niras, Aalborg (Denmark)

    2013-06-15

    In current design the monopile is a widely used solution as the foundation of offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. The behaviour of monopiles under lateral loading is not fully understood, and current design guidance applies the p-y curve method in a Winkler model approach. The p-y curve method was originally developed for the long, slender piles used in the oil and gas industry, which are much more slender than the monopile foundation. In recent years 3D finite element analysis has become a tool in the investigation of complex geotechnical situations, such as the laterally loaded monopile. In this paper a 3D FEA is conducted as the basis for an extraction of p-y curves and an evaluation of the traditional curves. Two different methods are applied to create the data points used for the p-y curves. First, a force producing a response similar to that seen in the ULS situation is applied stepwise, thereby creating the most realistic soil response. This method, however, does not generate sufficient data points around the rotation point of the pile. Therefore, a forced horizontal displacement of the entire pile is also applied, whereby displacements are created over the entire length of the pile. The response is extracted from the interface and from the nearby soil elements respectively, to investigate the influence this has on the computed curves. p-y curves are obtained near the rotation point by evaluation of the soil response during a prescribed displacement, but this response is not in clear agreement with the response during an applied load. Two different material models are applied. It is found that the applied material models have a significant influence on the stiffness of the evaluated p-y curves. The p-y curves evaluated by means of FEA are compared to the conventional p-y curve formulation, which provides a much stiffer response. It is found that the best response is computed by implementing the Hardening Soil model and

  3. Use of the Master Curve methodology for real three dimensional cracks

    International Nuclear Information System (INIS)

    Wallin, Kim

    2007-01-01

    At VTT, development work has been in progress for 15 years to develop and validate testing and analysis methods applicable for fracture resistance determination from small material samples. The VTT approach is a holistic approach by which to determine static, dynamic and crack arrest fracture toughness properties either directly or by correlations from small material samples. The development work has evolved into a testing standard for fracture toughness testing in the transition region. The standard, known as the Master Curve standard, is in a way 'first of a kind', since it includes guidelines on how to properly treat the test data for use in structural integrity assessment. No standard, so far, has done this. The standard is based on the VTT approach, but presently the VTT approach goes beyond the standard. Key components in the standard are statistical expressions for describing the data scatter and for predicting a specimen's size (crack front length) effect, and an expression (Master Curve) for the fracture toughness temperature dependence. The standard, and the approach it is based upon, can be considered to represent the state of the art of small specimen fracture toughness characterization. Normally, the Master Curve parameters are determined using test specimens with 'straight' crack fronts and a comparatively uniform stress state along the crack front. This enables the use of a single K_I value and a single constraint value to describe the whole specimen. For a real crack in a structure, this is usually not the case. Normally, both K_I and constraint vary along the crack front and, in the case of a thermal shock, even the temperature will vary along the crack front. A proper means of applying the Master Curve methodology for such cases is presented here.
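For reference, the Master Curve's central ingredient is a fixed temperature dependence for the median fracture toughness of a 25 mm thick specimen, K_Jc(med) = 30 + 70*exp[0.019(T - T0)]; a small sketch of just this median curve (omitting the standard's size adjustment and tolerance bounds):

```python
import numpy as np

def master_curve_median(T, T0):
    """Median fracture toughness (MPa*sqrt(m)) for a 25 mm thick
    specimen versus temperature T (deg C), using the usual Master
    Curve form K_Jc(med) = 30 + 70*exp(0.019*(T - T0))."""
    return 30.0 + 70.0 * np.exp(0.019 * (np.asarray(T, dtype=float) - T0))

T0 = -80.0                        # hypothetical reference temperature
T = np.array([-120.0, -80.0, -40.0])
K = master_curve_median(T, T0)
assert abs(K[1] - 100.0) < 1e-9   # at T = T0 the median is 100 MPa*sqrt(m)
```

T0 is the only material-specific parameter; everything the abstract describes (scatter, crack-front-length effect, varying K_I and constraint) is layered on top of this one curve.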

  4. Use of the master curve methodology for real three dimensional cracks

    International Nuclear Information System (INIS)

    Wallin, K.; Rintamaa, R.

    2005-01-01

    At VTT, development work has been in progress for 15 years to develop and validate testing and analysis methods applicable for fracture resistance determination from small material samples. The VTT approach is a holistic approach by which to determine static, dynamic and crack arrest fracture toughness properties either directly or by correlations from small material samples. The development work has evolved into a testing standard for fracture toughness testing in the transition region. The standard, known as the Master Curve standard, is in a way 'first of a kind', since it includes guidelines on how to properly treat the test data for use in structural integrity assessment. No standard, so far, has done this. The standard is based on the VTT approach, but presently the VTT approach goes beyond the standard. Key components in the standard are statistical expressions for describing the data scatter and for predicting a specimen's size (crack front length) effect, and an expression (Master Curve) for the fracture toughness temperature dependence. The standard and the approach it is based upon can be considered to represent the state of the art of small specimen fracture toughness characterization. Normally, the Master Curve parameters are determined using test specimens with 'straight' crack fronts and a comparatively uniform stress state along the crack front. This enables the use of a single K_I value and a single constraint value to describe the whole specimen. For a real crack in a structure, this is usually not the case. Normally, both K_I and constraint vary along the crack front and, in the case of a thermal shock, even the temperature will vary along the crack front. A proper means of applying the Master Curve methodology for such cases is presented here. (authors)

  5. PLOTTAB, Curve and Point Plotting with Error Bars

    International Nuclear Information System (INIS)

    1999-01-01

    1 - Description of program or function: PLOTTAB is designed to plot any combination of continuous curves and/or discrete points (with associated error bars) using user-supplied titles and X and Y axis labels and units. If curves are plotted, the first curve may be used as a standard; the data and the ratio of the data to the standard will be plotted. 2 - Method of solution: PLOTTAB has no knowledge of what data are being plotted, yet by supplying titles, X and Y axis labels and units the user can produce any number of plots, with each plot containing almost any combination of curves and points and each plot properly identified. In order to define a continuous curve between tabulated points, the program must know how to interpolate between points. By input the user may specify either the default option of linear x versus linear y interpolation or, alternatively, log x and/or log y interpolation. In all cases, regardless of the interpolation specified, the program will always interpolate the data to the plane of the plot (linear or log x and y plane) in order to present the true variation of the data between tabulated points, based on the user-specified interpolation law. Points should be tabulated at a sufficient number of x values to ensure that the difference between the specified interpolation and the 'true' variation of a curve between tabulated values is relatively small. 3 - Restrictions on the complexity of the problem: A combination of up to 30 curves and sets of discrete points may appear on each plot. If the user wishes to use this program to compare different sets of data, all of the data must be in the same units

  6. Standard setting: comparison of two methods.

    Science.gov (United States)

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
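The two cutoffs can be contrasted numerically; in this sketch the score distribution and the judges' Angoff estimates are invented, and only the arithmetic of each method is real:

```python
import numpy as np

rng = np.random.default_rng(4)
scores = rng.normal(62, 10, size=78).clip(0, 100)   # simulated MCQ scores (%)

# Norm-referenced cutoff: cohort mean minus one standard deviation.
cut_norm = scores.mean() - scores.std()

# Modified Angoff cutoff: mean of the judges' estimated probabilities
# that a borderline candidate answers an item correctly, averaged over
# the paper (the per-judge values here are invented).
judge_estimates = np.array([0.55, 0.48, 0.60, 0.52, 0.58])
cut_angoff = 100 * judge_estimates.mean()

pass_norm = np.mean(scores >= cut_norm)
pass_angoff = np.mean(scores >= cut_angoff)
# The two methods can pass very different proportions of the cohort,
# which is exactly the discrepancy the study reports (85% vs 100%).
print(f"norm-reference cutoff {cut_norm:.1f}%: pass rate {pass_norm:.0%}")
print(f"Angoff cutoff {cut_angoff:.1f}%: pass rate {pass_angoff:.0%}")
```

Note the structural difference: the norm-referenced cutoff moves with the cohort, while the Angoff cutoff is fixed by judges before any candidate sits the paper.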

  7. Assessment of Estimation Methods ForStage-Discharge Rating Curve in Rippled Bed Rivers

    Directory of Open Access Journals (Sweden)

    P. Maleki

    2016-02-01

    in a flume located at the hydraulic laboratory of Shahrekord University, Iran. Bass (1993) [reported in Joep (1999)] determined an empirical relation between the median grain size, D50, and the equilibrium ripple length, l: l = 75.4 (log D50) + 197 (Eq. 1), where l and D50 are both given in millimeters. Raudkivi (1997) [reported in Joep (1999)] proposed another empirical relation to estimate the ripple length, with D50 given in millimeters: l = 245 (D50)^0.35 (Eq. 2). Flemming (1988) [reported in Joep (1999)] derived an empirical relation between mean ripple length and ripple height based on a large dataset: h_m = 0.0677 l^0.8098 (Eq. 3), where h_m is the mean ripple height (m) and l is the mean ripple length (m). Ikeda and Asaeda (1983) investigated the characteristics of flow over ripples. They found that there are separation areas and vortices in the lee of ripples and that maximum turbulent diffusion occurs in these areas. Materials and Methods: In this research, the effects of two different types of ripples on the hydraulic characteristics of flow were experimentally studied in a flume located at the hydraulic laboratory of Shahrekord University, Iran. The flume is 0.4 m wide and deep and 12 m long. In total, 48 tests with slopes of 0.0005 to 0.003 and discharges of 10 to 40 L/s were conducted. Velocity and shear stress were measured using an Acoustic Doppler Velocimeter (ADV). Two different types of ripples (parallel and flake ripples) were used. The stage-discharge rating curve was then estimated in different ways, such as Einstein-Barbarossa, Shen and White et al. Results and Discussion: Statistical methods were used to investigate the test results. The White method had the maximum values of α, RMSE and average absolute error among the methods, and the Einstein method underestimated the discharge.
    Evaluation of stage-discharge rating curve methods based on the results obtained from this research showed that the Shen method had the highest accuracy for developing the

  8. Applicability of the θ projection method to creep curves of Ni-22Cr-18Fe-9Mo alloy

    International Nuclear Information System (INIS)

    Kurata, Yuji; Utsumi, Hirokazu

    1998-01-01

    Applicability of the θ projection method has been examined for constant-load creep test results at 800 and 1000 °C on Ni-22Cr-18Fe-9Mo alloy in the solution-treated and aged conditions. The results obtained are as follows: (1) Normal-type creep curves obtained at 1000 °C for aged Ni-22Cr-18Fe-9Mo alloy are fitted using the θ projection method with four θ parameters. The stress dependence of the θ parameters can be expressed in terms of simple equations. (2) The θ projection method with four θ parameters cannot be applied to the remaining creep curves, where most of the life is occupied by the tertiary creep stage. Therefore, the θ projection method consisting of only the tertiary creep component, with two θ parameters, was applied. These creep curves can be fitted using this method. (3) If the θ projection method with four θ or two θ parameters is applied to creep curves in accordance with the creep curve shape, creep rupture time can be predicted in terms of a formulation of the stress and/or temperature dependence of the θ parameters. (author)
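A hedged sketch of fitting the four-parameter θ projection equation ε(t) = θ1(1 - exp(-θ2 t)) + θ3(exp(θ4 t) - 1) to a synthetic creep curve (scipy's curve_fit stands in here for whatever fitting scheme the authors used):

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, th1, th2, th3, th4):
    """Four-parameter theta projection: a saturating primary-creep
    term plus an accelerating tertiary-creep term."""
    return th1 * (1 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1)

# Synthetic "normal type" creep curve with known parameters.
t = np.linspace(0, 100, 200)
true = (0.8, 0.15, 0.05, 0.03)
strain = theta_projection(t, *true)

# Recover the parameters from the curve (noiseless, so the fit
# should land essentially on the true values).
fit, _ = curve_fit(theta_projection, t, strain, p0=(1.0, 0.1, 0.1, 0.02))
assert np.allclose(fit, true, rtol=1e-2)
```

Dropping the first term (θ1 = 0) gives the two-parameter, tertiary-only variant the abstract applies to the tertiary-dominated curves.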

  9. Standard setting: Comparison of two methods

    Directory of Open Access Journals (Sweden)

    Oyebode Femi

    2006-09-01

    Full Text Available Abstract Background The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. Methods The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. Results The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69%-87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. Conclusion There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  10. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research.
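The logistic (S-curve) model underlying this forecast has a finite growing limit K, in contrast to the unbounded growth of most network models. A minimal sketch with hypothetical parameters:

```python
import math

def logistic(t, K, r, t0):
    """Logistic S-curve: K is the finite saturation limit,
    r the growth rate, t0 the midpoint (inflection) time."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Hypothetical forecast: growth saturating toward K = 100 units
early = logistic(0.0, 100.0, 1.0, 5.0)
midpoint = logistic(5.0, 100.0, 1.0, 5.0)   # exactly K/2 at t = t0
late = logistic(100.0, 100.0, 1.0, 5.0)     # approaches K
```

Fitting K, r and t0 to observed cumulative counts (e.g. allocated IPv4 addresses per year) then yields the finitely growing trend the paper builds its network model on.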

  11. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. According to statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model by using S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted. There are some reference values for optimizing the distribution of IPv4 address resource and the development of IPv6. Based on the laws of IPv4 growth, that is, the bulk growth and the finitely growing limit, it proposes a finite network model with a bulk growth. The model is said to be an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., Barabási-Albert method) is not suitable for the network. It develops an approximate method to predict the growth dynamics of the individual nodes, and uses this to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees with the simulation well, obeying an approximately power-law form. This method can overcome a shortcoming of Barabási-Albert method commonly used in current network research. (general)

  12. Methods for extracting dose response curves from radiation therapy data. I. A unified approach

    International Nuclear Information System (INIS)

    Herring, D.F.

    1980-01-01

    This paper discusses an approach to fitting models to radiation therapy data in order to extract dose response curves for tumor local control and normal tissue damage. The approach is based on the method of maximum likelihood and is illustrated by several examples. A general linear logistic equation which leads to the Ellis nominal standard dose (NSD) equation is discussed; the fit of this equation to experimental data for mouse foot skin reactions produced by fractionated irradiation is described. A logistic equation based on the concept that normal tissue reactions are associated with the surviving fraction of cells is also discussed, and the fit of this equation to the same set of mouse foot skin reaction data is also described. These two examples illustrate the importance of choosing a model based on underlying mechanisms when one seeks to attach biological significance to a model's parameters

  13. Percentile curves for skinfold thickness for Canadian children and youth

    Directory of Open Access Journals (Sweden)

    Stefan Kuhle

    2016-07-01

    Full Text Available Background. Skinfold thickness (SFT) measurements are a reliable and feasible method for assessing body fat in children but their use and interpretation are hindered by the scarcity of reference values in representative populations of children. The objective of the present study was to develop age- and sex-specific percentile curves for five SFT measures (biceps, triceps, subscapular, suprailiac, medial calf) in a representative population of Canadian children and youth. Methods. We analyzed data from 3,938 children and adolescents between 6 and 19 years of age who participated in the Canadian Health Measures Survey cycles 1 (2007/2009) and 2 (2009/2011). Standardized procedures were used to measure SFT. Age- and sex-specific centiles for SFT were calculated using the GAMLSS method. Results. Percentile curves were materially different in absolute value and shape for boys and girls. Percentile curves in girls steadily increased with age whereas percentile curves in boys were characterized by a pubertal centered peak. Conclusions. The current study has presented for the first time percentile curves for five SFT measures in a representative sample of Canadian children and youth.
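GAMLSS itself is an R modelling framework; as a crude stand-in for illustration, age- and sex-specific empirical centiles can be computed directly from grouped data. The record format below is hypothetical and no smoothing across age groups is applied, which is the main thing GAMLSS adds.

```python
import statistics
from collections import defaultdict

def empirical_centiles(records, probs=(0.05, 0.50, 0.95)):
    """Group skinfold measurements by (sex, age) and take empirical
    centiles per group. A simplification of GAMLSS: no smoothing,
    no distributional model, just raw quantiles."""
    groups = defaultdict(list)
    for sex, age, sft in records:
        groups[(sex, age)].append(sft)
    out = {}
    for key, vals in groups.items():
        qs = statistics.quantiles(vals, n=100)  # 99 cut points P1..P99
        out[key] = {p: qs[round(p * 100) - 1] for p in probs}
    return out

# Hypothetical measurements: 100 boys aged 10, triceps SFT 1..100 mm
records = [("M", 10, float(v)) for v in range(1, 101)]
centiles = empirical_centiles(records)
```

With enough subjects per age band this gives the raw points through which GAMLSS fits its smooth percentile curves.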

  14. NEW CONCEPTS AND TEST METHODS OF CURVE PROFILE AREA DENSITY IN SURFACE: ESTIMATION OF AREAL DENSITY ON CURVED SPATIAL SURFACE

    OpenAIRE

    Hong Shen

    2011-01-01

    The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relied on intersection reference), curve profile intersection density in surface (or curve intercept intersection density relied on intersection of containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of Y phase in the unit containing surface area, S...

  15. A new method for curve fitting to the data with low statistics not using the chi2-method

    International Nuclear Information System (INIS)

    Awaya, T.

    1979-01-01

    A new method which does not use the chi2-fitting method is investigated in order to fit the theoretical curve to data with low statistics. The method is compared with the usual and modified chi2-fitting ones. The analyses are done for data which are generated by computers. It is concluded that the new method gives good results in all the cases. (Auth.)

  16. Photon and proton activation analysis of iron and steel standards using the internal standard method coupled with the standard addition method

    International Nuclear Information System (INIS)

    Masumoto, K.; Hara, M.; Hasegawa, D.; Iino, E.; Yagi, M.

    1997-01-01

    The internal standard method coupled with the standard addition method has been applied to photon activation analysis and proton activation analysis of minor elements and trace impurities in various types of iron and steel samples issued by the Iron and Steel Institute of Japan (ISIJ). Samples and standard addition samples were once dissolved to mix homogeneously, an internal standard and elements to be determined and solidified as a silica-gel to make a similar matrix composition and geometry. Cerium and yttrium were used as an internal standard in photon and proton activation, respectively. In photon activation, a 20 MeV electron beam was used for bremsstrahlung irradiation to reduce matrix activity and nuclear interference reactions, and the results were compared with those of 30 MeV irradiation. In proton activation, iron was removed by the MIBK extraction method after dissolving samples to reduce the radioactivity of 56Co from iron via the 56Fe(p,n)56Co reaction. The results of proton and photon activation analysis were in good agreement with the standard values of ISIJ. (author)

  17. Standard test method for isotopic analysis of uranium hexafluoride by double standard single-collector gas mass spectrometer method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This is a quantitative test method applicable to determining the mass percent of uranium isotopes in uranium hexafluoride (UF6) samples with 235U concentrations between 0.1 and 5.0 mass %. 1.2 This test method may be applicable for the entire range of 235U concentrations for which adequate standards are available. 1.3 This test method is for analysis by a gas magnetic sector mass spectrometer with a single collector using interpolation to determine the isotopic concentration of an unknown sample between two characterized UF6 standards. 1.4 This test method is to replace the existing test method currently published in Test Methods C761 and is used in the nuclear fuel cycle for UF6 isotopic analyses. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appro...

  18. Test of the nonexponential deviations from decay curve of 52V using continuous kinetic function method

    International Nuclear Information System (INIS)

    Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son

    1995-01-01

    The present work is aimed at formulating an experimental approach to search for the proposed nonexponential deviations from the decay curve and at describing an attempt to test them in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified forms. A continuous kinetic function (CKF) method is described for the analysis of experimental data, and the CKF for the purely exponential case is considered as a standard for comparison between theoretical and experimental data. The degree of agreement is defined by the factor of goodness. Typical deviations of oscillation behaviour of 52V decay were observed in a wide range of time. The proposed deviation related to interaction between decay products and environment is researched. A complex type of decay is discussed. (authors). 10 refs., 4 figs., 2 tabs

  19. Purohit's spectrophotometric method for determination of stability constants of complexes using Job's curves

    International Nuclear Information System (INIS)

    Purohit, D.N.; Goswami, A.K.; Chauhan, R.S.; Ressalan, S.

    1999-01-01

    A spectrophotometric method for determination of stability constants making use of Job's curves has been developed. Using this method stability constants of Zn(II), Cd(II), Mo(VI) and V(V) complexes of hydroxytriazenes have been determined. For the sake of comparison, values of the stability constants were also determined using Harvey and Manning's method. The values of the stability constants developed by two methods compare well. This new method has been named as Purohit's method. (author)

  20. The strategy curve. A method for representing and interpreting generator bidding strategies

    International Nuclear Information System (INIS)

    Lucas, N.; Taylor, P.

    1995-01-01

    The pool is the novel trading arrangement at the heart of the privatized electricity market in England and Wales. This central role in the new system makes it crucial that it is seen to function efficiently. Unfortunately, it is governed by a set of complex rules, which leads to a lack of transparency, and this makes monitoring of its operation difficult. This paper seeks to provide a method for illuminating one aspect of the pool, that of generator bidding behaviour. We introduce the concept of a strategy curve, which is a concise device for representing generator bidding strategies. This curve has the appealing characteristic of directly revealing any deviation in the bid price of a genset from the costs of generating electricity. After a brief discussion about what constitutes price and cost in this context we present a number of strategy curves for different days and provide some interpretation of their form, based in part on our earlier work with game theory. (author)

  1. Compact Hilbert Curve Index Algorithm Based on Gray Code

    Directory of Open Access Journals (Sweden)

    CAO Xuefeng

    2016-12-01

    Full Text Available Hilbert curve has the best clustering among the various kinds of space filling curves, and has been used as an important tool in the discrete global grid spatial index design field. But there are lots of redundancies in the standard Hilbert curve index when the data set has large differences between dimensions. In this paper, the construction features of the Hilbert curve are analyzed based on Gray code, and then the compact Hilbert curve index algorithm is put forward, in which the redundancy problem has been avoided while Hilbert curve clustering is preserved. Finally, experiment results show that the compact Hilbert curve index outperforms the standard Hilbert index: their computational complexity is nearly equivalent, but real data set tests show that coding time and storage space decrease by 40%, and the speedup ratio of sorting is nearly 4.3.
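The Gray-code building block behind Hilbert curve index construction is the binary-reflected Gray code, in which successive indices differ in exactly one bit. A minimal sketch of the forward and inverse maps (this is only the Gray-code ingredient, not the full compact Hilbert index algorithm of the paper):

```python
def gray(i):
    """Binary-reflected Gray code of i: adjacent codes differ in one bit,
    which is what lets the Hilbert curve step between neighbouring cells."""
    return i ^ (i >> 1)

def gray_inverse(g):
    """Invert the Gray code by XOR-ing in all right shifts of g."""
    i = g
    mask = g >> 1
    while mask:
        i ^= mask
        mask >>= 1
    return i
```

For example, indices 0..3 map to codes 0, 1, 3, 2, so each step along the index flips a single coordinate bit.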

  2. Uncertainty of pesticide residue concentration determined from ordinary and weighted linear regression curve.

    Science.gov (United States)

    Yolci Omeroglu, Perihan; Ambrus, Árpad; Boyacioglu, Dilek

    2018-03-28

    Determination of pesticide residues is based on calibration curves constructed for each batch of analysis. Calibration standard solutions are prepared from a known amount of reference material at different concentration levels covering the concentration range of the analyte in the analysed samples. In the scope of this study, the applicability of both ordinary linear and weighted linear regression (OLR and WLR) for pesticide residue analysis was investigated. We used 782 multipoint calibration curves obtained for 72 different analytical batches with high-pressure liquid chromatography equipped with an ultraviolet detector, and gas chromatography with electron capture, nitrogen phosphorus or mass spectrophotometer detectors. Quality criteria of the linear curves including regression coefficient, standard deviation of relative residuals and deviation of back calculated concentrations were calculated both for WLR and OLR methods. Moreover, the relative uncertainty of the predicted analyte concentration was estimated for both methods. It was concluded that calibration curve based on WLR complies with all the quality criteria set by international guidelines compared to those calculated with OLR. It means that all the data fit well with WLR for pesticide residue analysis. It was estimated that, regardless of the actual concentration range of the calibration, relative uncertainty at the lowest calibrated level ranged between 0.3% and 113.7% for OLR and between 0.2% and 22.1% for WLR. At or above 1/3 of the calibrated range, uncertainty of calibration curve ranged between 0.1% and 16.3% for OLR and 0% and 12.2% for WLR, and therefore, the two methods gave comparable results.
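The OLR/WLR contrast above can be sketched with closed-form straight-line fits. For calibration data whose error grows with concentration, a common weighting choice is w = 1/x²; the data below are hypothetical.

```python
def ols_fit(xs, ys):
    """Ordinary least squares line y = a + b*x: every point weighted equally."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def wls_fit(xs, ys, ws):
    """Weighted least squares; with w = 1/x**2 the low-concentration
    points are not swamped by the high ones."""
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

# Hypothetical calibration: responses exactly on y = 1 + 2x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
a_ols, b_ols = ols_fit(xs, ys)
a_wls, b_wls = wls_fit(xs, ys, [1.0 / x**2 for x in xs])
```

On exactly linear data both fits agree; the two diverge, and WLR's lower relative uncertainty at the bottom of the range appears, once the residual variance increases with concentration.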

  3. Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil

    DEFF Research Database (Denmark)

    Ibsen, Lars Bo; Roesen, Hanne Ravn; Wolf, Torben K.

    2013-01-01

    In current design the stiff large diameter monopile is a widely used solution as foundation of offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. The current design guidances apply the p-y curve method with formulations for the curves based on slender piles....... However, the behaviour of the stiff monopiles during lateral loading is not fully understood. In this paper case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for extraction of p-y curves which are used in an evaluation of the traditional curves...

  4. Assessment of p-y Curves from Numerical Methods for a non-Slender Monopile in Cohesionless Soil

    DEFF Research Database (Denmark)

    Wolf, Torben K.; Rasmussen, Kristian L.; Hansen, Mette

    In current design the stiff large diameter monopile is a widely used solution as foundation of offshore wind turbines. Winds and waves subject the monopile to considerable lateral loads. The current design guidances apply the p-y curve method with formulations for the curves based on slender piles....... However, the behaviour of the stiff monopiles during lateral loading is not fully understood. In this paper case study from Barrow Offshore Wind Farm is used in a 3D finite element model. The analysis forms a basis for extraction of p-y curves which are used in an evaluation of the traditional curves...

  5. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application in the determination of halide ions (Cl−, Br−, I−) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the method of linearization permits the determination of the titrant volume at the end point of titration to high accuracy in the case of titration curves without a potential jump in the proximity of the equivalence point (5 × 10−5 M). 3 refs., 2 figs., 3 tabs

  6. Photoelectic BV Light Curves of Algol and the Interpretations of the Light Curves

    Directory of Open Access Journals (Sweden)

    Ho-Il Kim

    1985-06-01

    Full Text Available Standardized B and V photoelectric light curves of Algol are made with the observations obtained during 1982-84 with the 40-cm and the 61-cm reflectors of Yonsei University Observatory. These light curves show asymmetry between the ascending and descending shoulders. The ascending shoulder is 0.02 mag brighter than the descending shoulder in the V light curve and 0.03 mag in the B light curve. These asymmetric light curves are interpreted as the result of inhomogeneous energy distribution on the surface of one star of the eclipsing pair rather than the result of a gaseous stream flowing from the K0IV to the B8V star. The 180-year periodicity, the so-called great inequality, is most likely the result proposed by Kim et al. (1983) that abrupt and discrete mass losses of the cooler component may be the cause of this orbital change. The amount of mass loss deduced from these discrete period changes turned out to be of the order of 10^(-6) - 10^(-5) Msolar.

  7. Signature Curves Statistics of DNA Supercoils

    OpenAIRE

    Shakiban, Cheri; Lloyd, Peter

    2004-01-01

    In this paper we describe the Euclidean signature curves for two dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...

  8. Primary standardization of C-14 by means of CIEMAT/NIST, TDCR and 4πβ-γ methods

    International Nuclear Information System (INIS)

    Kuznetsova, Maria

    2016-01-01

    In this work, the primary standardization of a 14C solution, which emits beta particles of maximum energy 156 keV, was made by means of three different methods: the CIEMAT/NIST and TDCR (Triple to Double Coincidence Ratio) methods in liquid scintillation systems and the tracing method in the 4πβ-γ coincidence system. A TRICARB LSC (Liquid Scintillation Counting) system, equipped with two photomultiplier tubes, was used for the CIEMAT/NIST method, using a 3H standard that emits beta particles with maximum energy of 18.7 keV as efficiency tracer. A HIDEX 300SL LSC system, equipped with three photomultiplier tubes, was used for the TDCR method. Samples of 14C and 3H for the liquid scintillation systems were prepared using three commercial scintillation cocktails, UltimaGold, Optiphase Hisafe3 and InstaGel-Plus, in order to compare the performance in the measurements. All samples were prepared with 15 mL of scintillator, in glass vials with low potassium concentration. Known aliquots of radioactive solution were dropped onto the cocktail scintillators. In order to obtain the quenching parameter curve, a nitromethane carrier solution and 1 mL of distilled water were used. For measurements in the 4πβ-γ system, 60Co was used as the beta-gamma emitter. An SCS (software coincidence system) was applied and the beta efficiency was changed by using electronic discrimination. The behavior of the extrapolation curve was predicted with the code ESQUEMA, using the Monte Carlo technique. The 14C activity obtained by the three methods applied in this work was compared and the results showed to be in agreement, within the experimental uncertainty. (author)

  9. Application of Bimodal Master Curve Approach on KSNP RPV steel SA508 Gr. 3

    International Nuclear Information System (INIS)

    Kim, Jongmin; Kim, Minchul; Choi, Kwonjae; Lee, Bongsang

    2014-01-01

    In this paper, the standard MC approach and BMC are applied to the forging material of the KSNP RPV steel SA508 Gr. 3. A series of fracture toughness tests were conducted in the DBTT transition region, and fracture toughness specimens were extracted from four regions, i.e., the surface, 1/8T, 1/4T and 1/2T. Deterministic material inhomogeneity was reviewed through a conventional MC approach and the random inhomogeneity was evaluated by BMC. In the present paper, four regions, surface, 1/8T, 1/4T and 1/2T, were considered for the fracture toughness specimens of KSNP (Korean Standard Nuclear Plant) SA508 Gr. 3 steel to provide deterministic material inhomogeneity and review the applicability of BMC. T0 determined by a conventional MC has a low value owing to the higher quenching rate at the surface as expected. However, more than about 15% of the KJC values lay above the 95% probability curves indexed with the standard MC T0 at the surface and 1/8T, which implies the existence of inhomogeneity in the material. To review the applicability of the BMC method, the deterministic inhomogeneity owing to the extraction location and quenching rate is treated as random inhomogeneity. Although the lower bound and upper bound curve of the BMC covered more KJC values than that of the conventional MC, there is no significant relationship between the BMC analysis lines and measured KJC values in the higher toughness distribution, and BMC and MC provide almost the same T0 values. Therefore, the standard MC evaluation method for this material is appropriate even though the standard MC has a narrow upper/lower bound curve range from the RPV evaluation point of view. The material is not homogeneous in reality. Such inhomogeneity comes in the effect of material inhomogeneity depending on the specimen location, heat treatment, and whole manufacturing process. 
The conventional master curve is of limited applicability to largely scattered fracture toughness data, such as those from the weld region.

  10. Determination of Dispersion Curves for Composite Materials with the Use of Stiffness Matrix Method

    Directory of Open Access Journals (Sweden)

    Barski Marek

    2017-06-01

    Full Text Available Elastic waves used in Structural Health Monitoring systems have a strongly dispersive character. Therefore it is necessary to determine the appropriate dispersion curves in order to properly interpret the received dynamic response of the analyzed structure. The shape of the dispersion curves as well as the number of wave modes depends on the mechanical properties of the layers and the frequency of the excited signal. In the current work, a relatively new approach is utilized, namely the stiffness matrix method. In contrast to the transfer matrix method or the global matrix method, this algorithm is considered numerically unconditionally stable and as effective as the transfer matrix approach. However, it will be demonstrated that in the case of hybrid composites, where the mechanical properties of particular layers differ significantly, obtaining results can be difficult. The theoretical relationships are presented for a composite plate of arbitrary stacking sequence and arbitrary direction of elastic wave propagation. As a numerical example, the dispersion curves are estimated for a lamina made of carbon fibers and epoxy resin. It is assumed that the elastic waves travel in the parallel, perpendicular and arbitrary directions to the fibers in the lamina. Next, the dispersion curves are determined for the following laminate [0°, 90°, 0°, 90°, 0°, 90°, 0°, 90°] and the hybrid [Al, 90°, 0°, 90°, 0°, 90°, 0°], where Al is the aluminum alloy PA38 and the rest of the layers are made of carbon fibers and epoxy resin.

  11. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Science.gov (United States)

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variations of the model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
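A common parameterization of the single-target single-hit response named above is a saturating exponential in dose, with the three parameters the abstract lists (background, saturation, slope). The functional form and the parameter values below are an illustrative sketch, not the study's fitted calibration.

```python
import math

def film_response(dose, background, saturation, slope):
    """Single-target single-hit film response: optical density rises
    from the background level and saturates exponentially with dose."""
    return background + saturation * (1.0 - math.exp(-slope * dose))

# Hypothetical parameters: base fog 0.1 OD, saturation 3.0 OD, slope 0.02 /cGy
od_low = film_response(10.0, 0.1, 3.0, 0.02)
od_high = film_response(100.0, 0.1, 3.0, 0.02)
```

With this form, one to three measured dose points suffice to pin down the parameters once the others are constrained, which is why the reduced-point calibration methods in the study are feasible.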

  12. Thermoluminescence glow curve analysis and CGCD method for erbium doped CaZrO3 phosphor

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, Ratnesh, E-mail: 31rati@gmail.com [Department of Physics, Bhilai Institute of Technology, Raipur, 493661 (India); Chopra, Seema [Department Physics, G.D Goenka Public School (India)

    2016-05-06

    The manuscript reports the synthesis and thermoluminescence study at a fixed concentration of Er3+ (1 mol%) doped CaZrO3 phosphor. The phosphors were prepared by a modified solid state reaction method. The powder sample was characterized by thermoluminescence (TL) glow curve analysis. The TL glow curve showed the optimized concentration to be 1 mol% for the UV-irradiated sample. The kinetic parameters were calculated by the computerized glow curve deconvolution (CGCD) technique. Trapping parameters give information on the dosimetry loss in the prepared phosphor and its usability in environmental monitoring and for personal monitoring. CGCD is an advanced tool for the analysis of complicated TL glow curves.

  13. About the method of approximation of a simple closed plane curve with a sharp edge

    Directory of Open Access Journals (Sweden)

    Zelenyy A.S.

    2017-02-01

    Full Text Available As noted in the article, the problem of interpolating a simple plane curve initially arose in the simulation of subsonic flow around a body, with subsequent calculation of the velocity potential using the vortex panel method. However, as it turned out, the practical importance of this method is much wider. The algorithm can be successfully applied in any task that requires a discrete set of points describing an arbitrary curve: the potential function method; flow around a body with a sharp trailing edge (airfoil, liquid drop, etc.) for which an analytic expression is very difficult to obtain; creation of fonts and logos; and some tasks in architecture and the garment industry.

  14. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures

    Science.gov (United States)

    Yehia, Ali M.

    2013-05-01

    New, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) was managed successfully to resolve the spectral overlap in itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as, mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration range of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.

  15. A method of non-destructive quantitative analysis of the ancient ceramics with curved surface

    International Nuclear Information System (INIS)

    He Wenquan; Xiong Yingfei

    2002-01-01

    Generally the surface of the sample should be smooth and flat in XRF analysis, but ancient ceramics can hardly match this condition. Two simple methods, based on the fundamental parameter method and the empirical correction method of XRF analysis, are put forward, so that the analysis of a small sample or a sample with a curved surface can be easily completed

  16. Part 5: Receiver Operating Characteristic Curve and Area under the Curve

    Directory of Open Access Journals (Sweden)

    Saeed Safari

    2016-04-01

    Full Text Available Multiple diagnostic tools are used by emergency physicians every day. In addition, new tools are evaluated to obtain more accurate methods and reduce the time or cost of conventional ones. In the previous parts of this educational series, we described diagnostic performance characteristics of diagnostic tests including sensitivity, specificity, positive and negative predictive values, and likelihood ratios. The receiver operating characteristics (ROC) curve is a graphical presentation of screening characteristics. The ROC curve is used to determine the best cutoff point and compare two or more tests or observers by measuring the area under the curve (AUC). In this part of our educational series, we explain the ROC curve and two methods to determine the best cutoff value.
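The AUC mentioned above has a useful probabilistic reading: it equals the probability that a randomly chosen diseased case scores higher than a randomly chosen non-diseased case (the Mann–Whitney statistic). A minimal sketch with hypothetical score lists:

```python
def auc_mann_whitney(neg, pos):
    """Area under the ROC curve as the probability that a random
    positive (diseased) score exceeds a random negative score;
    ties count one half."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical test scores
auc_perfect = auc_mann_whitney([1, 2, 3], [4, 5, 6])   # complete separation
auc_useless = auc_mann_whitney([1, 2], [1, 2])         # indistinguishable groups
```

An AUC of 1.0 means perfect discrimination, 0.5 means the test is no better than chance; real tests fall in between.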

  17. Carbon Lorenz Curves

    NARCIS (Netherlands)

    Groot, L.F.M.|info:eu-repo/dai/nl/073642398

    2008-01-01

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across
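The Gini index referred to above is one minus twice the area under the Lorenz curve; applied to per-capita carbon emissions instead of incomes, it measures emission inequality the same way. A minimal sketch using the trapezoidal rule (hypothetical emission values):

```python
def gini(values):
    """Gini index from the Lorenz curve: sort the values, accumulate
    cumulative shares, integrate by the trapezoidal rule, and take
    1 - 2 * area under the curve."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    cum = 0.0
    area = 0.0
    prev_share = 0.0
    for x in xs:
        cum += x
        share = cum / total
        area += (prev_share + share) / (2.0 * n)
        prev_share = share
    return 1.0 - 2.0 * area

# Hypothetical per-capita emissions: equal vs. maximally concentrated
g_equal = gini([1.0, 1.0, 1.0, 1.0])
g_concentrated = gini([0.0, 0.0, 0.0, 1.0])
```

Perfect equality gives a Gini of 0 (the Lorenz curve coincides with the diagonal); with this discrete trapezoidal estimator, full concentration in one of n holders gives (n − 1)/n.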

  18. An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.

    Science.gov (United States)

    Obuchowski, Nancy A

    2006-02-15

    ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.

  19. Comparison of wind turbines based on power curve analysis

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-01

    In the study measured power curves for 46 wind turbines were analyzed with the purpose to establish the basis for a consistent comparison of the efficiency of the wind turbines. Emphasis is on wind turbines above 500 kW rated power, with power curves measured after 1994 according to international recommendations. The available power curves fulfilling these requirements were smoothened according to a procedure developed for the purpose in such a way that the smoothened power curves are equally representative as the measured curves. The resulting smoothened power curves are presented in a standardized format for the subsequent processing. Using wind turbine data from the power curve documentation the analysis results in curves for specific energy production (kWh/m²/yr) versus specific rotor load (kW/m²) for a range of mean wind speeds. On this basis generalized curves for specific annual energy production versus specific rotor load are established for a number of generalized wind turbine concepts. The 46 smoothened standardized power curves presented in the report, the procedure developed to establish them, and the results of the analysis based on them aim at providers of measured power curves as well as users of them including manufacturers, advisors and decision makers. (au)

  20. A method for the measurement of dispersion curves of circumferential guided waves radiating from curved shells: experimental validation and application to a femoral neck mimicking phantom

    Science.gov (United States)

    Nauleau, Pierre; Minonzio, Jean-Gabriel; Chekroun, Mathieu; Cassereau, Didier; Laugier, Pascal; Prada, Claire; Grimal, Quentin

    2016-07-01

    Our long-term goal is to develop an ultrasonic method to characterize the thickness, stiffness and porosity of the cortical shell of the femoral neck, which could enhance hip fracture risk prediction. To this purpose, we proposed to adapt a technique based on the measurement of guided waves. We previously evidenced the feasibility of measuring circumferential guided waves in a bone-mimicking phantom of a circular cross-section of even thickness. The goal of this study is to investigate the impact of the complex geometry of the femoral neck on the measurement of guided waves. Two phantoms of an elliptical cross-section and one phantom of a realistic cross-section were investigated. A 128-element array was used to record the inter-element response matrix of these waveguides. This experiment was simulated using a custom-made hybrid code. The response matrices were analyzed using a technique based on the physics of wave propagation. This method yields portions of dispersion curves of the waveguides which were compared to reference dispersion curves. For the elliptical phantoms, three portions of dispersion curves were determined with a good agreement between experiment, simulation and theory. The method was thus validated. The characteristic dimensions of the shell were found to influence the identification of the circumferential wave signals. The method was then applied to the signals backscattered by the superior half of constant thickness of the realistic phantom. A cut-off frequency and some portions of modes were measured, with a good agreement with the theoretical curves of a plate waveguide. We also observed that the method cannot be applied directly to the signals backscattered by the lower half of varying thicknesses of the phantom. The proposed approach could then be considered to evaluate the properties of the superior part of the femoral neck, which is known to be a clinically relevant site.

  1. [Determination of the daily changes curve of nitrogen oxides in the atmosphere by digital imaging colorimetry method].

    Science.gov (United States)

    Yang, Chuan-Xiao; Sun, Xiang-Ying; Liu, Bin

    2009-06-01

    From the digital images of the red complex formed by the interaction of nitrite with N-(1-naphthyl)ethylenediamine dihydrochloride and p-aminobenzenesulfonic acid, it could be seen that the solution color obviously deepened with increasing concentration of nitrite ion. The JPEG-format digital images were transformed into gray-scale format with Origin 7.0 software, and the gray values were measured with Scion Image software. The gray values of the digital images likewise obviously increased with increasing concentration of nitrite ion. Thus a novel digital imaging colorimetric (DIC) method to determine nitrogen oxides (NO(x)) contents in air was developed. Based on the red, green and blue (RGB) tricolor theory, the principle of the digital imaging colorimetric method and the factors influencing digital imaging were discussed. The present method was successfully applied to the determination of the daily changes curve of nitrogen oxides in the atmosphere and of NO2- in synthetic samples, with recoveries of 97.3%-104.0% and a relative standard deviation (RSD) of less than 5.0%. The results of the determination were consistent with those obtained by the spectrophotometric method.
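The quantification step in such a DIC method reduces to a mean gray value per image followed by a linear calibration of gray value against concentration. A minimal sketch; the RGB-to-gray weights are the common BT.601 luminance coefficients (an assumption, the record does not state which conversion Origin applies), and the numbers in the example are purely illustrative:

```python
import numpy as np

def gray_value(rgb_image):
    """Mean gray value of an RGB image (H x W x 3 array), using the
    standard luminance weights R:0.299, G:0.587, B:0.114."""
    img = np.asarray(rgb_image, float)
    return float(np.mean(img @ np.array([0.299, 0.587, 0.114])))

def fit_line(concs, grays):
    """Linear calibration gray = a * conc + b (least squares)."""
    a, b = np.polyfit(concs, grays, 1)
    return a, b

def concentration(gray, a, b):
    """Invert the calibration line to read off an unknown concentration."""
    return (gray - b) / a
```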

  2. Buckling Capacity Curves for Steel Spherical Shells Loaded by the External Pressure

    Science.gov (United States)

    Błażejewski, Paweł; Marcinowski, Jakub

    2015-03-01

    Assessment of the buckling resistance of a pressurised spherical cap is not an easy task. There exist two different approaches to achieve this goal. The first involves performing advanced numerical analyses in which material and geometrical nonlinearities are taken into account, as well as considering the worst imperfections of a defined amplitude. This kind of analysis is customarily called GMNIA and is carried out by means of computer software based on FEM. The other, comparatively easier approach relies on the utilisation of previously prepared procedures which enable determination of the critical resistance pRcr, the plastic resistance pRpl and the buckling parameters α, β, η, λ0 needed for the definition of the standard buckling resistance curve. The determination of the buckling capacity curve for a particular class of spherical caps is the principal goal of this work. The methods of determination of the critical pressure and the plastic resistance were described by the authors in [1], whereas the worst imperfection mode for the considered class of spherical shells was found in [2]. The determination of the buckling parameters defining the buckling capacity curve for the whole class of shells is a more complicated task. For this reason the authors focused their attention on spherical steel caps with a radius-to-thickness ratio of R/t = 500, a semi-angle φ = 30° and the boundary condition BC2 (the clamped supporting edge). Taking into account all imperfection forms considered in [2] and different amplitudes expressed as multiples of the shell thickness, sets of buckling parameters defining the capacity curve were determined. These parameters were determined by the methods proposed by Rotter in [3] and [4], where the method of determination of the exponent η by means of an additional parameter k was presented. As a result of the performed analyses, the standard capacity curves for all considered imperfection modes and amplitudes 0.5t, 1.0t, 1.5t

  3. Influence of experimental methods on crossing in magnetic force-gap hysteresis curve of HTS maglev system

    Energy Technology Data Exchange (ETDEWEB)

    Lu Yiyun, E-mail: luyiyun6666@vip.sohu.co [Luoyang Institute of Science and Technology, Luoyang, Henan 471023 (China); Qin Yujie; Dang Qiaohong [Luoyang Institute of Science and Technology, Luoyang, Henan 471023 (China); Wang Jiasu [Applied Superconductivity Laboratory, Southwest Jiaotong University, P.O. Box 152, Chengdu, Sichuan 610031 (China)

    2010-12-01

    The crossing in the magnetic levitation force-gap hysteresis curve of a melt high-temperature superconductor (HTS) versus a NdFeB permanent magnet (PM) was experimentally studied. One HTS bulk and one PM were used in the experiments. Four experimental methods were employed, combining high or low speed of movement of the PM with or without heat insulation material (HIM) enclosed. Experimental results show that the crossing of the levitation force-gap curve is related to the experimental method. A crossing occurs in the magnetic force-gap curve when the PM approaches and departs from the sample, at high or low speed of movement, without HIM enclosed. When the PM is enclosed with HIM during the measurement procedures, there is no crossing in the force-gap curve regardless of the speed of movement of the PM. It was found experimentally that, with an increase in the moving speed of the PM, the maximum magnitude of the levitation force of the HTS also increases. The results are interpreted based on Maxwell theory and flux flow-creep models of HTS.

  4. Influence of experimental methods on crossing in magnetic force-gap hysteresis curve of HTS maglev system

    International Nuclear Information System (INIS)

    Lu Yiyun; Qin Yujie; Dang Qiaohong; Wang Jiasu

    2010-01-01

    The crossing in the magnetic levitation force-gap hysteresis curve of a melt high-temperature superconductor (HTS) versus a NdFeB permanent magnet (PM) was experimentally studied. One HTS bulk and one PM were used in the experiments. Four experimental methods were employed, combining high or low speed of movement of the PM with or without heat insulation material (HIM) enclosed. Experimental results show that the crossing of the levitation force-gap curve is related to the experimental method. A crossing occurs in the magnetic force-gap curve when the PM approaches and departs from the sample, at high or low speed of movement, without HIM enclosed. When the PM is enclosed with HIM during the measurement procedures, there is no crossing in the force-gap curve regardless of the speed of movement of the PM. It was found experimentally that, with an increase in the moving speed of the PM, the maximum magnitude of the levitation force of the HTS also increases. The results are interpreted based on Maxwell theory and flux flow-creep models of HTS.

  5. Discrete curved ray-tracing method for radiative transfer in an absorbing-emitting semitransparent slab with variable spatial refractive index

    International Nuclear Information System (INIS)

    Liu, L.H.

    2004-01-01

    A discrete curved ray-tracing method is developed to analyze radiative transfer in a one-dimensional absorbing-emitting semitransparent slab with a variable spatial refractive index. The curved ray trajectory is locally treated as a straight line, so the complicated and time-consuming computation of the ray trajectory is cut down. A problem of radiative equilibrium with a linearly varying spatial refractive index is taken as an example to examine the accuracy of the proposed method. The temperature distributions are determined by the proposed method and compared with data in the references, which were obtained by other methods. The results show that the discrete curved ray-tracing method has good accuracy in solving radiative transfer in a one-dimensional semitransparent slab with a variable spatial refractive index
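The core idea of treating the curved trajectory as a chain of short straight segments can be sketched in 2-D: advance the ray a small straight step, then bend the direction from the local index gradient via a discretized eikonal equation d/ds(n·dr/ds) = ∇n. This is an illustrative geometric sketch under our own discretization, not the paper's radiative-transfer solver:

```python
import numpy as np

def trace_ray(n, grad_n, r0, d0, ds=1e-3, steps=1000):
    """Discrete curved ray tracing: each sub-step is a straight line,
    then the direction is updated from the eikonal equation
    d/ds (n * dr/ds) = grad n, so the full trajectory curves."""
    r = np.array(r0, float)
    d = np.array(d0, float) / np.linalg.norm(d0)
    path = [r.copy()]
    for _ in range(steps):
        r = r + d * ds                      # straight local segment
        nd = n(r) * d + grad_n(r) * ds      # update the "momentum" n*d
        d = nd / np.linalg.norm(nd)         # renormalize the direction
        path.append(r.copy())
    return np.array(path)
```

In a homogeneous medium (∇n = 0) the traced path degenerates to a straight line, which makes a convenient sanity check.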

  6. One Curve Embedded Full-Bridge MMC Modeling Method with Detailed Representation of IGBT Characteristics

    Science.gov (United States)

    Hongyang, Yu; Zhengang, Lu; Xi, Yang

    2017-05-01

    Modular Multilevel Converters (MMC) are more and more widely used in high-voltage DC transmission systems and high-power motor drive systems, and the MMC is a major topological structure for high-power AC-DC converters. Due to the large number of modules, the complex control algorithm, and the high-power application background, the MMC model used for simulation should be as accurate as possible in reproducing the details of how the MMC works, for dynamic testing of the MMC controller. But so far, there is no simple simulation MMC model which can simulate the switching dynamic process. In this paper, a curve-embedded full-bridge MMC modeling method with detailed representation of IGBT characteristics is proposed. The method is based on switching-curve lookup and simple circuit calculation, and it is simple to implement. Based on simulation comparison tests under Matlab/Simulink, the proposed method is shown to be correct.

  7. Master curve characterization of the fracture toughness behavior in SA508 Gr.4N low alloy steels

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ki-Hyoung, E-mail: shirimp@kaist.ac.k [Department of Materials Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Kim, Min-Chul; Lee, Bong-Sang [Nuclear Materials Research Division, KAERI, Daejeon 305-353 (Korea, Republic of); Wee, Dang-Moon [Department of Materials Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)

    2010-08-15

    The fracture toughness properties of the tempered martensitic SA508 Gr.4N Ni-Mo-Cr low alloy steel for reactor pressure vessels were investigated by using the master curve concept. The results were compared to those of the bainitic SA508 Gr.3 Mn-Mo-Ni low alloy steel, which is a commercial RPV material. The fracture toughness tests were conducted by 3-point bending with pre-cracked Charpy (PCVN) specimens according to the ASTM E1921-09c standard method. The temperature dependence of the fracture toughness was steeper than that predicted by the standard master curve, while the bainitic SA508 Gr.3 steel fitted well with the standard prediction. In order to properly evaluate the fracture toughness of the Gr.4N steels, the exponential coefficient of the master curve equation was changed, and the modified curve was applied to the fracture toughness test results of model alloys with various chemical compositions. It was found that the modified curve provided a better description of the overall fracture toughness behavior and an adequate T0 determination for the tempered martensitic SA508 Gr.4N steels.
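For reference, the ASTM E1921 standard master curve gives the median fracture toughness as K_Jc(med) = 30 + 70·exp[0.019(T − T0)] (in MPa·√m, with T in °C); the modification discussed above amounts to changing the exponential coefficient 0.019. A sketch exposing that coefficient as a parameter:

```python
import numpy as np

def master_curve(T, T0, coeff=0.019):
    """Median fracture toughness (MPa*sqrt(m)) vs temperature (deg C)
    per the ASTM E1921 master curve; `coeff` is the exponential
    coefficient that the authors modified for the Gr.4N steels."""
    return 30.0 + 70.0 * np.exp(coeff * (np.asarray(T) - T0))
```

At T = T0 the median toughness is 100 MPa·√m by construction, which is how T0 is defined in the standard.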

  8. [Determination of six main components in compound theophylline tablet by convolution curve method after prior separation by column partition chromatography]

    Science.gov (United States)

    Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)

    1993-01-01

    On a partition chromatographic column in which the support is kieselguhr and the stationary phase is a sulfuric acid solution (2 mol/L), three components of compound theophylline tablets were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined separately by a computer-aided convolution curve method. The corresponding average recoveries and relative standard deviations of the six components were as follows: 101.6%, 1.46% for caffeine; 99.7%, 0.10% for phenacetin; 100.9%, 1.31% for phenobarbitone; 100.2%, 0.81% for theophylline; 99.9%, 0.81% for theobromine; and 100.8%, 0.48% for aminopyrine.

  9. The estimation of I–V curves of PV panel using manufacturers’ I–V curves and evolutionary strategy

    International Nuclear Information System (INIS)

    Barukčić, M.; Hederić, Ž.; Špoljarić, Ž.

    2014-01-01

    Highlights: • The approximation of an I–V curve by two linear functions and a sigmoid function is proposed. • The sigmoid function is used to estimate the knee of the I–V curve. • The dependence of the sigmoid function parameters on irradiance and temperature is proposed. • The sigmoid function is used to estimate the maximum power point (MPP). - Abstract: A method for the estimation of I–V curves of a photovoltaic (PV) panel by an analytic expression is presented in the paper. The problem is defined in the form of an optimization problem, whose objective is based on data from manufacturers’ or measured I–V curves. In order to estimate the PV panel parameters, the optimization problem is solved by using an evolutionary strategy. The proposed method is tested for different PV panel technologies using data sheets. In this method the approximation of the I–V curve with two linear functions and a sigmoid function is proposed, together with a method for estimating the knee of the I–V curve and the maximum power point at any irradiance and temperature
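The optimization step can be illustrated with a deliberately simplified model: a single sigmoid standing in for the knee of the I–V curve, with its parameters recovered by an evolutionary optimizer (here SciPy's differential evolution as a stand-in for the paper's evolutionary strategy; the model shape and all numbers are illustrative, not the paper's exact formulation):

```python
import numpy as np
from scipy.optimize import differential_evolution

def iv_model(V, Isc, k, V0):
    """Toy I-V shape: current stays near Isc, then rolls off through a
    sigmoid 'knee' centred at V0 with steepness k."""
    return Isc / (1.0 + np.exp(k * (V - V0)))

def fit_iv(V, I, bounds):
    """Recover (Isc, k, V0) by minimizing the squared error with an
    evolutionary optimizer, mirroring the paper's optimization step."""
    sse = lambda p: float(np.sum((iv_model(V, *p) - I) ** 2))
    return differential_evolution(sse, bounds, seed=1, tol=1e-10).x
```

On noiseless synthetic data the fit recovers the generating parameters, which is the basic check before applying such a scheme to data-sheet curves.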

  10. Fitness of the analysis method of magnesium in drinking water using atomic absorption with quadratic calibration curve

    International Nuclear Information System (INIS)

    Perez-Lopez, Esteban

    2014-01-01

    Quantitative chemical analysis is important in research, as well as in areas such as quality control and the sale of services. Some instrumental analysis methods for quantification with a linear calibration curve have limitations, because of the short linear dynamic range of the analyte or, sometimes, of the technique itself. It was therefore worth investigating the convenience of using quadratic calibration curves for analytical quantification, in order to demonstrate that they are a valid calculation model for chemical analysis instruments. A method based on the technique of atomic absorption spectroscopy was used, specifically for the determination of magnesium in a drinking water sample from the Tacares sector north of Grecia. A nonlinear calibration curve was used, specifically a curve with quadratic behavior, and it was compared with the test results obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for the determination in question, since the concentrations were very similar and, according to the hypothesis tests used, can be considered equal. (author) [es
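The calculation model under test, a quadratic rather than linear calibration curve, is easy to reproduce: fit absorbance against concentration with a second-degree polynomial, then invert the quadratic for unknown samples, keeping the root that falls inside the calibrated range. A sketch with illustrative numbers (not the paper's magnesium data):

```python
import numpy as np

def fit_quadratic(conc, absorbance):
    """Quadratic calibration A = a*c**2 + b*c + d (least squares)."""
    return np.polyfit(conc, absorbance, 2)

def read_concentration(absorbance, coeffs, c_max):
    """Invert the quadratic, keeping the root inside the calibrated range."""
    a, b, d = coeffs
    roots = np.roots([a, b, d - absorbance])
    real = roots[np.isreal(roots)].real
    ok = real[(real >= 0) & (real <= c_max)]
    return float(ok[0])
```

The in-range filter matters because a downward-curving calibration gives two mathematical roots for each absorbance, only one of them physical.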

  11. Feasibility of the correlation curves method in calorimeters of different types

    OpenAIRE

    Grushevskaya, E. A.; Lebedev, I. A.; Fedosimova, A. I.

    2014-01-01

    The development of cascade processes in calorimeters of different types is simulated, for the implementation of energy measurement by the correlation curves method. A heterogeneous calorimeter has significant transient effects, associated with the difference in critical energy between the absorber and the detector. The best option is a mixed calorimeter, which has a target block, leading to the rapid development of the cascade, and a homogeneous measuring unit. Uncertainties of e...

  12. Standardization of biodosimetry operations

    International Nuclear Information System (INIS)

    Dainiak, Nicholas

    2016-01-01

    Methods and procedures for generating, interpreting and scoring the frequency of dicentric chromosomes vary among cytogenetic biodosimetry laboratories (CBLs). This variation adds to the already considerable lack of precision inherent in the dicentric chromosome assay (DCA). Although variability in sample collection, cell preparation, equipment and dicentric frequency scoring can never be eliminated with certainty, it can be substantially minimized, resulting in reduced scatter and improved precision. Use of standard operating procedures and technician exchange may help to mitigate variation. Although the development and adoption of international standards (ISO 21243 and ISO 19238) has helped to reduce variation in standard operating procedures (SOPs), all CBLs must maintain process improvement, and those with challenges may require additional assistance. Sources of variation that may not be readily apparent in the SOPs for sample collection and processing include variability in ambient laboratory conditions, media, serum lot and quantity, and the use of particular combinations of cytokines. Variability in the maintenance and calibration of Metafer equipment, and in scoring criteria, reader proficiency and personal factors, may need to be addressed. The calibration curve itself is a source of variation that requires control, using the same known-dose samples among CBLs, measurement of central tendency, and generation of common curves with periodic reassessment to detect drifts in dicentric yield. Finally, the dose estimate should be based on common scoring criteria, using the z-statistic. Although theoretically possible, it is practically impossible to propagate uncertainty over the entire calibration curve due to the many factors contributing to variance. Periodic re-evaluation of the curve is needed, by comparison with newly published curves (using statistical analysis of differences) and by determining the potential causes of any differences. (author)

  13. An alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film

    International Nuclear Information System (INIS)

    Awad, M.M.

    2014-01-01

    The S-shaped curve was observed by Yilbas and Bin Mansoor (2013). In this study, an alternative method to predict the S-shaped curve for logistic characteristics of phonon transport in silicon thin film is presented by using an analytical prediction method. This analytical prediction method was introduced by Bejan and Lorente in 2011 and 2012. The Bejan and Lorente method is based on two-mechanism flow of fast “invasion” by convection and slow “consolidation” by diffusion.

  14. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    Science.gov (United States)

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished within the participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time in IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracer. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracer method was a standard (60)Co solution. The sources were prepared from the mixed (60)Co + (99)Tc solution, and a general extrapolation curve of the type Nβ(Tc-99)/M(Tc-99) = f[1 − ε(Co-60)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  15. Semiclassical methods in curved spacetime and black hole thermodynamics

    International Nuclear Information System (INIS)

    Camblong, Horacio E.; Ordonez, Carlos R.

    2005-01-01

    Improved semiclassical techniques are developed and applied to a treatment of a real scalar field in a D-dimensional gravitational background. This analysis, leading to a derivation of the thermodynamics of black holes, is based on the simultaneous use of (i) a near-horizon description of the scalar field in terms of conformal quantum mechanics; (ii) a novel generalized WKB framework; and (iii) curved-spacetime phase-space methods. In addition, this improved semiclassical approach is shown to be asymptotically exact in the presence of hierarchical expansions of a near-horizon type. Most importantly, this analysis further supports the claim that the thermodynamics of black holes is induced by their near-horizon conformal invariance

  16. Inverse Diffusion Curves Using Shape Optimization.

    Science.gov (United States)

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  17. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    Science.gov (United States)

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
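One spreadsheet-friendly approach of the kind the paper describes (sketched here in Python with a "string length" criterion and synthetic data, as an illustrative stand-in rather than the paper's exact Excel recipe) is to fold the fragmented light curve at each trial period and keep the period that makes the folded curve smoothest:

```python
import numpy as np

def string_length(times, mags, period):
    """'String length' of the light curve folded at a trial period:
    sum of point-to-point distances in (phase, magnitude) space.
    A smoother folded curve gives a shorter string."""
    phase = (np.asarray(times, float) / period) % 1.0
    order = np.argsort(phase)
    p, m = phase[order], np.asarray(mags, float)[order]
    dp = np.diff(np.append(p, p[0] + 1.0))   # wrap phase around
    dm = np.diff(np.append(m, m[0]))
    return float(np.sum(np.hypot(dp, dm)))

def best_period(times, mags, trial_periods):
    """Scan a grid of trial periods and return the string-length minimizer."""
    lengths = [string_length(times, mags, P) for P in trial_periods]
    return trial_periods[int(np.argmin(lengths))]
```

The scan is brute-force, so the trial grid must be fine enough to straddle the true period; aliases of the sampling cadence can produce secondary minima in real data.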

  18. Learning curve for robotic-assisted surgery for rectal cancer: use of the cumulative sum method.

    Science.gov (United States)

    Yamaguchi, Tomohiro; Kinugasa, Yusuke; Shiomi, Akio; Sato, Sumito; Yamakawa, Yushi; Kagawa, Hiroyasu; Tomioka, Hiroyuki; Mori, Keita

    2015-07-01

    Few data are available to assess the learning curve for robotic-assisted surgery for rectal cancer. The aim of the present study was to evaluate the learning curve for robotic-assisted surgery for rectal cancer by a surgeon at a single institute. From December 2011 to August 2013, a total of 80 consecutive patients who underwent robotic-assisted surgery for rectal cancer performed by the same surgeon were included in this study. The learning curve was analyzed using the cumulative sum method. This method was used for all 80 cases, taking into account operative time. Operative procedures included anterior resections in 6 patients, low anterior resections in 46 patients, intersphincteric resections in 22 patients, and abdominoperineal resections in 6 patients. Lateral lymph node dissection was performed in 28 patients. Median operative time was 280 min (range 135-683 min), and median blood loss was 17 mL (range 0-690 mL). No postoperative complications of Clavien-Dindo classification Grade III or IV were encountered. We arranged operative times and calculated cumulative sum values, allowing differentiation of three phases: phase I, Cases 1-25; phase II, Cases 26-50; and phase III, Cases 51-80. Our data suggested three phases of the learning curve in robotic-assisted surgery for rectal cancer. The first 25 cases formed the learning phase.
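The cumulative sum construction used above can be sketched in a few lines: cumulate each case's deviation of operative time from the overall mean, then read phase boundaries off slope changes of the resulting curve (a rising slope marks cases slower than average, a falling slope faster ones). Data below are illustrative, not the study's:

```python
import numpy as np

def cusum(operative_times):
    """CUSUM of operative time: cumulative sum of deviations from the
    overall mean, so the curve peaks where performance turns the corner."""
    t = np.asarray(operative_times, float)
    return np.cumsum(t - t.mean())
```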

  19. Standardization of a sulfur quantitative analysis method by X ray fluorescence in a leaching solution for bio-available sulfates in soil

    International Nuclear Information System (INIS)

    Morales S, E.; Aguilar S, E.

    1989-11-01

    A method for bio-available sulfate analysis in soils is described. A Ca(H2PO4)2 leaching solution was used for treatment of the soil samples. A standard Na2SO4 solution was used for preparing a calibration curve, and the fundamental parameters method approach was also employed. An Am-241 (100 mCi) source and a Si(Li) detector were employed. An analysis could be done in 5 minutes; good reproducibility (5%) and accuracy (5%) were obtained. The method is very competitive with conventional nephelometry, where good and reproducible suspensions are difficult to obtain. (author)

  20. RMS fatigue curves for random vibrations

    International Nuclear Information System (INIS)

    Brenneman, B.; Talley, J.G.

    1984-01-01

    Fatigue usage factors for deterministic or constant amplitude vibration stresses may be calculated with well known procedures and fatigue curves given in the ASME Boiler and Pressure Vessel Code. However, some phenomena produce nondeterministic cyclic stresses which can only be described and analyzed with statistical concepts and methods. Such stresses may be caused by turbulent fluid flow over a structure. Previous methods for solving this statistical fatigue problem are often difficult to use and may yield inaccurate results. Two such methods examined herein are Crandall's method and the ''3σ'' method. The objective of this paper is to provide a method for creating ''RMS fatigue curves'' which accurately incorporate the requisite statistical information. These curves are given and may be used by analysts with the same ease and in the same manner as the ASME fatigue curves
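For the narrow-band case that underlies such RMS fatigue curves, a classical closed form (Miles' result) follows from Rayleigh-distributed stress peaks, an S-N curve N(S) = C/S^m, and Miner's rule: the expected damage rate is (ν0/C)(√2·σ_RMS)^m·Γ(1 + m/2), with ν0 the zero-crossing rate. A sketch of that textbook formula, not the paper's specific curves:

```python
import math

def narrowband_damage_rate(rms_stress, nu0, C, m):
    """Expected fatigue damage per unit time for narrow-band Gaussian
    stress: Rayleigh-distributed peaks, S-N curve N(S) = C / S**m,
    and Miner's rule summation (Miles' classical result)."""
    return (nu0 / C) * (math.sqrt(2.0) * rms_stress) ** m * math.gamma(1.0 + m / 2.0)
```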

  1. Evaluation of methods for characterizing the melting curves of a high temperature cobalt-carbon fixed point to define and determine its melting temperature

    Science.gov (United States)

    Lowe, David; Machin, Graham

    2012-06-01

    The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve should be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting takes place over typically some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches are or could be used; some purely practical, determining the point of inflection (POI) of the melting curve, some attempting to extrapolate to the liquidus temperature just at the end of melting, and a method that claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification that has not previously been applied to eutectic melting. The different methods have been applied to cobalt-carbon melting curves that were obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study it is recommended that the POI continue to be used as a pragmatic measure of temperature but where required a specified limits approach should be used to define and determine the melting temperature.
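Of the approaches listed, the point of inflection is the most mechanical to compute: smooth the recorded melting curve, differentiate, and locate the extremum of the first derivative (the zero crossing of the second). A sketch on a synthetic S-shaped curve; the smoothing window and data are illustrative, not the paper's cobalt-carbon records:

```python
import numpy as np

def point_of_inflection(t, T, window=5):
    """POI of an S-shaped melting curve: extremum of the smoothed first
    derivative dT/dt, i.e. the zero crossing of the second derivative."""
    t, T = np.asarray(t, float), np.asarray(T, float)
    dT = np.gradient(T, t)
    dT = np.convolve(dT, np.ones(window) / window, mode="same")
    i = int(np.argmax(np.abs(dT)))          # steepest point of the S-curve
    return t[i], T[i]
```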

  2. Impaired Curve Negotiation in Drivers with Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Ergun Y Uç

    2009-03-01

    OBJECTIVE: To assess the ability to negotiate curves in drivers with Parkinson's disease (PD). METHODS: Licensed active drivers with mild-moderate PD (n = 76; 65 male, 11 female) and elderly controls (n = 51; 26 male, 25 female) drove on a simulated 2-lane rural highway in a high-fidelity simulator scenario in which the drivers had to negotiate 6 curves during a 37-mile drive. The participants underwent motor, cognitive, and visual testing before the simulator drive. RESULTS: Compared to controls, the drivers with PD had less vehicle control and driving safety, both on curves and straight baseline segments, as measured by significantly higher standard deviation of lateral position (SDLP) and lane violation counts. The PD group also scored lower on tests of motor, cognitive, and visual abilities. In the PD group, lower scores on tests of motion perception, visuospatial ability, executive function, postural instability, and general cognition, as well as a lower level of independence in daily activities, predicted low vehicle control on curves. CONCLUSION: Drivers with PD had less vehicle control and driving safety on curves compared to controls, which was associated primarily with impairments in visual perception and cognition, rather than motor function.

  3. On a framework for generating PoD curves assisted by numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Subair, S. Mohamed, E-mail: prajagopal@iitm.ac.in; Agrawal, Shweta, E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in; Rajagopal, Prabhu, E-mail: prajagopal@iitm.ac.in [Indian Institute of Technology Madras, Department of Mechanical Engineering, Chennai, T.N. (India); Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar [Indira Gandhi Centre for Atomic Research, Metallurgy and Materials Group, Kalpakkam, T.N. (India)

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves, though, can be expensive, requiring large data sets (covering defects and test conditions) as well as equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be too complex or sparse to justify this assumption. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
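A common statistical core of PoD estimation is a hit/miss model in which the log-odds of detection are linear in the logarithm of defect size. The sketch below fits such a model by maximum likelihood on synthetic inspection data; the defect sizes, hit rule, and parameter values are assumptions for illustration, not the Bayesian, FE-assisted procedure of the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_pod(sizes, hits):
    """Fit a hit/miss PoD(a) = expit(b0 + b1*ln a) model by maximum
    likelihood (log-odds of detection linear in log defect size)."""
    x = np.log(sizes)
    def nll(beta):
        p = expit(beta[0] + beta[1] * x)
        p = np.clip(p, 1e-12, 1 - 1e-12)      # guard the log
        return -np.sum(hits * np.log(p) + (1 - hits) * np.log(1 - p))
    res = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
    b0, b1 = res.x
    return lambda a: expit(b0 + b1 * np.log(a))

# synthetic inspection data: larger notches are detected more often
rng = np.random.default_rng(0)
sizes = rng.uniform(0.2, 5.0, 400)            # notch sizes, mm (made up)
hits = (rng.random(400) < expit(3.0 * (np.log(sizes) - np.log(1.0)))).astype(float)
pod = fit_pod(sizes, hits)
print(pod(0.3), pod(3.0))                     # PoD grows with defect size
```

Quantities such as a90/95 would then be read off the fitted curve with appropriate confidence bounds.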

  4. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
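The ray-based resampling described above, dividing the outline by equiangular rays and averaging radial distances, can be sketched as follows (a single ray centre and synthetic circular outlines are assumed for simplicity; the paper uses two ray centres and overlapping arcs):

```python
import numpy as np

def radial_profile(xy, centre, n_rays=36):
    """Resample a closed outline as radii along equiangular rays from
    `centre`, by interpolating radius versus polar angle."""
    d = xy - centre
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)
    grid = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    return np.interp(grid, theta[order], r[order], period=2 * np.pi)

# two synthetic outlines (circles of radius 10.2 and 9.8); their
# ray-by-ray average is the radius-10 mean outline
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
c1 = np.c_[10.2 * np.cos(t), 10.2 * np.sin(t)]
c2 = np.c_[9.8 * np.cos(t), 9.8 * np.sin(t)]
mean_r = (radial_profile(c1, np.zeros(2)) + radial_profile(c2, np.zeros(2))) / 2
print(mean_r.round(2))   # all values near 10.0
```

Averaging within foot-length groups, as in the study, amounts to averaging these radial profiles over all outlines in a group.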

  5. Comparison of Optimization and Two-point Methods in Estimation of Soil Water Retention Curve

    Science.gov (United States)

    Ghanbarian-Alavijeh, B.; Liaghat, A. M.; Huang, G.

    2009-04-01

    The soil water retention curve (SWRC) is one of the soil hydraulic properties whose direct measurement is time consuming and expensive. Since its measurement is unavoidable in environmental studies, e.g. investigations of unsaturated hydraulic conductivity and solute transport, this study attempts to predict the soil water retention curve from two measured points. Using the Cresswell and Paydar (1996) method (two-point method) and an optimization method developed in this study on the basis of two points of the SWRC, the parameters of the Tyler and Wheatcraft (1990) model (fractal dimension and air entry value) were estimated; water contents at different matric potentials were then estimated and compared with their measured values (n=180). For each method, we used both 3 and 1500 kPa (case 1) and 33 and 1500 kPa (case 2) as the two points of the SWRC. The calculated RMSE values showed that in the Cresswell and Paydar (1996) method, there is no significant difference between case 1 and case 2, although the RMSE value in case 2 (2.35) was slightly less than in case 1 (2.37). The results also showed that the optimization method developed in this study had significantly lower RMSE values for cases 1 (1.63) and 2 (1.33) than the Cresswell and Paydar (1996) method.
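One common form of the Tyler and Wheatcraft (1990) fractal SWRC is theta(psi) = theta_s * (psi_a/psi)^(3-D) for psi greater than the air-entry value psi_a. Under that assumed form, and with a known saturated water content theta_s, the two model parameters follow in closed form from two measured points, as this sketch shows with made-up values:

```python
import numpy as np

def fit_fractal_swrc(psi1, th1, psi2, th2, theta_s):
    """Solve theta = theta_s * (psi_a / psi)**(3 - D) for the fractal
    dimension D and the air-entry value psi_a from two measured points
    (psi in kPa, theta in cm3/cm3)."""
    m = np.log(th1 / th2) / np.log(psi2 / psi1)   # m = 3 - D
    D = 3.0 - m
    psi_a = psi1 * (th1 / theta_s) ** (1.0 / m)
    return D, psi_a

def swrc(psi, D, psi_a, theta_s):
    """Fractal retention curve; saturated below the air-entry value."""
    psi = np.asarray(psi, dtype=float)
    return np.where(psi <= psi_a, theta_s, theta_s * (psi_a / psi) ** (3.0 - D))

# round-trip check with made-up parameters and the 33/1500 kPa pair
D0, psia0, ts = 2.8, 5.0, 0.45
p1, p2 = 33.0, 1500.0
D, psia = fit_fractal_swrc(p1, swrc(p1, D0, psia0, ts),
                           p2, swrc(p2, D0, psia0, ts), ts)
print(round(D, 3), round(psia, 3))   # recovers D = 2.8 and psi_a = 5.0
```

The optimization variant in the paper instead fits the same two points by minimizing a residual, which is useful when theta_s is itself uncertain.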

  6. A novel method of calculating the energy deposition curve of nanosecond pulsed surface dielectric barrier discharge

    International Nuclear Information System (INIS)

    He, Kun; Wang, Xinying; Lu, Jiayu; Cui, Quansheng; Pang, Lei; Di, Dongxu; Zhang, Qiaogen

    2015-01-01

    Obtaining the energy deposition curve is very important in the fields where nanosecond pulse dielectric barrier discharges (NPDBDs) are applied, as it helps in understanding the discharge physics and fast gas heating. In this paper, an equivalent circuit model composed of three capacitances is introduced and a method of calculating the energy deposition curve is proposed for a nanosecond pulse surface dielectric barrier discharge (NPSDBD) plasma actuator. The capacitance C_d and the energy deposition curve E_R are determined by mathematically proving that the mapping from C_d to E_R is bijective and by numerically searching for the C_d that makes E_R a monotonically non-decreasing function. It is found that the value of C_d varies with the amplitude of the applied pulse voltage, due to the change of discharge area, and depends on the polarity of the applied voltage. The bijectiveness of the mapping from C_d to E_R in nanosecond pulse volumetric dielectric barrier discharge (NPVDBD) is also demonstrated, and the feasibility of applying the new method to NPVDBD is validated. This preliminarily shows a high possibility of developing a unified approach to calculating the energy deposition curve in NPDBD. (paper)

  7. Standard methods for sampling North American freshwater fishes

    Science.gov (United States)

    Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

    This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.

  8. Evaluation of pyrolysis curves for volatile elements in aqueous standards and carbon-containing matrices in electrothermal vaporization inductively coupled plasma mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Silva, A.F. [Delft University of Technology, Faculty of Applied Sciences, DelftChemTech, Julianalaan 136, 2628 BL Delft (Netherlands); Universidade Federal de Santa Catarina, Departamento de Quimica, 88040-900 Florianopolis, SC (Brazil); Welz, B. [Universidade Federal de Santa Catarina, Departamento de Quimica, 88040-900 Florianopolis, SC (Brazil); Loos-Vollebregt, M.T.C. de [Delft University of Technology, Faculty of Applied Sciences, DelftChemTech, Julianalaan 136, 2628 BL Delft (Netherlands)], E-mail: m.t.c.deloos-vollebregt@tudelft.nl

    2008-07-15

    Pyrolysis curves in electrothermal atomic absorption spectrometry (ET AAS) and electrothermal vaporization inductively coupled plasma mass spectrometry (ETV-ICP-MS) have been compared for As, Se and Pb in lobster hepatopancreas certified reference material using Pd/Mg as the modifier. The ET AAS pyrolysis curves confirm that the analytes are not lost from the graphite furnace up to a pyrolysis temperature of 800 °C. Nevertheless, a downward slope of the pyrolysis curve was observed for these elements in the biological material using ETV-ICP-MS. This could be related to a gain of sensitivity at low pyrolysis temperatures due to the matrix, which can act as carrier and/or promote changes in the plasma ionization equilibrium. Experiments with the addition of ascorbic acid to the aqueous standards confirmed that the higher intensities obtained in ETV-ICP-MS are related to the presence of organic compounds in the slurry. Pyrolysis curves for As, Se and Pb in coal and coal fly ash were also investigated using the same Pd/Mg modifier. Carbon intensities were measured in all samples using different pyrolysis temperatures. It was observed that pyrolysis curves for the three analytes in all slurry samples were similar to the corresponding graphs that show the carbon intensity for the same slurries for pyrolysis temperatures from 200 °C up to 1000 °C.

  9. Evaluation methods for neutron cross section standards

    International Nuclear Information System (INIS)

    Bhat, M.R.

    1980-01-01

    Methods used to evaluate the neutron cross section standards are reviewed and their relative merits, assessed. These include phase-shift analysis, R-matrix fit, and a number of other methods by Poenitz, Bhat, Kon'shin and the Bayesian or generalized least-squares procedures. The problems involved in adopting these methods for future cross section standards evaluations are considered, and the prospects for their use, discussed. 115 references, 5 figures, 3 tables

  10. A method for the rapid generation of nonsequential light-response curves of chlorophyll fluorescence.

    Science.gov (United States)

    Serôdio, João; Ezequiel, João; Frommlet, Jörg; Laviale, Martin; Lavaud, Johann

    2013-11-01

    Light-response curves (LCs) of chlorophyll fluorescence are widely used in plant physiology. Most commonly, LCs are generated sequentially, exposing the same sample to a sequence of distinct actinic light intensities. These measurements are not independent, as the response to each new light level is affected by the light exposure history experienced during previous steps of the LC, an issue particularly relevant in the case of the popular rapid light curves. In this work, we demonstrate the proof of concept of a new method for the rapid generation of LCs from nonsequential, temporally independent fluorescence measurements. The method is based on the combined use of sample illumination with digitally controlled, spatially separated beams of actinic light and a fluorescence imaging system. It allows the generation of a whole LC, including a large number of actinic light steps and adequate replication, within the time required for a single measurement (and therefore named "single-pulse light curve"). This method is illustrated for the generation of LCs of photosystem II quantum yield, relative electron transport rate, and nonphotochemical quenching on intact plant leaves exhibiting distinct light responses. This approach makes it also possible to easily characterize the integrated dynamic light response of a sample by combining the measurement of LCs (actinic light intensity is varied while measuring time is fixed) with induction/relaxation kinetics (actinic light intensity is fixed and the response is followed over time), describing both how the response to light varies with time and how the response kinetics varies with light intensity.

  11. Unconditional and Conditional Standards Using Cognitive Function Curves for the Modified Mini-Mental State Exam: Cross-Sectional and Longitudinal Analyses in Older Chinese Adults in Singapore.

    Science.gov (United States)

    Cheung, Yin Bun; Xu, Ying; Feng, Lei; Feng, Liang; Nyunt, Ma Shwe Zin; Chong, Mei Sian; Lim, Wee Shiong; Lee, Tih Shih; Yap, Philip; Yap, Keng Bee; Ng, Tze Pin

    2015-09-01

    The conventional practice of assessing cognitive status and monitoring change over time in older adults using normative values of the Mini-Mental State Exam (MMSE) based on age bands is imprecise. Moreover, population-based normative data on changes in MMSE score over time are scarce and crude because they do not include age- and education-specific norms. This study aims to develop unconditional standards for assessing current cognitive status and conditional standards that take prior MMSE score into account for assessing longitudinal change, with percentile curves as smooth functions of age. Cross-sectional and longitudinal data of a modified version of the MMSE for 2,026 older Chinese adults from the Singapore Longitudinal Aging Study, aged 55-84, in Singapore were used to estimate quantile regression coefficients and create unconditional standards and conditional standards. We presented MMSE percentile curves as a smooth function of age in education strata, for unconditional and conditional standards, based on quantile regression coefficient estimates. We found the 5th and 10th percentiles were more strongly associated with age and education than were higher percentiles. Model diagnostics demonstrated the accuracy of the standards. The development and use of unconditional and conditional standards should facilitate cognitive assessment in clinical practice and deserve further studies. Copyright © 2015 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
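Percentile curves of the kind described here rest on quantile regression, which minimizes the pinball (check) loss rather than squared error. A minimal sketch on synthetic MMSE-like data (the linear-in-age model and all numbers are illustrative assumptions; the study uses richer specifications and smooth age functions):

```python
import numpy as np
from scipy.optimize import minimize

def quantile_fit(age, score, q):
    """Fit score ~ b0 + b1*age at quantile q by minimising the pinball
    (check) loss, the objective underlying quantile regression."""
    def loss(beta):
        r = score - (beta[0] + beta[1] * age)
        return np.sum(np.maximum(q * r, (q - 1) * r))
    return minimize(loss, x0=[np.median(score), 0.0], method="Nelder-Mead").x

# synthetic cohort: cognitive score declining slightly with age, noisy
rng = np.random.default_rng(1)
age = rng.uniform(55, 84, 2000)
score = 29.0 - 0.08 * (age - 55.0) + rng.normal(0.0, 1.5, 2000)

b05 = quantile_fit(age, score, 0.05)   # 5th-percentile curve
b50 = quantile_fit(age, score, 0.50)   # median curve
print(b05, b50)   # the 5th-percentile curve lies below the median curve
```

Conditional standards additionally include the prior score as a regressor, so the percentile curve describes change given the earlier measurement.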

  12. Methodology for developing and implementing alternative temperature-time curves for testing the fire resistance of barriers for nuclear power plant applications

    International Nuclear Information System (INIS)

    Cooper, L.Y.; Steckler, K.D.

    1996-08-01

    Advances in fire science over the past 40 years have offered the potential for developing technically sound alternative temperature-time curves for use in evaluating fire barriers for areas where fire exposures can be expected to be significantly different than the ASTM E-119 standard temperature-time exposure. This report summarizes the development of the ASTM E-119, standard temperature-time curve, and the efforts by the federal government and the petrochemical industry to develop alternative fire endurance curves for specific applications. The report also provides a framework for the development of alternative curves for application at nuclear power plants. The staff has concluded that in view of the effort necessary for the development of nuclear power plant specific temperature-time curves, such curves are not a viable approach for resolving the issues concerning Thermo-Lag fire barriers. However, the approach may be useful to licensees in the development of performance-based fire protection methods in the future

  13. Methodology for developing and implementing alternative temperature-time curves for testing the fire resistance of barriers for nuclear power plant applications

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, L.Y.; Steckler, K.D.

    1996-08-01

    Advances in fire science over the past 40 years have offered the potential for developing technically sound alternative temperature-time curves for use in evaluating fire barriers for areas where fire exposures can be expected to be significantly different than the ASTM E-119 standard temperature-time exposure. This report summarizes the development of the ASTM E-119, standard temperature-time curve, and the efforts by the federal government and the petrochemical industry to develop alternative fire endurance curves for specific applications. The report also provides a framework for the development of alternative curves for application at nuclear power plants. The staff has concluded that in view of the effort necessary for the development of nuclear power plant specific temperature-time curves, such curves are not a viable approach for resolving the issues concerning Thermo-Lag fire barriers. However, the approach may be useful to licensees in the development of performance-based fire protection methods in the future.

  14. Variation of curve number with storm depth

    Science.gov (United States)

    Banasik, K.; Hejduk, L.

    2012-04-01

    The NRCS Curve Number (also known as SCS-CN) method is well known as a tool for predicting flood runoff depth from small ungauged catchments. The traditional way of determining CNs, based on soil characteristics, land use and hydrological conditions, seems to have a tendency to overpredict floods in some cases. Over 30 years of rainfall-runoff data, collected in two small (A = 23.4 and 82.4 km2), lowland, agricultural catchments in central Poland (Banasik & Woodward 2010), were used to determine the runoff Curve Number and to examine its tendency to change. The observed CN declines with increasing storm size, which according to recent views of Hawkins (1993) can be classified as a standard response of a watershed. The analysis concluded that using the CN value determined according to the procedure described in the USDA-SCS Handbook, one receives a representative value for estimating storm runoff from high rainfall depths in the analyzed catchments. This has been confirmed by applying the "asymptotic approach" for estimating the watershed curve number from the rainfall-runoff data. Furthermore, the analysis indicated that the CN estimated from the mean retention parameter S of recorded events with rainfall depth higher than the initial abstraction also approaches the theoretical CN. The observed CN, ranging from 59.8 to 97.1 and from 52.3 to 95.5 in the smaller and the larger catchment respectively, declines with increasing storm size. The investigation also demonstrated variability of the CN during the year, with much lower values during the vegetation season. Banasik K. & D.E. Woodward (2010). "Empirical determination of curve number for a small agricultural watershed in Poland". 2nd Joint Federal Interagency Conference, Las Vegas, NV, June 27 - July 1, 2010 (http://acwi.gov/sos/pubs/2ndJFIC/Contents/10E_Banasik_28_02_10.pdf). Hawkins R. H. (1993). "Asymptotic determination of curve numbers from data". Journal of Irrigation and Drainage
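For reference, the NRCS relations behind this analysis are Q = (P - Ia)^2 / (P - Ia + S) with Ia = 0.2 S and CN = 25400 / (S + 254), depths in mm. A sketch of the forward computation and of back-calculating CN from a single event, as is done when deriving observed CNs from rainfall-runoff pairs:

```python
import math

def runoff_cn(P, CN, lam=0.2):
    """NRCS curve-number runoff depth Q (mm) for rainfall P (mm):
    Q = (P - Ia)^2 / (P - Ia + S), with Ia = lam * S."""
    S = 25400.0 / CN - 254.0            # potential retention, mm
    Ia = lam * S                        # initial abstraction, mm
    return 0.0 if P <= Ia else (P - Ia) ** 2 / (P - Ia + S)

def cn_from_event(P, Q, lam=0.2):
    """Back-calculate CN from one event (P, Q in mm) by solving the
    runoff equation for S: lam^2*S^2 - (2*lam*P + (1-lam)*Q)*S
    + P^2 - P*Q = 0 (the smaller, physical root)."""
    a = lam ** 2
    b = -(2.0 * lam * P + (1.0 - lam) * Q)
    c = P ** 2 - P * Q
    S = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return 25400.0 / (S + 254.0)

print(round(runoff_cn(80.0, 75.0), 1))                       # runoff for an 80 mm storm
print(round(cn_from_event(80.0, runoff_cn(80.0, 75.0)), 1))  # recovers CN = 75
```

Applying `cn_from_event` to a series of events sorted by rainfall depth is the starting point of Hawkins' asymptotic determination of the watershed CN.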

  15. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks

  16. A quick on-line state of health estimation method for Li-ion battery with incremental capacity curves processed by Gaussian filter

    Science.gov (United States)

    Li, Yi; Abdel-Monem, Mohamed; Gopalakrishnan, Rahul; Berecibar, Maitane; Nanini-Maury, Elise; Omar, Noshin; van den Bossche, Peter; Van Mierlo, Joeri

    2018-01-01

    This paper proposes an advanced state of health (SoH) estimation method for high-energy NMC lithium-ion batteries based on incremental capacity (IC) analysis. IC curves are used because of their ability to detect and quantify battery degradation mechanisms. A simple and robust smoothing method based on a Gaussian filter is proposed to reduce the noise on IC curves, so that the signatures associated with battery ageing can be accurately identified. A linear regression relationship is found between battery capacity and the positions of features of interest (FOIs) on the IC curves. Results show that the SoH estimation function developed from one single battery cell is able to evaluate the SoH of other batteries cycled at different cycling depths with less than 2.5% maximum error, which proves the robustness of the proposed method. With this technique, partial charging voltage curves can be used for SoH estimation, and the testing time can therefore be largely reduced. The method shows great potential for application in practice, as it only requires static charging curves and can be easily implemented in a battery management system (BMS).
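The smoothing step described above, an IC curve dQ/dV cleaned with a Gaussian filter before feature extraction, can be sketched like this (the charging curve below is synthetic, and sigma and all parameters are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def incremental_capacity(voltage, capacity, sigma=5):
    """Compute a smoothed IC curve dQ/dV from a charging curve.
    Gaussian filtering suppresses measurement noise before the
    features of interest (e.g. peak positions) are extracted."""
    dqdv = np.gradient(capacity, voltage)
    return gaussian_filter1d(dqdv, sigma=sigma)

# synthetic charging curve: capacity rising with a plateau near 3.7 V
v = np.linspace(3.0, 4.2, 600)
q = 2.0 * (v - 3.0) + 1.5 * np.tanh((v - 3.7) / 0.05)      # Ah, made up
q_noisy = q + np.random.default_rng(2).normal(0.0, 0.01, v.size)

ic = incremental_capacity(v, q_noisy)
peak_v = v[np.argmax(ic)]
print(round(peak_v, 2))   # feature of interest near the 3.7 V plateau
```

Tracking how such peak positions drift with cycling, and regressing capacity on them, gives the SoH estimator described in the abstract.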

  17. Remote sensing used for power curves

    International Nuclear Information System (INIS)

    Wagner, R; Joergensen, H E; Paulsen, U S; Larsen, T J; Antoniou, I; Thesbjerg, L

    2008-01-01

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the power standard deviation in the power curve significantly. Two LiDARs and a SoDAR are used to measure the wind profile in front of a wind turbine. These profiles are used to calculate the equivalent wind speed. The comparison of the power curves obtained with the three instruments to the traditional power curve, obtained using a cup anemometer measurement, confirms the results obtained from the simulations. Using LiDAR profiles reduces the error in power curve measurement when they are used as a relative instrument together with a cup anemometer. Results from the SoDAR are not as promising, probably because noisy measurements result in distorted profiles.
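A shear-accounting equivalent wind speed can be formed by cube-averaging the measured profile over the rotor area, in the spirit of the rotor-equivalent wind speed later standardized in IEC 61400-12-1. The profile and turbine geometry below are made up:

```python
import numpy as np

def rotor_equivalent_wind_speed(heights, speeds, hub_height, rotor_diameter, n_seg=100):
    """Shear-corrected equivalent wind speed: cube-average the measured
    profile over the rotor disc, weighting each horizontal slice by its
    chord length (proportional to the slice area)."""
    R = rotor_diameter / 2.0
    z = np.linspace(hub_height - R, hub_height + R, n_seg)
    v = np.interp(z, heights, speeds)                          # LiDAR/SoDAR profile
    w = np.sqrt(np.maximum(R**2 - (z - hub_height)**2, 0.0))   # chord weights
    return (np.sum(w * v**3) / np.sum(w)) ** (1.0 / 3.0)

# sheared profile (power law, exponent 0.2) sampled at a few heights
heights = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
speeds = 8.0 * (heights / 80.0) ** 0.2
veq = rotor_equivalent_wind_speed(heights, speeds, hub_height=80.0, rotor_diameter=80.0)
print(round(veq, 3))   # close to, but not equal to, the 8.0 m/s hub-height value
```

Binning power against `veq` instead of the hub-height cup reading is what reduces the scatter in the measured power curve.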

  18. Creation of three-dimensional craniofacial standards from CBCT images

    Science.gov (United States)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three-dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct surface-based 3D standards that allow analysis of morphometric changes seen in the craniofacial complex. To create the 3D surface standards, we have implemented a series of steps: 1) converting bi-plane 2D tracings into sets of splines; 2) converting the 2D spline curves from bi-plane projections into 3D space curves; 3) creating labeled templates of facial and skeletal shapes; and 4) creating 3D average-surface Bolton standards. We used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, providing high-resolution, isotropic CT volume images, together with digitized Bolton Standards (lateral and frontal male, female, and average tracings from ages 3 to 18 years), and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and provide a reference for correcting facial anomalies in dental medicine.
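Step 2 above, turning two orthogonal 2D projections into one 3D space curve, amounts to resampling both tracings on a shared vertical coordinate. A minimal sketch (synthetic curves are used; real tracings would first be converted to splines, and each tracing is assumed single-valued in z):

```python
import numpy as np

def biplane_to_3d(lateral_yz, frontal_xz, n=100):
    """Fuse a lateral (y, z) tracing and a frontal (x, z) tracing of the
    same anatomical curve into a 3D space curve by resampling both
    projections on a common z (cranio-caudal) grid."""
    z = np.linspace(max(lateral_yz[:, 1].min(), frontal_xz[:, 1].min()),
                    min(lateral_yz[:, 1].max(), frontal_xz[:, 1].max()), n)
    y = np.interp(z, lateral_yz[:, 1], lateral_yz[:, 0])
    x = np.interp(z, frontal_xz[:, 1], frontal_xz[:, 0])
    return np.c_[x, y, z]

# demo: a parabola seen laterally and a straight line seen frontally
zs = np.linspace(0.0, 1.0, 50)
curve = biplane_to_3d(np.c_[zs**2, zs], np.c_[1.0 - zs, zs], n=80)
print(curve.shape)   # (80, 3)
```

Averaging many such space curves per age group, after alignment, yields the 3D analogue of the averaged 2D Bolton tracings.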

  19. Computer Drawing Method for Operating Characteristic Curve of PV Power Plant Array Unit

    Science.gov (United States)

    Tan, Jianbin

    2018-02-01

    In the engineering design of large-scale grid-connected photovoltaic power stations, and in the development of the many associated simulation and analysis systems, it is necessary to draw the operating characteristic curves of photovoltaic array units accurately by computer; for this purpose, a segmented non-linear interpolation algorithm is proposed. Taking module performance parameters as the main design basis, the method computes five characteristic operating points of a PV module. Combined with the series and parallel connection of the PV array, the performance curve of the PV array unit can then be drawn by computer. The computed data can also be supplied to PV development software, improving the practical application of PV array units.
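One way to realize a segmented non-linear interpolation of a module I-V curve from characteristic points is shape-preserving piecewise-cubic (PCHIP) interpolation, sketched below. The module numbers and the three-knot choice are illustrative assumptions, not the algorithm of the paper:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def iv_curve(isc, vmp, imp, voc):
    """Shape-preserving interpolation of a PV module I-V curve through
    characteristic points: short circuit (0, Isc), maximum power point
    (Vmp, Imp) and open circuit (Voc, 0). PCHIP keeps the interpolated
    current monotonically decreasing between the knots."""
    return PchipInterpolator([0.0, vmp, voc], [isc, imp, 0.0])

# datasheet-style values for a hypothetical ~250 W module
curve = iv_curve(isc=8.8, vmp=30.5, imp=8.2, voc=37.8)
v = np.linspace(0.0, 37.8, 100)
p = v * curve(v)                    # power curve of one module
v_mpp = v[np.argmax(p)]
print(round(v_mpp, 1))              # maximum power near vmp = 30.5 V

# series/parallel scaling to an array unit: Ns modules in series scale
# voltage, Np parallel strings scale current, so power scales by Ns*Np
Ns, Np = 20, 3
array_p_max = Ns * Np * p.max()
print(round(array_p_max))           # roughly 15 kW for this made-up array
```

A production implementation would use more than three knots (the five characteristic points mentioned in the abstract) and handle temperature and irradiance corrections.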

  20. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    Science.gov (United States)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
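The scaling step described above multiplies the zero-absorption reflectance by a weighted Beer-Lambert factor, with each layer weighted by the fraction of the photon path it contains. A sketch with made-up layer fractions and optical properties (time-independent fractions are assumed for brevity; the paper derives them from the average classical path):

```python
import numpy as np

def scale_reflectance(t, R0, layer_fractions, mu_a, v=0.214):
    """Scale a zero-absorption time-resolved reflectance curve R0(t) by
    a weighted Beer-Lambert factor: each layer i contributes absorption
    mu_a[i] over the fraction layer_fractions[i] of the photon's path.
    v is the speed of light in tissue (mm/ps, assuming n ~ 1.4)."""
    atten = np.zeros_like(t)
    for f_i, mua_i in zip(layer_fractions, mu_a):
        atten += f_i * mua_i          # effective absorption coefficient
    return R0 * np.exp(-atten * v * t)

t = np.linspace(0.0, 1000.0, 500)                         # ps
R0 = (t + 1e-9) ** -1.5 * np.exp(-100.0 / (t + 1e-9))     # made-up zero-mu_a shape
R = scale_reflectance(t, R0, layer_fractions=[0.7, 0.3], mu_a=[0.01, 0.002])  # mm^-1
print(R[100] <= R0[100])   # absorption only attenuates
```

Because only one zero-absorption simulation is stored, a whole library of absorption combinations can be generated by re-applying this scaling.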

  1. Curve aligning approach for gait authentication based on a wearable accelerometer

    International Nuclear Information System (INIS)

    Sun, Hu; Yuao, Tao

    2012-01-01

    Gait authentication based on a wearable accelerometer is a novel biometric which can be used for identity identification, medical rehabilitation and early detection of neurological disorders. The method used for matching gait patterns bears heavily on authentication performance. In this paper, curve aligning is introduced as a new method for matching gait patterns and is compared with correlation and dynamic time warping (DTW). A support vector machine (SVM) is proposed to fuse the pattern-matching methods at the decision level. Accelerations collected from the ankles of 22 walking subjects were processed for authentication in our experiments. Fusing curve aligning on backward-forward accelerations with DTW on vertical accelerations improves authentication performance substantially and consistently. This fusion algorithm was tested repeatedly: the mean and standard deviation of its equal error rate (EER) are 0.794% and 0.696%, respectively, whereas the best of the non-fusion algorithms presented shows an EER of 3.03%. (paper)
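Of the pattern-matching methods compared here, DTW is the most self-contained to illustrate. A minimal implementation on synthetic gait-like signals (the signals are made up; a real system would operate on segmented accelerometer gait cycles):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D acceleration
    templates, using an absolute-difference local cost and the classic
    O(n*m) dynamic-programming recursion."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# a time-shifted copy of a gait cycle matches far better under DTW
# than an unrelated waveform does
t = np.linspace(0.0, 2 * np.pi, 100)
cycle = np.sin(t) + 0.3 * np.sin(3 * t)
shifted = np.roll(cycle, 5)
other = np.cos(2 * t)
print(dtw_distance(cycle, shifted) < dtw_distance(cycle, other))  # True
```

In an authentication pipeline, such distances (from DTW, correlation, and curve aligning) become the features fused by the SVM at the decision level.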

  2. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  3. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  4. Considerations for reference pump curves

    International Nuclear Information System (INIS)

    Stockton, N.B.

    1992-01-01

    This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a pump reference curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturer's pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code-required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point.
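A cubic spline least squares fit of the kind mentioned above can be produced with SciPy's LSQUnivariateSpline. The pump data, knot placement, and 10% acceptance band below are illustrative assumptions, not Code acceptance criteria:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# hypothetical pump test data: head (m) versus flow rate (m^3/h)
rng = np.random.default_rng(3)
flow = np.sort(rng.uniform(50.0, 400.0, 60))
head = 80.0 - 2.0e-4 * flow**2 + rng.normal(0.0, 0.4, flow.size)

# cubic-spline least-squares reference curve with a few interior knots
knots = [150.0, 250.0, 350.0]
ref_curve = LSQUnivariateSpline(flow, head, t=knots, k=3)

# acceptance check for a later IST point: compare measured head with the
# reference value interpolated at the same flow rate
q_test, h_test = 200.0, 71.5
h_ref = float(ref_curve(q_test))
print(round(h_ref, 2), abs(h_test - h_ref) / h_ref < 0.1)
```

Fitting over a limited operating range and choosing knots away from sparse data keeps the spline from oscillating, which is one of the error sources the paper warns about.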

  5. Interlaboratory comparison of the measurement of retention curves

    DEFF Research Database (Denmark)

    Hansen, M. H.; Houvenaghel, G.; Janz, M.

    1999-01-01

    The results of an interlaboratory comparison of the measurement of apparent density, solid density, open porosity and retention curves are presented. Baumberger sandstone and Sander sandstone were used as test materials. Repeatability standard deviation and reproducibility standard deviation...

  6. A Novel Reverse-Transcriptase Real-Time PCR Method for Quantification of Viable Vibrio Parahemolyticus in Raw Shrimp Based on a Rapid Construction of Standard Curve Method

    OpenAIRE

    Mengtong Jin; Haiquan Liu; Wenshuo Sun; Qin Li; Zhaohuan Zhang; Jibing Li; Yingjie Pan; Yong Zhao

    2015-01-01

    Vibrio parahemolyticus is an important pathogen that causes foodborne illness associated with seafood. Therefore, rapid and reliable methods to detect and quantify the total viable V. parahaemolyticus in seafood are needed. In this study, an RNA-based real-time reverse-transcriptase PCR (RT-qPCR) assay without an enrichment step has been developed for detection and quantification of the total viable V. parahaemolyticus in shrimp. RNA standards with the target segments were synthesized in vitro with T7 RNA p...
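    Quantification against in-vitro transcribed RNA standards rests on an ordinary qPCR standard curve: Ct regressed linearly against log10 copy number. A minimal sketch with made-up dilution data (the Ct values and the `quantify` helper are hypothetical, not from the record):

```python
import math

# Hypothetical standard curve data: RNA standard copies/reaction vs measured Ct.
standards = [(1e7, 13.1), (1e6, 16.4), (1e5, 19.8), (1e4, 23.2), (1e3, 26.5)]

# Least-squares line Ct = slope * log10(copies) + intercept.
xs = [math.log10(c) for c, _ in standards]
ys = [ct for _, ct in standards]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Amplification efficiency implied by the slope (100% corresponds to a slope of about -3.32).
efficiency = 10 ** (-1.0 / slope) - 1.0

def quantify(ct):
    """Interpolate an unknown sample's copy number from its Ct value."""
    return 10 ** ((ct - intercept) / slope)
```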

  7. Estimated damage from the Cascadia Subduction Zone tsunami: A model comparisons using fragility curves

    Science.gov (United States)

    Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.

    2012-12-01

    Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and an estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Cannon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7, were used to estimate the probability of damage to structures located within the inundation zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced concrete) and age, which was used as a performance measure when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods, and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method are discussed. On-going work includes coupling the results of building damage and vulnerability to an economic input-output model. This model assesses trade between business sectors located inside and outside the inundation zone, and is used to measure the impact on the regional economy. Results highlight

  8. Calibration curve for germanium spectrometers from solutions calibrated by liquid scintillation counting

    International Nuclear Information System (INIS)

    Grau, A.; Navarro, N.; Rodriguez, L.; Alvarez, A.; Salvador, S.; Diaz, C.

    1996-01-01

    The beta-gamma emitters 60Co, 137Cs, 131I, 210Pb and 129I are radionuclides for which calibration by the CIEMAT/NIST method is possible with uncertainties of less than 1%. From standardized solutions of these radionuclides, we prepared samples in 20 ml vials. We obtained the calibration curves, efficiency as a function of energy, for two germanium detectors. (Author) 5 refs

  9. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online

  10. Variability of the Wind Turbine Power Curve

    Directory of Open Access Journals (Sweden)

    Mahesh M. Bandi

    2016-09-01

    Wind turbine power curves are calibrated by turbine manufacturers under requirements stipulated by the International Electrotechnical Commission to provide a functional mapping between the mean wind speed v̄ and the mean turbine power output P̄. Wind plant operators employ these power curves to estimate or forecast wind power generation under given wind conditions. However, it is general knowledge that wide variability exists in these mean calibration values. We first analyse how the standard deviation in wind speed σ_v affects the mean P̄ and the standard deviation σ_P of wind power. We find that the magnitude of wind power fluctuations scales as the square of the mean wind speed. Using data from three planetary locations, we find that the wind speed standard deviation σ_v systematically varies with mean wind speed v̄, and in some instances follows a scaling of the form σ_v = C × v̄^α, C being a constant and α a fractional power. We show that, when applicable, this scaling form provides a minimal parameter description of the power curve in terms of v̄ alone. Wind data from different locations establishes that (in instances when this scaling exists) the exponent α varies with location, owing to the influence of local environmental conditions on wind speed variability. Since manufacturer-calibrated power curves cannot account for variability influenced by local conditions, this variability translates to forecast uncertainty in power generation. We close with a proposal for operators to perform post-installation recalibration of their turbine power curves to account for the influence of local environmental factors on wind speed variability in order to reduce the uncertainty of wind power forecasts. Understanding the relationship between wind's speed and its variability is likely to lead to lower costs for the integration of wind power into the electric grid.
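    Fitting the scaling law σ_v = C × v̄^α reduces to a straight-line fit in log-log space, since log σ_v = log C + α log v̄. A minimal sketch on noiseless synthetic data (C = 0.5 and α = 0.8 are arbitrary illustrative choices, not values from the record):

```python
import math

# Synthetic (mean wind speed, wind-speed standard deviation) pairs following
# sigma_v = C * v^alpha with assumed C = 0.5, alpha = 0.8.
C_true, alpha_true = 0.5, 0.8
data = [(v, C_true * v ** alpha_true) for v in (4.0, 6.0, 8.0, 10.0, 12.0)]

# Taking logs turns the power law into a line, so ordinary least squares
# recovers both the exponent alpha (slope) and the prefactor C (intercept).
xs = [math.log(v) for v, _ in data]
ys = [math.log(s) for _, s in data]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
alpha = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
C = math.exp(my - alpha * mx)
```

    On noiseless data the fit recovers the parameters exactly; with real site data the residual scatter indicates how well the scaling form actually holds at that location.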

  11. KBr-LiBr and KBr-LiBr doped with Ti mixed single crystal by Czochralski method and glow curve studies

    International Nuclear Information System (INIS)

    Faripour, H.; Faripour, N.

    2003-01-01

    Mixed single crystals of pure KBr-LiBr and of KBr-LiBr with Ti dopant were grown by the Czochralski method. Because of the difference between the lattice parameters of KBr and LiBr, the growth speed of the crystals was relatively low, and they were annealed under special temperature conditions, which produced some cleavages. The crystals were exposed to β radiation and the glow curve of each crystal was analysed. The analysis showed that the Ti impurity lowers the temperature at which the main glow peak appears.

  12. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used on coronary CTA workstations to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of the pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as the reference standard for the UM and PIOPED sets, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful for FP reduction.
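    The optimal-path step above is standard Dijkstra shortest-path search over a weighted graph. A self-contained sketch on a toy vessel graph (node names and edge costs are invented for illustration; the record's actual graph is built from segmented vessel voxels):

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path by cumulative edge cost (Dijkstra's algorithm).
    `graph` maps node -> list of (neighbor, cost) pairs."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the optimal path by walking the predecessor chain backwards.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Toy graph: nodes stand in for vessel-tree branch points; costs for centerline lengths.
graph = {
    "trunk": [("a", 2.0), ("b", 5.0)],
    "a": [("b", 1.0), ("candidate", 6.0)],
    "b": [("candidate", 2.0)],
}
path, cost = dijkstra(graph, "trunk", "candidate")
print(path, cost)  # ['trunk', 'a', 'b', 'candidate'] 5.0
```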

  13. Statistical reexamination of analytical method on the observed electron spin (or nuclear) resonance curves

    International Nuclear Information System (INIS)

    Kim, J.W.

    1980-01-01

    Observed magnetic resonance curves are statistically reexamined. Typical models of resonance lines are the Lorentzian and Gaussian distribution functions. For metallic, alloy, or intermetallic compound samples, the observed resonance lines are superpositions of an absorption line and a dispersion line. Methods for analysing such superposed resonance lines are demonstrated. (author)
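    The line-shape models mentioned above can be written down directly. A sketch, with the absorption/dispersion superposition for metallic samples expressed through an assumed mixing fraction; the `mix` parameterization is an illustrative simplification, not the record's fitting procedure:

```python
import math

def lorentzian(x, x0, gamma):
    """Area-normalized Lorentzian absorption line centered at x0 with half-width gamma."""
    return (gamma / math.pi) / ((x - x0) ** 2 + gamma ** 2)

def gaussian(x, x0, sigma):
    """Area-normalized Gaussian line centered at x0 with standard deviation sigma."""
    return math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def dispersion(x, x0, gamma):
    """Dispersion component paired with the Lorentzian absorption line."""
    return ((x - x0) / math.pi) / ((x - x0) ** 2 + gamma ** 2)

def metallic_line(x, x0, gamma, mix):
    """Superposed absorption + dispersion shape for metallic samples; `mix`
    (0..1) is an assumed dispersion fraction producing the asymmetric line."""
    return (1.0 - mix) * lorentzian(x, x0, gamma) + mix * dispersion(x, x0, gamma)
```

    The pure Lorentzian and Gaussian shapes are symmetric about x0; any nonzero dispersion fraction makes `metallic_line` asymmetric, which is what a fit to a metallic-sample spectrum must resolve.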

  14. Microcanonical Monte Carlo approach for computing melting curves by atomistic simulations

    OpenAIRE

    Davis, Sergio; Gutiérrez, Gonzalo

    2017-01-01

    We report microcanonical Monte Carlo simulations of melting and superheating of a generic Lennard-Jones system starting from the crystalline phase. The isochoric curve, the melting temperature $T_m$ and the critical superheating temperature $T_{LS}$ obtained are in close agreement (well within the microcanonical temperature fluctuations) with standard molecular dynamics one-phase and two-phase methods. These results validate the use of microcanonical Monte Carlo to compute melting points, a ...

  15. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

    Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem in conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  16. A bottom-up method to develop pollution abatement cost curves for coal-fired utility boilers

    Science.gov (United States)

    This paper illustrates a new method to create supply curves for pollution abatement using boiler-level data that explicitly accounts for technology costs and performance. The Coal Utility Environmental Cost (CUECost) model is used to estimate retrofit costs for five different NO...

  17. Efficient method for finding square roots for elliptic curves over OEF

    CSIR Research Space (South Africa)

    Abu-Mahfouz, Adnan M

    2009-01-01

    Elliptic curve cryptosystems, like other public-key encryption schemes, require computing square roots modulo a prime number. The arithmetic operations in elliptic curve schemes over Optimal Extension Fields (OEF) can be efficiently computed...
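    For the special case of a prime p ≡ 3 (mod 4), a modular square root has the closed form a^((p+1)/4) mod p; general primes need an algorithm such as Tonelli-Shanks. A sketch of that special case (the toy curve, the prime, and the variable names are illustrative; the record's OEF-specific method is not reproduced here):

```python
def sqrt_mod(a, p):
    """Square root of a modulo an odd prime p with p % 4 == 3, where the root
    (when it exists) is a^((p+1)/4) mod p. General primes need Tonelli-Shanks."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    if (r * r) % p != a % p:
        raise ValueError("a is not a quadratic residue mod p")
    return r

# Recover a y-coordinate on a toy curve y^2 = x^3 + 7 (mod p). The prime is
# illustrative only -- not an OEF prime or a standardized curve parameter.
p = 10007  # prime with p % 4 == 3
for x in range(1, 50):
    rhs = (x ** 3 + 7) % p
    try:
        y = sqrt_mod(rhs, p)  # first x whose right-hand side is a residue
        break
    except ValueError:
        continue
print(x, y)
```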

  18. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    Directory of Open Access Journals (Sweden)

    Wen-long Li

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advancements in 3D scanning techniques have enabled inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method to achieve two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to restrain the effect of measuring defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance are minimized to iteratively smoothen the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization.

  19. Fractal properties of critical invariant curves

    International Nuclear Information System (INIS)

    Hunt, B.R.; Yorke, J.A.; Khanin, K.M.; Sinai, Y.G.

    1996-01-01

    We examine the dimension of the invariant measure for some singular circle homeomorphisms for a variety of rotation numbers, through both the thermodynamic formalism and numerical computation. The maps we consider include those induced by the action of the standard map on an invariant curve at the critical parameter value beyond which the curve is destroyed. Our results indicate that the dimension is universal for a given type of singularity and rotation number, and that among all rotation numbers, the golden mean produces the largest dimension

  20. Evaluation of Effects of Warning Sign Position on Driving Behavior in Horizontal Sharp Curves

    Directory of Open Access Journals (Sweden)

    Xiao-hua Zhao

    2015-02-01

    At present, the guidelines on warning-sign position in the China National Standard lack detailed and standard regulations for placing warning signs on sharp curves, which may cause road-safety problems. Therefore, this paper discusses how to optimize the position of a warning sign on a sharp curve through a driving-simulator experiment. This study concluded that a warning sign placed at different positions prior to a sharp curve will have different influence ranges for drivers approaching and negotiating the curve. Meanwhile, different warning-sign positions had markedly different effects on the adjustment of the vehicle's lane position on sharp curves with the same radius, especially at the midpoint of a sharp curve. The evaluation of five positions (0 m, 50 m, 100 m, 200 m, and 400 m in advance) showed that only warning signs placed 100 m or 200 m prior to sharp curves achieved a positive influence on driving behavior. On this basis, the authors look forward to providing rationalized proposals for selecting the best position of a warning sign on a sharp curve for engineering implementation and the national standard.

  1. Standard Test Method for Sandwich Corrosion Test

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method defines the procedure for evaluating the corrosivity of aircraft maintenance chemicals, when present between faying surfaces (sandwich) of aluminum alloys commonly used for aircraft structures. This test method is intended to be used in the qualification and approval of compounds employed in aircraft maintenance operations. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Specific hazard statements appear in Section 9.

  2. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    International Nuclear Information System (INIS)

    Tamponi, Matteo; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio

    2016-01-01

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the
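    The net-optical-density quantities involved can be sketched as follows. The single-parameter model fitted at the end is an assumed stand-in, not the paper's actual functional form, and all pixel values and doses below are invented:

```python
import math

def net_od(pv_before, pv_after):
    """Net optical density from scanner pixel values (Beer-Lambert):
    netOD = log10(PV_unexposed / PV_exposed)."""
    return math.log10(pv_before / pv_after)

# Hypothetical film readings (pixel values) in two colour channels,
# before and after exposure, at several calibration doses (cGy).
readings = [
    # dose, red before/after, blue before/after
    (50.0, 40000, 34000, 30000, 28000),
    (150.0, 40000, 27000, 30000, 26200),
    (300.0, 40000, 21000, 30000, 24500),
]

# The response variable is the ratio of net optical densities in two channels.
# As a stand-in for the paper's one-parameter curve, fit a single scale k in
# the assumed model dose = k * ratio by closed-form least squares.
pairs = [(d, net_od(rb, ra) / net_od(bb, ba)) for d, rb, ra, bb, ba in readings]
k = sum(d * r for d, r in pairs) / sum(r * r for _, r in pairs)
```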

  3. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    Energy Technology Data Exchange (ETDEWEB)

    Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; Bona, Rossana; Poggiu, Angela; Marini, Piergiorgio [Medical Physics Unit, ASL Sassari, Via Enrico de Nicola, Sassari 07100 (Italy)

    2016-07-15

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve has been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the

  4. A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model

    Directory of Open Access Journals (Sweden)

    Yulei Song

    2016-10-01

    The prediction of evacuation demand curves is a crucial step in disaster evacuation planning, and it directly affects the performance of the evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and group them into four categories: individual characteristics, social influence, geographic location, and warning degree. Viewing the decision making as a social contagion, a method based on the Susceptible-Infective (SI) model is proposed to formulize the disaster evacuation demand curves, addressing both social influence and the other factors' effects. The disaster event of the “Tianjin Explosions” is used as a case study to illustrate the modeling results influenced by the four factors and to perform sensitivity analyses of the key parameters of the model. Some interesting phenomena are found and discussed, which can help authorities make specific evacuation plans. For example, due to the lower social influence in isolated communities, extra actions might be taken to accelerate the evacuation process in those communities.
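    A discrete-time SI sketch of the idea: the "infection" is the decision to evacuate, spread by social influence on top of a small spontaneous rate standing in for the non-social factors (warning degree, location, and so on). All parameter values are illustrative, not calibrated to the case study:

```python
# Minimal discrete-time SI ("susceptible-infective") sketch of evacuation
# decision spread. beta is the assumed social-influence contact rate; h is an
# assumed spontaneous decision rate covering the non-social factors.
def demand_curve(n=10000, beta=0.5, h=0.01, steps=48):
    s, i = n - 1, 1  # not yet decided / decided to evacuate
    curve = [i / n]
    for _ in range(steps):
        new = min(s, beta * s * i / n + h * s)  # social + spontaneous decisions
        s -= new
        i += new
        curve.append(i / n)
    return curve  # cumulative fraction that has decided to leave, per time step

curve = demand_curve()
```

    The resulting S-shaped cumulative curve is the evacuation demand curve; raising `beta` steepens it, which is why the isolated-community example above evacuates more slowly.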

  5. Toward a standard method for determination of waterborne radon

    International Nuclear Information System (INIS)

    Vitz, E.

    1990-01-01

    When the USEPA specifies the maximum contaminant level (MCL) for any contaminant, a standard method for analysis must be simultaneously stipulated. Promulgation of the proposed MCL and standard method for radon in drinking water is expected by early next year, but a six-month comment period and revision will precede final enactment. The standard method for radon in drinking water will probably specify that either the Lucas cell technique or liquid scintillation spectrometry be used. This paper reports results which support a standard method with the following features: samples should be collected by an explicitly stated technique to control degassing, in glass vials with or without scintillation cocktail, and possibly in duplicate; samples should be measured by liquid scintillation spectroscopy in a specified energy window, in a glass vial with particular types of cocktails; radium standards should be prepared with controlled quench levels and specified levels of carriers, but radium-free controls prepared by a specified method should be used in interlaboratory comparison studies

  6. Determination of metal impurities in MOX powder by direct current arc atomic emission spectroscopy. Application of standard addition method for direct analysis of powder sample

    International Nuclear Information System (INIS)

    Furuse, Takahiro; Taguchi, Shigeo; Kuno, Takehiko; Surugaya, Naoki

    2016-12-01

    Metal impurities in MOX powder obtained from uranium and plutonium recovered from the reprocessing of spent nuclear fuel have to be determined for its characterization. Direct current arc atomic emission spectroscopy (DCA-AES) is one of the useful methods for direct analysis of a powder sample without dissolving the analyte into aqueous solution. However, the selection of a standard material that can overcome concerns such as matrix matching is quite important for creating adequate calibration curves for DCA-AES. In this study, we apply the standard addition method using certified U3O8 containing known amounts of metal impurities to avoid the matrix problems. The proposed method provides good results for the determination of Fe, Cr and Ni contained in MOX samples at a significant quantity level. (author)

  7. SiFTO: An Empirical Method for Fitting SN Ia Light Curves

    Science.gov (United States)

    Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.

    2008-07-01

    We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.

  8. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. Error-bounded reconstruction of sampling points can be achieved by knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of the initial knot vector determines the ultimately necessary number of knots. This paper provides a novel initial-knot selection method to condense the knot vector required for error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features, which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with an intensively uniform knot vector to substitute for the description of the features of the sampling points. The feature integral of Gs is built as a monotonically increasing function in analytic form. Then, the initial knots are selected according to constant increments of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and a measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of knots required. (paper)
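    The constant-increment selection over a cumulative feature integral can be sketched on raw ordered points. Note the paper computes the feature from a pre-fitted B-spline rather than raw points; the weights and the turning-angle proxy for curvature here are simplifying assumptions:

```python
import math

def select_knots(points, n_knots, w_len=1.0, w_curv=1.0):
    """Pick initial knot sites from ordered 2D sample points by placing knots
    at constant increments of a cumulative "feature" combining chord length
    and bending (turning angle). Weights are illustrative assumptions."""
    # Cumulative feature: chord length plus turning angle at each point.
    feature = [0.0]
    for k in range(1, len(points)):
        x0, y0 = points[k - 1]
        x1, y1 = points[k]
        chord = math.hypot(x1 - x0, y1 - y0)
        turn = 0.0
        if k >= 2:
            xm, ym = points[k - 2]
            a1 = math.atan2(y0 - ym, x0 - xm)
            a2 = math.atan2(y1 - y0, x1 - x0)
            turn = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        feature.append(feature[-1] + w_len * chord + w_curv * turn)
    total = feature[-1]
    # Walk the cumulative feature, emitting one knot index per constant increment.
    knots, j = [], 0
    for i in range(n_knots):
        target = total * i / (n_knots - 1)
        while j < len(feature) - 1 and feature[j] < target:
            j += 1
        knots.append(j)
    return knots
```

    High-curvature regions accumulate feature faster, so the constant increments place knots more densely there, which is the point of the method.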

  9. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  10. Simulating Supernova Light Curves

    International Nuclear Information System (INIS)

    Even, Wesley Paul; Dolence, Joshua C.

    2016-01-01

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.

  11. Reactor Section standard analytical methods. Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Sowden, D.

    1954-07-01

    The Standard Analytical Methods manual was prepared for the purpose of consolidating and standardizing all current analytical methods and procedures used in the Reactor Section for routine chemical analyses. All procedures are established in accordance with accepted practice and the general analytical methods specified by the Engineering Department. These procedures are specifically adapted to the requirements of the water treatment process and related operations. The methods included in this manual are organized alphabetically within the following five sections, which correspond to the various phases of the analytical control program in which these analyses are to be used: water analyses, essential material analyses, cotton plug analyses, boiler water analyses, and miscellaneous control analyses.

  12. Hazard curve evaluation method development for a forest fire as an external hazard on nuclear power plants

    International Nuclear Information System (INIS)

    Okano, Yasushi; Yamano, Hidemasa

    2016-01-01

    A method to obtain a hazard curve for a forest fire was developed. The method has four steps: a logic tree formulation, a response surface evaluation, a Monte Carlo simulation, and an annual exceedance frequency calculation. The logic tree consists of domains of 'forest fire breakout and spread conditions', 'weather conditions', 'vegetation conditions', and 'forest fire simulation conditions'. Condition parameters of the logic boxes are treated as static if they are stable during a forest fire or insensitive to the forest fire intensity; non-static parameters are variables whose frequency/probability is given based on existing databases or evaluations. Response surfaces of a reaction intensity and a fireline intensity were prepared by interpolating outputs from a number of forest fire propagation simulations with the fire area simulator (FARSITE). The Monte Carlo simulation was performed where one sample represented a set of variable parameters of the logic boxes and a corresponding intensity was evaluated from the response surface. The hazard curve, i.e. the annual exceedance frequency of the intensity, was then calculated from the histogram of the Monte Carlo simulation outputs. The new method was applied to evaluate hazard curves of a reaction intensity and a fireline intensity for a typical location around a sodium-cooled fast reactor in Japan. (author)
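    The Monte Carlo and exceedance-frequency steps can be sketched as follows. The variable distributions, the response surface, and the annual fire frequency below are hypothetical stand-ins for illustration, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the non-static logic-tree variables (hypothetical
# distributions): wind speed [m/s] and fuel moisture [-].
n = 100_000
wind = rng.gamma(shape=2.0, scale=3.0, size=n)
moisture = rng.uniform(0.05, 0.35, size=n)

def response_surface(wind, moisture):
    # Placeholder for a surface fitted to fire-propagation simulator runs:
    # fireline intensity [kW/m] grows with wind, falls with fuel moisture.
    return 500.0 * wind * np.exp(-8.0 * moisture)

intensity = response_surface(wind, moisture)

# Assumed annual frequency of a forest fire reaching the site.
fire_frequency = 1e-2  # events / year

# Hazard curve: annual frequency of exceeding each intensity level.
levels = np.linspace(0.0, intensity.max(), 200)
exceed_prob = (intensity[None, :] >= levels[:, None]).mean(axis=1)
hazard = fire_frequency * exceed_prob
```

    Each Monte Carlo sample plays the role of one logic-tree realization; the hazard curve is the complementary cumulative histogram of the sampled intensities scaled by the event frequency.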

  13. Characterization of KS-material by means of J-R-curves especially using the partial unloading technique

    International Nuclear Information System (INIS)

    Voss, B.; Blauel, J.G.; Schmitt, W.

    1983-01-01

    Essential components of nuclear reactor systems are fabricated from materials of high toughness to exclude brittle failure. With increasing load, a crack tip will blunt, a plastic zone will form, and voids may nucleate and coalesce, thus initiating stable crack extension when the crack driving parameter, e.g. J, exceeds the initiation value Jsub(i). Further stable crack growth will occur with further increasing J prior to complete failure of the structure. The specific material resistance against crack extension is characterized by J resistance curves Jsub(R)=J(Δa). ASTM provides a standard to determine the initiation toughness Jsub(Ic) from a Jsub(R)-curve [1] and a tentative standard for determining the Jsub(R)-curve by a single specimen test [2]. To generate a Jsub(R)-curve, values for the crack driving parameter J and the corresponding stable crack growth Δa have to be measured. Besides the multiple specimen technique [1], the potential drop and especially the partial unloading compliance method [2] are used to measure stable crack growth. Some special problems and some results for pressure vessel steels are discussed in this paper. (orig./RW)

  14. Standard test methods for characterizing duplex grain sizes

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 These test methods provide simple guidelines for deciding whether a duplex grain size exists. The test methods separate duplex grain sizes into one of two distinct classes, then into specific types within those classes, and provide systems for grain size characterization of each type. 1.2 Units—The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns associated with its use. It is the responsibility of the user of this standard to consult appropriate safety and health practices and determine the applicability of regulatory limitations prior to its use.

  15. Carbon Lorenz Curves

    Energy Technology Data Exchange (ETDEWEB)

    Groot, L. [Utrecht University, Utrecht School of Economics, Janskerkhof 12, 3512 BL Utrecht (Netherlands)

    2008-11-15

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across countries. These tools allow policy-makers and the general public to grasp at a single glance the impact of conventional distribution rules such as equal caps or grandfathering, or more sophisticated ones, on the distribution of greenhouse gas emissions. Second, using the Samuelson rule for the optimal provision of a public good, the Pareto-optimal distribution of carbon emissions is compared with the distribution that follows if countries follow Nash-Cournot abatement strategies. It is shown that the Pareto-optimal distribution under the Samuelson rule can be approximated by the equal cap division, represented by the diagonal in the Lorenz curve diagram.
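    The Lorenz curve and Gini index for emissions can be computed directly from country-level data. The four-country figures below are invented for illustration, not taken from the paper:

```python
import numpy as np

def lorenz_gini(emissions, population):
    """Lorenz curve and Gini index of per-capita emissions across countries."""
    emissions = np.asarray(emissions, dtype=float)
    population = np.asarray(population, dtype=float)
    order = np.argsort(emissions / population)      # lowest per-capita emitters first
    pop = population[order] / population.sum()
    em = emissions[order] / emissions.sum()
    x = np.concatenate([[0.0], np.cumsum(pop)])     # cumulative population share
    y = np.concatenate([[0.0], np.cumsum(em)])      # cumulative emission share
    # Gini = 1 - 2 * (area under the Lorenz curve), trapezoid rule.
    gini = 1.0 - np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]))
    return x, y, gini

# Hypothetical example (emissions in Mt CO2, population in millions).
x, y, gini = lorenz_gini([10, 50, 200, 740], [50, 100, 200, 150])
```

    Under an equal-caps allocation, per-capita emissions are identical across countries, the Lorenz curve coincides with the diagonal, and the Gini index is zero.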

  16. Complexity of Curved Glass Structures

    Science.gov (United States)

    Kosić, T.; Svetel, I.; Cekić, Z.

    2017-11-01

    Despite the increasing number of studies on architectural structures of curvilinear form, and the technological and practical improvements in glass production observed over recent years, there is still a lack of comprehensive codes and standards, recommendations, and experience data linked to real-life applications of curved glass structures regarding design, manufacture, use, performance, and economy. However, more and more complex buildings and structures with large areas of geometrically complex glass envelope are built every year. The aim of the presented research is to collect data on the existing design philosophy from curved glass structure cases. The investigation includes a survey of how architects and engineers deal with different design aspects of curved glass structures, with a special focus on the design and construction process, glass types, and structural and fixing systems. The current paper gives a brief overview of the survey findings.

  17. Using quasars as standard clocks for measuring cosmological redshift.

    Science.gov (United States)

    Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda

    2012-06-08

    We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasar's light curves with the slopes of the straight lines fit through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.

  18. Using LMS Method in Smoothing Reference Centile Curves for Lipid Profile of Iranian Children and Adolescents: A CASPIAN Study

    Directory of Open Access Journals (Sweden)

    M Hoseini

    2012-05-01

    Full Text Available

    Background and Objectives: LMS is a general method for fitting smooth reference centile curves in medical sciences. Such curves describe the distribution of a measurement as it changes according to some covariate such as age or time. The method describes the changing distribution by three parameters: the mean, the coefficient of variation, and the Box-Cox power (skewness). Applying maximum penalized likelihood and spline functions, the three curves are estimated and fitted, and optimum smoothness is expressed by the three curves. This study was conducted to provide the percentiles of the lipid profile of Iranian children and adolescents by LMS.

     

    Methods: Smoothed reference centile curves of four groups of lipids (triglycerides, total, LDL- and HDL-cholesterol) were developed from the data of 4824 Iranian school students, aged 6-18 years, living in six cities (Tabriz, Rasht, Gorgan, Mashad, Yazd and Tehran-Firouzkouh) in Iran. Demographic and laboratory data were taken from the national study of the surveillance and prevention of non-communicable diseases from childhood (CASPIAN Study). After data management, data of 4824 students were included in the statistical analysis, which was conducted with the modified LMS method proposed by Cole. The curves were developed with four to ten degrees of freedom, and tools such as deviance, Q tests, and detrended Q-Q plots were used for monitoring goodness of fit of the models.
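    Once the age-specific L, M and S curves have been fitted, centiles and z-scores follow from the standard LMS transformation (Cole's formulation). The parameter values in the example are purely illustrative:

```python
import math

def lms_z(y, L, M, S):
    """Z-score of a measurement y given the age-specific LMS parameters
    (Box-Cox power L, median M, coefficient of variation S)."""
    if abs(L) < 1e-12:                     # limiting log-normal case
        return math.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

def lms_value(z, L, M, S):
    """Measurement value at a given z-score (inverse of lms_z); used to
    draw a centile curve by evaluating it along the fitted L, M, S curves."""
    if abs(L) < 1e-12:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Illustrative parameters: L = 0.5, M = 1.0 mmol/L, S = 0.1.
z = lms_z(1.2, 0.5, 1.0, 0.1)
```

    Evaluating `lms_value` at z = ±1.88 along the fitted parameter curves yields, for example, the 3rd and 97th centile curves.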

     

    Results: All tools confirmed the model, and the LMS method proved appropriate for smoothing reference centiles. The method revealed the distributional features of the variables, serving as an objective tool to determine their relative importance.

     

    Conclusion: This study showed that the triglycerides level is higher and

  19. MODELING THE TRANSITION CURVE ON A LIMITED TERRAIN

    Directory of Open Access Journals (Sweden)

    V. D. Borisenko

    2017-04-01

    Full Text Available Purpose. Further development of the method of geometric modelling of transition curves, which are placed between rectilinear and circular sections of railway tracks in localities whose relief imposes certain restrictions on the size of the transition curves. Methodology. The equation of the transition curve is taken in parametric form, with the arc length of the modelled curve used as the parameter. The initial data for the modelling are the coordinates of the curve's initial point and the angle of inclination of the tangent at it, the radius of the circular section, and the parameter that serves as a constraint when placing the section of railway track. The transition curve is modelled under the condition that its curvature as a function of arc length - the natural parameter - is described by a cubic dependence. This dependence contains four unknown coefficients; the arc length is also unknown. The coefficients of the cubic dependence, the arc length of the transition curve, the coordinates of its end point, and the angle of inclination of the tangent there are determined during the simulation of the transition curve. Applying boundary conditions and methods of differential geometry to the distribution of the tangent angle from the initial to the end point of the transition curve, together with the calculation of the coordinates of the end point, reduces the problem of modelling the transition curve to determining its arc length. The length of the transition curve itself is found by minimizing the deviation of the radius of the circular section from its current value obtained while searching for the arc length. Findings. As a result of the computational experiment, the possibility of modelling a transition curve between a
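    The core construction, a planar curve whose curvature is a cubic function of arc length, can be integrated numerically. The coefficients below are illustrative; the boundary-condition solving described in the abstract is not reproduced here:

```python
import numpy as np

def transition_curve(x0, y0, theta0, coeffs, length, n=1000):
    """Integrate a planar curve with curvature cubic in arc length:
    kappa(s) = c0 + c1*s + c2*s**2 + c3*s**3.
    Returns coordinates and tangent angles along the curve."""
    c0, c1, c2, c3 = coeffs
    s = np.linspace(0.0, length, n)
    kappa = c0 + c1 * s + c2 * s**2 + c3 * s**3
    ds = s[1] - s[0]
    # Tangent angle is the integral of curvature; coordinates follow from
    # integrating (cos theta, sin theta). Trapezoid rule throughout.
    theta = theta0 + np.concatenate(
        [[0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * ds)])
    x = x0 + np.concatenate(
        [[0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * ds)])
    y = y0 + np.concatenate(
        [[0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * ds)])
    return x, y, theta

# Clothoid-like example: curvature grows linearly from 0 to 1/500 over 100 m.
x, y, theta = transition_curve(0.0, 0.0, 0.0, (0.0, 2e-5, 0.0, 0.0), 100.0)
```

    With the two higher-order coefficients set to zero this reduces to the classical clothoid; the cubic terms give the extra freedom the method uses to satisfy the terrain constraints.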

  20. Not proper ROC curves as new tool for the analysis of differentially expressed genes in microarray experiments

    Directory of Open Access Journals (Sweden)

    Pistoia Vito

    2008-10-01

    Full Text Available Abstract Background Most microarray experiments are carried out with the purpose of identifying genes whose expression varies in relation with specific conditions or in response to environmental stimuli. In such studies, genes showing similar mean expression values between two or more groups are considered as not differentially expressed, even if hidden subclasses with different expression values may exist. In this paper we propose a new method for identifying differentially expressed genes, based on the area between the ROC curve and the rising diagonal (ABCR). ABCR represents a more general approach than the standard area under the ROC curve (AUC), because it can identify both proper (i.e., concave) and not proper ROC curves (NPRC). In particular, NPRC may correspond to those genes that tend to escape standard selection methods. Results We assessed the performance of our method using data from a publicly available database of 4026 genes, including 14 normal B cell samples (NBC) and 20 heterogeneous lymphomas (namely, 9 follicular lymphomas and 11 chronic lymphocytic leukemias). Moreover, NBC also included two subclasses, i.e., 6 heavily stimulated and 8 slightly or not stimulated samples. We identified 1607 differentially expressed genes with an estimated False Discovery Rate of 15%. Among them, 16 corresponded to NPRC and all escaped standard selection procedures based on AUC and t statistics. Moreover, a simple inspection of the shape of such plots allowed identification of the two subclasses in either class in 13 cases (81%). Conclusion NPRC represent a new useful tool for the analysis of microarray data.
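    The ABCR statistic can be sketched from an empirical ROC curve. The toy data below is invented: one class is split into two hidden subgroups straddling the other, producing a not proper curve whose AUC is uninformative (0.5) while ABCR is not:

```python
import numpy as np

def roc_curve_points(labels, scores):
    """Empirical ROC curve: (FPR, TPR) at every score threshold."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(-scores)                       # descending scores
    tp = np.concatenate([[0], np.cumsum(labels[order] == 1)])
    fp = np.concatenate([[0], np.cumsum(labels[order] == 0)])
    return fp / fp[-1], tp / tp[-1]

def auc_and_abcr(labels, scores):
    fpr, tpr = roc_curve_points(labels, scores)
    dx = np.diff(fpr)
    auc = np.sum(dx * 0.5 * (tpr[1:] + tpr[:-1]))     # trapezoid rule
    # Unsigned area between the curve and the rising diagonal, so that
    # not proper (partly below-diagonal) curves still score high.
    mid_dev = 0.5 * (tpr[1:] + tpr[:-1]) - 0.5 * (fpr[1:] + fpr[:-1])
    abcr = np.sum(dx * np.abs(mid_dev))
    return auc, abcr

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.9, 1.0, 0.4, 0.5, 0.6, 0.7])
auc, abcr = auc_and_abcr(labels, scores)
```

    Here a t test or AUC-based selection would discard the gene, while the nonzero ABCR flags the bimodal class-0 subgroup structure.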

  1. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard equipment a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. However, in more sophisticated applications, proper weights for data are often essential. For this purpose a program package was created which is very similar to the standard curve-fit program but which includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations, or optionally the uncertainties can be normalized with chi-square. The program is very versatile and allows one to perform quite difficult data analysis in a convenient way with the pocket calculator HP 67/97
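    The same weighted fit is easy to reproduce off the calculator. A minimal sketch for the straight-line case, with the standard error-propagation formulas for the uncertainties of the fitted parameters:

```python
import numpy as np

def weighted_linear_fit(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x with weights 1/sigma**2.
    Returns (a, b) and their standard uncertainties."""
    x, y, sigma = (np.asarray(v, dtype=float) for v in (x, y, sigma))
    w = 1.0 / sigma**2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx**2
    a = (Sxx * Sy - Sx * Sxy) / delta        # intercept
    b = (S * Sxy - Sx * Sy) / delta          # slope
    sig_a = np.sqrt(Sxx / delta)             # uncertainty of intercept
    sig_b = np.sqrt(S / delta)               # uncertainty of slope
    return a, b, sig_a, sig_b

# Example: data on an exact line y = 2 + 3x with unequal uncertainties.
a, b, sig_a, sig_b = weighted_linear_fit(
    [0, 1, 2, 3], [2, 5, 8, 11], [0.1, 0.2, 0.1, 0.3])
```

    The logarithmic, exponential, and power fits of the original program reduce to this linear case after transforming the data (and their weights) accordingly.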

  2. 29 CFR 1630.7 - Standards, criteria, or methods of administration.

    Science.gov (United States)

    2010-07-01

    ... Standards, criteria, or methods of administration. It is unlawful for a covered entity to use standards, criteria, or methods of administration, which are not job-related and consistent with business necessity... 29 Labor 4 2010-07-01 2010-07-01 false Standards, criteria, or methods of administration. 1630.7...

  3. Optimization of curved drift tubes for ultraviolet-ion mobility spectrometry

    Science.gov (United States)

    Ni, Kai; Ou, Guangli; Zhang, Xiaoguo; Yu, Zhou; Yu, Quan; Qian, Xiang; Wang, Xiaohao

    2015-08-01

    Ion mobility spectrometry (IMS) is a key trace-detection technique for toxic pollutants and explosives in the atmosphere. An ultraviolet photoionization source is widely used as the ionization source for IMS due to its advantages of high selectivity and non-radioactivity. However, in UV-IMS the UV rays launched into the drift tube cause secondary ionization and a photoelectric effect at the Faraday disk. Air is therefore often used as the working gas to reduce the effective distance of the UV rays, but this limits the application areas of UV-IMS. In this paper, we propose a new curved drift tube structure that prevents UV rays from entering the drift region. Furthermore, a curved drift tube allows a longer drift path and may therefore improve the resolution of UV-IMS, according to previous research. We studied the homogeneity of the electric field in the curved drift tube, which determines the performance of the UV-IMS. Numerical simulation of the electric field in the curved drift tube was conducted with SIMION. In addition, the modeling method and a homogeneity standard for the electric field are presented. The influence of key parameters, including the radius of gyration, the gap between electrodes, and the inner diameter of the curved drift tube, on the homogeneity of the electric field was studied, and some useful laws were summarized. Finally, an optimized curved drift tube was designed to achieve a homogeneous drift field: in more than 98.75% of the region inside the curved drift tube, the fluctuation of the electric field strength along the radial direction is less than 0.2% of that along the axial direction.

  4. Transition curves for highway geometric design

    CERN Document Server

    Kobryń, Andrzej

    2017-01-01

    This book provides concise descriptions of the various solutions of transition curves, which can be used in the geometric design of roads and highways. It presents mathematical methods and curvature functions for defining transition curves.

  5. Determination of endogenous inflammation-related lipid mediators in ischemic stroke rats using background subtracting calibration curves by liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Yang, Yang; Zhong, Qisheng; Mo, Canlong; Zhang, Hao; Zhou, Ting; Tan, Wen

    2017-11-01

    Accurate and reliable quantification of endogenous lipid mediators in complex biological samples is a daunting challenge. In this study, a robust and direct endogenous quantitative method using background subtracting calibration curves by liquid chromatography-tandem mass spectrometry was first developed for the determination of endogenous lipid mediators in ischemic stroke rats. Absolute quantification without a surrogate matrix could be achieved by using background subtracting calibration curves, which were corrected and verified from standard curves constructed on the original matrix. The recoveries of this method were in the range of 50.3-98.3%, the precision with the relative standard deviation was less than 13.8%, and the accuracy with the relative error was within ± 15.0%. In addition, background subtracting calibration curves were further verified by validation factors ranging from 90.3 to 110.9%. This validated method has been successfully applied to the analysis of seven endogenous inflammation-related lipid mediators in the brain tissues of ischemic stroke rats. The results indicated that prostaglandins as inflammatory factors and some lipid mediators with neuroprotective effects increased apparently (p < 0.05). The method is thus suited to the determination of endogenous compounds in complex biological samples. Graphical abstract The analysis procedure of determining endogenous inflammation-related lipid mediators using BSCC by LC-MS/MS.
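    The idea behind a background-subtracting calibration curve can be illustrated with a standard-addition-style sketch: standards spiked into the original matrix ride on top of the endogenous signal, which shows up as the intercept of the fitted line. The numbers below are invented:

```python
import numpy as np

# Hypothetical responses of standards spiked into the original matrix:
# response = endogenous background + slope * spiked_concentration.
spiked = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # ng/mL added
response = np.array([12.1, 22.0, 31.8, 52.2, 91.9])    # peak-area ratio

slope, background = np.polyfit(spiked, response, 1)

# Subtracting the fitted background gives a calibration curve that passes
# through the origin; the intercept itself estimates the endogenous level.
endogenous_conc = background / slope                   # ng/mL

def concentration(resp):
    """Added-analyte concentration for a measured response."""
    return (resp - background) / slope
```

    The actual method in the paper additionally corrects and verifies these curves against standard curves on the original matrix; this sketch only shows the background-subtraction step.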

  6. Measuring Model Rocket Engine Thrust Curves

    Science.gov (United States)

    Penn, Kim; Slaton, William V.

    2010-01-01

    This paper describes a method and setup to quickly and easily measure a model rocket engine's thrust curve using a computer data logger and force probe. Horst describes using Vernier's LabPro and force probe to measure the rocket engine's thrust curve; however, the method of attaching the rocket to the force probe is not discussed. We show how a…
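    Once the force-time samples are logged, the motor's total impulse (the area under the thrust curve) and average thrust follow directly. The sample data below is made up for illustration, not measured values from the paper:

```python
import numpy as np

# Hypothetical force-probe samples from a small model rocket motor.
t = np.array([0.00, 0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 1.00])   # s
thrust = np.array([0.0, 8.0, 12.0, 10.0, 6.0, 5.0, 4.0, 0.0])    # N

# Total impulse: trapezoid-rule area under the thrust curve (N*s).
total_impulse = float(np.sum(0.5 * (thrust[1:] + thrust[:-1]) * np.diff(t)))
peak_thrust = float(thrust.max())
burn_time = float(t[-1] - t[0])
average_thrust = total_impulse / burn_time
```

    The total impulse places the motor in its letter class, and the ratio of peak to average thrust gives a quick check of how sharply the engine burns.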

  7. Determination of efficiency curves for HPGE detector in different counting geometries

    International Nuclear Information System (INIS)

    Rodrigues, Josianne L.; Kastner, Geraldo F.; Ferreira, Andrea V.

    2011-01-01

    This paper presents the first experimental results related to the determination of efficiency curves for an HPGe detector in different counting geometries. The detector is a Canberra GX2520 belonging to CDTN/CNEN. Efficiency curves for point sources were determined using a certified set of gamma sources; these curves were determined for three counting geometries. Following that, efficiency curves for non-point samples were determined using standard solutions of radionuclides in 500 ml and 1000 ml wash-bottle and Marinelli geometries.
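    A common way to turn discrete calibration points into a usable efficiency curve is a polynomial fit in log-log space. The energies and efficiencies below are invented illustrative values, not the paper's measurements:

```python
import numpy as np

# Hypothetical full-energy-peak efficiencies from a certified multi-gamma
# source for one counting geometry (energy in keV, absolute efficiency).
energy = np.array([122.0, 344.0, 662.0, 1173.0, 1332.0])
eff = np.array([0.050, 0.022, 0.012, 0.0075, 0.0067])

# Fit ln(eff) as a quadratic in ln(E), a common HPGe parameterization,
# then interpolate the curve at any energy of interest.
coeffs = np.polyfit(np.log(energy), np.log(eff), 2)

def efficiency(E):
    """Interpolated full-energy-peak efficiency at energy E [keV]."""
    return float(np.exp(np.polyval(coeffs, np.log(E))))
```

    A separate fit is made for each geometry (point source, wash bottle, Marinelli), since the solid angle and self-absorption differ between them.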

  8. DETECTION OF MICROVASCULAR COMPLICATIONS OF TYPE 2 DIABETES BY EZSCAN AND ITS COMPARISON WITH STANDARD SCREENING METHODS

    Directory of Open Access Journals (Sweden)

    Sarita Bajaj

    2016-08-01

    Full Text Available BACKGROUND EZSCAN is a new, noninvasive technique to detect sudomotor dysfunction and thus neuropathy in diabetes patients at an early stage. It further predicts the chances of development of other microvascular complications. In this study, we evaluated EZSCAN for detection of microvascular complications in Type 2 diabetes patients and compared the accuracy of EZSCAN with standard screening methods. MATERIALS AND METHODS 104 known diabetes patients, 56 males and 48 females, were studied. All cases underwent the EZSCAN test, Nerve Conduction Study (NCS), Vibration Perception Threshold (VPT) test, monofilament test, fundus examination and urine micral test. The results of EZSCAN were compared with standard screening methods. The data were analysed by applying appropriate statistical tests within the different groups. RESULTS Mean age of the subjects was 53.5 ± 11.4 years. For detection of diabetic neuropathy, the sensitivity and specificity of EZSCAN were found to be 77.0% and 95.3%, respectively. The odds ratio (OR) was 68.82 with p < 0.0001, and the AUC of the ROC curve was 0.930. The sensitivity and specificity of EZSCAN for detection of nephropathy were 67.1% and 94.1%, respectively (OR = 32.69, p < 0.0001; AUC = 0.926). The sensitivity of EZSCAN for detection of retinopathy was 90%, while the specificity was 70.3% (OR = 21.27, p < 0.0001; AUC = 0.920). CONCLUSION The results of the EZSCAN test compared well with the standard screening methods for the detection of microvascular complications of diabetes, and EZSCAN can be used as a simple, noninvasive and quick method to detect microvascular complications of diabetes.

  9. Estimating Aquifer Transmissivity Using the Recession-Curve-Displacement Method in Tanzania’s Kilombero Valley

    Directory of Open Access Journals (Sweden)

    William Senkondo

    2017-12-01

    Full Text Available Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min and for one of the two sub-basins tested (0.16 m2/min. On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.

  10. Determining the spill flow discharge of combined sewer overflows using rating curves based on computational fluid dynamics instead of the standard weir equation.

    Science.gov (United States)

    Fach, S; Sitzenfrei, R; Rauch, W

    2009-01-01

    It is state of the art to evaluate and optimise sewer systems with urban drainage models. Since spill flow data is essential in the calibration process of conceptual models, it is important to enhance the quality of such data. A widespread approach is to calculate the spill flow volume by using standard weir equations together with measured water levels. However, these equations are only applicable to combined sewer overflow (CSO) structures whose weir constructions correspond with the standard weir layout. The objective of this work is to outline an alternative approach to obtaining spill flow discharge data based on measurements with a sonic depth finder. The idea is to determine the relation between water level and rate of spill flow by running a detailed 3D computational fluid dynamics (CFD) model. Two real-world CSO structures were chosen because of their complex structure, especially with respect to the weir construction. In a first step, the simulation results were analysed to identify flow conditions for discrete steady states. It will be shown that the flow conditions in the CSO structure change once the spill flow pipe acts as a controlled outflow, so the spill flow discharge can no longer be described with a standard weir equation. In a second step, the CFD results are used to derive rating curves which can easily be applied in everyday practice. The rating curves are therefore developed on the basis of the standard weir equation and the equation for orifice-type outlets. Because the intersection of both equations is not known, the coefficients of discharge are regressed from the CFD simulation results. Furthermore, the regression of the CFD simulation results is compared with that of the standard weir equation using historic water levels and hydrographs generated with a hydrodynamic model. The uncertainties resulting from the widespread use of the standard weir equation are demonstrated.
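    For reference, the two equations such rating curves are built from can be sketched as follows. The discharge coefficients, geometry, and submergence threshold are assumed placeholder values, not those regressed from the CFD runs:

```python
import math

G = 9.81            # m/s^2
CD_WEIR = 0.62      # assumed discharge coefficient, free weir overfall
CD_ORIFICE = 0.60   # assumed discharge coefficient, orifice-type outlet

def weir_flow(h, width):
    """Standard sharp-crested weir equation, head h above the crest [m]."""
    return (2.0 / 3.0) * CD_WEIR * width * math.sqrt(2.0 * G) * h ** 1.5

def orifice_flow(h, area):
    """Orifice-type outlet equation, head h above the opening centre [m]."""
    return CD_ORIFICE * area * math.sqrt(2.0 * G * h)

def rating_curve(h, width=2.0, area=0.5, h_submerged=0.4):
    """Toy rating curve: weir-controlled at low heads, limited by the
    orifice-type spill pipe once the outlet is submerged (threshold assumed)."""
    if h <= 0.0:
        return 0.0
    if h < h_submerged:
        return weir_flow(h, width)
    return min(weir_flow(h, width), orifice_flow(h, area))
```

    In the paper the transition between the two regimes and the coefficients are taken from the CFD simulations; here the threshold is simply assumed for illustration.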

  11. Customized versus population-based growth curves: prediction of low body fat percent at term corrected gestational age following preterm birth.

    Science.gov (United States)

    Law, Tameeka L; Katikaneni, Lakshmi D; Taylor, Sarah N; Korte, Jeffrey E; Ebeling, Myla D; Wagner, Carol L; Newman, Roger B

    2012-07-01

    Compare customized versus population-based growth curves for identification of small-for-gestational-age (SGA) and body fat percent (BF%) among preterm infants. Prospective cohort study of 204 preterm infants classified as SGA or appropriate-for-gestational-age (AGA) by population-based and customized growth curves. BF% was determined by air-displacement plethysmography. Differences between groups were compared using bivariable and multivariable linear and logistic regression analyses. Customized curves reclassified 30% of the preterm infants as SGA. SGA infants identified by customized method only had significantly lower BF% (13.8 ± 6.0) than the AGA (16.2 ± 6.3, p = 0.02) infants and similar to the SGA infants classified by both methods (14.6 ± 6.7, p = 0.51). Customized growth curves were a significant predictor of BF% (p = 0.02), whereas population-based growth curves were not a significant independent predictor of BF% (p = 0.50) at term corrected gestational age. Customized growth potential improves the differentiation of SGA infants and low BF% compared with a standard population-based growth curve among a cohort of preterm infants.

  12. Assessment of modification level of hypoeutectic Al -Si alloys by pattern recognition of cooling curves

    Directory of Open Access Journals (Sweden)

    CHEN Xiang

    2005-11-01

    Full Text Available Most evaluations of modification level in Al-Si casting production are currently done as qualitative analysis according to a specific scale based on an American Foundry Society (AFS) standard wall chart. This method depends heavily on human experience when the microstructure is compared with the standard chart, and the structures depicted in the AFS chart do not always resemble those seen in actual Al-Si castings. Therefore, this qualitative-analysis procedure is subjective and can introduce human-caused errors into comparative metallographic analyses. A quantitative parameter of the modification level was introduced by establishing the relationship between the mean area-weighted shape factor of the eutectic silicon phase and the modification level using image analysis technology. In order to evaluate the modification level, a new method called "intelligent evaluation of melt quality by pattern recognition of thermal-analysis cooling curves" has also been introduced. The results show that the silicon modification level can be precisely assessed by comparing the cooling curve of the melt to be evaluated with the one most similar to it in a database.

  13. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)

    2017-03-20

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
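    A generic smoothly broken power law of this family can be fitted with standard nonlinear least squares. The exact functional form used by the authors may differ in detail, and the parameters and "light curve" below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, A, tb, a1, a2, s):
    """Smoothly broken power law: flux rises as ~(t/tb)**a1 at early times
    and turns over around the break time tb; s sets the break smoothness."""
    tp = t / tb
    return A * tp**a1 * (1.0 + tp**(s * a2)) ** (-2.0 / s)

# Synthetic, noise-free light curve generated from known parameters.
true_params = (1.0, 18.0, 2.0, 2.5, 1.5)
t = np.linspace(1.0, 60.0, 120)            # days since first light
flux = broken_power_law(t, *true_params)

# Recover the parameters by nonlinear least squares; bounds keep the
# trial parameters positive so the power laws stay well defined.
popt, _ = curve_fit(broken_power_law, t, flux,
                    p0=(0.8, 15.0, 1.8, 2.0, 1.2),
                    bounds=(1e-3, [10.0, 100.0, 10.0, 10.0, 10.0]))
```

    On real photometry one would fit each band separately and weight the residuals by the photometric uncertainties; the early-time power-law index then constrains the first-light time, as described in the abstract.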

  14. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

    We study Lagrangian points on smooth holomorphic curves in TP^1 equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP^1 with the space L(E^3) of oriented affine lines in Euclidean 3-space E^3, these Lagrangian curves give rise to ruled surfaces in E^3, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E^3, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E^3 where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  15. Standard test method for galling resistance of material couples

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers a laboratory test that ranks the galling resistance of material couples using a quantitative measure. Bare metals, alloys, nonmetallic materials, coatings, and surface modified materials may be evaluated by this test method. 1.2 This test method is not designed for evaluating the galling resistance of material couples sliding under lubricated conditions, because galling usually will not occur under lubricated sliding conditions using this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  16. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  17. Lung function in North American Indian children: reference standards for spirometry, maximal expiratory flow volume curves, and peak expiratory flow.

    Science.gov (United States)

    Wall, M A; Olson, D; Bonn, B A; Creelman, T; Buist, A S

    1982-02-01

    Reference standards of lung function were determined in 176 healthy North American Indian children (94 girls, 82 boys) 7 to 18 yr of age. Spirometry, maximal expiratory flow volume curves, and peak expiratory flow rate were measured using techniques and equipment recommended by the American Thoracic Society. Standing height was found to be an accurate predictor of lung function, and prediction equations for each lung function variable are presented using standing height as the independent variable. Lung volumes and expiratory flow rates in North American Indian children were similar to those previously reported for white and Mexican-American children but were greater than those in black children. In both boys and girls, lung function increased in a curvilinear fashion. Volume-adjusted maximal expiratory flow rates after expiring 50 or 75% of FVC tended to decrease in both sexes as age and height increased. Our maximal expiratory flow volume curve data suggest that as North American Indian children grow, lung volume increases at a slightly faster rate than airway size does.
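    Pediatric prediction equations of this kind are often power laws in standing height. The sketch below fits ln(FVC) against ln(height) by ordinary least squares; the data points are invented, so the fitted coefficients are purely illustrative, not the paper's reference standards.

```python
import math

def fit_power_law(heights_cm, fvc_l):
    """Least-squares fit of ln(FVC) = ln(a) + b*ln(height), i.e. FVC = a*height^b."""
    xs = [math.log(h) for h in heights_cm]
    ys = [math.log(v) for v in fvc_l]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

heights = [120, 130, 140, 150, 160, 170]     # cm, invented
fvc = [1.5, 1.9, 2.4, 3.0, 3.6, 4.3]         # litres, invented
a, b = fit_power_law(heights, fvc)
predicted = a * 155 ** b                     # predicted FVC at 155 cm
```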

  18. Remote sensing used for power curves

    DEFF Research Database (Denmark)

    Wagner, Rozenn; Ejsing Jørgensen, Hans; Schmidt Paulsen, Uwe

    2008-01-01

    Power curve measurement for large wind turbines requires taking into account more parameters than only the wind speed at hub height. Based on results from aerodynamic simulations, an equivalent wind speed taking the wind shear into account was defined and found to reduce the power standard deviat...

  19. Use of universal functional optimisation for TL glow curve analysis

    International Nuclear Information System (INIS)

    Pernicka, F.; Linh, H.Q.

    1996-01-01

    The effective use of any TL instrument requires an efficient software package able to fulfil the different tasks arising in research and practical applications. One of the standard features of the package used at the NPI Prague is the application of the interactive modular system Universal Functional Optimisation (UFO) for glow curve deconvolution. The whole system has been tested on standard glow curves using different models of the TL process (a single peak described by the Podgorsak approximation, first order kinetics and/or general order kinetics). Calculated values of the basic TL parameters (E and s) show good agreement with the results obtained by other authors. The main advantage of the system is its modularity, which enables flexible changes in the TL model and in the mathematical procedures of the glow curve analysis. (author)
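    A first-order (Randall-Wilkins) glow peak of the kind used in such deconvolutions can be evaluated numerically; the trap parameters below (E = 1.0 eV, s = 1e12 s^-1, linear heating at 1 K/s) are illustrative values, not results from the paper.

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K

def first_order_peak(T_grid, E, s, n0=1.0, beta=1.0):
    """Randall-Wilkins first-order glow peak I(T) for a linear heating rate beta.
    I(T) = n0*s*exp(-E/kT) * exp(-(s/beta) * integral_T0^T exp(-E/kT') dT')."""
    intensities, integral = [], 0.0
    for i, T in enumerate(T_grid):
        if i > 0:
            integral += math.exp(-E / (K_B * T)) * (T - T_grid[i - 1])
        I = n0 * s * math.exp(-E / (K_B * T)) * math.exp(-(s / beta) * integral)
        intensities.append(I)
    return intensities

T = [300 + 0.5 * i for i in range(400)]      # 300-500 K readout
I = first_order_peak(T, E=1.0, s=1e12)
T_max = T[I.index(max(I))]                   # peak temperature of the glow curve
```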

  20. Status of sennosides content in various Indian herbal formulations: Method standardization by HPTLC

    Directory of Open Access Journals (Sweden)

    Md.Wasim Aktar

    2008-12-01

    Several poly-herbal formulations containing senna (Cassia angustifolia) leaves are available in the Indian market for the treatment of constipation. The purgative effect of senna is due to the presence of two unique hydroxyanthracene glycosides, sennosides A and B. A HPTLC method for the quantitative analysis of sennosides A and B present in the formulations has been developed. Methanol extract of the formulations was analyzed on silica gel 60 GF254 HPTLC plates with spot visualization under UV and scanning at 350 nm in absorption/reflection mode. Calibration curves were found to be linear in the range 200-1000 ng. The correlation coefficients were found to be 0.991 for sennoside A and 0.997 for sennoside B. The average recovery rate was 95% for sennoside A and 97% for sennoside B, showing the reliability and reproducibility of the method. Limits of detection and quantification were determined as 0.05 and 0.25 μg/g, respectively. The validity of the method with respect to analysis was confirmed by comparing the UV spectra of the herbal formulations with that of the standard within the same Rf window. The analysis revealed a significant variation in sennosides content.
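    The calibration step described above amounts to a linear least-squares fit over the 200-1000 ng range, which is then inverted to quantify an unknown spot from its measured peak area; the peak areas below are invented for illustration.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = m*x + c."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    m = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
        sum((a - xbar) ** 2 for a in x)
    return m, ybar - m * xbar

amounts = [200, 400, 600, 800, 1000]     # ng per spot, the stated linear range
areas = [410, 820, 1190, 1630, 2020]     # invented densitometer peak areas
m, c = linear_fit(amounts, areas)
unknown = (1000 - c) / m                 # ng giving a measured area of 1000
```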

  2. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

    Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
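    The proposed analysis, plotting the first derivative against temperature rather than time, can be sketched on a synthetic cooling curve with a thermal arrest at 600 °C; the point where |dT/dt| is smallest then marks the arrest temperature. The data below are synthetic, not from the paper.

```python
def derivative_vs_temperature(times, temps):
    """Central-difference dT/dt paired with temperature instead of time."""
    pairs = []
    for i in range(1, len(times) - 1):
        dTdt = (temps[i + 1] - temps[i - 1]) / (times[i + 1] - times[i - 1])
        pairs.append((temps[i], dTdt))
    return pairs

# synthetic cooling curve: cool 700->600 C, solidification plateau, cool again
times = [float(t) for t in range(100)]
temps = []
for t in times:
    if t < 30:
        temps.append(700 - 10 / 3 * t)
    elif t < 60:
        temps.append(600.0)                 # thermal arrest
    else:
        temps.append(600 - 2.5 * (t - 60))

pairs = derivative_vs_temperature(times, temps)
arrest_T = min(pairs, key=lambda p: abs(p[1]))[0]
```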

  3. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    Science.gov (United States)

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
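    Of the six functions, the Gompertz has one of the simplest closed forms. A sketch with invented parameters (not fitted values from the study): the cumulative yield is Y(t) = A*exp(-b*exp(-c*t)), daily yield is its first derivative, and the peak of daily yield falls analytically at t = ln(b)/c.

```python
import math

def gompertz(t, A, b, c):
    """Cumulative milk yield Y(t) = A * exp(-b * exp(-c*t))."""
    return A * math.exp(-b * math.exp(-c * t))

def daily_yield(t, A, b, c):
    """First derivative of the cumulative curve: yield per day."""
    return A * b * c * math.exp(-c * t) * math.exp(-b * math.exp(-c * t))

A, b, c = 9000.0, 4.0, 0.012      # invented parameters
peak_day = math.log(b) / c        # analytic maximum of the daily-yield curve
```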

  4. CHANGES AND MODIFICATIONS OF THE TROUSERS PATTERNS FOR NON-STANDARD FIGURES

    Directory of Open Access Journals (Sweden)

    SUDACEVSCHI SVETLANA

    2015-12-01

    Among the problems faced by the constructors of clothing goods are the non-standard figures of the human body. The present article examines the possibilities of modifying the curve of women's trousers. The author proposes methods of changing the basic drawing of women's trousers for non-standard figures and of using these methods in the process of training in specialized educational institutions.

  5. A direct method for determining complete positive and negative capillary pressure curves for reservoir rock using the centrifuge

    Energy Technology Data Exchange (ETDEWEB)

    Spinler, E.A.; Baldwin, B.A. [Phillips Petroleum Co., Bartlesville, OK (United States)

    1997-08-01

    A method is being developed for direct experimental determination of capillary pressure curves from saturation distributions produced during centrifuging fluids in a rock plug. A free water level is positioned along the length of the plugs to enable simultaneous determination of both positive and negative capillary pressures. Octadecane as the oil phase is solidified by temperature reduction while centrifuging to prevent fluid redistribution upon removal from the centrifuge. The water saturation is then measured via magnetic resonance imaging. The saturation profile within the plug and the calculation of pressures for each point of the saturation profile allows for a complete capillary pressure curve to be determined from one experiment. Centrifuging under oil with a free water level into a 100 percent water saturated plug results in the development of a primary drainage capillary pressure curve. Centrifuging similarly at an initial water saturation in the plug results in the development of an imbibition capillary pressure curve. Examples of these measurements are presented for Berea sandstone and chalk rocks.
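    The pressure assigned to each point of the saturation profile follows the standard centrifuge relation Pc(r) = ½·Δρ·ω²·(r_fwl² − r²), which changes sign at the free water level, giving the positive and negative branches described above. The rotor speed, densities, and plug geometry below are invented for illustration.

```python
import math

def capillary_pressure(r, r_fwl, delta_rho, omega):
    """Pc (Pa) at radius r for a free water level at radius r_fwl."""
    return 0.5 * delta_rho * omega ** 2 * (r_fwl ** 2 - r ** 2)

delta_rho = 200.0                        # kg/m^3 water-oil density difference (assumed)
omega = 2.0 * math.pi * 3000.0 / 60.0    # 3000 rpm in rad/s (assumed)
radii = [0.05 + 0.001 * i for i in range(31)]   # plug spanning 5-8 cm from the axis
r_fwl = 0.065                            # free water level positioned mid-plug
pc = [capillary_pressure(r, r_fwl, delta_rho, omega) for r in radii]
```

    Pc is positive (drainage branch) inboard of the free water level and negative (imbibition branch) outboard of it.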

  6. Standard Procedure for Dose Assessment using the film holder NRPB/AERE and the film AGFA Monitoring 2/10

    International Nuclear Information System (INIS)

    Guillen, J.A.

    1998-07-01

    This paper describes the calculation method used to assess dose and energy with the film holder from NRPB/AERE and the Agfa Monitoring 2/10 film. It also covers all the steps, from preparing the standard curve and fitting the calibration curve to dose assessment, and describes the filtration of the film holder and the form of the calibration curve.

  7. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-01-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal

  8. Natural frequencies of the frames having curved member

    International Nuclear Information System (INIS)

    Tekelioglu, M.; Ozyigit, H.A.; Ridvan, H.

    2001-01-01

    In-plane and out-of-plane vibrations of a frame having a curved member are studied. Although the analysis is carried out on a frame having one straight and one curved beam, it is applicable to all frame-type structures. Different end conditions are considered for the system. Rotary inertia and extensional effects are included for the curved member. The finite element method is used as the analysis tool. Natural frequencies of the curved beams for different end conditions are calculated first, and then the frequencies of the frames are investigated. The transformation from local coordinates to global coordinates for curved beams needs special attention in the analysis. The results are compared with other methods. (author)

  9. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method, belonging to the differential category, for determining end points from potentiometric titration curves is presented. It uses a preprocessing step to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and equivalence-point-category methods such as Gran or Fortuin.
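    The inverse parabolic interpolation step has a closed-form solution: the abscissa of the extremum of the parabola through three (volume, derivative) points. A sketch with invented derivative values follows; the vertex formula is the standard one from successive parabolic interpolation.

```python
def parabola_vertex(x1, y1, x2, y2, x3, y3):
    """Abscissa of the extremum of the parabola through three points
    (the analytical step of inverse parabolic interpolation)."""
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den

# first-derivative values dE/dV around the inflection (invented), maximum near V = 10.2 mL
v = [10.0, 10.2, 10.4]
dEdV = [80.0, 120.0, 80.0]
end_point = parabola_vertex(v[0], dEdV[0], v[1], dEdV[1], v[2], dEdV[2])
```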

  10. Provincial carbon intensity abatement potential estimation in China: A PSO–GA-optimized multi-factor environmental learning curve method

    International Nuclear Information System (INIS)

    Yu, Shiwei; Zhang, Junjie; Zheng, Shuhong; Sun, Han

    2015-01-01

    This study aims to estimate carbon intensity abatement potential in China at the regional level by proposing a particle swarm optimization-genetic algorithm (PSO-GA) multivariate environmental learning curve estimation method. The model uses two independent variables, namely, per capita gross domestic product (GDP) and the proportion of the tertiary industry in GDP, to construct carbon intensity learning curves (CILCs), i.e., CO2 emissions per unit of GDP, of 30 provinces in China. Instead of the traditional ordinary least squares (OLS) method, a PSO-GA intelligent optimization algorithm is used to optimize the coefficients of a learning curve. The carbon intensity abatement potentials of the 30 Chinese provinces are estimated via PSO-GA under the business-as-usual scenario. The estimation reveals the following results. (1) For most provinces, the abatement potentials from improving a unit of the proportion of the tertiary industry in GDP are higher than the potentials from raising a unit of per capita GDP. (2) The average potential of the 30 provinces in 2020 will be 37.6% relative to the emissions level of 2005. The potentials of Jiangsu, Tianjin, Shandong, Beijing, and Heilongjiang are over 60%. Ningxia is the only province without intensity abatement potential. (3) The total carbon intensity in China weighted by the GDP shares of the 30 provinces will decline by 39.4% in 2020 compared with that in 2005. This decline cannot achieve the 40%-45% carbon intensity reduction target set by the Chinese government. Additional mitigation policies should be developed to uncover the potentials of Ningxia and Inner Mongolia. In addition, the simulation accuracy of the CILCs optimized by PSO-GA is higher than that of the CILCs optimized by the traditional OLS method. - Highlights: • A PSO-GA-optimized multi-factor environmental learning curve method is proposed. • The carbon intensity abatement potentials of the 30 Chinese provinces are estimated by

  11. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.

  12. Constructing forward price curves in electricity markets

    DEFF Research Database (Denmark)

    Fleten, S.-E.; Lemming, Jørgen Kjærgaard

    2003-01-01

    We present and analyze a method for constructing approximated high-resolution forward price curves in electricity markets. Because a limited number of forward or futures contracts are traded in the market, only a limited picture of the theoretical continuous forward price curve is available to the analyst. Our method combines the information contained in observed bid and ask prices with information from the forecasts generated by bottom-up models. As an example, we use information concerning the shape of the seasonal variation from a bottom-up model to improve the forward price curve quoted...
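    A much-simplified version of combining a bottom-up seasonal shape with an observed contract quote is to shift the shape over the delivery period so that its mean matches the quote. This is only a sketch of the idea, not the authors' exact construction, and the prices below are invented.

```python
def adjust_to_contract(forecast, start, end, contract_price):
    """Shift forecast[start:end] by a constant so its mean matches the quoted price."""
    segment = forecast[start:end]
    shift = contract_price - sum(segment) / len(segment)
    return forecast[:start] + [p + shift for p in segment] + forecast[end:]

# bottom-up daily shape in EUR/MWh (invented weekday premium), one quoted 30-day contract
shape = [30.0 + (5.0 if d % 7 < 5 else 0.0) for d in range(60)]
curve = adjust_to_contract(shape, 0, 30, 36.0)
```

    The adjusted curve keeps the forecast's seasonal profile while being arbitrage-consistent, on average, with the quoted contract.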

  13. Comparison of the methods for tissue triiodothyronine T(3) extraction and subsequent radioimmunoassay

    International Nuclear Information System (INIS)

    Takaishi, M.; Miyachi, Y.; Aoki, M.; Shishiba, Y.; Asahi Life Foundation, Tokyo

    1978-01-01

    Although there have been various reports on tissue T3 concentration, an examination of the quality of the radioimmunoassay has not been available. In the present study, we tried to determine whether the available methods for T3 extraction are adequate for the various methods of T3 radioimmunoassay used. T3 was extracted from liver by ethanol extraction or by acid butanol extraction (Flock's method), and the extract was applied to radioimmunoassay either by Seralute T3 column, ANS-double antibody, or the ANS-charcoal method. The values of T3 were compared with those obtained by the isotope-equilibration method. The dilution curve of the ethanol extract was not parallel with that of the standard in the ANS-charcoal or ANS-double antibody technique. When the extract was tested by the Seralute method, the dilution curve was parallel to the standard, whereas the T3 value obtained with this method was two-fold higher than that with the isotope-equilibration technique. The analysis of the ethanol extract suggested that the lipid extracted by ethanol interfered with the assay. The acid butanol extract, when tested either by the ANS-double antibody or Seralute method, showed parallelism to the standard curve and gave T3 values almost identical with those by the isotope-equilibration method. When tested by the ANS-charcoal method, the dilution curve of the acid butanol extract was not parallel to the standard. Thus, to obtain reliable results, tissue extraction by Flock's method and subsequent T3 radioimmunoassay by either the ANS-double antibody or the Seralute T3 method are recommended. (author)

  14. Designing the Alluvial Riverbeds in Curved Paths

    Science.gov (United States)

    Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina

    2017-10-01

    The paper presents the method of determining the shape of the riverbed in curves of the watercourse, which is based on the method of Ikeda (1975) developed for a slightly curved path in sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides the appropriate basis for river restoration. Based on the research in the experimental reach of the Holeška Brook and several alluvial mountain streams the methodology was adjusted. The method also takes into account other important characteristics of bottom material - the shape and orientation of the particles, settling velocity and drag coefficients. Thus, the method is mainly meant for the natural sand-gravel material, which is heterogeneous and the particle shape of the bottom material is very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of optimal habitat, but also for the design of foundations of armouring of the bankside of the channel. The input data is adapted to the conditions of design practice.

  15. A new approach of analysing GRB light curves

    International Nuclear Information System (INIS)

    Varga, B.; Horvath, I.

    2005-01-01

    We estimated the Txx quantiles of the cumulative GRB light curves using our recalculated background. The basic information of the light curves was extracted by multivariate statistical methods. The possible classes of the light curves are also briefly discussed
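    Txx-style quantiles of a cumulative light curve (e.g. T90, the time for the cumulative counts to go from 5% to 95% of the total) can be computed as follows; the flat count profile is for illustration only.

```python
def t_quantile_interval(times, counts, lo=0.05, hi=0.95):
    """Txx-style duration: time between the lo and hi fractions of total counts."""
    total = sum(counts)
    cum, t_lo, t_hi = 0.0, None, None
    for t, c in zip(times, counts):
        cum += c
        if t_lo is None and cum >= lo * total:
            t_lo = t
        if t_hi is None and cum >= hi * total:
            t_hi = t
    return t_hi - t_lo

times = [0.1 * i for i in range(100)]   # seconds
counts = [1.0] * 100                    # flat burst profile for illustration
t90 = t_quantile_interval(times, counts)
```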

  16. Rationalization in architecture with surfaces foliated by elastic curves

    DEFF Research Database (Denmark)

    Nørbjerg, Toke Bjerge

    We develop methods for rationalization of CAD surfaces using elastic curves, aiming at a cost-effective fabrication method for architectural designs of complex shapes. By moving a heated flexible metal rod through a block of expanded polystyrene, it is possible to produce shapes with both positive and negative Gaussian curvature, either for direct use or for use as moulds for concrete casting. If we can control the shape of the rod, while moving, we can produce prescribed shapes. The flexible rod assumes at all times the shape of an Euler elastica (or elastic curve). The elastica are given in closed analytic form using elliptic functions. We use a gradient-driven optimization to approximate arbitrary planar curves by planar elastic curves. The method depends on an explicit parameterization of the space of elastic curves and on a method for finding a good initial guess for the optimization. We...

  17. Standardization of Laser Methods and Techniques for Vibration Measurements and Calibrations

    International Nuclear Information System (INIS)

    Martens, Hans-Juergen von

    2010-01-01

    The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and refined laser methods and techniques developed by national metrology institutes and by leading manufacturers in the past two decades have been swiftly specified as standard methods for inclusion in the ISO 16063 series of international documentary standards. A survey of ISO standards for the calibration of vibration and shock transducers demonstrates the extended ranges and improved accuracy (measurement uncertainty) of laser methods and techniques for vibration and shock measurements and calibrations. The first standard for the calibration of laser vibrometers by laser interferometry or by a reference accelerometer calibrated by laser interferometry (ISO 16063-41) is at the Draft International Standard (DIS) stage and may be issued by the end of 2010. The standard methods with refined techniques proved to achieve wider measurement ranges and smaller measurement uncertainties than those specified in the ISO standards. The applicability of different standardized interferometer methods to vibrations at high frequencies was recently demonstrated up to 347 kHz (acceleration amplitudes up to 350 km/s²). The relative deviations between the amplitude measurement results of the different interferometer methods, applied simultaneously, differed by less than 1% in all cases.

  18. Light curves for "bump Cepheids" computed with a dynamically zoned pulsation code

    International Nuclear Information System (INIS)

    Adams, T.F.; Castor, J.E.; Davis, C.G.

    1978-01-01

    The dynamically zoned pulsation code developed by Castor, Davis, and Davison has been used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. This study, with a code that is capable of producing reliable light curves, shows again that the light and velocity curves for 9.8-day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the "evolutionary mass." The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.

  19. Particles and Dirac-type operators on curved spaces

    International Nuclear Information System (INIS)

    Visinescu, Mihai

    2003-01-01

    We review the geodesic motion of pseudo-classical particles in curved spaces. Investigating the generalized Killing equations for spinning spaces, we express the constants of motion in terms of Killing-Yano tensors. Passing from the spinning spaces to the Dirac equation in curved backgrounds we point out the role of the Killing-Yano tensors in the construction of the Dirac-type operators. The general results are applied to the case of the four-dimensional Euclidean Taub-Newman-Unti-Tamburino space. From the covariantly constant Killing-Yano tensors of this space we construct three new Dirac-type operators which are equivalent with the standard Dirac operator. Finally the Runge-Lenz operator for the Dirac equation in this background is expressed in terms of the fourth Killing-Yano tensor which is not covariantly constant. As a rule the covariantly constant Killing-Yano tensors realize certain square roots of the metric tensor. Such a Killing-Yano tensor produces simultaneously a Dirac-type operator and the generator of a one-parameter Lie group connecting this operator with the standard Dirac one. On the other hand, the not covariantly constant Killing-Yano tensors are important in generating hidden symmetries. The presence of not covariantly constant Killing-Yano tensors implies the existence of non-standard supersymmetries in point particle theories on curved background. (author)

  20. Large Display Interaction via Multiple Acceleration Curves and Multifinger Pointer Control

    Directory of Open Access Journals (Sweden)

    Andrey Esakia

    2014-01-01

    Large high-resolution displays combine high pixel density with ample physical dimensions. The combination of these factors creates a multiscale workspace where interactive targeting of on-screen objects requires both high speed for distant targets and high accuracy for small targets. Modern operating systems support implicit dynamic control-display gain adjustment (i.e., a pointer acceleration curve) that helps to maintain both speed and accuracy. However, large high-resolution displays require a broader range of control-display gains than a single acceleration curve can usably enable. Some interaction techniques attempt to solve the problem by utilizing multiple explicit modes of interaction, where different modes provide different levels of pointer precision. Here, we investigate the alternative hypothesis of using a single mode of interaction for continuous pointing that enables both (1) standard implicit granularity control via an acceleration curve and (2) explicit switching between multiple acceleration curves in an efficient and dynamic way. We evaluate a sample solution that augments standard touchpad accelerated pointer manipulation with multitouch capability, where the choice of acceleration curve dynamically changes depending on the number of fingers in contact with the touchpad. Specifically, users can dynamically switch among three different acceleration curves by using one, two, or three fingers on the touchpad.
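    The core idea, selecting among several acceleration curves by finger count, can be sketched with hypothetical power-law gain curves; the coefficients below are not from the paper, they merely make more fingers map to coarser, faster pointing.

```python
def make_gain(base, exponent):
    """Power-law acceleration curve: control-display gain as a function of speed."""
    return lambda speed: base * speed ** exponent

# hypothetical curves: more fingers -> higher gain (faster, less precise pointing)
CURVES = {1: make_gain(0.5, 0.6), 2: make_gain(1.5, 0.8), 3: make_gain(4.0, 1.0)}

def pointer_displacement(device_speed, fingers, dt=0.01):
    """On-screen displacement for one input sample at the given device speed."""
    gain = CURVES[fingers](device_speed)
    return gain * device_speed * dt
```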

  1. A New Processing Method Combined with BP Neural Network for Francis Turbine Synthetic Characteristic Curve Research

    Directory of Open Access Journals (Sweden)

    Junyi Li

    2017-01-01

    A BP (backpropagation) neural network method is employed to solve a problem in current hydroturbine synthetic characteristic curve processing: most studies are concerned only with data in the high-efficiency, large guide vane opening area, which can hardly meet the research requirements of the transition process, especially in large fluctuation situations. The principle of the proposed method is to convert the nonlinear characteristics of the turbine to torque and flow characteristics, which can be used directly for real-time simulation based on the neural network. Results show that the obtained sample data can be successfully extended to cover wider working areas under different operating conditions. Another major contribution of this paper is the resampling technique proposed to overcome the limitations of the sampling period in simulation. In addition, a detailed analysis of improvements to the iteration convergence of the pressure loop is presented, leading to better iterative convergence during the head pressure calculation. Actual applications verify that the methods proposed in this paper give better simulation results, closer to field data, and provide a new perspective for hydroturbine synthetic characteristic curve fitting and modeling.
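    A minimal backpropagation network of the kind referred to, a single tanh hidden layer trained by stochastic gradient descent, can be sketched in pure Python. The target function here is a simple stand-in, not turbine characteristic data, and the layer sizes and learning rate are arbitrary choices.

```python
import math, random

def train_mlp(samples, hidden=8, lr=0.05, epochs=3000, seed=1):
    """Tiny 1-input, 1-output network (tanh hidden layer) trained by backpropagation."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1.0, 1.0) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    for _ in range(epochs):
        for x, y in samples:
            h, out = forward(x)
            err = out - y                          # gradient of squared error / 2
            for j in range(hidden):
                grad_h = err * w2[j] * (1.0 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return lambda x: forward(x)[1]

# stand-in curve; real use would feed (opening, speed) -> torque/flow samples
samples = [(x / 20.0, (x / 20.0) ** 2) for x in range(21)]
net = train_mlp(samples)
mse = sum((net(x) - y) ** 2 for x, y in samples) / len(samples)
```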

  2. Reference Curve for the Mean Uterine Artery Pulsatility Index in Singleton Pregnancies.

    Science.gov (United States)

    Weichert, Alexander; Hagen, Andreas; Tchirikov, Michael; Fuchs, Ilka B; Henrich, Wolfgang; Entezami, Michael

    2017-05-01

    Doppler sonography of the uterine artery (UA) is used to monitor pregnancies because the detected flow patterns allow inferences about possible disorders of trophoblast invasion. Increased resistance in the UA is associated with an increased risk of preeclampsia and/or intrauterine growth restriction (IUGR) and perinatal mortality. In the absence of standardized figures, the normal ranges of the various available reference curves sometimes differ quite substantially from one another. The causes for this are differences in the flow patterns of the UA depending on the position of the pulsed Doppler gates as well as branching of the UA. Because of the discrepancies between the different reference curves and the practical problems this poses for guideline recommendations, we thought it would be useful to create our own reference curves for Doppler measurements of the UA obtained from a singleton cohort under standardized conditions. This retrospective cohort study was carried out in the Department of Obstetrics of the Charité - Universitätsmedizin Berlin, the Department for Obstetrics and Prenatal Medicine of the University Hospital Halle (Saale), and the Center for Prenatal Diagnostics and Human Genetics Kurfürstendamm 199. Available datasets from the three study locations were identified, and reference curves were generated using the LMS method. Measured values were correlated with age of gestation, and a cubic model with a Box-Cox power transformation (L), the median (M), and the coefficient of variation (S) was used to smooth the curves. 103 720 Doppler examinations of the UA carried out in singleton pregnancies from the 11th week of gestation (10 + 1 GW) were analyzed. The mean pulsatility index (mean PI) showed a continuous decline over the course of pregnancy, dropping to a plateau of around 0.84 between the 23rd and 27th GW, after which it decreased again. Age of gestation, placental position, position of pulsed Doppler gates and branching of
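    With the LMS method, the centile at z-score z is recovered from the smoothed L, M, S values as M(1 + L·S·z)^(1/L) (Cole's formulation), and a measurement is converted back to a z-score by the inverse. The L, M, S triplet below is invented for illustration, not taken from the study's curves.

```python
import math

def lms_centile(L, M, S, z):
    """Measurement at z-score z from LMS parameters (Cole's method)."""
    if abs(L) < 1e-9:                       # limiting log-normal case L -> 0
        return M * math.exp(S * z)
    return M * (1 + L * S * z) ** (1 / L)

def lms_zscore(x, L, M, S):
    """z-score of a measurement x given the LMS parameters."""
    if abs(L) < 1e-9:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# illustrative (invented) LMS triplet for the mean UA PI at one gestational week
L_, M_, S_ = -0.5, 0.84, 0.25
p95 = lms_centile(L_, M_, S_, 1.645)        # upper reference limit (95th centile)
```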

  3. Symmetry Properties of Potentiometric Titration Curves.

    Science.gov (United States)

    Macca, Carlo; Bombi, G. Giorgio

    1983-01-01

    Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)

  4. An information preserving method for producing full coverage CoRoT light curves

    Directory of Open Access Journals (Sweden)

    Pascual-Granado J.

    2015-01-01

    Full Text Available Invalid flux measurements, caused mainly by the South Atlantic Anomaly crossings of the CoRoT satellite, introduce aliases and spurious amplitudes in the periodogram. It has been demonstrated that replacing such invalid data with a linear interpolation is not harmless. On the other hand, using power spectrum estimators for unevenly sampled time series is not only less computationally efficient but also leads to difficulties in the interpretation of the results. Therefore, even when the gaps are rather small and the duty cycle is high, gap-filling methods offer a gain in frequency analysis. However, the method must preserve the information contained in the time series. In this work we give a short description of an information preserving method (MIARMA) and show some results of applying it to CoRoT seismo light curves. The method is implemented as the second step of a pipeline for CoRoT data analysis.
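
    The trade-off above can be illustrated with the naive baseline the abstract warns about: filling gaps by linear interpolation restores an even sampling grid for the FFT, but distorts the signal's information content, which is why MIARMA instead predicts the gapped segments with ARMA models. A minimal sketch of the naive approach, on synthetic data (all values made up):

    ```python
    import numpy as np

    def fill_gaps_linear(t, y):
        """Replace invalid (NaN) flux values with linear interpolation.

        This is the naive baseline the abstract cautions against for
        frequency analysis, not the MIARMA method itself.
        """
        y = np.asarray(y, dtype=float)
        valid = ~np.isnan(y)
        filled = y.copy()
        filled[~valid] = np.interp(t[~valid], t[valid], y[valid])
        return filled

    # Evenly sampled sinusoid with one gap (e.g., an SAA crossing)
    t = np.linspace(0.0, 10.0, 1000)
    y = np.sin(2.0 * np.pi * 1.5 * t)
    y[400:450] = np.nan          # invalidated flux measurements
    filled = fill_gaps_linear(t, y)
    ```

    After filling, a standard FFT periodogram can be computed on the even grid; the dominant frequency survives, but side-lobe amplitudes around it are biased, which is the artifact the paper addresses.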

  5. Evaluation of Strain-Life Fatigue Curve Estimation Methods and Their Application to a Direct-Quenched High-Strength Steel

    Science.gov (United States)

    Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.

    2018-03-01

    Methods to estimate the strain-life curve, divided into three categories: simple approximations, artificial neural network-based approaches and continuum damage mechanics models, were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. Simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in the estimation of fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting the early stages of crack initiation, but it requires more experimental data for calibration than the simple approximations. Because different theories underlie the analyzed methods, the approaches have different strengths and weaknesses. It was found, however, that the parametric equations categorized as simple approximations are the easiest for practical use, their applicability having already been verified for a broad range of materials.
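
    As an illustration of the "simple approximation" category, the sketch below evaluates a Coffin-Manson-Basquin strain-life curve with fatigue properties estimated from Brinell hardness in the style of the Roessle-Fatemi hardness method. The coefficients and material values are quoted from the general literature, not taken from this paper, and should be verified against a primary source before any real use:

    ```python
    import numpy as np

    def strain_life_hardness(HB, E, two_Nf):
        """Total strain amplitude vs. reversals to failure (2Nf) from the
        Coffin-Manson-Basquin relation, with fatigue properties estimated
        from Brinell hardness (Roessle-Fatemi-style coefficients; assumed,
        verify before use). E and stresses are in MPa."""
        sf = 4.25 * HB + 225.0                            # fatigue strength coeff.
        ef = (0.32 * HB**2 - 487.0 * HB + 191000.0) / E   # fatigue ductility coeff.
        b, c = -0.09, -0.56                               # assumed universal exponents
        return (sf / E) * two_Nf**b + ef * two_Nf**c

    two_Nf = np.logspace(2, 7, 6)   # reversals to failure
    eps_a = strain_life_hardness(HB=350.0, E=210000.0, two_Nf=two_Nf)
    ```

    The attraction of this category is visible in the signature: only hardness and elastic modulus are needed, both cheap to measure compared with a strain-controlled fatigue test series.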

  6. Constructing forward price curves in electricity markets

    International Nuclear Information System (INIS)

    Fleten, Stein-Erik; Lemming, Jacob

    2003-01-01

    We present and analyze a method for constructing approximated high-resolution forward price curves in electricity markets. Because a limited number of forward or futures contracts are traded in the market, only a limited picture of the theoretical continuous forward price curve is available to the analyst. Our method combines the information contained in observed bid and ask prices with information from the forecasts generated by bottom-up models. As an example, we use information concerning the shape of the seasonal variation from a bottom-up model to improve the forward price curve quoted on the Nordic power exchange.
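
    A minimal sketch of the underlying idea: the bottom-up model supplies the hourly seasonal shape, and each traded contract pins down the average price level over its delivery period. Here the adjustment is a simple constant shift per delivery period rather than the smooth, bid/ask-constrained fit used in the paper, and all numbers are made up:

    ```python
    import numpy as np

    def anchor_to_contracts(shape, contracts):
        """Adjust a bottom-up hourly price forecast so that its average over
        each contract's delivery period matches the observed market price.

        `contracts` is a list of (start_hour, end_hour, price) tuples.
        A constant shift per period is the simplest possible adjustment;
        the paper fits a smooth curve within bid/ask bounds instead.
        """
        curve = np.asarray(shape, dtype=float).copy()
        for start, end, price in contracts:
            curve[start:end] += price - curve[start:end].mean()
        return curve

    # One week of hourly prices with an invented seasonal shape
    hourly_shape = 30.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, 168))
    curve = anchor_to_contracts(hourly_shape, [(0, 168, 32.5)])
    ```

    The shift preserves the forecast's intra-period shape while making the curve consistent with the quoted contract price, which is the essence of combining market and model information.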

  7. Multivariate analysis of diagnostic parameters derived from whole-kidney and parenchymal time-activity curves

    International Nuclear Information System (INIS)

    Bergmann, H.; Mostbeck, A.; Samal, M.; Nimmon, C.C.; Staudenherz, A.; Dudczak, R.

    2002-01-01

    Aim: In a previous work, we have confirmed earlier reports that time-activity curves of renal cortex provide additional useful diagnostic information. The aim of this experiment was to support the finding quantitatively using multiple regression. Materials and Methods: In a retrospective study, we have analyzed MAG3 renal data (90 kidneys in 57 children). Whole-kidney (WK) and parenchymal (PA) time-activity curves were extracted from the 20 min pre-diuretic phase using standard WK and parenchymal fuzzy ROIs. Using multiple regression analysis, peak time, mean transit time, output efficiency, and four additional indices of residual activity in WK and PA ROIs were related to the maximum elimination rate (EM) of urine after the diuretic. The kidneys were divided into four groups according to the WK peak time (WKPT): WKPT longer than 0 (all kidneys), 5, 10, and 15 min. Results: Multiple correlation coefficients between the set of WK, PA, and WK+PA curve parameters (independent variables) and the log EM (dependent variable) for each group are summarized. Conclusions: Using pre-diuretic time-activity curves, it is possible to predict the diuretic response. This can be useful when interpreting dubious results. Parenchymal curves predict the diuretic response better than whole-kidney curves. With increasing WKPT the whole-kidney curves become useless, while the parenchymal curves remain useful. Using both WK and PA curves produces the best results, demonstrating that the WK and PA curves carry independent diagnostic information. The contribution obtained from the parenchymal curves is certainly worth the difficulties and time required to draw the additional ROIs. However, substantial effort has to be devoted to the accurate and reproducible definition of parenchymal ROIs.

  8. A Method of Timbre-Shape Synthesis Based On Summation of Spherical Curves

    DEFF Research Database (Denmark)

    Putnam, Lance Jonathan

    2014-01-01

    It is well-known that there is a rich correspondence between sound and visual curves, perhaps most widely explored through direct input of sound into an oscilloscope. However, there have been relatively few proposals on how to translate sound into three-dimensional curves. We present a novel meth...

  9. Optimization on Spaces of Curves

    DEFF Research Database (Denmark)

    Møller-Andersen, Jakob

    in Rd, and methods to solve the initial and boundary value problem for geodesics allowing us to compute the Karcher mean and principal components analysis of data of curves. We apply the methods to study shape variation in synthetic data in the Kimia shape database, in HeLa cell nuclei and cycles...... of cardiac deformations. Finally we investigate a new application of Riemannian shape analysis in shape optimization. We setup a simple elliptic model problem, and describe how to apply shape calculus to obtain directional derivatives in the manifold of planar curves. We present an implementation based...

  10. Potential Energy Curve of N2 Revisited

    Czech Academy of Sciences Publication Activity Database

    Špirko, Vladimír; Xiangzhu, L.; Paldus, J.

    2011-01-01

    Roč. 76, č. 4 (2011), s. 327-341 ISSN 0010-0765 R&D Projects: GA MŠk LC512; GA ČR GAP208/11/0436 Institutional research plan: CEZ:AV0Z40550506 Keywords : reduced multireference coupled-cluster method * reduced potential curve method * nitrogen molecule potential energy curves Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.283, year: 2011

  11. Reducing matrix effect error in EDXRF: Comparative study of using standard and standard less methods for stainless steel samples

    International Nuclear Information System (INIS)

    Meor Yusoff Meor Sulaiman; Masliana Muhammad; Wilfred, P.

    2013-01-01

    Even though EDXRF analysis has major advantages for stainless steel samples, such as simultaneous determination of the minor elements, analysis without sample preparation and non-destructive analysis, matrix effects arising from inter-element interactions can make the final quantitative results inaccurate. This paper presents a comparative quantitative analysis using standard and standardless methods in the determination of these elements. The standard method was carried out by plotting regression calibration graphs of the elements of interest using BCS certified stainless steel standards. Different calibration plots were developed based on the available certified standards; the stainless steel grades include low alloy steel, austenitic, ferritic and high speed. The standardless method, on the other hand, uses mathematical modelling with a matrix effect correction derived from the Lucas-Tooth and Price model. The accuracy of the standardless method was further improved by including pure elements in the development of the model. Discrepancy tests were then carried out for these quantitative methods on different certified samples, and the results show that the high speed method is the most reliable for the determination of Ni and the standardless method for Mn. (Author)

  12. Strain- and stress-based forming limit curves for DP 590 steel sheet using Marciniak-Kuczynski method

    Science.gov (United States)

    Kumar, Gautam; Maji, Kuntal

    2018-04-01

    This article deals with the prediction of strain- and stress-based forming limit curves for advanced high-strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely von Mises, Hill's 48 and Yld2000-2d, and two hardening laws, i.e., the Hollomon power law and the Swift hardening law, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of the imperfection factor and the initial groove angle on the prediction of the FLC were also investigated. It was observed that the FLCs shifted upward with an increase in the imperfection factor value. The initial groove angle was found to have a significant effect on the limit strains on the left side of the FLC, and an insignificant effect on the right side of the FLC for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side. The numerically predicted FLCs considering the different combinations of yield criteria and hardening laws were compared with published experimental FLCs for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law showed the best correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. The theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than strain-based forming limit curves.
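
    The strain-to-stress mapping that underlies an SFLC can be sketched for a single forming-limit point, assuming proportional loading, von Mises yielding and Hollomon hardening. The material constants K and n below are illustrative, not the DP590 values from the paper:

    ```python
    import numpy as np

    def stress_fld_point(eps1, eps2, K, n):
        """Map one strain-based forming limit point (major strain eps1,
        minor strain eps2) to principal stresses (sigma2, sigma1), assuming
        proportional loading, von Mises yielding and Hollomon hardening
        sigma_eq = K * eps_eq**n."""
        beta = eps2 / eps1                             # strain path ratio
        eps_eq = (2.0 / np.sqrt(3.0)) * np.sqrt(eps1**2 + eps1 * eps2 + eps2**2)
        sig_eq = K * eps_eq**n                         # Hollomon flow stress
        alpha = (2.0 * beta + 1.0) / (beta + 2.0)      # stress ratio from flow rule
        sig1 = sig_eq / np.sqrt(1.0 - alpha + alpha**2)
        return alpha * sig1, sig1                      # (sigma2, sigma1)

    # Illustrative constants, not DP590 data from the paper:
    K, n = 1000.0, 0.2
    sig2_ps, sig1_ps = stress_fld_point(0.3, 0.0, K, n)   # plane-strain point
    ```

    Applying this mapping along the whole FLC yields the SFLC; because the stress state saturates with strain through the hardening law, the SFLC is far less sensitive to the strain path than the strain-based curve, which is the advantage the abstract reports.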

  13. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    Science.gov (United States)

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and to identify means to collaborate with scientists where standardization is limited but interest and need occur.

  14. Determination of trace elements in standard reference materials by the k0-standardization method

    International Nuclear Information System (INIS)

    Smodis, B.; Jacimovic, R.; Stegnar, P.; Jovanovic, S.

    1990-01-01

    The k0-standardization method is suitable for routine multielement determinations by reactor neutron activation analysis (NAA). Investigation of the NIST standard reference materials SRM 1571 Orchard Leaves, SRM 1572 Citrus Leaves, and SRM 1573 Tomato Leaves showed the systematic error of 12 certified elements determined to be less than 8%. Thirty-four elements were determined in the NIST proposed SRM 1515 Apple Leaves.

  15. The Short- and Long-Run Marginal Cost Curve: A Pedagogical Note.

    Science.gov (United States)

    Sexton, Robert L.; And Others

    1993-01-01

    Contends that the standard description of the relationship between the long-run marginal cost curve and the short-run marginal cost curve is often misleading and imprecise. Asserts that a sampling of college-level textbooks confirms this confusion. Provides a definition and instructional strategy that can be used to promote student understanding…

  16. Computing daily mean streamflow at ungaged locations in Iowa by using the Flow Anywhere and Flow Duration Curve Transfer statistical methods

    Science.gov (United States)

    Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.

    2012-01-01

    -mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. 
For the streamgage with the poorest agreement between observed and
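
    The agreement statistics quoted above (root-mean-square error, percent bias, Nash-Sutcliffe efficiency) are standard and easy to reproduce. A minimal sketch with made-up observed and estimated streamflows; note that sign conventions for percent bias vary between studies:

    ```python
    import numpy as np

    def fit_metrics(obs, sim):
        """Goodness-of-fit statistics for comparing observed and estimated
        daily streamflow: RMSE, percent bias (positive = overestimation
        here; conventions vary) and Nash-Sutcliffe efficiency."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        err = sim - obs
        rmse = np.sqrt(np.mean(err**2))
        pbias = 100.0 * err.sum() / obs.sum()
        nse = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
        return rmse, pbias, nse

    obs = np.array([10.0, 20.0, 30.0, 40.0])   # invented flows, ft3/s
    sim = np.array([12.0, 18.0, 33.0, 37.0])
    rmse, pbias, nse = fit_metrics(obs, sim)
    ```

    NSE equals 1 for a perfect model, 0 for a model no better than the observed mean, and goes negative below that, which is why values in the 0.35 to 0.89 range reported above indicate fair to good agreement.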

  17. Using Floquet periodicity to easily calculate dispersion curves and wave structures of homogeneous waveguides

    Science.gov (United States)

    Hakoda, Christopher; Rose, Joseph; Shokouhi, Parisa; Lissenden, Clifford

    2018-04-01

    Dispersion curves are essential to any guided-wave-related project. The Semi-Analytical Finite Element (SAFE) method has become the conventional way to compute dispersion curves for homogeneous waveguides. However, only recently has a general SAFE formulation for commercial and open-source software become available, meaning that until now SAFE analyses have been variable and more time-consuming than desirable. Likewise, Floquet boundary conditions enable analysis of waveguides with periodicity and have been an integral part of the development of metamaterials. In fact, we have found the use of Floquet boundary conditions to be an extremely powerful tool for homogeneous waveguides, too. The nuances of using periodic boundary conditions for homogeneous waveguides that do not exhibit periodicity are discussed. Comparisons between this method and SAFE are made for selected homogeneous waveguide applications. The COMSOL Multiphysics software is used for the results shown, but any standard finite element software that can implement Floquet periodicity (user-defined or built-in) should suffice. Finally, we identify a number of complex waveguides for which dispersion curves can be found with relative ease by using the periodicity inherent to the Floquet boundary conditions.

  18. Weathering Patterns of Ignitable Liquids with the Advanced Distillation Curve Method.

    Science.gov (United States)

    Bruno, Thomas J; Allen, Samuel

    2013-01-01

    One can take advantage of the striking similarity of ignitable liquid vaporization (or weathering) patterns and the separation observed during distillation to predict the composition of residual compounds in fire debris. This is done with the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. Analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, Karl Fischer coulombic titrimetry, refractometry, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. We have applied this method on product streams such as finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this paper, we present results on a variety of ignitable liquids that are not commodity fuels, chosen from the Ignitable Liquids Reference Collection (ILRC). These measurements are assembled into a preliminary database. From this selection, we discuss the significance and forensic application of the temperature data grid and the composition explicit data channel of the ADC.

  20. Sketching Curves for Normal Distributions--Geometric Connections

    Science.gov (United States)

    Bosse, Michael J.

    2006-01-01

    Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…

  1. Matter fields in curved space-time

    International Nuclear Information System (INIS)

    Viet, Nguyen Ai; Wali, Kameshwar C.

    2000-01-01

    We study the geometry of a two-sheeted space-time within the framework of non-commutative geometry. As a prelude to the Standard Model in curved space-time, we present a model of a left- and a right-chiral field living on the two-sheeted space-time and construct the action functionals that describe their interactions.

  2. Beyond the SCS curve number: A new stochastic spatial runoff approach

    Science.gov (United States)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
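
    For reference, the classic SCS-CN event runoff equation that the proposed theory generalizes fits in a few lines (English units, with the conventional initial abstraction ratio of 0.2):

    ```python
    def scs_cn_runoff(P, CN, ia_ratio=0.2):
        """Storm runoff depth (inches) from the classic SCS curve number
        equation; P is event rainfall in inches, CN in (0, 100]."""
        S = 1000.0 / CN - 10.0    # potential maximum retention, inches
        Ia = ia_ratio * S         # initial abstraction
        if P <= Ia:
            return 0.0            # all rainfall abstracted, no runoff
        return (P - Ia) ** 2 / (P - Ia + S)
    ```

    The single threshold `Ia` and the single lumped parameter `CN` are exactly the low parametric complexity, and the lack of any spatial distribution, that the abstract sets out to extend.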

  3. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    Science.gov (United States)

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates, and is applicable to most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply it to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real-life examples. PMID:23074360
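
    The comparison that the separation curve formalizes can be illustrated empirically: estimate each marker's ROC curve on a common grid of false positive rates and ask over which range one dominates. A toy sketch with simulated markers, using plain empirical ROC curves rather than the authors' semiparametric estimator:

    ```python
    import numpy as np

    def empirical_roc(neg, pos, fpr_grid):
        """Empirical ROC: true positive rate at each false positive rate on
        `fpr_grid`, using the decision threshold implied by the negatives'
        sample quantiles."""
        thresholds = np.quantile(neg, 1.0 - fpr_grid)
        return np.array([(pos >= t).mean() for t in thresholds])

    rng = np.random.default_rng(0)
    neg = rng.normal(0.0, 1.0, 500)      # scores of non-diseased subjects
    pos_a = rng.normal(1.5, 1.0, 500)    # marker A: strong separation
    pos_b = rng.normal(0.5, 1.0, 500)    # marker B: weaker separation
    fpr = np.linspace(0.05, 0.95, 19)
    roc_a = empirical_roc(neg, pos_a, fpr)
    roc_b = empirical_roc(neg, pos_b, fpr)
    superior = fpr[roc_a > roc_b]        # FPR range where A beats B
    ```

    The paper's contribution is to attach valid inference to this pointwise comparison for correlated markers, which the naive approach above does not provide.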

  4. Determination of sieve grading curves using an optical device

    OpenAIRE

    PHAM, AM; DESCANTES, Yannick; DE LARRARD, François

    2011-01-01

    The grading curve of an aggregate is a fundamental characteristic for mix design that can easily be modified to adjust several mix properties. While sieve analysis remains the reference method to determine this curve, optical devices are developing, allowing easier and faster assessment of aggregate grading. Unfortunately, optical grading results significantly differ from sieve grading curves. As a consequence, getting full acceptance of these new methods requires building bridges between the...

  5. Sex- and Site-Specific Normative Data Curves for HR-pQCT.

    Science.gov (United States)

    Burt, Lauren A; Liang, Zhiying; Sajobi, Tolulope T; Hanley, David A; Boyd, Steven K

    2016-11-01

    The purpose of this study was to develop age-, site-, and sex-specific centile curves for common high-resolution peripheral quantitative computed tomography (HR-pQCT) and finite-element (FE) parameters for males and females older than 16 years. Participants (n = 866) from the Calgary cohort of the Canadian Multicentre Osteoporosis Study (CaMos) between the ages of 16 and 98 years were included in this study. Participants' nondominant radius and left tibia were scanned using HR-pQCT. Standard and automated segmentation methods were performed and FE analysis estimated apparent bone strength. Centile curves were generated for males and females at the tibia and radius using the generalized additive models for location, scale, and shape (GAMLSS) package in R. After GAMLSS analysis, age-, sex-, and site-specific centiles (10th, 25th, 50th, 75th, 90th) for total bone mineral density and trabecular number as well as failure load have been calculated. Clinicians and researchers can use these reference curves as a tool to assess bone health and changes in bone quality. © 2016 American Society for Bone and Mineral Research.
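
    The LMS parameterization used by GAMLSS-style centile curves converts a measurement into a z-score, and back, with closed-form expressions. The sketch below shows both directions; the parameter values are illustrative placeholders, not CaMos estimates:

    ```python
    import numpy as np

    def lms_zscore(y, L, M, S):
        """Z-score for measurement y given the age-, sex- and site-specific
        LMS parameters: Box-Cox power L, median M, coefficient of
        variation S. The L -> 0 limit is the log-normal case."""
        if abs(L) < 1e-8:
            return np.log(y / M) / S
        return ((y / M) ** L - 1.0) / (L * S)

    def lms_centile(z, L, M, S):
        """Inverse transform: the measurement at a given z-score
        (e.g. z = 1.645 for the 95th centile)."""
        if abs(L) < 1e-8:
            return M * np.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)

    # Illustrative parameters for one age/sex/site cell (made up):
    L_, M_, S_ = 1.2, 310.0, 0.12
    z = lms_zscore(340.0, L_, M_, S_)
    y95 = lms_centile(1.645, L_, M_, S_)   # 95th centile measurement
    ```

    Evaluating `lms_centile` over a grid of ages, with L, M and S smoothed as functions of age, is exactly how the published centile curves are drawn.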

  6. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised, and single-curve models are extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  7. Experimental verification of different parameters influencing the fatigue S/N-curve

    International Nuclear Information System (INIS)

    Roos, E.; Maile, K.; Herter, K.-H.; Schuler, X.

    2005-01-01

    For the construction, design and operation of nuclear components the appropriate technical codes and standards provide detailed stress analysis procedures, material data and a design philosophy which guarantees reliable behavior throughout the specified lifetime. Especially for cyclic stress evaluation, the different codes and standards provide different fatigue analysis procedures to be performed considering the various (specified or measured) loading histories, which are of mechanical and/or thermal origin, and the geometric complexities of the components. In order to fully understand the background of the fatigue analysis included in the codes and standards, as well as of the fatigue design curves used as a limiting criterion (to determine the fatigue life usage factor), it is important for design engineers to understand the history and the methodologies involved, so that they can obtain reliable results. The design rules according to the technical codes and standards provide for explicit consideration of cyclic operation, using design fatigue curves of allowable alternating loads (allowable stress or strain amplitudes) vs. number of loading cycles (S/N-curves) and specific rules for assessing the cumulative fatigue damage (cumulative fatigue life usage factor) caused by different specified or monitored load cycles. The influence of different factors like welds, environment, surface finish, temperature, mean stress and size must be taken into consideration. In the paper, parameters influencing the S/N-curves used within a fatigue analysis, such as the type of material, the surface finish, the temperature, the difference between unwelded and welded areas, the strain rate and the influence of notches, are verified on the basis of experimental results obtained by specimen testing in the LCF regime at high strain amplitudes. Thus safety margins relevant for the assessment of fatigue life depending on the different influencing parameters are
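
    The cumulative fatigue life usage factor mentioned above is typically evaluated with Miner's linear damage rule. A minimal sketch with an invented Basquin-type design curve (the constants are illustrative, not taken from any code or standard):

    ```python
    def usage_factor(blocks, design_curve):
        """Cumulative fatigue usage factor by Miner's linear damage rule:
        U = sum(n_i / N_i), where n_i cycles are applied at a stress
        amplitude whose allowable cycle count N_i comes from the design
        S/N curve. `design_curve` maps stress amplitude to allowable
        cycles; the design requires U <= 1."""
        return sum(n / design_curve(s_a) for s_a, n in blocks)

    # Illustrative Basquin-type design curve N = C * S^-m (constants invented):
    basquin = lambda s_a: 1e12 * s_a ** -3.0

    # (stress amplitude in MPa, applied cycles) for each monitored load block
    blocks = [(200.0, 1.0e4), (100.0, 1.0e5)]
    U = usage_factor(blocks, basquin)
    ```

    The influence factors the abstract lists (welds, environment, surface finish, mean stress, size) enter in practice by shifting or knocking down the design curve before this summation is performed.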

  8. Is It Time to Change Our Reference Curve for Femur Length? Using the Z-Score to Select the Best Chart in a Chinese Population

    Science.gov (United States)

    Yang, Huixia; Wei, Yumei; Su, Rina; Wang, Chen; Meng, Wenying; Wang, Yongqing; Shang, Lixin; Cai, Zhenyu; Ji, Liping; Wang, Yunfeng; Sun, Ying; Liu, Jiaxiu; Wei, Li; Sun, Yufeng; Zhang, Xueying; Luo, Tianxia; Chen, Haixia; Yu, Lijun

    2016-01-01

    Objective To use Z-scores to compare different charts of femur length (FL) applied to our population with the aim of identifying the most appropriate chart. Methods A retrospective study was conducted in Beijing. Fifteen hospitals in Beijing were chosen as clusters using a systemic cluster sampling method, in which 15,194 pregnant women delivered from June 20th to November 30th, 2013. The measurements of FL in the second and third trimester were recorded, as well as the last measurement obtained before delivery. Based on the inclusion and exclusion criteria, we identified FL measurements from 19996 ultrasounds from 7194 patients between 11 and 42 weeks gestation. The FL data were then transformed into Z-scores that were calculated using three series of reference equations obtained from three reports: Leung TN, Pang MW et al (2008); Chitty LS, Altman DG et al (1994); and Papageorghiou AT et al (2014). Each Z-score distribution was presented as the mean and standard deviation (SD). Skewness and kurtosis and were compared with the standard normal distribution using the Kolmogorov-Smirnov test. The histogram of their distributions was superimposed on the non-skewed standard normal curve (mean = 0, SD = 1) to provide a direct visual impression. Finally, the sensitivity and specificity of each reference chart for identifying fetuses 95th percentile (based on the observed distribution of Z-scores) were calculated. The Youden index was also listed. A scatter diagram with the 5th, 50th, and 95th percentile curves calculated from and superimposed on each reference chart was presented to provide a visual impression. Results The three Z-score distribution curves appeared to be normal, but none of them matched the expected standard normal distribution. In our study, the Papageorghiou reference curve provided the best results, with a sensitivity of 100% for identifying fetuses with measurements 95th percentile, and specificities of 99.9% and 81.5%, respectively. 
Conclusions It
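
    The Z-score transformation at the heart of the comparison above is simple to sketch. In this illustration the reference mean/SD functions are hypothetical placeholders, not the Leung, Chitty, or Papageorghiou equations:

    ```python
    def fl_z_score(fl_mm, ga_weeks, mean_fn, sd_fn):
        """Z = (observed FL - chart mean at that gestational age) / chart SD."""
        return (fl_mm - mean_fn(ga_weeks)) / sd_fn(ga_weeks)

    # Hypothetical placeholder chart: mean FL grows ~2.5 mm/week from week 10,
    # with a constant SD of 3 mm. Real charts use fitted reference equations.
    mean_fl = lambda ga: 2.5 * (ga - 10.0)
    sd_fl = lambda ga: 3.0

    z = fl_z_score(52.0, 30.0, mean_fl, sd_fl)  # FL of 52 mm at 30 weeks
    ```

    If a chart fits the population, the resulting Z-scores should be approximately standard normal (mean 0, SD 1), which is what the Kolmogorov-Smirnov comparison in the study checks.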

  9. Percentile Curves for Anthropometric Measures for Canadian Children and Youth

    Science.gov (United States)

    Kuhle, Stefan; Maguire, Bryan; Ata, Nicole; Hamilton, David

    2015-01-01

    Body mass index (BMI) is commonly used to assess a child's weight status but it does not provide information about the distribution of body fat. Since the disease risks associated with obesity are related to the amount and distribution of body fat, measures that assess visceral or subcutaneous fat, such as waist circumference (WC), waist-to-height ratio (WHtR), or skinfolds thickness may be more suitable. The objective of this study was to develop percentile curves for BMI, WC, WHtR, and sum of 5 skinfolds (SF5) in a representative sample of Canadian children and youth. The analysis used data from 4115 children and adolescents between 6 and 19 years of age that participated in the Canadian Health Measures Survey Cycles 1 (2007/2009) and 2 (2009/2011). BMI, WC, WHtR, and SF5 were measured using standardized procedures. Age- and sex-specific centiles were calculated using the LMS method and the percentiles that intersect the adult cutpoints for BMI, WC, and WHtR at age 18 years were determined. Percentile curves for all measures showed an upward shift compared to curves from the pre-obesity epidemic era. The adult cutoffs for overweight and obesity corresponded to the 72nd and 91st percentile, respectively, for both sexes. The current study has presented for the first time percentile curves for BMI, WC, WHtR, and SF5 in a representative sample of Canadian children and youth. The percentile curves presented are meant to be descriptive rather than prescriptive as associations with cardiovascular disease markers or outcomes were not assessed. PMID:26176769
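
    The LMS method used above has a standard closed form (Cole's formulation): the L (skewness), M (median), and S (coefficient of variation) values at each age define both the Z-score of a measurement and the measurement at any centile. A minimal sketch, with illustrative parameter values rather than the Canadian reference values:

    ```python
    import math

    def lms_z(x, L, M, S):
        """Z-score of a measurement x given age/sex-specific L, M, S values."""
        if abs(L) < 1e-12:                        # L -> 0 limit is log-normal
            return math.log(x / M) / S
        return ((x / M) ** L - 1.0) / (L * S)

    def lms_centile(z, L, M, S):
        """Inverse transform: measurement at a given Z (z = 1.645 -> 95th centile)."""
        if abs(L) < 1e-12:
            return M * math.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)
    ```

    The two functions are exact inverses, and by construction a measurement equal to M maps to Z = 0 (the 50th percentile).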

  10. Evaluation of the Ross fast solution of Richards' equation in unfavourable conditions for standard finite element methods

    International Nuclear Information System (INIS)

    Crevoisier, D.; Voltz, M.; Chanzy, A.

    2009-01-01

    Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins: 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results in a more extensive set of problems, including those that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solutes transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988:3:1-15] proves in every tested scenario to be more robust numerically, and the compromise of computing time/accuracy is seen to be particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D. (authors)

  11. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    Cardoso, Vanderlei

    2002-01-01

    The present work describes several methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed to estimate the background area under the peak, either by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points; as a by-product, it was possible to determine the activity of this source. For all fittings developed in the present work the covariance matrix methodology was used, an essential procedure for giving a complete description of the partial uncertainties involved. (author)
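
    One ingredient of such a workflow can be sketched briefly: a log-log polynomial fit to peak-efficiency points, with the fit covariance matrix propagated to the uncertainty of interpolated values. The energies and efficiencies below are synthetic illustration data, not the author's measurements:

    ```python
    import numpy as np

    # Synthetic peak-efficiency points (keV, efficiency) following a power law
    # with a little pseudo-random scatter; real data would come from standards.
    energies = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])
    eff = 0.5 * energies ** -0.85
    eff *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(eff.size)

    x, y = np.log(energies), np.log(eff)
    deg = 2
    coeffs, cov = np.polyfit(x, y, deg, cov=True)   # fit + covariance matrix

    def interp_eff(e_kev):
        """Interpolated efficiency and its 1-sigma uncertainty at e_kev."""
        t = np.log(e_kev)
        basis = np.array([t ** (deg - i) for i in range(deg + 1)])
        log_eff = basis @ coeffs
        var = basis @ cov @ basis           # error propagation via covariance
        return np.exp(log_eff), np.exp(log_eff) * np.sqrt(var)
    ```

    This illustrates why the covariance matrix matters: the uncertainty of an interpolated efficiency depends on the correlations between fitted coefficients, not only on their individual variances.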

  12. Automatic processing of isotopic dilution curves obtained by precordial detection

    International Nuclear Information System (INIS)

    Verite, J.C.

    1973-01-01

    Dilution curves pose two distinct problems: that of their acquisition and that of their processing. A study devoted to the latter aspect only was presented. It was necessary to satisfy two important conditions: the treatment procedure, although applied to a single category of curves (isotopic dilution curves obtained by precordial detection), had to be as general as possible; to allow dissemination of the method the equipment used had to be relatively modest and inexpensive. A simple method, considering the curve processing as a process identification, was developed and should enable the mean heart cavity volume and certain pulmonary circulation parameters to be determined. Considerable difficulties were encountered, limiting the value of the results obtained though not condemning the method itself. The curve processing question raised the problem of their acquisition, i.e. the number of these curves and their meaning. A list of the difficulties encountered is followed by a set of possible solutions, a solution being understood to mean a curve processing combination where the overlapping between the two aspects of the problem is accounted for [fr

  13. 51Cr - erythrocyte survival curves

    International Nuclear Information System (INIS)

    Paiva Costa, J. de.

    1982-07-01

    Sixteen subjects were studied: fifteen patients in a hemolytic state, and one normal individual as a control. The aim was to obtain better techniques for the analysis of erythrocyte survival curves, according to the recommendations of the International Committee of Hematology. The radiochromium (51Cr) method was used as a tracer. A review of the international literature on the aspects relevant to this work was first made, making it possible to establish comparisons and clarify phenomena observed in our investigation. Several parameters were considered in this study, covering both the exponential and the linear curves. The analysis of the erythrocyte survival curves in the studied group revealed that the elution factor did not present a quantitatively homogeneous response in all cases, although the results of the analysis of these curves were established through programs run on an electronic calculator. (Author) [pt

  14. Estimating Composite Curve Number Using an Improved SCS-CN Method with Remotely Sensed Variables in Guangzhou, China

    OpenAIRE

    Fan, Fenglei; Deng, Yingbin; Hu, Xuefei; Weng, Qihao

    2013-01-01

    The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapid growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is on...
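
    The core of the SCS-CN method referenced above is a single runoff equation. A minimal sketch in the metric form; the 0.2 initial-abstraction ratio is the conventional NRCS default, which studies like this one may modify:

    ```python
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        """Direct runoff depth Q (mm) from rainfall P (mm) via the SCS-CN method.

        S is the potential maximum retention; Ia = ia_ratio * S is the initial
        abstraction. Higher curve numbers (impervious urban surfaces) give less
        retention and therefore more runoff.
        """
        s = 25400.0 / cn - 254.0          # metric form of S (mm)
        ia = ia_ratio * s
        if p_mm <= ia:
            return 0.0                    # all rainfall absorbed before runoff
        return (p_mm - ia) ** 2 / (p_mm - ia + s)
    ```

    For example, CN = 100 (fully impervious) returns all rainfall as runoff, while a 5 mm storm on CN = 70 soil produces none, since it never exceeds the initial abstraction.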

  15. Investigation of learning and experience curves

    Energy Technology Data Exchange (ETDEWEB)

    Krawiec, F.; Thornton, J.; Edesess, M.

    1980-04-01

    The applicability of learning and experience curves for predicting future costs of solar technologies is assessed, and the major test case is the production economics of heliostats. Alternative methods for estimating cost reductions in systems manufacture are discussed, and procedures for using learning and experience curves to predict costs are outlined. Because adequate production data often do not exist, production histories of analogous products/processes are analyzed and learning and aggregated cost curves for these surrogates estimated. If the surrogate learning curves apply, they can be used to estimate solar technology costs. The steps involved in generating these cost estimates are given. Second-generation glass-steel and inflated-bubble heliostat design concepts, developed by MDAC and GE, respectively, are described; a costing scenario for 25,000 units/yr is detailed; surrogates for cost analysis are chosen; learning and aggregate cost curves are estimated; and aggregate cost curves for the GE and MDAC designs are estimated. However, an approach that combines a neoclassical production function with a learning-by-doing hypothesis is needed to yield a cost relation compatible with the historical learning curve and the traditional cost function of economic theory.
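
    The log-linear learning-curve model underlying analyses like this one is compact enough to sketch. The 85% rate below is purely illustrative, not a figure from the report:

    ```python
    import math

    def unit_cost(first_unit_cost, n, learning_rate):
        """Cost of the n-th unit under a log-linear learning curve.

        Each doubling of cumulative output multiplies unit cost by
        learning_rate (e.g. 0.85 for an '85% curve'), so the slope
        exponent b = log2(learning_rate) is negative for rates < 1.
        """
        b = math.log(learning_rate, 2)
        return first_unit_cost * n ** b
    ```

    Summing `unit_cost` over a production run (e.g. the 25,000 units/yr scenario) gives the aggregate cost curve used to compare design concepts.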

  16. Standardization of waste acceptance test methods by the Materials Characterization Center

    International Nuclear Information System (INIS)

    Slate, S.C.

    1985-01-01

    This paper describes the role of standardized test methods in demonstrating the acceptability of high-level waste (HLW) forms for disposal. Key waste acceptance tests are standardized by the Materials Characterization Center (MCC), which the US Department of Energy (DOE) has established as the central agency in the United States for the standardization of test methods for nuclear waste materials. This paper describes the basic three-step process that is used to show that waste is acceptable for disposal and discusses how standardized tests are used in this process. Several of the key test methods and their areas of application are described. Finally, future plans are discussed for using standardized tests to show waste acceptance. 9 refs., 1 tab

  17. Polar representation of centrifugal pump homologous curves

    International Nuclear Information System (INIS)

    Veloso, Marcelo Antonio; Mattos, Joao Roberto Loureiro de

    2008-01-01

    Essential for any mathematical model designed to simulate flow transient events caused by pump operations is the pump performance data. The performance of a centrifugal pump is characterized by four basic parameters: the rotational speed, the volumetric flow rate, the dynamic head, and the hydraulic torque. Any one of these quantities can be expressed as a function of any two others. The curves showing the relationships between these four variables are called the pump characteristic curves, also referred to as four-quadrant curves. The characteristic curves are empirically developed by the pump manufacturer and uniquely describe head and torque as functions of volumetric flow rate and rotation speed. Because of comprising a large amount of points, the four-quadrant configuration is not suitable for computational purposes. However, it can be converted to a simpler form by the development of the homologous curves, in which dynamic head and hydraulic torque ratios are expressed as functions of volumetric flow and rotation speed ratios. The numerical use of the complete set of homologous curves requires specification of sixteen partial curves, being eight for the dynamic head and eight for the hydraulic torque. As a consequence, the handling of homologous curves is still somewhat complicated. In solving flow transient problems that require the pump characteristic data for all the operation zones, the polar form appears as the simplest way to represent the homologous curves. In the polar method, the complete characteristics of a pump can be described by only two closed curves, one for the dynamic head and other for the hydraulic torque, both in function of a single angular coordinate defined adequately in terms of the quotient between volumetric flow ratio and rotation speed ratio. 
The usefulness and advantages of this alternative method are demonstrated through a practical example in which the homologous curves for a pump of the type used in the main coolant loops of a
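
    The polar mapping described can be sketched with the common Suter-type convention; the variable names below are the conventional ones, assumed here rather than taken from the paper itself:

    ```python
    import math

    def suter_point(a, v, h, b):
        """Map one pump operating point to polar homologous coordinates.

        a = alpha (speed ratio), v (flow ratio), h (head ratio), b (torque
        ratio). Returns (theta, WH, WB): a single angular coordinate defined
        from the flow/speed quotient, plus normalized head and torque, so the
        whole four-quadrant characteristic collapses to two closed curves.
        """
        denom = a * a + v * v
        if denom == 0.0:
            raise ValueError("speed and flow ratios cannot both be zero")
        theta = math.atan2(v, a)
        return theta, h / denom, b / denom
    ```

    At the rated point (all ratios equal to 1) this gives theta = pi/4 and WH = WB = 0.5, and every operating zone of the pump corresponds to a range of theta on the same two curves.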

  18. Accuracy of total oxidant measurement as obtained by the phenolphthalin method

    Energy Technology Data Exchange (ETDEWEB)

    Louw, C W; Halliday, E C

    1963-01-01

    The phenolphthalin method of Haagen-Smit and Brunelle (1958) was chosen for a preliminary survey of total oxidant level in Pretoria air, because of its sensitivity. Difficulty, however, was encountered in obtaining reliable standard curves. Some improvement was obtained when conducting all operations except photometer measurements at the temperature of melting ice. It was also found that when the sequence of adding the reagents was changed, so as to simulate conditions during actual sampling, a standard curve approximating a straight line and differing considerably from that of McCabe (1953) was obtained. It follows that values of total oxidant obtained by any experimenter will depend to a certain extent upon the method of standard curve preparation used, and when comparisons are made between measurements by experimenters in different towns or countries this factor should be taken into consideration. The accuracy (95% confidence) obtained by the phenolphthalin method, using the mean of three successive samples, was shown to be in the region of 30% for very low amounts of oxidant.

  19. EXTRACTING PERIODIC TRANSIT SIGNALS FROM NOISY LIGHT CURVES USING FOURIER SERIES

    Energy Technology Data Exchange (ETDEWEB)

    Samsing, Johan [Department of Astrophysical Sciences, Princeton University, Peyton Hall, 4 Ivy Lane, Princeton, NJ 08544 (United States)

    2015-07-01

    We present a simple and powerful method for extracting transit signals associated with a known transiting planet from noisy light curves. Assuming the orbital period of the planet is known and the signal is periodic, we illustrate that systematic noise can be removed in Fourier space at all frequencies by only using data within a fixed time frame with a width equal to an integer number of orbital periods. This results in a reconstruction of the full transit signal, which on average is unbiased despite no prior knowledge of either the noise or the transit signal itself being used in the analysis. The method therefore has clear advantages over standard phase folding, which normally requires external input such as nearby stars or noise models for removing systematic components. In addition, we can extract the full orbital transit signal (360°) simultaneously, and Kepler-like data can be analyzed in just a few seconds. We illustrate the performance of our method by applying it to a dataset composed of light curves from Kepler with a fake injected signal emulating a planet with rings. For extracting periodic transit signals, our presented method is in general the optimal and least biased estimator and could therefore lead the way toward the first detections of, e.g., planet rings and exo-trojan asteroids.
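
    The key property such a method exploits can be illustrated directly: for a light curve sampled uniformly over exactly K orbital periods, a strictly periodic signal occupies only DFT bins at integer multiples of K, while noise spreads over all bins. A hedged sketch of that principle, not the paper's exact pipeline:

    ```python
    import numpy as np

    def extract_periodic(flux, k_periods):
        """Keep only the Fourier components that are harmonics of the orbital
        frequency, given a window spanning exactly k_periods orbital periods."""
        n = flux.size
        assert n % k_periods == 0, "window must span an integer number of periods"
        spec = np.fft.rfft(flux)
        keep = np.zeros_like(spec)
        keep[::k_periods] = spec[::k_periods]   # harmonics of orbital frequency
        return np.fft.irfft(keep, n)
    ```

    With no noise the reconstruction is exact; with noise added, the retained bins still contain the full periodic transit shape plus only the small fraction of noise power that happens to land on those frequencies.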

  20. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of method parameters (for example, number of hidden neurons or binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  1. 7 CFR 43.105 - Operating characteristics (OC) curves.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Operating characteristics (OC) curves. 43.105 Section 43.105 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE... CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.105 Operating characteristics (OC...

  2. Mixed gamma emitting gas standard and method

    International Nuclear Information System (INIS)

    McFarland, R.C.; McFarland, P.A.

    1986-01-01

    The invention in one aspect pertains to a method of calibrating gamma spectroscopy systems for gas counting in a variety of counting containers comprising withdrawing a precision volume of a mixed gamma-emitting gas standard from a precision volume vial and delivering the withdrawn precision volume of the gas standard to the interior of a gas counting container. Another aspect of the invention pertains to a mixed gamma-emitting gas standard, comprising a precision spherical vial of predetermined volume, multiple mixed emitting gas components enclosed within the vial, and means for withdrawing from the vial a predetermined amount of the components wherein the gas standard is used to calibrate a gamma spectrometer system for gas counting over a wide energy range without the use of additional standards. A third aspect comprehends a gamma spectrometer calibration system for gas counting, comprising a precision volume spherical glass vial for receiving mixed multiisotope gas components, and two tubular arms extending from the vial. A ground glass stopcock is positioned on each arm, and the outer end of one arm is provided with a rubber septum port

  3. A simple method for determining the critical point of the soil water retention curve

    DEFF Research Database (Denmark)

    Chen, Chong; Hu, Kelin; Ren, Tusheng

    2017-01-01

    The transition point between capillary water and adsorbed water, which is the critical point Pc [defined by the critical matric potential (ψc) and the critical water content (θc)] of the soil water retention curve (SWRC), demarcates the energy and water content region where flow is dominated......, a fixed tangent line method was developed to estimate Pc as an alternative to the commonly used flexible tangent line method. The relationships between Pc, and particle-size distribution and specific surface area (SSA) were analyzed. For 27 soils with various textures, the mean RMSE of water content from...... the fixed tangent line method was 0.007 g g–1, which was slightly better than that of the flexible tangent line method. With increasing clay content or SSA, ψc was more negative initially but became less negative at clay contents above ∼30%. Increasing the silt contents resulted in more negative ψc values...

  4. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    Energy Technology Data Exchange (ETDEWEB)

    Dias, Mafalda; Seery, David [Astronomy Centre, University of Sussex, Brighton BN1 9QH (United Kingdom); Frazer, Jonathan, E-mail: m.dias@sussex.ac.uk, E-mail: j.frazer@sussex.ac.uk, E-mail: a.liddle@sussex.ac.uk [Department of Theoretical Physics, University of the Basque Country, UPV/EHU, 48040 Bilbao (Spain)

    2015-12-01

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

  5. Computing observables in curved multifield models of inflation—A guide (with code) to the transport method

    International Nuclear Information System (INIS)

    Dias, Mafalda; Seery, David; Frazer, Jonathan

    2015-01-01

    We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development

  6. Cosmological applications of algebraic quantum field theory in curved spacetimes

    CERN Document Server

    Hack, Thomas-Paul

    2016-01-01

    This book provides a largely self-contained and broadly accessible exposition on two cosmological applications of algebraic quantum field theory (QFT) in curved spacetime: a fundamental analysis of the cosmological evolution according to the Standard Model of Cosmology; and a fundamental study of the perturbations in inflation. The two central sections of the book dealing with these applications are preceded by sections providing a pedagogical introduction to the subject. Introductory material on the construction of linear QFTs on general curved spacetimes with and without gauge symmetry in the algebraic approach, physically meaningful quantum states on general curved spacetimes, and the backreaction of quantum fields in curved spacetimes via the semiclassical Einstein equation is also given. The reader should have a basic understanding of General Relativity and QFT on Minkowski spacetime, but no background in QFT on curved spacetimes or the algebraic approach to QFT is required.

  7. Shape optimization of self-avoiding curves

    Science.gov (United States)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.

  8. Fixing the Phillips curve: The case of downward nominal wage rigidity in the US

    OpenAIRE

    Reitz, Stefan; Slopek, Ulf D.

    2012-01-01

    Whereas microeconomic studies point to pronounced downward rigidity of nominal wages in the US economy, the standard Phillips curve neglects such a feature. Using a stochastic frontier model we find macroeconomic evidence of a strictly nonnegative error in an otherwise standard Phillips curve in post-war data on the US nonfinancial corporate sector. This error depends on growth in the profit ratio, output, and trend productivity, which should all determine the flexibility of wage adjustments....

  9. Analysis of characteristic performance curves in radiodiagnosis by an observer

    International Nuclear Information System (INIS)

    Kossovoj, A.L.

    1988-01-01

    Methods of constructing performance characteristic curves (PX-curves) in roentgenology, and their qualitative and quantitative evaluation, are described. The application of PX-curves to the analysis of scintigraphic and sonographic images is presented

  10. Production of Curved Precast Concrete Elements for Shell Structures and Free-form Architecture using the Flexible Mould Method

    NARCIS (Netherlands)

    Schipper, H.R.; Grünewald, S.; Eigenraam, P.; Raghunath, P.; Kok, M.A.D.

    2014-01-01

    Free-form buildings tend to be expensive. By optimizing the production process, economical and well-performing precast concrete structures can be manufactured. In this paper, a method is presented that allows producing highly accurate double curved-elements without the need for milling two expensive

  11. Next-Generation Intensity-Duration-Frequency Curves for Hydrologic Design in Snow-Dominated Environments

    Science.gov (United States)

    Yan, Hongxiang; Sun, Ning; Wigmosta, Mark; Skaggs, Richard; Hou, Zhangshuan; Leung, Ruby

    2018-02-01

    There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial overestimation/underestimation of design basis events and subsequent overdesign/underdesign of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to underdesign, many with significant underestimation of 100 year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for underdesign were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for overdesign at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.

  12. [Modified Delphi method in the constitution of school sanitation standard].

    Science.gov (United States)

    Yin, Xunqiang; Liang, Ying; Tan, Hongzhuan; Gong, Wenjie; Deng, Jing; Luo, Jiayou; Di, Xiaokang; Wu, Yue

    2012-11-01

    To constitute a school sanitation standard using a modified Delphi method, and to explore the feasibility and advantages of the Delphi method in the constitution of school sanitation standards. Two rounds of expert consultations were adopted in this study. The data were analyzed with SPSS 15.0 to screen indices for the school sanitation standard. Thirty-two experts completed the 2 rounds of consultations. The average length of expert service was (24.69 ± 8.53) years. The authority coefficient was 0.729 ± 0.172. The expert positive coefficient was 94.12% (32/34) in the first round and 100% (32/32) in the second round. The harmonious coefficients of importance, feasibility and rationality in the second round were 0.493 and statistically significant. The modified Delphi method is a rapid, effective and feasible method in this field.

  13. A new method of testing pile using dynamic P-S-curve made by amplitude of wave train

    Science.gov (United States)

    Hu, Yi-Li; Xu, Jun; Duan, Yong-Kong; Xu, Zhao-Yong; Yang, Run-Hai; Zhao, Jin-Ming

    2004-11-01

    A new method of determining the vertical bearing capacity of a single pile with high strain is discussed in this paper. A heavy hammer or a small rocket is used to strike the pile top, and detectors are used to record vibration graphs. A higher-degree expression of strain (deformation force) is introduced. It is shown theoretically that the displacement, velocity and acceleration cannot be obtained by simply integrating the acceleration and differentiating the velocity when large displacements and high strain exist, namely when the pile slips as a whole relative to the soil body; that is, there are non-linear relations between them. Accordingly, the force P and displacement S are calculated from the amplitude of the wave train, and the (dynamic) P-S curve is drawn to determine the yield points. Further, a method of determining the vertical bearing capacity of a single pile is discussed. A static load test is used to check the result of the dynamic test and to determine the correlation constants of the dynamic-static P(Q)-S curve.

  14. The role of scintimammography and mammography in recurrent breast cancer. Evaluation of their accuracy using ROC curves

    International Nuclear Information System (INIS)

    Kolasinska, A.D.; Buscombe, J.R.; Cwikla, J.B.; Hilson, A.J.W.; Holloway, B.; Parbhoo, S.P.; Davidson, T.

    2001-01-01

    With the increasing demand for breast conservation surgery, the probability of recurrent tumour within the breast increases. Traditionally x-ray mammography (XMM) was used to assess the post-surgical breast, but post-surgery and radiotherapy changes have reduced the accuracy of this method. Scintimammography (SMM) has also been proposed and appears to be more accurate than XMM. A total of 101 women received Tc99m MIBI SMM and 88 had a subsequent XMM. There were 142 sites suspected of loco-regional recurrence of breast cancer. During the study the patients did not receive any treatment other than hormone therapy. SMM was performed by the standard Diggles-Khalkhali method and XMM was performed using the standard two views. Analysis was performed and the results of each type of imaging compared with histology. In the ROC curve analysis 5 points of certainty were used: from 1 being definitely normal to 5 being definitely cancer; grades 4 and 5 were counted as positive. The overall sensitivity of SMM was 84% and specificity was 85%, compared with a sensitivity of 52% for XMM and a specificity of 84%. Analysis of the areas under the ROC curves showed a statistically significant difference between SMM and XMM (p < 0.05). Combining the two tests did not significantly improve the diagnostic accuracy of sequential imaging over SMM. ROC curve analysis demonstrates that scintimammography should be the primary investigation in suspected local recurrence following breast conservation surgery. (author)
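
    The grading scheme above (1 = definitely normal to 5 = definitely cancer, grades 4-5 positive) maps directly onto a small ROC computation. A sketch with made-up illustration scores, not the study's data:

    ```python
    def sens_spec(scores, labels, threshold=4):
        """Sensitivity and specificity when scores >= threshold count positive.
        labels: 1 = disease present (histology), 0 = absent."""
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
        return tp / (tp + fn), tn / (tn + fp)

    def auc(scores, labels):
        """Area under the ROC curve: probability a random positive case
        outscores a random negative case (ties count half)."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))
    ```

    The rank-based AUC makes it possible to compare two modalities (here SMM vs XMM) across all five certainty thresholds at once, rather than at a single cut-off.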

  15. The hidden X-ray breaks in afterglow light curves

    International Nuclear Information System (INIS)

    Curran, P. A.; Wijers, R. A. M. J.; Horst, A. J. van der; Starling, R. L. C.

    2008-01-01

    Gamma-Ray Burst (GRB) afterglow observations in the Swift era show a perceived lack of achromatic jet breaks compared to the BeppoSAX, or pre-Swift, era. Specifically, relatively few breaks consistent with jet breaks are observed in the X-ray light curves of these bursts. If these breaks are truly missing, it has serious consequences for the interpretation of GRB jet collimation and energy requirements, and for the use of GRBs as standard candles. Here we address the issue of X-ray breaks which are possibly 'hidden', causing the light curves to be misinterpreted as single power laws. We show how a number of precedents, including GRB 990510 and GRB 060206, exist for such hidden breaks and how, even with the well sampled light curves of the Swift era, these breaks may go unidentified. We do so by synthesising X-ray light curves and finding general trends via Monte Carlo analysis. Furthermore, in light of these simulations, we discuss how best to identify achromatic breaks in afterglow light curves via multi-wavelength analysis.
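
    A break is most easily "hidden" when the post-break decay is only slightly steeper than the pre-break one. A minimal sketch of the light-curve model involved (a sharp break with illustrative decay indices; the paper's Monte Carlo analysis uses more realistic synthetic curves):

```python
def broken_power_law(t, t_break, alpha1, alpha2, norm=1.0):
    """Afterglow flux decaying as t^-alpha1 before the break and
    t^-alpha2 after; continuity at t_break fixes the normalisation."""
    if t <= t_break:
        return norm * t ** (-alpha1)
    return norm * t_break ** (alpha2 - alpha1) * t ** (-alpha2)

# A shallow steepening (alpha 1.0 -> 1.3) bends the log-log slope so
# little that noisy data can be fit acceptably by a single power law.
for t in (10.0, 100.0, 1000.0):
    print(t, broken_power_law(t, 100.0, 1.0, 1.3))
```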

  16. Theoretical study of a melting curve for tin

    International Nuclear Information System (INIS)

    Feng, Xi; Ling-Cang, Cai

    2009-01-01

    The melting curve of Sn has been calculated using the dislocation-mediated melting model with the 'zone-linking method'. The results are in good agreement with the experimental data. According to our calculation, the melting temperature of γ-Sn at zero pressure is about 436 K obtained by the extrapolation of the method from the triple point of Sn. The results show that this calculation method is better than other theoretical methods for predicting the melting curve of polymorphic material Sn. (condensed matter: structure, thermal and mechanical properties)

  17. Design of a rotary dielectric elastomer actuator using a topology optimization method based on pairs of curves

    Science.gov (United States)

    Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin

    2018-05-01

    Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the field of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending, linear and rotational actuators, have been designed using experience-based methods. This paper proposes a new method for the design of DE actuators: a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.

  18. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    Science.gov (United States)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but it is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed that incorporates the Green-Ampt (GA) infiltration model and aims to distribute in time the information provided by the SCS-CN method. The main concept of the mixed procedure is to use the initial abstraction and the total runoff volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is applied here to a real case study, and a sensitivity analysis of the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall-excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.
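
    The two quantities CN4GA borrows from the SCS-CN method are the initial abstraction Ia and the event runoff depth Q. A minimal sketch of that underlying computation (standard SCS-CN relations in mm; the CN value and storm depth below are illustrative, not from the case study):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Event runoff depth (mm) from the SCS-CN method.
    S is the potential maximum retention, Ia the initial abstraction."""
    s = 25400.0 / cn - 254.0          # retention (mm) from the curve number
    ia = ia_ratio * s                 # conventional Ia = 0.2 S
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. an 81 mm storm on a CN = 85 catchment
print(round(scs_cn_runoff(81.0, 85), 1))   # → 44.4
```

    Roughly speaking, CN4GA then tunes the Green-Ampt conductivity so that the infiltration simulated through the event reproduces the SCS-CN loss, P minus Ia minus Q.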

  19. Mass estimates from optical-light curves for binary X-ray sources

    International Nuclear Information System (INIS)

    Avni, Y.

    1978-01-01

    The small-amplitude variations with orbital phase of the optical light from X-ray binaries are caused by the changing geometrical aspect of the primary as seen by a fixed observer. The shape and amplitude of the light curve depend on the stellar masses and on the orbital elements. The light curve can, therefore, be used to determine, or set limits on, the parameters of the binary system. A self-consistent procedure for the calculation of the light curve can be formulated if the primary is uniformly rotating at an angular velocity equal to the angular velocity of its orbital revolution in a circular orbit, and if the primary is in hydrostatic and radiative equilibrium in the co-rotating frame. When the primary is further approximated as centrally condensed, the above set of assumptions is called the standard picture. The standard picture is described, its validity discussed and its application to various systems reviewed. (C.F.)

  20. Observational evidence of dust evolution in galactic extinction curves

    Energy Technology Data Exchange (ETDEWEB)

    Cecchi-Pestellini, Cesare [INAF-Osservatorio Astronomico di Palermo, P.zza Parlamento 1, I-90134 Palermo (Italy); Casu, Silvia; Mulas, Giacomo [INAF-Osservatorio Astronomico di Cagliari, Via della Scienza, I-09047 Selargius (Italy); Zonca, Alberto, E-mail: cecchi-pestellini@astropa.unipa.it, E-mail: silvia@oa-cagliari.inaf.it, E-mail: gmulas@oa-cagliari.inaf.it, E-mail: azonca@oa-cagliari.inaf.it [Dipartimento di Fisica, Università di Cagliari, Strada Prov.le Monserrato-Sestu Km 0.700, I-09042 Monserrato (Italy)

    2014-04-10

    Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.

  1. Daylight calculations using constant luminance curves

    Energy Technology Data Exchange (ETDEWEB)

    Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda

    2005-02-01

    This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed at the Atmospheric Sciences Research Center of the State University of New York (ASRC) by Richard Perez et al. Working with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis to establish conclusions concerning topics related to energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with the method that uses the daylight factor. (author)

  2. The development of a curved beam element model applied to finite elements method

    International Nuclear Information System (INIS)

    Bento Filho, A.

    1980-01-01

    A procedure for evaluating the stiffness matrix of a thick curved beam element is developed by means of the minimum potential energy principle, applied to finite elements. The displacement field is prescribed through polynomial expansions, and the interpolation model is determined by comparing results obtained with a sample of different expansions. As a limiting case of the curved beam, three cases of straight beams with different dimensional ratios are analysed employing the proposed approach. Finally, an interpolation model is proposed and applied to a curved beam with large curvature. Displacements and internal stresses are determined and the results are compared with those found in the literature. (Author) [pt
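
    For reference, the straight-beam limiting case against which such a curved element can be checked has a well-known closed form. A sketch of the standard 4-DOF Euler-Bernoulli bending stiffness matrix (slender-beam theory only; the paper's thick-beam element also carries shear terms not shown here):

```python
def straight_beam_stiffness(E, I, L):
    """Bending stiffness matrix of a straight Euler-Bernoulli beam
    element with DOFs (v1, theta1, v2, theta2): transverse deflection
    and rotation at each end node."""
    c = E * I / L ** 3
    return [[ 12 * c,       6 * c * L,   -12 * c,       6 * c * L],
            [  6 * c * L,   4 * c * L**2, -6 * c * L,   2 * c * L**2],
            [-12 * c,      -6 * c * L,    12 * c,      -6 * c * L],
            [  6 * c * L,   2 * c * L**2, -6 * c * L,   4 * c * L**2]]
```

    As in any displacement-based formulation derived from the minimum potential energy principle, the matrix is symmetric.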

  3. Absolute method of measuring magnetic susceptibility

    Science.gov (United States)

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
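
    Since the susceptibility depends on the area under the displacement-versus-distance curve, the data reduction amounts to numerically integrating the recorded curve. A minimal sketch (trapezoidal rule; the constant `k_inst` is a hypothetical stand-in for the instrument calibration factor the paper derives):

```python
def area_under_curve(distances, displacements):
    """Trapezoidal estimate of the area under sample displacement
    versus magnet-to-sample distance."""
    area = 0.0
    for i in range(1, len(distances)):
        dx = distances[i] - distances[i - 1]
        area += 0.5 * (displacements[i] + displacements[i - 1]) * dx
    return area

def susceptibility(distances, displacements, k_inst, mass):
    """Mass susceptibility as a (hypothetical) instrument constant
    times the measured area per unit sample mass."""
    return k_inst * area_under_curve(distances, displacements) / mass
```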

  4. Statistical methods for evaluating the attainment of cleanup standards

    Energy Technology Data Exchange (ETDEWEB)

    Gilbert, R.O.; Simpson, J.C.

    1992-12-01

    This document is the third volume in a series of volumes sponsored by the US Environmental Protection Agency (EPA), Statistical Policy Branch, that provide statistical methods for evaluating the attainment of cleanup standards at Superfund sites. Volume 1 (USEPA 1989a) provides sampling designs and tests for evaluating attainment of risk-based standards for soils and solid media. Volume 2 (USEPA 1992) provides designs and tests for evaluating attainment of risk-based standards for groundwater. The purpose of this third volume is to provide statistical procedures for designing sampling programs and conducting statistical tests to determine whether pollution parameters in remediated soils and solid media at Superfund sites attain site-specific reference-based standards. This document is written for individuals who may not have extensive training or experience with statistical methods. The intended audience includes EPA regional remedial project managers, Superfund-site potentially responsible parties, state environmental protection agencies, and contractors for these groups.

  5. Fabricating defensible reference standards for the NDA lab

    Energy Technology Data Exchange (ETDEWEB)

    Ceo, R.N.; May, P.K. [Oak Ridge Y-12 Plant, TN (United States)

    1997-11-01

    Nondestructive analysis (NDA) is performed at the Oak Ridge Y-12 Plant in support of the enriched uranium operations. Process materials are analyzed using gamma ray- and neutron-based instruments including segmented gamma scanners, solution assay systems, and an active well coincidence counter. Process wastes are also discarded based on results of these measurements. Good analytical practice, as well as applicable regulations, mandates that these analytical methods be calibrated using reference materials traceable to the national standards base. Reference standards for NDA instruments are not commercially available owing to the large quantities of special nuclear materials involved. Instead, representative materials are selected from each process stream, then thoroughly characterized by methods that are traceable to the national standards base. This paper discusses the process materials to be analyzed, reference materials selected for calibrating each NDA instrument, and details of their characterization and fabrication into working calibration standards. Example calibration curves are also presented. 4 figs.

  6. Fabricating defensible reference standards for the NDA lab

    International Nuclear Information System (INIS)

    Ceo, R.N.; May, P.K.

    1997-01-01

    Nondestructive analysis (NDA) is performed at the Oak Ridge Y-12 Plant in support of the enriched uranium operations. Process materials are analyzed using gamma ray- and neutron-based instruments including segmented gamma scanners, solution assay systems, and an active well coincidence counter. Process wastes are also discarded based on results of these measurements. Good analytical practice, as well as applicable regulations, mandates that these analytical methods be calibrated using reference materials traceable to the national standards base. Reference standards for NDA instruments are not commercially available owing to the large quantities of special nuclear materials involved. Instead, representative materials are selected from each process stream, then thoroughly characterized by methods that are traceable to the national standards base. This paper discusses the process materials to be analyzed, reference materials selected for calibrating each NDA instrument, and details of their characterization and fabrication into working calibration standards. Example calibration curves are also presented. 4 figs

  7. Standard test method for determination of surface lubrication on flexible webs

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This test method has been used since 1988 as an ANSI/ISO standard test for determination of lubrication on processed photographic films. Its purpose was to determine the presence of process-surviving lubricants on photographic films. It is the purpose of this test method to expand its applicability to other flexible webs that may need lubrication for suitable performance. This test measures the breakaway (static) coefficient of friction of a metal rider on the web by the inclined plane method. The objective of the test is to determine whether a web surface has a lubricant present or not. It is not intended to assign a friction coefficient to a material. It is not intended to rank lubricants. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish ...

  8. Next-Generation Intensity-Duration-Frequency Curves for Hydrologic Design in Snow-Dominated Environments

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Hongxiang [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland Washington United States; Sun, Ning [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland Washington United States; Wigmosta, Mark [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland Washington United States; Distinguished Faculty Fellow, Department of Civil and Environmental Engineering, University of Washington, Seattle Washington United States; Skaggs, Richard [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland Washington United States; Hou, Zhangshuan [Energy and Environment Directorate, Pacific Northwest National Laboratory, Richland Washington United States; Leung, Ruby [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland Washington United States

    2018-02-01

    There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial over-/under-estimation of design basis events and subsequent over-/under-design of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to under-design, many with significant under-estimation of 100-year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for under-design were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for over-design at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.
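
    An IDF (or NG-IDF) point for a given duration is a return level fitted to a series of annual maxima; the difference between the two curve families lies only in whether that series is precipitation or total water reaching the land surface (rain plus snowmelt). A sketch of one common fitting choice, a Gumbel distribution by the method of moments (the study's actual distribution and fitting procedure may differ):

```python
import math

def gumbel_return_level(annual_maxima, t_years):
    """T-year return level from a Gumbel fit by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi    # scale parameter
    mu = mean - 0.5772 * beta                # location (Euler-Mascheroni)
    p = 1.0 - 1.0 / t_years                  # non-exceedance probability
    return mu - beta * math.log(-math.log(p))
```

    Under- or over-design then corresponds to the precipitation-based series giving a systematically lower or higher 100-year level than the water-reaching-the-surface series at the same station.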

  9. Reactor Pressure Vessel P-T Limit Curve Round Robin

    Energy Technology Data Exchange (ETDEWEB)

    Jang, C.H.; Moon, H.R.; Jeong, I.S. [Korea Electric Power Research Institute, Taejon (Korea)

    2002-07-01

    This report summarizes the analysis results for P-T limit curve construction that were submitted to the round-robin analysis. The purpose of the round robin is to compare the procedures and methods used by various organizations to construct the P-T limit curve that prevents brittle fracture of the reactor pressure vessel of nuclear power plants. Each participant used its own approach to construct the P-T limit curve and submitted the results. By analyzing the results, a reference procedure for the P-T limit curve could be established. This report includes a comparison of the procedures and methods used by the participants, and a sensitivity study of the key parameters. (author) 23 refs, 88 figs, 17 tabs.

  10. W-curve alignments for HIV-1 genomic comparisons.

    Directory of Open Access Journals (Sweden)

    Douglas J Cork

    2010-06-01

    Full Text Available The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison

  11. W-curve alignments for HIV-1 genomic comparisons.

    Science.gov (United States)

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A treebuilding method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of
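
    The W-curve construction itself is specified in the papers above; the sketch below is only a schematic of the general idea, with an assumed base-to-corner mapping (a chaos-game-style walk) and a plain per-point Euclidean distance standing in for the authors' Cylindrical Coordinate distance and gap-analysis methods:

```python
# Illustrative sketch only: each base is assigned a corner of a square,
# the walk moves halfway toward that corner, and z advances one step
# per base. This mapping is an assumption, not the published W-curve.
CORNERS = {'A': (1, 1), 'C': (-1, 1), 'G': (-1, -1), 'T': (1, -1)}

def w_curve(seq):
    """Single-pass construction of a 3-D curve from a DNA sequence."""
    x, y, points = 0.0, 0.0, []
    for z, base in enumerate(seq, start=1):
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        points.append((x, y, z))
    return points

def curve_distance(seq1, seq2):
    """Mean Euclidean distance between corresponding curve points,
    compared over the shared length."""
    p1, p2 = w_curve(seq1), w_curve(seq2)
    m = min(len(p1), len(p2))
    total = sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                for a, b in zip(p1[:m], p2[:m]))
    return total / m
```

    Pairwise distances computed this way can be collected into the distance matrix that a neighbor-joining tree builder consumes, which is the role the W-curve plays relative to Clustal in the record above.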

  12. On the calculation of complete dissociation curves of closed-shell pseudo-onedimensional systems via the complete active space method of increments

    Energy Technology Data Exchange (ETDEWEB)

    Fertitta, E.; Paulus, B. [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Barcza, G.; Legeza, Ö. [Strongly Correlated Systems “Lendület” Research Group, Wigner Research Centre for Physics, P.O. Box 49, Budapest (Hungary)

    2015-09-21

    The method of increments (MoI) has been employed using the complete active space formalism in order to calculate the dissociation curve of beryllium ring-shaped clusters Be{sub n} of different sizes. Benchmarks obtained through different quantum chemical methods including the ab initio density matrix renormalization group were used to verify the validity of the MoI truncation which showed a reliable behavior for the whole dissociation curve. Moreover we investigated the size dependence of the correlation energy at different interatomic distances in order to extrapolate the values for the periodic chain and to discuss the transition from a metal-like to an insulator-like behavior of the wave function through quantum chemical considerations.

  13. Standard test method for dynamic tear testing of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1983-01-01

    1.1 This test method covers the dynamic tear (DT) test using specimens that are 3/16 in. to 5/8 in. (5 mm to 16 mm) inclusive in thickness. 1.2 This test method is applicable to materials with a minimum thickness of 3/16 in. (5 mm). 1.3 The pressed-knife procedure described for sharpening the notch tip generally limits this test method to materials with a hardness level less than 36 HRC. Note 1—The designation 36 HRC is a Rockwell hardness number of 36 on Rockwell C scale as defined in Test Methods E 18. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  14. ECM using Edwards curves

    NARCIS (Netherlands)

    Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C.P.

    2013-01-01

    This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the
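
    EECM-MPFQ itself is a highly optimized implementation; the sketch below is only a minimal stage-1 ECM over Edwards curves x² + y² = 1 + d·x²y² to show the idea. The affine arithmetic, toy bounds, and parameter choices here are assumptions for illustration, not EECM-MPFQ's design:

```python
import math
import random

class FactorFound(Exception):
    def __init__(self, g):
        self.g = g

def edwards_add(P, Q, d, n):
    """Unified addition on x^2 + y^2 = 1 + d*x^2*y^2 over Z/nZ;
    raises if a denominator shares a factor with n."""
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 % n * x2 % n * y1 % n * y2 % n
    den_x, den_y = (1 + t) % n, (1 - t) % n
    g = math.gcd(den_x * den_y % n, n)
    if g > 1:
        raise FactorFound(g)
    x3 = (x1 * y2 + x2 * y1) * pow(den_x, -1, n) % n
    y3 = (y1 * y2 - x1 * x2) * pow(den_y, -1, n) % n
    return (x3, y3)

def scalar_mul(k, P, d, n):
    """Right-to-left double-and-add; (0, 1) is the neutral element."""
    R = (0, 1)
    while k:
        if k & 1:
            R = edwards_add(R, P, d, n)
        P = edwards_add(P, P, d, n)
        k >>= 1
    return R

def stage1_exponent(b1):
    """Product of all maximal prime powers not exceeding b1."""
    k = 1
    for p in range(2, b1 + 1):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):  # p prime
            pe = p
            while pe * p <= b1:
                pe *= p
            k *= pe
    return k

def ecm_edwards(n, b1=100, max_curves=2000, seed=1):
    """Stage-1 ECM: if the point's order mod a prime p divides k, then
    [k]P = (0, 1) mod p, so gcd of the x-coordinate with n reveals p."""
    rng = random.Random(seed)
    k = stage1_exponent(b1)
    for _ in range(max_curves):
        x, y = rng.randrange(2, n), rng.randrange(2, n)
        g = math.gcd(x * y, n)
        if 1 < g < n:
            return g
        if g == n:
            continue
        # choose d so that the random point (x, y) lies on the curve
        d = (x * x + y * y - 1) * pow(x * x * y * y, -1, n) % n
        try:
            X, _ = scalar_mul(k, (x, y), d, n)
            g = math.gcd(X, n)
        except FactorFound as e:
            g = e.g
        if 1 < g < n:
            return g
    return None
```

    With Edwards coordinates a factor typically shows up as gcd(x, n) after the stage-1 multiplication, since the neutral element is (0, 1); the gcd guard inside the addition is kept as a secondary catch.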

  15. Stokes-Darcy coupling for periodically curved interfaces

    DEFF Research Database (Denmark)

    Dobberschütz, Sören

    2014-01-01

    We investigate the boundary condition between a free fluid and a porous medium, where the interface between the two is given as a periodically curved structure. Using a coordinate transformation, we can employ methods of periodic homogenisation to derive effective boundary conditions for the tran...... be interpreted as a generalised law of Beavers and Joseph for curved interfaces....

  16. Estimating Composite Curve Number Using an Improved SCS-CN Method with Remotely Sensed Variables in Guangzhou, China

    Directory of Open Access Journals (Sweden)

    Qihao Weng

    2013-03-01

    Full Text Available The rainfall and runoff relationship becomes an intriguing issue as urbanization continues to evolve worldwide. In this paper, we developed a simulation model based on the soil conservation service curve number (SCS-CN) method to analyze the rainfall-runoff relationship in Guangzhou, a rapidly growing metropolitan area in southern China. The SCS-CN method was initially developed by the Natural Resources Conservation Service (NRCS) of the United States Department of Agriculture (USDA), and is one of the most enduring methods for estimating direct runoff volume in ungauged catchments. In this model, the curve number (CN) is a key variable which is usually obtained from the look-up table of TR-55. Due to the limitations of TR-55 in characterizing complex urban environments and in classifying land use/cover types, the SCS-CN model cannot provide more detailed runoff information. Thus, this paper develops a method to calculate CN by using remote sensing variables, including vegetation, impervious surface, and soil (V-I-S). The specific objectives of this paper are: (1) to extract the V-I-S fraction images using Linear Spectral Mixture Analysis; (2) to obtain composite CN by incorporating vegetation types, soil types, and V-I-S fraction images; and (3) to simulate direct runoff under scenarios with precipitation of 57 mm (occurring once every five years on average) and 81 mm (once every ten years). Our experiment shows that the proposed method is easy to use and can derive composite CN effectively.
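
    Once the V-I-S fractions are known for a pixel, a composite CN can be formed as the fraction-weighted mean of the component curve numbers. A minimal sketch (the per-cover CN values are illustrative placeholders, not the paper's lookup values, which also depend on vegetation and soil type):

```python
def composite_cn(f_veg, f_imp, f_soil, cn_veg=60, cn_imp=98, cn_soil=80):
    """Fraction-weighted composite curve number for one pixel, given
    vegetation, impervious-surface, and soil fractions."""
    total = f_veg + f_imp + f_soil
    return (f_veg * cn_veg + f_imp * cn_imp + f_soil * cn_soil) / total
```

    The composite CN then feeds the usual SCS-CN runoff equation for design storms such as the 57 mm and 81 mm scenarios above.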

  17. Study of the separate exposure method for bootstrap sensitometry on X-ray cine film

    International Nuclear Information System (INIS)

    Matsuda, Eiji; Sanada, Taizo; Hitomi, Go; Kakuba, Koki; Kangai, Yoshiharu; Ishii, Koushi

    1997-01-01

    We developed a new method for bootstrap sensitometry that obtains the characteristic curve over a wide density range with a smaller number of aluminum steps than the conventional bootstrap method. In this method, the density-density curve is obtained from standard and multiplied exposures to the aluminum step wedge and used for bootstrap manipulation. The curve is acquired from two separate regions, e.g., the lower and higher photographic density regions, which are then added together. In this study, we evaluated the usefulness of this new cinefluorography method in comparison with N.D. filter sensitometry. The shapes of the characteristic curve and the gradient curve obtained with the new method were highly similar to those obtained with N.D. filter sensitometry. Also, the average gradient obtained with the new bootstrap sensitometry method was not significantly different from that obtained by the N.D. filter method. The study revealed that the reliability of the characteristic curve was improved by increasing the number of measured values used to calculate the density-density curve. This new method was useful for obtaining a characteristic curve with a sufficient density range, and the results suggested that it could be applied to specific systems to which the conventional bootstrap method is not applicable. (author)

  18. An Advanced Encryption Standard Powered Mutual Authentication Protocol Based on Elliptic Curve Cryptography for RFID, Proven on WISP

    Directory of Open Access Journals (Sweden)

    Alaauldin Ibrahim

    2017-01-01

    Full Text Available Information in patients’ medical histories is subject to various security and privacy concerns. Meanwhile, any modification or error in a patient’s medical data may cause serious or even fatal harm. To protect and transfer this valuable and sensitive information in a secure manner, radio-frequency identification (RFID technology has been widely adopted in healthcare systems and is being deployed in many hospitals. In this paper, we propose a mutual authentication protocol for RFID tags based on elliptic curve cryptography and advanced encryption standard. Unlike existing authentication protocols, which only send the tag ID securely, the proposed protocol could also send the valuable data stored in the tag in an encrypted pattern. The proposed protocol is not simply a theoretical construct; it has been coded and tested on an experimental RFID tag. The proposed scheme achieves mutual authentication in just two steps and satisfies all the essential security requirements of RFID-based healthcare systems.

  19. Development of standard testing methods for nuclear-waste forms

    International Nuclear Information System (INIS)

    Mendel, J.E.; Nelson, R.D.

    1981-11-01

    Standard test methods for waste package component development and design, safety analyses, and licensing are being developed for the Nuclear Waste Materials Handbook. This paper mainly describes the testing methods for obtaining waste form materials data.

  20. Standard methods for analysis of phosphorus-32

    International Nuclear Information System (INIS)

    Anon.

    1975-01-01

    Methods are described for the determination of the radiochemical purity and the absolute disintegration rate of 32 P radioisotope preparations. The 32 P activity is determined by β counting, and other low-energy β radioactive contaminants are determined by aluminum-absorption curve data. Any γ-radioactive contaminants are determined by γ counting. Routine chemical testing is used to establish the chemical characteristics. The presence or absence of heavy metals is established by spot tests; free acid is determined by use of a pH meter; total solids are determined gravimetrically by evaporation and ignition at a temperature sufficient to evaporate the mineral acids, HCl and HNO 3 ; and nonvolatile matter, defined as that material which does not evaporate or ignite at a temperature sufficient to convert C to CO or CO 2 , is determined gravimetrically after such ignition

  1. Evaluation of the H-point standard additions method (HPSAM) and the generalized H-point standard additions method (GHPSAM) for the UV-analysis of two-component mixtures.

    Science.gov (United States)

    Hund, E; Massart, D L; Smeyers-Verbeke, J

    1999-10-01

    The H-point standard additions method (HPSAM) and two versions of the generalized H-point standard additions method (GHPSAM) are evaluated for the UV-analysis of two-component mixtures. Synthetic mixtures of anhydrous caffeine and phenazone as well as of atovaquone and proguanil hydrochloride were used. Furthermore, the method was applied to pharmaceutical formulations that contain these compounds as active drug substances. This paper shows both the difficulties that are related to the methods and the conditions by which acceptable results can be obtained.

  2. New configuration factors for curved surfaces

    International Nuclear Information System (INIS)

    Cabeza-Lainez, Jose M.; Pulido-Arcas, Jesus A.

    2013-01-01

    Curved surfaces have not been thoroughly considered in radiative transfer analysis, mainly due to the difficulties arising in the integration process and perhaps because of the researchers' lack of spatial vision. It is a fact, especially for architectural lighting, that when concave geometries appear inside a curved space, they are mostly avoided. In this way, a vast repertoire of significant forms is neglected and energy waste is evident. Starting from the properties of volumes enclosed by the minimum number of surfaces, the authors formulate, with little calculus, new simple laws that enable them to discover a set of configuration factors for caps and various segments of the sphere. The procedure is subsequently extended to previously unimagined surfaces such as the paraboloid, the ellipsoid or the cone. Appropriate combination of the said forms with right truncated cones produces several complex volumes, often used in architectural and engineering creations, whose radiative performance could not be accurately predicted for decades. To complete the research, a new method for determining interreflections in curved volumes is also presented. Radiative transfer simulation benefits from these findings, as the simplicity of the results has led the authors to create innovative software that is more efficient for design and evaluation and applicable to emerging fields like LED lighting. -- Highlights: ► Friendly revision of fundamentals of radiative transfer. ► New configuration factors for curved surfaces obtained without calculus. ► New method for interreflections in curved geometries. ► Enhanced simulation algorithms. ► Fast comparison of radiative performances of surfaces
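    Closed-form configuration factors like the ones derived in this record can be checked numerically. As an illustrative sketch (not the authors' method), the classical factor from a differential planar element to a parallel, coaxial disk of radius r at height h, F = r^2 / (r^2 + h^2), is verified below by cosine-weighted Monte Carlo ray sampling:

```python
import numpy as np

def mc_view_factor_disk(r, h, n=200_000, seed=0):
    """Monte Carlo configuration factor from a differential element to a
    parallel, coaxial disk of radius r at height h above the element."""
    rng = np.random.default_rng(seed)
    u1 = rng.random(n)
    # Cosine-weighted hemisphere sampling; the azimuth is irrelevant by symmetry.
    sin_t = np.sqrt(u1)
    cos_t = np.sqrt(1.0 - u1)
    # Radius at which each ray crosses the disk plane z = h.
    rad = h * sin_t / cos_t
    return float(np.mean(rad <= r))

r, h = 1.0, 1.0
analytic = r**2 / (r**2 + h**2)      # classical closed form: 0.5 here
estimate = mc_view_factor_disk(r, h)
```

    With 200,000 samples the Monte Carlo estimate agrees with the closed form to about three decimal places.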

  3. Can anthropometry measure gender discrimination? An analysis using WHO standards to assess the growth of Bangladeshi children.

    Science.gov (United States)

    Moestue, Helen

    2009-08-01

    To examine the potential of anthropometry as a tool to measure gender discrimination, with particular attention to the WHO growth standards. Surveillance data collected from 1990 to 1999 were analysed. Height-for-age Z-scores were calculated using three norms: the WHO standards, the 1978 National Center for Health Statistics (NCHS) reference and the 1990 British growth reference (UK90). Bangladesh. Boys and girls aged 6-59 months (n 504 358). The three sets of growth curves provided conflicting pictures of the relative growth of girls and boys by age and over time. Conclusions on sex differences in growth depended also on the method used to analyse the curves, be it according to the shape or the relative position of the sex-specific curves. The shapes of the WHO-generated curves uniquely implied that Bangladeshi girls faltered faster or caught up slower than boys throughout their pre-school years, a finding consistent with the literature. In contrast, analysis of the relative position of the curves suggested that girls had higher WHO Z-scores than boys below 24 months of age. Further research is needed to help establish whether and how the WHO international standards can measure gender discrimination in practice, which continues to be a serious problem in many parts of the world.
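    The comparison in this record rests on the standard height-for-age Z-score, Z = (observed - median) / SD, computed against sex- and age-specific reference values. A minimal sketch (the reference numbers below are hypothetical placeholders, not actual WHO standard values):

```python
def height_for_age_z(height_cm, median_cm, sd_cm):
    """Z-score of an observed height against a reference median and SD."""
    return (height_cm - median_cm) / sd_cm

# A child measuring 82.0 cm against a hypothetical reference median of
# 85.0 cm with SD 3.0 cm scores Z = -1.0.
z = height_for_age_z(82.0, 85.0, 3.0)
```

    In practice the median and SD are looked up per sex and age in months from the chosen reference (WHO, NCHS or UK90), which is exactly why the three norms can yield conflicting pictures for the same children.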

  4. Standard test methods for rockwell hardness of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...

  5. Standard test methods for rockwell hardness of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...

  6. Application of dissociation curve analysis to radiation hybrid panel marker scoring: generation of a map of river buffalo (B. bubalis) chromosome 20

    Directory of Open Access Journals (Sweden)

    Schäffer Alejandro A

    2008-11-01

    Background: Fluorescence of dyes bound to double-stranded PCR products has been utilized extensively in various real-time quantitative PCR applications, including post-amplification dissociation curve analysis, or differentiation of amplicon length or sequence composition. Despite the current era of whole-genome sequencing, mapping tools such as radiation hybrid DNA panels remain useful aids for sequence assembly, focused resequencing efforts, and for building physical maps of species that have not yet been sequenced. For placement of specific, individual genes or markers on a map, low-throughput methods remain commonplace. Typically, PCR amplification of DNA from each panel cell line is followed by gel electrophoresis and scoring of each clone for the presence or absence of PCR product. To improve sensitivity and efficiency of radiation hybrid panel analysis in comparison to gel-based methods, we adapted fluorescence-based real-time PCR and dissociation curve analysis for use as a novel scoring method. Results: As proof of principle for this dissociation curve method, we generated new maps of river buffalo (Bubalus bubalis) chromosome 20 by both dissociation curve analysis and conventional marker scoring. We also obtained sequence data to augment dissociation curve results. Few genes have been previously mapped to buffalo chromosome 20, and sequence detail is limited, so 65 markers were screened from the orthologous chromosome of domestic cattle. Thirty bovine markers (46%) were suitable as cross-species markers for dissociation curve analysis in the buffalo radiation hybrid panel under a standard protocol, compared to 25 markers suitable for conventional typing. Computational analysis placed 27 markers on a chromosome map generated by the new method, while the gel-based approach produced only 20 mapped markers. Among 19 markers common to both maps, the marker order on the map was maintained perfectly. Conclusion: Dissociation curve

  7. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
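    The mixture idea can be sketched as follows: take a weighted average of a parametric fit and a nonparametric smoother, with the weight favouring the better-fitting curve. The weighting rule here (inverse mean squared residuals), the straight-line parametric model and the Nadaraya-Watson smoother are illustrative choices, not the estimator or adaptive scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dose = np.linspace(0.0, 10.0, 50)
resp = 1.0 / (1.0 + np.exp(-(dose - 5.0))) + rng.normal(0.0, 0.05, dose.size)

# Parametric estimate: a straight line, deliberately misspecified here.
f_par = np.polyval(np.polyfit(dose, resp, 1), dose)

# Nonparametric estimate: Nadaraya-Watson kernel smoother with bandwidth h.
h = 0.8
w = np.exp(-0.5 * ((dose[:, None] - dose[None, :]) / h) ** 2)
f_np = (w @ resp) / w.sum(axis=1)

# Mixture: weight each estimate by the inverse of its mean squared residual,
# so the better-fitting curve dominates.
mse_par = np.mean((resp - f_par) ** 2)
mse_np = np.mean((resp - f_np) ** 2)
lam = (1.0 / mse_par) / (1.0 / mse_par + 1.0 / mse_np)
f_mix = lam * f_par + (1.0 - lam) * f_np
```

    When the parametric model is badly misspecified, as with the straight line above, lam shrinks and the mixture leans on the smoother; when the parametric model fits well, lam grows and the efficient parametric estimate dominates.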

  8. Knowledge fusion: Comparison of fuzzy curve smoothers to statistically motivated curve smoothers

    International Nuclear Information System (INIS)

    Burr, T.; Strittmatter, R.B.

    1996-03-01

    This report describes work during FY 95 that was sponsored by the Department of Energy, Office of Nonproliferation and National Security (NN) Knowledge Fusion (KF) Project. The project team selected satellite sensor data to use as the one main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features, which make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series that define a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report gives a detailed comparison of two of the forecasting methods (a fuzzy forecaster and statistically motivated curve smoothers used as forecasters). The two methods are compared on five simulated and five real data sets. One of the five real data sets is satellite sensor data. The conclusion is that the statistically motivated curve smoother is superior on simulated data of the type we studied. The statistically motivated method is also superior on most real data. In defense of the fuzzy-logic motivated methods, we point out that fuzzy-logic methods were never intended to compete with statistical methods on numeric data. Fuzzy logic was developed to handle real-world situations where real data was either not available or had to be supplemented with 'expert opinion' or some sort of linguistic information.

  9. Functional methods for arbitrary densities in curved spacetime

    International Nuclear Information System (INIS)

    Basler, M.

    1993-01-01

    This paper gives an introduction to the technique of functional differentiation and integration in curved spacetime, applied to examples from quantum field theory. Special attention is drawn to the choice of functional integral measure. Referring to a suggestion by Toms, fields are chosen as arbitrary scalar, spinorial or vectorial densities. The technique developed by Toms for a pure quadratic Lagrangian is extended to the calculation of the generating functional with external sources. Included are two examples of interacting theories, a self-interacting scalar field and a Yang-Mills theory. For these theories the complete set of Feynman graphs depending on the weight of variables is derived. (orig.)

  10. DEVELOPING A METHOD TO IDENTIFY HORIZONTAL CURVE SEGMENTS WITH HIGH CRASH OCCURRENCES USING THE HAF ALGORITHM

    Science.gov (United States)

    2018-04-01

    Crashes occur every day on Utah's highways. Curves can be particularly dangerous as they require driver focus due to potentially unseen hazards. Often, crashes occur on curves due to poor curve geometry, a lack of warning signs, or poor surface con...

  11. Development of A Standard Method for Human Reliability Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Jung, Won Dea; Kang, Dae Il; Kim, Jae Whan

    2005-12-01

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and the rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study made progress as follows: assessing the level of quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME PSA standard to ensure PSA quality, the standard HRA method was developed to meet the ASME HRA requirements at the Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were undertaken interactively to verify the usability and applicability of the standard method

  12. Development of A Standard Method for Human Reliability Analysis of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dea; Kang, Dae Il; Kim, Jae Whan

    2005-12-15

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and the rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study made progress as follows: assessing the level of quality of the HRAs in Korea and identifying their weaknesses, determining the requirements for developing a standard HRA method, and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME PSA standard to ensure PSA quality, the standard HRA method was developed to meet the ASME HRA requirements at the Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were undertaken interactively to verify the usability and applicability of the standard method.

  13. Projection-based curve clustering

    International Nuclear Information System (INIS)

    Auder, Benjamin; Fischer, Aurelie

    2012-01-01

    This paper focuses on unsupervised curve classification in the context of nuclear industry. At the Commissariat a l'Energie Atomique (CEA), Cadarache (France), the thermal-hydraulic computer code CATHARE is used to study the reliability of reactor vessels. The code inputs are physical parameters and the outputs are time evolution curves of a few other physical quantities. As the CATHARE code is quite complex and CPU time-consuming, it has to be approximated by a regression model. This regression process involves a clustering step. In the present paper, the CATHARE output curves are clustered using a k-means scheme, with a projection onto a lower dimensional space. We study the properties of the empirically optimal cluster centres found by the clustering method based on projections, compared with the 'true' ones. The choice of the projection basis is discussed, and an algorithm is implemented to select the best projection basis among a library of orthonormal bases. The approach is illustrated on a simulated example and then applied to the industrial problem. (authors)
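    The projection-then-cluster step can be sketched as follows; the cosine basis, dimension d = 5, k = 2 and the synthetic two-group curves are illustrative assumptions, not the CATHARE setting or the paper's basis-selection algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
# Two groups of synthetic "output curves" with small additive noise.
curves = np.vstack([
    np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=(20, t.size)),
    np.cos(2 * np.pi * t) + 0.1 * rng.normal(size=(20, t.size)),
])

# Project each curve onto the first d (approximately orthogonal) cosine modes.
d = 5
basis = np.array([np.cos(np.pi * k * t) for k in range(d)])
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
coords = curves @ basis.T                       # shape (40, d)

# Plain k-means with k = 2 on the low-dimensional coordinates,
# initialized deterministically from one curve of each group.
centres = coords[[0, -1]].copy()
for _ in range(20):
    dists = ((coords[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    labels = np.argmin(dists, axis=1)
    centres = np.array([coords[labels == j].mean(axis=0) for j in range(2)])
```

    Clustering in the d-dimensional coefficient space rather than on the raw 100-point curves is the point of the projection: distances become cheap and the basis filters out high-frequency noise.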

  14. Spin structures on algebraic curves and their applications in string theories

    International Nuclear Information System (INIS)

    Ferrari, F.

    1990-01-01

    The free fields on a Riemann surface carrying spin structures live on an unramified r-covering of the surface itself. When the surface is represented as an algebraic curve related to the vanishing of the Weierstrass polynomial, its r-coverings are algebraic curves as well. We construct explicitly the Weierstrass polynomial associated to the r-coverings of an algebraic curve. Using standard techniques of algebraic geometry it is then possible to solve the inverse Jacobi problem for the odd spin structures. As an application we derive the partition functions of bosonic string theories in many examples, including two general curves of genus three and four. The partition functions are explicitly expressed in terms of branch points apart from a factor which is essentially a theta constant. 53 refs., 4 figs. (Author)

  15. Projection of curves on B-spline surfaces using quadratic reparameterization

    KAUST Repository

    Yang, Yijun

    2010-09-01

    Curves on surfaces play an important role in computer aided geometric design. In this paper, we present a hyperbola approximation method based on the quadratic reparameterization of Bézier surfaces, which generates reasonable low degree curves lying completely on the surfaces by using iso-parameter curves of the reparameterized surfaces. The Hausdorff distance between the projected curve and the original curve is controlled under the user-specified distance tolerance. The projected curve is T-G1 continuous, where T is the user-specified angle tolerance. Examples are given to show the performance of our algorithm. © 2010 Elsevier Inc. All rights reserved.

  16. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions

    NARCIS (Netherlands)

    López, S.; France, J.; Odongo, N.E.; McBride, R.A.; Kebreab, E.; Alzahal, O.; McBride, B.W.; Dijkstra, J.

    2015-01-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records

  17. The advanced geometry of plane curves and their applications

    CERN Document Server

    Zwikker, C

    2005-01-01

    "Of chief interest to mathematicians, but physicists and others will be fascinated ... and intrigued by the fruitful use of non-Cartesian methods. Students ... should find the book stimulating." - British Journal of Applied Physics. This study of many important curves, their geometrical properties, and their applications features material not customarily treated in texts on synthetic or analytic Euclidean geometry. Its wide coverage, which includes both algebraic and transcendental curves, extends to unusual properties of familiar curves along with the nature of lesser known curves. Informativ

  18. Numerical Characterization of Piezoceramics Using Resonance Curves

    Science.gov (United States)

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875

  19. Numerical Characterization of Piezoceramics Using Resonance Curves

    Directory of Open Access Journals (Sweden)

    Nicolás Pérez

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.
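    The fit-model-to-measured-impedance loop can be illustrated on a much simpler forward model than FEM: the Butterworth-Van Dyke lumped equivalent circuit of a resonator (an assumed stand-in, not the finite element model of the review), with a brute-force search over one motional parameter. The component values below are arbitrary illustrative numbers:

```python
import numpy as np

def bvd_impedance(f, r, l, c, c0):
    """Impedance of a Butterworth-Van Dyke equivalent circuit: a series
    R-L-C motional branch in parallel with a static capacitance c0."""
    w = 2.0 * np.pi * f
    z_motional = r + 1j * w * l + 1.0 / (1j * w * c)
    return 1.0 / (1j * w * c0 + 1.0 / z_motional)

# "Measured" |Z| curve around a ~1 MHz resonance, generated from known values.
f = np.linspace(0.9e6, 1.1e6, 400)                     # Hz
z_meas = np.abs(bvd_impedance(f, 50.0, 0.02, 1.2665e-12, 5.0e-10))

# Iterative adjustment reduced to its crudest form: scan the motional
# capacitance, all other parameters held fixed, and keep the best match.
cs = np.linspace(1.0e-12, 1.5e-12, 101)
errs = [np.mean((np.abs(bvd_impedance(f, 50.0, 0.02, c, 5.0e-10)) - z_meas) ** 2)
        for c in cs]
c_best = cs[int(np.argmin(errs))]
```

    A real characterization replaces the scan with a proper optimizer over all material constants and the lumped circuit with the FEM forward model, but the structure (forward-simulate, compare impedance curves, update parameters) is the same.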

  20. Graphical evaluation of complexometric titration curves.

    Science.gov (United States)

    Guinon, J L

    1985-04-01

    A graphical method, based on logarithmic concentration diagrams, for construction, without any calculations, of complexometric titration curves is examined. The titration curves obtained for different kinds of unidentate, bidentate and quadridentate ligands clearly show why only chelating ligands are usually used in titrimetric analysis. The method has also been applied to two practical cases where unidentate ligands are used: (a) the complexometric determination of mercury(II) with halides and (b) the determination of cyanide with silver, which involves both a complexation and a precipitation system; for this purpose construction of the diagrams for the HgCl(2)/HgCl(+)/Hg(2+) and Ag(CN)(2)(-)/AgCN/CN(-) systems is considered in detail.

  1. Soil Conservation Service Curve Number method: How to mend a wrong soil moisture accounting procedure?

    Science.gov (United States)

    Michel, Claude; Andréassian, Vazken; Perrin, Charles

    2005-02-01

    This paper unveils major inconsistencies in the age-old and yet efficient Soil Conservation Service Curve Number (SCS-CN) procedure. Our findings are based on an analysis of the continuous soil moisture accounting procedure implied by the SCS-CN equation. It is shown that several flaws plague the original SCS-CN procedure, the most important one being a confusion between intrinsic parameter and initial condition. A change of parameterization and a more complete assessment of the initial condition lead to a renewed SCS-CN procedure, while keeping the acknowledged efficiency of the original method.
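    The SCS-CN relation at the heart of the procedure is Q = (P - Ia)^2 / (P - Ia + S) for P > Ia (else Q = 0), with S = 25400/CN - 254 in millimetres and, conventionally, Ia = 0.2 S. A minimal sketch of the original (pre-renewal) event formula:

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth (mm) for storm rainfall p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0           # potential maximum retention (mm)
    ia = ia_ratio * s                  # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# 80 mm of rain on a catchment with CN = 75 yields roughly 26.9 mm of runoff.
q = scs_cn_runoff(80.0, 75.0)
```

    The paper's point is that CN and the initial condition are entangled in this parameterization; the sketch keeps ia_ratio explicit so that the conventional 0.2 can at least be varied.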

  2. Analysis and Extension of the Percentile Method, Estimating a Noise Curve from a Single Image

    Directory of Open Access Journals (Sweden)

    Miguel Colom

    2013-12-01

    Given a white Gaussian noise signal on a sampling grid, its variance can be estimated from a small block sample. However, in natural images we observe the combination of the geometry of the scene being photographed and the added noise. In this case, estimating the standard deviation of the noise directly from block samples is not reliable, since the measured standard deviation is explained not just by the noise but also by the geometry of the image. The Percentile method tries to estimate the standard deviation of the noise from blocks of a high-passed version of the image and a small p-percentile of these standard deviations. The idea behind it is that edges and textures in a block of the image increase the observed standard deviation but never make it decrease. Therefore, a small percentile (0.5%, for example) in the list of standard deviations of the blocks is less likely to be affected by the edges and textures than a higher percentile (50%, for example). The 0.5%-percentile is empirically proven to be adequate for most natural, medical and microscopy images. The Percentile method is adapted to signal-dependent noise, which is realistic with the Poisson noise model obtained by a CCD device in a digital camera.
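    The percentile idea can be sketched in a few lines: high-pass the image, compute the standard deviation of many small blocks, and keep a low percentile of that list. The simple horizontal-difference filter, the 8x8 block size and the pure-noise test image below are illustrative assumptions, not the operator or evaluation of the paper; note also that on a geometry-free image the 0.5% percentile deliberately sits near the smallest block deviation, so it reads slightly low there:

```python
import numpy as np

def percentile_noise_std(img, block=8, p=0.5):
    """Estimate the noise standard deviation of img as the p-th percentile
    of the standard deviations of small blocks of a high-passed image."""
    # High-pass via horizontal differences; dividing by sqrt(2) keeps the
    # standard deviation of white noise unchanged.
    hp = (img[:, 1:] - img[:, :-1]) / np.sqrt(2.0)
    h, w = hp.shape
    stds = [hp[i:i + block, j:j + block].std()
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return float(np.percentile(stds, p))

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 5.0, (128, 128))    # geometry-free test image, sigma = 5
sigma_hat = percentile_noise_std(noisy)
```

    On a real photograph most blocks contain some edge or texture, which only inflates their standard deviations; the low percentile then picks out the flattest blocks, whose deviation is closest to the true noise level.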

  3. Design fatigue curve for Hastelloy-X

    International Nuclear Information System (INIS)

    Nishiguchi, Isoharu; Muto, Yasushi; Tsuji, Hirokazu

    1983-12-01

    In the design of components intended for elevated temperature service, such as the experimental Very High-Temperature gas-cooled Reactor (VHTR), it is essential to prevent fatigue failure and creep-fatigue failure. The evaluation method which uses design fatigue curves is adopted in the design rules. This report discusses several aspects of these design fatigue curves for Hastelloy-X (-XR), which is considered for use as a heat-resistant alloy in the VHTR. Examination of fatigue data gathered by a literature search, including unpublished data, showed that Brinkman's equation is suitable for the design curve of Hastelloy-X (-XR), where the total strain range Δεt is used as the independent variable and the fatigue life Nf is transformed into log(log Nf). (author)
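    The log(log Nf) transformation can be illustrated with a least-squares fit. The fatigue data below are synthetic placeholders, and the linear-in-logs functional form is an assumption for illustration, not Brinkman's actual equation or Hastelloy-X data:

```python
import numpy as np

# Synthetic (total strain range, cycles to failure) pairs, illustration only.
strain = np.array([0.004, 0.006, 0.010, 0.015, 0.020])
nf = np.array([2.0e5, 4.0e4, 8.0e3, 2.0e3, 8.0e2])

# Transform fatigue life as log(log Nf) and fit a line in log(strain range).
x = np.log10(strain)
y = np.log10(np.log10(nf))
b, a = np.polyfit(x, y, 1)          # slope b, intercept a

# Evaluate the fitted curve at an intermediate strain range of 0.008,
# undoing the double-log transform.
nf_hat = 10.0 ** (10.0 ** (a + b * np.log10(0.008)))
```

    The double-log transform compresses the enormous spread of fatigue lives so that a simple low-order fit behaves well; design curves then apply safety factors to such a best-fit curve.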

  4. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-01-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance

  5. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-11-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results is presented: The radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach tests methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance. 2 references, 2 figures

  6. Standard Test Method for Abrasive Wear Resistance of Cemented

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers the determination of abrasive wear resistance of cemented carbides. 1.2 The values stated in inch-pound units are to be regarded as the standard. The SI equivalents of inch-pound units are in parentheses and may be approximate. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  7. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    Science.gov (United States)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions so that the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetograph and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
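The curve-number step that CN4GA calibrates against can be written out directly; the following is a minimal sketch of the standard SCS-CN rainfall-excess relations (the function name, curve number, and storm depth are illustrative assumptions, not values from the study):

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Rainfall excess Q (mm) from the SCS-CN method.

    p_mm     -- storm rainfall depth (mm)
    cn       -- curve number, 0 < cn <= 100
    ia_ratio -- initial-abstraction ratio (0.2 is the classical choice)
    """
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = ia_ratio * s          # initial abstraction Ia (mm)
    if p_mm <= ia:
        return 0.0             # all rainfall is abstracted, no excess
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: an 80 mm storm on a catchment with CN = 75
q = scs_cn_runoff(80.0, 75.0)  # ~26.9 mm of rainfall excess
```

In CN4GA, the total excess from this relation would then constrain the Green-Ampt conductivity so that the infiltration model reproduces the same net rainfall volume distributed in time.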

  8. Standard Test Method for Wet Insulation Integrity Testing of Photovoltaic Arrays

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers a procedure to determine the insulation resistance of a photovoltaic (PV) array (or its component strings), that is, the electrical resistance between the array's internal electrical components and its exposed, electrically conductive, non-current-carrying parts and surfaces of the array. 1.2 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  9. ESTABLISHMENT OF THE PERMISSIBLE TRAIN SPEED ON THE CURVED TURNOUTS

    Directory of Open Access Journals (Sweden)

    O. M. Patlasov

    2016-04-01

    Purpose. Turnouts play a key role in the railway transportation process. One-sided and many-sided curved turnouts were railed over the last 20 years in difficult conditions (curved sections, yard necks). They have a number of geometric features, unlike the conventional one-sided turnouts. Today the normative documents prohibit laying such turnouts in curved track sections and only partially regulate the assessment procedure for their real condition. The question of establishing the permissible train speed within curved turnouts is still open. In this regard, the authors propose to set the train speed according to the driving comfort criterion, using the results of field measurements of ordinates from the baseline for the particular curved turnout. Methodology. The article considers the criteria by which one can set the permissible speed on the turnouts. It defines the complexity of their application, advantages and disadvantages. Findings. The work analyzes the speed distribution along the length of a real curved turnout for the forward and lateral directions. It establishes the change rate values of unbalanced accelerations for the existing maintenance norms for curved track sections, according to the difference in the adjacent bend versine, at speeds up to 160 km/h. Originality. A method for establishing the trains’ speed limit within curved turnouts was developed. It takes into account the actual geometric position in the plan of the forward and lateral turnout directions. This approach makes it possible to identify barrier places in the plan of the turnouts that limit the train speed. Practical value. The proposed method makes it possible to objectively assess and set the trains’ permissible speed on the basis of the ordinate measurement of the forward and lateral directions of the curved turnouts from the baseline, using the driving comfort criteria. The method was tested using real turnouts, which are located within the Pridneprovsk

  10. ICP curve morphology and intracranial flow-volume changes

    DEFF Research Database (Denmark)

    Unnerbäck, Mårten; Ottesen, Johnny T.; Reinstrup, Peter

    2018-01-01

    BACKGROUND: The intracranial pressure (ICP) curve with its different peaks has been extensively studied, but the exact physiological mechanisms behind its morphology are still not fully understood. Both intracranial volume change (ΔICV) and transmission of the arterial blood pressure have been proposed to shape the ICP curve. This study tested the hypothesis that the ICP curve correlates to intracranial volume changes. METHODS: Cine phase contrast magnetic resonance imaging (MRI) examinations were performed in neuro-intensive care patients with simultaneous ICP monitoring. The MRI was set...

  11. Standard test method for measurement of web/roller friction characteristics

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method covers the simulation of a roller/web transport tribosystem and the measurement of the static and kinetic coefficient of friction of the web/roller couple when sliding occurs between the two. The objective of this test method is to provide users with web/roller friction information that can be used for process control, design calculations, and for any other function where web/roller friction needs to be known. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  12. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing curved planar reformation (CPR) technique combined with optimal path finding method (MHES-CROP). The MHES segmented vessels straightened in the CPR volume was refined using adaptive gray level thresholding, where the local threshold was obtained from least-square estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9±10.2% using the MHES method to 9.9±7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly, and the agreement of the segmented volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary

  13. Environmental bias and elastic curves on surfaces

    International Nuclear Information System (INIS)

    Guven, Jemal; María Valencia, Dulce; Vázquez-Montejo, Pablo

    2014-01-01

    The behavior of an elastic curve bound to a surface will reflect the geometry of its environment. This may occur in an obvious way: the curve may deform freely along directions tangent to the surface, but not along the surface normal. However, even if the energy itself is symmetric in the curve's geodesic and normal curvatures, which control these modes, very distinct roles are played by the two. If the elastic curve binds preferentially on one side, or is itself assembled on the surface, not only would one expect the bending moduli associated with the two modes to differ, binding along specific directions, reflected in spontaneous values of these curvatures, may be favored. The shape equations describing the equilibrium states of a surface curve described by an elastic energy accommodating environmental factors will be identified by adapting the method of Lagrange multipliers to the Darboux frame associated with the curve. The forces transmitted to the surface along the surface normal will be determined. Features associated with a number of different energies, both of physical relevance and of mathematical interest, are described. The conservation laws associated with trajectories on surface geometries exhibiting continuous symmetries are also examined. (paper)

  14. Statistical benchmarking in utility regulation: Role, standards and methods

    International Nuclear Information System (INIS)

    Newton Lowry, Mark; Getachew, Lullit

    2009-01-01

    Statistical benchmarking is being used with increasing frequency around the world in utility rate regulation. We discuss how and where benchmarking is in use for this purpose and the pros and cons of regulatory benchmarking. We then discuss alternative performance standards and benchmarking methods in regulatory applications. We use these to propose guidelines for the appropriate use of benchmarking in the rate setting process. The standards, which we term the competitive market and frontier paradigms, have a bearing on method selection. These along with regulatory experience suggest that benchmarking can either be used for prudence review in regulation or to establish rates or rate setting mechanisms directly

  15. Transient finite element magnetic field calculation method in the anisotropic magnetic material based on the measured magnetization curves

    International Nuclear Information System (INIS)

    Jesenik, M.; Gorican, V.; Trlep, M.; Hamler, A.; Stumberger, B.

    2006-01-01

    A lot of magnetic materials are anisotropic. In the 3D finite element method calculation, anisotropy of the material is taken into account. Anisotropic magnetic material is described with magnetization curves for different magnetization directions. The 3D transient calculation of the rotational magnetic field in the sample of the round rotational single sheet tester with circular sample considering eddy currents is made and compared with the measurement to verify the correctness of the method and to analyze the magnetic field in the sample

  16. Standard test method for measurement of soil resistivity using the two-electrode soil box method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This test method covers the equipment and a procedure for the measurement of soil resistivity, for samples removed from the ground, for use in the control of corrosion of buried structures. 1.2 Procedures allow for this test method to be used in the field or in the laboratory. 1.3 The test method procedures are for the resistivity measurement of soil samples in the saturated condition and in the as-received condition. 1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only. Soil resistivity values are reported in ohm-centimeter. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and to determine the applicability of regulatory limitations prior to use.
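The quantity the soil-box measurement reports reduces to a one-line computation: resistivity is the measured resistance scaled by the box's cross-sectional area over the electrode spacing. A minimal sketch (the box dimensions below are hypothetical, not the ASTM-specified ones):

```python
def soil_resistivity_ohm_cm(resistance_ohm: float, area_cm2: float,
                            length_cm: float) -> float:
    """Two-electrode soil box: rho = R * A / L, reported in ohm-cm."""
    return resistance_ohm * area_cm2 / length_cm

# 50 ohm measured in a hypothetical box: 24 cm^2 cross-section, 12 cm spacing
rho = soil_resistivity_ohm_cm(50.0, 24.0, 12.0)  # 100.0 ohm-cm
```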

  17. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R 2 , while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
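The horizontal-translation idea at the core of the trigonometry approach can be sketched as follows. This is an illustrative simplification, not the MRCTools v3.0 algorithm itself, and it assumes each recession segment is a strictly decreasing series of (time, level) points whose first point is its vertex:

```python
def build_mrc(segments):
    """Master recession curve by horizontal translation: each succeeding
    segment is shifted in time so its vertex (first, highest point) lands
    on the line interpolated through the master curve built so far."""
    master = list(segments[0])
    for seg in segments[1:]:
        v_t, v_h = seg[0]                      # vertex of this segment
        for (t1, h1), (t2, h2) in zip(master, master[1:]):
            if h2 <= v_h <= h1:                # bracketing master points
                t_on = t1 + (h1 - v_h) / (h1 - h2) * (t2 - t1)
                break
        else:
            t_on = master[-1][0]               # vertex below master range
        shift = t_on - v_t                     # horizontal translation
        master.extend((t + shift, h) for t, h in seg)
        master.sort()
    return master

# Second segment starts at level 8, so it is shifted to t = 1 on the master
mrc = build_mrc([[(0, 10), (1, 8), (2, 6)], [(0, 8), (1, 6)]])
```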

  18. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    Science.gov (United States)

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and compare statistical power of the proposed tests under different trend curves data, three statistical tests were proposed. For large sample size with independent normal assumption among strata and across consecutive time points, the Z and Chi-square test statistics were developed, which are functions of outcome estimates and the standard errors at each of the study time points for the two strata. For small sample size with independent normal assumption, the F-test statistic was generated, which is a function of sample size of the two strata and estimated parameters across study period. If two trend curves are approximately parallel, the power of Z-test is consistently higher than that of both Chi-square and F-test. If two trend curves cross at low interaction, the power of Z-test is higher than or equal to the power of both Chi-square and F-test; however, at high interaction, the powers of Chi-square and F-test are higher than that of Z-test. The measurement of interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
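Under the paper's large-sample normality assumption, per-time-point z-values can be combined into an overall Z or chi-square statistic. The construction below is a hedged sketch of that idea; it is not the exact NIS test statistics, whose precise form the abstract does not give:

```python
import math

def trend_curve_tests(est1, se1, est2, se2):
    """Compare two trend curves given outcome estimates and standard
    errors at each time point, assuming independent normal estimates.

    Returns (z_pooled, chi_sq): a pooled Z statistic (~N(0,1) under H0)
    and a chi-square statistic (~chi2 with k degrees of freedom under H0).
    """
    z_t = [(a - b) / math.sqrt(sa ** 2 + sb ** 2)
           for a, sa, b, sb in zip(est1, se1, est2, se2)]
    k = len(z_t)
    z_pooled = sum(z_t) / math.sqrt(k)
    chi_sq = sum(z * z for z in z_t)
    return z_pooled, chi_sq
```

With approximately parallel curves the pointwise differences share a sign and accumulate in the pooled Z, matching the reported power advantage of the Z-test; when the curves cross, differences cancel in the sum but not in the chi-square.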

  19. Standard methods for sampling freshwater fishes: opportunities for international collaboration

    OpenAIRE

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, T. Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D.S.; Lester, Nigel P.; Porath, Mark; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by...

  20. Numerical analysis of thermoluminescence glow curves

    International Nuclear Information System (INIS)

    Gomez Ros, J. M.; Delgado, A.

    1989-01-01

    This report presents a method for the numerical analysis of complex thermoluminescence glow curves, resolving the individual glow-peak components. The method employs first-order kinetics analytical expressions and is based on a Marquardt-Levenberg minimization procedure. A simplified version of this method for thermoluminescence dosimetry (TLD) is also described, specifically developed to operate with lithium fluoride TLD-100. (Author). 36 refs

  1. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    International Nuclear Information System (INIS)

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    2016-01-01

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate. - Highlights: • We develop a segmented regression technique to estimate historical CFL learning curves. • CFL experience curves do not have a constant learning rate. • CFLs exhibited a learning rate of approximately 21% from 1990 to 1997. • The CFL learning rate significantly increased after 1998. • Increased CFL learning rate is correlated to technology deployment programs.
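The learning rates quoted above come from fitting an experience curve, price = a·x^b against cumulative production x, on log-log axes; the learning rate is then 1 − 2^b, the fractional price drop per doubling of production. A minimal sketch using synthetic data, not the study's CFL dataset:

```python
import math

def learning_rate(cum_production, unit_price):
    """Ordinary least-squares fit of log(price) = log(a) + b*log(x);
    returns the learning rate LR = 1 - 2**b."""
    lx = [math.log(x) for x in cum_production]
    ly = [math.log(p) for p in unit_price]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    return 1.0 - 2.0 ** b

# Synthetic price series generated with a known 21% learning rate
b_true = math.log2(1.0 - 0.21)
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ps = [10.0 * x ** b_true for x in xs]
lr = learning_rate(xs, ps)  # recovers ~0.21
```

A segmented experience curve, as used in the study, would repeat this fit separately on each side of a candidate break point and keep the split that best reduces residual error.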

  2. The new fabrication method of standard surface sources

    Energy Technology Data Exchange (ETDEWEB)

    Sato, Yasushi E-mail: yss.sato@aist.go.jp; Hino, Yoshio; Yamada, Takahiro; Matsumoto, Mikio

    2004-04-01

    We developed a new fabrication method for standard surface sources by using an inkjet printer with inks in which a radioactive material is mixed to print on a sheet of paper. Three printed test patterns have been prepared: (1) 100 mmx100 mm uniformity-test patterns, (2) positional-resolution test patterns with different widths and intervals of straight lines, and (3) logarithmic intensity test patterns with different radioactive intensities. The results revealed that the fabricated standard surface sources had high uniformity, high positional resolution, arbitrary shapes and a broad intensity range.

  3. Rational points on elliptic curves

    CERN Document Server

    Silverman, Joseph H

    2015-01-01

    The theory of elliptic curves involves a pleasing blend of algebra, geometry, analysis, and number theory. This book stresses this interplay as it develops the basic theory, thereby providing an opportunity for advanced undergraduates to appreciate the unity of modern mathematics. At the same time, every effort has been made to use only methods and results commonly included in the undergraduate curriculum. This accessibility, the informal writing style, and a wealth of exercises make Rational Points on Elliptic Curves an ideal introduction for students at all levels who are interested in learning about Diophantine equations and arithmetic geometry. Most concretely, an elliptic curve is the set of zeroes of a cubic polynomial in two variables. If the polynomial has rational coefficients, then one can ask for a description of those zeroes whose coordinates are either integers or rational numbers. It is this number theoretic question that is the main subject of this book. Topics covered include the geometry and ...

  4. Heat rate curve approximation for power plants without data measuring devices

    Energy Technology Data Exchange (ETDEWEB)

    Poullikkas, Andreas [Electricity Authority of Cyprus, P.O. Box 24506, 1399 Nicosia (Cyprus)

    2012-07-01

    In this work, a numerical method, based on the one-dimensional finite difference technique, is proposed for the approximation of the heat rate curve, which can be applied to power plants in which no data acquisition is available. Unlike other methods, in which three or more data points are required for the approximation of the heat rate curve, the proposed method can be applied when heat rate data are available only at the maximum and minimum operating capacities of the power plant. The method is applied to a given power system, in which we calculate the electricity cost using the CAPSE (computer aided power economics) algorithm. Comparisons are made with the least squares method. The results indicate that the proposed method gives accurate results.
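When only the minimum- and maximum-capacity operating points are known, the simplest usable approximation of the heat rate curve is a straight line between them. The sketch below illustrates that baseline; the paper's finite-difference construction itself is not reproduced, and the numbers are hypothetical:

```python
def heat_rate(load_mw, p_min, hr_min, p_max, hr_max):
    """Heat rate at a given load by linear interpolation between the two
    known operating points (p_min, hr_min) and (p_max, hr_max)."""
    frac = (load_mw - p_min) / (p_max - p_min)
    return hr_min + frac * (hr_max - hr_min)

# Hypothetical unit: 12 MJ/kWh at 50 MW minimum, 10 MJ/kWh at 100 MW maximum
hr_75 = heat_rate(75.0, 50.0, 12.0, 100.0, 10.0)  # 11.0 MJ/kWh at 75 MW
```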

  5. Recrystallization curve study of zircaloy-4 with DRX line width method

    International Nuclear Information System (INIS)

    Juarez, G; Buioli, C; Samper, R; Vizcaino, P

    2012-01-01

    X-ray diffraction peak broadening analysis is a method that allows the plastic deformation in metals to be characterized. The technique complements transmission electron microscopy (TEM) in determining dislocation densities, so together the two techniques cover a wide range in the analysis of metal deformation. The study of zirconium alloys is of usual interest in the nuclear industry, since these materials present the best combination of good mechanical properties, corrosion behavior and low neutron cross section. Two factors must be taken into account in applying the method developed for this purpose: the characteristic anisotropy of hexagonal metals and the strong texture that these alloys acquire during the manufacturing process. In order to assess the recrystallization curve of Zircaloy-4, a powder of this alloy was produced through filing. Fractions of the powder were then subjected to thermal treatments at different temperatures for the same time. Since the powder has a random crystallographic orientation, the texture effect practically disappears; this is the reason why the Williamson and Hall method may be easily used, producing good fittings and reliable values of diffraction domain size and accumulated deformation. The temperatures selected for the thermal treatments were 1000, 700, 600, 500, 420, 300 and 200 °C, each for 2 h. As a result of these annealings, powders in different recovery stages were obtained (completely recrystallized, partially recrystallized and non-recrystallized structures with different levels of stress relief). The obtained values were also compared with those of the non-annealed powder. The microstructural evolution through the annealings was followed by optical microscopy (author)
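The Williamson and Hall analysis mentioned above separates size and strain broadening by a straight-line fit of β·cosθ = Kλ/D + 4ε·sinθ. A sketch under stated assumptions (Cu Kα wavelength, shape factor K = 0.9, synthetic peak data; not the paper's measured values):

```python
import math

def williamson_hall(two_theta_deg, fwhm_rad, wavelength_nm=0.15406, K=0.9):
    """Williamson-Hall analysis: least-squares fit of
    beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).

    two_theta_deg -- peak positions 2-theta (degrees)
    fwhm_rad      -- instrument-corrected integral breadths (radians)
    Returns (D, eps): diffraction-domain size (nm) and microstrain."""
    x = [4.0 * math.sin(math.radians(tt / 2.0)) for tt in two_theta_deg]
    y = [b * math.cos(math.radians(tt / 2.0))
         for tt, b in zip(two_theta_deg, fwhm_rad)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return K * wavelength_nm / intercept, slope
```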

  6. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds. We...

  7. An ecological method to understand agricultural standardization in peach orchard ecosystems.

    Science.gov (United States)

    Wan, Nian-Feng; Zhang, Ming-Yi; Jiang, Jie-Xian; Ji, Xiang-Yun; Hao-Zhang

    2016-02-22

    While the worldwide standardization of agricultural production has been advocated and recommended, relatively little research has focused on the ecological significance of such a shift. The ecological concerns stemming from the standardization of agricultural production may require new methodology. In this study, we concentrated on how ecological two-sidedness and ecological processes affect the standardization of agricultural production, which was divided into three phases (pre-, mid- and post-production), considering both the positive and negative effects of agricultural processes. We constructed evaluation indicator systems for the pre-, mid- and post-production phases and here we presented a Standardization of Green Production Index (SGPI) based on the Full Permutation Polygon Synthetic Indicator (FPPSI) method, which we used to assess the superiority of three methods of standardized production for peaches. The values of SGPI for pre-, mid- and post-production were 0.121 (Level IV, "Excellent" standard), 0.379 (Level III, "Good" standard), and 0.769 × 10⁻² (Level IV, "Excellent" standard), respectively. Here we aimed to explore the integrated application of ecological two-sidedness and ecological process in agricultural production. Our results are of use to decision-makers and ecologists focusing on eco-agriculture and those farmers who hope to implement standardized agricultural production practices.

  8. Radioligand assays - methods and applications. IV. Uniform regression of hyperbolic and linear radioimmunoassay calibration curves

    Energy Technology Data Exchange (ETDEWEB)

    Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)

    1980-10-01

    In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is an excellent agreement between fitted and experimentally obtained calibration curves having a different degree of curvature.
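Because the hyperbolic inverse regression x = a₁y + a₀ + a₋₁/y is linear in its coefficients, it can be fitted by ordinary linear least squares on the basis (y, 1, 1/y). A sketch with synthetic data; the coefficient values are illustrative, not from the paper:

```python
import numpy as np

def fit_ria_calibration(y_bound, x_conc):
    """Least-squares fit of x = a1*y + a0 + a_minus1/y.

    y_bound -- specifically bound radioactivity at each calibrator
    x_conc  -- total antigen concentration of each calibrator
    Returns the coefficient array (a1, a0, a_minus1)."""
    y = np.asarray(y_bound, dtype=float)
    basis = np.column_stack([y, np.ones_like(y), 1.0 / y])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(x_conc, dtype=float),
                                 rcond=None)
    return coeffs

# Synthetic curve generated from known coefficients (2, 1, 3) is recovered
y = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
x = 2.0 * y + 1.0 + 3.0 / y
a1, a0, am1 = fit_ria_calibration(y, x)
```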

  9. Inferring Lévy walks from curved trajectories: A rescaling method

    Science.gov (United States)

    Tromer, R. M.; Barbosa, M. B.; Bartumeus, F.; Catalan, J.; da Luz, M. G. E.; Raposo, E. P.; Viswanathan, G. M.

    2015-08-01

    An important problem in the study of anomalous diffusion and transport concerns the proper analysis of trajectory data. The analysis and inference of Lévy walk patterns from empirical or simulated trajectories of particles in two and three-dimensional spaces (2D and 3D) is much more difficult than in 1D because path curvature is nonexistent in 1D but quite common in higher dimensions. Recently, a new method for detecting Lévy walks, which considers 1D projections of 2D or 3D trajectory data, has been proposed by Humphries et al. The key new idea is to exploit the fact that the 1D projection of a high-dimensional Lévy walk is itself a Lévy walk. Here, we ask whether or not this projection method is powerful enough to cleanly distinguish 2D Lévy walk with added curvature from a simple Markovian correlated random walk. We study the especially challenging case in which both 2D walks have exactly identical probability density functions (pdf) of step sizes as well as of turning angles between successive steps. Our approach extends the original projection method by introducing a rescaling of the projected data. Upon projection and coarse-graining, the renormalized pdf for the travel distances between successive turnings is seen to possess a fat tail when there is an underlying Lévy process. We exploit this effect to infer a Lévy walk process in the original high-dimensional curved trajectory. In contrast, no fat tail appears when a (Markovian) correlated random walk is analyzed in this way. We show that this procedure works extremely well in clearly identifying a Lévy walk even when there is noise from curvature. The present protocol may be useful in realistic contexts involving ongoing debates on the presence (or not) of Lévy walks related to animal movement on land (2D) and in air and oceans (3D).

  10. Seismic Fragility Curves of Industrial Buildings by Using Nonlinear Analysis

    Directory of Open Access Journals (Sweden)

    Mohamed Nazri Fadzli

    2017-01-01

    This study presents the steel fragility curves and performance curves of industrial buildings of different geometries. The fragility curves were obtained for different building geometries, and the performance curves were developed based on lateral load, which is affected by the geometry of the building. Three records of far-field ground motion were used for incremental dynamic analysis (IDA), and the design lateral loads for pushover analysis (POA). All designs were based on British Standard (BS 5950); however, Eurocode 8 was preferred for seismic consideration in the analysis because BS 5950 does not specify any seismic provision. The five levels of performance stated by FEMA-273, namely, operational phase, immediate occupancy, damage control, life safety, and collapse prevention (CP), were used as main guidelines for evaluating structural performance. For POA, Model 2 had the highest base shear, followed by Model 1 and Model 3, even though Model 2 has a smaller structure compared with Model 3. Meanwhile, the fragility curves showed that the probability of reaching or exceeding the CP level of Model 2 is the highest, followed by that of Models 1 and 3.

  11. Statistical assessment of the learning curves of health technologies.

    Science.gov (United States)

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers, such as study design, study size, number of operators, and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis; and generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. 
The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second

  12. Linear transform of the multi-target survival curve

    Energy Technology Data Exchange (ETDEWEB)

    Watson, J V [Cambridge Univ. (UK). Dept. of Clinical Oncology and Radiotherapeutics

    1978-07-01

    A completely linear transform of the multi-target survival curve is presented. This enables all data, including those on the shoulder region of the curve, to be analysed. The necessity to make a subjective assessment about which data points to exclude for conventional methods of analysis is, therefore, removed. The analysis has also been adapted to include a 'Pike-Alper' method of assessing dose modification factors. For the data cited this predicts compatibility with the hypothesis of a true oxygen 'dose-modification' whereas the conventional Pike-Alper analysis does not.
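For context, the multi-target survival model and one complete linearization of it can be written out numerically. This is a generic sketch assuming the target number n is known; the transform actually used by Watson is not reproduced here.

```python
import numpy as np

D0, n = 1.5, 4.0                    # illustrative multi-target parameters
dose = np.linspace(0.5, 12.0, 24)   # dose points, including the shoulder region

# Multi-target survival: S = 1 - (1 - exp(-D/D0))^n
S = 1.0 - (1.0 - np.exp(-dose / D0)) ** n

# Complete linearization: ln(1 - (1 - S)^(1/n)) = -D/D0, linear through the shoulder
y = np.log(1.0 - (1.0 - S) ** (1.0 / n))

slope, intercept = np.polyfit(dose, y, 1)
print(slope, intercept)   # slope = -1/D0 (about -0.667), intercept = 0
```

Because the transform is exact, shoulder-region points fall on the same straight line as the high-dose points, so no data need be excluded from the fit.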

  13. Parametric representation of centrifugal pump homologous curves

    International Nuclear Information System (INIS)

    Veloso, Marcelo A.; Mattos, Joao R.L. de

    2015-01-01

    Essential for any mathematical model designed to simulate flow transient events caused by pump operations is the pump performance data. The performance of a centrifugal pump is characterized by four basic quantities: the rotational speed, the volumetric flow rate, the dynamic head, and the hydraulic torque. The curves showing the relationships between these four variables are called the pump characteristic curves. The characteristic curves are empirically developed by the pump manufacturer and uniquely describe head and torque as functions of volumetric flow rate and rotation speed. Because it comprises a large number of points, this configuration is not suitable for computational purposes. However, it can be converted to a simpler form by the development of the homologous curves, in which dynamic head and hydraulic torque ratios are expressed as functions of volumetric flow and rotation speed ratios. The numerical use of the complete set of homologous curves requires specification of sixteen partial curves, eight for the dynamic head and eight for the hydraulic torque. As a consequence, the handling of homologous curves is still somewhat complicated. In solving flow transient problems that require the pump characteristic data for all the operation zones, the parametric form appears as the simplest way to deal with the homologous curves. In this approach, the complete characteristics of a pump can be described by only two closed curves, one for the dynamic head and the other for the hydraulic torque, both as functions of a single angular coordinate defined adequately in terms of the quotient between volumetric flow ratio and rotation speed ratio. The usefulness and advantages of this alternative method are demonstrated through a practical example in which the homologous curves for a pump of the type used in the main coolant loops of a pressurized water reactor (PWR) are transformed to the parametric form. (author)
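A Suter-type mapping is the usual way to build such a parametric form. The sketch below is a generic illustration (the paper's exact normalization conventions may differ): the angular coordinate is the arctangent of the flow ratio over the speed ratio, and head and torque ratios are divided by the sum of their squares.

```python
import math

def suter_point(alpha, v, h, beta):
    """Map one operating point to Suter-style parametric coordinates.

    alpha: speed ratio N/N_rated, v: flow ratio Q/Q_rated,
    h: head ratio H/H_rated, beta: torque ratio T/T_rated.
    Returns (theta, WH, WB): angular coordinate, normalized head, normalized torque.
    """
    theta = math.atan2(v, alpha)        # single angle covering all operation zones
    denom = alpha * alpha + v * v
    return theta, h / denom, beta / denom

# Rated operating point: all four ratios equal to 1
theta, wh, wb = suter_point(1.0, 1.0, 1.0, 1.0)
print(theta, wh, wb)   # pi/4, 0.5, 0.5
```

Sweeping theta over its full range then traces the two closed curves (head and torque) that replace the sixteen partial homologous curves.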

  14. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    Energy Technology Data Exchange (ETDEWEB)

    Jenkin, Thomas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Larson, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ben [U.S. Department of Energy; Spitsen, Paul [U.S. Department of Energy

    2018-03-27

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from 16 dollars/MWh to 6 dollars/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing
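A stripped-down version of the rolling-supply-curve idea, with synthetic data and illustrative numbers: fit one supply curve to a two-week window of hourly load and price, then re-price the same hours with load shifted by a hypothetical VRE addition.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = 24 * 14                                   # one two-week rolling window
load = rng.uniform(60.0, 140.0, hours)            # synthetic hourly load, GW

# Synthetic "historical" prices from a convex supply curve plus noise.
price = 5.0 + 0.002 * load ** 2 + rng.normal(0.0, 1.0, hours)

# Fit one supply curve for this window (quadratic in load).
coef = np.polyfit(load, price, 2)

# Re-apply the fitted curve with 10 GW of new VRE netted off the load
# to estimate the change in hourly wholesale price.
p_base = np.polyval(coef, load)
p_vre = np.polyval(coef, load - 10.0)
suppression = np.mean(p_base - p_vre)
print(suppression)   # average price suppression, $/MWh
```

In the full method this fit is repeated for each rolling window through the year, so the curve shape can track seasonal changes in the fleet and in fuel prices.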

  15. From Curve Fitting to Machine Learning

    CERN Document Server

    Zielesny, Achim

    2011-01-01

    The analysis of experimental data is at the heart of science from its beginnings. But it was the advent of digital computers that allowed the execution of highly non-linear and increasingly complex data analysis procedures - methods that were completely unfeasible before. Non-linear curve fitting, clustering and machine learning belong to these modern techniques which are a further step towards computational intelligence. The goal of this book is to provide an interactive and illustrative guide to these topics. It concentrates on the road from two dimensional curve fitting to multidimensional clus

  16. Curved sensors for compact high-resolution wide-field designs: prototype demonstration and optical characterization

    Science.gov (United States)

    Chambion, Bertrand; Gaschet, Christophe; Behaghel, Thibault; Vandeneynde, Aurélie; Caplet, Stéphane; Gétin, Stéphane; Henry, David; Hugot, Emmanuel; Jahn, Wilfried; Lombardo, Simona; Ferrari, Marc

    2018-02-01

    Over recent years, huge interest has grown in curved electronics, particularly for opto-electronic systems. Curved sensors help correct off-axis aberrations, such as Petzval field curvature and astigmatism, and bring significant optical and size benefits for imaging systems. In this paper, we first describe the advantages of curved sensors and the associated packaging process, applied to a 1/1.8'' format 1.3 Mpx global shutter CMOS sensor (Teledyne EV76C560) in its standard ceramic package, with spherical radii of curvature Rc = 65 mm and 55 mm. The mechanical limits of the die are discussed (Finite Element Modelling and experimental), and electro-optical performances are investigated. Then, based on the monocentric optical architecture, we propose a new design, compact and with high resolution, developed specifically for a curved image sensor, including optical optimization, tolerances, assembly and optical tests. Finally, a functional prototype is presented through a benchmark approach and compared to an existing standard optical system with the same performance and a 2.5x reduction in length. This work culminated in a functional prototype demonstration by CEA-LETI during the Photonics West 2018 conference. These experiments and optical results demonstrate the feasibility and high performance of systems with curved sensors.

  17. Statistical re-evaluation of the ASME KIC and KIR fracture toughness reference curves

    International Nuclear Information System (INIS)

    Wallin, K.

    1999-01-01

    Historically the ASME reference curves have been treated as representing absolute deterministic lower bound curves of fracture toughness. In reality, this is not the case. They represent only deterministic lower bound curves to a specific set of data, which represent a certain probability range. A recently developed statistical lower bound estimation method called the 'master curve', has been proposed as a candidate for a new lower bound reference curve concept. From a regulatory point of view, the master curve is somewhat problematic in that it does not claim to be an absolute deterministic lower bound, but corresponds to a specific theoretical failure probability that can be chosen freely based on application. In order to be able to substitute the old ASME reference curves with lower bound curves based on the master curve concept, the inherent statistical nature (and confidence level) of the ASME reference curves must be revealed. In order to estimate the true inherent level of safety, represented by the reference curves, the original database was re-evaluated with statistical methods and compared to an analysis based on the master curve concept. The analysis reveals that the 5% lower bound master curve has the same inherent degree of safety as originally intended for the K IC -reference curve. Similarly, the 1% lower bound master curve corresponds to the K IR -reference curve. (orig.)
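For reference, the master-curve equations at issue, as later standardized in ASTM E1921 (quoted here from the general literature, not from this paper): the median fracture toughness of 1T specimens follows

```latex
K_{Jc(\mathrm{med})}(T) = 30 + 70\,\exp\bigl[0.019\,(T - T_0)\bigr] \quad [\mathrm{MPa}\sqrt{\mathrm{m}}],
```

and the lower-bound curve for a cumulative failure probability p is obtained from

```latex
K_{Jc(p)}(T) = 20 + \Bigl[\ln\frac{1}{1-p}\Bigr]^{1/4}\Bigl\{\,11 + 77\,\exp\bigl[0.019\,(T - T_0)\bigr]\Bigr\},
```

so the 5% and 1% lower-bound curves discussed in the abstract correspond to p = 0.05 and p = 0.01.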

  18. Statistical re-evaluation of the ASME KIC and KIR fracture toughness reference curves

    International Nuclear Information System (INIS)

    Wallin, K.; Rintamaa, R.

    1998-01-01

    Historically the ASME reference curves have been treated as representing absolute deterministic lower bound curves of fracture toughness. In reality, this is not the case. They represent only deterministic lower bound curves to a specific set of data, which represent a certain probability range. A recently developed statistical lower bound estimation method called the 'Master curve', has been proposed as a candidate for a new lower bound reference curve concept. From a regulatory point of view, the Master curve is somewhat problematic in that it does not claim to be an absolute deterministic lower bound, but corresponds to a specific theoretical failure probability that can be chosen freely based on application. In order to be able to substitute the old ASME reference curves with lower bound curves based on the master curve concept, the inherent statistical nature (and confidence level) of the ASME reference curves must be revealed. In order to estimate the true inherent level of safety, represented by the reference curves, the original data base was re-evaluated with statistical methods and compared to an analysis based on the master curve concept. The analysis reveals that the 5% lower bound Master curve has the same inherent degree of safety as originally intended for the K IC -reference curve. Similarly, the 1% lower bound Master curve corresponds to the K IR -reference curve. (orig.)

  19. Standard test method for plutonium assay by plutonium (III) diode array spectrophotometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method describes the determination of total plutonium as plutonium(III) in nitrate and chloride solutions. The technique is applicable to solutions of plutonium dioxide powders and pellets (Test Methods C 697), nuclear grade mixed oxides (Test Methods C 698), plutonium metal (Test Methods C 758), and plutonium nitrate solutions (Test Methods C 759). Solid samples are dissolved using the appropriate dissolution techniques described in Practice C 1168. The use of this technique for other plutonium-bearing materials has been reported (1-5), but final determination of applicability must be made by the user. The applicable concentration range for plutonium sample solutions is 10–200 g Pu/L. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropria...

  20. Gold Nanoparticle-Aptamer-Based LSPR Sensing of Ochratoxin A at a Widened Detection Range by Double Calibration Curve Method.

    Science.gov (United States)

    Liu, Boshi; Huang, Renliang; Yu, Yanjun; Su, Rongxin; Qi, Wei; He, Zhimin

    2018-01-01

    Ochratoxin A (OTA) is a type of mycotoxin generated by the metabolism of Aspergillus and Penicillium, and is extremely toxic to humans, livestock, and poultry. However, traditional assays for the detection of OTA are expensive and complicated. Besides the OTA aptamer, OTA itself at high concentrations can also adsorb on the surface of gold nanoparticles (AuNPs) and further inhibit their salt-induced aggregation. We herein report a new OTA assay applying the localized surface plasmon resonance effect of AuNPs and their aggregates. A result obtained from a single linear calibration curve alone is not reliable, so we developed a "double calibration curve" method to address this issue and widen the OTA detection range. A number of other analytes were also examined, and the structural properties of analytes that bind with the AuNPs were further discussed. We found that various considerations must be taken into account in the detection of these analytes when applying AuNP aggregation-based methods, owing to their different binding strengths.
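The "double calibration curve" idea can be illustrated generically: build separate linear calibrations for the low and high concentration ranges and select the branch from the measured response. Every number, unit, and the branch threshold below is hypothetical and for illustration only.

```python
import numpy as np

# Hypothetical calibration data: optical response vs. OTA concentration (ng/mL),
# with different sensitivities in the low and high ranges.
c_low = np.array([1.0, 2.0, 4.0, 8.0])
r_low = 0.04 * c_low + 0.10
c_high = np.array([20.0, 50.0, 100.0, 200.0])
r_high = 0.002 * c_high + 0.48

# Invert each calibration: concentration as a linear function of response.
fit_low = np.polyfit(r_low, c_low, 1)
fit_high = np.polyfit(r_high, c_high, 1)

def ota_concentration(response, threshold=0.5):
    """Select the calibration branch from the measured response."""
    fit = fit_low if response < threshold else fit_high
    return np.polyval(fit, response)

print(ota_concentration(0.30))   # low-range branch  -> 5.0
print(ota_concentration(0.68))   # high-range branch -> 100.0
```

The two branches together cover a concentration span that neither single linear calibration could, which is the sense in which the detection range is widened.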

  1. AGAPEROS Searches for microlensing in the LMC with the Pixel Method; 1, Data treatment and pixel light curves production

    CERN Document Server

    Melchior, A.-L.; Ansari, R.; Aubourg, E.; Baillon, P.; Bareyre, P.; Bauer, F.; Beaulieu, J.-Ph.; Bouquet, A.; Brehin, S.; Cavalier, F.; Char, S.; Couchot, F.; Coutures, C.; Ferlet, R.; Fernandez, J.; Gaucherel, C.; Giraud-Heraud, Y.; Glicenstein, J.-F.; Goldman, B.; Gondolo, P.; Gros, M.; Guibert, J.; Gry, C.; Hardin, D.; Kaplan, J.; de Kat, J.; Lachieze-Rey, M.; Laurent, B.; Lesquoy, E.; Magneville, Ch.; Mansoux, B.; Marquette, J.-B.; Maurice, E.; Milsztajn, A.; Moniez, M.; Moreau, O.; Moscoso, L.; Palanque-Delabrouille, N.; Perdereau, O.; Prevot, L.; Renault, C.; Queinnec, F.; Rich, J.; Spiro, M.; Vigroux, L.; Zylberajch, S.; Vidal-Madjar, A.; Magneville, Ch.

    1999-01-01

    The presence and abundance of MAssive Compact Halo Objects (MACHOs) towards the Large Magellanic Cloud (LMC) can be studied with microlensing searches. The 10 events detected by the EROS and MACHO groups suggest that objects with 0.5 Mo could fill 50% of the dark halo. This preferred mass is quite surprising, and increasing the presently small statistics is a crucial issue. Additional microlensing of stars too dim to be resolved in crowded fields should be detectable using the Pixel Method. We present here an application of this method to the EROS 91-92 data (one tenth of the whole existing data set). We emphasize the data treatment required for monitoring pixel fluxes. Geometric and photometric alignments are performed on each image. Seeing correction and error estimates are discussed. The 3.6" x 3.6" super-pixel light curves thus produced are very stable over the 120-day time-span. Fluctuations at a level of 1.8% of the flux in blue and 1.3% in red are measured on the pixel light curves. This level of stabil...

  2. Nonparametric estimation of age-specific reference percentile curves with radial smoothing.

    Science.gov (United States)

    Wan, Xiaohai; Qu, Yongming; Huang, Yao; Zhang, Xiao; Song, Hanping; Jiang, Honghua

    2012-01-01

    Reference percentile curves represent the covariate-dependent distribution of a quantitative measurement and are often used to summarize and monitor dynamic processes such as human growth. We propose a new nonparametric method based on a radial smoothing (RS) technique to estimate age-specific reference percentile curves assuming the underlying distribution is relatively close to normal. We compared the RS method with both the LMS and the generalized additive models for location, scale and shape (GAMLSS) methods using simulated data and found that our method has smaller estimation error than the two existing methods. We also applied the new method to analyze height growth data from children being followed in a clinical observational study of growth hormone treatment, and compared the growth curves between those with growth disorders and the general population. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Normalization method for metabolomics data using optimal selection of multiple internal standards

    Directory of Open Access Journals (Sweden)

    Yetukuri Laxman

    2007-03-01

    Full Text Available Abstract Background The success of metabolomics as a phenotyping platform largely depends on its ability to detect various sources of biological variability. Removal of platform-specific sources of variability, such as systematic error, is therefore one of the foremost priorities in data preprocessing. However, the chemical diversity of molecular species included in typical metabolic profiling experiments leads to different responses to variations in experimental conditions, making normalization a very demanding task. Results With the aim of removing unwanted systematic variation, we present an approach that utilizes variability information from multiple internal standard compounds to find an optimal normalization factor for each individual molecular species detected by the metabolomics approach (NOMIS). We demonstrate the method on mouse liver lipidomic profiles using Ultra Performance Liquid Chromatography coupled to high resolution mass spectrometry, and compare its performance to two commonly utilized normalization methods: normalization by l2 norm and by retention time region specific standard compound profiles. The NOMIS method proved superior in its ability to reduce the effect of systematic error across the full spectrum of metabolite peaks. We also demonstrate that the method can be used to select the best combinations of standard compounds for normalization. Conclusion Depending on the experiment design and biological matrix, the NOMIS method is applicable either as a one-step normalization method or as a two-step method where the normalization parameters, influenced by the variabilities of internal standard compounds and their correlation to metabolites, are first calculated from a study conducted in repeatability conditions. The method can also be used in the analytical development of metabolomics methods by helping to select the best combinations of standard compounds for a particular biological matrix and analytical platform.
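A heavily simplified sketch of the core idea (not the full NOMIS variance model): normalize each metabolite by the internal standard whose across-sample profile tracks it best, which removes sample-wise systematic error. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 8

# Sample-wise systematic error, e.g. injection-volume drift.
drift = rng.uniform(0.8, 1.2, n_samples)

# Two internal standards: IS1 follows the drift, IS2 mostly carries its own noise.
is1 = 100.0 * drift
is2 = 50.0 * rng.uniform(0.9, 1.1, n_samples)
standards = np.vstack([is1, is2])

# A metabolite whose true level is constant but is distorted by the drift.
metabolite = 10.0 * drift

# Pick the internal standard most correlated with this metabolite (simplified).
corrs = [np.corrcoef(metabolite, s)[0, 1] for s in standards]
best = int(np.argmax(corrs))
normalized = metabolite / standards[best] * standards[best].mean()

cv_before = np.std(metabolite) / np.mean(metabolite)
cv_after = np.std(normalized) / np.mean(normalized)
print(best, cv_before, cv_after)   # dividing by IS1 removes the drift entirely
```

The full method instead estimates, per metabolite, an optimal weighted combination of all internal standards, but the correlation-driven selection above is the intuition behind it.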

  4. Development of A Standard Method for Human Reliability Analysis (HRA) of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kang, Dae Il; Jung, Won Dea; Kim, Jae Whan

    2005-12-01

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and the rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study made progress as follows: assessing the level of quality of the HRAs in Korea and identifying the weaknesses of the HRAs; determining the requirements for developing a standard HRA method; and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME and ANS PSA standards to ensure PSA quality, the standard HRA method was developed to meet the ASME and ANS HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were interactively undertaken to verify the usability and applicability of the standard method.

  5. Development of A Standard Method for Human Reliability Analysis (HRA) of Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Jung, Won Dea; Kim, Jae Whan

    2005-12-15

    As the demand for risk-informed regulation and applications increases, the quality and reliability of a probabilistic safety assessment (PSA) have become more important. KAERI started a study to standardize the process and the rules of HRA (Human Reliability Analysis), which is known as a major contributor to the uncertainty of PSA. The study made progress as follows: assessing the level of quality of the HRAs in Korea and identifying the weaknesses of the HRAs; determining the requirements for developing a standard HRA method; and developing the process and rules for quantifying human error probability. Since risk-informed applications use the ASME and ANS PSA standards to ensure PSA quality, the standard HRA method was developed to meet the ASME and ANS HRA requirements at Category II level. The standard method was based on THERP and ASEP HRA, which are widely used for conventional HRA. However, the method focuses on standardizing and specifying the analysis process, quantification rules and criteria to minimize the deviation of the analysis results caused by different analysts. Several HRA experts from different organizations in Korea participated in developing the standard method. Several case studies were interactively undertaken to verify the usability and applicability of the standard method.

  6. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.
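The sample-reuse idea can be sketched directly: estimate the POF once by Monte Carlo, then perturb the POD curve in one region and re-evaluate with the same samples, so the region sensitivity comes at essentially no extra cost. The flaw-size distribution and POD parameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# Sampled flaw sizes (mm) and the critical size beyond which the part fails.
a = rng.lognormal(mean=0.0, sigma=0.5, size=n)
a_crit = 2.0

def pod(a, a50=1.0, k=3.0):
    """Log-logistic probability-of-detection curve (illustrative parameters)."""
    return 1.0 / (1.0 + (a50 / a) ** k)

def pof(pod_vals):
    # A flaw leads to failure only if it is critical AND inspection missed it.
    return np.mean((a > a_crit) * (1.0 - pod_vals))

base = pof(pod(a))

# Sensitivity to the right tail of the POD curve (a > 1.5 mm), reusing the samples:
delta = 0.01                                      # +1% detection in that region
bumped = np.clip(pod(a) + np.where(a > 1.5, delta, 0.0), 0.0, 1.0)
sens_tail = (pof(bumped) - base) / delta

print(base, sens_tail)   # negative: better tail detection lowers the POF
```

Repeating the perturbation for the lower tail and middle region of the POD curve ranks the regions by influence, which is the guidance the paper's sensitivities provide for choosing an NDI method.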

  7. Standard Test Methods for Wet Insulation Integrity Testing of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 These test methods provide procedures to determine the insulation resistance of a photovoltaic (PV) module, i.e. the electrical resistance between the module's internal electrical components and its exposed, electrically conductive, non-current carrying parts and surfaces. 1.2 The insulation integrity procedures are a combination of wet insulation resistance and wet dielectric voltage withstand test procedures. 1.3 These procedures are similar to and reference the insulation integrity test procedures described in Test Methods E 1462, with the difference being that the photovoltaic module under test is immersed in a wetting solution during the procedures. 1.4 These test methods do not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of these test methods. 1.5 The values stated in SI units are to be regarded as the standard. 1.6 There is no similar or equivalent ISO standard. 1.7 This standard does not purport to address all of the safety conce...

  8. Projection of curves on B-spline surfaces using quadratic reparameterization

    KAUST Repository

    Yang, Yijun; Zeng, Wei; Zhang, Hui; Yong, Junhai; Paul, Jean Claude

    2010-01-01

    Curves on surfaces play an important role in computer aided geometric design. In this paper, we present a hyperbola approximation method based on the quadratic reparameterization of Bézier surfaces, which generates reasonable low degree curves lying

  9. Distance of Sample Measurement Points to Prototype Catalog Curve

    DEFF Research Database (Denmark)

    Hjorth, Poul G.; Karamehmedovic, Mirza; Perram, John

    2006-01-01

    We discuss strategies for comparing discrete data points to a catalog (reference) curve by means of the Euclidean distance from each point to the curve in a pump's head H vs. flow Q diagram. In particular we find that a method currently in use is inaccurate. We propose several alternatives...
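One robust (if brute-force) way to compute the point-to-curve Euclidean distance is dense sampling of the catalog curve; the quadratic head curve below is hypothetical.

```python
import numpy as np

def distance_to_curve(q0, h0, curve, q_range, n=100_001):
    """Euclidean distance from a measured point (q0, h0) to the curve H = curve(Q),
    approximated by densely sampling Q over q_range."""
    q = np.linspace(q_range[0], q_range[1], n)
    return np.min(np.hypot(q - q0, curve(q) - h0))

# Hypothetical catalog head curve: H(Q) = 40 - 0.1 Q^2
pump_curve = lambda q: 40.0 - 0.1 * q ** 2

d = distance_to_curve(10.0, 35.0, pump_curve, (0.0, 20.0))
print(round(d, 3))
```

For this point the naive vertical distance |H(Q0) - H0| at the same flow is 5.0, while the true Euclidean distance is about 2.44; exactly the kind of discrepancy that makes the choice of comparison method matter.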

  10. An endogenous standard, radioisotopic ratio method in NAA

    International Nuclear Information System (INIS)

    Byrne, A.R.; Dermelj, M.

    1997-01-01

    A derivative form of NAA is proposed which is based on the use of an endogenous internal standard of already known concentration in the sample. If a comparator with a known ratio of the determinand and the endogenous standard is co-irradiated with the sample, the determinand concentration is derived in terms of the endogenous standard concentration and the activity ratios of the two induced nuclides in the sample and comparator. As well as eliminating the sample mass and greatly reducing errors caused by pulse pile-up and geometrical differences, it was shown that in the radiochemical mode, if the endogenous standard is chosen so that the induced activity is radioisotopic with that from the determinand, the radiochemical yield is also eliminated and the risk of non-achievement of isotopic exchange greatly reduced. The method is demonstrated with good results on reference materials for the determination of I, Mn and Ni. The advantages and disadvantages of this approach are discussed. It is suggested that it may find application in quality control and in extending the range of certified elements in reference materials. (author)

  11. Quantitative chemical analysis for the standardization of copaiba oil by high resolution gas chromatography

    International Nuclear Information System (INIS)

    Tappin, Marcelo R.R.; Pereira, Jislaine F.G.; Lima, Lucilene A.; Siani, Antonio C.; Mazzei, Jose L.; Ramos, Monica F.S.

    2004-01-01

    Quantitative GC-FID was evaluated for the analysis of methylated copaiba oils, using trans-(-)-caryophyllene or methyl copalate as external standards. Analytical curves showed good linearity and reproducibility in terms of correlation coefficients (0.9992 and 0.996, respectively) and relative standard deviation (< 3%). Quantification of sesquiterpenes and diterpenic acids was performed with each standard separately. When compared with integrator response normalization, the standardization was statistically similar in the case of methyl copalate, but the response of trans-(-)-caryophyllene was statistically different (P < 0.05). This method was shown to be suitable for the classification and quality control of commercial samples of the oils. (author)

  12. Overall Memory Impairment Identification with Mathematical Modeling of the CVLT-II Learning Curve in Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Igor I. Stepanov

    2012-01-01

    Full Text Available The CVLT-II provides standardized scores for each of the List A five learning trials, so that the clinician can compare the patient's raw trials 1–5 scores with standardized ones. However, a patient's raw scores frequently fluctuate, making a proper interpretation difficult. The CVLT-II does not offer any other methods for classifying a patient's learning and memory status on the background of the learning curve. The main objective of this research is to illustrate that discriminant analysis provides an accurate assessment of the learning curve, if suitable predictor variables are selected. Normal controls were ninety-eight healthy volunteers (78 females and 20 males). A group of MS patients included 365 patients (266 females and 99 males) with clinically defined multiple sclerosis. We show that the best predictor variables are the coefficients B3 and B4 of our mathematical model Y = B3∗exp(−B2∗(X−1)) + B4∗(1−exp(−B2∗(X−1))), because discriminant functions, calculated separately for B3 and B4, allow nearly 100% correct classification. These predictors allow identification of separate impairment of readiness to learn or ability to learn, or both.
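Assuming the model form Y(X) = B3∗exp(−B2∗(X−1)) + B4∗(1−exp(−B2∗(X−1))), where B3 captures trial-1 performance (readiness to learn) and B4 the asymptote (ability to learn), the coefficients can be fitted per patient. A minimal sketch with hypothetical trial scores, exploiting the fact that the model is linear in B3 and B4 once B2 is fixed:

```python
import numpy as np

trials = np.arange(1, 6)                          # CVLT-II List A trials 1-5
scores = np.array([6.0, 9.0, 11.0, 12.0, 12.5])   # hypothetical raw scores

# Grid-search B2; for each candidate, solve for (B3, B4) by linear least squares.
best = None
for b2 in np.linspace(0.05, 3.0, 60):
    e = np.exp(-b2 * (trials - 1))
    A = np.column_stack([e, 1.0 - e])             # model: Y = B3*e + B4*(1 - e)
    coef, *_ = np.linalg.lstsq(A, scores, rcond=None)
    sse = float(np.sum((A @ coef - scores) ** 2))
    if best is None or sse < best[0]:
        best = (sse, b2, coef[0], coef[1])

sse, b2, b3, b4 = best
print(round(b3, 1), round(b4, 1))   # starting level and asymptote
```

The fitted B3 and B4 smooth out trial-to-trial fluctuation in the raw scores, which is what makes them usable as discriminant-analysis predictors.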

  13. Evaluating the Capacity of Standard Investment Appraisal Methods

    NARCIS (Netherlands)

    M.M. Akalu

    2002-01-01

    The survey findings indicate the existence of a gap between the theory and practice of capital budgeting. Standard appraisal methods have shown a wider project value discrepancy, which is beyond and above the contingency limit. In addition, the research has found a growing trend in the use

  14. Scaling of counter-current imbibition recovery curves using artificial neural networks

    Science.gov (United States)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    The scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension have an effect on the imbibition process. Studies of scaling imbibition curves, under various assumptions, have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to affect the imbibition process and were used as the inputs for training the network. Using the ‘Bayesian regularization’ training algorithm, the network was trained and tested. Training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.
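    A minimal sketch of the idea, not the authors' exact network: scikit-learn's MLPRegressor stands in for the trained network, and since scikit-learn offers no Bayesian-regularization trainer (that is a MATLAB training option), plain L2 weight decay via `alpha` is used instead. The six inputs and the target are synthetic stand-ins:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Six hypothetical inputs per case: porosity, permeability, oil viscosity,
# water viscosity, oil/water IFT, characteristic matrix length.
X = rng.uniform([0.05, 1, 1, 0.3, 10, 0.1],
                [0.35, 500, 50, 1.2, 40, 2.0], size=(200, 6))

# Stand-in target: a dimensionless scaling factor invented for this demo
# (in the paper the targets come from simulated imbibition recovery curves).
y = np.log(X[:, 1]) * X[:, 0] / (X[:, 2] * X[:, 5] ** 2)

# L2 regularization (`alpha`) stands in for Bayesian regularization here.
net = MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-3, max_iter=5000,
                   random_state=0).fit(X, y)
pred = net.predict(X[:5])
```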

  15. Legislation, standards and methods for mercury emissions control

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-04-15

    Mercury is an element of growing global concern. The United Nations Environment Programme plans to finalise and ratify a new global legally-binding convention on mercury by 2013. Canada already has legislation on mercury emissions from coal-fired utilities and the USA has recently released the new Mercury and Air Toxics Standard. Although other countries may not have mercury-specific legislation as such, many have legislation which results in significant co-benefit mercury reduction due to the installation of effective flue-gas cleaning technologies. This report reviews the current situation and trends in mercury emission legislation and, where possible, discusses the actions that will be taken under proposed or impending standards globally and regionally. The report also reviews the methods currently applied for mercury control and for mercury emission measurement with emphasis on the methodologies most appropriate for compliance. Examples of the methods of mercury control currently deployed in the USA, Canada and elsewhere are included.

  16. Standard Test Method for Gel Time of Carbon Fiber-Epoxy Prepreg

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This test method covers the determination of gel time of carbon fiber-epoxy tape and sheet. The test method is suitable for the measurement of gel time of resin systems having either high or low viscosity. 1.2 The values stated in SI units are to be regarded as standard. The values in parentheses are for reference only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  17. 42 CFR 440.260 - Methods and standards to assure quality of services.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods and standards to assure quality of services. 440.260 Section 440.260 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH... and Limits Applicable to All Services § 440.260 Methods and standards to assure quality of services...

  18. Separate base usages of genes located on the leading and lagging strands in Chlamydia muridarum revealed by the Z curve method

    Directory of Open Access Journals (Sweden)

    Yu Xiu-Juan

    2007-10-01

    Full Text Available Abstract Background The nucleotide compositional asymmetry between the leading and lagging strands in bacterial genomes has been the subject of intensive study in the past few years. It is worth noting that almost all bacterial genomes exhibit the same kind of base asymmetry. This work aims to investigate the strand biases in the Chlamydia muridarum genome and to show the potential of the Z curve method for quantitatively differentiating genes on the leading and lagging strands. Results The occurrence frequencies of bases in protein-coding genes of the C. muridarum genome were analyzed by the Z curve method. It was found that genes located on the two strands of replication have distinct base usages in the C. muridarum genome. According to their positions in the 9-D space spanned by the variables u1 – u9 of the Z curve method, the K-means clustering algorithm can assign about 94% of genes to the correct strands, a few percent higher than the proportion correctly classified by K-means based on the RSCU. The base usage and codon usage analyses show that genes on the leading strand have more G than C and more T than A, particularly at the third codon position; for genes on the lagging strand the bias is reversed. The y component of the Z curves for the complete chromosome sequences shows that the excess of G over C and T over A is more remarkable in the C. muridarum genome than in other bacterial genomes without separate base and/or codon usages. Furthermore, for the genomes of Borrelia burgdorferi, Treponema pallidum, Chlamydia muridarum and Chlamydia trachomatis, in which distinct base and/or codon usages have been observed, closer phylogenetic distances are found compared with other bacterial genomes. Conclusion The nature of the strand bias of base composition in C. muridarum is similar to that in most other bacterial genomes. However, the base composition asymmetry between the leading and lagging strands in C. muridarum is more significant than that in
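    The clustering step described in the Results can be sketched as follows; the 9-D feature vectors here are synthetic stand-ins for the Z-curve variables u1 – u9, with opposite mean shifts mimicking the leading/lagging-strand bias:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-ins for the Z-curve variables u1..u9 of each gene:
# leading-strand genes (G > C, T > A) vs lagging-strand genes (reversed bias).
leading = rng.normal(loc=+0.3, scale=0.15, size=(150, 9))
lagging = rng.normal(loc=-0.3, scale=0.15, size=(150, 9))
X = np.vstack([leading, lagging])
truth = np.array([0] * 150 + [1] * 150)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Cluster IDs are arbitrary, so score against the better of the two matchings.
acc = max(np.mean(labels == truth), np.mean(labels != truth))
```

    With real genes the separation is weaker than in this toy setup, hence the ~94% (rather than 100%) assignment reported above.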

  19. Standard test method for calibration of surface/stress measuring devices

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1997-01-01

    Return to Contents page 1.1 This test method covers calibration or verification of calibration, or both, of surface-stress measuring devices used to measure stress in annealed and heat-strengthened or tempered glass using polariscopic or refractometry based principles. 1.2 This test method is nondestructive. 1.3 This test method uses transmitted light, and therefore, is applicable to light-transmitting glasses. 1.4 This test method is not applicable to chemically tempered glass. 1.5 Using the procedure described, surface stresses can be measured only on the “tin” side of float glass. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  20. Stenting for curved lesions using a novel curved balloon: Preliminary experimental study.

    Science.gov (United States)

    Tomita, Hideshi; Higaki, Takashi; Kobayashi, Toshiki; Fujii, Takanari; Fujimoto, Kazuto

    2015-08-01

    Stenting may be a compelling approach to dilating curved lesions in congenital heart diseases. However, balloon-expandable stents, which are commonly used for congenital heart diseases, are usually deployed in a straight orientation. In this study, we evaluated the effect of stenting with a novel curved balloon considered to provide better conformability to the curved-angled lesion. In vitro experiments: A Palmaz Genesis(®) stent (Johnson & Johnson, Cordis Co, Bridgewater, NJ, USA) mounted on the Goku(®) curve (Tokai Medical Co. Nagoya, Japan) was dilated in vitro to observe directly the behavior of the stent and balloon assembly during expansion. Animal experiment: A short Express(®) Vascular SD (Boston Scientific Co, Marlborough, MA, USA) stent and a long Express(®) Vascular LD stent (Boston Scientific) mounted on the curved balloon were deployed in the curved vessel of a pig to observe the effect of stenting in vivo. In vitro experiments: Although the stent was dilated in a curved fashion, stent and balloon assembly also rotated conjointly during expansion of its curved portion. In the primary stenting of the short stent, the stent was dilated with rotation of the curved portion. The excised stent conformed to the curved vessel. As the long stent could not be negotiated across the mid-portion with the balloon in expansion when it started curving, the mid-portion of the stent failed to expand fully. Furthermore, the balloon, which became entangled with the stent strut, could not be retrieved even after complete deflation. This novel curved balloon catheter might be used for implantation of the short stent in a curved lesion; however, it should not be used for primary stenting of the long stent. Post-dilation to conform the stent to the angled vessel would be safer than primary stenting irrespective of stent length. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  1. Evaluation of diastolic phase by left ventricular volume curve using s2-gated equilibrium method among radioisotope angiography

    International Nuclear Information System (INIS)

    Watanabe, Yoshirou; Sakai, Akira; Inada, Mitsuo; Shiraishi, Tomokuni; Kobayashi, Akitoshi

    1982-01-01

    The S2-gated (second heart sound) method was designed by the authors. In 6 normal subjects and 16 patients (old myocardial infarction, 12 cases; hypertension, 2 cases; aortic regurgitation, 2 cases), radioisotope (RI) angiography using the S2-gated equilibrium method was performed. In RI angiography, 99mTc-human serum albumin (HSA) 555 MBq (15 mCi) was used as the tracer, with a PDP-11/34 minicomputer and a PCG/ECG synchronizer (Metro Inst.). Left ventricular (LV) volume curves were then obtained by the S2-gated and the electrocardiogram (ECG) R-wave-gated methods. From the LV volume curve, left ventricular ejection fraction (EF), mean ejection rate (mER, s⁻¹), mean filling rate (mFR, s⁻¹) and rapid filling fraction (RFF) were calculated. mFR indicates the mean filling rate during the rapid filling phase. RFF was defined as the fraction of the stroke volume filled during the rapid filling phase. The S2-gated method was reliable for the evaluation of the early diastolic phase, compared with the ECG-gated method. There was a difference between RFF in the normal group and the myocardial infarction (MI) group (p < 0.005). RFF in the two groups was correlated with EF (r = 0.82, p < 0.01). RFF was useful in evaluating MI cases with normal EF values. The comparison of mER by the ECG-gated method with mFR by the S2-gated method was useful in evaluating MI cases with normal mER values. mFR was remarkably lower than mER in the MI group, but approximately equal to mER in the normal group. In conclusion, evaluation using RFF and mFR by the S2-gated method was useful in MI cases with normal systolic phase indices. (author)
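    The indices defined above can be illustrated on a synthetic LV volume curve; the volume values and the rapid-filling endpoint below are hypothetical, chosen only to show the arithmetic:

```python
import numpy as np

# Hypothetical LV volume curve (mL) over one cycle, equally spaced samples:
# end-diastole -> ejection -> end-systole -> rapid filling -> slow filling
vol = np.array([120, 100, 80, 65, 60, 75, 95, 105, 110, 115, 118, 120.0])

edv, esv = vol.max(), vol.min()
stroke_volume = edv - esv
ef = stroke_volume / edv                 # ejection fraction

# Rapid filling phase: from end-systole to the frame where filling slows
# (the endpoint index is chosen by eye for this made-up curve).
es_idx = vol.argmin()
rapid_end = 7                            # hypothetical end of rapid filling
rapid_fill = vol[rapid_end] - vol[es_idx]
rff = rapid_fill / stroke_volume         # rapid filling fraction
```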

  2. Comparison of different standards used in radioimmunoassay for atrial natriuretic factor (ANF)

    DEFF Research Database (Denmark)

    Rasmussen, Peter Have; Nielsen, M. Damkjær; Giese, J.

    1991-01-01

Six different standards for the determination of atrial natriuretic factor (ANF) in human plasma samples have been compared using our radioimmunoassay for ANF: International Standard 85/669, National Biological Standards Board, UK; Bachem standard, Torrance, USA; Bachem standard, Bubendorf, Switzerland; Bissendorf standard, Wedemark, Germany; Peninsula standard, Belmont, USA; and UCB-Bioproducts standard, Brussels, Belgium. Standard curves obtained with the different preparations were parallel but showed considerable quantitative differences. Accordingly, estimates of the ANF content in human plasma samples with different standard preparations as the reference showed considerable variability. With the international standard as the gold reference (plasma ANF concentration 100%), the apparent plasma ANF concentrations measured with the other reference... Standard curves referring to the Bissendorf standard...

  3. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of the small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real-life examples.
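    As one concrete example of the kind of method compared above, the AUC can be computed as a Mann-Whitney statistic and a percentile-bootstrap CI attached to it. This is a generic sketch with made-up marker values, not the paper's code, and the percentile bootstrap is only one of many CI constructions the paper examines:

```python
import numpy as np

def auc_mann_whitney(neg, pos):
    """AUC as the Mann-Whitney probability P(pos > neg), ties counted 1/2."""
    neg, pos = np.asarray(neg), np.asarray(pos)
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_ci(neg, pos, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the AUC, resampling each group with replacement."""
    rng = np.random.default_rng(seed)
    stats = [auc_mann_whitney(rng.choice(neg, len(neg)),
                              rng.choice(pos, len(pos)))
             for _ in range(n_boot)]
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical small-sample marker values (5 non-diseased, 5 diseased)
neg = np.array([0.8, 1.1, 1.3, 1.9, 2.0])
pos = np.array([1.5, 2.2, 2.4, 2.8, 3.1])
auc = auc_mann_whitney(neg, pos)
lo, hi = bootstrap_ci(neg, pos)
```

    With samples this small the bootstrap interval is wide, illustrating why the paper's small-sample comparison matters.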

  4. Quantification of Soil Physical Properties by Using X-Ray Computerized Tomography (CT) and Standard Laboratory (STD) Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Maria Ambert [Iowa State Univ., Ames, IA (United States)

    2003-12-12

    X-ray computerized tomography (CT) was applied to agricultural soils in this research to quantify soil physical properties for comparison with standard laboratory (STD) methods. The overall research objective was to quantify more accurately the soil physical properties of long-term management systems. Two field studies were conducted at Iowa State University's Northeast Research and Demonstration Farm near Nashua, IA, using two different soil management strategies. The first field study was conducted in 1999 under a continuous corn crop rotation, for soil under chisel plow and no-till treatments. The second study was conducted in 2001 on a soybean crop rotation for the same soil, but under chisel plow and no-till practices with wheel-track and no-wheel-track compaction treatments induced by a tractor-manure wagon. In addition, saturated hydraulic conductivity (Ks) and the convection-dispersion equation (CDE) model were also applied to the long-term soil management systems, during 2001 only. The results of the 1999 field study revealed no significant differences between treatments and laboratory methods, but significant differences were found at deeper depths of the soil column for tillage treatments. The comparison of the standard laboratory procedure with the CT method showed significant differences at deeper depths for the chisel plow treatment and at the second lower depth for the no-till treatment, for both laboratory methods. The macroporosity distribution experiment showed significant differences at the two lower depths between tillage practices. Bulk density and percent porosity showed significant differences at the two lower depths of the soil column. The results of the 2001 field study showed no significant differences between tillage practices and compaction practices for both laboratory methods, but significant differences between tillage practices with wheel-track and no-wheel compaction treatments were found along the soil

  5. Standard CMMIsm Appraisal Method for Process Improvement (SCAMPIsm), Version 1.1: Method Definition Document

    National Research Council Canada - National Science Library

    2001-01-01

    The Standard CMMI Appraisal Method for Process Improvement (SCAMPI(Service Mark)) is designed to provide benchmark quality ratings relative to Capability Maturity Model(registered) Integration (CMMI(Service Mark)) models...

  6. Radioactive standards and calibration methods for contamination monitoring instruments

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Makoto [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-06-01

    Contamination monitoring in facilities handling unsealed radioactive materials is one of the most important procedures for radiation protection, as well as radiation dose monitoring. For proper contamination monitoring, radiation measuring instruments should not only be suitable for the purpose of monitoring, but also be well calibrated for the quantities being measured. In the calibration of contamination monitoring instruments, quality reference activities need to be used. They are supplied in different forms, such as extended sources, radioactive solutions or radioactive gases. These reference activities must be traceable to national standards or equivalent standards. On the other hand, appropriate calibration methods must be applied for each type of contamination monitoring instrument. In this paper, the concepts of calibration for contamination monitoring instruments, reference sources, determination methods of reference quantities and practical calibration methods of contamination monitoring instruments are described, including the procedures carried out at the Japan Atomic Energy Research Institute and some relevant experimental data. (G.K.)

  7. JUMPING THE CURVE

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-01-01

    Full Text Available This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change - pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.

  8. IMPROVING MANAGEMENT ACCOUNTING AND COST CALCULATION IN DAIRY INDUSTRY USING STANDARD COST METHOD

    Directory of Open Access Journals (Sweden)

    Bogdănoiu Cristiana-Luminiţa

    2013-04-01

    Full Text Available This paper discusses issues related to the improvement of management accounting in the dairy industry by implementing the standard cost method. The methods used today do not give managers the information they need to conduct production activities effectively, which is why we turned to the standard cost method, as it responds to the need of managers, and of all economic entities, to achieve efficient production. The method allows operative control of how manpower and material resources are consumed, by tracking deviations distinctly, permanently and completely during the activity, rather than only at the end of the reporting period. Successful implementation of the standard method depends on the accuracy with which standards are developed; it promotes consistent pre-calculation of production costs as well as the determination, tracking and control of deviations from them, increases the practical value of accounting information and supports business improvement.

  9. Fitting the curve in Excel® : Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NARCIS (Netherlands)

    McCraig, M.A.; Osinski, G.R.; Cloutis, E.A.; Flemming, R.L.; Izawa, M.R.M.; Reddy, V.; Fieber-Beyer, S.K.; Pompilio, L.; van der Meer, F.D.; Berger, J.A.; Bramble, M.S.; Applin, D.M.

    2017-01-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to

  10. Empirical method for matrix effects correction in liquid samples

    International Nuclear Information System (INIS)

    Vigoda de Leyt, Dora; Vazquez, Cristina

    1987-01-01

    A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, a liquid system for dissolving stainless steel chips has been developed. Pure-element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, is thereby avoided, resulting in a simple chemical procedure that simplifies the method of analysis. Analysis of variance of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements, provided the same parameters are used in the calibration curves. The accuracy and precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.) [es

  11. Dual Smarandache Curves of a Timelike Curve lying on Unit dual Lorentzian Sphere

    OpenAIRE

    Kahraman, Tanju; Hüseyin Ugurlu, Hasan

    2016-01-01

    In this paper, we give the Darboux approximation for dual Smarandache curves of a timelike curve on the unit dual Lorentzian sphere. Firstly, we define the four types of dual Smarandache curves of a timelike curve lying on the dual Lorentzian sphere.

  12. Standardization method for alpha and beta surface sources

    Energy Technology Data Exchange (ETDEWEB)

    Sahagia, M; Grigorescu, E L; Razdolescu, A C; Ivan, C [Institute of Physics and Nuclear Engineering, Institute of Atomic Physics, PO Box MG-6, R-76900 Bucharest, (Romania)

    1994-01-01

    The installation and method for the standardization of large-area alpha and beta surface sources are presented. A multiwire, flow-type proportional counter and the associated electronics are used. The counter is placed in a lead shield. The response of the system, in s⁻¹/Bq or s⁻¹/(particle × s⁻¹), was determined for ²⁴¹Am, ²³⁹Pu, ¹⁴⁷Pm, ²⁰⁴Tl, ⁹⁰(Sr+Y) and ¹³⁷Cs using standard sources of different dimensions, from a few mm² to 180 x 220 mm². The system was legally attested for expanded uncertainties of ±7%. (Author).

  13. Relating oxygen partial pressure, saturation and content: the haemoglobin-oxygen dissociation curve.

    Science.gov (United States)

    Collins, Julie-Ann; Rudenski, Aram; Gibson, John; Howard, Luke; O'Driscoll, Ronan

    2015-09-01

    The delivery of oxygen by arterial blood to the tissues of the body has a number of critical determinants, including blood oxygen concentration (content), saturation (SO2) and partial pressure, haemoglobin concentration and cardiac output, including its distribution. The haemoglobin-oxygen dissociation curve, a graphical representation of the relationship between oxygen saturation and oxygen partial pressure, helps us to understand some of the principles underpinning this process. Historically this curve was derived from very limited data based on blood samples from small numbers of healthy subjects which were manipulated in vitro and ultimately described by equations such as those published by Severinghaus in 1979. In a study of 3524 clinical specimens, we found that this equation estimated the SO2 in blood from patients with normal pH and SO2 >70% with remarkable accuracy and, to our knowledge, this is the first large-scale validation of this equation using clinical samples. Oxygen saturation by pulse oximetry (SpO2) is nowadays the standard clinical method for assessing arterial oxygen saturation, providing a convenient, pain-free means of continuously assessing oxygenation, provided the interpreting clinician is aware of important limitations. The use of pulse oximetry reduces the need for arterial blood gas analysis (SaO2), as many patients who are not at risk of hypercapnic respiratory failure or metabolic acidosis and have an acceptable SpO2 do not necessarily require blood gas analysis. While arterial sampling remains the gold-standard method of assessing ventilation and oxygenation, in those patients in whom blood gas analysis is indicated, arterialised capillary samples also have a valuable role in patient care. The clinical role of venous blood gases, however, remains less well defined.
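    The Severinghaus (1979) approximation referred to above has a compact closed form; the sketch below assumes standard pH and temperature, which is the regime in which the abstract reports good agreement:

```python
def severinghaus_so2(po2_mmhg):
    """Severinghaus (1979) O2 dissociation approximation at standard pH and
    temperature: SO2 (%) = 100 / (23400 / (PO2^3 + 150*PO2) + 1)."""
    return 100.0 / (23400.0 / (po2_mmhg ** 3 + 150.0 * po2_mmhg) + 1.0)

# A normal arterial PO2 of ~100 mmHg should give a saturation in the high 90s
sat = severinghaus_so2(100.0)
```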

  15. Minimal families of curves on surfaces

    KAUST Repository

    Lubbes, Niels

    2014-11-01

    A minimal family of curves on an embedded surface is defined as a 1-dimensional family of rational curves of minimal degree, which cover the surface. We classify such minimal families using constructive methods. This allows us to compute the minimal families of a given surface. The classification of minimal families of curves can be reduced to the classification of minimal families which cover weak Del Pezzo surfaces. We classify the minimal families of weak Del Pezzo surfaces and present a table with the number of minimal families of each weak Del Pezzo surface up to Weyl equivalence. As an application of this classification we generalize some results of Schicho. We classify algebraic surfaces that carry a family of conics. We determine the minimal lexicographic degree for the parametrization of a surface that carries at least 2 minimal families. © 2014 Elsevier B.V.

  16. Multi-q pattern classification of polarization curves

    Science.gov (United States)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

    Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are one application where behavior is chiefly analyzed from such profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strongly nonlinear responses of different metals and alloys can overlap, and discrimination becomes a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach, based on Tsallis statistics, in a classification engine to separate the polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium for two stainless steel types with different resistance against localized corrosion. Multi-q pattern analysis was then carried out over a wide potential range, from the cathodic up to the anodic region. An excellent classification rate was obtained, with success rates of 90%, 80%, and 83% for the low (cathodic), high (anodic), and combined potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach towards efficient, robust, systematic and automatic classification of highly nonlinear profile curves.
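    One possible reading of a multi-q descriptor, shown purely as an illustration and not as the authors' exact pipeline, is to normalize each profile to a distribution and evaluate its Tsallis q-entropy over several q values, giving a short feature vector per curve:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum p_i^q) / (q - 1); q -> 1 gives Shannon."""
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multi_q_signature(profile, qs=(0.5, 1.0, 1.5, 2.0, 3.0)):
    """Hypothetical multi-q feature vector for a 1-D curve: normalize the
    profile to a probability distribution and evaluate S_q over several q."""
    p = np.abs(profile) / np.abs(profile).sum()
    return np.array([tsallis_entropy(p, q) for q in qs])

# Two made-up 'polarization curve' profiles with different decay shapes
steel_a = np.exp(-np.linspace(0, 5, 100))          # fast decay
steel_b = 1.0 / (1.0 + np.linspace(0, 5, 100))     # slow decay
sig_a, sig_b = multi_q_signature(steel_a), multi_q_signature(steel_b)
```

    The signatures of the two shapes differ, which is what a downstream classifier would exploit.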

  17. Standard Test Method for Measuring Optical Angular Deviation of Transparent Parts

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1996-01-01

    1.1 This test method covers measuring the angular deviation of a light ray imposed by transparent parts such as aircraft windscreens and canopies. The results are uncontaminated by the effects of lateral displacement, and the procedure may be performed in a relatively short optical path length. This is not intended as a referee standard. It is one convenient method for measuring angular deviations through transparent windows. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  18. ECM using Edwards curves

    DEFF Research Database (Denmark)

    Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja

    2013-01-01

    -arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...
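    Item (1) can be made concrete: the unified Edwards addition law on the curve x^2 + y^2 = 1 + d*x^2*y^2 needs no special cases for doubling or for the neutral element (0, 1), which is part of its appeal for ECM. The prime and curve parameter below are toy values for illustration; real ECM works modulo the large number being factored:

```python
# Affine arithmetic on an Edwards curve x^2 + y^2 = 1 + d*x^2*y^2 over GF(p).
p = 1000003        # small illustrative prime (10^6 + 3)
d = 123            # hypothetical curve parameter, chosen only for the demo

def edwards_add(P, Q):
    """Edwards addition law; the same formula handles doubling and (0, 1)."""
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Double-and-add; ECM stage 1 applies this for many small prime powers."""
    R = (0, 1)                       # neutral element of the Edwards group
    for bit in bin(k)[2:]:
        R = edwards_add(R, R)
        if bit == '1':
            R = edwards_add(R, P)
    return R

# Find some affine point: pick x, solve y^2 = (1 - x^2) / (1 - d*x^2) mod p.
P = None
for x in range(2, p):
    den = (1 - d * x * x) % p
    if den == 0:
        continue
    rhs = (1 - x * x) * pow(den, -1, p) % p
    if pow(rhs, (p - 1) // 2, p) == 1:          # quadratic residue?
        P = (x, pow(rhs, (p + 1) // 4, p))      # sqrt valid since p % 4 == 3
        break
```

    Improvements (2)-(6) in the list above (extended coordinates, signed windows, batching, small parameters) all build on this basic group law.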

  19. Guidelines for using the Delphi Technique to develop habitat suitability index curves

    Science.gov (United States)

    Crance, Johnie H.

    1987-01-01

    Habitat Suitability Index (SI) curves are one method of presenting species habitat suitability criteria. The curves are often used with the Habitat Evaluation Procedures (HEP) and are necessary components of the Instream Flow Incremental Methodology (IFIM) (Armour et al. 1984). Bovee (1986) described three categories of SI curves, or habitat suitability criteria, based on the procedures and data used to develop the criteria. Category I curves are based on professional judgment, with little or no empirical data. Both Category II (utilization criteria) and Category III (preference criteria) curves are derived from data collected at locations where target species are observed or collected. Having Category II and Category III curves for all species of concern would be ideal. In reality, no SI curves are available for many species, and SI curves that require intensive field sampling often cannot be developed under prevailing constraints on time and costs. One alternative under these circumstances is the development and interim use of SI curves based on expert opinion. The Delphi technique (Pill 1971; Delbecq et al. 1975; Linstone and Turoff 1975) is one method for combining the knowledge and opinions of a group of experts. The purpose of this report is to describe how the Delphi technique may be used to develop expert-opinion-based SI curves.

  20. Changes in the Flow-Volume Curve According to the Degree of Stenosis in Patients With Unilateral Main Bronchial Stenosis

    Science.gov (United States)

    Yoo, Jung-Geun; Yi, Chin A; Lee, Kyung Soo; Jeon, Kyeongman; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O Jung

    2015-01-01

    Objectives The shape of the flow-volume (F-V) curve is known to change to showing a prominent plateau as stenosis progresses in patients with tracheal stenosis. However, no study has evaluated changes in the F-V curve according to the degree of bronchial stenosis in patients with unilateral main bronchial stenosis. Methods We performed an analysis of F-V curves in 29 patients with unilateral bronchial stenosis with the aid of a graphic digitizer between January 2005 and December 2011. Results The primary diseases causing unilateral main bronchial stenosis were endobronchial tuberculosis (86%), followed by benign bronchial tumor (10%), and carcinoid (3%). All unilateral main bronchial stenoses were classified into one of five grades (I, ≤25%; II, 26%-50%; III, 51%-75%; IV, 76%-90%; V, >90% to near-complete obstruction without ipsilateral lung collapse). A monophasic F-V curve was observed in patients with grade I stenosis and biphasic curves were observed for grade II-IV stenosis. Both monophasic (81%) and biphasic shapes (18%) were observed in grade V stenosis. After standardization of the biphasic shape of the F-V curve, the breakpoints of the biphasic curve moved in the direction of high volume (x-axis) and low flow (y-axis) according to the progression of stenosis. Conclusion In unilateral bronchial stenosis, a biphasic F-V curve appeared when bronchial stenosis was >25% and disappeared when obstruction was near complete. In addition, the breakpoint moved in the direction of high volume and low flow with the progression of stenosis. PMID:26045916

  1. Modelling the Influence of Ground Surface Relief on Electric Sounding Curves Using the Integral Equations Method

    Directory of Open Access Journals (Sweden)

    Balgaisha Mukanova

    2017-01-01

    Full Text Available The problem of electrical sounding of a medium with ground surface relief is modelled using the integral equations method. This numerical method is based on the triangulation of the computational domain, which is adapted to the shape of the relief and the measuring line. The numerical algorithm is tested by comparing the results with the known solution for horizontally layered media with two layers. Calculations are also performed to verify the fulfilment of the “reciprocity principle” for the 4-electrode installations in our numerical model. Simulations are then performed for a two-layered medium with a surface relief. The quantitative influences of the relief, the resistivity ratios of the contacting media, and the depth of the second layer on the apparent resistivity curves are established.

  2. Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    In order to solve the problems of the Newton-Raphson iteration interpolation method for NURBS curves, such as long interpolation times, complicated calculations, and step errors that are difficult to control, this paper studies an algorithm for Newton-Raphson iteration interpolation of NURBS curves together with its simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.
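    The Newton-Raphson step at the heart of such interpolators can be sketched as follows. This is a minimal illustration, not the paper's algorithm: a unit circle stands in for the NURBS evaluator, and `next_parameter` is a hypothetical helper that finds the parameter value whose chord length from the current point equals the feed step dL.

```python
import numpy as np

def C(u):                       # stand-in parametric curve (unit circle)
    return np.array([np.cos(u), np.sin(u)])

def dC(u):                      # its first derivative
    return np.array([-np.sin(u), np.cos(u)])

def next_parameter(u0, dL, iters=5):
    """Newton-Raphson: solve |C(u) - C(u0)| = dL for the next parameter u."""
    p0 = C(u0)
    u = u0 + dL / np.linalg.norm(dC(u0))    # first-order initial guess
    for _ in range(iters):
        r = C(u) - p0
        f = np.linalg.norm(r) - dL          # chord-length residual
        df = r @ dC(u) / np.linalg.norm(r)  # derivative d|r|/du
        u -= f / df                         # Newton-Raphson update
    return u
```

    For a real NURBS interpolator, C and dC would evaluate the curve and its derivative from the control points, weights and knot vector.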

  3. Developing content standards for teaching research skills using a delphi method

    NARCIS (Netherlands)

    Schaaf, M.F. van der; Stokking, K.M.; Verloop, N.

    2005-01-01

    The increased attention for teacher assessment and current educational reforms ask for procedures to develop adequate content standards. For the development of content standards on teaching research skills, a Delphi method based on stakeholders’ judgments has been designed and tested. In three

  4. Standard test method for guided bend test for ductility of welds

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers a guided bend test for the determination of soundness and ductility of welds in ferrous and nonferrous products. Defects, not shown by X rays, may appear in the surface of a specimen when it is subjected to progressive localized overstressing. This guided bend test has been developed primarily for plates and is not intended to be substituted for other methods of bend testing. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. Note 1—For additional information see Terminology E 6, and American Welding Society Standard D 1.1. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  5. Description of Concrete Creep under Time-Varying Stress Using Parallel Creep Curve

    OpenAIRE

    Park, Yeong-Seong; Lee, Yong-Hak; Lee, Youngwhan

    2016-01-01

    An incremental format of creep model was presented to take account of the development of concrete creep due to loading at different ages. The formulation was attained by introducing a horizontal parallel assumption of creep curves and combining it with the vertical parallel creep curve of the rate of creep method to remedy the disadvantage of the rate of creep method that significantly underestimates the amount of creep strain, regardless of its simple format. Two creep curves were combined b...

  6. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae(exp Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax(exp B) + C and the general geometric growth equation y = Ak(exp Bt) + C.
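    The non-iterative idea can be illustrated with a minimal sketch (assuming uniformly spaced samples; this is not the exact published algorithm): successive differences cancel the offset C, the ratio of consecutive differences isolates e^(B*dt), and A and C then follow from an ordinary linear fit.

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C for uniformly spaced t."""
    dt = t[1] - t[0]
    d = np.diff(y)                            # differences remove C
    # ratio of consecutive differences equals exp(B*dt)
    B = np.log(np.mean(d[1:] / d[:-1])) / dt
    x = np.exp(B * t)
    A, C = np.polyfit(x, y, 1)                # linear fit: y = A*x + C
    return A, B, C
```

    With B known, the problem is linear, so no iteration or starting guess is needed.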

  7. Dose-effect Curve for X-radiation in Lymphocytes in Goats

    International Nuclear Information System (INIS)

    Hasanbasic, D.; Saracevic, L.; Sacirbegovic, A.

    1998-01-01

    Dose-effect curve for X-radiation was made based on the analysis of chromosome aberrations in lymphocytes of goats. Blood samples from seven goats were irradiated using the MOORHEAD method, slightly modified and adapted to our conditions. A linear-quadratic model was used, and the dose-effect curves were fitted by the least-squares method. The collective dose-effect curve for goats is given by the expression y(D) = 8.6639×10⁻³ D + 2.9748×10⁻² D² + 2.9475×10⁻³. Comparison with some domestic animals such as sheep and pigs showed differences not only with respect to the linear-quadratic model, but to other mathematical presentations as well. (author)
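    A least-squares fit of such a linear-quadratic model y(D) = aD + bD² + c can be sketched as follows; the dose points are invented, and the yields are generated from the coefficients quoted in the abstract rather than taken from the goat data:

```python
import numpy as np

doses = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])       # Gy (illustrative)
a0, b0, c0 = 8.6639e-3, 2.9748e-2, 2.9475e-3           # from the abstract
yields = a0 * doses + b0 * doses**2 + c0               # synthetic yields

# least-squares fit of y(D) = a*D + b*D^2 + c via the design matrix
M = np.column_stack([doses, doses**2, np.ones_like(doses)])
(a, b, c), *_ = np.linalg.lstsq(M, yields, rcond=None)
```

    The model is linear in its coefficients, so the fit reduces to one matrix least-squares solve.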

  8. Technological change in energy systems. Learning curves, logistic curves and input-output coefficients

    International Nuclear Information System (INIS)

    Pan, Haoran; Koehler, Jonathan

    2007-01-01

    Learning curves have recently been widely adopted in climate-economy models to incorporate endogenous change of energy technologies, replacing the conventional assumption of an autonomous energy efficiency improvement. However, there has been little consideration of the credibility of the learning curve. The current trend that many important energy and climate change policy analyses rely on the learning curve means that it is of great importance to critically examine the basis for learning curves. Here, we analyse the use of learning curves in energy technology, usually implemented as a simple power function. We find that the learning curve cannot separate the effects of price and technological change, cannot reflect continuous and qualitative change of both conventional and emerging energy technologies, cannot help to determine the time paths of technological investment, and misses the central role of R and D activity in driving technological change. We argue that a logistic curve of improving performance modified to include R and D activity as a driving variable can better describe the cost reductions in energy technologies. Furthermore, we demonstrate that the top-down Leontief technology can incorporate the bottom-up technologies that improve along either the learning curve or the logistic curve, through changing input-output coefficients. An application to UK wind power illustrates that the logistic curve fits the observed data better and implies greater potential for cost reduction than the learning curve does. (author)
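    The single-factor learning curve criticized here is usually fitted as a power law in log-log space. The sketch below uses synthetic data, not the UK wind power series analyzed in the paper:

```python
import numpy as np

# learning curve: cost = C0 * x**(-b), x = cumulative capacity
cum_capacity = np.array([10.0, 20.0, 40.0, 80.0, 160.0])  # toy series
cost = 100.0 * cum_capacity ** -0.32                      # toy cost index

# linear fit in log-log space: log(cost) = log(C0) + slope*log(x)
slope, intercept = np.polyfit(np.log(cum_capacity), np.log(cost), 1)
b = -slope
learning_rate = 1.0 - 2.0 ** slope   # fractional cost drop per doubling
```

    The logistic alternative advocated in the paper would replace the unbounded power law with a curve that saturates, and add R&D activity as a driving variable.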

  9. Greater Activity in the Frontal Cortex on Left Curves: A Vector-Based fNIRS Study of Left and Right Curve Driving

    Science.gov (United States)

    Oka, Noriyuki; Yoshino, Kayoko; Yamamoto, Kouji; Takahashi, Hideki; Li, Shuguang; Sugimachi, Toshiyuki; Nakano, Kimihiko; Suda, Yoshihiro; Kato, Toshinori

    2015-01-01

    Objectives In the brain, the mechanisms of attention to the left and the right are known to be different. It is possible that brain activity when driving also differs with different horizontal road alignments (left or right curves), but little is known about this. We found driver brain activity to be different when driving on left and right curves, in an experiment using a large-scale driving simulator and functional near-infrared spectroscopy (fNIRS). Research Design and Methods The participants were fifteen healthy adults. We created a course simulating an expressway, comprising straight line driving and gentle left and right curves, and monitored the participants under driving conditions, in which they drove at a constant speed of 100 km/h, and under non-driving conditions, in which they simply watched the screen (visual task). Changes in hemoglobin concentrations were monitored at 48 channels including the prefrontal cortex, the premotor cortex, the primary motor cortex and the parietal cortex. From orthogonal vectors of changes in deoxyhemoglobin and changes in oxyhemoglobin, we calculated changes in cerebral oxygen exchange, reflecting neural activity, and statistically compared the resulting values from the right and left curve sections. Results Under driving conditions, there were no sites where cerebral oxygen exchange increased significantly more during right curves than during left curves (p > 0.05), but cerebral oxygen exchange increased significantly more during left curves (p < 0.05) in the right premotor cortex, the right frontal eye field and the bilateral prefrontal cortex. Under non-driving conditions, increases were significantly greater during left curves (p < 0.05) in the right frontal eye field. Conclusions Left curve driving was thus found to require more brain activity at multiple sites, suggesting that left curve driving may require more visual attention than right curve driving. The right frontal eye field was activated under both driving and non-driving conditions.

  10. Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods

    Science.gov (United States)

    Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.

    2018-06-01

    We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g‧r‧i‧z‧ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.

  11. Fractal characteristic study of shearer cutter cutting resistance curves

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C. [Heilongjiang Scientific and Technical Institute, Haerbin (China). Dept of Mechanical Engineering

    2004-02-01

    The cutting resistance curve is the most useful tool for reflecting the overall cutting performance of a cutting machine. The cutting resistance curve is influenced by many factors such as the pick structure and arrangement, the cutter operation parameters, coal quality and geologic conditions. This paper discusses the use of fractal geometry to study the properties of the cutting resistance curve, and the use of fractal dimensions to evaluate cutting performance. On the basis of fractal theory, the general form and calculation method of fractal characteristics are given. 4 refs., 3 figs., 1 tab.

  12. Determination of electron clinical spectra from percentage depth dose (PDD) curves by classical simulated annealing method

    International Nuclear Information System (INIS)

    Visbal, Jorge H. Wilches; Costa, Alessandro M.

    2016-01-01

    The percentage depth dose (PDD) of electron beams is an important item of data in radiation therapy, since it describes their dosimetric properties. Accurate transport theory, as well as the Monte Carlo method, has shown obvious differences between the dose distribution of a clinical accelerator's electron beams in a water phantom and the dose distribution in water of monoenergetic electrons at the accelerator's nominal energy. In radiotherapy, the electron spectrum should be considered to improve the accuracy of dose calculation, since the shape of the PDD curve depends on how the radiation particles deposit their energy in the patient/phantom, that is, on the spectrum. There are three principal approaches to obtaining electron energy spectra from the central-axis PDD: the Monte Carlo method, direct measurement, and inverse reconstruction. In this work, the simulated annealing method is presented as a practical, reliable and simple approach to inverse reconstruction, and an optimal alternative to the other options. (author)
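    A minimal sketch of classical simulated annealing applied to this kind of inverse problem follows. It recovers the relative weights of a few "monoenergetic" depth-dose components whose weighted sum should reproduce a measured PDD; the Gaussian-shaped components and all numbers are toy stand-ins, not real electron depth-dose kernels or the authors' cooling schedule.

```python
import math
import random

random.seed(1)

depths = [0.25 * i for i in range(40)]               # cm
def component(z, r50):                               # toy depth-dose shape
    return math.exp(-((z - 0.6 * r50) / (0.3 * r50)) ** 2)

r50s = [2.0, 3.0, 4.0]                               # toy component energies
true_w = [0.2, 0.5, 0.3]
measured = [sum(w * component(z, r) for w, r in zip(true_w, r50s))
            for z in depths]

def cost(w):
    """Sum of squared residuals of the normalized weighted sum vs. measurement."""
    total = sum(w)
    return sum((sum(x / total * component(z, r) for x, r in zip(w, r50s)) - m) ** 2
               for z, m in zip(depths, measured))

w = [1.0, 1.0, 1.0]
best = list(w)
T = 1.0
while T > 1e-3:
    cand = [max(1e-6, x + random.uniform(-0.1, 0.1)) for x in w]
    dE = cost(cand) - cost(w)
    if dE < 0 or random.random() < math.exp(-dE / T):  # Metropolis criterion
        w = cand
    if cost(w) < cost(best):
        best = list(w)
    T *= 0.99                                          # geometric cooling
```

    Accepting some uphill moves at high temperature lets the search escape local minima before the geometric cooling freezes it near a good spectrum.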

  13. Definition and measurement of statistical gloss parameters from curved objects

    Energy Technology Data Exchange (ETDEWEB)

    Kuivalainen, Kalle; Oksman, Antti; Peiponen, Kai-Erik

    2010-09-20

    Gloss standards are commonly defined for gloss measurement from flat surfaces, and, accordingly, glossmeters are typically developed for flat objects. However, gloss inspection of convex, concave, and small products is also important. In this paper, we define statistical gloss parameters for curved objects and measure gloss data on convex and concave surfaces using two different diffractive-optical-element-based glossmeters. Examples of measurements with the two diffractive-optical-element-based glossmeters are given for convex and concave aluminum pipe samples with and without paint. The defined gloss parameters for curved objects are useful in the characterization of the surface quality of metal pipes and other objects.

  14. Definition and measurement of statistical gloss parameters from curved objects

    International Nuclear Information System (INIS)

    Kuivalainen, Kalle; Oksman, Antti; Peiponen, Kai-Erik

    2010-01-01

    Gloss standards are commonly defined for gloss measurement from flat surfaces, and, accordingly, glossmeters are typically developed for flat objects. However, gloss inspection of convex, concave, and small products is also important. In this paper, we define statistical gloss parameters for curved objects and measure gloss data on convex and concave surfaces using two different diffractive-optical-element-based glossmeters. Examples of measurements with the two diffractive-optical-element-based glossmeters are given for convex and concave aluminum pipe samples with and without paint. The defined gloss parameters for curved objects are useful in the characterization of the surface quality of metal pipes and other objects.

  15. ESTIMATING TORSION OF DIGITAL CURVES USING 3D IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Christoph Blankenburg

    2016-04-01

    Full Text Available Curvature and torsion of three-dimensional curves are important quantities in fields like material science or biomedical engineering. Torsion has an exact definition in the continuous domain. However, in the discrete case most of the existing torsion evaluation methods lead to inaccurate values, especially for low resolution data. In this contribution we use the discrete points of space curves to determine the Fourier series coefficients, which allow the underlying continuous curve to be represented with Cesàro's mean. This representation of the curve is suitable for estimating curvature and torsion values with their classical continuous definitions. In comparison with the literature, one major advantage of this approach is that no a priori knowledge about the shape of the cyclic curve parts approximating the discrete curves is required. Synthetic data, i.e. curves with known curvature and torsion, are used to quantify the inherent accuracy of the algorithm for torsion and curvature estimation. The algorithm is also tested on tomographic data of fiber structures and open foams, where discrete curves are extracted from the pore spaces.

  16. Improving runoff risk estimates: Formulating runoff as a bivariate process using the SCS curve number method

    Science.gov (United States)

    Shaw, Stephen B.; Walter, M. Todd

    2009-03-01

    The Soil Conservation Service curve number (SCS-CN) method is widely used to predict storm runoff for hydraulic design purposes, such as sizing culverts and detention basins. As traditionally used, the probability of calculated runoff is equated to the probability of the causative rainfall event, an assumption that fails to account for the influence of variations in soil moisture on runoff generation. We propose a modification to the SCS-CN method that explicitly incorporates rainfall return periods and the frequency of different soil moisture states to quantify storm runoff risks. Soil moisture status is assumed to be correlated to stream base flow. Fundamentally, this approach treats runoff as the outcome of a bivariate process instead of dictating a 1:1 relationship between causative rainfall and resulting runoff volumes. Using data from the Fall Creek watershed in western New York and the headwaters of the French Broad River in the mountains of North Carolina, we show that our modified SCS-CN method improves frequency discharge predictions in medium-sized watersheds in the eastern United States in comparison to the traditional application of the method.
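    The core single-event SCS-CN relations (in inch units: S = 1000/CN − 10, Ia = 0.2S, Q = (P − Ia)²/(P − Ia + S) for P > Ia) can be sketched as:

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """Storm runoff depth (inches) from the standard SCS-CN equations.

    P: storm rainfall depth (in); CN: curve number (0-100].
    """
    S = 1000.0 / CN - 10.0      # potential maximum retention (in)
    Ia = ia_ratio * S           # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)
```

    The bivariate extension proposed in the paper would vary the retention S with the soil moisture state (inferred from base flow) instead of fixing one curve number per watershed.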

  17. S-curve networks and an approximate method for estimating degree distributions of complex networks

    OpenAIRE

    Guo, Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics from the China Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted, which has reference value for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based o...
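    Fitting a logistic S-curve N(t) = K / (1 + exp(−r(t − t₀))) can be sketched by linearizing ln(K/N − 1) = −r(t − t₀) for an assumed capacity K. The data below are synthetic, not the actual China IPv4 address counts:

```python
import numpy as np

K = 400.0                                   # assumed saturation level
t = np.arange(10, dtype=float)
N = K / (1.0 + np.exp(-0.8 * (t - 5.0)))    # synthetic S-curve data

z = np.log(K / N - 1.0)                     # linearized response
slope, intercept = np.polyfit(t, z, 1)      # z = -r*t + r*t0
r, t0 = -slope, intercept / -slope
```

    In practice K is unknown and is itself estimated, e.g. by scanning candidate values and keeping the one with the best linear fit.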

  18. Poor interoperability of the Adams-Harbertson method for analysis of anthocyanins: comparison with AOAC pH differential method.

    Science.gov (United States)

    Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo

    2013-01-01

    The poor interoperability of anthocyanin glycosides measurements by two pH differential methods is documented. Adams-Harbertson, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found mean precision of Adams-Harbertson winery versus reference measurements to be 77 +/- 20%. Maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon from reproducibility RSD. Range of measurements was actually 30 to 91% for Pinot Noir. An interoperability study (n=30) found Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. Large analytical chemistry differences are: AOAC method uses Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, proposed a priori by Fuleki and Francis; whereas Adams-Harbertson uses "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by Adams-Harbertson standard curve over Beer-Lambert and pH 1.8 over pH 1.0. The study recommends using AOAC Official Method 2005.02 for analysis of wine anthocyanin glycosides.
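    The Beer-Lambert calculation behind the AOAC pH differential method can be sketched as below, reporting total monomeric anthocyanins as cyanidin-3-glucoside equivalents (molecular weight 449.2 g/mol, molar absorptivity 26900 L·mol⁻¹·cm⁻¹ are the commonly cited reference constants); the absorbance readings in the test are invented:

```python
def anthocyanins_mg_per_L(a520_ph1, a700_ph1, a520_ph45, a700_ph45,
                          dilution_factor=1.0,
                          mw=449.2, eps=26900.0, path_cm=1.0):
    """Total monomeric anthocyanins (mg cyanidin-3-glucoside equiv./L)
    by the pH differential / Beer-Lambert calculation."""
    # haze-corrected absorbance difference between pH 1.0 and pH 4.5
    A = (a520_ph1 - a700_ph1) - (a520_ph45 - a700_ph45)
    return A * mw * dilution_factor * 1000.0 / (eps * path_cm)
```

    The A700 readings correct for haze; the pH 4.5 reading subtracts the absorbance of pigments that do not undergo the reversible color change.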

  19. [Standard sample preparation method for quick determination of trace elements in plastic].

    Science.gov (United States)

    Yao, Wen-Qing; Zong, Rui-Long; Zhu, Yong-Fa

    2011-08-01

    A reference sample of electronic-information-product plastic containing heavy metals at known concentrations was prepared by the masterbatch method; its repeatability and precision were determined, and reference sample preparation procedures were established. X-ray fluorescence (XRF) spectroscopy was used to determine the repeatability and uncertainty in the analysis of heavy metals and bromine in the sample, and the working curve and measurement methods for the reference sample were established. The results showed that the method exhibited a very good linear relationship in the 200-2000 mg·kg⁻¹ concentration range for Hg, Pb, Cr and Br, and in the 20-200 mg·kg⁻¹ range for Cd, and that the repeatability over six replicate analyses was good. In testing the circuit boards ICB288G and ICB288 from the Mitsubishi Heavy Industry Company, the results agreed with the recommended values.
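    An XRF working (calibration) curve of this kind boils down to a linear least-squares fit of known concentrations against measured intensities, through which unknowns are then read back. All numbers below are illustrative, not data from this study:

```python
import numpy as np

# reference samples: known concentration vs. measured intensity (toy values)
conc = np.array([200.0, 500.0, 1000.0, 1500.0, 2000.0])   # mg/kg
intensity = np.array([41.0, 101.0, 199.0, 302.0, 398.0])  # counts/s

# working curve: linear least-squares fit, concentration = m*I + b
m, b = np.polyfit(intensity, conc, 1)

def read_concentration(i):
    """Read an unknown's concentration (mg/kg) from its intensity."""
    return m * i + b
```

    The standard-curve approach is what makes the masterbatch reference samples essential: their known concentrations anchor the fit.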

  20. Conservatism of ASME KIR-reference curve with respect to crack arrest

    International Nuclear Information System (INIS)

    Wallin, K.; Rintamaa, R.; Nagel, G.

    1999-01-01

    The conservatism of the RT_NDT temperature indexing parameter and the ASME K_IR reference curve with respect to crack arrest toughness has been evaluated. Based on an analysis of the original ASME K_Ia data, it was established that, inherently, the ASME K_IR reference curve corresponds to an overall 5% lower-bound curve with respect to crack arrest. It was shown that the scatter of crack arrest toughness is essentially material independent, with a standard deviation of 18%, and that the temperature dependence of K_Ia has the same form as predicted by the master curve for crack initiation toughness. The 'built-in' offset between the mean 100 MPa√m crack arrest temperature, TK_Ia, and RT_NDT is 38°C (TK_Ia = RT_NDT + 38°C), and the experimental relation between TK_Ia and NDT is TK_Ia = NDT + 28°C. The K_IR reference curve using NDT as reference temperature will be conservative with respect to the general 5% lower-bound K_Ia(5%) curve, with a 75% confidence. The use of RT_NDT, instead of NDT, will generally increase the degree of conservatism, both for non-irradiated as well as irradiated materials, to close to a 95% confidence level. This trend is pronounced for materials with Charpy-V upper-shelf energies below 100 J. It is shown that the K_IR curve effectively constitutes a deterministic lower-bound curve for crack arrest. The findings are valid both for nuclear pressure vessel plates, forgings and welds. (orig.)