WorldWideScience

Sample records for curve fitting approach

  1. Comparison of ductile-to-brittle transition curve fitting approaches

    International Nuclear Information System (INIS)

    Cao, L.W.; Wu, S.J.; Flewitt, P.E.J.

    2012-01-01

    Ductile-to-brittle transition (DBT) curve fitting approaches are compared over the transition temperature range for reactor pressure vessel steels with different kinds of data, including Charpy-V notch impact energy data and fracture toughness data. Three DBT curve fitting methods have been frequently used in the past: the Burr, S-Weibull and tanh distributions. In general there is greater scatter associated with test data obtained within the transition region, so these methods give results with different accuracies, especially when fitting to small quantities of data. The comparison shows that the Burr and tanh distributions can fit well-distributed, large data sets extending across the test temperature range, including the upper and lower shelves, almost equally well. The S-Weibull distribution fit is poor for the lower shelf of the DBT curve. Overall, for both large and small quantities of measured data, the Burr distribution provides the best description. - Highlights: ► Burr distribution offers a better fit than either an S-Weibull or a tanh fit. ► Burr and tanh methods show similar fitting ability for a large data set. ► Burr method can fit sparse data well distributed across the test temperature range. ► S-Weibull method cannot fit the lower shelf well and shows poor fitting quality.
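
    As a rough illustration of the tanh approach compared above, the sketch below fits the classical four-parameter tanh transition curve to synthetic Charpy-V data with SciPy. All numbers are hypothetical; the Burr and S-Weibull variants would follow the same pattern with different model functions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tanh_dbt(T, A, B, T0, C):
        """Four-parameter tanh transition curve: shelves at A - B and A + B."""
        return A + B * np.tanh((T - T0) / C)

    # Hypothetical Charpy-V impact energies (J) across the transition range.
    T = np.array([-120., -90., -60., -40., -20., 0., 20., 50., 100.])
    E = np.array([5., 8., 20., 45., 90., 140., 170., 185., 190.])

    popt, pcov = curve_fit(tanh_dbt, T, E, p0=[100., 90., -20., 30.])
    perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties of A, B, T0, C
    print("A, B, T0, C =", popt.round(1), "+/-", perr.round(1))
    ```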

  2. Estimating reaction rate constants: comparison between traditional curve fitting and curve resolution

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2000-01-01

    A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained in time of a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance versus time data

  3. Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm

    Directory of Open Access Journals (Sweden)

    Daeho Jang

    2015-09-01

    The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present sigmoid-asymmetric curve-fitting equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.

  4. CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method

    International Nuclear Information System (INIS)

    Olson, D.G.

    1992-01-01

    1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine which produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma-ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made, starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV, and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem - Maxima of: 2000 data points in the calibration curve output (LSFIT); 30 input data points; 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two.

  5. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based – in an unconventional way

  6. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, the cubic B-spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  7. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable? The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
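
    The reliability test described above, recovering a known alpha from simulated data, can be sketched in a few lines. This is a minimal, hypothetical version using the Ricker form R = alpha*S*exp(-beta*S) with lognormal noise; the study's actual simulation modeled population dynamics and catch-per-unit-effort data in far more detail.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def ricker(S, alpha, beta):
        """Ricker stock-recruitment curve R = alpha * S * exp(-beta * S)."""
        return alpha * S * np.exp(-beta * S)

    alpha_true, beta_true = 4.0, 2e-3
    S = rng.uniform(50.0, 1500.0, 40)                 # spawning stock sizes
    R = ricker(S, alpha_true, beta_true) * rng.lognormal(0.0, 0.5, S.size)

    popt, _ = curve_fit(ricker, S, R, p0=[1.0, 1e-3])
    print(f"recovered alpha = {popt[0]:.2f} (true value {alpha_true})")
    ```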

  8. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted by a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  9. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted by a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  10. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

    This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted by a curve fitting method, a mathematical model of global solar radiation is developed. The error was measured using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods are used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
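
    The three records above are database copies of the same paper. As an illustration of the comparison they describe, the hedged sketch below fits two-term Gaussian and two-term sine models to a synthetic daily irradiance profile and reports RMSE and R²; the data, initial guesses, and noise level are all invented for the example.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2(x, a1, b1, c1, a2, b2, c2):
        return a1*np.exp(-((x-b1)/c1)**2) + a2*np.exp(-((x-b2)/c2)**2)

    def sine2(x, a1, b1, c1, a2, b2, c2):
        return a1*np.sin(b1*x + c1) + a2*np.sin(b2*x + c2)

    def goodness(y, yhat):
        rmse = np.sqrt(np.mean((y - yhat)**2))
        r2 = 1.0 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)
        return rmse, r2

    # Hypothetical hourly irradiance profile (W/m^2) between 06:00 and 19:00.
    t = np.linspace(6.0, 19.0, 27)
    y = 900.0*np.exp(-((t - 13.0)/3.2)**2) \
        + 30.0*np.random.default_rng(1).normal(size=t.size)

    for name, f, p0 in [("Gaussian-2", gauss2, [800, 13, 3, 100, 11, 5]),
                        ("sine-2",     sine2,  [500, 0.3, -2, 300, 0.2, 0])]:
        popt, _ = curve_fit(f, t, y, p0=p0, maxfev=50000)
        rmse, r2 = goodness(y, f(t, *popt))
        print(f"{name}: RMSE = {rmse:.1f}, R^2 = {r2:.3f}")
    ```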

  11. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

    The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components; it is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure, which is further used to create a weighted graph for deciding the merging of curve segments. Different from existing approaches, which utilize local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problem of curve structure reconstruction from point clouds, as well as the vectorization of simple line drawing images by reconstructing the drawn lines.

  12. From Curve Fitting to Machine Learning

    CERN Document Server

    Zielesny, Achim

    2011-01-01

    The analysis of experimental data is at the heart of science from its beginnings. But it was the advent of digital computers that allowed the execution of highly non-linear and increasingly complex data analysis procedures - methods that were completely unfeasible before. Non-linear curve fitting, clustering and machine learning belong to these modern techniques, which are a further step towards computational intelligence. The goal of this book is to provide an interactive and illustrative guide to these topics. It concentrates on the road from two-dimensional curve fitting to multidimensional clustering

  13. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
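
    A minimal sketch of the localized straight-line fit near short-circuit described above, on a synthetic I-V curve with hypothetical diode-like parameters. The fixed data window and the ordinary-least-squares uncertainty are simplified stand-ins; the paper's contribution is choosing that window automatically via Bayesian model evidence, which this sketch does not implement.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    V = np.linspace(0.0, 0.7, 100)                     # hypothetical I-V sweep
    I = 5.0 - 0.2*V - 1e-11*np.expm1(V/0.026) + rng.normal(0.0, 0.002, V.size)

    window = V < 0.1                      # fixed window of points near V = 0
    X = np.column_stack([np.ones(window.sum()), V[window]])
    beta, res, *_ = np.linalg.lstsq(X, I[window], rcond=None)
    n, p = X.shape
    cov = res[0] / (n - p) * np.linalg.inv(X.T @ X)    # OLS parameter covariance
    print(f"Isc = {beta[0]:.4f} +/- {np.sqrt(cov[0, 0]):.4f} A")
    ```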

  14. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
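
    The exact discrete-calculus construction is not given in the record, but the flavor of a non-iterative exponential fit can be sketched: on evenly spaced samples the constant C cancels in first differences, successive difference ratios estimate e^(Bh), and A and C then follow from a single linear least-squares solve. This is an assumed, simplified scheme in the same spirit, not the paper's algorithm.

    ```python
    import numpy as np

    def fit_exp_noniter(t, y):
        """Fit y = A*exp(B*t) + C on evenly spaced t without iteration."""
        h = t[1] - t[0]
        d = np.diff(y)                  # first differences cancel the offset C
        r = np.median(d[1:] / d[:-1])   # each ratio estimates exp(B*h)
        B = np.log(r) / h
        X = np.column_stack([np.exp(B*t), np.ones_like(t)])
        (A, C), *_ = np.linalg.lstsq(X, y, rcond=None)  # linear solve for A, C
        return A, B, C

    t = np.linspace(0.0, 5.0, 50)
    y = 2.5*np.exp(-0.8*t) + 1.0 \
        + 0.005*np.random.default_rng(2).normal(size=t.size)
    print(fit_exp_noniter(t, y))        # approximately (2.5, -0.8, 1.0)
    ```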

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  16. Fitness function and nonunique solutions in x-ray reflectivity curve fitting: crosserror between surface roughness and mass density

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Mattila, M; Hakkarainen, T; Sormunen, J; Lipsanen, H

    2007-01-01

    Nonunique solutions of the x-ray reflectivity (XRR) curve fitting problem were studied by modelling layer structures with neural networks and designing a fitness function to handle the nonidealities of measurements. Modelled atomic-layer-deposited aluminium oxide film structures were used in the simulations to calculate XRR curves based on Parratt's formalism. This approach reduced the dimensionality of the parameter space and allowed the use of fitness landscapes in the study of nonunique solutions. Fitness landscapes, where the height in a map represents the fitness value as a function of the process parameters, revealed tracks where the local fitness optima lie. The tracks were projected on the physical parameter space thus allowing the construction of the crosserror equation between weakly determined parameters, i.e. between the mass density and the surface roughness of a layer. The equation gives the minimum error for the other parameters which is a consequence of the nonuniqueness of the solution if noise is present. Furthermore, the existence of a possible unique solution in a certain parameter range was found to be dependent on the layer thickness and the signal-to-noise ratio

  17. Fitting the curve in Excel®: Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NARCIS (Netherlands)

    McCraig, M.A.; Osinski, G.R.; Cloutis, E.A.; Flemming, R.L.; Izawa, M.R.M.; Reddy, V.; Fieber-Beyer, S.K.; Pompilio, L.; van der Meer, F.D.; Berger, J.A.; Bramble, M.S.; Applin, D.M.

    2017-01-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to

  18. Curve fitting for RHB Islamic Bank annual net profit

    Science.gov (United States)

    Nadarajan, Dineswary; Noor, Noor Fadiya Mohd

    2015-05-01

    The RHB Islamic Bank net profit data are obtained for 2004 to 2012. Curve fitting is done by treating the data either as exact or as experimental, the latter involving a smoothing process. A higher-order Lagrange polynomial and a cubic spline are constructed with the curve fitting procedure using Maple software. A normality test is performed to check data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level by ANOVA. The residual error and the absolute relative true error are calculated and compared. The optimal model, based on the minimum average error, is proposed.

  19. Prediction of Pressing Quality for Press-Fit Assembly Based on Press-Fit Curve and Maximum Press-Mounting Force

    Directory of Open Access Journals (Sweden)

    Bo You

    2015-01-01

    In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D SolidWorks model using the real dimensions of the microparts and a subsequent FEA model built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structural analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained using the experimental method. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with the maximum deviation being 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality during precision press-fit assembly.

  20. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.

  1. The environmental Kuznets curve. Does one size fit all?

    International Nuclear Information System (INIS)

    List, J.A.; Gallet, C.A.

    1999-01-01

    This paper uses a new panel data set on state-level sulfur dioxide and nitrogen oxide emissions from 1929-1994 to test the appropriateness of the 'one size fits all' reduced-form regression approach commonly used in the environmental Kuznets curve literature. Empirical results provide initial evidence that an inverted-U shape characterizes the relationship between per capita emissions and per capita incomes at the state level. Parameter estimates suggest, however, that previous studies, which restrict cross-sections to undergo identical experiences over time, may be presenting statistically biased results. 25 refs

  2. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard equipment a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. However, in more sophisticated applications, proper weights for data are often essential. For this purpose a program package was created which is very similar to the standard curve-fit program but which includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations, or optionally the uncertainties can be normalized with chi-square. The program is very versatile and allows one to perform quite difficult data analysis in a convenient way with the pocket calculator HP 67/97
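
    The HP 67/97 package described above predates modern scripting, but its core features, weighting each point by its uncertainty and propagating those weights into parameter and interpolation uncertainties, map directly onto today's tools. A small sketch with invented calibration data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Invented calibration points with individual 1-sigma uncertainties.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.3, 11.9])
    sigma = np.array([0.1, 0.1, 0.3, 0.1, 0.4, 0.2])

    def line(x, a, b):
        return a + b*x

    popt, pcov = curve_fit(line, x, y, sigma=sigma, absolute_sigma=True)
    chi2 = np.sum(((y - line(x, *popt)) / sigma)**2)

    # Uncertainty of an interpolated value via error propagation:
    x0 = 3.5
    J = np.array([1.0, x0])             # gradient of a + b*x0 w.r.t. (a, b)
    print(f"y({x0}) = {line(x0, *popt):.2f} +/- {np.sqrt(J @ pcov @ J):.2f}",
          f"(chi2 = {chi2:.2f})")
    ```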

  3. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    Science.gov (United States)

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  4. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

    In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master–slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost ... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  5. Box-Cox transformation for resolving Peelle's Pertinent Puzzle in curve fitting

    International Nuclear Information System (INIS)

    Oh, Soo-Youl

    2003-01-01

    Incorporating the Box-Cox transformation into a least-squares method is presented as one resolution of an anomaly known as Peelle's Pertinent Puzzle. The transformation is a strategy to make non-normally distributed data resemble normal data. A procedure is proposed: transform the measured raw data with an optimized Box-Cox transformation parameter, fit the transformed data using a usual curve fitting method, then inverse-transform the fitted results to final estimates. The generalized least-squares method utilized in GMA is adopted as the curve fitting tool for the test of the proposed procedure. In the procedure, covariance matrices are correspondingly transformed and inverse-transformed with the aid of the error propagation law. In addition to a sensible answer to the Peelle's problem itself, the procedure resulted in reasonable estimates of ⁶Li(n,t) cross sections in the energy region from several keV to 800 keV. Meanwhile, comparisons of the present procedure with that of Chiba and Smith show that both procedures yield estimates very close to each other, for the sample evaluation on ⁶Li(n,t) above as well as for the Peelle's problem. The two procedures, however, are conceptually very different, and further discussions would be needed for a consensus on this issue of resolving the Puzzle. It is also pointed out that the transformation is applicable not only to a least-squares method but also to other parameter estimation methods such as a usual Bayesian approach formulated with an assumption of normality of the probability density function. (author)
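
    The transform-fit-inverse-transform recipe above is easy to reproduce in miniature. The sketch below applies SciPy's Box-Cox routines around an ordinary straight-line fit on invented positive, skewed data; the paper's actual setting wraps a generalized least-squares fit and transforms the covariance matrices as well, which is omitted here.

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(4)
    x = np.linspace(0.1, 10.0, 60)
    y = np.exp(0.3*x) * rng.lognormal(0.0, 0.2, x.size)  # positive, skewed data

    y_t, lam = boxcox(y)                  # 1) transform with optimized lambda
    coef = np.polyfit(x, y_t, 1)          # 2) ordinary least-squares fit
    y_fit = inv_boxcox(np.polyval(coef, x), lam)  # 3) inverse-transform the fit
    print("optimized Box-Cox lambda =", round(lam, 3))
    ```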

  6. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  7. Potential errors when fitting experience curves by means of spreadsheet software

    International Nuclear Information System (INIS)

    Sark, W.G.J.H.M. van; Alsema, E.A.

    2010-01-01

    Progress ratios (PRs) are widely used in forecasting development of many technologies; they are derived from historical data represented in experience curves. Fitting the double logarithmic graphs is easily done with spreadsheet software like Microsoft Excel, by adding a trend line to the graph. However, it is unknown to many that these data are transformed to linear data before a fit is performed. This leads to erroneous results or a transformation bias in the PR, as we demonstrate using the experience curve for photovoltaic technology: logarithmic transformation leads to overestimates of progress ratios and underestimates of goodness of fit. Therefore, other graphing and analysis software is recommended.
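
    The transformation bias the record describes can be demonstrated directly: fit the same synthetic cost data once by linear regression on the log-log values (what a spreadsheet trend line does) and once by a direct non-linear fit, then compare the implied progress ratios PR = 2^b. All numbers below are invented for the illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)
    cum = np.logspace(0, 4, 40)                        # cumulative production
    cost = 100.0 * cum**-0.32 + rng.normal(0.0, 0.5, cum.size)  # additive noise

    # Spreadsheet-style fit: linear regression on the log-log data.
    b_log = np.polyfit(np.log(cum), np.log(cost), 1)[0]

    # Direct non-linear fit of cost = a * cum**b on the untransformed data.
    (a, b_nl), _ = curve_fit(lambda q, a, b: a*q**b, cum, cost, p0=[100.0, -0.3])

    print(f"PR from log-transformed fit: {2**b_log:.3f}")
    print(f"PR from direct fit:          {2**b_nl:.3f}")
    ```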

  8. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    Science.gov (United States)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.
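
    The record does not spell out how the circles are fitted to the fk-spectrum in the Nyquist plane, so the sketch below uses a standard algebraic (Kasa) circle fit on invented arc samples as a plausible building block; the actual method then extracts dispersion and attenuation from the angular sweep of the central angle, which is not reproduced here.

    ```python
    import numpy as np

    def circle_fit(x, y):
        """Algebraic (Kasa) circle fit; returns center (a, b) and radius r."""
        M = np.column_stack([x, y, np.ones_like(x)])
        (c1, c2, c3), *_ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)
        a, b = c1/2.0, c2/2.0
        return a, b, np.sqrt(c3 + a**2 + b**2)

    # Invented noisy samples of one modal arc in the Nyquist plane.
    rng = np.random.default_rng(6)
    theta = np.linspace(0.3, 2.5, 40)
    x = 1.0 + 2.0*np.cos(theta) + rng.normal(0.0, 0.02, theta.size)
    y = -2.0 + 2.0*np.sin(theta) + rng.normal(0.0, 0.02, theta.size)
    print(circle_fit(x, y))   # approximately (1.0, -2.0, 2.0)
    ```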

  9. Background does not significantly affect power-exponential fitting of gastric emptying curves

    International Nuclear Information System (INIS)

    Jonderko, K.

    1987-01-01

    Using a procedure enabling the assessment of background radiation, research was done to elucidate the course of changes in background activity during gastric emptying measurements. Attention was focused on the changes in the shape of power-exponential fitted gastric emptying curves after correction for background was performed. The observed pattern of background counts made it possible to explain the shifts in the parameters characterizing the power-exponential curves connected with background correction. It was concluded that background has a negligible effect on the power-exponential fitting of gastric emptying curves. (author)

  10. Unified approach for estimating the probabilistic design S-N curves of three commonly used fatigue stress-life models

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Wang Jinnuo; Gao Qing

    2001-01-01

    A unified approach, referred to as the general maximum likelihood method, is presented for estimating the probabilistic design S-N curves and their confidence bounds for three commonly used fatigue stress-life models, namely the three-parameter, Langer and Basquin models. The curves are described by a general form of the mean and standard deviation S-N curves of the logarithm of fatigue life. Different from existing methods, i.e., the conventional method and the classical maximum likelihood method, the present approach considers the statistical characteristics of the whole test data set. The parameters of the mean curve are first estimated by the least squares method; then, the parameters of the standard deviation curve are evaluated by a mathematical programming method so as to agree with the maximum likelihood principle. Fit quality of the curves is assessed by the fitted relation coefficient, the total fitted standard error and the confidence bounds. Application to the virtual stress amplitude versus crack-initiation life data of a nuclear engineering material, Chinese 1Cr18Ni9Ti stainless steel pipe-weld metal, has indicated the validity of the approach for S-N data where both S and N show the character of random variables. Applications to the two sets of S-N data for Chinese 45 carbon steel notched specimens (k_t = 2.0) have indicated the validity of the present approach for test results obtained from group fatigue tests and from maximum likelihood fatigue tests, respectively. In these applications it was revealed that, in general, the fit is best for the three-parameter model, slightly inferior for the Langer relation and poor for the Basquin equation. Relative to the existing methods, the present approach gives a better fit. In addition, the possible non-conservative predictions of the existing methods, which result from the influence of local statistical characteristics of the data, are also overcome by the present approach

  11. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

    Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. Comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves of different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in quantitative studies on the radiosensitivity and repair capacity of cells

  12. A non-iterative method for fitting decay curves with background

    International Nuclear Information System (INIS)

    Mukoyama, T.

    1982-01-01

    A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by the use of the difference equation and parameters are determined by the standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
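
    Mukoyama's linearization can be written down compactly: for evenly spaced samples, y = A·e^(−λt) + C obeys the difference equation y[i+1] = C(1−q) + q·y[i] with q = e^(−λΔt), so one ordinary linear least-squares fit recovers all three parameters without iteration. A sketch on synthetic count data (the specific numbers are invented):

    ```python
    import numpy as np

    def decay_fit(t, y):
        """Non-iterative fit of y = A*exp(-lam*t) + C on evenly spaced t."""
        dt = t[1] - t[0]
        q, a = np.polyfit(y[:-1], y[1:], 1)   # y[i+1] = a + q*y[i]
        lam = -np.log(q) / dt                 # slope q = exp(-lam*dt)
        C = a / (1.0 - q)                     # fixed point of the recursion
        w = np.exp(-lam*t)
        A = np.dot(w, y - C) / np.dot(w, w)   # least-squares amplitude
        return A, lam, C

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 10.0, 101)
    y = 1000.0*np.exp(-0.5*t) + 50.0 + rng.normal(0.0, 5.0, t.size)
    print(decay_fit(t, y))                    # approximately (1000, 0.5, 50)
    ```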

  13. Data fitting by G1 rational cubic Bézier curves using harmony search

    Directory of Open Access Journals (Sweden)

    Najihah Mohamed

    2015-07-01

    A metaheuristic algorithm called Harmony Search (HS) is implemented for data fitting by rational cubic Bézier curves. HS is a derivative-free real-parameter optimization algorithm that draws inspiration from the musical improvisation process of searching for a perfect state of harmony, and it is suitable for multivariate non-linear optimization problems. Fitting is achieved using rational cubic Bézier curves with G1 continuity at every joint between segments of the whole data set; this approach contributes significantly to making the technique automated. HS is used to optimize the positions of the middle points and the values of the shape parameters. Test outline images and a comparative experimental analysis are presented to show the effectiveness and robustness of the proposed method. Statistical testing between HS and two other metaheuristic algorithms is used in the analysis of several outline images. All of the algorithms improvised a near-optimal solution, but the result obtained by HS is better than the results of the other two algorithms.

  14. Application of tanh curve fitting to toughness data

    International Nuclear Information System (INIS)

    Sakai, Yuzuru; Ogura, Nobukazu

    1985-01-01

    Curve-fitting regression procedures for toughness data have been examined. The objectives of curve fitting in the context of the study of nuclear pressure vessel steels are (1) convenient summarization of test data to permit comparison of materials and testing methods; (2) development of a statistical base concerning the data; (3) surveying the relationships between Charpy data and fracture toughness data; (4) estimation of the fracture toughness level from Charpy absorbed energy data. The computational procedures using the tanh function have been applied to the toughness data (Charpy absorbed energy, static fracture toughness, dynamic fracture toughness, crack arrest toughness) of A533B cl.1 and A508 cl.3 steels. The results of the analysis show the statistical features of the material toughness and give a method for estimating the fracture toughness level from Charpy absorbed energy data. (author)

  15. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to the experimental points using the least-squares Levenberg-Marquardt method. The main advantage of GlowFit is its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in so-called pattern files. GlowFit is a user-friendly, Microsoft Windows-operated program. Its graphic interface enables easy, intuitive manipulation of glow-peaks at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)

  16. Application of numerical methods in spectroscopy : fitting of the curve of thermoluminescence

    International Nuclear Information System (INIS)

    RANDRIAMANALINA, S.

    1999-01-01

    The method of nonlinear least squares is one of the mathematical tools widely employed in spectroscopy; it is used for the determination of the parameters of a model. On the other hand, the spline function is among the fitting functions that introduce the smallest error; it is used for the calculation of the area under the curve. We present an application of these methods, with details of the corresponding algorithms, to the fitting of the thermoluminescence curve. [fr]

  17. PLOTnFIT: A BASIC program for data plotting and curve fitting

    Energy Technology Data Exchange (ETDEWEB)

    Schiffgens, J O

    1989-10-01

    PLOTnFIT is a BASIC program to be used with an IBM or IBM-compatible personal computer (PC) for plotting and fitting curves to measured or observed data, for both extrapolation and interpolation. It uses the least squares method to calculate the coefficients of nth-degree polynomials (e.g., up to 10th degree) of basis functions, so that each polynomial fits the data in a least squares sense, then plots the data and the polynomial that the user decides best represents them. PLOTnFIT is very versatile. It can be used to generate linear, semilog, and log-log graphs and can automatically scale the coordinate axes to suit the data. It can plot more than one data set on a graph (e.g., up to 8 data sets) and more data points than a user is likely to put on one graph (e.g., up to 225 points). A PC diskette containing (1) READ1ST.PNF (a summary of this NUREG), (2) INI06891.SIS and FOL06891.SIS (two data files), and (3) PLOTNFIT.4TH (the latest version of the program) may be obtained from the National Energy Software Center, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439. (author)

  18. Cuckoo Search with Lévy Flights for Weighted Bayesian Energy Functional Optimization in Global-Support Curve Data Fitting

    Directory of Open Access Journals (Sweden)

    Akemi Gálvez

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  19. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model, which will successfully fit a wide range of assay data and which can be run on a mini-computer, is described. The latter sophisticated model also provides estimates of binding-site concentrations and the values of the respective equilibrium constants; the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de]

  20. A sigmoidal fit for pressure-volume curves of idiopathic pulmonary fibrosis patients on mechanical ventilation: clinical implications

    Directory of Open Access Journals (Sweden)

    Juliana C. Ferreira

    2011-01-01

    OBJECTIVE: Respiratory pressure-volume curves fitted to exponential equations have been used to assess disease severity and prognosis in spontaneously breathing patients with idiopathic pulmonary fibrosis. Sigmoidal equations have been used to fit pressure-volume curves for mechanically ventilated patients but not for idiopathic pulmonary fibrosis patients. We compared a sigmoidal model and an exponential model to fit pressure-volume curves from mechanically ventilated patients with idiopathic pulmonary fibrosis. METHODS: Six idiopathic pulmonary fibrosis patients and five controls underwent inflation pressure-volume curves using the constant-flow technique during general anesthesia prior to open lung biopsy or thymectomy. We identified the lower and upper inflection points and fit the curves with an exponential equation, V = A - B·e^(-kP), and a sigmoid equation, V = a + b/(1 + e^(-(P-c)/d)). RESULTS: The mean lower inflection point for idiopathic pulmonary fibrosis patients was significantly higher (10.5 ± 5.7 cm H2O) than that of controls (3.6 ± 2.4 cm H2O). The sigmoidal equation fit the pressure-volume curves of the fibrotic and control patients well, but the exponential equation fit the data well only when points below 50% of the inspiratory capacity were excluded. CONCLUSION: The elevated lower inflection point and the sigmoidal shape of the pressure-volume curves suggest that respiratory system compliance is decreased close to end-expiratory lung volume in idiopathic pulmonary fibrosis patients under general anesthesia and mechanical ventilation. The sigmoidal fit was superior to the exponential fit for inflation pressure-volume curves of anesthetized patients with idiopathic pulmonary fibrosis and could be useful for guiding mechanical ventilation during general anesthesia in this condition.
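
    The two candidate equations quoted above can be compared directly on synthetic data. The sketch below fits both forms to an invented inflation P-V curve with a pronounced lower inflection point; all parameter values and the noise level are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def expo(P, A, B, k):
        return A - B*np.exp(-k*P)

    def sigm(P, a, b, c, d):
        return a + b/(1.0 + np.exp(-(P - c)/d))

    # Invented inflation P-V points (cm H2O, mL) with a lower inflection point.
    rng = np.random.default_rng(8)
    P = np.linspace(0.0, 35.0, 15)
    V = 50.0 + 1400.0/(1.0 + np.exp(-(P - 16.0)/4.0)) \
        + rng.normal(0.0, 15.0, P.size)

    for name, f, p0 in [("exponential", expo, [1500.0, 1400.0, 0.1]),
                        ("sigmoidal",   sigm, [0.0, 1500.0, 15.0, 5.0])]:
        popt, _ = curve_fit(f, P, V, p0=p0, maxfev=10000)
        rss = np.sum((V - f(P, *popt))**2)
        print(f"{name}: residual sum of squares = {rss:.0f}")
    ```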

  1. A three-parameter langmuir-type model for fitting standard curves of sandwich enzyme immunoassays with special attention to the α-fetoprotein assay

    NARCIS (Netherlands)

    Kortlandt, W.; Endeman, H.J.; Hoeke, J.O.O.

    In a simplified approach to the reaction kinetics of enzyme-linked immunoassays, a Langmuir-type equation y = [ax/(b + x)] + c was derived. This model proved to be superior to logit-log and semilog models in the curve-fitting of standard curves. An assay for α-fetoprotein developed in our laboratory
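
    The three-parameter equation quoted above is simple enough to fit and invert directly. A hedged sketch with made-up standard-curve data, including reading an unknown concentration off the fitted curve:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(x, a, b, c):
        """Langmuir-type standard curve y = a*x/(b + x) + c."""
        return a*x/(b + x) + c

    # Invented standards: concentration x vs. measured response y.
    x = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
    y = np.array([0.05, 0.32, 0.55, 0.95, 1.30, 1.55, 1.72])

    (a, b, c), _ = curve_fit(langmuir, x, y, p0=[2.0, 50.0, 0.05])

    # Read an unknown off the curve by inverting y = a*x/(b + x) + c:
    y_u = 1.0
    x_u = b*(y_u - c) / (a - (y_u - c))
    print(f"estimated concentration for response {y_u}: {x_u:.1f}")
    ```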

  2. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks, and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
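
    The record leaves the feedback loop underspecified, so the sketch below uses one plausible reading: alternately refit each peak against the data minus the current estimate of the other, repeating until the residual settles. Two synthetic Gaussian peaks in the 321-327 nm window stand in for the Cu-Fe lines; everything here is illustrative rather than the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, A, mu, s):
        return A*np.exp(-0.5*((x - mu)/s)**2)

    # Two invented overlapping lines in the 321-327 nm window plus noise.
    rng = np.random.default_rng(9)
    x = np.linspace(321.0, 327.0, 400)
    y = gauss(x, 1.0, 323.5, 0.40) + gauss(x, 0.7, 324.3, 0.35) \
        + rng.normal(0.0, 0.01, x.size)

    p1, p2 = [1.0, 323.4, 0.5], [0.6, 324.4, 0.5]   # rough initial peaks
    for _ in range(10):                  # feed the residual back and refit
        p1, _ = curve_fit(gauss, x, y - gauss(x, *p2), p0=p1)
        p2, _ = curve_fit(gauss, x, y - gauss(x, *p1), p0=p2)

    resid = y - gauss(x, *p1) - gauss(x, *p2)
    print("centers:", round(p1[1], 3), round(p2[1], 3),
          "| RMS residual:", round(float(np.sqrt(np.mean(resid**2))), 4))
    ```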

  3. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  4. THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING

    Directory of Open Access Journals (Sweden)

    M.T. Adithia

    2015-01-01

    The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, then the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peak is significant. We conclude that, using the Gaussian curve fitting method, the subjective qualification of peak significance can be objectified. Thus, better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak sizes, especially the width and height, on the significance of a particular peak.

  5. Reference Curves for Field Tests of Musculoskeletal Fitness in U.S. Children and Adolescents: The 2012 NHANES National Youth Fitness Survey.

    Science.gov (United States)

    Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C

    2017-08-01

    Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017. The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via a submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created, and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys' and girls' curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion-referenced standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over

  6. Genetic algorithm using independent component analysis in x-ray reflectivity curve fitting of periodic layer structures

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Tilli, J-M; Sormunen, J; Mattila, M; Hakkarainen, T; Lipsanen, H

    2007-01-01

    A novel genetic algorithm (GA) utilizing independent component analysis (ICA) was developed for x-ray reflectivity (XRR) curve fitting. EFICA was used to reduce mutual information, or interparameter dependences, during the combinatorial phase. The performance of the new algorithm was studied by fitting trial XRR curves to target curves which were computed using realistic multilayer models. The median convergence properties of the conventional GA, a GA using principal component analysis, and the novel GA were compared. The GA using ICA was found to outperform the other methods for problems having 41 or more parameters to be fitted, without additional XRR curve calculations. The computational complexity of the conventional methods was linear, but the novel method had a quadratic computational complexity due to the applied ICA method, which sets a practical limit on the dimensionality of the problem to be solved. However, the novel algorithm had the best capability to extend the fitting analysis based on Parratt's formalism to multiperiodic layer structures

  7. ROBUST DECLINE CURVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sutawanir Darwis

    2012-05-01

    Full Text Available Empirical decline curve analysis of oil production data gives reasonable answer in hyperbolic type curves situations; however the methodology has limitations in fitting real historical production data in present of unusual observations due to the effect of the treatment to the well in order to increase production capacity. The development ofrobust least squares offers new possibilities in better fitting production data using declinecurve analysis by down weighting the unusual observations. This paper proposes a robustleast squares fitting lmRobMM approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. For case study, we usethe oil production data at TBA Field West Java. The results demonstrated that theapproach is suitable for decline curve fitting and offers a new insight in decline curve analysis in the present of unusual observations.

  8. PLOTNFIT.4TH, Data Plotting and Curve Fitting by Polynomials

    International Nuclear Information System (INIS)

    Schiffgens, J.O.

    1990-01-01

    1 - Description of program or function: PLOTnFIT is used for plotting and analyzing data by fitting nth-degree polynomials of basis functions to the data interactively and printing graphs of the data and the polynomial functions. It can be used to generate linear, semi-log, and log-log graphs and can automatically scale the coordinate axes to suit the data. Multiple data sets may be plotted on a single graph. An auxiliary program, READ1ST, is included which produces an on-line summary of the information contained in the PLOTnFIT reference report. 2 - Method of solution: PLOTnFIT uses the least squares method to calculate the coefficients of nth-degree (up to 10th degree) polynomials of 11 selected basis functions such that each polynomial fits the data in a least squares sense. The procedure incorporated in the code uses a linear combination of orthogonal polynomials to avoid ill-conditioning and to perform the curve fitting task with single-precision arithmetic. 3 - Restrictions on the complexity of the problem - Maxima of: 225 data points per job (or graph), including all data sets; 8 data sets (or tasks) per job (or graph)
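
    The orthogonal-polynomial trick mentioned in the method of solution is still the standard cure for ill-conditioning. In NumPy it corresponds to the numpy.polynomial fit routines, which rescale the abscissa to [-1, 1] before solving; a hedged sketch on invented, poorly scaled data:

    ```python
    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(11)
    x = np.linspace(1900.0, 2000.0, 50)          # poorly scaled abscissa
    y = 1e-3*(x - 1950.0)**3 + rng.normal(0.0, 50.0, x.size)

    # A naive high-degree monomial fit on raw x is ill-conditioned and may
    # trigger a RankWarning; Polynomial.fit first maps x onto [-1, 1].
    p = Polynomial.fit(x, y, deg=9)
    print(p(1950.0))                             # evaluate the fitted polynomial
    ```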

  9. Potential errors when fitting experience curves by means of spreadsheet software

    NARCIS (Netherlands)

    van Sark, W.G.J.H.M.|info:eu-repo/dai/nl/074628526; Alsema, E.A.|info:eu-repo/dai/nl/073416258

    2010-01-01

    Progress ratios (PRs) are widely used in forecasting development of many technologies; they are derived from historical data represented in experience curves. Fitting the double logarithmic graphs is easily done with spreadsheet software like Microsoft Excel, by adding a trend line to the graph.

  10. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    Cardoso, Vanderlei

    2002-01-01

    The present work describes several methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak area, different methodologies were developed to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)

  11. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)

    2017-03-20

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.

  12. Box-Cox transformation for resolving the Peelle's Pertinent Puzzle in a curve fitting

    International Nuclear Information System (INIS)

    Oh, S. Y.; Seo, C. G.

    2004-01-01

    Incorporating the Box-Cox transformation into a curve fitting is presented as one of the methods for resolving an anomaly known as the Peelle's Pertinent Puzzle in the nuclear data community. The Box-Cox transformation is a strategy to make non-normally distributed data resemble normally distributed data. The proposed method consists of the following steps: transform the raw data to be fitted with the optimized Box-Cox transformation parameter, fit the transformed data using a conventional curve fitting tool, the least-squares method in this study, then inverse-transform the fitted results to the final estimates. Covariance matrices are correspondingly transformed and inverse-transformed with the aid of the law of error propagation. In addition to a sensible answer to the Puzzle, the proposed method resulted in reasonable estimates for a test evaluation with pseudo-experimental 6Li(n,t) cross sections in the energy region from several keV to 800 keV, while the GMA code resulted in the systematic underestimates that characterize the Puzzle. Meanwhile, it is observed that the present method and the Chiba-Smith method yield almost the same estimates for the test evaluation on 6Li(n,t). Conceptually, however, the two methods are very different from each other and further discussions are needed for a consensus on the issue of how to resolve the Puzzle. (authors)
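    A minimal sketch of the proposed pipeline, assuming scipy's one-parameter Box-Cox machinery stands in for the optimized transformation step; the data and the linear model are illustrative, and the covariance propagation described in the abstract is omitted.

```python
# Sketch: Box-Cox transform -> conventional LS fit -> inverse transform.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

x = np.linspace(1.0, 10.0, 30)
y = np.exp(0.3 * x) + np.random.default_rng(2).normal(0.0, 0.5, x.size)
y = np.clip(y, 1e-6, None)          # Box-Cox requires positive data

y_t, lam = boxcox(y)                # optimal lambda by maximum likelihood
coef = np.polyfit(x, y_t, 1)        # conventional LS fit in transformed space
y_fit = inv_boxcox(np.polyval(coef, x), lam)
print("lambda =", lam)
```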

  13. CABAS: A freely available PC program for fitting calibration curves in chromosome aberration dosimetry

    International Nuclear Information System (INIS)

    Deperas, J.; Szluiska, M.; Deperas-Kaminska, M.; Edwards, A.; Lloyd, D.; Lindholm, C.; Romm, H.; Roy, L.; Moss, R.; Morand, J.; Wojcik, A.

    2007-01-01

    The aim of biological dosimetry is to estimate the dose, and the associated uncertainty, to which an accident victim was exposed. This process requires the use of the maximum-likelihood method for fitting a calibration curve, a procedure that is not implemented in most statistical computer programs. Several laboratories have produced their own programs, but these are frequently not user-friendly and not available to outside users. We developed software for fitting a linear-quadratic dose-response relationship by the method of maximum likelihood and for estimating a dose from the number of aberrations observed. The program, called CABAS, consists of the main curve-fitting and dose-estimating module and modules for calculating the dose in cases of partial body exposure, for estimating the minimum number of cells necessary to detect a given dose of radiation, and for calculating the dose in the case of a protracted exposure. (authors)
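    CABAS itself is a compiled PC program; the core computation it performs can be sketched as a Poisson maximum-likelihood fit of the linear-quadratic yield curve. The counts below are invented and the optimizer choice is an assumption, not CABAS's internal algorithm.

```python
# Sketch: ML fit of Y(D) = c + alpha*D + beta*D**2 for aberration yields,
# assuming Poisson-distributed dicentric counts.
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])    # Gy
cells = np.array([2000, 1500, 1000, 800, 600, 500])
dicentrics = np.array([2, 15, 35, 90, 160, 250])   # illustrative counts

def neg_log_lik(p):
    c, a, b = p
    lam = cells * (c + a * dose + b * dose**2)     # expected counts per dose point
    if np.any(lam <= 0):
        return np.inf
    return float(np.sum(lam - dicentrics * np.log(lam)))  # Poisson NLL (up to const)

res = minimize(neg_log_lik, x0=[0.001, 0.02, 0.05], method="Nelder-Mead")
print("c, alpha, beta =", res.x)
```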

  14. A Bayesian Approach to Multistage Fitting of the Variation of the Skeletal Age Features

    Directory of Open Access Journals (Sweden)

    Dong Hua

    2009-01-01

    Full Text Available Accurate assessment of skeletal maturity is important clinically. Skeletal age assessment is usually based on features encoded in ossification centers. Therefore, it is critical to design a mechanism that captures as many characteristics of the features as possible. We have observed that, given a feature, there exist stages of the skeletal age such that the variation pattern of the feature differs across these stages. Based on this observation, we propose a Bayesian cut fitting to describe features in response to the skeletal age. With our approach, appropriate positions for stage separation are determined automatically by a Bayesian approach, and a model is used to fit the variation of a feature within each stage. Our experimental results show that the proposed method surpasses the traditional fitting using only one line or one curve, not only in the efficiency and accuracy of fitting but also in global and local feature characterization.

  15. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
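    The paper's treatment of standard masses as fitted quantities is, in modern terms, an errors-in-variables problem; a hedged sketch using scipy's orthogonal distance regression (not the VA02A subroutine of the original) follows, with invented calibration data.

```python
# Sketch: calibration fit that weights both response errors and mass errors.
import numpy as np
from scipy import odr

mass = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])      # mg, gravimetric standards
mass_err = 0.002 * mass                               # 0.2% mass uncertainty
resp = 1500.0 * mass + 30.0 + np.random.default_rng(3).normal(0, 5, mass.size)
resp_err = np.full(mass.size, 5.0)                    # system measurement error

model = odr.Model(lambda p, x: p[0] * x + p[1])       # linear calibration curve
data = odr.RealData(mass, resp, sx=mass_err, sy=resp_err)
out = odr.ODR(data, model, beta0=[1000.0, 0.0]).run()
print("slope, intercept =", out.beta, "+/-", out.sd_beta)
```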

  16. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    Science.gov (United States)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
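    A minimal sketch, assuming a sliding-window variance as the statistical weight generator, of the weighted second-order polynomial fit described above; the residual-function data are synthetic and the windowing rule is an illustrative assumption, not the paper's exact procedure.

```python
# Sketch: weighted 2nd-order polynomial fit of a ragged residual function.
import numpy as np

f = np.linspace(1.0, 50.0, 200)                       # frequency axis
r = 1e-6 + 4e-10 * f**2                               # smooth residual function
r += np.random.default_rng(4).normal(0, 2e-8, f.size) * (1 + (f > 30) * 4)

# Local variance from a sliding window of neighbors -> weights = 1/variance
half = 3
var = np.array([np.var(r[max(0, i - half):i + half + 1]) for i in range(r.size)])
w = 1.0 / np.maximum(var, 1e-30)

coef = np.polyfit(f, r, 2, w=np.sqrt(w))  # polyfit expects w[i] ~ 1/sigma_i
print("residual flexibility (constant term) =", coef[2])
```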

  17. Methods for extracting dose response curves from radiation therapy data. I. A unified approach

    International Nuclear Information System (INIS)

    Herring, D.F.

    1980-01-01

    This paper discusses an approach to fitting models to radiation therapy data in order to extract dose response curves for tumor local control and normal tissue damage. The approach is based on the method of maximum likelihood and is illustrated by several examples. A general linear logistic equation which leads to the Ellis nominal standard dose (NSD) equation is discussed; the fit of this equation to experimental data for mouse foot skin reactions produced by fractionated irradiation is described. A logistic equation based on the concept that normal tissue reactions are associated with the surviving fraction of cells is also discussed, and the fit of this equation to the same set of mouse foot skin reaction data is also described. These two examples illustrate the importance of choosing a model based on underlying mechanisms when one seeks to attach biological significance to a model's parameters

  18. Gamma-ray Burst X-ray Flares Light Curve Fitting

    Science.gov (United States)

    Aubain, Jonisha

    2018-01-01

    Gamma-Ray Bursts (GRBs) are the most luminous explosions in the Universe. These electromagnetic explosions produce jets, demonstrated by a short burst of prompt gamma-ray emission followed by a broadband afterglow. There are sharp increases of flux in the X-ray light curves, known as flares, that occur in about 50% of the afterglows. In this study, we characterized all of the X-ray afterglows detected by the Swift X-ray Telescope (XRT), whether with flares or without. We fit flares to the Norris function (Norris et al. 2005) and power laws with breaks where necessary (Racusin et al. 2009). After fitting the Norris function and power laws, we search for the residual pattern detected in prompt GRB pulses (Hakkila et al. 2014, 2015, 2017) that may indicate a common signature of shock physics. If we find the same signature in flares and prompt pulses, it provides insight into what causes them, as well as how these flares are produced.
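    A hedged sketch of fitting the Norris et al. (2005) pulse shape to a single flare with scipy; the flare parameters and noise level are invented, and the broken-power-law afterglow component is omitted for brevity.

```python
# Sketch: Norris pulse fit. With the exp(2*sqrt(tau1/tau2)) normalization,
# I(t) = A * exp(2*sqrt(tau1/tau2)) * exp(-tau1/(t-ts) - (t-ts)/tau2) for t > ts,
# and A is the peak intensity, reached at t = ts + sqrt(tau1*tau2).
import numpy as np
from scipy.optimize import curve_fit

def norris(t, amp, ts, tau1, tau2):
    dt = np.clip(t - ts, 1e-9, None)              # avoid division by zero
    lam = np.exp(2.0 * np.sqrt(tau1 / tau2))
    return np.where(t > ts, amp * lam * np.exp(-tau1 / dt - dt / tau2), 0.0)

t = np.linspace(50.0, 400.0, 120)                 # seconds after trigger
flux = norris(t, 3e-9, 80.0, 40.0, 60.0)
flux += np.random.default_rng(5).normal(0, 1e-10, t.size)

popt, pcov = curve_fit(norris, t, flux, p0=[2e-9, 70.0, 30.0, 50.0])
print("A, ts, tau1, tau2 =", popt)
```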

  19. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Directory of Open Access Journals (Sweden)

    N. I. Tananaev

    2015-03-01

    Full Text Available Published suspended sediment data for Arctic rivers is scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as an optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
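    A minimal sketch of the preferred approach: a power rating curve fitted in data space with the Levenberg-Marquardt algorithm, avoiding the log-transform bias discussed above. The discharge and sediment numbers are invented and no bias correction is included.

```python
# Sketch: non-linear power rating curve Qs = a * Q**b via Levenberg-Marquardt.
import numpy as np
from scipy.optimize import curve_fit

def rating(q, a, b):
    return a * q**b

q = np.array([50, 120, 300, 700, 1500, 3200, 6000], dtype=float)   # discharge
qs = 0.05 * q**1.4 * np.random.default_rng(6).lognormal(0, 0.2, q.size)

popt, _ = curve_fit(rating, q, qs, p0=[0.01, 1.0], method="lm")
print("a, b =", popt)
```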

  20. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Science.gov (United States)

    Tananaev, N. I.

    2015-03-01

    Published suspended sediment data for Arctic rivers is scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as an optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.

  1. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    Science.gov (United States)

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0

  2. Polynomial curve fitting for control rod worth using least square numerical analysis

    International Nuclear Information System (INIS)

    Muhammad Husamuddin Abdul Khalil; Mark Dennis Usang; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh

    2012-01-01

    RTP must have sufficient excess reactivity to compensate for negative reactivity feedback effects, such as those caused by the fuel temperature and power defects of reactivity and by fuel burn-up, and to allow full power operation for a predetermined period of time. To compensate for this excess reactivity, it is necessary to introduce an amount of negative reactivity by adjusting or controlling the control rods at will. Control rod worth depends largely upon the value of the neutron flux at the location of the rod and is reflected by a polynomial curve. The purpose of this paper is to work out the polynomial curve fitting using least squares numerical techniques via a MATLAB-compatible language. (author)

  3. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. The error-bounded sampling points reconstruction can be achieved by the knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of initial knot vector has been associated with the ultimate necessary number of knots. This paper provides a novel initial knots selection method to condense the knot vector required for the error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with intensively uniform knot vector to substitute the description of the feature of the sampling points. The feature integral of Gs is built as a monotone increasing function in an analytic form. Then, the initial knots are selected according to the constant increment of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and the measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of ultimate knots available. (paper)
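    The knot-economy idea can be loosely illustrated with scipy's smoothing-spline routine, which also inserts knots only as needed to satisfy an error bound; this is a stand-in, not the paper's feature-integral knot selection, and the tolerance rule shown is an assumption.

```python
# Sketch: error-bounded B-spline fitting of sampled points.
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + np.random.default_rng(7).normal(0, 0.01, x.size)

tol = 0.02                                   # target error level for the fit
tck = splrep(x, y, k=3, s=x.size * tol**2)   # s bounds the residual sum of squares
print("number of knots:", len(tck[0]))
print("max fit error:", np.max(np.abs(splev(x, tck) - y)))
```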

  4. Automatic Curve Fitting Based on Radial Basis Functions and a Hierarchical Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    G. Trejo-Caballero

    2015-01-01

    Full Text Available Curve fitting is a very challenging problem that arises in a wide variety of scientific and engineering applications. Given a set of data points, possibly noisy, the goal is to build a compact representation of the curve that corresponds to the best estimate of the unknown underlying relationship between two variables. Despite the large number of methods available to tackle this problem, it remains challenging and elusive. In this paper, a new method to tackle such a problem using strictly a linear combination of radial basis functions (RBFs) is proposed. To be more specific, we divide the parameter search space into linear and nonlinear parameter subspaces. We use a hierarchical genetic algorithm (HGA) to minimize a model selection criterion, which allows us to automatically and simultaneously determine the nonlinear parameters and then, by the least-squares method through the Singular Value Decomposition, to compute the linear parameters. The method is fully automatic and does not require subjective parameters, for example, a smoothing factor or centre locations, to perform the solution. In order to validate the efficacy of our approach, we perform an experimental study with several tests on benchmark smooth functions. A comparative analysis with two successful methods based on RBF networks has been included.

  5. Validation of curve-fitting method for blood retention of 99mTc-GSA. Comparison with blood sampling method

    International Nuclear Information System (INIS)

    Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa

    1997-01-01

    We investigated a curve-fitting method for the rate of blood retention of 99mTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99mTc-GSA scanning. After normalization of the y-intercept as 100 percent, a biexponential regression curve for the precordial time-activity curve provided the percent injected dose (%ID) of 99mTc-GSA in the blood without blood sampling. The discrepancy between %ID obtained by the curve-fitting method and that by multiple blood samples was minimal in normal volunteers: 3.1±2.1% (mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13; vs. 31.9±2.8%, n=7, p 99mTc-GSA and the plasma retention rate for indocyanine green (r=-0.869, p 99mTc-GSA and could be a substitute for the blood sampling method. (author)

  6. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  7. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    Full Text Available B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there exist some demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the most efficient computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated by using various numerical experimental data, with and without simulated noise, which were generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against the existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied for fitting any type of curve, ranging from smooth ones to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.

  8. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    Science.gov (United States)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises mainly used manual inspection methods to determine the quality of assembly, and cases of misjudgment occurred. For this reason, research on the standard is carried out, and the automatic judgment of the press-fit curve is analysed and designed, so as to provide guidance for locomotive wheel-set production based on the AAR standard.

  9. A new method for curve fitting to the data with low statistics not using the chi2-method

    International Nuclear Information System (INIS)

    Awaya, T.

    1979-01-01

    A new method which does not use the χ²-fitting method is investigated in order to fit a theoretical curve to data with low statistics. The method is compared with the usual and modified χ²-fitting methods. The analyses are done for data which are generated by computers. It is concluded that the new method gives good results in all the cases. (Auth.)

  10. Numerical generation of boundary-fitted curvilinear coordinate systems for arbitrarily curved surfaces

    International Nuclear Information System (INIS)

    Takagi, T.; Miki, K.; Chen, B.C.J.; Sha, W.T.

    1985-01-01

    A new method is presented for numerically generating boundary-fitted coordinate systems for arbitrarily curved surfaces. The three-dimensional surface has been expressed by functions of two parameters using the geometrical modeling techniques in computer graphics. This leads to new quasi-one- and two-dimensional elliptic partial differential equations for coordinate transformation. Since the equations involve the derivatives of the surface expressions, the grids generated by the equations distribute on the surface depending on its slope and curvature. A computer program GRID-CS based on the method was developed and applied to a surface of the second order, a torus and a surface of a primary containment vessel for a nuclear reactor. These applications confirm that GRID-CS is a convenient and efficient tool for grid generation on arbitrarily curved surfaces

  11. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

    The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets, possibly degenerate, with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit-variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit-variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics

  12. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen-univalent antibody equilibrium model, the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these, and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.)

  13. Glycation and secondary conformational changes of human serum albumin: study of the FTIR spectroscopic curve-fitting technique

    Directory of Open Access Journals (Sweden)

    Yu-Ting Huang

    2016-05-01

    Full Text Available The aim of this study was to investigate both the glycation kinetics and the protein secondary conformational changes of human serum albumin (HSA) after reaction with ribose. Browning and fluorescence determinations as well as Fourier transform infrared (FTIR) microspectroscopy with a curve-fitting technique were applied. Various concentrations of ribose were incubated over a 12-week period at 37 ± 0.5 °C under dark conditions. The results clearly show that the glycation occurring in HSA-ribose reaction mixtures increased markedly with the amount of ribose used and the incubation time, leading to marked alterations of the protein conformation of HSA, as determined by FTIR. In addition, the reaction solutions turned from light to deep brown, as determined by optical observation. The increase in fluorescence intensity from HSA-ribose mixtures seemed to occur more quickly than browning, suggesting that the fluorescent products were produced earlier in the process than the compounds causing browning. Moreover, the predominant α-helical composition of HSA decreased with an increase in ribose concentration and incubation time, whereas total β-structure and random coil composition increased, as determined by curve-fitted FTIR microspectroscopy analysis. We also found that the peak intensity ratios at 1044 cm−1/1542 cm−1 markedly decreased prior to 4 weeks of incubation, then almost plateaued, implying that the consumption of ribose in the glycation reaction might have been accelerated over the first 4 weeks of incubation and then gradually decreased. This study first evidences that two unique IR peaks at 1710 cm−1 [carbonyl groups of irreversible products produced by the reaction and deposition of advanced glycation end products (AGEs)] and 1621 cm−1 (aggregated HSA molecules) were clearly observed in the curve-fitted FTIR spectra of HSA-ribose mixtures over the course of the incubation time. This study

  14. Ionization constants by curve fitting: determination of partition and distribution coefficients of acids and bases and their ions.

    Science.gov (United States)

    Clarke, F H; Cahoon, N M

    1987-08-01

    A convenient procedure has been developed for the determination of partition and distribution coefficients. The method involves the potentiometric titration of the compound, first in water and then in a rapidly stirred mixture of water and octanol. An automatic titrator is used, and the data is collected and analyzed by curve fitting on a microcomputer with 64 K of memory. The method is rapid and accurate for compounds with pKa values between 4 and 10. Partition coefficients can be measured for monoprotic and diprotic acids and bases. The partition coefficients of the neutral compound and its ion(s) can be determined by varying the ratio of octanol to water. Distribution coefficients calculated over a wide range of pH values are presented graphically as "distribution profiles". It is shown that subtraction of the titration curve of solvent alone from that of the compound in the solvent offers advantages for pKa determination by curve fitting for compounds of low aqueous solubility.

  15. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  16. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age of 29,221 cows, born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and magnitude of the parameters A (asymptotic weight and K (maturing rate. The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A and growth rate (K were: 384.6±1.63 kg and 0.0022±0.00002 (Brody and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy. The Brody model provides better goodness of fit than the Von Bertalanffy model.
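    A hedged sketch of fitting the best-performing model, the Brody curve W(t) = A(1 − b·e^(−Kt)), with a Levenberg-Marquardt-type least-squares routine standing in for Gauss-Newton; the weight-age points below are invented, not from the Nellore dataset.

```python
# Sketch: Brody growth curve fit; A = asymptotic weight, K = maturing rate.
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, b, K):
    return A * (1.0 - b * np.exp(-K * t))

age = np.array([0, 90, 180, 270, 365, 550, 750], dtype=float)   # days
wt = np.array([30, 95, 150, 195, 235, 300, 345], dtype=float)   # kg (illustrative)

popt, _ = curve_fit(brody, age, wt, p0=[400.0, 0.9, 0.003])
A, b, K = popt
print(f"asymptotic weight A = {A:.1f} kg, maturing rate K = {K:.5f}")
```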

  17. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-degree curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved using Matlab. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.

  18. An Approach of Estimating Individual Growth Curves for Young Thoroughbred Horses Based on Their Birthdays

    Science.gov (United States)

    ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro

    2014-01-01

    ABSTRACT We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added to model the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of the predicted error difference and the Akaike Information Criterion showed that the individual growth curves using birthday information fit the body weight and withers height data better than those not using it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356

  19. A person-environment fit approach to volunteerism : Volunteer personality fit and culture fit as predictors of affective outcomes

    NARCIS (Netherlands)

    Van Vianen, Annelies E. M.; Nijstad, Bernard A.; Voskuijl, Olga F.

    2008-01-01

    This study employed a person-environment (P-E) fit approach to explaining volunteer satisfaction, affective commitment, and turnover intentions. It was hypothesized that personality fit would explain additional variance in volunteer affective outcomes above and beyond motives to volunteer. This

  20. A person-environment fit approach to volunteerism: Volunteer personality-fit and culture-fit as predictors of affective outcomes

    NARCIS (Netherlands)

    van Vianen, A.E.M.; Nijstad, B.A.; Voskuijl, O.F.

    2008-01-01

    This study employed a person-environment (P-E) fit approach to explaining volunteer satisfaction, affective commitment, and turnover intentions. It was hypothesized that personality fit would explain additional variance in volunteer affective outcomes above and beyond motives to volunteer. This

  1. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  2. Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series

    Directory of Open Access Journals (Sweden)

    Kanadpriya Basu

    2015-08-01

    Full Text Available This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods, and second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address the fitting of such data. We conclude that mathematical interpolation techniques and statistical curve fitting techniques complement each other and can add value to the study of one-dimensional time series seismographic data: they can be used to add more data to the system in cases where the data set is not large enough to perform standard statistical tests.

  3. Testing MONDian dark matter with galactic rotation curves

    International Nuclear Information System (INIS)

    Edmonds, Doug; Farrah, Duncan; Minic, Djordje; Takeuchi, Tatsu; Ho, Chiu Man; Ng, Y. Jack

    2014-01-01

    MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z ≈ 0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (∼1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.

  4. Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods

    Science.gov (United States)

    Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.

    2018-06-01

    We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g‧r‧i‧z‧ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.

  5. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    Science.gov (United States)

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  6. The New Keynesian Phillips Curve

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper provides a survey of the recent literature on the new Keynesian Phillips curve: the controversies surrounding its microfoundation and estimation, the approaches that have been tried to improve its empirical fit, and the challenges it faces adapting to the open-economy framework. The new......, learning or state-dependent pricing. The introduction of open-economy factors into the new Keynesian Phillips curve complicates matters further, as it must capture the nexus between price setting, inflation and the exchange rate. This is nevertheless a crucial feature for any model to be used for inflation...... forecasting in a small open economy like Iceland....

  7. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    Science.gov (United States)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa titrated and initial concentration of HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these very interesting and intriguing nanoparticles.

  8. A comparison of approaches in fitting continuum SEDs

    International Nuclear Information System (INIS)

    Liu Yao; Wang Hong-Chi; Madlener David; Wolf Sebastian

    2013-01-01

    We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date have employed SA, due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demands on computational resources, and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting, to provide a guide for the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable. While the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
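    As a rough illustration of the SA side of the comparison, scipy's dual_annealing can minimize an SED chi-squared; the two-parameter toy model below is an assumption for brevity and bears no relation to the 10-parameter circumstellar model of the paper.

```python
# Sketch: simulated-annealing-style global minimization of an SED chi-squared.
import numpy as np
from scipy.optimize import dual_annealing

wavelengths = np.logspace(-0.5, 3.1, 78)            # 0.3 um .. ~1.3 mm

def model_sed(params, wl):
    amp, slope = params                             # toy 'dust' model
    return amp * wl**slope

observed = model_sed([5.0, -1.2], wavelengths)      # synthetic reference SED
sigma = 0.05 * observed

def chi2(params):
    return float(np.sum(((model_sed(params, wavelengths) - observed) / sigma) ** 2))

result = dual_annealing(chi2, bounds=[(0.1, 50.0), (-3.0, 0.0)], seed=8)
print("best-fit parameters:", result.x)
```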

  9. An interactive graphics program to retrieve, display, compare, manipulate, curve fit, difference and cross plot wind tunnel data

    Science.gov (United States)

    Elliott, R. D.; Werner, N. M.; Baker, W. M.

    1975-01-01

    The Aerodynamic Data Analysis and Integration System (ADAIS), developed as a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc., was described. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity using ADAIS of five times that for conventional manual methods of wind tunnel data analysis is routinely achieved. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.

  10. SiFTO: An Empirical Method for Fitting SN Ia Light Curves

    Science.gov (United States)

    Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.

    2008-07-01

    We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.

  11. Identifying ambiguous prostate gland contours from histology using capsule shape information and least squares curve fitting

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Rania [DigiPen Institute of Technology, Department of Computer Engineering, Redmond, WA (United States); McKenzie, Frederic D. [Old Dominion University, Department of Electrical and Computer Engineering, Norfolk, VA (United States)

    2007-12-15

    To obtain an accurate assessment of the percentage and depth of extra-capsular soft tissue removed with the prostate by the various surgical techniques in order to help surgeons in determining the appropriateness of different surgical approaches. This can be enhanced by an accurate and automated means of identifying the prostate gland contour. To facilitate 3D reconstruction and, ultimately, more accurate analyses, it is essential for us to identify the capsule boundary that separates the prostate gland tissue from its extra-capsular tissue. However, the capsule is sometimes unrecognizable due to the naturally occurring intrusion of muscle and connective tissue into the prostate gland. At these regions where the capsule disappears, its contour can be arbitrarily created with a continuing contour line based on the natural shape of the prostate. We utilize an algorithm based on a least squares curve fitting technique that uses a prostate shape equation to merge previously detected capsule parts with the shape equation to produce an approximated curve that represents the prostate capsule. We have tested our algorithm using three different shapes on 13 histologic prostate slices that are cut at different locations from the apex. The best result shows a 90% average contour match when compared to pathologist-drawn contours. We believe that automatically identifying histologic prostate contours will lead to increased objective analyses of surgical margins and extracapsular spread of cancer. Our results show that this is achievable. (orig.)

  12. Identifying ambiguous prostate gland contours from histology using capsule shape information and least squares curve fitting

    International Nuclear Information System (INIS)

    Hussein, Rania; McKenzie, Frederic D.

    2007-01-01

    To obtain an accurate assessment of the percentage and depth of extra-capsular soft tissue removed with the prostate by the various surgical techniques in order to help surgeons in determining the appropriateness of different surgical approaches. This can be enhanced by an accurate and automated means of identifying the prostate gland contour. To facilitate 3D reconstruction and, ultimately, more accurate analyses, it is essential for us to identify the capsule boundary that separates the prostate gland tissue from its extra-capsular tissue. However, the capsule is sometimes unrecognizable due to the naturally occurring intrusion of muscle and connective tissue into the prostate gland. At these regions where the capsule disappears, its contour can be arbitrarily created with a continuing contour line based on the natural shape of the prostate. We utilize an algorithm based on a least squares curve fitting technique that uses a prostate shape equation to merge previously detected capsule parts with the shape equation to produce an approximated curve that represents the prostate capsule. We have tested our algorithm using three different shapes on 13 histologic prostate slices that are cut at different locations from the apex. The best result shows a 90% average contour match when compared to pathologist-drawn contours. We believe that automatically identifying histologic prostate contours will lead to increased objective analyses of surgical margins and extracapsular spread of cancer. Our results show that this is achievable. (orig.)

  13. Multilevel Models for the Analysis of Angle-Specific Torque Curves with Application to Master Athletes

    Directory of Open Access Journals (Sweden)

    Carvalho Humberto M.

    2015-12-01

    Full Text Available The aim of this paper was to outline a multilevel modeling approach to fit individual angle-specific torque curves describing concentric knee extension and flexion isokinetic muscular actions in Master athletes. The potential of the analytical approach to examine between-individual differences across the angle-specific torque curves was illustrated, including between-individual variation due to gender differences at a higher level. Torques in concentric muscular actions of knee extension and flexion at 60°·s-1 were considered within a range of motion between 5° and 85° (only torques "truly" isokinetic). Multilevel time series models with autoregressive covariance structures were superior fits compared with standard multilevel models for repeated measures in fitting angle-specific torque curves. Third- and fourth-order polynomial models were the best fits to describe angle-specific torque curves of isokinetic knee flexion and extension concentric actions, respectively. The fixed exponents allow interpretations of initial acceleration, the angle at peak torque and the decrement of torque after peak torque. Also, the multilevel models were flexible enough to illustrate the influence of gender differences on the shape of torque throughout the range of motion and on the shape of the curves. The presented multilevel regression models may afford a general framework to examine angle-specific torque curves obtained by isokinetic dynamometry, and add to the understanding of the mechanisms of strength development, particularly the force-length relationship, both related to performance and injury prevention.

  14. Retention and Curve Number Variability in a Small Agricultural Catchment: The Probabilistic Approach

    Directory of Open Access Journals (Sweden)

    Kazimierz Banasik

    2014-04-01

    Full Text Available The variability of the curve number (CN) and the retention parameter (S) of the Soil Conservation Service (SCS-CN) method in a small agricultural, lowland watershed (23.4 km² to the gauging station) in central Poland has been assessed using the probabilistic approach: distribution fitting and confidence intervals (CIs). Empirical CNs and Ss were computed directly from recorded rainfall depths and direct runoff volumes. Two measures of the goodness of fit were used as selection criteria in the identification of the parent distribution function. The measures specified the generalized extreme value (GEV), normal and general logistic (GLO) distributions for 100-CN, and the GLO, lognormal and GEV distributions for S. The characteristics estimated from the theoretical distributions (median, quantiles) were compared to the tabulated CN and to the antecedent runoff conditions of Hawkins and Hjelmfelt. The distribution fitting for the whole sample revealed a good agreement between the tabulated CN and the median, and between the antecedent runoff conditions (ARCs) of Hawkins and Hjelmfelt, which certified a good calibration of the model. However, the division of the CN sample into heavy and moderate rainfall depths revealed a serious inconsistency between the parameters mentioned. This analysis proves that the application of the SCS-CN method should rely on deep insight into the probabilistic properties of CN and S.
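
    A minimal sketch of the distribution-fitting step, assuming SciPy and synthetic CN values in place of the watershed record: the GEV is fitted to 100-CN, as in the paper, and the median and a confidence interval are read back on the CN scale.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Synthetic stand-in for empirical CNs computed from rainfall/runoff pairs.
        cn = 100 - stats.genextreme.rvs(c=0.1, loc=20, scale=5, size=80, random_state=rng)

        # The paper fits distributions to 100 - CN; do the same here.
        shape, loc, scale = stats.genextreme.fit(100 - cn)
        dist = stats.genextreme(shape, loc, scale)

        cn_median = 100 - dist.ppf(0.5)
        ci90 = 100 - dist.ppf([0.95, 0.05])   # ~90% interval on the CN scale
        print(f"median CN = {cn_median:.1f}, 90% CI = ({ci90[0]:.1f}, {ci90[1]:.1f})")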

  15. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han’s model for rubber vulcanization

    OpenAIRE

    Milani, G.; Milani, F.

    2016-01-01

    A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, ...

  16. Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series

    OpenAIRE

    Basu, Kanadpriya; Mariani, Maria; Serpa, Laura; Sinha, Ritwik

    2015-01-01

    This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods and, second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical ...

  17. Peak oil analyzed with a logistic function and idealized Hubbert curve

    International Nuclear Information System (INIS)

    Gallagher, Brian

    2011-01-01

    A logistic function is used to characterize peak and ultimate production of global crude oil and petroleum-derived liquid fuels. Annual oil production data were incrementally summed to construct a logistic curve in its initial phase. Using a curve-fitting approach, a population-growth logistic function was applied to complete the cumulative production curve. The simulated curve was then deconstructed into a set of annual oil production data producing an 'idealized' Hubbert curve. An idealized Hubbert curve (IHC) is defined as having properties of production data resulting from a constant growth-rate under fixed resource limits. An IHC represents a potential production curve constructed from cumulative production data and provides a new perspective for estimating peak production periods and remaining resources. The IHC model data show that idealized peak oil production occurred in 2009 at 83.2 Mb/d (30.4 Gb/y). IHC simulations of truncated historical oil production data produced similar results and indicate that this methodology can be useful as a prediction tool. - Research Highlights: →Global oil production data were analyzed by a simple curve fitting method. →Best fit-curve results were obtained using two logistic functions on select data. →A broad potential oil production peak is forecast for the years from 2004 to 2014. →Similar results were obtained using historical data from about 10 to 30 years ago. →Two potential oil production decline scenarios were presented and compared.
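
    A compact version of the described procedure, assuming synthetic production figures rather than the paper's data: cumulative production is fitted with a logistic function via SciPy, and the fitted curve is differenced to recover an idealized Hubbert curve and its peak year.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, q_max, k, t0):
            """Cumulative production Q(t); q_max plays the role of ultimate recovery."""
            return q_max / (1.0 + np.exp(-k * (t - t0)))

        years = np.arange(1950, 2010)
        rng = np.random.default_rng(0)
        q = logistic(years, 2200.0, 0.06, 2005.0) + rng.normal(0, 5, years.size)

        p, _ = curve_fit(logistic, years, q, p0=(2000.0, 0.05, 2000.0))
        grid = np.arange(1900, 2101)
        annual = np.diff(logistic(grid, *p))   # the "idealized Hubbert curve"
        peak_year = grid[1 + np.argmax(annual)]
        print(f"ultimate ~ {p[0]:.0f}, peak production ~ {peak_year}")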

  18. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "chi-squared matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s
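
    The same errors-in-variables idea, treating the standard masses as uncertain alongside the calibration parameters, can be sketched today with orthogonal distance regression in scipy.odr; the numbers below are hypothetical, and the original VA02A routine is not used.

        import numpy as np
        from scipy import odr

        # Hypothetical calibration data: standard masses m (mg, +/-0.2%) and
        # detector response y (counts) with a known system error sy.
        m = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
        sm = 0.002 * m
        y = np.array([980.0, 1935.0, 3880.0, 5790.0, 7700.0, 9620.0])
        sy = np.full_like(y, 25.0)

        model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)  # linear calibration
        data = odr.RealData(m, y, sx=sm, sy=sy)                   # errors on both axes
        out = odr.ODR(data, model, beta0=[0.0, 9600.0]).run()
        print(out.beta, out.sd_beta)  # fitted parameters and their uncertainties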

  19. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    Science.gov (United States)

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because of the low signal-to-noise ratio of the serial images. In this study, we proposed to exploit the neighboring pixels as regularization terms and to adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in accuracy and precision, in terms of the mean and standard deviation of R2* measurements in selected regions of interest, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
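
    For orientation, the unregularized baseline that PCANR improves upon is a per-pixel mono-exponential NLS fit of the multi-echo signal; a toy single-pixel version under assumed echo times and noise follows (the neighborhood regularization terms of PCANR are not reproduced here).

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(te, s0, r2s):
            """Mono-exponential signal model S(TE) = S0 * exp(-R2* * TE)."""
            return s0 * np.exp(-r2s * te)

        te = np.array([1.0, 2.2, 3.4, 4.6, 5.8, 7.0]) / 1000.0  # echo times (s)
        rng = np.random.default_rng(0)
        true = decay(te, 100.0, 300.0)            # R2* = 300 1/s, iron-overload range
        sig = true + rng.normal(0, 3.0, te.size)  # noisy pixel signal

        p, cov = curve_fit(decay, te, sig, p0=(sig[0], 100.0))
        print(f"R2* estimate ~ {p[1]:.0f} 1/s")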

  20. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    Science.gov (United States)

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  1. Difficulties in fitting the thermal response of atomic force microscope cantilevers for stiffness calibration

    International Nuclear Information System (INIS)

    Cole, D G

    2008-01-01

    This paper discusses the difficulties of calibrating atomic force microscope (AFM) cantilevers, in particular the effect that calibrating under light fluid-loading (in air) and under heavy fluid-loading (in water) has on the ability to use the thermal motion response to fit model parameters that are used to determine cantilever stiffness. For the light fluid-loading case, the resonant frequency and quality factor can easily be used to determine stiffness. The extension of this approach to the heavy fluid-loading case is troublesome due to the low quality factor (high damping) caused by fluid-loading. Simple calibration formulae are difficult to realize, and the best approach is often to curve-fit the thermal response, using the parameters of natural frequency and mass ratio, so that the curve-fit's response is within some acceptable tolerance of the actual thermal response. The parameters can then be used to calculate the cantilever stiffness. However, the process of curve-fitting can lead to erroneous results unless suitable care is taken. A feedback model of the fluid–structure interaction between the unloaded cantilever and the hydrodynamic drag provides a framework for fitting a modeled thermal response to a measured response and for evaluating the parametric uncertainty of the fit. The cases of uncertainty in the natural frequency, the mass ratio, and combined uncertainty are presented and the implications for system identification and stiffness calibration using curve-fitting techniques are discussed. Finally, considerations and recommendations for the calibration of AFM cantilevers are given in light of the results of this paper
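
    A common realization of such curve-fitting, sketched here with made-up numbers: the thermal noise power spectrum is modeled as a damped simple harmonic oscillator plus a white noise floor, and the natural frequency and quality factor are fitted. At the low Q typical of water the parameters correlate strongly, which is exactly the uncertainty issue the paper analyzes.

        import numpy as np
        from scipy.optimize import curve_fit

        def sho_psd(f, a, f0, q, white):
            """Thermal-noise PSD of a damped harmonic oscillator plus white floor."""
            return a * f0**4 / ((f**2 - f0**2)**2 + (f * f0 / q)**2) + white

        f = np.linspace(1e3, 50e3, 2000)
        rng = np.random.default_rng(2)
        # Synthetic spectrum; Q ~ 2.5 mimics heavy fluid-loading in water.
        psd = sho_psd(f, 1e-8, 15e3, 2.5, 1e-10) * rng.exponential(1.0, f.size)

        p, _ = curve_fit(sho_psd, f, psd, p0=(1e-8, 14e3, 3.0, 1e-10))
        print(f"f0 ~ {p[1] / 1e3:.1f} kHz, Q ~ {p[2]:.2f}")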

  2. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    Science.gov (United States)

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure for spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
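
    A minimal single-spot version of Gaussian function fitting, with synthetic image data: the spot volume is computed from the fitted parameters (2π · amplitude · σx · σy) rather than by summing pixels, which is the key idea behind fitting-based quantification. The paper's compound fitting for overlapping spots is not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(xy, amp, x0, y0, sx, sy, bg):
            """Elliptical 2D Gaussian plus flat background, returned raveled."""
            x, y = xy
            g = amp * np.exp(-((x - x0)**2 / (2 * sx**2) + (y - y0)**2 / (2 * sy**2)))
            return (g + bg).ravel()

        y, x = np.mgrid[0:32, 0:32]
        rng = np.random.default_rng(3)
        img = gauss2d((x, y), 500.0, 15.0, 17.0, 3.0, 4.0, 20.0).reshape(32, 32)
        img = img + rng.normal(0, 5, img.shape)

        p, _ = curve_fit(gauss2d, (x, y), img.ravel(),
                         p0=(img.max(), 16, 16, 2, 2, np.median(img)))
        volume = 2 * np.pi * p[0] * p[3] * p[4]  # integral of the fitted Gaussian
        print(f"spot volume ~ {volume:.0f}")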

  3. A mathematical function for the description of nutrient-response curve.

    Directory of Open Access Journals (Sweden)

    Hamed Ahmadi

    Full Text Available Several mathematical equations have been proposed for modeling the nutrient-response curve for animals and humans, justified on the goodness of fit and/or on the biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for the description of nutrient-response phenomena is derived. Of the three parameters governing the curve, a has a biological interpretation, b may be used to calculate reliable estimates of nutrient-response relationships, and c provides the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit the nutritional data obtained from 6 experiments including a wide range of nutrients and responses. An evaluation and comparison were also done based on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements.

  4. Determination of the secondary energy from the electron beam with a flattening foil by computer. Percentage depth dose curve fitting using the specific higher order polynomial

    Energy Technology Data Exchange (ETDEWEB)

    Kawakami, H [Kyushu Univ., Beppu, Oita (Japan). Inst. of Balneotherapeutics

    1980-09-01

    A computer program written in FORTRAN is described for determining the secondary energy of the electron beam which passed through a flattening foil, using a time-sharing computer service. The procedure of this program is first to fit the specific higher order polynomial to the measured percentage depth dose curve. Next, the practical range is evaluated by the point of intersection R of the line tangent to the fitted curve at the inflection point P and the given dose E, as shown in Fig. 2. Finally, the secondary energy corresponding to the determined practical range can be obtained from the experimental equation (2.1) between the practical range R (g/cm²) and the electron energy T (MeV). A graph of the fitted polynomial with the inflection points and the practical range can be plotted on a teletype machine at the request of the user. In order to estimate the shapes of percentage depth dose curves corresponding to electron beams of different energies, we tried to find some specific functional relationships between each coefficient of the fitted seventh-degree equation and the incident electron energies. However, exact relationships could not be obtained because of irregularity among these coefficients.
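
    The geometric construction can be reproduced in a few lines; the sketch below uses a synthetic depth-dose curve and takes the "given dose E" as zero for simplicity, so R is where the tangent at the inflection point reaches zero dose.

        import numpy as np

        # Synthetic percentage depth-dose data for illustration.
        depth = np.linspace(0, 6, 25)                   # g/cm^2
        pdd = 100 / (1 + np.exp(2.5 * (depth - 3.2)))   # sigmoid-like fall-off

        c = np.polyfit(depth, pdd, 7)                   # seventh-degree fit, as in the paper
        p = np.poly1d(c)
        dp, d2p = np.polyder(c, 1), np.polyder(c, 2)

        # Inflection point P: steepest descent; restrict the search to the fall-off region
        # to avoid spurious polynomial wiggles near the boundaries.
        roots = np.roots(d2p)
        cand = roots[np.isreal(roots)].real
        cand = cand[(cand > 1.0) & (cand < 5.0)]
        xi = cand[np.argmin(np.polyval(dp, cand))]

        # Practical range R: tangent at P extrapolated to zero dose.
        slope = np.polyval(dp, xi)
        R = xi - p(xi) / slope
        print(f"inflection at {xi:.2f} g/cm^2, practical range R ~ {R:.2f} g/cm^2")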

  5. Edge detection and mathematic fitting for corneal surface with Matlab software.

    Science.gov (United States)

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

    To select the optimal edge detection methods to identify the corneal surface, and to compare three fitting curve equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identifying methods (ginput and getpts) were applied to identify the edge coordinates respectively. The differences among these methods were compared. A binomial curve (y = Ax² + Bx + C), a polynomial curve [p(x) = p₁xⁿ + p₂xⁿ⁻¹ + ... + pₙx + pₙ₊₁] and a conic section (Ax² + Bxy + Cy² + Dx + Ey + F = 0) were used for curve fitting the corneal surface respectively. The relative merits among the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. The five edge detection algorithms all produced continuous coordinates indicating the edge of the corneal surface. The ordinates of manual identifying were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could calculate the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values obtained by corneal topography and by the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve with Matlab software. Edge detection has better repeatability and higher efficiency. The manual identifying approach is an indispensable complement for detection. Polynomial and conic section are both alternative methods for corneal curve fitting. The conic curve was the optimal choice based on its specific geometrical properties.
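
    Although the paper works in Matlab, the conic-section fit itself is a small linear-algebra exercise; a Python sketch with a synthetic tilted-ellipse edge follows, recovering the tilt of the symmetry axis from the fitted coefficients.

        import numpy as np

        def fit_conic(x, y):
            """Least-squares conic Ax^2+Bxy+Cy^2+Dx+Ey+F=0 via the SVD null vector."""
            design = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
            _, _, vt = np.linalg.svd(design)
            return vt[-1]  # coefficients (A, B, C, D, E, F), up to scale

        # Synthetic corneal-edge points: a tilted ellipse arc plus noise.
        t = np.linspace(0, np.pi, 60)
        x = 6.0 * np.cos(t) * np.cos(0.2) - 5.2 * np.sin(t) * np.sin(0.2)
        y = 6.0 * np.cos(t) * np.sin(0.2) + 5.2 * np.sin(t) * np.cos(0.2)
        rng = np.random.default_rng(4)
        x += rng.normal(0, 0.02, t.size)
        y += rng.normal(0, 0.02, t.size)

        A, B, C, Dc, E, F = fit_conic(x, y)
        theta = 0.5 * np.arctan2(B, A - C)  # tilt of the symmetry axis
        print(f"tilt ~ {np.degrees(theta):.1f} degrees")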

  6. Improvements in Spectrum's fit to program data tool.

    Science.gov (United States)

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.

  7. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han's model for rubber vulcanization

    Science.gov (United States)

    Milani, G.; Milani, F.

    A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined in such a way to minimize the absolute error between normalized experimental data and numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. On the contrary, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as benchmark.

  8. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions

    Science.gov (United States)

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-01

    Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics still remains unclear. The systematic study of the current transient under constant anodizing voltage has not been mentioned in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, the theoretical expressions for the time dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic current and temperatures are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, it is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A·exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, ionic current and electronic current both increase. The temperature has a larger effect on electronic current than on ionic current. These results can promote the research of kinetics from a qualitative to a quantitative level.

  9. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions.

    Science.gov (United States)

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-10

    Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics still remains unclear. The systematic study of the current transient under constant anodizing voltage has not been mentioned in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, the theoretical expressions for the time dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic current and temperatures are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, it is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A·exp(BE)) has been determined under various temperatures and voltages. The results indicate that as temperature and voltage increase, ionic current and electronic current both increase. The temperature has a larger effect on electronic current than on ionic current. These results can promote the research of kinetics from a qualitative to a quantitative level.

  10. GERMINATOR: a software package for high-throughput scoring and curve fitting of Arabidopsis seed germination.

    Science.gov (United States)

    Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M

    2010-04-01

    Over the past few decades seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high quality seeds. An important instrument for this type of research is the accurate quantification of germination; however gathering cumulative germination data is a very laborious task that is often prohibitive to the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of experimental setup with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (iii) curve fitting of cumulative germination data and the extraction, recap and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but is likely to be applicable to other species, as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation for salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousands of germination tests, several times a day by a single person.

  11. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    Science.gov (United States)

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  12. A Monte Carlo Study of the Effect of Item Characteristic Curve Estimation on the Accuracy of Three Person-Fit Statistics

    Science.gov (United States)

    St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane

    2009-01-01

    To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…

  13. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han’s model for rubber vulcanization

    Directory of Open Access Journals (Sweden)

    G. Milani

    2016-01-01

    Full Text Available A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined in such a way to minimize the absolute error between normalized experimental data and numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. On the contrary, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as benchmark.

  14. Development of probabilistic fatigue curve for asphalt concrete based on viscoelastic continuum damage mechanics

    Directory of Open Access Journals (Sweden)

    Himanshu Sharma

    2016-07-01

    Full Text Available Due to its roots in a fundamental thermodynamic framework, the continuum damage approach is popular for modeling asphalt concrete behavior. Currently used continuum damage models use mixture-averaged values for model parameters and assume a deterministic damage process. On the other hand, significant scatter is found in fatigue data generated even under extremely controlled laboratory testing conditions. Thus, currently used continuum damage models fail to account for the scatter observed in fatigue data. This paper illustrates a novel approach for probabilistic fatigue life prediction based on the viscoelastic continuum damage approach. Several specimens were tested for their viscoelastic properties and damage properties under a uniaxial mode of loading. The data thus generated were analyzed using viscoelastic continuum damage mechanics principles to predict fatigue life. Weibull (2-parameter and 3-parameter) and lognormal distributions were fit to the fatigue life predicted using the viscoelastic continuum damage approach. It was observed that fatigue damage could be best described using the Weibull distribution when compared to the lognormal distribution. Due to its flexibility, the 3-parameter Weibull distribution was found to fit better than the 2-parameter Weibull distribution. Further, significant differences were found between the probabilistic fatigue curves developed in this research and the traditional deterministic fatigue curve. The proposed methodology combines the advantages of continuum damage mechanics as well as probabilistic approaches. These probabilistic fatigue curves can be conveniently used for reliability based pavement design. Keywords: Probabilistic fatigue curve, Continuum damage mechanics, Weibull distribution, Lognormal distribution
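
    The distribution-comparison step can be sketched with SciPy under hypothetical fatigue lives: both 2- and 3-parameter Weibull fits are obtained from weibull_min, their log-likelihoods compared, and a life at a chosen reliability level is read off for design use.

        import numpy as np
        from scipy import stats

        # Hypothetical fatigue lives (cycles to failure) at one strain amplitude.
        n_f = np.array([1.2e4, 1.8e4, 2.3e4, 3.1e4, 3.9e4, 5.5e4, 7.2e4, 9.8e4])

        # 2-parameter fit (location fixed at 0) vs. 3-parameter fit.
        fit2 = stats.weibull_min.fit(n_f, floc=0)
        fit3 = stats.weibull_min.fit(n_f)

        for name, params in [("2-parameter", fit2), ("3-parameter", fit3)]:
            ll = np.sum(stats.weibull_min.logpdf(n_f, *params))
            life95 = stats.weibull_min(*params).ppf(0.05)  # life at 95% reliability
            print(f"{name}: log-likelihood {ll:.2f}, N(95% rel.) ~ {life95:.0f}")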

  15. Obtaining DDF Curves of Extreme Rainfall Data Using Bivariate Copula and Frequency Analysis

    DEFF Research Database (Denmark)

    Sadri, Sara; Madsen, Henrik; Mikkelsen, Peter Steen

    2009-01-01

    , situated near Copenhagen in Denmark. For rainfall extracted using method 2, the marginal distribution of depth was found to fit the Generalized Pareto distribution while duration was found to fit the Gamma distribution, using the method of L-moments. The volume was fit with a generalized Pareto...... with duration for a given return period and name them DDF (depth-duration-frequency) curves. The copula approach does not assume the rainfall variables are independent or jointly normally distributed. Rainfall series are extracted in three ways: (1) by maximum mean intensity; (2) by depth and duration...... distribution and the duration was fit with a Pearson type III distribution for rainfall extracted using method 3. The Clayton copula was found to be appropriate for bivariate analysis of rainfall depth and duration for both methods 2 and 3. DDF curves derived using the Clayton copula for depth and duration...

  16. Nonlinear gravitons and curved twistor theory

    International Nuclear Information System (INIS)

    Penrose, R.

    1976-01-01

    A new approach to the quantization of general relativity is suggested in which a state consisting of just one graviton can be described, but in a way which involves both the curvature and nonlinearities of Einstein's theory. It is felt that this approach can be justified solely on its own merits but it also receives striking encouragement from another direction: a surprising mathematical result enables one to construct the general such nonlinear graviton state from a curved twistor space, the construction being given in terms of one arbitrary holomorphic function of three complex variables. In this way, the approach fits naturally into the general twistor program for the description of quantized fields. (U.K.)

  17. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    Science.gov (United States)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

    In many geodetic engineering applications it is necessary to solve the problem of describing a measured data point cloud, acquired, e.g., by laser scanner, by means of free-form curves or surfaces, e.g., with B-Splines as basis functions. The state-of-the-art approaches to determining B-Splines yield results which are seriously affected by the occurrence of data gaps and outliers. Optimal and robust B-Spline fitting depends, however, on optimal selection of the knot vector. Hence we combine in our approach Monte-Carlo methods and the location and curvature of the measured data in order to determine the knot vector of the B-Spline in such a way that no oscillating effects at the edges of data gaps occur. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of control points. The above mentioned approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.
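
    A reduced one-dimensional sketch of the robust fitting idea, with assumed data: a least-squares B-spline with fixed interior knots is iteratively reweighted in an M-estimator-like loop so that outliers lose influence on the estimated control points. The paper's Monte-Carlo knot search and curvature-based knot placement are not reproduced.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(5)
        x = np.sort(rng.uniform(0, 10, 300))
        y = np.sin(x) + 0.05 * x**2 + rng.normal(0, 0.05, x.size)
        y[::37] += 1.5  # a few gross outliers

        knots = np.linspace(1, 9, 8)  # fixed interior knots for this sketch

        # Huber-style iterative reweighting for robust control-point estimation.
        w = np.ones_like(x)
        for _ in range(5):
            spl = LSQUnivariateSpline(x, y, knots, w=w, k=3)
            r = y - spl(x)
            s = 1.4826 * np.median(np.abs(r))              # robust scale (MAD)
            w = 1.0 / np.maximum(np.abs(r) / (3 * s), 1.0)  # downweight outliers

        clean = np.sin(x) + 0.05 * x**2
        print("RMS vs. clean model:", np.sqrt(np.mean((spl(x) - clean) ** 2)))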

  18. Structural modeling of age specific fertility curves in Peninsular Malaysia: An approach of Lee Carter method

    Science.gov (United States)

    Hanafiah, Hazlenah; Jemain, Abdul Aziz

    2013-11-01

    In recent years, the study of fertility has been getting a lot of attention among researchers, following fears of a deterioration of fertility driven by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study as it is an established and widely used model in analysing demographic aspects. A singular value decomposition approach is incorporated with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to measure the goodness of fit of the model. The fertility index is then forecast using a random walk with drift to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable data fit. In addition, there is an apparent and continuous decline in age-specific fertility curves in the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of facilities and related resources.
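
    The estimation step of the Lee-Carter decomposition is a single SVD; a sketch with synthetic log age-specific fertility rates follows, using the usual normalization (the age-response vector summing to one) and a random-walk-with-drift extrapolation of the time index.

        import numpy as np

        ages, years = 7, 50
        rng = np.random.default_rng(6)
        a_true = -3.0 + 0.1 * np.arange(ages)       # stand-in age pattern a_x
        b_true = np.full(ages, 1.0 / ages)          # age response b_x
        k_true = np.linspace(2.0, -2.0, years)      # declining fertility index k_t
        log_f = (a_true[:, None] + b_true[:, None] * k_true[None, :]
                 + rng.normal(0, 0.02, (ages, years)))

        # Lee-Carter: ln f(x,t) = a_x + b_x * k_t, estimated from the leading SVD term.
        a_hat = log_f.mean(axis=1)
        U, s, Vt = np.linalg.svd(log_f - a_hat[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()
        k = s[0] * Vt[0] * U[:, 0].sum()

        # Random walk with drift for k_t, then back-transform to fertility rates.
        drift = (k[-1] - k[0]) / (years - 1)
        k_fut = k[-1] + drift * np.arange(1, 11)
        forecast = np.exp(a_hat[:, None] + b[:, None] * k_fut[None, :])
        print(forecast.shape)  # (7 age groups, 10 forecast years)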

  19. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers; Metodos de ajuste de curvas de eficiencia obtidas por meio de espectrometros de HPGe

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Vanderlei

    2002-07-01

    The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as functions of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. For obtaining the peak area, different methodologies were developed in order to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
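
    A toy version of polynomial efficiency-curve fitting with covariance propagation, under hypothetical calibration points: numpy.polyfit can return the parameter covariance matrix, from which the uncertainty of an interpolated efficiency follows.

        import numpy as np

        # Hypothetical HPGe calibration points (energy in keV, peak efficiency).
        e = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])
        eff = np.array([0.045, 0.032, 0.025, 0.015, 0.013, 0.011, 0.0098, 0.0081])

        # Fit log-efficiency vs. log-energy; ~2% relative errors give a nearly
        # constant uncertainty on the log scale.
        x, y, sy = np.log(e), np.log(eff), 0.02
        coef, cov = np.polyfit(x, y, 3, w=np.full_like(x, 1.0 / sy), cov=True)

        x0 = np.log(511.0)                 # interpolate at 511 keV
        X = x0 ** np.arange(3, -1, -1)     # powers matching polyfit's coefficient order
        eff0 = np.exp(X @ coef)
        var = X @ cov @ X                  # variance propagated via the covariance matrix
        print(f"eff(511 keV) = {eff0:.4f} +/- {eff0 * np.sqrt(var):.4f}")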

  20. New method of safety assessment for pressure vessel of nuclear power plant--brief introduction of master curve approach

    International Nuclear Information System (INIS)

    Yang Wendou

    2011-01-01

    The new Master Curve method is called a revolutionary advance in the assessment of reactor pressure vessel integrity in the USA. This paper explains the origin, basis and standard of the Master Curve, starting from the reactor pressure-temperature limit curve which assures the safety of the nuclear power plant. In view of the characteristics of brittle fracture, which is greatly susceptible to the microstructure, the theory and the test method of the Master Curve, as well as its statistical law, which can be modeled using a Weibull distribution, are described in this paper. The meaning, advantage, application and importance of the Master Curve, as well as the relation between the Master Curve and nuclear power safety, are understood from the fitting formula for the fracture toughness database by the Weibull distribution model. (author)
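
    For reference, the Master Curve of ASTM E1921 has a fixed shape; assuming that standard form, the sketch below evaluates the median toughness as a function of temperature for a hypothetical reference temperature T0, together with the fixed-shape Weibull failure probability.

        import numpy as np

        def kjc_median(t, t0):
            """ASTM E1921 Master Curve: median K_Jc (MPa*sqrt(m)) vs. temperature."""
            return 30.0 + 70.0 * np.exp(0.019 * (t - t0))

        def failure_prob(k, k0):
            """Weibull cumulative failure probability, fixed shape 4, threshold 20."""
            return 1.0 - np.exp(-(((k - 20.0) / (k0 - 20.0)) ** 4))

        t = np.arange(-150.0, 51.0, 50.0)               # test temperatures (deg C)
        print(np.round(kjc_median(t, t0=-80.0), 1))     # hypothetical T0 = -80 deg C
        print(round(failure_prob(100.0, k0=110.0), 3))  # P_f at K = 100 for K0 = 110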

  1. Dynamic Regulation of a Cell Adhesion Protein Complex Including CADM1 by Combinatorial Analysis of FRAP with Exponential Curve-Fitting

    Science.gov (United States)

    Sakurai-Yageta, Mika; Maruyama, Tomoko; Suzuki, Takashi; Ichikawa, Kazuhisa; Murakami, Yoshinori

    2015-01-01

    Protein components of cell adhesion machinery show continuous renewal even in the static state of epithelial cells and participate in the formation and maintenance of normal epithelial architecture and tumor suppression. CADM1 is a tumor suppressor belonging to the immunoglobulin superfamily of cell adhesion molecules and forms a cell adhesion complex with an actin-binding protein, 4.1B, and a scaffold protein, MPP3, in the cytoplasm. Here, we investigate dynamic regulation of the CADM1-4.1B-MPP3 complex in mature cell adhesion by fluorescence recovery after photobleaching (FRAP) analysis. Traditional FRAP analyses are performed over a relatively short period of around 10 min. Here, thanks to recent advances in sensitive laser detector systems, we examine FRAP of the CADM1 complex over a longer period of 60 min and analyze the recovery with exponential curve-fitting to distinguish the fractions with different diffusion constants. This approach reveals that the fluorescence recovery of CADM1 is fitted to a single exponential function with a time constant (τ) of approximately 16 min, whereas 4.1B and MPP3 are fitted to a double exponential function with two τs of approximately 40-60 sec and 16 min. The longer τ is similar to that of CADM1, suggesting that 4.1B and MPP3 have two distinct fractions, one forming a complex with CADM1 and the other present as a free pool. Fluorescence loss in photobleaching analysis supports the presence of a free pool of these proteins near the plasma membrane. Furthermore, double exponential fitting makes it possible to estimate the ratio of 4.1B and MPP3 present as a free pool and as a complex with CADM1 as approximately 3:2 and 3:1, respectively. Our analyses reveal a central role of CADM1 in stabilizing the complex with 4.1B and MPP3 and provide insight into the dynamics of adhesion complex formation. PMID:25780926
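
    The exponential curve-fitting step can be illustrated with SciPy on synthetic recovery data: single- and double-exponential models are compared by their residuals, and the two fitted amplitudes estimate the free versus complexed fractions. All constants here are made up and are not the paper's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def single(t, a, tau):
            return a * (1 - np.exp(-t / tau))

        def double(t, a1, tau1, a2, tau2):
            return single(t, a1, tau1) + single(t, a2, tau2)

        t = np.linspace(0, 60, 240)  # minutes
        rng = np.random.default_rng(7)
        y = double(t, 0.45, 0.8, 0.30, 16.0) + rng.normal(0, 0.01, t.size)

        p1, _ = curve_fit(single, t, y, p0=(0.7, 5.0))
        p2, _ = curve_fit(double, t, y, p0=(0.4, 1.0, 0.4, 10.0))

        # Compare residuals; the amplitude ratio a1:a2 estimates free vs. complexed pools.
        for name, fit_y in [("single", single(t, *p1)), ("double", double(t, *p2))]:
            print(name, "SSR:", round(np.sum((y - fit_y) ** 2), 4))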

  2. Memristance controlling approach based on modification of linear M—q curve

    International Nuclear Information System (INIS)

    Liu Hai-Jun; Li Zhi-Wei; Yu Hong-Qi; Sun Zhao-Lin; Nie Hong-Shan

    2014-01-01

    The memristor has broad application prospects in many fields, and many of those fields require accurate memristance control. A nonlinear model is of great importance for realizing memristance control accurately, but the implementation complexity caused by iteration has limited the actual application of this model. Considering the approximately linear characteristics of the middle region of the memristance-charge (M-q) curve of the nonlinear model, this paper proposes a memristance controlling approach, which is achieved by linearizing the middle region of the M-q curve of the nonlinear memristor, and establishes a linear relationship between the memristance M and the input excitation, so that the memristance can be controlled precisely simply by adjusting the input signal. First, the feasibility of linearizing the middle part of the M-q curve of a memristor with a nonlinear model is analyzed from a qualitative perspective. Then, the linearization equations for the middle region of the M-q curve are constructed by using the shift method, and, for a sinusoidal excitation, the analytical relation between the memristance M and the charging time t is derived through Taylor series expansions. Finally, the performance of the proposed approach is demonstrated, including the linearizing capability for the middle part of the M-q curve of the nonlinear-model memristor, the controlling ability for the memristance M, and the influence of the input excitation on linearization errors. (interdisciplinary physics and related areas of science and technology)

  3. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

    Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels. The modeling approach developed in this work attempts to overcome specific limitations of the two existing approaches. It combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual phases of ferrite and martensite. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model. The results of the micromechanical model matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.

  4. Master curve approach to monitor fracture toughness of reactor pressure vessels in nuclear power plants

    International Nuclear Information System (INIS)

    2009-10-01

    A series of coordinated research projects (CRPs) have been sponsored by the IAEA, starting in the early 1970s, focused on neutron radiation effects on reactor pressure vessel (RPV) steels. The purpose of the CRPs was to develop correlative comparisons to test the uniformity of results through coordinated international research studies and data sharing. The overall scope of the eighth CRP (CRP-8), Master Curve Approach to Monitor Fracture Toughness of Reactor Pressure Vessels in Nuclear Power Plants, has evolved from previous CRPs which have focused on fracture toughness related issues. The ultimate use of embrittlement understanding is application to assure structural integrity of the RPV under current and future operation and accident conditions. The Master Curve approach for assessing the fracture toughness of a sampled irradiated material has been gaining acceptance throughout the world. This direct measurement of fracture toughness approach is technically superior to the correlative and indirect methods used in the past to assess irradiated RPV integrity. Several elements have been identified as focal points for Master Curve use: (i) limits of applicability for the Master Curve at the upper range of the transition region, for loading rates from quasi-static to dynamic/impact; (ii) effects of non-homogeneous material or changes due to environmental conditions on the Master Curve, and how heterogeneity can be integrated into a more inclusive Master Curve methodology; (iii) the importance of how fracture mode differences and changes affect the Master Curve shape. The collected data in this report represent mostly results from non-irradiated testing, although some results from test reactor irradiations and plant surveillance programmes have been included as available. The results presented here should allow utility engineers and scientists to directly measure fracture toughness using small surveillance size specimens and apply the results using the Master Curve approach

  5. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

    The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA with a heating rate of 5 °C·min⁻¹. The model-fitting kinetic approach was applied to the TGA data, with the Coats-Redfern method used for the model fitting. The popular model-fitting approach gives an excellent fit to non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5
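
    The Coats-Redfern linearization itself is a one-line regression; the sketch below generates synthetic first-order TGA data consistent with the reported constants (roughly 34 kcal·mol⁻¹ and 10⁸ s⁻¹, used here only as round numbers) and recovers them from the slope and intercept of ln[g(α)/T²] versus 1/T.

        import numpy as np

        R = 8.314  # J/(mol*K)

        beta = 5.0 / 60.0                   # heating rate, K/s
        T = np.linspace(700.0, 900.0, 40)   # K
        Ea_true, A_true = 143e3, 1e8        # ~34.2 kcal/mol, 1e8 1/s

        # First-order model: g(alpha) = -ln(1 - alpha).
        # Coats-Redfern: ln[g(alpha)/T^2] = ln(A*R/(beta*Ea)) - Ea/(R*T)
        g = (A_true * R * T**2 / (beta * Ea_true)) * np.exp(-Ea_true / (R * T))

        yy = np.log(g / T**2)
        slope, intercept = np.polyfit(1.0 / T, yy, 1)
        Ea = -slope * R
        A = beta * Ea / R * np.exp(intercept)
        print(f"Ea ~ {Ea / 4184:.1f} kcal/mol, A ~ {A:.2e} 1/s")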

  6. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
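
    The notion of non-domination is easy to operationalize; a small sketch, assuming a matrix of per-target goodness-of-fit errors for sampled input sets, follows.

        import numpy as np

        def pareto_frontier(errors):
            """Mask of input sets not dominated on any calibration target.

            errors: (n_sets, n_targets) array of fit errors (lower is better).
            A set is dominated if another set is at least as good on every
            target and strictly better on at least one.
            """
            n = errors.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                dominated = (np.all(errors <= errors[i], axis=1)
                             & np.any(errors < errors[i], axis=1))
                if dominated.any():
                    keep[i] = False
            return keep

        rng = np.random.default_rng(8)
        errs = rng.uniform(0, 1, (500, 3))  # e.g., 500 sampled input sets, 3 targets
        front = pareto_frontier(errs)
        print(front.sum(), "input sets on the Pareto frontier")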

  7. Prediction of ion-exchange column breakthrough curves by constant-pattern wave approach.

    Science.gov (United States)

    Lee, I-Hsien; Kuan, Yu-Chung; Chern, Jia-Ming

    2008-03-21

    The release of heavy metals from industrial wastewaters represents one of the major threats to the environment. Compared with the chemical precipitation method, the fixed-bed ion-exchange process can effectively remove heavy metals from wastewaters and generates no hazardous sludge. In order to design and operate fixed-bed ion-exchange processes successfully, it is very important to understand the column dynamics. In this study, the column experiments for Cu2+/H+, Zn2+/H+, and Cd2+/H+ systems using Amberlite IR-120 were performed to measure the breakthrough curves under varying operating conditions. The experimental results showed that the total cation concentration in the mobile phase played a key role in the breakthrough curves; a higher feed concentration resulted in an earlier breakthrough. Furthermore, the column dynamics was also predicted by self-sharpening and constant-pattern wave models. The self-sharpening wave model assuming local ion-exchange equilibrium could provide a simple and quick estimation for the breakthrough volume, but the predicted breakthrough curves did not match the experimental data very well. On the contrary, the constant-pattern wave model using a constant driving force model for a finite ion-exchange rate provided a better fit to the experimental data. The obtained liquid-phase mass transfer coefficient was correlated to the flow velocity and other operating parameters; the breakthrough curves under varying operating conditions could thus be predicted by the constant-pattern wave model using the correlation.

  8. Generalized Wigner functions in curved spaces: A new approach

    International Nuclear Information System (INIS)

    Kandrup, H.E.

    1988-01-01

    It is well known that, given a quantum field in Minkowski space, one can define Wigner functions f_W^N(x_1,p_1,...,x_N,p_N) which (a) are convenient to analyze since, unlike the field itself, they are c-number quantities and (b) can be interpreted in a limited sense as "quantum distribution functions." Recently, Winter and Calzetta, Habib and Hu have shown one way in which these flat-space Wigner functions can be generalized to a curved-space setting, deriving thereby approximate kinetic equations which make sense "quasilocally" for "short-wavelength modes." This paper suggests a completely orthogonal approach for defining curved-space Wigner functions which generalizes instead an object such as the Fourier-transformed f_W^1(k,p), which is effectively a two-point function viewed in terms of the "natural" creation and annihilation operators a†(p − ½k) and a(p + ½k). The approach suggested here lacks the precise phase-space interpretation implicit in the approach of Winter or Calzetta, Habib, and Hu, but it is useful in that (a) it is geared to handle any "natural" mode decomposition, so that (b) it can facilitate exact calculations at least in certain limits, such as for a source-free linear field in a static spacetime

  9. Fracture toughness evaluation of steels through master curve approach using Charpy impact specimens

    International Nuclear Information System (INIS)

    Chatterjee, S.; Sriharsha, H.K.; Shah, Priti Kotak

    2007-01-01

    The master curve approach can be used for the evaluation of fracture toughness of all steels which exhibit a transition between brittle to ductile mode of fracture with increasing temperature, and to monitor the extent of embrittlement caused by metallurgical damage mechanisms. This paper details the procedure followed to evaluate the fracture toughness of a typical ferritic steel used as material for pressure vessels. The potential of master curve approach to overcome the inherent limitations of the estimation of fracture toughness using ASME Code reference toughness is also illustrated. (author)

  10. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    International Nuclear Information System (INIS)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M.

    2015-01-01

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST

  11. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, N. E.; Soderberg, A. M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Betancourt, M., E-mail: nsanders@cfa.harvard.edu [Department of Statistics, University of Warwick, Coventry CV4 7AL (United Kingdom)

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.

  12. Model-based Approach for Long-term Creep Curves of Alloy 617 for a High Temperature Gas-cooled Reactor

    International Nuclear Information System (INIS)

    Kim, Woo Gon; Yin, Song Nan; Kim, Yong Wan

    2008-01-01

    Alloy 617 is a principal candidate alloy for high temperature gas-cooled reactor (HTGR) components because of its high creep rupture strength coupled with its good corrosion behavior in simulated HTGR helium and its sufficient workability. To describe a creep strain-time curve well, various constitutive equations have been proposed by Kachanov-Rabotnov (K-R), Andrade, Garofalo, Evans, Maruyama and others. Among them, the K-R model has been used frequently because it adequately accounts for both secondary creep, which results from a balance between softening and hardening of the material, and tertiary creep, which results from the appearance and acceleration of internal or external damage processes. For nickel-base alloys it has been reported that tertiary creep may set in at a low strain and may govern the total creep deformation. A creep curve for nickel-base Alloy 617 should therefore be predicted appropriately by the K-R model, which can reflect tertiary creep. In this paper, the long-term creep curves for Alloy 617 were predicted using the nonlinear least squares fitting (NLSF) method with the K-R model. A modified K-R model was introduced to fit the full creep curves well, and the values of the λ and K parameters in the modified K-R model were obtained with stresses
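
    This kind of fit can be sketched with SciPy's nonlinear least squares. The paper's exact modified K-R formulation is not reproduced here; the code below assumes one common integrated K-R-type strain form, ε(t) = ε_r[1 − (1 − t/t_r)^(1/λ)], and fits it to synthetic creep data, so the functional form and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def kr_creep(t, eps_r, t_r, lam):
        """One common integrated Kachanov-Rabotnov form: strain accelerates
        toward rupture at t_r; lam controls the shape of the tertiary stage."""
        return eps_r * (1.0 - (1.0 - t / t_r) ** (1.0 / lam))

    # Synthetic creep curve (hours, strain) with noise
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 900.0, 60)
    strain = kr_creep(t, 0.35, 1000.0, 2.5) + rng.normal(0.0, 0.003, t.size)

    # Nonlinear least-squares fit; bound t_r above the last observation
    # so the power-law term stays real over the data range.
    popt, pcov = curve_fit(
        kr_creep, t, strain,
        p0=[0.3, 1100.0, 2.0],
        bounds=([0.0, t.max() * 1.01, 0.1], [2.0, 1e4, 20.0]),
    )
    print("eps_r=%.3f  t_r=%.0f h  lambda=%.2f" % tuple(popt))
    ```

    Bounding t_r above the last observed time keeps the damage term real over the data range, which is the usual practical difficulty when fitting rupture-type models.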

  13. Use of orthonormal polynomials to fit energy spectrum data for water transported through membrane

    International Nuclear Information System (INIS)

    Bogdanova, N.; Todorova, L.

    2001-01-01

    A new application of our approach with orthonormal polynomials to curve fitting is given for the case when both variables have errors. We approximate and describe data on a new effect due to a change in the water energy spectrum as a result of water transport through a porous membrane.
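
    The record gives no algorithmic detail, but the general idea of least-squares fitting in an orthogonal polynomial basis can be sketched with NumPy's Legendre class; the paper's treatment of errors in both variables is not reproduced, and the data here are synthetic.

    ```python
    import numpy as np
    from numpy.polynomial import Legendre

    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 10.0, 80)
    y = np.sin(x) * np.exp(-0.1 * x) + rng.normal(0.0, 0.05, x.size)

    # Least-squares fit in an orthogonal basis: Legendre.fit maps the
    # data interval onto [-1, 1], where the basis is orthogonal, which
    # keeps the normal equations well conditioned at higher degree.
    fit = Legendre.fit(x, y, deg=9)
    print("max |residual| =", np.abs(fit(x) - y).max())
    ```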

  14. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    Science.gov (United States)

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

    Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it is demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
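
    Steps (1) and (2) of this pipeline can be sketched with SciPy's parametric spline routines: fit a smoothing B-spline to each measured row and resample it at a fixed number of parameter values so every row ends up with the same point count. SciPy fits plain B-splines (i.e., NURBS with unit weights), and the paper's curvature-adaptive resampling is simplified here to uniform resampling, so this is illustrative only.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    rng = np.random.default_rng(3)

    def resample_row(x, y, z, n_out=50, smooth=1e-4):
        """Fit a cubic B-spline to one measured row and resample it at
        uniform parameter values."""
        tck, u = splprep([x, y, z], s=smooth * len(x))
        u_new = np.linspace(0.0, 1.0, n_out)
        return np.column_stack(splev(u_new, tck))

    # A noisy measured row with a random number of points, as in
    # quasi scattered data from a coordinate measuring system.
    m = rng.integers(30, 80)
    t = np.sort(rng.uniform(0.0, np.pi, m))
    row = resample_row(np.cos(t), np.sin(t), 0.1 * t, n_out=50)
    print(row.shape)  # (50, 3): every row now has the same point count
    ```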

  15. Fitting methods for constructing energy-dependent efficiency curves and their application to ionization chamber measurements

    International Nuclear Information System (INIS)

    Svec, A.; Schrader, H.

    2002-01-01

    An ionization chamber without and with an iron liner (absorber) was calibrated by a set of radionuclide activity standards of the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber in a defined measurement geometry without and with the liner, and radionuclide efficiencies were calculated. Programmed calculation with an analytical efficiency function and the nonlinear regression algorithm of Microsoft (MS) Excel was used for the fitting. Efficiencies from the bremsstrahlung of pure beta-particle emitters were calibrated to an accuracy level of about 10%. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. For most photon-emitting radionuclides, the method yields differences between experimental and calculated radionuclide efficiencies of the order of a few percent

  16. Comparing Angular and Curved Shapes in Terms of Implicit Associations and Approach/Avoidance Responses.

    Directory of Open Access Journals (Sweden)

    Letizia Palumbo

    Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task, we tested approach and avoidance reactions to curved and angular polygons with a stick figure (i.e., the manikin). We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast, participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference.

  17. Multidimentional and Multi-Parameter Fortran-Based Curve Fitting ...

    African Journals Online (AJOL)

    This work briefly describes the mathematics behind the algorithm and elaborates how to implement it in the FORTRAN 95 programming language. The advantage of this algorithm, when it is extended to surfaces and complex functions, is that it gives researchers greater confidence in the fit. It also improves the ...

  18. Molecular dynamics simulations of the melting curve of NiAl alloy under pressure

    OpenAIRE

    Wenjin Zhang; Yufeng Peng; Zhongli Liu

    2014-01-01

    The melting curve of B2-NiAl alloy under pressure has been investigated using molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first one simulates a homogeneous melting, while the second one involves a heterogeneous melting of materials. Both approaches reduce the superheating effectively and their results are close to each other at the applied pressures. By fit...

  19. From curve fitting to machine learning an illustrative guide to scientific data analysis and computational intelligence

    CERN Document Server

    Zielesny, Achim

    2016-01-01

    This successful book provides in its second edition an interactive and illustrative guide from two-dimensional curve fitting to multidimensional clustering and machine learning with neural networks or support vector machines. Along the way, topics like mathematical optimization and evolutionary algorithms are touched upon. All concepts and ideas are outlined in a clear-cut manner with graphically depicted plausibility arguments and a little elementary mathematics. The major topics are extensively outlined with exploratory examples and applications. The primary goal is to be as illustrative as possible without hiding problems and pitfalls, but rather to address them. The character of an illustrative cookbook is complemented with specific sections that address more fundamental questions like the relation between machine learning and human intelligence. All topics are completely demonstrated with the computing platform Mathematica and the Computational Intelligence Packages (CIP), a high-level function library developed with M...

  20. Fitness of gutta-percha cones in curved root canals prepared with reciprocating files correlated with tug-back sensation.

    Science.gov (United States)

    Yoon, Heeyoung; Baek, Seung-Ho; Kum, Kee-Yeon; Kim, Hyeon-Cheol; Moon, Young-Mi; Fang, Denny Y; Lee, WooCheol

    2015-01-01

    The purpose of this study was to evaluate the gutta-percha-occupied area (GPOA) and the relationship between GPOA and tug-back sensations in canals instrumented with reciprocating files. Twenty curved canals were instrumented using Reciproc R25 (VDW, Munich, Germany) (group R) and WaveOne Primary (Dentsply Maillefer, Ballaigues, Switzerland) (group W), respectively (n = 10 each). The presence or absence of a tug-back sensation was decided for both #25/.08 and #30/.06 cones in every canal. The percentage of GPOA at the 1-, 2-, and 3-mm levels from the working length was calculated using micro-computed tomographic imaging. The correlation between the sum of the GPOA and the presence of a tug-back sensation was also investigated. The data were analyzed statistically at P = .05. A tug-back sensation was present in 45% and 100% of canals for #25/.08 and #30/.06 cones, respectively, with a significant difference (P < .05), whereas the sum of the GPOA was not significantly correlated with the presence of a tug-back sensation (P > .05). Under the conditions of this study, the tug-back sensation can be a definitive determinant for indicating higher cone fitness in the curved canal regardless of the cone type. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  1. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

    A new method that belongs to the differential category for determining the end points of potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis is covered. The new method is generally applicable to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and methods of the equivalence point category, such as Gran or Fortuin.
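
    A simplified numerical analogue of this procedure, differentiating the titration curve and then locating the extremum of the derivative by inverse parabolic interpolation through the three points around it, can be sketched as follows; the paper's four-point sigmoid fit for the derivative values is not reproduced, and the data are synthetic.

    ```python
    import numpy as np

    # Synthetic acid-base titration curve (pH vs. titrant volume, mL),
    # equally spaced points; the true end point is at 25.0 mL.
    v = np.linspace(20.0, 30.0, 101)
    ph = 7.0 + 4.0 * np.tanh(2.0 * (v - 25.0))
    h = v[1] - v[0]

    # First derivative, then inverse parabolic interpolation of the
    # three points around the derivative maximum (analytic vertex of
    # the parabola through them).
    d = np.gradient(ph, v)
    i = np.argmax(d)
    num = d[i + 1] - d[i - 1]
    den = 2.0 * (2.0 * d[i] - d[i - 1] - d[i + 1])
    v_end = v[i] + h * num / den
    print(f"end point = {v_end:.3f} mL")
    ```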

  2. Non-linear modelling to describe lactation curve in Gir crossbred cows

    Directory of Open Access Journals (Sweden)

    Yogesh C. Bangar

    2017-02-01

    Background: The modelling of the lactation curve provides guidelines for formulating farm managerial practices in dairy cows. The aim of the present study was to determine the suitable non-linear model which most accurately fitted the lactation curves of five lactations in 134 Gir crossbred cows reared at the Research-Cum-Development Project (RCDP) on Cattle farm, MPKV (Maharashtra). Four models, viz. the gamma-type function, quadratic model, mixed log function and Wilmink model, were fitted to each lactation separately and then compared on the basis of goodness-of-fit measures, viz. adjusted R2, root mean square error (RMSE), Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Results: In general, the highest milk yield was observed in the fourth lactation, whereas it was lowest in the first lactation. Among the models investigated, the mixed log function and the gamma-type function provided the best fit of the lactation curve for the first and the remaining lactations, respectively. The quadratic model gave the poorest fit to the lactation curve in almost all lactations. Peak yield was highest in the fourth and lowest in the first lactation. Further, the first lactation showed the highest persistency but a relatively longer time to peak yield than the other lactations. Conclusion: Lactation curve modelling using the gamma-type function may be helpful in setting management strategies at the farm level; however, the models must be optimized regularly before implementation to enhance productivity in Gir crossbred cows.
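
    The gamma-type function named here as the best-fitting model is Wood's curve, y(t) = a·t^b·exp(−c·t), and fitting it is a short exercise with SciPy; peak time (b/c) and peak yield then follow analytically. The numbers below are illustrative, not the study's estimates.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Gamma-type (Wood) lactation curve: y = a * t^b * exp(-c t)."""
        return a * t ** b * np.exp(-c * t)

    rng = np.random.default_rng(4)
    dim = np.arange(5.0, 306.0, 5.0)                 # days in milk
    y = wood(dim, 12.0, 0.25, 0.004) + rng.normal(0.0, 0.4, dim.size)

    (a, b, c), _ = curve_fit(wood, dim, y, p0=[10.0, 0.2, 0.003])
    t_peak = b / c                                   # from dy/dt = 0
    print(f"peak at {t_peak:.0f} DIM, peak yield {wood(t_peak, a, b, c):.1f} kg/d")
    ```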

  3. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds much better. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.
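
    The comparison maps directly onto SciPy: fit a Beta distribution with the support pinned to the unit interval, and a Gaussian kernel density estimate, to the same sample. The bimodal synthetic recovery rates below are illustrative stand-ins for the loan and bond data.

    ```python
    import numpy as np
    from scipy.stats import beta, gaussian_kde

    # Bimodal synthetic recovery rates on (0, 1), mimicking the
    # loan/bond data discussed above (values are illustrative).
    rng = np.random.default_rng(5)
    r = np.concatenate([rng.beta(2, 12, 400), rng.beta(14, 2, 600)])

    # Unimodal Beta fit (location/scale pinned to the unit interval)
    a, b_, loc, scale = beta.fit(r, floc=0, fscale=1)

    # Gaussian kernel density estimate of the same sample
    kde = gaussian_kde(r)

    grid = np.linspace(0.01, 0.99, 5)
    print("beta pdf:", np.round(beta.pdf(grid, a, b_), 2))
    print("kde  pdf:", np.round(kde(grid), 2))  # recovers both modes
    ```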

  4. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    International Nuclear Information System (INIS)

    Winkler, A W; Zagar, B G

    2013-01-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives. (paper)

  5. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    Science.gov (United States)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  6. A Bayesian inference approach to unveil supply curves in electricity markets

    DEFF Research Database (Denmark)

    Mitridati, Lesia Marie-Jeanne Mariane; Pinson, Pierre

    2017-01-01

    With increased competition in wholesale electricity markets, the need for new decision-making tools for strategic producers has arisen. Optimal bidding strategies have traditionally been modeled as stochastic profit maximization problems. However, for producers with non-negligible market power ... in the literature on modeling this uncertainty. In this study we introduce a Bayesian inference approach to reveal the aggregate supply curve in a day-ahead electricity market. The proposed algorithm relies on Markov Chain Monte Carlo and Sequential Monte Carlo methods. The major appeal of this approach ...

  7. Serial position curves in free recall.

    Science.gov (United States)

    Laming, Donald

    2010-01-01

    The scenario for free recall set out in Laming (2009) is developed to provide models for the serial position curves from 5 selected sets of data, for final free recall, and for multitrial free recall. The 5 sets of data reflect the effects of rate of presentation, length of list, delay of recall, and suppression of rehearsal. Each model accommodates the serial position curve for first recalls (where those data are available) as well as that for total recalls. Both curves are fit with the same parameter values, as also (with 1 exception) are all of the conditions compared within each experiment. The distributions of numbers of recalls are also examined and shown to have variances increased above what would be expected if successive recalls were independent. This is taken to signify that, in those experiments in which rehearsals were not recorded, the retrieval of words for possible recall follows the same pattern that is observed following overt rehearsal, namely, that retrieval consists of runs of consecutive elements from memory. Finally, 2 sets of data are examined that the present approach cannot accommodate. It is argued that the problem with these data derives from an interaction between the patterns of (covert) rehearsal and the parameters of list presentation.

  8. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

    Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortages such as numerical...

  9. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Considerations for reference pump curves

    International Nuclear Information System (INIS)

    Stockton, N.B.

    1992-01-01

    This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a reference pump curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturers' pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least-squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least-squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code-required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point.
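
    Two of the curve-construction techniques named here, polynomial least squares over a limited operating range and cubic spline interpolation, can be compared in a few lines; the pump test points below are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Reference pump test data: flow rate (gpm) vs. differential
    # pressure head (ft); values are illustrative.
    flow = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
    head = np.array([310.0, 305.0, 293.0, 272.0, 240.0, 196.0])

    # Polynomial least-squares reference curve over the tested range
    coef = np.polyfit(flow, head, 2)
    poly_head = np.polyval(coef, 550.0)

    # Cubic spline interpolation through the same reference points
    spline = CubicSpline(flow, head)
    print(f"head at 550 gpm: poly {poly_head:.1f} ft, spline {spline(550.0):.1f} ft")
    ```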

  11. a new approach of Analysing GRB light curves

    International Nuclear Information System (INIS)

    Varga, B.; Horvath, I.

    2005-01-01

    We estimated the T_xx quantiles of the cumulative GRB light curves using our recalculated background. The basic information in the light curves was extracted by multivariate statistical methods. The possible classes of the light curves are also briefly discussed

  12. Growth Curve Models and Applications : Indian Statistical Institute

    CERN Document Server

    2017-01-01

    Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally r...

  13. Molecular dynamics simulations of the melting curve of NiAl alloy under pressure

    International Nuclear Information System (INIS)

    Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli

    2014-01-01

    The melting curve of B2-NiAl alloy under pressure has been investigated using the molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first simulates a homogeneous melting, while the second involves a heterogeneous melting of materials. Both approaches reduce the superheating effectively and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: T_m = 1783(1 + P/9.801)^0.298 (one-phase approach) and T_m = 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equation of state and of the zero-pressure melting point (calc. 1850 ± 25 K, exp. 1911 K) with experiment proved the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. The calculated melting point of nickel agrees well with experiment at zero pressure, while that of aluminum is slightly higher than experiment.
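
    Fitting the Simon equation T_m(P) = T_0(1 + P/a)^c is a standard nonlinear least-squares task; the sketch below regenerates noisy melting points near the one-phase curve quoted above and recovers the parameters. The scatter and grid are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def simon(P, T0, a, c):
        """Simon melting equation: T_m(P) = T0 * (1 + P/a)^c."""
        return T0 * (1.0 + P / a) ** c

    # Synthetic melting points (GPa, K) generated near the one-phase
    # curve quoted above, with scatter added for illustration.
    rng = np.random.default_rng(6)
    P = np.linspace(0.0, 100.0, 12)
    Tm = simon(P, 1783.0, 9.801, 0.298) + rng.normal(0.0, 20.0, P.size)

    popt, _ = curve_fit(simon, P, Tm, p0=[1800.0, 10.0, 0.3])
    print("T0=%.0f K  a=%.2f GPa  c=%.3f" % tuple(popt))
    ```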

  14. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of distribution models, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, for example CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and LossCalc by Moody's. However, the Beta distribution has a fatal defect: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds that Moody's new data show. In order to overcome this flaw, kernel density estimation is introduced, and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds much better. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. Thus, using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  15. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558

  16. Can CCM law properly represent all extinction curves?

    International Nuclear Information System (INIS)

    Geminale, Anna; Popowski, Piotr

    2005-01-01

    We present the analysis of a large sample of lines of sight with extinction curves covering the wavelength range from the near-infrared (NIR) to the ultraviolet (UV). We derive total-to-selective extinction ratios based on the Cardelli, Clayton and Mathis (1989; CCM) law, which is typically used to fit extinction data for both the diffuse and the dense interstellar medium. We conclude that the CCM law is able to fit most of the extinction curves in our sample. We divide the remaining lines of sight with peculiar extinction into two groups according to two main behaviors: (a) the optical/IR or/and UV wavelength region cannot be reproduced by the CCM formula; (b) the optical/NIR and UV extinction data are best fit by the CCM law with different values of R_V. We present examples of such curves. The study of both types of peculiar cases can help us to learn about the physical processes that affect dust in the interstellar medium, e.g., the formation of mantles on the surface of grains, evaporation, growing or shattering.

  17. Inclusive Fitness Maximization:An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  18. Technological change in energy systems. Learning curves, logistic curves and input-output coefficients

    International Nuclear Information System (INIS)

    Pan, Haoran; Koehler, Jonathan

    2007-01-01

    Learning curves have recently been widely adopted in climate-economy models to incorporate endogenous change of energy technologies, replacing the conventional assumption of an autonomous energy efficiency improvement. However, there has been little consideration of the credibility of the learning curve. The current trend that many important energy and climate change policy analyses rely on the learning curve means that it is of great importance to critically examine the basis for learning curves. Here, we analyse the use of learning curves in energy technology, usually implemented as a simple power function. We find that the learning curve cannot separate the effects of price and technological change, cannot reflect continuous and qualitative change of both conventional and emerging energy technologies, cannot help to determine the time paths of technological investment, and misses the central role of R and D activity in driving technological change. We argue that a logistic curve of improving performance modified to include R and D activity as a driving variable can better describe the cost reductions in energy technologies. Furthermore, we demonstrate that the top-down Leontief technology can incorporate the bottom-up technologies that improve along either the learning curve or the logistic curve, through changing input-output coefficients. An application to UK wind power illustrates that the logistic curve fits the observed data better and implies greater potential for cost reduction than the learning curve does. (author)

  19. The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.

    Science.gov (United States)

    van Battum, L J; Huizenga, H

    2006-07-01

    Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed for both photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single-hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and the maximum optical density as parameters. In this work, the single-hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If the dataset is fitted for each single sensitometric curve separately, a large variation is observed for both fit parameters. When sensitometric curves are fitted simultaneously, it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When fitting each curve separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter OD_max, will be adequate to find the actual relation between optical density and dose.
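
    A minimal sketch of fitting the single-hit saturation form OD(D) = OD_max(1 − e^(−kD)) is given below; the low-dose slope OD_max·k and the curvature both follow from the same two parameters, which is the slope-curvature link exploited above. The calibration points are illustrative, not the paper's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def single_hit(dose, od_max, k):
        """Single-hit saturation model for net optical density vs. dose;
        the low-dose slope is od_max * k."""
        return od_max * (1.0 - np.exp(-k * dose))

    # Illustrative film calibration points (dose in cGy, net OD)
    dose = np.array([0.0, 20.0, 40.0, 80.0, 120.0, 200.0, 300.0])
    od = np.array([0.0, 0.30, 0.57, 1.02, 1.38, 1.86, 2.22])

    (od_max, k), _ = curve_fit(single_hit, dose, od, p0=[3.0, 0.005])
    print(f"OD_max = {od_max:.2f}, k = {k:.4f} per cGy")
    ```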

  20. Application of Bimodal Master Curve Approach on KSNP RPV steel SA508 Gr. 3

    International Nuclear Information System (INIS)

    Kim, Jongmin; Kim, Minchul; Choi, Kwonjae; Lee, Bongsang

    2014-01-01

    In this paper, the standard master curve (MC) approach and the bimodal master curve (BMC) approach are applied to the forging material of the KSNP reactor pressure vessel (RPV) steel SA508 Gr. 3. A series of fracture toughness tests was conducted in the ductile-to-brittle transition region, with fracture toughness specimens extracted from four regions: the surface, 1/8T, 1/4T and 1/2T. Deterministic material inhomogeneity was reviewed through the conventional MC approach, and random inhomogeneity was evaluated by BMC. In the present paper, these four regions were considered for the fracture toughness specimens of KSNP (Korean Standard Nuclear Plant) SA508 Gr. 3 steel to provide deterministic material inhomogeneity and to review the applicability of BMC. T0 determined by the conventional MC has a low value owing to the higher quenching rate at the surface, as expected. However, more than about 15% of the KJC values lay above the 95% probability curves indexed with the standard MC T0 at the surface and 1/8T, which implies the existence of inhomogeneity in the material. To review the applicability of the BMC method, the deterministic inhomogeneity owing to the extraction location and quenching rate was treated as random inhomogeneity. Although the lower and upper bound curves of the BMC covered more KJC values than those of the conventional MC, there was no significant relationship between the BMC analysis lines and the measured KJC values in the higher toughness distribution, and BMC and MC provided almost the same T0 values. Therefore, the standard MC evaluation method is appropriate for this material even though the standard MC has a narrow upper/lower bound curve range from the RPV evaluation point of view. A material is not homogeneous in reality; such inhomogeneity arises from the specimen extraction location, the heat treatment, and the manufacturing process as a whole, and the conventional master curve has limitations when applied to widely scattered fracture toughness data such as those from the weld region.

  1. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Introduction: In order to implement watershed practices to decrease the effects of soil erosion, the sediment output of the watershed needs to be estimated. The sediment rating curve is the most conventional tool used to estimate sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using a sediment rating curve. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km2. In this study, the uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R2). In the GLUE methodology, different parameter sets were sampled randomly from a prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100000 to 400000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
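
    The bootstrap branch of this analysis can be sketched compactly: refit the log-log rating curve on resampled (Q, C) pairs and read percentile intervals off the ensemble of predictions. The code below also applies Duan's smearing estimator for the log-retransformation bias (the paper's specific correction factors are not reproduced); the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Illustrative paired discharge Q (m3/s) and suspended sediment
    # concentration C (mg/L) samples following a power-law rating curve.
    Q = rng.lognormal(3.0, 0.8, 120)
    C = 5.0 * Q ** 1.4 * rng.lognormal(0.0, 0.4, Q.size)

    # Rating curve fitted in log space: log C = a + b log Q
    X, Y = np.log(Q), np.log(C)

    # Bootstrap: refit on resampled pairs, collect predictions at Q0
    B, Q0 = 2000, 50.0
    preds = np.empty(B)
    for i in range(B):
        idx = rng.integers(0, Q.size, Q.size)        # resample with replacement
        b, a = np.polyfit(X[idx], Y[idx], 1)
        resid = Y[idx] - (a + b * X[idx])
        smear = np.mean(np.exp(resid))               # Duan's bias correction
        preds[i] = smear * np.exp(a + b * np.log(Q0))

    lo, hi = np.percentile(preds, [2.5, 97.5])
    print(f"C(Q={Q0:.0f}) 95% interval: {lo:.0f} - {hi:.0f} mg/L")
    ```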

  2. Fitting-free curve resolution of spectroscopic data: Chemometric and physical chemical viewpoints.

    Science.gov (United States)

    Rajkó, Róbert; Beyramysoltan, Samira; Abdollahi, Hamid; Eőri, János; Pongor, Gábor

    2015-08-12

    In this paper the authors investigate spectroscopic data analysis according to a recent development, the Direct Inversion in the Spectral Subspace (DISS) procedure. DISS is a supervised curve resolution technique; consequently, it can be used once the spectra of the potential pure components are known and the experimental spectrum of a chemical mixture is also available, the task then being to determine the composition of the unknown chemical mixture. In this paper, the original algorithm of DISS is re-examined and some further critical reasoning and essential developments are provided, including detailed explanations of the constrained minimization task based on the Lagrange multiplier regularization approach. The main conclusion is that the regularization used for DISS is needed because of the possible shifted-spectra effect rather than collinearity; and this new property of DISS, i.e., treating the mild shifted-spectra effect, can be considered its main scientific advantage. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Generalization of drying curves in conductive/convective drying of cellulose

    Directory of Open Access Journals (Sweden)

    M. Stenzel

    2003-03-01

    The objective of this work is to analyze the possibility of applying the drying curve generalization methodology to the conductive/convective hot plate drying of cellulose. The experiments were carried out at different heated plate temperatures and air velocities over the surface of the samples. This kind of approach is very interesting because it permits comparison of the results of different experiments by reducing them to only one set, which can be divided into two groups: the generalized drying curves and the generalized drying rate curves. The experimental apparatus is an attempt to reproduce the operational conditions of conventional paper dryers (ratio of paper/air movement) and consists of a metallic box heated by a thermostatic bath, with an upper surface on which the cellulose samples are placed. The sample material is short- and long-fiber cellulose sheets, about 1 mm thick, and ambient air was introduced into the system by an adjustable blower under different conditions. Long-fiber cellulose generalized curves were obtained and analyzed, first individually and then together with the short-fiber cellulose results from Motta Lima et al. (2000a,b). Finally, a set of equations to fit the generalized curves obtained is proposed and discussed.

  4. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.

  5. Observational evidence of dust evolution in galactic extinction curves

    Energy Technology Data Exchange (ETDEWEB)

    Cecchi-Pestellini, Cesare [INAF-Osservatorio Astronomico di Palermo, P.zza Parlamento 1, I-90134 Palermo (Italy); Casu, Silvia; Mulas, Giacomo [INAF-Osservatorio Astronomico di Cagliari, Via della Scienza, I-09047 Selargius (Italy); Zonca, Alberto, E-mail: cecchi-pestellini@astropa.unipa.it, E-mail: silvia@oa-cagliari.inaf.it, E-mail: gmulas@oa-cagliari.inaf.it, E-mail: azonca@oa-cagliari.inaf.it [Dipartimento di Fisica, Università di Cagliari, Strada Prov.le Monserrato-Sestu Km 0.700, I-09042 Monserrato (Italy)

    2014-04-10

    Although structural and optical properties of hydrogenated amorphous carbons are known to respond to varying physical conditions, most conventional extinction models are basically curve fits with modest predictive power. We compare an evolutionary model of the physical properties of carbonaceous grain mantles with their determination by homogeneously fitting observationally derived Galactic extinction curves with the same physically well-defined dust model. We find that a large sample of observed Galactic extinction curves are compatible with the evolutionary scenario underlying such a model, requiring physical conditions fully consistent with standard density, temperature, radiation field intensity, and average age of diffuse interstellar clouds. Hence, through the study of interstellar extinction we may, in principle, understand the evolutionary history of the diffuse interstellar clouds.

  6. Modeling two strains of disease via aggregate-level infectivity curves.

    Science.gov (United States)

    Romanescu, Razvan; Deardon, Rob

    2016-04-01

    Well formulated models of disease spread, and efficient methods to fit them to observed data, are powerful tools for aiding the surveillance and control of infectious diseases. Our project considers the problem of the simultaneous spread of two related strains of disease in a context where spatial location is the key driver of disease spread. We start our modeling work with the individual level models (ILMs) of disease transmission, and extend these models to accommodate the competing spread of the pathogens in a two-tier hierarchical population (whose levels we refer to as 'farm' and 'animal'). The postulated interference mechanism between the two strains is a period of cross-immunity following infection. We also present a framework for speeding up the computationally intensive process of fitting the ILM to data, typically done using Markov chain Monte Carlo (MCMC) in a Bayesian framework, by turning the inference into a two-stage process. First, we approximate the number of animals infected on a farm over time by infectivity curves. These curves are fit to data sampled from farms, using maximum likelihood estimation, then, conditional on the fitted curves, Bayesian MCMC inference proceeds for the remaining parameters. Finally, we use posterior predictive distributions of salient epidemic summary statistics, in order to assess the model fitted.

  7. Analyzing price and efficiency dynamics of large appliances with the experience curve approach

    International Nuclear Information System (INIS)

    Weiss, Martin; Patel, Martin K.; Junginger, Martin; Blok, Kornelis

    2010-01-01

    Large appliances are major power consumers in households of industrialized countries. Although their energy efficiency has been increasing substantially in past decades, still additional energy efficiency potentials exist. Energy policy that aims at realizing these potentials faces, however, growing concerns about possible adverse effects on commodity prices. Here, we address these concerns by applying the experience curve approach to analyze long-term price and energy efficiency trends of three wet appliances (washing machines, laundry dryers, and dishwashers) and two cold appliances (refrigerators and freezers). We identify a robust long-term decline in both specific price and specific energy consumption of large appliances. Specific prices of wet appliances decline at learning rates (LR) of 29±8% and thereby much faster than those of cold appliances (LR of 9±4%). Our results demonstrate that technological learning leads to substantial price decline, thus indicating that the introduction of novel and initially expensive energy efficiency technologies does not necessarily imply adverse price effects in the long term. By extending the conventional experience curve approach, we find a steady decline in the specific energy consumption of wet appliances (LR of 20-35%) and cold appliances (LR of 13-17%). Our analysis suggests that energy policy might be able to bend down energy experience curves. (author)
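
    The experience-curve arithmetic used throughout this kind of study is a log-log regression: with P = P_0·x^b for cumulative production x, the learning rate is LR = 1 − 2^b, the fractional price drop per doubling of production. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    # Illustrative experience-curve data: cumulative production (units)
    # and average specific price (EUR per unit of service).
    cum = np.array([1e5, 3e5, 1e6, 3e6, 1e7, 3e7, 1e8])
    price = np.array([950.0, 760.0, 590.0, 470.0, 360.0, 290.0, 225.0])

    # Power-law experience curve P = P0 * x^b, fitted in log2 space;
    # the learning rate is the price drop per doubling: LR = 1 - 2^b.
    b, log2_P0 = np.polyfit(np.log2(cum), np.log2(price), 1)
    print(f"learning rate = {1.0 - 2.0 ** b:.1%}")
    ```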

  8. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
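
    Backward knot elimination can be sketched with SciPy's least-squares spline: start from a dense interior-knot set and repeatedly drop the knot whose removal increases the residual sum of squares least, stopping when the increase exceeds a tolerance. The tolerance rule below is a crude stand-in for the statistical variable selection tests of the paper; data and knot grid are illustrative.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(8)
    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + rng.normal(0.0, 0.1, x.size)

    def sse(knots):
        """Sum of squared residuals of a cubic least-squares spline
        with the given interior knots."""
        s = LSQUnivariateSpline(x, y, knots, k=3)
        return np.sum((s(x) - y) ** 2)

    # Backward elimination over the interior knots
    knots = list(np.linspace(1.0, 9.0, 15))
    tol = 1.05  # allow at most a 5% SSE increase per elimination
    while len(knots) > 1:
        base = sse(knots)
        trials = [sse(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
        i_best = int(np.argmin(trials))
        if trials[i_best] > tol * base:
            break
        knots.pop(i_best)

    print(f"{len(knots)} knots kept:", np.round(knots, 2))
    ```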

  9. Hot Spots Detection of Operating PV Arrays through IR Thermal Image Using Method Based on Curve Fitting of Gray Histogram

    Directory of Open Access Journals (Sweden)

    Jiang Lin

    2016-01-01

    The overall efficiency of PV arrays is affected by hot spots, which should be detected and diagnosed by applying suitable monitoring techniques. Detecting hot spots from IR thermal images has been studied as a direct, noncontact, nondestructive technique. However, IR thermal images suffer from relatively high stochastic noise and non-uniform clutter, so conventional image processing methods are not effective. This paper proposes a method to detect hot spots based on curve fitting of the gray histogram. MATLAB simulation results show that the proposed method effectively detects hot spots while suppressing the noise generated during image acquisition.
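
    A minimal version of the histogram-fitting idea: fit a Gaussian to the gray-level histogram of the IR image, take the fitted peak as the background, and flag pixels beyond a few fitted sigmas as hot-spot candidates. The image, the bin count and the 4-sigma rule are all assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(g, A, mu, sig):
        return A * np.exp(-0.5 * ((g - mu) / sig) ** 2)

    # Synthetic 8-bit IR image: mild clutter plus a small hot-spot patch
    rng = np.random.default_rng(9)
    img = rng.normal(90.0, 12.0, (240, 320))
    img[100:110, 150:165] += 80.0                    # hot spot
    img = np.clip(img, 0, 255)

    # Fit a Gaussian to the gray-level histogram; the fitted background
    # peak gives a robust threshold (mu + 4 sigma) for hot-spot pixels.
    counts, edges = np.histogram(img, bins=64, range=(0, 255))
    centers = 0.5 * (edges[:-1] + edges[1:])
    (A, mu, sig), _ = curve_fit(gauss, centers, counts,
                                p0=[counts.max(), 90.0, 15.0])

    thresh = mu + 4.0 * sig
    print(f"threshold = {thresh:.1f}; hot pixels = {(img > thresh).sum()}")
    ```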

  10. Fitted curve parameters for the efficiency of a coaxial HPGe Detector

    International Nuclear Information System (INIS)

    Supian Samat

    1996-01-01

    Using Ngraph software, the parameters of various functions were determined by least-squares fits to the experimental efficiencies ε_f of a coaxial HPGe detector for gamma rays in the energy range 59 keV to 1836 keV. Once these parameters had been determined, their reliability was tested by the calculated goodness-of-fit parameter χ²_cal. It is shown that the function ln ε_f = Σ_{j=0}^{n} a_j (ln(E/E_0))^j, with n = 3, gives satisfactory results.
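
    Because the model is linear in the coefficients a_j once both axes are logged, the fit reduces to an ordinary polynomial regression in ln(E/E_0); the efficiencies below are illustrative, not the calibration data of the paper.

    ```python
    import numpy as np

    # Illustrative full-energy-peak efficiencies of a coaxial HPGe
    # detector (energy in keV); E0 is a reference energy of 1 keV.
    E = np.array([59.5, 122.1, 344.3, 661.7, 1173.2, 1332.5, 1836.1])
    eff = np.array([0.012, 0.025, 0.015, 0.0085, 0.0052, 0.0047, 0.0036])
    E0 = 1.0

    # Fit ln(eff) as a cubic in ln(E/E0), the n = 3 form quoted above.
    coeffs = np.polyfit(np.log(E / E0), np.log(eff), 3)
    eff_fit = np.exp(np.polyval(coeffs, np.log(E / E0)))
    print("relative residuals:", np.round(eff_fit / eff - 1.0, 3))
    ```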

  11. A new model describing the curves for repair of both DNA double-strand breaks and chromosome damage

    International Nuclear Information System (INIS)

    Foray, N.; Badie, C.; Alsbeih, G.; Malaise, E.P.; Fertil, B.

    1996-01-01

    A review of reports dealing with fits to data for the repair of DNA double-strand breaks (DSBs) and excess chromosome fragments (ECFs) shows that several models are used to fit the repair curves. Since DSBs and ECFs are correlated, it is worth developing a model describing both phenomena. The curve-fitting models used most extensively, the two-repair-half-times model for DSBs and the monoexponential-plus-residual model for ECFs, appear to be too inflexible to describe the repair curves for both DSBs and ECFs. We have therefore developed a new concept based on a variable repair half-time. According to this concept, the repair curve is continuously bending and time dependent, and probably reflects a continuous spectrum of damage repairability. The fits of the DSB repair curves to the variable repair half-time and variable repair half-time plus residual models were compared to those obtained with the two half-times plus residual and two half-times models. Similarly, the fits of the ECF repair curves to the variable repair half-time and variable half-time plus residual models were compared to that obtained with the monoexponential plus residual model. The quality of fit and the dependence of the adjustable parameters on the portion of the curve fitted were used as comparison criteria. We found that: (a) it is useful to postulate the existence of a residual term for unrepairable lesions, regardless of the model adopted; (b) for the two cell lines tested (a normal and a hypersensitive one), data for both DSBs and ECFs are best fitted by the variable repair half-time plus residual model, whatever the repair time range. 47 refs., 3 figs., 3 tabs

  12. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    Energy Technology Data Exchange (ETDEWEB)

    Jenkin, Thomas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Larson, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ben [U.S. Department of Energy; Spitsen, Paul [U.S. Department of Energy

    2018-03-27

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions of variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data over given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing the use of or providing
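
    The mechanics of the RSC method, fit one price-versus-load curve per rolling window and then re-price hours after shifting the load, can be sketched as follows. The cubic functional form, the synthetic data and the treatment of new VRE as a load reduction are all illustrative assumptions, not the report's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # One year of illustrative hourly load (MW) and wholesale price data
    hours = np.arange(8760)
    load = 20000 + 6000 * np.sin(2 * np.pi * hours / 24) \
           + rng.normal(0, 800, 8760)
    price = 5e-7 * (load - 12000) ** 2 + rng.normal(0, 3, 8760)

    # Rolling supply curves: fit one cubic price-vs-load curve per
    # two-week window (336 h), then shift the load to mimic, e.g., new
    # VRE generation netting off demand, and re-price every hour.
    window, vre_mw = 336, 1500.0
    new_price = np.empty_like(price)
    for start in range(0, 8760, window):
        sl = slice(start, min(start + window, 8760))
        coef = np.polyfit(load[sl], price[sl], 3)
        new_price[sl] = np.polyval(coef, load[sl] - vre_mw)

    print(f"mean price {price.mean():.1f} -> {new_price.mean():.1f} $/MWh")
    ```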

  13. DEVELOPING AN EXCELLENT SEDIMENT RATING CURVE FROM ONE HYDROLOGICAL YEAR SAMPLING PROGRAMME DATA: APPROACH

    Directory of Open Access Journals (Sweden)

    Preksedis M. Ndomba

    2008-01-01

    This paper presents preliminary findings on the adequacy of one hydrological year of sampling programme data for developing an excellent sediment rating curve. The study case is the 1DD1 subcatchment in the upstream part of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. 1DD1 is the major runoff-sediment contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature, the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily midway and intermittent cross-section sediment samples from a depth-integrating sampler (D-74) were used to calibrate the sub-daily automatic sediment pumping sampler (ISCO 6712) near-bank point samples for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by the findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The results suggest that an excellent rating curve can be developed from one hydrological year of sediment sampling programme data. This study also found that an uncorrected rating curve underestimates the sediment load; the degree of underestimation depends on the type of rating curve developed and the data used.

  14. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions.

    Science.gov (United States)

    López, S; France, J; Odongo, N E; McBride, R A; Kebreab, E; AlZahal, O; McBride, B W; Dijkstra, J

    2015-04-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records corresponding to 122 first, 99 second, and 92 third parity individual lactation curves. The functions were fitted using nonlinear regression procedures, and their performance was assessed using goodness-of-fit statistics (coefficient of determination, residual mean squares, Akaike information criterion, and the correlation and concordance coefficients between observed and adjusted milk yields at several days in milk). Overall, all the growth functions evaluated showed an acceptable fit to the cumulative milk production curves, with the Richards equation ranking first (smallest Akaike information criterion) followed by the Morgan equation. Differences among the functions in their goodness-of-fit were enlarged when fitted to average curves by parity, where the sigmoidal functions with a variable point of inflection (Richards and Morgan) outperformed the other 4 equations. All the functions provided satisfactory predictions of milk yield (calculated from the first derivative of the functions) at different lactation stages, from early to late lactation. The Richards and Morgan equations provided the most accurate estimates of peak yield and total milk production per 305-d lactation, whereas the least accurate estimates were obtained with the logistic equation. In conclusion, classical growth functions (especially sigmoidal functions with a variable point of inflection) proved to be feasible alternatives to fit cumulative milk production curves of dairy cows, resulting in suitable statistical performance and accurate estimates of lactation traits. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
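
    As a worked illustration of this kind of model comparison, the sketch below fits Gompertz and Richards cumulative-yield functions to a synthetic lactation record and ranks them by a Gaussian AIC. The parameterizations, starting values, and data are illustrative assumptions, not the paper's.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, a, b, c):
            """Cumulative Gompertz: a is the asymptotic total yield."""
            return a * np.exp(-b * np.exp(-c * t))

        def richards(t, a, b, c, m):
            """One common Richards form; m gives a variable inflection point.
            The base is clipped to keep the optimizer out of invalid territory."""
            base = 1.0 + (m - 1.0) * np.exp(-b * (t - c))
            return a * np.maximum(base, 1e-9) ** (1.0 / (1.0 - m))

        def aic(y, y_hat, n_params):
            """Gaussian AIC computed from the residual sum of squares."""
            n = len(y)
            rss = np.sum((y - y_hat) ** 2)
            return n * np.log(rss / n) + 2 * n_params

        t = np.arange(1, 306)                         # days in milk
        y = 12000 * np.exp(-2.5 * np.exp(-0.02 * t))  # synthetic cumulative yield, kg
        y = y + np.random.default_rng(0).normal(0, 50, t.size)

        for f, p0 in [(gompertz, (12000, 2, 0.02)),
                      (richards, (12000, 0.02, 60, 0.9))]:
            popt, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
            print(f.__name__, aic(y, f(t, *popt), len(popt)))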

  15. Identifying multiple outliers in linear regression: robust fit and clustering approach

    International Nuclear Information System (INIS)

    Robiah Adnan; Mohd Nor Mohamad; Halim Setan

    2001-01-01

    This research provides a clustering-based approach for determining potential candidates for outliers. It is a modification of the method proposed by Sebert et al. (1998). It is based on using the single linkage clustering algorithm to group the standardized predicted and residual values of a data set fitted by least trimmed squares (LTS). (Author)
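
    A minimal sketch of the described pipeline: a crude random-subset LTS fit, followed by single-linkage clustering of the standardized predicted and residual values. The LTS search and the final two-cluster cut are simplifications of the published method.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(1)

        def lts_fit(X, y, h=None, n_trials=500):
            """Crude least trimmed squares: best random elemental fit,
            scored by the sum of its h smallest squared residuals."""
            n, p = X.shape
            h = h or (n + p + 1) // 2
            A = np.column_stack([np.ones(n), X])
            best, best_score = None, np.inf
            for _ in range(n_trials):
                idx = rng.choice(n, p + 1, replace=False)
                beta, *_ = np.linalg.lstsq(A[idx], y[idx], rcond=None)
                score = np.sort((y - A @ beta) ** 2)[:h].sum()
                if score < best_score:
                    best, best_score = beta, score
            return best

        X = rng.normal(size=(50, 1))
        y = 2 + 3 * X[:, 0] + rng.normal(0, 0.5, 50)
        y[:5] += 8                                  # plant five outliers
        beta = lts_fit(X, y)
        fitted = beta[0] + X[:, 0] * beta[1]
        resid = y - fitted
        Z = np.column_stack([(fitted - fitted.mean()) / fitted.std(),
                             resid / resid.std()])  # standardized (fitted, residual)
        tree = linkage(Z, method="single")          # single-linkage clustering
        labels = fcluster(tree, t=2, criterion="maxclust")  # simplified cut rule
        # Points outside the largest cluster are the outlier candidates.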

  16. Timescale stretch parameterization of Type Ia supernova B-band light curves

    International Nuclear Information System (INIS)

    Goldhaber, G.; Groom, D.E.; Kim, A.; Aldering, G.; Astier, P.; Conley, A.; Deustua, S.E.; Ellis, R.; Fabbro, S.; Fruchter, A.S.; Goobar, A.; Hook, I.; Irwin, M.; Kim, M.; Knop, R.A.; Lidman, C.; McMahon, R.; Nugent, P.E.; Pain, R.; Panagia, N.; Pennypacker, C.R.; Perlmutter, S.; Ruiz-Lapuente, P.; Schaefer, B.; Walton, N.A.; York, T.

    2001-01-01

    R-band intensity measurements along the light curves of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted in brightness to templates allowing a free parameter, the time-axis width factor w ≡ s(1 + z). The data points are then individually aligned along the time axis, normalized and K-corrected back to the rest frame, after which the nearly 1300 normalized intensity measurements are found to lie on a well-determined common rest-frame B-band curve which we call the ''composite curve.'' The same procedure is applied to 18 low-redshift Calan/Tololo SNe with z < 0.11; these nearly 300 B-band photometry points are found to lie on the composite curve equally well. The SCP search technique produces several measurements before maximum light for each supernova. We demonstrate that the linear stretch factor, s, which parameterizes the light-curve timescale, appears independent of z, and applies equally well to the declining and rising parts of the light curve. In fact, the B-band template that best fits this composite curve fits the individual supernova photometry data when stretched by a factor s with chi²/DoF ∼ 1, thus as well as any parameterization can, given the current data sets. The measurement of the date of explosion, however, is model dependent and not tightly constrained by the current data. We also demonstrate the 1 + z light-curve time-axis broadening expected from cosmological expansion. This argues strongly against alternative explanations, such as tired light, for the redshift of distant objects.

  17. FC LSEI WNNLS, Least-Square Fitting Algorithms Using B Splines

    International Nuclear Information System (INIS)

    Hanson, R.J.; Haskell, K.H.

    1989-01-01

    1 - Description of problem or function: FC allows a user to fit dis- crete data, in a weighted least-squares sense, using piece-wise polynomial functions represented by B-Splines on a given set of knots. In addition to the least-squares fitting of the data, equality, inequality, and periodic constraints at a discrete, user-specified set of points can be imposed on the fitted curve or its derivatives. The subprograms LSEI and WNNLS solve the linearly-constrained least-squares problem. LSEI solves the class of problem with general inequality constraints, and, if requested, obtains a covariance matrix of the solution parameters. WNNLS solves the class of problem with non-negativity constraints. It is anticipated that most users will find LSEI suitable for their needs; however, users with inequalities that are single bounds on variables may wish to use WNNLS. 2 - Method of solution: The discrete data are fit by a linear combination of piece-wise polynomial curves which leads to a linear least-squares system of algebraic equations. Additional information is expressed as a discrete set of linear inequality and equality constraints on the fitted curve which leads to a linearly-constrained least-squares system of algebraic equations. The solution of this system is the main computational problem solved
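
    SciPy's LSQUnivariateSpline covers the unconstrained core of what FC does (weighted least-squares B-spline fitting on user-chosen knots); the equality and inequality constraints that LSEI and WNNLS handle would require an additional constrained solver. A sketch:

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 200)
        y = np.sin(x) + rng.normal(0, 0.1, x.size)
        w = np.full(x.size, 10.0)          # relative weights for the LS fit

        knots = np.linspace(1, 9, 7)       # interior knots only
        spl = LSQUnivariateSpline(x, y, knots, w=w, k=3)  # weighted cubic B-spline
        print(spl(5.0), spl.derivative()(5.0))  # fitted value and derivative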

  18. Fitting fatigue test data with a novel S-N curve using frequentist and Bayesian inference

    NARCIS (Netherlands)

    Leonetti, D.; Maljaars, J.; Snijder, H.H.B.

    2017-01-01

    In design against fatigue, a lower bound stress range vs. endurance curve (S-N curve) is employed to characterize fatigue resistance of plain material and structural details. With respect to the inherent variability of the fatigue life, the S-N curve is related to a certain probability of

  19. Study and program implementation of transient curves' piecewise linearization

    International Nuclear Information System (INIS)

    Shi Yang; Zu Hongbiao

    2014-01-01

    Background: Transient curves are essential for the stress analysis of related equipment in a nuclear power plant (NPP). The actual operating data or the design transient data of an NPP usually consist of a large number of data points with very short time intervals. To simplify the analysis, transient curves are generally piecewise linearized in advance. Up to now, the piecewise linearization of transient curves has been accomplished manually. Purpose: The aim is to develop a method for the piecewise linearization of transient curves, and to implement it by programming. Methods: First, the fitting line for a number of data points is obtained by the least squares method. A segment of the fitting line is closed once the accumulated linearization error exceeds the preset limit as the number of points increases. Linearization of subsequent data points then begins from the last point of the preceding curve segment to obtain the next segment in the same way, continuing until the final data point is included. Finally, junction points are averaged to connect the segments. Results: A computer program named PLTC (Piecewise Linearization for Transient Curves) was implemented and verified by the linearization of the standard sine curve and typical transient curves of an NPP. Conclusion: The method and the PLTC program can be well applied to the piecewise linearization of transient curves, improving efficiency and precision. (authors)
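
    A minimal sketch of the described greedy procedure, with one simplification: a maximum-deviation limit stands in for the paper's accumulated-error criterion, and junction averaging is left as a note.

        import numpy as np

        def piecewise_linearize(t, y, tol):
            """Grow each segment point by point, refitting a least-squares
            line, until the worst deviation exceeds tol; then start the next
            segment from the last point of the previous one."""
            segments, start = [], 0
            while start < len(t) - 1:
                end = start + 1
                coef = np.polyfit(t[start:end + 1], y[start:end + 1], 1)
                while end + 1 < len(t):
                    trial = np.polyfit(t[start:end + 2], y[start:end + 2], 1)
                    dev = np.abs(np.polyval(trial, t[start:end + 2])
                                 - y[start:end + 2]).max()
                    if dev > tol:
                        break
                    coef, end = trial, end + 1
                segments.append((start, end, coef))
                start = end
            # Ordinates of adjacent segments at shared junctions can then be
            # averaged, as the abstract describes, to connect the segments.
            return segments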

  20. STRUCTURAL APPROACH TO THE MATHEMATICAL DESCRIPTION AND COMPUTER VISUALIZATION OF PLANE KINEMATIC CURVES FOR THE DISPLAY OF GEARS

    Directory of Open Access Journals (Sweden)

    Tatyana TRETYAK

    2018-05-01

    Full Text Available The structural approach presented in this paper allows the simulation of various plane kinematic curves without their concrete analytic equations. A generalized unified mapping system for rack gearing is used. Examples of plane kinematic curves obtained on a computer by the structural method are presented.

  1. Physical fitness and performance. Cardiorespiratory fitness in girls-change from middle to high school.

    Science.gov (United States)

    Pfeiffer, Karin A; Dowda, Marsha; Dishman, Rod K; Sirard, John R; Pate, Russell R

    2007-12-01

    To determine how factors are related to change in cardiorespiratory fitness (CRF) across time in middle school girls followed through high school. Adolescent girls (N = 274, 59% African American, baseline age = 13.6 +/- 0.6 yr) performed a submaximal fitness test (PWC170) in 8th, 9th, and 12th grades. Height, weight, sports participation, and physical activity were also measured. Moderate-to-vigorous physical activity (MVPA) and vigorous physical activity (VPA) were determined by the number of blocks reported on the 3-Day Physical Activity Recall (3DPAR). Individual differences and developmental change in CRF were assessed simultaneously by calculating individual growth curves for each participant, using growth curve modeling. Both weight-relative and absolute CRF increased from 8th to 9th grade and decreased from 9th to 12th grade. On average, girls lost 0.16 kg·m·min⁻¹·kg⁻¹ per year in weight-relative PWC170 scores (P < 0.05). The growth curve models also examined interactions between CRF, physical activity, race, BMI, and sports participation.

  2. FPGA curved track fitter with very low resource usage

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jin-Yuan; Wang, M.; Gottschalk, E.; Shi, Z.; /Fermilab

    2006-11-01

    The standard least-squares curved-track fitting process is tailored for FPGA implementation. The coefficients in the fitting matrices are carefully chosen so that only shift and accumulation operations are used in the process; divisions and full multiplications are eliminated. Comparison in an application example shows that the fitting errors of the low-resource-usage implementation are less than 4% larger than those of the exact least-squares algorithm. The implementation is suitable for low-cost, low-power applications such as high energy physics detector trigger systems.

  3. Radioligand assays - methods and applications. IV. Uniform regression of hyperbolic and linear radioimmunoassay calibration curves

    Energy Technology Data Exchange (ETDEWEB)

    Keilacker, H; Becker, G; Ziegler, M; Gottschling, H D [Zentralinstitut fuer Diabetes, Karlsburg (German Democratic Republic)

    1980-10-01

    In order to handle all types of radioimmunoassay (RIA) calibration curves obtained in the authors' laboratory in the same way, they tried to find a non-linear expression for their regression which allows calibration curves with different degrees of curvature to be fitted. Considering the two boundary cases of the incubation protocol they derived a hyperbolic inverse regression function: x = a₁y + a₀ + a₋₁y⁻¹, where x is the total concentration of antigen, the aᵢ are constants, and y is the specifically bound radioactivity. An RIA evaluation procedure based on this function is described providing a fitted inverse RIA calibration curve and some statistical quality parameters. The latter are of an order which is normal for RIA systems. There is an excellent agreement between fitted and experimentally obtained calibration curves having a different degree of curvature.
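
    Because the model is linear in its coefficients, the fit reduces to ordinary least squares in the basis (y, 1, 1/y); a sketch:

        import numpy as np

        def fit_hyperbolic_inverse(y, x):
            """Fit x = a1*y + a0 + a_minus1/y by linear least squares;
            y is the bound radioactivity, x the antigen concentration."""
            A = np.column_stack([y, np.ones_like(y), 1.0 / y])
            (a1, a0, am1), *_ = np.linalg.lstsq(A, x, rcond=None)
            return a1, a0, am1

        # An unknown is then read off as x_hat = a1*y + a0 + am1/y.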

  4. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  5. Physical fitness reference standards in fibromyalgia: The al-Ándalus project.

    Science.gov (United States)

    Álvarez-Gallardo, I C; Carbonell-Baeza, A; Segura-Jiménez, V; Soriano-Maldonado, A; Intemann, T; Aparicio, V A; Estévez-López, F; Camiletti-Moirón, D; Herrador-Colmenero, M; Ruiz, J R; Delgado-Fernández, M; Ortega, F B

    2017-11-01

    We aimed (1) to report age-specific physical fitness levels in people with fibromyalgia of a representative sample from Andalusia; and (2) to compare the fitness levels of people with fibromyalgia with non-fibromyalgia controls. This cross-sectional study included 468 (21 men) patients with fibromyalgia and 360 (55 men) controls. The fibromyalgia sample was geographically representative of southern Spain. Physical fitness was assessed with the Senior Fitness Test battery plus the handgrip test. We applied the Generalized Additive Model for Location, Scale and Shape to calculate percentile curves for women and fitted mean curves using a linear regression for men. Our results show that people with fibromyalgia performed worse in all fitness tests than controls (P < 0.05). This study provides age-specific physical fitness levels among patients with fibromyalgia and controls in a large sample of patients with fibromyalgia from southern Spain. Physical fitness levels of people with fibromyalgia from Andalusia are very low in comparison with age-matched healthy controls. This information could be useful to correctly interpret physical fitness assessments and help health care providers to identify individuals at risk of losing physical independence. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. THE CARNEGIE SUPERNOVA PROJECT: LIGHT-CURVE FITTING WITH SNooPy

    International Nuclear Information System (INIS)

    Burns, Christopher R.; Persson, S. E.; Madore, Barry F.; Freedman, Wendy L.; Stritzinger, Maximilian; Phillips, M. M.; Boldt, Luis; Campillay, Abdo; Folatelli, Gaston; Gonzalez, Sergio; Krzeminski, Wojtek; Morrell, Nidia; Salgado, Francisco; Kattner, ShiAnne; Contreras, Carlos; Suntzeff, Nicholas B.

    2011-01-01

    In providing an independent measure of the expansion history of the universe, the Carnegie Supernova Project (CSP) has observed 71 high-z Type Ia supernovae (SNe Ia) in the near-infrared bands Y and J. These can be used to construct rest-frame i-band light curves which, when compared to a low-z sample, yield distance moduli that are less sensitive to extinction and/or decline-rate corrections than in the optical. However, working with NIR observed and i-band rest-frame photometry presents unique challenges and has necessitated the development of a new set of observational tools in order to reduce and analyze both the low-z and high-z CSP sample. We present in this paper the methods used to generate uBVgriYJH light-curve templates based on a sample of 24 high-quality low-z CSP SNe. We also present two methods for determining the distances to the hosts of SN Ia events. A larger sample of 30 low-z SNe Ia in the Hubble flow is used to calibrate these methods. We then apply the method and derive distances to seven galaxies that are so nearby that their motions are not dominated by the Hubble flow.

  7. Highly curved image sensors: a practical approach for improved optical performance.

    Science.gov (United States)

    Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff

    2017-06-12

    The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation, and create a high-stress, stretch-dominated state. Our process creates a bridge between the high precision and low-cost but planar CMOS process, and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state of the art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.

  8. Nonlinear fitness-space-structure adaptation and principal component analysis in genetic algorithms: an application to x-ray reflectivity analysis

    International Nuclear Information System (INIS)

    Tiilikainen, J; Tilli, J-M; Bosund, V; Mattila, M; Hakkarainen, T; Airaksinen, V-M; Lipsanen, H

    2007-01-01

    Two novel genetic algorithms implementing principal component analysis and an adaptive nonlinear fitness-space-structure technique are presented and compared with conventional algorithms in x-ray reflectivity analysis. Principal component analysis based on Hessian or interparameter covariance matrices is used to rotate a coordinate frame. The nonlinear adaptation applies nonlinear estimates to reshape the probability distribution of the trial parameters. The simulated x-ray reflectivity of a realistic model of a periodic nanolaminate structure was used as a test case for the fitting algorithms. The novel methods had significantly faster convergence and less stagnation than conventional non-adaptive genetic algorithms. The covariance approach needs no additional curve calculations compared with conventional methods, and it had better convergence properties than the computationally expensive Hessian approach. These new algorithms can also be applied to other fitting problems where tight interparameter dependence is present

  9. Accuracy of progress ratios determined from experience curves: the case of photovoltaic technology development

    OpenAIRE

    van Sark, W.G.J.H.M.; Alsema, E.A.; Junginger, H.M.; de Moor, H.H.C.; Schaeffer, G.J.

    2008-01-01

    Learning curves are extensively used in policy and scenario studies. Progress ratios (PRs) are derived from historical data and are used for forecasting cost development of many technologies, including photovoltaics (PV). Forecasts are highly sensitive to uncertainties in the PR. A PR usually is determined together with the coefficient of determination R2, which should approach unity for a good fit of the available data. Although the R2 is instructive, we recommend using the error in the PR d...
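
    A progress ratio is conventionally recovered from a log-log fit of unit cost against cumulative production; the sketch below returns the PR together with the R2 discussed above. Base-2 logarithms make the per-doubling interpretation explicit.

        import numpy as np

        def progress_ratio(cum_production, unit_cost):
            """Experience curve cost = c0 * cum**b; PR = 2**b is the cost
            multiplier per doubling of cumulative production."""
            lx, ly = np.log2(cum_production), np.log2(unit_cost)
            b, c = np.polyfit(lx, ly, 1)
            resid = ly - (b * lx + c)
            r2 = 1.0 - resid.var() / ly.var()   # coefficient of determination
            return 2.0 ** b, r2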

  10. Perspectives for development friendly financial markets - No one-size fits all approach!

    DEFF Research Database (Denmark)

    Schmidt, Johannes Dragsbæk

    The paper argues against the usual "one size fits all" approach of the IFIs, that all economies must follow the same financial policy. It is necessary to take a contextual and historical approach in order to enable more consideration for different local political, economic... and cultural circumstances. It was furthermore noted that the current deep crisis of the IFIs is associated with both a lack of legitimacy and a loss of liquidity. Following this, the Bretton Woods institutions must either be reformed or abolished...

  11. DEM4-26, Least Square Fit for IBM PC by Deming Method

    International Nuclear Information System (INIS)

    Rinard, P.M.; Bosler, G.E.

    1989-01-01

    1 - Description of program or function: DEM4-26 is a generalized least square fitting program based on Deming's method. Functions built into the program for fitting include linear, quadratic, cubic, power, Howard's, exponential, and Gaussian; others can easily be added. The program has the following capabilities: (1) entry, editing, and saving of data; (2) fitting of any of the built-in functions or of a user-supplied function; (3) plotting the data and fitted function on the display screen, with error limits if requested, and with the option of copying the plot to the printer; (4) interpolation of x or y values from the fitted curve with error estimates based on error limits selected by the user; and (5) plotting the residuals between the y data values and the fitted curve, with the option copying the plot to the printer. 2 - Method of solution: Deming's method
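
    For the linear case, Deming's method has a closed form; the sketch below assumes the error-variance ratio delta is known (delta = 1 gives orthogonal regression). It illustrates the method in general, not DEM4-26's own interface.

        import numpy as np

        def deming_fit(x, y, delta=1.0):
            """Deming regression: least squares allowing errors in both x and y;
            delta is the ratio of y-error variance to x-error variance."""
            mx, my = x.mean(), y.mean()
            sxx = np.sum((x - mx) ** 2)
            syy = np.sum((y - my) ** 2)
            sxy = np.sum((x - mx) * (y - my))
            slope = ((syy - delta * sxx
                      + np.sqrt((syy - delta * sxx) ** 2
                                + 4 * delta * sxy ** 2))
                     / (2 * sxy))
            return slope, my - slope * mx      # slope and intercept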

  12. Graphical approach to assess the soil fertility evaluation model validity for rice (case study: southern area of Merapi Mountain, Indonesia)

    Science.gov (United States)

    Julianto, E. A.; Suntoro, W. A.; Dewi, W. S.; Partoyo

    2018-03-01

    Climate change has been reported to exacerbate land resource degradation, including soil fertility decline. Appropriate validation of soil fertility evaluation models could reduce the risk of climate change effects on plant cultivation. This study aims to assess the validity of soil fertility evaluation models using a graphical approach. The models evaluated were the Indonesian Soil Research Center (PPT) version model, the FAO Unesco version model, and the Kyuma version model. Each model was then correlated with rice production (dry grain weight/GKP). The goodness of fit of each model, together with the regression coefficient (R2), can be used to evaluate the quality and validity of the model. This research used the Eviews 9 programme with a graphical approach. The analysis produces three curves: actual, fitted, and residual. If the actual and fitted curves are far apart or irregular, the quality of the model is poor, or many relevant factors are still missing from the model (large residuals); conversely, if the actual and fitted curves show exactly the same shape, all factors have already been included in the model. Modification of the standard soil fertility evaluation models can improve the quality and validity of a model.

  13. SITE INDEX CURVES AND HYPSOMETRIC RELATIONSHIP FOR Eucalyptus grandis PLANTATIONS FOR THE CAMPOS GERAIS REGION, PARANA STATE

    Directory of Open Access Journals (Sweden)

    Fabiane Aparecida de Souza Retslaff

    2015-06-01

    Full Text Available The study aimed to fit mathematical models for the construction of Site Index curves and to estimate heights at different ages for Eucalyptus grandis in the Campos Gerais region, Parana State. The data used to fit the models came from permanent and temporary plots and pre-harvest inventories, covering ages from 2.5 to 26.5 years. Several models were tested to represent the sites and the hypsometric relationship. The Site Index curves were constructed by the guide-curve method. For the Site Index, the Chapman-Richards model showed the best fit and precision statistics, generating 5 Site Index curves (range of 5 m) with the Chapman-Richards model. The four hypsometric models tested showed satisfactory performance and similar statistics, and the inclusion of the variables dominant height or site index did not substantially improve the goodness-of-fit statistics, but the residuals were more homogeneous and closer to zero.
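
    A sketch of the guide-curve construction with the Chapman-Richards model; the age-height pairs and the index age of 15 years are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def chapman_richards(t, a, b, c):
            """Dominant height H(t) = a * (1 - exp(-b*t))**c."""
            return a * (1.0 - np.exp(-b * t)) ** c

        age = np.array([2.5, 5, 8, 12, 16, 20, 26.5])       # years
        hdom = np.array([8, 15, 22, 28, 31, 33, 35])        # illustrative heights, m
        p, _ = curve_fit(chapman_richards, age, hdom, p0=(40, 0.1, 1.2))

        def site_index_curve(t, si, index_age=15.0):
            """Guide-curve method: scale the fitted guide curve so it passes
            through site index si at the chosen index age."""
            return si * chapman_richards(t, *p) / chapman_richards(index_age, *p)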

  14. Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.

    Science.gov (United States)

    Roman, A; Ahmed, K; Challacombe, B

    2016-05-01

    Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, this approach is technically demanding and, to date, there are limited data on the nature and progression of its learning curve. The aims were to analyse the impact of case mix on the RPN learning curve and to model the learning curve. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R² values were calculated for each group. Of 100 patients (F: 28, M: 72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. There is no single well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  15. Meta-analysis of single-arm survival studies: a distribution-free approach for estimating summary survival curves with random effects.

    Science.gov (United States)

    Combescure, Christophe; Foucher, Yohann; Jackson, Daniel

    2014-07-10

    In epidemiologic studies and clinical trials with time-dependent outcome (for instance death or disease progression), survival curves are used to describe the risk of the event over time. In meta-analyses of studies reporting a survival curve, the most informative finding is a summary survival curve. In this paper, we propose a method to obtain a distribution-free summary survival curve by expanding the product-limit estimator of survival for aggregated survival data. The extension of DerSimonian and Laird's methodology for multiple outcomes is applied to account for the between-study heterogeneity. The I² and H² statistics are used to quantify the impact of the heterogeneity in the published survival curves. A statistical test for between-strata comparison is proposed, with the aim to explore study-level factors potentially associated with survival. The performance of the proposed approach is evaluated in a simulation study. Our approach is also applied to synthesize the survival of untreated patients with hepatocellular carcinoma from aggregate data of 27 studies and synthesize the graft survival of kidney transplant recipients from individual data from six hospitals. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Fitness analysis method for magnesium in drinking water with atomic absorption using quadratic curve calibration

    Directory of Open Access Journals (Sweden)

    Esteban Pérez-López

    2014-11-01

    Full Text Available Because of the importance of quantitative chemical analysis in research, quality control, sales of services and other areas of interest, and because some instrumental analysis methods are limited to quantification with a linear calibration curve (sometimes by the analyte's short linear dynamic range, and sometimes by the technique itself), there is a need to investigate the suitability of quadratic curves for analytical quantification, and to demonstrate that they are a valid calculation model for instrumental chemical analysis. As a test case, magnesium was determined in a sample of drinking water from the Tacares sector of northern Grecia by atomic absorption spectroscopy, employing a non-linear calibration curve with specific quadratic behaviour, and the results were compared with those obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for the determination in question, with full confidence, since the concentrations are very similar and, according to the hypothesis tests used, can be considered equal.
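
    The quadratic calibration and its inverse prediction can be stated compactly; the standards below are invented for illustration.

        import numpy as np

        conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])   # mg/L standards (illustrative)
        absorb = np.array([0.002, 0.051, 0.098, 0.182, 0.254, 0.312])

        a, b, c = np.polyfit(conc, absorb, 2)             # A = a*x**2 + b*x + c

        def invert(A):
            """Inverse prediction: solve a*x**2 + b*x + (c - A) = 0 and keep
            the smaller non-negative root, on the rising calibration branch."""
            roots = np.roots([a, b, c - A])
            real = roots[np.isreal(roots)].real
            return real[real >= 0].min()

        print(invert(0.150))   # concentration of an unknown with absorbance 0.150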

  17. Cubic Bezier Curve Approach for Automated Offline Signature Verification with Intrusion Identification

    Directory of Open Access Journals (Sweden)

    Arun Vijayaragavan

    2014-01-01

    Full Text Available Authentication is the process of establishing a person's rights over a system. Many authentication types are used in various systems, among which biometric authentication systems are of special concern. Signature verification is a basic and widely used biometric authentication technique. Conventional signature matching algorithms use image correlation and graph matching techniques, which can produce false rejections or acceptances. We propose a model that compares knowledge extracted from the signature. Intrusion into the signature repository system can yield a copy of the signature, leading to false acceptance. Our approach uses a Bezier curve algorithm to identify the curve points and uses the behaviors of the signature for verification. An analyzing mobile agent is used to identify the input signature parameters and compare them with the reference signature repository. It identifies signatures duplicated through intrusion and rejects them. Experiments are conducted on a database with thousands of signature images from various sources and the results are favorable.
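
    One standard way to extract Bezier curve points from a sampled stroke is a linear least-squares fit of the four control points under chord-length parameterization. The sketch below is that generic construction, not necessarily the authors' exact algorithm.

        import numpy as np

        def fit_cubic_bezier(points):
            """Least-squares cubic Bezier fit to a 2-D stroke (n x 2 array),
            using chord-length parameterization; returns 4 control points."""
            d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0),
                                                  axis=1))]
            t = d / d[-1]                              # parameter in [0, 1]
            B = np.column_stack([(1 - t) ** 3,
                                 3 * t * (1 - t) ** 2,
                                 3 * t ** 2 * (1 - t),
                                 t ** 3])              # Bernstein basis
            ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
            return ctrl                                # 4 x 2 control points

        # Two signatures can then be compared via distances between their
        # control points, segment by segment.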

  18. A variant of the Hubbert curve for world oil production forecasts

    International Nuclear Information System (INIS)

    Maggio, G.; Cacciola, G.

    2009-01-01

    In recent years, the economic and political aspects of energy problems have prompted many researchers and analysts to focus their attention on the Hubbert Peak Theory with the aim of forecasting future trends in world oil production. In this paper, a model that attempts to contribute in this regard is presented; it is based on a variant of the well-known Hubbert curve. In addition, the sum of multiple Hubbert curves (two cycles) is used to provide a better fit for the historical data on oil production (crude oil and natural gas liquids (NGL)). Taking into consideration three possible scenarios for oil reserves, this approach allowed us to forecast when peak oil production, referring to crude oil and NGL, should occur. In particular, by assuming a range of 2250-3000 gigabarrels (Gb) for ultimately recoverable conventional oil, our predictions foresee a peak between 2009 and 2021 at 29.3-32.1 Gb/year.
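
    A two-cycle Hubbert model is a sum of two logistic-derivative pulses fitted simultaneously; the series below is synthetic, standing in for the historical crude-plus-NGL record.

        import numpy as np
        from scipy.optimize import curve_fit

        def hubbert(t, q, b, tm):
            """Logistic-derivative Hubbert cycle: q is the cycle's ultimate
            recovery, b the steepness, tm the peak year; peak rate is q*b/4."""
            e = np.exp(-b * (t - tm))
            return q * b * e / (1.0 + e) ** 2

        def hubbert2(t, q1, b1, tm1, q2, b2, tm2):
            """Two-cycle variant used to better track the historical record."""
            return hubbert(t, q1, b1, tm1) + hubbert(t, q2, b2, tm2)

        years = np.arange(1900, 2009)
        prod = hubbert2(years, 1500, 0.06, 2005, 600, 0.10, 1975)  # synthetic
        p0 = (1000, 0.05, 2000, 500, 0.08, 1980)
        popt, _ = curve_fit(hubbert2, years, prod, p0=p0, maxfev=20000)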

  19. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    NARCIS (Netherlands)

    Vriens, D.; Geus-Oei, L.F. de; Oyen, W.J.G.; Visser, E.P.

    2009-01-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible

  20. Progress in evaluation of human observer visual detection performance using the ROC curve approach

    International Nuclear Information System (INIS)

    Metz, C.E.; Starr, S.J.; Lusted, L.B.; Rossmann, K.

    1976-01-01

    The ROC approach to the analysis of human observer detection performance was studied as playing a key role in elucidating the relationships among the physical parameters of an imaging operation, the ability of a human observer to use the image to make decisions regarding the state of health or disease in a medical diagnostic situation, and the medical and social utility of those decisions. The conventional ROC curve describing observer performance in simple detection tasks can be used to predict observer performance in complex detection tasks. The conventional ROC curve thus provides a description of observer detection performance which is useful in situations more relevant clinically than those for which it is measured. Similar predictions regarding observer performance in identification and recognition tasks are currently being sought. The ROC curve can be used to relate signal detectability to various measures of the diagnostic and social benefit derived from a medical imaging procedure. These relationships provide a means for assessing the relative desirability of alternative diagnostic techniques and can be used to evaluate combinations of diagnostic studies.
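
    The empirical machinery behind an ROC analysis is compact: given rating scores for noise-only and signal cases, sweeping a threshold traces the curve, and the Mann-Whitney statistic gives its area. A sketch (array inputs assumed):

        import numpy as np

        def empirical_roc(scores_neg, scores_pos):
            """Empirical ROC from rating data: sweep a threshold over all
            observed scores, recording (false positive, true positive) rates."""
            thresholds = np.unique(np.r_[scores_neg, scores_pos])[::-1]
            fpr = np.array([(scores_neg >= th).mean() for th in thresholds])
            tpr = np.array([(scores_pos >= th).mean() for th in thresholds])
            return fpr, tpr

        def auc(scores_neg, scores_pos):
            """Area under the ROC curve via the Mann-Whitney statistic: the
            probability that a signal case outscores a noise case."""
            diff = scores_pos[:, None] - scores_neg[None, :]
            return (diff > 0).mean() + 0.5 * (diff == 0).mean()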

  1. Bose-Einstein Condensate Dark Matter Halos Confronted with Galactic Rotation Curves

    Directory of Open Access Journals (Sweden)

    M. Dwornik

    2017-01-01

    Full Text Available We present a comparative confrontation of both the Bose-Einstein Condensate (BEC) and the Navarro-Frenk-White (NFW) dark halo models with galactic rotation curves. We employ 6 High Surface Brightness (HSB), 6 Low Surface Brightness (LSB), and 7 dwarf galaxies with rotation curves falling into two classes. In the first class rotational velocities increase with radius over the observed range. The BEC and NFW models give comparable fits for HSB and LSB galaxies of this type, while for dwarf galaxies the fit is significantly better with the BEC model. In the second class the rotational velocity of HSB and LSB galaxies exhibits long flat plateaus, resulting in a better fit with the NFW model for HSB galaxies and comparable fits for LSB galaxies. We conclude that due to its avoidance of the central density cusp the BEC model fits the dwarf galaxy dark matter distribution better. Nevertheless it suffers from a sharp cutoff in larger galaxies, where the NFW model performs better. The investigated galaxy sample obeys the Tully-Fisher relation, including the particular characteristics exhibited by dwarf galaxies. In both models the fitting enforces a relation between the dark matter parameters: the characteristic density and the corresponding characteristic distance scale with an inverse power.

  2. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    Directory of Open Access Journals (Sweden)

    Shanshan He

    2015-10-01

    Full Text Available Piecewise linear (G01) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithm improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
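
    The core LSPIA update is a single damped residual-feedback step on the control points. The sketch below shows plain LSPIA only; the stretching-energy term, control-point insertion, and chord-error refinement of ELSPIA are omitted, and the crude initialization assumes every control point is covered by some parameter.

        import numpy as np

        def lspia(A, Q, mu=None, n_iter=200):
            """Progressively move control points P so that the spline A @ P
            approaches the data Q in the least-squares sense; A is the
            (n_points x n_ctrl) B-spline collocation matrix."""
            if mu is None:
                # Convergence requires 0 < mu < 2 / lambda_max(A^T A).
                mu = 2.0 / np.linalg.eigvalsh(A.T @ A).max()
            P = A.T @ Q / A.sum(axis=0)[:, None]   # crude initial control points
            for _ in range(n_iter):
                P += mu * A.T @ (Q - A @ P)        # damped residual feedback
            return P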

  3. Can Low-Resolution Airborne Laser Scanning Data Be Used to Model Stream Rating Curves?

    Directory of Open Access Journals (Sweden)

    Steve W. Lyon

    2015-03-01

    Full Text Available This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  4. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    Science.gov (United States)

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  5. Navigation and flight director guidance for the NASA/FAA helicopter MLS curved approach flight test program

    Science.gov (United States)

    Phatak, A. V.; Lee, M. G.

    1985-01-01

    The navigation and flight director guidance systems implemented in the NASA/FAA helicopter microwave landing system (MLS) curved approach flight test program are described. Flight tests were conducted at the U.S. Navy's Crows Landing facility, using the NASA Ames UH-1H helicopter equipped with the V/STOLAND avionics system. The purpose of these tests was to investigate the feasibility of flying complex, curved and descending approaches to a landing using MLS flight director guidance. A description of the navigation aids used, the avionics system, cockpit instrumentation and on-board navigation equipment used for the flight test is provided. Three generic reference flight paths were developed and flown during the test: U-turn, S-turn and straight-in flight profiles. These profiles and their geometries are described in detail. A 3-cue flight director was implemented on the helicopter. A description of the formulation and implementation of the flight director laws is also presented. Performance data and analysis are presented for one pilot conducting the flight director approaches.

  6. Simultaneous fitting of real-time PCR data with efficiency of amplification modeled as Gaussian function of target fluorescence

    Directory of Open Access Journals (Sweden)

    Lazar Andreas

    2008-02-01

    Full Text Available Abstract Background In real-time PCR, it is necessary to consider the efficiency of amplification (EA) of amplicons in order to determine initial target levels properly. EAs can be deduced from standard curves, but these involve extra effort and cost and may yield invalid EAs. Alternatively, EA can be extracted from individual fluorescence curves. Unfortunately, this is not reliable enough. Results Here we introduce simultaneous non-linear fitting to determine – without standard curves – an optimal common EA for all samples of a group. In order to adjust EA as a function of target fluorescence, and still to describe fluorescence as a function of cycle number, we use an iterative algorithm that increases fluorescence cycle by cycle and thus simulates the PCR process. A Gauss peak function is used to model the decrease of EA with increasing amplicon accumulation. Our approach was validated experimentally with hydrolysis probe or SYBR green detection with dilution series of 5 different targets. It performed distinctly better in terms of accuracy than the standard curve, DART-PCR, and LinRegPCR approaches. Based on reliable EAs, it was possible to detect that for some amplicons, extraordinary fluorescence (EA > 2.00) was generated with locked nucleic acid hydrolysis probes, but not with SYBR green. Conclusion In comparison to previously reported approaches that are based on the separate analysis of each curve and on modelling EA as a function of cycle number, our approach yields more accurate and precise estimates of relative initial target levels.
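
    One plausible reading of the simulation loop, with invented parameter names: each cycle multiplies the fluorescence by an EA that follows a Gauss peak in the current fluorescence level. Simultaneous fitting would then tune parameters shared across a group of samples by nonlinear least squares.

        import numpy as np

        def simulate_pcr(f0, ea_max, f_center, width, n_cycles=40):
            """Cycle-by-cycle PCR simulation with amplification efficiency
            modeled as a Gauss peak in the current fluorescence F:
            EA(F) = 1 + (ea_max - 1) * exp(-(F - f_center)**2 / (2*width**2))."""
            F = [f0]
            for _ in range(n_cycles):
                ea = 1.0 + (ea_max - 1.0) * np.exp(-(F[-1] - f_center) ** 2
                                                   / (2.0 * width ** 2))
                F.append(F[-1] * ea)      # each cycle multiplies by EA
            return np.array(F)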

  7. The link between the baryonic mass distribution and the rotation curve shape

    NARCIS (Netherlands)

    Swaters, R. A.; Sancisi, R.; van der Hulst, J. M.; van Albada, T. S.

    The observed rotation curves of disc galaxies, ranging from late-type dwarf galaxies to early-type spirals, can be fitted remarkably well simply by scaling up the contributions of the stellar and H?i discs. This baryonic scaling model can explain the full breadth of observed rotation curves with

  8. A computerized glow curve analysis (GCA) method for WinREMS thermoluminescent dosimeter data using MATLAB

    International Nuclear Information System (INIS)

    Harvey, John A.; Rodrigues, Miesher L.; Kearfott, Kimberlee J.

    2011-01-01

    A computerized glow curve analysis (GCA) program for handling of thermoluminescence data originating from WinREMS is presented. The MATLAB program fits the glow peaks using the first-order kinetics model. Tested materials are LiF:Mg,Ti, CaF2:Dy, CaF2:Tm, CaF2:Mn, LiF:Mg,Cu,P, and CaSO4:Dy, with most having an average figure of merit (FOM) of 1.3% or less, and CaSO4:Dy 2.2% or less. Output is a list of fit parameters, peak areas, and graphs for each fit, evaluating each glow curve in 1.5 s or less. - Highlights: → Robust algorithm for performing thermoluminescent dosimeter glow curve analysis. → Written in MATLAB so readily implemented on a variety of computers. → Usage of figure of merit demonstrated for six different materials.

  9. Spiral Galaxy Central Bulge Tangential Speed of Revolution Curves

    Science.gov (United States)

    Taff, Laurence

    2013-03-01

    The objective was to, for the first time in a century, scientifically analyze the ``rotation curves'' (sic) of the central bulges of scores of spiral galaxies. I commenced with a methodological, rational, geometrical, arithmetic, and statistical examination--none of them carried through before--of the radial velocity data. The requirement for such a thorough treatment is the paucity of data typically available for the central bulge: fewer than 10 observations and frequently only five. The most must be made of these. A consequence of this logical handling is the discovery of a unique model for the central bulge volume mass density resting on the positive slope, linear, rise of its tangential speed of revolution curve and hence--for the first time--a reliable mass estimate. The deduction comes from a known physics-based, mathematically valid, derivation (not assertion). It rests on the full (not partial) equations of motion plus Poisson's equation. Following that is a prediction for the gravitational potential energy and thence the gravitational force. From this comes a forecast for the tangential speed of revolution curve. It was analyzed in a fashion identical to that of the data thereby closing the circle and demonstrating internal self-consistency. This is a hallmark of a scientific method-informed approach to an experimental problem. Multiple plots of the relevant quantities and measures of goodness of fit will be shown.

  10. Optimization of Fit for Mass Customized Apparel Ordering Using Fit Preference and Self Measurement.

    Science.gov (United States)

    2000-01-01

    in significance and definition for both consumers and manufacturers. Fit preference involves an individualized bias toward a particular look, size...

  11. Greenhouse gas abatement cost curves of the residential heating market. A microeconomic approach

    International Nuclear Information System (INIS)

    Dieckhoener, Caroline; Hecking, Harald

    2012-01-01

    In this paper, we develop a microeconomic approach to deduce greenhouse gas abatement cost curves of the residential heating sector. By accounting for household behavior, we find that welfare-based abatement costs are generally higher than pure technical equipment costs. Our results are based on a microsimulation of private households' investment decision for heating systems until 2030. The households' investment behavior in the simulation is derived from a discrete choice estimation which allows investigating the welfare costs of different abatement policies in terms of the compensating variation and the excess burden. We simulate greenhouse gas abatements and welfare costs of carbon taxes and subsidies on heating system investments until 2030 to deduce abatement curves. Given utility maximizing households, our results suggest a carbon tax to be the welfare efficient policy. Assuming behavioral misperceptions instead, a subsidy on investments might have lower marginal greenhouse gas abatement costs than a carbon tax.

  12. Tensor-guided fitting of subduction slab depths

    Science.gov (United States)

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  13. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function C = â(Q/Q_GM)^b, where Q_GM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
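
    The discharge-normalized fit reduces to a single log-log regression after dividing by the geometric mean; a sketch:

        import numpy as np

        def normalized_rating_curve(Q, C):
            """Fit C = a_hat * (Q/Q_GM)**b, where Q_GM is the geometric mean
            of the sampled discharges; a_hat is then a stable offset metric."""
            q_gm = np.exp(np.mean(np.log(Q)))
            b, log_a_hat = np.polyfit(np.log(Q / q_gm), np.log(C), 1)
            return np.exp(log_a_hat), b, q_gm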

  14. Polarization Curve of a Non-Uniformly Aged PEM Fuel Cell

    Directory of Open Access Journals (Sweden)

    Andrei Kulikovsky

    2014-01-01

    Full Text Available We develop a semi-analytical model for the polarization curve of a polymer electrolyte membrane (PEM) fuel cell with transport and kinetic parameters of the membrane–electrode assembly (MEA) distributed (aged) along the oxygen channel. We show that the curve corresponding to a parameter that varies along the channel does not, in general, reduce to the curve for some constant value of this parameter. A possibility of determining the shape of the deteriorated MEA parameter along the oxygen channel by fitting the model equation to cell polarization data is demonstrated.

  15. Quarkonium level fitting with two-power potentials

    International Nuclear Information System (INIS)

    Joshi, G.C.; Wignall, J.W.G.

    1981-01-01

    An attempt has been made to fit psi and upsilon energy levels and leptonic decay width ratios with a non-relativistic potential model using a potential of the form V(r) = Ar^p + Br^q + C. It is found that reasonable fits to states below hadronic decay threshold can be obtained for values of the powers p and q anywhere along a family of curves in the (p,q) plane that smoothly join the Martin potential (p = 0, q = 0.1) to the potential forms with p approximately -1 suggested by QCD; for the latter case the best fit is obtained with q approximately 0.4 - 0.5

  16. Rhizocarpon calibration curve for the Aoraki/Mount Cook area of New Zealand

    Science.gov (United States)

    Lowell, Thomas V.; Schoenenberger, Katherine; Deddens, James A.; Denton, George H.; Smith, Colby; Black, Jessica; Hendy, Chris H.

    2005-05-01

    Development of a Rhizocarpon growth curve for the Aoraki/Mount Cook area of New Zealand provides a means to assess Little Ice Age glacier behaviour and suggests approaches that have wider application. Employing a sampling strategy based on large populations affords the opportunity to assess which of various metrics (e.g. single largest, average of five largest, mean of an entire population) best characterise Rhizocarpon growth patterns. The 98% quantile from each population fitted with a quadratic curve forms a reliable representation of the growth pattern. Since this metric does not depend on the original sample size, comparisons are valid where the sampling strategy must be adapted to local situations or where the original sample size differs. For the Aoraki/Mount Cook area a surface 100 years old will have a 98% quantile lichen diameter of 34.3 mm, whereas a 200-year-old surface will have a lichen diameter of 73.7 mm. In the Southern Alps, constraints from the age range of calibration points, the flattening of the quadratic calibration curve and ecological factors limit the useful age range to approximately 250 years.

  17. Single-centre experience of retroperitoneoscopic approach in urology with tips to overcome the steep learning curve

    Directory of Open Access Journals (Sweden)

    Aneesh Srivastava

    2016-01-01

    Full Text Available Context: The retroperitoneoscopic or retroperitoneal (RP) surgical approach has not become as popular as the transperitoneal (TP) one due to the steeper learning curve. Aims: Our single-institution experience focuses on the feasibility, advantages and complications of retroperitoneoscopic surgeries (RS) performed over the past 10 years. Tips and tricks have been discussed to overcome the steep learning curve and these are emphasised. Settings and Design: This study made a retrospective analysis of computerised hospital data of patients who underwent RP urological procedures from 2003 to 2013 at a tertiary care centre. Patients and Methods: Between 2003 and 2013, 314 cases of RS were performed for various urological procedures. We analysed the operative time, peri-operative complications, time to return of bowel sounds, length of hospital stay, and the advantages and difficulties involved. Post-operative complications were stratified into five grades using the modified Clavien classification (MCC). Results: RS was successfully completed in 95.5% of patients, with 4% of the procedures electively performed by the combined approach (both RP and TP); 3.2% required open conversion and 1.3% were converted to the TP approach. The most common cause for conversion was bleeding. Mean hospital stay was 3.2 ± 1.2 days and the mean time for return of bowel sounds was 16.5 ± 5.4 h. Of the patients, 1.4% required peri-operative blood transfusion. A total of 16 patients (5%) had post-operative complications, and the majority were grades I and II as per MCC. The rates of intra-operative and post-operative complications depended on the difficulty of the procedure, but the complications diminished over the years with the increasing experience of the surgeons. Conclusion: Retroperitoneoscopy has proven an excellent approach, with certain advantages. The tips and tricks that have been provided and emphasised should definitely help to minimise the steep learning curve.

  18. New approach in the evaluation of a fitness program at a worksite.

    Science.gov (United States)

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, "neoclassical firm's problems," as a new approach to such evaluation. The optimal number of physical-exercise classes that constitute the core of the fitness program is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
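
    As a rough sketch of this "neoclassical firm" framing, the code below maximizes profit(n) = WTP · E(n) − c · n over the number of classes n, with an invented cubic production function and invented prices; it illustrates the approach, not the paper's fitted model.

    ```python
    import numpy as np

    def effectiveness(n):
        # Hypothetical cubic health production function E(n).
        return 3.5 * n - 0.05 * n**2 - 0.0005 * n**3

    def profit(n, wtp=800.0, cost_per_class=50.0):
        # Monetized effectiveness minus the cost of running n classes.
        return wtp * effectiveness(n) - cost_per_class * n

    classes = np.arange(1, 51)
    best = classes[np.argmax(profit(classes))]
    print(best, profit(best))  # optimal class count under these invented numbers
    ```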

  19. Trait-fitness relationships determine how trade-off shapes affect species coexistence.

    Science.gov (United States)

    Ehrlich, Elias; Becks, Lutz; Gaedke, Ursula

    2017-12-01

    Trade-offs between functional traits are ubiquitous in nature and can promote species coexistence depending on their shape. Classic theory predicts that convex trade-offs facilitate coexistence of specialized species with extreme trait values (extreme species), while concave trade-offs promote species with intermediate trait values (intermediate species). We show here that this prediction becomes insufficient when the traits translate non-linearly into fitness, which frequently occurs in nature; e.g., an increasing length of spines reduces grazing losses only up to a certain threshold, resulting in a saturating or sigmoid trait-fitness function. We present a novel, general approach to evaluate the effect of different trade-off shapes on species coexistence. We compare the trade-off curve to the invasion boundary of an intermediate species invading the two extreme species. At this boundary, the invasion fitness is zero; thus, it separates trait combinations where invasion is or is not possible. The invasion boundary is calculated from measurable trait-fitness relationships. If at least one of these relationships is not linear, the invasion boundary becomes non-linear, implying that convex and concave trade-offs do not necessarily lead to different coexistence patterns. Therefore, we suggest a new ecological classification of trade-offs into extreme-favoring and intermediate-favoring, which differs from a purely mathematical description of their shape. We apply our approach to a well-established model of an empirical predator-prey system with competing prey types facing a trade-off between edibility and the half-saturation constant for nutrient uptake. We show that the survival of the intermediate prey depends on the convexity of the trade-off. Overall, our approach provides a general tool to make a priori predictions on the outcome of competition among species facing a common trade-off, in dependence on the shape of the trade-off and the shape of the trait-fitness relationships.

  20. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    Science.gov (United States)

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

    Radial artery pulse diagnosis plays an important role in traditional Chinese medicine (TCM). Because it is non-invasive and convenient, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at the patient's wrist and make diagnoses based on subjective personal experience. With advances in pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep pace with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal consisting of a series of sub-components represented by sine and cosine (SC) signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with fitting using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as the auto-regression model and the Gaussian mixture model. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
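
    A minimal sketch of the fitting step, assuming a preprocessed single-period waveform: build a truncated Fourier design matrix and solve for the coefficients by least squares. The synthetic signal, the period and the harmonic count K are all invented for illustration.

    ```python
    import numpy as np

    def fourier_design_matrix(t, period, K):
        # Columns: constant term plus sin/cos pairs for harmonics 1..K.
        cols = [np.ones_like(t)]
        for k in range(1, K + 1):
            w = 2.0 * np.pi * k / period
            cols += [np.sin(w * t), np.cos(w * t)]
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)  # one averaged pulse period (s)
    pulse = np.sin(2*np.pi*t) + 0.3*np.cos(6*np.pi*t) + 0.02*rng.standard_normal(t.size)

    A = fourier_design_matrix(t, period=1.0, K=8)
    coeffs, *_ = np.linalg.lstsq(A, pulse, rcond=None)  # 2K+1 feature values
    print(np.linalg.norm(A @ coeffs - pulse))           # fitting error
    ```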

  1. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for the efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: ²³⁸U, ²³⁰Th, ²³⁹Pu, ²⁴¹Am, and ²⁴⁴Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface.

  2. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

    Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem in conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  3. ASTEROID LIGHT CURVES FROM THE PALOMAR TRANSIENT FACTORY SURVEY: ROTATION PERIODS AND PHASE FUNCTIONS FROM SPARSE PHOTOMETRY

    Energy Technology Data Exchange (ETDEWEB)

    Waszczak, Adam [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States); Chang, Chan-Kao; Cheng, Yu-Chi; Ip, Wing-Huen; Kinoshita, Daisuke [Institute of Astronomy, National Central University, Jhongli, Taiwan (China); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, Rehovot (Israel); Laher, Russ; Surace, Jason [Spitzer Science Center, California Institute of Technology, Pasadena, CA 91125 (United States); Masci, Frank; Helou, George [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States); Levitan, David; Prince, Thomas A.; Kulkarni, Shrinivas, E-mail: waszczak@caltech.edu [Division of Physics, Mathematics and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States)

    2015-09-15

    We fit 54,296 sparsely sampled asteroid light curves in the Palomar Transient Factory survey to a combined rotation plus phase-function model. Each light curve consists of 20 or more observations acquired in a single opposition. Using 805 asteroids in our sample that have reference periods in the literature, we find that the reliability of our fitted periods is a complicated function of the period, amplitude, apparent magnitude, and other light-curve attributes. Using the 805-asteroid ground-truth sample, we train an automated classifier to estimate (along with manual inspection) the validity of the remaining ∼53,000 fitted periods. By this method we find that 9033 of our light curves (of ∼8300 unique asteroids) have “reliable” periods. Subsequent consideration of asteroids with multiple light-curve fits indicates a 4% contamination in these “reliable” periods. For 3902 light curves with sufficient phase-angle coverage and either a reliable fit period or low amplitude, we examine the distribution of several phase-function parameters, none of which are bimodal though all correlate with the bond albedo and with visible-band colors. Comparing the theoretical maximal spin rate of a fluid body with our amplitude versus spin-rate distribution suggests that, if held together only by self-gravity, most asteroids are in general less dense than ∼2 g cm⁻³, while C types have a lower limit of between 1 and 2 g cm⁻³. These results are in agreement with previous density estimates. For 5-20 km diameters, S types rotate faster and have lower amplitudes than C types. If both populations share the same angular momentum, this may indicate the two types’ differing ability to deform under rotational stress. Lastly, we compare our absolute magnitudes (and apparent-magnitude residuals) to those of the Minor Planet Center’s nominal (G = 0.15, rotation-neglecting) model; our phase-function plus Fourier-series fitting reduces the asteroid photometric rms residuals.

  4. On the second kinetic order thermoluminescent glow curves

    International Nuclear Information System (INIS)

    Dang Thanh Luong; Nguyen Hao Quang; Hoang Minh Giang

    1995-01-01

    The kinetic parameters of thermoluminescent materials such as CaF₂-N and CaSO₄-Dy with different grain sizes are investigated in detail using the least-squares method of fitting. It was found that the activation energy E (or trap depth) and the peak temperature T_max change with the elapsed time between irradiation and read-out for the low-temperature glow curve peaks. Similar TL glow curve shapes are obtained for the different CaSO₄-Dy grain sizes. (author). 7 refs., 5 figs., 2 tabs

  5. Beyond the SCS curve number: A new stochastic spatial runoff approach

    Science.gov (United States)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
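
    For reference, the baseline the paper generalizes is the classic SCS-CN event runoff equation, Q = (P − Ia)² / (P − Ia + S) with S = 1000/CN − 10 and Ia = 0.2S (depths in inches). A direct transcription:

    ```python
    def scs_cn_runoff(P, CN, ia_ratio=0.2):
        """Classic SCS-CN event runoff depth (inches) for rainfall P and curve number CN."""
        S = 1000.0 / CN - 10.0   # potential maximum retention
        Ia = ia_ratio * S        # initial abstraction
        if P <= Ia:
            return 0.0
        return (P - Ia) ** 2 / (P - Ia + S)

    print(scs_cn_runoff(P=3.0, CN=75))  # about 0.96 in of runoff
    ```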

  6. The S-curve for forecasting waste generation in construction projects.

    Science.gov (United States)

    Lu, Weisheng; Peng, Yi; Chen, Xi; Skitmore, Martin; Zhang, Xiaoling

    2016-10-01

    Forecasting construction waste generation is the yardstick of any effort by policy-makers, researchers, practitioners and the like to manage construction and demolition (C&D) waste. This paper develops and tests an S-curve model to indicate accumulative waste generation as a project progresses. Using 37,148 disposal records generated from 138 building projects in Hong Kong from January 2011 to June 2015, a wide range of potential S-curve models is examined, and as a result, the formula that best fits the historical data set is found. The S-curve model is then further linked to project characteristics using artificial neural networks (ANNs) so that it can be used to forecast waste generation in future construction projects. It was found that, among the S-curve models, the cumulative logistic distribution is the best formula to fit the historical data. Meanwhile, contract sum, location, public-private nature, and duration can be used to forecast construction waste generation. The study provides contractors with not only an S-curve model to forecast overall waste generation before a project commences, but also with a detailed baseline to benchmark and manage waste during the course of construction. The major contribution of this paper is to the body of knowledge in the field of construction waste generation forecasting. By examining it with an S-curve model, the study elevates construction waste management to a level equivalent to project cost management, where the model has already been readily accepted as a standard tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
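
    A hedged sketch of the headline result: fit a cumulative-logistic S-curve to accumulated waste against project progress with scipy. The data points and parameter guesses are invented; the paper's ANN link from project characteristics to curve parameters is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def s_curve(progress, total, k, midpoint):
        # Cumulative logistic: accumulated waste vs. fraction of duration elapsed.
        return total / (1.0 + np.exp(-k * (progress - midpoint)))

    progress = np.linspace(0.1, 1.0, 10)
    waste = np.array([5., 12., 30., 80., 160., 260., 340., 390., 410., 420.])  # tonnes

    (total, k, midpoint), _ = curve_fit(s_curve, progress, waste, p0=[400.0, 10.0, 0.5])
    print(total, k, midpoint)  # asymptotic waste, steepness, inflection point
    ```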

  7. Dose-response calibration curves of {sup 137}Cs gamma rays for dicentric chromosome aberrations in human lymphocytes

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Wol Soon; Oh, Su Jung; Jeong, Soo Kyun; Yang, Kwang Mo [Dept. of Research center, Dong Nam Institute of Radiological and Medical Sciences, Busan (Korea, Republic of); Jeong, Min Ho [Dept. of Microbiology, Dong A University College of Medicine, Busan (Korea, Republic of)

    2012-11-15

    Recently, the increased threat of radiological industrial accidents, such as those involving radiation nondestructive inspection, and of nuclear accidents caused by natural disasters, such as the Fukushima accident, requires a greater capacity for cytogenetic biodosimetry, which is critical for clinical triage of potentially thousands of radiation-exposed individuals. Dicentric chromosome aberration analysis is the conventional means of assessing radiation exposure. Dose-response calibration curves for ¹³⁷Cs gamma rays have been established for unstable chromosome aberrations in human peripheral blood lymphocytes in many laboratories of the international biodosimetry network. In this study, therefore, we established dose-response calibration curves in our laboratory for ¹³⁷Cs gamma rays according to the IAEA protocols for conducting the dicentric chromosome assay. We established in vitro dose-response calibration curves for dicentric chromosome aberrations in human lymphocytes for ¹³⁷Cs gamma rays in the 0 to 5 Gy range, using the maximum-likelihood linear-quadratic model Y = c + αD + βD². The estimated coefficients of the fitted curves were within the 95% confidence intervals (CIs), and the curve fitting of the dose-effect relationship data indicated a good fit to the linear-quadratic model. Hence, meaningful dose estimates for unknown samples can be determined accurately by using our laboratory's calibration curve according to the standard protocol.
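
    The linear-quadratic fit can be sketched as a Poisson maximum-likelihood problem: dicentric counts at each dose are Poisson with mean per cell c + αD + βD². Everything below (doses, cell counts, dicentric counts, starting values) is invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])      # Gy
    cells = np.array([5000, 4000, 3000, 2000, 1000, 800, 600, 500])  # cells scored
    dicentrics = np.array([5, 37, 70, 142, 241, 409, 529, 676])      # dicentrics seen

    def neg_log_lik(params):
        c, a, b = params
        lam = cells * (c + a * doses + b * doses**2)  # expected dicentrics per dose point
        if np.any(lam <= 0):
            return np.inf
        return float(np.sum(lam - dicentrics * np.log(lam)))

    res = minimize(neg_log_lik, x0=[0.001, 0.02, 0.05], method="Nelder-Mead")
    print(res.x)  # fitted c, alpha, beta
    ```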

  8. Learning curves in highly skilled chess players: a test of the generality of the power law of practice.

    Science.gov (United States)

    Howard, Robert W

    2014-09-01

    The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to the development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit the grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared, and a power function best fit the group curve for the more talented players, while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit the grouped data, but a quadratic function best fit most individual curves. Individual variability is great, and neither the power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
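
    The model-comparison step is easy to mimic: fit power, logarithmic and exponential curves to one player's rating-versus-practice series and compare residual sums of squares. The synthetic "rating" data and starting values below are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    games = np.arange(1, 301, dtype=float)
    rng = np.random.default_rng(1)
    rating = 1500 + 60 * np.log(games) + 15 * rng.standard_normal(games.size)

    models = [
        ("power", lambda x, a, b, c: a * x**b + c,         [50.0, 0.3, 1400.0]),
        ("log",   lambda x, a, b:    a * np.log(x) + b,    [60.0, 1500.0]),
        ("exp",   lambda x, a, b, c: c - a * np.exp(-b*x), [300.0, 0.05, 1900.0]),
    ]
    for name, f, p0 in models:
        p, _ = curve_fit(f, games, rating, p0=p0, maxfev=10000)
        sse = np.sum((f(games, *p) - rating) ** 2)
        print(name, sse)  # lower SSE = better description of this series
    ```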

  9. Evaluation of J-R curve testing of nuclear piping materials using the direct current potential drop technique

    International Nuclear Information System (INIS)

    Hackett, E.M.; Kirk, M.T.; Hays, R.A.

    1986-08-01

    A method is described for developing J-R curves for nuclear piping materials using the DC Potential Drop (DCPD) technique. Experimental calibration curves were developed for both three-point bend and compact specimen geometries using ASTM A106 steel, a type 304 stainless steel and a high-strength aluminum alloy. These curves were fit with a power-law expression over the range of crack extension encountered during J-R curve tests (0.6 a/W to 0.8 a/W). The calibration curves were insensitive to both material and sidegrooving and depended solely on specimen geometry and lead attachment points. Crack initiation in J-R curve tests using DCPD was determined by a deviation from a linear region on a plot of COD vs. DCPD. The validity of this criterion for ASTM A106 steel was determined by a series of multispecimen tests that bracketed the initiation region. A statistical differential slope procedure for determination of the crack initiation point is presented and discussed. J-R curve tests were performed on ASTM A106 steel and type 304 stainless steel using both the elastic compliance and DCPD techniques to assess R-curve comparability. J-R curves determined using the two approaches were found to be in good agreement for ASTM A106 steel. The applicability of the DCPD technique to type 304 stainless steel and to high-rate loading of ferromagnetic materials is discussed. 15 refs., 33 figs

  10. Correlation between 2D and 3D flow curve modelling of DP steels using a microstructure-based RVE approach

    International Nuclear Information System (INIS)

    Ramazani, A.; Mukherjee, K.; Quade, H.; Prahl, U.; Bleck, W.

    2013-01-01

    A microstructure-based approach by means of representative volume elements (RVEs) is employed to evaluate the flow curve of DP steels using virtual tensile tests. Microstructures with different martensite fractions and morphologies are studied in two- and three-dimensional approaches. Micro sections of DP microstructures with various amounts of martensite have been converted to 2D RVEs, while 3D RVEs were constructed statistically with randomly distributed phases. A dislocation-based model is used to describe the flow curve of each ferrite and martensite phase separately as a function of carbon partitioning and microstructural features. Numerical tensile tests of RVE were carried out using the ABAQUS/Standard code to predict the flow behaviour of DP steels. It is observed that 2D plane strain modelling gives an underpredicted flow curve for DP steels, while the 3D modelling gives a quantitatively reasonable description of flow curve in comparison to the experimental data. In this work, a von Mises stress correlation factor σ_3D/σ_2D has been identified to compare the predicted flow curves of these two dimensionalities, showing a third-order polynomial relation with respect to martensite fraction and a second-order polynomial relation with respect to equivalent plastic strain, respectively. The quantification of this polynomial correlation factor is performed based on laboratory-annealed DP600 chemistry with varying martensite content and it is validated for industrially produced DP qualities with various chemistry, strength level and martensite fraction.

  11. Stage discharge curve for Guillemard Bridge streamflow station based on rating curve method using historical flood event data

    International Nuclear Information System (INIS)

    Ros, F C; Sidek, L M; Desa, M N; Arifin, K; Tosaka, H

    2013-01-01

    The purposes of stage-discharge curves vary from water quality studies to flood modelling studies, projections of climate change scenarios, and so on. As the bed of the river often changes with the annual monsoon seasons, which sometimes bring massive floods, the capacity of the river changes, causing shifting control. This study uses the historical flood event data from 1960 to 2009 to calculate the stage-discharge curve of Guillemard Bridge, located on Sg. Kelantan. Regression analysis was done to check the quality of the data and examine the correlation between the two variables, Q and H. The mean values of the two variables were then adopted to find 'a', the difference between zero gauge height and the level of zero flow, together with K and 'n', to fit the rating curve equation Q = K(H - a)^n and finally plot the stage-discharge rating curve. Regression analysis of the historical flood data indicates that 91 percent of the original uncertainty has been explained by the analysis, with a standard error of 0.085.
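
    A minimal sketch of the fitting step, assuming the standard rating equation Q = K(H − a)^n: scan candidate values of the zero-flow datum a, and for each solve a log-log linear regression for K and n. The stage/discharge pairs are invented.

    ```python
    import numpy as np

    H = np.array([1.2, 1.8, 2.5, 3.4, 4.6, 6.0])       # stage (m), hypothetical
    Q = np.array([15., 60., 160., 380., 820., 1600.])  # discharge (m^3/s), hypothetical

    best = None
    for a in np.linspace(0.0, 0.9 * H.min(), 200):     # candidate zero-flow levels
        x, y = np.log(H - a), np.log(Q)
        n, logK = np.polyfit(x, y, 1)                  # log Q = n log(H - a) + log K
        resid = np.sum((y - (n * x + logK)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, a, np.exp(logK), n)

    _, a, K, n = best
    print(f"Q = {K:.1f} * (H - {a:.2f})**{n:.2f}")
    ```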

  12. Work-to-Family Conflict, Positive Spillover, and Boundary Management: A Person-Environment Fit Approach

    Science.gov (United States)

    Chen, Zheng; Powell, Gary N.; Greenhaus, Jeffrey H.

    2009-01-01

    This study adopted a person-environment fit approach to examine whether greater congruence between employees' preferences for segmenting their work domain from their family domain (i.e., keeping work matters at work) and what their employers' work environment allowed would be associated with lower work-to-family conflict and higher work-to-family…

  13. Fitting the flow curve of a plastically deformed silicon steel for the prediction of magnetic properties

    International Nuclear Information System (INIS)

    Sablik, M.J.; Landgraf, F.J.G.; Magnabosco, R.; Fukuhara, M.; Campos, M.F. de; Machado, R.; Missell, F.P.

    2006-01-01

    We report measurements and modelling of magnetic effects due to plastic deformation in 2.2% Si steel, emphasizing new tensile deformation data. The modelling approach is to take the Ludwik law for the strain-hardening stress and use it to compute the dislocation density, which is then used in the computation of magnetic hysteresis. A nonlinear extrapolation is used across the discontinuous yield region to obtain the value of stress at the yield point that is used in fitting Ludwik's law to the mechanical data. The computed magnetic hysteresis exhibits sharp shearing of the loops at small deformation, in agreement with experimental behavior. Magnetic hysteresis loss is shown to follow a Ludwik-like dependence on the residual strain, but with a smaller Ludwik exponent than applies for the mechanical behavior
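
    A sketch of the two steps named in the abstract, under common textbook forms: fit the Ludwik law σ = σ_y + Kε^n to the plastic flow curve, then convert the strain-hardening stress to a dislocation density with the Taylor relation σ − σ_y = αMGb√ρ. All data and constants below are invented or generic, not the paper's values.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ludwik(eps, sigma_y, K, n):
        return sigma_y + K * eps**n  # flow stress (MPa) vs. true plastic strain

    eps_p = np.array([0.002, 0.005, 0.01, 0.02, 0.05, 0.10])  # hypothetical data
    sigma = np.array([310., 340., 365., 395., 445., 490.])    # MPa, hypothetical

    (sigma_y, K, n), _ = curve_fit(ludwik, eps_p, sigma, p0=[300.0, 600.0, 0.4])

    # Taylor relation with generic constants (alpha, Taylor factor M, shear
    # modulus G in MPa, Burgers vector b in m): rho comes out in m^-2.
    alpha, M, G, b = 0.3, 3.0, 80e3, 2.5e-10
    rho = ((ludwik(eps_p, sigma_y, K, n) - sigma_y) / (alpha * M * G * b)) ** 2
    print(sigma_y, K, n, rho.max())
    ```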

  14. More basic approach to the analysis of multiple specimen R-curves for determination of J_c

    International Nuclear Information System (INIS)

    Carlson, K.W.; Williams, J.A.

    1980-02-01

    Multiple specimen J-R curves were developed for groups of 1T compact specimens with different a/W values and depths of side grooving. The purpose of this investigation was to determine J_c (J at onset of crack extension) for each group. Judicious selection of points on the load versus load-line deflection record at which to unload and heat tint specimens permitted direct observation of the approximate onset of crack extension. It was found that the present recommended procedure for determining J_c from multiple specimen R-curves, which is being considered for standardization, consistently yielded nonconservative J_c values. A more basic approach to analyzing multiple specimen R-curves is presented, applied, and discussed. This analysis determined J_c values that closely corresponded to the actual observed onset of crack extension.

  15. Light extraction block with curved surface

    Science.gov (United States)

    Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.

    2016-03-22

    Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k₁ at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.

  16. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  17. Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications

    Science.gov (United States)

    Wang, K.; Lettenmaier, D. P.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationarity assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
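
    A compact sketch of the nonstationary fit, assuming a linear trend in the GEV location parameter only: minimize the negative log-likelihood with scipy, noting that scipy's shape parameter c is the negative of the climatological ξ. The annual-maximum series is synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genextreme

    rng = np.random.default_rng(7)
    years = np.arange(50)
    annmax = genextreme.rvs(c=-0.1, loc=30 + 0.08 * years, scale=8, random_state=rng)

    def nll(params):
        mu0, mu1, log_scale, xi = params
        mu = mu0 + mu1 * years  # time-varying location
        return -np.sum(genextreme.logpdf(annmax, c=-xi, loc=mu, scale=np.exp(log_scale)))

    res = minimize(nll, x0=[30.0, 0.0, np.log(8.0), 0.1], method="Nelder-Mead")
    mu0, mu1, log_scale, xi = res.x
    print(mu1)  # fitted trend in the location parameter (mm per year)
    ```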

  18. State Authenticity as Fit to Environment: The Implications of Social Identity for Fit, Authenticity, and Self-Segregation.

    Science.gov (United States)

    Schmader, Toni; Sedikides, Constantine

    2017-10-01

    People seek out situations that "fit," but the concept of fit is not well understood. We introduce State Authenticity as Fit to the Environment (SAFE), a conceptual framework for understanding how social identities motivate the situations that people approach or avoid. Drawing from but expanding the authenticity literature, we first outline three types of person-environment fit: self-concept fit, goal fit, and social fit. Each type of fit, we argue, facilitates cognitive fluency, motivational fluency, and social fluency that promote state authenticity and drive approach or avoidance behaviors. Using this model, we assert that contexts subtly signal social identities in ways that implicate each type of fit, eliciting state authenticity for advantaged groups but state inauthenticity for disadvantaged groups. Given that people strive to be authentic, these processes cascade down to self-segregation among social groups, reinforcing social inequalities. We conclude by mapping out directions for research on relevant mechanisms and boundary conditions.

  19. tgcd: An R package for analyzing thermoluminescence glow curves

    Directory of Open Access Journals (Sweden)

    Jun Peng

    2016-01-01

    Full Text Available Thermoluminescence (TL glow curves are widely used in dosimetric studies. Many commercial and free-distributed programs are used to deconvolute TL glow curves. This study introduces an open-source R package tgcd to conduct TL glow curve analysis, such as kinetic parameter estimation, glow peak simulation, and peak shape analysis. TL glow curves can be deconvoluted according to the general-order empirical expression or the semi-analytical expression derived from the one trap-one recombination center (OTOR model based on the Lambert W function by using a modified Levenberg–Marquardt algorithm from which any of the parameters can be constrained or fixed. The package provides an interactive environment to initialize parameters and offers an automated “trial-and-error” protocol to obtain optimal fit results. First-order, second-order, and general-order glow peaks (curves are simulated according to a number of simple kinetic models. The package was developed using a combination of Fortran and R programming languages to improve efficiency and flexibility.

  20. Experimental R-curve behavior in partially stabilized zirconia using moiré interferometry

    International Nuclear Information System (INIS)

    Perry, K.E.; Okada, H.; Atluri, S.N.

    1993-01-01

    Moiré interferometry is employed to study toughening in partially stabilized zirconia (PSZ). Energy-to-fracture as a function of crack growth curves (R-curves) are derived from mode I compliance calculations and from near-tip fitting of the moiré fringes. The effect of the tetragonal-to-monoclinic phase transformation in the zirconia is found by comparing the bulk compliance R-curves to the locally derived moiré R-curve. Localized strain field plots are produced from the moiré data for the PSZ. The observed transformation zone height compares favorably with that predicted by Okada et al. in a companion paper, as does the qualitative nature of the R-curve with predictions by Stump and Budiansky.

  1. Stress strain flow curves for Cu-OFP

    International Nuclear Information System (INIS)

    Sandstroem, Rolf; Hallgren, Josefin

    2009-04-01

    Stress strain curves of oxygen-free copper alloyed with phosphorus (Cu-OFP) have been determined in compression and tension. The compression tests were performed at room temperature for strain rates between 10⁻⁵ and 10⁻³ s⁻¹. The tests in tension covered the temperature range 20 to 175 °C for strain rates between 10⁻⁷ and 5×10⁻³ s⁻¹. The results in compression and tension were close for similar strain rates. A model for stress strain curves has been formulated using basic dislocation mechanisms. The model has been set up in such a way that fitting of parameters to the curves is avoided. By using a fundamental creep model as a basis, a direct relation to creep data has been established. The maximum engineering flow stress in tension is related to the creep stress giving the same strain rate. The model reproduces the measured flow curves as functions of temperature and strain rate in the investigated interval. The model is suitable for use in finite-element computations of structures in Cu-OFP.

  2. Inverse Diffusion Curves Using Shape Optimization.

    Science.gov (United States)

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  3. Maximum safe speed estimation using planar quintic Bezier curve with C2 continuity

    Science.gov (United States)

    Ibrahim, Mohamad Fakharuddin; Misro, Md Yushalify; Ramli, Ahmad; Ali, Jamaludin Md

    2017-08-01

    This paper describes an alternative way of estimating the design speed, the maximum speed at which a vehicle can safely drive on a road, using curvature information from Bezier curves fitted to a map. We tested the method on a route along Tun Sardon Road, Balik Pulau, Penang, Malaysia. We propose using piecewise planar quintic Bezier curves satisfying curvature continuity between joined curves in the process of mapping the road. By finding the derivatives of the quintic Bezier curve, the value of the curvature was calculated and the design speed derived. In this paper, a higher order of Bezier curve is used; a curve of higher degree gives users more freedom to control the shape of the curve than a curve of lower degree.
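
    A sketch of the pipeline, assuming hypothetical control points and the standard side-friction design formula V = sqrt(127·R·(e + f)) with V in km/h and R in m: evaluate the quintic Bezier's first two derivatives, compute the curvature along the segment, and convert the tightest radius to a speed. The superelevation e and friction factor f are assumed values.

    ```python
    import numpy as np
    from math import comb

    def bezier(ctrl, t):
        n = len(ctrl) - 1
        return sum(comb(n, i) * (1-t)**(n-i) * t**i * ctrl[i] for i in range(n+1))

    def derivative_ctrl(ctrl):
        # Control points of the derivative (hodograph) of a Bezier curve.
        n = len(ctrl) - 1
        return [n * (ctrl[i+1] - ctrl[i]) for i in range(n)]

    ctrl = [np.array(p, float) for p in [(0,0), (40,5), (80,40), (120,70), (160,75), (200,60)]]
    d1 = derivative_ctrl(ctrl)
    d2 = derivative_ctrl(d1)

    kappa = []
    for t in np.linspace(0.01, 0.99, 197):
        v1, v2 = bezier(d1, t), bezier(d2, t)
        kappa.append(abs(v1[0]*v2[1] - v1[1]*v2[0]) / np.hypot(*v1)**3)

    R_min = 1.0 / max(kappa)                    # tightest radius of curvature (m)
    V = np.sqrt(127.0 * R_min * (0.06 + 0.14))  # e = 0.06, f = 0.14 (assumed)
    print(R_min, V)
    ```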

  4. Advanced topics in the arithmetic of elliptic curves

    CERN Document Server

    Silverman, Joseph H

    1994-01-01

    In the introduction to the first volume of The Arithmetic of Elliptic Curves (Springer-Verlag, 1986), I observed that "the theory of elliptic curves is rich, varied, and amazingly vast," and as a consequence, "many important topics had to be omitted." I included a brief introduction to ten additional topics as an appendix to the first volume, with the tacit understanding that eventually there might be a second volume containing the details. You are now holding that second volume. Unfortunately, it turned out that even those ten topics would not fit into a single book, so I was forced to make some choices. The following material is covered in this book: I. Elliptic and modular functions for the full modular group. II. Elliptic curves with complex multiplication. III. Elliptic surfaces and specialization theorems. IV. Neron models, Kodaira-Neron classification of special fibers, Tate's algorithm, and Ogg's conductor-discriminant formula. V. Tate's theory of q-curves over p-adic fields. VI. Neron's theory of canonical local height functions.

  5. Spreadsheets, Graphing Calculators and the Line of Best Fit

    Directory of Open Access Journals (Sweden)

    Bernie O'Sullivan

    2003-07-01

    One technique that can now be done, almost mindlessly, is the line of best fit. Both the graphing calculator and the Excel spreadsheet produce models for collected data that appear to be very good fits, but upon closer scrutiny, are revealed to be quite poor. This article will examine one such case. I will couch the paper within the framework of a very good classroom investigation that will help generate students’ understanding of the basic principles of curve fitting and will enable them to produce a very accurate model of collected data by combining the technology of the graphing calculator and the spreadsheet.

  6. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    Science.gov (United States)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures was analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.

  7. Representative Stress-Strain Curve by Spherical Indentation on Elastic-Plastic Materials

    Directory of Open Access Journals (Sweden)

    Chao Chang

    2018-01-01

    The tensile stress-strain curve of metallic materials can be determined from the representative stress-strain curve obtained by spherical indentation. Tabor empirically determined the stress constraint factor (stress CF, ψ) and the strain constraint factor (strain CF, β), but the choice of values for ψ and β is still under discussion. In this study, a new insight into the relationship between the constraint factors of stress and strain is analytically described based on the form of Tabor's equation. Experimental tests were performed to evaluate these constraint factors. From the results, representative stress-strain curves using the proposed strain constraint factor fit the nominal stress-strain curve better than those using Tabor's constraint factors.
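
    For orientation, the commonly quoted textbook forms of Tabor's relations for a spherical indenter (indenter radius R, contact radius a, mean contact pressure p_m) are shown below; reported values of ψ and β vary in the literature, and the paper's refined factors differ from these defaults.

    ```latex
    \sigma_r = \frac{p_m}{\psi}, \qquad \psi \approx 3, \qquad
    \varepsilon_r \approx \beta \, \frac{a}{R}, \qquad \beta \approx 0.2
    ```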

  8. Fluorometric titration approach for calibration of quantity of binding site of purified monoclonal antibody recognizing epitope/hapten nonfluorescent at 340 nm.

    Science.gov (United States)

    Yang, Xiaolan; Hu, Xiaolei; Xu, Bangtian; Wang, Xin; Qin, Jialin; He, Chenxiong; Xie, Yanling; Li, Yuanli; Liu, Lin; Liao, Fei

    2014-06-17

    A fluorometric titration approach was proposed for the calibration of the quantity of a monoclonal antibody (mcAb) via the quenching of the fluorescence of tryptophan residues. It applies to purified mcAbs recognizing tryptophan-deficient epitopes, haptens nonfluorescent at 340 nm under excitation at 280 nm, or fluorescent haptens bearing excitation valleys near 280 nm and excitation peaks near 340 nm that serve as Förster-resonance-energy-transfer (FRET) acceptors of tryptophan. Titration probes were the epitopes/haptens themselves or conjugates of nonfluorescent haptens or tryptophan-deficient epitopes with FRET acceptors of tryptophan. Under excitation at 280 nm, titration curves were recorded as fluorescence specific for the FRET acceptors or for mcAbs at 340 nm. To quantify the binding site of a mcAb, a universal model considering both static and dynamic quenching by either type of probe was proposed for fitting to the titration curve. This fitting was straightforward for fluorescence specific for the FRET acceptors but encountered nonconvergence for fluorescence of mcAbs at 340 nm. As a solution, (a) the maximum of the absolute values of the first-order derivatives of a titration curve as fluorescence at 340 nm was estimated from the best-fit model for a probe level of zero, and (b) the molar quantity of the binding site of the mcAb was estimated via consecutive fitting to the same titration curve by utilizing such a maximum as an approximation of the slope for the linear response of fluorescence at 340 nm to quantities of the mcAb. This fluorometric titration approach proved effective with one mcAb recognizing six-histidine and another recognizing penicillin G.

  9. A new approach to a global fit of the CKM matrix

    Energy Technology Data Exchange (ETDEWEB)

    Hoecker, A.; Lacker, H.; Laplace, S. [Laboratoire de l' Accelerateur Lineaire, 91 - Orsay (France); Le Diberder, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, 75 - Paris (France)

    2001-05-01

    We report on a new approach to a global CKM matrix analysis taking into account most recent experimental and theoretical results. The statistical framework (Rfit) developed in this paper advocates frequentist statistics. Other approaches, such as Bayesian statistics or the 95% CL scan method are also discussed. We emphasize the distinction of a model testing and a model dependent, metrological phase in which the various parameters of the theory are estimated. Measurements and theoretical parameters entering the global fit are thoroughly discussed, in particular with respect to their theoretical uncertainties. Graphical results for confidence levels are drawn in various one and two-dimensional parameter spaces. Numerical results are provided for all relevant CKM parameterizations, the CKM elements and theoretical input parameters. Predictions for branching ratios of rare K and B meson decays are obtained. A simple, predictive SUSY extension of the Standard Model is discussed. (authors)

  10. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    Science.gov (United States)

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors and then denoising them, we define a suitable TV-type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  11. Estimating stock parameters from trawl cpue-at-age series using year-class curves

    NARCIS (Netherlands)

    Cotter, A.J.R.; Mesnil, B.; Piet, G.J.

    2007-01-01

    A year-class curve is a plot of log cpue (catch per unit effort) over age for a single year class of a species (in contrast to the better known catch curve, fitted to multiple year classes at one time). When linear, the intercept and slope estimate the log cpue at age 0 and the average rate of total mortality, respectively.

  12. Learning Curves: Making Quality Online Health Information Available at a Fitness Center

    OpenAIRE

    Dobbins, Montie T.; Tarver, Talicia; Adams, Mararia; Jones, Dixie A.

    2012-01-01

    Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center – Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.

  14. A fit method for the determination of inherent filtration with diagnostic x-ray units

    International Nuclear Information System (INIS)

    Meghzifene, K; Nowotny, R; Aiginger, H

    2006-01-01

    A method for the determination of total inherent filtration for clinical x-ray units using attenuation curves was devised. A model for the calculation of x-ray spectra is used to calculate kerma values which are then adjusted to the experimental data in minimizing the sum of the squared relative differences in kerma using a modified simplex fit process. The model considers tube voltage, voltage ripple, anode angle and additional filters. Fit parameters are the thickness of an additional inherent Al filter and a general normalization factor. Nineteen sets of measurements including attenuation data for three tube voltages and five Al-filter settings each were obtained. Relative differences of experimental and calculated kerma using the data for the additional filter thickness are within a range of -7.6% to 6.4%. Quality curves, i.e. the relationship of additional filtration to HVL, are often used to determine filtration but the results show that standard quality curves do not reflect the variety of conditions encountered in practice. To relate the thickness of the additional filter to the condition of the anode surface, the data fits were also made using tungsten as the filter material. These fits gave an identical fit quality compared to aluminium with a tungsten filter thickness of 2.12-8.21 μm which is within the range of the additional absorbing layers determined for rough anodes

  15. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  17. REDUCED DATA FOR CURVE MODELING – APPLICATIONS IN GRAPHICS, COMPUTER VISION AND PHYSICS

    Directory of Open Access Journals (Sweden)

    Małgorzata Janik

    2013-06-01

    In this paper we consider the problem of modeling curves in Rⁿ via interpolation without a priori specified interpolation knots. We discuss two approaches to estimating the missing knots for non-parametric data (i.e. a collection of points). The first approach (uniform evaluation) is based on a blind guess in which the knots are chosen uniformly. The second approach (cumulative chord parameterization) incorporates the geometry of the distribution of the data points. More precisely, each knot increment is set equal to the Euclidean distance between the corresponding data points q_{i+1} and q_i. The second method partially compensates for the loss of information carried by the reduced data. We also present applications of the above schemes to fitting non-parametric data in computer graphics (light-source motion rendering), in computer vision (image segmentation) and in physics (modeling the trajectories of high-velocity particles). Though the experiments are conducted for points in R² and R³, the entire method is equally applicable in Rⁿ.
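
    Cumulative chord parameterization is short enough to state exactly: knot increments equal chord lengths, so the knots inherit the geometry of the sample spacing. A minimal sketch:

    ```python
    import numpy as np

    def cumulative_chord(points):
        # t_0 = 0 and t_{i+1} - t_i = ||q_{i+1} - q_i||.
        pts = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)  # chord lengths
        return np.concatenate([[0.0], np.cumsum(seg)])

    pts = [(0, 0), (1, 0), (1.5, 0.5), (3, 2)]
    print(cumulative_chord(pts))  # [0.  1.  1.707  3.828] (approximately)
    ```

    Uniform evaluation, by contrast, would simply place the knots with np.linspace(0, 1, len(pts)).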

  18. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    Science.gov (United States)

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  19. Mild angle early onset idiopathic scoliosis children avoid progression under FITS method (Functional Individual Therapy of Scoliosis).

    Science.gov (United States)

    Białek, Marianna

    2015-05-01

    Physiotherapy for stabilization of the idiopathic scoliosis angle in growing children remains controversial. Notably, little data on the effectiveness of physiotherapy in children with Early Onset Idiopathic Scoliosis (EOIS) have been published. The aim of this study was to check the results of FITS physiotherapy in a group of children with EOIS. The charts of the patients archived in a prospectively collected database were retrospectively reviewed. The inclusion criteria were: diagnosis of EOIS based on spine radiography, age below 10 years, both girls and boys, Cobb angle between 11° and 30°, Risser zero, FITS therapy, no other treatment (bracing), and a follow-up of at least 2 years from the initiation of the treatment. The criteria for curve behaviour were as follows: for curve progression, a Cobb angle increase of 6° or more; for curve stabilization, a Cobb angle within 5° of the initial radiograph; and for curve correction, a Cobb angle decrease of 6° or more at the final follow-up radiograph. There were 41 children with EOIS, 36 girls and 5 boys, mean age 7.7±1.3 years (range 4 to 9 years), who started FITS therapy. The curve pattern was single thoracic (5 children), single thoracolumbar (22 children) or double thoracic/thoracolumbar (14 children), giving a total of 55 structural curvatures. The minimum follow-up was 2 years after initiation of the FITS treatment and the maximum 16 years (mean 4.8 years). At follow-up the mean age was 12.5±3.4 years. Out of 41 children, 10 had passed the pubertal growth spurt at the final follow-up and 31 were still immature and continued FITS therapy. Out of 41 children, 27 improved, 13 were stable, and one progressed. Out of 55 structural curves, 32 improved, 22 were stable and one progressed. For the 55 structural curves, the Cobb angle significantly decreased from 18.0°±5.4° at the first assessment to 12.5°±6.3° at the last evaluation. FITS physiotherapy was effective in preventing curve progression in children with EOIS. Final postpubertal follow-up data are needed.

  20. Comparison between two scalar field models using rotation curves of spiral galaxies

    Science.gov (United States)

    Fernández-Hernández, Lizbeth M.; Rodríguez-Meza, Mario A.; Matos, Tonatiuh

    2018-04-01

    Scalar fields have been used as candidates for dark matter in the universe, from axions with masses ∼10⁻⁵ eV down to ultra-light scalar fields. Axions behave as cold dark matter, while for ultra-light scalar fields galaxies are Bose-Einstein condensate drops. The ultra-light scalar field is also called the scalar field dark matter model. In this work we study rotation curves for low surface brightness spiral galaxies using two scalar field models: the Gross-Pitaevskii Bose-Einstein condensate in the Thomas-Fermi approximation and a scalar field solution of the Klein-Gordon equation. We also use the zero disk approximation galaxy model, where photometric data is not considered and only the scalar field dark matter contribution to the rotation curve is taken into account. From the best-fitting analysis of the galaxy catalog we use, we found the range of values of the fitting parameters: the length scale and the central density. The worst fitting results (values of χ²_red much greater than 1, on average) were for the Thomas-Fermi model, i.e., the scalar field dark matter model is better than the Thomas-Fermi approximation model at fitting the rotation curves of the analysed galaxies. To complete our analysis we compute from the fitting parameters the mass of the scalar field models and two astrophysical quantities of interest: the dynamical dark matter mass within 300 pc and the characteristic central surface density of the dark matter models. We found that the value of the central mass within 300 pc is in agreement with previously reported results, namely that this mass is ≈10⁷ M⊙, independent of the dark matter model. On the contrary, the value of the characteristic central surface density does depend on the dark matter model.

  1. Fit for purpose? Introducing a rational priority setting approach into a community care setting.

    Science.gov (United States)

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2016-06-20

    Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre (n=20), post year-1 (n=12) and post year-2 (n=9) of PBMA to examine perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well-suited for this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way-forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making.

  2. Learning curves in energy planning models

    Energy Technology Data Exchange (ETDEWEB)

    Barreto, L; Kypreos, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    This study describes the endogenous representation of investment cost learning curves into the MARKAL energy planning model. A piece-wise representation of the learning curves is implemented using Mixed Integer Programming. The approach is briefly described and some results are presented. (author) 3 figs., 5 refs.

  3. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to d...

  4. Symmetry Properties of Potentiometric Titration Curves.

    Science.gov (United States)

    Macca, Carlo; Bombi, G. Giorgio

    1983-01-01

    Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)

  5. Stenting for curved lesions using a novel curved balloon: Preliminary experimental study.

    Science.gov (United States)

    Tomita, Hideshi; Higaki, Takashi; Kobayashi, Toshiki; Fujii, Takanari; Fujimoto, Kazuto

    2015-08-01

    Stenting may be a compelling approach to dilating curved lesions in congenital heart diseases. However, balloon-expandable stents, which are commonly used for congenital heart diseases, are usually deployed in a straight orientation. In this study, we evaluated the effect of stenting with a novel curved balloon considered to provide better conformability to curved-angled lesions. In vitro experiments: A Palmaz Genesis® stent (Johnson & Johnson, Cordis Co, Bridgewater, NJ, USA) mounted on the Goku® curve (Tokai Medical Co. Nagoya, Japan) was dilated in vitro to directly observe the behavior of the stent and balloon assembly during expansion. Animal experiment: A short Express® Vascular SD (Boston Scientific Co, Marlborough, MA, USA) stent and a long Express® Vascular LD stent (Boston Scientific) mounted on the curved balloon were deployed in the curved vessel of a pig to observe the effect of stenting in vivo. In vitro experiments: Although the stent was dilated in a curved fashion, the stent and balloon assembly also rotated conjointly during expansion of its curved portion. In the primary stenting of the short stent, the stent was dilated with rotation of the curved portion. The excised stent conformed to the curved vessel. As the long stent could not be negotiated across the mid-portion with the balloon in expansion when it started curving, the mid-portion of the stent failed to expand fully. Furthermore, the balloon, which became entangled with the stent strut, could not be retrieved even after complete deflation. This novel curved balloon catheter might be used for implantation of a short stent in a curved lesion; however, it should not be used for primary stenting of a long stent. Post-dilation to conform the stent to the angled vessel would be safer than primary stenting irrespective of stent length. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  6. SU-F-I-63: Relaxation Times of Lipid Resonances in NAFLD Animal Model Using Enhanced Curve Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Song, K-H; Yoo, C-H; Lim, S-I; Choe, B-Y [Department of Biomedical Engineering, and Research Institute of Biomedical Engineering, The Catholic University of Korea College of Medicine, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: The objective of this study is to evaluate the relaxation time of the methylene resonance in comparison with other lipid resonances. Methods: The examinations were performed on a 3.0T MRI scanner using a four-channel animal coil. Eight more Sprague-Dawley rats in the same baseline weight range were housed with ad libitum access to water and a high-fat (HF) diet (60% fat, 20% protein, and 20% carbohydrate). In order to avoid large blood vessels, a voxel (0.8×0.8×0.8 cm{sup 3}) was placed in a homogeneous area of the liver parenchyma during free breathing. Lipid relaxations in normal-chow (NC) and HF diet rats were estimated at a fixed repetition time (TR) of 6000 msec and multiple echo times (TEs) of 40–220 msec. All spectra were processed using the Advanced Method for Accurate, Robust, and Efficient Spectral (AMARES) fitting algorithm of the Java-based Magnetic Resonance User Interface (jMRUI) package. Results: The mean T2 relaxation time of the methylene resonance in the NC diet was 37.1 msec (M{sub 0}, 2.9±0.5), with a standard deviation of 4.3 msec. The mean T2 relaxation time of the methylene resonance in the HF diet was 31.4 msec (M{sub 0}, 3.7±0.3), with a standard deviation of 1.8 msec. The T2 relaxation times of methylene protons were higher in NC diet rats than in HF rats (p<0.05), and the extrapolated M{sub 0} values were higher in HF rats than in NC rats (p<0.005). The excellent linear fits with R{sup 2}>0.9971 and R{sup 2}>0.9987 indicate T2 relaxation decay curves with a mono-exponential function. Conclusion: In vivo, a sufficient spectral resolution and a sufficiently high signal-to-noise ratio (SNR) can be achieved, so that the data measured over short TE values can be extrapolated back to TE = 0 to produce better estimates of the relative weights of the spectral components. In the short term, treating the effective decay rate as exponential is an adequate approximation.
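
    The extrapolation described in the conclusion amounts to fitting a mono-exponential decay S(TE) = M0·exp(-TE/T2) to the multi-echo data and reading off M0 at TE = 0. A minimal sketch with invented echo data (the AMARES/jMRUI tooling used in the study is not reproduced here; a plain least-squares fit stands in):

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(te, m0, t2):
            return m0 * np.exp(-te / t2)  # mono-exponential T2 model

        te = np.linspace(40.0, 220.0, 10)  # echo times, msec
        sig = decay(te, 3.7, 31.4) + np.random.normal(0.0, 0.02, te.size)

        (m0, t2), cov = curve_fit(decay, te, sig, p0=(1.0, 50.0))
        print(f"M0 = {m0:.2f} (extrapolated to TE = 0), T2 = {t2:.1f} msec")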

  7. Fitness of the analysis method of magnesium in drinking water using atomic absorption with quadratic calibration curve

    International Nuclear Information System (INIS)

    Perez-Lopez, Esteban

    2014-01-01

    Quantitative chemical analysis is important in research, as well as in areas such as quality control and the sale of analytical services. Some instrumental analysis methods for quantification with a linear calibration curve have shown limitations, because of the short linear dynamic range of the analyte or, sometimes, limitations of the technique itself. This motivated a closer investigation of the convenience of using quadratic calibration curves for analytical quantification, with the aim of demonstrating that they are a valid calculation model for chemical analysis instruments. The base method is the technique of atomic absorption spectroscopy, in particular a determination of magnesium in a drinking water sample from the Tacares sector, north of Grecia. A nonlinear calibration curve was used, specifically a curve with quadratic behavior, and it was compared with the test results obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for the determination in question, since the concentrations were very similar and, according to the hypothesis tests used, can be considered equal. (author) [es
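
    The idea behind the quadratic calibration can be sketched as follows (illustrative standards, not the paper's data): fit absorbance A = a·c² + b·c + d to the standards, then invert the quadratic to recover the concentration of an unknown sample.

        import numpy as np

        conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])  # standards, mg/L
        absb = np.array([0.002, 0.105, 0.198, 0.370, 0.650])

        a, b, d = np.polyfit(conc, absb, deg=2)  # quadratic calibration curve

        def concentration(a_meas):
            """Invert A = a c^2 + b c + d, keeping the root on the calibrated branch."""
            roots = np.roots([a, b, d - a_meas]).real
            return roots[(roots >= 0) & (roots <= conc.max())][0]

        print(f"{concentration(0.30):.2f} mg/L for a measured absorbance of 0.30")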

  8. Generalized drying curves in conductive/convective paper drying

    Directory of Open Access Journals (Sweden)

    O.C. Motta Lima

    2000-12-01

    Full Text Available This work presents a study of the conductive/convective drying of paper (cellulose) sheets over heated surfaces, under natural and forced air conditions. The experimental apparatus consists of a metallic box heated by a thermostatic bath, containing an upper surface on which the paper samples (about 1 mm thick) are placed. The system is submitted to ambient air under two different conditions: natural convection and forced convection provided by an adjustable blower. The influence of the initial paper moisture content, the drying (heated) surface temperature and the air velocity on the behavior of the drying curves is observed under different drying conditions. This influence is then studied through the proposal of generalized drying curves. These curves are analyzed individually for each air condition described above and for both together. A set of equations to fit them is proposed and discussed.

  9. An assessment of mode-coupling and falling-friction mechanisms in railway curve squeal through a simplified approach

    Science.gov (United States)

    Ding, Bo; Squicciarini, Giacomo; Thompson, David; Corradi, Roberto

    2018-06-01

    Curve squeal is one of the most annoying types of noise caused by the railway system. It usually occurs when a train or tram is running around tight curves. Although this phenomenon has been studied for many years, the generation mechanism is still the subject of controversy and not fully understood. A negative slope in the friction curve under full sliding has been considered to be the main cause of curve squeal for a long time but more recently mode coupling has been demonstrated to be another possible explanation. Mode coupling relies on the inclusion of both the lateral and vertical dynamics at the contact and an exchange of energy occurs between the normal and the axial directions. The purpose of this paper is to assess the role of the mode-coupling and falling-friction mechanisms in curve squeal through the use of a simple approach based on practical parameter values representative of an actual situation. A tramway wheel is adopted to study the effect of the adhesion coefficient, the lateral contact position, the contact angle and the damping ratio. Cases corresponding to both inner and outer wheels in the curve are considered and it is shown that there are situations in which both wheels can squeal due to mode coupling. Additionally, a negative slope is introduced in the friction curve while keeping active the vertical dynamics in order to analyse both mechanisms together. It is shown that, in the presence of mode coupling, the squealing frequency can differ from the natural frequency of either of the coupled wheel modes. Moreover, a phase difference between wheel vibration in the vertical and lateral directions is observed as a characteristic of mode coupling. For both these features a qualitative comparison is shown with field measurements which show the same behaviour.

  10. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

    Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
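
    The essence of the alternative analysis can be sketched with synthetic data (a Gaussian bump stands in for the latent-heat arrest; the paper's DCP-detection details are not reproduced): compute the first derivative once, then inspect it against temperature instead of time.

        import numpy as np

        t = np.linspace(0.0, 600.0, 601)  # time, s
        temp = 700.0 - 0.5 * t + 8.0 * np.exp(-(((t - 200.0) / 30.0) ** 2))  # synthetic arrest

        dTdt = np.gradient(temp, t)  # first derivative of the cooling curve

        # Traditional method: inspect (t, dTdt); alternative: inspect (temp, dTdt).
        # Characteristic points show up as features of the derivative in either view.
        for T, d in zip(temp[::150], dTdt[::150]):
            print(f"T = {T:6.1f} C   dT/dt = {d:7.3f} C/s")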

  11. Learning curves in health professions education.

    Science.gov (United States)

    Pusic, Martin V; Boutis, Kathy; Hatala, Rose; Cook, David A

    2015-08-01

    Learning curves, which graphically show the relationship between learning effort and achievement, are common in published education research but are not often used in day-to-day educational activities. The purpose of this article is to describe the generation and analysis of learning curves and their applicability to health professions education. The authors argue that the time is right for a closer look at using learning curves, given their desirable properties, to inform both self-directed instruction by individuals and education management by instructors. A typical learning curve is made up of a measure of learning (y-axis), a measure of effort (x-axis), and a mathematical linking function. At the individual level, learning curves make manifest a single person's progress towards competence, including his/her rate of learning, the inflection point where learning becomes more effortful, and the remaining distance to mastery attainment. At the group level, overlaid learning curves show the full variation of a group of learners' paths through a given learning domain. Specifically, they make overt the difference between time-based and competency-based approaches to instruction. Additionally, instructors can use learning curve information to more accurately target educational resources to those who most require them. The learning curve approach requires a fine-grained collection of data that will not be possible in all educational settings; however, the increased use of an assessment paradigm that explicitly includes effort and its link to individual achievement could result in increased learner engagement and more effective instructional design.
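
    As a sketch of the ingredients named above (the particular linking function and the data are illustrative assumptions, not the authors' prescription), one common choice is a negatively accelerated exponential linking effort to achievement:

        import numpy as np
        from scipy.optimize import curve_fit

        def learning_curve(effort, y_max, rate):
            """Achievement rises toward the mastery asymptote y_max."""
            return y_max * (1.0 - np.exp(-rate * effort))

        effort = np.arange(1.0, 21.0)  # e.g., practice sessions
        score = learning_curve(effort, 95.0, 0.18) + np.random.normal(0.0, 2.0, 20)

        (y_max, rate), _ = curve_fit(learning_curve, effort, score, p0=(80.0, 0.1))
        print(f"mastery asymptote {y_max:.1f}, learning rate {rate:.3f} per session")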

  12. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent spaces of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  13. Survival curves study of platelet labelling with 51Cr

    International Nuclear Information System (INIS)

    Penas, M.E.

    1981-01-01

    The literature on platelet kinetics and idiopathic thrombocytopenic purpura was reviewed. An in vitro procedure for platelet labelling with 51Cr, currently under implementation, was evaluated in human beings. The functions used for fitting covered the cases of linear and exponential curves, as well as the presence of red blood cells (hematies). (author)

  14. Dose-effect Curve for X-radiation in Lymphocytes in Goats

    International Nuclear Information System (INIS)

    Hasanbasic, D.; Saracevic, L.; Sacirbegovic, A.

    1998-01-01

    A dose-effect curve for X-radiation was constructed based on the analysis of chromosome aberrations in lymphocytes of goats. Blood samples from seven goats were irradiated using the MOORHEAD method, slightly modified and adapted to our conditions. A linear-quadratic model was used, and the dose-effect curves were fitted by the least squares method. The collective dose-effect curve for goats is given by the expression: y(D) = 8.6639·10⁻³·D + 2.9748·10⁻²·D² + 2.9475·10⁻³. Comparison with some domestic animals such as sheep and pigs showed differences not only with respect to the linear-quadratic model, but to other mathematical representations as well. (author)
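
    Fitting such a linear-quadratic yield curve y(D) = c + αD + βD² is an ordinary polynomial least-squares problem. A sketch with invented yields chosen to be consistent with the collective curve above (a production analysis would add Poisson weighting of the aberration counts):

        import numpy as np

        dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])  # Gy
        yld = np.array([0.003, 0.015, 0.041, 0.139, 0.297, 0.513])  # aberrations/cell

        beta, alpha, c = np.polyfit(dose, yld, deg=2)  # y = beta*D^2 + alpha*D + c
        print(f"y(D) = {alpha:.4f} D + {beta:.4f} D^2 + {c:.4f}")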

  15. The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates

    Science.gov (United States)

    Sivo, Stephen; Fan, Xitao; Witta, Lea

    2005-01-01

    The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…

  16. Estimation of growth curve parameters in Konya Merino sheep ...

    African Journals Online (AJOL)

    The objective of this study was to determine the fitness of Quadratic, Cubic, Gompertz and Logistic functions to the growth curves of Konya Merino lambs obtained by using monthly records of live weight from birth to 480 days of age. The models were evaluated according to determination coefficient (R2), mean square ...
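
    A sketch of fitting one of the listed functions (the Gompertz curve; the weight records are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, a, b, k):
            """Gompertz growth: a = mature weight, b = shape, k = rate per day."""
            return a * np.exp(-b * np.exp(-k * t))

        age = np.array([0.0, 30.0, 60.0, 120.0, 180.0, 240.0, 360.0, 480.0])  # days
        wt = np.array([4.5, 11.0, 18.0, 30.0, 38.0, 44.0, 52.0, 56.0])  # kg

        (a, b, k), _ = curve_fit(gompertz, age, wt, p0=(60.0, 2.0, 0.01))
        r2 = 1.0 - np.sum((wt - gompertz(age, a, b, k)) ** 2) / np.sum((wt - wt.mean()) ** 2)
        print(f"a = {a:.1f} kg, b = {b:.2f}, k = {k:.4f}, R2 = {r2:.3f}")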

  17. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    International Nuclear Information System (INIS)

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    2016-01-01

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate. - Highlights: • We develop a segmented regression technique to estimate historical CFL learning curves. • CFL experience curves do not have a constant learning rate. • CFLs exhibited a learning rate of approximately 21% from 1990 to 1997. • The CFL learning rate significantly increased after 1998. • Increased CFL learning rate is correlated to technology deployment programs.
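
    In experience-curve terms, price falls as P = P0·Q^(-E) with cumulative production Q, and the learning rate is LR = 1 - 2^(-E). A rough sketch of a segmented fit, which tries each candidate break point in log-log space and keeps the one with the smallest total squared error (synthetic data, not the study's):

        import numpy as np

        def fit_segment(logq, logp):
            slope, intercept = np.polyfit(logq, logp, 1)
            sse = np.sum((logp - (intercept + slope * logq)) ** 2)
            return slope, sse

        def segmented_learning_rates(q, p):
            """One break point; each segment keeps at least 3 points."""
            logq, logp = np.log(q), np.log(p)
            best = None
            for k in range(3, len(q) - 3):
                s1, e1 = fit_segment(logq[:k], logp[:k])
                s2, e2 = fit_segment(logq[k:], logp[k:])
                if best is None or e1 + e2 < best[0]:
                    best = (e1 + e2, k, s1, s2)
            _, k, s1, s2 = best
            lr = lambda s: 1.0 - 2.0 ** s  # slope s is negative for falling prices
            return k, lr(s1), lr(s2)

        q = np.cumsum(np.full(18, 1e6))  # cumulative units shipped (synthetic)
        p = 10.0 * (q / q[0]) ** -0.34  # constant ~21% learning rate
        p *= np.exp(np.random.normal(0.0, 0.02, p.size))  # price noise
        k, lr1, lr2 = segmented_learning_rates(q, p)
        print(f"break at index {k}: LR = {lr1:.0%} before, {lr2:.0%} after")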

  18. A new method for measuring coronary artery diameters with CT spatial profile curves

    International Nuclear Information System (INIS)

    Shimamoto, Ryoichi; Suzuki, Jun-ichi; Yamazaki, Tadashi; Tsuji, Taeko; Ohmoto, Yuki; Morita, Toshihiro; Yamashita, Hiroshi; Honye, Junko; Nagai, Ryozo; Akahane, Masaaki; Ohtomo, Kuni

    2007-01-01

    Purpose: Coronary artery vascular edge recognition on computed tomography (CT) angiograms is influenced by window parameters. A noninvasive method for vascular edge recognition independent of window setting with use of multi-detector row CT was contrived and its feasibility and accuracy were estimated by intravascular ultrasound (IVUS). Methods: Multi-detector row CT was performed to obtain 29 CT spatial profile curves by setting a line cursor across short-axis coronary angiograms processed by multi-planar reconstruction. IVUS was also performed to determine the reference coronary diameter. IVUS diameter was fitted horizontally between two points on the upward and downward slopes of the profile curves and Hounsfield number was measured at the fitted level to test seven candidate indexes for definition of intravascular coronary diameter. The best index from the curves should show the best agreement with IVUS diameter. Results: Of the seven candidates the agreement was the best (agreement: 16 ± 11%) when the two ratios of Hounsfield number at the level of IVUS diameter over that at the peak on the profile curves were used with water and with fat as the background tissue. These edge definitions were achieved by cutting the horizontal distance by the curves at the level defined by the ratio of 0.41 for water background and 0.57 for fat background. Conclusions: Vascular edge recognition of the coronary artery with CT spatial profile curves was feasible and the contrived method could define the coronary diameter with reasonable agreement
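
    The edge definition reduces to measuring the chord of the profile curve at a fixed fraction of its peak (0.41 of the peak for a water background, 0.57 for fat, per the abstract). A sketch on a synthetic profile (the Gaussian shape and dimensions are invented):

        import numpy as np

        def diameter_at_ratio(x, hu, ratio):
            """Width of the profile where it is at or above ratio * peak."""
            above = np.where(hu >= ratio * hu.max())[0]
            return x[above[-1]] - x[above[0]]

        x = np.linspace(-4.0, 4.0, 401)  # mm along the line cursor
        hu = 350.0 * np.exp(-((x / 1.6) ** 2))  # synthetic CT spatial profile curve

        print(f"diameter (water, ratio 0.41): {diameter_at_ratio(x, hu, 0.41):.2f} mm")
        print(f"diameter (fat,   ratio 0.57): {diameter_at_ratio(x, hu, 0.57):.2f} mm")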

  19. Temporal issues in person-organization fit, person-job fit, and turnover : The role of leader-member exchange

    NARCIS (Netherlands)

    Boon, C.; Biron, M.

    2016-01-01

    Person–environment fit has been found to have significant implications for employee attitudes and behaviors. Most research to date has approached person–environment fit as a static phenomenon, and without examining how different types of person–environment fit may affect each other. In particular,

  20. MICA: Multiple interval-based curve alignment

    Science.gov (United States)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analyses pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA

  1. The 'fitting problem' in cosmology

    International Nuclear Information System (INIS)

    Ellis, G.F.R.; Stoeger, W.

    1987-01-01

    The paper considers the best way to fit an idealised exactly homogeneous and isotropic universe model to a realistic ('lumpy') universe; whether made explicit or not, some such approach of necessity underlies the use of the standard Robertson-Walker models as models of the real universe. Approaches based on averaging, normal coordinates and null data are presented, the latter offering the best opportunity to relate the fitting procedure to data obtainable by astronomical observations. (author)

  2. A comparative analysis of the EEDF obtained by Regularization and by Least square fit methods

    International Nuclear Information System (INIS)

    Gutierrez T, C.; Flores Ll, H.

    2004-01-01

    The second derivative of the current-voltage (I-V) characteristic curve of a Langmuir probe is numerically calculated using the Tikhonov method in order to determine the electron energy distribution function (EEDF). A comparison between the EEDF obtained this way and one obtained by a least squares (LS) fit is discussed. The experimental I-V curve is obtained with a cylindrical probe in an electron cyclotron resonance (ECR) plasma source. The plasma parameters are determined from the EEDF by means of the Laframboise theory. For the LS fit, the obtained results are similar to those of the Tikhonov method, but in this case the procedure is slow to achieve the best fit. (Author)

  3. Calculating the parameters of experimental data Gauss distribution using the least square fit method and evaluation of their accuracy

    International Nuclear Information System (INIS)

    Guseva, E.V.; Peregudov, V.N.

    1982-01-01

    The FITGAV program for calculating the parameters of a Gauss curve describing experimental data is considered. The calculations are based on the least squares fit method. Estimates of the errors in the parameter determination, as a function of the experimental data sample volume, and of their statistical significance are obtained. A curve fit using 100 points takes less than 1 s on an SM-4 type computer.
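
    A present-day equivalent of such a least-squares Gaussian fit, with 1-sigma parameter errors taken from the covariance matrix (FITGAV itself is not reproduced; the data are synthetic):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, amp, mu, sigma):
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        x = np.linspace(-5.0, 5.0, 100)
        y = gauss(x, 10.0, 0.3, 1.2) + np.random.normal(0.0, 0.3, x.size)

        popt, pcov = curve_fit(gauss, x, y, p0=(5.0, 0.0, 1.0))
        perr = np.sqrt(np.diag(pcov))  # 1-sigma errors on amp, mu, sigma
        for name, val, err in zip(("amp", "mu", "sigma"), popt, perr):
            print(f"{name} = {val:.3f} +/- {err:.3f}")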

  4. Quantitative description of the magnetization curves of amorphous alloys of the series a-DyxGd1-xNi

    International Nuclear Information System (INIS)

    Barbara, B.; Filippi, J.; Amaral, V.S.

    1992-01-01

    The magnetization curves of the series of amorphous alloys Dy x Gd 1-x Ni measured between 1.5 and 4.2 K and up to 15 T, have been fitted to the zero kelvin analytical model of Chudnovsky. The results of these fits allow a detailed understanding of the magnetization curves of amorphous alloys with ferromagnetic interactions. In particular, the ratio D/J of the local anisotropy and exchange energies, and the magnetic and atomic correlation lengths, are accurately determined. (orig.)

  5. QUENCH: A software package for the determination of quenching curves in Liquid Scintillation counting

    International Nuclear Information System (INIS)

    Cassette, Philippe

    2016-01-01

    In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system and its detection efficiency varies with the scintillator used, the vial, and the volume and chemistry of the sample. The detection efficiency is generally determined using a quenching curve describing, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. Then a simple formula is fitted to the experimental points to define the quenching curve function. The paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times and eventually the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between these parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms. - Highlights: • The program “QUENCH” is devoted to the interpolation of quenching curves in LSC. • Functions are fitted to experimental data with uncertainties in both quenching and efficiency. • The parameters of the fitting function and the associated covariance matrix are evaluated. • The detection efficiency and uncertainty corresponding to a given quenching index is calculated.
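
    The described procedure (sample Gaussian fluctuations of both coordinates, refit, accumulate parameter statistics, then propagate variances and covariances) can be sketched as follows; the data, uncertainties and polynomial order are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)

        qi = np.array([400.0, 450.0, 500.0, 550.0, 600.0])  # quenching index
        eff = np.array([0.72, 0.78, 0.83, 0.87, 0.90])  # detection efficiency
        u_qi, u_eff = 5.0, 0.01  # standard uncertainties on both quantities

        params = []
        for _ in range(10000):  # Monte Carlo trials
            q = qi + rng.normal(0.0, u_qi, qi.size)  # fluctuate quenching index
            e = eff + rng.normal(0.0, u_eff, eff.size)  # fluctuate efficiency
            params.append(np.polyfit(q, e, 2))  # least-squares polynomial fit
        params = np.array(params)

        mean, cov = params.mean(axis=0), np.cov(params.T)

        q0 = 520.0  # arbitrary quenching index within the measured range
        j = np.array([q0 ** 2, q0, 1.0])  # gradient of the polynomial in its parameters
        print(f"eff({q0}) = {np.polyval(mean, q0):.4f} +/- {np.sqrt(j @ cov @ j):.4f}")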

  6. Modelling stochastic changes in curve shape, with an application to cancer diagnostics

    DEFF Research Database (Denmark)

    Hobolth, A; Jensen, Eva B. Vedel

    2000-01-01

    Often, the statistical analysis of the shape of a random planar curve is based on a model for a polygonal approximation to the curve. In the present paper, we instead describe the curve as a continuous stochastic deformation of a template curve. The advantage of this continuous approach is that the parameters in the model do not relate to a particular polygonal approximation. A somewhat similar approach has been used by Kent et al. (1996), who describe the limiting behaviour of a model with a first-order Markov property as the landmarks on the curve become closely spaced; see also Grenander (1993...

  7. IEFIT - An Interactive Approach to High Temperature Fusion Plasma Magnetic Equilibrium Fitting

    International Nuclear Information System (INIS)

    Peng, Q.; Schachter, J.; Schissel, D.P.; Lao, L.L.

    1999-01-01

    An interactive IDL-based wrapper, IEFIT, has been created for the magnetic equilibrium reconstruction code EFIT written in FORTRAN. It allows high temperature fusion physicists to rapidly optimize a plasma equilibrium reconstruction by eliminating the unnecessary repeated initialization of the conventional approach and by immediately displaying the fitting results of each input variation. It uses a new IDL-based graphics package, GaPlotObj, developed in cooperation with Fanning Software Consulting, that provides a unified interface with great flexibility in presenting and analyzing scientific data. The overall interactivity reduces the process from the usual hours to minutes.

  8. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    DEFF Research Database (Denmark)

    Bolker, B.M.; Gardner, B.; Maunder, M.

    2013-01-01

    Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield...

  9. Simple approach to approximate predictions of the vapor–liquid equilibrium curve near the critical point and its application to Lennard-Jones fluids

    International Nuclear Information System (INIS)

    Staśkiewicz, B.; Okrasiński, W.

    2012-01-01

    We propose a simple analytical form of the vapor–liquid equilibrium curve near the critical point for Lennard-Jones fluids. The coexistence density curves and the vapor pressure have been determined using the Van der Waals and Dieterici equations of state. The described method uses Bernoulli differential equations, critical exponent theory and a form of Maxwell's criterion. This approach has not previously been used to determine an analytical form of the phase curves as is done in this Letter. Lennard-Jones fluids have been considered for the analysis. A comparison with experimental data is made, and the accuracy of the method is described. -- Highlights: ► We propose a new analytical way to determine the VLE curve. ► A simple, mathematically straightforward form of the phase curves is presented. ► Comparison with experimental data is discussed. ► The accuracy of the method has been confirmed.

  10. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

    Full Text Available Abstract Background Currently real time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data and processing may notably influence final results. The data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real time PCR. Results We designed a procedure for data processing in relative real time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from the regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of the points where the threshold line crosses the fluorescence plots obtained after the noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for the aggregation of data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that...
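
    Steps (I)-(III) can be sketched as follows (an illustrative implementation; the automatic threshold selection of step (II) from the standard curve regression is replaced here by a fixed threshold):

        import numpy as np

        def crossing_point(cycles, fluor, threshold, window=3):
            """CP = interpolated cycle where the filtered signal crosses the threshold."""
            f = np.convolve(fluor, np.ones(window) / window, mode="same")  # (I) smoothing
            f = f - f[:5].mean()  # (I) baseline subtraction
            f = f / f.max()  # (I) amplitude normalization
            i = np.argmax(f >= threshold)  # first cycle at or above the threshold
            # (III) linear interpolation between the two bracketing cycles
            return np.interp(threshold, [f[i - 1], f[i]], [cycles[i - 1], cycles[i]])

        cycles = np.arange(1.0, 41.0)
        fluor = 1.0 / (1.0 + np.exp(-(cycles - 24.0) / 1.6))  # synthetic amplification plot
        print(f"CP = {crossing_point(cycles, fluor, 0.2):.2f} cycles")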

  11. Curve fitting using a genetic algorithm for the X-ray fluorescence measurement of lead in bone

    International Nuclear Information System (INIS)

    Luo, L.; McMaster University, Hamilton; Chettle, D.R.; Nie, H.; McNeill, F.E.; Popovic, M.

    2006-01-01

    We investigated the potential application of the genetic algorithm to the analysis of X-ray fluorescence spectra from measurements of lead in bone. Candidate solutions are first designed based on field knowledge, and the whole cycle of evaluation, selection, crossover and mutation is then repeated until a given convergence criterion is met. An average-parameters based genetic algorithm is suggested to improve the fitting precision and accuracy. The relative standard deviations (RSD%) of the fitted amplitude, peak position and width are 1.3-7.1, 0.009-0.14 and 1.4-3.3, respectively. The genetic algorithm was shown to give good resolution and fitting of the Pb K lines and elastic γ peaks. (author)

  12. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued use rates. Developing the means to address this issue requires further understanding of mHealth app nonusers and adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use, with half of the participants stating that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012=21.83, P<.001), findings that could inform the design of digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive decision-making style.

  13. A new procedure for automatic fitting of the basilar-membrane input-output function to individual behavioral data

    DEFF Research Database (Denmark)

    Kowalewski, Borys; Fereczkowski, Michal; MacDonald, Ewen

    2016-01-01

    system and, potentially, for clinical diagnostics. Computational algorithms are available that mimic the functioning of the nonlinear cochlear processing. One such algorithm is the dual resonance non-linear (DRNL) filterbank [6]. Its parameters can be modified to account for individual hearing loss, e.g., based on behavioral temporal masking curve (TMC) data. This approach was used within the framework of the computational auditory signal-processing and perception (CASP) model to account for various aspects of SNHL [4]. However, due to the computational complexity, on-line fitting of the DRNL...

  14. Comparison of parametric, orthogonal, and spline functions to model individual lactation curves for milk yield in Canadian Holsteins

    Directory of Open Access Journals (Sweden)

    Corrado Dimauro

    2010-11-01

    Full Text Available Test day records for milk yield of 57,390 first lactation Canadian Holsteins were analyzed with a linear model that included the fixed effects of herd-test date and days in milk (DIM) interval nested within age and calving season. Residuals from this model were analyzed as a new variable and fitted with a five-parameter model, fourth-order Legendre polynomials, and linear, quadratic and cubic spline models with three knots. The fit of the models was rather poor, with about 30-40% of the curves showing an adjusted R-square lower than 0.20 across all models. The results underline a great difficulty in modelling individual deviations around the mean curve for milk yield. However, the five-parameter Ali and Schaeffer model and the fourth-order Legendre polynomials were able to detect two basic shapes of individual deviations from the mean curve. Quadratic and, especially, cubic spline functions had better fitting performance but poor predictive ability, due to their great flexibility, which results in an abrupt change of the estimated curve when data are missing. Parametric and orthogonal polynomials seem to be robust and affordable from this standpoint.

  15. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer procedure is described for fitting radioimmunological standard curves by means of an exponential model on a desk-top calculator. This method has been applied to a variety of radioassays and the results are in accordance with those obtained by more sophisticated models [fr

  16. A Bayesian hierarchical model for demand curve analysis.

    Science.gov (United States)

    Ho, Yen-Yi; Nhu Vo, Tien; Chu, Haitao; Luo, Xianghua; Le, Chap T

    2018-07-01

    Drug self-administration experiments are a frequently used approach to assessing the abuse liability and reinforcing property of a compound. It has been used to assess the abuse liabilities of various substances such as psychomotor stimulants and hallucinogens, food, nicotine, and alcohol. The demand curve generated from a self-administration study describes how demand for a drug or non-drug reinforcer varies as a function of price. With the approval of the 2009 Family Smoking Prevention and Tobacco Control Act, demand curve analysis provides crucial evidence to inform the US Food and Drug Administration's policy on tobacco regulation, because it produces several important quantitative measurements to assess the reinforcing strength of nicotine. The conventional approach popularly used to analyze demand curve data is individual-specific non-linear least squares regression. The non-linear least squares approach sets out to minimize the residual sum of squares for each subject in the dataset; however, this one-subject-at-a-time approach does not allow for the estimation of between- and within-subject variability in a unified model framework. In this paper, we review the existing approaches to analyzing demand curve data, non-linear least squares regression and mixed effects regression, and propose a new Bayesian hierarchical model. We conduct simulation analyses to compare the performance of these three approaches and illustrate the proposed approaches in a case study of nicotine self-administration in rats. We present simulation results and discuss the benefits of using the proposed approaches.
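
    For concreteness, the conventional one-subject-at-a-time fit that the abstract contrasts with the hierarchical model can be sketched with one widely used parametric form, the Hursh-Silberberg exponential demand equation (the choice of model, the fixed span constant K, and the data are illustrative assumptions):

        import numpy as np
        from scipy.optimize import curve_fit

        K = 2.5  # span constant, often fixed across subjects

        def log_demand(price, q0, alpha):
            """log10 consumption as a function of unit price."""
            return np.log10(q0) + K * (np.exp(-alpha * q0 * price) - 1.0)

        price = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # unit price
        consumption = np.array([48.0, 45.0, 38.0, 24.0, 8.0, 1.5])  # one subject

        (q0, alpha), _ = curve_fit(log_demand, price, np.log10(consumption),
                                   p0=(50.0, 1e-3))
        print(f"Q0 = {q0:.1f} (demand intensity), alpha = {alpha:.2e}")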

  17. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion

  18. Creep curve modeling of hastelloy-X alloy by using the theta projection method

    International Nuclear Information System (INIS)

    Woo Gon, Kim; Woo-Seog, Ryu; Jong-Hwa, Chang; Song-Nan, Yin

    2007-01-01

    To model the creep curves of the Hastelloy-X alloy, which is being considered as a candidate material for VHTR (Very High Temperature gas-cooled Reactor) components, full creep curves were obtained by constant-load creep tests at different stress levels at 950°C. Using the experimental creep data, the creep curves were modeled by applying the Theta projection method. A number of computations of a nonlinear least squares fitting (NLSF) analysis were carried out to establish suitable values of the four Theta parameters. The results showed that the Θ1 and Θ2 parameters could not be optimized well, showing a large error when fitting the full creep curves, whereas the Θ3 and Θ4 parameters were optimized well without error. To find a suitable cutoff strain criterion, the NLSF analysis was therefore performed with various cutoff strains for all the creep curves. The optimum cutoff strain for defining the four Theta parameters accurately was found to be 3%. At the 3% cutoff strain, the predicted curves coincided well with the experimental ones. The variation of the four Theta parameters as a function of stress showed good linearity, and the creep curves were modeled well for the low stress levels. The predicted minimum creep rate showed good agreement with the experimental data. Also, for design usage of the Hastelloy-X alloy, the plot of log stress versus log time to 1% strain was predicted, and the creep rate curves with time and a cutoff strain at 950°C were constructed numerically for a wide range of stresses by using the Theta projection method. (authors)
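
    The standard Theta projection form expresses the creep strain as ε(t) = Θ1(1 - e^(-Θ2·t)) + Θ3(e^(Θ4·t) - 1), a saturating primary stage plus an accelerating tertiary stage. A sketch of the NLSF step on a synthetic curve (the parameter values are invented):

        import numpy as np
        from scipy.optimize import curve_fit

        def theta_projection(t, th1, th2, th3, th4):
            """Primary (saturating) plus tertiary (accelerating) creep strain."""
            return th1 * (1.0 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1.0)

        t = np.linspace(0.0, 1000.0, 200)  # hours
        strain = theta_projection(t, 0.02, 0.01, 0.001, 0.004)
        strain += np.random.normal(0.0, 2e-4, t.size)

        p0 = (0.01, 0.01, 0.001, 0.001)
        (th1, th2, th3, th4), _ = curve_fit(theta_projection, t, strain,
                                            p0=p0, maxfev=20000)
        rate = th1 * th2 * np.exp(-th2 * t) + th3 * th4 * np.exp(th4 * t)  # creep rate
        print(f"thetas: {th1:.4f} {th2:.4f} {th3:.5f} {th4:.5f}; "
              f"minimum creep rate: {rate.min():.2e} per hour")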

  19. REFLECTED LIGHT CURVES, SPHERICAL AND BOND ALBEDOS OF JUPITER- AND SATURN-LIKE EXOPLANETS

    Energy Technology Data Exchange (ETDEWEB)

    Dyudina, Ulyana; Kopparla, Pushkar; Ingersoll, Andrew P.; Yung, Yuk L. [Division of Geological and Planetary Sciences, 150-21 California Institute of Technology, Pasadena, CA 91125 (United States); Zhang, Xi [University of California Santa Cruz 1156 High Street, Santa Cruz, CA 95064 (United States); Li, Liming [Department of Physics, University of Houston, Houston, TX 77204 (United States); Dones, Luke [Southwest Research Institute, 1050 Walnut Street, Suite 300, Boulder CO 80302 (United States); Verbiscer, Anne, E-mail: ulyana@gps.caltech.edu [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States)

    2016-05-10

    Reflected light curves observed for exoplanets indicate that a few of them host bright clouds. We estimate how the light curve and total stellar heating of a planet depend on forward and backward scattering in the clouds based on Pioneer and Cassini spacecraft images of Jupiter and Saturn. We fit analytical functions to the local reflected brightnesses of Jupiter and Saturn depending on the planet’s phase. These observations cover broadbands at 0.59–0.72 and 0.39–0.5 μm, and narrowbands at 0.938 (atmospheric window), 0.889 (CH4 absorption band), and 0.24–0.28 μm. We simulate the images of the planets with a ray-tracing model, and disk-integrate them to produce the full-orbit light curves. For Jupiter, we also fit the modeled light curves to the observed full-disk brightness. We derive spherical albedos for Jupiter and Saturn, and for planets with Lambertian and Rayleigh-scattering atmospheres. Jupiter-like atmospheres can produce light curves that are a factor of two fainter at half-phase than the Lambertian planet, given the same geometric albedo at transit. The spherical albedo is typically lower than for a Lambertian planet by up to a factor of ∼1.5. The Lambertian assumption will underestimate the absorption of the stellar light and the equilibrium temperature of the planetary atmosphere. We also compare our light curves with the light curves of solid bodies: the moons Enceladus and Callisto. Their strong backscattering peak within a few degrees of opposition (secondary eclipse) can lead to an even stronger underestimate of the stellar heating.

  20. Statistical study of clone survival curves after irradiation in one or two stages. Comparison and generalization of different models

    International Nuclear Information System (INIS)

    Lachet, Bernard.

    1975-01-01

    A statistical study was carried out on 208 survival curves for chlorella subjected to γ or particle radiation. The computing programmes used were written in Fortran. The different experimental causes contributing to the variance of a survival rate are analyzed, so that the experiments can be planned accordingly. Each curve was fitted to four models by the weighted least squares method applied to non-linear functions. The validity of the fits obtained can be checked by the F test. It was possible to define the confidence and prediction zones around an adjusted curve by weighting of the residual variance, in spite of error on the doses delivered; the confidence limits can then be fixed for a dose estimated from an exact or measured survival. The four models adopted were compared for the precision of their fit (by a non-parametric simultaneous comparison test) and the scattering of their adjusted parameters: Wideroe's model gives a very good fit to the experimental points in return for a scattering of its parameters, which robs them of their presumed meaning. The principal component analysis showed the statistical equivalence of the one- and two-hit target models. Division of the irradiation into two doses, the first fixed by the investigator, leads to families of curves for which the equation was established from that of any basic model expressing the dose-survival relationship in one-stage irradiation [fr

  1. Optimal weights for circle fitting with discrete granular data

    International Nuclear Information System (INIS)

    Chernov, N.; Kolganova, E.; Ososkov, G.

    1995-01-01

    The problem of approximating data measured along a circle by modern detectors in high energy physics, for example RICH (Ring Imaging Cherenkov) detectors, is considered. Such detectors, having a discrete cell structure, register the energy dissipation produced by a passing elementary particle not at a single point, but distributed over several adjacent cells. The presence of background hits makes circle fitting methods based on the least squares fit inapplicable due to their noise sensitivity. In this paper it is shown that an efficient way to overcome these problems of curve fitting is a robust fitting technique based on a reweighted least squares method with optimally chosen weights, obtained by the use of maximum likelihood estimates. Results of numerical experiments are given, proving the high efficiency of the suggested method. 9 refs., 5 figs., 1 tab
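
    A sketch of the reweighting idea (the maximum-likelihood optimal weights of the paper are not reproduced; a generic Cauchy-type down-weighting of distant hits stands in, on invented ring-plus-background data):

        import numpy as np

        def weighted_circle_fit(x, y, w):
            """Weighted algebraic fit of x^2 + y^2 + a*x + b*y + c = 0."""
            A = np.column_stack([x, y, np.ones_like(x)])
            rhs = -(x ** 2 + y ** 2)
            sw = np.sqrt(w)
            a, b, c = np.linalg.lstsq(sw[:, None] * A, sw * rhs, rcond=None)[0]
            cx, cy = -a / 2.0, -b / 2.0
            return cx, cy, np.sqrt(cx ** 2 + cy ** 2 - c)

        def robust_circle(x, y, iters=10):
            """Reweighted least squares: down-weight hits far from the circle."""
            w = np.ones_like(x)
            for _ in range(iters):
                cx, cy, r = weighted_circle_fit(x, y, w)
                resid = np.abs(np.hypot(x - cx, y - cy) - r)
                scale = np.median(resid) + 1e-12  # robust scale estimate
                w = 1.0 / (1.0 + (resid / (3.0 * scale)) ** 2)  # Cauchy-type weights
            return cx, cy, r

        rng = np.random.default_rng(0)
        phi = rng.uniform(0.0, 2.0 * np.pi, 60)
        x = 5.0 + 2.0 * np.cos(phi) + rng.normal(0.0, 0.05, 60)  # ring hits
        y = 3.0 + 2.0 * np.sin(phi) + rng.normal(0.0, 0.05, 60)
        x = np.concatenate([x, rng.uniform(2.0, 8.0, 15)])  # background hits
        y = np.concatenate([y, rng.uniform(0.0, 6.0, 15)])
        print(robust_circle(x, y))  # approximately (5, 3, 2)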

  2. Using commercial simulators for determining flash distillation curves for petroleum fractions

    Directory of Open Access Journals (Sweden)

    Eleonora Erdmann

    2008-01-01

    Full Text Available This work describes a new method for estimating the equilibrium flash vaporisation (EFV) distillation curve for petroleum fractions by using commercial simulators. A commercial simulator was used to implement a stationary model for flash distillation; this model was adjusted using a distillation curve obtained from standard laboratory analytical assays. Such a curve can be one of many types (e.g. ASTM D86, D1160 or D2887) and involves an experimental procedure simpler than that required for obtaining an EFV curve. Any commercial simulator able to model petroleum can be used for the simulation (the HYSYS and CHEMCAD simulators were used here). Several types of petroleum and fractions were experimentally analysed to evaluate the proposed method; this data was then put into a process simulator (according to the proposed method) to estimate the corresponding EFV curves. The HYSYS- and CHEMCAD-estimated curves were compared to those produced by two traditional estimation methods (Edmister's and Maxwell's methods). The simulation-estimated curves were close to the average Edmister and Maxwell curves in all cases. The proposed method has several advantages: it avoids the need to experimentally obtain an EFV curve, it does not depend on the type of experimental curve used to fit the model, and it enables estimates at several pressures using just one experimental curve as data.

  3. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    Science.gov (United States)

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of the model parameters (background, saturation and slope) was 1.8%, 5.7%, and 7.7% (1σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes and increases with increasing depth above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.
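
    A sketch of how the three-parameter single-target single-hit film response model can be fitted is shown below. The functional form OD(D) = background + saturation·(1 − exp(−slope·D)) follows the parameter names in the abstract, while the dose and optical density values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def film_response(dose, background, saturation, slope):
    """Single-target single-hit model of film response:
    OD(D) = background + saturation * (1 - exp(-slope * D))."""
    return background + saturation * (1.0 - np.exp(-slope * dose))

# Hypothetical calibration points (dose in cGy, net optical density).
dose = np.array([16, 32, 48, 64, 96, 128], dtype=float)
od = np.array([0.25, 0.45, 0.62, 0.76, 0.98, 1.14])

popt, _ = curve_fit(film_response, dose, od, p0=[0.1, 2.0, 0.005])
print(dict(zip(["background", "saturation", "slope"], popt)))
```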

  4. Dose - Response Curves for Dicentrics and PCC Rings: Preparedness for Radiological Emergency in Thailand

    International Nuclear Information System (INIS)

    Rungsimaphorn, B.; Rerkamnuaychoke, B.; Sudprasert, W.

    2014-01-01

    Establishing in-vitro dose calibration curves is important for reconstruction of radiation dose in the exposed individuals. The aim of this pioneering work in Thailand was to generate dose-response curves using conventional biological dosimetry: dicentric chromosome assay (DCA) and premature chromosome condensation (PCC) assay. The peripheral blood lymphocytes were irradiated with ¹³⁷Cs at a dose rate of 0.652 Gy/min to doses of 0.1, 0.25, 0.5, 0.75, 1, 2, 3, 4 and 5 Gy for the DCA technique, and 5, 10, 15, 20 and 25 Gy for the PCC technique. The blood samples were cultured and processed following the standard procedure given by the IAEA with slight modifications. At least 500-1,000 metaphases or 100 dicentrics/PCC rings were analyzed using an automated metaphase finder system. The yield of dicentrics with dose was fitted to a linear quadratic model using Chromosome Aberration Calculation Software (CABAS, version 2.0), whereas the dose-response curve of PCC rings was fitted to a linear relationship. These curves will be useful for in-vitro dose reconstruction and can support the preparedness for radiological emergency in the country.
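
    The study fitted its dicentric yields with the dedicated CABAS software; purely as an illustration, a generic linear-quadratic fit of the same kind, Y = c + αD + βD², can be sketched in a few lines of Python (the yields below are invented, not the paper's data).

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_quadratic(dose, c, alpha, beta):
    """Dicentric yield per cell: Y = c + alpha*D + beta*D^2."""
    return c + alpha * dose + beta * dose**2

# Hypothetical dicentric yields (aberrations per cell) at each dose (Gy).
dose = np.array([0.1, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 4.0, 5.0])
yield_ = np.array([0.002, 0.006, 0.02, 0.04, 0.07, 0.22, 0.45, 0.75, 1.10])

popt, _ = curve_fit(linear_quadratic, dose, yield_, p0=[0.001, 0.02, 0.05])
c, alpha, beta = popt
print(f"Y = {c:.4f} + {alpha:.4f}*D + {beta:.4f}*D^2")
```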

  5. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    Energy Technology Data Exchange (ETDEWEB)

    B. Bullard

    1999-05-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4.

  6. FITTING OF THE DATA FOR DIFFUSION COEFFICIENTS IN UNSATURATED POROUS MEDIA

    International Nuclear Information System (INIS)

    B. Bullard

    1999-01-01

    The purpose of this calculation is to evaluate diffusion coefficients in unsaturated porous media for use in the TSPA-VA analyses. Using experimental data, regression techniques were used to curve fit the diffusion coefficient in unsaturated porous media as a function of volumetric water content. This calculation substantiates the model fit used in Total System Performance Assessment-1995 An Evaluation of the Potential Yucca Mountain Repository (TSPA-1995), Section 6.5.4

  7. Quantitative description of the magnetization curves of amorphous alloys of the series a-DyₓGd₁₋ₓNi

    Science.gov (United States)

    Barbara, B.; Amaral, V. S.; Filippi, J.

    1992-10-01

    The magnetization curves of the series of amorphous alloys DyₓGd₁₋ₓNi, measured between 1.5 and 4.2 K and up to 15 T, have been fitted to the zero-kelvin analytical model of Chudnovsky [1]. The results of these fits allow a detailed understanding of the magnetization curves of amorphous alloys with ferromagnetic interactions. In particular, the ratio D/J of the local anisotropy and exchange energies, and the magnetic and atomic correlation lengths, are accurately determined.

  8. A minicourse on moduli of curves

    International Nuclear Information System (INIS)

    Looijenga, E.

    2000-01-01

    These are notes that accompany a short course given at the School on Algebraic Geometry 1999 at the ICTP, Trieste. A major goal is to outline various approaches to moduli spaces of curves. In the last part I discuss the algebraic classes that naturally live on these spaces; these can be thought of as the characteristic classes for bundles of curves. (author)

  9. Land administration in Ecuador; Current situation and opportunities with adoption of fit-for-purpose land administration approach

    NARCIS (Netherlands)

    Todorovski, D.; Salazar, Rodolfo; Jacome, Ginella; Bermeo, Antonio; Orellana, Esteban; Zambrano, Fatima; Teran, Andrea; Mejia, Raul

    2018-01-01

    The aim of this paper is to explore the current land administration situation in Ecuador and identify opportunities for the fit-for-purpose (FFP) land administration approach that could improve the land administration functions for the country and its citizens. In this paper, initially literature about land

  10. Intensity Conserving Spectral Fitting

    Science.gov (United States)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
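
    The core of the ICSI procedure, iterating until the fitted spline's average over each wavelength bin reproduces the observed bin-averaged intensity, can be sketched as follows. This is a schematic reconstruction from the description above, not the authors' routine: the quadrature rule, the node-update step and the fixed iteration count are simplifying assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def intensity_conserving_spline(centers, observed, bin_width, n_iter=10):
    """Iterate the node values until the spline's mean over each wavelength
    bin matches the observed (bin-averaged) intensity."""
    offsets = np.linspace(-0.5, 0.5, 11) * bin_width   # quadrature points per bin
    values = np.asarray(observed, dtype=float).copy()
    for _ in range(n_iter):
        spline = CubicSpline(centers, values)
        # Average of the current spline across each bin (simple quadrature).
        bin_means = spline(centers[:, None] + offsets[None, :]).mean(axis=1)
        values += observed - bin_means                 # correct by the residual
    return CubicSpline(centers, values)
```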

  11. Establishment and validation of a dose-effect curve for γ-rays by cytogenetic analysis

    International Nuclear Information System (INIS)

    Barquinero, Joan F.; Caballin, Maria Rosa; Barrios, Leonardo; Ribas, Montserrat; Miro, Rosa; Egozcue, Josep

    1995-01-01

    A dose-effect curve obtained by analysis of dicentric chromosomes after irradiation of peripheral blood samples, from one donor, at 11 different doses of γ-rays is presented. For the elaboration of this curve, more than 18,000 first division metaphases have been analyzed. The results fit very well to the linear-quadratic model. To validate the curve, samples from six individuals (three controls and three occupationally exposed persons) were irradiated at 2 Gy. The results obtained, when compared with the curve, showed that in all cases the 95% confidence interval included the 2 Gy dose, with estimated dose ranges from 1.82 to 2.19 Gy

  12. EMPIRICALLY ESTIMATED FAR-UV EXTINCTION CURVES FOR CLASSICAL T TAURI STARS

    Energy Technology Data Exchange (ETDEWEB)

    McJunkin, Matthew; France, Kevin [Laboratory for Atmospheric and Space Physics, University of Colorado, 600 UCB, Boulder, CO 80303-7814 (United States); Schindhelm, Eric [Southwest Research Institute, 1050 Walnut Street, Suite 300, Boulder, CO 80302 (United States); Herczeg, Gregory [Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He Yuan Lu 5, Haidian Qu, 100871 Beijing (China); Schneider, P. Christian [ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk (Netherlands); Brown, Alex, E-mail: matthew.mcjunkin@colorado.edu [Center for Astrophysics and Space Astronomy, University of Colorado, 593 UCB, Boulder, CO 80309-0593 (United States)

    2016-09-10

    Measurements of extinction curves toward young stars are essential for calculating the intrinsic stellar spectrophotometric radiation. This flux determines the chemical properties and evolution of the circumstellar region, including the environment in which planets form. We develop a new technique using H₂ emission lines pumped by stellar Lyα photons to characterize the extinction curve by comparing the measured far-ultraviolet H₂ line fluxes with model H₂ line fluxes. The difference between model and observed fluxes can be attributed to the dust attenuation along the line of sight through both the interstellar and circumstellar material. The extinction curves are fit by a Cardelli et al. (1989) model and the A_V(H₂) for the 10 targets studied with good extinction fits range from 0.5 to 1.5 mag, with R_V values ranging from 2.0 to 4.7. A_V and R_V are found to be highly degenerate, suggesting that one or the other needs to be calculated independently. Column densities and temperatures for the fluorescent H₂ populations are also determined, with averages of log₁₀(N(H₂)) = 19.0 and T = 1500 K. This paper explores the strengths and limitations of the newly developed extinction curve technique in order to assess the reliability of the results and improve the method in the future.

  13. Exploring Person Fit with an Approach Based on Multilevel Logistic Regression

    Science.gov (United States)

    Walker, A. Adrienne; Engelhard, George, Jr.

    2015-01-01

    The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…

  14. Gamma-Ray Pulsar Light Curves as Probes of Magnetospheric Structure

    Science.gov (United States)

    Harding, A. K.

    2016-01-01

    The large number of gamma-ray pulsars discovered by the Fermi Gamma-Ray Space Telescope since its launch in 2008 dwarfs the handful that were previously known. The variety of observed light curves makes possible a tomography of both the ensemble-averaged field structure and the high-energy emission regions of a pulsar magnetosphere. Fitting the gamma-ray pulsar light curves with model magnetospheres and emission models has revealed that most of the high-energy emission, and the particle acceleration, takes place near or beyond the light cylinder, near the current sheet. As pulsar magnetosphere models become more sophisticated, it is possible to probe magnetic field structure and emission that are self-consistently determined. Light curve modeling will continue to be a powerful tool for constraining the pulsar magnetosphere physics.

  15. [Chinese neonatal birth weight curve for different gestational age].

    Science.gov (United States)

    Zhu, Li; Zhang, Rong; Zhang, Shulian; Shi, Wenjing; Yan, Weili; Wang, Xiaoli; Lyu, Qin; Liu, Ling; Zhou, Qin; Qiu, Quanfang; Li, Xiaoying; He, Haiying; Wang, Jimei; Li, Ruichun; Lu, Jiarong; Yin, Zhaoqing; Su, Ping; Lin, Xinzhu; Guo, Fang; Zhang, Hui; Li, Shujun; Xin, Hua; Han, Yanqing; Wang, Hongyun; Chen, Dongmei; Li, Zhankui; Wang, Huiqin; Qiu, Yinping; Liu, Huayan; Yang, Jie; Yang, Xiaoli; Li, Mingxia; Li, Wenjing; Han, Shuping; Cao, Bei; Yi, Bin; Zhang, Yihui; Chen, Chao

    2015-02-01

    The Chinese reference for birth weight by gestational age had not been updated since 1986. The aim of this study was to set up a Chinese neonatal network to investigate the current situation of birth weight in China, especially preterm birth weight, and to develop a new reference and curve for birth weight by gestational age. A nationwide neonatology network was established in China. This survey was carried out in 63 hospitals of 23 provinces, municipalities and autonomous regions. We continuously collected the information of live births in participating hospitals during the study period of 2011-2014. Data describing birth weight and gestational age were collected prospectively. Newborns' birth weight was measured by electronic scale within 2 hours after birth, while the baby was undressed. The evaluation of gestational age was based on the combination of the mother's last menstrual period, ultrasound in the first trimester and gestational age estimation by a gestational age scoring system. The growth curve was drawn using the LMSP method, implemented in the GAMLSS 1.9-4 package in R software 2.11.1. A total of 159 334 newborn infants were enrolled in this study, 84 447 male and 74 907 female. The mean birth weight was (3 232 ± 555) g; the mean birth weight of male newborns was (3 271 ± 576) g and that of female newborns (3 188 ± 528) g. Testing of the variables' distributions suggested that gestational age and birth weight did not follow a normal distribution; the optimal distribution for them was the BCT distribution. The Q-Q plot and worm plot tests suggested that this curve fitted the distribution optimally. The male and female neonatal birth weight curves were developed using the same method. The GAMLSS method was used to establish a nationwide neonatal birth weight curve, updating the birth weight reference for the first time in 28 years.

  16. F(α) curves: Experimental results

    International Nuclear Information System (INIS)

    Glazier, J.A.; Gunaratne, G.; Libchaber, A.

    1988-01-01

    We study the transition to chaos at the golden and silver means for forced Rayleigh-Benard (RB) convection in mercury. We present f(α) curves below, at, and above the transition, and provide comparisons to the curves calculated for the one-dimensional circle map. We find good agreement at both the golden and silver means. This confirms our earlier observation that for low amplitude forcing, forced RB convection is well described by the one-dimensional circle map and indicates that the f(α) curve is a good measure of the approach to criticality. For selected subcritical experimental data sets we calculate the degree of subcriticality. We also present both experimental and calculated results for f(α) in the presence of a third frequency. Again we obtain agreement: The presence of random noise or a third frequency narrows the right-hand (negative q) side of the f(α) curve. Subcriticality results in symmetrically narrowed curves. We can also distinguish these cases by examining the power spectra and Poincare sections of the time series

  17. Multi-q pattern classification of polarization curves

    Science.gov (United States)

    Fabbri, Ricardo; Bastos, Ivan N.; Neto, Francisco D. Moura; Lopes, Francisco J. P.; Gonçalves, Wesley N.; Bruno, Odemir M.

    2014-02-01

    Several experimental measurements are expressed in the form of one-dimensional profiles, for which there is a scarcity of methodologies able to classify the pertinence of a given result to a specific group. The polarization curves that evaluate the corrosion kinetics of electrodes in corrosive media are applications where the behavior is chiefly analyzed from profiles. Polarization curves are indeed a classic method to determine the global kinetics of metallic electrodes, but the strong nonlinearity from different metals and alloys can overlap and the discrimination becomes a challenging problem. Moreover, even finding a typical curve from replicated tests requires subjective judgment. In this paper, we used the so-called multi-q approach based on the Tsallis statistics in a classification engine to separate the multiple polarization curve profiles of two stainless steels. We collected 48 experimental polarization curves in an aqueous chloride medium of two stainless steel types, with different resistance against localized corrosion. Multi-q pattern analysis was then carried out on a wide potential range, from cathodic up to anodic regions. An excellent classification rate was obtained, at a success rate of 90%, 80%, and 83% for low (cathodic), high (anodic), and both potential ranges, respectively, using only 2% of the original profile data. These results show the potential of the proposed approach towards efficient, robust, systematic and automatic classification of highly nonlinear profile curves.

  18. Clarification of the use of chi-square and likelihood functions in fits to histograms

    International Nuclear Information System (INIS)

    Baker, S.; Cousins, R.D.

    1984-01-01

    We consider the problem of fitting curves to histograms in which the data obey multinomial or Poisson statistics. Techniques commonly used by physicists are examined in light of standard results found in the statistics literature. We review the relationship between multinomial and Poisson distributions, and clarify a sufficient condition for equality of the area under the fitted curve and the number of events on the histogram. Following the statisticians, we use the likelihood ratio test to construct a general χ² statistic, χ²_λ, which yields parameter and error estimates identical to those of the method of maximum likelihood. The χ²_λ statistic is further useful for testing goodness-of-fit since the value of its minimum asymptotically obeys a classical chi-square distribution. One should be aware, however, of the potential for statistical bias, especially when the number of events is small. (orig.)
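
    A minimal sketch of fitting a histogram by minimizing the Poisson likelihood statistic χ²_λ = 2Σ[yᵢ − nᵢ + nᵢ ln(nᵢ/yᵢ)] (model expectations yᵢ, observed counts nᵢ) is given below; the Gaussian-plus-background model and the data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def chi2_lambda(expected, observed):
    """Poisson likelihood chi-square: 2*sum(y - n + n*ln(n/y)),
    with the n*ln(n/y) term equal to 0 in empty bins."""
    ratio = np.where(observed > 0, observed / expected, 1.0)
    return 2.0 * np.sum(expected - observed + observed * np.log(ratio))

# Hypothetical histogram: flat background plus a Gaussian peak.
centers = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
counts = rng.poisson(5 + 20 * np.exp(-0.5 * ((centers - 5.0) / 0.8) ** 2))

def model(p):
    bkg, amp, mu, sig = p
    return bkg + amp * np.exp(-0.5 * ((centers - mu) / sig) ** 2)

res = minimize(lambda p: chi2_lambda(model(p), counts),
               x0=[4.0, 15.0, 5.0, 1.0], method="Nelder-Mead")
print(res.x)   # fitted parameters; res.fun plays the role of the chi-square
```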

  19. Construction of long-term isochronous stress-strain curves by a modeling of short-term creep curves for a Grade 9Cr-1Mo steel

    International Nuclear Information System (INIS)

    Kim, Woo-Gon; Yin, Song-Nan; Koo, Gyeong-Hoi

    2009-01-01

    This study dealt with the construction of long-term isochronous stress-strain curves (ISSC) by a modeling of short-term creep curves for a Grade 9Cr-1Mo steel (G91), which is a candidate material for structural applications in next-generation nuclear reactors as well as in fusion reactors. To do this, the tensile material data used in the inelastic constitutive equations were obtained by tensile tests at 550 °C. Creep curves were obtained by a series of creep tests at different stress levels from 300 MPa to 220 MPa at an identical controlled temperature of 550 °C. On the basis of these experimental data, the creep curves were characterized by Garofalo's creep model. The three parameters P₁, P₂ and P₃ in Garofalo's model were properly optimized by a nonlinear least squares fitting (NLSF) analysis. The stress dependency of the three parameters was found to be a linear relationship, but the P₃ parameter, representing the steady-state creep rate, exhibited a two-slope behavior with different stress exponents at a transient stress of about 250 MPa. The long-term creep curves of the G91 steel were modeled by Garofalo's model with only a few short-term creep data. Using the modeled creep curves, the long-term isochronous curves up to 10⁵ hours were successfully constructed. (author)
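
    Following the abstract's parameterization, a Garofalo-type creep curve ε(t) = P₁(1 − e^(−P₂t)) + P₃t, with P₃ the steady-state creep rate, can be fitted by NLSF in a few lines of Python; the times, strains and starting values below are illustrative, not the G91 data.

```python
import numpy as np
from scipy.optimize import curve_fit

def garofalo(t, p1, p2, p3):
    """Garofalo creep curve: eps(t) = P1*(1 - exp(-P2*t)) + P3*t,
    where P3 is the steady-state creep rate."""
    return p1 * (1.0 - np.exp(-p2 * t)) + p3 * t

# Hypothetical short-term creep data at one stress level (time in h, strain in %).
t = np.array([0, 50, 100, 200, 400, 800, 1200], dtype=float)
eps = np.array([0.0, 0.45, 0.70, 0.95, 1.25, 1.80, 2.35])

popt, _ = curve_fit(garofalo, t, eps, p0=[0.8, 0.02, 1e-3])
print("P1 = %.3f, P2 = %.4f 1/h, P3 = %.2e %%/h" % tuple(popt))
```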

  20. Flow characteristics of curved ducts

    Directory of Open Access Journals (Sweden)

    Rudolf P.

    2007-10-01

    Full Text Available Curved channels are very often present in real hydraulic systems, e.g. curved diffusers of hydraulic turbines, S-shaped bulb turbines, fittings, etc. Curvature brings a change of velocity profile, generation of vortices and production of hydraulic losses. Flow simulations using CFD techniques were performed to understand these phenomena. Cases ranging from a single elbow to coupled elbows in U, S and spatial right-angle configurations with circular cross-section were modeled for Re = 60000. Spatial development of the flow was studied, and it was deduced that minor losses are connected with the transformation of pressure energy into kinetic energy and vice versa. This transformation is a dissipative process and is reflected in the amount of energy irreversibly lost. The smallest loss coefficient is associated with flow in U-shape elbows, the largest with flow in S-shape elbows. Finally, the extent of the flow domain influenced by the presence of curvature was examined. This is important for proper placement of mano- and flowmeters during experimental tests. Simulations were verified against experimental results presented in the literature.

  1. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  2. Investigation of erosion behavior in different pipe-fitting using Eulerian-Lagrangian approach

    Science.gov (United States)

    Kulkarni, Harshwardhan; Khadamkar, Hrushikesh; Mathpati, Channamallikarjun

    2017-11-01

    Erosion is a wear mechanism of piping systems in which wall thinning occurs because of turbulent flow combined with the impact of solid particles on the pipe wall; the resulting pipe ruptures cause costly plant repairs and personal injuries. In this study a two-way coupled Eulerian-Lagrangian approach is used to solve the liquid-solid (water-ferrous suspension) flow in different pipe fittings, namely elbow, T-junction, reducer, orifice and 50% open gate valve. Simulations were carried out using an incompressible transient solver in OpenFOAM for different Reynolds numbers (10k, 25k, 50k), using the Wen-Yu drag model to find possible regions of higher erosion in the pipe fittings. The transient solver used is hybrid in nature, a combination of a Lagrangian library and pimpleFoam. Results obtained from the simulations show that the exit region of the elbow, especially the downstream straight section and the extrados of the bend, is most affected by erosion; the centrifugal force on solid particles at the bend affects the erosion behavior. In the case of the T-junction, erosion occurs below the locus of the projection of the branch pipe on the wall. For the reducer, orifice and gate valve, the reduction area as well as the downstream region is more affected by erosion because of the increase in velocities.

  3. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    Science.gov (United States)

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.

  4. The J-resistance curve Leak-before-Break test program on material for the Darlington Nuclear Generating Station

    International Nuclear Information System (INIS)

    Mukherjee, B.

    1988-01-01

    The Darlington Leak-Before-Break (DLBB) approach has been developed for large diameter (21, 22, 24 inch) SA106B heat transport (HT) piping and SA105 fittings as a design alternative to pipewhip restraints and in recognition of the questionable benefits of providing such restraints. Ontario Hydro's DLBB approach is based on the elastic-plastic fracture mechanics method. In this test program, J-resistance curves were determined from actual pipe heats that were used in the construction of the Darlington heat transport systems (Units 1 and 2). Test blocks were prepared using four different welding procedures for nuclear Class I piping. The test program was designed to take into account the effect of various factors, such as test temperature, crack plane orientation and welding effects, which influence fracture properties. A total of 91 tests were conducted. An acceptable lower-bound J-resistance curve for the piping steels was obtained by machining maximum-thickness specimens from the pipes and by testing side-grooved compact tension specimens. Test results showed that all pipes, welds and heat-affected zone materials within the scope of the DLBB program exhibited upper-shelf toughness behaviour. All specimens showed high crack initiation toughness J_Ic, a rising J-resistance curve, and stable and ductile crack extension. Toughness of product forms depended on the direction of crack extension (circumferential versus axial crack orientation). Toughness of DLBB welds and parent materials at 250 °C was lower than that at 20 °C. (author)

  5. Fit-For-Purpose Land Administration

    DEFF Research Database (Denmark)

    Enemark, Stig; McLaren, Robin

    2017-01-01

    This paper looks at implementing Fit-For-Purpose land administration solutions at country level. This will require a country specific strategy drawing from the recent GLTN publication on “Fit-For-Purpose Land Administration – Guiding Principles for Country Implementation”. The Fit...... administration; 4) Designing the country specific FFP spatial / legal / institutional frameworks; 5) Capacity development; 6) Country specific instruction manuals; and 7) Economic benefits analysis. Finally, the paper presents some experiences and reflections from a case study on implementing the FFP approach...

  6. Investigation of learning and experience curves

    Energy Technology Data Exchange (ETDEWEB)

    Krawiec, F.; Thornton, J.; Edesess, M.

    1980-04-01

    The applicability of learning and experience curves for predicting future costs of solar technologies is assessed, and the major test case is the production economics of heliostats. Alternative methods for estimating cost reductions in systems manufacture are discussed, and procedures for using learning and experience curves to predict costs are outlined. Because adequate production data often do not exist, production histories of analogous products/processes are analyzed and learning and aggregated cost curves for these surrogates estimated. If the surrogate learning curves apply, they can be used to estimate solar technology costs. The steps involved in generating these cost estimates are given. Second-generation glass-steel and inflated-bubble heliostat design concepts, developed by MDAC and GE, respectively, are described; a costing scenario for 25,000 units/yr is detailed; surrogates for cost analysis are chosen; learning and aggregate cost curves are estimated; and aggregate cost curves for the GE and MDAC designs are estimated. However, an approach that combines a neoclassical production function with a learning-by-doing hypothesis is needed to yield a cost relation compatible with the historical learning curve and the traditional cost function of economic theory.
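
    The classic experience curve underlying such analyses, C(n) = C₁·n^b with a progress ratio of 2^b per doubling of cumulative output, reduces to a linear fit in log-log space. The sketch below uses invented surrogate cost data.

```python
import numpy as np

def fit_experience_curve(cum_units, unit_cost):
    """Fit the experience curve C(n) = C1 * n**b by linear regression in
    log-log space; the progress ratio 2**b is the cost multiplier per
    doubling of cumulative production."""
    b, log_c1 = np.polyfit(np.log(cum_units), np.log(unit_cost), 1)
    return np.exp(log_c1), b, 2.0 ** b

# Hypothetical cost history of a surrogate product.
units = np.array([100, 200, 400, 800, 1600, 3200], dtype=float)
cost = np.array([1000, 840, 710, 600, 505, 425], dtype=float)

c1, b, progress_ratio = fit_experience_curve(units, cost)
print(f"C1 = {c1:.0f}, b = {b:.3f}, progress ratio = {progress_ratio:.2f}")
```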

  7. A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

    Science.gov (United States)

    Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna

    2018-04-01

    RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light curve and metallicities using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, hence the average magnitudes of RR Lyrae variables in the K S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J-band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2–0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
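
    A stripped-down version of the PCA-basis fitting step might look like the sketch below: light curves sampled on a common phase grid are decomposed into a mean plus a few principal components, and a new curve is fitted by least squares in that basis. The synthetic training curves, the number of components and the absence of the paper's robust outlier handling are all simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in training set: phase-folded light curves on a common grid.
rng = np.random.default_rng(1)
phases = np.linspace(0.0, 1.0, 100, endpoint=False)
training = np.array([np.sin(2 * np.pi * phases) + 0.3 * np.sin(4 * np.pi * phases + s)
                     for s in rng.normal(0.0, 0.5, 200)])

pca = PCA(n_components=4)   # the first few PCs serve as basis vectors
pca.fit(training)

def fit_light_curve(mags):
    """Least-squares fit of a light curve (on the same phase grid) in the PC
    basis; the smooth reconstruction yields a well-defined average magnitude."""
    basis = pca.components_.T                     # shape (n_phases, n_components)
    coeffs, *_ = np.linalg.lstsq(basis, mags - pca.mean_, rcond=None)
    return pca.mean_ + basis @ coeffs

noisy = training[0] + rng.normal(0.0, 0.1, phases.size)
smooth = fit_light_curve(noisy)
print(smooth.mean())   # average magnitude from the fitted shape
```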

  8. The mapping approach in the path integral formalism applied to curve-crossing systems

    International Nuclear Information System (INIS)

    Novikov, Alexey; Kleinekathoefer, Ulrich; Schreiber, Michael

    2004-01-01

    The path integral formalism in a combined phase-space and coherent-state representation is applied to the problem of curve-crossing dynamics. The system of interest is described by two coupled one-dimensional harmonic potential energy surfaces interacting with a heat bath consisting of harmonic oscillators. The mapping approach is used to rewrite the Lagrangian function of the electronic part of the system. Using the Feynman-Vernon influence-functional method the bath is eliminated whereas the non-Gaussian part of the path integral is treated using the generating functional for the electronic trajectories. The dynamics of a Gaussian wave packet is analyzed along a one-dimensional reaction coordinate within a perturbative treatment for a small coordinate shift between the potential energy surfaces

  9. Comparison of calibration curves of radiochromic films EBT3 and EBT2

    International Nuclear Information System (INIS)

    Parra Osorio, V.; Martin-Viera Cueto, J. A.; Galan Montenegro, P.; Benitez Villegas, E. M.; Casado Villalon, F. F.; Bodineau Gil, C.

    2013-01-01

    The aim is to compare the quality of the calibration fit for two batches of radiochromic film, one of model EBT2 and the other of model EBT3, using both experimental fits and a phenomenological expression, since the precision and accuracy of the absorbed dose estimate depend on the calibration curve. (Author)

  10. Effect of shrink fitting and cutting on iron loss of permanent magnet motor

    International Nuclear Information System (INIS)

    Takahashi, N.; Morimoto, H.; Yunoki, Y.; Miyagi, D.

    2008-01-01

    Magnetic properties of a motor core are affected by the distortion due to the compression caused by shrink fitting and the distortion caused by punching, etc. In this paper, the B-H curve and iron loss of the stator core of an actual motor under shrink fitting are measured. It is shown that the maximum permeability is reduced by about 50%, and the iron loss is increased by about 30%, due to the shrink fitting. It is illustrated that the loss of the motor is increased by about 10%, 4% and 2% due to the shrink fitting, the cutting stress and the eddy current in the rotor magnet, respectively

  11. Automated ligand fitting by core-fragment fitting and extension into density

    International Nuclear Information System (INIS)

    Terwilliger, Thomas C.; Klei, Herbert; Adams, Paul D.; Moriarty, Nigel W.; Cohn, Judith D.

    2006-01-01

    An automated ligand-fitting procedure has been developed and tested on 9327 ligands and (F_o − F_c)exp(iφ_c) difference density from macromolecular structures in the Protein Data Bank. A procedure for fitting of ligands to electron-density maps by first fitting a core fragment of the ligand to density and then extending the remainder of the ligand into density is presented. The approach was tested by fitting 9327 ligands over a wide range of resolutions (most are in the range 0.8-4.8 Å) from the Protein Data Bank (PDB) into (F_o − F_c)exp(iφ_c) difference density calculated using entries from the PDB without these ligands. The procedure was able to place 58% of these 9327 ligands within 2 Å (r.m.s.d.) of the coordinates of the atoms in the original PDB entry for that ligand. The success of the fitting procedure was relatively insensitive to the size of the ligand in the range 10–100 non-H atoms and was only moderately sensitive to resolution, with the percentage of ligands placed near the coordinates of the original PDB entry for fits in the range 58–73% over all resolution ranges tested

  12. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project.

    Science.gov (United States)

    Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif

    2017-01-01

    Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data of 32,555 patients who are free of any known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients have developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an Ensembling-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of the imbalance class of the constructed model was handled by Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model classifier was improved by the Ensemble machine learning approach using the Vote method with three Decision Trees (Naïve Bayes Tree, Random Forest, and Logistic Model Tree) and achieved high accuracy of prediction (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
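
    The SMOTE-plus-ensemble pipeline can be sketched with scikit-learn and the separate imbalanced-learn package as below. Note the substitutions: scikit-learn has no Naïve Bayes Tree or Logistic Model Tree, so Gaussian Naïve Bayes and logistic regression stand in for them, and the data is a synthetic imbalanced classification problem rather than the FIT registry.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in for the FIT data: an imbalanced binary problem with 13 features.
X, y = make_classification(n_samples=5000, n_features=13,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Soft-voting ensemble of three classifiers.
vote = VotingClassifier([("rf", RandomForestClassifier(random_state=0)),
                         ("nb", GaussianNB()),
                         ("lr", LogisticRegression(max_iter=1000))],
                        voting="soft")
vote.fit(X_res, y_res)
print("AUC:", roc_auc_score(y_te, vote.predict_proba(X_te)[:, 1]))
```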

  13. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project.

    Directory of Open Access Journals (Sweden)

    Manal Alghamdi

    Full Text Available Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data of 32,555 patients who are free of any known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients have developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an Ensembling-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of the imbalance class of the constructed model was handled by Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model classifier was improved by the Ensemble machine learning approach using the Vote method with three Decision Trees (Naïve Bayes Tree, Random Forest, and Logistic Model Tree) and achieved high accuracy of prediction (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.

  14. Short-term corneal changes with gas-permeable contact lens wear in keratoconus subjects: a comparison of two fitting approaches.

    Science.gov (United States)

    Romero-Jiménez, Miguel; Santodomingo-Rubido, Jacinto; Flores-Rodríguez, Patricia; González-Méijome, Jose-Manuel

    2015-01-01

    To evaluate changes in anterior corneal topography and higher-order aberrations (HOA) after 14 days of rigid gas-permeable (RGP) contact lens (CL) wear in keratoconus subjects, comparing two different fitting approaches. Thirty-one keratoconus subjects (50 eyes) without previous history of CL wear were recruited for the study. Subjects were randomly fitted with either an apical-touch or a three-point-touch fitting approach. The lens back optic zone radius (BOZR) was 0.4 mm and 0.1 mm flatter than the first definite apical clearance lens, respectively. Differences between baseline and post-CL wear in the steepest, flattest and average corneal power (ACP) readings, central corneal astigmatism (CCA), maximum tangential curvature (KTag), anterior corneal surface asphericity, anterior corneal surface HOA and thinnest corneal thickness measured with Pentacam were compared. A statistically significant flattening was found over time in the flattest and steepest simulated keratometry and ACP in the apical-touch group (all p<0.01). A statistically significant reduction in KTag was found in both groups after contact lens wear (all p<0.05). Significant reductions were found over time in CCA (p=0.001) and anterior corneal asphericity in both groups (p<0.001). Thickness at the thinnest corneal point increased significantly after CL wear (p<0.0001). Coma-like and total HOA root mean square (RMS) error were significantly reduced following CL wear in both fitting approaches (all p<0.05). Short-term rigid gas-permeable CL wear flattens the anterior cornea, increases the thinnest corneal thickness and reduces anterior surface HOA in keratoconus subjects. Apical-touch fitting was associated with greater corneal flattening than three-point-touch lens wear.

  15. Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: A discussion and proposal for an alternative approach

    International Nuclear Information System (INIS)

    Halligan, Steve; Altman, Douglas G.; Mallett, Susan

    2015-01-01

    The objectives are to describe the disadvantages of the area under the receiver operating characteristic curve (ROC AUC) to measure diagnostic test performance and to propose an alternative based on net benefit. We use a narrative review supplemented by data from a study of computer-assisted detection for CT colonography. We identified problems with ROC AUC. Confidence scoring by readers was highly non-normal, and score distribution was bimodal. Consequently, ROC curves were highly extrapolated with AUC mostly dependent on areas without patient data. AUC depended on the method used for curve fitting. ROC AUC does not account for prevalence or different misclassification costs arising from false-negative and false-positive diagnoses. Change in ROC AUC has little direct clinical meaning for clinicians. An alternative analysis based on net benefit is proposed, based on the change in sensitivity and specificity at clinically relevant thresholds. Net benefit incorporates estimates of prevalence and misclassification costs, and it is clinically interpretable since it reflects changes in correct and incorrect diagnoses when a new diagnostic test is introduced. ROC AUC is most useful in the early stages of test assessment whereas methods based on net benefit are more useful to assess radiological tests where the clinical context is known. Net benefit is more useful for assessing clinical impact. (orig.)
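
    A small sketch of a net-benefit calculation is given below, using the standard decision-analytic form in which the probability threshold p_t encodes the relative misclassification costs; the operating points and prevalence are invented, and the authors' exact formulation may differ in detail.

```python
def net_benefit(sensitivity, specificity, prevalence, threshold):
    """Net benefit of a test at probability threshold p_t:
    NB = sens*prev - (1 - spec)*(1 - prev) * p_t/(1 - p_t).
    The odds factor p_t/(1 - p_t) encodes the cost of a false positive
    relative to the benefit of a true positive."""
    w = threshold / (1.0 - threshold)
    return sensitivity * prevalence - (1.0 - specificity) * (1.0 - prevalence) * w

# Hypothetical comparison of the same imaging test read without and with CAD.
nb_without = net_benefit(0.70, 0.90, prevalence=0.30, threshold=0.10)
nb_with = net_benefit(0.80, 0.86, prevalence=0.30, threshold=0.10)
print(nb_without, nb_with)  # an increase indicates clinical usefulness
```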

  16. Reflection curves—new computation and rendering techniques

    Directory of Open Access Journals (Sweden)

    Dan-Eugen Ulmet

    2004-05-01

    Full Text Available Reflection curves on surfaces are important tools for free-form surface interrogation. They are essential for industrial 3D CAD/CAM systems and for rendering purposes. In this note, new approaches regarding the computation and rendering of reflection curves on surfaces are introduced. These approaches are designed to take advantage of the graphics libraries of recent releases of commercial systems such as the OpenInventor toolkit (developed by Silicon Graphics) or Matlab (developed by The MathWorks). A new relation between reflection curves and contour curves is derived; this theoretical result is used for a straightforward Matlab implementation of reflection curves. A new type of reflection curve is also generated using the OpenInventor texture and environment mapping implementations. This allows the computation, rendering, and animation of reflection curves at interactive rates, which makes it particularly useful for industrial applications.

  17. Fit-For-Purpose Land Administration

    DEFF Research Database (Denmark)

    Enemark, Stig

    2015-01-01

    The term “Fit-For-Purpose Land Administration” indicates that the approach used for building land administration systems in less developed countries should be flexible and focused on serving the purpose of the systems (such as providing security of tenure and control of land use) rather than...... focusing on top-end technical solutions and high accuracy surveys. Of course, such flexibility allows for land administration systems to be incrementally improved over time. This paper unfolds the Fit-For-Purpose concept by analyzing the three core components: The spatial framework (large scale land parcel...... mapping) should be provided using affordable modern technologies such as aerial imageries rather than field surveys. The legal framework must support both legal and social tenure, and the regulations must be designed along administrative rather than judicial lines. The fit-for-purpose approach must...

  18. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
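
    For reference, selecting an operating threshold by the Youden index (J = sensitivity + specificity − 1, i.e. TPR − FPR) is a short computation on top of a fitted classifier; the sketch below assumes scikit-learn style score outputs.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_score):
    """Return the score threshold maximizing the Youden index J = TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr
    best = int(np.argmax(j))
    return thresholds[best], j[best]
```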

  19. Supply curve bidding of electricity in constrained power networks

    Energy Technology Data Exchange (ETDEWEB)

    Al-Agtash, Salem Y. [Hijjawi Faculty of Engineering; Yarmouk University; Irbid 21163 (Jordan)

    2010-07-15

    This paper presents a Supply Curve Bidding (SCB) approach that complies with the notion of the Standard Market Design (SMD) in electricity markets. The approach considers the demand-side option and Locational Marginal Pricing (LMP) clearing. It iteratively alters Supply Function Equilibria (SFE) model solutions, then choosing the best bid based on market-clearing LMP and network conditions. It has been argued that SCB better benefits suppliers compared to fixed quantity-price bids. It provides more flexibility and better opportunity to achieving profitable outcomes over a range of demands. In addition, SCB fits two important criteria: simplifies evaluating electricity derivatives and captures smooth marginal cost characteristics that reflect actual production costs. The simultaneous inclusion of physical unit constraints and transmission security constraints will assure a feasible solution. An IEEE 24-bus system is used to illustrate perturbations of SCB in constrained power networks within the framework of SDM. By searching in the neighborhood of SFE model solutions, suppliers can obtain their best bid offers based on market-clearing LMP and network conditions. In this case, electricity producers can derive their best offering strategy both in the power exchange and the long-term contractual markets within a profitable, yet secure, electricity market. (author)

  20. Supply curve bidding of electricity in constrained power networks

    International Nuclear Information System (INIS)

    Al-Agtash, Salem Y.

    2010-01-01

    This paper presents a Supply Curve Bidding (SCB) approach that complies with the notion of the Standard Market Design (SMD) in electricity markets. The approach considers the demand-side option and Locational Marginal Pricing (LMP) clearing. It iteratively alters Supply Function Equilibria (SFE) model solutions, then choosing the best bid based on market-clearing LMP and network conditions. It has been argued that SCB better benefits suppliers compared to fixed quantity-price bids. It provides more flexibility and better opportunity to achieving profitable outcomes over a range of demands. In addition, SCB fits two important criteria: simplifies evaluating electricity derivatives and captures smooth marginal cost characteristics that reflect actual production costs. The simultaneous inclusion of physical unit constraints and transmission security constraints will assure a feasible solution. An IEEE 24-bus system is used to illustrate perturbations of SCB in constrained power networks within the framework of SDM. By searching in the neighborhood of SFE model solutions, suppliers can obtain their best bid offers based on market-clearing LMP and network conditions. In this case, electricity producers can derive their best offering strategy both in the power exchange and the long-term contractual markets within a profitable, yet secure, electricity market. (author)

  1. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications

    Science.gov (United States)

    W. Hasan, W. Z.

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system’s modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model. PMID:29351554

  2. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Science.gov (United States)

    Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S

    2018-01-01

    The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.

  3. Field data-based mathematical modeling by Bode equations and vector fitting algorithm for renewable energy applications.

    Directory of Open Access Journals (Sweden)

    A H Sabry

    Full Text Available The power system always has several variations in its profile due to random load changes or environmental effects such as device switching effects when generating further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology presented as a parametric technique to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful feature of this method is the ability to model irregular or randomly shaped data and to be applied to any algorithm that estimates models using frequency-domain data to provide a state-space or transfer function for the model.

  4. Linking occurrence and fitness to persistence: Habitat-based approach for endangered Greater Sage-Grouse

    Science.gov (United States)

    Aldridge, Cameron L.; Boyce, Mark S.

    2007-01-01

    Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Secondly, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel
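
    Schematically, the two-stage approach pairs an occurrence model with a survival (risk) model and intersects the two predicted surfaces. The sketch below, written with scikit-learn and the lifelines package on entirely synthetic placeholder data, shows only the structure of such an analysis; the covariates, models and thresholds are not the authors'.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical site table: habitat covariates, a used/available flag for nests,
# and nest exposure time plus a failure indicator for the survival stage.
df = pd.DataFrame({
    "sage_cover": rng.uniform(0, 60, n),     # % sagebrush cover
    "edge_dist": rng.uniform(0, 5, n),       # km to anthropogenic edge
    "exposure_days": rng.uniform(5, 28, n),
    "failed": rng.integers(0, 2, n),
    "used": rng.integers(0, 2, n),
})
df["sage_cover_sq"] = df["sage_cover"] ** 2  # quadratic selection term

# Stage 1: occurrence (relative probability of use).
X = df[["sage_cover", "sage_cover_sq", "edge_dist"]]
df["p_use"] = LogisticRegression(max_iter=1000).fit(X, df["used"]).predict_proba(X)[:, 1]

# Stage 2: Cox proportional hazards model of nest failure risk.
cph = CoxPHFitter()
cph.fit(df[["sage_cover", "edge_dist", "exposure_days", "failed"]],
        duration_col="exposure_days", event_col="failed")
df["risk"] = cph.predict_partial_hazard(df)

# Combine the surfaces: attractive and safe = source habitat;
# attractive but risky = potential ecological trap.
high_use = df["p_use"] > df["p_use"].quantile(0.75)
df["source"] = high_use & (df["risk"] < df["risk"].median())
df["trap"] = high_use & (df["risk"] > df["risk"].quantile(0.75))
```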

  5. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

    We study Lagrangian points on smooth holomorphic curves in TP^1 equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP^1 with the space L(E^3) of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E^3, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E^3, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E^3 where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  6. Applicability of the θ projection method to creep curves of Ni-22Cr-18Fe-9Mo alloy

    International Nuclear Information System (INIS)

    Kurata, Yuji; Utsumi, Hirokazu

    1998-01-01

    Applicability of the θ projection method has been examined for constant-load creep test results at 800 and 1000 °C on Ni-22Cr-18Fe-9Mo alloy in the solution-treated and aged conditions. The results obtained are as follows: (1) Normal-type creep curves obtained at 1000 °C for the aged Ni-22Cr-18Fe-9Mo alloy are fitted using the θ projection method with four θ parameters, and the stress dependence of the θ parameters can be expressed in terms of simple equations. (2) The θ projection method with four θ parameters cannot be applied to the remaining creep curves, where most of the life is occupied by the tertiary creep stage. Therefore, the θ projection method consisting of only the tertiary creep component, with two θ parameters, was applied; the creep curves can be fitted using this method. (3) If the θ projection method with four or two θ parameters is applied to creep curves in accordance with the creep curve shape, creep rupture time can be predicted by formulating the stress and/or temperature dependence of the θ parameters. (author)
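
    A minimal sketch of the four-parameter θ projection fit follows, using scipy on synthetic strain data (the alloy test data are not reproduced here; parameter values are invented).

```python
# Fit the theta projection model  eps(t) = th1*(1 - exp(-th2*t)) + th3*(exp(th4*t) - 1)
# (primary + tertiary creep components) to a creep curve.
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, th1, th2, th3, th4):
    return th1 * (1.0 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1.0)

t = np.linspace(0.0, 1000.0, 120)                  # time, hours (illustrative)
rng = np.random.default_rng(1)
strain = theta_projection(t, 0.02, 0.01, 0.004, 0.0035) \
         + rng.normal(0.0, 2e-4, t.size)           # synthetic creep curve

# Positive bounds keep both exponential terms physically sensible.
popt, pcov = curve_fit(theta_projection, t, strain,
                       p0=(0.01, 0.005, 0.001, 0.001),
                       bounds=(0.0, np.inf), maxfev=20000)
print(popt)                                        # ~(0.02, 0.01, 0.004, 0.0035)
```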

  7. IDF-curves for precipitation In Belgium

    International Nuclear Information System (INIS)

    Mohymont, Bernard; Demarde, Gaston R.

    2004-01-01

    The Intensity-Duration-Frequency (IDF) curves for precipitation constitute a relationship between the intensity, the duration and the frequency of rainfall amounts. The intensity of precipitation is expressed in mm/h, the duration or aggregation time is the length of the interval considered, while the frequency stands for the probability of occurrence of the event. IDF-curves constitute a classical and useful tool that is primarily used to dimension hydraulic structures in general, e.g., sewer systems, and which is consequently used to assess the risk of inundation. In this presentation, the IDF relation for precipitation is studied for different locations in Belgium. These locations correspond to two long-term, high-quality precipitation networks of the RMIB: (a) the daily precipitation depths of the climatological network (more than 200 stations, 1951-2001 baseline period); (b) the high-frequency 10-minute precipitation depths of the hydrometeorological network (more than 30 stations, 15 to 33 years baseline period). For the station of Uccle, an uninterrupted time-series of more than one hundred years of 10-minute rainfall data is available. The proposed technique for assessing the curves is based on maximum annual values of precipitation. A new analytical formula for the IDF-curves was developed such that these curves stay valid for aggregation times ranging from 10 minutes to 30 days (when fitted with appropriate data). Moreover, all parameters of this formula have physical dimensions. Finally, adequate spatial interpolation techniques are used to provide nationwide extreme-value precipitation depths for short to long durations with a given return period. These values are estimated on the grid points of the Belgian ALADIN-domain used in the operational weather forecasts at the RMIB.(Author)
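
    The usual ingredient behind such curves can be sketched as follows: fit the annual-maximum series for each aggregation time with an extreme-value distribution (a Gumbel fit is assumed here as one common choice, not the paper's new formula) and read off the intensity for a chosen return period. The data are synthetic placeholders.

```python
# IDF building block: Gumbel fit of annual maxima per duration, then the
# depth (and intensity) for a given return period T from the quantile 1 - 1/T.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
durations_min = [10, 60, 1440]                 # aggregation times, minutes
return_period = 20.0                           # years
for d in durations_min:
    annual_max_mm = gumbel_r.rvs(loc=20 + d**0.3, scale=5, size=50,
                                 random_state=rng)   # 50 years of synthetic maxima
    loc, scale = gumbel_r.fit(annual_max_mm)
    depth = gumbel_r.ppf(1.0 - 1.0 / return_period, loc, scale)
    print(f"{d:5d} min: {60 * depth / d:6.1f} mm/h for T = {return_period} yr")
```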

  8. A Practical Anodic and Cathodic Curve Intersection Model to Understand Multiple Corrosion Potentials of Fe-Based Glassy Alloys in OH- Contained Solutions.

    Science.gov (United States)

    Li, Y J; Wang, Y G; An, B; Xu, H; Liu, Y; Zhang, L C; Ma, H Y; Wang, W M

    2016-01-01

    A practical anodic and cathodic curve intersection model, consisting of an apparent anodic curve and an imaginary cathodic line, was proposed to explain the multiple corrosion potentials occurring in potentiodynamic polarization curves of Fe-based glassy alloys in alkaline solution. The apparent anodic curve was selected from the measured anodic curves. The imaginary cathodic line was obtained by linearly fitting the differences of the anodic curves, and can be translated or rotated to predict the number and values of the corrosion potentials.
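
    A toy numerical illustration of the intersection idea follows; the curve shapes are invented, and crossings of the anodic curve with an adjustable straight cathodic line stand in for the predicted corrosion potentials.

```python
# Corrosion potentials as sign changes of (anodic curve - cathodic line).
import numpy as np

E = np.linspace(-1.2, 0.2, 2000)                      # potential, V
log_i_anodic = 0.5 * np.sin(6.0 * E) - 2.0 + 1.5 * E  # invented apparent anodic curve

def cathodic_line(E, slope, offset):                  # imaginary cathodic line
    return slope * E + offset

diff = log_i_anodic - cathodic_line(E, -1.8, -3.2)
crossings = E[np.where(np.diff(np.sign(diff)) != 0)[0]]
print("predicted corrosion potentials (V):", np.round(crossings, 3))
# Shifting `offset` (translation) or `slope` (rotation) changes how many
# intersections, i.e., corrosion potentials, the model predicts.
```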

  9. Self-Interacting Dark Matter Can Explain Diverse Galactic Rotation Curves.

    Science.gov (United States)

    Kamada, Ayuki; Kaplinghat, Manoj; Pace, Andrew B; Yu, Hai-Bo

    2017-09-15

    The rotation curves of spiral galaxies exhibit a diversity that has been difficult to understand in the cold dark matter (CDM) paradigm. We show that the self-interacting dark matter (SIDM) model provides excellent fits to the rotation curves of a sample of galaxies with asymptotic velocities in the 25-300  km/s range that exemplify the full range of diversity. We assume only the halo concentration-mass relation predicted by the CDM model and a fixed value of the self-interaction cross section. In dark-matter-dominated galaxies, thermalization due to self-interactions creates large cores and reduces dark matter densities. In contrast, thermalization leads to denser and smaller cores in more luminous galaxies and naturally explains the flatness of rotation curves of the highly luminous galaxies at small radii. Our results demonstrate that the impact of the baryons on the SIDM halo profile and the scatter from the assembly history of halos as encoded in the concentration-mass relation can explain the diverse rotation curves of spiral galaxies.

  10. Fit of second order thermoluminescence glow peaks using the logistic distribution function

    International Nuclear Information System (INIS)

    Pagonis, V.; Kitis, G.

    2001-01-01

    A new thermoluminescence glow curve deconvolution (GCD) function is introduced which accurately describes second order thermoluminescence (TL) curves. The logistic asymmetric (LA) statistical probability function is used, with the function variables being the maximum peak intensity (I_m), the temperature of the maximum peak intensity (T_m) and the LA width parameter a_2. An analytical expression is derived from which the activation energy E can be calculated as a function of T_m and the LA width parameter a_2 with an accuracy of 2% or better. The accuracy of the fit was tested for E values ranging from 0.7 to 2.5 eV, for s values between 10^5 and 10^25 s^-1, and for trap occupation numbers n_0/N between 1 and 10^-6. The goodness of fit of the logistic asymmetric function is described by the Figure of Merit (FOM), which is found to be of the order of 10^-2. Preliminary results show that the GCD described here can easily be extended to the description of general order TL glow curves by varying the asymmetry parameter of the logistic asymmetric function. It is concluded that the TL kinetic analysis of first, second and general order TL glow curves can be performed with high accuracy and speed by using commercially available statistical packages that incorporate the Weibull and logistic asymmetric functions. (author)
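
    A sketch of such a glow-peak fit is given below, using one standard parameterization of a logistic asymmetric (generalized logistic) peak normalized so that the maximum I_m occurs at T_m; the paper's exact variable conventions may differ, and the data are synthetic.

```python
# Glow-curve deconvolution sketch with a logistic asymmetric peak.
import numpy as np
from scipy.optimize import curve_fit

def logistic_asym(T, Im, Tm, w, a):
    # Peak value Im at T = Tm; `a` controls the asymmetry, `w` the width.
    u = (T - Tm) / w + np.log(a)
    norm = (1.0 + a) ** (a + 1.0) / a ** a
    return Im * norm * np.exp(-u) * (1.0 + np.exp(-u)) ** (-(a + 1.0))

T = np.linspace(300.0, 550.0, 400)                    # temperature, K
rng = np.random.default_rng(3)
data = logistic_asym(T, 1.0, 430.0, 18.0, 2.0) + rng.normal(0, 0.01, T.size)

popt, _ = curve_fit(logistic_asym, T, data, p0=(0.9, 425.0, 15.0, 1.5),
                    bounds=([0, 300, 5, 0.1], [10, 550, 100, 10]))
fom = np.sum(np.abs(data - logistic_asym(T, *popt))) / np.sum(data)
print(popt, f"FOM = {fom:.4f}")                       # FOM of order 1e-2
```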

  11. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
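
    As a simplified sketch of one of the seven candidates, Wood's incomplete-gamma model can be fitted to test-day records with scipy; the data below are synthetic, and the random effects of the paper's SAS PROC NLMIXED analysis are omitted.

```python
# Wood lactation model  y(t) = a * t^b * exp(-c*t)  fitted to test-day FPR data.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

dim = np.arange(5.0, 305.0, 30.0)              # days in milk at test days
rng = np.random.default_rng(7)
fpr = wood(dim, 1.4, -0.05, -0.0004) + rng.normal(0, 0.02, dim.size)

popt, _ = curve_fit(wood, dim, fpr, p0=(1.0, -0.1, -0.001))
a, b, c = popt
t_extremum = b / c                             # day at which the curve turns
print(popt, t_extremum)                        # here a minimum, as for FPR curves
```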

  12. ROC generated thresholds for field-assessed aerobic fitness related to body size and cardiometabolic risk in schoolchildren.

    Directory of Open Access Journals (Sweden)

    Lynne M Boddy

    Full Text Available OBJECTIVES: 1. To investigate whether 20 m multi-stage shuttle run performance (20mSRT), an indirect measure of aerobic fitness, could discriminate between healthy and overweight status in 9-10.9 yr old schoolchildren using Receiver Operating Characteristic (ROC) analysis; 2. To investigate whether cardiometabolic risk differed by aerobic fitness group by applying the ROC cut point to a second, cross-sectional cohort. DESIGN: Analysis of cross-sectional data. PARTICIPANTS: 16,619 9-10.9 year old participants from the SportsLinx project and 300 11-13.9 year old participants from the Welsh Schools Health and Fitness Study. OUTCOME MEASURES: SportsLinx: 20mSRT, body mass index (BMI), waist circumference, subscapular and suprailiac skinfold thicknesses. Welsh Schools Health and Fitness Study: 20mSRT performance, waist circumference, and clustered cardiometabolic risk. ANALYSES: Three ROC curve analyses were completed, each using 20mSRT performance: ROC curve 1 was related to BMI, curve 2 to waist circumference and curve 3 to skinfolds (estimated % body fat). These were repeated for both girls and boys. The mean of the three aerobic fitness thresholds was retained for analysis. The thresholds were subsequently applied to clustered cardiometabolic risk data from the Welsh Schools study to assess whether risk differed by aerobic fitness group. RESULTS: The diagnostic accuracy of the ROC-generated thresholds was higher than would be expected by chance (all models AUC >0.7). The mean thresholds were 33 and 25 shuttles for boys and girls respectively. Participants classified as 'fit' had significantly lower cardiometabolic risk scores in comparison to those classed as unfit (p<0.001). CONCLUSION: The use of the ROC-generated cut points by health professionals, teachers and coaches may provide the opportunity to apply population level 'risk identification and stratification' processes and plan for "at-risk" children to be referred onto intervention
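
    A hedged sketch of deriving such a cut point follows, on synthetic data; Youden's J is assumed here as the threshold criterion (the paper averaged thresholds from three adiposity outcomes instead).

```python
# ROC-derived fitness cut point: which shuttle count best separates
# healthy-weight from overweight children?
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
n = 1000
overweight = rng.integers(0, 2, n)                    # 1 = overweight (by BMI)
shuttles = np.where(overweight, rng.normal(24, 8, n), rng.normal(35, 9, n))
shuttles = np.clip(shuttles, 1, None)

# Higher shuttle counts indicate fitness, so score = shuttles, positive = healthy.
fpr, tpr, thresholds = roc_curve(1 - overweight, shuttles)
j = tpr - fpr                                         # Youden's J statistic
cut = thresholds[np.argmax(j)]
print(f"AUC = {roc_auc_score(1 - overweight, shuttles):.2f}, "
      f"cut point ~ {cut:.0f} shuttles")
```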

  13. The Melting Curve and Premelting of MgO

    OpenAIRE

    Cohen, R. E.; Weitz, J. S.

    1996-01-01

    The melting curve for MgO was obtained using molecular dynamics and a non-empirical, many-body potential. We also studied premelting effects by computing the dynamical structure factor in the crystal on approach to melting. The melting curve simulations were performed with periodic boundary conditions with cells up to 512 atoms using the ab-initio Variational Induced Breathing (VIB) model. The melting curve was obtained by computing ΔH_m and ΔV_m and integrating the Clapeyron...

  14. dftools: Distribution function fitting

    Science.gov (United States)

    Obreschkow, Danail

    2018-05-01

    dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

  15. A dose-surviving fraction curve for mouse colonic mucosa

    International Nuclear Information System (INIS)

    Tucker, S.L.; Thames, H.D. Jr.; Withers, H.R.; Mason, K.A.

    1983-01-01

    A dose-surviving fraction curve representing the response of the mouse colonic mucosa to single doses of 137Cs gamma radiation was obtained from the results of a multifraction in vivo colony assay. Construction of the curve required an estimate of the average number of clonogens initially present per colonic crypt. The estimated clonogen count (88) was determined by a statistical method based on the use of doses per fraction common to different fractionation protocols. Parameters for the LQ and TC models of cell survival were obtained by weighted least-squares fits to the data. A comparison of the survival characteristics of cells from the mouse colonic and jejunal crypts suggested that the epithelium of the colon is less radiosensitive than that of the jejunum. (author)
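
    A minimal sketch of the weighted least-squares LQ fit follows; the surviving-fraction numbers and error model are synthetic stand-ins, and the TC model is not shown.

```python
# Weighted fit of the linear-quadratic model  S(D) = exp(-alpha*D - beta*D^2).
import numpy as np
from scipy.optimize import curve_fit

def lq(D, alpha, beta):
    return np.exp(-alpha * D - beta * D**2)

dose = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])          # Gy
sf = np.array([0.60, 0.28, 0.10, 0.028, 6e-3, 1e-3])       # surviving fraction
sf_err = 0.15 * sf                                          # ~15% relative errors

popt, pcov = curve_fit(lq, dose, sf, p0=(0.2, 0.02),
                       sigma=sf_err, absolute_sigma=True)
alpha, beta = popt
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, "
      f"alpha/beta = {alpha/beta:.1f} Gy")
```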

  16. INFOS: spectrum fitting software for NMR analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Albert A., E-mail: alsi@nmr.phys.chem.ethz.ch [ETH Zürich, Physical Chemistry (Switzerland)

    2017-02-15

    Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.
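
    The Monte-Carlo error estimate mentioned above follows a generic recipe that can be sketched as below: refit many noise-perturbed copies of the spectrum and take the spread of the fitted parameters. This is a Python illustration of the idea, not the INFOS (MATLAB) source code, and a single Lorentzian stands in for the Fourier-transformed lineshapes.

```python
# Monte-Carlo uncertainty on fitted peak parameters via synthetic replicates.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, f0, w):
    return A * (w / 2)**2 / ((f - f0)**2 + (w / 2)**2)

f = np.linspace(-50.0, 50.0, 400)                     # frequency axis, Hz
rng = np.random.default_rng(0)
spec = lorentzian(f, 1.0, 3.0, 8.0) + rng.normal(0, 0.02, f.size)

popt, _ = curve_fit(lorentzian, f, spec, p0=(0.8, 0.0, 5.0))
resid_sd = np.std(spec - lorentzian(f, *popt))        # noise level from residuals

samples = []
for _ in range(200):                                  # Monte-Carlo replicates
    fake = lorentzian(f, *popt) + rng.normal(0, resid_sd, f.size)
    p, _ = curve_fit(lorentzian, f, fake, p0=popt)
    samples.append(p)
print("parameter sigmas:", np.std(samples, axis=0))   # spread = error estimate
```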

  17. Alternative Forms of Fit in Contingency Theory.

    Science.gov (United States)

    Drazin, Robert; Van de Ven, Andrew H.

    1985-01-01

    This paper examines the selection, interaction, and systems approaches to fit in structural contingency theory. The concepts of fit evaluated may be applied not only to structural contingency theory but to contingency theories in general. (MD)

  18. Cosmological applications of algebraic quantum field theory in curved spacetimes

    CERN Document Server

    Hack, Thomas-Paul

    2016-01-01

    This book provides a largely self-contained and broadly accessible exposition on two cosmological applications of algebraic quantum field theory (QFT) in curved spacetime: a fundamental analysis of the cosmological evolution according to the Standard Model of Cosmology; and a fundamental study of the perturbations in inflation. The two central sections of the book dealing with these applications are preceded by sections providing a pedagogical introduction to the subject. Introductory material on the construction of linear QFTs on general curved spacetimes with and without gauge symmetry in the algebraic approach, physically meaningful quantum states on general curved spacetimes, and the backreaction of quantum fields in curved spacetimes via the semiclassical Einstein equation is also given. The reader should have a basic understanding of General Relativity and QFT on Minkowski spacetime, but no background in QFT on curved spacetimes or the algebraic approach to QFT is required.

  19. Mono-Exponential Fitting in T2-Relaxometry: Relevance of Offset and First Echo.

    Directory of Open Access Journals (Sweden)

    David Milford

    Full Text Available T2 relaxometry has become an important tool in quantitative MRI. Little focus has been put on the effect of the refocusing flip angle upon the offset parameter, which was introduced to account for a signal floor due to noise or to long T2 components. The aim of this study was to show that B1 imperfections contribute significantly to the offset. We further introduce a simple method to reduce the systematic error in T2 by discarding the first echo and using the offset fitting approach. Signal curves of T2 relaxometry were simulated based on extended phase graph theory and evaluated for 4 different methods (inclusion and exclusion of the first echo, each fitted with and without the offset). We further performed T2 relaxometry in a phantom on a 9.4 T magnetic resonance imaging scanner and used the same methods for post-processing as for the extended phase graph simulated data. Single spin echo sequences were used to determine the correct T2 time. The simulation data showed that the systematic error in T2 and the offset depend on the refocusing pulse, the echo spacing and the echo train length. The systematic error could be reduced by discarding the first echo, and further reduction of the systematic T2 error was reached by using the offset as a fitting parameter. The phantom experiments confirmed these findings. The fitted offset parameter in T2 relaxometry is influenced by imperfect refocusing pulses. Using the offset as a fitting parameter and discarding the first echo is a fast and easy method to minimize the error in T2, particularly for low to intermediate echo train lengths.
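
    The recommended recipe can be sketched as follows: drop the first echo and fit a mono-exponential with an offset. The echo-train data below are illustrative; a full treatment would simulate the train with extended phase graphs, which is omitted here.

```python
# Fit  S(TE) = A*exp(-TE/T2) + offset  to a multi-echo train, discarding echo 1.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp_offset(te, A, T2, off):
    return A * np.exp(-te / T2) + off

te = np.arange(10.0, 330.0, 10.0)                     # echo times, ms
rng = np.random.default_rng(5)
signal = 100.0 * np.exp(-te / 80.0) + 4.0 + rng.normal(0, 0.5, te.size)
signal[0] *= 0.92          # imperfect refocusing typically depresses echo 1

popt, _ = curve_fit(mono_exp_offset, te[1:], signal[1:],   # first echo dropped
                    p0=(90.0, 60.0, 0.0))
print(f"T2 = {popt[1]:.1f} ms, offset = {popt[2]:.2f}")
```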

  20. Object-Image Correspondence for Algebraic Curves under Projections

    Directory of Open Access Journals (Sweden)

    Joseph M. Burdis

    2013-03-01

    Full Text Available We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of a number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of signature construction that has been used to solve the equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  1. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves, acquired at rest and during exercise with a nuclear stethoscope, were evaluated: three- and five-point time smoothing, Fourier filtering preserving one to four harmonics (H), truncated-curve Fourier filtering, and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.
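
    The best-performing option, harmonic-limited Fourier filtering, can be sketched in a few lines: keep only the mean and the first two or three harmonics of the periodic curve. The left-ventricular curve below is synthetic, and the ejection-fraction proxy is only illustrative.

```python
# Low-harmonic Fourier filter for a periodic time-activity curve.
import numpy as np

def fourier_filter(curve, n_harmonics=3):
    """Zero all FFT coefficients above `n_harmonics` and invert."""
    spec = np.fft.rfft(curve)
    spec[n_harmonics + 1:] = 0.0          # keep DC + first n harmonics
    return np.fft.irfft(spec, n=curve.size)

t = np.linspace(0.0, 1.0, 64, endpoint=False)          # one cardiac cycle
rng = np.random.default_rng(2)
lv_counts = 100 - 35 * np.sin(np.pi * t)**2 + rng.normal(0, 3, t.size)

smooth = fourier_filter(lv_counts, n_harmonics=2)
ef = (smooth.max() - smooth.min()) / smooth.max()      # ejection-fraction proxy
print(f"EF estimate: {ef:.2f}")
```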

  3. Flat rotation curves using scalar-tensor theories

    Energy Technology Data Exchange (ETDEWEB)

    Cervantes-Cota, Jorge L [Depto de Fisica, Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 D.F. (Mexico); Rodríguez-Meza, M A [Depto de Fisica, Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 D.F. (Mexico); Nunez, Dario [Instituto de Ciencias Nucleares, Universidad Nacional Autonoma de Mexico, A.P. 70-543, 04510 D.F. (Mexico)

    2007-11-15

    We computed flat rotation curves from scalar-tensor theories in their weak field limit. Our model, by construction, fits a flat rotation profile for the velocities of stars. As a result, the form of the scalar field potential and the DM distribution in a galaxy are determined. By taking into account the constraints on the fundamental parameters of the theory (λ, α), it is possible to obtain analytical results for the density profiles. For positive and negative values of α, the DM profile is as cuspy as NFW's.

  4. Curves and surfaces for CAGD a practical guide

    CERN Document Server

    Farin, Gerald

    2002-01-01

    This fifth edition has been fully updated to cover the many advances made in CAGD and curve and surface theory since 1997, when the fourth edition appeared. Material has been restructured into theory and applications chapters. The theory material has been streamlined using the blossoming approach; the applications material includes least squares techniques in addition to the traditional interpolation methods. In all other respects, it is, thankfully, the same. This means you get the informal, friendly style and unique approach that has made Curves and Surfaces for CAGD: A Practical Gui

  5. TWO METHODS OF ESTIMATING SEMIPARAMETRIC COMPONENT IN THE ENVIRONMENTAL KUZNET'S CURVE (EKC)

    OpenAIRE

    Paudel, Krishna P.; Zapata, Hector O.

    2004-01-01

    This study compares parametric and semiparametric smoothing techniques to estimate the environmental Kuznets curve. The ad hoc functional form, in which income is related to environmental quality either as a square or as a cubic function, is relaxed in search of a better nonlinear fit to the pollution-income relationship for panel data.

  6. On the use of the covariance matrix to fit correlated data

    Science.gov (United States)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ². This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or if a large number of data points are fitted.
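
    The effect is easy to reproduce numerically, as in the sketch below, which generalizes the classic two-measurement example: with the empirical covariance V_ij = σ_i² δ_ij + f² y_i y_j, a generalized least-squares fit of a constant lands below both data points. The numbers are illustrative.

```python
# Demonstration of the normalization-error bias in covariance-matrix fits.
import numpy as np

y = np.array([8.0, 8.5])            # two measurements of the same quantity
sig = np.array([0.1, 0.1])          # independent errors
f = 0.10                            # 10% fully correlated normalization error

V = np.diag(sig**2) + f**2 * np.outer(y, y)   # empirical covariance matrix
Vinv = np.linalg.inv(V)

# Generalized least squares for a constant k: minimize (y - k)^T V^-1 (y - k).
ones = np.ones_like(y)
k_hat = (ones @ Vinv @ y) / (ones @ Vinv @ ones)
print(f"naive average = {y.mean():.3f}, covariance fit = {k_hat:.3f}")
# k_hat comes out below *both* measurements: the linearization bias.
```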

  7. A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes

    Energy Technology Data Exchange (ETDEWEB)

    Sesar, Branimir; Fouesneau, Morgan; Bailer-Jones, Coryn A. L.; Gould, Andy; Rix, Hans-Walter [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Price-Whelan, Adrian M., E-mail: bsesar@mpia.de [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2017-04-01

    Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in the near-infrared W1 and W2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1σ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.

  8. Contribution to the boiling curve of sodium

    International Nuclear Information System (INIS)

    Schins, H.E.J.

    1975-01-01

    Sodium in a pool was preheated to saturation temperatures at system pressures of 200, 350 and 500 torr. A test section of normal stainless steel was then additionally heated by means of the conical fitting condenser zone of a heat pipe. Measurements were made of heat transfer fluxes, q in W/cm², as a function of wall excess temperature above saturation, Θ = T_w − T_s in °C, both in natural convection and in boiling regimes. These measurements make it possible to select the Subbotin natural convection and nucleate boiling curves among other variants proposed in the literature. Further, it is empirically demonstrated on water that the minimum film boiling point corresponds to the homogeneous nucleation temperature calculated by the Doering formula. Assuming that the minimum film boiling point of sodium can be obtained in the same manner, it is then possible to give an approximate boiling curve of sodium for use in thermal interaction studies. At 1 atm the heat transfer fluxes q versus wall temperatures Θ are: for a point on the natural convection curve, 0.3 W/cm² and 2 °C; for start of boiling, 1.6 W/cm² and 6 °C; for peak heat flux, 360 W/cm² and 37 °C; for minimum film boiling, 30 W/cm² and 905 °C; and for a point on the film boiling curve, 160 W/cm² and 2,000 °C. (orig.)

  9. On the analysis of Canadian Holstein dairy cow lactation curves using standard growth functions

    NARCIS (Netherlands)

    López, S.; France, J.; Odongo, N.E.; McBride, R.A.; Kebreab, E.; Alzahal, O.; McBride, B.W.; Dijkstra, J.

    2015-01-01

    Six classical growth functions (monomolecular, Schumacher, Gompertz, logistic, Richards, and Morgan) were fitted to individual and average (by parity) cumulative milk production curves of Canadian Holstein dairy cows. The data analyzed consisted of approximately 91,000 daily milk yield records

  10. Modelling of isothermal remanence magnetisation curves for an assembly of macrospins

    International Nuclear Information System (INIS)

    Tournus, F.

    2015-01-01

    We present a robust and efficient framework to compute isothermal remanent magnetisation (IRM) curves for magnetic nanoparticle assemblies. The assembly is modelled by independent, randomly oriented, uniaxial macrospins and we use a Néel model to take into account the thermal relaxation. A simple analytic expression is established for a single size, in a sudden switching approximation, and is compared to more evolved models. We show that for realistic samples (necessarily presenting a size dispersion) the simple model is very satisfactory. With this framework, it is then possible to reliably simulate IRM curves, which can be compared to experimental measurements and used in a best fit procedure. We also examine the influence of several parameters on the IRM curves and we discuss the link between the irreversible susceptibility and the switching field distribution. - Highlights: • A framework to compute IRM curves for nanoparticle assemblies is presented. • A simple analytic expression (for a single size) is compared to more evolved models. • The simple expression can reliably simulate IRM curves for realistic samples. • Irreversible susceptibility and the influence of several parameters is discussed
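
    A minimal sketch of the sudden-switching picture referenced in the highlights follows: randomly oriented uniaxial macrospins reverse once the applied field exceeds the Stoner-Wohlfarth switching field for their easy-axis angle. A single anisotropy field is assumed, thermal relaxation is neglected, and the switched fraction stands in for the (normalized) IRM curve, ignoring moment projections for simplicity.

```python
# Sudden-switching IRM curve for an assembly of randomly oriented macrospins.
import numpy as np

def h_sw(psi):
    """Reduced Stoner-Wohlfarth switching field vs. easy-axis angle psi."""
    t = np.tan(psi) ** (1.0 / 3.0)
    return np.sqrt(1.0 - t**2 + t**4) / (1.0 + t**2)

# Random easy-axis orientations: uniform on the sphere => cos(psi) uniform.
rng = np.random.default_rng(11)
psi = np.arccos(rng.uniform(0.0, 1.0, 200_000))
hs = h_sw(np.clip(psi, 1e-6, np.pi / 2 - 1e-6))

h = np.linspace(0.0, 1.0, 101)                 # reduced applied field H/H_K
irm = np.array([(hs < hi).mean() for hi in h]) # switched (remanent) fraction
print(irm[::20])                               # rises from 0 and saturates at 1
```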

  11. Modeling of X-ray rocking curves for layers after two-stage ion-implantation

    Directory of Open Access Journals (Sweden)

    O.I. Liubchenko

    2017-10-01

    Full Text Available In this work, we consider an approach for simulating the X-ray rocking curves of InSb(111) crystals implanted with Be+ ions at various energies and doses. The method is based on the semi-kinematical theory of X-ray diffraction in the Bragg geometry. A fitting procedure relying on the Hooke–Jeeves direct search algorithm was developed to determine the depth profiles of strain and structural disorder in the ion-modified layers. The thickness and maximum strain of the ion-modified InSb(111) layers were determined. For implantation energies of 66 and 80 keV and doses of 25 and 50 µC, the thickness of the strained layer is about 500 nm, with a maximum strain close to 0.1%. Additionally, an amorphous layer of significant thickness was found in the implantation region.

  12. DARK MATTER, MAGNETIC FIELDS, AND THE ROTATION CURVE OF THE MILKY WAY

    International Nuclear Information System (INIS)

    Ruiz-Granados, B.; Battaner, E.; Florido, E.; Calvo, J.; Rubiño-Martín, J. A.

    2012-01-01

    The study of the disk rotation curve of our Galaxy at large distances provides an interesting scenario for us to test whether magnetic fields should be considered as a non-negligible dynamical ingredient. By assuming a bulge, an exponential disk for the stellar and gaseous distributions, a dark halo, and disk magnetic fields, we fit the rotation velocity of the Milky Way. In general, when the magnetic contribution is added to the dynamics, a better description of the rotation curve is obtained. Our main conclusion is that magnetic fields should be taken into account for Milky Way dynamics. Azimuthal magnetic field strengths of B_φ ∼ 2 μG at distances of ∼2R_0 (16 kpc) are able to explain the rise of the rotation curve in the outer disk.

  13. AN ANALYSIS OF THE SHAPES OF INTERSTELLAR EXTINCTION CURVES. VI. THE NEAR-IR EXTINCTION LAW

    International Nuclear Information System (INIS)

    Fitzpatrick, E. L.; Massa, D.

    2009-01-01

    We combine new observations from the Hubble Space Telescope's Advanced Camera for Surveys with existing data to investigate the wavelength dependence of near-IR (NIR) extinction. Previous studies suggest a power-law form for NIR extinction, with a 'universal' value of the exponent, although some recent observations indicate that significant sight line-to-sight line variability may exist. We show that a power-law model for the NIR extinction provides an excellent fit to most extinction curves, but that the value of the power, β, varies significantly from sight line to sight line. Therefore, it seems that a 'universal NIR extinction law' is not possible. Instead, we find that as β decreases, R(V) ≡ A(V)/E(B - V) tends to increase, suggesting that NIR extinction curves which have been considered 'peculiar' may, in fact, be typical for different R(V) values. We show that the power-law parameters can depend on the wavelength interval used to derive them, with β increasing as longer wavelengths are included. This result implies that extrapolating power-law fits to determine R(V) is unreliable. To avoid this problem, we adopt a different functional form for NIR extinction. This new form mimics a power law whose exponent increases with wavelength, has only two free parameters, can fit all of our curves over a longer wavelength baseline and to higher precision, and produces R(V) values which are consistent with independent estimates and commonly used methods for estimating R(V). Furthermore, unlike the power-law model, it gives R(V) values that are independent of the wavelength interval used to derive them. It also suggests that the relation R(V) = -1.36 E(K-V)/E(B-V) - 0.79 can estimate R(V) to ±0.12. Finally, we use model extinction curves to show that our extinction curves are in accord with theoretical expectations, and demonstrate how large samples of observational quantities can provide useful constraints on the grain properties.

  14. Global experience curves for wind farms

    International Nuclear Information System (INIS)

    Junginger, M.; Faaij, A.; Turkenburg, W.C.

    2005-01-01

    In order to forecast the technological development and cost of wind turbines and the production costs of wind electricity, frequent use is made of the so-called experience curve concept. Experience curves of wind turbines are generally based on data describing the development of national markets, which cause a number of problems when applied for global assessments. To analyze global wind energy price development more adequately, we compose a global experience curve. First, underlying factors for past and potential future price reductions of wind turbines are analyzed. Also possible implications and pitfalls when applying the experience curve methodology are assessed. Second, we present and discuss a new approach of establishing a global experience curve and thus a global progress ratio for the investment cost of wind farms. Results show that global progress ratios for wind farms may lie between 77% and 85% (with an average of 81%), which is significantly more optimistic than progress ratios applied in most current scenario studies and integrated assessment models. While the findings are based on a limited amount of data, they may indicate faster price reduction opportunities than so far assumed. With this global experience curve we aim to improve the reliability of describing the speed with which global costs of wind power may decline
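
    The experience-curve arithmetic behind a progress ratio can be sketched as follows: fit cost against cumulative capacity as a straight line in log-log space, then convert the slope to a progress ratio. The cost and capacity numbers below are invented, chosen to land near the 81% average reported above.

```python
# Progress ratio from an experience curve  C = C0 * x^b  (b < 0): PR = 2^b.
import numpy as np

cum_capacity = np.array([1.0, 2.0, 5.0, 12.0, 30.0, 80.0, 200.0])   # GW
unit_cost = np.array([1200, 970, 700, 540, 420, 330, 260.0])        # cost/kW

b, log_c0 = np.polyfit(np.log(cum_capacity), np.log(unit_cost), 1)
progress_ratio = 2.0 ** b          # b is negative, so PR < 1
print(f"progress ratio = {progress_ratio:.2f} "
      f"(each doubling cuts cost to {100 * progress_ratio:.0f}%)")
```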

  15. Projection-based curve clustering

    International Nuclear Information System (INIS)

    Auder, Benjamin; Fischer, Aurelie

    2012-01-01

    This paper focuses on unsupervised curve classification in the context of nuclear industry. At the Commissariat a l'Energie Atomique (CEA), Cadarache (France), the thermal-hydraulic computer code CATHARE is used to study the reliability of reactor vessels. The code inputs are physical parameters and the outputs are time evolution curves of a few other physical quantities. As the CATHARE code is quite complex and CPU time-consuming, it has to be approximated by a regression model. This regression process involves a clustering step. In the present paper, the CATHARE output curves are clustered using a k-means scheme, with a projection onto a lower dimensional space. We study the properties of the empirically optimal cluster centres found by the clustering method based on projections, compared with the 'true' ones. The choice of the projection basis is discussed, and an algorithm is implemented to select the best projection basis among a library of orthonormal bases. The approach is illustrated on a simulated example and then applied to the industrial problem. (authors)

  16. Dose response curve of induction of MN in lymphocytes for energies Cs-137

    Energy Technology Data Exchange (ETDEWEB)

    Serna Berna, A.; Alcaraz, M.; Acevedo, C.; Vicente, V.; Fuente, I. de la; Canteras, M.

    2006-07-01

    The determination of the dose-response curve is a crucial step in using the micronucleus (MN) assay in lymphocytes as a biological dosimeter. The most widely used fitting function is the linear-quadratic function, whose coefficients are fitted to calibration data obtained by irradiating blood from healthy donors. In our case we constructed the calibration curve corresponding to gamma radiation from Cesium-137 (660 keV), with doses ranging from 0 to 16 Gy. The fitting procedure used was the iteratively reweighted least-squares algorithm, implemented in a Matlab routine. The analysis of our data shows that the dose-effect curve does not follow a linear-quadratic curve at high radiation doses, with the quadratic parameter diminishing as dose increases. This can be interpreted as a micronucleus saturation effect beyond a certain dose level. We conclude that the MN assay with lymphocytes can be well characterized as a biological dosimeter up to a maximum dose of 4.5 Gy. (Author)

  17. Migration and the Wage Curve:

    DEFF Research Database (Denmark)

    Brücker, Herbert; Jahn, Elke J.

    Based on a wage curve approach we examine the labor market effects of migration in Germany. The wage curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. For the empirical analysis we employ the IABS, a two percent sample of the German labor force. We find that the elasticity of the wage curve is particularly high for young workers and workers with a university degree, while it is low for older workers and workers with a vocational degree. The wage and employment effects of migration are moderate: a 1 percent increase in the German labor force through immigration increases the aggregate unemployment rate by less than 0.1 percentage points and reduces average wages by less than 0.1 percent. While native workers benefit from...

  18. Fitness Trends and Disparities Among School-Aged Children in Georgia, 2011-2014.

    Science.gov (United States)

    Bai, Yang; Saint-Maurice, Pedro F; Welk, Gregory J

    Although FitnessGram fitness data on aerobic capacity and body mass index (BMI) have been collected in public schools in Georgia since the 2011-2012 school year, the data have not been analyzed. The primary objective of our study was to use these data to assess changes in fitness among school-aged children in Georgia between 2011 and 2014. A secondary objective was to determine if student fitness differed by school size and socioeconomic characteristics. FitnessGram classifies fitness into the Healthy Fitness Zone (HFZ) or not within the HFZ for aerobic capacity and BMI. We used data for 3 successive school years (ie, 2011-2012 to 2013-2014) obtained from FitnessGram testing of students in >1600 schools. We calculated the percentage of students who achieved the HFZ for aerobic capacity and BMI. We used growth curve models to estimate the annual changes in these proportions, and we determined the effect of school size and socioeconomic status on these changes. Both elementary school boys (β = 1.31%, standard error [SE] = 0.23%, P fitness profiles. Surveillance results such as these may help inform the process of designing state and local school-based fitness promotion and public health programs and tracking the results of those programs.

  19. Is High-Intensity Functional Training (HIFT)/CrossFit Safe for Military Fitness Training?

    Science.gov (United States)

    Poston, Walker S C; Haddock, Christopher K; Heinrich, Katie M; Jahnke, Sara A; Jitnarin, Nattinee; Batchelor, David B

    2016-07-01

    High-intensity functional training (HIFT) is a promising fitness paradigm that has gained popularity among military populations. Rather than biasing workouts toward maximizing fitness domains such as aerobic endurance, HIFT workouts are designed to promote general physical preparedness. HIFT programs have proliferated as a result of concerns about the relevance of traditional physical training (PT), which historically focused on aerobic conditioning via running. Other concerns about traditional PT include: (1) the relevance of service fitness tests given current combat demands, (2) the perception that military PT is geared toward passing service fitness tests, and (3) that training for combat requires more than just aerobic endurance. Despite its popularity in the military, concerns have been raised about HIFT's injury potential, leading to some approaches being labeled as "extreme conditioning programs" by several military and civilian experts. Given HIFT programs' popularity in the military and concerns about injury, a review of data on HIFT injury potential is needed to inform military policy. The purpose of this review is to: (1) provide an overview of scientific methods used to appropriately compare injury rates among fitness activities and (2) evaluate scientific data regarding HIFT injury risk compared to traditional military PT and other accepted fitness activities. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  20. Reconceptualizing fit in strategic human resource management: 'Lost in translation?'

    NARCIS (Netherlands)

    Paauwe, J.; Boon, C.; Boselie, P.; den Hartog, D.; Paauwe, J.; Guest, D.; Wright, P.

    2013-01-01

    To date, studies that focus on the concept of 'fit' in strategic human resource management (SHRM) fail to show consistent evidence. A variety of fit approaches are available, but there is no general consensus about what constitutes fit. This chapter aims to reconceptualize fit through a literature

  1. Probing potential energy curves of C2- by translational energy spectrometry

    International Nuclear Information System (INIS)

    Gupta, A.K.; Aravind, G.; Krishnamurthy, M.

    2004-01-01

    We present studies on collision-induced dissociation of C2− with Ar at an impact energy of 15 keV. The C− fragment ion kinetic-energy release (KER) distribution is measured and is used to compute the KER in the center-of-mass (c.m.) frame (KERc.m.). We employ the reflection method to deduce an effective repulsive potential-energy curve for the molecular anion that is otherwise difficult to evaluate with quantum computational methods. The nuclear wave packet of the molecular ion in the initial ground state is computed by the semiclassical WKB method, using the potential-energy curve of the ²Σg⁺ ground electronic state calculated by an ab initio quantum computation method. The ground-state nuclear wave packet is reflected on a parametrized repulsive potential-energy curve whose parameters are determined by fitting the measured KERc.m. with the calculated KER distribution.

  2. COMPARING BEHAVIORAL DOSE-EFFECT CURVES FOR HUMANS AND LABORATORY ANIMALS ACUTELY EXPOSED TO TOLUENE.

    Science.gov (United States)

    The utility of laboratory animal data in toxicology depends upon the ability to generalize the results quantitatively to humans. To compare the acute behavioral effects of inhaled toluene in humans to those in animals, dose-effect curves were fitted by meta-analysis of published...

  3. SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook

    2012-01-01

    The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.

  4. Derivation of electron and photon energy spectra from electron beam central axis depth dose curves

    Energy Technology Data Exchange (ETDEWEB)

    Deng Jun [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 (United States)]. E-mail: jun@reyes.stanford.edu; Jiang, Steve B.; Pawlicki, Todd; Li Jinsheng; Ma, C.M. [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 (United States)

    2001-05-01

    A method for deriving the electron and photon energy spectra from electron beam central axis percentage depth dose (PDD) curves has been investigated. The PDD curves of 6, 12 and 20 MeV electron beams obtained from the Monte Carlo full phase space simulations of the Varian linear accelerator treatment head have been used to test the method. We have employed a 'random creep' algorithm to determine the energy spectra of electrons and photons in a clinical electron beam. The fitted electron and photon energy spectra have been compared with the corresponding spectra obtained from the Monte Carlo full phase space simulations. Our fitted energy spectra are in good agreement with the Monte Carlo simulated spectra in terms of peak location, peak width, amplitude and smoothness of the spectrum. In addition, the derived depth dose curves of head-generated photons agree well in both shape and amplitude with those calculated using the full phase space data. The central axis depth dose curves and dose profiles at various depths have been compared using an automated electron beam commissioning procedure. The comparison has demonstrated that our method is capable of deriving the energy spectra for the Varian accelerator electron beams investigated. We have implemented this method in the electron beam commissioning procedure for Monte Carlo electron beam dose calculations. (author)

  5. A GLOBAL MODEL OF THE LIGHT CURVES AND EXPANSION VELOCITIES OF TYPE II-PLATEAU SUPERNOVAE

    Energy Technology Data Exchange (ETDEWEB)

    Pejcha, Ondřej [Department of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08540 (United States); Prieto, Jose L., E-mail: pejcha@astro.princeton.edu [Núcleo de Astronomía de la Facultad de Ingeniería, Universidad Diego Portales, Av. Ejército 441 Santiago (Chile)

    2015-02-01

    We present a new self-consistent and versatile method that derives photospheric radius and temperature variations of Type II-Plateau supernovae based on their expansion velocities and photometric measurements. We apply the method to a sample of 26 well-observed, nearby supernovae with published light curves and velocities. We simultaneously fit ∼230 velocity and ∼6800 mag measurements distributed over 21 photometric passbands spanning wavelengths from 0.19 to 2.2 μm. The light-curve differences among the Type II-Plateau supernovae are well modeled by assuming different rates of photospheric radius expansion, which we explain as different density profiles of the ejecta, and we argue that steeper density profiles result in flatter plateaus, if everything else remains unchanged. The steep luminosity decline of Type II-Linear supernovae is due to fast evolution of the photospheric temperature, which we verify with a successful fit of SN 1980K. Eliminating the need for theoretical supernova atmosphere models, we obtain self-consistent relative distances, reddenings, and nickel masses fully accounting for all internal model uncertainties and covariances. We use our global fit to estimate the time evolution of any missing band tailored specifically for each supernova, and we construct spectral energy distributions and bolometric light curves. We produce bolometric corrections for all filter combinations in our sample. We compare our model to the theoretical dilution factors and find good agreement for the B and V filters. Our results differ from the theory when the I, J, H, or K bands are included. We investigate the reddening law toward our supernovae and find reasonable agreement with the standard R_V ∼ 3.1 reddening law in UBVRI bands. Results for other bands are inconclusive. We make our fitting code publicly available.

  6. Benefit and cost curves for typical pollination mutualisms.

    Science.gov (United States)

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.

  7. Robotic Mitral Valve Repair: The Learning Curve.

    Science.gov (United States)

    Goodman, Avi; Koprivanac, Marijan; Kelava, Marta; Mick, Stephanie L; Gillinov, A Marc; Rajeswaran, Jeevanantham; Brzezinski, Anna; Blackstone, Eugene H; Mihaljevic, Tomislav

    Adoption of robotic mitral valve surgery has been slow, likely in part because of its perceived technical complexity and a poorly understood learning curve. We sought to correlate changes in technical performance and outcome with surgeon experience in the "learning curve" part of our series. From 2006 to 2011, two surgeons undertook robotically assisted mitral valve repair in 458 patients (intent-to-treat); 404 procedures were completed entirely robotically (as-treated). Learning curves were constructed by modeling surgical sequence number semiparametrically with flexible penalized spline smoothing best-fit curves. Operative efficiency, reflecting technical performance, improved for (1) operating room time for case 1 to cases 200 (early experience) and 400 (later experience), from 414 to 364 to 321 minutes (12% and 22% decrease, respectively), (2) cardiopulmonary bypass time, from 148 to 102 to 91 minutes (31% and 39% decrease), and (3) myocardial ischemic time, from 119 to 75 to 68 minutes (37% and 43% decrease). Composite postoperative complications, reflecting safety, decreased from 17% to 6% to 2% (63% and 85% decrease). Intensive care unit stay decreased from 32 to 28 to 24 hours (13% and 25% decrease). Postoperative stay fell from 5.2 to 4.5 to 3.8 days (13% and 27% decrease). There were no in-hospital deaths. Predischarge mitral regurgitation of less than 2+, reflecting effectiveness, was achieved in 395 (97.8%), without correlation to experience; return-to-work times did not change substantially with experience. Technical efficiency of robotic mitral valve repair improves with experience and permits its safe and effective conduct.
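
    A hedged sketch of such a learning-curve fit follows, with scipy's smoothing spline standing in for the flexible penalized-spline model described above; the operating-room times are synthetic.

```python
# Learning curve: outcome vs. case sequence number, smoothed with a spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(8)
case_no = np.arange(1, 401)
# Synthetic operating-room times that shorten with surgeon experience.
or_time = 320 + 100 * np.exp(-case_no / 120.0) + rng.normal(0, 25, case_no.size)

# Smoothing factor s ~ n * sigma^2 targets the expected residual sum of squares.
spline = UnivariateSpline(case_no, or_time, s=case_no.size * 25.0**2)
for n in (1, 200, 400):
    print(f"case {n:3d}: fitted OR time ~ {float(spline(n)):.0f} min")
```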

  8. Developments in Interpreting Learning Curves and Applications to Energy Technology Policy

    International Nuclear Information System (INIS)

    Van der Zwaan, B.C.C.; Wene, C.O.

    2011-01-01

    The book 'Learning Curves: Theory, Models, and Applications' first draws a learning map that shows where learning is involved within organizations, then examines how it can be sustained, perfected, and accelerated. The book reviews empirical findings in the literature in terms of different sources for learning and partial assessments of the steps that make up the actual learning process inside the learning curve. Chapter 23 on 'Developments in Interpreting Learning Curves and Applications to Energy Technology Policy' is written by Bob van der Zwaan and Clas-Otto Wene. In this chapter they provide some interpretations of experience and learning curves starting from three different theoretical platforms. These interpretations are aimed at explaining learning rates for different energy technologies. The ultimate purpose is to find the role that experience and learning curves can legitimately play in designing efficient government deployment programs and in analyzing the implications of different energy scenarios. The 'Component Learning' section summarizes recent work by the authors that focuses on the disaggregation of technologies in their respective components and argues that traditional learning for overall technology should perhaps be replaced by a phenomenology that recognizes learning for individual components. The 'Learning and Time' section presents an approach that departs more strongly from the conventional learning curve methodology, by suggesting that exponential growth and progress may be the deeper underlying processes behind observed learning-by-doing. Contrary to this view, the cybernetic approach presented in the 'Cybernetic Approach' section sees learning curves as expressing a fundamental property of organizations in competitive markets and applies the findings from second order cybernetics to calculate the learning rates for operationally closed systems. All three interpretations find empirical support. The 'Conclusions' section summarizes the

  9. Developments in Interpreting Learning Curves and Applications to Energy Technology Policy

    Energy Technology Data Exchange (ETDEWEB)

    Van der Zwaan, B.C.C. [Energy research Centre of the Netherlands, ECN Policy Studies, Petten (Netherlands); Wene, C.O. [Wenergy, Lund (Sweden)

    2011-06-15

    The book 'Learning Curves: Theory, Models, and Applications' first draws a learning map that shows where learning is involved within organizations, then examines how it can be sustained, perfected, and accelerated. The book reviews empirical findings in the literature in terms of different sources for learning and partial assessments of the steps that make up the actual learning process inside the learning curve. Chapter 23 on 'Developments in Interpreting Learning Curves and Applications to Energy Technology Policy' is written by Bob van der Zwaan and Clas-Otto Wene. In this chapter they provide some interpretations of experience and learning curves starting from three different theoretical platforms. These interpretations are aimed at explaining learning rates for different energy technologies. The ultimate purpose is to find the role that experience and learning curves can legitimately play in designing efficient government deployment programs and in analyzing the implications of different energy scenarios. The 'Component Learning' section summarizes recent work by the authors that focuses on the disaggregation of technologies in their respective components and argues that traditional learning for overall technology should perhaps be replaced by a phenomenology that recognizes learning for individual components. The 'Learning and Time' section presents an approach that departs more strongly from the conventional learning curve methodology, by suggesting that exponential growth and progress may be the deeper underlying processes behind observed learning-by-doing. Contrary to this view, the cybernetic approach presented in the 'Cybernetic Approach' section sees learning curves as expressing a fundamental property of organizations in competitive markets and applies the findings from second order cybernetics to calculate the learning rates for operationally closed systems. All three interpretations find empirical support.

  10. Deriving the suction stress of unsaturated soils from water retention curve, based on wetted surface area in pores

    Science.gov (United States)

    Greco, Roberto; Gargano, Rudy

    2016-04-01

    The evaluation of suction stress in unsaturated soils has important implications in several practical applications. Suction stress affects soil aggregate stability and soil erosion. Furthermore, the equilibrium of shallow unsaturated soil deposits along steep slopes is often possible only thanks to the contribution of suction to soil effective stress. Experimental evidence, as well as theoretical arguments, shows that suction stress is a nonlinear function of matric suction. The relationship expressing the dependence of suction stress on soil matric suction is usually indicated as the Soil Stress Characteristic Curve (SSCC). In this study, a novel equation for the evaluation of the suction stress of an unsaturated soil is proposed, assuming that the exchange of stress between soil water and solid particles occurs only through the part of the surface of the solid particles which is in direct contact with water. The proposed equation, based only upon geometric considerations related to soil pore-size distribution, allows the SSCC to be easily derived from the water retention curve (SWRC), with the assignment of two additional parameters. The first parameter, representing the projection of the external surface area of the soil over a generic plane surface, can be reasonably estimated from the residual water content of the soil. The second parameter, indicated as H0, is the water potential below which adsorption significantly contributes to water retention. For the experimental verification of the proposed approach such a parameter is considered as a fitting parameter. The proposed equation is applied to the interpretation of suction stress experimental data, taken from the literature, spanning a wide range of soil textures. The obtained results show that in all cases the proposed relationship closely reproduces the experimental data, performing better than other currently used expressions. The obtained results also show that the adopted values of the parameter H0

  11. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    International Nuclear Information System (INIS)

    Masullo, Alessandro; Theunissen, Raf

    2017-01-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector. (paper)

  12. Automatic fitting of Gaussian peaks using abductive machine learning

    Science.gov (United States)

    Abdel-Aal, R. E.

    1998-02-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive networks machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum.
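
    A minimal sketch of the conventional analytical baseline the study compares against: least-squares fitting of a single Gaussian peak on a linear background with scipy's curve_fit. The peak parameters and Poisson counting noise below are illustrative.

```python
# Sketch: analytical curve-fitting of one Gaussian peak plus linear background.
import numpy as np
from scipy.optimize import curve_fit

def peak(x, height, pos, width, a, b):
    return height * np.exp(-0.5 * ((x - pos) / width) ** 2) + a * x + b

x = np.arange(0, 100.0)
y = np.random.default_rng(1).poisson(peak(x, 100, 50, 4, 0.05, 2)).astype(float)

p0 = (y.max(), x[y.argmax()], 3.0, 0.0, y.min())   # rough starting guesses
popt, _ = curve_fit(peak, x, y, p0=p0)
print("height, position, width:", popt[:3])
```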

  13. Automatic fitting of Gaussian peaks using abductive machine learning

    International Nuclear Information System (INIS)

    Abdel-Aal, R.E.

    1998-01-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive networks machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1,000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum

  14. Steam generator tube fitness-for-service guidelines

    International Nuclear Information System (INIS)

    Gorman, J.A.; Harris, J.E.; Lowenstein, D.B.

    1995-07-01

    The objectives of this project were to characterize defect mechanisms which could affect the integrity of steam generator tubes, to review and critique state-of-the-art Canadian and international steam generator tube fitness-for-service criteria and guidelines, and to obtain recommendations for criteria that could be used to assess fitness-for-service guidelines for steam generator tubes containing defects in Canadian power plant service. Degradation mechanisms that could affect CANDU steam generator tubes in Canada have been characterized. The design standards and safety criteria that apply to steam generator tubing in nuclear power plant service in Canada and in Belgium, France, Japan, Spain, Sweden, and the USA have been reviewed and described. The fitness-for-service guidelines used for a variety of specific defect types in Canada and internationally have been evaluated and described in detail in order to highlight the considerations involved in developing such defect-specific guidelines. Existing procedures for defect assessment and disposition have been identified, including inspection and examination practices. The approaches used in Canada and in Belgium, France, Japan, Spain, Sweden, and the USA for fitness-for-service guidelines were compared and contrasted for a variety of defect mechanisms. The strengths and weaknesses of the various approaches have been assessed. The report presents recommendations on approaches that may be adopted in the development of fitness-for-service guidelines for use in the dispositioning of steam generator tubing defects in Canada. (author). 175 refs., 2 tabs., 28 figs

  15. Measurement and fitting techniques for the assessment of material nonlinearity using nonlinear Rayleigh waves

    Energy Technology Data Exchange (ETDEWEB)

    Torello, David [GW Woodruff School of Mechanical Engineering, Georgia Tech (United States); Kim, Jin-Yeon [School of Civil and Environmental Engineering, Georgia Tech (United States); Qu, Jianmin [Department of Civil and Environmental Engineering, Northwestern University (United States); Jacobs, Laurence J. [School of Civil and Environmental Engineering, Georgia Tech and GW Woodruff School of Mechanical Engineering, Georgia Tech (United States)

    2015-03-31

    This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β_11 is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens and a β_11^(7075)/β_11^(2024) measure of 1.363 agrees well with previous literature and earlier work.

  16. Calculation approaches for grid usage fees to influence the load curve in the distribution grid level

    International Nuclear Information System (INIS)

    Illing, Bjoern

    2014-01-01

    Driven by energy policy, the decentralized German energy market is changing. One major target of the government is to increase the contribution of renewable generation to gross electricity consumption. Achieving this target brings disadvantages such as an increased need for capacity management. Load reduction and variable grid fees offer the grid operator ways to realize capacity management by influencing the load profile. Evolving the current grid fees towards more causality is required to enable these approaches. Two calculation approaches are developed in this work. On the one hand, multivariable grid fees keep the current components, the demand and energy charges; in addition to the grid costs, grid-load-dependent parameters such as the amount of decentralized feed-in, time and local circumstances, as well as grid capacities are considered. On the other hand, the grid fee flat-rate represents a demand-based model on a monthly level. Both approaches are designed to meet the criteria for future grid fees. By means of a case study, the effects of the grid fees on the load profile in the low-voltage grid are simulated. The consumption is represented by different behaviour models and the results are scaled to the benchmark grid area. The resulting load curve is analyzed concerning the effects of peak load reduction as well as the integration of renewable energy sources. Additionally, the combined effect of grid fees and electricity tariffs is evaluated. Finally, the work discusses the launching of grid fees in the tense atmosphere of politics, legislation and grid operation. The results of this work are two calculation approaches designed for grid operators to define grid fees. Multivariable grid fees are based on the current calculation scheme, whereby demand and energy charges are weighted by time, locational and load-related dependencies. The grid fee flat-rate defines a limitation in demand extraction. Different demand levels

  17. Use of regionalisation approach to develop fire frequency curves for Victoria, Australia

    Science.gov (United States)

    Khastagir, Anirban; Jayasuriya, Niranjali; Bhuyian, Muhammed A.

    2017-11-01

    It is important to perform fire frequency analysis to obtain fire frequency curves (FFCs) based on fire intensity in different parts of Victoria. In this paper FFCs were derived based on the forest fire danger index (FFDI). FFDI is a measure related to fire initiation, spreading speed and containment difficulty. The mean temperature (T), relative humidity (RH) and areal extent of open water (LC2) during the summer months (Dec-Feb) were identified as the most important parameters for assessing the risk of occurrence of bushfire. Based on these parameters, Andrews' curve equation was applied to 40 selected meteorological stations to identify homogeneous stations forming unique clusters. A methodology using peak FFDI from cluster-averaged FFDIs was developed by applying the Log Pearson Type III (LPIII) distribution to generate FFCs. A total of nine homogeneous clusters across Victoria were identified, and subsequently their FFCs were developed in order to estimate the regionalised fire occurrence characteristics.
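
    A minimal sketch of the LPIII step described above, assuming a synthetic series of annual-maximum FFDI values: scipy's pearson3 is fitted to the log-transformed maxima and quantiles are read off for chosen return periods.

```python
# Sketch: Log Pearson Type III fit for a fire frequency curve (FFDI vs return period).
import numpy as np
from scipy.stats import pearson3

annual_max_ffdi = np.random.default_rng(2).gumbel(60, 15, size=40)  # illustrative
log_ffdi = np.log10(annual_max_ffdi)

skew, loc, scale = pearson3.fit(log_ffdi)
for T in (2, 10, 50, 100):                       # return periods, years
    q = pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale)
    print(f"T = {T:>3} yr: FFDI ~ {10 ** q:.0f}")
```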

  18. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs for these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define for the given high-dimensional spectroscopic data a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. Then the factorization results are used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested for experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.

    Science.gov (United States)

    Buchanan, Trey

    Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…

  20. Optimization of ISOL targets based on Monte-Carlo simulations of ion release curves

    International Nuclear Information System (INIS)

    Mustapha, B.; Nolen, J.A.

    2003-01-01

    A detailed model for simulating release curves from ISOL targets has been developed. The full 3D geometry is implemented using Geant-4. Produced particles are followed individually from production to release. The delay time is computed event by event. All processes involved: diffusion, effusion and decay are included to obtain the overall release curve. By fitting to the experimental data, important parameters of the release process (diffusion coefficient, sticking time, ...) are extracted. They can be used to improve the efficiency of existing targets and design new ones more suitable to produce beams of rare isotopes

  1. Optimization of ISOL targets based on Monte-Carlo simulations of ion release curves

    CERN Document Server

    Mustapha, B

    2003-01-01

    A detailed model for simulating release curves from ISOL targets has been developed. The full 3D geometry is implemented using Geant-4. Produced particles are followed individually from production to release. The delay time is computed event by event. All processes involved: diffusion, effusion and decay are included to obtain the overall release curve. By fitting to the experimental data, important parameters of the release process (diffusion coefficient, sticking time, ...) are extracted. They can be used to improve the efficiency of existing targets and design new ones more suitable to produce beams of rare isotopes.

  2. A Probabilistic Framework for Curve Evolution

    DEFF Research Database (Denmark)

    Dahl, Vedrana Andersen

    2017-01-01

    Advantages of our approach include the ability to handle textured images, simple generalization to multiple regions, and efficiency in computation. We test our probabilistic framework in combination with parametric (snakes) and geometric (level-sets) curves. The experimental results on composed and natural images demonstrate…

  3. Temporal issues in person–organization fit, person–job fit and turnover: The role of leader–member exchange

    Science.gov (United States)

    Boon, Corine; Biron, Michal

    2016-01-01

    Person–environment fit has been found to have significant implications for employee attitudes and behaviors. Most research to date has approached person–environment fit as a static phenomenon, and without examining how different types of person–environment fit may affect each other. In particular, little is known about the conditions under which fit with one aspect of the environment influences another aspect, as well as subsequent behavior. To address this gap we examine the role of leader–member exchange in the relationship between two types of person–environment fit over time: person–organization and person–job fit, and subsequent turnover. Using data from two waves (T1 and T2, respectively) and turnover data collected two years later (T3) from a sample of 160 employees working in an elderly care organization in the Netherlands, we find that person–organization fit at T1 is positively associated with person–job fit at T2, but only for employees in high-quality leader–member exchange relationships. Higher needs–supplies fit at T2 is associated with lower turnover at T3. In contrast, among employees in high-quality leader–member exchange relationships, the demands–abilities dimension of person–job fit at T2 is associated with higher turnover at T3. PMID:27904171

  4. Temporal issues in person-organization fit, person-job fit and turnover: The role of leader-member exchange.

    Science.gov (United States)

    Boon, Corine; Biron, Michal

    2016-12-01

    Person-environment fit has been found to have significant implications for employee attitudes and behaviors. Most research to date has approached person-environment fit as a static phenomenon, and without examining how different types of person-environment fit may affect each other. In particular, little is known about the conditions under which fit with one aspect of the environment influences another aspect, as well as subsequent behavior. To address this gap we examine the role of leader-member exchange in the relationship between two types of person-environment fit over time: person-organization and person-job fit, and subsequent turnover. Using data from two waves (T1 and T2, respectively) and turnover data collected two years later (T3) from a sample of 160 employees working in an elderly care organization in the Netherlands, we find that person-organization fit at T1 is positively associated with person-job fit at T2, but only for employees in high-quality leader-member exchange relationships. Higher needs-supplies fit at T2 is associated with lower turnover at T3. In contrast, among employees in high-quality leader-member exchange relationships, the demands-abilities dimension of person-job fit at T2 is associated with higher turnover at T3.

  5. Effect of driving experience on anticipatory look-ahead fixations in real curve driving.

    Science.gov (United States)

    Lehtonen, Esko; Lappi, Otto; Koirikivi, Iivo; Summala, Heikki

    2014-09-01

    Anticipatory skills are a potential factor for novice drivers' curve accidents. Behavioural data show that steering and speed regulation are affected by forward planning of the trajectory. When approaching a curve, the relevant visual information for online steering control and for planning is located at different eccentricities, creating a need to disengage the gaze from the guidance of steering to anticipatory look-ahead fixations over curves. With experience, peripheral vision can be increasingly used in the visual guidance of steering. This could leave experienced drivers more gaze time to invest in look-ahead fixations over curves, facilitating trajectory planning. Eighteen drivers (nine novices, nine experienced) drove an instrumented vehicle on a rural road four times in both directions. Their eye movements were analyzed in six curves. The trajectory of the car was modelled and divided into approach, entry and exit phases. Experienced drivers spent less time on the road ahead and more time on look-ahead fixations over the curves. Look-ahead fixations were also more common in the approach than in the entry phase of the curve. The results suggest that with experience drivers allocate a greater part of their visual attention to trajectory planning. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Is High Intensity Functional Training (HIFT)/CrossFit® Safe for Military Fitness Training?

    Science.gov (United States)

    Poston, Walker S.C.; Haddock, Christopher K.; Heinrich, Katie M.; Jahnke, Sara A.; Jitnarin, Nattinee; Batchelor, David B.

    2016-01-01

    High-intensity functional training (HIFT) is a promising fitness paradigm that gained popularity among military populations. Rather than biasing workouts toward maximizing fitness domains such as aerobic endurance, HIFT workouts are designed to promote general physical preparedness. HIFT programs have proliferated due to concerns about the relevance of traditional physical training (PT), which historically focused on aerobic conditioning via running. Other concerns about traditional PT include: 1) the relevance of service fitness tests given current combat demands; 2) the perception that military PT is geared toward passing service fitness tests; and 3) that training for combat requires more than just aerobic endurance. Despite its popularity in the military, concerns have been raised about HIFT's injury potential, leading to some approaches being labeled as "extreme conditioning programs" by several military and civilian experts. Given HIFT programs' popularity in the military and concerns about injury, a review of data on HIFT injury potential is needed to inform military policy. The purpose of this review is to: 1) provide an overview of scientific methods used to appropriately compare injury rates among fitness activities; and 2) evaluate scientific data regarding HIFT injury risk compared to traditional military PT and other accepted fitness activities. PMID:27391615

  7. A fitting algorithm based on simulated annealing techniques for efficiency calibration of HPGe detectors using different mathematical functions

    Energy Technology Data Exchange (ETDEWEB)

    Hurtado, S. [Servicio de Radioisotopos, Centro de Investigacion, Tecnologia e Innovacion (CITIUS), Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: shurtado@us.es; Garcia-Leon, M. [Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Aptd. 1065, 41080 Sevilla (Spain); Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S.A. Universidad de Sevilla, Avda, Reina Mercedes 2, 41012 Sevilla (Spain)

    2008-09-11

    In this work several mathematical functions are compared in order to perform the full-energy peak efficiency calibration of HPGe detectors using a 126 cm³ HPGe coaxial detector and gamma-ray energies ranging from 36 to 1460 keV. Statistical tests and Monte Carlo simulations were used to study the performance of the fitting curve equations. Furthermore, the fitting procedure of these complex functional forms to experimental data is a non-linear multi-parameter minimization problem. In gamma-ray spectrometry, non-linear least-squares fitting algorithms (Levenberg-Marquardt method) usually provide fast convergence while minimizing χ_R²; however, they sometimes reach only local minima. In order to overcome that shortcoming, a hybrid algorithm based on simulated annealing (HSA) techniques is proposed. Additionally, a new function is suggested that models the efficiency curve of germanium detectors in gamma-ray spectrometry.
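
    A minimal sketch of the global-optimisation idea, with scipy's dual_annealing standing in for the paper's hybrid simulated-annealing (HSA) scheme; the cubic-in-ln(E) efficiency form is one common choice, and all energies and efficiencies below are invented.

```python
# Sketch: fit ln(eff) = a0 + a1*ln(E) + a2*ln(E)^2 + a3*ln(E)^3 by annealing.
import numpy as np
from scipy.optimize import dual_annealing

E = np.array([46.5, 59.5, 88.0, 122.1, 661.7, 1173.2, 1332.5, 1460.8])  # keV
eff = np.array([0.02, 0.05, 0.09, 0.10, 0.035, 0.022, 0.020, 0.018])    # invented

def sse(a):
    # np.polyval wants the highest-order coefficient first, hence a[::-1]
    return np.sum((np.log(eff) - np.polyval(a[::-1], np.log(E))) ** 2)

res = dual_annealing(sse, bounds=[(-10.0, 10.0)] * 4, seed=3)
print("coefficients:", res.x, "SSE:", res.fun)
```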

  8. Ultraviolet light curves of U Geminorum and VW Hydri

    International Nuclear Information System (INIS)

    Wu, C.-C.; Panek, R.J.; Holm, A.V.; Schiffer, F.H. III

    1982-01-01

    Ultraviolet light curves have been obtained for the quiescent dwarf novae U Gem and VW Hyi. The amplitude of the hump associated with the accretion hot spot is much smaller in the UV than in the visible. This implies that the bright spot temperature is roughly 12000 K if it is optically thick. A hotter spot would have to be optically thin in the near UV. The flux distribution of U Gem in quiescence cannot be fitted by model spectra of steady state, viscous accretion disks. The absolute luminosity, the flux distribution, and the far UV spectrum suggest that the primary star is visible in the far UV. The optical-UV flux distribution of VW Hyi could be matched roughly by the authors' model accretion disks, but the fitting is poorly constrained due to the uncertainty in its distance. (Auth.)

  9. The mass of the black hole in 1A 0620-00, revisiting the ellipsoidal light curve modelling

    Science.gov (United States)

    van Grunsven, Theo F. J.; Jonker, Peter G.; Verbunt, Frank W. M.; Robinson, Edward L.

    2017-12-01

    The mass distribution of stellar-mass black holes can provide important clues to supernova modelling, but observationally it is still ill constrained. Therefore, it is of importance to make black hole mass measurements as accurate as possible. The X-ray transient 1A 0620-00 is well studied, with a published black hole mass of 6.61 ± 0.25 M⊙, based on an orbital inclination i of 51.0° ± 0.9°. This was obtained by Cantrell et al. (2010) as an average of independent fits to V-, I- and H-band light curves. In this work, we perform an independent check on the value of i by re-analysing existing YALO/SMARTS V-, I- and H-band photometry, using different modelling software and fitting strategy. Performing a fit to the three light curves simultaneously, we obtain a value for i of 54.1° ± 1.1°, resulting in a black hole mass of 5.86 ± 0.24 M⊙. Applying the same model to the light curves individually, we obtain 58.2° ± 1.9°, 53.6° ± 1.6° and 50.5° ± 2.2° for V-, I- and H-band, respectively, where the differences in best-fitting i are caused by the contribution of the residual accretion disc light in the three different bands. We conclude that the mass determination of this black hole may still be subject to systematic effects exceeding the statistical uncertainty. Obtaining more accurate masses would be greatly helped by continuous phase-resolved spectroscopic observations simultaneous with photometry.

  10. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural networks based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes with several…

  11. Estimation of Typhoon Wind Hazard Curves for Nuclear Sites

    Energy Technology Data Exchange (ETDEWEB)

    Choun, Young-Sun; Kim, Min-Kyu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The intensity of typhoons that can influence the Korean Peninsula is on an increasing trend owing to a rapid change in the climate of the Northwest Pacific Ocean. Therefore, nuclear facilities should be prepared against future super-typhoons. Currently, the U.S. Nuclear Regulatory Commission requires that a new NPP be designed to endure design-basis hurricane wind speeds corresponding to an annual exceedance frequency of 10⁻⁷ (return period of 10 million years). A typical technique used to estimate typhoon wind speeds is to sample the key parameters of typhoon wind models from distribution functions obtained by fitting statistical distributions to the observation data. Thus, the estimated wind speeds for long return periods include an unavoidable uncertainty owing to the limited observations. This study estimates the typhoon wind speeds for nuclear sites using a Monte Carlo simulation, and derives wind hazard curves using a logic-tree framework to reduce the epistemic uncertainty. Typhoon wind speeds were estimated for different return periods through a Monte-Carlo simulation using the typhoon observation data, and the wind hazard curves were derived using a logic-tree framework for three nuclear sites. The hazard curves for the simulated and probable maximum winds were obtained. The mean hazard curves for the simulated and probable maximum winds can be used for the design and risk assessment of an NPP.
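
    A minimal sketch of the Monte-Carlo step, assuming a single fitted Gumbel distribution stands in for the full typhoon wind-field model: wind speeds at a given annual exceedance probability (AEP) are read off the simulated annual maxima.

```python
# Sketch: Monte-Carlo wind hazard curve from simulated annual maximum winds.
import numpy as np

rng = np.random.default_rng(6)
annual_max = rng.gumbel(35.0, 6.0, size=1_000_000)   # annual max wind (m/s), invented

for aep in (1e-2, 1e-4, 1e-6):
    speed = np.quantile(annual_max, 1 - aep)
    print(f"AEP {aep:.0e}: {speed:.1f} m/s")
```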

  12. Multiphoton absorption coefficients in solids: an universal curve

    International Nuclear Information System (INIS)

    Brandi, H.S.; Araujo, C.B. de

    1983-04-01

    A universal curve for the frequency dependence of the multiphoton absorption coefficient is proposed based on a 'non-perturbative' approach. Specific applications have been made to obtain two-, three-, four- and five-photon absorption coefficients in different materials. Proper scaling of the two-photon absorption coefficient and the use of the universal curve yields results for the higher order absorption coefficients in good agreement with the experimental data. (Author) [pt]

  13. Light-Curve Modelling Constraints on the Obliquities and Aspect Angles of the Young Fermi Pulsars

    Science.gov (United States)

    Pierbattista, M.; Harding, A. K.; Grenier, I. A.; Johnson, T. J.; Caraveo, P. A.; Kerr, M.; Gonthier, P. L.

    2015-01-01

    In more than four years of observation the Large Area Telescope on board the Fermi satellite has identified pulsed gamma-ray emission from more than 80 young or middle-aged pulsars, in most cases providing light curves with high statistics. Fitting the observed profiles with geometrical models can provide estimates of the magnetic obliquity alpha and of the line of sight angle zeta, yielding estimates of the radiation beaming factor and radiated luminosity. Using different gamma-ray emission geometries (Polar Cap, Slot Gap, Outer Gap, One Pole Caustic) and core plus cone geometries for the radio emission, we fit gamma-ray light curves for 76 young or middle-aged pulsars and we jointly fit their gamma-ray plus radio light curves when possible. We find that a joint radio plus gamma-ray fit strategy is important to obtain (alpha, zeta) estimates that can explain simultaneously detectable radio and gamma-ray emission: when the radio emission is available, the inclusion of the radio light curve in the fit leads to important changes in the (alpha, zeta) solutions. The most pronounced changes are observed for Outer Gap and One Pole Caustic models, for which the gamma-ray-only fit leads to underestimated alpha or zeta when the solution is found to the left or to the right of the main alpha-zeta plane diagonal, respectively. The intermediate-to-high altitude magnetosphere models, Slot Gap, Outer Gap, and One Pole Caustic, are favored in explaining the observations. We find no apparent evolution of alpha on a time scale of 10⁶ years. For all emission geometries our derived gamma-ray beaming factors are generally less than one and do not significantly evolve with the spin-down power. A more pronounced beaming factor vs. spin-down power correlation is observed for the Slot Gap model and radio-quiet pulsars and for the Outer Gap model and radio-loud pulsars. The beaming factor distributions exhibit a large dispersion that is less pronounced for the Slot Gap case and that decreases from

  14. Model-based methodology to develop the isochronous stress-strain curves for modified 9Cr steels

    International Nuclear Information System (INIS)

    Kim, Woo Gon; Yin, Song Nan; Kim, Sung Ho; Lee, Chan Bock; Jung, Ik Hee

    2008-01-01

    Since high temperature materials are designed for a target life based on a specified amount of allowable strain and stress, their Isochronous Stress-Strain Curves (ISSCs) are needed to avoid excessive deformation during the intended service life. In this paper a model-based methodology to develop the isochronous curves for a G91 steel is described. Creep strain-time curves were reviewed for typical high-temperature materials, and Garofalo's model, which conforms well to the primary and secondary creep stages, was found suitable for the G91 steel. Procedures to obtain the instantaneous elastic-plastic strain, ε_i, are given in detail. Also, to accurately determine the P1, P2 and P3 parameters in Garofalo's model, a Nonlinear Least Square Fitting (NLSF) method was adopted and proved useful. The long-term creep curves for the G91 steel can be modeled by Garofalo's model, and the long-term ISSCs can be developed using the modeled creep curves.
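
    A minimal sketch of the NLSF step, assuming the commonly cited Garofalo form ε(t) = ε_i + P1·(1 − e^(−P2·t)) + P3·t for the P1–P3 parametrisation mentioned above; the creep data are synthetic.

```python
# Sketch: nonlinear least-squares fit of a Garofalo-type creep curve.
import numpy as np
from scipy.optimize import curve_fit

def garofalo(t, eps_i, P1, P2, P3):
    return eps_i + P1 * (1.0 - np.exp(-P2 * t)) + P3 * t

t = np.linspace(0, 5000, 60)                                  # hours
rng = np.random.default_rng(4)
strain = garofalo(t, 0.002, 0.01, 0.003, 2e-6) + rng.normal(0, 2e-4, t.size)

popt, _ = curve_fit(garofalo, t, strain, p0=(1e-3, 5e-3, 1e-3, 1e-6))
print("eps_i, P1, P2, P3 =", popt)
```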

  15. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    Science.gov (United States)

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.
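
    A minimal sketch of the displacement logic described above: producers and the new technology are stacked in cost merit order, and whatever capacity exceeds market demand drops out of the mix; the names, costs and capacities are invented.

```python
# Sketch: industry-cost-curve displacement after a new technology enters.
def displace(producers, new_tech, demand):
    """producers: list of (name, unit_cost, capacity); returns the active mix."""
    merit = sorted(producers + [new_tech], key=lambda p: p[1])  # cheapest first
    active, supplied = [], 0.0
    for name, cost, cap in merit:
        if supplied >= demand:
            break                          # remaining producers are displaced
        take = min(cap, demand - supplied)
        active.append((name, cost, take))
        supplied += take
    return active

producers = [("A", 50, 40), ("B", 70, 30), ("C", 90, 30)]
print(displace(producers, ("new", 60, 25), demand=100))
# C supplies only 5 units once the cheaper new technology enters.
```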

  16. WASP-36b: A NEW TRANSITING PLANET AROUND A METAL-POOR G-DWARF, AND AN INVESTIGATION INTO ANALYSES BASED ON A SINGLE TRANSIT LIGHT CURVE

    Energy Technology Data Exchange (ETDEWEB)

    Smith, A. M. S.; Anderson, D. R.; Hellier, C.; Maxted, P. F. L.; Smalley, B.; Southworth, J. [Astrophysics Group, Keele University, Staffordshire, ST5 5BG (United Kingdom); Collier Cameron, A. [SUPA, School of Physics and Astronomy, University of St Andrews, North Haugh, Fife, KY16 9SS (United Kingdom); Gillon, M.; Jehin, E. [Institut d'Astrophysique et de Geophysique, Universite de Liege, Allee du 6 Aout, 17 Bat. B5C, Liege 1 (Belgium); Lendl, M.; Queloz, D.; Triaud, A. H. M. J.; Pepe, F.; Segransan, D.; Udry, S. [Observatoire de Geneve, Universite de Geneve, 51 Chemin des Maillettes, 1290 Sauverny (Switzerland); West, R. G. [Department of Physics and Astronomy, University of Leicester, Leicester, LE1 7RH (United Kingdom); Barros, S. C. C.; Pollacco, D. [Astrophysics Research Centre, School of Mathematics and Physics, Queen's University, University Road, Belfast, BT7 1NN (United Kingdom); Street, R. A., E-mail: amss@astro.keele.ac.uk [Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117 (United States)

    2012-04-15

    We report the discovery, from WASP and CORALIE, of a transiting exoplanet in a 1.54 day orbit. The host star, WASP-36, is a magnitude V = 12.7, metal-poor G2 dwarf (T_eff = 5959 ± 134 K), with [Fe/H] = -0.26 ± 0.10. We determine the planet to have mass and radius, respectively, 2.30 ± 0.07 and 1.28 ± 0.03 times that of Jupiter. We have eight partial or complete transit light curves, from four different observatories, which allow us to investigate the potential effects on the fitted system parameters of using only a single light curve. We find that the solutions obtained by analyzing each of these light curves independently are consistent with our global fit to all the data, despite the apparent presence of correlated noise in at least two of the light curves.

  17. Institutional Fit and River Basin Governance: a New Approach Using Multiple Composite Measures

    Directory of Open Access Journals (Sweden)

    Louis Lebel

    2013-03-01

    Full Text Available The notion that effective environmental governance depends in part on achieving a reasonable fit between institutional arrangements and the features of ecosystems and their interconnections with users has been central to much thinking about social-ecological systems for more than a decade. Based on expert consultations, this study proposes a set of six dimensions of fit for water governance regimes and then empirically explores variation in measures of these in 28 case studies of national parts of river basins in Europe, Asia, Latin America, and Africa, drawing on a database compiled by the Twin2Go project. The six measures capture different but potentially important dimensions of fit: allocation, integration, conservation, basinization, participation, and adaptation. Based on combinations of responses to a standard questionnaire filled in by groups of experts in each basin, we derived quantitative measures for each indicator. Substantial variation in these measures of fit was apparent among basins in developing and developed countries. Geographical location is not a barrier to high institutional fit; but within basins different measures of fit often diverge. This suggests it is difficult, but not impossible, to simultaneously achieve a high fit against multiple challenging conditions. Comparing multidimensional fit profiles gives a sense of how well water governance regimes are equipped for dealing with a range of natural resource and use-related conditions and suggests areas for priority intervention. The findings of this study thus confirm and help explain previous work that has concluded that context is important for understanding the variable consequences of institutional reform on water governance practices as well as on social and environmental outcomes.

  18. VRF ("Visual RobFit") — nuclear spectral analysis with non-linear full-spectrum nuclide shape fitting

    Science.gov (United States)

    Lasche, George; Coldwell, Robert; Metzger, Robert

    2017-09-01

    A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak-search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks that are masked by larger, overlapping peaks that would not otherwise be possible. The application and method are briefly described and two examples are presented.

  19. Methodology and Limitations of Ohio Enrollment Projections. The AIR Professional File, No. 4, Winter 1979-80.

    Science.gov (United States)

    Kraetsch, Gayla A.

    Two quantitative enrollment projection techniques and the methods used by researchers at the Ohio Board of Regents (OBR) are discussed. Two quantitative approaches that are typically used for enrollment projections are curve-fitting techniques and causal models. Many state forecasters use curve-fitting techniques, a popular approach because only…

  20. Application of Learning Curves for Didactic Model Evaluation: Case Studies

    Directory of Open Access Journals (Sweden)

    Felix Mödritscher

    2013-01-01

    Full Text Available The success of (online) courses depends, among other factors, on the underlying didactical models, which have always been evaluated with qualitative and quantitative research methods. Several new evaluation techniques have been developed and established in the last years. One of them is 'learning curves', which aim at measuring error rates of users when they interact with adaptive educational systems, thereby enabling the underlying models to be evaluated and improved. In this paper, we report how we have applied this new method to two case studies to show that learning curves are useful to evaluate didactical models and their implementation in educational platforms. Results show that the error rates follow a power law distribution with each additional attempt if the didactical model of an instructional unit is valid. Furthermore, the initial error rate, the slope of the curve and the goodness of fit of the curve are valid indicators for the difficulty level of a course and the quality of its didactical model. As a conclusion, the idea of applying learning curves for evaluating didactical models on the basis of usage data is considered to be valuable for supporting teachers and learning content providers in improving their online courses.
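
    A minimal sketch of the power-law check described above: regressing log error rate on log attempt number gives the slope and goodness of fit (R²) used as difficulty and quality indicators; the error-rate series is invented.

```python
# Sketch: test whether error rates decay as a power law of attempt number.
import numpy as np

attempts = np.arange(1, 11)
error_rate = np.array([0.52, 0.34, 0.27, 0.23, 0.20, 0.19, 0.17, 0.16, 0.15, 0.14])

slope, intercept = np.polyfit(np.log(attempts), np.log(error_rate), 1)
pred = intercept + slope * np.log(attempts)
resid = np.log(error_rate) - pred
r2 = 1 - np.sum(resid ** 2) / np.sum((np.log(error_rate) - np.log(error_rate).mean()) ** 2)
print(f"power-law slope = {slope:.2f}, R^2 = {r2:.3f}")
```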

  1. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  2. An integral parametrization of the bacterial growth curve experimental demonstration with E. coli C600 bacteria

    International Nuclear Information System (INIS)

    Garces, F.; Vidania, R. de

    1984-01-01

    In this work an integral parametrization of the bacterial growth curve is presented. The values of the parameters are obtained by fitting to the experimental data. These parameters, which allow the growth to be described in its different phases, are the following: the slopes of the curve in its three parts and the time which divides the last two phases of the bacterial growth. The experimental data are bacterial densities measured by optical methods. The bacterium used was E. coli C600. (Author)

  3. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R 2 , while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
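
    A minimal sketch of the horizontal-translation idea, with linear interpolation standing in for the paper's trigonometric construction; build_mrc and the synthetic recession segments are illustrative, not the MRCTools implementation.

```python
# Sketch: assemble a master recession curve (MRC) by shifting each recession
# segment in time so that its vertex lands on the preceding segment.
import numpy as np

def build_mrc(segments):
    t_parts = [np.asarray(segments[0][0], float)]
    h_parts = [np.asarray(segments[0][1], float)]
    for t, h in segments[1:]:
        prev_t, prev_h = t_parts[-1], h_parts[-1]
        # time on the preceding segment where head equals the new vertex h[0]
        t_hit = np.interp(h[0], prev_h[::-1], prev_t[::-1])  # heads decrease
        t_parts.append(np.asarray(t, float) + (t_hit - t[0]))
        h_parts.append(np.asarray(h, float))
    return np.concatenate(t_parts), np.concatenate(h_parts)

t1, t2 = np.arange(10.0), np.arange(8.0)
seg1 = (t1, 10.0 * np.exp(-0.1 * t1))
seg2 = (t2, 7.0 * np.exp(-0.1 * t2))     # starts lower: shifted onto seg1
t_mrc, h_mrc = build_mrc([seg1, seg2])
print(np.round(h_mrc, 2))
```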

  4. Anatomical curve identification

    Science.gov (United States)

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  5. Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction

    Science.gov (United States)

    Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.

    2012-12-01

    ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
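
    A minimal sketch of a rating curve with ARMA(1,2) errors, assuming statsmodels' SARIMAX as the estimator; the flow and turbidity series are synthetic stand-ins for the observed data.

```python
# Sketch: log-log rating curve regression with an ARMA(1,2) error process,
# fitted jointly so forecasts can exploit autocorrelated residuals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
log_q = rng.normal(3.0, 0.8, 300)                        # log streamflow
log_turb = 0.9 * log_q - 1.0 + rng.normal(0, 0.3, 300)   # log turbidity

model = sm.tsa.SARIMAX(log_turb, exog=sm.add_constant(log_q), order=(1, 0, 2))
result = model.fit(disp=False)
print(result.params)   # intercept, slope, AR(1), MA(1), MA(2), sigma2
```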

  6. Treatment of External Levels in Neutron Resonance Fitting: Application to the Nonfissile Nuclide 52Cr

    International Nuclear Information System (INIS)

    Froehner, Fritz H.; Bouland, Olivier

    2001-01-01

    Measured neutron resonance cross sections are usually analyzed and parametrized by fitting theoretical curves to high-resolution point data. Theoretically, the cross sections depend mainly on the 'internal' levels inside the fitted energy range but also on the 'external' levels outside. Although the external levels are mostly unknown, they must be accounted for. If they are simply omitted, the experimental data cannot be fitted satisfactorily. Especially with elastic scattering and total cross-section data, one gets troublesome edge effects and difficulties with the potential cross section between resonances. Various ad hoc approaches to these problems are still being used, involving replacement of the unknown levels by equidistant ('picket fence') or Monte Carlo-sampled resonance sequences, or replication of the internal level sequence; however, more convenient, better working, and theoretically sound techniques have been available for decades. These analytical techniques are reviewed. They describe the contribution of external levels to the R matrix concisely in terms of average resonance parameters (strength function, effective radius, etc.). A more recent, especially convenient approximation accounts for the edge effects by just one fictitious pair of very broad external resonances. Fitting the thermal region, including accurately known thermal cross sections, is often done by adjusting a number of bound levels by trial and error, although again a simple analytical recipe involving just one bound level has been available for a long time. For illustration, these analytical techniques are applied to the resolved resonance region of 52Cr. The distinction between channel radii and effective radii, crucial in the present context, is emphasized

  7. Derivation of Path Independent Coupled Mix Mode Cohesive Laws from Fracture Resistance Curves

    DEFF Research Database (Denmark)

    Goutianos, Stergios

    2016-01-01

    A generalised approach is presented to derive coupled mixed mode cohesive laws described with physical parameters such as peak traction, critical opening, fracture energy and cohesive shape. The approach is based on deriving mixed mode fracture resistance curves from an effective mixed mode cohesive law at different mode mixities. From the fracture resistance curves, the normal and shear stresses of the cohesive laws can be obtained by differentiation. Since the mixed mode cohesive laws are obtained from a fracture resistance curve (potential function), path independence is automatically…

  8. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely is largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated if the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored if assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision

  9. Relationship Between Crack Growth Resistance KR Curve and Specimen Width for 2060 - T8E30 Lithium Aluminum Alloy

    Directory of Open Access Journals (Sweden)

    Tong Di Hua

    2016-01-01

    Full Text Available The KR crack growth resistance curve can be used to predict crack propagation behavior and to estimate the load-bearing capacity of a cracked component, so KR curve research occupies a very important position in fracture mechanics. Based on crack growth resistance KR test curves of 2060-T8E30 lithium aluminum alloy specimens of the same thickness but different widths, the study shows that, for a given thickness, the influence of width on the crack growth resistance curve can be neglected. An empirical equation for the crack growth resistance curve of the smaller-width specimen is given. Extending the fitted equation to the larger width shows that it coincides closely with the experimental results.

  10. Parametric representation of centrifugal pump homologous curves

    International Nuclear Information System (INIS)

    Veloso, Marcelo A.; Mattos, Joao R.L. de

    2015-01-01

    Essential for any mathematical model designed to simulate flow transient events caused by pump operations is the pump performance data. The performance of a centrifugal pump is characterized by four basic quantities: the rotational speed, the volumetric flow rate, the dynamic head, and the hydraulic torque. The curves showing the relationships between these four variables are called the pump characteristic curves. The characteristic curves are empirically developed by the pump manufacturer and uniquely describe head and torque as functions of volumetric flow rate and rotation speed. Because it comprises a large number of points, this configuration is not suitable for computational purposes. However, it can be converted to a simpler form by the development of the homologous curves, in which dynamic head and hydraulic torque ratios are expressed as functions of volumetric flow and rotation speed ratios. The numerical use of the complete set of homologous curves requires specification of sixteen partial curves, eight for the dynamic head and eight for the hydraulic torque. As a consequence, the handling of homologous curves is still somewhat complicated. In solving flow transient problems that require the pump characteristic data for all the operation zones, the parametric form appears as the simplest way to deal with the homologous curves. In this approach, the complete characteristics of a pump can be described by only two closed curves, one for the dynamic head and the other for the hydraulic torque, both as functions of a single angular coordinate defined adequately in terms of the quotient between the volumetric flow ratio and the rotation speed ratio. The usefulness and advantages of this alternative method are demonstrated through a practical example in which the homologous curves for a pump of the type used in the main coolant loops of a pressurized water reactor (PWR) are transformed to the parametric form. (author)
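
    A Suter-type transformation of this kind is compact enough to sketch directly. In the fragment below, the single angular coordinate is taken as atan2(flow ratio, speed ratio), and head and torque are normalized by the sum of their squares; the data arrays are synthetic stand-ins for real characteristic curves:

        import numpy as np

        # Homologous data: speed ratio a, flow ratio v, head ratio h, torque ratio b.
        a = np.full(50, 1.0)              # constant rated speed for this sweep
        v = np.linspace(0.0, 1.5, 50)     # flow ratio
        h = 1.3 - 0.3 * v**2              # synthetic head curve
        b = 0.4 + 0.6 * v                 # synthetic torque curve

        # Parametric (Suter-style) form: one angular coordinate covers all zones,
        # so two closed curves WH(x), WB(x) replace the sixteen partial curves.
        x = np.arctan2(v, a)              # angular coordinate
        wh = h / (a**2 + v**2)            # normalized head
        wb = b / (a**2 + v**2)            # normalized torque
        print(np.c_[x[:3], wh[:3], wb[:3]])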

  11. Understanding the reductions in US corn ethanol production costs: An experience curve approach

    International Nuclear Information System (INIS)

    Hettinga, W.G.; Junginger, H.M.; Dekker, S.C.; Hoogwijk, M.; McAloon, A.J.; Hicks, K.B.

    2009-01-01

    The US is currently the world's largest ethanol producer. An increasing percentage is used as transportation fuel, but debates continue on its cost competitiveness and energy balance. In this study, technological development of ethanol production and resulting cost reductions are investigated by using the experience curve approach, scrutinizing costs of dry grind ethanol production over the timeframe 1980-2005. Cost reductions are differentiated between feedstock (corn) production and industrial (ethanol) processing. Corn production costs in the US have declined by 62% over 30 years, down to 100 US$(2005)/tonne in 2005, while corn production volumes almost doubled since 1975. A progress ratio (PR) of 0.55 is calculated, indicating a 45% cost decline over each doubling in cumulative production. Higher corn yields and increasing farm sizes are the most important drivers behind this cost decline. Industrial processing costs of ethanol have declined by 45% since 1983, to below 130 US$(2005)/m3 in 2005 (excluding costs for corn and capital), equivalent to a PR of 0.87. Total ethanol production costs (including capital and net corn costs) have declined approximately 60%, from 800 US$(2005)/m3 in the early 1980s to 300 US$(2005)/m3 in 2005. Higher ethanol yields, lower energy use and the replacement of beverage alcohol-based production technologies have mostly contributed to this substantial cost decline. In addition, the average size of dry grind ethanol plants increased by 235% since 1990. For the future it is estimated that, solely due to technological learning, production costs of ethanol may decline 28-44%, though this excludes effects of the current rising corn and fossil fuel costs. It is also concluded that experience curves are a valuable tool to describe both past and potential future cost reductions in US corn-based ethanol production.
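
    The experience curve itself is a one-line regression: cost scales as cumulative production raised to a power b, and the progress ratio is PR = 2^b. A hedged sketch with synthetic data (the series below is generated, not the study's cost data):

        import numpy as np

        # Experience curve: cost = C0 * (cumulative production)**b, PR = 2**b,
        # so PR = 0.87 means a 13% cost decline per doubling of output.
        rng = np.random.default_rng(0)
        cum_prod = np.array([1., 2., 4., 8., 16., 32.])      # cumulative output
        cost = 800.0 * cum_prod ** np.log2(0.87)             # synthetic costs
        cost *= np.exp(rng.normal(0.0, 0.02, cost.size))     # measurement noise

        # Ordinary least squares in log2-log2 space recovers b, hence PR.
        b, log2_c0 = np.polyfit(np.log2(cum_prod), np.log2(cost), 1)
        print("progress ratio:", round(2.0 ** b, 3))         # ~0.87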

  12. A unified conformational selection and induced fit approach to protein-peptide docking.

    Directory of Open Access Journals (Sweden)

    Mikael Trellet

    Full Text Available Protein-peptide interactions are vital for the cell. They mediate, inhibit or serve as structural components in nearly 40% of all macromolecular interactions, and are often associated with diseases, making them interesting leads for protein drug design. In recent years, large-scale technologies have enabled exhaustive studies on the peptide recognition preferences for a number of peptide-binding domain families. Yet, the paucity of data regarding their molecular binding mechanisms together with their inherent flexibility makes the structural prediction of protein-peptide interactions very challenging. This leaves flexible docking as one of the few amenable computational techniques to model these complexes. We present here an ensemble, flexible protein-peptide docking protocol that combines conformational selection and induced fit mechanisms. Starting from an ensemble of three peptide conformations (extended, α-helix, polyproline-II), flexible docking with HADDOCK generates 79.4% of high quality models for bound/unbound and 69.4% for unbound/unbound docking when tested against the largest protein-peptide complex benchmark dataset available to date. Conformational selection at the rigid-body docking stage successfully recovers the most relevant conformation for a given protein-peptide complex and the subsequent flexible refinement further improves the interface by up to 4.5 Å interface RMSD. Cluster-based scoring of the models results in a selection of near-native solutions in the top three for ∼75% of the successfully predicted cases. This unified conformational selection and induced fit approach to protein-peptide docking should open the route to the modeling of challenging systems such as disorder-order transitions taking place upon binding, significantly expanding the applicability limit of biomolecular interaction modeling by docking.

  13. Phase Curve Analysis of Super-Earth 55 Cancri e

    Science.gov (United States)

    Angelo, Isabel; Hu, Renyu

    2018-01-01

    One of the primary questions when characterizing Earth-sized and super-Earth-sized exoplanets is whether they have a substantial atmosphere like Earth and Venus, or a bare-rock surface that may come with a tenuous atmosphere like Mercury. Phase curves of the planets in thermal emission provide clues to this question, because a substantial atmosphere would transport heat more efficiently than a bare-rock surface. Analyzing phase curve photometric data around secondary eclipse has previously been used to study energy transport in the atmospheres of hot Jupiters. Here we use Spitzer phase-curve time-series photometry to study the thermal emission properties of the super-Earth exoplanet 55 Cancri e. We utilize a previously developed semi-analytical framework to fit a physical model to infrared photometric data of the host star 55 Cancri from the Spitzer telescope IRAC 2 band at 4.5 μm. The model uses various parameters of planetary properties including Bond albedo, heat redistribution efficiency (i.e., the ratio between the radiative timescale and advective timescale of the photosphere), and atmospheric greenhouse factor. The phase curve of 55 Cancri e is dominated by thermal emission with an eastward-shifted hot spot located on the planet surface. We determine the heat redistribution efficiency to be ≈1.47, which implies that the advective timescale is on the same order as the radiative timescale. This requirement from the phase curve cannot be met by the bare-rock planet scenario, because heat transport by currents of molten lava would be too slow. The phase curve thus favors the scenario with a substantial atmosphere. Our constraints on the heat redistribution efficiency translate to a photosphere pressure of ~1.4 bar. The Spitzer IRAC 2 band is thus a window into the deep atmosphere of the planet 55 Cancri e.

  14. What are the key drivers of MAC curves? A partial-equilibrium modelling approach for the UK

    International Nuclear Information System (INIS)

    Kesicki, Fabian

    2013-01-01

    Marginal abatement cost (MAC) curves are widely used for the assessment of costs related to CO2 emissions reduction in environmental economics, as well as domestic and international climate policy. Several meta-analyses and model comparisons have previously been performed that aim to identify the causes for the wide range of MAC curves. Most of these concentrate on general equilibrium models with a focus on aspects such as specific model type and technology learning, while other important aspects remain almost unconsidered, including the availability of abatement technologies and the level of discount rates. This paper addresses the influence of several key parameters on MAC curves for the United Kingdom and the year 2030. A technology-rich energy system model, UK MARKAL, is used to derive the MAC curves. The results of this study show that MAC curves are robust even to extreme fossil fuel price changes, while uncertainty around the choice of the discount rate, the availability of key abatement technologies and the demand level were singled out as the most important influencing factors. By using a different model type and studying a wider range of influencing factors, this paper contributes to the debate on the sensitivity of MAC curves. - Highlights: ► A partial-equilibrium model is employed to test key sensitivities of MAC curves. ► MAC curves are found to be robust to wide-ranging changes in fossil fuel prices. ► The most influential factors are the discount rate and the availability of key technologies. ► Further important uncertainty in MAC curves is related to demand changes.

  15. Inferring genetic interactions from comparative fitness data.

    Science.gov (United States)

    Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko

    2017-12-20

    Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM family of β-lactamases associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.

  16. vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments

    Directory of Open Access Journals (Sweden)

    Demeter Lisa

    2010-05-01

    Full Text Available Abstract Background The replication rate (or fitness) between viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. The accurate estimation of viral fitness relies on complicated computations based on statistical methods. This calls for tools that are easy to access and intuitive to use for various experiments of viral fitness. Results Based on a mathematical model and several statistical methods (least-squares approach and measurement error models), a Web-based computing tool has been developed for improving estimation of virus fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data in the competition experiment to more accurately estimate relative viral fitness parameters. The dilution factor is introduced to make the computational tool more flexible in accommodating various experimental conditions. This Web-based tool is implemented in C# language with Microsoft ASP.NET, and is publicly available on the Web at http://bis.urmc.rochester.edu/vFitness/.
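
    The regression idea behind such tools is simple to sketch: in a two-variant competition assay grown under exponential kinetics, the slope of the log ratio of the variants over time estimates their net growth rate difference, using all time points instead of only the first and last. The numbers below are invented for illustration, and this is not the vFitness implementation:

        import numpy as np

        t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # days of co-culture
        frac_A = np.array([0.50, 0.58, 0.66, 0.72, 0.79])  # measured fraction of variant A

        # Under exponential growth, log(A/B) is linear in time with slope r_A - r_B.
        log_ratio = np.log(frac_A / (1.0 - frac_A))
        slope, intercept = np.polyfit(t, log_ratio, 1)
        print("growth-rate advantage of A per day:", round(slope, 4))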

  17. Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach.

    Science.gov (United States)

    Hawkley, Louise C; Capitanio, John P

    2015-05-26

    Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationships, as indexed by perceived social isolation (i.e. loneliness), with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic-pituitary-adrenocortical activity, a pro-inflammatory gene expression profile and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and its physiological and psychological underpinnings, in non-human species may be a valuable direction to better understand the persistence of a 'lonely' phenotype in social species, and its consequences for health and fitness. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Interior Temperature Measurement Using Curved Mercury Capillary Sensor Based on X-ray Radiography

    Science.gov (United States)

    Chen, Shuyue; Jiang, Xing; Lu, Guirong

    2017-07-01

    A method was presented for measuring the interior temperature of objects using a curved mercury capillary sensor based on X-ray radiography. The sensor is composed of a mercury bubble, a capillary and a fixed support. X-ray digital radiography was employed to capture images of the mercury column in the capillary, and a temperature control system was designed for the sensor calibration. We adopted livewire algorithms and mathematical morphology to calculate the mercury length. A measurement model relating mercury length to temperature was established, and the measurement uncertainty associated with the mercury column length and the linear model fitted by the least-squares method were analyzed. To verify the system, the interior temperature of an autoclave, which is totally closed, was measured from 29.53°C to 67.34°C. The experiment results show that the response of the system is approximately linear, with an uncertainty of at most 0.79°C. This technique provides a new approach to measuring the interior temperature of objects.
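
    The calibration step amounts to a least-squares line relating column length to temperature, inverted at measurement time. A small sketch (the lengths, temperatures and residual-based uncertainty are illustrative assumptions):

        import numpy as np

        T = np.array([30.0, 37.5, 45.0, 52.5, 60.0, 67.5])   # reference temp (deg C)
        L = np.array([112., 149., 185., 222., 260., 295.])   # mercury length (pixels)

        # Least-squares calibration line L = a*T + b, inverted to read temperature.
        a, b = np.polyfit(T, L, 1)
        predict_T = lambda length: (length - b) / a

        # Scatter of the calibration residuals in temperature units gives a quick
        # uncertainty estimate, analogous to the reported maximum of 0.79 deg C.
        resid = T - predict_T(L)
        print(round(predict_T(200.0), 2), round(resid.std(ddof=2), 3))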

  19. Fitness

    Science.gov (United States)

    Want to look and feel your best? What is physical fitness? Physical fitness means you can do everyday …

  20. A Fit-For-Purpose approach to Land Administration in Africa in support of the new 2030 Global Agenda

    DEFF Research Database (Denmark)

    Enemark, Stig

    2017-01-01

    … on legacy approaches, have been fragmented and have not delivered the required pervasive changes and improvements at scale. The solutions have not helped the most needy, the poor and disadvantaged that have no security of tenure. In fact the beneficiaries have often been the elite and organizations involved in land grabbing. It is time to rethink the approaches. New solutions are required that can deliver security of tenure for all, are affordable and can be quickly developed and incrementally improved over time. The Fit-For-Purpose (FFP) approach to land administration has emerged to meet … administration systems is the only viable solution to solving the global security of tenure divide. The FFP approach is flexible and includes the adaptability to meet the actual and basic needs of society today and having the capability to be incrementally improved over time. This will be triggered in response…

  1. Constraining brane tension using rotation curves of galaxies

    Science.gov (United States)

    García-Aspeitia, Miguel A.; Rodríguez-Meza, Mario A.

    2018-04-01

    We present in this work a study of brane theory phenomenology focusing on the brane tension parameter, which is the main observable of the theory. We show the modifications stemming from the presence of branes in the rotation curves of spiral galaxies for three well known dark matter density profiles: pseudo-isothermal, Navarro-Frenk-White and Burkert. We estimate the brane tension parameter using a sample of high resolution observed rotation curves of low surface brightness spiral galaxies and a synthetic rotation curve for the three density profiles. Also, the fittings using the brane theory model of the rotation curves are compared with standard Newtonian models. We found that the Navarro-Frenk-White model prefers lower values of the brane tension parameter, on average λ ∼ 0.73 × 10⁻³ eV⁴, therefore showing clear brane effects. The Burkert case prefers higher values of the tension parameter, on average λ ∼ 0.93–46 eV⁴, i.e., negligible brane effects, whereas the pseudo-isothermal profile is an intermediate case. Due to the low densities found in the galactic medium it is almost impossible to find evidence of the presence of extra dimensions. In this context, we found that our results place weaker bounds on the brane tension than those found previously, such as the lower value λ ≈ 10⁴ MeV⁴ found for dwarf stars described by a polytropic equation of state.
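
    The Newtonian baseline such studies compare against is a standard least-squares halo fit. A sketch for the pseudo-isothermal profile (the enclosed-mass formula is standard; the data points, errors and starting values are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

        # Circular velocity of a pseudo-isothermal halo, rho(r) = rho0/(1+(r/rc)^2).
        def v_iso(r, rho0, rc):
            m_enc = 4.0*np.pi*rho0*rc**3 * (r/rc - np.arctan(r/rc))
            return np.sqrt(G * m_enc / r)

        r_obs = np.array([1., 2., 4., 6., 8., 10., 12.])           # radius (kpc)
        v_obs = np.array([45., 68., 92., 104., 110., 113., 115.])  # velocity (km/s)
        v_err = np.full_like(v_obs, 4.0)

        popt, pcov = curve_fit(v_iso, r_obs, v_obs, sigma=v_err,
                               p0=[1e7, 2.0], absolute_sigma=True)
        print("rho0, rc:", popt, "+/-", np.sqrt(np.diag(pcov)))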

  2. Bragg Curve, Biological Bragg Curve and Biological Issues in Space Radiation Protection with Shielding

    Science.gov (United States)

    Honglu, Wu; Cucinotta, F.A.; Durante, M.; Lin, Z.; Rusek, A.

    2006-01-01

    The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X-rays, the presence of shielding does not always reduce the radiation risks for energetic charged particle exposure. Since the dose delivered by a charged particle increases sharply as the particle approaches the end of its range, a position known as the Bragg peak, the Bragg curve does not necessarily represent the biological damage along the particle traversal, since biological effects are influenced by the track structure of both primary and secondary particles. Therefore, the biological Bragg curve depends on the energy and the type of the primary particle, and may vary for different biological endpoints. To achieve a Bragg curve distribution, we exposed cells to energetic heavy ions with the beam geometry parallel to a monolayer of fibroblasts. Qualitative analyses of gamma-H2AX fluorescence, a known marker of DNA double-strand breaks (DSBs), indicated increased clustering of DNA damage before the Bragg peak, enhanced homogeneous distribution at the peak, and provided visual evidence of high linear energy transfer (LET) particle traversal of cells beyond the Bragg peak. A quantitative biological response curve generated for micronuclei (MN) induction across the Bragg curve did not reveal an increased yield of MN at the location of the Bragg peak. However, the ratio of mono- to bi-nucleated cells, which indicates inhibition of cell progression, increased at the Bragg peak location. These results, along with other biological concerns, show that space radiation protection with shielding can be a complicated issue.

  3. Nonlinear Growth Curves in Developmental Research

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam; Hamagami, Fumiaki

    2011-01-01

    Developmentalists are often interested in understanding change processes and growth models are the most common analytic tool for examining such processes. Nonlinear growth curves are especially valuable to developmentalists because the defining characteristics of the growth process such as initial levels, rates of change during growth spurts, and asymptotic levels can be estimated. A variety of growth models are described beginning with the linear growth model and moving to nonlinear models of varying complexity. A detailed discussion of nonlinear models is provided, highlighting the added insights into complex developmental processes associated with their use. A collection of growth models are fit to repeated measures of height from participants of the Berkeley Growth and Guidance Studies from early childhood through adulthood. PMID:21824131

  4. The role of creep in stress strain curves for copper

    International Nuclear Information System (INIS)

    Sandström, Rolf; Hallgren, Josefin

    2012-01-01

    Highlights: ► A dislocation based model takes into account both dynamic and static recovery. ► Tests at constant load and at constant strain rate are modelled without fitting parameters. ► The model can describe primary and secondary creep of Cu-OFP from 75 to 250 °C. ► The temperature and strain rate dependence of stress strain curves can be modelled. ► Intended for the slow strain rates in canisters for storage of nuclear waste. - Abstract: A model for plastic deformation in pure copper, taking work hardening, dynamic recovery and static recovery into account, has been formulated using basic dislocation mechanisms. The model is intended to be used in finite-element computations of the long term behaviour of structures in Cu-OFP for storage of nuclear waste. The relation between the strain rate and the maximum flow stress in the model has been demonstrated to correspond to strain rate versus stress in creep tests for oxygen-free copper alloyed with phosphorus, Cu-OFP. A further development of the model can also represent the primary and secondary stages of creep curves. The model is compared to stress strain curves in compression and tension for Cu-OFP. The compression tests were performed at room temperature for strain rates between 5 × 10⁻⁵ and 5 × 10⁻³ s⁻¹. The tests in tension covered the temperature range 20–175 °C for strain rates between 1 × 10⁻⁷ and 1 × 10⁻⁴ s⁻¹. Consequently, it is demonstrated that the model can represent mechanical test data that have been generated both at constant load and at constant strain rate without the use of any fitting parameters.

  5. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
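
    A simplified version of this extrapolation idea can be sketched by fitting a saturating curve to richness estimates from nested subsamples and reading off its asymptote; this follows the same spirit as the paper's procedure, though it is not the exact recipe, and the numbers are invented:

        import numpy as np
        from scipy.optimize import curve_fit

        # Richness estimates obtained from nested subsets of a clone library
        # (illustrative values; the study used subsets of 13,001 16S clones).
        n_clones = np.array([1000., 2000., 4000., 6000., 9000., 13000.])
        richness = np.array([3200., 5600., 8900., 11000., 13400., 15000.])

        # Saturating model S(n) = Smax * n / (B + n); the asymptote Smax plays
        # the role of a sample-size-unbiased richness estimate.
        f = lambda n, smax, b: smax * n / (b + n)
        (smax, b), _ = curve_fit(f, n_clones, richness, p0=[2e4, 5e3])
        print("asymptotic richness estimate:", round(smax))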

  6. Reactor Pressure Vessel P-T Limit Curve Round Robin

    Energy Technology Data Exchange (ETDEWEB)

    Jang, C.H.; Moon, H.R.; Jeong, I.S. [Korea Electric Power Research Institute, Taejon (Korea)

    2002-07-01

    This report summarizes the analysis results for the P-T limit curve constructions that were subjected to the round robin analysis. The purpose of the round robin is to compare the procedures and methods used in various organizations to construct the P-T limit curve, which prevents brittle fracture of reactor pressure vessels of nuclear power plants. Each participant used its own approach to construct the P-T limit curve and submitted the results. By analyzing the results, a reference procedure for the P-T limit curve could be established. This report includes the comparison of the procedures and methods used by the participants, and a sensitivity study of the key parameters. (author) 23 refs, 88 figs, 17 tabs.

  7. Hamiltonian inclusive fitness: a fitter fitness concept.

    Science.gov (United States)

    Costa, James T

    2013-01-01

    In 1963-1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's 'special difficulty': the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of 'offspring ideas' that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated.

  8. Use of structure-activity landscape index curves and curve integrals to evaluate the performance of multiple machine learning prediction models.

    Science.gov (United States)

    Ledonne, Norman C; Rissolo, Kevin; Bulgarelli, James; Tini, Leonard

    2011-02-07

    Standard approaches to address the performance of predictive models that use common statistical measurements for the entire data set provide an overview of the average performance of the models across the entire predictive space, but give little insight into the applicability of the model across the prediction space. Guha and Van Drie recently proposed the use of structure-activity landscape index (SALI) curves via the SALI curve integral (SCI) as a means to map the predictive power of computational models within the predictive space. This approach evaluates model performance by assessing the accuracy of pairwise predictions, comparing compound pairs in a manner similar to that done by medicinal chemists. The SALI approach was used to evaluate the performance of continuous prediction models for MDR1-MDCK in vitro efflux potential. Efflux models were built with ADMET Predictor neural net, support vector machine, kernel partial least squares, and multiple linear regression engines, as well as SIMCA-P+ partial least squares, and random forest from Pipeline Pilot as implemented by AstraZeneca, using molecular descriptors from SimulationsPlus and AstraZeneca. The results indicate that the choice of training sets used to build the prediction models is of great importance to the resulting model quality, and that the SCI values calculated for these models were very similar to their Kendall τ values. This leads to our suggestion of an approach that uses this SALI/SCI paradigm to evaluate predictive model performance, allowing more informed decisions regarding model utility. The use of SALI graphs and curves provides an additional level of quality assessment for predictive models.
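
    The pairwise-ordering idea that links the SCI to Kendall τ is easy to demonstrate: score the fraction of compound pairs whose predicted ordering agrees with the observed ordering. The sketch below uses invented activities and omits the SALI similarity weighting of the original formulation:

        import numpy as np
        from itertools import combinations

        obs  = np.array([0.2, 1.4, 0.9, 2.5, 1.1, 3.0])   # measured efflux values
        pred = np.array([0.3, 1.1, 1.0, 2.2, 1.5, 2.6])   # model predictions

        # Fraction of pairs ranked in the same order by model and experiment;
        # rescaled to [-1, 1] this is Kendall's tau (no ties assumed here).
        pairs = list(combinations(range(obs.size), 2))
        agree = sum(np.sign(obs[i]-obs[j]) == np.sign(pred[i]-pred[j]) for i, j in pairs)
        print("pairwise accuracy:", agree/len(pairs), "tau:", 2.0*agree/len(pairs) - 1.0)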

  9. Multiple Beta Spectrum Analysis Method Based on Spectrum Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Uk Jae; Jung, Yun Song; Kim, Hee Reyoung [UNIST, Ulsan (Korea, Republic of)

    2016-05-15

    When a sample of several mixed radioactive nuclides is measured, it is difficult to separate the individual nuclides because their spectra overlap. For this reason, a simple mathematical analysis method for the spectrum of a mixed beta-ray source has been studied. However, the existing research was in need of a more accurate spectral analysis method, as it had a problem with accuracy. This study describes methods for separating the contributions of a mixed beta-ray source through analysis of the beta spectrum slope based on curve fitting. Of the fitting methods considered (Fourier, polynomial, Gaussian and sum of sines), the sum-of-sines method was found to be the best for obtaining an equation for the distribution of the mixed beta spectrum. It was shown to be the most appropriate for the analysis of spectra with various ratios of mixed nuclides. It is thought that this method could be applied to rapid spectrum analysis of mixed beta-ray sources.
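
    A sum-of-sines fit of this kind is a routine nonlinear least-squares problem. The sketch below fits a two-term sum of sines to a synthetic spectrum generated from the same model plus noise; a real application would fit measured channel counts and would likely need more terms:

        import numpy as np
        from scipy.optimize import curve_fit

        def sum_of_sines(x, a1, b1, c1, a2, b2, c2):
            return a1*np.sin(b1*x + c1) + a2*np.sin(b2*x + c2)

        E = np.linspace(0.05, 1.5, 120)                           # energy (MeV)
        rng = np.random.default_rng(3)
        counts = sum_of_sines(E, 600., 2.1, 0.3, 250., 5.2, 1.0)  # synthetic spectrum
        counts += rng.normal(0.0, 10.0, E.size)                   # counting noise

        p0 = [500., 2.0, 0.0, 200., 5.0, 0.8]                     # rough starting guesses
        popt, _ = curve_fit(sum_of_sines, E, counts, p0=p0, maxfev=20000)
        print(np.round(popt, 2))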

  10. Testing modified gravity at large distances with the HI Nearby Galaxy Survey's rotation curves

    Science.gov (United States)

    Mastache, Jorge; Cervantes-Cota, Jorge L.; de la Macorra, Axel

    2013-03-01

    Recently a new, quantum-motivated theory of gravity has been proposed that modifies the standard Newtonian potential at large distances when spherical symmetry is considered. Accordingly, Newtonian gravity is altered by adding an extra Rindler acceleration term that has to be determined phenomenologically. Here we consider a standard and a power-law generalization of the Rindler-modified Newtonian potential. The new terms in the gravitational potential are hypothesized to play the role of dark matter in galaxies. Our galactic model includes the mass of the integrated gas, and stars for which we consider three stellar mass functions (Kroupa, diet-Salpeter, and free mass model). We test this idea by fitting rotation curves of seventeen low surface brightness galaxies from The HI Nearby Galaxy Survey (THINGS). We find that the Rindler parameters do not produce a suitable fit to the rotation curves in comparison to standard dark matter profiles (Navarro-Frenk-White and Burkert) and, in addition, the computed parameters of the Rindler gravity show a high spread, posing the model as an unacceptable alternative to dark matter.

  11. Master sintering curve: A practical approach to its construction

    Directory of Open Access Journals (Sweden)

    Pouchly V.

    2010-01-01

    Full Text Available The concept of a Master Sintering Curve (MSC is a strong tool for optimizing the sintering process. However, constructing the MSC from sintering data involves complicated and time-consuming calculations. A practical method for the construction of a MSC is presented in the paper. With the help of a few dilatometric sintering experiments the newly developed software calculates the MSC and finds the optimal activation energy of a given material. The software, which also enables sintering prediction, was verified by sintering tetragonal and cubic zirconia, and alumina of two different particle sizes.
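
    In outline, the MSC construction integrates the work of sintering Θ(t, T) = ∫ (1/T) exp(−Q/RT) dt along each measured thermal history and searches for the activation energy Q that makes the density-versus-log Θ curves from different heating rates collapse onto a single master curve. A self-contained sketch with synthetic dilatometer runs (the density law, the Θ scale and the scan range for Q are illustrative assumptions, not the software's internals):

        import numpy as np

        R = 8.314          # J/(mol K)
        Q_TRUE = 350e3     # activation energy used to generate the synthetic data
        THETA0 = 2e-12     # density scale of the synthetic sintering law

        def experiment(rate):
            """Synthetic dilatometer run at constant heating rate (K/s)."""
            t = np.linspace(1.0, 1400.0/rate, 400)
            T = 300.0 + rate*t
            th = np.cumsum(np.exp(-Q_TRUE/(R*T))/T * np.gradient(t))
            rho = 0.5 + 0.48*(1.0 - np.exp(-th/THETA0))   # density vs theta
            return t, T, rho

        def scatter(Q, runs):
            """Spread of log10(theta(Q)) between runs at fixed densities."""
            grid = np.linspace(0.55, 0.88, 40)
            cols = []
            for t, T, rho in runs:
                th = np.cumsum(np.exp(-Q/(R*T))/T * np.gradient(t))
                cols.append(np.interp(grid, rho, np.log10(th)))
            return np.std(np.array(cols), axis=0).mean()

        runs = [experiment(r) for r in (0.05, 0.1, 0.2)]
        Qs = np.linspace(150e3, 550e3, 41)
        best = Qs[np.argmin([scatter(Q, runs) for Q in Qs])]
        print("optimal activation energy ~", best/1e3, "kJ/mol")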

  12. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  14. ESTIMATING TORSION OF DIGITAL CURVES USING 3D IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Christoph Blankenburg

    2016-04-01

    Full Text Available Curvature and torsion of three-dimensional curves are important quantities in fields like material science or biomedical engineering. Torsion has an exact definition in the continuous domain. However, in the discrete case most of the existing torsion evaluation methods lead to inaccurate values, especially for low resolution data. In this contribution we use the discrete points of space curves to determine the Fourier series coefficients, which allow for representing the underlying continuous curve with Cesàro's mean. This representation of the curve is well suited to estimating curvature and torsion values with their classical continuous definitions. In comparison with the literature, one major advantage of this approach is that no a priori knowledge about the shape of the cyclic curve parts approximating the discrete curves is required. Synthetic data, i.e. curves with known curvature and torsion, are used to quantify the inherent accuracy of the algorithm for torsion and curvature estimation. The algorithm is also tested on tomographic data of fiber structures and open foams, where discrete curves are extracted from the pore spaces.
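
    The underlying computation can be sketched with FFT-based spectral derivatives: represent the closed discrete curve by a truncated Fourier series, differentiate the series analytically, and evaluate the classical formulas κ = |r′ × r″| / |r′|³ and τ = (r′ × r″) · r‴ / |r′ × r″|². The curve and mode cutoff below are illustrative, and the smoothing here is a plain truncation rather than the Cesàro mean used in the paper:

        import numpy as np

        s = np.linspace(0.0, 2.0*np.pi, 200, endpoint=False)
        pts = np.c_[np.cos(s), np.sin(s), 0.3*np.sin(2*s)]   # synthetic closed curve

        def fourier_derivatives(pts, n_modes=8):
            """First three derivatives of the band-limited Fourier interpolant."""
            n = len(pts)
            k = np.fft.fftfreq(n, d=1.0/n)          # integer wavenumbers
            keep = (np.abs(k) <= n_modes)[:, None]  # truncate the series
            c = np.fft.fft(pts, axis=0) * keep
            return [np.real(np.fft.ifft(c * (1j*k[:, None])**m, axis=0)) for m in (1, 2, 3)]

        d1, d2, d3 = fourier_derivatives(pts)
        cross = np.cross(d1, d2)
        curvature = np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1)**3
        torsion = np.einsum('ij,ij->i', cross, d3) / np.linalg.norm(cross, axis=1)**2
        print(curvature[:3], torsion[:3])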

  15. Toward a Conceptualization of Perceived Work-Family Fit and Balance: A Demands and Resources Approach

    Science.gov (United States)

    Voydanoff, Patricia

    2005-01-01

    Using person-environment fit theory, this article formulates a conceptual model that links work, family, and boundary-spanning demands and resources to work and family role performance and quality. Linking mechanisms include 2 dimensions of perceived work-family fit (work demands--family resources fit and family demands--work resources fit) and a…

  16. Local fit evaluation of structural equation models using graphical criteria.

    Science.gov (United States)

    Thoemmes, Felix; Rosseel, Yves; Textor, Johannes

    2018-03-01

    Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Exploration and extension of an improved Riemann track fitting algorithm

    Science.gov (United States)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

    Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
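
    For context, the classical Riemann fit maps the 2D hits onto the paraboloid z = x² + y² and fits a plane, whose intersection with the paraboloid projects back to the best-fit circle; the translated-and-scaled variant studied above refines this. A minimal sketch of the classical version with invented hits:

        import numpy as np

        rng = np.random.default_rng(7)
        phi = np.linspace(0.3, 2.4, 12)
        x = 3.0 + 5.0*np.cos(phi) + rng.normal(0, 0.02, phi.size)   # noisy hits on
        y = -1.0 + 5.0*np.sin(phi) + rng.normal(0, 0.02, phi.size)  # a circle

        # Lift to the paraboloid and fit a plane z = a*x + b*y + c by least squares.
        z = x**2 + y**2
        A = np.c_[x, y, np.ones_like(x)]
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

        # Project the plane back: centre (a/2, b/2), radius from the offset c.
        xc, yc = a/2.0, b/2.0
        r = np.sqrt(c + xc**2 + yc**2)
        print(xc, yc, r)   # ~ (3, -1, 5)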

  18. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
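
    The mechanism is compact enough to simulate: new cities appear at random, while existing cities grow deterministically in proportion to population times geographical fitness. The sketch below restricts fitness to positive values and uses invented parameters, then checks the rank-size (Zipf) exponent of the result:

        import numpy as np

        rng = np.random.default_rng(42)
        pop = np.array([10.0])          # city populations
        fit = np.array([1.0])           # geographical fitness values

        for step in range(4000):
            if rng.random() < 0.02:                          # random city creation
                pop = np.append(pop, 1.0)
                fit = np.append(fit, max(rng.normal(1.0, 0.3), 0.05))
            w = pop * fit
            pop = pop + 5.0 * w / w.sum()                    # smooth, fitness-weighted growth

        sizes = np.sort(pop)[::-1]
        ranks = np.arange(1, sizes.size + 1)
        zipf = -np.polyfit(np.log(sizes), np.log(ranks), 1)[0]
        print("cities:", sizes.size, "rank-size exponent ~", round(zipf, 2))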

  19. SIMULATED PERFORMANCE OF TIMESCALE METRICS FOR APERIODIC LIGHT CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Findeisen, Krzysztof; Hillenbrand, Lynne [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MC 249-17, Pasadena, CA 91125 (United States); Cody, Ann Marie, E-mail: krzys@astro.caltech.edu [Spitzer Science Center, California Institute of Technology, MC 314-6, Pasadena, CA 91125 (United States)

    2015-01-10

    Aperiodic variability is a characteristic feature of young stars, massive stars, and active galactic nuclei. With the recent proliferation of time-domain surveys, it is increasingly essential to develop methods to quantify and analyze aperiodic variability. We develop three timescale metrics that have been little used in astronomy—Δm-Δt plots, peak-finding, and Gaussian process regression—and present simulations comparing their effectiveness across a range of aperiodic light curve shapes, characteristic timescales, observing cadences, and signal to noise ratios. We find that Gaussian process regression is easily confused by noise and by irregular sampling, even when the model being fit reflects the process underlying the light curve, but that Δm-Δt plots and peak-finding can coarsely characterize timescales across a broad region of parameter space. We make public the software we used for our simulations, both in the spirit of open research and to allow others to carry out analogous simulations for their own observing programs.

  20. SIMULATED PERFORMANCE OF TIMESCALE METRICS FOR APERIODIC LIGHT CURVES

    International Nuclear Information System (INIS)

    Findeisen, Krzysztof; Hillenbrand, Lynne; Cody, Ann Marie

    2015-01-01

    Aperiodic variability is a characteristic feature of young stars, massive stars, and active galactic nuclei. With the recent proliferation of time-domain surveys, it is increasingly essential to develop methods to quantify and analyze aperiodic variability. We develop three timescale metrics that have been little used in astronomy—Δm-Δt plots, peak-finding, and Gaussian process regression—and present simulations comparing their effectiveness across a range of aperiodic light curve shapes, characteristic timescales, observing cadences, and signal to noise ratios. We find that Gaussian process regression is easily confused by noise and by irregular sampling, even when the model being fit reflects the process underlying the light curve, but that Δm-Δt plots and peak-finding can coarsely characterize timescales across a broad region of parameter space. We make public the software we used for our simulations, both in the spirit of open research and to allow others to carry out analogous simulations for their own observing programs.

  1. CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS

    International Nuclear Information System (INIS)

    Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.; Christiansen, Jessie L.; Jenkins, Jon M.; Morris, Robert L.; Smith, Jeffrey C.; Thompson, Susan E.; Barclay, Thomas; Howell, Steve B.; Borucki, William J.; Sanderfer, Dwight T.; Still, Martin; Ciardi, David R.; Demory, Brice-Olivier; Klaus, Todd C.; Fulton, Benjamin J.; Shporer, Avi

    2013-01-01

    We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density, combined with stellar evolution models, rule out stellar blends and provide a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.
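
    The out-of-transit part of such a full-phase model is often written as a handful of sinusoids in orbital phase. A toy sketch (the amplitudes are invented and the transit/eclipse shapes are omitted):

        import numpy as np

        # Out-of-transit flux modulations: reflection/emission peaks at secondary
        # eclipse, ellipsoidal variation has two maxima per orbit, and Doppler
        # beaming follows the radial velocity. Phase 0 is mid-transit.
        def phase_model(phase, a_refl=60e-6, a_ellip=15e-6, a_beam=5e-6):
            phi = 2.0*np.pi*phase
            return (1.0
                    - a_refl * np.cos(phi)       # reflected/emitted light
                    - a_ellip * np.cos(2*phi)    # ellipsoidal variation
                    + a_beam * np.sin(phi))      # Doppler beaming

        phase = np.linspace(0.0, 1.0, 500)
        flux = phase_model(phase)
        print(flux.min(), flux.max())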

  2. Clinical evaluation of a computerized topography software method for fitting rigid gas permeable contact lenses.

    Science.gov (United States)

    Szczotka, L B; Capretta, D M; Lass, J H

    1994-10-01

    Computerized videokeratoscope software programs now have the ability to assist in the design of rigid gas permeable (RGP) contact lenses and simulate fluorescein patterns. We evaluated the performance of Computed Anatomy's Topographic Modeling System (TMS-1) and its Contact Lens Fitting Program (version 1.41) in fitting RGP lenses in 31 subjects. Computerized topographic analysis, balanced manifest refraction, slit lamp examination, and keratometry were performed. Initial lens parameters were ordered according to the manufacturer's programmed recommendations for base curve, power, lens diameter, optic zone diameter, and edge lift. Final lens parameters were based on clinical performance. Lenses were re-ordered for base curve changes of 0.1 mm or more, power alterations of +/- 0.50 D or more, or for any alteration in diameter/optic zone. Twenty-seven patients were analyzed for all five recommended parameters. Thirteen of 27 patients (48%) required no parameter changes. Nine of 27 patients (33%) required one parameter change, four of 27 patients (15%) required two parameter changes, and one patient (4%) needed three parameters altered. The most prevalent change was a power alteration, required in nine of 27 patients (33%); however, comparisons of all initial to final parameters showed no statistically significant differences. Comparison of initial base curves to those that would have been chosen via standard keratometry also showed no significant difference. This study found the TMS-1 default lens recommendations to be clinically unacceptable. This system, however, could be an alternative method of initial lens selection if used to titrate a fit or if software enhancements are incorporated to account for lens movement and flexure.

  3. Accelerated pharmacokinetic map determination for dynamic contrast enhanced MRI using frequency-domain based Tofts model.

    Science.gov (United States)

    Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam

    2014-01-01

    Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the process of curve fitting in order to obtain the pharmacokinetic parameters. The results obtained show that, using the frequency-domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.
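
    The speed-up comes from evaluating the Tofts convolution Ct(t) = Ktrans · Cp(t) ⊗ exp(−kep·t) in the frequency domain, where convolution becomes multiplication. A minimal sketch (the arterial input function, parameter values and padding choice are illustrative assumptions, not the paper's implementation):

        import numpy as np

        dt = 2.0                                   # temporal resolution (s)
        t = np.arange(0.0, 360.0, dt)
        Cp = 5.0 * (t/60.0) * np.exp(-t/80.0)      # illustrative AIF (mM)

        def tofts_fft(ktrans, kep):
            """Ct = ktrans * conv(Cp, exp(-kep t)) via FFT, O(n log n)."""
            kernel = np.exp(-kep*t)
            n = 2*t.size                           # zero-pad: linear, not circular
            spec = np.fft.rfft(Cp, n) * np.fft.rfft(kernel, n)
            return ktrans * np.fft.irfft(spec, n)[:t.size] * dt

        ct = tofts_fft(ktrans=0.12/60.0, kep=0.40/60.0)   # per-second rate constants
        print(round(ct.max(), 4))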

  4. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    Energy Technology Data Exchange (ETDEWEB)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α and β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) − w0(best fit) = −0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = −0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = −0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.

  5. Mathematical modeling improves EC50 estimations from classical dose-response curves.

    Science.gov (United States)

    Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar

    2015-03-01

    The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by a short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nM) is higher than the classically interpreted EC50 (46-191 nM). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
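
    The classical interpretation the authors compare against is a sigmoid (Hill-type) fit to the dose-response data. A sketch of that baseline (the data points and starting values are invented; the paper's point is precisely that such a fit can be biased away from the model-based EC50 when responses are not at steady state):

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(conc, bottom, top, ec50, n):
            return bottom + (top - bottom) / (1.0 + (ec50/conc)**n)

        conc  = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])   # agonist (M)
        force = np.array([0.05, 0.10, 0.38, 0.74, 0.95, 0.98])   # normalized response

        p0 = [0.0, 1.0, 1e-7, 1.0]                               # rough starting guesses
        (bottom, top, ec50, n), _ = curve_fit(hill, conc, force, p0=p0, maxfev=10000)
        print("EC50 ~ %.1f nM" % (ec50*1e9))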

  6. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    Science.gov (United States)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
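
    The core of such a tool is an ordinary least-squares fit over a design matrix of monomials x^i y^j. A compact sketch (the degree, data and noise level are illustrative; this is not the original program):

        import numpy as np

        # Least-squares polynomial surface z = f(x, y) from monomial terms.
        def fit_surface(x, y, z, deg=2):
            terms = [(i, j) for i in range(deg+1) for j in range(deg+1-i)]
            A = np.column_stack([x**i * y**j for i, j in terms])
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            return terms, coef

        rng = np.random.default_rng(0)
        x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
        z = 1.0 + 2.0*x - 0.5*y + 0.8*x*y + 0.3*y**2 + rng.normal(0, 0.01, 200)

        terms, coef = fit_surface(x, y, z)
        for (i, j), c in zip(terms, coef):
            print("x^%d y^%d: %+.3f" % (i, j, c))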

  7. The play approach to learning in the context of families and schools: an alternative paradigm for nutrition and fitness education in the 21st century.

    Science.gov (United States)

    Rickard, K A; Gallahue, D L; Gruen, G E; Tridle, M; Bewley, N; Steele, K

    1995-10-01

    An alternative paradigm for nutrition and fitness education centers on understanding and developing skill in implementing a play approach to learning about healthful eating and promoting active play in the context of the child, the family, and the school. The play approach is defined as a process for learning that is intrinsically motivated, enjoyable, freely chosen, nonliteral, safe, and actively engaged in by young learners. Making choices, assuming responsibility for one's decisions and actions, and having fun are inherent components of the play approach to learning. In this approach, internal cognitive transactions and intrinsic motivation are the primary forces that ultimately determine healthful choices and life habits. Theoretical models of children's learning--the dynamic systems theory and the cognitive-developmental theory of Jean Piaget--provide a theoretical basis for nutrition and fitness education in the 21st century. The ultimate goal is to develop partnerships of children, families, and schools in ways that promote the well-being of children and translate into healthful life habits. The play approach is an ongoing process of learning that is applicable to learners of all ages.

  8. A Tangent Bundle Theory for Visual Curve Completion.

    Science.gov (United States)

    Ben-Yosef, Guy; Ben-Shahar, Ohad

    2012-07-01

    Visual curve completion is a fundamental perceptual mechanism that completes the missing parts (e.g., due to occlusion) between observed contour fragments. Previous research into the shape of completed curves has generally followed an "axiomatic" approach, where desired perceptual/geometrical properties are first defined as axioms, followed by mathematical investigation into curves that satisfy them. However, determining psychophysically such desired properties is difficult and researchers still debate what they should be in the first place. Instead, here we exploit the observation that curve completion is an early visual process to formalize the problem in the unit tangent bundle R² × S¹, which abstracts the primary visual cortex (V1) and facilitates exploration of basic principles from which perceptual properties are later derived rather than imposed. Exploring here the elementary principle of least action in V1, we show how the problem becomes one of finding minimum-length admissible curves in R² × S¹. We formalize the problem in variational terms, we analyze it theoretically, and we formulate practical algorithms for the reconstruction of these completed curves. We then explore their induced visual properties vis-à-vis popular perceptual axioms and show how our theory predicts many perceptual properties reported in the corresponding perceptual literature. Finally, we demonstrate a variety of curve completions and report comparisons to psychophysical data and other completion models.

  9. Physical fitness reference standards in European children: the IDEFICS study.

    Science.gov (United States)

    De Miguel-Etayo, P; Gracia-Marco, L; Ortega, F B; Intemann, T; Foraita, R; Lissner, L; Oja, L; Barba, G; Michels, N; Tornaritis, M; Molnár, D; Pitsiladis, Y; Ahrens, W; Moreno, L A

    2014-09-01

    A low fitness status during childhood and adolescence is associated with important health-related outcomes, such as increased future risk for obesity and cardiovascular diseases, impaired skeletal health, reduced quality of life and poor mental health. Fitness reference values for adolescents from different countries have been published, but there is a scarcity of reference values for pre-pubertal children in Europe, using harmonised measures of fitness in the literature. The IDEFICS study offers a good opportunity to establish normative values of a large set of fitness components from eight European countries using common and well-standardised methods in a large sample of children. Therefore, the aim of this study is to report sex- and age-specific fitness reference standards in European children. Children (10,302) aged 6-10.9 years (50.7% girls) were examined. The test battery included: the flamingo balance test, back-saver sit-and-reach test (flexibility), handgrip strength test, standing long jump test (lower-limb explosive strength) and 40-m sprint test (speed). Moreover, cardiorespiratory fitness was assessed by a 20-m shuttle run test. Percentile curves for the 1st, 3rd, 10th, 25th, 50th, 75th, 90th, 97th and 99th percentiles were calculated using the General Additive Model for Location Scale and Shape (GAMLSS). Our results show that boys performed better than girls in speed, lower- and upper-limb strength and cardiorespiratory fitness, and girls performed better in balance and flexibility. Older children performed better than younger children, except for cardiorespiratory fitness in boys and flexibility in girls. Our results provide for the first time sex- and age-specific physical fitness reference standards in European children aged 6-10.9 years.

  10. Curve aligning approach for gait authentication based on a wearable accelerometer

    International Nuclear Information System (INIS)

    Sun, Hu; Yuao, Tao

    2012-01-01

    Gait authentication based on a wearable accelerometer is a novel biometric which can be used for identity verification, medical rehabilitation and early detection of neurological disorders. The choice of method for matching gait patterns weighs heavily on authentication performance. In this paper, curve aligning is introduced as a new method for matching gait patterns, and it is compared with correlation and dynamic time warping (DTW). A support vector machine (SVM) is proposed to fuse the pattern-matching methods at the decision level. Accelerations collected from the ankles of 22 walking subjects are processed for authentication in our experiments. The fusion of curve aligning on backward–forward accelerations with DTW on vertical accelerations improves authentication performance substantially and consistently. This fusion algorithm was tested repeatedly: the mean and standard deviation of its equal error rate (EER) are 0.794% and 0.696%, respectively, whereas the best of the presented non-fusion algorithms shows an EER of 3.03%. (paper)
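
    DTW, one of the baseline matchers compared above, is a standard dynamic-programming technique; a minimal sketch (not the authors' implementation, with synthetic gait cycles) is:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals.

    Classic O(len(a) * len(b)) dynamic program with absolute-difference
    sample cost; smaller distances mean better-matching gait patterns.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two gait cycles: the second is a time-warped copy of the first
t = np.linspace(0, 1, 100)
cycle1 = np.sin(2 * np.pi * t)
cycle2 = np.sin(2 * np.pi * t**1.1)
print(dtw_distance(cycle1, cycle2))
```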

  11. Experimental Method for Plotting S-N Curve with a Small Number of Specimens

    Directory of Open Access Journals (Sweden)

    Strzelecki Przemysław

    2016-12-01

    Full Text Available The study presents two approaches to plotting an S-N curve based on experimental results. The first approach is commonly used by researchers and is presented in detail in many studies and standard documents. The model uses a linear regression whose parameters are estimated by the least squares method; a staircase method is used for the unlimited fatigue life criterion. The second model combines an S-N curve defined as a straight line with the random occurrence of the fatigue limit, and a maximum likelihood method is used to estimate the S-N curve parameters. Fatigue data for C45+C steel obtained in the torsional bending test were used to compare the estimated S-N curves. For pseudo-random numbers generated with the Mersenne Twister algorithm, the S-N curve estimated from 10 experimental results with the second model predicts the fatigue life within a scatter band of factor 3. This is a good approximation, especially considering the reduced testing time required to plot the S-N curve.
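
    A minimal sketch of the first approach in the finite-life region (a Basquin-type linear regression of log life on log stress; the data below are hypothetical, and the staircase and maximum-likelihood parts are omitted):

```python
import numpy as np

# Hypothetical stress amplitudes [MPa] and observed cycles to failure
S = np.array([300.0, 280.0, 260.0, 240.0, 220.0, 200.0])
N = np.array([5e4, 9e4, 2e5, 5e5, 1.2e6, 3e6])

# Linearised S-N curve: log10(N) = a + b * log10(S), by least squares
b, a = np.polyfit(np.log10(S), np.log10(N), 1)

def predicted_life(stress_mpa):
    """Median fatigue life predicted by the fitted S-N line."""
    return 10 ** (a + b * np.log10(stress_mpa))

print(predicted_life(250.0))
```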

  12. Fitting motivational content and process: A systematic investigation of fit between value framing and self-regulation.

    Science.gov (United States)

    Woltin, Karl-Andrew; Bardi, Anat

    2017-12-28

    Values are often phrased as ideals that people seek to approach, but they can also be conceptualized as counter-ideals that people seek to avoid. We aimed to test whether individuals endorse more strongly values that are framed in line with their predominant self-regulatory motivation, using individual difference scales in promotion/prevention (Higgins, 1997) and in behavioral approach/inhibition (Carver & White, 1994). To address this systematically, we developed approach- and avoidance-framed versions of the Portrait Value Questionnaire-RR (PVQ-RR; Schwartz et al., 2012). Participants completed approach- and avoidance-framed PVQ-RR versions in two studies measuring regulatory focus or motivational orientation (together 414 U.S. adults, 48% female, ages 18-69) and one study manipulating motivational orientation (39 UK high school students, 79% female, ages 16-19). Value framing consistently interacted with both self-regulation variables. However, a fit between self-regulation and value framing resulted in greater value endorsement only for promotion-focused and approach-oriented (not prevention-focused and avoidance-oriented) participants. This may be because values are more naturally understood as ideal states that people seek to approach. Our findings provide first insights into the psychological process of person-value framing fit affecting value endorsement. We discuss implications for cross-cultural value research and research on value-congruent behavior. © 2017 Wiley Periodicals, Inc.

  13. Fit-for-Purpose

    DEFF Research Database (Denmark)

    Enemark, Stig

    2013-01-01

    …completeness to cover the total jurisdiction; and credibility in terms of reliable data being trusted by the users. Accuracy can then be incrementally improved over time when relevant and justified by serving the needs of citizens, business and society in general. Such a fit-for-purpose approach is fundamental… systems act within adopted land policies that define the legal regulatory pattern for dealing with land issues. Land administration systems - whether highly advanced or very basic - require a spatial framework to operate. This framework provides the fundamental information for dealing with land issues… concepts may well be seen as the end target but not as the point of entry. When assessing the technology and investment choices, the focus should be on building a fit-for-purpose framework that will meet the needs of society today and that can be incrementally improved over time.

  14. Brief Report: Examining children’s disruptive behavior in the wake of trauma - A two-piece growth curve model before and after a school shooting

    Science.gov (United States)

    Liao, Yue; Shonkoff, Eleanor T.; Barnett, Elizabeth; Wen, CK Fred; Miller, Kimberly A.; Eddy, J. Mark

    2015-01-01

    School shootings may have serious negative impacts on children years after the event. Previous research suggests that children exposed to traumatic events experience heightened fear, anxiety, and feelings of vulnerability, but little research has examined potential aggressive and disruptive behavioral reactions. Utilizing a longitudinal dataset in which a local school shooting occurred during the course of data collection, this study sought to investigate whether the trajectory of disruptive behaviors was affected by the shooting. A two-piece growth curve model was used to examine the trajectory of disruptive behaviors during the pre-shooting years (i.e., piece one) and post-shooting years (i.e., piece two). Results indicated that the two-piece growth curve model fit the data better than the one-piece model and that the school shooting precipitated a faster decline in aggressive behaviors. This study demonstrated a novel approach to examining effects of an unexpected traumatic event on behavioral trajectories using an existing longitudinal data set. PMID:26298676
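
    The study fits a multilevel (growth curve) model; the sketch below shows only the core two-piece parameterisation for a single trajectory, with hypothetical yearly scores:

```python
import numpy as np

# Hypothetical mean disruptive-behavior scores; the shooting occurs at year 0
years = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
score = np.array([5.1, 4.9, 4.8, 4.6, 4.1, 3.5, 3.0])

# Two-piece linear growth: a pre-event slope plus an extra slope that is
# "switched on" after the event (the second piece)
post = np.clip(years, 0.0, None)
X = np.column_stack([np.ones_like(years), years, post])
(intercept, slope_pre, slope_change), *_ = np.linalg.lstsq(X, score, rcond=None)

print("pre-event slope :", slope_pre)
print("post-event slope:", slope_pre + slope_change)
```

    In the full model, the intercept and slopes carry random effects across children; a better fit of this two-piece form over a single slope mirrors the model comparison reported above.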

  15. Heterogeneity in glucose response curves during an oral glucose tolerance test and associated cardiometabolic risk

    DEFF Research Database (Denmark)

    Hulman, Adam; Simmons, Rebecca Kate; Vistisen, Dorte

    2017-01-01

    We aimed to examine heterogeneity in glucose response curves during an oral glucose tolerance test with multiple measurements and to compare cardiometabolic risk profiles between identified glucose response curve groups. We analyzed data from 1,267 individuals without diabetes from five studies in Denmark, the Netherlands and the USA. Each study included between 5 and 11 measurements at different time points during a 2-h oral glucose tolerance test, resulting in 9,602 plasma glucose measurements. Latent class trajectories with a cubic specification for time were fitted to identify different patterns of plasma glucose change during the oral glucose tolerance test. Cardiometabolic risk factor profiles were compared between the identified groups. Using latent class trajectory analysis, five glucose response curves were identified. Despite similar fasting and 2-h values, glucose peaks and peak…

  16. Estimating water retention curves and strength properties of unsaturated sandy soils from basic soil gradation parameters

    Science.gov (United States)

    Wang, Ji-Peng; Hu, Nian; François, Bertrand; Lambert, Pierre

    2017-07-01

    This study proposes two pedotransfer functions (PTFs) to estimate sandy soil water retention curves. It is based on van Genuchten's water retention model and follows a semiphysical, semistatistical approach. Basic gradation parameters, d60 (the particle size at 60% passing) and the coefficient of uniformity Cu, are employed in the PTFs, with two idealized conditions, the monosized scenario and the extremely polydisperse condition, satisfied. Water retention tests are carried out on eight granular materials with narrow particle size distributions as supplementary data for the UNSODA database. The air entry value is expressed as inversely proportional to d60, and the parameter n, which is related to the slope of the water retention curve, is a function of Cu. Although they have fewer parameters, the proposed PTFs fit sandy soils better than previous PTFs. Furthermore, by incorporating the suction stress definition, the proposed pedotransfer functions are embedded in shear strength equations, which provides a way to estimate capillary-induced tensile strength or cohesion at a given suction or degree of saturation from basic soil gradation parameters. The estimates show quantitative agreement with experimental data in the literature and also explain why capillary-induced cohesion is generally higher for materials with finer mean particle size or higher polydispersity.
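
    The paper's exact PTF expressions are not reproduced here, but the retention function they parameterise is van Genuchten's model; a minimal sketch with hypothetical parameter values:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) water retention curve.

    h       : suction (positive, in units of 1/alpha)
    theta_r : residual water content
    theta_s : saturated water content
    alpha   : roughly the inverse of the air-entry suction
    n       : shape parameter controlling the slope of the curve
    """
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

h = np.logspace(-1, 3, 50)                       # suctions [kPa]
theta = van_genuchten(h, 0.05, 0.40, alpha=0.5, n=2.5)
```

    In the spirit of the PTFs above, alpha would be tied to 1/d60 (the air entry value being inversely proportional to d60) and n expressed as a function of Cu; the exact expressions are given in the paper.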

  17. Relating species abundance distributions to species-area curves in two Mediterranean-type shrublands

    Science.gov (United States)

    Keeley, Jon E.

    2003-01-01

    Based on both theoretical and empirical studies there is evidence that different species abundance distributions underlie different species-area relationships. Here I show that Australian and Californian shrubland communities (at the scale from 1 to 1000 m2) exhibit different species-area relationships and different species abundance patterns. The species-area relationship in Australian heathlands best fits an exponential model and species abundance (based on both density and cover) follows a narrow log normal distribution. In contrast, the species-area relationship in Californian shrublands is best fit with the power model and, although species abundance appears to fit a log normal distribution, the distribution is much broader than in Australian heathlands. I hypothesize that the primary driver of these differences is the abundance of small-stature annual species in California and the lack of annuals in Australian heathlands. Species-area is best fit by an exponential model in Australian heathlands because the bulk of the species are common and thus the species-area curves initially rise rapidly between 1 and 100 m2. Annuals in Californian shrublands generate very broad species abundance distributions with many uncommon or rare species. The power function is a better model in these communities because richness increases slowly from 1 to 100 m2 but more rapidly between 100 and 1000 m2 due to the abundance of rare or uncommon species that are more likely to be encountered at coarser spatial scales. The implications of this study are that both the exponential and power function models are legitimate representations of species-area relationships in different plant communities. Also, structural differences in community organization, arising from different species abundance distributions, may lead to different species-area curves, and this may be tied to patterns of life form distribution.
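
    A minimal comparison of the two species-area models named above, on hypothetical richness data (residual sums of squares compared on the original richness scale):

```python
import numpy as np

# Hypothetical species richness S observed at nested plot areas A [m^2]
A = np.array([1.0, 10.0, 100.0, 1000.0])
S = np.array([12.0, 25.0, 34.0, 41.0])

# Exponential model: S = c + z * ln(A)
z_exp, c_exp = np.polyfit(np.log(A), S, 1)
rss_exp = np.sum((S - (c_exp + z_exp * np.log(A))) ** 2)

# Power model: S = c * A**z, fitted in log-log space
z_pow, logc_pow = np.polyfit(np.log(A), np.log(S), 1)
rss_pow = np.sum((S - np.exp(logc_pow) * A**z_pow) ** 2)

print("exponential RSS:", rss_exp, " power RSS:", rss_pow)
```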

  18. NASA-FAA helicopter Microwave Landing System curved path flight test

    Science.gov (United States)

    Swenson, H. N.; Hamlin, J. R.; Wilson, G. W.

    1984-01-01

    An ongoing series of joint NASA/FAA helicopter Microwave Landing System (MLS) flight tests was conducted at Ames Research Center. This paper deals with tests done from the spring through the fall of 1983. This flight test investigated and developed solutions to the problem of manually flying curved-path and steep glide slope approaches into the terminal area using the MLS and flight director guidance. An MLS-equipped Bell UH-1H helicopter flown by NASA test pilots was used to develop approaches and procedures for flying these approaches. The approaches took the form of Straight-in, U-turn, and S-turn flightpaths with glide slopes of 6 deg, 9 deg, and 12 deg. These procedures were evaluated by 18 pilots from various elements of the helicopter community, flying a total of 221 hooded instrument approaches. Flying these curved path and steep glide slopes was found to be operationally acceptable with flight director guidance using the MLS.

  19. Organization Design for Dynamic Fit: A Review and Projection

    Directory of Open Access Journals (Sweden)

    Mark Nissen

    2014-08-01

    Full Text Available The concept of fit is central to organization design. In the organizational literature, fit historically has been portrayed as a static concept. Both organizations and their environments, however, are continually changing, so a valid concept of fit needs to reflect organizational dynamics. In this article, I analyze various theoretical perspectives and studies that relate to organizational fit, differentiating those that employ an equilibrating or a fluxing approach. Four substantive themes emerge from this analysis: design orientation, design tension, designer/manager roles, and measurement and validation. Implications of each of these themes for dynamic fit are derived, and promising future research directions are discussed.

  20. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

    This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  1. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    Science.gov (United States)

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  2. Use of structure-activity landscape index curves and curve integrals to evaluate the performance of multiple machine learning prediction models

    Directory of Open Access Journals (Sweden)

    LeDonne Norman C

    2011-02-01

    Full Text Available Abstract Background Standard approaches to address the performance of predictive models that used common statistical measurements for the entire data set provide an overview of the average performance of the models across the entire predictive space, but give little insight into applicability of the model across the prediction space. Guha and Van Drie recently proposed the use of structure-activity landscape index (SALI curves via the SALI curve integral (SCI as a means to map the predictive power of computational models within the predictive space. This approach evaluates model performance by assessing the accuracy of pairwise predictions, comparing compound pairs in a manner similar to that done by medicinal chemists. Results The SALI approach was used to evaluate the performance of continuous prediction models for MDR1-MDCK in vitro efflux potential. Efflux models were built with ADMET Predictor neural net, support vector machine, kernel partial least squares, and multiple linear regression engines, as well as SIMCA-P+ partial least squares, and random forest from Pipeline Pilot as implemented by AstraZeneca, using molecular descriptors from SimulationsPlus and AstraZeneca. Conclusion The results indicate that the choice of training sets used to build the prediction models is of great importance in the resulting model quality and that the SCI values calculated for these models were very similar to their Kendall τ values, leading to our suggestion of an approach to use this SALI/SCI paradigm to evaluate predictive model performance that will allow more informed decisions regarding model utility. The use of SALI graphs and curves provides an additional level of quality assessment for predictive models.

  3. Theoretical Aspects of Phonon Dispersion Curves for Metals

    International Nuclear Information System (INIS)

    Cochran, W.

    1965-01-01

    Reasonably complete knowledge of the phonon dispersion curves for at least a dozen metallic elements and intermetallic compounds has now been obtained from neutron inelastic scattering experiments. The results have one feature in common: when analysed in terms of interatomic force constants they reveal the presence of comparatively long-range forces extending over several atomic spacings. The results for lead are particularly interesting; it did not prove possible to fit them with a force-constant model, but the dispersion curves for wave vectors in symmetry directions, when analysed in terms of force constants between planes of atoms, showed an oscillatory interatomic potential extending over distances of more than 20 Å. This review is concerned with recent theoretical work bearing on the calculation of phonon dispersion curves for metals and on the explanation of the long range of the interatomic potential. The best hope at present for a general treatment of atomic interaction in metals appears to lie in the ''method of neutral pseudo-atoms'' (a description recently coined by Ziman). This approximate theory is outlined and its relevance to Kohn anomalies in phonon dispersion curves is discussed. Experimental data for sodium are consistent with the theory, and the interatomic potential in sodium varies periodically with period π/k_F, where ħk_F is the Fermi momentum, as has already been demonstrated by Koenig in a different way. More exact calculations have been made for sodium by Toya and by Sham. The relationships between the different methods, and other work of a more general character such as that of Harrison, are discussed. (author) [fr]

  4. Spherical images and inextensible curved folding

    Science.gov (United States)

    Seffen, Keith A.

    2018-02-01

    In their study, Duncan and Duncan [Proc. R. Soc. London A 383, 191 (1982), 10.1098/rspa.1982.0126] calculate the shape of an inextensible surface folded in two about a general curve. They find the analytical relationships between pairs of generators linked across the fold curve, the shape of the original path, and the fold angle variation along it. They present two special cases of generator layouts for which the fold angle is uniform or the folded curve remains planar, for simplifying practical folding in sheet-metal processes. We verify their special cases by a graphical treatment according to a method of Gauss. We replace the fold curve by a piecewise linear path, which connects vertices of intersecting pairs of hinge lines. Inspired by the d-cone analysis by Farmer and Calladine [Int. J. Mech. Sci. 47, 509 (2005), 10.1016/j.ijmecsci.2005.02.013], we construct the spherical images for developable folding of successive vertices: the operating conditions of the special cases in Duncan and Duncan are then revealed straightforwardly by the geometric relationships between the images. Our approach may be used to synthesize folding patterns for novel deployable and shape-changing surfaces without need of complex calculation.

  6. A curve fitting approach to estimate the extent of fermentation of indigestible carbohydrates

    NARCIS (Netherlands)

    Wang, H.; Weening, D.; Jonkers, E.; Boer, T.; Stellaard, F.; Small, A. C.; Preston, T.; Vonk, R. J.; Priebe, M. G.

    2008-01-01

    Background Information about the extent of carbohydrate digestion and fermentation is critical to our ability to explore the metabolic effects of carbohydrate fermentation in vivo. We used cooked (13)C-labelled barley kernels, which are rich in indigestible carbohydrates, to develop a method which

  7. Searching for transits in the WTS with the difference imaging light curves

    Science.gov (United States)

    Zendejas Dominguez, Jesus

    2013-12-01

    The search for exo-planets is currently one of the most exciting and active topics in astronomy. Small, rocky planets are a particular subject of intense research since, if suitably distant from their host star, they may be warm and potentially habitable worlds. On the other hand, the discovery of giant planets in short-period orbits provides important constraints on models of planet formation and on orbital migration theories. Several projects are dedicated to discovering and characterizing planets outside our solar system. Among them, the Wide-Field Camera Transit Survey (WTS) is a pioneering program to search for extra-solar planets that stands out for its particular aims and methodology. The WTS has been in operation since August 2007 with observations from the United Kingdom Infrared Telescope, and represents the first survey that searches for transiting planets at near-infrared wavelengths; hence the WTS is designed to discover planets around M-dwarfs. The survey was originally assigned about 200 nights, observing four fields that were selected seasonally (RA = 03, 07, 17 and 19h) during a year. The images from the survey are processed by a data reduction pipeline, which uses aperture photometry to construct the light curves. For the most complete field in the survey (19h, 1145 epochs), we produce an alternative set of light curves using the method of difference imaging, a photometric technique that has shown important advantages in crowded fields. A quantitative comparison between the photometric precision achieved with the two methods is carried out in this work. We remove systematic effects using the sysrem algorithm, scale the error bars on the light curves, and compare the corrected light curves. The results show that the aperture photometry light curves provide slightly better precision for objects at the bright end of the J-magnitude range. To detect transits in the WTS light curves, we use a modified version of the box-fitting
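
    The thesis uses its own modified box-fitting code; purely as an illustration of the underlying box least squares (BLS) idea, astropy ships an off-the-shelf implementation (synthetic light curve, hypothetical transit parameters):

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 2000)                  # time [days]
y = np.ones_like(t) + rng.normal(0.0, 1e-3, t.size)
y[(t % 3.5) < 0.1] -= 0.01                        # 1% deep transit, 3.5 d period

bls = BoxLeastSquares(t, y)
result = bls.autopower(0.1)                       # trial box duration of 0.1 d
best = np.argmax(result.power)
print("recovered period [d]:", result.period[best])
```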

  9. Automated model fit method for diesel engine control development

    NARCIS (Netherlands)

    Seykens, X.L.J.; Willems, F.P.T.; Kuijpers, B.; Rietjens, C.J.H.

    2014-01-01

    This paper presents an automated fit for a control-oriented, physics-based diesel engine combustion model. The method is based on the combination of a dedicated measurement procedure and a structured approach to fitting the required combustion model parameters. Only a data set is required that is

  10. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
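
    The GLLA, GOLD and FDA routines themselves are not reproduced here; the sketch below only illustrates the generic two-stage idea (estimate derivatives locally, then regress) for a single simulated linear-oscillator trajectory, without mixed effects:

```python
import numpy as np
from scipy.signal import savgol_filter

# Stage 1: smooth the observations and estimate derivatives with
# Savitzky-Golay local polynomials (an FDA/GLLA-like derivative estimator)
dt = 0.1
t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(2)
x_obs = np.exp(-0.1 * t) * np.cos(t) + rng.normal(0.0, 0.01, t.size)

x_hat = savgol_filter(x_obs, 11, 3)
dx = savgol_filter(x_obs, 11, 3, deriv=1, delta=dt)
d2x = savgol_filter(x_obs, 11, 3, deriv=2, delta=dt)

# Stage 2: fit the ODE d2x = -eta * x - zeta * dx by least squares
A = np.column_stack([x_hat, dx])
(c1, c2), *_ = np.linalg.lstsq(A, d2x, rcond=None)
eta, zeta = -c1, -c2
print("eta:", eta, "zeta:", zeta)   # true values are about 1.01 and 0.2
```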

  11. From Experiment to Theory: What Can We Learn from Growth Curves?

    Science.gov (United States)

    Kareva, Irina; Karev, Georgy

    2018-01-01

    Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about intrinsic properties of a population (i.e., degree of heterogeneity, or dependence on external resources) based on which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether the population that it best fits by a particular curve is more likely to be homogeneous or heterogeneous, grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.
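
    As a sketch of fitting one of the multi-phase forms mentioned (an exponential-to-linear transition; the authors' exact parameterisation may differ, and the data are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_linear(t, n0, r, t_switch):
    """Exponential growth that switches to linear growth at t_switch,
    with the growth rate kept continuous at the switch point."""
    n_switch = n0 * np.exp(r * t_switch)
    slope = r * n_switch
    return np.where(t < t_switch,
                    n0 * np.exp(r * t),
                    n_switch + slope * (t - t_switch))

t = np.arange(9, dtype=float)
n = np.array([1.0, 1.6, 2.7, 4.4, 6.3, 8.1, 10.0, 12.1, 13.9])
params, _ = curve_fit(exp_linear, t, n, p0=[1.0, 0.5, 3.0])
print(dict(zip(["n0", "r", "t_switch"], params)))
```

    Which of several such candidate forms fits best (by AIC or a similar criterion) is then what carries the inference about the population's intrinsic properties.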

  12. New approach to the adjustment of group cross sections fitting integral measurements

    International Nuclear Information System (INIS)

    Chao, Y.A.

    1979-01-01

    The adjustment of group cross sections fitting integral measurements is viewed as a process of estimating theoretical and/or experimental negligence errors to bring statistical consistency to the integral and differential data so that they can be combined to form an enlarged ensemble, based on which an improved estimation of the physical constants can be made. A three-step approach is suggested, and its formalism of general validity is developed. In step one, the data of negligence error are extracted from the given integral and differential data. The method of extraction is based on the concepts of prior probability and information entropy. It automatically leads to vanishing negligence error as the two sets of data are statistically consistent. The second step is to identify the sources of negligence error and adjust the data by an amount compensating for the extracted negligence discrepancy. In the last step, the two data sets, already adjusted to mutual consistency are combined as a single unified ensemble. Standard methods of statistics can then be applied to reestimate the physical constants. 1 figure

  13. Models to estimate lactation curves of milk yield and somatic cell count in dairy cows at the herd level for the use in simulations and predictive models

    Directory of Open Access Journals (Sweden)

    Kaare Græsbøll

    2016-12-01

    Full Text Available Typically, central milk recording data from dairy herds are recorded less than monthly. Over-fitting early in the lactation period is a challenge, which we explored in different ways by reducing the number of parameters needed to describe the milk yield and somatic cell count of individual cows. Furthermore, we investigated how the parameters of lactation models correlate between parities and from dam to offspring. The aim of the study was to provide simple and robust models for cow-level milk yield and somatic cell count (SCC) for fitting to sparse data, to parameterise herd- and cow-specific simulation of dairy herds. Data from 610 Danish Holstein herds were used to determine parity traits in milk production regarding milk yield and SCC of individual cows. Parity was stratified into first, second, and third and higher for milk, and first to sixth and higher for SCC. Fitting of herd-level parameters allowed for cow-level lactation curves with three, two or one parameter(s) per lactation. Correlations of milk yield and SCC were estimated between lactations and between dam and offspring. The shape of the lactation curves varied markedly between farms. The correlations between lactations for milk yield and SCC were 0.2-0.6 and significant on more than 95% of farms. The variation in daily milk yield was observed to be a source of variation in the SCC, and the total SCC was less correlated with milk production than somatic cells per ml. A positive correlation was found between relative levels of total SCC and milk yield. The variation of lactation and SCC curves between farms highlights the importance of a herd-level approach. The one-parameter-per-cow model using a herd-level curve allows the cow's production level to be estimated from the first recording in the parity, while a two-parameter model requires more recordings for a credible estimate but may predict persistence more precisely, and given the independence of parameters, these can be
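
    The study's herd-level, reduced-parameter curves are not reproduced here; as a generic illustration, Wood's incomplete-gamma function is a classical lactation-curve form that can be fitted per cow (hypothetical test-day yields):

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood's (1967) lactation curve: y(t) = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Hypothetical test-day milk yields [kg/day] across a 305-day lactation
t = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285], dtype=float)
y = np.array([28, 34, 33, 31, 29, 27, 25, 23, 21, 19], dtype=float)

(a, b, c), _ = curve_fit(wood, t, y, p0=[15.0, 0.2, 0.003])
print("peak yield day:", b / c)   # Wood's curve peaks at t = b/c
```

    Fixing the curve's shape at the herd level and leaving a single per-cow scaling parameter, as described above, is what allows a cow's level to be estimated from a single recording.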

  14. Development of fitting methods using geometric progression formulae of gamma-ray buildup factors

    International Nuclear Information System (INIS)

    Yoshida, Yoshitaka

    2006-01-01

    Gamma-ray buildup factors are represented by an approximation method to speed up calculations using the point attenuation kernel method. The fitting parameters obtained with the GP formula and Taylor's formula are compiled in ANSI/ANS-6.4.3, available without any limitation. The GP formula features high accuracy but requires a high-level fitting technique. The GP formula was therefore divided into a curved part and a part representing the base values, and these were used to develop the a fitting method and the X_k fitting method. As a result, this methodology showed that (1) when the fitting ranges were identical, there was no change in standard deviation when the unit penetration depth was varied; (2) even with fitting up to 300 mfp, the average standard deviation over 26 materials was 2.9% and acceptable GP parameters were extracted; (3) when the same end points of the fitting were selected and the starting points of the fitting coincided with the unit penetration depth, the deviation became smaller with increasing unit penetration depth; and (4) even with the deviation adjusted to the positive side from 0.5 mfp to 300 mfp, the average standard deviation over 26 materials was 5.6%, which is an acceptable value. However, the GP parameters obtained by this methodology cannot be used for direct interpolation in gamma-ray energy or between materials. (author)
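
    For reference, the GP form compiled in ANSI/ANS-6.4.3 is commonly written as follows (five fitted parameters b, c, a, X_k, d per energy point; quoted from the standard's usual presentation, so treat this as a sketch):

```latex
B(E,x) =
\begin{cases}
1 + (b-1)\,\dfrac{K^{x}-1}{K-1}, & K \neq 1,\\[6pt]
1 + (b-1)\,x, & K = 1,
\end{cases}
\qquad
K(x) = c\,x^{a} + d\,\frac{\tanh(x/X_k - 2) - \tanh(-2)}{1 - \tanh(-2)},
```

    where x is the penetration depth in mean free paths; the "a fitting" and "X_k fitting" methods above plausibly target these two shape parameters of K(x).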

  15. Singular interactions supported by embedded curves

    International Nuclear Information System (INIS)

    Kaynak, Burak Tevfik; Turgut, O Teoman

    2012-01-01

    In this work, singular interactions supported by embedded curves on Riemannian manifolds are discussed from a more direct and physical perspective, via the heat kernel approach. We show that the renormalized problem is well defined, the ground state is finite and the corresponding wavefunction is positive. The renormalization group invariance of the model is also discussed. (paper)

  16. QUENCH: A software package for the determination of quenching curves in Liquid Scintillation counting.

    Science.gov (United States)

    Cassette, Philippe

    2016-03-01

    In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system, and its detection efficiency varies with the scintillator used, the vial, and the volume and chemistry of the sample. The detection efficiency is generally determined using a quenching curve, describing, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. Then a simple formula is fitted to the experimental points to define the quenching curve function. The paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled, and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times, and eventually the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between the parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms. Copyright © 2015 Elsevier Ltd. All rights reserved.
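
    A minimal sketch of the Monte Carlo procedure described (hypothetical quenching data; the actual package's quenching index, fitting functions, weighting and uncertainty model may differ):

```python
import numpy as np

# Quenching index and detection efficiency for the standard sources,
# each with a standard uncertainty (hypothetical values)
q = np.array([300.0, 400.0, 500.0, 600.0, 700.0])
eff = np.array([0.62, 0.74, 0.82, 0.88, 0.92])
u_q, u_eff = 5.0, 0.01

rng = np.random.default_rng(3)
n_trials, deg = 5000, 2
params = np.empty((n_trials, deg + 1))

# Sample Gaussian fluctuations of both coordinates and refit each time
for k in range(n_trials):
    qk = q + rng.normal(0.0, u_q, q.size)
    ek = eff + rng.normal(0.0, u_eff, eff.size)
    params[k] = np.polyfit(qk, ek, deg)

p_mean = params.mean(axis=0)
p_cov = np.cov(params, rowvar=False)      # parameter covariances included

# Efficiency and its uncertainty at an arbitrary quenching index
eff_450 = np.array([np.polyval(p, 450.0) for p in params])
print(eff_450.mean(), eff_450.std(ddof=1))
```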

  17. Probabilistic Rainfall Intensity-Duration-Frequency Curves for the October 2015 Flooding in South Carolina

    Science.gov (United States)

    Phillips, R.; Samadi, S. Z.; Meadows, M.

    2017-12-01

    The potential for the intensity of extreme rainfall to increase under climate change nonstationarity has emerged as a prevailing issue for the design of engineering infrastructure, underscoring the need to better characterize the statistical assumptions underlying hydrological frequency analysis. The focus of this study is on developing probabilistic rainfall intensity-duration-frequency (IDF) curves for the major catchments in South Carolina (SC), where the October 02-05, 2015 floods caused infrastructure damage and loss of several lives. Several probability distributions, including the Weibull, generalized extreme value (GEV), generalized Pareto (GP), Gumbel, Fréchet, normal, and log-normal, were fitted to the short-duration (i.e., 24-h) intense rainfall. The analysis suggests that the GEV distribution provided the most adequate fit to the rainfall records. Rainfall frequency analysis indicated return periods above 500 years for urban drainage systems, with a maximum return level of approximately 2,744 years, whereas rainfall magnitude was much lower in rural catchments. Further, the return levels (i.e., 2, 20, 50, 100, 500, and 1000 years) computed by the Monte Carlo method were consistently higher than the NOAA design IDF curves. Given the potential increase in the magnitude of intense rainfall, current IDF curves can substantially underestimate the frequency of extremes, indicating the susceptibility of the storm drainage and flood control structures in SC that were designed under assumptions of a stationary climate.
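
    A hedged sketch of a GEV fit with return levels in scipy (synthetic annual maxima, not the South Carolina records):

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum 24-h rainfall depths [mm]
annual_max = np.array([84, 91, 77, 103, 96, 120, 88, 140, 99, 110,
                       93, 131, 86, 105, 118, 97, 152, 101, 89, 125],
                      dtype=float)

# Maximum-likelihood GEV fit
shape, loc, scale = stats.genextreme.fit(annual_max)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV
for T in (2, 20, 50, 100, 500, 1000):
    level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:>4}-yr return level: {level:.1f} mm")
```

    Resampling the data (or the fitted parameters) many times, Monte Carlo style, then yields a distribution of such return levels rather than a single curve, which is what makes the IDF curves probabilistic.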

  18. Fitting Hidden Markov Models to Psychological Data

    Directory of Open Access Journals (Sweden)

    Ingmar Visser

    2002-01-01

    Full Text Available Markov models have been used extensively in the psychology of learning. Applications of hidden Markov models, however, are rare. This is partly because comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task are analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify the knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
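
    A minimal sketch using the third-party hmmlearn package (not the authors' software) to fit Gaussian hidden Markov models and pick the number of states by BIC on synthetic data:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(4)
# Hypothetical response times: a slow "guessing" and a fast "learned" regime
X = np.concatenate([rng.normal(2.0, 0.3, 200),
                    rng.normal(1.0, 0.2, 200)]).reshape(-1, 1)

best = None
for n_states in (1, 2, 3):
    model = hmm.GaussianHMM(n_components=n_states, n_iter=200,
                            random_state=0).fit(X)
    log_l = model.score(X)
    k = n_states**2 + 2 * n_states - 1   # rough free-parameter count
    bic = -2.0 * log_l + k * np.log(len(X))
    if best is None or bic < best[0]:
        best = (bic, n_states, model)

print("selected number of states:", best[1])
```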

  19. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    diffractometers (e.g. BM16 at ESRF and Station 2.3 at Daresbury). In the literature, convolution based profile fitting is normally associated with microstructure analysis, where the sample contribution needs to be separated from the instrument contribution in an observed profile. This is no longer the case. Convolution based profile fitting can also be performed on a fully empirical basis, providing better fits to data and a greater variety of profile shapes. With convolution based profile fitting, virtually any peak shape and its angular dependence can be modelled. The approach may be based on a physical model (FPA) or performed empirically. The quality of fit by convolution is normally better than with other methods, so the uncertainty in derived parameters is reduced. The number of parameters required to describe a pattern is normally smaller than in the 'analytical function approach', so parameter correlation is reduced significantly; increasing profile complexity therefore does not necessarily require an increasing number of parameters. Copyright (2002) Australian X-ray Analytical Association Inc
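
    A minimal numeric sketch of the convolution idea (hypothetical Gaussian "instrument" and Lorentzian "sample" components):

```python
import numpy as np

def gaussian(x, fwhm):
    s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (x / s) ** 2)

def lorentzian(x, fwhm):
    g = fwhm / 2.0
    return g**2 / (x**2 + g**2)

# Build the observed peak profile as the numerical convolution of the
# instrument and sample components, then normalise; in a refinement,
# the component widths become the fitting parameters instead of an
# ad hoc analytical peak shape
x = np.linspace(-2.0, 2.0, 801)
dx = x[1] - x[0]
profile = np.convolve(gaussian(x, 0.3), lorentzian(x, 0.1), mode="same") * dx
profile /= profile.max()
```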

  20. Bootstrap confidence intervals for principal response curves

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ter Braak, Cajo J. F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the