WorldWideScience

Sample records for curve fitting approach

  1. Comparison of ductile-to-brittle transition curve fitting approaches

    International Nuclear Information System (INIS)

    Cao, L.W.; Wu, S.J.; Flewitt, P.E.J.

    2012-01-01

Ductile-to-brittle transition (DBT) curve fitting approaches are compared over the transition temperature range for reactor pressure vessel steels with different kinds of data, including Charpy-V notch impact energy data and fracture toughness data. Three DBT curve fitting methods have been frequently used in the past: the Burr, S-Weibull and tanh distributions. In general there is greater scatter associated with test data obtained within the transition region. Therefore these methods give results with different accuracies, especially when fitting to small quantities of data. The comparison shows that the Burr distribution and tanh distribution can almost equally well fit large, well-distributed data sets extending across the test temperature range to include the upper and lower shelves. The S-Weibull distribution fit is poor for the lower shelf of the DBT curve. Overall, for both large and small quantities of measured data, the Burr distribution provides the best description. - Highlights: ► Burr distribution offers a better fit than an S-Weibull or tanh fit. ► Burr and tanh methods show similar fitting ability for a large data set. ► Burr method can fit sparse data well distributed across the test temperature range. ► S-Weibull method cannot fit the lower shelf well and shows poor fitting quality.
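The tanh form compared above, E(T) = A + B·tanh((T - T0)/C), can be fitted with a simple two-stage scheme: for trial values of T0 and C, the amplitude parameters A and B follow from linear least squares. The sketch below (pure Python, with hypothetical synthetic Charpy-style data; not the authors' code, and a crude grid search rather than a proper nonlinear optimizer) illustrates the idea:

```python
import math

def fit_tanh_dbt(T, E, T0_grid, C_grid):
    """Fit E(T) = A + B*tanh((T - T0)/C) by grid search over (T0, C);
    for each candidate pair, A and B follow from 2x2 linear least squares."""
    best = None
    n = len(T)
    for T0 in T0_grid:
        for C in C_grid:
            x = [math.tanh((t - T0) / C) for t in T]
            sx, sxx = sum(x), sum(v * v for v in x)
            se, sxe = sum(E), sum(v * e for v, e in zip(x, E))
            det = n * sxx - sx * sx
            if abs(det) < 1e-12:
                continue
            B = (n * sxe - sx * se) / det
            A = (se - B * sx) / n
            sse = sum((A + B * xi - e) ** 2 for xi, e in zip(x, E))
            if best is None or sse < best[0]:
                best = (sse, A, B, T0, C)
    return best  # (sse, A, B, T0, C)

# synthetic Charpy-style data generated from known parameters
Ts = list(range(-100, 101, 10))
Es = [80 + 70 * math.tanh((t - 10) / 30) for t in Ts]
sse, A, B, T0, C = fit_tanh_dbt(Ts, Es,
                                T0_grid=range(-20, 31, 5),
                                C_grid=range(10, 51, 5))
```

On this noise-free data the fit recovers the generating parameters exactly; with real scattered transition-region data, the choice of distribution (Burr, S-Weibull, tanh) matters as the abstract describes.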

  2. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

To find a global and smooth curve fitting, the cubic B-spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,

  3. From Curve Fitting to Machine Learning

    CERN Document Server

    Zielesny, Achim

    2011-01-01

The analysis of experimental data has been at the heart of science from its beginnings. But it was the advent of digital computers that allowed the execution of highly non-linear and increasingly complex data analysis procedures - methods that were completely unfeasible before. Non-linear curve fitting, clustering and machine learning belong to these modern techniques, which are a further step towards computational intelligence. The goal of this book is to provide an interactive and illustrative guide to these topics. It concentrates on the road from two dimensional curve fitting to multidimensional clus

  4. Estimating reaction rate constants: comparison between traditional curve fitting and curve resolution

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2000-01-01

A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained over time for a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance versus time data

  5. CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method

    International Nuclear Information System (INIS)

    Olson, D.G.

    1992-01-01

1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine which produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem - Maxima of: 2000 data points in the calibration curve output (LSFIT); 30 input data points; 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two.
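LSFIT's quality-of-fit check — comparing each measured y-value with the value computed from the fitted curve and printing the fractional difference — can be sketched as follows (an illustrative pure-Python least-squares polynomial fit, not the original program):

```python
def polyfit_ls(x, y, deg):
    """Least-squares polynomial fit via normal equations (fine for low degree).
    Returns coefficients coef[i] multiplying x**i."""
    n = deg + 1
    # build normal-equation matrix and right-hand side
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def fractional_diff(x, y, coef):
    """Fractional difference between measured y and fitted y, as LSFIT prints."""
    fit = [sum(c * xi ** i for i, c in enumerate(coef)) for xi in x]
    return [(yi - fi) / yi for yi, fi in zip(y, fit)]

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 + 0.5 * v * v for v in xs]   # exact quadratic, so differences ~ 0
coef = polyfit_ls(xs, ys, 2)
fd = fractional_diff(xs, ys, coef)
```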

  6. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based - in an unconventional way

  7. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    Science.gov (United States)

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematical function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to data from 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. 
Excellent individual fits to the function were derived with significantly different decay constants (P AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0

  8. The environmental Kuznets curve. Does one size fit all?

    International Nuclear Information System (INIS)

    List, J.A.; Gallet, C.A.

    1999-01-01

    This paper uses a new panel data set on state-level sulfur dioxide and nitrogen oxide emissions from 1929-1994 to test the appropriateness of the 'one size fits all' reduced-form regression approach commonly used in the environmental Kuznets curve literature. Empirical results provide initial evidence that an inverted-U shape characterizes the relationship between per capita emissions and per capita incomes at the state level. Parameter estimates suggest, however, that previous studies, which restrict cross-sections to undergo identical experiences over time, may be presenting statistically biased results. 25 refs

  9. Curve fitting methods for solar radiation data modeling

    Energy Technology Data Exchange (ETDEWEB)

Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Singh, Balbir Singh Mahinder, E-mail: balbir@petronas.com.my [Department of Fundamental and Applied Sciences, Faculty of Sciences and Information Technology, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 31750 Tronoh, Perak Darul Ridzuan (Malaysia)]

    2014-10-24

This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, the mathematical model of global solar radiation will be developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.
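The two goodness-of-fit statistics named above are standard; a minimal sketch of how RMSE and R² are computed (illustrative values, not the UTP data):

```python
import math

def rmse(y, yhat):
    """Root mean square error between observed y and fitted yhat."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

y    = [3.0, 5.0, 7.0, 9.0]   # observed (illustrative)
yhat = [2.8, 5.1, 7.2, 8.9]   # fitted   (illustrative)
```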

  10. Curve fitting methods for solar radiation data modeling

    Science.gov (United States)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-10-01

This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, the mathematical model of global solar radiation will be developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  11. Curve fitting methods for solar radiation data modeling

    International Nuclear Information System (INIS)

    Karim, Samsul Ariffin Abdul; Singh, Balbir Singh Mahinder

    2014-01-01

This paper studies the use of several types of curve fitting methods to smooth global solar radiation data. After the data have been fitted using a curve fitting method, the mathematical model of global solar radiation will be developed. The error measurement was calculated using goodness-of-fit statistics such as the root mean square error (RMSE) and the value of R². The best fitting methods will be used as a starting point for the construction of a mathematical model of the solar radiation received at Universiti Teknologi PETRONAS (UTP), Malaysia. Numerical results indicated that Gaussian fitting and sine fitting (both with two terms) give better results compared with the other fitting methods.

  12. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

An improved solution for curve fitting data to an exponential equation (y = Ae(exp Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax(exp B) + C and the general geometric growth equation y = Ak(exp Bt) + C.
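One non-iterative route to y = Ae^(Bt) + C, in the spirit of the difference-based idea described above (the record's exact algorithm is not reproduced here), uses the fact that first differences of equally spaced samples have constant ratio e^(Bh):

```python
import math

def fit_exp_offset(t, y):
    """Fit y = A*exp(B*t) + C on equally spaced samples without iteration.
    Successive first differences have constant ratio exp(B*h), which gives B;
    A and C then follow from ordinary linear least squares on u = exp(B*t)."""
    h = t[1] - t[0]
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    ratios = [d[i + 1] / d[i] for i in range(len(d) - 1)]
    B = math.log(sum(ratios) / len(ratios)) / h
    u = [math.exp(B * ti) for ti in t]
    n, su, suu = len(t), sum(u), sum(v * v for v in u)
    sy, suy = sum(y), sum(v * w for v, w in zip(u, y))
    det = n * suu - su * su
    A = (n * suy - su * sy) / det
    C = (sy - A * su) / n
    return A, B, C

# noise-free synthetic data generated from A=2, B=0.8, C=1
ts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [2.0 * math.exp(0.8 * v) + 1.0 for v in ts]
A, B, C = fit_exp_offset(ts, ys)
```

With noisy data the difference ratios should be combined more robustly than by a plain average; this sketch only shows why no iteration is needed.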

  13. Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm

    Directory of Open Access Journals (Sweden)

    Daeho Jang

    2015-09-01

Full Text Available The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve fitting methods did not. Because the present curve-fitting sigmoid-asymmetric equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.

  14. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable. The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
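The Ricker model itself, R = αS·e^(−βS), is commonly fitted by linearization, since ln(R/S) = ln α − βS is linear in S. A sketch with synthetic data generated from α = 4 (the value discussed above); illustrative only, not the Hudson River data:

```python
import math

def fit_ricker(S, R):
    """Estimate Ricker parameters (alpha, beta) from stock S and recruits R
    by linear regression of ln(R/S) on S: ln(R/S) = ln(alpha) - beta*S."""
    z = [math.log(r / s) for s, r in zip(S, R)]
    n = len(S)
    sbar, zbar = sum(S) / n, sum(z) / n
    slope = sum((si - sbar) * (zi - zbar) for si, zi in zip(S, z)) / \
            sum((si - sbar) ** 2 for si in S)
    alpha = math.exp(zbar - slope * sbar)
    return alpha, -slope

# synthetic stock-recruitment data from known alpha=4, beta=0.01
stock = [10.0, 20.0, 40.0, 80.0, 160.0]
recruits = [4.0 * s * math.exp(-0.01 * s) for s in stock]
alpha, beta = fit_ricker(stock, recruits)
```

The abstract's point is precisely that recovering known parameters this way from realistic, noisy catch-per-unit-effort data can fail, which is what the simulation test checks.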

  15. Sensitivity of Fit Indices to Misspecification in Growth Curve Models

    Science.gov (United States)

    Wu, Wei; West, Stephen G.

    2010-01-01

    This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…

  16. Curve fitting for RHB Islamic Bank annual net profit

    Science.gov (United States)

    Nadarajan, Dineswary; Noor, Noor Fadiya Mohd

    2015-05-01

The RHB Islamic Bank net profit data are obtained from 2004 to 2012. Curve fitting is done by assuming the data are exact or experimental due to the smoothing process. Higher order Lagrange polynomial and cubic spline curve fitting procedures are constructed using Maple software. A normality test is performed to check the data adequacy. Regression analysis with curve estimation is conducted in the SPSS environment. All eleven models are found to be acceptable at the 10% significance level of ANOVA. Residual error and absolute relative true error are calculated and compared. The optimal model based on the minimum average error is proposed.
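Lagrange interpolation, one of the constructions mentioned above, can be sketched in a few lines; the yearly figures below are hypothetical placeholders, not the RHB data:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial
        total += yi * li
    return total

# hypothetical annual figures (placeholders, not the actual bank data)
years  = [2004.0, 2006.0, 2008.0, 2010.0, 2012.0]
profit = [50.0, 65.0, 90.0, 120.0, 160.0]
```

By construction the polynomial reproduces each data point exactly; high-degree Lagrange polynomials can oscillate between nodes, which is why the paper also compares cubic splines.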

  17. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
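Treating the straight-line fit near Isc as a statistical linear regression, as the abstract describes, yields closed-form standard errors for the intercept (the Isc estimate). The sketch below uses the classical frequentist formulas rather than the paper's objective Bayesian treatment, with hypothetical I-V points:

```python
import math

def linfit_with_uncertainty(x, y):
    """Ordinary least-squares line y = a + b*x with standard errors of a and b
    (classical formulas assuming i.i.d. Gaussian noise)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)     # residual variance
    se_b = math.sqrt(s2 / sxx)
    se_a = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return a, b, se_a, se_b

# hypothetical I-V points near short circuit: intercept a estimates Isc
V = [0.00, 0.05, 0.10, 0.15, 0.20]
I = [5.00, 4.99, 4.97, 4.96, 4.94]
a, b, se_a, se_b = linfit_with_uncertainty(V, I)
```

As the abstract warns, adding more points shrinks se_a arbitrarily while ignoring model discrepancy, so a small standard error alone does not validate the straight-line model.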

  18. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  19. Fitting the curve in Excel® : Systematic curve fitting of laboratory and remotely sensed planetary spectra

    NARCIS (Netherlands)

    McCraig, M.A.; Osinski, G.R.; Cloutis, E.A.; Flemming, R.L.; Izawa, M.R.M.; Reddy, V.; Fieber-Beyer, S.K.; Pompilio, L.; van der Meer, F.D.; Berger, J.A.; Bramble, M.S.; Applin, D.M.

    2017-01-01

    Spectroscopy in planetary science often provides the only information regarding the compositional and mineralogical make up of planetary surfaces. The methods employed when curve fitting and modelling spectra can be confusing and difficult to visualize and comprehend. Researchers who are new to

  20. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    NARCIS (Netherlands)

    Vriens, D.; Geus-Oei, L.F. de; Oyen, W.J.G.; Visser, E.P.

    2009-01-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible

  1. THE CPA QUALIFICATION METHOD BASED ON THE GAUSSIAN CURVE FITTING

    Directory of Open Access Journals (Sweden)

    M.T. Adithia

    2015-01-01

Full Text Available The Correlation Power Analysis (CPA) attack is an attack on cryptographic devices, especially smart cards. The results of the attack are correlation traces. Based on the correlation traces, an evaluation is done to observe whether significant peaks appear in the traces or not. The evaluation is done manually, by experts. If significant peaks appear, then the smart card is not considered secure, since it is assumed that the secret key is revealed. We develop a method that objectively detects peaks and decides which peak is significant. We conclude that using the Gaussian curve fitting method, the subjective qualification of peak significance can be objectified. Thus, better decisions can be taken by security experts. We also conclude that the Gaussian curve fitting method is able to show the influence of peak sizes, especially the width and height, on the significance of a particular peak.
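Gaussian curve fitting of a peak can be sketched via the classic log-parabola (Caruana) estimate, which fits a parabola to the logarithm of the three samples around the maximum; this illustrates the general technique, not the authors' CPA-specific method:

```python
import math

def gaussian_peak_params(x, y):
    """Estimate Gaussian peak (height, center, sigma) from the maximum sample
    and its two neighbours via a parabola fit to ln(y) (Caruana's method).
    Assumes equally spaced x and strictly positive y around the peak."""
    k = max(range(len(y)), key=lambda i: y[i])
    h = x[1] - x[0]
    l1, l2, l3 = math.log(y[k - 1]), math.log(y[k]), math.log(y[k + 1])
    denom = l1 - 2.0 * l2 + l3        # second difference; negative for a peak
    mu = x[k] + 0.5 * h * (l1 - l3) / denom
    sigma2 = -h * h / denom
    height = math.exp(l2 + (x[k] - mu) ** 2 / (2.0 * sigma2))
    return height, mu, math.sqrt(sigma2)

# synthetic peak generated from height=3, center=1.97, sigma=0.25
xs = [i * 0.1 for i in range(41)]
ys = [3.0 * math.exp(-(v - 1.97) ** 2 / (2 * 0.25 ** 2)) for v in xs]
height, mu, sigma = gaussian_peak_params(xs, ys)
```

The recovered width and height are exactly the quantities the abstract identifies as driving the significance of a peak.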

  2. Application of tanh curve fitting to toughness data

    International Nuclear Information System (INIS)

    Sakai, Yuzuru; Ogura, Nobukazu

    1985-01-01

Curve-fitting regression procedures for toughness data have been examined. The objectives of curve fitting in the context of the study of nuclear pressure vessel steels are (1) convenient summarization of test data to permit comparison of materials and testing methods; (2) development of a statistical base concerning the data; (3) the surveying of the relationships between Charpy data and fracture toughness data; (4) estimation of the fracture toughness level from Charpy absorbed energy data. The computational procedures using the tanh function have been applied to the toughness data (Charpy absorbed energy, static fracture toughness, dynamic fracture toughness, crack arrest toughness) of A533B cl.1 and A508 cl.3 steels. The results of the analysis show the statistical features of the material toughness and give the method for estimating the fracture toughness level from Charpy absorbed energy data. (author)

  3. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).
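The equiangular-ray idea can be illustrated with a simplified single-centre version (the paper uses an alignment axis, two ray centres, and overlapping arcs, which are omitted here):

```python
import math

def average_curves_equiangular(curves, n_rays=36):
    """Average closed outline curves by sampling the radial distance along
    equiangular rays from each curve's centroid. A single-centre sketch of
    the idea only; the paper uses two ray centres and overlapping arcs."""
    avg = [0.0] * n_rays
    for pts in curves:
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        for k in range(n_rays):
            theta = 2.0 * math.pi * k / n_rays
            # pick the digitized point whose direction is closest to this ray
            best = min(pts, key=lambda p: abs(
                (math.atan2(p[1] - cy, p[0] - cx) - theta + math.pi) %
                (2.0 * math.pi) - math.pi))
            avg[k] += math.hypot(best[0] - cx, best[1] - cy)
    return [r / len(curves) for r in avg]

def circle(r, n=360):
    """Hypothetical circular outline of radius r (stand-in for a digitized curve)."""
    return [(r * math.cos(2 * math.pi * i / n), r * math.sin(2 * math.pi * i / n))
            for i in range(n)]

# averaging outlines of radius 2 and 4 should give radius 3 on every ray
radii = average_curves_equiangular([circle(2.0), circle(4.0)])
```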

  4. PLOTNFIT.4TH, Data Plotting and Curve Fitting by Polynomials

    International Nuclear Information System (INIS)

    Schiffgens, J.O.

    1990-01-01

1 - Description of program or function: PLOTnFIT is used for plotting and analyzing data by fitting nth degree polynomials of basis functions to the data interactively and printing graphs of the data and the polynomial functions. It can be used to generate linear, semi-log, and log-log graphs and can automatically scale the coordinate axes to suit the data. Multiple data sets may be plotted on a single graph. An auxiliary program, READ1ST, is included which produces an on-line summary of the information contained in the PLOTnFIT reference report. 2 - Method of solution: PLOTnFIT uses the least squares method to calculate the coefficients of nth-degree (up to 10th degree) polynomials of 11 selected basis functions such that each polynomial fits the data in a least squares sense. The procedure incorporated in the code uses a linear combination of orthogonal polynomials to avoid 'ill-conditioning' and to perform the curve fitting task with single-precision arithmetic. 3 - Restrictions on the complexity of the problem - Maxima of: 225 data points per job (or graph) including all data sets; 8 data sets (or tasks) per job (or graph).
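The "linear combination of orthogonal polynomials" device mentioned in the method of solution can be illustrated by orthogonalizing the monomial basis on the data points, so each coefficient becomes a simple projection and no ill-conditioned normal-equation solve is needed (a sketch, not the PLOTnFIT source):

```python
def orthogonal_polyfit(x, y, deg):
    """Least-squares polynomial fit using basis vectors orthogonalized on the
    data points (Gram-Schmidt). Each coefficient is a projection y.q / q.q,
    avoiding the ill-conditioned monomial normal equations.
    Returns the fitted values at the data points."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    basis = []
    for d in range(deg + 1):
        v = [xi ** d for xi in x]              # start from the monomial x**d
        for q in basis:                        # remove components along earlier basis vectors
            c = dot(v, q) / dot(q, q)
            v = [a - c * b for a, b in zip(v, q)]
        basis.append(v)
    coeffs = [dot(y, q) / dot(q, q) for q in basis]
    return [sum(c * q[i] for c, q in zip(coeffs, basis)) for i in range(len(x))]

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0 + 2.0 * v - 0.5 * v * v for v in xs]   # exact quadratic
fit = orthogonal_polyfit(xs, ys, 2)
```

Production codes use a three-term recurrence (Forsythe's method) instead of explicit Gram-Schmidt, but the conditioning benefit is the same.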

  5. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Huang, Can

    2018-01-01

In order to solve the reactive power optimization with joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master-slave structure, and improves traditional centralized modeling methods by alleviating the big data problem in a control center. Specifically, the transmission-distribution-network coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost ... optimality. Numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.

  6. Fitness function and nonunique solutions in x-ray reflectivity curve fitting: crosserror between surface roughness and mass density

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Mattila, M; Hakkarainen, T; Sormunen, J; Lipsanen, H

    2007-01-01

    Nonunique solutions of the x-ray reflectivity (XRR) curve fitting problem were studied by modelling layer structures with neural networks and designing a fitness function to handle the nonidealities of measurements. Modelled atomic-layer-deposited aluminium oxide film structures were used in the simulations to calculate XRR curves based on Parratt's formalism. This approach reduced the dimensionality of the parameter space and allowed the use of fitness landscapes in the study of nonunique solutions. Fitness landscapes, where the height in a map represents the fitness value as a function of the process parameters, revealed tracks where the local fitness optima lie. The tracks were projected on the physical parameter space thus allowing the construction of the crosserror equation between weakly determined parameters, i.e. between the mass density and the surface roughness of a layer. The equation gives the minimum error for the other parameters which is a consequence of the nonuniqueness of the solution if noise is present. Furthermore, the existence of a possible unique solution in a certain parameter range was found to be dependent on the layer thickness and the signal-to-noise ratio

  7. A graph-based method for fitting planar B-spline curves with intersections

    Directory of Open Access Journals (Sweden)

    Pengbo Bo

    2016-01-01

Full Text Available The problem of fitting B-spline curves to planar point clouds is studied in this paper. A novel method is proposed to deal with the most challenging case, where multiple intersecting curves or curves with self-intersection are necessary for shape representation. A method based on Delaunay triangulation of the data points is developed to identify connected components, which is also capable of removing outliers. A skeleton representation is utilized to represent the topological structure, which is further used to create a weighted graph for deciding the merging of curve segments. Different from existing approaches, which utilize local shape information near intersections, our method considers the shape characteristics of curve segments in a larger scope and is thus capable of giving more satisfactory results. By fitting each group of data points with a B-spline curve, we solve the problems of curve structure reconstruction from point clouds, as well as the vectorization of simple line drawing images by reconstructing the drawn lines.

  8. PLOTnFIT: A BASIC program for data plotting and curve fitting

    Energy Technology Data Exchange (ETDEWEB)

    Schiffgens, J O

    1989-10-01

PLOTnFIT is a BASIC program to be used with an IBM or IBM-compatible personal computer (PC) for plotting and fitting curves to measured or observed data for both extrapolation and interpolation. It uses the least squares method to calculate the coefficients of nth degree polynomials (e.g., up to 10th degree) of basis functions so that each polynomial fits the data in a least squares sense, then plots the data and the polynomial that a user decides best represents them. PLOTnFIT is very versatile. It can be used to generate linear, semilog, and log-log graphs and can automatically scale the coordinate axes to suit the data. It can plot more than one data set on a graph (e.g., up to 8 data sets) and more data points than a user is likely to put on one graph (e.g., up to 225 points). A PC diskette containing (1) READ1ST.PNF (a summary of this NUREG), (2) INI06891.SIS and FOL06891.SIS (two data files), and (3) PLOTNFIT.4TH (the latest version of the program) may be obtained from the National Energy Software Center, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439. (author)

  9. Dose-response curve estimation: a semiparametric mixture approach.

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2011-12-01

    In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples. © 2011, The International Biometric Society.
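The mixture idea — weighting parametric and nonparametric curve estimates by their relative fit — can be sketched with a toy weighting scheme (a straight-line fit plus a Gaussian-kernel smoother; the paper's estimator and weights are more sophisticated):

```python
import math

def semiparametric_estimate(x, y, x0, bandwidth=1.0):
    """Mixture of a parametric (straight-line) fit and a nonparametric
    (Nadaraya-Watson kernel) estimate at x0, weighted by in-sample fit.
    A toy sketch of the mixture idea, not the paper's estimator."""
    n = len(x)
    # parametric part: ordinary least-squares line
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    par = a + b * x0
    sse_par = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    # nonparametric part: Gaussian-kernel weighted average
    def nw(t):
        w = [math.exp(-0.5 * ((t - xi) / bandwidth) ** 2) for xi in x]
        return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    nonpar = nw(x0)
    sse_np = sum((yi - nw(xi)) ** 2 for xi, yi in zip(x, y))
    # weight toward the component with the smaller in-sample error
    total = sse_par + sse_np
    w_par = sse_np / total if total > 0 else 0.5
    return w_par * par + (1.0 - w_par) * nonpar

# on exactly linear data the parametric component gets essentially all the weight
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * v + 1.0 for v in xs]
est = semiparametric_estimate(xs, ys, 2.5)
```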

  10. Theoretical derivation of anodizing current and comparison between fitted curves and measured curves under different conditions

    Science.gov (United States)

    Chong, Bin; Yu, Dongliang; Jin, Rong; Wang, Yang; Li, Dongdong; Song, Ye; Gao, Mingqi; Zhu, Xufei

    2015-04-01

Anodic TiO2 nanotubes have been studied extensively for many years. However, the growth kinetics still remains unclear. The systematic study of the current transient under constant anodizing voltage has not been mentioned in the original literature. Here, a derivation and its corresponding theoretical formula are proposed to overcome this challenge. In this paper, the theoretical expressions for the time dependent ionic current and electronic current are derived to explore the anodizing process of Ti. The anodizing current-time curves under different anodizing voltages and different temperatures are experimentally investigated in the anodization of Ti. Furthermore, the quantitative relationship between the thickness of the barrier layer and anodizing time, and the relationships between the ionic/electronic current and temperatures are proposed in this paper. All of the current-transient plots can be fitted consistently by the proposed theoretical expressions. Additionally, it is the first time that the coefficient A of the exponential relationship (ionic current j_ion = A exp(BE)) has been determined under various temperatures and voltages. And the results indicate that as temperature and voltage increase, ionic current and electronic current both increase. The temperature has a larger effect on electronic current than on ionic current. These results can promote the research of kinetics from a qualitative to a quantitative level.
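The exponential high-field law j_ion = A·exp(BE) is typically fitted by taking logarithms, since ln j is then linear in E. An illustrative sketch with hypothetical values (not the measured anodizing data):

```python
import math

def fit_exponential_law(E, j):
    """Recover A and B in j = A*exp(B*E) by linear regression of ln(j) on E."""
    z = [math.log(v) for v in j]
    n = len(E)
    Ebar, zbar = sum(E) / n, sum(z) / n
    B = sum((e - Ebar) * (zi - zbar) for e, zi in zip(E, z)) / \
        sum((e - Ebar) ** 2 for e in E)
    A = math.exp(zbar - B * Ebar)
    return A, B

# hypothetical field strengths and ionic currents generated from A=0.02, B=1.5
fields = [1.0, 1.5, 2.0, 2.5, 3.0]
currents = [0.02 * math.exp(1.5 * e) for e in fields]
A, B = fit_exponential_law(fields, currents)
```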

  12. Multidimentional and Multi-Parameter Fortran-Based Curve Fitting ...

    African Journals Online (AJOL)

This work briefly describes the mathematics behind the algorithm, and also elaborates how to implement it using the FORTRAN 95 programming language. The advantage of this algorithm, when extended to surfaces and complex functions, is that it gives researchers greater confidence in the fit. It also improves the ...

  13. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits.

  14. Box-Cox transformation for resolving Peelle's Pertinent Puzzle in curve fitting

    International Nuclear Information System (INIS)

    Oh, Soo-Youl

    2003-01-01

Incorporating the Box-Cox transformation into a least-squares method is presented as one resolution of an anomaly known as Peelle's Pertinent Puzzle. The transformation is a strategy for making non-normally distributed data resemble normal data. A procedure is proposed: transform the measured raw data with an optimized Box-Cox transformation parameter, fit the transformed data using a usual curve fitting method, then inverse-transform the fitted results to obtain the final estimates. The generalized least-squares method utilized in GMA is adopted as the curve fitting tool for the test of the proposed procedure. In the procedure, covariance matrices are correspondingly transformed and inverse-transformed with the aid of the error propagation law. In addition to a sensible answer to the Peelle's problem itself, the procedure resulted in reasonable estimates of ⁶Li(n,t) cross sections in the several-to-800 keV energy region. Meanwhile, comparisons of the present procedure with that of Chiba and Smith show that both procedures yield estimates very close to each other, both for the sample evaluation on ⁶Li(n,t) and for the Peelle's problem. The two procedures, however, are conceptually very different, and further discussion is needed to reach a consensus on this issue of resolving the Puzzle. It is also pointed out that the transformation is applicable not only to a least-squares method but also to other parameter estimation methods, such as a usual Bayesian approach formulated under an assumption of normality of the probability density function. (author)
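The three-step procedure (transform, fit, inverse-transform) can be sketched with SciPy's Box-Cox utilities. This is a generic illustration on synthetic positively skewed data, not the GMA evaluation; the data-generating model is hypothetical.

```python
import numpy as np
from scipy import stats, special

# (1) Box-Cox transform with an MLE-optimized parameter,
# (2) ordinary least-squares fit in the transformed space,
# (3) inverse transform of the fitted curve back to the original scale.
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 50)
y = np.exp(0.3 * x) * rng.lognormal(0.0, 0.05, x.size)  # skewed synthetic data

yt, lam = stats.boxcox(y)                  # step 1: optimal lambda by MLE
coef = np.polyfit(x, yt, 1)                # step 2: fit the transformed data
yt_fit = np.polyval(coef, x)
y_fit = special.inv_boxcox(yt_fit, lam)    # step 3: back to the original scale
```

In a full implementation the covariance matrix would be propagated through both transformations via the error propagation law, as the abstract describes.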

  15. Application of numerical methods in spectroscopy : fitting of the curve of thermoluminescence

    International Nuclear Information System (INIS)

    RANDRIAMANALINA, S.

    1999-01-01

The method of non-linear least squares is one of the mathematical tools widely employed in spectroscopy; it is used for the determination of the parameters of a model. On the other hand, the spline function is among the fitting functions that introduce the smallest error, and it is used for the calculation of the area under the curve. We present an application of these methods, with details of the corresponding algorithms, to the fitting of the thermoluminescence curve. [fr]

  16. Data fitting by G1 rational cubic Bézier curves using harmony search

    Directory of Open Access Journals (Sweden)

    Najihah Mohamed

    2015-07-01

A metaheuristic algorithm called Harmony Search (HS) is implemented for data fitting by rational cubic Bézier curves. HS is a derivative-free real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony, and it is suitable for multivariate non-linear optimization problems. Data fitting is achieved using rational cubic Bézier curves with G1 continuity at every joint between segments of the whole data set, an approach that makes the technique automated. HS is used to optimize the positions of the middle points and the values of the shape parameters. Test outline images and a comparative experimental analysis are presented to show the effectiveness and robustness of the proposed method. Statistical testing between HS and two other metaheuristic algorithms is used in the analysis of several outline images. All of the algorithms improvised a near-optimal solution, but the result obtained by HS is better than the results of the other two algorithms.

  17. A non-iterative method for fitting decay curves with background

    International Nuclear Information System (INIS)

    Mukoyama, T.

    1982-01-01

    A non-iterative method for fitting a decay curve with background is presented. The sum of an exponential function and a constant term is linearized by the use of the difference equation and parameters are determined by the standard linear least-squares fitting. The validity of the present method has been tested against pseudo-experimental data. (orig.)
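The linearization described above can be made concrete. For y(t) = A·exp(-λt) + B sampled at a uniform step dt, the difference equation y[i+1] = r·y[i] + B·(1-r), with r = exp(-λ·dt), is linear in y[i], so every parameter follows from standard linear least squares with no iteration. A sketch on noise-free synthetic data (all parameter values hypothetical):

```python
import numpy as np

# Non-iterative fit of a decay curve with constant background via the
# difference-equation linearization.
A_true, lam_true, B_true, dt = 100.0, 0.3, 5.0, 0.5   # hypothetical values
t = np.arange(0.0, 20.0, dt)
y = A_true * np.exp(-lam_true * t) + B_true

r, c = np.polyfit(y[:-1], y[1:], 1)     # slope r, intercept c = B*(1 - r)
lam_fit = -np.log(r) / dt               # decay constant from the slope
B_fit = c / (1.0 - r)                   # background from the intercept
# Amplitude by one final linear least-squares step.
e = np.exp(-lam_fit * t)
A_fit = np.sum((y - B_fit) * e) / np.sum(e * e)
```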

  18. Background does not significantly affect power-exponential fitting of gastric emptying curves

    International Nuclear Information System (INIS)

    Jonderko, K.

    1987-01-01

    Using a procedure enabling the assessment of background radiation, research was done to elucidate the course of changes in background activity during gastric emptying measurements. Attention was focused on the changes in the shape of power-exponential fitted gastric emptying curves after correction for background was performed. The observed pattern of background counts allowed to explain the shifts of the parameters characterizing power-exponential curves connected with background correction. It was concluded that background had a negligible effect on the power-exponential fitting of gastric emptying curves. (author)

  19. Potential errors when fitting experience curves by means of spreadsheet software

    International Nuclear Information System (INIS)

    Sark, W.G.J.H.M. van; Alsema, E.A.

    2010-01-01

    Progress ratios (PRs) are widely used in forecasting development of many technologies; they are derived from historical data represented in experience curves. Fitting the double logarithmic graphs is easily done with spreadsheet software like Microsoft Excel, by adding a trend line to the graph. However, it is unknown to many that these data are transformed to linear data before a fit is performed. This leads to erroneous results or a transformation bias in the PR, as we demonstrate using the experience curve for photovoltaic technology: logarithmic transformation leads to overestimates of progress ratios and underestimates of goodness of fit. Therefore, other graphing and analysis software is recommended.
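The two fitting routes the abstract contrasts can be sketched side by side: the spreadsheet trend line fits the log-transformed data, while direct non-linear least squares works in the original space, and the two progress-ratio estimates need not coincide. The experience-curve data below are synthetic and the parameter values hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.logspace(0, 3, 40)                                 # cumulative production
C = 100.0 * x ** (-0.32) + rng.normal(0.0, 0.5, x.size)   # cost, additive noise

# Trend-line approach: linear fit of the log-transformed data.
b_log = -np.polyfit(np.log(x), np.log(C), 1)[0]
# Direct non-linear least squares of C = C0 * x**(-b) in the original space.
(C0, b_nl), _ = curve_fit(lambda x, C0, b: C0 * x ** (-b), x, C, p0=[100.0, 0.3])

pr_log, pr_nl = 2.0 ** (-b_log), 2.0 ** (-b_nl)           # progress ratio PR = 2**(-b)
```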

  20. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO₄·5H₂O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and it can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
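A loose sketch of the idea on synthetic data: decompose two overlapping Gaussian lines by non-linear least squares, then feed the fitting residual back and refit. The line positions, widths, and amplitudes below are hypothetical and only evoke the Cu-Fe region; this is not the paper's exact compensation scheme.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, c1, s1, a2, c2, s2):
    """Sum of two Gaussian peaks."""
    return (a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2))

x = np.linspace(320.0, 328.0, 400)                       # wavelength (nm)
y = two_gauss(x, 1.0, 323.0, 0.4, 0.6, 324.2, 0.5)       # synthetic overlapping lines

p0 = [0.8, 322.8, 0.5, 0.5, 324.5, 0.5]                  # rough initial guesses
p1, _ = curve_fit(two_gauss, x, y, p0=p0)                # first fit
resid = y - two_gauss(x, *p1)
p2, _ = curve_fit(two_gauss, x, y + resid, p0=p1)        # residual fed back, refit
```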

  1. Cuckoo search with Lévy flights for weighted Bayesian energy functional optimization in global-support curve data fitting.

    Science.gov (United States)

    Gálvez, Akemi; Iglesias, Andrés; Cabellos, Luis

    2014-01-01

    The problem of data fitting is very important in many theoretical and applied fields. In this paper, we consider the problem of optimizing a weighted Bayesian energy functional for data fitting by using global-support approximating curves. By global-support curves we mean curves expressed as a linear combination of basis functions whose support is the whole domain of the problem, as opposed to other common approaches in CAD/CAM and computer graphics driven by piecewise functions (such as B-splines and NURBS) that provide local control of the shape of the curve. Our method applies a powerful nature-inspired metaheuristic algorithm called cuckoo search, introduced recently to solve optimization problems. A major advantage of this method is its simplicity: cuckoo search requires only two parameters, many fewer than other metaheuristic approaches, so the parameter tuning becomes a very simple task. The paper shows that this new approach can be successfully used to solve our optimization problem. To check the performance of our approach, it has been applied to five illustrative examples of different types, including open and closed 2D and 3D curves that exhibit challenging features, such as cusps and self-intersections. Our results show that the method performs pretty well, being able to solve our minimization problem in an astonishingly straightforward way.

  2. Repair models of cell survival and corresponding computer program for survival curve fitting

    International Nuclear Information System (INIS)

    Shen Xun; Hu Yiwei

    1992-01-01

Some basic concepts and formulations of two repair models of survival, the incomplete repair (IR) model and the lethal-potentially lethal (LPL) model, are introduced. An IBM-PC computer program for survival curve fitting with these models was developed and applied to fit the survival of human melanoma cells HX118 irradiated at different dose rates. A comparison was made between the repair models and two non-repair models, the multitarget-single hit model and the linear-quadratic model, in the fitting and analysis of the survival-dose curves. It was shown that either the IR model or the LPL model can fit a set of survival curves obtained at different dose rates with the same parameters and provide information on the repair capacity of cells. These two mathematical models could be very useful in the quantitative study of the radiosensitivity and repair capacity of cells.

  3. Multimodal determination of Rayleigh dispersion and attenuation curves using the circle fit method

    Science.gov (United States)

    Verachtert, R.; Lombaert, G.; Degrande, G.

    2018-03-01

    This paper introduces the circle fit method for the determination of multi-modal Rayleigh dispersion and attenuation curves as part of a Multichannel Analysis of Surface Waves (MASW) experiment. The wave field is transformed to the frequency-wavenumber (fk) domain using a discretized Hankel transform. In a Nyquist plot of the fk-spectrum, displaying the imaginary part against the real part, the Rayleigh wave modes correspond to circles. The experimental Rayleigh dispersion and attenuation curves are derived from the angular sweep of the central angle of these circles. The method can also be applied to the analytical fk-spectrum of the Green's function of a layered half-space in order to compute dispersion and attenuation curves, as an alternative to solving an eigenvalue problem. A MASW experiment is subsequently simulated for a site with a regular velocity profile and a site with a soft layer trapped between two stiffer layers. The performance of the circle fit method to determine the dispersion and attenuation curves is compared with the peak picking method and the half-power bandwidth method. The circle fit method is found to be the most accurate and robust method for the determination of the dispersion curves. When determining attenuation curves, the circle fit method and half-power bandwidth method are accurate if the mode exhibits a sharp peak in the fk-spectrum. Furthermore, simulated and theoretical attenuation curves determined with the circle fit method agree very well. A similar correspondence is not obtained when using the half-power bandwidth method. Finally, the circle fit method is applied to measurement data obtained for a MASW experiment at a site in Heverlee, Belgium. In order to validate the soil profile obtained from the inversion procedure, force-velocity transfer functions were computed and found in good correspondence with the experimental transfer functions, especially in the frequency range between 5 and 80 Hz.

  4. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard equipment a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. However, in more sophisticated applications, proper weights for data are often essential. For this purpose a program package was created which is very similar to the standard curve-fit program but which includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations, or optionally the uncertainties can be normalized with chi-square. The program is very versatile and allows one to perform quite difficult data analysis in a convenient way with the pocket calculator HP 67/97
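The statistics the package adds over the calculator's built-in fit can be sketched in matrix form: a weighted linear fit whose parameter covariance matrix yields the uncertainties of the fitted parameters. All data values below are hypothetical.

```python
import numpy as np

def weighted_line_fit(x, y, sigma):
    """Weighted least-squares fit of y = a + b*x with per-point standard
    deviations; returns the parameters and their uncertainties."""
    W = 1.0 / sigma ** 2
    X = np.column_stack([np.ones_like(x), x])
    cov = np.linalg.inv(X.T @ (W[:, None] * X))   # parameter covariance matrix
    beta = cov @ X.T @ (W * y)                    # [a, b]
    return beta, np.sqrt(np.diag(cov))            # parameters, uncertainties

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 0.5 + 2.0 * x
sigma = np.array([0.1, 0.1, 0.2, 0.2, 0.4])       # hypothetical data uncertainties
beta, err = weighted_line_fit(x, y, sigma)
```

The same covariance matrix also gives the uncertainty of any interpolated or extrapolated value, as the abstract notes.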

  5. Identifying ambiguous prostate gland contours from histology using capsule shape information and least squares curve fitting

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Rania [DigiPen Institute of Technology, Department of Computer Engineering, Redmond, WA (United States); McKenzie, Frederic D. [Old Dominion University, Department of Electrical and Computer Engineering, Norfolk, VA (United States)

    2007-12-15

    To obtain an accurate assessment of the percentage and depth of extra-capsular soft tissue removed with the prostate by the various surgical techniques in order to help surgeons in determining the appropriateness of different surgical approaches. This can be enhanced by an accurate and automated means of identifying the prostate gland contour. To facilitate 3D reconstruction and, ultimately, more accurate analyses, it is essential for us to identify the capsule boundary that separates the prostate gland tissue from its extra-capsular tissue. However, the capsule is sometimes unrecognizable due to the naturally occurring intrusion of muscle and connective tissue into the prostate gland. At these regions where the capsule disappears, its contour can be arbitrarily created with a continuing contour line based on the natural shape of the prostate. We utilize an algorithm based on a least squares curve fitting technique that uses a prostate shape equation to merge previously detected capsule parts with the shape equation to produce an approximated curve that represents the prostate capsule. We have tested our algorithm using three different shapes on 13 histologic prostate slices that are cut at different locations from the apex. The best result shows a 90% average contour match when compared to pathologist-drawn contours. We believe that automatically identifying histologic prostate contours will lead to increased objective analyses of surgical margins and extracapsular spread of cancer. Our results show that this is achievable. (orig.)

  7. Automatic Curve Fitting Based on Radial Basis Functions and a Hierarchical Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    G. Trejo-Caballero

    2015-01-01

Curve fitting is a very challenging problem that arises in a wide variety of scientific and engineering applications. Given a set of data points, possibly noisy, the goal is to build a compact representation of the curve that corresponds to the best estimate of the unknown underlying relationship between two variables. Despite the large number of methods available to tackle this problem, it remains challenging and elusive. In this paper, a new method to tackle this problem using strictly a linear combination of radial basis functions (RBFs) is proposed. To be more specific, we divide the parameter search space into linear and nonlinear parameter subspaces. We use a hierarchical genetic algorithm (HGA) to minimize a model selection criterion, which allows us to automatically and simultaneously determine the nonlinear parameters, and then to compute the linear parameters by the least-squares method through singular value decomposition. The method is fully automatic and does not require subjective parameters, for example, a smoothing factor or centre locations, to perform the solution. In order to validate the efficacy of our approach, we perform an experimental study with several tests on benchmark smooth functions. A comparative analysis with two successful methods based on RBF networks has been included.
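The linear half of the split can be sketched directly: once the nonlinear parameters (RBF centres and widths) are fixed, the weights of the linear combination follow from an SVD-based least-squares solve. Here the centres are simply fixed on a grid for illustration, whereas the paper evolves them with an HGA; all data are synthetic.

```python
import numpy as np

def rbf_design(x, centres, width):
    """Design matrix of Gaussian radial basis functions."""
    return np.exp(-((x[:, None] - centres[None, :]) / width) ** 2)

rng = np.random.default_rng(3)
x = np.linspace(-3.0, 3.0, 100)
y = np.sin(x) + rng.normal(0.0, 0.02, x.size)   # noisy samples of an unknown curve

centres = np.linspace(-3.0, 3.0, 10)            # fixed here; HGA-chosen in the paper
Phi = rbf_design(x, centres, width=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # lstsq solves via SVD internally
y_hat = Phi @ w
```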

  8. A three-parameter langmuir-type model for fitting standard curves of sandwich enzyme immunoassays with special attention to the α-fetoprotein assay

    NARCIS (Netherlands)

    Kortlandt, W.; Endeman, H.J.; Hoeke, J.O.O.

    In a simplified approach to the reaction kinetics of enzyme-linked immunoassays, a Langmuir-type equation y = [ax/(b + x)] + c was derived. This model proved to be superior to logit-log and semilog models in the curve-fitting of standard curves. An assay for α-fetoprotein developed in our laboratory
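The three-parameter model y = ax/(b + x) + c can be fitted to a standard curve with ordinary non-linear least squares. A minimal sketch on noise-free synthetic calibration points; the parameter values and concentrations are hypothetical, not the assay's.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(x, a, b, c):
    """Three-parameter Langmuir-type standard-curve model."""
    return a * x / (b + x) + c

a_true, b_true, c_true = 2.0, 5.0, 0.1                 # hypothetical parameters
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])  # standard concentrations
signal = langmuir(conc, a_true, b_true, c_true)

p, _ = curve_fit(langmuir, conc, signal, p0=[1.0, 1.0, 0.0])
```

Interpolating unknowns then amounts to inverting the fitted curve, x = b(y - c)/(a - y + c).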

  9. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to experimental points using the least squares Levenberg-Marquardt method. The main advantage of GlowFit is in its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in the so-called pattern files. GlowFit is a Microsoft Windows-operated user-friendly program. Its graphic interface enables easy intuitive manipulation of glow-peaks, at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)

  12. CABAS: A freely available PC program for fitting calibration curves in chromosome aberration dosimetry

    International Nuclear Information System (INIS)

    Deperas, J.; Szluiska, M.; Deperas-Kaminska, M.; Edwards, A.; Lloyd, D.; Lindholm, C.; Romm, H.; Roy, L.; Moss, R.; Morand, J.; Wojcik, A.

    2007-01-01

The aim of biological dosimetry is to estimate the dose to which an accident victim was exposed, together with the associated uncertainty. This process requires the use of the maximum-likelihood method for fitting a calibration curve, a procedure that is not implemented in most statistical computer programs. Several laboratories have produced their own programs, but these are frequently not user-friendly and not available to outside users. We developed software for fitting a linear-quadratic dose-response relationship by the method of maximum likelihood and for estimating a dose from the number of aberrations observed. The program, called CABAS, consists of the main curve-fitting and dose-estimating module and modules for calculating the dose in cases of partial-body exposure, for estimating the minimum number of cells necessary to detect a given dose of radiation, and for calculating the dose in the case of a protracted exposure. (authors)
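The maximum-likelihood fit CABAS performs can be sketched as minimizing a Poisson negative log-likelihood for the linear-quadratic yield curve y(D) = c + αD + βD². The doses, cell numbers, and dicentric counts below are hypothetical, constructed only to illustrate the fit:

```python
import numpy as np
from scipy.optimize import minimize

D = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])      # dose (Gy)
cells = np.array([5000, 5000, 3000, 2000, 1000, 800, 500])
dics = np.array([5, 58, 86, 162, 261, 433, 460])        # dicentric counts (synthetic)

def neg_log_lik(p):
    """Poisson negative log-likelihood (up to a constant) for y = c + a*D + b*D**2."""
    c, a, b = p
    mu = cells * (c + a * D + b * D ** 2)               # expected counts per dose point
    return np.sum(mu - dics * np.log(mu))

res = minimize(neg_log_lik, x0=[0.001, 0.02, 0.05],
               bounds=[(1e-6, None)] * 3)               # keep rates positive
c_hat, a_hat, b_hat = res.x
```

Dose estimation then inverts the fitted curve for an observed aberration frequency; the curvature of the likelihood around the optimum supplies the uncertainties.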

  13. Polynomial curve fitting for control rod worth using least square numerical analysis

    International Nuclear Information System (INIS)

    Muhammad Husamuddin Abdul Khalil; Mark Dennis Usang; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh

    2012-01-01

The RTP must have sufficient excess reactivity to compensate for negative reactivity feedback effects, such as those caused by the fuel temperature and power defects of reactivity and by fuel burn-up, and to allow full-power operation for a predetermined period of time. To compensate for this excess reactivity, it is necessary to introduce an amount of negative reactivity by adjusting or controlling the control rods at will. Control rod worth depends largely upon the value of the neutron flux at the location of the rod and is reflected by a polynomial curve. The purpose of this paper is to carry out the polynomial curve fitting using least-squares numerical techniques via a MATLAB-compatible language. (author)
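The least-squares polynomial fit described above is a one-liner in most numerical environments. A minimal sketch on a synthetic S-shaped integral rod-worth curve (all values hypothetical; MATLAB's `polyfit` behaves analogously):

```python
import numpy as np

# Least-squares polynomial fit of an integral control-rod-worth curve.
pos = np.linspace(0.0, 100.0, 11)                    # rod position (% withdrawn)
worth = 3.0 / (1.0 + np.exp(-(pos - 50.0) / 20.0))   # synthetic S-shaped worth ($)

coef = np.polyfit(pos, worth, 5)                     # 5th-order polynomial fit
worth_fit = np.polyval(coef, pos)
```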

  14. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    Cardoso, Vanderlei

    2002-01-01

The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak area, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all the fits developed in the present work the covariance matrix methodology was used, an essential procedure for giving a complete description of the partial uncertainties involved. (author)

  15. Box-Cox transformation for resolving the Peelle's Pertinent Puzzle in a curve fitting

    International Nuclear Information System (INIS)

    Oh, S. Y.; Seo, C. G.

    2004-01-01

Incorporating the Box-Cox transformation into a curve fitting is presented as one method for resolving an anomaly known in the nuclear data community as the Peelle's Pertinent Puzzle. The Box-Cox transformation is a strategy for making non-normally distributed data resemble normally distributed data. The proposed method consists of the following steps: transform the raw data to be fitted with the optimized Box-Cox transformation parameter, fit the transformed data using a conventional curve fitting tool (the least-squares method in this study), then inverse-transform the fitted results to the final estimates. Covariance matrices are correspondingly transformed and inverse-transformed with the aid of the law of error propagation. In addition to a sensible answer to the Puzzle, the proposed method resulted in reasonable estimates for a test evaluation with pseudo-experimental ⁶Li(n,t) cross sections in the several-to-800 keV energy region, while the GMA code resulted in the systematic underestimates that characterize the Puzzle. Meanwhile, it is observed that the present method and the Chiba-Smith method yield almost the same estimates for the test evaluation on ⁶Li(n,t). Conceptually, however, the two methods are very different from each other, and further discussion is needed for a consensus on the issue of how to resolve the Puzzle. (authors)

  16. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Directory of Open Access Journals (Sweden)

    N. I. Tananaev

    2015-03-01

Published suspended sediment data for Arctic rivers are scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit a log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as the optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.

  18. The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting

    Science.gov (United States)

    Tao, Zhang; Li, Zhang; Dingjun, Chen

    On the basis of the idea of second-order (quadratic) curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved with the Matlab software. The validity of the preventing-increase model is confirmed through a numerical experiment. The experimental results show that the precision of the preventing-increase model is satisfactory.
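    The paper solves its parameters in Matlab; a hedged NumPy equivalent of a second-order polynomial fit looks as follows (the yearly site counts are invented for illustration):

```python
import numpy as np

# Hypothetical yearly counts of e-commerce sites (illustrative numbers only).
years = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sites = np.array([120.0, 180.0, 265.0, 380.0, 520.0, 690.0])

# Second-order least-squares fit: n(t) = c2*t**2 + c1*t + c0.
coeffs = np.polyfit(years, sites, deg=2)
predicted = np.polyval(coeffs, years)
rmse = float(np.sqrt(np.mean((predicted - sites) ** 2)))
```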

  19. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    Science.gov (United States)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
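    One way to realize such a variance-based weighting in code is sketched below on synthetic data; the exact weighting function of the report is not reproduced, only the idea of downweighting "ragged" points by a local variance estimated from neighboring differences:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic residual function: nearly flat at low frequency with slight
# upward curvature, plus occasional "ragged" (high-noise) points.
freq = np.linspace(1.0, 100.0, 60)
noise_scale = 0.02 + 0.2 * (rng.random(60) < 0.2)
resid = 2.0 + 1e-4 * freq**2 + rng.normal(0.0, 1.0, 60) * noise_scale

# Estimate a local variance from differences between neighbouring points,
# then weight each point inversely to it.
d = np.diff(resid)
local_var = np.empty_like(resid)
local_var[1:-1] = 0.5 * (d[:-1]**2 + d[1:]**2)
local_var[0], local_var[-1] = d[0]**2, d[-1]**2
weights = 1.0 / (local_var + 1e-12)

# np.polyfit expects w ~ 1/sigma (it minimizes sum((w * residual)**2)).
coeffs = np.polyfit(freq, resid, deg=2, w=np.sqrt(weights))
```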

  20. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  1. Nonlinear models for fitting growth curves of Nellore cows reared in the Amazon Biome

    Directory of Open Access Journals (Sweden)

    Kedma Nayra da Silva Marinho

    2013-09-01

    Full Text Available Growth curves of Nellore cows were estimated by comparing six nonlinear models: Brody, Logistic, two alternatives by Gompertz, Richards and Von Bertalanffy. The models were fitted to weight-age data, from birth to 750 days of age of 29,221 cows, born between 1976 and 2006 in the Brazilian states of Acre, Amapá, Amazonas, Pará, Rondônia, Roraima and Tocantins. The models were fitted by the Gauss-Newton method. The goodness of fit of the models was evaluated by using mean square error, adjusted coefficient of determination, prediction error and mean absolute error. Biological interpretation of parameters was accomplished by plotting estimated weights versus the observed weight means, instantaneous growth rate, absolute maturity rate, relative instantaneous growth rate, inflection point and magnitude of the parameters A (asymptotic weight and K (maturing rate. The Brody and Von Bertalanffy models fitted the weight-age data but the other models did not. The average weight (A and growth rate (K were: 384.6±1.63 kg and 0.0022±0.00002 (Brody and 313.40±0.70 kg and 0.0045±0.00002 (Von Bertalanffy. The Brody model provides better goodness of fit than the Von Bertalanffy model.
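    For concreteness, a Brody fit can be sketched with SciPy. The paper fitted by the Gauss-Newton method; SciPy's default Levenberg-Marquardt solver is used here instead, on synthetic weights with parameters of the same order as those reported:

```python
import numpy as np
from scipy.optimize import curve_fit

# Brody growth model: W(t) = A * (1 - B*exp(-K*t)), with asymptotic
# weight A, integration constant B and maturing rate K.
def brody(t, a, b, k):
    return a * (1.0 - b * np.exp(-k * t))

rng = np.random.default_rng(2)
age = np.linspace(0.0, 750.0, 40)                        # days
weight = brody(age, 384.6, 0.92, 0.0022) + rng.normal(0.0, 5.0, 40)

popt, _ = curve_fit(brody, age, weight, p0=(400.0, 0.9, 0.003))
a_hat, b_hat, k_hat = popt
```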

  2. Numerical generation of boundary-fitted curvilinear coordinate systems for arbitrarily curved surfaces

    International Nuclear Information System (INIS)

    Takagi, T.; Miki, K.; Chen, B.C.J.; Sha, W.T.

    1985-01-01

    A new method is presented for numerically generating boundary-fitted coordinate systems for arbitrarily curved surfaces. The three-dimensional surface has been expressed by functions of two parameters using the geometrical modeling techniques in computer graphics. This leads to new quasi-one- and two-dimensional elliptic partial differential equations for coordinate transformation. Since the equations involve the derivatives of the surface expressions, the grids generated by the equations distribute on the surface depending on its slope and curvature. A computer program GRID-CS based on the method was developed and applied to a surface of the second order, a torus and a surface of a primary containment vessel for a nuclear reactor. These applications confirm that GRID-CS is a convenient and efficient tool for grid generation on arbitrarily curved surfaces.

  3. Inclusive Fitness Maximization:An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive.

  4. Prediction of Pressing Quality for Press-Fit Assembly Based on Press-Fit Curve and Maximum Press-Mounting Force

    Directory of Open Access Journals (Sweden)

    Bo You

    2015-01-01

    Full Text Available In order to predict pressing quality of precision press-fit assembly, press-fit curves and maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA. The analysis was based on a 3D Solidworks model using the real dimensions of the microparts and the subsequent FEA model that was built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structure analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained using the experimental method. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with the maximum deviation being 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and max press-mounting force calculated by FEA can be used for predicting the pressing quality during precision press-fit assembly.

  5. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Genetic algorithm using independent component analysis in x-ray reflectivity curve fitting of periodic layer structures

    International Nuclear Information System (INIS)

    Tiilikainen, J; Bosund, V; Tilli, J-M; Sormunen, J; Mattila, M; Hakkarainen, T; Lipsanen, H

    2007-01-01

    A novel genetic algorithm (GA) utilizing independent component analysis (ICA) was developed for x-ray reflectivity (XRR) curve fitting. EFICA was used to reduce mutual information, or interparameter dependences, during the combinatorial phase. The performance of the new algorithm was studied by fitting trial XRR curves to target curves which were computed using realistic multilayer models. The median convergence properties of conventional GA, GA using principal component analysis and the novel GA were compared. GA using ICA was found to outperform the other methods with problems having 41 parameters or more to be fitted without additional XRR curve calculations. The computational complexity of the conventional methods was linear but the novel method had a quadratic computational complexity due to the applied ICA method which sets a practical limit for the dimensionality of the problem to be solved. However, the novel algorithm had the best capability to extend the fitting analysis based on Parratt's formalism to multiperiodic layer structures

  7. Two Aspects of the Simplex Model: Goodness of Fit to Linear Growth Curve Structures and the Analysis of Mean Trends.

    Science.gov (United States)

    Mandys, Frantisek; Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    Studied the conditions under which the quasi-Markov simplex model fits a linear growth curve covariance structure and determined when the model is rejected. Presents a quasi-Markov simplex model with structured means and gives an example. (SLD)

  8. Absolute Distances to Nearby Type Ia Supernovae via Light Curve Fitting Methods

    Science.gov (United States)

    Vinkó, J.; Ordasi, A.; Szalai, T.; Sárneczky, K.; Bányai, E.; Bíró, I. B.; Borkovits, T.; Hegedüs, T.; Hodosán, G.; Kelemen, J.; Klagyivik, P.; Kriskovics, L.; Kun, E.; Marion, G. H.; Marschalkó, G.; Molnár, L.; Nagy, A. P.; Pál, A.; Silverman, J. M.; Szakáts, R.; Szegedi-Elek, E.; Székely, P.; Szing, A.; Vida, K.; Wheeler, J. C.

    2018-06-01

    We present a comparative study of absolute distances to a sample of very nearby, bright Type Ia supernovae (SNe) derived from high cadence, high signal-to-noise, multi-band photometric data. Our sample consists of four SNe: 2012cg, 2012ht, 2013dy and 2014J. We present new homogeneous, high-cadence photometric data in Johnson–Cousins BVRI and Sloan g′r′i′z′ bands taken from two sites (Piszkesteto and Baja, Hungary), and the light curves are analyzed with publicly available light curve fitters (MLCS2k2, SNooPy2 and SALT2.4). When comparing the best-fit parameters provided by the different codes, it is found that the distance moduli of moderately reddened SNe Ia agree within ≲0.2 mag, and the agreement is even better (≲0.1 mag) for the highest signal-to-noise BVRI data. For the highly reddened SN 2014J the dispersion of the inferred distance moduli is slightly higher. These SN-based distances are in good agreement with the Cepheid distances to their host galaxies. We conclude that the current state-of-the-art light curve fitters for Type Ia SNe can provide consistent absolute distance moduli having less than ∼0.1–0.2 mag uncertainty for nearby SNe. Still, there is room for future improvements to reach the desired ∼0.05 mag accuracy in the absolute distance modulus.

  9. A sigmoidal fit for pressure-volume curves of idiopathic pulmonary fibrosis patients on mechanical ventilation: clinical implications

    Directory of Open Access Journals (Sweden)

    Juliana C. Ferreira

    2011-01-01

    Full Text Available OBJECTIVE: Respiratory pressure-volume curves fitted to exponential equations have been used to assess disease severity and prognosis in spontaneously breathing patients with idiopathic pulmonary fibrosis. Sigmoidal equations have been used to fit pressure-volume curves for mechanically ventilated patients but not for idiopathic pulmonary fibrosis patients. We compared a sigmoidal model and an exponential model to fit pressure-volume curves from mechanically ventilated patients with idiopathic pulmonary fibrosis. METHODS: Six idiopathic pulmonary fibrosis patients and five controls underwent inflation pressure-volume curves using the constant-flow technique during general anesthesia prior to open lung biopsy or thymectomy. We identified the lower and upper inflection points and fit the curves with an exponential equation, V = A - B·e^(-k·P), and a sigmoid equation, V = a + b/(1 + e^(-(P-c)/d)). RESULTS: The mean lower inflection point for idiopathic pulmonary fibrosis patients was significantly higher (10.5 ± 5.7 cm H2O) than that of controls (3.6 ± 2.4 cm H2O). The sigmoidal equation fit the pressure-volume curves of the fibrotic and control patients well, but the exponential equation fit the data well only when points below 50% of the inspiratory capacity were excluded. CONCLUSION: The elevated lower inflection point and the sigmoidal shape of the pressure-volume curves suggest that respiratory system compliance is decreased close to end-expiratory lung volume in idiopathic pulmonary fibrosis patients under general anesthesia and mechanical ventilation. The sigmoidal fit was superior to the exponential fit for inflation pressure-volume curves of anesthetized patients with idiopathic pulmonary fibrosis and could be useful for guiding mechanical ventilation during general anesthesia in this condition.
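    The model comparison can be reproduced in outline with SciPy. The pressure-volume data below are synthetic and sigmoidal by construction, so they only illustrate why the exponential form struggles below the lower inflection point:

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(p, a, b, k):                     # V = A - B*exp(-k*P)
    return a - b * np.exp(-k * p)

def sigmoid(p, a, b, c, d):               # V = a + b/(1 + exp(-(P-c)/d))
    return a + b / (1.0 + np.exp(-(p - c) / d))

p = np.linspace(0.0, 40.0, 41)            # airway pressure, cm H2O (synthetic)
rng = np.random.default_rng(3)
v = sigmoid(p, 0.05, 1.4, 15.0, 5.0) + rng.normal(0.0, 0.01, 41)   # volume, L

ps, _ = curve_fit(sigmoid, p, v, p0=(0.0, 1.0, 12.0, 4.0))
pe, _ = curve_fit(expo, p, v, p0=(1.5, 1.4, 0.1), maxfev=5000)

sse_sig = float(np.sum((sigmoid(p, *ps) - v) ** 2))
sse_exp = float(np.sum((expo(p, *pe) - v) ** 2))
```

    On data with a pronounced lower inflection point the sigmoid's sum of squared errors comes out far smaller, in line with the abstract's conclusion.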

  10. A person-environment fit approach to volunteerism : Volunteer personality fit and culture fit as predictors of affective outcomes

    NARCIS (Netherlands)

    Van Vianen, Annelies E. M.; Nijstad, Bernard A.; Voskuijl, Olga F.

    2008-01-01

    This study employed a person-environment (P-E) fit approach to explaining volunteer satisfaction, affective commitment, and turnover intentions. It was hypothesized that personality fit would explain additional variance in volunteer affective outcomes above and beyond motives to volunteer. This

  11. A new method for curve fitting to the data with low statistics not using the χ²-method

    International Nuclear Information System (INIS)

    Awaya, T.

    1979-01-01

    A new method which does not use the χ²-fitting method is investigated in order to fit the theoretical curve to data with low statistics. The method is compared with the usual and modified χ²-fitting ones. The analyses are done for data which are generated by computers. It is concluded that the new method gives good results in all the cases. (Auth.)
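    The abstract does not spell out the new method, but the standard alternative to χ²-fitting at low counts is to maximize the Poisson likelihood directly; the sketch below illustrates that general idea (not necessarily the author's specific method) on a computer-generated low-count histogram:

```python
import numpy as np
from scipy.optimize import minimize

# Low-count histogram: chi-square fitting is biased when bins hold only a
# few events; minimizing the Poisson negative log-likelihood avoids this.
rng = np.random.default_rng(4)
x = np.arange(20)
counts = rng.poisson(5.0 * np.exp(-x / 6.0))   # few counts per bin

def neg_log_like(params):
    amp, tau = params
    if amp <= 0.0 or tau <= 0.0:
        return np.inf
    mu = np.clip(amp * np.exp(-x / tau), 1e-300, None)
    # Poisson log-likelihood up to a constant: sum(n*log(mu) - mu)
    return float(np.sum(mu - counts * np.log(mu)))

res = minimize(neg_log_like, x0=(3.0, 4.0), method='Nelder-Mead')
amp_hat, tau_hat = res.x
```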

  12. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    International Nuclear Information System (INIS)

    Winkler, A W; Zagar, B G

    2013-01-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives. (paper)

  13. From curve fitting to machine learning an illustrative guide to scientific data analysis and computational intelligence

    CERN Document Server

    Zielesny, Achim

    2016-01-01

    This successful book provides in its second edition an interactive and illustrative guide from two-dimensional curve fitting to multidimensional clustering and machine learning with neural networks or support vector machines. Along the way topics like mathematical optimization or evolutionary algorithms are touched. All concepts and ideas are outlined in a clear cut manner with graphically depicted plausibility arguments and a little elementary mathematics. The major topics are extensively outlined with exploratory examples and applications. The primary goal is to be as illustrative as possible without hiding problems and pitfalls but to address them. The character of an illustrative cookbook is complemented with specific sections that address more fundamental questions like the relation between machine learning and human intelligence. All topics are completely demonstrated with the computing platform Mathematica and the Computational Intelligence Packages (CIP), a high-level function library developed with M...

  14. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    Science.gov (United States)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.

  15. Fitting methods for constructing energy-dependent efficiency curves and their application to ionization chamber measurements

    International Nuclear Information System (INIS)

    Svec, A.; Schrader, H.

    2002-01-01

    An ionization chamber without and with an iron liner (absorber) was calibrated by a set of radionuclide activity standards of the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber in a defined measurement geometry without and with the liner, and radionuclide efficiencies were calculated. The curves were fitted programmatically with an analytical efficiency function, using a nonlinear regression algorithm in Microsoft (MS) Excel. Efficiencies from bremsstrahlung of pure beta-particle emitters were calibrated to a 10% accuracy level. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. The method yields differences between experimental and calculated radionuclide efficiencies of the order of a few percent for most photon-emitting radionuclides.

  16. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison will be made of model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model which will successfully fit a wide range of assay data, and which can be run on a mini-computer is described. The latter sophisticated model also provides estimates of binding site concentrations and the values of the respective equilibrium constants present: the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de
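    As a simpler stand-in for the multiple binding-site model (whose site concentrations and equilibrium constants are fitted in the paper), the empirical four-parameter logistic commonly used for RIA standard curves can be fitted the same way; the data points below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (4PL), a common empirical RIA standard-curve form,
# shown as a stand-in for the multi-binding-site model described above.
def four_pl(conc, top, bottom, ec50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # ng/mL, illustrative
resp = np.array([95.0, 92.0, 80.0, 55.0, 28.0, 12.0, 6.0])  # % bound, illustrative

popt, _ = curve_fit(four_pl, conc, resp, p0=(100.0, 5.0, 3.0, 1.0))
top, bottom, ec50, slope = popt
```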

  17. GERMINATOR: a software package for high-throughput scoring and curve fitting of Arabidopsis seed germination.

    Science.gov (United States)

    Joosen, Ronny V L; Kodde, Jan; Willems, Leo A J; Ligterink, Wilco; van der Plas, Linus H W; Hilhorst, Henk W M

    2010-04-01

    Over the past few decades seed physiology research has contributed to many important scientific discoveries and has provided valuable tools for the production of high quality seeds. An important instrument for this type of research is the accurate quantification of germination; however gathering cumulative germination data is a very laborious task that is often prohibitive to the execution of large experiments. In this paper we present the germinator package: a simple, highly cost-efficient and flexible procedure for high-throughput automatic scoring and evaluation of germination that can be implemented without the use of complex robotics. The germinator package contains three modules: (i) design of experimental setup with various options to replicate and randomize samples; (ii) automatic scoring of germination based on the color contrast between the protruding radicle and seed coat on a single image; and (iii) curve fitting of cumulative germination data and the extraction, recap and visualization of the various germination parameters. The curve-fitting module enables analysis of general cumulative germination data and can be used for all plant species. We show that the automatic scoring system works for Arabidopsis thaliana and Brassica spp. seeds, but is likely to be applicable to other species, as well. In this paper we show the accuracy, reproducibility and flexibility of the germinator package. We have successfully applied it to evaluate natural variation for salt tolerance in a large population of recombinant inbred lines and were able to identify several quantitative trait loci for salt tolerance. Germinator is a low-cost package that allows the monitoring of several thousands of germination tests, several times a day by a single person.
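    The curve-fitting step can be approximated with a Hill-type curve for cumulative germination, a common choice for such data (the exact model used by the germinator package is not assumed here; the germination counts are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill-type cumulative germination curve: G(t) = Gmax * t**h / (t50**h + t**h),
# with maximum germination Gmax, time to 50% germination t50, and shape h.
def hill(t, g_max, t50, h):
    return g_max * t**h / (t50**h + t**h)

hours = np.array([12.0, 24, 36, 48, 60, 72, 96, 120])
germ = np.array([2.0, 15, 45, 70, 82, 88, 92, 93])     # % germinated, illustrative

popt, _ = curve_fit(hill, hours, germ, p0=(90.0, 40.0, 4.0),
                    bounds=(0.0, [200.0, 200.0, 20.0]))
g_max, t50, h = popt
```

    The fitted `t50` is exactly the kind of germination parameter the package extracts and recaps per sample.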

  18. Improved liver R2* mapping by pixel-wise curve fitting with adaptive neighborhood regularization.

    Science.gov (United States)

    Wang, Changqing; Zhang, Xinyuan; Liu, Xiaoyun; He, Taigang; Chen, Wufan; Feng, Qianjin; Feng, Yanqiu

    2018-08-01

    To improve liver R2* mapping by incorporating adaptive neighborhood regularization into pixel-wise curve fitting. Magnetic resonance imaging R2* mapping remains challenging because of the serial images with low signal-to-noise ratio. In this study, we proposed to exploit the neighboring pixels as regularization terms and adaptively determine the regularization parameters according to the interpixel signal similarity. The proposed algorithm, called the pixel-wise curve fitting with adaptive neighborhood regularization (PCANR), was compared with the conventional nonlinear least squares (NLS) and nonlocal means filter-based NLS algorithms on simulated, phantom, and in vivo data. Visually, the PCANR algorithm generates R2* maps with significantly reduced noise and well-preserved tiny structures. Quantitatively, the PCANR algorithm produces R2* maps with lower root mean square errors at varying R2* values and signal-to-noise-ratio levels compared with the NLS and nonlocal means filter-based NLS algorithms. For the high R2* values under low signal-to-noise-ratio levels, the PCANR algorithm outperforms the NLS and nonlocal means filter-based NLS algorithms in the accuracy and precision, in terms of mean and standard deviation of R2* measurements in selected region of interests, respectively. The PCANR algorithm can reduce the effect of noise on liver R2* mapping, and the improved measurement precision will benefit the assessment of hepatic iron in clinical practice. Magn Reson Med 80:792-801, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
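    A toy version of the idea, pixel-wise mono-exponential R2* fitting with a neighborhood penalty, can be written as follows. The neighbour estimates, similarity weights, and regularization strength below are all hypothetical, and the paper's adaptive weight selection is not reproduced:

```python
import numpy as np
from scipy.optimize import minimize

# Mono-exponential decay S(TE) = S0 * exp(-R2s*TE), fitted per pixel with a
# toy neighborhood penalty pulling R2* toward neighbouring estimates.
te = np.array([1.0, 2.3, 3.6, 4.9, 6.2, 7.5, 8.8, 10.1]) / 1000.0   # echo times, s

rng = np.random.default_rng(5)
signal = 1000.0 * np.exp(-300.0 * te) + rng.normal(0.0, 20.0, te.size)

neighbor_r2s = np.array([290.0, 310.0, 305.0])   # hypothetical neighbour fits, 1/s
weights = np.array([1.0, 0.8, 0.9])              # hypothetical similarity weights
lam = 10.0                                        # regularization strength

def objective(params):
    s0, r2s = params
    data_term = np.sum((s0 * np.exp(-r2s * te) - signal) ** 2)
    reg_term = lam * np.sum(weights * (r2s - neighbor_r2s) ** 2)
    return float(data_term + reg_term)

res = minimize(objective, x0=(900.0, 250.0), method='Nelder-Mead')
s0_hat, r2s_hat = res.x
```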

  19. SiFTO: An Empirical Method for Fitting SN Ia Light Curves

    Science.gov (United States)

    Conley, A.; Sullivan, M.; Hsiao, E. Y.; Guy, J.; Astier, P.; Balam, D.; Balland, C.; Basa, S.; Carlberg, R. G.; Fouchez, D.; Hardin, D.; Howell, D. A.; Hook, I. M.; Pain, R.; Perrett, K.; Pritchet, C. J.; Regnault, N.

    2008-07-01

    We present SiFTO, a new empirical method for modeling Type Ia supernova (SN Ia) light curves by manipulating a spectral template. We make use of high-redshift SN data when training the model, allowing us to extend it bluer than rest-frame U. This increases the utility of our high-redshift SN observations by allowing us to use more of the available data. We find that when the shape of the light curve is described using a stretch prescription, applying the same stretch at all wavelengths is not an adequate description. SiFTO therefore uses a generalization of stretch which applies different stretch factors as a function of both the wavelength of the observed filter and the stretch in the rest-frame B band. We compare SiFTO to other published light-curve models by applying them to the same set of SN photometry, and demonstrate that SiFTO and SALT2 perform better than the alternatives when judged by the scatter around the best-fit luminosity distance relationship. We further demonstrate that when SiFTO and SALT2 are trained on the same data set the cosmological results agree. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique (CNRS) of France, and the University of Hawaii. This work is based in part on data products produced at the Canadian Astronomy Data Centre as part of the Canada-France-Hawaii Telescope Legacy Survey, a collaborative project of NRC and CNRS.

  20. Gamma-ray Burst X-ray Flares Light Curve Fitting

    Science.gov (United States)

    Aubain, Jonisha

    2018-01-01

    Gamma Ray Bursts (GRBs) are the most luminous explosions in the Universe. These electromagnetic explosions produce jets, observed as a short burst of prompt gamma-ray emission followed by a broadband afterglow. Sharp increases of flux in the X-ray light curves, known as flares, occur in about 50% of the afterglows. In this study, we characterized all of the X-ray afterglows detected by the Swift X-ray Telescope (XRT), whether with flares or without. We fit flares to the Norris function (Norris et al. 2005) and power laws with breaks where necessary (Racusin et al. 2009). After fitting the Norris function and power laws, we search for the residual pattern detected in prompt GRB pulses (Hakkila et al. 2014, 2015, 2017), which may indicate a common signature of shock physics. If we find the same signature in flares and prompt pulses, it provides insight into what causes them, as well as how these flares are produced.
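    The Norris et al. (2005) pulse shape referenced above has a closed form whose peak location and amplitude follow directly from its parameters; a short sketch (times and parameters are illustrative):

```python
import numpy as np

# Norris et al. (2005) pulse shape:
#   I(t) = A * lam * exp(-tau1/(t - ts) - (t - ts)/tau2)   for t > ts,
# with lam = exp(2*sqrt(tau1/tau2)); the peak lies at ts + sqrt(tau1*tau2)
# and has amplitude A.
def norris(t, amp, ts, tau1, tau2):
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    after = t > ts
    lam = np.exp(2.0 * np.sqrt(tau1 / tau2))
    dt = t[after] - ts
    out[after] = amp * lam * np.exp(-tau1 / dt - dt / tau2)
    return out

t = np.linspace(0.0, 500.0, 1001)          # time since trigger, s (illustrative)
flux = norris(t, amp=1.0, ts=50.0, tau1=100.0, tau2=40.0)

t_peak_expected = 50.0 + np.sqrt(100.0 * 40.0)
t_peak_measured = t[np.argmax(flux)]
```

    Wrapping `norris` in `scipy.optimize.curve_fit` against observed count rates is the natural next step for flare fitting of the kind described.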

  1. Reference Curves for Field Tests of Musculoskeletal Fitness in U.S. Children and Adolescents: The 2012 NHANES National Youth Fitness Survey.

    Science.gov (United States)

    Laurson, Kelly R; Saint-Maurice, Pedro F; Welk, Gregory J; Eisenmann, Joey C

    2017-08-01

    Laurson, KR, Saint-Maurice, PF, Welk, GJ, and Eisenmann, JC. Reference curves for field tests of musculoskeletal fitness in U.S. children and adolescents: The 2012 NHANES National Youth Fitness Survey. J Strength Cond Res 31(8): 2075-2082, 2017-The purpose of the study was to describe current levels of musculoskeletal fitness (MSF) in U.S. youth by creating nationally representative age-specific and sex-specific growth curves for handgrip strength (including relative and allometrically scaled handgrip), modified pull-ups, and the plank test. Participants in the National Youth Fitness Survey (n = 1,453) were tested on MSF, aerobic capacity (via submaximal treadmill test), and body composition (body mass index [BMI], waist circumference, and skinfolds). Using LMS regression, age-specific and sex-specific smoothed percentile curves of MSF were created and existing percentiles were used to assign age-specific and sex-specific z-scores for aerobic capacity and body composition. Correlation matrices were created to assess the relationships between z-scores on MSF, aerobic capacity, and body composition. At younger ages (3-10 years), boys scored higher than girls for handgrip strength and modified pull-ups, but not for the plank. By ages 13-15, differences between the boys and girls curves were more pronounced, with boys scoring higher on all tests. Correlations between tests of MSF and aerobic capacity were positive and low-to-moderate in strength. Correlations between tests of MSF and body composition were negative, excluding absolute handgrip strength, which was inversely related to other MSF tests and aerobic capacity but positively associated with body composition. The growth curves herein can be used as normative reference values or a starting point for creating health-related criterion reference standards for these tests. Comparisons with prior national surveys of physical fitness indicate that some components of MSF have likely decreased in the United States over

  2. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    Science.gov (United States)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of Association of American Railroads (AAR) locomotive wheel-sets, the press-fit curve is the most important basis for judging the reliability of wheel-set assembly. In the past, most production enterprises relied mainly on manual inspection to judge assembly quality, and cases of misjudgment occurred. For this reason, the relevant standard is studied, and automatic judgment of the press-fit curve is analysed and designed, providing guidance for locomotive wheel-set production based on the AAR standard.

  3. Methods for extracting dose response curves from radiation therapy data. I. A unified approach

    International Nuclear Information System (INIS)

    Herring, D.F.

    1980-01-01

This paper discusses an approach to fitting models to radiation therapy data in order to extract dose response curves for tumor local control and normal tissue damage. The approach is based on the method of maximum likelihood and is illustrated by several examples. A general linear logistic equation which leads to the Ellis nominal standard dose (NSD) equation is discussed; the fit of this equation to experimental data for mouse foot skin reactions produced by fractionated irradiation is described. A logistic equation based on the concept that normal tissue reactions are associated with the surviving fraction of cells is also discussed, and the fit of this equation to the same set of mouse foot skin reaction data is also described. These two examples illustrate the importance of choosing a model based on underlying mechanisms when one seeks to attach biological significance to a model's parameters.
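
The maximum-likelihood fitting described above can be illustrated with a simple linear-logistic dose-response model, p(dose) = 1/(1 + exp(-(a + b·ln dose))). The sketch below uses synthetic response data and a crude grid search in place of a proper optimizer; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic local-control data: 50 subjects at each of four dose levels
doses = np.repeat([10.0, 20.0, 40.0, 80.0], 50)
a_true, b_true = -8.0, 2.5
p_true = 1.0 / (1.0 + np.exp(-(a_true + b_true * np.log(doses))))
response = rng.random(doses.size) < p_true      # True = local control

def neg_log_lik(a, b):
    """Negative log-likelihood of the linear-logistic dose-response model."""
    p = 1.0 / (1.0 + np.exp(-(a + b * np.log(doses))))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)          # guard the logs
    return -np.where(response, np.log(p), np.log(1.0 - p)).sum()

# crude grid search for the maximum-likelihood estimate
grid = [(neg_log_lik(a, b), a, b)
        for a in np.linspace(-15.0, -2.0, 53)
        for b in np.linspace(0.5, 5.0, 46)]
_, a_hat, b_hat = min(grid)
```

In practice one would maximize the same likelihood with a numerical optimizer; the grid search just keeps the sketch dependency-free.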

  4. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters. The fitting procedure weights both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO₃ can have an accuracy of 0.2% in 1000 s. 5 figures
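
The idea of treating the standard masses as additional fit parameters can be sketched as a chi-square objective in which response residuals and mass residuals are each weighted by their own errors. This is an illustrative linear calibration with made-up numbers, not the actual VA02A setup:

```python
import numpy as np

m_meas = np.array([0.1, 0.25, 0.5, 0.75, 1.0])   # mg, measured standard masses
sig_m = 0.002 * m_meas                            # 0.2% mass uncertainty
y = 12.0 + 95.0 * m_meas                          # synthetic detector response
sig_y = np.full_like(y, 0.5)                      # system (response) error

def chi2(params):
    """params = [c0, c1, m_1..m_n]: calibration line plus the 'true' masses.

    Both the response residuals and the deviations of the fitted masses
    from their measured values contribute, each weighted consistently.
    """
    c0, c1 = params[0], params[1]
    m = np.asarray(params[2:])
    r_y = (y - (c0 + c1 * m)) / sig_y      # response residuals
    r_m = (m - m_meas) / sig_m             # mass residuals
    return float(np.sum(r_y ** 2) + np.sum(r_m ** 2))

truth = [12.0, 95.0, *m_meas]              # parameters that generated y
```

Minimizing chi2 over all n+2 parameters simultaneously is what the subroutine-based program does; here only the objective is shown.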

  5. An Empirical Fitting Method for Type Ia Supernova Light Curves: A Case Study of SN 2011fe

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, WeiKang; Filippenko, Alexei V., E-mail: zwk@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)

    2017-03-20

    We present a new empirical fitting method for the optical light curves of Type Ia supernovae (SNe Ia). We find that a variant broken-power-law function provides a good fit, with the simple assumption that the optical emission is approximately the blackbody emission of the expanding fireball. This function is mathematically analytic and is derived directly from the photospheric velocity evolution. When deriving the function, we assume that both the blackbody temperature and photospheric velocity are constant, but the final function is able to accommodate these changes during the fitting procedure. Applying it to the case study of SN 2011fe gives a surprisingly good fit that can describe the light curves from the first-light time to a few weeks after peak brightness, as well as over a large range of fluxes (∼5 mag, and even ∼7 mag in the g band). Since SNe Ia share similar light-curve shapes, this fitting method has the potential to fit most other SNe Ia and characterize their properties in large statistical samples such as those already gathered and in the near future as new facilities become available.
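
A generic smoothly broken power law of the kind referred to above (not necessarily the authors' exact variant) can be written so that the flux rises as t^α₁ well before the break time and falls as t^α₂ well after it:

```python
import numpy as np

def broken_power_law(t, A, t_b, alpha1, alpha2, s):
    """Smoothly broken power law (s > 0 controls break sharpness).

    Behaves as A*(t/t_b)**alpha1 for t << t_b and as
    A*(t/t_b)**alpha2 for t >> t_b, provided alpha1 > alpha2.
    """
    x = t / t_b
    return A * (x ** (-s * alpha1) + x ** (-s * alpha2)) ** (-1.0 / s)

def log_slope(t, *params):
    """Numerical logarithmic slope d(ln f)/d(ln t)."""
    f1 = broken_power_law(t, *params)
    f2 = broken_power_law(t * 1.001, *params)
    return np.log(f2 / f1) / np.log(1.001)

# e.g. a fireball-like t^2 rise turning over into a t^-2 decline at t_b = 10
params = (1.0, 10.0, 2.0, -2.0, 1.0)
```

Fitting this function to photometry means optimizing (A, t_b, α₁, α₂, s) against the observed fluxes; the asymptotic slopes make the parameters easy to interpret.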

  6. A Bayesian Approach to Multistage Fitting of the Variation of the Skeletal Age Features

    Directory of Open Access Journals (Sweden)

    Dong Hua

    2009-01-01

Accurate assessment of skeletal maturity is important clinically. Skeletal age assessment is usually based on features encoded in ossification centers. It is therefore critical to design a mechanism that captures as many characteristics of these features as possible. We have observed that, for a given feature, there exist stages of skeletal age such that the variation pattern of the feature differs between stages. Based on this observation, we propose a Bayesian cut fitting to describe features in response to the skeletal age. With our approach, appropriate positions for stage separation are determined automatically by a Bayesian approach, and a model is used to fit the variation of a feature within each stage. Our experimental results show that the proposed method surpasses traditional fitting using only one line or one curve, not only in the efficiency and accuracy of fitting but also in global and local feature characterization.

  7. Fitting fatigue test data with a novel S-N curve using frequentist and Bayesian inference

    NARCIS (Netherlands)

    Leonetti, D.; Maljaars, J.; Snijder, H.H.B.

    2017-01-01

    In design against fatigue, a lower bound stress range vs. endurance curve (S-N curve) is employed to characterize fatigue resistance of plain material and structural details. With respect to the inherent variability of the fatigue life, the S-N curve is related to a certain probability of

  8. Fitting the flow curve of a plastically deformed silicon steel for the prediction of magnetic properties

    International Nuclear Information System (INIS)

    Sablik, M.J.; Landgraf, F.J.G.; Magnabosco, R.; Fukuhara, M.; Campos, M.F. de; Machado, R.; Missell, F.P.

    2006-01-01

We report measurements and modelling of magnetic effects due to plastic deformation in 2.2% Si steel, emphasizing new tensile deformation data. The modelling approach is to take the Ludwik law for the strain-hardening stress and use it to compute the dislocation density, which is then used in the computation of magnetic hysteresis. A nonlinear extrapolation is used across the discontinuous yield region to obtain the value of stress at the yield point that is used in fitting Ludwik's law to the mechanical data. The computed magnetic hysteresis exhibits sharp shearing of the loops at small deformation, in agreement with experimental behavior. Magnetic hysteresis loss is shown to follow a Ludwik-like dependence on the residual strain, but with a smaller Ludwik exponent than applies for the mechanical behavior.
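
Ludwik's law writes the flow stress as σ = σ_y + K·ε_p^n. With the yield stress held fixed, the strain-hardening coefficient K and exponent n can be fitted linearly in log space. A sketch on synthetic numbers (not the paper's 2.2% Si steel data):

```python
import numpy as np

# synthetic true-stress / plastic-strain data obeying Ludwik's law
sigma_y, K_true, n_true = 250.0, 530.0, 0.26        # MPa, MPa, dimensionless
eps = np.linspace(0.01, 0.2, 25)                    # plastic strain
sigma = sigma_y + K_true * eps ** n_true

# with sigma_y fixed, log(sigma - sigma_y) = log K + n log eps is linear
X = np.column_stack([np.ones_like(eps), np.log(eps)])
coef, *_ = np.linalg.lstsq(X, np.log(sigma - sigma_y), rcond=None)
K_fit, n_fit = float(np.exp(coef[0])), float(coef[1])
```

When σ_y itself must be determined (as across a discontinuous yield region), the problem becomes nonlinear in σ_y and is typically solved by iterating this linear fit over candidate yield stresses.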

  9. Validation of curve-fitting method for blood retention of 99mTc-GSA. Comparison with blood sampling method

    International Nuclear Information System (INIS)

    Ha-Kawa, Sang Kil; Suga, Yutaka; Kouda, Katsuyasu; Ikeda, Koshi; Tanaka, Yoshimasa

    1997-01-01

We investigated a curve-fitting method for the rate of blood retention of 99mTc-galactosyl serum albumin (GSA) as a substitute for the blood sampling method. Seven healthy volunteers and 27 patients with liver disease underwent 99mTc-GSA scanning. After normalization of the y-intercept to 100 percent, a biexponential regression curve fitted to the precordial time-activity curve provided the percent injected dose (%ID) of 99mTc-GSA in the blood without blood sampling. The discrepancy between %ID obtained by the curve-fitting method and that by multiple blood samples was minimal in normal volunteers: 3.1±2.1% (mean±standard deviation, n=77 samplings). A slightly greater discrepancy was observed in patients with liver disease (7.5±6.1%, n=135 samplings). The %ID at 15 min after injection obtained from the fitted curve was significantly greater in patients with liver cirrhosis than in the controls (53.2±11.6%, n=13 vs. 31.9±2.8%, n=7). The %ID of 99mTc-GSA also correlated with the plasma retention rate for indocyanine green (r=-0.869). These results indicate that the curve-fitting method accurately estimates the blood retention of 99mTc-GSA and could be a substitute for the blood sampling method. (author)
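
The normalization step described above, scaling the fitted biexponential so its extrapolated y-intercept equals 100 %ID, can be sketched as follows, with made-up clearance parameters rather than fitted ones:

```python
import numpy as np

def percent_id(t, a1, lam1, a2, lam2):
    """Biexponential blood time-activity curve, normalized so that
    the extrapolated y-intercept (t = 0) equals 100 %ID.

    a1, a2 are the fitted component amplitudes; lam1, lam2 are the
    clearance rate constants (per minute). All values here are invented.
    """
    scale = 100.0 / (a1 + a2)
    return scale * (a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t))

# %ID remaining in blood at 15 min for illustrative fit parameters
id_15 = percent_id(15.0, a1=6.0, lam1=0.2, a2=4.0, lam2=0.01)
```

The %ID at 15 min then comes directly from the normalized curve, which is exactly the quantity compared against blood sampling in the study.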

  10. Fitness

    Science.gov (United States)

Want to look and feel your best? What is physical fitness? Physical fitness means you can do everyday ...

  11. Unified approach for estimating the probabilistic design S-N curves of three commonly used fatigue stress-life models

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Wang Jinnuo; Gao Qing

    2001-01-01

A unified approach, referred to as the general maximum likelihood method, is presented for estimating probabilistic design S-N curves and their confidence bounds for the three commonly used fatigue stress-life models, namely the three-parameter, Langer, and Basquin models. The curves are described by a general form of the mean and standard-deviation S-N curves of the logarithm of fatigue life. Unlike existing methods, i.e., the conventional method and the classical maximum likelihood method, the present approach considers the statistical characteristics of the whole test data set. The parameters of the mean curve are first estimated by the least squares method; the parameters of the standard-deviation curve are then evaluated by a mathematical programming method so as to agree with the maximum likelihood principle. Fit quality of the curves is assessed by the fitted relation coefficient, the total fitted standard error, and the confidence bounds. Application to the virtual stress amplitude-crack initiation life data of a nuclear engineering material, Chinese 1Cr18Ni9Ti stainless steel pipe-weld metal, has indicated the validity of the approach for S-N data in which both S and N show the character of random variables. Applications to the two sets of S-N data for Chinese 45 carbon steel notched specimens (k_t = 2.0) have indicated the validity of the present approach for test results obtained from group fatigue tests and from maximum likelihood fatigue tests, respectively. In these applications, it was revealed that in general the fit is best for the three-parameter model, slightly inferior for the Langer relation, and poor for the Basquin equation. Relative to the existing methods, the present approach gives a better fit. In addition, the possible non-conservative predictions of the existing methods, which result from the influence of local statistical characteristics of the data, are also overcome by the present approach.
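
For the Basquin model the mean curve is linear in log-log coordinates, so the least-squares step for the mean curve, together with a constant standard-deviation estimate from the residuals, reduces to ordinary linear regression. A sketch on synthetic fatigue data (not the steels studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic fatigue data: five specimens at each of four stress amplitudes
S = np.repeat([400.0, 350.0, 300.0, 250.0], 5)                  # MPa
logN = 28.0 - 9.0 * np.log10(S) + rng.normal(0.0, 0.08, S.size)

# mean curve: log N = c0 + c1 log S  (Basquin law in log-log form)
X = np.column_stack([np.ones_like(S), np.log10(S)])
coef, *_ = np.linalg.lstsq(X, logN, rcond=None)

# constant standard-deviation curve estimated from the residuals
resid = logN - X @ coef
sigma = resid.std(ddof=2)
```

The paper's method generalizes this by letting the standard deviation itself vary with S and by choosing its parameters to satisfy the maximum likelihood principle rather than taking a single pooled residual estimate.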

  12. Fitting-free curve resolution of spectroscopic data: Chemometric and physical chemical viewpoints.

    Science.gov (United States)

    Rajkó, Róbert; Beyramysoltan, Samira; Abdollahi, Hamid; Eőri, János; Pongor, Gábor

    2015-08-12

    In this paper the authors have investigated spectroscopic data analysis according to a recent development, i.e. the Direct Inversion in the Spectral Subspace (DISS) procedure. DISS is a supervised curve resolution technique, consequently it can be used once the spectra of the potential pure components are known and the experimental spectrum of a chemical mixture is also presented; hence the task is to determine the composition of the unknown chemical mixture. In this paper, the original algorithm of DISS is re-examined and some further critical reasoning and essential developments are provided, including the detailed explanations of the constrained minimization task based on Lagrange multiplier regularization approach. The main conclusion is that the regularization used for DISS is needed because of the possible shifted spectra effect instead of collinearity; and this new property, i.e. treating the mild shifted spectra effect, of DISS can be considered as its main scientific advantage. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

B-spline functions are widely used in many industrial applications such as computer graphic representation, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, demands have arisen, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the most efficient computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving an ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fit any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
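
The first (coarse-knot) step of the two-step strategy can be illustrated with a recursive bisection that keeps a segment whenever a simple fit meets a preset error tolerance. For brevity this sketch fits straight-line segments rather than full B-spline pieces:

```python
import numpy as np

def coarse_knots(x, y, tol):
    """Step 1 sketch: bisect the data until each segment can be fitted
    (here by a straight line) within the allowable error `tol`.
    Returns the sorted indices of the coarse knots."""
    def fit_err(i, j):
        X = np.column_stack([np.ones(j - i + 1), x[i:j + 1]])
        c, *_ = np.linalg.lstsq(X, y[i:j + 1], rcond=None)
        return np.max(np.abs(y[i:j + 1] - X @ c))

    knots, stack = {0, len(x) - 1}, [(0, len(x) - 1)]
    while stack:
        i, j = stack.pop()
        if j - i > 2 and fit_err(i, j) > tol:
            m = (i + j) // 2              # split at the midpoint
            knots.add(m)
            stack += [(i, m), (m, j)]
    return sorted(knots)

x = np.linspace(-1.0, 1.0, 41)
knots = coarse_knots(x, np.abs(x), tol=0.05)   # cusp at x = 0 forces a knot
```

The second step of the paper's method would then move these coarse knots (and pick their continuity levels) by nonlinear least squares before the final linear solve for the control points.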

  14. Generalized Wigner functions in curved spaces: A new approach

    International Nuclear Information System (INIS)

    Kandrup, H.E.

    1988-01-01

It is well known that, given a quantum field in Minkowski space, one can define Wigner functions f_W^N(x_1,p_1,...,x_N,p_N) which (a) are convenient to analyze since, unlike the field itself, they are c-number quantities and (b) can be interpreted in a limited sense as ''quantum distribution functions.'' Recently, Winter and Calzetta, Habib and Hu have shown one way in which these flat-space Wigner functions can be generalized to a curved-space setting, deriving thereby approximate kinetic equations which make sense ''quasilocally'' for ''short-wavelength modes.'' This paper suggests a completely orthogonal approach for defining curved-space Wigner functions which generalizes instead an object such as the Fourier-transformed f_W^1(k,p), which is effectively a two-point function viewed in terms of the ''natural'' creation and annihilation operators a†(p-(1/2)k) and a(p+(1/2)k). The approach suggested here lacks the precise phase-space interpretation implicit in the approach of Winter or Calzetta, Habib, and Hu, but it is useful in that (a) it is geared to handle any ''natural'' mode decomposition, so that (b) it can facilitate exact calculations at least in certain limits, such as for a source-free linear field in a static spacetime.

A new approach of analysing GRB light curves

    International Nuclear Information System (INIS)

    Varga, B.; Horvath, I.

    2005-01-01

We estimated the T_xx quantiles of the cumulative GRB light curves using our recalculated background. The basic information of the light curves was extracted by multivariate statistical methods. The possible classes of the light curves are also briefly discussed.

  16. An Approach of Estimating Individual Growth Curves for Young Thoroughbred Horses Based on Their Birthdays

    Science.gov (United States)

    ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro

    2014-01-01

We propose an approach of estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with considerations of the seasonal compensatory growth that is a typical characteristic of seasonal breeding animals. The compensatory growth patterns appear during only the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on the equations, a parameter denoting the birthday information was added for the modeling of the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of predicted error difference and Akaike Information Criterion showed that the individual growth curves using birthday information fit the body weight and withers height data better than those not using them. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds in compensatory growth periods. PMID:25013356

  17. An interactive graphics program to retrieve, display, compare, manipulate, curve fit, difference and cross plot wind tunnel data

    Science.gov (United States)

    Elliott, R. D.; Werner, N. M.; Baker, W. M.

    1975-01-01

The Aerodynamic Data Analysis and Integration System (ADAIS) is described: a highly interactive computer graphics program capable of manipulating large quantities of data such that addressable elements of a data base can be called up for graphic display, compared, curve fit, stored, retrieved, differenced, etc. The general nature of the system is evidenced by the fact that limited usage has already occurred with data bases consisting of thermodynamic, basic loads, and flight dynamics data. Productivity five times that of conventional manual methods of wind tunnel data analysis is routinely achieved with ADAIS. In wind tunnel data analysis, data from one or more runs of a particular test may be called up and displayed along with data from one or more runs of a different test. Curves may be faired through the data points by any of four methods, including cubic spline and least squares polynomial fit up to seventh order.

  18. Fitness analysis method for magnesium in drinking water with atomic absorption using quadratic curve calibration

    Directory of Open Access Journals (Sweden)

    Esteban Pérez-López

    2014-11-01

Because of the importance of quantitative chemical analysis in research, quality control, sales of services, and other areas of interest, and because some instrumental analysis methods are limited to quantification with a linear calibration curve, sometimes by the short linear dynamic range of the analyte and sometimes by the technique itself, there is a need to investigate the suitability of quadratic curves for analytical quantification, seeking to demonstrate that they are a valid calculation model for instrumental chemical analysis. As a test case, a method based on atomic absorption spectroscopy was used: the determination of magnesium in a sample of drinking water from the Tacares sector of northern Grecia, employing a nonlinear calibration curve with specifically quadratic behavior, which was compared with the results obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for the determination in question, with full confidence, since the concentrations are very similar and, under the hypothesis tests used, can be considered equal.
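
The quadratic-calibration idea can be sketched as follows: fit a second-order polynomial to the standards, then recover an unknown concentration by inverting the quadratic and keeping the physically meaningful root. The absorbances below are synthetic, not the paper's measurements:

```python
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # mg/L Mg standards
absb = 0.30 * conc - 0.02 * conc ** 2             # synthetic curved response

# quadratic calibration: absorbance = c2*conc^2 + c1*conc + c0
c2, c1, c0 = np.polyfit(conc, absb, 2)

def predict_conc(a):
    """Invert the quadratic calibration for absorbance `a`,
    keeping the smallest non-negative real root (the one inside
    the calibrated range, not the spurious high branch)."""
    roots = np.roots([c2, c1, c0 - a])
    real = roots[np.isreal(roots)].real
    return float(min(r for r in real if r >= 0))
```

For an absorbance of 0.405 (generated at 1.5 mg/L), the inversion returns the original concentration; the second root of the quadratic lies far above the calibrated range and is discarded.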

  19. Fitted curve parameters for the efficiency of a coaxial HPGe Detector

    International Nuclear Information System (INIS)

    Supian Samat

    1996-01-01

Using Ngraph software, the parameters of various functions were determined by least-squares analysis of fits to experimental efficiencies, ε_f, of a coaxial HPGe detector for gamma rays in the energy range 59 keV to 1836 keV. Once these parameters had been determined, their reliability was tested with the calculated goodness-of-fit parameter χ²_cal. It is shown that the function ln ε_f = Σ_{j=0}^{n} a_j (ln E/E_0)^j, where n = 3, gives satisfactory results.
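
The fitted function is an ordinary cubic polynomial in ln E, so with measured efficiencies in hand the coefficients a_j follow from a linear least-squares fit. A sketch with invented coefficients and a typical set of calibration energies (not the paper's data):

```python
import numpy as np

E0 = 1.0                                   # keV; reference energy (assumed)
E = np.array([59.0, 122.0, 344.0, 662.0, 898.0, 1173.0, 1332.0, 1836.0])
a_true = [-5.2, 0.9, -0.12, 0.004]         # invented a_j coefficients

x = np.log(E / E0)
ln_eff = sum(a * x ** j for j, a in enumerate(a_true))
eff = np.exp(ln_eff)                       # "measured" efficiencies

# least-squares cubic in ln E; polyfit returns highest power first,
# so reverse to recover a_0..a_3 in ascending order
coef = np.polyfit(x, np.log(eff), 3)[::-1]
```

With real data one would weight the fit by the efficiency uncertainties and then evaluate χ² to judge whether n = 3 is adequate.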

  20. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.

  1. A comparison of approaches in fitting continuum SEDs

    International Nuclear Information System (INIS)

    Liu Yao; Wang Hong-Chi; Madlener David; Wolf Sebastian

    2013-01-01

We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date have employed SA due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demands on computational resources and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting to provide a guide for the practitioner to find a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable: while the database provides an approximate fit and overall probability distributions for all parameters deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
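
Simulated annealing itself is simple to state: propose a random move and accept worsening moves with probability exp(-ΔE/T) under a decreasing temperature T. A toy one-dimensional sketch, nothing like a full 10-parameter SED fit:

```python
import math
import random

random.seed(0)

def sa_minimize(f, x0, step, T0=1.0, cooling=0.995, iters=5000):
    """Minimal simulated annealing for a 1-D objective f."""
    x, fx = x0, f(x0)
    best, f_best = x, fx
    T = T0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)      # propose a move
        fc = f(cand)
        # always accept improvements; accept worsenings with prob exp(-dE/T)
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(T, 1e-12)):
            x, fx = cand, fc
            if fx < f_best:
                best, f_best = x, fx
        T *= cooling                                # cool the system
    return best, f_best

# a bumpy objective whose global minimum lies near x = 2
x_hat, _ = sa_minimize(lambda x: (x - 2.0) ** 2 + 0.1 * math.sin(20.0 * x),
                       x0=10.0, step=0.5)
```

In the SED context f would be the chi-square between the observed and the radiative-transfer-modeled SED, and each "move" a perturbation of the model parameters.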

  2. Ionization constants by curve fitting: determination of partition and distribution coefficients of acids and bases and their ions.

    Science.gov (United States)

    Clarke, F H; Cahoon, N M

    1987-08-01

    A convenient procedure has been developed for the determination of partition and distribution coefficients. The method involves the potentiometric titration of the compound, first in water and then in a rapidly stirred mixture of water and octanol. An automatic titrator is used, and the data is collected and analyzed by curve fitting on a microcomputer with 64 K of memory. The method is rapid and accurate for compounds with pKa values between 4 and 10. Partition coefficients can be measured for monoprotic and diprotic acids and bases. The partition coefficients of the neutral compound and its ion(s) can be determined by varying the ratio of octanol to water. Distribution coefficients calculated over a wide range of pH values are presented graphically as "distribution profiles". It is shown that subtraction of the titration curve of solvent alone from that of the compound in the solvent offers advantages for pKa determination by curve fitting for compounds of low aqueous solubility.

  3. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han's model for rubber vulcanization

    Science.gov (United States)

    Milani, G.; Milani, F.

    A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from computations. Three kinetic constants must be determined in such a way to minimize the absolute error between normalized experimental data and numerical prediction. Usually, this result is achieved by means of standard least-squares data fitting. On the contrary, GURU works interactively by means of a Graphical User Interface (GUI) to minimize the error and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders allows the assignment of a value for each kinetic constant and a visual comparison between numerical and experimental curves. Users will thus find optimal values of the constants by means of a classic trial and error strategy. An experimental case of technical relevance is shown as benchmark.

  4. Learning Curves: Making Quality Online Health Information Available at a Fitness Center

    OpenAIRE

    Dobbins, Montie T.; Tarver, Talicia; Adams, Mararia; Jones, Dixie A.

    2012-01-01

    Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center – Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.

  5. Learning Curves: Making Quality Online Health Information Available at a Fitness Center.

    Science.gov (United States)

    Dobbins, Montie T; Tarver, Talicia; Adams, Mararia; Jones, Dixie A

    2012-01-01

    Meeting consumer health information needs can be a challenge. Research suggests that women seek health information from a variety of resources, including the Internet. In an effort to make women aware of reliable health information sources, the Louisiana State University Health Sciences Center - Shreveport Medical Library engaged in a partnership with a franchise location of Curves International, Inc. This article will discuss the project, its goals and its challenges.

  6. Radial artery pulse waveform analysis based on curve fitting using discrete Fourier series.

    Science.gov (United States)

    Jiang, Zhixing; Zhang, David; Lu, Guangming

    2018-04-19

Radial artery pulse diagnosis has long played an important role in traditional Chinese medicine (TCM). Owing to its non-invasiveness and convenience, pulse diagnosis also has great significance for disease analysis in modern medicine. Practitioners sense the pulse waveforms at patients' wrists and make diagnoses based on subjective personal experience. With the development of pulse acquisition platforms and computerized analysis methods, the objective study of pulse diagnosis can help TCM keep up with the development of modern medicine. In this paper, we propose a new method to extract features from pulse waveforms based on the discrete Fourier series (DFS). It regards the waveform as a signal consisting of a series of sub-components represented by sine and cosine signals with different frequencies and amplitudes. After the pulse signals are collected and preprocessed, we fit the average waveform for each sample with a discrete Fourier series by least squares. The feature vector comprises the coefficients of the discrete Fourier series function. Compared with fitting using a Gaussian mixture function, the fitting errors of the proposed method are smaller, indicating that our method represents the original signal better. The classification performance of the proposed feature is superior to other features extracted from the waveform, such as auto-regression and Gaussian mixture models. The coefficients of the optimized DFS function used to fit the arterial pressure waveforms achieve better performance in modeling the waveforms and hold more potential information for distinguishing different psychological states. Copyright © 2018 Elsevier B.V. All rights reserved.
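
Fitting one period of a waveform with a truncated Fourier series by least squares amounts to linear regression on sine/cosine basis columns, with the fitted coefficients serving as the feature vector. A sketch on a synthetic pulse-like signal (the harmonic count K is an arbitrary choice here):

```python
import numpy as np

# one period of a synthetic pulse waveform, sampled uniformly
t = np.linspace(0.0, 1.0, 200, endpoint=False)
y = 1.0 + 0.8 * np.sin(2 * np.pi * t) + 0.3 * np.cos(4 * np.pi * t)

K = 3  # number of harmonics kept in the feature vector
cols = [np.ones_like(t)]
for k in range(1, K + 1):
    cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
X = np.column_stack(cols)                     # [1, cos1, sin1, cos2, ...]

# least-squares DFS coefficients = the waveform's feature vector
feat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the sampled sines and cosines are orthogonal over a full period, the least-squares solution recovers each harmonic's amplitude independently.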

  7. Glycation and secondary conformational changes of human serum albumin: study of the FTIR spectroscopic curve-fitting technique

    Directory of Open Access Journals (Sweden)

    Yu-Ting Huang

    2016-05-01

This study investigated both the glycation kinetics and the secondary conformational changes of human serum albumin (HSA) after reaction with ribose. Browning and fluorescence determinations as well as Fourier transform infrared (FTIR) microspectroscopy with a curve-fitting technique were applied. Various concentrations of ribose were incubated with HSA over a 12-week period at 37 ± 0.5 °C under dark conditions. The results clearly show that glycation in the HSA-ribose reaction mixtures increased markedly with the amount of ribose used and the incubation time, leading to marked alterations of the protein conformation of HSA, as determined by FTIR. In addition, the reaction solutions darkened from light to deep brown, as determined by optical observation. The increase in fluorescence intensity from HSA-ribose mixtures occurred more quickly than browning, suggesting that the fluorescent products were produced earlier in the process than the compounds causing browning. Moreover, the predominant α-helical composition of HSA decreased with increasing ribose concentration and incubation time, whereas total β-structure and random coil composition increased, as determined by curve-fitted FTIR microspectroscopy analysis. We also found that the peak intensity ratio at 1044 cm−1/1542 cm−1 markedly decreased prior to 4 weeks of incubation and then almost plateaued, implying that the consumption of ribose in the glycation reaction might have been accelerated over the first 4 weeks of incubation and then gradually decreased. This study first evidences that two unique IR peaks at 1710 cm−1 [carbonyl groups of irreversible products produced by the reaction and deposition of advanced glycation end products (AGEs)] and 1621 cm−1 (aggregated HSA molecules) were clearly observed in the curve-fitted FTIR spectra of HSA-ribose mixtures over the course of incubation. This study

  8. THE CARNEGIE SUPERNOVA PROJECT: LIGHT-CURVE FITTING WITH SNooPy

    International Nuclear Information System (INIS)

    Burns, Christopher R.; Persson, S. E.; Madore, Barry F.; Freedman, Wendy L.; Stritzinger, Maximilian; Phillips, M. M.; Boldt, Luis; Campillay, Abdo; Folatelli, Gaston; Gonzalez, Sergio; Krzeminski, Wojtek; Morrell, Nidia; Salgado, Francisco; Kattner, ShiAnne; Contreras, Carlos; Suntzeff, Nicholas B.

    2011-01-01

    In providing an independent measure of the expansion history of the universe, the Carnegie Supernova Project (CSP) has observed 71 high-z Type Ia supernovae (SNe Ia) in the near-infrared bands Y and J. These can be used to construct rest-frame i-band light curves which, when compared to a low-z sample, yield distance moduli that are less sensitive to extinction and/or decline-rate corrections than in the optical. However, working with NIR observed and i-band rest-frame photometry presents unique challenges and has necessitated the development of a new set of observational tools in order to reduce and analyze both the low-z and high-z CSP sample. We present in this paper the methods used to generate uBVgriYJH light-curve templates based on a sample of 24 high-quality low-z CSP SNe. We also present two methods for determining the distances to the hosts of SN Ia events. A larger sample of 30 low-z SNe Ia in the Hubble flow is used to calibrate these methods. We then apply the method and derive distances to seven galaxies that are so nearby that their motions are not dominated by the Hubble flow.

  9. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    Science.gov (United States)

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  10. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    Science.gov (United States)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discussed the fit data of potentiometric titration curves using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots with constant ionization characteristics of carboxylic acids, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa titrated and initial concentration of HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these so interesting and intriguing nanoparticles.
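    The fitting procedure described above can be sketched in a few lines: a titration curve modeled as a sum of Henderson-Hasselbalch terms, fitted by the Levenberg-Marquardt algorithm. This is an illustrative sketch, not the paper's code; the two-group model, pKa values, and concentrations below are invented for demonstration (the paper itself used five groups).

    ```python
    # Hedged sketch: nonlinear regression of a potentiometric titration curve
    # with scipy's Levenberg-Marquardt solver. All numbers are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def titration_model(ph, c1, pka1, c2, pka2):
        """Dissociated acid (mmol/g) from two independent acid groups."""
        return c1 / (1 + 10 ** (pka1 - ph)) + c2 / (1 + 10 ** (pka2 - ph))

    rng = np.random.default_rng(0)
    ph = np.linspace(2, 11, 60)
    true = (2.9, 4.5, 2.1, 9.0)                 # c1, pKa1, c2, pKa2 (made up)
    q_obs = titration_model(ph, *true) + rng.normal(0, 0.02, ph.size)

    # method="lm" selects Levenberg-Marquardt (unconstrained least squares)
    popt, pcov = curve_fit(titration_model, ph, q_obs, p0=(1, 4, 1, 9),
                           method="lm")
    perr = np.sqrt(np.diag(pcov))               # 1-sigma parameter uncertainties
    ```

    The sum of the fitted concentration parameters plays the role of the "total number of oxygenated acid groups" reported in the abstract.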

  11. Retention and Curve Number Variability in a Small Agricultural Catchment: The Probabilistic Approach

    Directory of Open Access Journals (Sweden)

    Kazimierz Banasik

    2014-04-01

    Full Text Available The variability of the curve number (CN and the retention parameter (S of the Soil Conservation Service (SCS-CN method in a small agricultural, lowland watershed (23.4 km2 to the gauging station in central Poland has been assessed using the probabilistic approach: distribution fitting and confidence intervals (CIs. Empirical CNs and Ss were computed directly from recorded rainfall depths and direct runoff volumes. Two measures of the goodness of fit were used as selection criteria in the identification of the parent distribution function. The measures specified the generalized extreme value (GEV, normal and general logistic (GLO distributions for 100-CN and GLO, lognormal and GEV distributions for S. The characteristics estimated from theoretical distribution (median, quantiles were compared to the tabulated CN and to the antecedent runoff conditions of Hawkins and Hjelmfelt. The distribution fitting for the whole sample revealed a good agreement between the tabulated CN and the median and between the antecedent runoff conditions (ARCs of Hawkins and Hjelmfelt, which certified a good calibration of the model. However, the division of the CN sample due to heavy and moderate rainfall depths revealed a serious inconsistency between the parameters mentioned. This analysis proves that the application of the SCS-CN method should rely on deep insight into the probabilistic properties of CN and S.
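    The empirical CN values whose distribution is fitted above are back-calculated from observed rainfall-runoff pairs. A minimal sketch of that step, using the standard SCS-CN inversion (in mm) common in the Hawkins literature and invented event data:

    ```python
    # Hedged sketch: back-calculating retention S and CN from rainfall-runoff
    # events. P and Q are event rainfall and direct runoff in mm (invented).
    import numpy as np

    def empirical_cn(p, q):
        """Invert the SCS-CN runoff equation for S (mm), then convert to CN."""
        s = 5.0 * (p + 2.0 * q - np.sqrt(4.0 * q**2 + 5.0 * p * q))
        return 25400.0 / (s + 254.0)

    p = np.array([30.0, 45.0, 60.0, 80.0])    # illustrative event rainfall
    q = np.array([3.0, 8.0, 15.0, 25.0])      # illustrative direct runoff
    cn = empirical_cn(p, q)                   # one CN per event; the spread of
                                              # these values is what the
                                              # GEV/GLO/normal fitting describes
    ```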

  12. The real-time fitting of radioactive decay curves. Pt. 3. Counting during sampling

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1994-01-01

    An analysis of a least-squares method for the real-time fitting of the theoretical total count function to the actual total count from radioactive decays has been given previously for the case where counting takes place after a sample is taken. The counting may be done in a number of different counting systems which distinguish between different types or energies of radiation emitted from the sample. The method would allow real-time determination of the numbers of atoms and hence activities of the individual isotopes present and has been designated the Time Evolved Least-Squares method (TELS). If the radioactivity which is to be measured exists as an aerosol or in a form where a sample is taken at a constant rate it may be possible to count during sampling and by so doing reduce the total time required to determine the activity of the individual isotopes present. The TELS method is extended here to the case where counting and the evaluation of the activity takes place concurrently with the sampling. The functions which need to be evaluated are derived and the calculations required to implement the method are discussed. As with the TELS method of counting after sampling the technique of counting during sampling and the simultaneous evaluation of activity could be achieved in real-time. Results of testing the method by computer simulation for two counting schemes for the descendants of radon are presented. ((orig.))

  13. Hot Spots Detection of Operating PV Arrays through IR Thermal Image Using Method Based on Curve Fitting of Gray Histogram

    Directory of Open Access Journals (Sweden)

    Jiang Lin

    2016-01-01

    Full Text Available The overall efficiency of PV arrays is reduced by hot spots, which should be detected and diagnosed with suitable monitoring techniques. Detecting hot spots from IR thermal images has been studied as a direct, noncontact, nondestructive technique. However, IR thermal images suffer from relatively high stochastic noise and non-uniform clutter, so conventional image-processing methods are not effective. This paper proposes a method to detect hot spots based on curve fitting of the gray histogram. MATLAB simulation results show that the proposed method detects hot spots effectively while suppressing the noise generated during image acquisition.
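    The histogram curve-fitting idea can be illustrated as follows: fit a Gaussian to the gray-level histogram of the thermal image so that pixels far above the fitted background peak are flagged as hot-spot candidates. The synthetic "image", the Gaussian background model, and the 3-sigma threshold are assumptions for this sketch, not details from the paper.

    ```python
    # Hedged sketch: Gaussian fit to a gray histogram, then thresholding.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, mu, sigma):
        return a * np.exp(-((x - mu) ** 2) / (2 * sigma**2))

    rng = np.random.default_rng(1)
    img = rng.normal(120, 10, (64, 64))          # background panel "temperatures"
    img[30:34, 30:34] = 200                      # injected hot spot

    hist, edges = np.histogram(img.ravel(), bins=64)
    centers = 0.5 * (edges[:-1] + edges[1:])
    popt, _ = curve_fit(gaussian, centers, hist,
                        p0=(hist.max(), img.mean(), img.std()))

    threshold = popt[1] + 3 * popt[2]            # fitted mean + 3 fitted sigma
    hot_mask = img > threshold                   # hot-spot candidate pixels
    ```

    Fitting the histogram rather than thresholding raw statistics is what gives the method its noise robustness: isolated hot pixels barely perturb the fitted background peak.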

  14. Fitness of gutta-percha cones in curved root canals prepared with reciprocating files correlated with tug-back sensation.

    Science.gov (United States)

    Yoon, Heeyoung; Baek, Seung-Ho; Kum, Kee-Yeon; Kim, Hyeon-Cheol; Moon, Young-Mi; Fang, Denny Y; Lee, WooCheol

    2015-01-01

    The purpose of this study was to evaluate the gutta-percha-occupied area (GPOA) and the relationship between GPOA and tug-back sensations in canals instrumented with reciprocating files. Twenty curved canals were instrumented using Reciproc R25 (VDW, Munich, Germany) (group R) and WaveOne Primary (Dentsply Maillefer, Ballaigues, Switzerland) (group W), respectively (n = 10 each). The presence or absence of a tug-back sensation was recorded for both #25/.08 and #30/.06 cones in every canal. The percentage of GPOA at 1-, 2-, and 3-mm levels from the working length was calculated using micro-computed tomographic imaging. The correlation between the sum of the GPOA and the presence of a tug-back sensation was also investigated. The data were analyzed statistically at P = .05. A tug-back sensation was present in 45% and 100% of canals for the #25/.08 and #30/.06 cones, respectively, a significant difference (P < .05), and the sum of the GPOA was greater in canals showing a tug-back sensation (P < .05). Under the conditions of this study, the tug-back sensation can be a definitive determinant for indicating higher cone fitness in the curved canal regardless of the cone type. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  15. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  16. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Rongda Chen

    Full Text Available Recovery rate is essential to the estimation of the portfolio's loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody's. However, it has a fatal defect that it can't fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody's new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management.

  17. Curve Fitting of the Corporate Recovery Rates: The Comparison of Beta Distribution Estimation and Kernel Density Estimation

    Science.gov (United States)

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of the portfolio’s loss and economic capital. Neglecting the randomness of the distribution of recovery rate may underestimate the risk. The study introduces two kinds of models of distribution, Beta distribution estimation and kernel density distribution estimation, to simulate the distribution of recovery rates of corporate loans and bonds. As is known, models based on Beta distribution are common in daily usage, such as CreditMetrics by J.P. Morgan, Portfolio Manager by KMV and Losscalc by Moody’s. However, it has a fatal defect that it can’t fit the bimodal or multimodal distributions such as recovery rates of corporate loans and bonds as Moody’s new data show. In order to overcome this flaw, the kernel density estimation is introduced and we compare the simulation results by histogram, Beta distribution estimation and kernel density estimation to reach the conclusion that the Gaussian kernel density distribution really better imitates the distribution of the bimodal or multimodal data samples of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimation proves that it can fit the curve of recovery rates of loans and bonds. So using the kernel density distribution to precisely delineate the bimodal recovery rates of bonds is optimal in credit risk management. PMID:23874558
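    The comparison described in records 15-17 can be sketched directly: a parametric Beta fit versus a Gaussian kernel density estimate on a bimodal recovery-rate sample. The synthetic two-component sample below stands in for the Moody's loan/bond data; a Beta MLE on such symmetric bimodal data tends toward a U-shape, while the KDE tracks the two interior modes.

    ```python
    # Hedged sketch: Beta distribution estimation vs. Gaussian KDE for a
    # bimodal recovery-rate sample (synthetic data, not Moody's).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Bimodal recoveries: many near-total losses, many near-full recoveries.
    sample = np.concatenate([rng.beta(2, 8, 500), rng.beta(8, 2, 500)])

    a, b, loc, scale = stats.beta.fit(sample, floc=0, fscale=1)  # parametric
    kde = stats.gaussian_kde(sample)                             # nonparametric

    x = np.linspace(0.01, 0.99, 99)
    beta_pdf = stats.beta.pdf(x, a, b, loc, scale)
    kde_pdf = kde(x)
    ```

    Plotting `beta_pdf` against `kde_pdf` reproduces the abstract's point: only the kernel density estimate recovers the two interior modes near 0.2 and 0.8.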

  18. A person-environment fit approach to volunteerism: Volunteer personality-fit and culture-fit as predictors of affective outcomes

    NARCIS (Netherlands)

    van Vianen, A.E.M.; Nijstad, B.A.; Voskuijl, O.F.

    2008-01-01

    This study employed a person-environment (P-E) fit approach to explaining volunteer satisfaction, affective commitment, and turnover intentions. It was hypothesized that personality fit would explain additional variance in volunteer affective outcomes above and beyond motives to volunteer. This

  19. Predicting Change in Postpartum Depression: An Individual Growth Curve Approach.

    Science.gov (United States)

    Buchanan, Trey

    Recently, methodologists interested in examining problems associated with measuring change have suggested that developmental researchers should focus upon assessing change at both intra-individual and inter-individual levels. This study used an application of individual growth curve analysis to the problem of maternal postpartum depression.…

  20. A novel knot selection method for the error-bounded B-spline curve fitting of sampling points in the measuring process

    International Nuclear Information System (INIS)

    Liang, Fusheng; Zhao, Ji; Ji, Shijun; Zhang, Bing; Fan, Cheng

    2017-01-01

    The B-spline curve has been widely used in the reconstruction of measurement data. The error-bounded sampling points reconstruction can be achieved by the knot addition method (KAM) based B-spline curve fitting. In KAM, the selection pattern of initial knot vector has been associated with the ultimate necessary number of knots. This paper provides a novel initial knots selection method to condense the knot vector required for the error-bounded B-spline curve fitting. The initial knots are determined by the distribution of features which include the chord length (arc length) and bending degree (curvature) contained in the discrete sampling points. Firstly, the sampling points are fitted into an approximate B-spline curve Gs with intensively uniform knot vector to substitute the description of the feature of the sampling points. The feature integral of Gs is built as a monotone increasing function in an analytic form. Then, the initial knots are selected according to the constant increment of the feature integral. After that, an iterative knot insertion (IKI) process starting from the initial knots is introduced to improve the fitting precision, and the ultimate knot vector for the error-bounded B-spline curve fitting is achieved. Lastly, two simulations and the measurement experiment are provided, and the results indicate that the proposed knot selection method can reduce the number of ultimate knots available. (paper)
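    The error-bounded fitting loop can be sketched with scipy's least-squares spline: fit with a small interior knot set, check the maximum deviation, and add knots until the bound holds. This uses uniform knot refinement as a stand-in for the paper's feature-integral knot placement, so it illustrates the error bound, not the proposed selection rule.

    ```python
    # Hedged sketch: error-bounded least-squares B-spline fitting with
    # uniform knot refinement (NOT the paper's feature-based selection).
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    x = np.linspace(0, 2 * np.pi, 200)
    y = np.sin(x) + 0.3 * np.sin(3 * x)       # illustrative sampled profile
    tol = 1e-3                                # error bound on the fit

    n_interior = 4
    while True:
        t = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]  # interior knots
        spline = LSQUnivariateSpline(x, y, t, k=3)          # cubic LSQ fit
        if np.max(np.abs(spline(x) - y)) <= tol:
            break                              # error-bounded: stop refining
        n_interior += 2                        # otherwise add knots and retry
    ```

    The paper's contribution is precisely to make the initial `t` informative (chord length and curvature) so that far fewer refinement rounds, and ultimately fewer knots, are needed than with a uniform start.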

  1. Structural modeling of age specific fertility curves in Peninsular Malaysia: An approach of Lee Carter method

    Science.gov (United States)

    Hanafiah, Hazlenah; Jemain, Abdul Aziz

    2013-11-01

    In recent years, the study of fertility has received considerable attention among researchers abroad, following fears of fertility deterioration driven by rapid economic development. Hence, this study examines the feasibility of developing fertility forecasts based on age structure. The Lee-Carter model (1992) is applied in this study, as it is an established and widely used model for analysing demographic data. A singular value decomposition approach is combined with an ARIMA model to estimate age-specific fertility rates in Peninsular Malaysia over the period 1958-2007. Residual plots are used to assess the goodness of fit of the model. A fertility index forecast using a random walk with drift is then used to predict future age-specific fertility. Results indicate that the proposed model provides a relatively good and reasonable fit to the data. In addition, there is an apparent and continuous decline in the age-specific fertility curves over the next 10 years, particularly among mothers in their early 20s and 40s. The study of fertility is vital in order to maintain a balance between population growth and the provision of related facilities and resources.

  2. Master sintering curve: A practical approach to its construction

    Directory of Open Access Journals (Sweden)

    Pouchly V.

    2010-01-01

    Full Text Available The concept of a Master Sintering Curve (MSC is a strong tool for optimizing the sintering process. However, constructing the MSC from sintering data involves complicated and time-consuming calculations. A practical method for the construction of a MSC is presented in the paper. With the help of a few dilatometric sintering experiments the newly developed software calculates the MSC and finds the optimal activation energy of a given material. The software, which also enables sintering prediction, was verified by sintering tetragonal and cubic zirconia, and alumina of two different particle sizes.

  3. A new approach to the analysis of Mira light curves

    International Nuclear Information System (INIS)

    Mennessier, M.O.; Barthes, D.; Mattei, J.A.

    1990-01-01

    Two different but complementary methods for predicting Mira luminosities are presented. One method is derived from a Fourier analysis, it requires performing deconvolution, and its results are not certain due to the inherent instability of deconvolution problems. The other method is a learning method utilizing artificial intelligence techniques where a light curve is presented as an ordered sequence of pseudocycles, and rules are learned by linking the characteristics of several consecutive pseudocycles to one characteristic of the future cycle. It is observed that agreement between these methods is obtainable when it is possible to eliminate similar false frequencies from the preliminary power spectrum and to improve the degree of confidence in the rules

  4. ROBUST DECLINE CURVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sutawanir Darwis

    2012-05-01

    Full Text Available Empirical decline curve analysis of oil production data gives reasonable answers for hyperbolic type curves; however, the methodology has limitations when fitting real historical production data in the presence of unusual observations caused by treatments applied to the well to increase production capacity. The development of robust least squares offers new possibilities for better fitting production data with decline curve analysis by down-weighting the unusual observations. This paper proposes a robust least squares fitting (lmRobMM) approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. As a case study, we use the oil production data from the TBA Field, West Java. The results demonstrate that the approach is suitable for decline curve fitting and offers new insight into decline curve analysis in the presence of unusual observations.
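    The robust-fitting idea can be sketched with an Arps hyperbolic decline, q(t) = qi / (1 + b·Di·t)^(1/b), fitted under a soft-L1 loss so that a burst of treatment-related outliers is down-weighted. This uses scipy's robust least squares as a stand-in for the lmRobMM routine named in the abstract; all production numbers are invented.

    ```python
    # Hedged sketch: robust Arps hyperbolic decline fit with outliers
    # down-weighted by a soft-L1 loss (stand-in for lmRobMM).
    import numpy as np
    from scipy.optimize import least_squares

    def arps(theta, t):
        qi, di, b = theta
        return qi / (1.0 + b * di * t) ** (1.0 / b)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1000, 120)                    # days on production
    q = arps((500.0, 0.004, 0.8), t) + rng.normal(0, 5, t.size)
    q[40:44] += 150                                  # outliers from a treatment

    fit = least_squares(lambda th: arps(th, t) - q,
                        x0=(400.0, 0.01, 0.5),
                        loss="soft_l1", f_scale=10.0,
                        bounds=([1.0, 1e-5, 0.01], [1e4, 1.0, 2.0]))
    qi_hat, di_hat, b_hat = fit.x
    ```

    With `loss="linear"` instead, the four outlier days would visibly drag the fitted decline rate; the robust loss leaves the estimates near the generating parameters.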

  5. Non-linear least squares curve fitting of a simple theoretical model to radioimmunoassay dose-response data using a mini-computer

    International Nuclear Information System (INIS)

    Wilkins, T.A.; Chadney, D.C.; Bryant, J.; Palmstroem, S.H.; Winder, R.L.

    1977-01-01

    Using the simple univalent antigen univalent-antibody equilibrium model the dose-response curve of a radioimmunoassay (RIA) may be expressed as a function of Y, X and the four physical parameters of the idealised system. A compact but powerful mini-computer program has been written in BASIC for rapid iterative non-linear least squares curve fitting and dose interpolation with this function. In its simplest form the program can be operated in an 8K byte mini-computer. The program has been extensively tested with data from 10 different assay systems (RIA and CPBA) for measurement of drugs and hormones ranging in molecular size from thyroxine to insulin. For each assay system the results have been analysed in terms of (a) curve fitting biases and (b) direct comparison with manual fitting. In all cases the quality of fitting was remarkably good in spite of the fact that the chemistry of each system departed significantly from one or more of the assumptions implicit in the model used. A mathematical analysis of departures from the model's principal assumption has provided an explanation for this somewhat unexpected observation. The essential features of this analysis are presented in this paper together with the statistical analyses of the performance of the program. From these and the results obtained to date in the routine quality control of these 10 assays, it is concluded that the method of curve fitting and dose interpolation presented in this paper is likely to be of general applicability. (orig.) [de
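    The abstract fits the univalent antigen-antibody equilibrium model; as a hedged stand-in, this sketch performs the same two steps (iterative nonlinear least-squares curve fitting, then dose interpolation) with the four-parameter logistic commonly used for RIA dose-response data. Parameter values and counts are invented.

    ```python
    # Hedged sketch: 4PL stand-in for RIA dose-response fitting and dose
    # interpolation (the paper's own model is the univalent equilibrium).
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(dose, top, bottom, ec50, hill):
        """Bound counts as a function of dose (4PL dose-response)."""
        return bottom + (top - bottom) / (1.0 + (dose / ec50) ** hill)

    rng = np.random.default_rng(4)
    dose = np.logspace(-2, 3, 12)                  # illustrative standards
    counts = four_pl(dose, 9000, 500, 5.0, 1.2) + rng.normal(0, 100, dose.size)

    popt, _ = curve_fit(four_pl, dose, counts, p0=(8000, 1000, 1.0, 1.0),
                        bounds=([0, 0, 1e-6, 0.1], [1e5, 1e4, 1e3, 5.0]))

    def interpolate_dose(y, top, bottom, ec50, hill):
        """Invert the fitted curve to read a dose off a measured count."""
        return ec50 * ((top - bottom) / (y - bottom) - 1.0) ** (1.0 / hill)
    ```

    Dose interpolation for an unknown sample is then a closed-form inversion of the fitted curve, e.g. `interpolate_dose(measured_counts, *popt)`.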

  6. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han’s model for rubber vulcanization

    OpenAIRE

    Milani, G.; Milani, F.

    2016-01-01

    A GUI software (GURU) for experimental data fitting of rheometer curves in Natural Rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet coming from the output of the experimental machine (moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, ...

  7. Lie-Hamilton systems on curved spaces: a geometrical approach

    Science.gov (United States)

    Herranz, Francisco J.; de Lucas, Javier; Tobolski, Mariusz

    2017-12-01

    A Lie-Hamilton system is a nonautonomous system of first-order ordinary differential equations describing the integral curves of a t-dependent vector field taking values in a finite-dimensional Lie algebra, a Vessiot-Guldberg Lie algebra, of Hamiltonian vector fields relative to a Poisson structure. Its general solution can be written as an autonomous function, the superposition rule, of a generic finite family of particular solutions and a set of constants. We pioneer the study of Lie-Hamilton systems on Riemannian spaces (sphere, Euclidean and hyperbolic plane), pseudo-Riemannian spaces (anti-de Sitter, de Sitter, and Minkowski spacetimes) as well as on semi-Riemannian spaces (Newtonian spacetimes). Their corresponding constants of motion and superposition rules are obtained explicitly in a geometric way. This work extends the (graded) contraction of Lie algebras to a contraction procedure for Lie algebras of vector fields, Hamiltonian functions, and related symplectic structures, invariants, and superposition rules.

  8. Integrated healthcare networks' performance: a growth curve modeling approach.

    Science.gov (United States)

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  9. A dynamic approach to the Environmental Kuznets Curve hypothesis

    International Nuclear Information System (INIS)

    Agras, Jean; Chapman, Duane

    1999-01-01

    The Environmental Kuznets Curve (EKC) hypothesis states that pollution levels increase as a country develops, but begin to decrease as rising incomes pass beyond a turning point. In EKC analyses, the relationship between environmental degradation and income is usually expressed as a quadratic function with the turning point occurring at a maximum pollution level. Other explanatory variables have been included in these models, but income regularly has had the most significant effect on indicators of environmental quality. One variable consistently omitted in these relationships is the price of energy. This paper analyzes previous models to illustrate the importance of prices in these models and then includes prices in an econometric EKC framework testing energy/income and CO2/income relationships. These long-run price/income models find that income is no longer the most relevant indicator of environmental quality or energy demand. Indeed, we find no significant evidence for the existence of an EKC within the range of current incomes for energy in the presence of price and trade variables

  10. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    Science.gov (United States)

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  11. SU-F-I-63: Relaxation Times of Lipid Resonances in NAFLD Animal Model Using Enhanced Curve Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Song, K-H; Yoo, C-H; Lim, S-I; Choe, B-Y [Department of Biomedical Engineering, and Research Institute of Biomedical Engineering, The Catholic University of Korea College of Medicine, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: The objective of this study is to evaluate the relaxation time of the methylene resonance in comparison with other lipid resonances. Methods: The examinations were performed on a 3.0T MRI scanner using a four-channel animal coil. Eight more Sprague-Dawley rats in the same baseline weight range were housed with ad libitum access to water and a high-fat (HF) diet (60% fat, 20% protein, and 20% carbohydrate). In order to avoid large blood vessels, a voxel (0.8×0.8×0.8 cm³) was placed in a homogeneous area of the liver parenchyma during free breathing. Lipid relaxations in NC and HF diet rats were estimated at a fixed repetition time (TR) of 6000 msec and multiple echo times (TEs) of 40–220 msec. All spectra were processed using the Advanced Method for Accurate, Robust, and Efficient Spectral (AMARES) fitting algorithm of the Java-based Magnetic Resonance User Interface (jMRUI) package. Results: The mean T2 relaxation time of the methylene resonance in normal-chow diet rats was 37.1 msec (M₀, 2.9±0.5), with a standard deviation of 4.3 msec. The mean T2 relaxation time of the methylene resonance in HF diet rats was 31.4 msec (M₀, 3.7±0.3), with a standard deviation of 1.8 msec. The T2 relaxation times of methylene protons were higher in normal-chow diet rats than in HF rats (p<0.05), and the extrapolated M₀ values were higher in HF rats than in NC rats (p<0.005). The excellent linear fits with R²>0.9971 and R²>0.9987 indicate T2 relaxation decay curves with a mono-exponential function. Conclusion: In vivo, a sufficient spectral resolution and a sufficiently high signal-to-noise ratio (SNR) can be achieved, so that data measured over short TE values can be extrapolated back to TE = 0 to produce better estimates of the relative weights of the spectral components. In the short term, treating the effective decay rate as exponential is an adequate approximation.
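    The T2 estimation above amounts to fitting a mono-exponential decay S(TE) = M0·exp(-TE/T2) to the signal sampled at multiple echo times and extrapolating back to TE = 0. A minimal sketch, with illustrative M0/T2 values rather than the study's measurements:

    ```python
    # Hedged sketch: mono-exponential T2 fit over multiple echo times.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(te, m0, t2):
        """Signal vs. echo time for a single T2 component."""
        return m0 * np.exp(-te / t2)

    te = np.arange(40.0, 221.0, 20.0)              # echo times, msec
    rng = np.random.default_rng(5)
    signal = decay(te, 2.9, 37.1) + rng.normal(0, 0.005, te.size)

    (m0_hat, t2_hat), _ = curve_fit(decay, te, signal, p0=(1.0, 50.0))
    # m0_hat is the TE = 0 extrapolation; t2_hat the relaxation time (msec)
    ```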

  12. A Monte Carlo Study of the Effect of Item Characteristic Curve Estimation on the Accuracy of Three Person-Fit Statistics

    Science.gov (United States)

    St-Onge, Christina; Valois, Pierre; Abdous, Belkacem; Germain, Stephane

    2009-01-01

    To date, there have been no studies comparing parametric and nonparametric Item Characteristic Curve (ICC) estimation methods on the effectiveness of Person-Fit Statistics (PFS). The primary aim of this study was to determine if the use of ICCs estimated by nonparametric methods would increase the accuracy of item response theory-based PFS for…

  13. Electrolyte solutions at curved electrodes. II. Microscopic approach.

    Science.gov (United States)

    Reindl, Andreas; Bier, Markus; Dietrich, S

    2017-04-21

    Density functional theory is used to describe electrolyte solutions in contact with electrodes of planar or spherical shape. For the electrolyte solutions, we consider the so-called civilized model, in which all species present are treated on equal footing. This allows us to discuss the features of the electric double layer in terms of the differential capacitance. The model provides insight into the microscopic structure of the electric double layer, which goes beyond the mesoscopic approach studied in Paper I. This enables us to judge the relevance of microscopic details, such as the radii of the particles forming the electrolyte solutions or the dipolar character of the solvent particles, and to compare the predictions of various models. Similar to Paper I, a general behavior is observed for small radii of the electrode in that in this limit the results become independent of the surface charge density and of the particle radii. However, for large electrode radii, non-trivial behaviors are observed. Especially the particle radii and the surface charge density strongly influence the capacitance. From the comparison with the Poisson-Boltzmann approach, it becomes apparent that the shape of the electrode determines whether the microscopic details of the full civilized model have to be taken into account or whether already simpler models yield acceptable predictions.

  14. Dynamic Regulation of a Cell Adhesion Protein Complex Including CADM1 by Combinatorial Analysis of FRAP with Exponential Curve-Fitting

    Science.gov (United States)

    Sakurai-Yageta, Mika; Maruyama, Tomoko; Suzuki, Takashi; Ichikawa, Kazuhisa; Murakami, Yoshinori

    2015-01-01

    Protein components of cell adhesion machinery show continuous renewal even in the static state of epithelial cells and participate in the formation and maintenance of normal epithelial architecture and tumor suppression. CADM1 is a tumor suppressor belonging to the immunoglobulin superfamily of cell adhesion molecules and forms a cell adhesion complex with an actin-binding protein, 4.1B, and a scaffold protein, MPP3, in the cytoplasm. Here, we investigate dynamic regulation of the CADM1-4.1B-MPP3 complex in mature cell adhesion by fluorescence recovery after photobleaching (FRAP) analysis. Traditional FRAP analyses were performed for a relatively short period of around 10 min. Here, thanks to recent advances in sensitive laser detector systems, we examine FRAP of the CADM1 complex for a longer period of 60 min and analyze the recovery with exponential curve fitting to distinguish fractions with different diffusion constants. This approach reveals that the fluorescence recovery of CADM1 is fitted by a single exponential function with a time constant (τ) of approximately 16 min, whereas 4.1B and MPP3 are fitted by a double exponential function with two τs of approximately 40-60 sec and 16 min. The longer τ is similar to that of CADM1, suggesting that 4.1B and MPP3 have two distinct fractions, one forming a complex with CADM1 and the other present as a free pool. Fluorescence loss in photobleaching analysis supports the presence of a free pool of these proteins near the plasma membrane. Furthermore, double exponential fitting makes it possible to estimate the ratios of 4.1B and MPP3 present as a free pool versus in a complex with CADM1 as approximately 3:2 and 3:1, respectively. Our analyses reveal a central role of CADM1 in stabilizing the complex with 4.1B and MPP3 and provide insight into the dynamics of adhesion complex formation. PMID:25780926
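    The single- versus double-exponential comparison in this abstract can be sketched on synthetic FRAP data: a recovery generated from a fast free pool plus a slow complexed pool is fitted with both models, and the amplitude ratio of the double fit recovers the pool fractions. Time constants and fractions below are illustrative, not the study's values.

    ```python
    # Hedged sketch: single vs. double exponential FRAP recovery fitting.
    import numpy as np
    from scipy.optimize import curve_fit

    def single_exp(t, a, tau):
        return a * (1.0 - np.exp(-t / tau))

    def double_exp(t, a1, tau1, a2, tau2):
        return a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

    rng = np.random.default_rng(6)
    t = np.linspace(0, 60, 200)                    # minutes after bleaching
    y = double_exp(t, 0.6, 1.0, 0.4, 16.0) + rng.normal(0, 0.01, t.size)

    p1, _ = curve_fit(single_exp, t, y, p0=(1.0, 5.0))
    p2, _ = curve_fit(double_exp, t, y, p0=(0.5, 0.5, 0.5, 10.0))

    sse1 = np.sum((single_exp(t, *p1) - y) ** 2)   # single-exp residuals
    sse2 = np.sum((double_exp(t, *p2) - y) ** 2)   # double-exp residuals

    # Amplitude of the faster component, guarding against parameter swap.
    fast_amp = p2[0] if p2[1] < p2[3] else p2[2]
    free_fraction = fast_amp / (p2[0] + p2[2])     # cf. the ~3:2 pool ratio
    ```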

  15. A bivariate contaminated binormal model for robust fitting of proper ROC curves to a pair of correlated, possibly degenerate, ROC datasets.

    Science.gov (United States)

    Zhai, Xuetong; Chakraborty, Dev P

    2017-06-01

The objective was to design and implement a bivariate extension to the contaminated binormal model (CBM) to fit paired receiver operating characteristic (ROC) datasets-possibly degenerate-with proper ROC curves. Paired datasets yield two correlated ratings per case. Degenerate datasets have no interior operating points, and proper ROC curves do not inappropriately cross the chance diagonal. The existing method, developed more than three decades ago, utilizes a bivariate extension to the binormal model, implemented in the CORROC2 software, which yields improper ROC curves and cannot fit degenerate datasets. CBM can fit proper ROC curves to unpaired (i.e., yielding one rating per case) and degenerate datasets, and there is a clear scientific need to extend it to handle paired datasets. In CBM, nondiseased cases are modeled by a probability density function (pdf) consisting of a unit variance peak centered at zero. Diseased cases are modeled with a mixture distribution whose pdf consists of two unit variance peaks, one centered at positive μ with integrated probability α, the mixing fraction parameter, corresponding to the fraction of diseased cases where the disease was visible to the radiologist, and one centered at zero, with integrated probability (1-α), corresponding to disease that was not visible. It is shown that: (a) for nondiseased cases the bivariate extension is a unit-variance bivariate normal distribution centered at (0,0) with a specified correlation ρ1; (b) for diseased cases the bivariate extension is a mixture distribution with four peaks, corresponding to disease not visible in either condition, disease visible in only one condition, contributing two peaks, and disease visible in both conditions. An expression for the likelihood function is derived. A maximum likelihood estimation (MLE) algorithm, CORCBM, was implemented in the R programming language that yields parameter estimates and the covariance matrix of the parameters, and other statistics.
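For reference, the ROC curve implied by the univariate CBM mixture described above can be computed in closed form: with threshold ζ, FPF(ζ) = Φ(-ζ) and TPF(ζ) = (1-α)Φ(-ζ) + αΦ(μ-ζ). The sketch below assumes this standard CBM form and is not taken from CORCBM itself:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cbm_roc(mu, alpha, n=200):
    """Operating points of the contaminated binormal model:
    nondiseased ~ N(0,1); diseased ~ alpha*N(mu,1) + (1-alpha)*N(0,1)."""
    zeta = np.linspace(-5.0, 5.0, n)
    fpf = np.array([Phi(-z) for z in zeta])
    tpf = np.array([(1.0 - alpha) * Phi(-z) + alpha * Phi(mu - z) for z in zeta])
    return fpf, tpf

fpf, tpf = cbm_roc(mu=2.0, alpha=0.7)
```

Because TPF - FPF = α[Φ(μ-ζ) - Φ(-ζ)] ≥ 0 for μ ≥ 0, the curve never crosses below the chance diagonal, which is what makes CBM ROC curves "proper".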

  16. Determination of the secondary energy from the electron beam with a flattening foil by computer. Percentage depth dose curve fitting using the specific higher order polynomial

    Energy Technology Data Exchange (ETDEWEB)

    Kawakami, H [Kyushu Univ., Beppu, Oita (Japan). Inst. of Balneotherapeutics]

    1980-09-01

A computer program written in FORTRAN is described for determining the secondary energy of an electron beam which has passed through a flattening foil, using a time-sharing computer service. The procedure is first to fit a specific higher-order polynomial to the measured percentage depth dose curve. Next, the practical range is evaluated as the point of intersection R of the line tangent to the fitted curve at the inflection point P with the given dose level E, as shown in Fig. 2. Finally, the secondary energy corresponding to the determined practical range is obtained from the experimental equation (2.1) relating the practical range R (g/cm²) to the electron energy T (MeV). A graph of the fitted polynomial with the inflection points and the practical range can be plotted on a teletype machine at the user's request. In order to estimate the shapes of percentage depth dose curves corresponding to electron beams of different energies, we tried to find specific functional relationships between each coefficient of the fitted seventh-degree equation and the incident electron energies. However, exact relationships could not be obtained because of irregularity among these coefficients.
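The tangent-at-inflection construction can be sketched numerically. The synthetic depth-dose curve, the 2% tail level, and the final energy-range relation below are assumptions for illustration (the paper's own equation (2.1) is not reproduced here; a commonly quoted relation, T ≈ 0.22 + 1.98·R + 0.0025·R², is used instead):

```python
import numpy as np

# Synthetic electron-beam percentage depth dose: plateau, then steep fall-off.
z = np.linspace(0.0, 6.0, 61)                     # depth (cm)
pdd = 100.0 / (1.0 + np.exp(2.5 * (z - 3.0)))     # sigmoid fall-off near 3 cm

p = np.poly1d(np.polyfit(z, pdd, 7))              # seventh-degree fit, as in the paper
dp, d2p = np.polyder(p), np.polyder(p, 2)

# Inflection point P: real root of p'' on the falling edge (steepest descent).
cand = [r.real for r in d2p.roots if abs(r.imag) < 1e-8 and 1.0 < r.real < 5.0]
zi = min(cand, key=lambda r: dp(r))               # most negative slope

# Practical range R: tangent at P extended down to the tail dose level E.
E = 2.0                                           # assumed bremsstrahlung tail (%)
Rp = zi + (E - p(zi)) / dp(zi)

# Energy from range, using a commonly quoted range-energy relation (assumed).
T = 0.22 + 1.98 * Rp + 0.0025 * Rp ** 2           # MeV
```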

  17. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers; Metodos de ajuste de curvas de eficiencia obtidas por meio de espectrometros de HPGe

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Vanderlei

    2002-07-01

The present work describes methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting was performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure for giving a complete description of the partial uncertainties involved. (author)

  18. Master curve approach to monitor fracture toughness of reactor pressure vessels in nuclear power plants

    International Nuclear Information System (INIS)

    2009-10-01

A series of coordinated research projects (CRPs) have been sponsored by the IAEA, starting in the early 1970s, focused on neutron radiation effects on reactor pressure vessel (RPV) steels. The purpose of the CRPs was to develop correlative comparisons to test the uniformity of results through coordinated international research studies and data sharing. The overall scope of the eighth CRP (CRP-8), Master Curve Approach to Monitor Fracture Toughness of Reactor Pressure Vessels in Nuclear Power Plants, has evolved from previous CRPs which focused on fracture toughness related issues. The ultimate use of embrittlement understanding is application to assure structural integrity of the RPV under current and future operating and accident conditions. The Master Curve approach for assessing the fracture toughness of a sampled irradiated material has been gaining acceptance throughout the world. This direct measurement of fracture toughness is technically superior to the correlative and indirect methods used in the past to assess irradiated RPV integrity. Several elements have been identified as focal points for Master Curve use: (i) limits of applicability of the Master Curve at the upper range of the transition region, for loading rates from quasi-static to dynamic/impact; (ii) effects of non-homogeneous material or changes due to environmental conditions on the Master Curve, and how heterogeneity can be integrated into a more inclusive Master Curve methodology; (iii) how fracture mode differences and changes affect the Master Curve shape. The collected data in this report represent mostly results from non-irradiated testing, although some results from test reactor irradiations and plant surveillance programmes have been included as available. The results presented here should allow utility engineers and scientists to directly measure fracture toughness using small surveillance-size specimens and apply the results using the Master Curve approach.
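For orientation, the Master Curve referred to throughout this report has a standard closed form (per ASTM E1921): the median fracture toughness for 1T specimens is K_Jc(med) = 30 + 70·exp[0.019(T - T0)] MPa·√m, with scatter described by a three-parameter Weibull distribution (threshold 20 MPa·√m, shape 4). A minimal sketch of the median curve and tolerance bounds, with an assumed reference temperature T0:

```python
import numpy as np

def master_curve_median(T, T0):
    """Median fracture toughness (MPa*sqrt(m)), ASTM E1921 form;
    temperatures in degrees C."""
    return 30.0 + 70.0 * np.exp(0.019 * (np.asarray(T, float) - T0))

def master_curve_bound(T, T0, p):
    """Toughness at cumulative failure probability p, from the
    three-parameter Weibull distribution (threshold 20, shape 4)."""
    Kmed = master_curve_median(T, T0)
    K0 = 20.0 + (Kmed - 20.0) / np.log(2.0) ** 0.25
    return 20.0 + (-np.log(1.0 - p)) ** 0.25 * (K0 - 20.0)

T = np.linspace(-100.0, 50.0, 151)
T0 = -60.0                                  # assumed reference temperature (deg C)
K_med = master_curve_median(T, T0)
K_05 = master_curve_bound(T, T0, 0.05)      # 5% lower tolerance bound
K_95 = master_curve_bound(T, T0, 0.95)      # 95% upper tolerance bound
```

By construction, the median equals 100 MPa·√m at T = T0, which is exactly how T0 is defined.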

  19. Fitness of the analysis method of magnesium in drinking water using atomic absorption with quadratic calibration curve

    International Nuclear Information System (INIS)

    Perez-Lopez, Esteban

    2014-01-01

Quantitative chemical analysis is of importance in research, as well as in areas such as quality control and the sale of analytical services. Some instrumental analysis methods of quantification with a linear calibration curve have limitations, because of the short linear dynamic range of the analyte or, sometimes, of the technique itself. There was therefore a need to investigate the suitability of quadratic calibration curves for analytical quantification, with the aim of demonstrating that they are a valid calculation model for instrumental chemical analysis. The base method is the technique of atomic absorption spectroscopy, applied in particular to the determination of magnesium in a drinking-water sample from the Tacares sector, north of Grecia. A nonlinear calibration curve, specifically one with quadratic behavior, was used, and the results were compared with those obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for the determination in question, since the concentrations obtained were very similar and, according to the hypothesis tests used, can be considered equal. (author) [es
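A quadratic calibration involves two steps: fitting absorbance versus concentration with a second-degree polynomial, then inverting the quadratic to predict concentration from a measured absorbance. The calibration data below are hypothetical (a curve that flattens at higher concentration, which is the situation the abstract describes):

```python
import numpy as np

# Hypothetical Mg calibration: absorbance flattens at higher concentration,
# so a quadratic fits better than a straight line over the full range.
conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])    # mg/L (assumed values)
absb = np.array([0.002, 0.051, 0.098, 0.186, 0.262, 0.328, 0.385])

c2, c1, c0 = np.polyfit(conc, absb, 2)                   # A = c2*x^2 + c1*x + c0

def invert(A):
    """Concentration for measured absorbance A: the root of the fitted
    quadratic that lies inside the calibrated range."""
    roots = np.roots([c2, c1, c0 - A])
    real = [r.real for r in roots
            if abs(r.imag) < 1e-10 and -0.05 <= r.real <= 1.05]
    return min(real, key=lambda r: abs(r - 0.5))

x_hat = invert(0.186)   # should recover ~0.4 mg/L
```

Restricting the root search to the calibrated range resolves the ambiguity of the two quadratic roots.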

  20. Comparing Angular and Curved Shapes in Terms of Implicit Associations and Approach/Avoidance Responses.

    Directory of Open Access Journals (Sweden)

    Letizia Palumbo

Full Text Available Most people prefer smoothly curved shapes over more angular shapes. We investigated the origin of this effect using abstract shapes and implicit measures of semantic association and preference. In Experiment 1 we used a multidimensional Implicit Association Test (IAT) to verify the strength of the association of curved and angular polygons with danger (safe vs. danger words), valence (positive vs. negative words) and gender (female vs. male names). Results showed that curved polygons were associated with safe and positive concepts and with female names, whereas angular polygons were associated with danger and negative concepts and with male names. Experiment 2 used a different implicit measure, which avoided any need to categorise the stimuli. Using a revised version of the Stimulus Response Compatibility (SRC) task, we tested approach and avoidance reactions to curved and angular polygons with a stick figure (i.e., the manikin). We found that RTs for approaching vs. avoiding angular polygons did not differ, even in the condition where the angles were more pronounced. By contrast, participants were faster and more accurate when moving the manikin towards curved shapes. Experiment 2 suggests that preference for curvature cannot derive entirely from an association of angles with threat. We conclude that smoothly curved contours make these abstract shapes more pleasant. Further studies are needed to clarify the nature of such a preference.

  1. Fracture toughness evaluation of steels through master curve approach using Charpy impact specimens

    International Nuclear Information System (INIS)

    Chatterjee, S.; Sriharsha, H.K.; Shah, Priti Kotak

    2007-01-01

The master curve approach can be used to evaluate the fracture toughness of all steels which exhibit a transition from brittle to ductile fracture mode with increasing temperature, and to monitor the extent of embrittlement caused by metallurgical damage mechanisms. This paper details the procedure followed to evaluate the fracture toughness of a typical ferritic steel used as a material for pressure vessels. The potential of the master curve approach to overcome the inherent limitations of fracture toughness estimation using the ASME Code reference toughness is also illustrated. (author)

  2. Physical fitness: An operator's approach to coping with shift work

    International Nuclear Information System (INIS)

    Hanks, D.H.

    1989-01-01

    There is a strong correlation between a shift worker's ability to remain alert and the physical fitness of the individual. Alertness is a key element of a nuclear plant operator's ability to effectively monitor and control plant status. The constant changes in one's metabolism caused by the rotation of work (and sleep) hours can be devastating to his or her health. Many workers with longevity in the field, however, have found it beneficial to maintain some sort of workout or sport activity, feeling that this activity offsets the physical burden of backshift. The author's experience working shifts for 10 years and his reported increase in alertness through exercise and diet manipulation are described in this paper

  3. Curve fitting using a genetic algorithm for the X-ray fluorescence measurement of lead in bone

    International Nuclear Information System (INIS)

    Luo, L.; McMaster University, Hamilton; Chettle, D.R.; Nie, H.; McNeill, F.E.; Popovic, M.

    2006-01-01

We investigated the potential application of the genetic algorithm to the analysis of X-ray fluorescence spectra from measurements of lead in bone. Candidate solutions are first designed based on field knowledge, and the whole cycle of evaluation, selection, crossover and mutation is then repeated until a given convergence criterion is met. An average-parameters-based genetic algorithm is suggested to improve the fitting precision and accuracy. The relative standard deviations (RSD%) of the fitted amplitude, peak position and width are 1.3-7.1, 0.009-0.14 and 1.4-3.3, respectively. The genetic algorithm was shown to give a good resolution and fitting of the K lines of Pb and the elastic γ peaks. (author)
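The evaluation/selection/crossover/mutation cycle can be illustrated with a plain real-coded genetic algorithm fitting a single Gaussian photopeak. This is a generic GA sketch on synthetic data, not the authors' average-parameters variant; the peak parameters and bounds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian(x, amp, pos, wid):
    return amp * np.exp(-0.5 * ((x - pos) / wid) ** 2)

# Synthetic "photopeak": true parameters (50, 60, 5) to be recovered.
x = np.linspace(0.0, 100.0, 201)
y = gaussian(x, 50.0, 60.0, 5.0) + rng.normal(0, 0.5, x.size)

def fitness(p):
    # Negative sum of squared residuals: higher is better.
    return -np.sum((y - gaussian(x, *p)) ** 2)

# Candidate solutions designed from field knowledge: bounds on (amp, pos, wid).
lo = np.array([10.0, 40.0, 1.0])
hi = np.array([100.0, 80.0, 15.0])

pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(120):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]   # selection: keep the best third
    children = []
    while len(children) < 40:
        a, b = parents[rng.integers(0, 20, 2)]
        w = rng.random()
        child = w * a + (1 - w) * b                # arithmetic crossover
        child += rng.normal(0, 0.02, 3) * (hi - lo)   # mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])           # elitist replacement

best = pop[np.argmax([fitness(p) for p in pop])]
```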

  4. Prediction of ion-exchange column breakthrough curves by constant-pattern wave approach.

    Science.gov (United States)

    Lee, I-Hsien; Kuan, Yu-Chung; Chern, Jia-Ming

    2008-03-21

The release of heavy metals in industrial wastewaters represents one of the major threats to the environment. Compared with the chemical precipitation method, a fixed-bed ion-exchange process can effectively remove heavy metals from wastewaters and generates no hazardous sludge. In order to design and operate fixed-bed ion-exchange processes successfully, it is very important to understand the column dynamics. In this study, column experiments for the Cu2+/H+, Zn2+/H+, and Cd2+/H+ systems using Amberlite IR-120 were performed to measure the breakthrough curves under varying operating conditions. The experimental results showed that the total cation concentration in the mobile phase played a key role in the breakthrough curves; a higher feed concentration resulted in an earlier breakthrough. Furthermore, the column dynamics were also predicted by self-sharpening and constant-pattern wave models. The self-sharpening wave model, which assumes local ion-exchange equilibrium, provides a simple and quick estimate of the breakthrough volume, but its predicted breakthrough curves did not match the experimental data very well. By contrast, the constant-pattern wave model, which uses a constant driving force model for a finite ion-exchange rate, provided a better fit to the experimental data. The obtained liquid-phase mass transfer coefficient was correlated with the flow velocity and other operating parameters; the breakthrough curves under varying operating conditions could thus be predicted by the constant-pattern wave model using this correlation.
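Constant-pattern breakthrough fronts have a characteristic S-shape that, for several common rate expressions, can be written as a logistic in time, C/C0 = 1/(1 + exp(a - b·t)), and fitted by a simple linearization. The sketch below uses the logistic (Thomas-type) form as a stand-in; the abstract does not name the authors' exact rate expression, and the data are synthetic:

```python
import numpy as np

# Synthetic breakthrough data from a logistic (Thomas-type) constant-pattern
# front: C/C0 = 1 / (1 + exp(a - b*t)).
a_true, b_true = 6.0, 0.05
t = np.linspace(20.0, 220.0, 21)            # throughput time (min)
ratio = 1.0 / (1.0 + np.exp(a_true - b_true * t))

# Linearize: ln(C0/C - 1) = a - b*t, then an ordinary least-squares line fit.
mask = (ratio > 0.01) & (ratio < 0.99)      # keep points away from the flat tails
yy = np.log(1.0 / ratio[mask] - 1.0)
slope, intercept = np.polyfit(t[mask], yy, 1)
b_hat, a_hat = -slope, intercept
```

Masking the nearly flat tails avoids taking logarithms of values indistinguishable from 0 or 1.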

  6. A curve fitting approach to estimate the extent of fermentation of indigestible carbohydrates

    NARCIS (Netherlands)

    Wang, H.; Weening, D.; Jonkers, E.; Boer, T.; Stellaard, F.; Small, A. C.; Preston, T.; Vonk, R. J.; Priebe, M. G.

    2008-01-01

    Background Information about the extent of carbohydrate digestion and fermentation is critical to our ability to explore the metabolic effects of carbohydrate fermentation in vivo. We used cooked (13)C-labelled barley kernels, which are rich in indigestible carbohydrates, to develop a method which

  7. Wavelet transform approach for fitting financial time series data

    Science.gov (United States)

    Ahmed, Amel Abdoullah; Ismail, Mohd Tahir

    2015-10-01

This study investigates a newly developed technique, combining wavelet filtering and a vector error correction (VEC) model, to study the dynamic relationships among financial time series. A wavelet filter has been used to remove noise from the daily data sets of the NASDAQ stock market of the US and of three stock markets of the Middle East and North Africa (MENA) region, namely Egypt, Jordan, and Istanbul. The data cover the period from 6/29/2001 to 5/5/2009. The returns of the series generated by the wavelet filter and of the original series are then analyzed by a cointegration test and the VEC model. The results show that the cointegration test affirms the existence of cointegration between the studied series, and that there is a long-term relationship between the US stock market and the MENA stock markets. A comparison between the proposed model and the traditional model demonstrates that the proposed model (DWT with VEC) outperforms the traditional model (VEC alone) in fitting the financial stock market series, and reveals real information about the relationships among the stock markets.
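Wavelet denoising of a return series works by transforming, shrinking the detail coefficients, and inverting. The study's wavelet filter is not specified in this abstract; the minimal sketch below uses a one-level Haar transform with soft thresholding on synthetic data, implemented directly in numpy:

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar transform: approximation and detail.
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, thresh):
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    return haar_idwt(a, d)

rng = np.random.default_rng(2)
t = np.arange(512)
clean = np.sin(2 * np.pi * t / 64)          # smooth underlying signal
noisy = clean + rng.normal(0, 0.3, t.size)
smoothed = denoise(noisy, thresh=0.5)

err_noisy = np.mean((noisy - clean) ** 2)
err_smooth = np.mean((smoothed - clean) ** 2)
```

In practice a longer filter (e.g. Daubechies) and several decomposition levels would be used, but the shrink-and-invert structure is the same.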

  8. The FITS model: an improved Learning by Design approach

    NARCIS (Netherlands)

    Drs. Ing. Koen Michels; Prof. Dr. Marc de Vries; MEd Dave van Breukelen; MEd Frank Schure

    2016-01-01

    Learning by Design (LBD) is a project-based inquiry approach for interdisciplinary teaching that uses design contexts to learn skills and conceptual knowledge. Research around the year 2000 showed that LBD students achieved high skill performances but disappointing conceptual learning gains. A

  9. DEVELOPING AN EXCELLENT SEDIMENT RATING CURVE FROM ONE HYDROLOGICAL YEAR SAMPLING PROGRAMME DATA: APPROACH

    Directory of Open Access Journals (Sweden)

    Preksedis M. Ndomba

    2008-01-01

Full Text Available This paper presents preliminary findings on the adequacy of one hydrological year of sampling programme data for developing an excellent sediment rating curve. The study case is the 1DD1 subcatchment in the upstream part of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. 1DD1 is the major runoff- and sediment-contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature, the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling and monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily midway and intermittent cross-section sediment samples from a depth-integrating sampler (D-74) were used to calibrate the sub-daily automatic sediment pumping sampler (ISCO 6712) near-bank point samples for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by the findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The results suggest that an excellent rating curve can be developed from one hydrological year of sediment sampling programme data. This study has also found that an uncorrected rating curve underestimates the sediment load. The degree of underestimation depends on the type of rating curve developed and the data used.
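A sediment rating curve is usually a power law, Qs = a·Q^b, fitted by linear regression in log-log space. The underestimation mentioned above arises when the fitted curve is transformed back from log space; one standard statistical bias estimator is Duan's smearing factor (the mean of the exponentiated residuals). The sketch below uses synthetic data and Duan's estimator as an example; the paper derived its correction factors from both statistical and actual-load approaches:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic discharge Q (m^3/s) and sediment load Qs = a*Q^b with lognormal scatter.
a_true, b_true = 0.05, 1.8
Q = rng.uniform(5.0, 200.0, 120)
Qs = a_true * Q ** b_true * np.exp(rng.normal(0, 0.4, Q.size))

# Ordinary log-log regression: the uncorrected rating curve.
b_fit, log_a_fit = np.polyfit(np.log(Q), np.log(Qs), 1)
resid = np.log(Qs) - (log_a_fit + b_fit * np.log(Q))

# Duan's smearing estimator corrects the retransformation bias that makes
# uncorrected rating curves underestimate the load (cf > 1 by Jensen's inequality).
cf = np.mean(np.exp(resid))
load_uncorrected = np.sum(np.exp(log_a_fit) * Q ** b_fit)
load_corrected = cf * load_uncorrected
```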

  10. GURU v2.0: An interactive Graphical User interface to fit rheometer curves in Han’s model for rubber vulcanization

    Directory of Open Access Journals (Sweden)

    G. Milani

    2016-01-01

Full Text Available A GUI software tool (GURU) for fitting experimental rheometer curves of natural rubber (NR) vulcanized with sulphur at different curing temperatures is presented. Experimental data are automatically loaded in GURU from an Excel spreadsheet produced by the experimental machine (a moving die rheometer). To fit the experimental data, the general reaction scheme proposed by Han and co-workers for NR vulcanized with sulphur is considered. From the simplified kinetic scheme adopted, a closed-form solution can be found for the crosslink density, with the only limitation that the induction period is excluded from the computations. Three kinetic constants must be determined so as to minimize the absolute error between the normalized experimental data and the numerical prediction. Usually, this is achieved by means of standard least-squares data fitting. By contrast, GURU works interactively by means of a Graphical User Interface (GUI) and allows an interactive calibration of the kinetic constants by means of sliders. A simple mouse click on the sliders assigns a value to each kinetic constant and gives a visual comparison between the numerical and experimental curves. Users thus find optimal values of the constants by means of a classic trial-and-error strategy. An experimental case of technical relevance is shown as a benchmark.

  11. Memristance controlling approach based on modification of linear M—q curve

    International Nuclear Information System (INIS)

    Liu Hai-Jun; Li Zhi-Wei; Yu Hong-Qi; Sun Zhao-Lin; Nie Hong-Shan

    2014-01-01

The memristor has broad application prospects in many fields, many of which require accurate memristance control. The nonlinear model is of great importance for realizing memristance control accurately, but the implementation complexity caused by iteration has limited the practical application of this model. Considering the approximately linear characteristics of the middle region of the memristance-charge (M-q) curve of the nonlinear model, this paper proposes a memristance controlling approach, which is achieved by linearizing the middle region of the M-q curve of the nonlinear memristor and establishing a linear relationship between the memristance M and the input excitation, so that the memristance can be controlled precisely simply by adjusting the input signal. First, the feasibility of linearizing the middle part of the M-q curve of a memristor with a nonlinear model is analyzed from a qualitative perspective. Then, the linearization equations for the middle region of the M-q curve are constructed by using the shift method, and, for a sinusoidal excitation, the analytical relation between the memristance M and the charging time t is derived through Taylor series expansion. Finally, the performance of the proposed approach is demonstrated, including the linearizing capability for the middle part of the M-q curve of the nonlinear memristor model, the controlling ability for the memristance M, and the influence of the input excitation on linearization errors. (interdisciplinary physics and related areas of science and technology)
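The core idea, a first-order Taylor expansion of M(q) around the middle of its range, can be checked numerically. The M(q) curve below is a hypothetical smooth S-shaped stand-in (not the paper's model), used only to show that the linear surrogate stays accurate in the middle region:

```python
import numpy as np

# Hypothetical nonlinear memristance-charge curve: M(q) runs from Roff down to
# Ron along a smoothstep window (a stand-in for a nonlinear memristor model).
Ron, Roff, qd = 100.0, 16000.0, 1e-3

def M(q):
    s = np.clip(q / qd, 0.0, 1.0)
    w = s * s * (3.0 - 2.0 * s)               # smoothstep window
    return Roff - (Roff - Ron) * w

def dMdq(q, h=1e-9):
    # Central-difference derivative of M at q.
    return (M(q + h) - M(q - h)) / (2.0 * h)

# First-order Taylor linearization around the midpoint q0 = qd/2.
q0 = qd / 2.0
M_lin = lambda q: M(q0) + dMdq(q0) * (q - q0)

# Relative error of the linear surrogate across the middle region.
q_mid = np.linspace(0.3 * qd, 0.7 * qd, 101)
rel_err = np.abs(M_lin(q_mid) - M(q_mid)) / M(q_mid)
```

Within the middle 40% of the charge range the linear surrogate tracks this M(q) to within a few percent, which is what makes the simplified control law workable there.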

  12. The FITS model: an improved Learning by Design approach

    OpenAIRE

    Michels, Koen; Vries, de, Marc; Breukelen, van, Dave; Schure, Frank

    2016-01-01

    Learning by Design (LBD) is a project-based inquiry approach for interdisciplinary teaching that uses design contexts to learn skills and conceptual knowledge. Research around the year 2000 showed that LBD students achieved high skill performances but disappointing conceptual learning gains. A series of exploratory studies, previous to the study in this paper, indicated how to enhance concept learning. Small-scale tested modifications, based on explicit teaching and scaffolding, were promisin...

  13. Model-fitting approach to kinetic analysis of non-isothermal oxidation of molybdenite

    International Nuclear Information System (INIS)

    Ebrahimi Kahrizsangi, R.; Abbasi, M. H.; Saidi, A.

    2007-01-01

The kinetics of molybdenite oxidation was studied by non-isothermal TGA-DTA at a heating rate of 5 °C·min⁻¹. The model-fitting kinetic approach was applied to the TGA data, using the Coats-Redfern method of model fitting. This popular model-fitting method gives an excellent fit to the non-isothermal data in the chemically controlled regime. The apparent activation energy was determined to be about 34.2 kcal·mol⁻¹, with a pre-exponential factor of about 10⁸ s⁻¹, for extents of reaction less than 0.5.
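The Coats-Redfern method linearizes the integrated rate law: plotting ln[g(α)/T²] against 1/T gives a line of slope -E/R. A minimal sketch on synthetic data, assuming a first-order model g(α) = -ln(1-α) (the abstract does not state which reaction model was selected) and the reported activation energy:

```python
import numpy as np

R = 8.314                                   # gas constant, J/(mol*K)
E_true = 143_000.0                          # ~34.2 kcal/mol, in J/mol
A, beta = 1e8, 5.0 / 60.0                   # pre-exponential (1/s), heating rate (K/s)

# Synthetic conversion data from the Coats-Redfern approximation for a
# first-order model: g(a) = (A*R*T^2 / (beta*E)) * exp(-E/(R*T)).
T = np.linspace(600.0, 700.0, 31)           # K
g = (A * R * T ** 2 / (beta * E_true)) * np.exp(-E_true / (R * T))
alpha = 1.0 - np.exp(-g)                    # invert g(a) = -ln(1-a)

# Coats-Redfern plot: ln(g(a)/T^2) versus 1/T is a line with slope -E/R.
y = np.log(-np.log(1.0 - alpha) / T ** 2)
slope, intercept = np.polyfit(1.0 / T, y, 1)
E_fit = -slope * R                          # J/mol
E_fit_kcal = E_fit / 4184.0
```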

  14. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an X-ray fluorescence analyzer to uranium nitrate freeze-dried gravimetric standards accurate to 0.2%. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fitted along with the normal calibration curve parameters, and the fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s
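Treating the standard masses as uncertain quantities alongside the calibration parameters is an errors-in-variables fit. A modern stand-in for the VA02A-based program is orthogonal distance regression, which weights the x-errors (masses) and y-errors (system measurement) consistently. The data and the linear response below are assumptions for illustration:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(4)

# Hypothetical XRF calibration: response vs. standard mass, where the standard
# masses carry a known 0.2% uncertainty (errors in both variables).
mass_true = np.linspace(0.1, 1.0, 10)                     # mg
resp_true = 2500.0 * mass_true + 30.0                     # counts (assumed linear)
mass_meas = mass_true * (1 + rng.normal(0, 0.002, 10))    # 0.2% mass error
resp_meas = resp_true + rng.normal(0, 5.0, 10)            # counting noise

model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(mass_meas, resp_meas,
                    sx=0.002 * mass_meas,                 # mass uncertainties
                    sy=5.0 * np.ones(10))                 # system measurement error
out = odr.ODR(data, model, beta0=[2000.0, 0.0]).run()
slope, intercept = out.beta
slope_sd, intercept_sd = out.sd_beta                      # parameter uncertainties
```

As in the paper's approach, the parameter covariance reflects both error sources, not just the response noise.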

  15. RigFit: a new approach to superimposing ligand molecules.

    Science.gov (United States)

    Lemmen, C; Hiller, C; Lengauer, T

    1998-09-01

If structural knowledge of the receptor under consideration is lacking, drug design approaches focus on similarity or dissimilarity analysis of putative ligands. In this context, mutual ligand superposition is of utmost importance. Methods that are rapid enough to facilitate interactive use, that allow processing of sets of conformers, and that enable database screening are of special interest here. The ability to superpose molecular fragments instead of entire molecules has also proven helpful. The RIGFIT approach meets these requirements and has several additional advantages. In three distinct test applications, we evaluated how closely we can approximate the observed relative orientation for a set of known crystal structures, we employed RIGFIT as a fragment placement procedure, and we performed a fragment-based database screening. The run time of RIGFIT can be traded off against its accuracy. To be competitive in accuracy with another state-of-the-art alignment tool, with which we compare our method explicitly, computing times of about 6 s per superposition on a common workstation are required. If longer run times can be afforded, the accuracy increases significantly. RIGFIT is part of the flexible superposition software FLEXS, which can be accessed on the WWW [http://cartan.gmd.de/FlexS].

  16. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.

  17. New method of safety assessment for pressure vessel of nuclear power plant--brief introduction of master curve approach

    International Nuclear Information System (INIS)

    Yang Wendou

    2011-01-01

The new Master Curve method has been called a revolutionary advance in the assessment of reactor pressure vessel integrity in the USA. This paper explains the origin, basis and standardization of the Master Curve, starting from the reactor pressure-temperature limit curve which assures the safety of a nuclear power plant. Reflecting the characteristics of brittle fracture, which is highly sensitive to microstructure, the theory and test method of the Master Curve, as well as its statistical law, which can be modeled using a Weibull distribution, are described in this paper. The meaning, advantages, application and importance of the Master Curve, as well as the relation between the Master Curve and nuclear power safety, are explained through the formula obtained by fitting the fracture toughness database with the Weibull distribution model. (author)

  18. Robust Spatial Approximation of Laser Scanner Point Clouds by Means of Free-form Curve Approaches in Deformation Analysis

    Science.gov (United States)

    Bureick, Johannes; Alkhatib, Hamza; Neumann, Ingo

    2016-03-01

In many geodetic engineering applications it is necessary to describe a point cloud, measured e.g. by laser scanner, by means of free-form curves or surfaces, e.g. with B-splines as basis functions. State-of-the-art approaches to determining B-splines yield results that are seriously affected by data gaps and outliers. Optimal and robust B-spline fitting depends, however, on optimal selection of the knot vector. Hence, our approach combines Monte Carlo methods with the location and curvature of the measured data in order to determine the knot vector of the B-spline in such a way that no oscillating effects occur at the edges of data gaps. We introduce an optimized approach based on weights computed by means of resampling techniques. In order to minimize the effect of outliers, we apply robust M-estimators for the estimation of the control points. The approach is applied to a multi-sensor system based on kinematic terrestrial laser scanning in the field of rail track inspection.

  19. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
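The Pareto-optimality notion used above is easy to state in code: an input set is on the frontier if no other set fits every calibration target at least as well and at least one target strictly better. A minimal sketch with hypothetical per-target error values (lower is better):

```python
import numpy as np

def pareto_frontier(errors):
    """Rows = candidate input sets, columns = calibration-target errors
    (lower is better). Returns a boolean mask of non-dominated rows."""
    errors = np.asarray(errors, float)
    n = errors.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if not mask[i]:
            continue  # already dominated; domination is transitive
        # Row j dominates row i if j is <= on every target and < on at least one.
        dominators = (np.all(errors <= errors[i], axis=1)
                      & np.any(errors < errors[i], axis=1))
        if np.any(dominators):
            mask[i] = False
    return mask

# Toy example: fit errors against two calibration targets.
errs = np.array([[1.0, 5.0],
                 [2.0, 2.0],
                 [5.0, 1.0],
                 [3.0, 3.0],    # dominated by (2, 2)
                 [1.0, 5.0]])   # tie with the first row: neither dominates
front = pareto_frontier(errs)
```

Note that no weights appear anywhere, which is exactly the point of the approach: the frontier is invariant to how one might have weighted the targets.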

  20. Four points function fitted and first derivative procedure for determining the end points in potentiometric titration curves: statistical analysis and method comparison.

    Science.gov (United States)

    Kholeif, S A

    2001-06-01

A new method belonging to the differential category for determining the end points of potentiometric titration curves is presented. It uses a preprocess to find first-derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locates the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method, using linear least-squares method validation and multifactor data analysis, is presented. The new method applies generally to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method at almost all factor levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves are also compared between the new method and equivalence-point methods such as those of Gran or Fortuin.
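The inverse parabolic interpolation step has the closed-form solution mentioned in the abstract: the extremum of the parabola through three (volume, derivative) points. A small illustrative sketch of that step alone, not the author's full four-point fitting procedure:

```python
def parabola_vertex(x, y):
    """Analytic extremum of the parabola through three points.

    Here the points are (titrant volume, first-derivative value)
    samples bracketing the inflection, so the vertex locates the
    end point of the titration curve.
    """
    (x1, x2, x3), (y1, y2, y3) = x, y
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 ** 2 * (y1 - y2) + x2 ** 2 * (y3 - y1) + x1 ** 2 * (y2 - y3)) / denom
    return -b / (2.0 * a)

# Symmetric derivative values around a peak at V = 10.0 mL
print(parabola_vertex((9.8, 10.0, 10.2), (4.0, 5.0, 4.0)))  # ≈ 10.0
```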

  1. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    Science.gov (United States)

    Nair, S. P.; Righetti, R.

    2015-05-01

Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationship for tissues undergoing creep compression is non-linear, and in non-linear cases devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method, which we call Resimulation of Noise (RoN), to provide a reliability measure for non-linear LSE parameter estimates. RoN estimates the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
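The core RoN idea as described can be sketched as: fit once, estimate the noise level from the residuals, add fresh simulated noise to the fitted curve, refit many times, and report the spread of the refitted parameter. This is an illustrative reconstruction with a generic exponential decay model, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    """Simple stress-relaxation style decay model (illustrative)."""
    return a * np.exp(-t / tau)

def ron_precision(t, y, n_resim=200, rng=None):
    """Resimulation-of-Noise style spread estimate for the time
    constant tau from a single noisy realization."""
    rng = np.random.default_rng(rng)
    popt, _ = curve_fit(model, t, y, p0=(y.max(), t.mean()))
    sigma = (y - model(t, *popt)).std()   # noise level from residuals
    taus = []
    for _ in range(n_resim):
        y_sim = model(t, *popt) + rng.normal(0.0, sigma, size=t.size)
        p_sim, _ = curve_fit(model, t, y_sim, p0=popt)
        taus.append(p_sim[1])
    return popt[1], np.std(taus)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 60)
y = model(t, 1.0, 1.5) + rng.normal(0.0, 0.02, t.size)
tau_hat, tau_spread = ron_precision(t, y, rng=1)
print(tau_hat, tau_spread)
```

The spread of `taus` plays the role of the reliability measure; the paper's point is that this spread tracks the true precision of the LSE estimator, unlike the correlation coefficient.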

  2. A Bayesian inference approach to unveil supply curves in electricity markets

    DEFF Research Database (Denmark)

    Mitridati, Lesia Marie-Jeanne Mariane; Pinson, Pierre

    2017-01-01

With increased competition in wholesale electricity markets, the need for new decision-making tools for strategic producers has arisen. Optimal bidding strategies have traditionally been modeled as stochastic profit maximization problems. However, for producers with non-negligible market power ... in the literature on modeling this uncertainty. In this study we introduce a Bayesian inference approach to reveal the aggregate supply curve in a day-ahead electricity market. The proposed algorithm relies on Markov Chain Monte Carlo and Sequential Monte Carlo methods. The major appeal of this approach ...

  3. Application of Bimodal Master Curve Approach on KSNP RPV steel SA508 Gr. 3

    International Nuclear Information System (INIS)

    Kim, Jongmin; Kim, Minchul; Choi, Kwonjae; Lee, Bongsang

    2014-01-01

In this paper, the standard MC approach and BMC are applied to the forging material of the KSNP RPV steel SA508 Gr. 3. A series of fracture toughness tests were conducted in the DBTT transition region, with specimens extracted from four regions: the surface, 1/8T, 1/4T and 1/2T. Deterministic material inhomogeneity was reviewed through the conventional MC approach and random inhomogeneity was evaluated by BMC. The four regions were chosen for the fracture toughness specimens of KSNP (Korean Standard Nuclear Plant) SA508 Gr. 3 steel to provide deterministic material inhomogeneity and to review the applicability of BMC. T0 determined by the conventional MC has a low value at the surface owing to the higher quenching rate there, as expected. However, more than about 15% of the KJC values lay above the 95% probability curves indexed with the standard MC T0 at the surface and 1/8T, which implies the existence of inhomogeneity in the material. To review the applicability of the BMC method, the deterministic inhomogeneity owing to the extraction location and quenching rate was treated as random inhomogeneity. Although the lower-bound and upper-bound curves of the BMC covered more KJC values than those of the conventional MC, there is no significant relationship between the BMC analysis lines and measured KJC values in the higher toughness distribution, and BMC and MC provide almost the same T0 values. Therefore, the standard MC evaluation method is appropriate for this material, even though it has a narrow upper/lower bound curve range from the RPV evaluation point of view. In reality the material is not homogeneous: such inhomogeneity reflects the specimen location, heat treatment, and the whole manufacturing process. The conventional master curve is limited when applied to largely scattered fracture toughness data, such as that from the weld region.

  4. Analyzing price and efficiency dynamics of large appliances with the experience curve approach

    International Nuclear Information System (INIS)

    Weiss, Martin; Patel, Martin K.; Junginger, Martin; Blok, Kornelis

    2010-01-01

Large appliances are major power consumers in households of industrialized countries. Although their energy efficiency has increased substantially in past decades, additional energy efficiency potential remains. Energy policy that aims at realizing this potential faces, however, growing concerns about possible adverse effects on commodity prices. Here, we address these concerns by applying the experience curve approach to analyze long-term price and energy efficiency trends of three wet appliances (washing machines, laundry dryers, and dishwashers) and two cold appliances (refrigerators and freezers). We identify a robust long-term decline in both the specific price and the specific energy consumption of large appliances. Specific prices of wet appliances decline at learning rates (LR) of 29±8%, much faster than those of cold appliances (LR of 9±4%). Our results demonstrate that technological learning leads to substantial price decline, indicating that the introduction of novel and initially expensive energy efficiency technologies does not necessarily imply adverse price effects in the long term. By extending the conventional experience curve approach, we find a steady decline in the specific energy consumption of wet appliances (LR of 20-35%) and cold appliances (LR of 13-17%). Our analysis suggests that energy policy might be able to bend energy experience curves downward. (author)
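A learning rate of the kind quoted here is conventionally obtained by fitting the experience curve P = P0 · X^b on log-log axes, where X is cumulative production; the progress ratio is PR = 2^b and the learning rate is LR = 1 − PR. A small sketch with synthetic data:

```python
import numpy as np

def learning_rate(cum_production, unit_price):
    """Fit P = P0 * X**b on log-log axes and return
    (progress_ratio, learning_rate)."""
    b, _ = np.polyfit(np.log2(cum_production), np.log2(unit_price), 1)
    pr = 2.0 ** b
    return pr, 1.0 - pr

# Synthetic series: price falls 20% with each doubling (PR = 0.8)
x = np.array([1, 2, 4, 8, 16], dtype=float)
p = 100.0 * 0.8 ** np.log2(x)
pr, lr = learning_rate(x, p)
print(round(pr, 3), round(lr, 3))  # → 0.8 0.2
```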

  5. Cubic Bezier Curve Approach for Automated Offline Signature Verification with Intrusion Identification

    Directory of Open Access Journals (Sweden)

    Arun Vijayaragavan

    2014-01-01

Full Text Available Authentication is the process of identifying a person's rights over a system. Many authentication types are used in various systems, among which biometric authentication systems are of special concern. Signature verification is a basic, widely used biometric authentication technique. Signature matching algorithms based on image correlation and graph matching can produce false rejections or acceptances. We propose a model that compares knowledge extracted from the signature. Intrusion into the signature repository system yields a copy of the signature, which leads to false acceptance. Our approach uses a Bezier curve algorithm to identify the curve points and uses the behaviors of the signature for verification. An analyzing mobile agent identifies the input signature parameters and compares them with the reference signature repository. It identifies duplication of a signature due to intrusion and rejects it. Experiments are conducted on a database with thousands of signature images from various sources, and the results are favorable.
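For reference, a point on a cubic Bezier curve at parameter t follows the Bernstein form B(t) = (1−t)³P0 + 3(1−t)²tP1 + 3(1−t)t²P2 + t³P3. A minimal evaluation sketch (the control points are illustrative, not tied to any real signature stroke):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1],
    for control points given as coordinate tuples."""
    u = 1.0 - t
    return tuple(
        u ** 3 * a + 3 * u ** 2 * t * b + 3 * u * t ** 2 * c + t ** 3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Sample an illustrative stroke at its midpoint
print(cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5))  # → (2.0, 1.5)
```

Sampling such curve points along each stroke gives the geometric features that a verifier can compare against a reference signature.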

  6. Greenhouse gas abatement cost curves of the residential heating market. A microeconomic approach

    International Nuclear Information System (INIS)

    Dieckhoener, Caroline; Hecking, Harald

    2012-01-01

    In this paper, we develop a microeconomic approach to deduce greenhouse gas abatement cost curves of the residential heating sector. By accounting for household behavior, we find that welfare-based abatement costs are generally higher than pure technical equipment costs. Our results are based on a microsimulation of private households' investment decision for heating systems until 2030. The households' investment behavior in the simulation is derived from a discrete choice estimation which allows investigating the welfare costs of different abatement policies in terms of the compensating variation and the excess burden. We simulate greenhouse gas abatements and welfare costs of carbon taxes and subsidies on heating system investments until 2030 to deduce abatement curves. Given utility maximizing households, our results suggest a carbon tax to be the welfare efficient policy. Assuming behavioral misperceptions instead, a subsidy on investments might have lower marginal greenhouse gas abatement costs than a carbon tax.

  7. Progress in evaluation of human observer visual detection performance using the ROC curve approach

    International Nuclear Information System (INIS)

    Metz, C.E.; Starr, S.J.; Lusted, L.B.; Rossmann, K.

    1976-01-01

The ROC approach to the analysis of human observer detection performance was studied as playing a key role in elucidating the relationships among the physical parameters of an imaging operation, the ability of a human observer to use the image to make decisions regarding the state of health or disease in a medical diagnostic situation, and the medical and social utility of those decisions. The conventional ROC curve describing observer performance in simple detection tasks can be used to predict observer performance in complex detection tasks; it thus provides a description of observer detection performance that is useful in situations more clinically relevant than those for which it is measured. Similar predictions regarding observer performance in identification and recognition tasks are currently being sought. The ROC curve can be used to relate signal detectability to various measures of the diagnostic and social benefit derived from a medical imaging procedure. These relationships provide a means for assessing the relative desirability of alternative diagnostic techniques and can be used to evaluate combinations of diagnostic studies.
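An empirical ROC curve of the kind analyzed here is traced by sweeping a decision threshold over the observer's confidence ratings. A minimal sketch with made-up ratings (higher meaning "signal present"):

```python
import numpy as np

def roc_points(scores, labels):
    """Empirical ROC curve: (false positive rate, true positive rate)
    pairs as the decision threshold sweeps across the scores."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pts = []
    for thr in np.r_[np.inf, np.sort(scores)[::-1]]:
        called = scores >= thr                       # cases called "positive"
        tpr = (called & labels).sum() / labels.sum()
        fpr = (called & ~labels).sum() / (~labels).sum()
        pts.append((fpr, tpr))
    return pts

# Observer confidence ratings and true signal labels (illustrative)
pts = roc_points([0.9, 0.8, 0.7, 0.4, 0.3], [1, 1, 0, 1, 0])
print(pts)  # runs from (0, 0) up to (1, 1)
```

The area under this curve is the usual scalar summary of detectability.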

  8. Highly curved image sensors: a practical approach for improved optical performance.

    Science.gov (United States)

    Guenter, Brian; Joshi, Neel; Stoakley, Richard; Keefe, Andrew; Geary, Kevin; Freeman, Ryan; Hundley, Jake; Patterson, Pamela; Hammon, David; Herrera, Guillermo; Sherman, Elena; Nowak, Andrew; Schubert, Randall; Brewer, Peter; Yang, Louis; Mott, Russell; McKnight, Geoff

    2017-06-12

    The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation, and create a high-stress, stretch-dominated state. Our process creates a bridge between the high precision and low-cost but planar CMOS process, and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3" format image sensor in this report, we also show this process is generally compatible with many state of the art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30° subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.

  9. A Bayesian Approach to Person Fit Analysis in Item Response Theory Models. Research Report.

    Science.gov (United States)

    Glas, Cees A. W.; Meijer, Rob R.

    A Bayesian approach to the evaluation of person fit in item response theory (IRT) models is presented. In a posterior predictive check, the observed value on a discrepancy variable is positioned in its posterior distribution. In a Bayesian framework, a Markov Chain Monte Carlo procedure can be used to generate samples of the posterior distribution…

  10. Work-to-Family Conflict, Positive Spillover, and Boundary Management: A Person-Environment Fit Approach

    Science.gov (United States)

    Chen, Zheng; Powell, Gary N.; Greenhaus, Jeffrey H.

    2009-01-01

    This study adopted a person-environment fit approach to examine whether greater congruence between employees' preferences for segmenting their work domain from their family domain (i.e., keeping work matters at work) and what their employers' work environment allowed would be associated with lower work-to-family conflict and higher work-to-family…

  11. Identifying multiple outliers in linear regression: robust fit and clustering approach

    International Nuclear Information System (INIS)

    Robiah Adnan; Mohd Nor Mohamad; Halim Setan

    2001-01-01

This research provides a clustering-based approach for determining potential candidates for outliers. It is a modification of the method proposed by Serbert et al. (1988), based on using the single linkage clustering algorithm to group the standardized predicted and residual values of a data set fit by least trimmed squares (LTS). (Author)
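The clustering step can be sketched as follows; for brevity, ordinary least squares stands in for the LTS fit that the method actually uses in its first stage:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Sketch: group points by their standardized predicted and residual
# values; the small cluster far from the bulk flags outlier candidates.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 30)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)
y[-1] += 8.0                        # plant one gross outlier

slope, intercept = np.polyfit(x, y, 1)   # OLS in place of LTS
pred = slope * x + intercept
resid = y - pred
feats = np.column_stack([
    (pred - pred.mean()) / pred.std(),
    (resid - resid.mean()) / resid.std(),
])
groups = fcluster(linkage(feats, method="single"), t=2, criterion="maxclust")
sizes = np.bincount(groups)[1:]          # cluster sizes (labels start at 1)
outliers = np.flatnonzero(groups == (np.argmin(sizes) + 1))
print(outliers)
```

Single linkage is chosen because outliers form a sparse group well separated from the main chain of inliers, so cutting the dendrogram into two clusters isolates them.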

  12. STRUCTURAL APPROACH TO THE MATHEMATICAL DESCRIPTION AND COMPUTER VISUALIZATION OF PLANE KINEMATIC CURVES FOR THE DISPLAY OF GEARS

    Directory of Open Access Journals (Sweden)

    Tatyana TRETYAK

    2018-05-01

Full Text Available The structural approach presented in this paper makes it possible to simulate various plane kinematic curves without requiring their explicit analytic equations. A generalized, unified mapping system for rack gearing is used. Examples of plane kinematic curves obtained by the structural method on a computer are presented.

  13. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    Science.gov (United States)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures was analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.

  14. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    Science.gov (United States)

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  15. Beyond the SCS curve number: A new stochastic spatial runoff approach

    Science.gov (United States)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
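For reference, the standard SCS-CN relation being generalized here is Q = (P − Ia)² / (P − Ia + S), with potential retention S = 1000/CN − 10 (inches) and initial abstraction conventionally Ia = 0.2S. A direct sketch:

```python
def scs_runoff(P, CN, ia_ratio=0.2):
    """Classic SCS-CN storm runoff depth (inches) for rainfall P
    (inches) and curve number CN; runoff is zero until rainfall
    exceeds the initial abstraction Ia."""
    S = 1000.0 / CN - 10.0       # potential maximum retention
    Ia = ia_ratio * S            # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

# 4 inches of rain on a CN = 80 watershed
print(round(scs_runoff(4.0, 80), 2))  # → 2.04
```

The threshold at P = Ia is exactly the "pure threshold" behavior the abstract generalizes with partial-saturation fill-and-spill terms.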

  16. Calculation approaches for grid usage fees to influence the load curve in the distribution grid level

    International Nuclear Information System (INIS)

    Illing, Bjoern

    2014-01-01

    Dominated by the energy policy the decentralized German energy market is changing. One mature target of the government is to increase the contribution of renewable generation to the gross electricity consumption. In order to achieve this target disadvantages like an increased need for capacity management occurs. Load reduction and variable grid fees offer the grid operator solutions to realize capacity management by influencing the load profile. The evolution of the current grid fees towards more causality is required to adapt these approaches. Two calculation approaches are developed in this assignment. On the one hand multivariable grid fees keeping the current components demand and energy charge. Additional to the grid costs grid load dependent parameters like the amount of decentralized feed-ins, time and local circumstances as well as grid capacities are considered. On the other hand the grid fee flat-rate which represents a demand based model on a monthly level. Both approaches are designed to meet the criteria for future grid fees. By means of a case study the effects of the grid fees on the load profile at the low voltage grid is simulated. Thereby the consumption is represented by different behaviour models and the results are scaled at the benchmark grid area. The resulting load curve is analyzed concerning the effects of peak load reduction as well as the integration of renewable energy sources. Additionally the combined effect of grid fees and electricity tariffs is evaluated. Finally the work discusses the launching of grid fees in the tense atmosphere of politics, legislation and grid operation. Results of this work are two calculation approaches designed for grid operators to define the grid fees. Multivariable grid fees are based on the current calculation scheme. Hereby demand and energy charges are weighted by time, locational and load related dependencies. The grid fee flat-rate defines a limitation in demand extraction. 
Different demand levels

  17. The mapping approach in the path integral formalism applied to curve-crossing systems

    International Nuclear Information System (INIS)

    Novikov, Alexey; Kleinekathoefer, Ulrich; Schreiber, Michael

    2004-01-01

    The path integral formalism in a combined phase-space and coherent-state representation is applied to the problem of curve-crossing dynamics. The system of interest is described by two coupled one-dimensional harmonic potential energy surfaces interacting with a heat bath consisting of harmonic oscillators. The mapping approach is used to rewrite the Lagrangian function of the electronic part of the system. Using the Feynman-Vernon influence-functional method the bath is eliminated whereas the non-Gaussian part of the path integral is treated using the generating functional for the electronic trajectories. The dynamics of a Gaussian wave packet is analyzed along a one-dimensional reaction coordinate within a perturbative treatment for a small coordinate shift between the potential energy surfaces

  18. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    Science.gov (United States)

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors and then denoising them, we define a suitable TV type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.

  19. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    Science.gov (United States)

    2014-01-01

Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our method showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
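The fitting core of such a spot quantifier can be sketched with a single two-dimensional Gaussian on a synthetic patch (illustrative parameters; this is not the paper's compound-fitting algorithm, which handles overlapping spots jointly):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """Two-dimensional Gaussian bell used to model a single spot."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2.0 * sx ** 2)
                       + (y - y0) ** 2 / (2.0 * sy ** 2))) + offset
    return g.ravel()

# Synthetic spot on a 21x21 patch
x, y = np.meshgrid(np.arange(21), np.arange(21))
true = (50.0, 10.0, 9.0, 2.0, 3.0, 5.0)
img = gauss2d((x, y), *true).reshape(21, 21)

p0 = (img.max() - img.min(), 10, 10, 2, 2, img.min())
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
volume = 2 * np.pi * popt[0] * popt[3] * popt[4]  # integral of the fitted bell
print(np.round(popt, 2))
```

The analytic integral used for `volume` is what makes function-fitting quantification less sensitive to the area boundary than pixel-sum methods.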

  20. Longitudinal associations between exercise identity and exercise motivation: A multilevel growth curve model approach.

    Science.gov (United States)

    Ntoumanis, N; Stenling, A; Thøgersen-Ntoumani, C; Vlachopoulos, S; Lindwall, M; Gucciardi, D F; Tsakonitis, C

    2018-02-01

Past work linking exercise identity and exercise motivation has been cross-sectional. This is the first study to model the relations between different types of exercise identity and exercise motivation longitudinally. Understanding the dynamic associations between these sets of variables has implications for theory development and applied research. This was a longitudinal survey study. Participants were 180 exercisers (79 men, 101 women) from Greece, who were recruited from fitness centers and were asked to complete questionnaires assessing exercise identity (exercise beliefs and role-identity) and exercise motivation (intrinsic, identified, introjected, external motivation, and amotivation) three times within a 6 month period. Multilevel growth curve modeling examined the role of motivational regulations as within- and between-level predictors of exercise identity, and a model in which exercise identity predicted exercise motivation at the within- and between-person levels. Results showed that within-person changes in intrinsic motivation, introjected, and identified regulations were positively and reciprocally related to within-person changes in exercise beliefs; intrinsic motivation was also a positive predictor of within-person changes in role-identity but not vice versa. Between-person differences in the means of predictor variables were predictive of initial levels and average rates of change in the outcome variables. The findings lend support to the proposition that a strong exercise identity (particularly exercise beliefs) can foster motivation for behaviors that reinforce this identity. We also demonstrate that such relations can be reciprocal over time and can depend on the type of motivation in question as well as between-person differences in absolute levels of these variables. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. A Simplified Micromechanical Modeling Approach to Predict the Tensile Flow Curve Behavior of Dual-Phase Steels

    Science.gov (United States)

    Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal

    2017-11-01

Micromechanical modeling is used to predict a material's tensile flow curve behavior based on microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the approach developed in this work attempts to overcome specific limitations of both by combining a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model, and the results matched closely with those of actual tensile tests. Thus, this micromechanical modeling approach can be used to predict and optimize the tensile flow behavior of dual-phase steels.
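The second step, the rule of mixtures, weights each phase's stress at a common strain by its volume fraction: σ_DP = f_m·σ_m + (1 − f_m)·σ_f. A sketch with illustrative Ludwik-type phase curves standing in for the dislocation-based phase model (all parameter values are invented, not from the paper):

```python
import numpy as np

def phase_flow(eps, sigma0, K, n):
    """Ludwik-type flow curve for a single phase, in MPa."""
    return sigma0 + K * eps ** n

def dual_phase_flow(eps, f_martensite):
    """Rule-of-mixtures composite: weight each phase's stress at the
    same strain by its volume fraction (illustrative parameters)."""
    ferrite = phase_flow(eps, 300.0, 600.0, 0.35)      # soft, strongly hardening
    martensite = phase_flow(eps, 1000.0, 900.0, 0.10)  # hard, weakly hardening
    return f_martensite * martensite + (1.0 - f_martensite) * ferrite

eps = np.linspace(0.002, 0.10, 50)
curve = dual_phase_flow(eps, 0.3)   # 30% martensite fraction
print(round(float(curve[-1]), 1))
```

Raising `f_martensite` shifts the whole composite curve upward, which is the qualitative effect of martensite fraction the paper quantifies.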

  2. Perspectives for development friendly financial markets - No one-size fits all approach!

    DEFF Research Database (Denmark)

    Schmidt, Johannes Dragsbæk

The paper argues against the usual "one size fits all" approach of the IFIs, under which all economies must follow the same financial policy. A contextual and historical approach is necessary in order to give more consideration to differing local political, economic and cultural circumstances. It is furthermore noted that the current deep crisis of the IFIs is associated with both a lack of legitimacy and a loss of liquidity. It follows that the Bretton Woods institutions must either be reformed or abolished...

  3. Understanding the reductions in US corn ethanol production costs: An experience curve approach

    International Nuclear Information System (INIS)

    Hettinga, W.G.; Junginger, H.M.; Dekker, S.C.; Hoogwijk, M.; McAloon, A.J.; Hicks, K.B.

    2009-01-01

    The US is currently the world's largest ethanol producer. An increasing percentage is used as transportation fuel, but debates continue on its cost competitiveness and energy balance. In this study, the technological development of ethanol production and the resulting cost reductions are investigated using the experience curve approach, scrutinizing the costs of dry grind ethanol production over the timeframe 1980-2005. Cost reductions are differentiated between feedstock (corn) production and industrial (ethanol) processing. Corn production costs in the US have declined by 62% over 30 years, down to 100 US$(2005)/tonne in 2005, while corn production volumes almost doubled since 1975. A progress ratio (PR) of 0.55 is calculated, indicating a 45% cost decline with each doubling of cumulative production. Higher corn yields and increasing farm sizes are the most important drivers behind this cost decline. Industrial processing costs of ethanol have declined by 45% since 1983, to below 130 US$(2005)/m³ in 2005 (excluding costs for corn and capital), equivalent to a PR of 0.87. Total ethanol production costs (including capital and net corn costs) have declined by approximately 60%, from 800 US$(2005)/m³ in the early 1980s to 300 US$(2005)/m³ in 2005. Higher ethanol yields, lower energy use, and the replacement of beverage alcohol-based production technologies have contributed most to this substantial cost decline. In addition, the average size of dry grind ethanol plants has increased by 235% since 1990. For the future it is estimated that, owing solely to technological learning, production costs of ethanol may decline by a further 28-44%, though this excludes the effects of the currently rising corn and fossil fuel costs. It is also concluded that experience curves are a valuable tool to describe both past and potential future cost reductions in US corn-based ethanol production.
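
    The experience-curve arithmetic behind the progress ratio can be reproduced in a few lines. This sketch assumes the standard single-factor learning curve, cost = C0·x^b with PR = 2^b; the numbers in the example are illustrative, not the paper's data.

```python
import math

def progress_ratio(cost_start, cost_end, cum_start, cum_end):
    """Experience curve cost = C0 * x**b, with x the cumulative production.
    PR = 2**b is the cost multiplier for each doubling of cumulative output."""
    b = math.log(cost_end / cost_start) / math.log(cum_end / cum_start)
    return 2.0 ** b

# Illustrative numbers: cost falls from 100 to 55 while cumulative
# production doubles, i.e. PR = 0.55 (a 45% decline per doubling).
pr = progress_ratio(100.0, 55.0, 1.0, 2.0)
```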

  4. Method and Excel VBA Algorithm for Modeling Master Recession Curve Using Trigonometry Approach.

    Science.gov (United States)

    Posavec, Kristijan; Giacopetti, Marco; Materazzi, Marco; Birk, Steffen

    2017-11-01

    A new method was developed and implemented into an Excel Visual Basic for Applications (VBAs) algorithm utilizing trigonometry laws in an innovative way to overlap recession segments of time series and create master recession curves (MRCs). Based on a trigonometry approach, the algorithm horizontally translates succeeding recession segments of time series, placing their vertex, that is, the highest recorded value of each recession segment, directly onto the appropriate connection line defined by measurement points of a preceding recession segment. The new method and algorithm continues the development of methods and algorithms for the generation of MRC, where the first published method was based on a multiple linear/nonlinear regression model approach (Posavec et al. 2006). The newly developed trigonometry-based method was tested on real case study examples and compared with the previously published multiple linear/nonlinear regression model-based method. The results show that in some cases, that is, for some time series, the trigonometry-based method creates narrower overlaps of the recession segments, resulting in higher coefficients of determination R², while in other cases the multiple linear/nonlinear regression model-based method remains superior. The Excel VBA algorithm for modeling MRC using the trigonometry approach is implemented into a spreadsheet tool (MRCTools v3.0 written by and available from Kristijan Posavec, Zagreb, Croatia) containing the previously published VBA algorithms for MRC generation and separation. All algorithms within the MRCTools v3.0 are open access and available free of charge, supporting the idea of running science on available, open, and free of charge software. © 2017, National Ground Water Association.
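
    The core geometric idea (translating each recession segment horizontally until its vertex lies on the curve traced by the preceding segments) can be sketched as below. This is a simplified illustration of the general approach, not the published MRCTools algorithm; the interpolation scheme and data layout are assumptions.

```python
import numpy as np

def build_mrc(segments):
    """Horizontally translate each succeeding recession segment so that its
    vertex (highest value) lands on the curve defined by the segments placed
    before it. Assumes each segment's levels decrease monotonically and each
    vertex is lower than the preceding segment's vertex."""
    t0, h0 = segments[0]
    placed_t = list(t0 - t0[0])     # master curve starts at t = 0
    placed_h = list(h0)
    for t, h in segments[1:]:
        # find the time at which the master curve reaches the new vertex level
        # (np.interp needs increasing xp, so reverse the decreasing levels)
        t_on_master = np.interp(h[0], placed_h[::-1], placed_t[::-1])
        shift = t_on_master - t[0]
        placed_t.extend(t + shift)
        placed_h.extend(h)
    return np.array(placed_t), np.array(placed_h)

seg1 = (np.array([0.0, 1.0, 2.0, 3.0]), np.array([10.0, 8.0, 6.0, 4.0]))
seg2 = (np.array([0.0, 1.0, 2.0]), np.array([7.0, 5.0, 3.0]))
mrc_t, mrc_h = build_mrc([seg1, seg2])   # seg2's vertex (7.0) lands at t = 1.5
```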

  5. New approach in the evaluation of a fitness program at a worksite.

    Science.gov (United States)

    Shirasaya, K; Miyakawa, M; Yoshida, K; Tanaka, C; Shimada, N; Kondo, T

    1999-03-01

    The most common methods for the economic evaluation of a fitness program at a worksite are cost-effectiveness, cost-benefit, and cost-utility analyses. In this study, we applied a basic microeconomic theory, the "neoclassical firm's problem," as a new approach. The optimal number of physical-exercise classes, which constitute the core of the fitness program, is determined using a cubic health production function. The optimal number is defined as the number that maximizes the profit of the program. The optimal number corresponding to any willingness-to-pay amount of the participants for the effectiveness of the program is presented using a graph. For example, if the willingness-to-pay is $800, the optimal number of classes is 23. Our method can be applied to the evaluation of any health care program if the health production function can be estimated.
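
    The "neoclassical firm's problem" here reduces to maximizing profit = willingness-to-pay × H(n) − cost × n over the number of classes n. A minimal sketch, with a made-up cubic health production function and made-up prices (the paper's estimated coefficients are not given in the abstract):

```python
import numpy as np

def optimal_classes(wtp, cost_per_class, coeffs, n_max=60):
    """Return the number of classes n maximizing profit = wtp*H(n) - cost*n,
    where H(n) is a cubic health production function given by `coeffs`
    (highest power first, as for numpy.polyval)."""
    n = np.arange(0, n_max + 1)
    health = np.polyval(coeffs, n)              # H(n) = a*n^3 + b*n^2 + c*n + d
    profit = wtp * health - cost_per_class * n
    return int(n[np.argmax(profit)])

# Illustrative coefficients with diminishing returns; not the paper's estimates.
n_star = optimal_classes(wtp=1.0, cost_per_class=0.5,
                         coeffs=[-0.001, 0.05, 0.1, 0.0])
```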

  6. An Empirical Fitting Method to Type Ia Supernova Light Curves. III. A Three-parameter Relationship: Peak Magnitude, Rise Time, and Photospheric Velocity

    Science.gov (United States)

    Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.

    2018-05-01

    We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (0.005 […]). SNe Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s⁻¹ of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.

  7. Testing the Environmental Kuznets Curve Hypothesis for Biodiversity Risk in the US: A Spatial Econometric Approach

    Directory of Open Access Journals (Sweden)

    Robert P. Berrens

    2011-11-01

    This study investigates whether the environmental Kuznets curve (EKC) relationship is supported for a measure of biodiversity risk and economic development across the United States (US). Using state-level data for all 48 contiguous states, biodiversity risk is measured using a Modified Index (MODEX). This index is an adaptation of a comprehensive National Biodiversity Risk Assessment Index. The MODEX differs from other measures in that it takes into account the impact of human activities and conservation measures. The econometric approach includes corrections for spatial autocorrelation effects, which are present in the data. Model estimation results do not support the EKC hypothesis for biodiversity risk in the US. This finding is robust over ordinary least squares, spatial error, and spatial lag models, where the latter is shown to be the preferred model. Results from the spatial lag regression show that a 1% increase in human population density is associated with about a 0.19% increase in biodiversity risk. Spatial dependence in this case study explains 30% of the variation, as risk in one state spills over into adjoining states. From a policy perspective, this latter result supports the need for coordinated efforts at state and federal levels to address the problem of biodiversity loss.

  8. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    Science.gov (United States)

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital-number histograms of image objects and evaluate classification measures exploiting the characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California, was used to assess the utility of curve-matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
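
    A histogram-matching classifier of the kind evaluated here can be illustrated compactly: build a normalized digital-number histogram for each image object and assign the class whose reference histogram is nearest. The L1 distance, bin count, and all sample values below are assumptions for demonstration, not the paper's configuration.

```python
import numpy as np

def histogram_match_class(obj_pixels, class_refs, bins=16, value_range=(0, 255)):
    """Assign an image object to the class whose reference histogram is closest
    (L1 distance between normalized histograms), a curve-matching alternative
    to the nearest-neighbor-to-mean rule."""
    h, _ = np.histogram(obj_pixels, bins=bins, range=value_range)
    h = h / h.sum()
    best, best_d = None, np.inf
    for name, ref in class_refs.items():
        d = np.abs(h - ref).sum()
        if d < best_d:
            best, best_d = name, d
    return best

# Synthetic reference classes (made-up digital-number distributions).
rng = np.random.default_rng(0)
water = rng.normal(40, 8, 500).clip(0, 255)
veg = rng.normal(120, 20, 500).clip(0, 255)
refs = {}
for name, sample in [("water", water), ("vegetation", veg)]:
    hh, _ = np.histogram(sample, bins=16, range=(0, 255))
    refs[name] = hh / hh.sum()

label = histogram_match_class(rng.normal(45, 8, 200).clip(0, 255), refs)
```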

  9. A multiresolution approach for the convergence acceleration of multivariate curve resolution methods.

    Science.gov (United States)

    Sawall, Mathias; Kubis, Christoph; Börner, Armin; Selent, Detlef; Neymeyr, Klaus

    2015-09-03

    Modern computerized spectroscopic instrumentation can result in high volumes of spectroscopic data. Such accurate measurements raise special computational challenges for multivariate curve resolution techniques, since pure component factorizations are often solved via constrained minimization problems. The computational costs of these calculations rapidly grow with an increased time or frequency resolution of the spectral measurements. The key idea of this paper is to define, for the given high-dimensional spectroscopic data, a sequence of coarsened subproblems with reduced resolutions. The multiresolution algorithm first computes a pure component factorization for the coarsest problem with the lowest resolution. The factorization results are then used as initial values for the next problem with a higher resolution. Good initial values result in a fast solution on the next refined level. This procedure is repeated, and finally a factorization is determined for the highest level of resolution. The described multiresolution approach allows a considerable convergence acceleration. The computational procedure is analyzed and is tested on experimental spectroscopic data from the rhodium-catalyzed hydroformylation together with various soft and hard models. Copyright © 2015 Elsevier B.V. All rights reserved.
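
    The coarse-to-fine initialization strategy is generic and easy to sketch: solve the fit on subsampled data first, then warm-start each finer level with the previous solution. The example below applies the idea to a simple least-squares exponential fit rather than a multivariate curve resolution factorization, purely to keep it self-contained; the model and data are invented.

```python
import numpy as np
from scipy.optimize import minimize

def coarse_to_fine_fit(x, y, model, p0, levels=3):
    """Fit model(x, p) to (x, y) by first solving on coarsened (subsampled)
    data, then reusing each solution as the initial guess at the next finer
    level -- the initialization strategy behind multiresolution acceleration."""
    p = np.asarray(p0, float)
    for lvl in reversed(range(levels)):      # coarsest level first
        step = 2 ** lvl
        xs, ys = x[::step], y[::step]
        res = minimize(lambda q: np.sum((model(xs, q) - ys) ** 2), p)
        p = res.x                            # warm start for the next level
    return p

x = np.linspace(0, 10, 512)
y = 3.0 * np.exp(-0.7 * x)                   # noise-free synthetic data
p_fit = coarse_to_fine_fit(x, y, lambda x, p: p[0] * np.exp(-p[1] * x), [1.0, 0.1])
```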

  10. IEFIT - An Interactive Approach to High Temperature Fusion Plasma Magnetic Equilibrium Fitting

    International Nuclear Information System (INIS)

    Peng, Q.; Schachter, J.; Schissel, D.P.; Lao, L.L.

    1999-01-01

    An interactive IDL-based wrapper, IEFIT, has been created for the magnetic equilibrium reconstruction code EFIT, written in FORTRAN. It allows high-temperature fusion physicists to rapidly optimize a plasma equilibrium reconstruction by eliminating the unnecessarily repeated initialization of the conventional approach and by immediately displaying the fitting results of each input variation. It uses a new IDL-based graphics package, GaPlotObj, developed in cooperation with Fanning Software Consulting, that provides a unified interface with great flexibility in presenting and analyzing scientific data. The overall interactivity reduces the process from the usual hours to minutes.

  11. Application of the master curve approach to fracture mechanics characterisation of reactor pressure vessel steel

    International Nuclear Information System (INIS)

    Viehrig, Hans-Werner; Zurbuchen, Conrad; Kalkhof, Dietmar

    2010-06-01

    The paper presents results of a research project funded by the Swiss Federal Nuclear Inspectorate concerning the application of the Master Curve approach in nuclear reactor pressure vessel integrity assessment. The main focus is on the applicability of pre-cracked 0.4T-SE(B) specimens with short cracks, the verification of the transferability of Master Curve reference temperatures T0 from 0.4T-thick specimens to larger specimens, ascertaining the influence of specimen type and test temperature on T0, investigating the applicability of specimens with electroerosive notches for fracture toughness testing, and quantifying the effect of loading rate and specimen type on T0. The test material is a forged ring of steel 22 NiMoCr 3-7 from the uncommissioned German pressurized water reactor Biblis C. SE(B) specimens with different overall sizes (specimen thickness B = 0.4T, 0.8T, 1.6T, 3T; fatigue pre-cracked to a/W = 0.5 and 20% side-grooved) have comparable T0; T0 varies within the 1σ scatter band. Testing of C(T) specimens results in higher T0 compared to SE(B) specimens. Except for the lowest test temperature allowed by ASTM E1921-09a, the T0 values evaluated from specimens tested at different temperatures are consistent. Testing in the temperature range T0 ± 20 K is recommended because it gave the highest accuracy. Specimens with a/W = 0.3 and a/W = 0.5 crack-length ratios yield comparable T0. The T0 of EDM-notched specimens lie 41 K to 54 K below the T0 of fatigue pre-cracked specimens. A significant influence of the loading rate on the Master Curve T0 was observed. The HSK AN 425 test procedure is a suitable method to evaluate dynamic Master Curve tests. The reference temperature T0 is eligible to define a reference temperature RT(T0) for the ASME KIC reference curve, as recommended in ASME Code Case N-629. An additional margin has to be defined for the specific type of transient to be considered in the RPV integrity assessment.
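
    For reference, the Master Curve itself ties the analysis together: once T0 is known, ASTM E1921 gives the median fracture toughness of 1T specimens as K_Jc(med) = 30 + 70·exp[0.019(T − T0)] MPa·√m, so T0 is the temperature at which the median toughness is 100 MPa·√m. A minimal implementation:

```python
import math

def kjc_median(T, T0):
    """Median fracture toughness (MPa*sqrt(m)) of a 1T specimen from the
    ASTM E1921 Master Curve: K_Jc(med) = 30 + 70*exp(0.019*(T - T0)),
    with T and T0 in degrees Celsius."""
    return 30.0 + 70.0 * math.exp(0.019 * (T - T0))
```

By construction, kjc_median(T0, T0) returns 100 MPa·√m, and toughness rises steeply with temperature above T0, which is the ductile-to-brittle transition behavior the reference temperature encodes.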

  12. A new approach to a global fit of the CKM matrix

    Energy Technology Data Exchange (ETDEWEB)

    Hoecker, A.; Lacker, H.; Laplace, S. [Laboratoire de l' Accelerateur Lineaire, 91 - Orsay (France); Le Diberder, F. [Laboratoire de Physique Nucleaire et des Hautes Energies, 75 - Paris (France)

    2001-05-01

    We report on a new approach to a global CKM matrix analysis taking into account the most recent experimental and theoretical results. The statistical framework (Rfit) developed in this paper advocates frequentist statistics. Other approaches, such as Bayesian statistics or the 95% CL scan method, are also discussed. We emphasize the distinction between a model-testing phase and a model-dependent, metrological phase in which the various parameters of the theory are estimated. Measurements and theoretical parameters entering the global fit are thoroughly discussed, in particular with respect to their theoretical uncertainties. Graphical results for confidence levels are drawn in various one- and two-dimensional parameter spaces. Numerical results are provided for all relevant CKM parameterizations, the CKM elements, and theoretical input parameters. Predictions for branching ratios of rare K and B meson decays are obtained. A simple, predictive SUSY extension of the Standard Model is discussed. (authors)

  13. Fit for purpose? Introducing a rational priority setting approach into a community care setting.

    Science.gov (United States)

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2016-06-20

    Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre-implementation (n=20), post year 1 (n=12), and post year 2 (n=9) of PBMA, examining perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well suited to this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making.

  14. Describing the Process of Adopting Nutrition and Fitness Apps: Behavior Stage Model Approach.

    Science.gov (United States)

    König, Laura M; Sproesser, Gudrun; Schupp, Harald T; Renner, Britta

    2018-03-13

    Although mobile technologies such as smartphone apps are promising means for motivating people to adopt a healthier lifestyle (mHealth apps), previous studies have shown low adoption and continued-use rates. Addressing this issue requires further understanding of mHealth app nonusers and of adoption processes. This study utilized a stage model approach based on the Precaution Adoption Process Model (PAPM), which proposes that people pass through qualitatively different motivational stages when adopting a behavior. To establish a better understanding of between-stage transitions during app adoption, this study aimed to investigate the adoption process of nutrition and fitness app usage, and the sociodemographic and behavioral characteristics and decision-making style preferences of people at different adoption stages. Participants (N=1236) were recruited onsite within the cohort study Konstanz Life Study. Use of mobile devices and nutrition and fitness apps, 5 behavior adoption stages of using nutrition and fitness apps, preference for intuition and deliberation in eating decision-making (E-PID), healthy eating style, sociodemographic variables, and body mass index (BMI) were assessed. Analysis of the 5 behavior adoption stages showed that stage 1 ("unengaged") was the most prevalent motivational stage for both nutrition and fitness app use: half of the participants stated that they had never thought about using a nutrition app (52.41%, 533/1017), whereas less than one-third stated they had never thought about using a fitness app (29.25%, 301/1029). "Unengaged" nonusers (stage 1) showed a higher preference for an intuitive decision-making style when making eating decisions, whereas those who were already "acting" (stage 4) showed a greater preference for a deliberative decision-making style (F4,1012=21.83, P[…]) […] digital interventions. This study highlights that new user groups might be better reached by apps designed to address a more intuitive decision-making style…

  15. Correlation between 2D and 3D flow curve modelling of DP steels using a microstructure-based RVE approach

    International Nuclear Information System (INIS)

    Ramazani, A.; Mukherjee, K.; Quade, H.; Prahl, U.; Bleck, W.

    2013-01-01

    A microstructure-based approach by means of representative volume elements (RVEs) is employed to evaluate the flow curve of DP steels using virtual tensile tests. Microstructures with different martensite fractions and morphologies are studied in two- and three-dimensional approaches. Micro sections of DP microstructures with various amounts of martensite have been converted to 2D RVEs, while 3D RVEs were constructed statistically with randomly distributed phases. A dislocation-based model is used to describe the flow curve of each ferrite and martensite phase separately as a function of carbon partitioning and microstructural features. Numerical tensile tests of the RVEs were carried out using the ABAQUS/Standard code to predict the flow behaviour of DP steels. It is observed that 2D plane strain modelling gives an underpredicted flow curve for DP steels, while 3D modelling gives a quantitatively reasonable description of the flow curve in comparison to the experimental data. In this work, a von Mises stress correlation factor σ3D/σ2D has been identified to compare the predicted flow curves of these two dimensionalities, showing a third-order polynomial relation with respect to martensite fraction and a second-order polynomial relation with respect to equivalent plastic strain, respectively. The quantification of this polynomial correlation factor is performed based on laboratory-annealed DP600 chemistry with varying martensite content, and it is validated for industrially produced DP qualities with various chemistries, strength levels, and martensite fractions.
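
    The reported correlation factor is a polynomial surface, cubic in martensite fraction and quadratic in equivalent plastic strain. A sketch of how such a factor could be evaluated and applied is below; the coefficient values are placeholders, since the calibrated coefficients are not given in the abstract.

```python
import numpy as np

def stress_correction_factor(vm, eps, coef):
    """Evaluate sigma_3D/sigma_2D as a polynomial surface: cubic in the
    martensite fraction vm, quadratic in equivalent plastic strain eps.
    coef is a 4x3 coefficient matrix (rows: powers of vm, cols: powers of eps)."""
    vm_p = np.array([1.0, vm, vm ** 2, vm ** 3])
    eps_p = np.array([1.0, eps, eps ** 2])
    return vm_p @ coef @ eps_p

# Placeholder coefficients for illustration (factor = 1.05 + 0.2*vm here).
coef = np.zeros((4, 3))
coef[0, 0], coef[1, 0] = 1.05, 0.2

factor = stress_correction_factor(vm=0.3, eps=0.05, coef=coef)
sigma_3d = factor * 500.0     # correct a 2D plane-strain stress of 500 MPa
```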

  16. Toward a Conceptualization of Perceived Work-Family Fit and Balance: A Demands and Resources Approach

    Science.gov (United States)

    Voydanoff, Patricia

    2005-01-01

    Using person-environment fit theory, this article formulates a conceptual model that links work, family, and boundary-spanning demands and resources to work and family role performance and quality. Linking mechanisms include 2 dimensions of perceived work-family fit (work demands--family resources fit and family demands--work resources fit) and a…

  17. Linking occurrence and fitness to persistence: Habitat-based approach for endangered Greater Sage-Grouse

    Science.gov (United States)

    Aldridge, Cameron L.; Boyce, Mark S.

    2007-01-01

    Detailed empirical models predicting both species occurrence and fitness across a landscape are necessary to understand processes related to population persistence. Failure to consider both occurrence and fitness may result in incorrect assessments of habitat importance leading to inappropriate management strategies. We took a two-stage approach to identifying critical nesting and brood-rearing habitat for the endangered Greater Sage-Grouse (Centrocercus urophasianus) in Alberta at a landscape scale. First, we used logistic regression to develop spatial models predicting the relative probability of use (occurrence) for Sage-Grouse nests and broods. Second, we used Cox proportional hazards survival models to identify the most risky habitats across the landscape. We combined these two approaches to identify Sage-Grouse habitats that pose minimal risk of failure (source habitats) and attractive sink habitats that pose increased risk (ecological traps). Our models showed that Sage-Grouse select for heterogeneous patches of moderate sagebrush cover (quadratic relationship) and avoid anthropogenic edge habitat for nesting. Nests were more successful in heterogeneous habitats, but nest success was independent of anthropogenic features. Similarly, broods selected heterogeneous high-productivity habitats with sagebrush while avoiding human developments, cultivated cropland, and high densities of oil wells. Chick mortalities tended to occur in proximity to oil and gas developments and along riparian habitats. For nests and broods, respectively, approximately 10% and 5% of the study area was considered source habitat, whereas 19% and 15% of habitat was attractive sink habitat. Limited source habitats appear to be the main reason for poor nest success (39%) and low chick survival (12%). Our habitat models identify areas of protection priority and areas that require immediate management attention to enhance recruitment to secure the viability of this population. This novel…

  18. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    Science.gov (United States)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for estimating the oxygen saturation (SaO2) calibration curve is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques are widely used in the biomedical optics field because of their capability to reproduce real tissue behavior. The mathematical relationship between optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the ray flux absorbed at the detectors, the OD and ODR were calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
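
    The ratio-of-ratios computation that underlies such calibration curves can be shown in a few lines. The linear mapping used here (SpO2 ≈ 110 − 25R) is a common textbook approximation for pulse oximetry, not the calibration curve derived in this paper, and the signal values are invented.

```python
def optical_density_ratio(ac_red, dc_red, ac_ir, dc_ir):
    """Ratio-of-ratios R = (AC_red/DC_red) / (AC_ir/DC_ir), the quantity a
    pulse-oximeter calibration curve maps to oxygen saturation."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_from_ratio(r):
    """Common linear textbook approximation SpO2 = 110 - 25*R (illustrative
    only; real devices use an empirically calibrated curve)."""
    return 110.0 - 25.0 * r

r = optical_density_ratio(ac_red=0.02, dc_red=1.0, ac_ir=0.02, dc_ir=1.0)
spo2 = spo2_from_ratio(r)    # R = 1.0 maps to 85% under this approximation
```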

  19. Investigation of erosion behavior in different pipe-fitting using Eulerian-Lagrangian approach

    Science.gov (United States)

    Kulkarni, Harshwardhan; Khadamkar, Hrushikesh; Mathpati, Channamallikarjun

    2017-11-01

    Erosion is a wear mechanism in piping systems in which wall thinning occurs because of turbulent flow combined with the impact of solid particles on the pipe wall; this can rupture the pipe, causing costly plant repairs and personal injuries. In this study a two-way coupled Eulerian-Lagrangian approach is used to solve the liquid-solid (water-ferrous suspension) flow in different pipe fittings, namely an elbow, a t-junction, a reducer, an orifice, and a 50% open gate valve. Simulations were carried out using an incompressible transient solver in OpenFOAM for different Reynolds numbers (10k, 25k, 50k) and the Wen-Yu drag model to find the regions of highest erosion in each pipe fitting. The transient solver used is a hybrid combining a Lagrangian library with pimpleFoam. The results show that the exit region of the elbow, especially downstream of the straight section and on the extrados of the bend, is most affected by erosion; the centrifugal force on the solid particles at the bend drives this behavior. In the t-junction, erosion occurs below the locus of the projection of the branch pipe on the wall. For the reducer, orifice, and gate valve, the reduced flow area and the region downstream of it are most affected by erosion because of the increase in velocity.

  20. More basic approach to the analysis of multiple specimen R-curves for determination of Jc

    International Nuclear Information System (INIS)

    Carlson, K.W.; Williams, J.A.

    1980-02-01

    Multiple specimen J-R curves were developed for groups of 1T compact specimens with different a/W values and depths of side grooving. The purpose of this investigation was to determine Jc (J at onset of crack extension) for each group. Judicious selection of points on the load versus load-line deflection record at which to unload and heat tint specimens permitted direct observation of the approximate onset of crack extension. It was found that the present recommended procedure for determining Jc from multiple specimen R-curves, which is being considered for standardization, consistently yielded nonconservative Jc values. A more basic approach to analyzing multiple specimen R-curves is presented, applied, and discussed. This analysis determined Jc values that closely corresponded to the actual observed onset of crack extension.

  1. Use of regionalisation approach to develop fire frequency curves for Victoria, Australia

    Science.gov (United States)

    Khastagir, Anirban; Jayasuriya, Niranjali; Bhuyian, Muhammed A.

    2017-11-01

    It is important to perform fire frequency analysis to obtain fire frequency curves (FFCs) based on fire intensity in different parts of Victoria. In this paper, FFCs were derived based on the forest fire danger index (FFDI), a measure related to fire initiation, spreading speed, and containment difficulty. The mean temperature (T), relative humidity (RH), and areal extent of open water (LC2) during the summer months (Dec-Feb) were identified as the most important parameters for assessing the risk of bushfire occurrence. Based on these parameters, the Andrews curve equation was applied to 40 selected meteorological stations to identify homogeneous stations forming unique clusters. A methodology was developed that applies the Log Pearson Type III (LPIII) distribution to peak FFDI values from cluster-averaged FFDIs to generate FFCs. A total of nine homogeneous clusters across Victoria were identified, and their FFCs were subsequently developed in order to estimate regionalised fire occurrence characteristics.
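
    Fitting a Log Pearson Type III distribution to annual maxima and reading off return levels can be sketched with SciPy. The synthetic 40-year FFDI record below is an assumption for demonstration; the paper fits cluster-averaged peak FFDI series.

```python
import numpy as np
from scipy import stats

def lp3_return_levels(annual_max, return_periods):
    """Fit a Log Pearson Type III distribution to annual maxima (here, peak
    FFDI) and return the levels expected at the given return periods.
    LP3 = Pearson III fitted to the base-10 logarithms of the data."""
    logs = np.log10(np.asarray(annual_max, float))
    skew, loc, scale = stats.pearson3.fit(logs)
    # return period T corresponds to non-exceedance probability 1 - 1/T
    aep = 1.0 - 1.0 / np.asarray(return_periods, float)
    return 10 ** stats.pearson3.ppf(aep, skew, loc=loc, scale=scale)

rng = np.random.default_rng(1)
ffdi_max = 10 ** rng.normal(1.5, 0.15, 40)        # synthetic 40-year record
levels = lp3_return_levels(ffdi_max, [2, 10, 50])  # 2-, 10-, 50-year FFDI
```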

  2. An assessment of mode-coupling and falling-friction mechanisms in railway curve squeal through a simplified approach

    Science.gov (United States)

    Ding, Bo; Squicciarini, Giacomo; Thompson, David; Corradi, Roberto

    2018-06-01

    Curve squeal is one of the most annoying types of noise caused by the railway system. It usually occurs when a train or tram is running around tight curves. Although this phenomenon has been studied for many years, the generation mechanism is still the subject of controversy and not fully understood. A negative slope in the friction curve under full sliding has been considered to be the main cause of curve squeal for a long time but more recently mode coupling has been demonstrated to be another possible explanation. Mode coupling relies on the inclusion of both the lateral and vertical dynamics at the contact and an exchange of energy occurs between the normal and the axial directions. The purpose of this paper is to assess the role of the mode-coupling and falling-friction mechanisms in curve squeal through the use of a simple approach based on practical parameter values representative of an actual situation. A tramway wheel is adopted to study the effect of the adhesion coefficient, the lateral contact position, the contact angle and the damping ratio. Cases corresponding to both inner and outer wheels in the curve are considered and it is shown that there are situations in which both wheels can squeal due to mode coupling. Additionally, a negative slope is introduced in the friction curve while keeping active the vertical dynamics in order to analyse both mechanisms together. It is shown that, in the presence of mode coupling, the squealing frequency can differ from the natural frequency of either of the coupled wheel modes. Moreover, a phase difference between wheel vibration in the vertical and lateral directions is observed as a characteristic of mode coupling. For both these features a qualitative comparison is shown with field measurements which show the same behaviour.
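
    The mode-coupling mechanism can be demonstrated with the textbook two-degree-of-freedom flutter model: friction coupling makes the effective stiffness matrix asymmetric, and instability corresponds to the eigenvalues becoming complex (mode coalescence). All numerical values below are illustrative, not the tramway wheel parameters of the paper.

```python
import numpy as np

def mode_coupling_unstable(k1, k2, mu, kc):
    """2-DOF flutter check: the friction coefficient mu couples the normal and
    tangential directions asymmetrically. Complex eigenvalues of the effective
    (non-symmetric) stiffness matrix mean the two modes have coalesced, i.e.
    mode-coupling instability."""
    K = np.array([[k1, mu * kc],
                  [-kc, k2]])
    return bool(np.iscomplex(np.linalg.eigvals(K)).any())

# With close natural frequencies, a modest friction coefficient suffices
# for coalescence; with very low friction the system stays stable.
unstable = mode_coupling_unstable(k1=1.0, k2=1.2, mu=0.6, kc=0.5)
```

Analytically, the eigenvalues are complex exactly when mu·kc² exceeds ((k1 − k2)/2)², which is why closely spaced modes (k1 ≈ k2) are the ones prone to coupling.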

  3. Fitting N-mixture models to count data with unmodeled heterogeneity: Bias, diagnostics, and alternative approaches

    Science.gov (United States)

    Duarte, Adam; Adams, Michael J.; Peterson, James T.

    2018-01-01

    Monitoring animal populations is central to wildlife and fisheries management, and the use of N-mixture models toward these efforts has markedly increased in recent years. Nevertheless, relatively little work has evaluated estimator performance when basic assumptions are violated. Moreover, diagnostics to identify when bias in parameter estimates from N-mixture models is likely remain largely unexplored. We simulated count data sets using 837 combinations of detection probability, number of sample units, number of survey occasions, and type and extent of heterogeneity in abundance or detectability. We fit Poisson N-mixture models to these data, quantified the bias associated with each combination, and evaluated whether the parametric bootstrap goodness-of-fit (GOF) test can be used to indicate bias in parameter estimates. We also explored whether assumption violations can be diagnosed prior to fitting N-mixture models. In doing so, we propose a new model diagnostic, which we term the quasi-coefficient of variation (QCV). N-mixture models performed well when assumptions were met and detection probabilities were moderate (i.e., ≥0.3), and the performance of the estimator improved with increasing survey occasions and sample units. However, the magnitude of bias in estimated mean abundance with even slight amounts of unmodeled heterogeneity was substantial. The parametric bootstrap GOF test did not perform well as a diagnostic for bias in parameter estimates when detectability and sample sizes were low. The results indicate the QCV is useful to diagnose potential bias and that potential bias associated with unidirectional trends in abundance or detectability can be diagnosed using Poisson regression. This study represents the most thorough assessment to date of assumption violations and diagnostics when fitting N-mixture models using the most commonly implemented error distribution. Unbiased estimates of population state variables are needed to properly inform management decision
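
The Poisson N-mixture likelihood underlying this study marginalizes a latent site abundance over the repeated counts. A minimal sketch of that marginal likelihood for a single site (a standard Royle-type formulation, not the authors' code; `n_max` truncates the infinite sum over abundance):

```python
import math

def log_binom_pmf(y, n, p):
    # log of Binomial(y | n, p); impossible counts (y > n) get -inf
    if y > n:
        return float("-inf")
    return (math.lgamma(n + 1) - math.lgamma(y + 1) - math.lgamma(n - y + 1)
            + y * math.log(p) + (n - y) * math.log(1 - p))

def nmixture_site_loglik(counts, lam, p, n_max=200):
    # Marginal log-likelihood of repeated counts y_1..y_T at one site:
    # latent abundance N ~ Poisson(lam), each survey count y_t | N ~ Binomial(N, p).
    log_terms = []
    for n in range(max(counts), n_max + 1):
        log_pois = -lam + n * math.log(lam) - math.lgamma(n + 1)
        log_terms.append(log_pois + sum(log_binom_pmf(y, n, p) for y in counts))
    m = max(log_terms)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(v - m) for v in log_terms))
```

A quick sanity check on the truncation: with a single survey, the marginal count is exactly Poisson(λp) by binomial thinning.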

  4. Urban Concentration and Spatial Allocation of Rents from natural resources. A Zipf's Curve Approach

    Directory of Open Access Journals (Sweden)

    Tomaz Ponce Dentinho

    2017-10-01

    Full Text Available This paper aims at demonstrating how countries' dependency on natural resources plays a crucial role in urban concentration. The Zipf's Curve Elasticity is estimated for a group of countries and related to a set of indicators of unilateral transferences. Results show that, in comparison to others, countries with higher urban concentration (explained by a higher Zipf's Curve Elasticity) have a higher percentage of income coming from natural resources and education expenditures, whereas public spending in health and outflows of Foreign Direct Investment seem to have spatial redistribution effects. Summing up, there are signs that the spatial allocation of property rights over natural resources, and of the related rents, influences urban concentration.
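
The Zipf's Curve Elasticity is conventionally estimated as the slope of a log rank-size regression over a country's cities. A minimal sketch under that assumption (the paper's exact specification and transference indicators are not reproduced):

```python
import math

def zipf_elasticity(populations):
    """OLS slope of log(city size) on log(rank), sizes sorted largest first.
    A slope of -1 is the classic rank-size rule; larger magnitudes are
    usually read as stronger urban concentration."""
    sizes = sorted(populations, reverse=True)
    xs = [math.log(r) for r in range(1, len(sizes) + 1)]
    ys = [math.log(s) for s in sizes]
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var
```

For a city system following Zipf's law exactly (size proportional to 1/rank), the estimate is -1.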

  5. A Data-driven Study of RR Lyrae Near-IR Light Curves: Principal Component Analysis, Robust Fits, and Metallicity Estimates

    Science.gov (United States)

    Hajdu, Gergely; Dékány, István; Catelan, Márcio; Grebel, Eva K.; Jurcsik, Johanna

    2018-04-01

    RR Lyrae variables are widely used tracers of Galactic halo structure and kinematics, but they can also serve to constrain the distribution of the old stellar population in the Galactic bulge. With the aim of improving their near-infrared photometric characterization, we investigate their near-infrared light curves, as well as the empirical relationships between their light curve and metallicities using machine learning methods. We introduce a new, robust method for the estimation of the light-curve shapes, hence the average magnitudes of RR Lyrae variables in the K S band, by utilizing the first few principal components (PCs) as basis vectors, obtained from the PC analysis of a training set of light curves. Furthermore, we use the amplitudes of these PCs to predict the light-curve shape of each star in the J-band, allowing us to precisely determine their average magnitudes (hence colors), even in cases where only one J measurement is available. Finally, we demonstrate that the K S-band light-curve parameters of RR Lyrae variables, together with the period, allow the estimation of the metallicity of individual stars with an accuracy of ∼0.2–0.25 dex, providing valuable chemical information about old stellar populations bearing RR Lyrae variables. The methods presented here can be straightforwardly adopted for other classes of variable stars, bands, or for the estimation of other physical quantities.
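
The core idea of using the first few principal components as basis vectors for light-curve shapes can be sketched on toy data (illustrative only; the paper's training set, robust fitting, and K_S/J-band details are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
phase = np.linspace(0.0, 1.0, 64, endpoint=False)

# Toy "training set": light curves built from two fixed harmonics with
# random amplitudes, so the first two PCs span the whole family.
train = np.array([a1 * np.sin(2 * np.pi * phase) + a2 * np.cos(4 * np.pi * phase)
                  for a1, a2 in rng.uniform(0.2, 1.0, size=(100, 2))])

mean_curve = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean_curve, full_matrices=False)
basis = vt[:2]  # leading principal components as basis vectors

def fit_light_curve(mags):
    """Least-squares amplitudes of the leading PCs for one observed curve."""
    coeffs, *_ = np.linalg.lstsq(basis.T, mags - mean_curve, rcond=None)
    return coeffs, mean_curve + basis.T @ coeffs

target = 0.7 * np.sin(2 * np.pi * phase) + 0.3 * np.cos(4 * np.pi * phase)
pc_amplitudes, model = fit_light_curve(target)
```

Because the target lies in the span of the training family, the two-PC reconstruction recovers it essentially exactly; real light curves would leave a residual that shrinks as more PCs are kept.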

  6. A Probabilistic Approach to Fitting Period–luminosity Relations and Validating Gaia Parallaxes

    Energy Technology Data Exchange (ETDEWEB)

    Sesar, Branimir; Fouesneau, Morgan; Bailer-Jones, Coryn A. L.; Gould, Andy; Rix, Hans-Walter [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Price-Whelan, Adrian M., E-mail: bsesar@mpia.de [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2017-04-01

    Pulsating stars, such as Cepheids, Miras, and RR Lyrae stars, are important distance indicators and calibrators of the “cosmic distance ladder,” and yet their period–luminosity–metallicity (PLZ) relations are still constrained using simple statistical methods that cannot take full advantage of available data. To enable optimal usage of data provided by the Gaia mission, we present a probabilistic approach that simultaneously constrains parameters of PLZ relations and uncertainties in Gaia parallax measurements. We demonstrate this approach by constraining PLZ relations of type ab RR Lyrae stars in near-infrared W1 and W2 bands, using Tycho-Gaia Astrometric Solution (TGAS) parallax measurements for a sample of ≈100 type ab RR Lyrae stars located within 2.5 kpc of the Sun. The fitted PLZ relations are consistent with previous studies, and in combination with other data, deliver distances precise to 6% (once various sources of uncertainty are taken into account). To a precision of 0.05 mas (1σ), we do not find a statistically significant offset in TGAS parallaxes for this sample of distant RR Lyrae stars (median parallax of 0.8 mas and distance of 1.4 kpc). With only minor modifications, our probabilistic approach can be used to constrain PLZ relations of other pulsating stars, and we intend to apply it to Cepheid and Mira stars in the near future.

  7. The New Keynesian Phillips Curve

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper provides a survey on the recent literature on the new Keynesian Phillips curve: the controversies surrounding its microfoundation and estimation, the approaches that have been tried to improve its empirical fit and the challenges it faces adapting to the open-economy framework. The new......, learning or state-dependent pricing. The introduction of open-economy factors into the new Keynesian Phillips curve complicates matters further as it must capture the nexus between price setting, inflation and the exchange rate. This is nevertheless a crucial feature for any model to be used for inflation...... forecasting in a small open economy like Iceland....

  8. The Forecasting Power of the Yield Curve, a Supervised Factor Model Approach

    DEFF Research Database (Denmark)

    Boldrini, Lorenzo; Hillebrand, Eric Tobias

    loadings have the Nelson and Siegel (1987) structure and we consider one forecast target at a time. We compare the forecasting performance of our specification to benchmark models such as principal components regression, partial least squares, and ARMA(p,q) processes. We use the yield curve data from G...
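
The Nelson and Siegel (1987) loading structure referred to above is standard; a minimal sketch of the implied yield curve (textbook formulation with decay parameter `lam`, not the authors' supervised factor model):

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel (1987) yield at maturity tau: beta0 is the level
    factor, beta1 the slope factor, beta2 the curvature factor; lam sets
    how fast the slope and curvature loadings decay with maturity."""
    x = tau / lam
    g = (1.0 - math.exp(-x)) / x   # slope loading
    return beta0 + beta1 * g + beta2 * (g - math.exp(-x))
```

The limits give a quick check: the short end converges to beta0 + beta1, the long end to the level beta0.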

  9. Estimating water use of mature pecan orchards: A six stage crop growth curve approach

    CSIR Research Space (South Africa)

    Ibraimo, NA

    2016-11-01

    Full Text Available previous study in New Mexico, revealed that a six stage crop coefficient curve should be considered for pecans, together with higher mid-season crop coefficient (Kc) values for mature orchards. More accurate estimates of monthly ET for mature pecan...

  10. Exploring Person Fit with an Approach Based on Multilevel Logistic Regression

    Science.gov (United States)

    Walker, A. Adrienne; Engelhard, George, Jr.

    2015-01-01

    The idea that test scores may not be valid representations of what students know, can do, and should learn next is well known. Person fit provides an important aspect of validity evidence. Person fit analyses at the individual student level are not typically conducted and person fit information is not communicated to educational stakeholders. In…

  11. Fitting the multitemporal curve: a fourier series approach to the missing data problem in remote sensing analysis

    Science.gov (United States)

    Evan Brooks; Valerie Thomas; Wynne Randolph; John Coulston

    2012-01-01

    With the advent of free Landsat data stretching back decades, there has been a surge of interest in utilizing remotely sensed data in multitemporal analysis for estimation of biophysical parameters. Such analysis is confounded by cloud cover and other image-specific problems, which result in missing data at various aperiodic times of the year. While there is a wealth...
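
A Fourier-series fit to aperiodically sampled observations, the general technique this abstract describes, amounts to ordinary least squares on harmonic regressors; a minimal sketch (period, number of harmonics, and the demo data are illustrative, not the authors' Landsat workflow):

```python
import numpy as np

def fit_fourier(t, y, period, n_harmonics=2):
    """Least-squares Fourier-series fit y(t) ≈ a0 + Σ_k [c_k cos(2πkt/P)
    + s_k sin(2πkt/P)], valid for irregular sampling times t."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols.extend([np.cos(w), np.sin(w)])
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs, design @ coeffs

# Irregular (cloud-gapped) sampling of a noiseless seasonal signal.
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 365.0, 40)
y = 0.5 + 1.5 * np.sin(2.0 * np.pi * t / 365.0) - 0.4 * np.cos(4.0 * np.pi * t / 365.0)
coeffs, fitted = fit_fourier(t, y, 365.0)
```

Because the regression uses only the observed times, missing dates need no imputation; the fitted series can then be evaluated at any date.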

  12. Curve aligning approach for gait authentication based on a wearable accelerometer

    International Nuclear Information System (INIS)

    Sun, Hu; Yuao, Tao

    2012-01-01

    Gait authentication based on a wearable accelerometer is a novel biometric which can be used for identity identification, medical rehabilitation and early detection of neurological disorders. The method used for matching gait patterns bears heavily on authentication performance. In this paper, curve aligning is introduced as a new method for matching gait patterns and it is compared with correlation and dynamic time warping (DTW). A support vector machine (SVM) is proposed to fuse pattern-matching methods at the decision level. Accelerations collected from the ankles of 22 walking subjects are processed for authentication in our experiments. The fusion of curve aligning with backward–forward accelerations and DTW with vertical accelerations improves authentication performance substantially and consistently. This fusion algorithm is tested repeatedly. The mean and standard deviation of its equal error rate are 0.794% and 0.696%, respectively, whereas among all the non-fusion algorithms presented, the best shows an EER of 3.03%. (paper)
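
Of the pattern-matching methods compared, dynamic time warping is the most standard; a minimal textbook implementation for 1-D acceleration sequences (not the authors' fusion pipeline):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two
    1-D sequences, using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    acc = [[inf] * (m + 1) for _ in range(n + 1)]
    acc[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            acc[i][j] = cost + min(acc[i - 1][j],      # stretch a
                                   acc[i][j - 1],      # stretch b
                                   acc[i - 1][j - 1])  # match step
    return acc[n][m]
```

Unlike plain correlation, DTW tolerates local timing differences between gait cycles: a repeated sample in one sequence costs nothing if the values match.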

  13. A unified conformational selection and induced fit approach to protein-peptide docking.

    Directory of Open Access Journals (Sweden)

    Mikael Trellet

    Full Text Available Protein-peptide interactions are vital for the cell. They mediate, inhibit or serve as structural components in nearly 40% of all macromolecular interactions, and are often associated with diseases, making them interesting leads for protein drug design. In recent years, large-scale technologies have enabled exhaustive studies on the peptide recognition preferences for a number of peptide-binding domain families. Yet, the paucity of data regarding their molecular binding mechanisms together with their inherent flexibility makes the structural prediction of protein-peptide interactions very challenging. This leaves flexible docking as one of the few amenable computational techniques to model these complexes. We present here an ensemble, flexible protein-peptide docking protocol that combines conformational selection and induced fit mechanisms. Starting from an ensemble of three peptide conformations (extended, α-helix, polyproline-II), flexible docking with HADDOCK generates 79.4% of high quality models for bound/unbound and 69.4% for unbound/unbound docking when tested against the largest protein-peptide complexes benchmark dataset available to date. Conformational selection at the rigid-body docking stage successfully recovers the most relevant conformation for a given protein-peptide complex and the subsequent flexible refinement further improves the interface by up to 4.5 Å interface RMSD. Cluster-based scoring of the models results in a selection of near-native solutions in the top three for ∼75% of the successfully predicted cases. This unified conformational selection and induced fit approach to protein-peptide docking should open the route to the modeling of challenging systems such as disorder-order transitions taking place upon binding, significantly expanding the applicability limit of biomolecular interaction modeling by docking.

  14. Microcanonical Monte Carlo approach for computing melting curves by atomistic simulations

    OpenAIRE

    Davis, Sergio; Gutiérrez, Gonzalo

    2017-01-01

    We report microcanonical Monte Carlo simulations of melting and superheating of a generic, Lennard-Jones system starting from the crystalline phase. The isochoric curve, the melting temperature $T_m$ and the critical superheating temperature $T_{LS}$ obtained are in close agreement (well within the microcanonical temperature fluctuations) with standard molecular dynamics one-phase and two-phase methods. These results validate the use of microcanonical Monte Carlo to compute melting points, a ...

  15. Novel hybrid (magnet plus curve grasper) technique during transumbilical cholecystectomy: initial experience of a promising approach.

    Science.gov (United States)

    Millan, Carolina; Bignon, Horacion; Bellia, Gaston; Buela, Enrique; Rabinovich, Fernando; Albertal, Mariano; Martinez Ferro, Marcelo

    2013-10-01

    The use of magnets in transumbilical cholecystectomy (TUC) improves triangulation and achieves an optimal critical view. Nonetheless, the tendency of the magnets to collide hinders the process. In order to simplify the surgical technique, we developed a hybrid model with a single magnet and a curved grasper. All TUCs performed with a hybrid strategy in our pediatric population between September 2009 and July 2012 were retrospectively reviewed. Of 260 surgical procedures in which at least one magnet was used, 87 were TUCs. Of those, 62 were hybrid: 33 in adults and 29 in pediatric patients. The technique combines a magnet and a curved grasper. Through a transumbilical incision, we placed a 12-mm trocar and another flexible 5-mm trocar. The laparoscope with the working channel used the 12-mm trocar. The magnetic grasper was introduced to the abdominal cavity using the working channel to provide cephalic retraction of the gallbladder fundus. Across the flexible trocar, the assistant manipulated the curved grasper to mobilize the infundibulum. The surgeon operated through the working channel of the laparoscope. In this pediatric population, the mean age was 14 years (range, 4-17 years), and mean weight was 50 kg (range, 18-90 kg); 65% were girls. Mean operative time was 62 minutes. All procedures achieved a critical view of safety with no instrumental collision. There were no intraoperative or postoperative complications. The hospital stay was 1.4±0.6 days, and the median follow-up was 201 days. A hybrid technique, combining magnets and a curved grasper, simplifies transumbilical surgery. It seems feasible and safe for TUC and potentially reproducible.

  16. The N-shaped environmental Kuznets curve: an empirical evaluation using a panel quantile regression approach.

    Science.gov (United States)

    Allard, Alexandra; Takman, Johanna; Uddin, Gazi Salah; Ahmed, Ali

    2018-02-01

    We evaluate the N-shaped environmental Kuznets curve (EKC) using panel quantile regression analysis. We investigate the relationship between CO2 emissions and GDP per capita for 74 countries over the period of 1994-2012. We include additional explanatory variables, such as renewable energy consumption, technological development, trade, and institutional quality. We find evidence for the N-shaped EKC in all income groups, except for the upper-middle-income countries. Heterogeneous characteristics are, however, observed over the N-shaped EKC. Finally, we find a negative relationship between renewable energy consumption and CO2 emissions, which highlights the importance of promoting greener energy in order to combat global warming.
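
Whether a fitted cubic in income implies an N shape reduces to the sign of the leading coefficient and the number of real turning points of the curve; a minimal helper under that reading (illustrative only — the paper's panel quantile estimation is not reproduced, and whether the turning points fall inside the observed income range is not checked):

```python
def cubic_shape(b1, b2, b3):
    """Shape implied by fitted emissions e = b1*y + b2*y**2 + b3*y**3:
    the derivative b1 + 2*b2*y + 3*b3*y**2 needs two real roots
    (positive discriminant) for the curve to change direction twice."""
    disc = (2.0 * b2) ** 2 - 4.0 * (3.0 * b3) * b1
    if b3 > 0 and disc > 0:
        return "N-shaped"     # rise, fall, rise
    if b3 < 0 and disc > 0:
        return "inverted-N"   # fall, rise, fall
    return "monotonic"        # no direction change on the real line
```

For example, e = y³ − 3y has turning points at y = ±1 and is classified as N-shaped.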

  17. A note on the environmental Kuznets curve for CO2: A pooled mean group approach

    International Nuclear Information System (INIS)

    Iwata, Hiroki; Okada, Keisuke; Samreth, Sovannroeun

    2011-01-01

    This paper investigates whether the environmental Kuznets curve (EKC) hypothesis for CO2 emissions is satisfied using the panel data of 28 countries by taking nuclear energy into account. Using the pooled mean group (PMG) estimation method, our main results indicate that (1) the impacts of nuclear energy on CO2 emissions are significantly negative, (2) CO2 emissions actually increase monotonically within the sample period in all cases: the full sample, OECD countries, and non-OECD countries, and (3) the growth rate in CO2 emissions with income is decreasing in OECD countries and increasing in non-OECD countries.

  18. New approach to the adjustment of group cross sections fitting integral measurements

    International Nuclear Information System (INIS)

    Chao, Y.A.

    1979-01-01

    The adjustment of group cross sections fitting integral measurements is viewed as a process of estimating theoretical and/or experimental negligence errors to bring statistical consistency to the integral and differential data so that they can be combined to form an enlarged ensemble, based on which an improved estimation of the physical constants can be made. A three-step approach is suggested, and its formalism of general validity is developed. In step one, the data of negligence error are extracted from the given integral and differential data. The method of extraction is based on the concepts of prior probability and information entropy. It automatically leads to vanishing negligence error as the two sets of data are statistically consistent. The second step is to identify the sources of negligence error and adjust the data by an amount compensating for the extracted negligence discrepancy. In the last step, the two data sets, already adjusted to mutual consistency are combined as a single unified ensemble. Standard methods of statistics can then be applied to reestimate the physical constants. 1 figure

  19. Treatment of tophaceous pseudogout with custom-fitted temporomandibular joint: a two-staged approach

    Directory of Open Access Journals (Sweden)

    Robert Pellecchia, DDS

    2015-12-01

    Full Text Available Tophaceous pseudogout, a variant of calcium pyrophosphate dihydrate deposition, is a relatively rare juxta-articular disease. It is a metabolic condition, in which patients develop pseudo-tumoral calcifications associated with peri-articular structures secondary to calcium pyrophosphate deposition into joints with fibrocartilage rather than hyaline cartilage. These lesions are reported in the knee, wrist, pubis, shoulder, and temporomandibular joint (TMJ) and induce a histocytic foreign body giant cell reaction. We report a case of tophaceous pseudogout affecting the left TMJ with destruction of the condyle and glenoid and middle cranial fossa that was reconstructed with a TMJ Concepts (Ventura, CA) custom-fitted prosthesis in a 2-staged surgical approach using a silicone spacer. The surgical management using a patient-specific TMJ is a viable option when the fossa or condylar component has been compromised due to breakdown of bone secondary to a pathologic process. Our case describes and identifies the lesion and its rare occurrence in the temporomandibular region. The successful management of tophaceous pseudogout of the TMJ must include a thorough patient workup including the involvement of other joints as well as the modification of bone of the glenoid fossa and condylar relationship of the TMJ.

  20. New approach to the adjustment of group cross-sections fitting integral measurements

    International Nuclear Information System (INIS)

    Chao, Y.A.

    1979-01-01

    The adjustment of group cross-sections fitting integral measurements is viewed as a process of uncovering theoretical and/or experimental negligence errors to bring statistical consistency to the integral and differential data so that they can be combined to form an enlarged ensemble, on which an improved estimation of the physical constants can be based. An approach with three steps is suggested, and its formalism of general validity is developed. In step one, the data of negligence error are extracted from the given integral and differential data. The method of extraction is based on the concepts of prior probability and information entropy. It automatically leads to vanishing negligence error as the two sets of data are statistically consistent. The second step is to identify the sources of negligence error and adjust the data by an amount compensating for the extracted negligence discrepancy. In the last step the two data sets, already adjusted to mutual consistency, are combined as a single unified ensemble. Standard methods of statistics can then be applied to re-estimate the physical constants. A simple example is shown as a demonstration of the method. 1 figure

  1. Physician behavioral adaptability: A model to outstrip a "one size fits all" approach.

    Science.gov (United States)

    Carrard, Valérie; Schmid Mast, Marianne

    2015-10-01

    Based on a literature review, we propose a model of physician behavioral adaptability (PBA) with the goal of inspiring new research. PBA means that the physician adapts his or her behavior according to patients' different preferences. The PBA model shows how physicians infer patients' preferences and adapt their interaction behavior from one patient to the other. We claim that patients will benefit from better outcomes if their physicians show behavioral adaptability rather than a "one size fits all" approach. This literature review is based on a literature search of the PsycINFO(®) and MEDLINE(®) databases. The literature review and first results stemming from the authors' research support the validity and viability of parts of the PBA model. There is evidence suggesting that physicians are able to show behavioral flexibility when interacting with their different patients, that a match between patients' preferences and physician behavior is related to better consultation outcomes, and that physician behavioral adaptability is related to better consultation outcomes. Training of physicians' behavioral flexibility and their ability to infer patients' preferences can facilitate physician behavioral adaptability and positive patient outcomes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. A computational approach for functional mapping of quantitative trait loci that regulate thermal performance curves.

    Directory of Open Access Journals (Sweden)

    John Stephen Yap

    2007-06-01

    Full Text Available Whether and how thermal reaction norm is under genetic control is fundamental to understand the mechanistic basis of adaptation to novel thermal environments. However, the genetic study of thermal reaction norm is difficult because it is often expressed as a continuous function or curve. Here we derive a statistical model for dissecting thermal performance curves into individual quantitative trait loci (QTL) with the aid of a genetic linkage map. The model is constructed within the maximum likelihood context and implemented with the EM algorithm. It integrates the biological principle of responses to temperature into a framework for genetic mapping through rigorous mathematical functions established to describe the pattern and shape of thermal reaction norms. The biological advantages of the model lie in the decomposition of the genetic causes for thermal reaction norm into its biologically interpretable modes, such as hotter-colder, faster-slower and generalist-specialist, as well as the formulation of a series of hypotheses at the interface between genetic actions/interactions and temperature-dependent sensitivity. The model is also meritorious in statistics because the precision of parameter estimation and power of QTL detection can be increased by modeling the mean-covariance structure with a small set of parameters. The results from simulation studies suggest that the model displays favorable statistical properties and can be robust in practical genetic applications. The model provides a conceptual platform for testing many ecologically relevant hypotheses regarding organismic adaptation within the Eco-Devo paradigm.

  3. Single-centre experience of retroperitoneoscopic approach in urology with tips to overcome the steep learning curve

    Directory of Open Access Journals (Sweden)

    Aneesh Srivastava

    2016-01-01

    Full Text Available Context: The retroperitoneoscopic or retroperitoneal (RP) surgical approach has not become as popular as the transperitoneal (TP) one due to the steeper learning curve. Aims: Our single-institution experience focuses on the feasibility, advantages and complications of retroperitoneoscopic surgeries (RS) performed over the past 10 years. Tips and tricks have been discussed to overcome the steep learning curve and these are emphasised. Settings and Design: This study made a retrospective analysis of computerised hospital data of patients who underwent RP urological procedures from 2003 to 2013 at a tertiary care centre. Patients and Methods: Between 2003 and 2013, 314 cases of RS were performed for various urological procedures. We analysed the operative time, peri-operative complications, time to return of bowel sounds, length of hospital stay, and advantages and difficulties involved. Post-operative complications were stratified into five grades using the modified Clavien classification (MCC). Results: RS were successfully completed in 95.5% of patients, with 4% of the procedures electively performed by the combined approach (both RP and TP); 3.2% required open conversion and 1.3% were converted to the TP approach. The most common cause for conversion was bleeding. Mean hospital stay was 3.2 ± 1.2 days and the mean time for return of bowel sounds was 16.5 ± 5.4 h. Of the patients, 1.4% required peri-operative blood transfusion. A total of 16 patients (5%) had post-operative complications and the majority were grades I and II as per MCC. The rates of intra-operative and post-operative complications depended on the difficulty of the procedure, but the complications diminished over the years with the increasing experience of surgeons. Conclusion: Retroperitoneoscopy has proven an excellent approach, with certain advantages. The tips and tricks that have been provided and emphasised should definitely help to minimise the steep learning curve.

  4. Authentication of virgin olive oil by a novel curve resolution approach combined with visible spectroscopy.

    Science.gov (United States)

    Ferreiro-González, Marta; Barbero, Gerardo F; Álvarez, José A; Ruiz, Antonio; Palma, Miguel; Ayuso, Jesús

    2017-04-01

    Adulteration of olive oil is not only a major economic fraud but can also have major health implications for consumers. In this study, a combination of visible spectroscopy with a novel multivariate curve resolution method (CR), principal component analysis (PCA) and linear discriminant analysis (LDA) is proposed for the authentication of virgin olive oil (VOO) samples. VOOs are well-known products with the typical properties of a two-component system due to the two main groups of compounds that contribute to the visible spectra (chlorophylls and carotenoids). Application of the proposed CR method to VOO samples provided the two pure-component spectra for the aforementioned families of compounds. A correlation study of the real spectra and the resolved component spectra was carried out for different types of oil samples (n=118). LDA using the correlation coefficients as variables to discriminate samples allowed the authentication of 95% of virgin olive oil samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Determination of performance degradation of a marine diesel engine by using curve based approach

    International Nuclear Information System (INIS)

    Kökkülünk, Görkem; Parlak, Adnan; Erdem, Hasan Hüseyin

    2016-01-01

    Highlights: • Mathematical model was developed for a marine diesel engine. • Measurements were taken from the Main Engine of M/V Ince Inebolu. • The model was validated for the marine diesel engine. • Curve Based Method was performed to evaluate the performance. • Degradation values of a marine diesel engine were found for power and SFC. - Abstract: Nowadays, energy efficiency measures on ships are the top priority topic for the maritime sector. One of the key parameters of energy efficiency is finding a useful tool to improve it. There are two steps to improve energy efficiency on ships: measurement and evaluation of the performance of the main fuel consumers. Performance evaluation quantifies how much the performance changes owing to engine component degradation, which reduces performance through wear, fouling, mechanical problems, etc. In this study, a zero dimensional two zone combustion model is developed and validated for a two stroke marine diesel engine (MITSUI MAN B&W 6S50MC). The measurements were taken from a real ship, M/V Ince Inebolu, by the research team during normal operation of the main engine in the region of the Marmara Sea. To evaluate the performance, the “curve based method” is used to calculate the total performance degradation. This total degradation is classified into the parameters of compression pressure, injection timing, injection pressure, scavenge air temperature and scavenge air pressure by means of the developed mathematical model. In conclusion, the total degradation for the studied ship is found to be 620 kW in power and 26.74 g/kW h in specific fuel consumption.

  6. An Inverse Function Least Square Fitting Approach of the Buildup Factor for Radiation Shielding Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Je [Sejong Univ., Seoul (Korea, Republic of); Alkhatee, Sari; Roh, Gyuhong; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    Dose absorption and energy absorption buildup factors are widely used in shielding analysis. The dose rate of the medium is the main concern for the dose buildup factor, whereas energy absorption is the important parameter in the energy absorption buildup factor. The ANSI/ANS-6.4.3-1991 standard data are widely used, based on interpolation and extrapolation by means of an approximation method. Recently, Yoshida's geometric progression (GP) formulae have also become popular, and they are already implemented in the QAD code. In the QAD code, the two buildup factors are notated as DOSE for the standard air exposure response and ENG for the response of the energy absorbed in the material itself. In this paper, a new least square fitting method is suggested to obtain reliable buildup factors from the datasets proposed since 1991. A total of four datasets of air exposure buildup factors are used for evaluation, including the ANSI/ANS-6.4.3-1991, Taylor, Berger, and GP data. The standard deviation of the fitted data is analyzed based on the results. An inverse least square fitting method is proposed in this study in order to reduce the fitting uncertainties. It adopts an inverse function rather than the original function, depending on the slope of the dataset. Some quantitative comparisons are provided for concrete and lead. This study is focused on the least square fitting of existing buildup factors to be utilized in point-kernel codes for radiation shielding analysis. The inverse least square fitting method is suggested to obtain more reliable results for concave-shaped datasets such as that of concrete. In the concrete case, the variance and residuals are decreased significantly. However, the usual least square fitting method can still be applied to the convex-shaped case of lead. In the future, more datasets will be tested using least square fitting, and the fitted data could be incorporated into existing point-kernel codes.
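
The core idea, fitting the reciprocal of a concave-shaped dataset and inverting the fit back, can be sketched with simple polynomial least squares (the toy data and polynomial degree are illustrative; the actual buildup-factor formulae and datasets are not reproduced):

```python
import numpy as np

def fit_direct_and_inverse(x, b, deg=2):
    """Fit the data twice: a degree-`deg` polynomial to B(x) directly, and
    the same to 1/B(x), inverted back afterwards.  When the reciprocal is
    closer to polynomial shape, the inverse-space fit leaves smaller
    residuals."""
    direct = np.polyval(np.polyfit(x, b, deg), x)
    inverse = 1.0 / np.polyval(np.polyfit(x, 1.0 / b, deg), x)
    return direct, inverse

# Toy dataset whose reciprocal is exactly quadratic, so the inverse-space
# fit is essentially exact while the direct fit is not.
x = np.linspace(0.0, 5.0, 20)
b = 1.0 / (1.0 + 0.5 * x + 0.1 * x**2)
direct, inverse = fit_direct_and_inverse(x, b)
```

Comparing the two residual vectors against the data reproduces the qualitative point of the abstract: whether to fit B or 1/B depends on the shape of the dataset.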

  7. Navigation and flight director guidance for the NASA/FAA helicopter MLS curved approach flight test program

    Science.gov (United States)

    Phatak, A. V.; Lee, M. G.

    1985-01-01

The navigation and flight director guidance systems implemented in the NASA/FAA helicopter microwave landing system (MLS) curved-approach flight test program are described. Flight tests were conducted at the U.S. Navy's Crows Landing facility, using the NASA Ames UH-1H helicopter equipped with the V/STOLAND avionics system. The purpose of these tests was to investigate the feasibility of flying complex, curved, and descending approaches to a landing using MLS flight director guidance. A description of the navigation aids, the avionics system, the cockpit instrumentation, and the on-board navigation equipment used for the flight tests is provided. Three generic reference flight paths were developed and flown during the tests: U-turn, S-turn, and straight-in profiles. These profiles and their geometries are described in detail. A 3-cue flight director was implemented on the helicopter, and the formulation and implementation of the flight director laws are also presented. Performance data and analysis are presented for one pilot conducting the flight director approaches.

  8. Land administration in Ecuador; Current situation and opportunities with adoption of fit-for-purpose land administration approach

    NARCIS (Netherlands)

    Todorovski, D.; Salazar, Rodolfo; Jacome, Ginella; Bermeo, Antonio; Orellana, Esteban; Zambrano, Fatima; Teran, Andrea; Mejia, Raul

    2018-01-01

The aim of this paper is to explore the current land administration situation in Ecuador and to identify opportunities for a fit-for-purpose (FFP) land administration approach that could improve the land administration functions for the country and its citizens. In this paper, initially literature about land

  9. An alternative approach to calculating Area-Under-the-Curve (AUC) in delay discounting research.

    Science.gov (United States)

    Borges, Allison M; Kuang, Jinyi; Milhorn, Hannah; Yi, Richard

    2016-09-01

Applied to delay discounting data, Area-Under-the-Curve (AUC) provides an atheoretical index of the rate of delay discounting. The conventional method of calculating AUC, by summing the areas of the trapezoids formed by successive delay-indifference point pairings, does not account for the fact that most delay discounting tasks scale delay pseudoexponentially; that is, the time intervals between delays typically get larger as delays get longer. This results in a disproportionate contribution of indifference points at long delays to the total AUC, with minimal contribution from indifference points at short delays. We propose two modifications that correct for this imbalance via a base-10 logarithmic transformation and an ordinal scaling transformation of delays. These newly proposed indices of discounting, AUClog-d and AUCord, address the limitation of AUC while preserving a primary strength (remaining atheoretical). Re-examination of previously published data provides empirical support for both AUClog-d and AUCord. Thus, we believe theoretical and empirical arguments favor these methods as the preferred atheoretical indices of delay discounting. © 2016 Society for the Experimental Analysis of Behavior.
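The three indices can be sketched concretely: a minimal, illustrative implementation of trapezoidal AUC with the base-10 log and ordinal delay transformations (the normalization details and the example data are assumptions, not the authors' code).

```python
import math

def auc(delays, indifference, transform=None):
    """Area under the discounting curve by the trapezoid rule.

    delays: increasing delays; indifference: indifference points expressed
    as fractions of the delayed amount (0..1).
    transform=None  -> conventional AUC (raw delays)
    transform="log" -> log-transformed delays (assumes delays >= 1)
    transform="ord" -> ordinal ranks of the delays
    The (transformed) delays are rescaled to span [0, 1], so every index
    stays bounded between 0 and 1.
    """
    if transform == "log":
        xs = [math.log10(d) for d in delays]
    elif transform == "ord":
        xs = list(range(len(delays)))
    else:
        xs = list(delays)
    lo, hi = xs[0], xs[-1]
    xs = [(x - lo) / (hi - lo) for x in xs]
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for x1, x2, y1, y2 in zip(xs, xs[1:], indifference, indifference[1:]))

# Pseudoexponentially spaced delays (days): the last interval dominates the
# conventional AUC, while the transforms weight the intervals more evenly.
delays = [1, 7, 30, 90, 180, 365]
indiff = [0.95, 0.85, 0.60, 0.40, 0.30, 0.20]
conventional = auc(delays, indiff)
auc_log = auc(delays, indiff, "log")
auc_ord = auc(delays, indiff, "ord")
```

With this decreasing example series the conventional index is pulled down by the long final interval, while the log and ordinal variants give the early, steep part of the curve comparable weight.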

  10. Institutional Fit and River Basin Governance: a New Approach Using Multiple Composite Measures

    Directory of Open Access Journals (Sweden)

    Louis Lebel

    2013-03-01

Full Text Available The notion that effective environmental governance depends in part on achieving a reasonable fit between institutional arrangements and the features of ecosystems and their interconnections with users has been central to much thinking about social-ecological systems for more than a decade. Based on expert consultations, this study proposes a set of six dimensions of fit for water governance regimes and then empirically explores variation in measures of these in 28 case studies of national parts of river basins in Europe, Asia, Latin America, and Africa, drawing on a database compiled by the Twin2Go project. The six measures capture different but potentially important dimensions of fit: allocation, integration, conservation, basinization, participation, and adaptation. Based on combinations of responses to a standard questionnaire filled in by groups of experts in each basin, we derived quantitative measures for each indicator. Substantial variation in these measures of fit was apparent among basins in developing and developed countries. Geographical location is not a barrier to high institutional fit, but within basins different measures of fit often diverge. This suggests it is difficult, but not impossible, to simultaneously achieve a high fit against multiple challenging conditions. Comparing multidimensional fit profiles gives a sense of how well water governance regimes are equipped for dealing with a range of natural resource and use-related conditions and suggests areas for priority intervention. The findings of this study thus confirm and help explain previous work concluding that context is important for understanding the variable consequences of institutional reform on water governance practices as well as on social and environmental outcomes.

  11. SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook

    2012-01-01

    The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
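The FM recipe the abstract relies on can be sketched generically: build the Fisher matrix from model derivatives weighted by the measurement errors, invert it, and read forecast uncertainties off the diagonal. The toy two-parameter "SED" model, band grid, and error bars below are invented for illustration and bear no relation to the actual SED templates.

```python
# Toy Fisher-matrix forecast for two parameters (model, bands, and errors
# are hypothetical; this only illustrates the general FM recipe).

def model(theta, x):
    a, b = theta
    return a * x + b                     # stand-in for an SED model flux

def fisher(model, theta, xs, sigmas, eps=1e-6):
    """F_ij = sum_k dM_k/dtheta_i * dM_k/dtheta_j / sigma_k^2, the Fisher
    matrix for independent Gaussian errors; derivatives are taken by
    central finite differences so any model function can be plugged in."""
    n = len(theta)
    def deriv(i, x):
        up, dn = list(theta), list(theta)
        up[i] += eps
        dn[i] -= eps
        return (model(up, x) - model(dn, x)) / (2 * eps)
    return [[sum(deriv(i, x) * deriv(j, x) / s ** 2
                 for x, s in zip(xs, sigmas))
             for j in range(n)] for i in range(n)]

def forecast_sigmas_2x2(F):
    """Best-possible 1-sigma uncertainties: square roots of the diagonal of
    the inverse Fisher matrix (2x2 case inverted in closed form)."""
    (a, b), (c, d) = F
    det = a * d - b * c
    return (d / det) ** 0.5, (a / det) ** 0.5

xs = [1.0, 2.0, 3.0, 4.0, 5.0]           # hypothetical photometric bands
sigmas = [0.1] * 5                       # hypothetical flux errors
theta = [1.5, 0.3]                       # fiducial parameter values
sigma_a, sigma_b = forecast_sigmas_2x2(fisher(model, theta, xs, sigmas))
```

This is the fast survey-design loop the abstract describes: changing `xs` or `sigmas` (the experimental setup) immediately changes the forecast uncertainties, with no PDF reconstruction required.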

  12. Pediatric art preferences: countering the "one-size-fits-all" approach.

    Science.gov (United States)

    Nanda, Upali; Chanaud, Cheryl M; Brown, Linda; Hart, Robyn; Hathorn, Kathy

    2009-01-01

    three operational stages, so one should be careful before using the "one-size-fits-all" approach. Child art, typically used in pediatric wards, is better suited for younger children than for older children.

  13. Industry-Cost-Curve Approach for Modeling the Environmental Impact of Introducing New Technologies in Life Cycle Assessment.

    Science.gov (United States)

    Kätelhön, Arne; von der Assen, Niklas; Suh, Sangwon; Jung, Johannes; Bardow, André

    2015-07-07

    The environmental costs and benefits of introducing a new technology depend not only on the technology itself, but also on the responses of the market where substitution or displacement of competing technologies may occur. An internationally accepted method taking both technological and market-mediated effects into account, however, is still lacking in life cycle assessment (LCA). For the introduction of a new technology, we here present a new approach for modeling the environmental impacts within the framework of LCA. Our approach is motivated by consequential life cycle assessment (CLCA) and aims to contribute to the discussion on how to operationalize consequential thinking in LCA practice. In our approach, we focus on new technologies producing homogeneous products such as chemicals or raw materials. We employ the industry cost-curve (ICC) for modeling market-mediated effects. Thereby, we can determine substitution effects at a level of granularity sufficient to distinguish between competing technologies. In our approach, a new technology alters the ICC potentially replacing the highest-cost producer(s). The technologies that remain competitive after the new technology's introduction determine the new environmental impact profile of the product. We apply our approach in a case study on a new technology for chlor-alkali electrolysis to be introduced in Germany.

  14. Using Data-Mining Approaches for Wind Turbine Power Curve Monitoring: A Comparative Study

    DEFF Research Database (Denmark)

    Schlechtingen, Meik; Santos, Ilmar; Achiche, Sofiane

    2013-01-01

are built and their performance compared against literature. Recently developed adaptive neuro-fuzzy inference system (ANFIS) models are set up and their performance compared with the other models, using the same data. Literature models often neglect the influence of the ambient temperature and the wind direction. The ambient temperature can influence the power output by up to 20%. Nearby obstacles can lower the power output for certain wind directions. The approaches proposed in literature and the ANFIS models are compared by using wind speed only and two additional inputs. The comparison is based

  15. A learning curve approach to projecting cost and performance for photovoltaic technologies

    Science.gov (United States)

    Cody, George D.; Tiedje, Thomas

    1997-04-01

    The current cost of electricity generated by PV power is still extremely high with respect to power supplied by the utility grid, and there remain questions as to whether PV power can ever be competitive with electricity generated by fossil fuels. An objective approach to this important question was given in a previous paper by the authors which introduced analytical tools to define and project the technical/economic status of PV power from 1988 through the year 2010. In this paper, we apply these same tools to update the conclusions of our earlier study in the context of recent announcements by Amoco/Enron-Solarex of projected sales of PV power at rates significantly less than the US utility average.
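The learning-curve (experience-curve) relation underlying such projections is simple: unit cost falls by a fixed fraction with each doubling of cumulative production. A minimal sketch follows; the ~80% progress ratio and the cost figures are commonly quoted illustrative values, not numbers from this paper.

```python
import math

def projected_cost(c0, n0, n, progress_ratio):
    """Experience-curve projection: cost(n) = c0 * (n / n0) ** (-b), with
    b = -log2(progress_ratio), so each doubling of cumulative production n
    multiplies unit cost by progress_ratio."""
    b = -math.log2(progress_ratio)
    return c0 * (n / n0) ** (-b)

# Hypothetical example: unit cost after cumulative production grows
# 16-fold (four doublings) at an 80% progress ratio.
cost = projected_cost(c0=4.0, n0=1.0, n=16.0, progress_ratio=0.8)
```

Fitting `progress_ratio` to historical price-versus-cumulative-production data and extrapolating along this power law is the core of the projection tools the abstract refers to.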

  16. Learning curve approach to projecting cost and performance for photovoltaic technologies

    Science.gov (United States)

    Cody, George D.; Tiedje, Thomas

    1997-10-01

    The current cost of electricity generated by PV power is still extremely high with respect to power supplied by the utility grid, and there remain questions as to whether PV power can ever be competitive with electricity generated by fossil fuels. An objective approach to this important question was given in a previous paper by the authors which introduced analytical tools to define and project the technical/economic status of PV power from 1988 through the year 2010. In this paper, we apply these same tools to update the conclusions of our earlier study in the context of recent announcements by Amoco/Enron-Solar of projected sales of PV power at rates significantly less than the U.S. utility average.

  17. Physical Activity, Physical Fitness and Academic Achievement in Adolescents: A Self-Organizing Maps Approach

    Science.gov (United States)

    Pellicer-Chenoll, Maite; Garcia-Massó, Xavier; Morales, Jose; Serra-Añó, Pilar; Solana-Tramunt, Mònica; González, Luis-Millán; Toca-Herrera, José-Luis

    2015-01-01

    The relationship among physical activity, physical fitness and academic achievement in adolescents has been widely studied; however, controversy concerning this topic persists. The methods used thus far to analyse the relationship between these variables have included mostly traditional lineal analysis according to the available literature. The…

  18. On the Usefulness of a Multilevel Logistic Regression Approach to Person-Fit Analysis

    Science.gov (United States)

    Conijn, Judith M.; Emons, Wilco H. M.; van Assen, Marcel A. L. M.; Sijtsma, Klaas

    2011-01-01

    The logistic person response function (PRF) models the probability of a correct response as a function of the item locations. Reise (2000) proposed to use the slope parameter of the logistic PRF as a person-fit measure. He reformulated the logistic PRF model as a multilevel logistic regression model and estimated the PRF parameters from this…

  19. Goodness of Fit of Skills Assessment Approaches: Insights from Patterns of Real vs. Synthetic Data Sets

    Science.gov (United States)

    Beheshti, Behzad; Desmarais, Michel C.

    2015-01-01

    This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…

  20. A fitness landscape approach to technological complexity, modularity, and vertical disintegration

    NARCIS (Netherlands)

    Frenken, K.

    2006-01-01

    The biological evolution of complex organisms, in which the functioning of genes is interdependent, has been analysed as "hill-climbing" on NK fitness landscapes through random mutation and natural selection [Kauffman, S.A., 1993. The Origins of Order. Self-organization and Selection in Evolution.
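The NK model behind this "hill-climbing" picture is easy to state: each of N loci contributes a random fitness value that depends on its own state and the states of K neighbours, and adaptation proceeds by accepting single-locus mutations that raise mean fitness until a local peak is reached. A minimal sketch (lazily drawn contribution tables and a circular genome are implementation conveniences, not Kauffman's original code):

```python
import random

def make_nk_fitness(N, K, seed=0):
    """Fitness function of an NK landscape: locus i's contribution depends
    on itself and its K right-hand neighbours (circular genome), with
    i.i.d. uniform contributions drawn lazily and cached."""
    rng = random.Random(seed)
    tables = [{} for _ in range(N)]
    def fitness(genome):
        total = 0.0
        for i in range(N):
            key = tuple(genome[(i + j) % N] for j in range(K + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / N
    return fitness

def hill_climb(fitness, genome):
    """Greedy adaptive walk: flip single loci, keeping any flip that raises
    fitness, until the genome sits on a local peak."""
    best = fitness(genome)
    improved = True
    while improved:
        improved = False
        for i in range(len(genome)):
            genome[i] ^= 1
            f = fitness(genome)
            if f > best:
                best, improved = f, True
            else:
                genome[i] ^= 1           # revert the unhelpful flip
    return genome, best

N, K = 12, 3
fit = make_nk_fitness(N, K, seed=42)
peak, peak_fitness = hill_climb(fit, [0] * N)
```

Raising K increases the interdependence between loci and makes the landscape more rugged (more, lower local peaks), which is the complexity/modularity lever the abstract's argument turns on.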

  1. Health and Fitness Courses in Higher Education: A Historical Perspective and Contemporary Approach

    Science.gov (United States)

    Bjerke, Wendy

    2013-01-01

    The prevalence of obesity among 18- to 24-year-olds has steadily increased. Given that the majority of young American adults are enrolled in colleges and universities, the higher education setting could be an appropriate environment for health promotion programs. Historically, health and fitness in higher education have been provided via…

  2. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple, and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and the replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10⁻⁶) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10⁻¹²). Performance of the Potts model (r = -0.73, p = 9.7×10⁻⁹) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion.

  3. Healthy Lifestyle Fitness Camp: A Summer Approach to Prevent Obesity in Low-Income Youth.

    Science.gov (United States)

    George, Gretchen Lynn; Schneider, Constance; Kaiser, Lucia

    2016-03-01

To examine the effect of participation in a summer camp focused on nutrition and fitness among low-income youth. In 2011-2012, overweight and obese youth (n = 126) from Fresno, CA participated in a free 6-week summer program, Healthy Lifestyle Fitness Camp (HLFC), which included 3 h/wk of nutrition education provided by University of California CalFresh and 3 hours of daily physical activity through Fresno Parks and Recreation. The researchers used repeated-measures ANOVA to examine changes in weight, waist circumference, and waist-to-height ratio (WHtR) between HLFC and the comparison group (n = 29). Significant pre-post WHtR reductions were observed in HLFC (0.64 to 0.61), pointing to summer camps of this kind as a promising approach to obesity prevention among low-income youth. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  4. The genetic basis of the fitness costs of antimicrobial resistance: a meta-analysis approach

    OpenAIRE

    Vogwill, Tom; MacLean, R. Craig

    2014-01-01

    The evolution of antibiotic resistance carries a fitness cost, expressed in terms of reduced competitive ability in the absence of antibiotics. This cost plays a key role in the dynamics of resistance by generating selection against resistance when bacteria encounter an antibiotic-free environment. Previous work has shown that the cost of resistance is highly variable, but the underlying causes remain poorly understood. Here, we use a meta-analysis of the published resistance literature to de...

  5. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    Science.gov (United States)

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often end in a local maximum.
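The optimization strategies named here can be sketched on a toy two-peak parameter surface (the surface, grids, and restart scheme are invented for illustration, not the authors' pipeline): greedy coordinate ascent gets trapped on whichever peak its start point drains to, and random restarts are one simple way out.

```python
import math
import random

def coordinate_ascent(score, grids, start):
    """Optimize one parameter at a time over its grid, repeating until no
    single-parameter change improves the score (a local maximum)."""
    params, best = list(start), score(start)
    improved = True
    while improved:
        improved = False
        for i, grid in enumerate(grids):
            for v in grid:
                trial = params[:i] + [v] + params[i + 1:]
                s = score(trial)
                if s > best:
                    best, params, improved = s, trial, True
    return params, best

def with_restarts(score, grids, n_restarts, seed=0):
    """Restart coordinate ascent from random grid points and keep the best
    local maximum found -- a simple escape from local performance maxima."""
    rng = random.Random(seed)
    best_p, best_s = None, float("-inf")
    for _ in range(n_restarts):
        p, s = coordinate_ascent(score, grids, [rng.choice(g) for g in grids])
        if s > best_s:
            best_p, best_s = p, s
    return best_p, best_s

def toy_score(p):
    """Two Gaussian bumps: a local maximum near (2, 2), the global one
    (height 2) near (8, 8)."""
    x, y = p
    return (math.exp(-((x - 2) ** 2 + (y - 2) ** 2) / 2.0)
            + 2.0 * math.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 2.0))

grids = [list(range(11)), list(range(11))]
params, best = with_restarts(toy_score, grids, n_restarts=20)
```

Visualizing `toy_score` over the grid shows exactly the multi-modal structure the abstract warns about: a single greedy run started near (2, 2) never reaches the global maximum.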

  6. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    Directory of Open Access Journals (Sweden)

    Christian Held

    2013-01-01

Full Text Available Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often end in a local maximum.

  7. Perceived social isolation, evolutionary fitness and health outcomes: a lifespan approach.

    Science.gov (United States)

    Hawkley, Louise C; Capitanio, John P

    2015-05-26

Sociality permeates each of the fundamental motives of human existence and plays a critical role in evolutionary fitness across the lifespan. Evidence for this thesis draws from research linking deficits in social relationships--as indexed by perceived social isolation (i.e. loneliness)--with adverse health and fitness consequences at each developmental stage of life. Outcomes include depression, poor sleep quality, impaired executive function, accelerated cognitive decline, unfavourable cardiovascular function, impaired immunity, altered hypothalamic-pituitary-adrenocortical activity, a pro-inflammatory gene expression profile, and earlier mortality. Gaps in this research are summarized with suggestions for future research. In addition, we argue that a better understanding of naturally occurring variation in loneliness, and of its physiological and psychological underpinnings, in non-human species may be a valuable direction for understanding the persistence of a 'lonely' phenotype in social species and its consequences for health and fitness. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  8. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    Science.gov (United States)

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
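The root integrated squared error index referred to here has a direct computational form: integrate the squared gap between the parametric item characteristic curve and its nonparametric estimate over the latent trait, weighted by the trait density, and take the square root. A sketch follows; the 2PL curve, the simple weighted grid standing in for quadrature, and the "nonparametric" stand-in curve are illustrative assumptions.

```python
import math

def icc_2pl(theta, a, b):
    """Parametric 2PL item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def rise(curve1, curve2, grid, weights):
    """Root integrated squared error between two ICCs over a weighted grid
    of latent-trait values (the weights play the role of the trait density)."""
    num = sum(w * (curve1(t) - curve2(t)) ** 2 for t, w in zip(grid, weights))
    return math.sqrt(num / sum(weights))

# Grid over theta in [-4, 4] with (unnormalized) standard-normal weights:
grid = [i / 10.0 for i in range(-40, 41)]
weights = [math.exp(-t * t / 2.0) for t in grid]

parametric = lambda t: icc_2pl(t, a=1.2, b=0.0)
shifted = lambda t: icc_2pl(t, a=1.2, b=0.5)   # stand-in "nonparametric" curve
index = rise(parametric, shifted, grid, weights)
```

A perfectly fitting item yields an index of zero; misfit (here mimicked by a difficulty shift) raises it, which is what makes the quantity usable as a goodness-of-fit index.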

  9. Simple approach to approximate predictions of the vapor–liquid equilibrium curve near the critical point and its application to Lennard-Jones fluids

    International Nuclear Information System (INIS)

    Staśkiewicz, B.; Okrasiński, W.

    2012-01-01

We propose a simple analytical form of the vapor–liquid equilibrium curve near the critical point for Lennard-Jones fluids. Coexistence density curves and vapor pressures have been determined using the Van der Waals and Dieterici equations of state. The described method employs Bernoulli differential equations, critical-exponent theory, and a form of Maxwell's criterion. This approach has not previously been used to obtain an analytical form of the phase curves as is done in this Letter. Lennard-Jones fluids have been considered for the analysis. Comparison with experimental data is made, and the accuracy of the method is described. -- Highlights: ► We propose a new analytical way to determine the VLE curve. ► A simple, mathematically straightforward form of the phase curves is presented. ► Comparison with experimental data is discussed. ► The accuracy of the method has been confirmed.

  10. Model-based Approach for Long-term Creep Curves of Alloy 617 for a High Temperature Gas-cooled Reactor

    International Nuclear Information System (INIS)

    Kim, Woo Gon; Yin, Song Nan; Kim, Yong Wan

    2008-01-01

Alloy 617 is a principal candidate alloy for high temperature gas-cooled reactor (HTGR) components because of its high creep rupture strength, its good corrosion behavior in simulated HTGR helium, and its sufficient workability. To describe a creep strain-time curve well, various constitutive equations have been proposed by Kachanov-Rabotnov (K-R), Andrade, Garofalo, Evans, Maruyama, and others. Among them, the K-R model has been used frequently, because it adequately captures both secondary creep, resulting from a balance between softening and hardening of the material, and tertiary creep, resulting from the appearance and acceleration of internal or external damage processes. For nickel-base alloys, it has been reported that tertiary creep may begin at low strain, and this tertiary stage may govern the total creep deformation. Therefore, the creep curve of nickel-based Alloy 617 should be predicted well by the K-R model, which can reflect tertiary creep. In this paper, long-term creep curves for Alloy 617 were predicted by using a nonlinear least-squares fitting (NLSF) method with the K-R model. A modified K-R model was introduced to fit the full creep curves well. The values of the λ and K parameters in the modified K-R model were obtained with stresses
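The NLSF idea can be sketched with a representative K-R-type creep curve. The exact modified K-R expression and its λ and K parameters are in the paper; the functional form, parameter values, and crude grid search below are illustrative assumptions only.

```python
# Hedged sketch: a representative K-R-type creep curve (NOT the paper's
# modified K-R expression) fitted by brute-force nonlinear least squares.

def creep_strain(t, t_r, eps_rate, C, lam):
    """Representative K-R-type creep curve: a steady-state (secondary) term
    plus a damage term whose slope blows up as t approaches the rupture
    time t_r; 0 < lam < 1 gives the accelerating tertiary tail."""
    return eps_rate * t + C * (1.0 - (1.0 - t / t_r) ** lam)

def fit_creep(ts, strains, t_r, eps_rate):
    """Crude nonlinear least squares by exhaustive grid search over (C, lam),
    standing in for a proper NLSF procedure."""
    best = None
    for i in range(1, 41):               # C in 0.005 .. 0.200
        C = 0.005 * i
        for j in range(1, 20):           # lam in 0.05 .. 0.95
            lam = 0.05 * j
            sse = sum((creep_strain(t, t_r, eps_rate, C, lam) - e) ** 2
                      for t, e in zip(ts, strains))
            if best is None or sse < best[0]:
                best = (sse, C, lam)
    return best[1], best[2]

# Synthetic "measured" curve with known parameters, then recover them:
t_r, eps_rate = 1000.0, 1e-5
ts = [50.0 * i for i in range(20)]       # 0 .. 950 (time units)
strains = [creep_strain(t, t_r, eps_rate, 0.05, 0.30) for t in ts]
C_fit, lam_fit = fit_creep(ts, strains, t_r, eps_rate)
```

In practice one would replace the grid search with a gradient-based nonlinear least-squares routine, but the recover-known-parameters check shown here is the same sanity test used when validating any NLSF setup.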

  11. A Fit-For-Purpose approach to Land Administration in Africa in support of the new 2030 Global Agenda

    DEFF Research Database (Denmark)

    Enemark, Stig

    2017-01-01

    on legacy approaches, have been fragmented and have not delivered the required pervasive changes and improvements at scale. The solutions have not helped the most needy - the poor and disadvantaged that have no security of tenure. In fact the beneficiaries have often been the elite and organizations...... involved in land grabbing. It is time to rethink the approaches. New solutions are required that can deliver security of tenure for all, are affordable and can be quickly developed and incrementally improved over time. The Fit-For-Purpose (FFP) approach to land administration has emerged to meet...... administration systems is the only viable solution to solving the global security of tenure divide. The FFP approach is flexible and includes the adaptability to meet the actual and basic needs of society today and having the capability to be incrementally improved over time. This will be triggered in response...

  12. Differential quantitative proteomics of Porphyromonas gingivalis by linear ion trap mass spectrometry: Non-label methods comparison, q-values and LOWESS curve fitting

    Science.gov (United States)

    Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray

    2007-01-01

Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label-free ways of calculating expression ratios in a classic "two-state" experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein-encoding open reading frames of Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to the proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error.
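The LOWESS procedure used to bound random error in the ratio data can be sketched as a tricube-weighted local linear smoother. This is a minimal sketch of the smoother itself (no robustness iterations, fixed neighbourhood fraction), not the authors' implementation.

```python
def lowess(xs, ys, frac=0.5):
    """Minimal LOWESS: for each x, fit a weighted straight line to the
    nearest frac*n points using tricube weights and evaluate it at x."""
    n = len(xs)
    k = max(2, int(frac * n))
    out = []
    for x in xs:
        h = sorted(abs(x - xj) for xj in xs)[k - 1] or 1e-12  # bandwidth
        w = [(1.0 - min(1.0, abs(x - xj) / h) ** 3) ** 3 for xj in xs]
        sw = sum(w)
        swx = sum(wi * xj for wi, xj in zip(w, xs))
        swy = sum(wi * yj for wi, yj in zip(w, ys))
        swxx = sum(wi * xj * xj for wi, xj in zip(w, xs))
        swxy = sum(wi * xj * yj for wi, xj, yj in zip(w, xs, ys))
        denom = sw * swxx - swx * swx
        if abs(denom) < 1e-12:
            out.append(swy / sw)          # degenerate: weighted mean
        else:
            b = (sw * swxy - swx * swy) / denom
            a = (swy - b * swx) / sw
            out.append(a + b * x)
    return out

# Sanity check: collinear data are reproduced exactly by any weighted
# linear fit, so the smoother returns the line itself.
xs = [float(i) for i in range(10)]
ys = [2.0 * x + 1.0 for x in xs]
smoothed = lowess(xs, ys, frac=0.5)
```

Applied as in the abstract, `xs` would be a signal-intensity (or rank) axis and `ys` the log ratios from a control technical replicate; the smoothed extremes then trace the envelope of random error.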

  13. New approach to the adjustment of group cross sections fitting integral measurements - 2

    International Nuclear Information System (INIS)

    Chao, Y.A.

    1980-01-01

The method developed in the first paper, concerning the fitting of group cross sections to integral measurements, is generalized to cover the case in which the source of the extracted negligence discrepancy cannot be identified and the theoretical relation between the integral and differential measurements is itself subject to uncertainty. The question of how, in such a case, to divide the negligence discrepancy between the integral and differential data is resolved. Application to a specific problem with real experimental data is shown as a demonstration of the method. 4 refs

  14. Effect of the SOS response on the mean fitness of unicellular populations: a quasispecies approach.

    Science.gov (United States)

    Kama, Amit; Tannenbaum, Emmanuel

    2010-11-30

    The goal of this paper is to develop a mathematical model that analyzes the selective advantage of the SOS response in unicellular organisms. To this end, this paper develops a quasispecies model that incorporates the SOS response. We consider a unicellular, asexually replicating population of organisms, whose genomes consist of a single, double-stranded DNA molecule, i.e. one chromosome. We assume that repair of post-replication mismatched base-pairs occurs with probability , and that the SOS response is triggered when the total number of mismatched base-pairs is at least . We further assume that the per-mismatch SOS elimination rate is characterized by a first-order rate constant . For a single fitness peak landscape where the master genome can sustain up to mismatches and remain viable, this model is analytically solvable in the limit of infinite sequence length. The results, which are confirmed by stochastic simulations, indicate that the SOS response does indeed confer a fitness advantage to a population, provided that it is only activated when DNA damage is so extensive that a cell will die if it does not attempt to repair its DNA.

  15. Multiple organ definition in CT using a Bayesian approach for 3D model fitting

    Science.gov (United States)

    Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.

    1995-08-01

    Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.

  16. Recent trends in application of multivariate curve resolution approaches for improving gas chromatography-mass spectrometry analysis of essential oils.

    Science.gov (United States)

    Jalali-Heravi, Mehdi; Parastar, Hadi

    2011-08-15

    Essential oils (EOs) are valuable natural products that are popular nowadays in the world due to their effects on the health conditions of human beings and their role in preventing and curing diseases. In addition, EOs have a broad range of applications in foods, perfumes, cosmetics and human nutrition. Among different techniques for analysis of EOs, gas chromatography-mass spectrometry (GC-MS) is the most important one in recent years. However, there are some fundamental problems in GC-MS analysis including baseline drift, spectral background, noise, low S/N (signal to noise) ratio, changes in the peak shapes and co-elution. Multivariate curve resolution (MCR) approaches cope with ongoing challenges and are able to handle these problems. This review focuses on the application of MCR techniques for improving GC-MS analysis of EOs published between January 2000 and December 2010. In the first part, the importance of EOs in human life and their relevance in analytical chemistry is discussed. In the second part, an insight into some basics needed to understand prospects and limitations of the MCR techniques are given. In the third part, the significance of the combination of the MCR approaches with GC-MS analysis of EOs is highlighted. Furthermore, the commonly used algorithms for preprocessing, chemical rank determination, local rank analysis and multivariate resolution in the field of EOs analysis are reviewed. Copyright © 2011 Elsevier B.V. All rights reserved.
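
    The core of many MCR methods reviewed here is a bilinear decomposition of the data matrix X (elution time x m/z channel) into concentration profiles C and spectra S, refined by alternating least squares under non-negativity. The following is a minimal illustrative sketch of MCR-ALS on synthetic co-eluting peaks, not the actual algorithms or data from the review:

```python
import numpy as np

def mcr_als(X, n_components, n_iter=100):
    """Bare-bones MCR-ALS sketch: X ~ C @ S.T, with non-negativity
    imposed by clipping after each alternating least-squares step."""
    rng = np.random.default_rng(0)
    S = rng.random((X.shape[1], n_components))           # spectral profiles
    for _ in range(n_iter):
        C = np.clip(X @ np.linalg.pinv(S.T), 0, None)    # concentration profiles
        S = np.clip(X.T @ np.linalg.pinv(C.T), 0, None)  # updated spectra
    return C, S

# Two overlapping Gaussian elution peaks with distinct 3-channel "spectra"
t = np.linspace(0, 1, 50)
C_true = np.stack([np.exp(-((t - 0.4) / 0.1) ** 2),
                   np.exp(-((t - 0.6) / 0.1) ** 2)], axis=1)
S_true = np.array([[1.0, 0.2, 0.0], [0.0, 0.3, 1.0]]).T
X = C_true @ S_true.T
C, S = mcr_als(X, 2)
print(np.linalg.norm(X - C @ S.T) / np.linalg.norm(X))   # small relative residual
```

    Real applications add the preprocessing, rank estimation and local rank constraints discussed in the review; this sketch only shows the resolution step itself.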

  17. A new approach for determining phase response curves reveals that Purkinje cells can act as perfect integrators.

    Directory of Open Access Journals (Sweden)

    Elena Phoka

    2010-04-01

    Full Text Available Cerebellar Purkinje cells display complex intrinsic dynamics. They fire spontaneously, exhibit bistability, and via mutual network interactions are involved in the generation of high frequency oscillations and travelling waves of activity. To probe the dynamical properties of Purkinje cells we measured their phase response curves (PRCs. PRCs quantify the change in spike phase caused by a stimulus as a function of its temporal position within the interspike interval, and are widely used to predict neuronal responses to more complex stimulus patterns. Significant variability in the interspike interval during spontaneous firing can lead to PRCs with a low signal-to-noise ratio, requiring averaging over thousands of trials. We show using electrophysiological experiments and simulations that the PRC calculated in the traditional way by sampling the interspike interval with brief current pulses is biased. We introduce a corrected approach for calculating PRCs which eliminates this bias. Using our new approach, we show that Purkinje cell PRCs change qualitatively depending on the firing frequency of the cell. At high firing rates, Purkinje cells exhibit single-peaked, or monophasic PRCs. Surprisingly, at low firing rates, Purkinje cell PRCs are largely independent of phase, resembling PRCs of ideal non-leaky integrate-and-fire neurons. These results indicate that Purkinje cells can act as perfect integrators at low firing rates, and that the integration mode of Purkinje cells depends on their firing rate.
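
    The claim that a perfect (non-leaky) integrator has a phase-independent PRC can be verified directly: for dv/dt = I with threshold theta, a brief pulse advances the next spike by the same amount wherever it lands in the interspike interval. A minimal sketch (not the authors' corrected estimation procedure):

```python
import numpy as np

def if_prc(phase, pulse=0.05, I=1.0, theta=1.0):
    """Phase advance of a non-leaky integrate-and-fire neuron
    (dv/dt = I, spike at v = theta, reset to 0) caused by a pulse that
    instantaneously adds `pulse` to v at the given phase in [0, 1).
    Valid while v + pulse <= theta at the stimulus time."""
    T = theta / I                          # unperturbed interspike interval
    v_at_pulse = I * phase * T             # membrane potential at the pulse
    t_spike = phase * T + (theta - (v_at_pulse + pulse)) / I
    return (T - t_spike) / T               # normalized phase advance

phases = np.linspace(0.05, 0.95, 10)
prc = np.array([if_prc(p) for p in phases])
print(prc)   # flat: every entry equals pulse/theta = 0.05
```

    This flatness is exactly the low-firing-rate Purkinje cell behaviour reported in the abstract; a leaky neuron would instead give a phase-dependent PRC.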

  18. Virus fitness differences observed between two naturally occurring isolates of Ebola virus Makona variant using a reverse genetics approach.

    Science.gov (United States)

    Albariño, César G; Guerrero, Lisa Wiggleton; Chakrabarti, Ayan K; Kainulainen, Markus H; Whitmer, Shannon L M; Welch, Stephen R; Nichol, Stuart T

    2016-09-01

    During the large outbreak of Ebola virus disease that occurred in Western Africa from late 2013 to early 2016, several hundred Ebola virus (EBOV) genomes were sequenced and the virus's genetic drift analyzed. In a previous report, we described an efficient reverse genetics system designed to generate recombinant EBOV based on a Makona variant isolate obtained in 2014. Using this system, we characterized the replication and fitness of 2 isolates of the Makona variant. These virus isolates are nearly identical at the genetic level, but have single amino acid differences in the VP30 and L proteins. The potential effects of these differences were tested using minigenomes and recombinant viruses. The results obtained with this approach are consistent with the role of VP30 and L as components of the EBOV RNA replication machinery. Moreover, the 2 isolates exhibited clear fitness differences in competitive growth assays. Published by Elsevier Inc.

  19. An Epistemology of Leadership Perspective: Examining the Fit for a Critical Pragmatic Approach

    Science.gov (United States)

    Bourgeois, Nichole

    2011-01-01

    In this article the author examines the meaning of epistemology in relation to educational leadership. Argued is the position that generalizing the intent and tendencies of modernistic and postmodernistic approaches to educational reform and leadership preparation makes space for a critical pragmatic approach. Critical pragmatists as…

  20. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu [Department of Chemistry and Chemical Biology, University of New Mexico, Albuquerque, New Mexico 87131 (United States)

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  1. A hands-on approach for fitting long-term survival models under the GAMLSS framework.

    Science.gov (United States)

    de Castro, Mário; Cancho, Vicente G; Rodrigues, Josemar

    2010-02-01

    In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale, and shape (GAMLSS) framework to the fitting of long-term survival models. In this work the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
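
    In this family of long-term survival models, the population survival plateaus at the cured fraction. One common parameterization of the negative binomial cure-rate (promotion-time) model is sketched below; the exact parameterization and link functions in the paper may differ, so treat this as illustrative only:

```python
import math

def pop_survival(t, theta, alpha, base_cdf):
    # Long-term survival when the number of competing causes is negative
    # binomial. A standard form (assumed here, not taken from the paper):
    #     S_pop(t) = (1 + alpha * theta * F(t)) ** (-1 / alpha),
    # which tends to the Poisson case exp(-theta * F(t)) as alpha -> 0.
    return (1.0 + alpha * theta * base_cdf(t)) ** (-1.0 / alpha)

F = lambda t: 1.0 - math.exp(-t)          # exponential baseline cdf
theta, alpha = 1.5, 0.5
cure_fraction = (1.0 + alpha * theta) ** (-1.0 / alpha)   # plateau as t -> inf
s10 = pop_survival(10.0, theta, alpha, F)
print(round(cure_fraction, 4), round(s10, 4))   # both ~ 0.3265
```

    Linking the cure fraction to covariates, as the paper does within GAMLSS, then amounts to modelling this plateau as a function of patient characteristics.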

  2. Predicting diabetes mellitus using SMOTE and ensemble machine learning approach: The Henry Ford ExercIse Testing (FIT) project.

    Science.gov (United States)

    Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif

    2017-01-01

    Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data of 32,555 patients who are free of any known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients have developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an Ensembling-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of the imbalance class of the constructed model was handled by Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive model classifier was improved by the Ensemble machine learning approach using the Vote method with three Decision Trees (Naïve Bayes Tree, Random Forest, and Logistic Model Tree) and achieved high accuracy of prediction (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
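
    The SMOTE step used to rebalance the diabetes class can be summarized in a few lines: each synthetic minority sample is an interpolation between a minority sample and one of its k nearest minority neighbours. This is a minimal numpy sketch of that core idea, not the implementation used in the study:

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: synthesize minority-class samples by
    interpolating each chosen sample toward one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise squared distances within the minority class
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :k]           # k nearest neighbours
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)                      # random minority sample
        j = nn[i, rng.integers(min(k, n - 1))]   # random neighbour of it
        gap = rng.random()                       # interpolation fraction
        synth.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.asarray(synth)

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote(X_min, n_new=6, k=2)
print(new.shape)   # (6, 2)
```

    The synthetic points lie on segments between real minority samples, which is why SMOTE tends to generalize better than simply duplicating minority records.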

  4. Improving the reliability of POD curves in NDI methods using a Bayesian inversion approach for uncertainty quantification

    Science.gov (United States)

    Ben Abdessalem, A.; Jenson, F.; Calmon, P.

    2016-02-01

    This contribution provides an example of the possible advantages of adopting a Bayesian inversion approach to uncertainty quantification in nondestructive inspection methods. In such problems, the uncertainty associated with the random parameters is not always known and needs to be characterised from scattering signal measurements. The uncertainties may then be correctly propagated in order to determine a reliable probability of detection curve. To this end, we establish a general Bayesian framework based on a non-parametric maximum likelihood function formulation and priors from expert knowledge. However, the presented inverse problem is time-consuming and computationally intensive. To cope with this difficulty, we replace the real model with a surrogate in order to speed up model evaluation and make the problem computationally feasible. Least squares support vector regression is adopted as the metamodelling technique due to its robustness in dealing with non-linear problems. We illustrate the usefulness of this methodology through the inspection of a tube with an enclosed defect using an ultrasonic method.

  5. The Impacts of Technical Progress on Sulfur Dioxide Kuznets Curve in China: A Spatial Panel Data Approach

    Directory of Open Access Journals (Sweden)

    Zhimin Zhou

    2017-04-01

    Full Text Available This paper aims to reveal the nexus between sulfur dioxide (SO2) emissions and income, as well as the effects of technical progress on SO2 emissions in China, based on the environmental Kuznets curve (EKC) hypothesis. The spatial panel technique is used to avoid coefficient estimates biased by neglected spatial dependence. Using provincial panel data for China from 2004 to 2014, this is the first study to find an inverse-N trajectory for the relationship between SO2 emissions and economic growth and to confirm the beneficial impact of technical advancement on SO2 emission abatement. The empirical results also suggest that industrial structure change is an important driving force of the SO2 EKC. In addition, the direct and spillover effects of the determinants of sulfur emissions are clarified and estimated by an appropriate approach. Finally, we check the stability of our conclusions on the EKC shape for SO2 and on technical progress effects when controlling for different variables and specifications, finding that the turning points are sensitive to variable selection.

  6. An empirical examination of the Environmental Kuznets Curve hypothesis for carbon dioxide emissions in Ghana: an ARDL approach

    Directory of Open Access Journals (Sweden)

    Twerefou Daniel Kwabena

    2016-12-01

    Full Text Available The Environmental Kuznets Curve (EKC) hypothesis postulates an inverted U-shaped relationship between different pollutants and economic growth. In Ghana, as in many other developing countries, few studies confirm or refute the EKC hypothesis for CO2 emissions or examine the factors that drive them. This work aims to bridge this knowledge gap by addressing these two questions using data from 1970 to 2010 and the Auto Regressive Distributed Lag (ARDL) Bounds Testing approach. The results instead suggest a U-shaped relationship between per capita GDP and CO2 emissions per capita, indicating the non-existence of the EKC for CO2 in Ghana. This implies that further increases in per capita Gross Domestic Product (GDP) will only be associated with increases in CO2 emissions, as the income per capita turning point of about $624 at constant 2000 prices occurred between 1996 and 1997. Furthermore, our results reveal that energy consumption and trade openness are positive long-run drivers of CO2 emissions. It is therefore recommended that trade liberalization policies ensure the use of cleaner technologies and products, while investment in cleaner energy alternatives could help reduce CO2 emissions. We also recommend implementation of the Low Carbon Development Strategy, which integrates development and climate change mitigation actions.
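
    The turning-point arithmetic behind such EKC tests is simple: regress emissions on income and income squared, and the extremum lies at -b1/(2*b2), with the sign of b2 distinguishing an inverted-U from a U shape. A sketch on synthetic data (the paper itself uses ARDL cointegration, not this plain regression):

```python
import numpy as np

def ekc_turning_point(income, emissions):
    """Fit emissions = b0 + b1*income + b2*income^2 and return the
    turning point -b1/(2*b2) and the quadratic coefficient b2.
    A positive b2 indicates a U shape, as found for Ghana here."""
    b2, b1, b0 = np.polyfit(income, emissions, 2)
    return -b1 / (2.0 * b2), b2

# synthetic U-shaped data with a minimum placed at income = 624
income = np.linspace(100, 2000, 50)
emissions = 0.5 + 1e-6 * (income - 624.0) ** 2
tp, b2 = ekc_turning_point(income, emissions)
print(round(tp, 1), b2 > 0)   # 624.0 True
```

    With real series the quadratic is fitted in a cointegrating framework and income is usually in logs, but the location and sign logic of the turning point is the same.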

  7. A predictive approach to fitness-for-service assessment of pitting corrosion

    International Nuclear Information System (INIS)

    Shekari, Elahe; Khan, Faisal; Ahmed, Salim

    2016-01-01

    Pitting corrosion is a localized corrosion that often causes leak and failure of process components. The aim of this work is to present a new fitness-for-service (FFS) assessment methodology for process equipment to track and predict pitting corrosion. In this methodology, pit density is modeled using a non-homogenous Poisson process and induction time for pit initiation is simulated as the realization of a Weibull process. The non-homogenous Markov process is used to estimate maximum pit depth, considering that only the current state of the damage influences its future development. Subsequently, the distributions of the operating pressure and the estimated burst pressure of the defected component are integrated with Monte Carlo simulations and First Order Second Moment (FOSM) method to calculate the reliability index and probability of failure. This methodology provides a more realistic failure assessment and enables consideration of uncertainty associated with estimating pit characteristics. The practical application of the proposed model is demonstrated using a piping case study. - Highlights: • A new model to estimate maximum pit depth and pit density as two main pit characteristics. • Integrating maximum pit depth with failure analysis considering allowable pressure of defected component. • Time dependent failure analysis to determine the remaining life.
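
    The final failure-probability step described in the highlights can be illustrated with a tiny Monte Carlo limit-state check: sample operating pressure and pit depth, degrade the burst pressure with depth, and count how often burst falls below operating pressure. All distributions and constants below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def failure_probability(n=100_000, seed=0):
    """Monte Carlo sketch of a fitness-for-service limit state:
    fail when pit-degraded burst pressure < operating pressure.
    Distributions are hypothetical placeholders."""
    rng = np.random.default_rng(seed)
    p_op = rng.normal(10.0, 0.5, n)            # operating pressure, MPa
    depth = rng.weibull(2.0, n) * 1.2          # max pit depth, mm
    wall = 6.0                                 # wall thickness, mm
    # burst pressure linearly degraded by the deepest pit
    p_burst = 25.0 * np.clip(1.0 - depth / wall, 0.0, None)
    return float(np.mean(p_burst < p_op))

pf = failure_probability()
print(pf)   # rare-event probability for these illustrative inputs
```

    The paper additionally models pit initiation as a non-homogeneous Poisson process and depth growth as a Markov process, which would replace the static depth draw above.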

  8. FIT ANALYSIS OF INDOSAT DOMPETKU BUSINESS MODEL USING A STRATEGIC DIAGNOSIS APPROACH

    Directory of Open Access Journals (Sweden)

    Fauzi Ridwansyah

    2015-09-01

    Full Text Available Mobile payment is an industry response to global and regional technological drivers, as well as national socio-economic drivers, in the development of a less-cash society. The purposes of this study were 1) to identify the positioning of PT. Indosat in responding to the Indonesian mobile payment market, 2) to analyze the fit of Indosat's internal capabilities and business model with environmental turbulence, and 3) to formulate the optimum mobile payment business model development design for Indosat. The method used in this study was a combination of qualitative and quantitative analysis through in-depth interviews with purposive judgment sampling. The analysis tools used were the Business Model Canvas (BMC) and Ansoff's Strategic Diagnosis. The interviewees were representatives of PT. Indosat internal management and mobile payment business value chain stakeholders. Based on the BMC mapping, analyzed with the strategic diagnosis model, a considerable gap (>1) was found between the current market environment and Indosat's strategic aggressiveness on the one hand and the expected future level of environmental turbulence on the other. Therefore, changes in competitive strategy need to include 1) developing a new customer segment, 2) shifting the value proposition toward the extensification of mobile payment, 3) monetizing an effective value proposition, and 4) integrating effective collaboration to harmonize the company's objective with the government's vision. Keywords: business model canvas, Indosat, mobile payment, less cash society, strategic diagnosis

  9. One size does not fit all : An approach for differentiated supply chain management

    OpenAIRE

    Beck, Patrick; Hofmann, Erik; Stölzle, Wolfgang

    2012-01-01

    Supply chain management (SCM) has developed from an object of operational optimization into a strategic weapon for distinction from competitors. Dynamically changing and strongly varying customer needs demand a differentiated SCM approach. Supply chain differentiation (SCD) plans and designs supply chains based on customer needs, as increasingly demanded by SCM researchers. Therefore SCD offers a possibility to increase SCM effectiveness. While practitioners are highly interested in SCD, acad...

  10. Group fitness activities for the elderly: an innovative approach to reduce falls and injuries.

    Science.gov (United States)

    Bianco, Antonino; Patti, Antonino; Bellafiore, Marianna; Battaglia, Giuseppe; Sahin, Fatma Nese; Paoli, Antonio; Cataldo, Maria Concetta; Mammina, Caterina; Palma, Antonio

    2014-04-01

    The aim of this study was to examine the opportunity to adopt, for the elderly, already validated functional ability tests to better understand how to prevent falls and injuries and to better plan group fitness activities such as ballroom dance classes (e.g., waltz, polka, mazurka). A cross-sectional study was conducted. The Berg Balance Scale (BBS) and the Barthel Index (BI) were administered, and the occurrence of falls during the previous 2 years was evaluated by anamnesis. One hundred and twenty-two elderly subjects living in the city of Palermo participated in the study. According to the anamnesis, subjects were divided into two groups: an experimental group (EG) and a control group (CG). The EG consisted of 75 subjects attending ballroom dancing classes (73.0 ± 5.6 years, 26.1 ± 3.9 BMI), while the CG included 47 volunteers (74.3 ± 5.4 years, 26.8 ± 4.4 BMI). A threshold of 70% for both scales (BBS-70% and BI-70%) was set, according to the aims of the study. STATISTICA software was used to perform an unpaired t test, with P < 0.05 considered statistically significant. The BI and BBS of the CG were 76.7 ± 33.08 and 30.9 ± 14.9, respectively, while the BI and BBS of the EG were 98.1 ± 6.9 and 50.5 ± 54. In the EG the BBS-70% threshold was met in 96.0% of cases, compared with 27.6% in the CG. The BI showed a similar trend: in the EG the BI-70% threshold was met in 98.6% of cases, versus 70.2% in the CG. Moreover, only 36.0% of the EG reported previous falls, while 53.2% of the CG reported falls during the same period. The BBS seems to be a valid and reliable tool that can also be adopted by professionals in the ballroom dancing sector (e.g., waltz, polka and mazurka classes). Instructors may evaluate the functional ability of their attendees through the BBS to easily obtain more information and better plan ballroom dance classes. Moreover, we highlight that these conclusions need to be supported by other studies with different

  11. What are the key drivers of MAC curves? A partial-equilibrium modelling approach for the UK

    International Nuclear Information System (INIS)

    Kesicki, Fabian

    2013-01-01

    Marginal abatement cost (MAC) curves are widely used for the assessment of costs related to CO2 emissions reduction in environmental economics, as well as in domestic and international climate policy. Several meta-analyses and model comparisons have previously been performed that aim to identify the causes of the wide range of MAC curves. Most of these concentrate on general equilibrium models with a focus on aspects such as specific model type and technology learning, while other important aspects remain almost unconsidered, including the availability of abatement technologies and the level of discount rates. This paper addresses the influence of several key parameters on MAC curves for the United Kingdom and the year 2030. A technology-rich energy system model, UK MARKAL, is used to derive the MAC curves. The results of this study show that MAC curves are robust even to extreme fossil fuel price changes, while uncertainty around the choice of discount rate, the availability of key abatement technologies and the demand level were singled out as the most important influencing factors. By using a different model type and studying a wider range of influencing factors, this paper contributes to the debate on the sensitivity of MAC curves. - Highlights: ► A partial-equilibrium model is employed to test key sensitivities of MAC curves. ► MAC curves are found to be robust to wide-ranging changes in fossil fuel prices. ► The most influential factors are the discount rate and the availability of key technologies. ► Further important uncertainty in MAC curves is related to demand changes.

  12. Combining Approach in Stages with Least Squares for fits of data in hyperelasticity

    Science.gov (United States)

    Beda, Tibi

    2006-10-01

    The present work concerns a method of piecewise (block-by-block) continuous approximation of a continuous function: a method combining the Approach in Stages with finite-domain Least Squares. An identification procedure by sub-domains determines the basic generating functions step by step, permitting their weighting effects to be assessed. This procedure gives control over the signs, and to some extent the optimal values, of the estimated parameters, and consequently provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).

  13. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    Science.gov (United States)

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
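
    The first stage of these two-stage approaches is derivative estimation from noisy trajectories. A GLLA-flavoured sketch, simplified to a plain local linear least-squares fit over a short window (not Boker et al.'s exact weighting), shows the idea:

```python
import numpy as np

def local_linear_derivatives(y, dt, window=5):
    """Estimate the smoothed level and first derivative at each interior
    time point by least-squares fitting a line to a short window of
    observations (window size plays the role of embedding dimension)."""
    half = window // 2
    t = (np.arange(window) - half) * dt
    A = np.stack([np.ones(window), t], axis=1)   # design matrix [1, t]
    coef = np.linalg.pinv(A)                     # maps a window -> (level, slope)
    est = [coef @ y[i - half:i + half + 1] for i in range(half, len(y) - half)]
    return np.asarray(est)                       # columns: level, dy/dt

dt = 0.01
tt = np.arange(0, 2, dt)
y = np.sin(tt)
est = local_linear_derivatives(y, dt)
# estimated slopes should track cos(t) over the interior points
err = np.max(np.abs(est[:, 1] - np.cos(tt[2:-2])))
print(err < 1e-2)   # True
```

    In the second stage, these estimated derivatives are regressed on the state variables to fit the ODE, with mixed effects capturing person-level parameter differences.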

  14. Meta-analysis of single-arm survival studies: a distribution-free approach for estimating summary survival curves with random effects.

    Science.gov (United States)

    Combescure, Christophe; Foucher, Yohann; Jackson, Daniel

    2014-07-10

    In epidemiologic studies and clinical trials with time-dependent outcomes (for instance death or disease progression), survival curves are used to describe the risk of the event over time. In meta-analyses of studies reporting a survival curve, the most informative finding is a summary survival curve. In this paper, we propose a method to obtain a distribution-free summary survival curve by expanding the product-limit estimator of survival for aggregated survival data. The extension of DerSimonian and Laird's methodology for multiple outcomes is applied to account for between-study heterogeneity. The I² and H² statistics are used to quantify the impact of the heterogeneity in the published survival curves. A statistical test for between-strata comparison is proposed, with the aim of exploring study-level factors potentially associated with survival. The performance of the proposed approach is evaluated in a simulation study. Our approach is also applied to synthesize the survival of untreated patients with hepatocellular carcinoma from aggregate data of 27 studies and to synthesize the graft survival of kidney transplant recipients from individual data from six hospitals. Copyright © 2014 John Wiley & Sons, Ltd.
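
    The building block being expanded here is the product-limit (Kaplan-Meier) estimator, S(t_k) = prod over j <= k of (1 - d_j / n_j). A minimal sketch with one event per listed time and no censoring (the paper's random-effects synthesis across studies is much richer than this):

```python
def product_limit(event_times, n_at_risk_start):
    """Product-limit survival estimate: at each event time, multiply the
    running survival by (1 - deaths / number at risk). Here each listed
    time carries one event and nobody is censored, so n counts down."""
    surv, s, n = [], 1.0, n_at_risk_start
    for t in sorted(event_times):
        s *= 1.0 - 1.0 / n
        n -= 1
        surv.append((t, s))
    return surv

curve = product_limit([2.0, 5.0, 7.0, 11.0], n_at_risk_start=4)
print(curve)   # survival steps: 0.75, 0.5, 0.25, 0.0
```

    With aggregated data from published curves, the d_j and n_j are reconstructed from the reported survival probabilities and numbers at risk rather than observed directly.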

  15. Fit for the Future? A New Approach in the Debate about What Makes Healthcare Systems Really Sustainable

    Directory of Open Access Journals (Sweden)

    Matthias Fischer

    2014-12-01

    Full Text Available As healthcare systems face enormous challenges, sustainability is seen as a crucial requirement for making them fit for the future. However, there is no consensus with regard to either the definition of the term or the factors that characterize a “sustainable healthcare system”. Therefore, the aim of this article is twofold. First, it gives examples of the existing literature about sustainable healthcare systems and analyzes this literature with regard to its understanding of sustainability and the strengths and weaknesses of the different approaches. The article then identifies crucial factors for sustainable healthcare systems, and the result, a conceptual framework consisting of five distinct and interacting factors, can be seen as a starting point for further research.

  16. A regret theory approach to decision curve analysis: a novel method for eliciting decision makers' preferences and decision-making.

    Science.gov (United States)

    Tsalatsanis, Athanasios; Hozo, Iztok; Vickers, Andrew; Djulbegovic, Benjamin

    2010-09-16

    Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1), and analytical, deliberative process (system 2), thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or no treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat) vs. "commissions" (e.g. treating unnecessary) and decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed it in terms of probability of disease. We present a novel method for eliciting decision maker's preferences and an alternative derivation of DCA based on regret theory. 
Our approach may be intuitively more appealing to a decision-maker, particularly
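
    The net-benefit quantity that the regret formulation is shown to reproduce is the standard decision-curve expression, NB = TP/n - FP/n * p_t/(1 - p_t), compared across strategies at a chosen threshold probability. A short sketch with illustrative counts (not data from the paper):

```python
def net_benefit(tp, fp, n, p_threshold):
    """Decision-curve net benefit at threshold probability p_t:
        NB = TP/n - FP/n * p_t / (1 - p_t).
    The weight p_t/(1-p_t) encodes how many false positives one would
    trade for a true positive at that threshold."""
    w = p_threshold / (1.0 - p_threshold)
    return tp / n - w * fp / n

# hypothetical model: 80 true and 40 false positives among 1000 patients
nb_model = net_benefit(80, 40, 1000, p_threshold=0.2)
# treat-all strategy at 10% prevalence: 100 TP, 900 FP
nb_treat_all = net_benefit(100, 900, 1000, p_threshold=0.2)
print(round(nb_model, 3), round(nb_treat_all, 3))   # 0.07 -0.125
```

    The paper's Net Expected Regret Difference is proved equivalent to differences in this quantity, so ranking strategies by either gives the same decision.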

  17. A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making

    Directory of Open Access Journals (Sweden)

    Vickers Andrew

    2010-09-01

    Full Text Available Abstract Background Decision curve analysis (DCA has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1, and analytical, deliberative process (system 2, thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA. Methods First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or no treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measure the difference in net expected regret. Finally, we employ the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision-maker. Results We developed a novel dual visual analog scale to describe the relationship between regret associated with "omissions" (e.g. failure to treat vs. "commissions" (e.g. treating unnecessary and decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefits as described in the original DCA. Based on the concept of acceptable regret we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed it in terms of probability of disease. Conclusions We present a novel method for eliciting decision maker's preferences and an alternative derivation of DCA based on regret theory. 
Our approach may
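
Since the Net Expected Regret Difference is proved equivalent to net benefit, the comparison of strategies can be illustrated with the standard DCA formula. A minimal sketch with hypothetical counts (not the paper's data):

```python
def net_benefit(tp, fp, n, p_t):
    """DCA net benefit at threshold probability p_t: the true-positive rate
    minus the false-positive rate weighted by the odds p_t / (1 - p_t)."""
    return tp / n - (fp / n) * (p_t / (1.0 - p_t))

# Hypothetical cohort of 100: model-guided treatment vs. treating everyone.
nb_model = net_benefit(tp=30, fp=10, n=100, p_t=0.2)      # approx. 0.275
nb_treat_all = net_benefit(tp=40, fp=60, n=100, p_t=0.2)  # approx. 0.25
```

At this threshold the model-guided strategy has the higher net benefit, so by the paper's equivalence result it also has the lower net expected regret.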

  18. Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: A discussion and proposal for an alternative approach

    International Nuclear Information System (INIS)

    Halligan, Steve; Altman, Douglas G.; Mallett, Susan

    2015-01-01

    The objectives are to describe the disadvantages of the area under the receiver operating characteristic curve (ROC AUC) to measure diagnostic test performance and to propose an alternative based on net benefit. We use a narrative review supplemented by data from a study of computer-assisted detection for CT colonography. We identified problems with ROC AUC. Confidence scoring by readers was highly non-normal, and score distribution was bimodal. Consequently, ROC curves were highly extrapolated with AUC mostly dependent on areas without patient data. AUC depended on the method used for curve fitting. ROC AUC does not account for prevalence or different misclassification costs arising from false-negative and false-positive diagnoses. Change in ROC AUC has little direct clinical meaning for clinicians. An alternative analysis based on net benefit is proposed, based on the change in sensitivity and specificity at clinically relevant thresholds. Net benefit incorporates estimates of prevalence and misclassification costs, and it is clinically interpretable since it reflects changes in correct and incorrect diagnoses when a new diagnostic test is introduced. ROC AUC is most useful in the early stages of test assessment whereas methods based on net benefit are more useful to assess radiological tests where the clinical context is known. Net benefit is more useful for assessing clinical impact. (orig.)

  19. Adiabatic potential-energy curves of long-range Rydberg molecules: Two-electron R -matrix approach

    Czech Academy of Sciences Publication Activity Database

    Tarana, Michal; Čurík, Roman

    2016-01-01

    Roč. 93, č. 1 (2016), 012515 ISSN 0556-2791 R&D Projects: GA ČR(CZ) GP14-15989P Institutional support: RVO:61388955 Keywords : adiabatic-potential-energy curves * Rydberg molecules * theoretical chemistry Subject RIV: CF - Physical ; Theoretical Chemistry

  1. Estimation of surface water quality in a Yazoo River tributary using the duration curve and recurrence interval approach

    Science.gov (United States)

    Ying Ouyang; Prem B. Parajuli; Daniel A. Marion

    2013-01-01

    Pollution of surface water with harmful chemicals and eutrophication of rivers and lakes with excess nutrients are serious environmental concerns. This study estimated surface water quality in a stream within the Yazoo River Basin (YRB), Mississippi, USA, using the duration curve and recurrence interval analysis techniques. Data from the US Geological Survey (USGS)...
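
A duration curve of the kind used in this type of analysis is simply the sorted record plotted against exceedance probability. A generic sketch with made-up flows (the Weibull plotting position is an assumption, not necessarily the authors' choice):

```python
def flow_duration_curve(flows):
    """Sorted flows paired with exceedance probability (Weibull plotting
    position p = rank / (n + 1), flows ranked from highest to lowest)."""
    ordered = sorted(flows, reverse=True)
    n = len(ordered)
    return [((rank + 1) / (n + 1), q) for rank, q in enumerate(ordered)]

fdc = flow_duration_curve([10, 30, 20])  # illustrative daily flows
```

Reading a water-quality threshold against such a curve shows what fraction of the time the stream exceeds it.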

  2. Growth curves of preschool children in the northeast of iran: a population based study using quantile regression approach.

    Science.gov (United States)

    Payande, Abolfazl; Tabesh, Hamed; Shakeri, Mohammad Taghi; Saki, Azadeh; Safarian, Mohammad

    2013-01-14

    Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during the early, important months of life. The objective of this study is to construct growth charts and normal values of weight-for-age for children aged 0 to 5 years using a powerful and applicable methodology, and to compare the results with the World Health Organization (WHO) references and the semi-parametric LMS method of Cole and Green. A total of 70737 apparently healthy boys and girls aged 0 to 5 years were recruited in July 2004, over 20 days, from those attending community clinics for routine health checks as part of a national survey. Anthropometric measurements were made by trained health staff using WHO methodology. The nonparametric quantile regression method, obtained by local constant kernel estimation of conditional quantile curves, was used for estimation of the curves and normal values. The weight-for-age growth curves for boys and girls aged 0 to 5 years were derived from a population of children living in the northeast of Iran. The results were similar to those obtained by the semi-parametric LMS method on the same data. Among all age groups from 0 to 5 years, the median weights of children living in the northeast of Iran were lower than the corresponding values in the WHO reference data. The weight curves of boys were higher than those of girls in all age groups. The differences between the growth patterns of children living in the northeast of Iran and international ones necessitate using local and regional growth charts, since international normal values may not properly identify the populations at risk for growth problems among Iranian children. Quantile regression (QR), a flexible method which does not require restrictive assumptions, is proposed for estimating reference curves and normal values.
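
The estimator named above, local constant kernel estimation of conditional quantiles, reduces to a kernel-weighted sample quantile at each evaluation point. A minimal sketch (Gaussian kernel; the bandwidth and data are illustrative, not from the study):

```python
import math

def kernel_quantile(xs, ys, x0, tau, h):
    """Local-constant kernel estimate of the tau-th conditional quantile of
    y given x = x0: the tau-th weighted quantile of ys, with Gaussian kernel
    weights centred on x0 and bandwidth h."""
    weights = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    total = sum(weights)
    cum = 0.0
    for y, w in sorted(zip(ys, weights)):
        cum += w
        if cum >= tau * total:
            return y
    return max(ys)
```

Evaluating this at a grid of ages for tau = 0.03, 0.5, 0.97, etc. traces out the centile curves of a growth chart.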

  3. Bootstrap rolling window estimation approach to analysis of the Environment Kuznets Curve hypothesis: evidence from the USA.

    Science.gov (United States)

    Aslan, Alper; Destek, Mehmet Akif; Okumus, Ilyas

    2018-01-01

    This study examines the validity of the inverted U-shaped Environmental Kuznets Curve by investigating the relationship between economic growth and environmental pollution in the USA over the period 1966 to 2013. Previous studies rest on the assumption of parameter stability, i.e., that the estimated parameters do not change over the full sample. This study uses the bootstrap rolling window estimation method to detect possible changes in the causal relations and to obtain parameters for sub-sample periods. The results show that the parameter on economic growth has an increasing trend over the 1982-1996 sub-sample periods and a decreasing trend over the 1996-2013 sub-sample periods. Therefore, the existence of an inverted U-shaped Environmental Kuznets Curve is confirmed for the USA.
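
The rolling-window idea, re-estimating over successive sub-samples with bootstrap resampling inside each window, can be sketched generically. This toy version bootstraps an OLS slope rather than the study's causality statistics, and all names and defaults are illustrative:

```python
import random

def ols_slope(xs, ys):
    """Slope of the ordinary least-squares line of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((a - mx) ** 2 for a in xs)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sxx

def rolling_bootstrap_slopes(x, y, window, n_boot=200, seed=0):
    """For each rolling window, bootstrap-resample the observations and
    average the OLS slope; returns (centre index, mean slope) pairs."""
    rng = random.Random(seed)
    out = []
    for start in range(len(x) - window + 1):
        xs, ys = x[start:start + window], y[start:start + window]
        draws = []
        for _ in range(n_boot):
            idx = [rng.randrange(window) for _ in range(window)]
            bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
            if len(set(bx)) > 1:  # skip degenerate resamples
                draws.append(ols_slope(bx, by))
        out.append((start + window // 2, sum(draws) / len(draws)))
    return out
```

Tracking how the window-by-window estimates drift is what reveals parameter instability over sub-sample periods.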

  4. A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making

    OpenAIRE

    Vickers Andrew; Hozo Iztok; Tsalatsanis Athanasios; Djulbegovic Benjamin

    2010-01-01

    Abstract Background Decision curve analysis (DCA) has been proposed as an alternative method for evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which has been routinely violated by decision makers. Decision-making is governed by intuition (system 1), and analytical, deliberative process (system 2), thus, rational decision-making should reflect both formal principles of rationality and intuition about good decisions. ...

  5. Comparative analysis of the apparent saturation hysteresis approach and the domain theory of hysteresis in respect of prediction of scanning curves and air entrapment

    Science.gov (United States)

    Beriozkin, A.; Mualem, Y.

    2018-05-01

    This study theoretically analyzes the concept of apparent saturation hysteresis, combined with the Scott et al. (1983) scaling approach, as suggested by Parker and Lenhard (1987), to account for the effect of air entrapment and release on the soil water hysteresis. We found that the theory of Parker and Lenhard (1987) is comprised of some mutually canceling mathematical operations, and when cleared of the superfluous intermediate calculations, their model reduces to the original Scott et al.'s (1983) scaling method, supplemented with the requirement of closure of scanning loops. Our analysis reveals that actually there is no effect of their technique of accounting for the entrapped air on the final prediction of the effective saturation (or water content) scanning curves. Our consideration indicates that the use of the Land (1968) formula for assessing the amount of entrapped air is in disaccord with the apparent saturation concept as introduced by Parker and Lenhard (1987). In this paper, a proper routine is suggested for predicting hysteretic scanning curves of any order, given the two measured main curves, in the complete hysteretic domain and some verification tests are carried out versus measured results. Accordingly, explicit closed-form formulae for direct prediction (with no need of intermediate calculation) of scanning curves up to the third order are derived to sustain our analysis.

  6. The ambient dose equivalent at flight altitudes: a fit to a large set of data using a Bayesian approach

    International Nuclear Information System (INIS)

    Wissmann, F; Reginatto, M; Moeller, T

    2010-01-01

    The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
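
The functional form the abstract describes, exponential in cut-off rigidity and approximately linear in altitude and solar activity, can be written down directly. The coefficients below are placeholders, not the paper's Bayesian estimates:

```python
import math

def ambient_dose_rate(alt_km, cutoff_gv, solar_index, a0, a1, a2, b):
    """Functional form described above: linear in barometric altitude and
    solar activity, exponential in vertical cut-off rigidity.
    All coefficient values here are hypothetical, not the fitted ones."""
    return (a0 + a1 * alt_km + a2 * solar_index) * math.exp(-b * cutoff_gv)

# Placeholder coefficients: dose rate at 10 km, zero rigidity, quiet sun.
h_equator_free = ambient_dose_rate(10.0, 0.0, 0.0, 1.0, 0.5, 0.0, 0.1)
```

In the Bayesian fit, each coefficient gets a posterior distribution rather than a point value, which is what yields the reliable uncertainties mentioned above.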

  7. Influence of different approaches to training of main movements on physical fitness of 4 years boys with various motor asymmetry

    Directory of Open Access Journals (Sweden)

    L. L. Galamandjuk

    2015-05-01

    Full Text Available Purpose: to determine the effectiveness of different methods of training the main movements for improving the physical fitness of boys with different manual motor asymmetry. Material: 50 boys with ambidexterity (4 years old) took part in the research. The following were used: oral questioning, dynamometry, and the method of M.M. Bezrukikh. Results: the choice of variant of the “symmetric” approach determines which motor qualities develop: among boys with ambidexterity, both the variant “first with the passive hand, then with the active one” and the variant “first with the active and then with the passive hand” ensure improvement of all tested qualities (except flexibility and quickness). Boys with right-oriented manual motor asymmetry demonstrated improvement of all qualities (except coordination in ballistic movements for accuracy fulfilled by the right arm) in the first variant; in the second variant, all qualities (except the already mentioned quickness) improved. Conclusions: with any orientation of manual motor asymmetry, a necessary condition for high activity and successful training of a child is the development of interaction between the cerebral hemispheres. Coordinated movements by the left and right arms strengthen this interaction, so it is purposeful to fulfill every movement consecutively with each arm and with both arms simultaneously.

  9. Total hip arthroplasty by the direct anterior approach using a neck-preserving stem: Safety, efficacy and learning curve

    Directory of Open Access Journals (Sweden)

    Aditya Khemka

    2018-01-01

    Full Text Available Background: The concept of femoral neck preservation in total hip replacement (THR) was introduced in 1993. It is postulated that retaining cortical bone of the femoral neck offers triplanar stability and uniform stress distribution, and accommodates physiological anteversion. However, data on safety, efficacy and the learning curve are lacking. Materials and Methods: We prospectively assessed all patients who underwent THR with a short neck-preserving stem (MiniHip) between 2012 and 2014. Safety and the learning curve were assessed by recording operative time; stem size; and adverse events including periprosthetic fracture, paresthesia, and limb length discrepancy (LLD). The cohort was divided into equal groups to assess the learning curve effect, and the cumulative sums (CUSUM) test was performed to monitor intraoperative neck fractures. For assessment of efficacy, Oxford Hip Score (OHS) and Short Form-36 (SF-36) scores were compared preoperatively and postoperatively. Results: 138 patients with median age 62 years (range 35–82 years) were included, with a median followup of 42 months (range 30–56 months); the minimum followup was 2.5 years. The OHS and the SF-36 physical and mental component scores improved by mean scores of 26, 28, and 27 points, respectively. All patients had LLD of <10 mm (1.9 mm ± 1.3). Adverse events included intraoperative neck fracture (n = 6), subsidence (n = 1), periprosthetic fracture (n = 1), paresthesia (n = 12), and trochanteric bursitis (n = 2). After early modification of the technique to use a smaller finishing broach, the CUSUM test demonstrated acceptable intraoperative neck fracture risk. The second surgery group had a reduced risk of intraoperative neck fracture (5/69 vs. 1/69, P = 0.2), reduced operative time (66 vs. 61 min, P = 0.06), and increased stem size (5 vs. 6, P = 0.09), although these differences were not statistically significant. Conclusions: The MiniHip stem is a safe alternative to standard THR with good

  10. Peak oil analyzed with a logistic function and idealized Hubbert curve

    International Nuclear Information System (INIS)

    Gallagher, Brian

    2011-01-01

    A logistic function is used to characterize peak and ultimate production of global crude oil and petroleum-derived liquid fuels. Annual oil production data were incrementally summed to construct a logistic curve in its initial phase. Using a curve-fitting approach, a population-growth logistic function was applied to complete the cumulative production curve. The simulated curve was then deconstructed into a set of annual oil production data producing an 'idealized' Hubbert curve. An idealized Hubbert curve (IHC) is defined as having properties of production data resulting from a constant growth-rate under fixed resource limits. An IHC represents a potential production curve constructed from cumulative production data and provides a new perspective for estimating peak production periods and remaining resources. The IHC model data show that idealized peak oil production occurred in 2009 at 83.2 Mb/d (30.4 Gb/y). IHC simulations of truncated historical oil production data produced similar results and indicate that this methodology can be useful as a prediction tool. - Research Highlights: →Global oil production data were analyzed by a simple curve fitting method. →Best fit-curve results were obtained using two logistic functions on select data. →A broad potential oil production peak is forecast for the years from 2004 to 2014. →Similar results were obtained using historical data from about 10 to 30 years ago. →Two potential oil production decline scenarios were presented and compared.
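
The construction described, fitting a logistic to cumulative production and then deconstructing it into annual increments, can be sketched as follows. The parameters are round illustrative numbers, not Gallagher's fitted values:

```python
import math

def logistic_cumulative(t, K, r, t0):
    """Logistic cumulative production: K = ultimate recovery,
    r = growth rate, t0 = inflection (peak) year."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def idealized_hubbert_peak(K, r, t0, years):
    """Deconstruct the cumulative curve into annual increments (the
    idealized Hubbert curve) and return the (year, production) peak."""
    annual = [(t, logistic_cumulative(t, K, r, t0) - logistic_cumulative(t - 1, K, r, t0))
              for t in years]
    return max(annual, key=lambda p: p[1])

# Hypothetical resource of 1000 Gb, 10%/yr intrinsic growth, inflection 2009.
peak_year, peak_prod = idealized_hubbert_peak(1000.0, 0.1, 2009, range(1950, 2070))
```

Because the logistic is symmetric about t0, the annual-increment curve peaks at the inflection year, which is how the IHC turns a cumulative fit into a peak-production estimate.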

  11. Sensitivity equation for quantitative analysis with multivariate curve resolution-alternating least-squares: theoretical and experimental approach.

    Science.gov (United States)

    Bauza, María C; Ibañez, Gabriela A; Tauler, Romà; Olivieri, Alejandro C

    2012-10-16

    A new equation is derived for estimating the sensitivity when the multivariate curve resolution-alternating least-squares (MCR-ALS) method is applied to second-order multivariate calibration data. The validity of the expression is substantiated by extensive Monte Carlo noise addition simulations. The multivariate selectivity can be derived from the new sensitivity expression. Other important figures of merit, such as limit of detection, limit of quantitation, and concentration uncertainty of MCR-ALS quantitative estimations can be easily estimated from the proposed sensitivity expression and the instrumental noise. An experimental example involving the determination of an analyte in the presence of uncalibrated interfering agents is described in detail, involving second-order time-decaying sensitized lanthanide luminescence excitation spectra. The estimated figures of merit are reasonably correlated with the analytical features of the analyzed experimental system.

  12. Testing MONDian dark matter with galactic rotation curves

    International Nuclear Information System (INIS)

    Edmonds, Doug; Farrah, Duncan; Minic, Djordje; Takeuchi, Tatsu; Ho, Chiu Man; Ng, Y. Jack

    2014-01-01

    MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z ≈ 0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (∼1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.
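
As a point of reference for such fits, the baseline relation all three models must reproduce is the circular speed implied by the enclosed mass. A minimal Newtonian sketch (the unit constant is the standard value of G in these units; the example radius and mass are hypothetical, not from the sample):

```python
import math

G_KPC = 4.30091e-6  # gravitational constant in kpc * (km/s)**2 / solar mass

def circular_velocity(r_kpc, enclosed_mass_msun):
    """Newtonian circular speed (km/s) from the mass enclosed within r;
    a rotation-curve fit adjusts the enclosed-mass profile (CDM halo,
    MOND source term, or MDM quantum) until v(r) matches observation."""
    return math.sqrt(G_KPC * enclosed_mass_msun / r_kpc)

# Hypothetical spiral galaxy: 1e11 solar masses enclosed within 8 kpc.
v = circular_velocity(8.0, 1.0e11)
```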

  13. Environmental Kuznets curves-real progress or passing the buck? A case for consumption-based approaches

    International Nuclear Information System (INIS)

    Rothman, Dale S.

    1998-01-01

    Recent research has examined the hypothesis of an environmental Kuznets curve (EKC): the notion that environmental impact increases in the early stages of development followed by declines in the later stages. These studies have focused on the relationship between per capita income and a variety of environmental indicators. Results imply that EKCs may exist for a number of cases. However, the measures of environmental impact used generally focus on production processes and reflect environmental impacts that are local in nature and for which abatement is relatively inexpensive in terms of monetary costs and/or lifestyle changes. Significantly, more consumption-based measures, such as CO2 emissions and municipal waste, for which impacts are relatively easy to externalize or costly to control, show no tendency to decline with increasing per capita income. By considering consumption and trade patterns, the author re-examines the concept of the EKC and proposes the use of alternative, consumption-based measures of environmental impact. The author speculates that what appear to be improvements in environmental quality may in reality be indicators of the increased ability of consumers in wealthy nations to distance themselves from the environmental degradation associated with their consumption.

  14. Is there an Environmental Kuznets Curve for South Africa? A co-summability approach using a century of data

    International Nuclear Information System (INIS)

    Ben Nasr, Adnen; Gupta, Rangan; Sato, João Ricardo

    2015-01-01

    There exists a huge international literature on the so-called Environmental Kuznets Curve (EKC) hypothesis, which postulates an inverted u-shaped relationship between environmental pollutants and output. The empirical literature on the EKC has mainly used tests for cointegration based on polynomial relationships between pollution and income. Motivated by the fact that, measured in per capita CO2-equivalent emissions, South Africa is the world's most carbon-intensive non-oil-producing developing country, this paper tests the validity of the EKC for South Africa. For this purpose, we use a century of data (1911–2010), to capture the process of development better than short-sample-based research, and the concept of co-summability, which is designed to analyze non-linear long-run relations among persistent processes. Our results, however, provide no support for the EKC in South Africa, both for the full sample and for sub-samples (determined by tests of structural breaks), implying that to reduce emissions without sacrificing growth, policies should be aimed at promoting energy efficiency. - Highlights: • The co-summability concept is used to test the validity of the EKC for South Africa. • The case of structural breaks is also considered when testing for the EKC. • Results provide no support of the EKC for South Africa. • To reduce CO2 emissions without sacrificing growth, policies should be aimed at promoting energy efficiency.

  15. Fitness club

    CERN Multimedia

    Fitness club

    2011-01-01

    General fitness Classes Enrolments are open for general fitness classes at CERN taking place on Monday, Wednesday, and Friday lunchtimes in the Pump Hall (building 216). There are shower facilities for both men and women. It is possible to pay for 1, 2 or 3 classes per week for a minimum of 1 month and up to 6 months. Check out our rates and enrol at: http://cern.ch/club-fitness Hope to see you among us! CERN Fitness Club fitness.club@cern.ch  

  16. Multilevel Models for the Analysis of Angle-Specific Torque Curves with Application to Master Athletes

    Directory of Open Access Journals (Sweden)

    Carvalho Humberto M.

    2015-12-01

    Full Text Available The aim of this paper was to outline a multilevel modeling approach to fit individual angle-specific torque curves describing concentric knee extension and flexion isokinetic muscular actions in Master athletes. The potential of the analytical approach to examine between-individual differences across the angle-specific torque curves was illustrated, including between-individual variation due to gender differences at a higher level. Torques in concentric muscular actions of knee extension and flexion at 60°·s-1 were considered within a range of motion between 5° and 85° (only torques “truly” isokinetic). Multilevel time series models with autoregressive covariance structures were superior fits compared with standard multilevel models for repeated measures when fitting angle-specific torque curves. Third and fourth order polynomial models were the best fits to describe angle-specific torque curves of isokinetic knee flexion and extension concentric actions, respectively. The fixed exponents allow interpretations for initial acceleration, the angle at peak torque and the decrement of torque after peak torque. The multilevel models were also flexible enough to illustrate the influence of gender differences on the shape of torque throughout the range of motion and on the shape of the curves. The presented multilevel regression models may afford a general framework to examine angle-specific moment curves by isokinetic dynamometry, and add to the understanding of the mechanisms of strength development, particularly the force-length relationship, related both to performance and to injury prevention.
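
The fixed (population-average) part of such a model is just a low-order polynomial in joint angle. As a rough illustration of that piece alone, ordinary least squares via the normal equations rather than the authors' multilevel estimation:

```python
def polyfit(xs, ys, degree):
    """Ordinary least-squares polynomial fit via the normal equations.
    Returns coefficients [c0, c1, ...] for c0 + c1*x + c2*x**2 + ..."""
    n = degree + 1
    # Normal equations (X^T X) c = X^T y for the Vandermonde design matrix.
    A = [[float(sum(x ** (i + j) for x in xs)) for j in range(n)] for i in range(n)]
    b = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for c in range(col, n):
                A[row][c] -= f * A[col][c]
            b[row] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef
```

Fitting degree 3 or 4 to (angle, torque) pairs reproduces the shape features discussed above; the multilevel machinery then lets the coefficients vary by athlete and by gender.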

  17. Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series

    OpenAIRE

    Basu, Kanadpriya; Mariani, Maria; Serpa, Laura; Sinha, Ritwik

    2015-01-01

    This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: First, mathematical interpolation methods and second statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical ...

  18. General Fit-Basis Functions and Specialized Coordinates in an Adaptive Density-Guided Approach to Potential Energy Surfaces

    DEFF Research Database (Denmark)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide

    The overall shape of a molecular energy surface can be very different for different molecules and different vibrational coordinates. This means that the fit-basis functions used to generate an analytic representation of a potential will be met with different requirements. It is therefore worthwhile ... single point calculations when constructing the molecular potential. We therefore present a uniform framework that can handle general fit-basis functions of any type, which are specified on input. This framework is implemented to suit the black-box nature of the ADGA in order to avoid arbitrary choices ... This results in a decreased number of single point calculations required during the potential construction. Especially the Morse-like fit-basis functions are of interest when combined with rectilinear hybrid optimized and localized coordinates (HOLCs), which can be generated as orthogonal transformations ...

  19. Helping Students Find Their Sweet Spot: A Teaching Approach Using the Sales Process to Find Jobs That Fit

    Science.gov (United States)

    Allen, Concha K.; Dugan, Riley G.; Popa, Eugen M.; Tarasi, Crina O.

    2017-01-01

    Despite the importance of achieving person-job fit--and the role marketing educators play in developing students for career success--there remains a lack of guidance for faculty as they shepherd students through the career development process. This article details how the seven-stage selling process can be used as a basis for teaching the job…

  20. Climatic and basin factors affecting the flood frequency curve: PART I – A simple sensitivity analysis based on the continuous simulation approach

    Directory of Open Access Journals (Sweden)

    A. M. Hashemi

    2000-01-01

    Full Text Available Regionalized and at-site flood frequency curves exhibit considerable variability in their shapes, but the factors controlling the variability (other than sampling effects) are not well understood. An application of the Monte Carlo simulation-based derived distribution approach is presented in this two-part paper to explore the influence of climate, described by simulated rainfall and evapotranspiration time series, and basin factors on the flood frequency curve (ffc). The sensitivity analysis conducted in the paper should not be interpreted as reflecting possible climate changes, but the results can provide an indication of the changes to which the flood frequency curve might be sensitive. A single site Neyman-Scott point process model of rainfall, with convective and stratiform cells (Cowpertwait, 1994; 1995), has been employed to generate synthetic rainfall inputs to a rainfall-runoff model. The time series of the potential evapotranspiration (ETp) demand has been represented through an AR(n) model with seasonal component, while a simplified version of the ARNO rainfall-runoff model (Todini, 1996) has been employed to simulate the continuous discharge time series. All these models have been parameterised in a realistic manner using observed data and results from previous applications, to obtain ‘reference’ parameter sets for a synthetic case study. Subsequently, perturbations to the model parameters have been made one at a time and the sensitivities of the generated annual maximum rainfall and flood frequency curves (unstandardised, and standardised by the mean) have been assessed. Overall, the sensitivity analysis described in this paper suggests that the soil moisture regime and, in particular, the probability distribution of soil moisture content at the storm arrival time can be considered as a unifying link between the perturbations to the several parameters and their effects on the standardised and unstandardised ffcs, thus revealing the
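
The derived-distribution idea, simulating a long continuous record, extracting annual maxima, and reading off the flood frequency curve, can be sketched as follows. The `daily_flow` callable stands in for the whole Neyman-Scott/ARNO simulation chain, and the Weibull plotting position is an assumed choice:

```python
import random

def simulated_flood_frequency(n_years, daily_flow, seed=0):
    """Derived-distribution sketch: simulate daily flows year by year, keep
    the annual maxima, and pair them with Weibull return periods
    T = (n + 1) / rank, ranking the maxima from largest to smallest."""
    rng = random.Random(seed)
    maxima = sorted((max(daily_flow(rng) for _ in range(365))
                     for _ in range(n_years)), reverse=True)
    n = len(maxima)
    return [((n + 1) / (rank + 1), q) for rank, q in enumerate(maxima)]
```

Re-running this after perturbing one model parameter at a time, as in the paper, shows how the simulated ffc responds to each factor.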

  1. Nonlinear gravitons and curved twistor theory

    International Nuclear Information System (INIS)

    Penrose, R.

    1976-01-01

    A new approach to the quantization of general relativity is suggested in which a state consisting of just one graviton can be described, but in a way which involves both the curvature and nonlinearities of Einstein's theory. It is felt that this approach can be justified solely on its own merits but it also receives striking encouragement from another direction: a surprising mathematical result enables one to construct the general such nonlinear gravitation state from a curved twistor space, the construction being given in terms of one arbitrary holomorphic function of three complex variables. In this way, the approach fits naturally into the general twistor program for the description of quantized fields. (U.K.)

  2. Bayesian approach in the growth curves of two cultivars of common bean

    Directory of Open Access Journals (Sweden)

    Sebastião Martins Filho

    2008-09-01

    Full Text Available In this paper the Bayesian methodology was used to fit the nonlinear logistic model to growth data of two common bean cultivars, 'Neguinho' and 'Carioca'. The experiment was a split plot under a completely randomized design with twenty replicates, with the main treatments constituted by the cultivars and the sub plots by seventeen evaluation periods, from planting to 85 days. The methodology allowed comparing the growth curves without using asymptotic theory, and the results showed a larger height increment for the 'Carioca' cultivar.

  3. Flexible competing risks regression modeling and goodness-of-fit

    DEFF Research Database (Denmark)

    Scheike, Thomas; Zhang, Mei-Jie

    2008-01-01

    In this paper we consider different approaches for estimation and assessment of covariate effects for the cumulative incidence curve in the competing risks model. The classic approach is to model all cause-specific hazards and then estimate the cumulative incidence curve based on these cause...... models that is easy to fit and contains the Fine-Gray model as a special case. One advantage of this approach is that our regression modeling allows for non-proportional hazards. This leads to a new simple goodness-of-fit procedure for the proportional subdistribution hazards assumption that is very easy...... of the flexible regression models to analyze competing risks data when non-proportionality is present in the data....

  4. Fitness Club

    CERN Multimedia

    Fitness Club

    2011-01-01

    The CERN Fitness Club is organising Zumba Classes on the first Wednesday of each month, starting 7 September (19.00 – 20.00). What is Zumba®? It’s an exhilarating, effective, easy-to-follow, Latin-inspired, calorie-burning dance fitness-party™ that’s moving millions of people toward joy and health. Above all it’s great fun and an excellent work out. Price: 22 CHF/person Sign-up via the following form: https://espace.cern.ch/club-fitness/Lists/Zumba%20Subscription/NewForm.aspx For more info: fitness.club@cern.ch

  5. Calculation approaches for grid usage fees to influence the load curve in the distribution grid level; Berechnungsansaetze fuer Netznutzungsentgelte zur Beeinflussung des Lastverlaufs in der Verteilernetzebene

    Energy Technology Data Exchange (ETDEWEB)

    Illing, Bjoern

    2014-09-08

Dominated by energy policy, the decentralized German energy market is changing. One major target of the government is to increase the contribution of renewable generation to gross electricity consumption. Achieving this target brings disadvantages such as an increased need for capacity management. Load reduction and variable grid fees offer the grid operator ways to realize capacity management by influencing the load profile. Adapting these approaches requires that the current grid fees evolve towards greater cost causality. Two calculation approaches are developed in this work. The first, multivariable grid fees, keeps the current components of demand and energy charges; in addition to grid costs, grid-load-dependent parameters such as the amount of decentralized feed-in, time and local circumstances, and grid capacities are considered. The second, the grid-fee flat rate, represents a demand-based model on a monthly level. Both approaches are designed to meet the criteria for future grid fees. By means of a case study, the effects of the grid fees on the load profile at the low-voltage grid level are simulated. Consumption is represented by different behaviour models and the results are scaled to the benchmark grid area. The resulting load curve is analyzed with respect to peak-load reduction and the integration of renewable energy sources. Additionally, the combined effect of grid fees and electricity tariffs is evaluated. Finally, the work discusses the introduction of grid fees in the tense atmosphere of politics, legislation and grid operation. The results of this work are two calculation approaches designed for grid operators to define grid fees. Multivariable grid fees are based on the current calculation scheme, with demand and energy charges weighted by time, location and load-related dependencies. The grid-fee flat rate defines a limitation in demand extraction. Different demand levels

  6. Fodbold Fitness

    DEFF Research Database (Denmark)

    Bennike, Søren

Society is changing, and so are the sports-participation patterns of the Danes. Fodbold Fitness (Football Fitness), the focal point of this dissertation, can be seen as a reaction to these changes. The dissertation takes a closer look at Fodbold Fitness and its implementation, which is by no means an easy task. Bennike contributes...

  7. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus...... from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness...... of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 215 were determined for the MRSA isolates studied. There was a significant negative correlation between number of antibiotic resistances and relative fitness. Multiple regression analysis...

  8. The play approach to learning in the context of families and schools: an alternative paradigm for nutrition and fitness education in the 21st century.

    Science.gov (United States)

    Rickard, K A; Gallahue, D L; Gruen, G E; Tridle, M; Bewley, N; Steele, K

    1995-10-01

    An alternative paradigm for nutrition and fitness education centers on understanding and developing skill in implementing a play approach to learning about healthful eating and promoting active play in the context of the child, the family, and the school. The play approach is defined as a process for learning that is intrinsically motivated, enjoyable, freely chosen, nonliteral, safe, and actively engaged in by young learners. Making choices, assuming responsibility for one's decisions and actions, and having fun are inherent components of the play approach to learning. In this approach, internal cognitive transactions and intrinsic motivation are the primary forces that ultimately determine healthful choices and life habits. Theoretical models of children's learning--the dynamic systems theory and the cognitive-developmental theory of Jean Piaget--provide a theoretical basis for nutrition and fitness education in the 21st century. The ultimate goal is to develop partnerships of children, families, and schools in ways that promote the well-being of children and translate into healthful life habits. The play approach is an ongoing process of learning that is applicable to learners of all ages.

  9. Short-term corneal changes with gas-permeable contact lens wear in keratoconus subjects: a comparison of two fitting approaches.

    Science.gov (United States)

    Romero-Jiménez, Miguel; Santodomingo-Rubido, Jacinto; Flores-Rodríguez, Patricia; González-Méijome, Jose-Manuel

    2015-01-01

    To evaluate changes in anterior corneal topography and higher-order aberrations (HOA) after 14-days of rigid gas-permeable (RGP) contact lens (CL) wear in keratoconus subjects comparing two different fitting approaches. Thirty-one keratoconus subjects (50 eyes) without previous history of CL wear were recruited for the study. Subjects were randomly fitted to either an apical-touch or three-point-touch fitting approach. The lens' back optic zone radius (BOZR) was 0.4mm and 0.1mm flatter than the first definite apical clearance lens, respectively. Differences between the baseline and post-CL wear for steepest, flattest and average corneal power (ACP) readings, central corneal astigmatism (CCA), maximum tangential curvature (KTag), anterior corneal surface asphericity, anterior corneal surface HOA and thinnest corneal thickness measured with Pentacam were compared. A statistically significant flattening was found over time on the flattest and steepest simulated keratometry and ACP in apical-touch group (all p<0.01). A statistically significant reduction in KTag was found in both groups after contact lens wear (all p<0.05). Significant reduction was found over time in CCA (p=0.001) and anterior corneal asphericity in both groups (p<0.001). Thickness at the thinnest corneal point increased significantly after CL wear (p<0.0001). Coma-like and total HOA root mean square (RMS) error were significantly reduced following CL wearing in both fitting approaches (all p<0.05). Short-term rigid gas-permeable CL wear flattens the anterior cornea, increases the thinnest corneal thickness and reduces anterior surface HOA in keratoconus subjects. Apical-touch was associated with greater corneal flattening in comparison to three-point-touch lens wear. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  10. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    Science.gov (United States)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
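The one-run standard-curve approach generalizes the familiar calibration workflow: fit a line through the labeled-standard points, then invert it for the unknown. A minimal NumPy sketch, using invented amounts and peak areas (not the paper's iDiLeu data) and assuming a linear detector response:

```python
import numpy as np

# Hypothetical four-point standard curve: known amounts (fmol) of labeled
# standards vs. measured LC-MS peak areas (arbitrary units). These numbers
# are invented for illustration only.
amounts = np.array([10.0, 50.0, 100.0, 500.0])
peak_areas = np.array([2.1e4, 1.05e5, 2.1e5, 1.05e6])

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(amounts, peak_areas, deg=1)

# Quantify an unknown sample by inverting the calibration curve.
unknown_area = 4.2e5
unknown_amount = (unknown_area - intercept) / slope
print(round(unknown_amount, 1))  # -> 200.0 fmol for this synthetic data
```

In a real assay each calibration point would come from one of the isotopic channels measured in the same LC-MS run, which is what makes the four-point curve possible in a single injection.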

  11. Digital evaluation of the fit of zirconia-reinforced lithium silicate crowns with a new three-dimensional approach.

    Science.gov (United States)

    Zimmermann, Moritz; Valcanaia, Andre; Neiva, Gisele; Mehl, Albert; Fasbinder, Dennis

    2017-11-30

Several methods for the evaluation of fit of computer-aided design/computer-assisted manufacture (CAD/CAM)-fabricated restorations have been described. In the study, digital models were recorded with an intraoral scanning device and were measured using a new three-dimensional (3D) computer technique to evaluate restoration internal fit. The aim of the study was to evaluate the internal adaptation and fit of chairside CAD/CAM-fabricated zirconia-reinforced lithium silicate ceramic crowns fabricated with different post-milling protocols. The null hypothesis was that different post-milling protocols did not influence the fitting accuracy of zirconia-reinforced lithium silicate restorations. A master all-ceramic crown preparation was completed on a maxillary right first molar on a typodont. Twenty zirconia-reinforced lithium silicate ceramic crowns (Celtra Duo, Dentsply Sirona) were designed and milled using a chairside CAD/CAM system (CEREC Omnicam, Dentsply Sirona). The 20 crowns were randomly divided into two groups based on post-milling protocols: no manipulation after milling (Group MI) and oven fired-glazing after milling (Group FG). A 3D computer method was used to evaluate the internal adaptation of the crowns. This was based on a subtractive analysis of a digital scan of the crown preparation and a digital scan of the thickness of the cement space over the crown preparation as recorded by a polyvinylsiloxane (PVS) impression material. The preparation scan and PVS scan were matched in 3D and a 3D difference analysis was performed with a software program (OraCheck, Cyfex). Three areas of internal adaptation and fit were selected for analysis: margin (MA), axial wall (AX), and occlusal surface (OC). Statistical analysis was performed using the 80th percentile and one-way ANOVA with a post-hoc Scheffé test (P = .05). The closest internal adaptation of the crowns was measured at the axial wall with 102.0 ± 11.7 µm for group MI-AX and 106.3 ± 29.3 µm for group FG

  12. The identification of high potential archers based on fitness and motor ability variables: A Support Vector Machine approach.

    Science.gov (United States)

    Taha, Zahari; Musa, Rabiu Muazu; P P Abdul Majeed, Anwar; Alim, Muhammad Muaz; Abdullah, Mohamad Razali

    2018-02-01

Support Vector Machine (SVM) has been shown to be an effective learning algorithm for classification and prediction. However, SVM has rarely been applied in specific sports to discriminate between low- and high-performance athletes. The present study classified and predicted high- and low-potential archers from a set of fitness and motor ability variables using different SVM kernel algorithms. 50 youth archers with a mean age of 17.0 ± 0.6 years, drawn from various archery programmes, completed a six-arrow shooting score test. Standard fitness and ability measurements, namely hand grip, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength, were also recorded. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the performance variables tested. SVM models with linear, quadratic, cubic, fine RBF, medium RBF and coarse RBF kernel functions were trained on the measured performance variables. The HACA clustered the archers into high-potential archers (HPA) and low-potential archers (LPA). The linear, quadratic, cubic and medium RBF kernel models demonstrated excellent classification accuracy of 97.5%, with a 2.5% error rate, for the prediction of the HPA and the LPA. The findings of this investigation can be valuable to coaches and sports managers in recognising high-potential athletes from a combination of the selected fitness and motor ability variables examined, which would consequently save cost, time and effort during talent identification programmes. Copyright © 2017 Elsevier B.V. All rights reserved.
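The train-and-classify step can be sketched with scikit-learn. The data below are synthetic stand-ins (random draws, with "high-potential" rows shifted upward on every variable), not the study's archers, and the single linear kernel stands in for the paper's comparison of kernel functions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 50 "archers" x 6 fitness/motor-ability variables (hand grip, vertical
# jump, ...). High-potential archers (label 1) are shifted upward.
low = rng.normal(loc=0.0, scale=1.0, size=(25, 6))
high = rng.normal(loc=1.5, scale=1.0, size=(25, 6))
X = np.vstack([low, high])
y = np.array([0] * 25 + [1] * 25)

# Standardise features, then fit a linear-kernel SVM; 5-fold cross-validation
# gives an accuracy estimate analogous to the paper's reported error rate.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(model, X, y, cv=5).mean()
print(accuracy)
```

Swapping `kernel="linear"` for `"rbf"` with different `gamma` values reproduces the fine/medium/coarse RBF comparison in spirit.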

  13. Using a qualitative approach for understanding hospital-affiliated integrated clinical and fitness facilities: characteristics and members' experiences.

    Science.gov (United States)

    Yang, Jingzhen; Kingsbury, Diana; Nichols, Matthew; Grimm, Kristin; Ding, Kele; Hallam, Jeffrey

    2015-06-19

    With health care shifting away from the traditional sick care model, many hospitals are integrating fitness facilities and programs into their clinical services in order to support health promotion and disease prevention at the community level. Through a series of focus groups, the present study assessed characteristics of hospital-affiliated integrated facilities located in Northeast Ohio, United States and members' experiences with respect to these facilities. Adult members were invited to participate in a focus group using a recruitment flyer. A total of 6 focus groups were conducted in 2013, each lasting one hour, ranging from 5 to 12 participants per group. The responses and discussions were recorded and transcribed verbatim, then analyzed independently by research team members. Major themes were identified after consensus was reached. The participants' average age was 57, with 56.8% currently under a doctor's care. Four major themes associated with integrated facilities and members' experiences emerged across the six focus groups: 1) facility/program, 2) social atmosphere, 3) provider, and 4) member. Within each theme, several sub-themes were also identified. A key feature of integrated facilities is the availability of clinical and fitness services "under one roof". Many participants remarked that they initially attended physical therapy, becoming members of the fitness facility afterwards, or vice versa. The participants had favorable views of and experiences with the superior physical environment and atmosphere, personal attention, tailored programs, and knowledgeable, friendly, and attentive staff. In particular, participants favored the emphasis on preventive care and the promotion of holistic health and wellness. These results support the integration of wellness promotion and programming with traditional medical care and call for the further evaluation of such a model with regard to participants' health outcomes.

  14. A case study evaluation of a Critical Care Information System adoption using the socio-technical and fit approach.

    Science.gov (United States)

    Yusof, Maryati Mohd

    2015-07-01

Clinical information systems have long been used in intensive care units, but reports on their adoption and benefits are limited. This study evaluated a Critical Care Information System implementation. A case study summative evaluation was conducted, employing observation, interview, and document analysis in operating theatres and 16-bed adult intensive care units in a 400-bed Malaysian tertiary referral centre, from the perspectives of users (nurses and physicians), management, and information technology staff. System implementation, factors influencing adoption, fit between these factors, and the impact of the Critical Care Information System were evaluated after eight months of operation. Positive influences on system adoption were associated with technical factors, including system ease of use, usefulness, and information relevancy; human factors, particularly user attitude; and organisational factors, namely clinical process-technology alignment and champions. Organisational factors such as planning, project management, training, technology support, turnover rate, clinical workload, and communication were barriers to system implementation and use. Recommendations to address the current system problems were discussed. Most nursing staff positively perceived the system's reduction of documentation and data access time, giving them more time with patients. System acceptance varied among doctors. System use also had positive impacts on timesaving, data quality, and clinical workflow. Critical Care Information Systems are crucial and have great potential to enhance the delivery of critical care. However, the case study findings showed that the system faced complex challenges and was underutilised despite its potential. The role of socio-technical factors and their fit in realizing the potential of Critical Care Information Systems requires continuous, in-depth evaluation and stakeholder understanding and acknowledgement. The comprehensive and specific evaluation

  15. Mobility, strength, and fitness after a graded activity program for patients with subacute low back pain. A randomized prospective clinical study with a behavioral therapy approach.

    Science.gov (United States)

    Lindström, I; Ohlund, C; Eek, C; Wallin, L; Peterson, L E; Nachemson, A

    1992-06-01

    Patients with nonspecific mechanical low back pain (n = 103), examined by an orthopaedic surgeon and a social worker, were randomized to an activity group (n = 51) and a control group (n = 52). Patients with defined orthopaedic, medical, or psychiatric diagnoses were excluded before randomization. No patients were excluded due to place of birth or difficulties in speaking or understanding the Swedish language. The purpose of the study was to compare mobility, strength and fitness after traditional care and after traditional care plus a graded activity program with a behavioral therapy approach. A graded activity program, with a behavioral therapy approach was given under the guidance of a physical therapist. The endpoint of the graded activity program was return to work. This program significantly increased mobility, strength, and fitness more than could be explained by only a time recovery effect, especially in males. The patients in the activity group returned to work earlier than did the patients in the control group. Spinal rotation, abdominal muscle endurance time and lifting capacity were significantly correlated to rate of return to work. Traditional care plus a graded activity program were superior to only traditional care, evaluated in terms of mobility, strength and fitness. The graded activity program proved to be a successful method of restoring occupational function and facilitating return to work in subacute low back pain patients. The patients in the graded activity program learned that it is safe to move, while regaining function.

  16. Impact of entrainment and impingement on fish populations in the Hudson River estuary. Volume III. An analysis of the validity of the utilities' stock-recruitment curve-fitting exercise and prior estimation of beta technique. Environmental Sciences Division publication No. 1792

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.; Kirk, B.L.

    1982-03-01

This report addresses the validity of the utilities' use of the Ricker stock-recruitment model to extrapolate the combined entrainment-impingement losses of young fish to reductions in the equilibrium population size of adult fish. In our testimony, a methodology was developed and applied to address a single fundamental question: if the Ricker model really did apply to the Hudson River striped bass population, could the utilities' curve-fitting estimates of the parameter alpha (which controls the impact) be considered reliable? In addition, an analysis is included of the efficacy of an alternative means of estimating alpha, termed the technique of prior estimation of beta (used by the utilities in a report prepared for regulatory hearings on the Cornwall Pumped Storage Project). This validation methodology should also be useful in evaluating inferences drawn in the literature from fits of stock-recruitment models to data obtained from other fish stocks.
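The curve-fitting exercise under scrutiny, estimating the Ricker parameters from stock-recruitment data, can be sketched as follows. The data here are synthetic draws from known parameters (alpha = 3.0, beta = 0.002), not the Hudson River striped bass data:

```python
import numpy as np
from scipy.optimize import curve_fit

def ricker(S, alpha, beta):
    # Ricker stock-recruitment model: recruits as a function of spawning stock.
    return alpha * S * np.exp(-beta * S)

# Synthetic stock sizes and recruitment with ~10% multiplicative noise.
rng = np.random.default_rng(42)
S = np.linspace(50.0, 1500.0, 30)
R = ricker(S, 3.0, 0.002) * rng.lognormal(mean=0.0, sigma=0.1, size=S.size)

# Nonlinear least-squares fit; p0 is a rough starting guess.
(alpha_hat, beta_hat), _ = curve_fit(ricker, S, R, p0=(1.0, 0.001))
print(alpha_hat, beta_hat)  # should land near the true (3.0, 0.002)
```

The report's point is precisely that such estimates of alpha can look plausible while being unreliable when the data range or noise structure is unfavourable, which is what its validation methodology tests.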

  17. Longitudinal associations between body mass index, physical activity, and healthy dietary behaviors in adults: A parallel latent growth curve modeling approach.

    Directory of Open Access Journals (Sweden)

    Youngdeok Kim

    Full Text Available Physical activity (PA and healthy dietary behaviors (HDB are two well-documented lifestyle factors influencing body mass index (BMI. This study examined 7-year longitudinal associations between changes in PA, HDB, and BMI among adults using a parallel latent growth curve modeling (LGCM.We used prospective cohort data collected by a private company (SimplyWell LLC, Omaha, NE, USA implementing a workplace health screening program. Data from a total of 2,579 adults who provided valid BMI, PA, and HDB information for at least 5 out of 7 follow-up years from the time they entered the program were analyzed. PA and HDB were subjectively measured during an annual online health survey. Height and weight measured during an annual onsite health screening were used to calculate BMI (kg·m2. The parallel LGCMs stratified by gender and baseline weight status (normal: BMI30 were fitted to examine the longitudinal associations of changes in PA and HDB with change in BMI over years.On average, BMI gradually increased over years, at rates ranging from 0.06 to 0.20 kg·m2·year, with larger increases observed among those of normal baseline weight status across genders. The increases in PA and HDB were independently associated with a smaller increase in BMI for obese males (b = -1.70 and -1.98, respectively, and overweight females (b = -1.85 and -2.46, respectively and obese females (b = -2.78 and -3.08, respectively. However, no significant associations of baseline PA and HDB with changes in BMI were observed.Our study suggests that gradual increases in PA and HDB are independently associated with smaller increases in BMI in overweight and obese adults, but not in normal weight individuals. Further study is warranted to address factors that check increases in BMI in normal weight adults.

  18. Fitness Club

    CERN Multimedia

    Fitness Club

    2012-01-01

    Open to All: http://cern.ch/club-fitness  fitness.club@cern.ch Boxing Your supervisor makes your life too tough ! You really need to release the pressure you've been building up ! Come and join the fit-boxers. We train three times a week in Bd 216, classes for beginners and advanced available. Visit our website cern.ch/Boxing General Fitness Escape from your desk with our general fitness classes, to strengthen your heart, muscles and bones, improve you stamina, balance and flexibility, achieve new goals, be more productive and experience a sense of well-being, every Monday, Wednesday and Friday lunchtime, Tuesday mornings before work and Thursday evenings after work – join us for one of our monthly fitness workshops. Nordic Walking Enjoy the great outdoors; Nordic Walking is a great way to get your whole body moving and to significantly improve the condition of your muscles, heart and lungs. It will boost your energy levels no end. Pilates A body-conditioning technique de...

  19. Obtaining DDF Curves of Extreme Rainfall Data Using Bivariate Copula and Frequency Analysis

    DEFF Research Database (Denmark)

    Sadri, Sara; Madsen, Henrik; Mikkelsen, Peter Steen

    2009-01-01

    , situated near Copenhagen in Denmark. For rainfall extracted using method 2, the marginal distribution of depth was found to fit the Generalized Pareto distribution while duration was found to fit the Gamma distribution, using the method of L-moments. The volume was fit with a generalized Pareto...... with duration for a given return period and name them DDF (depth-duration-frequency) curves. The copula approach does not assume the rainfall variables are independent or jointly normally distributed. Rainfall series are extracted in three ways: (1) by maximum mean intensity; (2) by depth and duration...... distribution and the duration was fit with a Pearson type III distribution for rainfall extracted using method 3. The Clayton copula was found to be appropriate for bivariate analysis of rainfall depth and duration for both methods 2 and 3. DDF curves derived using the Clayton copula for depth and duration...
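The marginal-distribution step described above, fitting a Generalized Pareto to rainfall depths, can be sketched with SciPy. The depths below are synthetic draws from a known GPD rather than the Copenhagen series, and the threshold (location) is fixed at zero:

```python
import numpy as np
from scipy.stats import genpareto

# Synthetic extreme-rainfall depths (mm) from a known Generalized Pareto
# distribution (shape 0.2, scale 10) -- a stand-in for depths extracted
# from an observed rain series above a threshold.
depths = genpareto.rvs(0.2, loc=0.0, scale=10.0, size=2000,
                       random_state=np.random.default_rng(1))

# Maximum-likelihood fit of the marginal GPD, location fixed at the threshold.
shape_hat, loc_hat, scale_hat = genpareto.fit(depths, floc=0.0)

# Depth exceeded on average once per 100 events (a point on a DDF-style curve).
depth_100 = genpareto.ppf(1 - 1 / 100, shape_hat, loc=loc_hat, scale=scale_hat)
print(shape_hat, scale_hat, depth_100)
```

The full DDF construction in the paper then couples this depth marginal to a duration marginal through the Clayton copula; the sketch covers only the univariate fit and quantile lookup.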

  20. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

    This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  1. Serial position curves in free recall.

    Science.gov (United States)

    Laming, Donald

    2010-01-01

The scenario for free recall set out in Laming (2009) is developed to provide models for the serial position curves from 5 selected sets of data, for final free recall, and for multitrial free recall. The 5 sets of data reflect the effects of rate of presentation, length of list, delay of recall, and suppression of rehearsal. Each model accommodates the serial position curve for first recalls (where those data are available) as well as that for total recalls. Both curves are fit with the same parameter values, as (with one exception) are all of the conditions compared within each experiment. The distributions of numbers of recalls are also examined and shown to have variances greater than would be expected if successive recalls were independent. This is taken to signify that, in those experiments in which rehearsals were not recorded, the retrieval of words for possible recall follows the same pattern that is observed following overt rehearsal: namely, retrieval consists of runs of consecutive elements from memory. Finally, 2 sets of data are examined that the present approach cannot accommodate. It is argued that the problem with these data derives from an interaction between the patterns of (covert) rehearsal and the parameters of list presentation.

  2. dftools: Distribution function fitting

    Science.gov (United States)

    Obreschkow, Danail

    2018-05-01

dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
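The core idea, maximizing a likelihood in which each object's measurement uncertainty is folded into the model, can be illustrated for a D=1 Gaussian distribution function. This is a toy sketch of the principle only; it does not use or reproduce the dftools R API:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Toy data: 500 objects drawn from a Gaussian DF (mu=10, sigma=2), each
# observed with its own known measurement uncertainty.
true_values = rng.normal(10.0, 2.0, size=500)
errors = rng.uniform(0.5, 1.5, size=500)          # per-object uncertainties
observed = true_values + rng.normal(0.0, errors)  # noisy observables

def neg_log_likelihood(params):
    mu, log_sigma = params
    # Each object's likelihood convolves the DF with its own error:
    # total variance = intrinsic scatter^2 + measurement error^2.
    var = np.exp(log_sigma) ** 2 + errors ** 2
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (observed - mu) ** 2 / var)

res = minimize(neg_log_likelihood, x0=(0.0, 0.0))
mu_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
print(mu_hat, sigma_hat)  # should recover roughly (10, 2)
```

A naive fit that ignored `errors` would overestimate the intrinsic scatter; adding the per-object variance to the model is what "accounting for measurement uncertainties" means here.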

  3. Edge detection and mathematic fitting for corneal surface with Matlab software.

    Science.gov (United States)

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

To select the optimal edge detection methods to identify the corneal surface, and to compare three fitting curve equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Two manual identification methods (ginput and getpts) were then applied to identify the edge coordinates, and the differences among these methods were compared. A binomial curve (y = Ax² + Bx + C), a polynomial curve [p(x) = p₁xⁿ + p₂xⁿ⁻¹ + ... + pₙx + pₙ₊₁] and a conic section (Ax² + Bxy + Cy² + Dx + Ey + F = 0) were used to fit the corneal surface, and the relative merits of the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could calculate the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values from corneal topography and the conic section (t = 0.9143, P = 0.3760 > 0.05). It is feasible to simulate the corneal surface with a mathematical curve in Matlab. Edge detection has better repeatability and higher efficiency; the manual identification approach is an indispensable complement to detection. Polynomial and conic sections are both alternative methods for corneal curve fitting, and the conic curve was the optimal choice based on its specific geometrical properties.
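Two of the fits compared in this record, the binomial (parabola) fit and the general conic fit, can be reproduced outside Matlab. A NumPy sketch using synthetic points on a known ellipse (semi-axes 8 mm and 6 mm) in place of OCT edge coordinates:

```python
import numpy as np

# Synthetic "corneal edge" points on a known ellipse (a=8 mm, b=6 mm),
# standing in for coordinates produced by edge detection on an OCT image.
t = np.linspace(0.2, np.pi - 0.2, 60)
x, y = 8.0 * np.cos(t), 6.0 * np.sin(t)

# Binomial curve fit: y = A*x^2 + B*x + C (least squares).
A2, B2, C2 = np.polyfit(x, y, deg=2)

# Conic section fit: Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, solved as a
# homogeneous least-squares problem (smallest right singular vector of the
# design matrix).
M = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
A, B, C, D, E, F = np.linalg.svd(M)[2][-1]

# For this axis-aligned ellipse (B, D, E ~ 0), recover the squared semi-axes
# and the eccentricity e = sqrt(1 - (b/a)^2).
a2, b2 = -F / A, -F / C
e = float(np.sqrt(1 - min(a2, b2) / max(a2, b2)))
print(round(e, 3))  # true value: sqrt(1 - 36/64) ~ 0.661
```

The conic fit recovers the eccentricity directly from its coefficients, which is the geometrical property the study cites in favour of the conic over the plain parabola.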

  4. The FIT Game: preliminary evaluation of a gamification approach to increasing fruit and vegetable consumption in school.

    Science.gov (United States)

    Jones, Brooke A; Madden, Gregory J; Wengreen, Heidi J

    2014-11-01

Incentive-based interventions designed to increase fruit and vegetable (FV) consumption tend to yield positive, short-term outcomes. Because consumption most often returns to baseline levels when incentives are removed, sustainable long-duration interventions may be needed to impact public health. Anticipating that low-cost interventions will be more appealing to schools, the present study explored a low-cost, game-based intervention. An alternating-treatments design was used to evaluate the effects of the FIT Game on objectively measured FV consumption in one elementary school (n=251) in Utah. During the Fall 2013 semester, game-based rewards were provided to heroic characters within a fictional narrative read by teachers on days when the school, as a whole, met a fruit or vegetable consumption goal in accord with the alternating-treatments design. On intervention days, fruit and vegetable consumption increased by 39% and 33%, respectively (p<0.01 and p<0.05; binomial tests). Teacher surveys indicated that students enjoyed the game, and grade 1-3 teachers recommended its use in other schools. This game-based intervention provides a promising step towards developing a low-cost, effective, and sustainable FV intervention that schools can implement without outside assistance. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Capability, environment and internationalization fit, and financial and marketing performance of MNEs' foreign subsidiaries: An abductive contingency approach

    NARCIS (Netherlands)

    Dikova, Dessislava; Van Witteloostuijn, Arjen; Parker, Simon

    2017-01-01

    Purpose - Extant work in international business (IB) involves a partial contingency-theoretic perspective: a holistic view of the impact of bundles of contingencies on an outcome variable is missing. The purpose of this paper is to adopt a contingency approach to study multinational enterprise (MNE)

  6. Capability, environment and internationalization fit, and financial and marketing performance of MNE's foreign subsidiaries: An abductive contingency approach

    NARCIS (Netherlands)

    Dikova, D.; Parker, S.C.; van Witteloostuijn, Arjen

    2017-01-01

    Purpose Extant work in international business (IB) involves a partial contingency-theoretic perspective: a holistic view of the impact of bundles of contingencies on an outcome variable is missing. The purpose of this paper is to adopt a contingency approach to study multinational enterprise (MNE)

  7. Considerations for reference pump curves

    International Nuclear Information System (INIS)

    Stockton, N.B.

    1992-01-01

This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a pump reference curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturers' pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least-squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least-squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code-required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point.
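A reference-curve comparison of the kind discussed above can be sketched as follows (hypothetical test points, not data from the paper; both a polynomial least-squares fit and a cubic spline interpolant are evaluated at a non-reference flow):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical pump reference points: flow rate vs differential pressure
flow = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 600.0])   # gpm
dp   = np.array([ 95.0,  92.0,  86.0,  77.0,  65.0,  50.0])   # psid

# Polynomial least-squares fit over the tested range of operation
poly = np.polynomial.Polynomial.fit(flow, dp, deg=2)

# Cubic spline interpolation through the same reference points
spline = CubicSpline(flow, dp)

# Evaluate a later IST measurement taken at a non-reference flow rate
q = 350.0
ref_poly, ref_spline = float(poly(q)), float(spline(q))
```

With well-behaved data the two curves agree closely at interior flows; the practical differences between the methods show up with scattered data and near the ends of the tested range, which is why the fitting range and instrument accuracy matter.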

  8. Use of orthonormal polynomials to fit energy spectrum data for water transported through membrane

    International Nuclear Information System (INIS)

    Bogdanova, N.; Todorova, L.

    2001-01-01

    A new application of our approach with orthonormal polynomials to curve fitting is given when both variables have errors. We approximate and describe data of a new effect due to change of water energy spectrum as a result of water transport in a porous membrane
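As a rough illustration of least-squares fitting in an orthogonal polynomial basis (NumPy's Legendre basis on [-1, 1], not the authors' orthonormal construction, which also handles errors in both variables):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
truth = np.exp(-x**2)
y = truth + rng.normal(0.0, 0.01, x.size)

# Least-squares expansion in Legendre polynomials, orthogonal on [-1, 1],
# which keeps the normal equations far better conditioned than monomials
coef = legendre.legfit(x, y, deg=6)
y_fit = legendre.legval(x, coef)
rmse = np.sqrt(np.mean((y_fit - truth) ** 2))
```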

  9. Fitness club

    CERN Multimedia

    Fitness club

    2013-01-01

      Nordic Walking Classes Come join the Nordic walking classes and outings offered by the CERN Fitness Club starting September 2013. Our licensed instructor Christine offers classes for people who’ve never tried Nordic Walking and who would like to learn the technique, and outings for people who have completed the classes and enjoy going out as a group. Course 1: Tuesdays 12:30 - 13:30 24 September, 1 October, 8 October, 15 October Course 2: Tuesdays 12:30 - 13:30 5 November, 12 November, 19 November, 26 November Outings will take place on Thursdays (12:30 to 13:30) from 12 September 2013. We meet at the CERN Club Barracks car park (close to Entrance A) 10 minutes before departure. Prices: 50 CHF for 4 classes, including the 10 CHF Club membership. Payments made directly to instructor. Renting Poles: Poles can be rented from Christine at 5 CHF / hour. Subscription: Please subscribe at: http://cern.ch/club-fitness Looking forward to seeing you among us! Fitness Club FitnessClub@c...

  10. Fitness Club

    CERN Multimedia

    Fitness Club

    2012-01-01

Get in Shape for Summer with the CERN Fitness Club Saturday 23 June 2012 from 14:30 to 16:30 (doors open at 14:00) Germana’s Fitness Workshop. Build strength and stamina, sculpt and tone your body and get your heart pumping with Germana’s workout mixture of Cardio Attack, Power Pump, Power Step, Cardio Combat and Cross-Training. Where: 216 (Pump room – equipped with changing rooms and showers). What to wear: comfortable clothes and indoor sports shoes + bring a drink! How much: 15 CHF Sign up here: https://espace.cern.ch/club-fitness/Lists/Test_Subscription/NewForm.aspx? Join the Party and dance yourself into shape at Marco + Mariel’s Zumba Masterclass. Saturday 30 June 2012 from 15:00 to 16:30 Marco + Mariel’s Zumba Masterclass Where: 216 (Pump room – equipped with changing rooms and showers). What to wear: comfortable clothes and indoor sports shoes + bring a drink! How much: 25 CHF Sign up here: https://espace.cern.ch/club-fitness/Lists/Zumba%20...

  11. Fitness Club

    CERN Multimedia

    Fitness Club

    2010-01-01

    Nordic Walking Please note that the subscriptions for the general fitness classes from July to December are open: Subscriptions general fitness classes Jul-Dec 2010 Sign-up to the Fitness Club mailing list here Nordic Walking: Sign-up to the Nordic Walking mailing list here Beginners Nordic walking lessons Monday Lunchtimes (rdv 12:20 for 12:30 departure) 13.09/20.09/27.09/04.10 11.10/18.10/08.11/15.11 22.11/29.11/06.12/20.12 Nordic walking lessons Tuesday evenings (rdv 17:50 for 18:00 departure) 07.09/14.09/21.09/28.09 05.10/12.10/19.10/26.10 Intermediate/Advanced Nordic walking outings (follow the nordic walking lessons before signing up for the outings) every Thursday from 16.09 - 16.12, excluding 28.10 and 09.12 Subscriptions and info: fitness.club@cern.ch  

  12. Fitness Club

    CERN Multimedia

    Fitness Club

    2012-01-01

  The CERN Fitness Club is pleased to announce its new early morning class which will be taking place on: Tuesdays from 24th April 07:30 to 08:15 216 (Pump Hall, close to entrance C) – Facilities include changing rooms and showers. The Classes: The early morning classes will focus on workouts which will not only help you build strength and stamina, but also improve your balance and coordination. Our qualified instructor Germana will accompany you throughout the workout to ensure you stay motivated so you achieve the best results. Sign up and discover the best way to start your working day full of energy! How to subscribe? We invite you along to a FREE trial session; if you enjoy the activity, please sign up via our website: https://espace.cern.ch/club-fitness/Activities/SUBSCRIBE.aspx. * * * * * * * * Saturday 28th April Get in shape for the summer at our fitness workshop and zumba dance party: Fitness workshop with Germana 13:00 to 14:30 - 216 (Pump Hall) Price...

  13. A multi-phase approach to select new wine yeast strains with enhanced fermentative fitness and glutathione production.

    Science.gov (United States)

    Bonciani, Tommaso; De Vero, Luciana; Mezzetti, Francesco; Fay, Justin C; Giudici, Paolo

    2018-03-01

The genetic improvement of winemaking yeasts is a virtually infinite process, as the design of new strains must always cope with varied and ever-evolving production contexts. Good wine yeasts must feature both good primary traits, which are related to the overall fermentative fitness of the strain, and secondary traits, which provide accessory features augmenting its technological value. In this context, the superiority of "blind" genetic improvement techniques, such as those based on the direct selection of the desired phenotype without prior knowledge of the genotype, has been widely proven. Blind techniques such as adaptive evolution strategies have been implemented for the enhancement of many traits of interest in the winemaking field. However, these strategies usually focus on single traits: this can lead to genetic trade-off phenomena, where the selection of enhanced secondary traits leads to sub-optimal primary fermentation traits. To circumvent this phenomenon, we applied a multi-step and strongly directed genetic improvement strategy aimed at combining a strong fermentative aptitude (primary trait) with an enhanced production of glutathione (secondary trait). We exploited the random genetic recombination associated with a library of 69 monosporic clones of strain UMCC 855 (Saccharomyces cerevisiae) to search for new candidates possessing both traits. This was achieved by consecutively applying three directional selective criteria: molybdate resistance (1), fermentative aptitude (2), and glutathione production (3). The strategy led to the selection of strain 21T2-D58, which produces a high concentration of glutathione, comparable to that of other glutathione high-producers, but with a much greater fermentative aptitude.

  14. An evaluation of the Bayesian approach to fitting the N-mixture model for use with pseudo-replicated count data

    Science.gov (United States)

    Toribo, S.G.; Gray, B.R.; Liang, S.

    2011-01-01

    The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
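The N-mixture data-generating process described above can be sketched with a minimal simulation (synthetic parameters, not the paper's study design): site abundances are Poisson, and replicate counts are binomial detections given the latent abundance.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_reps = 500, 4
lam, p = 10.0, 0.6              # true abundance rate and detection probability

# Latent abundance per site, assumed closed over the survey period
N = rng.poisson(lam, n_sites)

# Replicated counts: each visit detects each of the N individuals with prob. p
y = rng.binomial(N[:, None], p, (n_sites, n_reps))

# A binomial thinning of a Poisson is again Poisson, so marginally
# E[y] = Var(y) = lam * p = 6; a quick moment check on the simulation:
mean_y, var_y = y.mean(), y.var()
```

Pseudo-replication enters when the closure assumption fails or when "replicates" are really spatial subunits; the simulation above is the idealised case against which such violations are judged.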

  15. A Quantitative Genomic Approach for Analysis of Fitness and Stress Related Traits in a Drosophila melanogaster Model Population

    Directory of Open Access Journals (Sweden)

    Palle Duun Rohde

    2016-01-01

The ability of natural populations to withstand environmental stresses relies partly on their adaptive ability. In this study, we used a subset of the Drosophila Genetic Reference Panel, a population of inbred, genome-sequenced lines derived from a natural population of Drosophila melanogaster, to investigate whether this population harbors genetic variation for a set of stress resistance and life history traits. Using a genomic approach, we found substantial genetic variation for metabolic rate, heat stress resistance, expression of a major heat shock protein, and egg-to-adult viability investigated at a benign and a higher stressful temperature. This suggests that these traits will be able to evolve. In addition, we outline an approach to conduct pathway associations based on genomic linear models, which has potential to identify adaptive genes and pathways, and therefore can be a valuable tool in conservation genomics.

  16. A Quantitative Genomic Approach for Analysis of Fitness and Stress Related Traits in a Drosophila melanogaster Model Population

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Krag, Kristian; Loeschcke, Volker

    2016-01-01

The ability of natural populations to withstand environmental stresses relies partly on their adaptive ability. In this study, we used a subset of the Drosophila Genetic Reference Panel, a population of inbred, genome-sequenced lines derived from a natural population of Drosophila melanogaster, to investigate whether this population harbors genetic variation for a set of stress resistance and life history traits. Using a genomic approach, we found substantial genetic variation for metabolic rate, heat stress resistance, expression of a major heat shock protein, and egg-to-adult viability investigated at a benign and a higher stressful temperature. This suggests that these traits will be able to evolve. In addition, we outline an approach to conduct pathway associations based on genomic linear models, which has potential to identify adaptive genes and pathways, and therefore can be a valuable tool in conservation genomics.

  17. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

We study Lagrangian points on smooth holomorphic curves in TP¹ equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP¹ with the space LE³ of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E³, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E³, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E³ where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  18. Molecular dynamics simulations of the melting curve of NiAl alloy under pressure

    OpenAIRE

    Wenjin Zhang; Yufeng Peng; Zhongli Liu

    2014-01-01

    The melting curve of B2-NiAl alloy under pressure has been investigated using molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first one simulates a homogeneous melting, while the second one involves a heterogeneous melting of materials. Both approaches reduce the superheating effectively and their results are close to each other at the applied pressures. By fit...

  19. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. To increase accuracy, characteristic points on the cooling curve have traditionally been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve is proposed: the first derivative curve is plotted versus temperature, and all characteristic points can be identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
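The derivative-versus-temperature idea can be illustrated on a synthetic cooling curve (invented arrest temperature and cooling rates, not data from the paper): the first derivative comes closest to zero at the thermal arrest.

```python
import numpy as np

# Synthetic cooling curve: linear cooling, a solidification arrest
# (latent-heat plateau) near 615 degC, then cooling resumes
t1 = np.arange(0.0, 171.0)            # 700 -> 615 degC at 0.5 K/s
t2 = np.arange(171.0, 271.0)          # arrest: temperature nearly constant
t3 = np.arange(271.0, 471.0)          # cooling resumes
T = np.concatenate([700.0 - 0.5 * t1,
                    615.0 - 0.001 * (t2 - 170.0),
                    614.9 - 0.5 * (t3 - 270.0)])
t = np.concatenate([t1, t2, t3])

# First derivative of the cooling curve; plotting it against T rather
# than t localises the phase transformation on the temperature axis
dTdt = np.gradient(T, t)
T_arrest = T[np.argmin(np.abs(dTdt))]
```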

  20. One Size Does Not Fit All: Contextualising Family Physical Activity Using a Write, Draw, Show and Tell Approach.

    Science.gov (United States)

    Noonan, Robert J; Fairclough, Stuart J; Knowles, Zoe R; Boddy, Lynne M

    2017-07-14

    Understanding family physical activity (PA) behaviour is essential for designing effective family-based PA interventions. However, effective approaches to capture the perceptions and "lived experiences" of families are not yet well established. The aims of the study were to: (1) demonstrate how a "write, draw, show and tell" (WDST) methodological approach can be appropriate to family-based PA research, and (2) present two distinct family case studies to provide insights into the habitual PA behaviour and experiences of a nuclear and single-parent family. Six participants (including two "target" children aged 9-11 years, two mothers and two siblings aged 6-8 years) from two families were purposefully selected to take part in the study, based on their family structure. Participants completed a paper-based PA diary and wore an ActiGraph GT9X accelerometer on their left wrist for up to 10 weekdays and 16 weekend days. A range of WDST tasks were then undertaken by each family to offer contextual insight into their family-based PA. The selected families participated in different levels and modes of PA, and reported contrasting leisure opportunities and experiences. These novel findings encourage researchers to tailor family-based PA intervention programmes to the characteristics of the family.

  1. Fitness Club

    CERN Multimedia

    Fitness Club

    2012-01-01

Nordic Walking Classes Sessions of four classes of one hour each are held on Tuesdays. RDV barracks parking at Entrance A, 10 minutes before class time. Session 1 = 11.09 / 18.09 / 25.09 / 02.10, 18:15 - 19:15 Session 2 = 25.09 / 02.10 / 09.10 / 16.10, 12:30 - 13:30 Session 3 = 23.10 / 30.10 / 06.11 / 13.11, 12:30 - 13:30 Session 4 = 20.11 / 27.11 / 04.12 / 11.12, 12:30 - 13:30 Prices 40 CHF per session + 10 CHF club membership 5 CHF/hour pole rental Check out our schedule and enroll at http://cern.ch/club-fitness Hope to see you among us! fitness.club@cern.ch In spring 2012 the CERN Fitness Club made long-awaited progress: we officially opened a Powerlifting section at CERN, and membership has been growing ever since, reaching 70+ people in less than 4 months. Powerlifting is a strength sport which is as simple as 1-2-3 and efficient. The "1-2-3" are the three basic lifts (bench press...

  2. For Fit's Sake: A Norms-Based Approach to Healthy Behaviors Through Influence of Presumed Media Influence.

    Science.gov (United States)

    Ho, Shirley S; Lee, Edmund W J; Ng, Kaijie; Leong, Grace S H; Tham, Tiffany H M

    2016-09-01

Based on the influence of presumed media influence (IPMI) model as the theoretical framework, this study examines how injunctive norms and personal norms mediate the influence of healthy lifestyle media messages on public intentions to engage in two types of healthy lifestyle behaviors: physical activity and healthy diet. Nationally representative data collected from 1,055 adults in Singapore demonstrate partial support for the key hypotheses that make up the extended IPMI model, highlighting the importance of a norms-based approach in health communication. Our results indicate that perceived media influence on others indirectly shaped public intentions to engage in healthy lifestyle behaviors through personal norms and attitude, providing partial theoretical support for the extended IPMI model. Practical implications for health communicators designing health campaign media messages to motivate the public to engage in healthy lifestyle behaviors are discussed.

  3. Improvements in Spectrum's fit to program data tool.

    Science.gov (United States)

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived either by fitting to seroprevalence surveillance and survey data or by generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections, people living with HIV, and AIDS-related deaths. This article describes the development and application of the fit to program data (FPD) tool in the Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV, or AIDS-related deaths. Inputs can be adjusted for the proportion undiagnosed or for misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best-fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries, where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
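A double logistic incidence trend can be fitted by least squares as sketched below (synthetic data; a difference of two logistics is one common parameterisation, and the FPD tool's exact functional form may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(t, a, k1, t1, k2, t2):
    """Rise-and-decline incidence curve: difference of two logistic ramps
    (an assumed parameterisation, not necessarily Spectrum's)."""
    return a * (1.0 / (1.0 + np.exp(-k1 * (t - t1)))
                - 1.0 / (1.0 + np.exp(-k2 * (t - t2))))

years = np.arange(1985, 2016)
true_params = (4.0, 0.6, 1993.0, 0.4, 2005.0)
rng = np.random.default_rng(1)
cases = double_logistic(years, *true_params) + rng.normal(0.0, 0.05, years.size)

# Least-squares fit from a reasonable starting guess
popt, _ = curve_fit(double_logistic, years, cases,
                    p0=(3.0, 0.5, 1992.0, 0.5, 2004.0), maxfev=20000)
```

The same objective can be swapped for a Poisson likelihood or a chi-squared distance when the observations are counts, mirroring the two estimation options the article describes.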

  4. Anatomical curve identification

    Science.gov (United States)

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943
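P-spline smoothing, used above as an integral part of the methods, combines a B-spline basis with a difference penalty on adjacent coefficients. A minimal one-dimensional sketch (synthetic data, not the authors' implementation):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 150)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Cubic B-spline basis on equally spaced knots (boundary knots repeated)
k = 3
knots = np.r_[np.zeros(k), np.linspace(0.0, 1.0, 22), np.ones(k)]
B = BSpline.design_matrix(x, knots, k).toarray()

# Second-order difference penalty on neighbouring basis coefficients
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 1.0                                  # smoothing parameter

# Penalised least squares: (B'B + lam D'D) a = B'y
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
y_hat = B @ coef
```

Increasing `lam` drives the fit towards a straight line; decreasing it towards an interpolant, which is the single knob that penalised splines expose.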

  5. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    Science.gov (United States)

    Christiansen, Bo

    2015-04-01

Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first-order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models, as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic (Bayesian with flat priors) treatment of univariate linear regression and prediction by taking, as a starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well-known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
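A frequentist cousin of the errors-in-variables model above is orthogonal distance regression, available in scipy. The sketch below (synthetic data, not the paper's Bayesian marginal-likelihood treatment) shows the slope attenuation that ordinary least squares suffers when the regressor is noisy, which ODR largely corrects:

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(7)
x_true = np.linspace(0.0, 10.0, 200)
y_clean = 2.0 * x_true + 1.0
x = x_true + rng.normal(0.0, 1.0, x_true.size)   # errors in the regressor
y = y_clean + rng.normal(0.0, 1.0, x_true.size)  # errors in the response

# Ordinary least squares attributes all noise to y: the slope is attenuated
ols_slope = np.polyfit(x, y, 1)[0]

# Orthogonal distance regression treats both variables as noisy
# (default unit weights assume comparable error variances, true here)
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
result = odr.ODR(odr.Data(x, y), linear, beta0=[1.0, 0.0]).run()
odr_slope = result.beta[0]
```

The choice of which variable carries the noise, which the abstract stresses, is exactly what separates the two slope estimates here.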

  6. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem in conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  7. Fitness club

    CERN Multimedia

    Fitness club

    2013-01-01

Nordic Walking Classes A new session of four classes of one hour each will be held on Tuesdays in May 2013. Meet at the CERN barracks parking at Entrance A, 10 minutes before class time. Dates and times: 07.05, 14.05, 21.05 and 28.05, from 12:30 to 13:30 Prices: 40 CHF per session + 10 CHF club membership – 5 CHF / hour pole rental Check out our schedule and enroll at http://cern.ch/club-fitness Hope to see you among us! 

  8. Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series

    Directory of Open Access Journals (Sweden)

    Kanadpriya Basu

    2015-08-01

This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods, and second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical interpolation techniques and statistical curve-fitting techniques complement each other and can add value to the study of one-dimensional time series seismographic data: they can be used to add more data to the system in case the data set is not large enough to perform standard statistical tests.
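The contrast between the two classes of techniques can be sketched as follows (synthetic data, not the ASARCO series): the interpolant reproduces every noisy sample exactly, while the smoothing spline trades exactness for noise suppression.

```python
import numpy as np
from scipy.interpolate import interp1d, UnivariateSpline

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2 * np.pi, 60)
signal = np.sin(t)
obs = signal + rng.normal(0.0, 0.1, t.size)

# Mathematical interpolation: the curve passes through every noisy sample
interp = interp1d(t, obs, kind="cubic")

# Statistical smoothing: residuals are allowed, noise is suppressed
smooth = UnivariateSpline(t, obs, s=t.size * 0.1**2)   # s ~ n * sigma^2

# Error against the true underlying signal
rmse_interp = np.sqrt(np.mean((interp(t) - signal) ** 2))
rmse_smooth = np.sqrt(np.mean((smooth(t) - signal) ** 2))
```

Interpolation is the right tool for densifying a short, clean record; smoothing is the right tool once measurement noise dominates, which is the complementarity the article argues for.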

  9. Accessing the dynamics of end-grafted flexible polymer chains by atomic force-electrochemical microscopy. Theoretical modeling of the approach curves by the elastic bounded diffusion model and Monte Carlo simulations. Evidence for compression-induced lateral chain escape.

    Science.gov (United States)

    Abbou, Jeremy; Anne, Agnès; Demaille, Christophe

    2006-11-16

The dynamics of a molecular layer of linear poly(ethylene glycol) (PEG) chains of molecular weight 3400, bearing at one end a ferrocene (Fc) label and thiol end-grafted at a low surface coverage onto a gold substrate, is probed using combined atomic force-electrochemical microscopy (AFM-SECM), at the scale of approximately 100 molecules. Force and current approach curves are simultaneously recorded as a force-sensing microelectrode (tip) is inserted within the approximately 10 nm thick, redox-labeled PEG chain layer. Whereas the force approach curve gives access to the structure of the compressed PEG layer, the tip current, resulting from tip-to-substrate redox cycling of the Fc head of the chain, is controlled by chain dynamics. The elastic bounded diffusion model, which considers the motion of the Fc head as diffusion in a conformational field, complemented by Monte Carlo (MC) simulations, from which the chain conformation can be derived for any degree of confinement, allows the theoretical tip-current approach curve to be calculated. The experimental current approach curve can then be very satisfactorily reproduced by theory, down to a tip-substrate separation of approximately 2 nm, using only one adjustable parameter characterizing the chain dynamics: the effective diffusion coefficient of the chain head. At closer tip-substrate separations, an unpredicted peak is observed in the experimental current approach curve, which is shown to find its origin in a compression-induced escape of the chain from within the narrowing tip-substrate gap. MC simulations provide quantitative support for lateral chain elongation as the escape mechanism.

  10. Gestão de projetos em empresas no Brasil: abordagem "tamanho único"? Project management in companies in Brazil: a "one size fits all" approach?

    Directory of Open Access Journals (Sweden)

    Luiz José Marques Junior

    2011-01-01

    approaches focused on a rational and normative view of project management are predominant. On the other hand, the data also indicated the presence of adaptive approach practices in the management of strategic projects suggesting that there is no "one size fits all" approach to manage projects in the companies studied.

  11. Development of probabilistic fatigue curve for asphalt concrete based on viscoelastic continuum damage mechanics

    Directory of Open Access Journals (Sweden)

    Himanshu Sharma

    2016-07-01

Due to its roots in a fundamental thermodynamic framework, the continuum damage approach is popular for modeling asphalt concrete behavior. Currently used continuum damage models use mixture-averaged values for model parameters and assume a deterministic damage process. On the other hand, significant scatter is found in fatigue data generated even under extremely controlled laboratory testing conditions. Thus, currently used continuum damage models fail to account for the scatter observed in fatigue data. This paper illustrates a novel approach for probabilistic fatigue life prediction based on the viscoelastic continuum damage approach. Several specimens were tested for their viscoelastic properties and damage properties under uniaxial loading. The data thus generated were analyzed using viscoelastic continuum damage mechanics principles to predict fatigue life. Weibull (2-parameter and 3-parameter) and lognormal distributions were fit to the fatigue lives predicted using the viscoelastic continuum damage approach. It was observed that fatigue damage was better described by the Weibull distribution than by the lognormal distribution. Due to its flexibility, the 3-parameter Weibull distribution was found to fit better than the 2-parameter version. Further, significant differences were found between the probabilistic fatigue curves developed in this research and the traditional deterministic fatigue curve. The proposed methodology combines the advantages of continuum damage mechanics and probabilistic approaches, and the resulting probabilistic fatigue curves can be conveniently used for reliability-based pavement design. Keywords: Probabilistic fatigue curve, Continuum damage mechanics, Weibull distribution, Lognormal distribution
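
    The distribution comparison described in this record can be sketched with SciPy; the fatigue lives below are synthetic stand-ins, and all parameter values are illustrative assumptions, not figures from the paper.

    ```python
    # Hedged sketch: fit 2-parameter Weibull, 3-parameter Weibull, and
    # lognormal distributions to (synthetic) fatigue-life data and compare
    # their maximized log-likelihoods.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Synthetic cycles-to-failure data, Weibull-distributed with scatter
    lives = stats.weibull_min.rvs(c=2.0, scale=1e5, size=60, random_state=rng)

    # 2-parameter Weibull: location (minimum life) fixed at zero
    shape2, _, scale2 = stats.weibull_min.fit(lives, floc=0)
    # 3-parameter Weibull: location estimated as well
    shape3, loc3, scale3 = stats.weibull_min.fit(lives)
    # Lognormal for comparison
    s, _, scale_ln = stats.lognorm.fit(lives, floc=0)

    # Higher log-likelihood indicates a better fit (penalize the extra
    # parameter with AIC if comparing across model families)
    ll_w2 = np.sum(stats.weibull_min.logpdf(lives, shape2, 0, scale2))
    ll_w3 = np.sum(stats.weibull_min.logpdf(lives, shape3, loc3, scale3))
    ll_ln = np.sum(stats.lognorm.logpdf(lives, s, 0, scale_ln))
    print(ll_w2, ll_w3, ll_ln)
    ```

    Comparing maximized log-likelihoods (or AIC, which penalizes the extra location parameter of the 3-parameter form) is a simple way to rank the candidate distributions.
    
    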

  12. ECM using Edwards curves

    DEFF Research Database (Denmark)

    Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja

    2013-01-01

    -arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...

  13. Intensity Conserving Spectral Fitting

    Science.gov (United States)

    Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.

    2015-01-01

    The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
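
    The iterative correction described above can be sketched as follows. This is a hedged re-implementation of the general idea (fit the nodes, compare bin averages, adjust the node values, repeat), not the authors' ICSI routine; the Gaussian test profile and bin layout are assumptions.

    ```python
    # Sketch of an intensity-conserving fit: adjust spline node values until
    # the bin-averaged spline reproduces the observed bin-averaged intensities.
    import numpy as np
    from scipy.interpolate import CubicSpline

    centers = np.linspace(-2, 2, 9)          # wavelength bin centers
    width = centers[1] - centers[0]          # bin width
    true_profile = lambda w: np.exp(-w**2)   # underlying (curved) line profile

    # A spectrometer reports the average intensity across each bin
    fine = np.linspace(-0.5, 0.5, 101) * width
    I_obs = np.array([true_profile(c + fine).mean() for c in centers])

    y = I_obs.copy()                         # initial guess: observed values
    for _ in range(20):
        cs = CubicSpline(centers, y)
        I_model = np.array([cs(c + fine).mean() for c in centers])
        y += I_obs - I_model                 # push bin averages toward data

    # Corrected node values approach the true discrete intensities,
    # unlike the raw bin averages, which are biased where the profile curves
    print(np.abs(y - true_profile(centers)).max())
    ```

    After convergence the spline conserves the observed intensity within each bin, which an ordinary spline through the raw bin averages does not.
    
    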

  14. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    Science.gov (United States)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure of an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e. the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least square support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high dimensional training data set.

  15. Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.

    Science.gov (United States)

    Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui

    2018-01-13

Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems are neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so they are named quasi scattered data in this paper. They can therefore be organized into rows easily, but the number of points in each row is random. To overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it can be demonstrated that the error introduced by resampling is negligible and therefore the method is feasible.
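
    The first two steps (row-wise curve fitting, then resampling at a fixed point count) can be illustrated with SciPy's parametric spline routines; the noisy arc below and all parameter choices are assumptions for illustration, not the authors' data or code.

    ```python
    # Sketch: fit a cubic B-spline curve to one "row" of quasi scattered
    # points, then resample it so every row has the same number of points.
    import numpy as np
    from scipy.interpolate import splprep, splev

    # One measured row: a noisy arc with an arbitrary number of points (37)
    t = np.sort(np.random.default_rng(1).uniform(0, np.pi, 37))
    x = np.cos(t) + np.random.default_rng(2).normal(0, 1e-3, t.size)
    y = np.sin(t) + np.random.default_rng(3).normal(0, 1e-3, t.size)

    # Step 1: fit a parametric cubic B-spline through the row
    tck, u = splprep([x, y], s=len(x) * 1e-5, k=3)

    # Step 2: resample the fitted curve at a fixed number of parameter
    # values, giving a uniform point count suitable for surface fitting
    u_new = np.linspace(0, 1, 25)
    xr, yr = splev(u_new, tck)

    # The resampled points still lie close to the unit circle
    r = np.hypot(xr, yr)
    print(r.min(), r.max())
    ```
    
    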

  16. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, N. E.; Soderberg, A. M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Betancourt, M., E-mail: nsanders@cfa.harvard.edu [Department of Statistics, University of Warwick, Coventry CV4 7AL (United Kingdom)

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.

  17. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    International Nuclear Information System (INIS)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M.

    2015-01-01

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST

  18. Difficulties in fitting the thermal response of atomic force microscope cantilevers for stiffness calibration

    International Nuclear Information System (INIS)

    Cole, D G

    2008-01-01

This paper discusses the difficulties of calibrating atomic force microscope (AFM) cantilevers, in particular the effect that calibrating under light fluid-loading (in air) and under heavy fluid-loading (in water) has on the ability to use the thermal motion response to fit the model parameters that determine cantilever stiffness. For the light fluid-loading case, the resonant frequency and quality factor can easily be used to determine stiffness. The extension of this approach to the heavy fluid-loading case is troublesome due to the low quality factor (high damping) caused by fluid-loading. Simple calibration formulae are difficult to realize, and the best approach is often to curve-fit the thermal response, using the parameters of natural frequency and mass ratio, so that the curve-fit's response is within some acceptable tolerance of the actual thermal response. The parameters can then be used to calculate the cantilever stiffness. However, the process of curve-fitting can lead to erroneous results unless suitable care is taken. A feedback model of the fluid–structure interaction between the unloaded cantilever and the hydrodynamic drag provides a framework for fitting a modeled thermal response to a measured response and for evaluating the parametric uncertainty of the fit. The cases of uncertainty in the natural frequency, the mass ratio, and combined uncertainty are presented, and the implications for system identification and stiffness calibration using curve-fitting techniques are discussed. Finally, considerations and recommendations for the calibration of AFM cantilevers are given in light of the results of this paper
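
    A minimal sketch of the kind of thermal-response curve fit discussed here, assuming a white-noise-driven simple harmonic oscillator (SHO) response and synthetic data; the low quality factor mimics heavy fluid-loading, and all names and values are assumptions rather than the paper's model.

    ```python
    # Hedged sketch: fit an SHO power spectral density to a (synthetic)
    # thermal noise spectrum of a heavily damped cantilever.
    import numpy as np
    from scipy.optimize import curve_fit

    def sho_psd(f, A, f0, Q):
        """White-noise-driven SHO power spectral density."""
        return A / ((1 - (f / f0)**2)**2 + (f / (f0 * Q))**2)

    f = np.linspace(1e3, 30e3, 500)                # frequency grid (Hz)
    truth = dict(A=1e-4, f0=12e3, Q=3.0)           # low Q: heavy fluid-loading
    psd = sho_psd(f, **truth) * (1 + np.random.default_rng(5).normal(0, 0.05, f.size))

    # Curve-fit the spectrum; pcov quantifies the parametric uncertainty
    popt, pcov = curve_fit(sho_psd, f, psd, p0=[5e-5, 10e3, 2.0], maxfev=5000)
    A_fit, f0_fit, Q_fit = popt
    print(f0_fit, Q_fit)
    ```

    At low Q the fitted natural frequency and quality factor become strongly correlated (visible in the off-diagonal terms of `pcov`), which is precisely the parametric-uncertainty problem the record analyzes.
    
    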

  19. Growth Curve Models and Applications : Indian Statistical Institute

    CERN Document Server

    2017-01-01

Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally r...

  20. Relations between the development of school investment, self-confidence, and language achievement in elementary education: A multivariate latent growth curve approach

    NARCIS (Netherlands)

    Stoel, R.D.; Peetsma, T.T.D.; Roeleveld, J.

    2001-01-01

    Latent growth curve (LGC) analysis of longitudinal data for pupils' school investment, self confidence and language ability is presented. A multivariate model is tested that relates the three developmental processes to each other and to intelligence. All processes show significant differences

  1. Unveiling multiple solid-state transitions in pharmaceutical solid dosage forms using multi-series hyperspectral imaging and different curve resolution approaches

    DEFF Research Database (Denmark)

    Alexandrino, Guilherme L; Amigo Rubio, Jose Manuel; Khorasani, Milad Rouhi

    2017-01-01

    Solid-state transitions at the surface of pharmaceutical solid dosage forms (SDF) were monitored using multi-series hyperspectral imaging (HSI) along with Multivariate Curve Resolution – Alternating Least Squares (MCR-ALS) and Parallel Factor Analysis (PARAFAC and PARAFAC2). First, the solid-stat...

  2. Accuracy of progress ratios determined from experience curves: the case of photovoltaic technology development

    OpenAIRE

    van Sark, W.G.J.H.M.; Alsema, E.A.; Junginger, H.M.; de Moor, H.H.C.; Schaeffer, G.J.

    2008-01-01

    Learning curves are extensively used in policy and scenario studies. Progress ratios (PRs) are derived from historical data and are used for forecasting cost development of many technologies, including photovoltaics (PV). Forecasts are highly sensitive to uncertainties in the PR. A PR usually is determined together with the coefficient of determination R2, which should approach unity for a good fit of the available data. Although the R2 is instructive, we recommend using the error in the PR d...

  3. Contractibility of curves

    Directory of Open Access Journals (Sweden)

    Janusz Charatonik

    1991-11-01

Results concerning contractibility of curves (equivalently: of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.

  4. A whole brain volumetric approach in overweight/obese children: Examining the association with different physical fitness components and academic performance. The ActiveBrains project.

    Science.gov (United States)

    Esteban-Cornejo, Irene; Cadenas-Sanchez, Cristina; Contreras-Rodriguez, Oren; Verdejo-Roman, Juan; Mora-Gonzalez, Jose; Migueles, Jairo H; Henriksson, Pontus; Davis, Catherine L; Verdejo-Garcia, Antonio; Catena, Andrés; Ortega, Francisco B

    2017-10-01

Obesity, as compared to normal weight, is associated with detectable structural differences in the brain. To the best of our knowledge, no previous study has examined the association of physical fitness with gray matter volume in overweight/obese children using whole brain analyses. Thus, the aim of this study was to examine the association between the key components of physical fitness (i.e. cardiorespiratory fitness, speed-agility and muscular fitness) and brain structural volume, and to assess whether fitness-related changes in brain volumes are related to academic performance in overweight/obese children. A total of 101 overweight/obese children aged 8-11 years were recruited from Granada, Spain. The physical fitness components were assessed following the ALPHA health-related fitness test battery. T1-weighted images were acquired with a 3.0 T Siemens Magnetom Tim Trio system. Gray matter tissue was calculated using Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra (DARTEL). Academic performance was assessed by the Batería III Woodcock-Muñoz Tests of Achievement. All analyses were controlled for sex, peak height velocity offset, parent education, body mass index and total brain volume. The statistical threshold was calculated with AlphaSim and further Hayasaka-adjusted to account for the non-isotropic smoothness of structural images. The main results showed that higher cardiorespiratory fitness was related to greater gray matter volumes in several brain structures; some of these brain structures may be related to better academic performance. Importantly, the identified associations of fitness and gray matter volume were different for each fitness component. These findings suggest that increases in cardiorespiratory fitness and speed-agility may positively influence the development of distinctive brain regions and academic indicators, and thus counteract the harmful effect of overweight and obesity on brain structure during childhood.

  5. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.

  6. The 'fitting problem' in cosmology

    International Nuclear Information System (INIS)

    Ellis, G.F.R.; Stoeger, W.

    1987-01-01

    The paper considers the best way to fit an idealised exactly homogeneous and isotropic universe model to a realistic ('lumpy') universe; whether made explicit or not, some such approach of necessity underlies the use of the standard Robertson-Walker models as models of the real universe. Approaches based on averaging, normal coordinates and null data are presented, the latter offering the best opportunity to relate the fitting procedure to data obtainable by astronomical observations. (author)

  7. Alternative Forms of Fit in Contingency Theory.

    Science.gov (United States)

    Drazin, Robert; Van de Ven, Andrew H.

    1985-01-01

    This paper examines the selection, interaction, and systems approaches to fit in structural contingency theory. The concepts of fit evaluated may be applied not only to structural contingency theory but to contingency theories in general. (MD)

  8. Technological change in energy systems. Learning curves, logistic curves and input-output coefficients

    International Nuclear Information System (INIS)

    Pan, Haoran; Koehler, Jonathan

    2007-01-01

Learning curves have recently been widely adopted in climate-economy models to incorporate endogenous change of energy technologies, replacing the conventional assumption of an autonomous energy efficiency improvement. However, there has been little consideration of the credibility of the learning curve. The current trend that many important energy and climate change policy analyses rely on the learning curve means that it is of great importance to critically examine the basis for learning curves. Here, we analyse the use of learning curves in energy technology, usually implemented as a simple power function. We find that the learning curve cannot separate the effects of price and technological change, cannot reflect continuous and qualitative change of both conventional and emerging energy technologies, cannot help to determine the time paths of technological investment, and misses the central role of R&D activity in driving technological change. We argue that a logistic curve of improving performance, modified to include R&D activity as a driving variable, can better describe the cost reductions in energy technologies. Furthermore, we demonstrate that the top-down Leontief technology can incorporate the bottom-up technologies that improve along either the learning curve or the logistic curve, through changing input-output coefficients. An application to UK wind power illustrates that the logistic curve fits the observed data better and implies greater potential for cost reduction than the learning curve does. (author)
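
    The comparison argued for here (a power-law learning curve versus a logistic improvement curve) can be sketched as below, on synthetic cost data rather than the UK wind power series; the functional forms and parameters are illustrative assumptions.

    ```python
    # Sketch: fit a power-law learning curve and a logistic curve to
    # cumulative-capacity vs unit-cost data and compare residuals.
    import numpy as np
    from scipy.optimize import curve_fit

    def learning(x, a, b):          # classic power-law learning curve
        return a * x**(-b)

    def logistic(x, L, k, x0):      # logistic improvement in log-capacity
        return L / (1 + np.exp(k * (np.log(x) - x0)))

    cap = np.logspace(0, 3, 30)     # cumulative installed capacity
    cost = logistic(cap, 100, 1.2, 3.0) + np.random.default_rng(4).normal(0, 1, 30)

    p_lc, _ = curve_fit(learning, cap, cost, p0=[100, 0.3], maxfev=10000)
    p_lg, _ = curve_fit(logistic, cap, cost, p0=[90, 1.0, 2.5], maxfev=10000)

    sse_lc = np.sum((cost - learning(cap, *p_lc))**2)
    sse_lg = np.sum((cost - logistic(cap, *p_lg))**2)
    # Progress ratio implied by the power-law fit: PR = 2**(-b)
    print(sse_lc, sse_lg, 2**(-p_lc[1]))
    ```

    On S-shaped cost data the logistic fit leaves much smaller residuals than the power law, mirroring the record's finding for wind power; the implied progress ratio shows how the power-law fit would be summarized for forecasting.
    
    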

  9. Migration and the Wage Curve:

    DEFF Research Database (Denmark)

    Brücker, Herbert; Jahn, Elke J.

Based on a wage curve approach we examine the labor market effects of migration in Germany. The wage curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. For the empirical analysis we employ the IABS, a two percent sample of the German labor force. We find that the elasticity of the wage curve is particularly high for young workers and workers with a university degree, while it is low for older workers and workers with a vocational degree. The wage and employment effects of migration are moderate: a 1 percent increase in the German labor force through immigration increases the aggregate unemployment rate by less than 0.1 percentage points and reduces average wages by less than 0.1 percent. While native workers benefit from...

  10. Hamiltonian inclusive fitness: a fitter fitness concept.

    Science.gov (United States)

    Costa, James T

    2013-01-01

    In 1963-1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's 'special difficulty'-the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of 'offspring ideas' that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated.

  11. A chord error conforming tool path B-spline fitting method for NC machining based on energy minimization and LSPIA

    OpenAIRE

    He, Shanshan; Ou, Daojiang; Yan, Changya; Lee, Chen-Han

    2015-01-01

Piecewise linear (G01-based) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical...

  12. Understanding the adsorptive interactions of arsenate-iron nanoparticles with curved fullerene-like sheets in activated carbon using a quantum mechanics/molecular mechanics computational approach.

    Science.gov (United States)

    Ha, Nguyen Ngoc; Cam, Le Minh; Ha, Nguyen Thi Thu; Goh, Bee-Min; Saunders, Martin; Jiang, Zhong-Tao; Altarawneh, Mohammednoor; Dlugogorski, Bogdan Z; El-Harbawi, Mohanad; Yin, Chun-Yang

    2017-06-07

The prevalence of global arsenic groundwater contamination has driven widespread research on developing effective treatment systems, including adsorption using various sorbents. The uptake of arsenic-based contaminants onto established sorbents such as activated carbon (AC) can be effectively enhanced via immobilization/impregnation of iron-based elements on the porous AC surface. Recent suggestions that AC pores structurally consist of an eclectic mix of curved fullerene-like sheets may affect the arsenic adsorption dynamics within the AC pores, further complicated by the presence of nano-sized iron-based elements. We have therefore attempted to shed light on the adsorptive interactions of arsenate-iron nanoparticles with curved fullerene-like sheets by using hybridized quantum mechanics/molecular mechanics (QM/MM) calculations and microscopy characterization. It is found that, subsequent to optimization, chemisorption between HAsO₄²⁻ and the AC carbon sheet (an endothermic process) is virtually non-existent; this observation is supported by experimental results. Conversely, the incorporation of iron nanoparticles (FeNPs) into the AC carbon sheet greatly facilitates chemisorption of HAsO₄²⁻. Our calculation implies that iron carbide is formed at the junction between the iron and the AC interface, and this tightly chemisorbed layer prevents detachment of the FeNPs from the AC surface. Other aspects, including electronic structure/properties, carbon arrangement defects and the rate of adsorptive interaction, which are determined using the climbing-image NEB method, are also discussed.

  13. Evaluation of J-R curve testing of nuclear piping materials using the direct current potential drop technique

    International Nuclear Information System (INIS)

    Hackett, E.M.; Kirk, M.T.; Hays, R.A.

    1986-08-01

    A method is described for developing J-R curves for nuclear piping materials using the DC Potential Drop (DCPD) technique. Experimental calibration curves were developed for both three point bend and compact specimen geometries using ASTM A106 steel, a type 304 stainless steel and a high strength aluminum alloy. These curves were fit with a power law expression over the range of crack extension encountered during J-R curve tests (0.6 a/W to 0.8 a/W). The calibration curves were insensitive to both material and sidegrooving and depended solely on specimen geometry and lead attachment points. Crack initiation in J-R curve tests using DCPD was determined by a deviation from a linear region on a plot of COD vs. DCPD. The validity of this criterion for ASTM A106 steel was determined by a series of multispecimen tests that bracketed the initiation region. A statistical differential slope procedure for determination of the crack initiation point is presented and discussed. J-R curve tests were performed on ASTM A106 steel and type 304 stainless steel using both the elastic compliance and DCPD techniques to assess R-curve comparability. J-R curves determined using the two approaches were found to be in good agreement for ASTM A106 steel. The applicability of the DCPD technique to type 304 stainless steel and high rate loading of ferromagnetic materials is discussed. 15 refs., 33 figs

  14. Fit-for-Purpose

    DEFF Research Database (Denmark)

    Enemark, Stig

    2013-01-01

    ; completeness to cover the total jurisdiction; and credibility in terms of reliable data being trusted by the users. Accuracy can then be incrementally improved over time when relevant and justified by serving the needs of citizen, business and society in general. Such a fit-for-purpose approach is fundamental...... systems act within adopted land policies that define the legal regulatory pattern for dealing with land issues. Land administration systems - whether highly advanced or very basic – require a spatial framework to operate. This framework provides the fundamental information for dealing with land issues...... concepts may well be seen as the end target but not as the point of entry. When assessing the technology and investment choices the focus should be on building a fit-for-purpose framework that will meet the needs of society today and that can be incrementally improved over time....

  15. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  16. JUMPING THE CURVE

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-01-01

Full Text Available This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change - pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.

  17. Learning curves in energy planning models

    Energy Technology Data Exchange (ETDEWEB)

Barreto, L; Kypreos, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland)]

    1999-08-01

    This study describes the endogenous representation of investment cost learning curves into the MARKAL energy planning model. A piece-wise representation of the learning curves is implemented using Mixed Integer Programming. The approach is briefly described and some results are presented. (author) 3 figs., 5 refs.
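The piece-wise treatment described above can be illustrated outside MARKAL. Below is a minimal sketch (not the MARKAL formulation; the cost figures, learning rate and capacity grid are made up) of a Wright's-law learning curve and the piece-wise linear segments over cumulative capacity that an MIP model would interpolate:

```python
import numpy as np

# Wright's law: the unit investment cost of a technology falls by a fixed
# fraction (the learning rate) for each doubling of cumulative capacity.
def unit_cost(x, c0=1000.0, learning_rate=0.20):
    b = -np.log2(1.0 - learning_rate)       # progress exponent
    return c0 * x ** (-b)

def total_cost(X, c0=1000.0, learning_rate=0.20):
    # cumulative investment cost: integral of unit_cost from 0 to X
    b = -np.log2(1.0 - learning_rate)
    return c0 * X ** (1.0 - b) / (1.0 - b)

# Piece-wise linear approximation of the concave total-cost curve on a
# capacity grid -- the kind of segments an MIP formulation selects among.
grid = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
breakpoint_costs = total_cost(grid)
slopes = np.diff(breakpoint_costs) / np.diff(grid)  # per-segment marginal cost

def piecewise_total_cost(X):
    # linear interpolation between the breakpoints
    return np.interp(X, grid, breakpoint_costs)
```

The decreasing per-segment slopes are what make the integer variables necessary in a cost-minimizing model: without them, the solver would "cherry-pick" the cheapest segment first.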

  18. A minicourse on moduli of curves

    International Nuclear Information System (INIS)

    Looijenga, E.

    2000-01-01

    These are notes that accompany a short course given at the School on Algebraic Geometry 1999 at the ICTP, Trieste. A major goal is to outline various approaches to moduli spaces of curves. In the last part I discuss the algebraic classes that naturally live on these spaces; these can be thought of as the characteristic classes for bundles of curves. (author)

  19. Symmetry Properties of Potentiometric Titration Curves.

    Science.gov (United States)

    Macca, Carlo; Bombi, G. Giorgio

    1983-01-01

    Demonstrates how the symmetry properties of titration curves can be efficiently and rigorously treated by means of a simple method, assisted by the use of logarithmic diagrams. Discusses the symmetry properties of several typical titration curves, comparing the graphical approach and an explicit mathematical treatment. (Author/JM)

  20. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to d...

  1. A mathematical function for the description of nutrient-response curve.

    Directory of Open Access Journals (Sweden)

    Hamed Ahmadi

Full Text Available Several mathematical equations have been proposed to model the nutrient-response curve for animals and humans, justified on goodness of fit and/or on the biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for description of nutrient-response phenomena is derived. The three parameters governing the curve (a) have biological interpretations, (b) may be used to calculate reliable estimates of nutrient-response relationships, and (c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit the nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done, based on simulated data sets, to check the suitability of the new model and a four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way to describe nutritional-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutritional response curves to estimate nutrient efficiency and requirements.
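The abstract does not reproduce the paper's exact functional form, so the sketch below fits a plausible Rayleigh-type saturating curve (an assumed form, with simulated data) simply to illustrate the workflow of fitting a three-parameter nutrient-response model:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Rayleigh-type response (not the paper's equation): the
# response rises from baseline c toward asymptote c + a with scale b.
def rayleigh_response(x, a, b, c):
    return c + a * (1.0 - np.exp(-(x / b) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.1, 10.0, 40)                    # nutrient levels
y = rayleigh_response(x, 2.5, 3.0, 1.0) + rng.normal(0.0, 0.05, x.size)

popt, pcov = curve_fit(rayleigh_response, x, y, p0=[1.0, 1.0, 0.0])
a_hat, b_hat, c_hat = popt                        # recovered parameters
```

The covariance matrix `pcov` gives the parameter uncertainties that a goodness-of-fit comparison (e.g. against the four-parameter logistic) would draw on.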

  2. Reference standards to assess physical fitness of children and adolescents of Brazil: an approach to the students of the Lake Itaipú region—Brazil

    Directory of Open Access Journals (Sweden)

    Edilson Hobold

    2017-11-01

Full Text Available Background The importance of assessing body fat variables and physical fitness tests plays an important role in monitoring the level of activity and physical fitness of the general population. The objective of this study was to develop reference norms to evaluate the physical fitness aptitudes of children and adolescents based on age and sex from the lake region of Itaipú, Brazil. Methods A descriptive cross-sectional study was carried out with 5,962 students (2,938 males and 3,024 females) with an age range of 6.0 to 17.9 years. Weight (kg), height (cm), and triceps (mm) and sub-scapular (mm) skinfolds were measured. Body Mass Index (BMI, kg/m2) was calculated. To evaluate the four physical fitness aptitude dimensions (morphological, muscular strength, flexibility, and cardio-respiratory), the following physical education tests were given to the students: sit-and-reach (cm), push-ups (rep), standing long jump (cm), and 20-m shuttle run (m). Results and Discussion Females showed greater flexibility in the sit-and-reach test and greater body fat than the males. No differences were found in BMI. Percentiles were created for the four components of the physical fitness aptitudes, BMI, and skinfolds by using the LMS method based on age and sex. The proposed reference values may be used for detecting talents and promoting health in children and adolescents.
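The LMS percentile construction mentioned above works from three age- and sex-specific quantities. A minimal sketch, with made-up L, M, S values (not the study's reference data):

```python
import math

# LMS method (Cole & Green): at each age/sex, a measurement's reference
# distribution is summarized by L (Box-Cox power), M (median) and
# S (coefficient of variation). The values below are hypothetical.
L_, M_, S_ = -1.6, 17.3, 0.11        # illustrative BMI reference at one age

def z_score(y, L, M, S):
    # z-score of a measured value y against the LMS reference
    if L != 0:
        return ((y / M) ** L - 1.0) / (L * S)
    return math.log(y / M) / S

def centile(z, L, M, S):
    # measurement value at normal quantile z (e.g. z = 1.645 -> 95th centile)
    if L != 0:
        return M * (1.0 + L * S * z) ** (1.0 / L)
    return M * math.exp(S * z)

p95 = centile(1.645, L_, M_, S_)     # 95th-centile cut-off at this age
```

Smoothing L, M and S across ages is what turns these point formulas into the continuous percentile curves reported in the study.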

  3. Tornado-Shaped Curves

    Science.gov (United States)

    Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio

    2017-01-01

    In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
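The paper's exact parametric equations are not reproduced in the abstract; one simple assumed family with the same qualitative shape (a spiral whose radius flares with height, narrow at the ground and wide at the top) is:

```python
import numpy as np

# A tornado-like space curve (illustrative, not the authors' equations):
# x(t) = r(t) cos(w t), y(t) = r(t) sin(w t), z(t) = t,
# with the funnel radius r(t) growing exponentially with height.
def tornado_curve(t, a=0.05, b=0.35, turns=12.0):
    r = a * np.exp(b * t)            # radius flares with height
    x = r * np.cos(turns * t)
    y = r * np.sin(turns * t)
    z = t
    return x, y, z

t = np.linspace(0.0, 10.0, 2000)
x, y, z = tornado_curve(t)
radius = np.hypot(x, y)              # equals a * exp(b * t)
```

Plotting `(x, y, z)` with any 3-D line plotter produces the funnel shape; the parameters `a`, `b` and `turns` control the tip radius, the flare rate and the winding density.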

  4. Simulating Supernova Light Curves

    International Nuclear Information System (INIS)

    Even, Wesley Paul; Dolence, Joshua C.

    2016-01-01

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.

  5. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  6. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  7. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then

  8. Tempo curves considered harmful

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1993-01-01

    In the literature of musicology, computer music research and the psychology of music, timing or tempo measurements are mostly presented in the form of continuous curves. The notion of these tempo curves is dangerous, despite its widespread use, because it lulls its users into the false impression

  9. The Roles of Macrobenthic Mollusks as Bioindicator in Response to Environmental Disturbance : Cumulative k-dominance curves and bubble plots ordination approaches

    Science.gov (United States)

    Putro, Sapto P.; Muhammad, Fuad; Aininnur, Amalia; Widowati; Suhartana

    2017-02-01

Floating net cages are one of the aquaculture practices operated in Indonesian coastal areas that have been growing rapidly over the last two decades. This study aimed to assess the roles of macrobenthic mollusks as bioindicators in response to environmental disturbance caused by fish farming activities, and to compare the samples between locations using graphical methods. The research was done at the floating net cage fish farming area in the Awerange Gulf, South Sulawesi, Indonesia, at the coordinates between 79°0500‧- 79°1500‧ LS and 953°1500‧- 953°2000‧ BT, at the polyculture and reference areas, the latter located 1 km away from the farming area. Sampling was conducted between October 2014 and June 2015. Sediment samples were taken from the two locations at two sampling times, with three replicates, using a Van Veen grab for biotic and abiotic assessment. Mollusks, as the biotic parameter, were retained on a 1 mm mesh sieve, fixed using 4% formalin solution, and preserved using 70% ethanol solution. The macrobenthic mollusks found comprised 15 species from 14 families and 2 classes (gastropods and bivalves). Based on the cumulative k-dominance analysis projected for each station, the lines of station K3T1 (reference area; first sampling time) and KJAB P3T2 (polyculture area; second sampling time) are located below the other curves, indicating the highest evenness and diversity compared to the other stations, whereas stations K2T1 (reference area; first sampling time) and K3T2 (polyculture area; second sampling time) are located on top, indicating the lowest evenness and diversity. Based on the bubble-plot NMDS ordination, the four dominant taxa/species did not clearly show involvement in driving/shifting the ordinate position of stations on the graph, except T. agilis. However, two species showed involvement in driving/shifting the ordinate position of two stations of the reference areas from the first sampling time, by Rynoclavis sordidula

  10. The curve shortening problem

    CERN Document Server

    Chou, Kai-Seng

    2001-01-01

Although research in curve shortening flow has been very active for nearly 20 years, the results of those efforts have remained scattered throughout the literature. For the first time, The Curve Shortening Problem collects and illuminates those results in a comprehensive, rigorous, and self-contained account of the fundamental results. The authors present a complete treatment of the Gage-Hamilton theorem, a clear, detailed exposition of Grayson's convexity theorem, a systematic discussion of invariant solutions, applications to the existence of simple closed geodesics on a surface, and a new, almost convexity theorem for the generalized curve shortening problem. Many questions regarding curve shortening remain outstanding. With its careful exposition and complete guide to the literature, The Curve Shortening Problem provides not only an outstanding starting point for graduate students and new investigations, but also a superb reference that presents intriguing new results for those already active in the field.

  11. MICA: Multiple interval-based curve alignment

    Science.gov (United States)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analyses pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA
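MICA's actual registration heuristic is more elaborate, but the core idea of landmark-based alignment, detect characteristic points on two profiles and warp one profile's axis so its landmarks line up with the reference's, can be sketched as follows (illustrative code, not the MICA implementation):

```python
import numpy as np

def local_maxima(y):
    # indices of strict local maxima -- a stand-in for "characteristic points"
    i = np.arange(1, len(y) - 1)
    return i[(y[i] > y[i - 1]) & (y[i] > y[i + 1])]

def align_to_reference(x, y, x_ref, y_ref):
    lm = local_maxima(y)
    lm_ref = local_maxima(y_ref)
    n = min(len(lm), len(lm_ref))          # naive one-to-one landmark pairing
    # anchor the warp at the curve ends plus the paired landmarks
    src = np.concatenate(([x[0]], x[lm[:n]], [x[-1]]))
    dst = np.concatenate(([x_ref[0]], x_ref[lm_ref[:n]], [x_ref[-1]]))
    return np.interp(x, src, dst)          # piecewise-linear axis warp

x = np.linspace(0.0, 1.0, 501)
y_ref = np.exp(-((x - 0.50) / 0.05) ** 2)  # reference profile, peak at 0.50
y = np.exp(-((x - 0.62) / 0.05) ** 2)      # same shape, peak shifted to 0.62
x_warped = align_to_reference(x, y, x, y_ref)
```

After warping, the shifted peak sits at the reference position, so pointwise averaging of the aligned profiles yields a meaningful consensus instead of a smeared double peak.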

  12. Inverse Diffusion Curves Using Shape Optimization.

    Science.gov (United States)

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  13. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

diffractometers (e.g. BM16 at ESRF and Station 2.3 at Daresbury). In the literature, convolution based profile fitting is normally associated with microstructure analysis, where the sample contribution needs to be separated from the instrument contribution in an observed profile. This is no longer the case. Convolution based profile fitting can also be performed on a fully empirical basis to provide better fits to data and a greater variety of profile shapes. With convolution based profile fitting virtually any peak shape and its angular dependence can be modelled. The approach may be based on a physical model (FPA) or performed empirically. The quality of fit by convolution is normally better than that of other methods, so the uncertainty in derived parameters is reduced. The number of parameters required to describe a pattern is normally smaller than with the 'analytical function approach', and parameter correlation is therefore reduced significantly; increasing profile complexity does not necessarily require an increasing number of parameters. Copyright (2002) Australian X-ray Analytical Association Inc
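The forward model behind convolution based profile fitting can be sketched numerically: the observed peak is modelled as the instrument function convolved with the sample function, and the sample parameters are refined against the data. A toy version with an assumed Gaussian instrument contribution and a Lorentzian sample contribution (synthetic data, crude grid-search refinement):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)           # angular grid
dx = x[1] - x[0]

def gaussian(x, w):
    g = np.exp(-0.5 * (x / w) ** 2)
    return g / (g.sum() * dx)              # normalize to unit area

def lorentzian(x, w):
    return w / (np.pi * (x ** 2 + w ** 2))

def profile(gamma, sigma=0.30):
    # observed profile = sample (Lorentzian) convolved with instrument (Gaussian)
    return np.convolve(lorentzian(x, gamma), gaussian(x, sigma),
                       mode="same") * dx

observed = profile(0.45)                   # synthetic "data", true gamma = 0.45

# grid search over the sample width -- a stand-in for full refinement
gammas = np.linspace(0.10, 1.00, 91)
resid = [np.sum((profile(g) - observed) ** 2) for g in gammas]
gamma_hat = gammas[int(np.argmin(resid))]
```

The instrument part (`sigma`) stays fixed, which is exactly what lets the refined `gamma` be interpreted as a pure sample property.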

  14. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  15. A study of pH-dependent photodegradation of amiloride by a multivariate curve resolution approach to combined kinetic and acid-base titration UV data.

    Science.gov (United States)

    De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano

    2012-11-21

Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to the UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible by simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation was shown to be strongly dependent on pH. Two hard modelling constraints were sequentially used in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process. An amiloride acid-base system was defined by using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, which were differently distributed according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved and kinetic rate constants were estimated. The values of rate constants changed with pH, and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared to those obtained by LC-MS analysis from drug photodegradation experiments. MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium.
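Stripped of the equilibrium and kinetic constraints used in the study, the core of MCR-ALS is an alternating least-squares bilinear factorization of the data matrix into concentration profiles and pure spectra. A bare-bones sketch on synthetic data (illustrative only; a real analysis adds the hard-modelling constraints described above):

```python
import numpy as np

# D (time x wavelength) = C (time x species) @ S.T (species x wavelength).
# Alternate least-squares solves for C and S, clamping both non-negative.
rng = np.random.default_rng(1)
n_t, n_wl, k = 50, 80, 2

C_true = np.column_stack([np.exp(-0.1 * np.arange(n_t)),          # reactant
                          1.0 - np.exp(-0.1 * np.arange(n_t))])   # product
S_true = rng.uniform(0.0, 1.0, (n_wl, k))                         # pure spectra
D = C_true @ S_true.T                                             # noiseless data

C = rng.uniform(0.1, 1.0, (n_t, k))        # random initial concentration guess
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)

reconstruction_error = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

The non-negativity clamp is the simplest of the soft constraints; the equilibrium and kinetic hard models in the paper replace the unconstrained `C` update with profiles forced to obey the chemistry.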

  16. Metabolism of apolipoproteins A-I and A-II in human high-density lipoprotein: a mathematical approach for analysis of their specific activity decay curves

    International Nuclear Information System (INIS)

    Atmeh, R.F.

    1987-01-01

    The differential rate equations describing the compartmental model of human high-density lipoprotein (HDL) were integrated by means of Laplace transforms and an exponential equation was obtained for each of the three compartments. These equations were used to fit the observed plasma decay data and give estimates for the rate constants of the system by means of a written computer program. Furthermore, these estimates were used to calculate the exponential constants of the integrated equations. Consequently, the amount of label in any of the intravascular, extravascular, and urine compartments can be calculated as a fraction of the original dose of label at any time point. This method was tested using data for the (AI)HDL subclass because it contains only apolipoprotein A-I as the major apolipoprotein and does not contain apolipoprotein A-II. The calculated plasma and urine radioactivity data were compared with the experimentally obtained data from two normolipoproteinemic subjects and found to be in good agreement. The significance of this method is its application to the analysis of the decay data of the individual apolipoproteins of (AI + AII) HDL subclass where the urinary radioactivity data resulting from the individual apolipoprotein breakdown on the native particle cannot be measured experimentally at present. Such data are essential for the detailed calculation of the kinetic parameters of these apolipoproteins
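The key point, that Laplace-transform integration of a linear compartmental model yields a sum-of-exponentials expression for each compartment which is then fitted to the plasma decay data, can be illustrated with a hypothetical bi-exponential fit (made-up parameters and simulated data, not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fraction of injected label remaining in plasma: sum of two exponentials,
# as obtained from the integrated compartmental equations.
def plasma(t, A1, k1, A2, k2):
    return A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)

t = np.linspace(0.0, 14.0, 60)                 # days after injection
rng = np.random.default_rng(2)
y = plasma(t, 0.7, 1.2, 0.3, 0.15) + rng.normal(0.0, 0.002, t.size)

(A1, k1, A2, k2), _ = curve_fit(plasma, t, y, p0=[0.5, 1.0, 0.5, 0.1])
if k1 < k2:                                    # order components: fast first
    (A1, k1), (A2, k2) = (A2, k2), (A1, k1)
```

The fitted amplitudes and rate constants are the raw material from which the compartmental rate constants, and hence the unmeasurable urine-compartment curve, are back-calculated.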

  17. INFOS: spectrum fitting software for NMR analysis

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Albert A., E-mail: alsi@nmr.phys.chem.ethz.ch [ETH Zürich, Physical Chemistry (Switzerland)

    2017-02-15

    Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.

  18. Learning Curve? Which One?

    Directory of Open Access Journals (Sweden)

    Paulo Prochno

    2004-07-01

Full Text Available Learning curves have been studied for a long time. These studies provided strong support to the hypothesis that, as organizations produce more of a product, unit costs of production decrease at a decreasing rate (see Argote, 1999, for a comprehensive review of learning curve studies). But the organizational mechanisms that lead to these results are still underexplored. We know some drivers of learning curves (ADLER; CLARK, 1991; LAPRE et al., 2000), but we still lack a more detailed view of the organizational processes behind those curves. Through an ethnographic study, I bring a comprehensive account of the first year of operations of a new automotive plant, describing what was taking place in the assembly area during the most relevant shifts of the learning curve. The emphasis is then on how learning occurs in that setting. My analysis suggests that the overall learning curve is in fact the result of an integration process that puts together several individual ongoing learning curves in different areas throughout the organization. In the end, I propose a model to understand the evolution of these learning processes and their supporting organizational mechanisms.

  19. Spiral Galaxy Central Bulge Tangential Speed of Revolution Curves

    Science.gov (United States)

    Taff, Laurence

    2013-03-01

    The objective was to, for the first time in a century, scientifically analyze the ``rotation curves'' (sic) of the central bulges of scores of spiral galaxies. I commenced with a methodological, rational, geometrical, arithmetic, and statistical examination--none of them carried through before--of the radial velocity data. The requirement for such a thorough treatment is the paucity of data typically available for the central bulge: fewer than 10 observations and frequently only five. The most must be made of these. A consequence of this logical handling is the discovery of a unique model for the central bulge volume mass density resting on the positive slope, linear, rise of its tangential speed of revolution curve and hence--for the first time--a reliable mass estimate. The deduction comes from a known physics-based, mathematically valid, derivation (not assertion). It rests on the full (not partial) equations of motion plus Poisson's equation. Following that is a prediction for the gravitational potential energy and thence the gravitational force. From this comes a forecast for the tangential speed of revolution curve. It was analyzed in a fashion identical to that of the data thereby closing the circle and demonstrating internal self-consistency. This is a hallmark of a scientific method-informed approach to an experimental problem. Multiple plots of the relevant quantities and measures of goodness of fit will be shown. Astronomy related

  20. A statistical approach to evaluate the performance of cardiac biomarkers in predicting death due to acute myocardial infarction: time-dependent ROC curve

    Science.gov (United States)

    Karaismailoğlu, Eda; Dikmen, Zeliha Günnur; Akbıyık, Filiz; Karaağaoğlu, Ahmet Ergun

    2018-04-30

Background/aim: Myoglobin, cardiac troponin T, B-type natriuretic peptide (BNP), and creatine kinase isoenzyme MB (CK-MB) are frequently used biomarkers for evaluating the risk of patients admitted to an emergency department with chest pain. Recently, time-dependent receiver operating characteristic (ROC) analysis has been used to evaluate the predictive power of biomarkers where disease status can change over time. We aimed to determine the best set of biomarkers that estimate cardiac death during follow-up time. We also obtained optimal cut-off values of these biomarkers, which differentiate between patients with and without risk of death. A web tool was developed to estimate time intervals in risk. Materials and methods: A total of 410 patients admitted to the emergency department with chest pain and shortness of breath were included. Cox regression analysis was used to determine an optimal set of biomarkers that can be used for estimating cardiac death and to combine the significant biomarkers. Time-dependent ROC analysis was performed for evaluating the performances of the significant biomarkers and a combined biomarker during 240 h. The bootstrap method was used to compare statistical significance and the Youden index was used to determine optimal cut-off values. Results: Myoglobin and BNP were significant by multivariate Cox regression analysis. Areas under the time-dependent ROC curves of myoglobin and BNP were about 0.80 during 240 h, and that of the combined biomarker (myoglobin + BNP) increased to 0.90 during the first 180 h. Conclusion: Although myoglobin is not clinically specific to a cardiac event, in our study both myoglobin and BNP were found to be statistically significant for estimating cardiac death. Using this combined biomarker may increase the power of prediction. Our web tool can be useful for evaluating the risk status of new patients and helping clinicians in making decisions.
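The cumulative/dynamic variant of the time-dependent AUC used in such analyses can be computed directly: at horizon t, "cases" are subjects who died by t, "controls" are those still alive at t, and the AUC is the probability that a random case has a higher marker value than a random control. The sketch below ignores censoring for simplicity (real analyses typically use inverse-probability-of-censoring weighting) and runs on simulated data, not the study's:

```python
import numpy as np

def time_dependent_auc(marker, event_time, t):
    # cumulative/dynamic AUC at horizon t (censoring ignored)
    cases = marker[event_time <= t]
    controls = marker[event_time > t]
    diff = cases[:, None] - controls[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (
        len(cases) * len(controls))

rng = np.random.default_rng(3)
n = 400
marker = rng.normal(0.0, 1.0, n)
# hypothetical cohort: higher marker -> shorter survival time (in hours)
event_time = rng.exponential(np.exp(-1.2 * marker) * 100.0)

auc_48h = time_dependent_auc(marker, event_time, 48.0)
```

Evaluating the same function on a grid of horizons traces out the AUC(t) curve reported over the 240 h window in the study.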

  1. Climatic and basin factors affecting the flood frequency curve: PART II – A full sensitivity analysis based on the continuous simulation approach combined with a factorial experimental design

    Directory of Open Access Journals (Sweden)

    M. Franchini

    2000-01-01

Full Text Available The sensitivity analysis described in Hashemi et al. (2000) is based on one-at-a-time perturbations to the model parameters. This type of analysis cannot highlight the presence of parameter interactions, which might indeed affect the characteristics of the flood frequency curve (ffc) even more than the individual parameters. For this reason, the effects of the parameters of the rainfall and rainfall-runoff models and of the potential evapotranspiration demand on the ffc are investigated here through an analysis of the results obtained from a factorial experimental design, where all the parameters are allowed to vary simultaneously. This latter, more complex, analysis confirms the results obtained in Hashemi et al. (2000), thus making the conclusions drawn there of wider validity and not related strictly to the reference set selected. However, it is shown that two-factor interactions are present not only between different pairs of parameters of an individual model, but also between pairs of parameters of different models, such as the rainfall and rainfall-runoff models, thus demonstrating the complex interaction between climate and basin characteristics affecting the ffc and in particular its curvature. Furthermore, the wider range of climatic regime behaviour produced within the factorial experimental design shows that the probability distribution of soil moisture content at the storm arrival time is no longer sufficient to explain the link between the perturbations to the parameters and their effects on the ffc, as was suggested in Hashemi et al. (2000). Other factors have to be considered, such as the probability distribution of the soil moisture capacity, and the rainfall regime, expressed through the annual maximum rainfalls over different durations. Keywords: Monte Carlo simulation; factorial experimental design; analysis of variance (ANOVA)

  2. Validity limits in J-resistance curve determination: A computational approach to ductile crack growth under large-scale yielding conditions. Volume 2

    International Nuclear Information System (INIS)

    Shih, C.F.; Xia, L.; Hutchinson, J.W.

    1995-02-01

In this report, Volume 2, Mode I crack initiation and growth under plane strain conditions in tough metals are computed using an elastic/plastic continuum model which accounts for void growth and coalescence ahead of the crack tip. The material parameters include the stress-strain properties, along with the parameters characterizing the spacing and volume fraction of voids in material elements lying in the plane of the crack. For a given set of these parameters and a specific specimen, or component, subject to a specific loading, relationships among load, load-line displacement and crack advance can be computed with no restrictions on the extent of plastic deformation. Similarly, there is no limit on crack advance, except that it must take place on the symmetry plane ahead of the initial crack. Suitably defined measures of crack tip loading intensity, such as those based on the J-integral, can also be computed, thereby directly generating crack growth resistance curves. In this report, the model is applied to five specimen geometries which are known to give rise to significantly different crack tip constraints and crack growth resistance behaviors. Computed results are compared with sets of experimental data for two tough steels for four of the specimen types. Details of the load, displacement and crack growth histories are accurately reproduced, even when extensive crack growth takes place under conditions of fully plastic yielding. A description of material resistance to crack initiation and subsequent growth is essential for assessing the structural integrity of components such as nuclear pressure vessels and piping

  3. Combination of BTrackS and Geri-Fit as a targeted approach for assessing and reducing the postural sway of older adults with high fall risk

    Directory of Open Access Journals (Sweden)

    Goble DJ

    2017-02-01

Full Text Available Daniel J Goble, Mason C Hearn, Harsimran S Baweja School of Exercise and Nutritional Sciences, College of Health and Human Services, San Diego State University, San Diego, CA, USA Abstract: Atypically high postural sway measured by a force plate is a known risk factor for falls in older adults. Further, it has been shown that small, but significant, reductions in postural sway are possible with various balance exercise interventions. In the present study, a new low-cost force-plate technology called the Balance Tracking System (BTrackS was utilized to assess postural sway of older adults before and after 90 days of a well-established exercise program called Geri-Fit. Results showed an overall reduction in postural sway across all participants from pre- to post-intervention. However, the magnitude of effects was significantly influenced by the amount of postural sway demonstrated by individuals prior to Geri-Fit training. Specifically, more participants with atypically high postural sway pre-intervention experienced an overall postural sway reduction. These reductions were typically greater than the minimum detectable change statistic for the BTrackS Balance Test. Taken together, these findings suggest that BTrackS is an effective means of identifying older adults with elevated postural sway who are likely to benefit from Geri-Fit training to mitigate fall risk. Keywords: aging, balance, BTrackS, Geri-Fit, postural sway, fall risk

  4. The crime kuznets curve

    OpenAIRE

    Buonanno, Paolo; Fergusson, Leopoldo; Vargas, Juan Fernando

    2014-01-01

    We document the existence of a Crime Kuznets Curve in US states since the 1970s. As income levels have risen, crime has followed an inverted U-shaped pattern, first increasing and then dropping. The Crime Kuznets Curve is not explained by income inequality. In fact, we show that during the sample period inequality has risen monotonically with income, ruling out the traditional Kuznets Curve. Our finding is robust to adding a large set of controls that are used in the literature to explain the...

  5. Bond yield curve construction

    Directory of Open Access Journals (Sweden)

    Kožul Nataša

    2014-01-01

    Full Text Available In the broadest sense, the yield curve indicates the market's view of the evolution of interest rates over time. However, given that the cost of borrowing is closely linked to creditworthiness (ability to repay), different yield curves will apply to different currencies, market sectors, or even individual issuers. As government borrowing is indicative of interest rate levels available to other market players in a particular country, and considering that bond issuance still remains the dominant form of sovereign debt, this paper describes yield curve construction using bonds. The relationship between zero-coupon yield, par yield and yield to maturity is given and their usage in determining curve discount factors is described. Their usage in deriving forward rates and pricing related derivative instruments is also discussed.
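    The bootstrapping chain the abstract describes (par yields → discount factors → zero-coupon and forward rates) can be sketched for annual-coupon bonds; the maturities and par yields below are illustrative assumptions, not data from the paper.

```python
def bootstrap_discount_factors(par_yields):
    """Bootstrap discount factors from annual-coupon par yields.

    par_yields[n-1] is the par yield c_n of an n-year bond paying annual
    coupons.  A par bond prices at 1, so
        1 = c_n * (D_1 + ... + D_n) + D_n
        => D_n = (1 - c_n * (D_1 + ... + D_{n-1})) / (1 + c_n)
    """
    discounts = []
    for c in par_yields:
        d = (1.0 - c * sum(discounts)) / (1.0 + c)
        discounts.append(d)
    return discounts

def zero_rates(discounts):
    # Annually compounded zero-coupon yields: D_n = (1 + z_n)**-n
    return [d ** (-1.0 / n) - 1.0 for n, d in enumerate(discounts, start=1)]

def forward_rates(discounts):
    # One-year forward rates implied by consecutive discount factors
    rates = [1.0 / discounts[0] - 1.0]
    rates += [discounts[n - 1] / discounts[n] - 1.0 for n in range(1, len(discounts))]
    return rates

# Hypothetical upward-sloping par curve for maturities 1..4 years;
# a flat 5% par curve would reproduce D_n = 1.05**-n exactly.
par = [0.02, 0.025, 0.03, 0.035]
D = bootstrap_discount_factors(par)
```

    A sanity check on the sketch: feeding in a flat par curve must return the single-rate discount factors, identical zero rates and identical forwards.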

  6. SRHA calibration curve

    Data.gov (United States)

    U.S. Environmental Protection Agency — an UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...

  7. Bragg Curve Spectroscopy

    International Nuclear Information System (INIS)

    Gruhn, C.R.

    1981-05-01

    An alternative utilization is presented for the gaseous ionization chamber in the detection of energetic heavy ions, which is called Bragg Curve Spectroscopy (BCS). Conceptually, BCS involves using the maximum data available from the Bragg curve of the stopping heavy ion (HI) for purposes of identifying the particle and measuring its energy. A detector has been designed that measures the Bragg curve with high precision. From the Bragg curve one determines the range (from the length of the track), the total energy (from the integral of the specific ionization over the track), the dE/dx (from the specific ionization at the beginning of the track), and the Bragg peak (from the maximum of the specific ionization of the HI). This last signal measures the atomic number, Z, of the HI unambiguously.
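    The four observables named above can be read off a sampled Bragg curve with a few lines of code; the sampled profile used here is a synthetic stand-in, not detector data.

```python
def bragg_observables(depth, dEdx):
    """Extract the BCS observables from a sampled Bragg curve.

    depth -- sample positions along the track
    dEdx  -- specific ionization measured at each position
    Returns (range, total_energy, entrance_dEdx, bragg_peak).
    """
    # Range: last position with non-zero specific ionization
    rng = max(x for x, s in zip(depth, dEdx) if s > 0)
    # Total energy: integral of dE/dx over the track (trapezoidal rule)
    energy = sum((dEdx[i] + dEdx[i + 1]) / 2.0 * (depth[i + 1] - depth[i])
                 for i in range(len(depth) - 1))
    # Entrance dE/dx identifies the ion together with the Bragg peak height
    return rng, energy, dEdx[0], max(dEdx)
```

    For a real detector the specific ionization would be the digitized anode signal versus drift time; here the function only encodes the geometric relationships stated in the abstract.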

  8. Power Curve Measurements FGW

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Federici, Paolo

    This report describes power curve measurements carried out on a given turbine in a chosen period. The measurements are carried out in accordance with IEC 61400-12-1 Ed. 1 and FGW Teil 2.

  9. Curves and Abelian varieties

    CERN Document Server

    Alexeev, Valery; Clemens, C Herbert; Beauville, Arnaud

    2008-01-01

    This book is devoted to recent progress in the study of curves and abelian varieties. It discusses both classical aspects of this deep and beautiful subject as well as two important new developments, tropical geometry and the theory of log schemes. In addition to original research articles, this book contains three surveys devoted to singularities of theta divisors, of compactified Jacobians of singular curves, and of "strange duality" among moduli spaces of vector bundles on algebraic varieties.

  10. Flow characteristics of curved ducts

    Directory of Open Access Journals (Sweden)

    Rudolf P.

    2007-10-01

    Full Text Available Curved channels are very often present in real hydraulic systems, e.g. curved diffusers of hydraulic turbines, S-shaped bulb turbines, fittings, etc. Curvature brings change of velocity profile, generation of vortices and production of hydraulic losses. Flow simulations using CFD techniques were performed to understand these phenomena. Cases ranging from a single elbow to coupled elbows in U-shape, S-shape and spatial right-angle configurations with circular cross-section were modeled for Re = 60000. Spatial development of the flow was studied and consequently it was deduced that minor losses are connected with the transformation of pressure energy into kinetic energy and vice versa. This transformation is a dissipative process and is reflected in the amount of the energy irreversibly lost. The smallest loss coefficient is associated with flow in U-shaped elbows, the largest with flow in S-shaped elbows. Finally, the extent of the flow domain influenced by the presence of curvature was examined. This is important for proper placement of manometers and flowmeters during experimental tests. Simulations were verified against experimental results presented in the literature.

  11. Fermions in curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Lippoldt, Stefan

    2016-01-21

    In this thesis we study a formulation of Dirac fermions in curved spacetime that respects general coordinate invariance as well as invariance under local spin base transformations. We emphasize the advantages of the spin base invariant formalism both from a conceptual as well as from a practical viewpoint. This suggests that local spin base invariance should be added to the list of (effective) properties of (quantum) gravity theories. We find support for this viewpoint by the explicit construction of a global realization of the Clifford algebra on a 2-sphere which is impossible in the spin-base non-invariant vielbein formalism. The natural variables for this formulation are spacetime-dependent Dirac matrices subject to the Clifford-algebra constraint. In particular, a coframe, i.e. vielbein field is not required. We disclose the hidden spin base invariance of the vielbein formalism. Explicit formulas for the spin connection as a function of the Dirac matrices are found. This connection consists of a canonical part that is completely fixed in terms of the Dirac matrices and a free part that can be interpreted as spin torsion. The common Lorentz symmetric gauge for the vielbein is constructed for the Dirac matrices, even for metrics which are not linearly connected. Under certain criteria, it constitutes the simplest possible gauge, demonstrating why this gauge is so useful. Using the spin base formulation for building a field theory of quantized gravity and matter fields, we show that it suffices to quantize the metric and the matter fields. This observation is of particular relevance for field theory approaches to quantum gravity, as it can serve for a purely metric-based quantization scheme for gravity even in the presence of fermions. Hence, in the second part of this thesis we critically examine the gauge, and the field-parametrization dependence of renormalization group flows in the vicinity of non-Gaussian fixed points in quantum gravity. While physical

  12. Physical Fitness Assessment.

    Science.gov (United States)

    Valdes, Alice

    This document presents baseline data on physical fitness that provides an outline for assessing the physical fitness of students. It consists of 4 tasks and a 13-item questionnaire on fitness-related behaviors. The fitness test evaluates cardiorespiratory endurance by a steady state jog; muscular strength and endurance with a two-minute bent-knee…

  13. Unge, sundhed og fitness

    DEFF Research Database (Denmark)

    Jensen, Jens-Ole

    2003-01-01

    The article describes the spread of fitness among young people and discusses why fitness training has become so popular.

  14. Targets for parathyroid hormone in secondary hyperparathyroidism: is a “one-size-fits-all” approach appropriate? A prospective incident cohort study

    OpenAIRE

    Laurain, Emmanuelle; Ayav, Carole; Erpelding, Marie-Line; Kessler, Michèle; Briançon, Serge; Brunaud, Laurent; Frimat, Luc

    2014-01-01

    Background Recommendations for secondary hyperparathyroidism (SHPT) consider that a “one-size-fits-all” target enables efficacy of care. In routine clinical practice, SHPT continues to pose diagnosis and treatment challenges. One hypothesis that could explain these difficulties is that dialysis population with SHPT is not homogeneous. Methods EPHEYL is a prospective, multicenter, pharmacoepidemiological study including chronic dialysis patients (≥3 months) with newly SHPT diagnosis, i.e. para...

  15. Combination of BTrackS and Geri-Fit as a targeted approach for assessing and reducing the postural sway of older adults with high fall risk.

    Science.gov (United States)

    Goble, Daniel J; Hearn, Mason C; Baweja, Harsimran S

    2017-01-01

    Atypically high postural sway measured by a force plate is a known risk factor for falls in older adults. Further, it has been shown that small, but significant, reductions in postural sway are possible with various balance exercise interventions. In the present study, a new low-cost force-plate technology called the Balance Tracking System (BTrackS) was utilized to assess postural sway of older adults before and after 90 days of a well-established exercise program called Geri-Fit. Results showed an overall reduction in postural sway across all participants from pre- to post-intervention. However, the magnitude of effects was significantly influenced by the amount of postural sway demonstrated by individuals prior to Geri-Fit training. Specifically, more participants with atypically high postural sway pre-intervention experienced an overall postural sway reduction. These reductions experienced were typically greater than the minimum detectable change statistic for the BTrackS Balance Test. Taken together, these findings suggest that BTrackS is an effective means of identifying older adults with elevated postural sway, who are likely to benefit from Geri-Fit training to mitigate fall risk.

  16. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
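    The gradient-driven refinement step can be illustrated in miniature. The sketch below fits a straight line by plain gradient descent on the squared fitting error; the line model, data, learning rate and step count are all assumptions standing in for the paper's elastic-curve parameters and optimizer.

```python
def gradient_descent_fit(xs, ys, lr=0.05, steps=3000):
    """Fit y = a*x + b by gradient descent on the mean squared error.

    Illustrates the generic pattern: start from an initial guess and
    repeatedly step against the gradient of the fitting error.
    """
    a, b = 0.0, 0.0  # initial guess
    n = len(xs)
    for _ in range(steps):
        # Gradient of (1/n) * sum((a*x + b - y)**2) w.r.t. a and b
        ga = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * ga
        b -= lr * gb
    return a, b
```

    In the paper the parameters are those of an elastic curve segment and the good initial guess matters because the objective is not convex; for this linear toy problem any starting point converges.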

  17. Power Curve Measurements REWS

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    This report describes the power curve measurements carried out on a given wind turbine in a chosen period. The measurements were carried out following the measurement procedure in the draft of IEC 61400-12-1 Ed. 2 [1], with some deviations mostly regarding uncertainty calculation. Here, the reference wind speed used in the power curve is the equivalent wind speed obtained from lidar measurements at several heights between lower and upper blade tip, in combination with a hub height meteorological mast. The measurements have been performed using DTU’s measurement equipment, the analysis...

  18. Curved electromagnetic missiles

    International Nuclear Information System (INIS)

    Myers, J.M.; Shen, H.M.; Wu, T.T.

    1989-01-01

    Transient electromagnetic fields can exhibit interesting behavior in the limit of great distances from their sources. In situations of finite total radiated energy, the energy reaching a distant receiver can decrease with distance much more slowly than the usual r⁻². Cases of such slow decrease have been referred to as electromagnetic missiles. All of the wide variety of known missiles propagate in essentially straight lines. A sketch is presented here of a missile that can follow a path that is strongly curved. An example of a curved electromagnetic missile is explicitly constructed and some of its properties are discussed. References to details available elsewhere are given.

  19. Algebraic curves and cryptography

    CERN Document Server

    Murty, V Kumar

    2010-01-01

    It is by now a well-known paradigm that public-key cryptosystems can be built using finite Abelian groups and that algebraic geometry provides a supply of such groups through Abelian varieties over finite fields. Of special interest are the Abelian varieties that are Jacobians of algebraic curves. All of the articles in this volume are centered on the theme of point counting and explicit arithmetic on the Jacobians of curves over finite fields. The topics covered include Schoof's \\ell-adic point counting algorithm, the p-adic algorithms of Kedlaya and Denef-Vercauteren, explicit arithmetic on
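    The brute-force baseline that Schoof-type point-counting algorithms improve on is easy to state for a curve y² = x³ + ax + b over a small prime field; the toy parameters below are invented for illustration, and this approach is hopeless at cryptographic field sizes.

```python
def count_points(a, b, p):
    """Naively count the points on y^2 = x^3 + a*x + b over F_p,
    including the point at infinity.  Exponential in log p; Schoof's
    l-adic algorithm achieves polynomial time instead."""
    # Tabulate how many y give each quadratic residue y^2 mod p
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += squares.get(rhs, 0)
    return count
```

    For a nonsingular curve the result must satisfy the Hasse bound |N − (p + 1)| ≤ 2√p, which gives a cheap sanity check on the count.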

  20. IGMtransmission: Transmission curve computation

    Science.gov (United States)

    Harrison, Christopher M.; Meiksin, Avery; Stock, David

    2015-04-01

    IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.

  1. Projection-based curve clustering

    International Nuclear Information System (INIS)

    Auder, Benjamin; Fischer, Aurelie

    2012-01-01

    This paper focuses on unsupervised curve classification in the context of nuclear industry. At the Commissariat a l'Energie Atomique (CEA), Cadarache (France), the thermal-hydraulic computer code CATHARE is used to study the reliability of reactor vessels. The code inputs are physical parameters and the outputs are time evolution curves of a few other physical quantities. As the CATHARE code is quite complex and CPU time-consuming, it has to be approximated by a regression model. This regression process involves a clustering step. In the present paper, the CATHARE output curves are clustered using a k-means scheme, with a projection onto a lower dimensional space. We study the properties of the empirically optimal cluster centres found by the clustering method based on projections, compared with the 'true' ones. The choice of the projection basis is discussed, and an algorithm is implemented to select the best projection basis among a library of orthonormal bases. The approach is illustrated on a simulated example and then applied to the industrial problem. (authors)
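    The two-step scheme described above, projecting each output curve onto a low-dimensional basis and then running k-means on the coefficients, can be sketched as follows. The fixed piecewise-constant basis and the deterministic initialization are simplifying assumptions; the paper selects the best basis from a library of orthonormal bases.

```python
import math

def project(curve, k):
    """Project a sampled curve onto k orthonormal piecewise-constant
    basis functions (normalized block sums) -- a stand-in for the
    basis-library selection discussed in the paper."""
    block = len(curve) // k
    return [sum(curve[i * block:(i + 1) * block]) / math.sqrt(block)
            for i in range(k)]

def kmeans(points, k, iters=20):
    """Plain k-means on the projection coefficients.  Centres start as
    evenly spaced data points so the sketch is reproducible."""
    centres = [points[(i * len(points)) // k] for i in range(k)]
    dist2 = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: dist2(p, centres[j]))].append(p)
        for j, cl in enumerate(clusters):
            if cl:
                centres[j] = [sum(c) / len(cl) for c in zip(*cl)]
    return [min(range(k), key=lambda j: dist2(p, centres[j])) for p in points]
```

    In the industrial setting the "curves" are the CATHARE time-evolution outputs; the clustering then partitions the code's response surface before a regression model is fitted per cluster.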

  2. High resolution melting curve analysis targeting the HBB gene mutational hot-spot offers a reliable screening approach for all common as well as most of the rare beta-globin gene mutations in Bangladesh.

    Science.gov (United States)

    Islam, Md Tarikul; Sarkar, Suprovath Kumar; Sultana, Nusrat; Begum, Mst Noorjahan; Bhuyan, Golam Sarower; Talukder, Shezote; Muraduzzaman, A K M; Alauddin, Md; Islam, Mohammad Sazzadul; Biswas, Pritha Promita; Biswas, Aparna; Qadri, Syeda Kashfi; Shirin, Tahmina; Banu, Bilquis; Sadya, Salma; Hussain, Manzoor; Sarwardi, Golam; Khan, Waqar Ahmed; Mannan, Mohammad Abdul; Shekhar, Hossain Uddin; Chowdhury, Emran Kabir; Sajib, Abu Ashfaqur; Akhteruzzaman, Sharif; Qadri, Syed Saleheen; Qadri, Firdausi; Mannoor, Kaiissar

    2018-01-02

    Bangladesh lies in the global thalassemia belt, which has a defined mutational hot-spot in the beta-globin gene. The high carrier frequencies of beta-thalassemia trait and hemoglobin E-trait in Bangladesh necessitate a reliable DNA-based carrier screening approach that could supplement the use of hematological and electrophoretic indices to overcome the barriers of carrier screening. With this view in mind, the study aimed to establish a high resolution melting (HRM) curve-based rapid and reliable mutation screening method targeting the mutational hot-spot of South Asian and Southeast Asian countries that encompasses exon-1 (c.1 - c.92), intron-1 (c.92 + 1 - c.92 + 130) and a portion of exon-2 (c.93 - c.217) of the HBB gene which harbors more than 95% of mutant alleles responsible for beta-thalassemia in Bangladesh. Our HRM approach could successfully differentiate ten beta-globin gene mutations, namely c.79G > A, c.92 + 5G > C, c.126_129delCTTT, c.27_28insG, c.46delT, c.47G > A, c.92G > C, c.92 + 130G > C, c.126delC and c.135delC in heterozygous states from the wild type alleles, implying the significance of the approach for carrier screening as the first three of these mutations account for ~85% of total mutant alleles in Bangladesh. Moreover, different combinations of compound heterozygous mutations were found to generate melt curves that were distinct from the wild type alleles and from one another. Based on the findings, sixteen reference samples were run in parallel to 41 unknown specimens to perform direct genotyping of the beta-thalassemia specimens using HRM. The HRM-based genotyping of the unknown specimens showed 100% consistency with the sequencing result. Targeting the mutational hot-spot, the HRM approach could be successfully applied for screening of beta-thalassemia carriers in Bangladesh as well as in other countries of South Asia and Southeast Asia. The approach could be a useful supplement of hematological and

  3. Superelastic Ball Bearings: Materials and Design to Avoid Mounting and Dismounting Brinell Damage in an Inaccessible Press-Fit Application. I: Design Approach

    Science.gov (United States)

    Dellacorte, Christopher; Howard, S. Adam

    2015-01-01

    Ball bearings require proper fit and installation into machinery structures (onto shafts and into bearing housings) to ensure optimal performance. For some applications, both the inner and outer race must be mounted with an interference fit, and care must be taken during assembly and disassembly to avoid placing heavy static loads between the balls and races; otherwise, Brinell dent-type damage can occur. In this paper, a highly dent resistant superelastic alloy, 60NiTi, is considered for rolling element bearing applications that encounter excessive static axial loading during assembly or disassembly. A small (R8) ball bearing is designed for an application in which access to the bearing races to apply disassembly tools is precluded. First-principles analyses show that by careful selection of materials, raceway curvature and land geometry, a bearing can be designed that allows blind assembly and disassembly without incurring raceway damage due to ball denting. Though such blind assembly applications are uncommon, the availability of bearings with unusually high static load capability may enable more such applications with additional benefits, especially for miniature bearings.

  4. Learning from uncertain curves

    DEFF Research Database (Denmark)

    Mallasto, Anton; Feragen, Aasa

    2017-01-01

    We introduce a novel framework for statistical analysis of populations of nondegenerate Gaussian processes (GPs), which are natural representations of uncertain curves. This allows inherent variation or uncertainty in function-valued data to be properly incorporated in the population analysis. Us...

  5. Power Curve Measurements

    DEFF Research Database (Denmark)

    Federici, Paolo; Kock, Carsten Weber

    This report describes the power curve measurements performed with a nacelle LIDAR on a given wind turbine in a wind farm and during a chosen measurement period. The measurements and analysis are carried out in accordance with the guidelines in the procedure “DTU Wind Energy-E-0019” [1]. The reporting...

  6. Power Curve Measurements, FGW

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  7. Power Curve Measurements

    DEFF Research Database (Denmark)

    Federici, Paolo; Vesth, Allan

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  8. Power Curve Measurements

    DEFF Research Database (Denmark)

    Villanueva, Héctor; Gómez Arranz, Paula

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  9. Carbon Lorenz Curves

    NARCIS (Netherlands)

    Groot, L.F.M.|info:eu-repo/dai/nl/073642398

    2008-01-01

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across
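    The transfer of the Lorenz curve and Gini index to emission data is mechanical once per-unit emissions are given; a minimal sketch (with made-up emission figures in the test, not data from the paper) is:

```python
def lorenz_points(values):
    """Return (population share, emission share) points of the Lorenz curve."""
    vals = sorted(values)
    total = sum(vals)
    n = len(vals)
    pts = [(0.0, 0.0)]
    cum = 0.0
    for i, v in enumerate(vals, start=1):
        cum += v
        pts.append((i / n, cum / total))
    return pts

def gini(values):
    """Gini index: one minus twice the area under the Lorenz curve
    (trapezoidal rule over the piecewise-linear curve)."""
    pts = lorenz_points(values)
    area = sum((x1 - x0) * (y0 + y1) / 2.0
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
    return 1.0 - 2.0 * area
```

    Perfectly equal emissions give a Gini of 0, and concentrating all emissions in one of n units drives the index toward 1, exactly as with income.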

  10. The Axial Curve Rotator.

    Science.gov (United States)

    Hunter, Walter M.

    This document contains detailed directions for constructing a device that mechanically produces the three-dimensional shape resulting from the rotation of any algebraic line or curve around either axis on the coordinate plane. The device was developed in response to student difficulty in visualizing, and thus grasping the mathematical principles…

  11. Nacelle lidar power curve

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Wagner, Rozenn

    This report describes the power curve measurements performed with a nacelle LIDAR on a given wind turbine in a wind farm and during a chosen measurement period. The measurements and analysis are carried out in accordance with the guidelines in the procedure “DTU Wind Energy-E-0019” [1]. The reporting...

  12. Power curve report

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present...

  13. Textbook Factor Demand Curves.

    Science.gov (United States)

    Davis, Joe C.

    1994-01-01

    Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)

  14. ECM using Edwards curves

    NARCIS (Netherlands)

    Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C.P.

    2013-01-01

    This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the
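    The speedup comes from the unified addition law on Edwards curves x² + y² = 1 + d·x²·y². A sketch over a small prime field follows; the prime and curve parameter are toy assumptions (EECM itself works modulo the composite being factored, in faster coordinate systems and with many further optimizations).

```python
def edwards_add(P, Q, d, p):
    """Complete addition law on the Edwards curve
    x^2 + y^2 = 1 + d*x^2*y^2 over F_p:
        x3 = (x1*y2 + y1*x2) / (1 + d*x1*x2*y1*y2)
        y3 = (y1*y2 - x1*x2) / (1 - d*x1*x2*y1*y2)
    Assumes d is a non-square mod p, so the denominators never vanish
    for points on the curve (completeness of the law)."""
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

def on_curve(P, d, p):
    x, y = P
    return (x * x + y * y - 1 - d * x * x * y * y) % p == 0
```

    The neutral element is (0, 1), and the same formula handles doubling and generic additions, which is part of what makes the Edwards shape attractive for ECM.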

  15. Power Curve Measurements FGW

    DEFF Research Database (Denmark)

    Federici, Paolo; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance with Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  16. FITS: a function-fitting program

    Energy Technology Data Exchange (ETDEWEB)

    Balestrini, S.J.; Chezem, C.G.

    1982-01-01

    FITS is an iterative computer program that adjusts the parameters of a function to fit a set of data points according to the least squares criterion and then lists and plots the results. The function can be programmed or chosen from a library that is provided. The library can be expanded to include up to 99 functions. A general plotting routine, contained in the program but useful in its own right, is described separately in an Appendix.
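    The least squares criterion FITS iterates on reduces to a single linear solve whenever the model is linear in its parameters. The sketch below solves the normal equations for such a model (the quadratic test model is illustrative; FITS itself iterates so it can handle functions that are nonlinear in their parameters).

```python
def least_squares(design, ys):
    """Solve min ||A p - y||^2 via the normal equations A^T A p = A^T y,
    using Gaussian elimination with partial pivoting (adequate for a
    handful of parameters)."""
    m = len(design[0])
    ata = [[sum(r[i] * r[j] for r in design) for j in range(m)] for i in range(m)]
    aty = [sum(r[i] * y for r, y in zip(design, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    p = [0.0] * m
    for r in range(m - 1, -1, -1):
        p[r] = (aty[r] - sum(ata[r][c] * p[c] for c in range(r + 1, m))) / ata[r][r]
    return p
```

    Each row of `design` holds the basis-function values at one data point, e.g. [1, x, x²] for a quadratic, so the same solver serves any model that is a linear combination of library functions.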

  17. Family Activities for Fitness

    Science.gov (United States)

    Grosse, Susan J.

    2009-01-01

    This article discusses how families can increase family togetherness and improve physical fitness. The author provides easy ways to implement family friendly activities for improving and maintaining physical health. These activities include: walking, backyard games, and fitness challenges.

  18. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce position, width and intensity of lines of X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gauss- or Voigt-profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.)

  19. Can Low-Resolution Airborne Laser Scanning Data Be Used to Model Stream Rating Curves?

    Directory of Open Access Journals (Sweden)

    Steve W. Lyon

    2015-03-01

    Full Text Available This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
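    The empirical rating-curve fitting whose uncertainty the study compares against is commonly the power law Q = a·(h − h0)^b, which linearizes under logarithms when the cease-to-flow stage h0 is known. The stage–discharge numbers in the test are invented for illustration; the study itself uses a physics-based approach rather than this empirical fit.

```python
import math

def fit_rating_curve(stages, discharges, h0=0.0):
    """Fit Q = a * (h - h0)**b by ordinary least squares on
    log Q = log a + b * log(h - h0)."""
    xs = [math.log(h - h0) for h in stages]
    ys = [math.log(q) for q in discharges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

    With the fitted (a, b), translating a measured water depth into discharge is a single evaluation of the power law, which is what makes rating curves so central to streamflow monitoring.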

  20. Generalization of drying curves in conductive/convective drying of cellulose

    Directory of Open Access Journals (Sweden)

    M. Stenzel

    2003-03-01

    Full Text Available The objective of this work is to analyze the possibility of applying the drying curves generalization methodology to the conductive/convective hot plate drying of cellulose. The experiments were carried out at different heated plate temperatures and air velocities over the surface of the samples. This kind of approach is very interesting because it permits comparison of the results of different experiments by reducing them to only one set, which can be divided into two groups: the generalized drying curves and the generalized drying rate curves. The experimental apparatus is an attempt to reproduce the operational conditions of conventional paper dryers (ratio of paper/air movement) and consists of a metallic box heated by a thermostatic bath containing an upper surface on which the cellulose samples are placed. Sample material is short- and long-fiber cellulose sheets, about 1 mm thick, and ambient air was introduced into the system by an adjustable blower under different conditions. Long-fiber cellulose generalized curves were obtained and analyzed first individually and then together with the short-fiber cellulose results from Motta Lima et al. (2000a,b). Finally, a set of equations to fit the generalized curves obtained was proposed and discussed.

  1. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    Science.gov (United States)

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  2. FPGA curved track fitter with very low resource usage

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jin-Yuan; Wang, M.; Gottschalk, E.; Shi, Z.; /Fermilab

    2006-11-01

    The standard least-squares curved-track fitting process is tailored for FPGA implementation. The coefficients in the fitting matrices are carefully chosen so that only shift and accumulation operations are used in the process. The divisions and full multiplications are eliminated. Comparison in an application example shows that the fitting errors of the low resource usage implementation are less than 4% larger than the fitting errors of the exact least-squares algorithm. The implementation is suitable for low-cost, low-power applications such as high energy physics detector trigger systems.
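    The shift-and-accumulate idea can be illustrated with the slope term of a least-squares fit over five equally spaced integer samples at x = −2..2, where the exact coefficient 1/10 is replaced by the shift-friendly 13/128 = 2⁻⁴ + 2⁻⁵ + 2⁻⁷ (about 1.6% high). This is a toy reconstruction of the approach, not the paper's actual coefficient set.

```python
def exact_slope(ys):
    """Least-squares slope for samples at x = -2..2: sum(x*y) / 10."""
    s = -2 * ys[0] - ys[1] + ys[3] + 2 * ys[4]
    return s / 10.0

def shift_add_slope(ys):
    """Same slope using only shifts and adds on integers:
    x*y needs at most one shift (x = +/-2), and dividing by 10 is
    approximated by multiplying by 13/128, i.e.
    ((s << 3) + (s << 2) + s) >> 7.  Python's >> is an arithmetic
    shift, matching two's-complement hardware behavior."""
    s = -(ys[0] << 1) - ys[1] + ys[3] + (ys[4] << 1)
    return ((s << 3) + (s << 2) + s) >> 7
```

    In the FPGA each shift is just a fixed wiring of the bus, so the whole fit collapses into a small adder tree, which is where the very low resource usage comes from; the price is the few-percent coefficient error the paper quantifies.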

  3. Adaptive Management Fitness of Watersheds

    Directory of Open Access Journals (Sweden)

    Ignacio Porzecanski

    2012-09-01

    Full Text Available Adaptive management (AM) promises to improve our ability to cope with the inherent uncertainties of managing complex dynamic systems such as watersheds. However, despite the increasing adherence and attempts at implementation, the AM approach is rarely successful in practice. A one-size-fits-all AM strategy fails because some watersheds are better positioned at the outset to succeed at AM than others. We introduce a diagnostic tool called the Index of Management Condition (IMC) and apply it to twelve diverse watersheds in order to determine their AM "fitness"; that is, the degree to which favorable adaptive management conditions are in place in a watershed.

  4. Stenting for curved lesions using a novel curved balloon: Preliminary experimental study.

    Science.gov (United States)

    Tomita, Hideshi; Higaki, Takashi; Kobayashi, Toshiki; Fujii, Takanari; Fujimoto, Kazuto

    2015-08-01

    Stenting may be a compelling approach to dilating curved lesions in congenital heart diseases. However, balloon-expandable stents, which are commonly used for congenital heart diseases, are usually deployed in a straight orientation. In this study, we evaluated the effect of stenting with a novel curved balloon considered to provide better conformability to the curved-angled lesion. In vitro experiments: A Palmaz Genesis(®) stent (Johnson & Johnson, Cordis Co, Bridgewater, NJ, USA) mounted on the Goku(®) curve (Tokai Medical Co. Nagoya, Japan) was dilated in vitro to observe directly the behavior of the stent and balloon assembly during expansion. Animal experiment: A short Express(®) Vascular SD (Boston Scientific Co, Marlborough, MA, USA) stent and a long Express(®) Vascular LD stent (Boston Scientific) mounted on the curved balloon were deployed in the curved vessel of a pig to observe the effect of stenting in vivo. In vitro experiments: Although the stent was dilated in a curved fashion, stent and balloon assembly also rotated conjointly during expansion of its curved portion. In the primary stenting of the short stent, the stent was dilated with rotation of the curved portion. The excised stent conformed to the curved vessel. As the long stent could not be negotiated across the mid-portion with the balloon in expansion when it started curving, the mid-portion of the stent failed to expand fully. Furthermore, the balloon, which became entangled with the stent strut, could not be retrieved even after complete deflation. This novel curved balloon catheter might be used for implantation of the short stent in a curved lesion; however, it should not be used for primary stenting of the long stent. Post-dilation to conform the stent to the angled vessel would be safer than primary stenting irrespective of stent length. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  5. Parametric representation of centrifugal pump homologous curves

    International Nuclear Information System (INIS)

    Veloso, Marcelo A.; Mattos, Joao R.L. de

    2015-01-01

    Essential for any mathematical model designed to simulate flow transient events caused by pump operations is the pump performance data. The performance of a centrifugal pump is characterized by four basic quantities: the rotational speed, the volumetric flow rate, the dynamic head, and the hydraulic torque. The curves showing the relationships between these four variables are called the pump characteristic curves. The characteristic curves are empirically developed by the pump manufacturer and uniquely describe head and torque as functions of volumetric flow rate and rotational speed. Because it comprises a large number of points, this configuration is not suitable for computational purposes. However, it can be converted to a simpler form by the development of the homologous curves, in which dynamic head and hydraulic torque ratios are expressed as functions of volumetric flow and rotational speed ratios. The numerical use of the complete set of homologous curves requires specification of sixteen partial curves, eight for the dynamic head and eight for the hydraulic torque. As a consequence, the handling of homologous curves is still somewhat complicated. In solving flow transient problems that require the pump characteristic data for all the operation zones, the parametric form appears as the simplest way to deal with the homologous curves. In this approach, the complete characteristics of a pump can be described by only two closed curves, one for the dynamic head and the other for the hydraulic torque, both as functions of a single angular coordinate defined adequately in terms of the quotient of the volumetric flow ratio and the rotational speed ratio. The usefulness and advantages of this alternative method are demonstrated through a practical example in which the homologous curves for a pump of the type used in the main coolant loops of a pressurized water reactor (PWR) are transformed to the parametric form. (author)

  6. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQb, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â (Q/QGM)b], where QGM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
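    Both fitted forms can be sketched on synthetic data: the conventional power law is a straight line in log-log space, and the discharge-normalized offset â is simply the intercept of the same regression after dividing Q by its geometric mean. The discharges and concentrations below are simulated, not measurements from any river.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    Q = 10 ** rng.uniform(0.0, 2.0, 200)      # synthetic discharges, 1-100
    b_true, a_true = 1.5, 0.3
    C = a_true * Q ** b_true * np.exp(rng.normal(0.0, 0.2, Q.size))

    # Conventional power-law fit: log C = log a + b log Q
    slope, intercept = np.polyfit(np.log(Q), np.log(C), 1)
    b_hat, a_hat = slope, np.exp(intercept)

    # Discharge-normalized form C = a_norm * (Q/QGM)^b, with QGM the
    # geometric mean of the sampled Q; a_norm is the offset at Q = QGM
    QGM = np.exp(np.mean(np.log(Q)))
    slope_n, intercept_n = np.polyfit(np.log(Q / QGM), np.log(C), 1)
    a_norm = np.exp(intercept_n)
    ```

    Normalizing by QGM leaves the exponent b unchanged but recenters the intercept at a representative discharge, which is why â tracks the vertical offset of the curve better than a.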

  7. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  8. Carbon Lorenz Curves

    Energy Technology Data Exchange (ETDEWEB)

    Groot, L. [Utrecht University, Utrecht School of Economics, Janskerkhof 12, 3512 BL Utrecht (Netherlands)

    2008-11-15

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across countries. These tools allow policy-makers and the general public to grasp at a single glance the impact of conventional distribution rules such as equal caps or grandfathering, or more sophisticated ones, on the distribution of greenhouse gas emissions. Second, using the Samuelson rule for the optimal provision of a public good, the Pareto-optimal distribution of carbon emissions is compared with the distribution that follows if countries follow Nash-Cournot abatement strategies. It is shown that the Pareto-optimal distribution under the Samuelson rule can be approximated by the equal cap division, represented by the diagonal in the Lorenz curve diagram.
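    A minimal sketch of applying these income-inequality tools to emissions: sorting countries by per-capita emissions gives the Lorenz curve, and the Gini index is one minus twice the area under it. The function and the country figures below are illustrative, not data from the paper.

    ```python
    import numpy as np

    def gini(emissions, population):
        """Population-weighted Gini index of per-capita emissions.

        Countries are sorted by per-capita emissions; the Lorenz curve
        plots cumulative population share against cumulative emissions
        share, and Gini = 1 - 2 * (area under the Lorenz curve)."""
        e = np.asarray(emissions, dtype=float)
        p = np.asarray(population, dtype=float)
        order = np.argsort(e / p)
        e, p = e[order], p[order]
        cum_p = np.concatenate(([0.0], np.cumsum(p))) / p.sum()
        cum_e = np.concatenate(([0.0], np.cumsum(e))) / e.sum()
        # Trapezoid-rule area under the Lorenz curve
        area = np.sum((cum_e[1:] + cum_e[:-1]) / 2.0 * np.diff(cum_p))
        return 1.0 - 2.0 * area

    # Equal per-capita emissions give Gini = 0; all emissions from
    # half the population give Gini = 0.5.
    g_equal = gini([3.0, 3.0, 3.0], [1.0, 1.0, 1.0])
    g_concentrated = gini([0.0, 10.0], [5.0, 5.0])
    ```

    An "equal caps" allocation would move every country onto the diagonal (Gini = 0), which is the sense in which the paper reads distribution rules at a glance off the Lorenz diagram.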

  9. Dynamics of curved fronts

    CERN Document Server

    Pelce, Pierre

    1989-01-01

    In recent years, much progress has been made in the understanding of interface dynamics of various systems: hydrodynamics, crystal growth, chemical reactions, and combustion. Dynamics of Curved Fronts is an important contribution to this field and will be an indispensable reference work for researchers and graduate students in physics, applied mathematics, and chemical engineering. The book consist of a 100 page introduction by the editor and 33 seminal articles from various disciplines.

  10. International Wage Curves

    OpenAIRE

    David G. Blanchflower; Andrew J. Oswald

    1992-01-01

    The paper provides evidence for the existence of a negatively sloped locus linking the level of pay to the rate of regional (or industry) unemployment. This "wage curve" is estimated using microeconomic data for Britain, the US, Canada, Korea, Austria, Italy, Holland, Switzerland, Norway, and Germany. The average unemployment elasticity of pay is approximately -0.1. The paper sets out a multi-region efficiency wage model and argues that its predictions are consistent with the data.

  11. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, number of hidden neurons or binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information of queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Pre-implemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  12. Molecular dynamics simulations of the melting curve of NiAl alloy under pressure

    International Nuclear Information System (INIS)

    Zhang, Wenjin; Peng, Yufeng; Liu, Zhongli

    2014-01-01

    The melting curve of B2-NiAl alloy under pressure has been investigated using molecular dynamics technique and the embedded atom method (EAM) potential. The melting temperatures were determined with two approaches, the one-phase and the two-phase methods. The first one simulates a homogeneous melting, while the second one involves a heterogeneous melting of materials. Both approaches effectively reduce superheating, and their results are close to each other at the applied pressures. By fitting the well-known Simon equation to our melting data, we obtained the melting curves for NiAl: Tm = 1783(1 + P/9.801)^0.298 (one-phase approach) and Tm = 1850(1 + P/12.806)^0.357 (two-phase approach). The good agreement of the resulting equations of state and the zero-pressure melting point (calc. 1850 ± 25 K; exp. 1911 K) with experiment confirms the correctness of these results. These melting data fill the absence of experimental high-pressure melting data for NiAl. To check the transferability of this EAM potential, we have also predicted the melting curves of pure nickel and pure aluminum. Results show that the calculated melting point of nickel agrees well with experiment at zero pressure, while the melting point of aluminum is slightly higher than experiment.
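    Fitting the Simon equation Tm = T0(1 + P/a)^c can be sketched with a simple grid search over the reference pressure a, since the exponent c then has a closed-form least-squares solution in log space. This is an illustrative fitting procedure, not the authors'; the synthetic data are generated from the one-phase result quoted above.

    ```python
    import numpy as np

    def fit_simon(P, Tm, T0):
        """Fit the Simon equation Tm = T0 * (1 + P/a)**c.

        For each candidate `a` on a grid, the exponent c solves a
        one-parameter least-squares problem in log space; the (a, c)
        pair with the smallest residual wins."""
        P, Tm = np.asarray(P, float), np.asarray(Tm, float)
        best = None
        for a in np.linspace(1.0, 50.0, 981):   # step 0.05
            x = np.log1p(P / a)
            c = np.sum(x * np.log(Tm / T0)) / np.sum(x * x)
            resid = np.sum((Tm - T0 * (1.0 + P / a) ** c) ** 2)
            if best is None or resid < best[0]:
                best = (resid, a, c)
        return best[1], best[2]

    # Synthetic melting data from the paper's one-phase fit for NiAl:
    # Tm = 1783 * (1 + P/9.801)**0.298, pressures in GPa
    P = np.linspace(0.0, 100.0, 21)
    Tm = 1783.0 * (1.0 + P / 9.801) ** 0.298
    a_fit, c_fit = fit_simon(P, Tm, 1783.0)
    ```

    On noise-free data the grid search recovers the generating parameters to within the grid spacing.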

  13. Investigation of learning and experience curves

    Energy Technology Data Exchange (ETDEWEB)

    Krawiec, F.; Thornton, J.; Edesess, M.

    1980-04-01

    The applicability of learning and experience curves for predicting future costs of solar technologies is assessed, and the major test case is the production economics of heliostats. Alternative methods for estimating cost reductions in systems manufacture are discussed, and procedures for using learning and experience curves to predict costs are outlined. Because adequate production data often do not exist, production histories of analogous products/processes are analyzed and learning and aggregated cost curves for these surrogates estimated. If the surrogate learning curves apply, they can be used to estimate solar technology costs. The steps involved in generating these cost estimates are given. Second-generation glass-steel and inflated-bubble heliostat design concepts, developed by MDAC and GE, respectively, are described; a costing scenario for 25,000 units/yr is detailed; surrogates for cost analysis are chosen; learning and aggregate cost curves are estimated; and aggregate cost curves for the GE and MDAC designs are estimated. However, an approach that combines a neoclassical production function with a learning-by-doing hypothesis is needed to yield a cost relation compatible with the historical learning curve and the traditional cost function of economic theory.

  14. Reconceptualizing fit in strategic human resource management: 'Lost in translation?'

    NARCIS (Netherlands)

    Paauwe, J.; Boon, C.; Boselie, P.; den Hartog, D.; Paauwe, J.; Guest, D.; Wright, P.

    2013-01-01

    To date, studies that focus on the concept of 'fit' in strategic human resource management (SHRM) fail to show consistent evidence. A variety of fit approaches are available, but there is no general consensus about what constitutes fit. This chapter aims to reconceptualize fit through a literature

  15. Petroleum refining fitness assessment to the sectoral approaches to address climate change; Analise da aptidao do setor refino de petroleo as abordagens setoriais para lidar com as mudancas climaticas globais

    Energy Technology Data Exchange (ETDEWEB)

    Merschmann, Paulo Roberto de Campos

    2010-03-15

    The climate agreement that will take place from 2013 onwards needs to address some of the concerns that were not considered in the Kyoto Protocol. Such concerns include the absence of emission targets for big emitters developing countries and the impacts of unequal carbon-policies on the competitiveness of Annex 1 energy-intensive sectors. Sectoral approaches for energy-intensive sectors can be a solution to both concerns, mainly if they address climate change issues involving all the countries in which these sectors have a significant participation. A sector is a good candidate to the sectoral approaches if it has some characteristics. Such characteristics are high impact to the competitiveness of Annex 1 enterprises derived of the lack of compromises of enterprises located in non Annex 1 countries, high level of opportunities to mitigate GHG emissions based on the application of sectoral approaches and easy sectoral approaches implementation in the sector. Then, this work assesses the petroleum refining sector fitness to the sectoral approaches to address climate change. Also, this dissertation compares the petroleum refining sector characteristics to the characteristics of well suited sectors to the sectoral approaches. (author)

  16. FITS: a function-fitting program

    Energy Technology Data Exchange (ETDEWEB)

    Balestrini, S.J.; Chezem, C.G.

    1982-08-01

    FITS is an iterating computer program that adjusts the parameters of a function to fit a set of data points according to the least squares criterion and then lists and plots the results. The function can be programmed or chosen from a library that is provided. The library can be expanded to include up to 99 functions. A general plotting routine, contained in the program but useful in its own right, is described separately in Appendix A. An example problem file and its solution is given in Appendix B.

  17. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

    Full Text Available Abstract Background Currently real time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data and processing may notably influence final results. The data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employ standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real time PCR. Results We designed a procedure for data processing in relative real time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from coordinates of points where the threshold line crosses fluorescence plots obtained after the noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicas. (V) The final results are derived from the CPs' means. The CPs' variances are traced to results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found to fit routine laboratory practice. Different options are discussed for aggregation of data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that...
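    Step (III) can be sketched as linear interpolation between the two cycles that bracket the threshold; the sigmoidal test curve and the threshold value below are illustrative, not from the article.

    ```python
    import numpy as np

    def crossing_point(fluorescence, threshold):
        """Fractional cycle number at which a noise-filtered, baseline-
        subtracted fluorescence curve first crosses the threshold, by
        linear interpolation between adjacent cycles (cf. step III)."""
        f = np.asarray(fluorescence, dtype=float)
        i = int(np.nonzero(f >= threshold)[0][0])  # first index at/above
        frac = (threshold - f[i - 1]) / (f[i] - f[i - 1])
        return i + frac        # f[0] is cycle 1, so cycle = index + 1

    # Synthetic sigmoidal amplification curve over 40 cycles
    cycles = np.arange(1, 41)
    f = 1.0 / (1.0 + np.exp(-(cycles - 22) / 1.8))
    cp = crossing_point(f, threshold=0.2)
    ```

    In the article's procedure this interpolation would be applied to each replica, with the threshold taken from the standard curve regression rather than chosen by hand as here.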

  18. Discovering an Accessible Enzyme: Salivary [alpha]-Amylase--"Prima Digestio Fit in Ore"--A Didactic Approach for High School Students

    Science.gov (United States)

    Marini, Isabella

    2005-01-01

    Human salivary [alpha]-amylase is used in this experimental approach to introduce biology high school students to the concept of enzyme activity in a dynamic way. Through a series of five easy, rapid, and inexpensive laboratory experiments students learn what the activity of an enzyme consists of: first in a qualitative then in a semi-quantitative…

  19. Learning curves in health professions education.

    Science.gov (United States)

    Pusic, Martin V; Boutis, Kathy; Hatala, Rose; Cook, David A

    2015-08-01

    Learning curves, which graphically show the relationship between learning effort and achievement, are common in published education research but are not often used in day-to-day educational activities. The purpose of this article is to describe the generation and analysis of learning curves and their applicability to health professions education. The authors argue that the time is right for a closer look at using learning curves, given their desirable properties, to inform both self-directed instruction by individuals and education management by instructors. A typical learning curve is made up of a measure of learning (y-axis), a measure of effort (x-axis), and a mathematical linking function. At the individual level, learning curves make manifest a single person's progress towards competence, including his/her rate of learning, the inflection point where learning becomes more effortful, and the remaining distance to mastery attainment. At the group level, overlaid learning curves show the full variation of a group of learners' paths through a given learning domain. Specifically, they make overt the difference between time-based and competency-based approaches to instruction. Additionally, instructors can use learning curve information to more accurately target educational resources to those who most require them. The learning curve approach requires a fine-grained collection of data that will not be possible in all educational settings; however, the increased use of an assessment paradigm that explicitly includes effort and its link to individual achievement could result in increased learner engagement and more effective instructional design.
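    One common choice of mathematical linking function (the article does not prescribe a specific form) is a negative exponential rising toward a ceiling: achievement y = ymax * (1 - exp(-k * effort)). A minimal fitting sketch with hypothetical learner data:

    ```python
    import numpy as np

    def fit_learning_curve(effort, score):
        """Fit y = ymax * (1 - exp(-k * effort)) by grid search over the
        learning rate k, solving ymax in closed form for each k."""
        effort = np.asarray(effort, dtype=float)
        score = np.asarray(score, dtype=float)
        best = None
        for k in np.linspace(0.01, 1.0, 100):
            basis = 1.0 - np.exp(-k * effort)
            ymax = np.sum(basis * score) / np.sum(basis * basis)
            resid = np.sum((score - ymax * basis) ** 2)
            if best is None or resid < best[0]:
                best = (resid, k, ymax)
        return best[1], best[2]

    # Hypothetical learner: scores approach a ceiling of 90 at rate 0.25
    attempts = np.arange(1, 21)
    scores = 90.0 * (1.0 - np.exp(-0.25 * attempts))
    k_hat, ymax_hat = fit_learning_curve(attempts, scores)
    ```

    The fitted rate k captures how quickly a learner approaches mastery, and ymax the remaining distance to the ceiling, which is the kind of individual-level information the article argues instructors could target resources with.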

  20. Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.

    Science.gov (United States)

    Roman, A; Ahmed, K; Challacombe, B

    2016-05-01

    Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, this approach is technically demanding. To date, there are limited data on the nature and progression of the learning curve (LC) in RPN. The aims were to analyse the impact of case mix on the RPN LC and to model the learning curve. The records of the first 100 RPNs performed at our institution by a single surgeon (B.C.) (June 2010 to December 2013) were analysed. Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R(2) values were calculated for each group. Of 100 patients (F: 28, M: 72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7 respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. No single well-fitting model can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Optimization of Fit for Mass Customized Apparel Ordering Using Fit Preference and Self Measurement.

    Science.gov (United States)

    2000-01-01

    in significance and definition for both consumers and manufacturers. Fit preference involves an individualized bias toward a particular look, size...

  2. Simultaneous fitting of real-time PCR data with efficiency of amplification modeled as Gaussian function of target fluorescence

    Directory of Open Access Journals (Sweden)

    Lazar Andreas

    2008-02-01

    Full Text Available Abstract Background In real-time PCR, it is necessary to consider the efficiency of amplification (EA of amplicons in order to determine initial target levels properly. EAs can be deduced from standard curves, but these involve extra effort and cost and may yield invalid EAs. Alternatively, EA can be extracted from individual fluorescence curves. Unfortunately, this is not reliable enough. Results Here we introduce simultaneous non-linear fitting to determine – without standard curves – an optimal common EA for all samples of a group. In order to adjust EA as a function of target fluorescence, and still to describe fluorescence as a function of cycle number, we use an iterative algorithm that increases fluorescence cycle by cycle and thus simulates the PCR process. A Gauss peak function is used to model the decrease of EA with increasing amplicon accumulation. Our approach was validated experimentally with hydrolysis probe or SYBR green detection with dilution series of 5 different targets. It performed distinctly better in terms of accuracy than standard curve, DART-PCR, and LinRegPCR approaches. Based on reliable EAs, it was possible to detect that for some amplicons, extraordinary fluorescence (EA > 2.00 was generated with locked nucleic acid hydrolysis probes, but not with SYBR green. Conclusion In comparison to previously reported approaches that are based on the separate analysis of each curve and on modelling EA as a function of cycle number, our approach yields more accurate and precise estimates of relative initial target levels.
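    The iterative cycle-by-cycle simulation described above can be sketched as follows: fluorescence is grown one cycle at a time, with the efficiency of amplification (EA) decaying from its initial value toward 1 as a Gauss peak function of the accumulated fluorescence. The constants ea0 and sigma are placeholders, not the authors' fitted values.

    ```python
    import numpy as np

    def simulate_pcr(f0, n_cycles, ea0=1.9, sigma=0.3):
        """Iteratively grow target fluorescence cycle by cycle.

        EA starts near ea0 and decays toward 1.0 as a Gauss peak
        function of the current fluorescence f, mimicking amplicon
        accumulation slowing the reaction (a simplified stand-in
        for the authors' model)."""
        f = [float(f0)]
        for _ in range(n_cycles):
            ea = 1.0 + (ea0 - 1.0) * np.exp(-((f[-1] / sigma) ** 2))
            f.append(f[-1] * ea)
        return np.array(f)

    curve = simulate_pcr(1e-6, 40)
    ```

    The simulated curve shows the familiar real-time PCR shape: near-exponential growth at roughly constant EA while fluorescence is small, then a plateau as EA approaches 1. In the paper this forward model is fitted simultaneously to all curves of a group to extract a common EA without standard curves.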

  3. A variant of the Hubbert curve for world oil production forecasts

    International Nuclear Information System (INIS)

    Maggio, G.; Cacciola, G.

    2009-01-01

    In recent years, the economic and political aspects of energy problems have prompted many researchers and analysts to focus their attention on the Hubbert Peak Theory with the aim of forecasting future trends in world oil production. In this paper, a model that attempts to contribute in this regard is presented; it is based on a variant of the well-known Hubbert curve. In addition, the sum of multiple-Hubbert curves (two cycles) is used to provide a better fit for the historical data on oil production (crude and natural gas liquid (NGL)). Taking into consideration three possible scenarios for oil reserves, this approach allowed us to forecast when peak oil production, referring to crude oil and NGL, should occur. In particular, by assuming a range of 2250-3000 gigabarrels (Gb) for ultimately recoverable conventional oil, our predictions foresee a peak between 2009 and 2021 at 29.3-32.1 Gb/year.
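    The sum-of-two-cycles idea can be sketched as follows: each Hubbert cycle is the derivative of a logistic whose total equals the ultimately recoverable resource of that cycle, and world production is modeled as their sum. The peak years, widths, and resource totals below are illustrative, not the authors' fitted values.

    ```python
    import numpy as np

    def hubbert(t, peak_year, width, urr):
        """One Hubbert cycle: the derivative of a logistic whose total
        (ultimately recoverable resource) is `urr`, peaking at
        `peak_year` with characteristic width `width` (years)."""
        z = np.exp(-(t - peak_year) / width)
        return urr * z / (width * (1.0 + z) ** 2)

    # Two-cycle model with illustrative parameters: an early 500 Gb
    # cycle plus a main 2000 Gb cycle peaking around 2010
    years = np.arange(1900, 2101, dtype=float)
    prod = hubbert(years, 1975.0, 10.0, 500.0) + hubbert(years, 2010.0, 15.0, 2000.0)
    ```

    Integrating production over time recovers (up to small tail losses) the combined 2500 Gb resource assumption, which is how a fitted multi-cycle model ties the peak date to the assumed ultimately recoverable oil.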

  4. Getting CSR communication fit

    DEFF Research Database (Denmark)

    Schmeltz, Line

    2017-01-01

    Companies experience increasing legal and societal pressure to communicate about their corporate social responsibility (CSR) engagements from a number of different publics. One very important group is that of young consumers, who are predicted to be the most important and influential consumer group in the near future. From a value-theoretical base, this article empirically explores the role and applicability of 'fit' in strategic CSR communication targeted at young consumers. The point of departure is the well-known strategic fit (a logical link between a company's CSR commitment and its core values), which is further developed by introducing two additional fits, the CSR-Consumer fit and the CSR-Consumer-Company fit (Triple Fit). Through a sequential design, the three fits are empirically tested and their potential for meeting young consumers' expectations for corporate CSR messaging...

  5. Fragment Impact Toolkit (FIT)

    Energy Technology Data Exchange (ETDEWEB)

    Shevitz, Daniel Wolf [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Garcia, Daniel B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-05

    The Fragment Impact Toolkit (FIT) is a software package used for probabilistic consequence evaluation of fragmenting sources. The typical use case for FIT is to simulate an exploding shell and evaluate the consequence on nearby objects. FIT is written in the programming language Python and is designed as a collection of interacting software modules. Each module has a function that interacts with the other modules to produce desired results.

  6. Uniformization of elliptic curves

    OpenAIRE

    Ülkem, Özge

    2015-01-01

    Every elliptic curve E defined over C is analytically isomorphic to C*/q^Z for some q ∈ C*. Similarly, Tate has shown that if E is defined over a p-adic field K, then E is analytically isomorphic to K*/q^Z for some q ∈ K*. Further, the isomorphism E(K̄) ≅ K̄*/q^Z respects the action of the Galois group G_{K̄/K}, where K̄ is the algebraic closure of K. I will explain the construction of this isomorphism.

  7. Roc curves for continuous data

    CERN Document Server

    Krzanowski, Wojtek J

    2009-01-01

    Since ROC curves have become ubiquitous in many application areas, the various advances have been scattered across disparate articles and texts. ROC Curves for Continuous Data is the first book solely devoted to the subject, bringing together all the relevant material to provide a clear understanding of how to analyze ROC curves. Beginning with the fundamental theory of ROC curves, the book first discusses the relationship between the ROC curve and numerous performance measures and then extends the theory into practice by describing how ROC curves are estimated. Further building on the theory, the authors prese...

  8. Light extraction block with curved surface

    Science.gov (United States)

    Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.

    2016-03-22

    Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction properties compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k.sub.1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.

  9. Contribution to the boiling curve of sodium

    International Nuclear Information System (INIS)

    Schins, H.E.J.

    1975-01-01

    Sodium in a pool was preheated to saturation temperatures at system pressures of 200, 350 and 500 torr. A test section of normal stainless steel was then extra heated by means of the conical fitting condenser zone of a heat pipe. Measurements were made of heat transfer fluxes, q in W/cm², as a function of wall excess temperature above saturation, Θ = T_w − T_s in °C, both in natural convection and in boiling regimes. These measurements make it possible to select the Subbotin natural convection and nucleate boiling curves among other variants proposed in the literature. Further, it is empirically demonstrated on water that the minimum film boiling point corresponds to the homogeneous nucleation temperature calculated by the Doering formula. Assuming that the minimum film boiling point of sodium can be obtained in the same manner, it is then possible to give an approximate boiling curve of sodium for use in thermal interaction studies. At 1 atm the heat transfer fluxes q versus wall temperatures Θ are: for a point on the natural convection curve, 0.3 W/cm² and 2 °C; for start of boiling, 1.6 W/cm² and 6 °C; for peak heat flux, 360 W/cm² and 37 °C; for minimum film boiling, 30 W/cm² and 905 °C; and for a point on the film boiling curve, 160 W/cm² and 2,000 °C. (orig.)

  10. Evaluating the effectiveness of a peer-led education intervention to improve the patient safety attitudes of junior pharmacy students: a cross-sectional study using a latent growth curve modelling approach.

    Science.gov (United States)

    Walpola, Ramesh L; Fois, Romano A; McLachlan, Andrew J; Chen, Timothy F

    2015-12-08

    Despite the recognition that educating healthcare students in patient safety is essential, changing already full curricula can be challenging. Furthermore, institutions may lack the capacity and capability to deliver patient safety education, particularly from the start of professional practice studies. Using senior students as peer educators to deliver practice-based education can potentially overcome some of the contextual barriers in training junior students. Therefore, this study aimed to evaluate the effectiveness of a peer-led patient safety education programme for junior pharmacy students. A repeat cross-sectional design utilising a previously validated patient safety attitudinal survey was used to evaluate attitudes prior to, immediately after and 1 month after the delivery of a patient safety education programme. Latent growth curve (LGC) modelling was used to evaluate the change in attitudes of first-year students using second-year students as a comparator group. Undergraduate university students in Sydney, Australia. 175 first-year and 140 second-year students enrolled in the Bachelor of Pharmacy programme at the University of Sydney. An introductory patient safety programme was implemented into the first-year Bachelor of Pharmacy curriculum at the University of Sydney. The programme covered introductory patient safety topics including teamwork, communication skills, systems thinking and open disclosure. The programme consisted of 2 lectures, delivered by a senior academic, and a workshop delivered by trained final-year pharmacy students. A full LGC model was constructed including the intervention as a non-time-dependent predictor of change (χ(2) (51)=164.070, root mean square error of approximation=0.084, comparative fit index=0.913, standardised root mean square=0.056). 
First-year students' attitudes significantly improved as a result of the intervention, particularly in relation to internalising errors (p=0.010) and questioning behaviours.

  11. Inferring genetic interactions from comparative fitness data.

    Science.gov (United States)

    Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko

    2017-12-20

    Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM family of β-lactamases associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.

  12. Are Current Physical Match Performance Metrics in Elite Soccer Fit for Purpose or is the Adoption of an Integrated Approach Needed?

    Science.gov (United States)

    Bradley, Paul S; Ade, Jack D

    2018-01-18

    Time-motion analysis is a valuable data-collection technique used to quantify the physical match performance of elite soccer players. For over 40 years researchers have adopted a 'traditional' approach when evaluating match demands by simply reporting the distance covered or time spent along a motion continuum of walking through to sprinting. This methodology quantifies physical metrics in isolation without integrating other factors and this ultimately leads to a one-dimensional insight into match performance. Thus, this commentary proposes a novel 'integrated' approach that focuses on a sensitive physical metric such as high-intensity running but contextualizes this in relation to key tactical activities for each position and collectively for the team. In the example presented, the 'integrated' model clearly unveils the unique high-intensity profile that exists due to distinct tactical roles, rather than the one-dimensional 'blind' distances produced by 'traditional' models. Intuitively this innovative concept may aid the coaches' understanding of physical performance in relation to the tactical roles and instructions given to the players. Additionally, it will enable practitioners to more effectively translate match metrics into training and testing protocols. This innovative model may well aid advances in other team sports that incorporate similar intermittent movements with tactical purpose. Evidence of the merits and application of this new concept is needed before the scientific community accepts this model, as it may well add complexity to an area that conceivably needs simplicity.

  13. Global experience curves for wind farms

    International Nuclear Information System (INIS)

    Junginger, M.; Faaij, A.; Turkenburg, W.C.

    2005-01-01

    In order to forecast the technological development and cost of wind turbines and the production costs of wind electricity, frequent use is made of the so-called experience curve concept. Experience curves of wind turbines are generally based on data describing the development of national markets, which causes a number of problems when applied to global assessments. To analyze global wind energy price development more adequately, we compose a global experience curve. First, underlying factors for past and potential future price reductions of wind turbines are analyzed. Possible implications and pitfalls when applying the experience curve methodology are also assessed. Second, we present and discuss a new approach of establishing a global experience curve and thus a global progress ratio for the investment cost of wind farms. Results show that global progress ratios for wind farms may lie between 77% and 85% (with an average of 81%), which is significantly more optimistic than progress ratios applied in most current scenario studies and integrated assessment models. While the findings are based on a limited amount of data, they may indicate faster price reduction opportunities than so far assumed. With this global experience curve we aim to improve the reliability of describing the speed with which global costs of wind power may decline.
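
    The progress ratio at the heart of the experience-curve concept can be estimated with an ordinary log-log least-squares fit. A minimal sketch on made-up capacity and price data (the 0.85 ratio below is illustrative, not the paper's 81% average):

```python
import math

# Illustrative cumulative capacity (MW) and turbine price (EUR/kW); in this
# synthetic series each doubling of capacity multiplies price by 0.85.
capacity = [100, 200, 400, 800, 1600]
price = [1000.0, 850.0, 722.5, 614.125, 522.00625]

# Fit log(price) = log(C0) + b * log(capacity) by least squares.
x = [math.log(c) for c in capacity]
y = [math.log(p) for p in price]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)

progress_ratio = 2.0 ** b       # cost multiplier per doubling of cumulative capacity
learning_rate = 1.0 - progress_ratio
```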

  14. The curvature of sensitometric curves for Kodak XV-2 film irradiated with photon and electron beams.

    Science.gov (United States)

    van Battum, L J; Huizenga, H

    2006-07-01

    Sensitometric curves of Kodak XV-2 film, obtained over a period of ten years with various types of equipment, have been analyzed both for photon and electron beams. The sensitometric slope in the dataset varies by more than a factor of 2, which is attributed mainly to variations in developer conditions. In the literature, the single hit equation has been proposed as a model for the sensitometric curve, with the sensitivity and maximum optical density as parameters. In this work, the single hit equation has been translated into a polynomial-like function with the sensitometric slope and curvature as parameters. The model has been applied to fit the sensitometric data. If the dataset is fitted for each single sensitometric curve separately, a large variation is observed for both fit parameters. When sensitometric curves are fitted simultaneously it appears that all curves can be fitted adequately with a sensitometric curvature that is related to the sensitometric slope. When fitting each curve separately, measurement uncertainty apparently hides this relation. The relation appears to depend only on the type of densitometer used. No significant differences between beam energies or beam modalities are observed. Using the intrinsic relation between slope and curvature in fitting sensitometric data, e.g., for pretreatment verification of intensity-modulated radiotherapy, will increase the accuracy of the sensitometric curve. A calibration at a single dose point, together with a predetermined densitometer-dependent parameter ODmax, will be adequate to find the actual relation between optical density and dose.
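
    The single-hit model referred to above saturates toward a maximum optical density. A hedged sketch of calibrating its rate constant against synthetic points; OD_MAX, the dose grid and the true k are assumed values, not the paper's data:

```python
import math

OD_MAX = 3.6   # assumed densitometer-dependent saturation density

def single_hit(dose, k, od_max=OD_MAX):
    """Single-hit model: optical density saturates toward od_max as dose grows."""
    return od_max * (1.0 - math.exp(-k * dose))

# Synthetic calibration points generated with k = 0.004 (illustrative units).
doses = [0.0, 50.0, 100.0, 200.0, 400.0]
ods = [single_hit(d, 0.004) for d in doses]

# One-parameter calibration: grid-search the k minimising the squared error.
best_k = min(
    (sum((single_hit(d, k) - od) ** 2 for d, od in zip(doses, ods)), k)
    for k in (i * 1e-4 for i in range(1, 200))
)[1]
```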

  15. Some genus 3 curves with many points

    NARCIS (Netherlands)

    Auer, R; Top, J; Fieker, C; Kohel, DR

    2002-01-01

    We explain a naive approach towards the problem of finding genus 3 curves C over any given finite field F_q of odd characteristic, with a number of rational points close to the Hasse-Weil-Serre upper bound q + 1 + 3⌊2√q⌋. The method turns out to be successful at least in characteristic 3.
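
    For a concrete feel for the numbers involved, one can brute-force count the rational points of a smooth plane quartic (a genus 3 curve) over a small field and compare with the Hasse-Weil-Serre bound. The Klein quartic below is an illustrative choice, not a curve from the paper:

```python
# Hasse-Weil-Serre bound and a brute-force point count over F_q (q = 3).
q = 3
bound = q + 1 + 3 * int(2 * q ** 0.5)   # floor(2*sqrt(3)) = 3, so bound = 13

def f(x, y, z):
    """Klein quartic x^3 y + y^3 z + z^3 x, a smooth plane quartic of genus 3."""
    return (x ** 3 * y + y ** 3 * z + z ** 3 * x) % q

# Projective points split into representatives (1:y:z), (0:1:z) and (0:0:1).
points = sum(f(1, y, z) == 0 for y in range(q) for z in range(q))
points += sum(f(0, 1, z) == 0 for z in range(q))
points += f(0, 0, 1) == 0
```

Here the count stays well below the bound; the paper's point is to search for curves that approach it.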

  16. Bootstrap confidence intervals for principal response curves

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ter Braak, Cajo J. F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

  18. Singular interactions supported by embedded curves

    International Nuclear Information System (INIS)

    Kaynak, Burak Tevfik; Turgut, O Teoman

    2012-01-01

    In this work, singular interactions supported by embedded curves on Riemannian manifolds are discussed from a more direct and physical perspective, via the heat kernel approach. We show that the renormalized problem is well defined, the ground state is finite and the corresponding wavefunction is positive. The renormalization group invariance of the model is also discussed. (paper)

  19. A Probabilistic Framework for Curve Evolution

    DEFF Research Database (Denmark)

    Dahl, Vedrana Andersen

    2017-01-01

    approach include ability to handle textured images, simple generalization to multiple regions, and efficiency in computation. We test our probabilistic framework in combination with parametric (snakes) and geometric (level-sets) curves. The experimental results on composed and natural images demonstrate...

  20. Fitness: Tips for Staying Motivated

    Science.gov (United States)

    Fitness is for life. Motivate yourself with these practical tips. By Mayo Clinic Staff. Original article: http://www.mayoclinic.org/healthy-lifestyle/fitness/in-depth/fitness/art-20047624

  1. Curved Josephson junction

    International Nuclear Information System (INIS)

    Dobrowolski, Tomasz

    2012-01-01

    The constant-curvature one- and quasi-one-dimensional Josephson junction is considered. On the basis of the Maxwell equations, the sine–Gordon equation that describes the influence of curvature on kink motion was obtained. It is shown that the method of geometrical reduction of the sine–Gordon model from three dimensions to a lower-dimensional manifold leads to an identical form of the sine–Gordon equation. - Highlights: ► The research on dynamics of the phase in a curved Josephson junction is performed. ► The geometrical reduction is applied to the sine–Gordon model. ► The results of geometrical reduction and the fundamental research are compared.

  2. Curved-Duct

    Directory of Open Access Journals (Sweden)

    Je Hyun Baekt

    2000-01-01

    A numerical study is conducted on the fully developed laminar flow of an incompressible viscous fluid in a square duct rotating about an axis perpendicular to the axial direction of the duct. In the straight duct, the rotation produces vortices due to the Coriolis force. Generally two vortex cells are formed, and the axial velocity distribution is distorted by this Coriolis force. When the convective force is weak, two counter-rotating vortices appear with a quasi-parabolic axial velocity profile at weak rotation rates. As the rotation rate increases, the axial velocity on the vertical centreline of the duct begins to flatten and the vorticity centre moves toward the wall under the effect of the Coriolis force. When the convective inertia force is strong, a double-vortex secondary flow appears in the transverse planes of the duct at weak rotation rates, but as the speed of rotation increases the secondary flow splits into an asymmetric configuration of four counter-rotating vortices. If the rotation rate is increased further, the secondary flow restabilizes to a slightly asymmetric double-vortex configuration. A numerical study is also conducted on the laminar flow of an incompressible viscous fluid in a 90°-bend square duct that rotates about an axis parallel to the axial direction of the inlet. In the 90°-bend duct, the flow is shaped by both the centrifugal and Coriolis forces: a secondary flow is driven by the centrifugal force in the curved region and by the Coriolis force in the downstream region, each force being dominant in its respective region.

  3. Elliptic curves for applications (Tutorial)

    NARCIS (Netherlands)

    Lange, T.; Bernstein, D.J.; Chatterjee, S.

    2011-01-01

    More than 25 years ago, elliptic curves over finite fields were suggested as a group in which the Discrete Logarithm Problem (DLP) can be hard. Since then many researchers have scrutinized the security of the DLP on elliptic curves with the result that for suitably chosen curves only exponential

  4. Titration Curves: Fact and Fiction.

    Science.gov (United States)

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…

  5. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    Energy Technology Data Exchange (ETDEWEB)

    Jenkin, Thomas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Larson, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ben [U.S. Department of Energy; Spitsen, Paul [U.S. Department of Energy

    2018-03-27

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing
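
    The rolling-supply-curve idea, fitting one price-versus-load curve per short window and re-fitting as the window rolls forward, can be sketched as follows. The exponential curve shape, the window length and the synthetic data are assumptions for illustration, not the report's actual specification:

```python
import math

# Synthetic hourly (load, price) data for two two-week windows (336 hours each).
# Window 2 mimics a supply change that shifts the curve; all numbers are invented.
def make_window(a, b, loads):
    return [(L, a * math.exp(L / b)) for L in loads]

loads = [30 + (h % 24) for h in range(336)]
windows = [make_window(20.0, 40.0, loads), make_window(25.0, 35.0, loads)]

def fit_exp_curve(data):
    """Per-window fit: log(price) = alpha + beta * load by least squares."""
    xs = [L for L, _ in data]
    ys = [math.log(p) for _, p in data]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    alpha = ybar - beta * xbar
    return math.exp(alpha), beta        # price = a * exp(b * load)

fits = [fit_exp_curve(w) for w in windows]   # one (a, b) per rolling window
```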

  6. REDUCED DATA FOR CURVE MODELING – APPLICATIONS IN GRAPHICS, COMPUTER VISION AND PHYSICS

    Directory of Open Access Journals (Sweden)

    Małgorzata Janik

    2013-06-01

    In this paper we consider the problem of modeling curves in R^n via interpolation without a priori specified interpolation knots. We discuss two approaches to estimate the missing knots for non-parametric data (i.e. a collection of points). The first approach (uniform evaluation) is based on a blind guess in which the knots are chosen uniformly. The second approach (cumulative chord parameterization) incorporates the geometry of the distribution of the data points: each knot increment is set equal to the Euclidean distance between the consecutive data points q_{i+1} and q_i. The second method partially compensates for the loss of the information carried by the reduced data. We also present the application of the above schemes to fitting non-parametric data in computer graphics (light-source motion rendering), in computer vision (image segmentation) and in physics (high-velocity particle trajectory modeling). Though experiments are conducted for points in R^2 and R^3, the entire method is equally applicable in R^n.
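
    The two knot-selection schemes compared above can be sketched in a few lines; the sample points are arbitrary:

```python
import math

# Reduced data: points in R^2 with no interpolation knots given (arbitrary sample).
points = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (3.0, 1.0)]

# Uniform evaluation: knots equally spaced, ignoring geometry.
uniform = [i / (len(points) - 1) for i in range(len(points))]

# Cumulative chord: each knot increment equals the distance |q_{i+1} - q_i|,
# so closely spaced points get closely spaced knots.
chord = [0.0]
for (x0, y0), (x1, y1) in zip(points, points[1:]):
    chord.append(chord[-1] + math.hypot(x1 - x0, y1 - y0))
```

Either knot sequence can then be fed to a standard spline interpolant; the cumulative chord knots reflect the uneven spacing of the last segment.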

  7. Targeting the psychosocial and functional fitness challenges of older adults with hearing loss: a participatory approach to adaptation of the walk and talk for your life program.

    Science.gov (United States)

    Jutras, Marc; Lambert, Justin; Hwang, Jiyoung; Wang, Lisa; Simon, Shane; Del Medico, Talia; Mick, Paul; Miller, Harry; Kurtz, Donna; Murphy, Mary-Ann; Jones, Charlotte Ann

    2018-03-20

    Explore the acceptability of a socialisation, health education and falls prevention programme (Walk and Talk for Your Life: WTL) as an adjunct to group auditory rehabilitation (GAR) and how it might be adapted for older adults with hearing loss (HL). Content theme analysis (CTA) of guided interviews explored the experience of HL, the acceptability of a WTL programme and suggestions on how to adapt the WTL programme to better suit the needs of older adults with HL. Twenty-eight (20 women, 8 men) adults (>55 years of age) with HL were interviewed. Seventeen had participated in past WTL programmes and eleven were sampled from the community. Interviewees reported difficulty socialising and a tendency to withdraw from social interactions. Addition of GAR to a WTL programme was found to be highly acceptable. Interviewees suggested that to best suit their needs, sessions should take place in a location with optimal acoustics; include small groups integrating hearing-impaired and hearing-intact participants; include appropriate speaking ground rules; and include an option for partner involvement. The adapted WTL programme provides a holistic and unique approach to the treatment of HL that has the potential to positively impact the hearing-impaired elderly.

  8. Limitations of inclusive fitness.

    Science.gov (United States)

    Allen, Benjamin; Nowak, Martin A; Wilson, Edward O

    2013-12-10

    Until recently, inclusive fitness has been widely accepted as a general method to explain the evolution of social behavior. Affirming and expanding earlier criticism, we demonstrate that inclusive fitness is instead a limited concept, which exists only for a small subset of evolutionary processes. Inclusive fitness assumes that personal fitness is the sum of additive components caused by individual actions. This assumption does not hold for the majority of evolutionary processes or scenarios. To sidestep this limitation, inclusive fitness theorists have proposed a method using linear regression. On the basis of this method, it is claimed that inclusive fitness theory (i) predicts the direction of allele frequency changes, (ii) reveals the reasons for these changes, (iii) is as general as natural selection, and (iv) provides a universal design principle for evolution. In this paper we evaluate these claims, and show that all of them are unfounded. If the objective is to analyze whether mutations that modify social behavior are favored or opposed by natural selection, then no aspect of inclusive fitness theory is needed.

  9. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    International Nuclear Information System (INIS)

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    2016-01-01

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North American datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate. - Highlights: • We develop a segmented regression technique to estimate historical CFL learning curves. • CFL experience curves do not have a constant learning rate. • CFLs exhibited a learning rate of approximately 21% from 1990 to 1997. • The CFL learning rate significantly increased after 1998. • Increased CFL learning rate is correlated to technology deployment programs.
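
    A segmented experience-curve fit of the kind described, searching for a break point between two log-log regimes, can be sketched as follows; the synthetic series switches from a 20% to a 50% learning rate, echoing (but not reproducing) the paper's CFL result:

```python
import math

def loglog_fit(xs, ys):
    """Least-squares fit of log(y) = a + b*log(x); returns slope b and SSE."""
    X = [math.log(x) for x in xs]
    Y = [math.log(y) for y in ys]
    n = len(X)
    xb, yb = sum(X) / n, sum(Y) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(X, Y)) \
        / sum((x - xb) ** 2 for x in X)
    a = yb - b * xb
    sse = sum((a + b * x - y) ** 2 for x, y in zip(X, Y))
    return b, sse

# Synthetic price series: 20% learning rate for the first doublings, 50% after.
caps = [2 ** i for i in range(16)]
prices, p = [], 100.0
for i in range(16):
    prices.append(p)
    p *= 0.8 if i < 8 else 0.5

# Search for the break point minimising the combined two-segment SSE.
best = None
for k in range(3, len(caps) - 3):
    b1, s1 = loglog_fit(caps[:k], prices[:k])
    b2, s2 = loglog_fit(caps[k:], prices[k:])
    cand = (s1 + s2, k, 2 ** b1, 2 ** b2)
    if best is None or cand[0] < best[0]:
        best = cand
_, brk, pr1, pr2 = best     # break index and the two progress ratios
```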

  10. A Journey Between Two Curves

    Directory of Open Access Journals (Sweden)

    Sergey A. Cherkis

    2007-03-01

    Full Text Available A typical solution of an integrable system is described in terms of a holomorphic curve and a line bundle over it. The curve provides the action variables while the time evolution is a linear flow on the curve's Jacobian. Even though the system of Nahm equations is closely related to the Hitchin system, the curves appearing in these two cases have very different nature. The former can be described in terms of some classical scattering problem while the latter provides a solution to some Seiberg-Witten gauge theory. This note identifies the setup in which one can formulate the question of relating the two curves.

  11. GOSSIP: SED fitting code

    Science.gov (United States)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds-up the observed SED of an object (or a large sample of objects) combining magnitudes in different bands and eventually a spectrum; then it performs a chi-square minimization fitting procedure versus a set of synthetic models. The fitting results are used to estimate a number of physical parameters like the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions.

  12. Fitness Club / Nordic Walking

    CERN Multimedia

    Fitness Club

    2011-01-01

    Nordic Walking at CERN Enrollments are open for Nordic Walking courses and outings at CERN. Classes will be on Tuesdays as of 20 September, and outings for the more experienced will be on Thursdays as of 15 September. We meet at the CERN Club barracks car park (near entrance A). • 18:00 to 19:00 on 20 & 27 September, as well as 4 & 11 October. Check out our schedule and rates and enroll at: http://cern.ch/club-fitness Hope to see you among us! CERN Fitness Club fitness.club@cern.ch  

  13. IDF-curves for precipitation In Belgium

    International Nuclear Information System (INIS)

    Mohymont, Bernard; Demarde, Gaston R.

    2004-01-01

    The Intensity-Duration-Frequency (IDF) curves for precipitation constitute a relationship between the intensity, the duration and the frequency of rainfall amounts. The intensity of precipitation is expressed in mm/h, the duration or aggregation time is the length of the interval considered, while the frequency stands for the probability of occurrence of the event. IDF curves constitute a classical and useful tool that is primarily used to dimension hydraulic structures (e.g., sewer systems) and, consequently, to assess the risk of inundation. In this presentation, the IDF relation for precipitation is studied for different locations in Belgium. These locations correspond to two long-term, high-quality precipitation networks of the RMIB: (a) the daily precipitation depths of the climatological network (more than 200 stations, 1951-2001 baseline period); (b) the high-frequency 10-minute precipitation depths of the hydrometeorological network (more than 30 stations, 15 to 33 years baseline period). For the station of Uccle, an uninterrupted time series of more than one hundred years of 10-minute rainfall data is available. The proposed technique for assessing the curves is based on maximum annual values of precipitation. A new analytical formula for the IDF curves was developed such that these curves stay valid for aggregation times ranging from 10 minutes to 30 days (when fitted with appropriate data). Moreover, all parameters of this formula have physical dimensions. Finally, adequate spatial interpolation techniques are used to provide nationwide extreme precipitation depths for short- to long-term durations with a given return period. These values are estimated on the grid points of the Belgian ALADIN domain used in the operational weather forecasts at the RMIB. (Author)
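
    The authors' analytical IDF formula is not given in the abstract; a common three-parameter form i(d) = a/(d + c)^n for a fixed return period illustrates how such a curve can be fitted. All parameter values below are made up:

```python
import math

# Synthetic intensities (mm/h) generated from i = a / (d + c)^n (invented values).
A_TRUE, C_TRUE, N_TRUE = 300.0, 10.0, 0.8
durations = [10, 20, 30, 60, 120, 360, 1440]          # minutes
intensity = [A_TRUE / (d + C_TRUE) ** N_TRUE for d in durations]

def fit_idf(ds, iv):
    """Grid-search c; for each c, a and n follow from a log-log least-squares fit."""
    best = None
    for c10 in range(0, 301):                          # c in [0, 30] step 0.1
        c = c10 / 10.0
        X = [math.log(d + c) for d in ds]
        Y = [math.log(i) for i in iv]
        n = len(X)
        xb, yb = sum(X) / n, sum(Y) / n
        slope = sum((x - xb) * (y - yb) for x, y in zip(X, Y)) \
            / sum((x - xb) ** 2 for x in X)
        loga = yb - slope * xb
        sse = sum((loga + slope * x - y) ** 2 for x, y in zip(X, Y))
        if best is None or sse < best[0]:
            best = (sse, math.exp(loga), c, -slope)
    return best[1:]

a_hat, c_hat, n_hat = fit_idf(durations, intensity)
```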

  14. Soil Water Retention Curve

    Science.gov (United States)

    Johnson, L. E.; Kim, J.; Cifelli, R.; Chandra, C. V.

    2016-12-01

    Potential water retention, S, is one of the parameters commonly used in hydrologic modeling for soil moisture accounting. Physically, S indicates the total amount of water which can be stored in the soil and is expressed in units of depth. S can be represented as a change of soil moisture content and in this context is commonly used to estimate direct runoff, especially in the Soil Conservation Service (SCS) curve number (CN) method. Generally, both lumped and distributed hydrologic models can easily use the SCS-CN method to estimate direct runoff. Changes in potential water retention have been used in previous SCS-CN studies; however, these studies have focused on long-term hydrologic simulations where S is allowed to vary at the daily time scale. While useful for hydrologic events that span multiple days, this resolution is too coarse for short-term applications such as flash flood events, where S may not recover its full potential. In this study, a new method for estimating a time-variable potential water retention at hourly time scales is presented. The methodology is applied to the Napa River basin, California. The streamflow gage at St Helena, located in the upper reaches of the basin, is used as the control gage site to evaluate the model performance, as it is minimally influenced by reservoirs and diversions. Rainfall events from 2011 to 2012 are used for estimating the event-based SCS CN, which is then transferred to S. As a result, we have derived the potential water retention curve, which is classified into three sections depending on the relative change in S. The first is a negative-slope section arising from the difference in the rate of water movement through the soil column, the second is a zero-change section representing the initial recovery of the potential water retention, and the third is a positive-change section representing the full recovery of the potential water retention. We also found that soil water movement remains congested within 24 hours after the first event has finished.
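
    The SCS-CN runoff computation that the retention parameter S feeds into is standard and can be sketched directly (the curve number and storm depth below are arbitrary examples, not values from the study):

```python
# SCS curve number method (standard form): retention S from CN, runoff Q from
# rainfall P. CN = 80 and a 3-inch storm are arbitrary example inputs.
def retention_from_cn(cn):
    """Potential retention S in inches for a given curve number CN."""
    return 1000.0 / cn - 10.0

def direct_runoff(p, s, ia_ratio=0.2):
    """Q = (P - Ia)^2 / (P - Ia + S) with initial abstraction Ia = ia_ratio * S."""
    ia = ia_ratio * s
    if p <= ia:
        return 0.0                       # all rainfall absorbed before runoff starts
    return (p - ia) ** 2 / (p - ia + s)

s = retention_from_cn(80)       # 2.5 inches
q = direct_runoff(3.0, s)       # runoff depth for a 3-inch storm
```

The study's contribution is to let S (and hence CN) vary hourly rather than daily; the formula itself is unchanged.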

  15. Estimation of growth curve parameters in Konya Merino sheep ...

    African Journals Online (AJOL)

    The objective of this study was to determine the fitness of Quadratic, Cubic, Gompertz and Logistic functions to the growth curves of Konya Merino lambs obtained by using monthly records of live weight from birth to 480 days of age. The models were evaluated according to the determination coefficient (R²), mean square ...
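
    Of the four growth functions compared in the study, the Gompertz model can serve as a fitting sketch; the lamb weights below are synthetic, not the Konya Merino data, and the asymptote is assumed known to keep the search one-dimensional:

```python
import math

def gompertz(t, A, b, k):
    """Gompertz growth curve: asymptotic weight A, displacement b, rate k."""
    return A * math.exp(-b * math.exp(-k * t))

# Synthetic lamb weights (kg) at ages in days, generated with A=70, b=2.8, k=0.012.
ages = [0, 30, 60, 90, 120, 180, 240, 360, 480]
weights = [gompertz(t, 70.0, 2.8, 0.012) for t in ages]

# With the asymptote A assumed known, b follows from the birth weight and
# k from a one-dimensional grid search over the sum of squared errors.
A = 70.0
b = math.log(A / weights[0])
best_k = min(
    (sum((gompertz(t, A, b, k) - w) ** 2 for t, w in zip(ages, weights)), k)
    for k in (i * 1e-4 for i in range(1, 500))
)[1]
```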

  16. Exponential models applied to automated processing of radioimmunoassay standard curves

    International Nuclear Information System (INIS)

    Morin, J.F.; Savina, A.; Caroff, J.; Miossec, J.; Legendre, J.M.; Jacolot, G.; Morin, P.P.

    1979-01-01

    An improved computer procedure is described for fitting radioimmunoassay standard curves by means of an exponential model on a desk-top calculator. The method has been applied to a variety of radioassays, and the results are in accordance with those obtained by more sophisticated models. [fr
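    The record does not give the exact exponential form used. As a hedged sketch, a single-exponential standard curve y = a·exp(b·x) can be fitted on a calculator-class device by ordinary least squares on ln(y):

```python
import math

def fit_exponential(xs, ys):
    """Fit y = a * exp(b * x) by linear least squares on ln(y).

    A simple stand-in for the exponential standard-curve model in the
    record (whose exact form is not stated); requires all y > 0.
    Returns (a, b).
    """
    n = len(xs)
    logs = [math.log(y) for y in ys]
    sx, sy = sum(xs), sum(logs)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, logs))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    ln_a = (sy - b * sx) / n
    return math.exp(ln_a), b

# Exactly exponential data is recovered exactly:
a, b = fit_exponential([0, 1, 2, 3],
                       [2.0, 2.0 * math.e, 2.0 * math.e**2, 2.0 * math.e**3])
```

    Unknown concentrations are then read off by inverting the fitted curve: x = ln(y/a) / b.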

  17. Survival curves study of platelet labelling with 51Cr

    International Nuclear Information System (INIS)

    Penas, M.E.

    1981-01-01

    The literature on platelet kinetics and idiopathic thrombocytopenic purpura was reviewed. An 'in vitro' procedure for platelet labelling with 51 Cr, currently being implemented, was evaluated in human beings. The functions used for fitting covered the cases where the survival curve was linear or exponential, as well as the presence of red blood cells. (author)

  18. Modeling of alpha mass-efficiency curve

    International Nuclear Information System (INIS)

    Semkow, T.M.; Jeter, H.W.; Parsa, B.; Parekh, P.P.; Haines, D.K.; Bari, A.

    2005-01-01

    We present a model for efficiency of a detector counting gross α radioactivity from both thin and thick samples, corresponding to low and high sample masses in the counting planchette. The model includes self-absorption of α particles in the sample, energy loss in the absorber, range straggling, as well as detector edge effects. The surface roughness of the sample is treated in terms of fractal geometry. The model reveals a linear dependence of the detector efficiency on the sample mass, for low masses, as well as a power-law dependence for high masses. It is, therefore, named the linear-power-law (LPL) model. In addition, we consider an empirical power-law (EPL) curve, and an exponential (EXP) curve. A comparison is made of the LPL, EPL, and EXP fits to the experimental α mass-efficiency data from gas-proportional detectors for selected radionuclides: 238 U, 230 Th, 239 Pu, 241 Am, and 244 Cm. Based on this comparison, we recommend working equations for fitting mass-efficiency data. Measurement of α radioactivity from a thick sample can determine the fractal dimension of its surface
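    The two regimes the record describes (linear efficiency dependence at low sample mass, power-law dependence at high mass) can be sketched as a continuous piecewise function. The functional form and parameter names below are assumptions for illustration; the paper's actual LPL model includes self-absorption, straggling, and edge-effect terms not reproduced here:

```python
def lpl_efficiency(m, eps0, c, m_star, p):
    """Illustrative linear-power-law (LPL) shape: detector efficiency vs mass m.

    Linear regime eps0 - c*m for m <= m_star, matched continuously to a
    power law proportional to m**(-p) above the crossover mass m_star.
    All parameter names are hypothetical; the record gives only the
    qualitative low-mass and high-mass behaviors.
    """
    if m <= m_star:
        return eps0 - c * m
    eps_star = eps0 - c * m_star          # value at the crossover, for continuity
    return eps_star * (m / m_star) ** (-p)

low = lpl_efficiency(0.1, eps0=0.4, c=0.5, m_star=0.2, p=1.0)   # linear regime
high = lpl_efficiency(0.4, eps0=0.4, c=0.5, m_star=0.2, p=1.0)  # power-law regime
```

    Fitting such a curve to measured mass-efficiency pairs for each nuclide then yields the working equations the record recommends comparing (LPL vs. EPL vs. EXP).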

  19. Measuring Your Fitness Level

    Science.gov (United States)

    ... online calculator. If you'd rather do the math yourself, divide your weight in pounds by your ... Human Services recommends one of the following activity levels for adult fitness and health benefits: 150 minutes ...

  20. The universal Higgs fit

    DEFF Research Database (Denmark)

    Giardino, P. P.; Kannike, K.; Masina, I.

    2014-01-01

    We perform a state-of-the-art global fit to all Higgs data. We synthesise them into a 'universal' form, which allows one to easily test any desired model. We apply the proposed methodology to extract from data the Higgs branching ratios, production cross sections, and couplings, and to analyse composite Higgs models, models with extra Higgs doublets, supersymmetry, extra particles in the loops, anomalous top couplings, and invisible Higgs decays into Dark Matter. Best fit regions lie around the Standard Model predictions and are well approximated by our 'universal' fit. Latest data exclude the dilaton as an alternative to the Higgs, and disfavour fits with negative Yukawa couplings. We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining M_h = 124.4 +/- 1.6 GeV.