WorldWideScience

Sample records for variability availability calculations

  1. Field calculations. Part I: Choice of variables and methods

    International Nuclear Information System (INIS)

    Turner, L.R.

    1981-01-01

    Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case, calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component, provides a much smaller system of equations to be solved. However, the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed: the scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single-valued. However, in some situations the fields from the two potentials nearly cancel and numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable.

  2. Power calculator for instrumental variable analysis in pharmacoepidemiology.

    Science.gov (United States)

    Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M

    2017-10-01

    Instrumental variable analysis, for example with physicians' prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field, because analyses in pharmacoepidemiology typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived and then validated by a simulation study. The formula is applicable to studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists.
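
    The exact parametrization of the published formula is not reproduced in the abstract, so the sketch below is only a hedged illustration: it uses the standard asymptotic variance of the Wald (ratio) IV estimator for a single binary instrument, binary exposure and continuous outcome, and every number in the example is an assumption.

      from math import sqrt
      from scipy.stats import norm

      def iv_power(n, beta, delta_x, sigma_y, p_z=0.5, alpha=0.05):
          # n: sample size; beta: assumed causal effect (hypothetical)
          # delta_x: difference in exposure probability between instrument
          #          groups (instrument strength); sigma_y: outcome SD
          # p_z: prevalence of the binary instrument
          se = sigma_y / (sqrt(n * p_z * (1.0 - p_z)) * delta_x)
          z_crit = norm.ppf(1.0 - alpha / 2.0)
          return norm.cdf(abs(beta) / se - z_crit)

      # e.g. 5000 patients, effect of 0.2 outcome SDs, a 30-point difference
      # in prescribing between physician preference groups (all assumed)
      print(iv_power(n=5000, beta=0.2, delta_x=0.3, sigma_y=1.0))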

  3. Calculating Characteristics of the Screws with Constant And Variable Step

    Directory of Open Access Journals (Sweden)

    B. N. Zotov

    2015-01-01

    This work develops a technique for calculating the power characteristics of constant- and variable-step screws for centrifugal pumps. The distinctive feature of the technique is that the reverse currents observed in screws operating at low flow are taken into account numerically. The paper presents a diagram of the stream in the screw at zero flow to the network (Q = 0), and the static pressure of the screw in this mode is computed from the reverse-current parameters. The maximum flow of the screw is determined from known formulas. When calculating the power characteristics and computing the overall efficiency of the screw, a volumetric efficiency of the screw is introduced for the first time, defined as the ratio of the flow into the network to the sum of the reverse-current flows and the flow into the network. This approach allowed the efficiency of the screw to be determined over the entire range of flows. A comparison of the experimental characteristics of a constant-step screw with those calculated by the proposed technique shows good agreement. The technique is also used to calculate the characteristics of variable-step screws. A variable-step screw is considered as a screw consisting of two screws with a smooth transition of the blades from the inlet to the outlet. Screws in which the step at the inlet is less than that at the outlet, as well as screws in which the inlet step is greater than the outlet step, were investigated. It is shown that the pressure of the screw at zero flow and the magnitude of the reverse currents depend only on the parameters of the inlet section of the screw, and that the maximum flow, if the step at the inlet is greater than the step at the outlet, is determined by the parameters of the outlet part of the screw; otherwise the maximum flow is determined somewhat differently. The paper compares experimental characteristics with those calculated by the technique for variable-step screws.

  4. Efficient Method for Calculating the Composite Stiffness of Parabolic Leaf Springs with Variable Stiffness for Vehicle Rear Suspension

    Directory of Open Access Journals (Sweden)

    Wen-ku Shi

    2016-01-01

    The composite stiffness of parabolic leaf springs with variable stiffness is difficult to calculate using traditional integral equations. Numerical integration or FEA may be used, but these require computer-aided software and long calculation times. An efficient method for calculating the composite stiffness of parabolic leaf springs with variable stiffness is developed and evaluated to reduce the complexity of the calculation and shorten the calculation time. A simplified model for double-leaf springs with variable stiffness is built, and a composite stiffness calculation method for the model is derived using displacement superposition and material deformation continuity. The proposed method can also be applied to triple-leaf and multileaf springs. The accuracy of the calculation method is verified by a rig test and by FEA. Finally, several parameters that should be considered during the design process of springs are discussed. The rig test and FEA results indicate that the calculated results are acceptable. The proposed method can provide guidance for the design and production of parabolic leaf springs with variable stiffness, whose composite stiffness can be calculated quickly and accurately once the basic parameters of the leaf spring are known.

  5. Available transfer capability calculation considering voltage stability margin

    International Nuclear Information System (INIS)

    Pan, Xiong; Xu, Guoyu

    2005-01-01

    For electricity trades to be carried out successfully, the calculation of available transfer capability (ATC) must balance security against economic benefit. In this paper, a model for ATC calculation consistent with the trade-off mechanism of the electricity market was set up. The impact of branch-outage contingencies on the static voltage stability margin was analyzed, and contingency ranking was performed using sensitivity indices of branch flows with respect to the loading margin. Optimal power flow based on a primal-dual interior point method was applied to obtain the ATC with N-1 security constraints included. The calculation results for the IEEE 30-bus and IEEE 118-bus systems show that the proposed model and method are valid. (author) Keywords: N-1 security constraints; electricity market; available transfer capability; optimal power flow; voltage stability
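
    As a toy illustration of the transfer-limit idea only (not the paper's interior-point OPF, and ignoring its voltage-stability margin and contingency ranking), the sketch below computes the largest extra transfer that keeps every line within its thermal rating under a DC linearization; the sensitivities and ratings are assumed values.

      import numpy as np

      def atc_dc(ptdf, base_flow, limits):
          # largest extra transfer T such that |base_flow + ptdf*T| <= limits
          # on every line, in a DC power-flow approximation
          t_max = np.inf
          for s, f0, fmax in zip(ptdf, base_flow, limits):
              if s > 0:
                  t_max = min(t_max, (fmax - f0) / s)
              elif s < 0:
                  t_max = min(t_max, (-fmax - f0) / s)
          return t_max

      # three lines: sensitivities to the candidate transfer, present flows, ratings (MW)
      print(atc_dc(ptdf=[0.6, -0.3, 0.1], base_flow=[40.0, -10.0, 5.0],
                   limits=[100.0, 80.0, 50.0]))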

  6. Automated Calculation of DIII-D Neutral Beam Availability

    International Nuclear Information System (INIS)

    Phillips, J.C.; Hong, R.M.; Scoville, B.G.

    1999-01-01

    The neutral beam systems for the DIII-D tokamak are an extremely reliable source of auxiliary plasma heating, capable of supplying up to 20 MW of injected power, from eight separate beam sources, into each tokamak discharge. The high availability of these systems for tokamak operations is sustained by careful monitoring of performance and follow-up of failures. One of the metrics for this performance is the requested injected power profile as compared to the power profile actually delivered for a particular pulse. Calculating this was once a relatively straightforward task; however, innovations such as the ability to modulate the beams and, more recently, the ability to substitute an idle beam for one which has failed during a plasma discharge have made the task very complex. For example, with this latest advance it is possible for one or more beams to have failed, yet the delivered power profile may appear perfect. Availability used to be calculated manually. This paper presents the methods and algorithms used to produce a system which performs the calculations based on information concerning the neutral beam and plasma current waveforms, along with post-discharge information from the Plasma Control System, which has the ability to issue commands for beams in real time. Plots representing both the requested and actual power profiles, along with statistics, are automatically displayed and updated each shot on a web-based interface viewable both at DIII-D and by our remote collaborators using no-cost software
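
    A minimal sketch of the core ratio behind such an availability metric, assuming uniformly sampled power waveforms; the shot timing and power levels below are invented for illustration, and the real system's modulation and beam-substitution logic is far more involved.

      import numpy as np

      def beam_availability(t, p_requested, p_delivered):
          # availability = delivered / requested injected energy for a shot,
          # computed from uniformly sampled power waveforms (W vs. s)
          dt = t[1] - t[0]
          e_req = p_requested.sum() * dt
          e_del = np.minimum(p_delivered, p_requested).sum() * dt
          return e_del / e_req if e_req > 0 else 1.0

      t = np.linspace(0.0, 5.0, 501)                     # 5 s discharge
      req = np.where((t > 0.5) & (t < 4.5), 2.5e6, 0.0)  # 2.5 MW requested
      dlv = np.where((t > 0.5) & (t < 3.5), 2.5e6, 0.0)  # source drops out early
      print(beam_availability(t, req, dlv))              # about 0.75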

  7. Temporal variability of available P, microbial P and some ...

    African Journals Online (AJOL)

    Temporal variability of available P, microbial P and some phosphomonoesterase activities in a sewage sludge treated soil: The effect of soil water potential. ... African Journal of Biotechnology ... The objective of this study was to test the effects of water potential on soil available P, microbial biomass P (MBP) and some

  8. Temporal variability of available P, microbial P and some ...

    African Journals Online (AJOL)

    2009-12-15

    Dec 15, 2009 ... carbon, major nutrients (e.g., N, P), water-holding capacity and porosity ... variability and to test the soil water potential effects on available P ..... seawater: influence of reaction conditions on the kinetic parameters of ALP.

  9. Implementation of upper limit calculation for a Poisson variable by Bayesian approach

    International Nuclear Information System (INIS)

    Zhu Yongsheng

    2008-01-01

    The calculation of the Bayesian confidence upper limit for a Poisson variable, including both signal and background, with and without systematic uncertainties, has been formulated. A Fortran 77 routine, BPULE, has been developed to implement the calculation. The routine can account for systematic uncertainties in the background expectation and the signal efficiency. The systematic uncertainties may be separately parameterized by a Gaussian, log-Gaussian or flat probability density function (pdf). Some technical details of BPULE are discussed. (authors)
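
    BPULE itself is Fortran 77 and also handles systematic uncertainties; the sketch below covers only the simplest case it implements, a flat prior on the signal s >= 0 with known background b, where the posterior is p(s|n) proportional to exp(-(s+b))*(s+b)**n and the upper limit is the credibility quantile.

      from scipy.special import gammainc   # regularized lower incomplete gamma
      from scipy.optimize import brentq

      def bayes_upper_limit(n_obs, b, cl=0.90):
          # posterior CDF follows from integrating exp(-(s+b))*(s+b)**n:
          # CDF(S) = [P(n+1, b+S) - P(n+1, b)] / [1 - P(n+1, b)]
          def post_cdf(s):
              num = gammainc(n_obs + 1, b + s) - gammainc(n_obs + 1, b)
              den = 1.0 - gammainc(n_obs + 1, b)
              return num / den
          return brentq(lambda s: post_cdf(s) - cl, 0.0, 10.0 * (n_obs + b + 10))

      # 3 events observed over an expected background of 1.2, 90% upper limit
      print(bayes_upper_limit(n_obs=3, b=1.2, cl=0.90))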

  10. Algorithm for calculating an availability factor for the inhalation of radioactive and chemical materials

    International Nuclear Information System (INIS)

    1984-02-01

    This report presents a method of calculating the availability of buried radioactive and nonradioactive materials via an inhalation pathway. Availability is the relationship between the concentration of a substance in the soil and the dose rate to a human receptor. Algorithms presented for calculating the availability of elemental inorganic substances are based on atmospheric enrichment factors; those presented for calculating the availability of organic substances are based on vapor pressures. The basis, use, and limitations of the developed equations are discussed. 32 references, 5 tables

  11. Supervision as a management variable that enhances availability of ...

    African Journals Online (AJOL)

    The purpose of this research was to investigate the management variable of supervision as it enhances availability of information sources in Nigerian university libraries. Four federal university libraries in the south-south zone of Nigeria were selected for the study. The population of the study consisted of all academic ...

  12. Application of a primitive variable Newton's method for the calculation of an axisymmetric laminar diffusion flame

    International Nuclear Information System (INIS)

    Xu, Yuenong; Smooke, M.D.

    1993-01-01

    In this paper we present a primitive-variable Newton-based solution method with a block-line linear equation solver for the calculation of reacting flows. The present approach is compared with the stream function-vorticity Newton's method and the SIMPLER algorithm on the calculation of a system of fully elliptic equations governing an axisymmetric methane-air laminar diffusion flame. The chemical reaction is modeled by the flame-sheet approximation. The numerical solution agrees well with experimental data for the major chemical species. The comparison of the three sets of numerical results indicates that the stream function-vorticity solution using the approximate boundary conditions reported in previous calculations predicts a longer flame length and a broader flame shape. With a new set of modified vorticity boundary conditions, we obtain agreement between the primitive-variable and stream function-vorticity solutions. The primitive-variable Newton's method converges much faster than the other two methods. Because the block-line tridiagonal solver requires much less computer memory than a direct solver, the present approach makes it possible to calculate multidimensional flames with detailed reaction mechanisms. The SIMPLER algorithm shows a slow convergence rate compared to the other two methods in the present calculation

  13. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  14. Numerical calculation of the main variables of the laminar flow around a circumferential square obstacle at the wall of a circular pipe

    International Nuclear Information System (INIS)

    Nogueira, A.C.R.

    1981-10-01

    A numerical calculation of the main variables of the laminar, incompressible, axisymmetric, steady flow around a circumferential square obstacle placed at the wall of a circular pipe is presented. The velocity profiles, the separation length and the shape of the separating streamline are compared with available experimental data, and good agreement is achieved. (E.G.)

  15. Evaluation of an open access software for calculating glucose variability parameters of a continuous glucose monitoring system applied at pediatric intensive care unit.

    Science.gov (United States)

    Marics, Gábor; Lendvai, Zsófia; Lódi, Csaba; Koncz, Levente; Zakariás, Dávid; Schuster, György; Mikos, Borbála; Hermann, Csaba; Szabó, Attila J; Tóth-Heyn, Péter

    2015-04-24

    Continuous Glucose Monitoring (CGM) has become an increasingly investigated tool, especially with regard to the monitoring of diabetic and critical care patients. Continuous glucose data allow the calculation of several glucose variability parameters; however, without a dedicated application, interpreting the results is time-consuming and labor-intensive. Our aim was to create an open access software tool [Glycemic Variability Analyzer Program (GVAP)], readily available to calculate the most common parameters of glucose variability, and to test its usability. The GVAP was developed in the MATLAB® 2010b environment. The calculated parameters were the following: average area above/below the target range (Avg. AUC-H/L); Percentage Spent Above/Below the Target Range (PATR/PBTR); Continuous Overall Net Glycemic Action (CONGA); Mean of Daily Differences (MODD); and Mean Amplitude of Glycemic Excursions (MAGE). For verification purposes we selected 14 CGM curves from pediatric critical care patients. A Medtronic® Guardian® Real-Time system with an Enlite® sensor was used. The reference values were obtained from Medtronic®'s own software for Avg. AUC-H/L and PATR/PBTR, from GlyCulator for MODD and CONGA, and by manual calculation for MAGE. The Pearson and Spearman correlation coefficients were above 0.99 for all parameters. The initial execution took 30 minutes; further analysis with the Windows® Standalone Application required approximately 1 minute. The GVAP is a reliable open access program for analyzing different glycemic variability parameters, and hence could be a useful tool for the study of glycemic control among critically ill patients.
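
    As a hedged sketch of two of the listed parameters (following their common textbook definitions, not necessarily GVAP's exact implementation): CONGA-n is the standard deviation of differences between observations n hours apart, and MODD is the mean absolute difference between values 24 hours apart. The simulated trace below is invented.

      import numpy as np

      def conga(glucose, minutes_per_sample=5, n_hours=1):
          # CONGA-n: SD of differences between observations n hours apart
          lag = int(n_hours * 60 / minutes_per_sample)
          return np.std(glucose[lag:] - glucose[:-lag], ddof=1)

      def modd(glucose, minutes_per_sample=5):
          # MODD: mean absolute difference between values 24 h apart
          lag = int(24 * 60 / minutes_per_sample)
          return np.mean(np.abs(glucose[lag:] - glucose[:-lag]))

      rng = np.random.default_rng(0)
      # 3 days of 5-min samples, mmol/L: daily sinusoid plus noise (toy data)
      g = 6.0 + np.sin(np.arange(864) * 2 * np.pi / 288) + rng.normal(0, 0.3, 864)
      print(conga(g), modd(g))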

  16. Hardware availability calculations and results of the IFMIF accelerator facility

    International Nuclear Information System (INIS)

    Bargalló, Enric; Arroyo, Jose Manuel; Abal, Javier; Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne; Weber, Moisés; Podadera, Ivan; Grespan, Francesco; Fagotti, Enrico; De Blas, Alfredo; Dies, Javier; Tapia, Carlos; Mollá, Joaquín; Ibarra, Ángel

    2014-01-01

    Highlights: • IFMIF accelerator facility hardware availability analysis methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. Abstract: Hardware availability calculations have been performed individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements, and find new ways to improve availability performance. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and helps identify the improvements that the final accelerator could incorporate. Because of the R&D character of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  17. Hardware availability calculations and results of the IFMIF accelerator facility

    Energy Technology Data Exchange (ETDEWEB)

    Bargalló, Enric, E-mail: enric.bargallo-font@upc.edu [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Arroyo, Jose Manuel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Abal, Javier [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Beauvais, Pierre-Yves; Gobin, Raphael; Orsini, Fabienne [Commissariat à l’Energie Atomique, Saclay (France); Weber, Moisés; Podadera, Ivan [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain); Grespan, Francesco; Fagotti, Enrico [Istituto Nazionale di Fisica Nucleare, Legnaro (Italy); De Blas, Alfredo; Dies, Javier; Tapia, Carlos [Fusion Energy Engineering Laboratory (FEEL), Technical University of Catalonia (UPC), Barcelona (Spain); Mollá, Joaquín; Ibarra, Ángel [Laboratorio Nacional de Fusión por Confinamiento Magnético – CIEMAT, Madrid (Spain)

    2014-10-15

    Highlights: • IFMIF accelerator facility hardware availability analysis methodology is described. • Results of the individual hardware availability analyses are shown for the reference design. • Accelerator design improvements are proposed for each system. • Availability results are evaluated and compared with the requirements. Abstract: Hardware availability calculations have been performed individually for each system of the deuteron accelerators of the International Fusion Materials Irradiation Facility (IFMIF). The principal goal of these analyses is to estimate the availability of the systems, compare it with the challenging IFMIF requirements, and find new ways to improve availability performance. Major unavailability contributors are highlighted and possible design changes are proposed in order to achieve the hardware availability requirements established for each system. In this paper, such possible improvements are implemented in fault tree models and the availability results are evaluated. The parallel activity on the design and construction of the linear IFMIF prototype accelerator (LIPAc) provides detailed design information for the RAMI (reliability, availability, maintainability and inspectability) analyses and helps identify the improvements that the final accelerator could incorporate. Because of the R&D character of the LIPAc, RAMI improvements could be the major differences between the prototype and the IFMIF accelerator design.

  18. Thermophysical property calculation in thermal plasmas: status, applications, and availability of basic data

    International Nuclear Information System (INIS)

    Murphy, Anthony B.

    2002-01-01

    The status of the calculation of the composition, thermodynamic properties and transport coefficients of thermal plasmas is reviewed. The availability of the required basic data, i.e., thermodynamic properties of individual species and collision integrals for pairs of species, is surveyed. The calculation of diffusion coefficients, required in mixed-gas plasmas, is discussed, and the advantages of the combined diffusion coefficient formulation are outlined. The specific application of demixing is presented. Recent work addressing the difficulties that arise in calculating the composition and transport coefficients of two-temperature plasmas is briefly reviewed. (author)

  19. Estimation of Finite Population Ratio When Other Auxiliary Variables are Available in the Study

    Directory of Open Access Journals (Sweden)

    Jehad Al-Jararha

    2014-12-01

    The estimation of the population total $t_y$ using one or more auxiliary variables, and of the population ratio $\theta_{xy} = t_y/t_x$, where $t_x$ is the population total of the auxiliary variable $X$, for a finite population, is heavily discussed in the literature. In this paper, the idea of estimating the finite population ratio $\theta_{xy}$ is extended to exploit the availability of an auxiliary variable $Z$ in the study, where $Z$ is not used in the definition of the population ratio. This idea is supported by the fact that $Z$ may be more highly correlated with the variable of interest $Y$ than $X$ is. The availability of such an auxiliary variable can be used to improve the precision of the estimation of the population ratio. To our knowledge, this idea has not been discussed in the literature. The bias, variance and mean squared error are given for our approach. In simulations from a real data set, the empirical relative bias and the empirical relative mean squared error are computed for our approach and for different estimators proposed in the literature for estimating the population ratio $\theta_{xy}$. Both the analytical and the simulation results show that, with suitable choices, our approach gives negligible bias and a smaller mean squared error.

  20. European Randomized Study of Screening for Prostate Cancer Risk Calculator: External Validation, Variability, and Clinical Significance.

    Science.gov (United States)

    Gómez-Gómez, Enrique; Carrasco-Valiente, Julia; Blanca-Pedregosa, Ana; Barco-Sánchez, Beatriz; Fernandez-Rueda, Jose Luis; Molina-Abril, Helena; Valero-Rosa, Jose; Font-Ugalde, Pilar; Requena-Tapia, Maria José

    2017-04-01

    To externally validate the European Randomized Study of Screening for Prostate Cancer (ERSPC) risk calculator (RC) and to evaluate its variability between 2 consecutive prostate-specific antigen (PSA) values. We prospectively catalogued 1021 consecutive patients before prostate biopsy for suspicion of prostate cancer (PCa). The risk of PCa and of significant PCa (Gleason score ≥7) for 749 patients was calculated according to the ERSPC-RC (digital rectal examination-based version 3 or 4) for 2 consecutive PSA tests per patient. The calculator's predictions were analyzed using calibration plots and the area under the receiver operating characteristic curve (AUC). Cohen's kappa coefficient was used to compare ability and variability. Of 749 patients, PCa was detected in 251 (33.5%) and significant PCa in 133 (17.8%). Calibration plots showed acceptable parallelism and similar discrimination ability for both PSA levels, with an AUC of 0.69 for PCa and 0.74 for significant PCa. The ERSPC identified 226 (30.2%) unnecessary biopsies at the cost of missing 10 significant PCa cases. The variability of the RC was 16% for PCa and 20% for significant PCa, and higher variability was associated with a reduced risk of significant PCa. We conclude that the performance of the ERSPC-RC in the present cohort is highly similar across the 2 PSA levels; however, the RC variability value is associated with a decreased risk of significant PCa. Use of the ERSPC in our cohort identifies a high number of unnecessary biopsies; thus, incorporating the ERSPC-RC could support the clinical decision of whether to carry out a prostate biopsy.

  1. The Influence of Output Variability from Renewable Electricity Generation on Net Energy Calculations

    Directory of Open Access Journals (Sweden)

    Hannes Kunz

    2014-01-01

    One key approach to analyzing the feasibility of energy extraction and generation technologies is to understand the net energy they contribute to society. These analyses most commonly focus on a simple comparison of a source's expected energy outputs to the required energy inputs, measured in the form of energy return on investment (EROI). What is not typically factored into net energy analysis is the influence of output variability. This omission ignores a key attribute of biological organisms and societies alike: the preference for stable returns with low dispersion over equivalent returns that are intermittent or variable. This biological predilection for stability, observed and refined in the academic finance literature, has a direct bearing on many new energy technologies whose outputs are much more variable than those of traditional energy sources. We investigate the impact of variability on net energy metrics and develop a theoretical framework to evaluate energy systems based on existing financial and biological risk models. We then illustrate the impact of variability on nominal energy return using representative technologies in electricity generation, with a more detailed analysis of wind power, where the intermittent and stochastic availability of hard-to-store electricity is factored into theoretical returns.
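
    A hedged sketch of the general idea: discounting a nominal energy return by the dispersion of the output stream, in the spirit of the risk-adjusted returns the authors borrow from finance. The specific mean-minus-sigma penalty and all numbers are illustrative assumptions, not the paper's model.

      import numpy as np

      def eroi_with_variability(outputs, inputs, risk_aversion=0.5):
          # nominal EROI plus a dispersion-penalized variant: the mean
          # per-period output is discounted by its standard deviation
          # (mean-variance style penalty; the penalty form is assumed)
          out = np.asarray(outputs, dtype=float)
          mu, sigma = out.mean(), out.std(ddof=1)
          nominal = mu / inputs
          adjusted = max(mu - risk_aversion * sigma, 0.0) / inputs
          return nominal, adjusted

      rng = np.random.default_rng(1)
      wind = rng.gamma(2.0, 0.5, 8760)                 # toy hourly wind output
      print(eroi_with_variability(wind, inputs=0.05))  # per-hour input, assumed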

  2. Spatial modelling of marine organisms in Forsmark and Oskarshamn. Including calculation of physical predictor variables

    Energy Technology Data Exchange (ETDEWEB)

    Carlen, Ida; Nikolopoulos, Anna; Isaeus, Martin (AquaBiota Water Research, Stockholm (SE))

    2007-06-15

    GIS grids (maps) of marine parameters were created using point data from previous site investigations in the Forsmark and Oskarshamn areas. The proportion of global radiation reaching the sea bottom in Forsmark and Oskarshamn was calculated in ArcView, using Secchi depth measurements and the digital elevation models for the respective area. The number of days per year when the incoming light exceeds 5 MJ/m2 at the bottom was then calculated using the result of the previous calculations together with measured global radiation. Existing modelled grid-point data on bottom and pelagic temperature for Forsmark were interpolated to create surface covering grids. Bottom and pelagic temperature grids for Oskarshamn were calculated using point measurements to achieve yearly averages for a few points and then using regressions with existing grids to create new maps. Phytoplankton primary production in Forsmark was calculated using point measurements of chlorophyll and irradiance, and a regression with a modelled grid of Secchi depth. Distribution of biomass of macrophyte communities in Forsmark and Oskarshamn was calculated using spatial modelling in GRASP, based on field data from previous surveys. Physical parameters such as those described above were used as predictor variables. Distribution of biomass of different functional groups of fish in Forsmark was calculated using spatial modelling based on previous surveys and with predictor variables such as physical parameters and results from macrophyte modelling. All results are presented as maps in the report. The quality of the modelled predictions varies as a consequence of the quality and amount of the input data, the ecology and knowledge of the predicted phenomena, and by the modelling technique used. A substantial part of the variation is not described by the models, which should be expected for biological modelling. Therefore, the resulting grids should be used with caution and with this uncertainty kept in mind. All

  3. Spatial modelling of marine organisms in Forsmark and Oskarshamn. Including calculation of physical predictor variables

    International Nuclear Information System (INIS)

    Carlen, Ida; Nikolopoulos, Anna; Isaeus, Martin

    2007-06-01

    GIS grids (maps) of marine parameters were created using point data from previous site investigations in the Forsmark and Oskarshamn areas. The proportion of global radiation reaching the sea bottom in Forsmark and Oskarshamn was calculated in ArcView, using Secchi depth measurements and the digital elevation models for the respective area. The number of days per year when the incoming light exceeds 5 MJ/m2 at the bottom was then calculated using the result of the previous calculations together with measured global radiation. Existing modelled grid-point data on bottom and pelagic temperature for Forsmark were interpolated to create surface covering grids. Bottom and pelagic temperature grids for Oskarshamn were calculated using point measurements to achieve yearly averages for a few points and then using regressions with existing grids to create new maps. Phytoplankton primary production in Forsmark was calculated using point measurements of chlorophyll and irradiance, and a regression with a modelled grid of Secchi depth. Distribution of biomass of macrophyte communities in Forsmark and Oskarshamn was calculated using spatial modelling in GRASP, based on field data from previous surveys. Physical parameters such as those described above were used as predictor variables. Distribution of biomass of different functional groups of fish in Forsmark was calculated using spatial modelling based on previous surveys and with predictor variables such as physical parameters and results from macrophyte modelling. All results are presented as maps in the report. The quality of the modelled predictions varies as a consequence of the quality and amount of the input data, the ecology and knowledge of the predicted phenomena, and by the modelling technique used. A substantial part of the variation is not described by the models, which should be expected for biological modelling. Therefore, the resulting grids should be used with caution and with this uncertainty kept in mind. All
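
    The report's exact light formulas are not given in the abstract; a common approximation for the step it describes estimates the diffuse attenuation coefficient from Secchi depth (Poole-Atkins, k ≈ 1.7 / Secchi depth) and applies Beer-Lambert decay down to the bottom, as in this sketch with assumed depths.

      import math

      def fraction_at_bottom(secchi_depth_m, bottom_depth_m, k_factor=1.7):
          # Poole-Atkins estimate of the diffuse attenuation coefficient (1/m),
          # then Beer-Lambert attenuation exp(-k * z) to the bottom depth
          k = k_factor / secchi_depth_m
          return math.exp(-k * bottom_depth_m)

      # e.g. Secchi depth 4 m, bottom at 6 m
      print(fraction_at_bottom(4.0, 6.0))   # roughly 8% of surface radiation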

  4. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
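
    A small empirical test in the spirit of the assessment, assuming the textbook one-sided variables plan with unknown sigma (accept the lot if (USL - xbar)/s >= k); the plan parameters are arbitrary and this is not the NESC calculators' implementation.

      import numpy as np
      from scipy.stats import norm

      def accept_prob(n, k, p_defective, n_trials=100_000, seed=0):
          # empirical P(accept) for normally distributed items; the USL is
          # placed so the lot's true fraction above USL equals p_defective
          rng = np.random.default_rng(seed)
          usl = norm.ppf(1.0 - p_defective)       # items ~ N(0, 1)
          x = rng.normal(size=(n_trials, n))
          xbar = x.mean(axis=1)
          s = x.std(axis=1, ddof=1)
          return float(np.mean((usl - xbar) / s >= k))

      print(accept_prob(n=30, k=1.9, p_defective=0.01))  # good lots: high P(accept)
      print(accept_prob(n=30, k=1.9, p_defective=0.10))  # bad lots: low P(accept)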

  5. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    Science.gov (United States)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
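
    The normalization step can be sketched with the classical Saint-Robert (Vieille) power law r = a·P^n, which is an assumption here since the abstract does not give the paper's formula; the rates, pressures and exponent below are invented for illustration.

      def normalize_burn_rate(r_measured, p_measured, p_reference, n_exponent):
          # scale a pressure-based burn rate to a common reference pressure
          # via r = a * P**n, so motors that reached slightly different
          # chamber pressures can be compared directly
          return r_measured * (p_reference / p_measured) ** n_exponent

      # two 2x4 CP motors normalized to a common 1000 psi (hypothetical data)
      r1 = normalize_burn_rate(0.368, 1035.0, 1000.0, n_exponent=0.35)
      r2 = normalize_burn_rate(0.361, 982.0, 1000.0, n_exponent=0.35)
      print(r1, r2)   # in/s at the common pressure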

  6. NULLIJN, a program to calculate zero curves of a function of two variables of which one may be complex

    International Nuclear Information System (INIS)

    Jagher, P.C. de

    1978-01-01

    When an algorithm for a function f of two variables, for instance a dispersion function f(ω, k) or a potential V(r, z), is known, the program calculates and plots the zero curves, thus giving a graphical representation of an implicitly defined function. One of the variables may be complex. A quadratic extrapolation, followed by a regula falsi algorithm to find a zero, is used to calculate a succession of zero points along a curve. The starting point of a curve is found by detecting a change of sign of the function on the edge of the area G that is examined. Curves that lie entirely inside G are not found. Starting points of curves where the imaginary part of the complex variable is large might be missed. (Auth.)
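
    NULLIJN's starting-point search translates naturally into a short modern sketch: scan one edge of the region G for sign changes and refine each bracket (brentq standing in for the regula falsi step); the test function is a toy whose zero curve is the unit circle.

      import numpy as np
      from scipy.optimize import brentq

      def edge_zero_starts(f, x, y_edge):
          # starting points of zero curves of f(x, y) along the edge y = y_edge:
          # detect sign changes between grid points, then refine each bracket
          vals = np.array([f(xi, y_edge) for xi in x])
          starts = []
          for i in range(len(x) - 1):
              if vals[i] == 0.0:
                  starts.append((x[i], y_edge))
              elif vals[i] * vals[i + 1] < 0.0:
                  xr = brentq(lambda t: f(t, y_edge), x[i], x[i + 1])
                  starts.append((xr, y_edge))
          return starts

      f = lambda x, y: x**2 + y**2 - 1.0   # zero curve: the unit circle
      print(edge_zero_starts(f, np.linspace(-2, 2, 41), y_edge=0.0))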

  7. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    The key components in planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation and explains that the calculation differs between study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.
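
    The two standard normal-approximation formulas the article builds on can be stated compactly; the worked numbers below are invented for illustration.

      from math import ceil
      from scipy.stats import norm

      def n_continuous(delta, sigma, alpha=0.05, power=0.80):
          # per-group n to detect a mean difference delta given SD sigma,
          # two-sided alpha (normal-approximation formula)
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil(2 * (sigma * z / delta) ** 2)

      def n_proportions(p1, p2, alpha=0.05, power=0.80):
          # per-group n to detect p1 vs p2 (unpooled normal approximation)
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil(z ** 2 * (p1*(1 - p1) + p2*(1 - p2)) / (p1 - p2) ** 2)

      print(n_continuous(delta=5.0, sigma=12.0))   # ~91 per group
      print(n_proportions(p1=0.40, p2=0.25))       # ~150 per group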

  8. Dynamics calculation with variable mass of mountain self-propelled chassis

    Directory of Open Access Journals (Sweden)

    R.M. Makharoblidze

    2016-12-01

    Many technological processes in agricultural production mechanization, such as grain sowing, planting root-tuber crops, fertilizing, spraying and dusting, pressing feed materials, and harvesting various crops, are performed by machine-tractor units whose links, or the media and materials they process, have variable mass. In recent years, systems for automatic control, adjustment and monitoring of technological processes and working members in agricultural production have also been developing. The dynamics of the transition processes of a mountain self-propelled chassis with variable mass is studied for the realistic case of disconnecting or joining masses, most often a linear function of time, m(t) = ct. Formulas are derived for the change of the velocity of movement as a function of the displacement of the unit, and the dependence of this velocity on the tractor and technological machine performance is defined, taking into account the gradual addition or removal of agricultural material mass. From this equation the linear movement of the machine-tractor unit can be determined, and from the obtained expressions the basic operating parameters of a machine-tractor unit with variable mass can be defined. The results of this research can be applied to defining the characteristics of such units and to the development of new agricultural tractors.

  9. Available climatological and oceanographical data for site investigation program

    International Nuclear Information System (INIS)

    Lindell, S.; Ambjoern, C.; Juhlin, B.; Larsson-McCann, S.; Lindquist, K.

    2000-03-01

    Information on available data, measurements and models for climate, meteorology, hydrology and oceanography for six communities has been analysed and studied. The six communities are Nykoeping, Oesthammar, Oskarshamn, Tierp, Hultsfred and Aelvkarleby, all of them selected by Svensk Kaernbraenslehantering AB, SKB, for a pre-study on the possibilities for deep disposal of used nuclear fuel. For each of them a thorough and detailed register of available climatological data, together with appropriate statistical properties, is listed. The purpose is to compare the six communities concerning the climatological and oceanographical data available and to analyse the extent of new measurements or model applications needed for all of the selected sites. Statistical information on precipitation, temperature and runoff has good coverage in all six communities. If new information concerning any of these variables is needed at sites where no data collection exists today, new installations can be made. Data on precipitation in the form of snow and on days with snow cover are also available, but to a lesser extent. This also applies to days with ground frost and average ground frost level, where data coverage is not complete. If more information on these variables is wanted, new measurements or model calculations must be initiated. Data on the freeze-up and break-up of ice on lakes are also insufficient, but this variable can be calculated with good results using one-dimensional models. Data describing air pressure tendency and wind velocity and direction are available for all communities, and this information should be sufficient for the purposes of SKB. This also holds for global radiation and duration of sunshine, where no new data should be needed. Measured data on evaporation are normally not available in Sweden outside special research basins. Actual evaporation is, however, a variable that can easily be calculated using models. There are many lakes in the six

  10. Available climatological and oceanographical data for site investigation program

    Energy Technology Data Exchange (ETDEWEB)

    Lindell, S.; Ambjoern, C.; Juhlin, B.; Larsson-McCann, S.; Lindquist, K. [Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden)

    2000-03-15

    Information on available data, measurements and models for climate, meteorology, hydrology and oceanography for six communities has been analysed and studied. The six communities are Nykoeping, Oesthammar, Oskarshamn, Tierp, Hultsfred and Aelvkarleby, all of them selected by Svensk Kaernbraenslehantering AB, SKB, for a pre-study on the possibilities for deep disposal of used nuclear fuel. For each of them a thorough and detailed register of available climatological data, together with appropriate statistical properties, is listed. The purpose is to compare the six communities concerning the climatological and oceanographical data available and to analyse the extent of new measurements or model applications needed for all of the selected sites. Statistical information on precipitation, temperature and runoff has good coverage in all six communities. If new information concerning any of these variables is needed at sites where no data collection exists today, new installations can be made. Data on precipitation in the form of snow and on days with snow cover are also available, but to a lesser extent. This also applies to days with ground frost and average ground frost level, where data coverage is not complete. If more information on these variables is wanted, new measurements or model calculations must be initiated. Data on the freeze-up and break-up of ice on lakes are also insufficient, but this variable can be calculated with good results using one-dimensional models. Data describing air pressure tendency and wind velocity and direction are available for all communities, and this information should be sufficient for the purposes of SKB. This also holds for global radiation and duration of sunshine, where no new data should be needed. Measured data on evaporation are normally not available in Sweden outside special research basins. Actual evaporation is, however, a variable that can easily be calculated using models. There are many lakes in the six

  11. A first approach to calculate BIOCLIM variables and climate zones for Antarctica

    Science.gov (United States)

    Wagner, Monika; Trutschnig, Wolfgang; Bathke, Arne C.; Ruprecht, Ulrike

    2018-02-01

    For testing the hypothesis that macroclimatological factors determine the occurrence, biodiversity, and species specificity of both symbiotic partners of Antarctic lecideoid lichens, we present a first approach to computing the full set of 19 BIOCLIM variables, available at http://www.worldclim.org/ for all regions of the world with the exception of Antarctica. Annual mean temperature (Bio 1) and annual precipitation (Bio 12) were chosen to define climate zones of the Antarctic continent and adjacent islands, as required for ecological niche modeling (ENM). The zones are based on data for the years 2009-2015 obtained from the Antarctic Mesoscale Prediction System (AMPS) database of the Ohio State University. For both temperature and precipitation, two separate zonings were specified: temperature values were divided into 12 zones (named 1 to 12) and precipitation values into five (named A to E). By combining these two partitions, we defined climate zonings in which each geographical point can be uniquely assigned to exactly one zone, allowing an immediate explicit interpretation. The soundness of the newly calculated climate zones was tested by comparison with already published data, which used only three zones defined from climate information in the literature. The newly defined climate zones result in a more precise assignment of species distributions to individual habitats. This study provides the basis for a more detailed continent-wide ENM using a comprehensive dataset of lichen specimens located within 21 different climate regions.
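
    Bio 1 and Bio 12 have simple definitions (mean of the twelve monthly mean temperatures; sum of the twelve monthly precipitation totals), sketched below on an invented monthly climatology; the real computation runs per grid cell over the AMPS fields.

      import numpy as np

      def bio1_bio12(monthly_tmean_c, monthly_precip_mm):
          # Bio 1: annual mean temperature (deg C), mean of monthly means
          # Bio 12: annual precipitation (mm), sum of monthly totals
          return float(np.mean(monthly_tmean_c)), float(np.sum(monthly_precip_mm))

      # toy coastal-Antarctic monthly climatology (assumed values)
      t = [-1, -3, -7, -12, -15, -17, -18, -18, -16, -11, -5, -2]
      p = [30, 28, 35, 40, 38, 30, 25, 27, 33, 36, 32, 31]
      print(bio1_bio12(t, p))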

  12. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    Science.gov (United States)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
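
    One way to see the methodological sensitivity the authors quantify: fit an elastic constant as the quadratic coefficient of an energy-strain curve and watch the fitted value drift with the chosen strain range when the surface is anharmonic. The energy surface below is a toy polynomial, not a real interatomic potential.

      import numpy as np

      def fitted_constant(strains, energies, volume):
          # quadratic coefficient of an energy-strain fit; an elastic
          # constant follows as C = 2 * c2 / V (energy per unit volume)
          c2 = np.polyfit(strains, energies, 2)[0]
          return 2.0 * c2 / volume

      # toy anharmonic energy surface (assumed): the quartic term makes the
      # fitted constant drift as the strain range widens
      C_true, B, V = 100.0, 5000.0, 1.0
      for emax in (0.002, 0.01, 0.05):
          e = np.linspace(-emax, emax, 11)
          E = 0.5 * C_true * V * e**2 + B * V * e**4
          print(emax, fitted_constant(e, E, V))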

  13. Acquisition of data from on-line laser turbidimeter and calculation of some kinetic variables in computer-coupled automated fed-batch culture

    International Nuclear Information System (INIS)

    Kadotani, Y.; Miyamoto, K.; Mishima, N.; Kominami, M.; Yamane, T.

    1995-01-01

    Output signals of a commercially available on-line laser turbidimeter exhibit fluctuations due to air and/or CO₂ bubbles. A simple data processing algorithm and personal computer software have been developed to smooth the noisy turbidity data acquired and to use them for on-line calculation of some kinetic variables involved in batch and fed-batch cultures of uniformly dispersed microorganisms. With this software, about 10³ instantaneous turbidity readings acquired over 55 s are averaged and converted to dry cell concentration, X, every minute. The volume of the culture broth, V, is estimated from the averaged output data of the weight loss of the feed solution reservoir, W, using an electronic balance on which the reservoir is placed. The software then performs linear regression analyses over the past 30 min of the total biomass, VX, the natural logarithm of the total biomass, ln(VX), and the weight loss, W, in order to calculate the volumetric growth rate, d(VX)/dt, the specific growth rate, μ [= d ln(VX)/dt], and the rate of weight loss, dW/dt, every minute in a fed-batch culture. The software performing the first-order regression analyses of VX, ln(VX) and W was applied to batch and fed-batch cultures of Escherichia coli on minimal synthetic or natural complex media. Sample determination coefficients of the three variables (VX, ln(VX) and W) were close to unity, indicating that the calculations are accurate. Furthermore, the growth yield, Yx/s, and the specific substrate consumption rate, qsc, were approximately estimated from dW/dt and μ in a 'balanced' fed-batch culture of E. coli on the minimal synthetic medium, where the computer-aided substrate-feeding system automatically matches the cell growth. (author)
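
    The central on-line estimate, μ = d ln(VX)/dt from a rolling linear regression, is easy to sketch; the window length matches the abstract's 30 min, while the sampling and growth numbers are invented.

      import numpy as np

      def specific_growth_rate(t_min, vx, window=30):
          # mu = d ln(VX)/dt: slope of a linear regression of ln(VX) on time
          # over the past `window` minutes, one estimate per sample
          t = np.asarray(t_min, float)
          y = np.log(np.asarray(vx, float))
          mu = np.full(t.shape, np.nan)
          for i in range(len(t)):
              sel = (t <= t[i]) & (t > t[i] - window)
              if sel.sum() >= 3:
                  mu[i] = np.polyfit(t[sel], y[sel], 1)[0]
          return mu   # per minute; multiply by 60 for 1/h

      t = np.arange(0, 121)                        # minutes, toy run
      vx = 0.5 * np.exp(0.007 * t)                 # toy total biomass
      print(specific_growth_rate(t, vx)[-1] * 60)  # ~0.42 per hour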

  14. Variabilidade da água disponível de uma Terra Roxa Estruturada Latossólica [Available soil-water variability of a "Terra Roxa Estruturada Latossólica" (Rhodic Kanhapludalf)]

    Directory of Open Access Journals (Sweden)

    S.O. Moraes

    1993-12-01

    From 250 soil-water retention curves, determined on undisturbed samples collected from a 6250 m² area of a "Terra Roxa Estruturada Latossólica" (Rhodic Kanhapludalf) in Piracicaba, SP, Brazil, four sets of available soil-water values were calculated, taking -1x10³, -6x10³, -1x10⁴ and -3x10⁴ Pa as possible matric potential values corresponding to field capacity, and -1.5x10⁶ Pa as the value corresponding to the permanent wilting point. Measures of position (mean), variability (coefficient of variation), skewness and kurtosis, together with the number of samples needed to estimate the mean at a given probability level, were computed in order to quantify the variability and the sensitivity of the results within and between the sets of available-water values. The analysis showed that the variability of available water, obtained from two water-content values of the retention curve, is much larger than the variability of each value individually. That is, although the variables involved may be the same, the degree of variability (expressed, for example, by the coefficient of variation) or the sensitivity of the measurements (expressed by the number of samples needed to estimate the mean within a given confidence interval) can be quite distinct, indicating that the results of a sampling carried out with one objective may not serve other purposes, even though the variables may be dependent.

  15. Study on availability of GPU for scientific and engineering calculations

    International Nuclear Information System (INIS)

    Sakamoto, Kensaku; Kobayashi, Seiji

    2009-07-01

    Recently, the number of scientific and engineering calculations performed on GPUs (Graphics Processing Units) has been increasing. GPUs are said to have much higher peak floating-point processing power and memory bandwidth than CPUs (Central Processing Units). We have studied the effectiveness of GPUs by applying them to fundamental scientific and engineering calculations using the CUDA (Compute Unified Device Architecture) development tools. The results show the following: 1) Computations on GPUs are effective for calculations such as matrix operations, FFT (Fast Fourier Transform) and CFD (Computational Fluid Dynamics) in the nuclear research domain. 2) Highly advanced programming is required to bring out the full performance of GPUs. 3) Double-precision performance is low, and ECC (Error Correction Code) support in graphics memory systems is lacking. (author)

  16. Measurement-Device Independency Analysis of Continuous-Variable Quantum Digital Signature

    Directory of Open Access Journals (Sweden)

    Tao Shang

    2018-04-01

    With the practical implementation of continuous-variable quantum cryptographic protocols, security problems resulting from measurement-device loopholes are receiving increasing attention. At present, research on measurement-device independency analysis is limited to quantum key distribution protocols, while different protocols present different security problems. Considering the importance of quantum digital signatures in quantum cryptography, in this paper we attempt to analyze the measurement-device independency of continuous-variable quantum digital signature, especially continuous-variable quantum homomorphic signature. First, we calculate the upper bound of the error rate of the protocol. If it is negligible on the condition that all measurement devices are untrusted, the protocol is deemed measurement-device-independent. Then, we simplify the calculation by using the characteristics of continuous variables and prove the measurement-device independency of the protocol according to the calculation result. In addition, the proposed analysis method can be extended to other quantum cryptographic protocols besides continuous-variable quantum homomorphic signature.

  17. A meta analysis of the variability in firm performance attributable to human resource variables

    Directory of Open Access Journals (Sweden)

    Lloyd Kapondoro

    2015-01-01

    The contribution of Human Resource Management (HRM) practices to organisation-wide performance is a critical aspect of the Human Resource (HR) value proposition. The purpose of the study was to describe the strength of HRM practices and systems in influencing overall organisational performance. While research has concluded that there is a significant positive relationship between HRM practices or systems and an organisation's market performance, the strength of this relationship has received relatively little analysis aimed at explaining the degree to which HRM practices explain variance in firm performance. The study undertook a meta-analysis of research published in international journals. The study established that HRM variables accounted for an average of 31% of the variability in firm performance. Cohen's f², calculated for this study as a meta-analytic effect size, yielded an average of 0.681, implying that HRM variables account for 68% of the variability in firm performance. A one-sample Kolmogorov-Smirnov test showed that the distribution of R² is not normal. A major managerial implication of this study is that effective HRM practices have a significant business case. The study provides, quantitatively, the average variability in firm success that HRM accounts for.

  18. On a New Technique for Discovering Variable Stars

    Directory of Open Access Journals (Sweden)

    Mironov A. V.

    2003-12-01

    A technique for discovering variable stars based on the calculation of correlation coefficients is proposed. Applications of the technique are demonstrated on the results of numerical experiments and on Hipparcos photometric data.
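
    The abstract does not spell out the statistic, so the sketch below shows one common correlation-based variant under an assumed setup: deviations measured simultaneously in two passbands correlate for a genuinely variable star but not for pure measurement noise. All light-curve numbers are simulated.

      import numpy as np

      def variability_correlation(band_b, band_v):
          # Pearson correlation of magnitude deviations in two passbands;
          # correlated deviations suggest true variability, uncorrelated
          # scatter is consistent with measurement noise
          db = band_b - np.mean(band_b)
          dv = band_v - np.mean(band_v)
          return float(np.corrcoef(db, dv)[0, 1])

      rng = np.random.default_rng(2)
      signal = 0.1 * np.sin(np.linspace(0, 20, 200))   # shared variation, mag
      b = 12.0 + signal + rng.normal(0, 0.02, 200)
      v = 11.5 + signal + rng.normal(0, 0.02, 200)
      print(variability_correlation(b, v))             # close to 1 for a variable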

  19. Trophically available metal - A variable feast

    International Nuclear Information System (INIS)

    Rainbow, Philip S.; Luoma, Samuel N.; Wang Wenxiong

    2011-01-01

    Assimilation of trace metals by predators from prey is affected by the physicochemical form of the accumulated metal in the prey, leading to the concept of a Trophically Available Metal (TAM) component in the food item definable in terms of particular subcellular fractions of accumulated metal. As originally defined TAM consists of soluble metal forms and metal associated with cell organelles, the combination of separated fractions which best explained particular results involving a decapod crustacean predator feeding on bivalve mollusc tissues. Unfortunately TAM as originally defined has subsequently frequently been used in the literature as an absolute description of that component of accumulated metal that is trophically available in all prey to all consumers. It is now clear that what is trophically available varies between food items, consumers and metals. TAM as originally defined should be seen as a useful starting hypothesis, not as a statement of fact. - Trophically Available Metal (TAM), the component of accumulated metal in food that is taken up by a feeding animal, varies with food type and consumer.

  1. Fourier transform methods for calculating action variables and semiclassical eigenvalues for coupled oscillator systems

    International Nuclear Information System (INIS)

    Eaker, C.W.; Schatz, G.C.; De Leon, N.; Heller, E.J.

    1984-01-01

    Two methods for calculating the good action variables and semiclassical eigenvalues for coupled oscillator systems are presented, both of which relate the actions to the coefficients appearing in the Fourier representation of the normal coordinates and momenta. The two methods differ in that one is based on the exact expression for the actions together with the EBK semiclassical quantization condition while the other is derived from the Sorbie–Handy (SH) approximation to the actions. However, they are also very similar in that the actions in both methods are related to the same set of Fourier coefficients and both require determining the perturbed frequencies in calculating actions. These frequencies are also determined from the Fourier representations, which means that the actions in both methods are determined from information entirely contained in the Fourier expansion of the coordinates and momenta. We show how these expansions can very conveniently be obtained from fast Fourier transform (FFT) methods and that numerical filtering methods can be used to remove spurious Fourier components associated with the finite trajectory integration duration. In the case of the SH based method, we find that the use of filtering enables us to relax the usual periodicity requirement on the calculated trajectory. Application to two standard Hénon–Heiles models is considered and both are shown to give semiclassical eigenvalues in good agreement with previous calculations for nondegenerate and 1:1 resonant systems. In comparing the two methods, we find that although the exact method is quite general in its ability to be used for systems exhibiting complex resonant behavior, it converges more slowly with increasing trajectory integration duration and is more sensitive to the algorithm for choosing perturbed frequencies than the SH based method.
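
    The FFT-with-filtering step described above is easy to sketch. A minimal illustration (not the authors' code; the amplitude-threshold filter and the sample two-frequency trajectory are assumptions): take the FFT of a coordinate time series and discard components whose amplitude falls below a small fraction of the dominant peak, mimicking the removal of spurious components caused by the finite integration time.

```python
import numpy as np

def filtered_fourier_coefficients(q, dt, keep_fraction=1e-3):
    """FFT of a coordinate time series q(t); discard components whose
    amplitude is below keep_fraction of the largest peak, as a crude
    stand-in for the numerical filtering discussed in the abstract."""
    coeffs = np.fft.rfft(q)
    freqs = np.fft.rfftfreq(len(q), d=dt)
    amps = np.abs(coeffs)
    mask = amps >= keep_fraction * amps.max()
    return freqs[mask], coeffs[mask]

# Example: a quasi-periodic trajectory with two underlying frequencies
t = np.arange(0.0, 200.0, 0.05)
q = 1.0 * np.cos(2 * np.pi * 0.11 * t) + 0.2 * np.cos(2 * np.pi * 0.31 * t)
f, c = filtered_fourier_coefficients(q, dt=0.05)
print(f)   # the dominant frequencies survive the filter
```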

  2. Assessment of the available ²³³U cross-section evaluations in the calculation of critical benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Wright, R.Q.

    1996-10-01

    In this report we investigate the adequacy of the available ²³³U cross-section data for calculation of experimental critical systems. The ²³³U evaluations provided in two evaluated nuclear data libraries, the U.S. Data Bank [ENDF/B (Evaluated Nuclear Data Files)] and the Japanese Data Bank [JENDL (Japanese Evaluated Nuclear Data Library)], are examined. Calculations were performed for six thermal and ten fast experimental critical systems using the Sₙ transport XSDRNPM code. To verify the performance of the ²³³U cross-section data for nuclear criticality safety applications in which the neutron energy spectrum is predominantly in the epithermal energy range, calculations of four numerical benchmark systems with energy spectra in the intermediate energy range were done. These calculations serve only as an indication of the difference in calculated results that may be expected when the two ²³³U cross-section evaluations are used for problems with neutron spectra in the intermediate energy range. Additionally, comparisons of experimental and calculated central fission rate ratios were also made. The study has suggested that an ad hoc ²³³U evaluation based on the JENDL library provides better overall results for both fast and thermal experimental critical systems.

  4. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving safety or reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)
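
    As a flavour of the variance reduction mentioned above, here is a minimal importance-sampling sketch for a rare failure event (a generic textbook example, not the program described in the record): sampling is shifted toward the failure region and each sample is reweighted by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

beta = 4.0        # failure threshold in standard-normal units
n = 100_000

# Plain Monte Carlo: the failure region is so rare that almost no samples hit it.
plain = (rng.standard_normal(n) > beta).mean()

# Importance sampling: draw from N(beta, 1), which centres samples on the
# failure boundary, and reweight by the likelihood ratio
# f(x)/g(x) = exp(beta**2/2 - beta*x).
x = rng.normal(beta, 1.0, n)
weights = np.exp(beta**2 / 2.0 - beta * x)
is_estimate = np.mean((x > beta) * weights)

print(plain, is_estimate)   # true value is 1 - Phi(4) ~ 3.17e-5
```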

  5. A geometric model for magnetizable bodies with internal variables

    Directory of Open Access Journals (Sweden)

    Restuccia, L

    2005-11-01

    In a geometrical framework for thermo-elasticity of continua with internal variables we consider a model of magnetizable media previously discussed and investigated by Maugin. We assume as state variables the magnetization together with its space gradient, subjected to evolution equations depending on both internal and external magnetic fields. We calculate the entropy function and necessary conditions for its existence.

  6. CO2 flowrate calculator

    International Nuclear Information System (INIS)

    Carossi, Jean-Claude

    1969-02-01

    A CO2 flowrate calculator has been designed for measuring and recording the gas flow in the loops of the Pegase reactor. The analog calculator applies, at every moment, Bernoulli's formula to the values that characterize the carbon dioxide flow through a nozzle. The calculator electronics is described (it includes a sampling calculator and a two-variable function generator), with its amplifiers, triggers, interpolator, multiplier, etc. Calculator operation and setting are presented.
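
    The record gives no formulas beyond naming Bernoulli's equation, but the corresponding nozzle relation is standard. A minimal sketch under assumed incompressible flow (real CO2 gas metering would need compressibility corrections; all numbers here are hypothetical):

```python
import math

def nozzle_flow_rate(delta_p_pa, density_kg_m3, throat_area_m2,
                     discharge_coeff=0.98, beta=0.5):
    """Volumetric flow through a nozzle from Bernoulli's equation:
    Q = Cd * A * sqrt(2*dp / (rho * (1 - beta**4))),
    where beta is the throat-to-pipe diameter ratio."""
    return discharge_coeff * throat_area_m2 * math.sqrt(
        2.0 * delta_p_pa / (density_kg_m3 * (1.0 - beta**4)))

# Hypothetical values: 2 kPa differential, CO2 at ~1.8 kg/m3, 5 cm2 throat
print(nozzle_flow_rate(2000.0, 1.8, 5e-4))   # m3/s
```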

  7. Widely available active sites on Ni2P for electrochemical hydrogen evolution - insights from first principles calculations

    DEFF Research Database (Denmark)

    Hansen, Martin Hangaard; Stern, Lucas-Alexandre; Feng, Ligang

    2015-01-01

    We present insights into the mechanism and the active site for hydrogen evolution on nickel phosphide (Ni2P). Ni2P was recently discovered to be a very active non-precious hydrogen evolution catalyst. Current literature attributes the activity of Ni2P to a particular site on the (0001) facet. In the present study, using Density Functional Theory (DFT) calculations, we show that several widely available low index crystal facets on Ni2P have better properties for a high catalytic activity. DFT calculations were used to identify moderately bonding nickel bridge sites and nickel hollow sites for hydrogen adsorption and to calculate barriers for the Tafel pathway. The investigated surfaces in this study were the (101̄0), (1̄1̄20), (112̄0), (112̄1) and (0001) facets of the hexagonal Ni2P crystal. In addition to the DFT results, we present experiments on Ni2...

  8. Measuring the chemical and cytotoxic variability of commercially available kava (Piper methysticum G. Forster).

    Directory of Open Access Journals (Sweden)

    Amanda C Martin

    Formerly used world-wide as a popular botanical medicine to reduce anxiety, reports of hepatotoxicity linked to consuming kava extracts in the late 1990s have resulted in global restrictions on kava use and have hindered kava-related research. Despite its presence on the United States Food and Drug Administration consumer advisory list for the past decade, export data from kava producing countries implies that US kava imports, which are not publicly reported, are both increasing and of a fairly high volume. We have measured the variability in extract chemical composition and cytotoxicity towards human lung adenocarcinoma A549 cancer cells of 25 commercially available kava products. Results reveal a high level of variation in chemical content and cytotoxicity of currently available kava products. As public interest and use of kava products continues to increase in the United States, efforts to characterize products and expedite research of this potentially useful botanical medicine are necessary.

  9. Kendall-Theil Robust Line (KTRLine, version 1.0) - A Visual Basic Program for Calculating and Graphing Robust Nonparametric Estimates of Linear-Regression Coefficients Between Two Continuous Variables

    Science.gov (United States)

    Granato, Gregory E.

    2006-01-01

    The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
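
    The estimator is described precisely enough above to sketch directly (a minimal Python illustration, not the KTRLine program itself): the slope is the median of all pairwise slopes, and the intercept forces the line through the medians of the input data.

```python
import numpy as np
from itertools import combinations

def kendall_theil_line(x, y):
    """Kendall-Theil robust line: slope is the median of all pairwise
    slopes; the line passes through (median(x), median(y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    slope = np.median(slopes)
    intercept = np.median(y) - slope * np.median(x)
    return slope, intercept

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 4.2, 5.8, 8.4, 9.9, 30.0]    # one gross outlier
print(kendall_theil_line(x, y))         # slope stays near 2 despite the outlier
```

    The single gross outlier barely moves the fitted slope, which is the resistance property that motivates the method for hydrologic data.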

  10. Developing a Model of Tuition Fee Calculation for Universities of Medical Sciences

    Directory of Open Access Journals (Sweden)

    Seyed Amir Mohsen Ziaee

    2018-01-01

    Background: The aim of our study was to introduce and evaluate a practicable model for tuition fee calculation for each medical field in universities of medical sciences in Iran. Methods: Fifty experts in 11 panels were interviewed to identify variables that affect tuition fee calculation. This led to key points including total budgets, expenses of the universities, different fields' attractiveness, universities' attractiveness, and education quality. Tuition fees were calculated for different levels of education, such as post-diploma, Bachelor, Master, and Doctor of Philosophy (Ph.D.) degrees, medical specialty, and fellowship. After tuition fee calculation, the model was tested during 2013-2015. A questionnaire including 20 questions was then prepared, and all universities' financial and educational managers were asked to respond to the questions regarding the model's reliability and effectiveness. Results: According to the results, fields' attractiveness, universities' attractiveness, zone distinction and education quality were selected as effective variables for tuition fee calculation. In this model, tuition fees per student were calculated for the year 2013, and therefore the inflation rate of the same year was used. Testing of the model showed a 92% satisfaction rate. This model is used by medical science universities in Iran. Conclusion: Education quality, zone coefficient, fields' attractiveness, universities' attractiveness, inflation rate, and the portion of each level of education were the most important variables affecting tuition fee calculation. Keywords: tuition fees, field's attractiveness, universities' attractiveness, zone distinction, education quality

  11. Available transmission capacity assessment

    Directory of Open Access Journals (Sweden)

    Škokljev Ivan

    2012-01-01

    Effective power system operation requires the analysis of vast amounts of information. Power market activities expose power transmission networks to high-level power transactions that threaten normal, secure operation of the power system. When there are service requests for a specific sink/source pair in a transmission system, the transmission system operator (TSO) must allocate the available transfer capacity (ATC). It is common that ATC has a single numerical value. Additionally, the ATC must be calculated for the base case configuration of the system, while generation dispatch and topology remain unchanged during the calculation. Posting ATC on the internet should benefit prospective users by aiding them in formulating their requests. However, a single numerical value of ATC offers little prospect for analysis, planning, what-if combinations, etc. A symbolic approach to the power flow problem (DC power flow) and ATC offers a numerical computation at the very end, whilst the calculation beforehand is performed by using symbols for the general topology of the electrical network. Qualitative analysis of the ATC using only qualitative values, such as increase, decrease or no change, offers some new insights into ATC evaluation, multiple transactions evaluation, the value of counter-flows and their impact, etc. Symbolic analysis in this paper is performed after the execution of the linear, symbolic DC power flow. As control variables, the mathematical model comprises linear security constraints, ATC, PTDFs and transactions. The aim is to perform an ATC sensitivity study on a five-node/seven-line transmission network, used for zonal market activities tests. A relatively complicated environment with twenty possible bilateral transactions is observed.
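
    The numerical core of a PTDF-based ATC evaluation can be sketched compactly (a minimal illustration under the linear DC power flow assumptions; the three-line system and its numbers are hypothetical): for one source/sink pair, ATC is the largest additional transfer that drives no line past its limit.

```python
import numpy as np

def atc(ptdf, base_flow, limit):
    """ATC for one source/sink pair: the maximum extra transfer T such
    that |base_flow + ptdf*T| <= limit on every line, where ptdf is the
    sensitivity of each line flow to the transaction."""
    caps = []
    for s, f, lim in zip(ptdf, base_flow, limit):
        if s > 0:
            caps.append((lim - f) / s)     # binding upper limit
        elif s < 0:
            caps.append((-lim - f) / s)    # binding lower (counter-flow) limit
    return min(caps) if caps else np.inf

# Hypothetical 3-line system (flows and limits in MW)
print(atc(ptdf=[0.6, -0.3, 0.1],
          base_flow=[40.0, -10.0, 5.0],
          limit=[100.0, 80.0, 50.0]))      # -> 100 MW, set by line 1
```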

  12. The temporal variability of species densities

    International Nuclear Information System (INIS)

    Redfearn, A.; Pimm, S.L.

    1993-01-01

    Ecologists use the term 'stability' to mean a number of different things (Pimm 1984a). One use is to equate stability with low variability in population density over time (henceforth, temporal variability). Temporal variability varies greatly from species to species, so what affects it? There are at least three sets of factors: the variability of extrinsic abiotic factors, food web structure, and the intrinsic features of the species themselves. We can measure temporal variability using at least three statistics: the coefficient of variation of density (CV); the standard deviation of the logarithms of density (SDL); and the variance in the differences between logarithms of density for pairs of consecutive years (called annual variability, hence AV, by Wolda 1978). There are advantages and disadvantages to each measure (Williamson 1984), though in our experience the measures are strongly correlated across sets of taxonomically related species. The increasing availability of long-term data sets allows one to calculate these statistics for many species and so to begin to understand the various causes of species differences in temporal variability.
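
    All three statistics are defined precisely enough in the abstract to compute directly. A minimal sketch (the sample yearly densities are made up):

```python
import numpy as np

def variability_stats(density):
    """CV, SDL, and Wolda's AV for a yearly density time series."""
    d = np.asarray(density, float)
    logs = np.log10(d)
    cv = d.std(ddof=1) / d.mean()          # coefficient of variation of density
    sdl = logs.std(ddof=1)                 # SD of the log densities
    av = np.var(np.diff(logs), ddof=1)     # variance of consecutive-year log differences
    return cv, sdl, av

print(variability_stats([120, 90, 150, 60, 200, 110, 80]))
```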

  13. Clinical Implications of Glucose Variability: Chronic Complications of Diabetes

    Directory of Open Access Journals (Sweden)

    Hye Seung Jung

    2015-06-01

    Glucose variability has been identified as a potential risk factor for diabetic complications; oxidative stress is widely regarded as the mechanism by which glycemic variability induces diabetic complications. However, there remains no generally accepted gold standard for assessing glucose variability. Representative indices for measuring intraday variability include calculation of the standard deviation along with the mean amplitude of glycemic excursions (MAGE). MAGE is used to measure major intraday excursions and is easily measured using continuous glucose monitoring systems. Despite a lack of randomized controlled trials, recent clinical data suggest that long-term glycemic variability, as determined by variability in hemoglobin A1c, may contribute to the development of microvascular complications. Intraday glycemic variability is also suggested to accelerate coronary artery disease in high-risk patients.

  15. Optimal Height Calculation and Modelling of Noise Barrier

    Directory of Open Access Journals (Sweden)

    Raimondas Grubliauskas

    2011-04-01

    Transport is one of the main sources of noise, having a particularly strong negative impact on the environment. In the city, one of the best methods to reduce the spread of noise in residential areas is a noise barrier. The article presents the adaptation of a noise reduction barrier, with the noise distribution both calculated by empirical formulas and modelled. The simulation of noise dispersion was performed with the CadnaA program, which allows modelling the noise levels of various developments under changing conditions. Calculation and simulation results were obtained by assessing the level of noise reduction using the same variables. The investigation results are presented as noise distribution isolines. To compare different barrier designs, results were calculated for barrier heights of 1, 4 and 15 meters. At the maximum overlap between the calculated and simulated data, the level of noise reduction reached about 10%. Article in Lithuanian
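
    The abstract does not name its empirical formulas, so as a representative example only (not necessarily the authors' choice), the widely used Kurze-Anderson approximation estimates the insertion loss of a thin barrier from the Fresnel number:

```python
import math

def barrier_insertion_loss(delta_m, frequency_hz, c=343.0):
    """Kurze-Anderson approximation for diffraction over a thin barrier:
    IL = 5 + 20*log10(sqrt(2*pi*N) / tanh(sqrt(2*pi*N))) dB,
    with Fresnel number N = 2*delta/lambda, where delta is the extra path
    length over the barrier top versus the direct path (valid here for N > 0)."""
    N = 2.0 * delta_m * frequency_hz / c
    if N <= 0:
        return 0.0
    x = math.sqrt(2.0 * math.pi * N)
    return 5.0 + 20.0 * math.log10(x / math.tanh(x))

print(barrier_insertion_loss(delta_m=0.5, frequency_hz=500.0))  # ~15 dB
```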

  16. Statistical identification of effective input variables

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications
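
    The first-stage ranking idea can be sketched simply (an illustration of correlation screening in general, not the SCREEN code; the input names and the toy response are made up): run the code on sampled input combinations and rank the inputs by the strength of their correlation with the output.

```python
import numpy as np

def rank_inputs_by_correlation(X, y, names):
    """Rank input variables by |Pearson r| with the output, a first-stage
    screen akin to the stagewise correlation step described above."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return sorted(zip(names, r), key=lambda t: -t[1])

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, (200, 3))    # sampled input-value combinations
# Toy stand-in for the code output: strong effect of input 0, weak of input 2
y = 4.0 * X[:, 0] + 0.5 * X[:, 2]**2 + rng.normal(0, 0.1, 200)
print(rank_inputs_by_correlation(X, y, ["input_a", "input_b", "input_c"]))
```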

  17. Dose-Response Calculator for ArcGIS

    Science.gov (United States)

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.

  18. Magnetic field calculation of variably polarizing undulator (APPLE-type) for SX beamline in the SPring-8

    International Nuclear Information System (INIS)

    Kobayashi, Hideki; Sasaki, Shigemi; Shimada, Taihei; Takao, Masaru; Yokoya, Akinori; Miyahara, Yoshikazu

    1996-03-01

    This paper describes the design of a variably polarizing undulator (APPLE-type) to be installed in the soft X-ray beamline of the SPring-8 facility. The magnetic field distribution and the radiation spectrum expected from this undulator were calculated. The magnetic field strength is varied by changing the gap distance between the upper and lower jaws, which changes the photon energy within the soft X-ray range. By moving the relative position of pairs of magnet rows (phase shift), the polarization of the radiation can be made circular, elliptical, or linear in the horizontal and vertical directions. We expect that right- and left-handed circular polarizations can be obtained alternately at a rate of 1 Hz by high-speed phase shifting. The repulsive and attractive magnetic forces acting on the magnet rows, which interfere with high-speed phase shifting, were calculated. The magnetic force changes with gap distance and phase-shift position, and the force acting on a row in the direction of the phase shift reaches up to 500 kgf. Construction of this undulator started in 1996, and it will be inserted in the storage ring in 1997. (author)
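
    For orientation, the standard planar-undulator relations connect the quantities discussed above (a generic sketch, not the authors' APPLE-type field calculation; the Halbach-type gap-law coefficients and all input values are illustrative):

```python
import math

def undulator_first_harmonic(gap_mm, period_mm, e_gev,
                             a=3.33, b=5.47, c_coef=1.8):
    """Illustrative planar-undulator estimates: peak field from a fitted
    gap law B = a*exp(-b*(g/lu) + c*(g/lu)**2) (Halbach-type fit),
    deflection parameter K = 0.0934*lu[mm]*B[T], and on-axis
    first-harmonic wavelength lambda1 = lu/(2*gamma**2) * (1 + K**2/2)."""
    ratio = gap_mm / period_mm
    B = a * math.exp(-b * ratio + c_coef * ratio**2)   # tesla, valid ~0.1 < g/lu < 1
    K = 0.0934 * period_mm * B
    gamma = e_gev * 1000.0 / 0.511                     # electron Lorentz factor
    lam1 = (period_mm * 1e-3) / (2.0 * gamma**2) * (1.0 + K**2 / 2.0)  # metres
    return B, K, lam1

# Opening the gap lowers B and K, shifting the photon energy upward
print(undulator_first_harmonic(gap_mm=30.0, period_mm=120.0, e_gev=8.0))
```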

  19. A calculation method of available soil water content: application to viticultural terroirs mapping of the Loire valley

    Directory of Open Access Journals (Sweden)

    Etienne Goulet

    2004-12-01

    Vine water supply is one of the most important elements in the determination of grape composition and wine quality. Water supply conditions are related to available soil water content, so the latter has to be determined when vineyard terroir mapping is undertaken. The available soil water content depends on soil factors such as water content at field capacity, water content at the permanent wilting point, apparent density and rooting depth. The aim of this study is to seek the relationship between these factors and a simple soil characteristic, such as texture, that is easily measurable in routine cartography. The study area is located in the Loire valley, in two different geological regions. First results indicate that it is possible to determine available soil water content from clay percentage, and hence from soil texture. These results also show that the available soil water content algorithms differ with geological properties. This calculation can be applied at each auger boring, and the results can be spatialised within a Geographical Information System, allowing the production of available water content maps.
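
    The calculation chain described above reduces to a few lines once pedotransfer coefficients are known. A minimal sketch (the linear clay-percentage coefficients below are hypothetical placeholders, not the paper's fitted values):

```python
def available_water_content(clay_pct, rooting_depth_m,
                            fc_intercept=0.20, fc_slope=0.004,
                            pwp_intercept=0.05, pwp_slope=0.005):
    """Available soil water (mm) = (theta_fc - theta_pwp) * rooting depth.
    The linear clay-percentage pedotransfer coefficients here are
    hypothetical placeholders, not the paper's fitted values."""
    theta_fc = fc_intercept + fc_slope * clay_pct     # m3/m3 at field capacity
    theta_pwp = pwp_intercept + pwp_slope * clay_pct  # m3/m3 at wilting point
    return max(theta_fc - theta_pwp, 0.0) * rooting_depth_m * 1000.0

print(available_water_content(clay_pct=25.0, rooting_depth_m=1.2))  # -> 150 mm
```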

  20. Variability of carotid artery measurements on 3-Tesla MRI and its impact on sample size calculation for clinical research.

    Science.gov (United States)

    Syed, Mushabbar A; Oshinski, John N; Kitchen, Charles; Ali, Arshad; Charnigo, Richard J; Quyyumi, Arshed A

    2009-08-01

    Carotid MRI measurements are increasingly being employed in research studies for atherosclerosis imaging. The majority of carotid imaging studies use 1.5 T MRI. Our objective was to investigate intra-observer and inter-observer variability in carotid measurements using high resolution 3 T MRI. We performed 3 T carotid MRI on 10 patients (age 56 ± 8 years, 7 male) with atherosclerosis risk factors and ultrasound intima-media thickness ≥0.6 mm. A total of 20 transverse images of both right and left carotid arteries were acquired using a T2-weighted black-blood sequence. The lumen and outer wall of the common carotid and internal carotid arteries were manually traced; vessel wall area, vessel wall volume, and average wall thickness measurements were then assessed for intra-observer and inter-observer variability. Pearson and intraclass correlations were used in these assessments, along with Bland-Altman plots. For inter-observer variability, Pearson correlations ranged from 0.936 to 0.996 and intraclass correlations from 0.927 to 0.991. For intra-observer variability, Pearson correlations ranged from 0.934 to 0.954 and intraclass correlations from 0.831 to 0.948. Calculations showed that inter-observer variability and other sources of error would inflate sample size requirements for a clinical trial by no more than 7.9%, indicating that 3 T MRI is nearly optimal in this respect. In patients with subclinical atherosclerosis, 3 T carotid MRI measurements are highly reproducible and have important implications for clinical trial design.

  1. Version of ORIGEN2 with automated sensitivity-calculation capability

    International Nuclear Information System (INIS)

    Worley, B.A.; Wright, R.Q.; Pin, F.G.

    1986-01-01

    ORIGEN2 is a widely used point-depletion and radioactive-decay computer code for use in simulating nuclear fuel cycles and/or spent fuel characteristics. The code calculates the amount of each nuclide being considered in the problem at a specified number of times, and upon request, a database of conversion factors relating mass compositions to specific material characteristics is used to calculate and print the total nuclide-dependent radioactivity, thermal power, and toxicity, as well as absorption, fission, neutron emission, and photon emission rates. The purpose of this paper is to report on the availability of a version of ORIGEN2 that will calculate, on option, the derivatives of all responses with respect to any variable used in the code.

  2. Calculation of wastage by small water leaks in sodium heated steam generators

    International Nuclear Information System (INIS)

    Tregonning, K.

    1976-01-01

    On the basis of mechanistic arguments it is suggested that the temperature of the wasting surface would provide a single physically meaningful parameter with which to correlate wastage data. A lumped parameter model is developed which predicts reaction temperature as a function of the major variables in the small water leak situation (leak rate, tube spacing, sodium temperature). The calculated temperatures explain much of the observed behaviour of wastage rate with these variables and compare well with the limited temperature data available. Wastage rates are correlated with predicted temperature on a total activation energy basis. The results are encouraging and a first conservative method for the calculation of wastage by small water leaks in sodium-heated steam generators is produced.

  3. VOLUMETRIC METHOD FOR EVALUATION OF BEACHES VARIABILITY BASED ON GIS-TOOLS

    Directory of Open Access Journals (Sweden)

    V. V. Dolotov

    2015-01-01

    In the frame of cadastral beach evaluation, a volumetric method for a natural variability index is proposed. It is based on spatial calculations with the Cut-Fill method and on volume accounting of both the common beach contour and the specific areas at each survey time.
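
    The Cut-Fill computation itself is a simple grid difference. A minimal sketch (the two tiny survey grids are made up):

```python
import numpy as np

def cut_fill_volume(dem_before, dem_after, cell_area_m2):
    """Cut-Fill between two gridded beach surveys: positive dz is
    accretion (fill), negative dz is erosion (cut); volumes in m3."""
    dz = np.asarray(dem_after, float) - np.asarray(dem_before, float)
    fill = dz[dz > 0].sum() * cell_area_m2
    cut = -dz[dz < 0].sum() * cell_area_m2
    return cut, fill, fill - cut

before = np.array([[1.0, 1.2], [1.1, 1.3]])
after = np.array([[0.9, 1.4], [1.1, 1.5]])
print(cut_fill_volume(before, after, cell_area_m2=25.0))  # (2.5, 10.0, 7.5)
```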

  4. Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
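
    Of the methods listed, the variable degree-day calculation is the simplest to illustrate. A minimal sketch (the base temperature and the sample daily means are arbitrary):

```python
import numpy as np

def variable_base_degree_days(daily_mean_temps_c, base_c=18.0):
    """Heating degree-days for an arbitrary base temperature, the core
    quantity of the variable degree-day method listed above."""
    t = np.asarray(daily_mean_temps_c, float)
    return np.maximum(base_c - t, 0.0).sum()

month = [12.0, 15.5, 9.0, 20.0, 17.5, 14.0]
print(variable_base_degree_days(month, base_c=18.0))   # -> 22.0 degC-days
```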

  5. Metronome cueing of walking reduces gait variability after a cerebellar stroke

    Directory of Open Access Journals (Sweden)

    Rachel Lindsey Wright

    2016-06-01

    Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory-cueing reduces excessive variability in conditions such as Parkinson’s disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook 3 standard gait trials and 3 gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. Standard deviations of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee and ankle angles were reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke, and whether this leads to a decreased risk of falling.

  6. AC-DC integrated load flow calculation for variable speed offshore wind farms

    DEFF Research Database (Denmark)

    Zhao, Menghua; Chen, Zhe; Blaabjerg, Frede

    2005-01-01

    This paper proposes a sequential AC-DC integrated load flow algorithm for variable speed offshore wind farms. In this algorithm, the variable frequency and the control strategy of variable speed wind turbine systems are considered. In addition, the losses of wind turbine systems and the losses of converters are also integrated into the load flow algorithm. As a general algorithm, it can be applied to different types of wind farm configurations, and the load flow is related to the wind speed.

  7. National database for calculating fuel available to wildfires

    Science.gov (United States)

    Donald McKenzie; Nancy H.F. French; Roger D. Ottmar

    2012-01-01

    Wildfires are increasingly emerging as an important component of Earth system models, particularly those that involve emissions from fires and their effects on climate. Currently, there are few resources available for estimating emissions from wildfires in real time, at subcontinental scales, in a spatially consistent manner. Developing subcontinental-scale databases...

  8. Hypoxia tolerance in reptiles, amphibians, and fishes: life with variable oxygen availability.

    Science.gov (United States)

    Bickler, Philip E; Buck, Leslie T

    2007-01-01

    The ability of fishes, amphibians, and reptiles to survive extremes of oxygen availability derives from a core triad of adaptations: profound metabolic suppression, tolerance of ionic and pH disturbances, and mechanisms for avoiding free-radical injury during reoxygenation. For long-term anoxic survival, enhanced storage of glycogen in critical tissues is also necessary. The diversity of body morphologies and habitats and the utilization of dormancy have resulted in a broad array of adaptations to hypoxia in lower vertebrates. For example, the most anoxia-tolerant vertebrates, painted turtles and crucian carp, meet the challenge of variable oxygen in fundamentally different ways: Turtles undergo near-suspended animation, whereas carp remain active and responsive in the absence of oxygen. Although the mechanisms of survival in both of these cases include large stores of glycogen and drastically decreased metabolism, other mechanisms, such as regulation of ion channels in excitable membranes, are apparently divergent. Common themes in the regulatory adjustments to hypoxia involve control of metabolism and ion channel conductance by protein phosphorylation. Tolerance of decreased energy charge and accumulating anaerobic end products as well as enhanced antioxidant defenses and regenerative capacities are also key to hypoxia survival in lower vertebrates.

  9. Hydraulic modelling of the spatial and temporal variability in Atlantic salmon parr habitat availability in an upland stream.

    Science.gov (United States)

    Fabris, Luca; Malcolm, Iain Archibald; Buddendorf, Willem Bastiaan; Millidine, Karen Jane; Tetzlaff, Doerthe; Soulsby, Chris

    2017-12-01

    We show how spatial variability in channel bed morphology affects the hydraulic characteristics of river reaches available to Atlantic salmon parr (Salmo salar) under different flow conditions in an upland stream. The study stream, the Girnock Burn, is a long-term monitoring site in the Scottish Highlands. Six sites characterised by different bed geometry and morphology were investigated. Detailed site bathymetries were collected and combined with discharge time series in a 2D hydraulic model to obtain spatially distributed depth-averaged velocities under different flow conditions. Available habitat (AH) was estimated for each site. Stream discharge was used according to the critical displacement velocity (CDV) approach. CDV defines a velocity threshold above which salmon parr are not able to hold station and effective feeding opportunities or habitat utilization are reduced, depending on fish size and water temperature. An average value of the relative available habitat (RAH) for the most significant period for parr growth - April to May - was used for inter-site comparison and to analyse temporal variations over 40 years. Results show that some sites are more able than others to maintain zones where salmon parr can forage unimpeded by high flow velocities under both wet and dry conditions. With lower flow velocities, dry years offer higher values of RAH than wet years. Even though RAH can change considerably across the sites as stream flow changes, the directions of change are consistent. RAH shows a strong relationship with discharge per unit width, whilst channel slope and bed roughness either do not have a relevant impact or compensate for each other. The results show that significant parr habitat was available at all sites across all flows during this critical growth period, suggesting that hydrological variability is not a factor limiting growth in the Girnock.

  10. Taylor Series Trajectory Calculations Including Oblateness Effects and Variable Atmospheric Density

    Science.gov (United States)

    Scott, James R.

    2011-01-01

    Taylor series integration is implemented in NASA Glenn's Spacecraft N-body Analysis Program and compared head-to-head with the code's existing 8th-order Runge-Kutta Fehlberg time integration scheme. This paper focuses on trajectory problems that include oblateness and/or variable atmospheric density. Taylor series is shown to be significantly faster and more accurate for oblateness problems up through a 4x4 field, with speedups ranging from a factor of 2 to 13. For problems with variable atmospheric density, speedups average 24 for atmospheric density alone, and average 1.6 to 8.2 when density and oblateness are combined.

  11. STEPWISE SELECTION OF VARIABLES IN DEA USING CONTRIBUTION LOADS

    Directory of Open Access Journals (Sweden)

    Fernando Fernandez-Palacin

    In this paper, we propose a new methodology for variable selection in Data Envelopment Analysis (DEA). The methodology is based on an internal measure which evaluates the contribution of each variable in the calculation of the efficiency scores of DMUs. In order to apply the proposed method, an algorithm, known as “ADEA”, was developed and implemented in R. Step by step, the algorithm maximizes the load of the variable (input or output) which contributes least to the calculation of the efficiency scores, redistributing the weights of the variables without altering the efficiency scores of the DMUs. Once the weights have been redistributed, if the lowest contribution does not reach a previously given critical value, the variable with minimum contribution will be removed from the model and, as a result, the DEA will be solved again. The algorithm stops when all variables reach a given contribution load to the DEA or when no more variables can be removed. In this way, and contrary to what is usual, the algorithm provides a clear stopping rule. In both cases, the efficiencies obtained from the DEA will be considered suitable and rightly interpreted in terms of the remaining variables, indicating their loads; moreover, the algorithm will provide a sequence of alternative nested models - potential solutions - that could be evaluated according to an external criterion. To illustrate the procedure, we have applied the methodology proposed to obtain a research ranking of Spanish public universities. In this case, at each step of the algorithm, the critical value is obtained based on a simulation study.

  12. Variable Lifting Index (VLI): A New Method for Evaluating Variable Lifting Tasks.

    Science.gov (United States)

    Waters, Thomas; Occhipinti, Enrico; Colombini, Daniela; Alvarez-Casado, Enrique; Fox, Robert

    2016-08-01

    We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI). There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks. In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves the sampling of lifting tasks performed by a worker over a shift and the calculation of the Frequency Independent Lift Index (FILI) for each sampled lift and the aggregation of the FILI values into six categories. The Composite Lift Index (CLI) equation is used with lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed systematic collection of lifting task data from production and/or organizational sources. The data are organized into simplified task parameter categories and further aggregated into six FILI categories, which also use the CLI equation to calculate the VLI. The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed. The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated. The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift. © 2015, Human Factors and Ergonomics Society.

  13. Understanding Existing Salmonid Habitat Availability and Connectivity to Improve River Management

    Science.gov (United States)

    Duffin, J.; Yager, E.; Tonina, D.; Benjankar, R. M.

    2017-12-01

    In the Pacific Northwest, river restoration is common for salmon conservation. Managers need methods to help target restoration to problem areas in rivers to create habitat that meets a species' needs. Hydraulic models and habitat suitability curves provide basic information on habitat availability and overall quality, but these analyses need to be expanded to address habitat quality based on the accessibility of habitats required for multiple life stages. Scientists are starting to use connectivity measurements to understand the longitudinal proximity of habitat patches, which can be used to address the habitat variability of a reach. By evaluating the availability and quality of habitat and calculating the connectivity between complementary habitats, such as spawning and rearing habitats, we aim to identify areas that should be targeted for restoration. To meet these goals, we assessed Chinook salmon habitat on the Lemhi River in Idaho. The depth and velocity outputs from a 2D hydraulic model are used in conjunction with locally created habitat suitability curves to evaluate the availability and quality of habitat for multiple Chinook salmon life stages. To assess the variability of the habitat, connectivity between habitat patches necessary for different life stages is calculated with a proximity index. A spatial representation of existing habitat quality and connectivity between complementary habitats can be linked to river morphology by the evaluation of local geomorphic characteristics, including sinuosity and channel units. The understanding of the current habitat availability for multiple life stage needs, the connectivity between these habitat patches, and their relationship with channel morphology can help managers better identify restoration needs and direct their limited resources.

  14. Calculation of the uptake of CO into the human blood

    Energy Technology Data Exchange (ETDEWEB)

    Zankl, J.G.

    1981-01-01

    CO is a toxic substance mainly because, due to its high hemoglobin affinity, it inhibits oxygen transport in the human blood. This process must be quantified in order to establish limiting CO concentrations for garages, road tunnels, and workplaces, and for purposes of accident analysis. The medical information given is based on the literature and is intended only as an introduction for non-experts. The emphasis of the work was on the development of a computer program with which the influence of variable air CO concentrations and of altitude above sea level can also be calculated. The program is available at the author's institute, along with detailed calculations and results not included in this publication.

  15. Transfer Area Mechanical Handling Calculation

    International Nuclear Information System (INIS)

    Dianda, B.

    2004-01-01

    This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: "Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101" (Arthur, W.J., III 2004). This correspondence was supplemented by further correspondence received on the 19th of February 2004 entitled: "Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024" (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only, to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use of these components or their

  16. Online plasma calculator

    Science.gov (United States)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins. FastCGI also speeds up calculations relative to PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift toward SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. This system is intended to be used by undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
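
    The kind of quantity APOLLO returns is standard plasma physics. A minimal sketch of two of them (our own illustration, not APOLLO's source code), with temperature in electron-volts as in the record:

```python
import math

E0 = 8.8541878128e-12   # vacuum permittivity, F/m
QE = 1.602176634e-19    # elementary charge, C
ME = 9.1093837015e-31   # electron mass, kg

def plasma_frequency(n_e_m3):
    """Electron plasma (angular) frequency, rad/s: sqrt(n*e^2/(eps0*me))."""
    return math.sqrt(n_e_m3 * QE**2 / (E0 * ME))

def debye_length(n_e_m3, t_e_ev):
    """Electron Debye length, m; with T in eV this reduces to
    sqrt(eps0 * T_eV / (n * e))."""
    return math.sqrt(E0 * t_e_ev / (n_e_m3 * QE))

n, Te = 1e19, 10.0
print(plasma_frequency(n), debye_length(n, Te))  # ~1.8e11 rad/s, ~7.4e-6 m
```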

  17. Calculation of Collective Variable-based PMF by Combining WHAM with Umbrella Sampling

    International Nuclear Information System (INIS)

    Xu Wei-Xin; Li Yang; Zhang, John Z. H.

    2012-01-01

    Potential of mean force (PMF) with respect to localized reaction coordinates (RCs) such as distance is often applied to evaluate the free energy profile along the reaction pathway for complex molecular systems. However, calculation of PMF as a function of global RCs is still a challenging and important problem in computational biology. We examine the combined use of the weighted histogram analysis method and the umbrella sampling method for the calculation of PMF as a function of a global RC from the coarse-grained Langevin dynamics simulations for a model protein. The method yields the folding free energy profile projected onto a global RC, which is in accord with benchmark results. With this method rare global events would be sufficiently sampled because the biased potential can be used for restricting the global conformation to specific regions during free energy calculations. The strategy presented can also be utilized in calculating the global intra- and intermolecular PMF at more detailed levels. (cross-disciplinary physics and related areas of science and technology)

  18. Orthogonal Regression of Three Linked Variables

    Directory of Open Access Journals (Sweden)

    Phelizon J. -F.

    2006-11-01

    This article proposes an algorithm for determining the parameters of the equation for the orthogonal regression of three variables linked by a linear relation. This algorithm is remarkably simple in that it does not require the eigenvalues of the covariance matrix to be calculated. In addition, the equation obtained (for a straight line in three-dimensional space) is shown to characterize a straight line in a triangular diagram as well, thus making it immediately possible to interpret the results. The theoretical explanation continues with two examples that were actually tried out on a computer.
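
    For comparison with the article's approach, the standard route to an orthogonal-regression line in three dimensions does use an eigen decomposition (equivalently an SVD): the line passes through the centroid along the principal direction of the centred data. A minimal sketch (the sample points are made up):

```python
import numpy as np

def orthogonal_regression_line(points):
    """Total-least-squares line through 3-D points: it passes through the
    centroid along the principal singular direction of the centred data,
    minimizing the sum of squared orthogonal distances. (This is the
    standard eigen/SVD route; the article's algorithm avoids it.)"""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    direction = vt[0]
    return centroid, direction

pts = np.array([[0, 0, 0], [1, 1.1, 0.9], [2, 1.9, 2.1], [3, 3.05, 2.95]])
print(orthogonal_regression_line(pts))
```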

  19. Thermohydraulic calculations of PWR primary circuits

    International Nuclear Information System (INIS)

    Botelho, D.A.

    1984-01-01

    Some mathematical and numerical models from the Retran computer codes, which aim to simulate reactor transients, are presented. The equations used for calculating one-dimensional flow are integrated using mathematical methods from the Flash code, with a steam code used to correlate the thermodynamic state variables. The algorithm obtained was used for calculating a PWR reactor. (E.G.) [pt]

  20. Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, A.R.; Moreno, S.; Deringer, J.; Watson, C.R.

    1984-08-01

    The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.

  1. The Calculator of Anti-Alzheimer's Diet. Macronutrients.

    Science.gov (United States)

    Studnicki, Marcin; Woźniak, Grażyna; Stępkowski, Dariusz

    2016-01-01

    The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time, nutritional sciences have failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea of how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer's Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925-2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (Roriginal). A question thus arises: what caused the oscillatory behavior of Roriginal? What historical events in the lives of the 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929-2005, we can mathematically explain the variability of Roriginal for each quarter of a human life. On the basis of multiple regression of Roriginal with respect to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as "the calculator" throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age, and late age. The results obtained with the use of "the calculator" are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing Rpredicted, i.e., minimizing the strength of the correlation between PCPI and future AADR.

  2. Method of non-interacting thermodynamic calculation of binary phase diagrams containing p disordered phases with variable composition and q phases with constant composition at (p, q) ≤ 10

    International Nuclear Information System (INIS)

    Udovskij, A.L.; Karpushkin, V.N.; Nikishina, E.A.

    1991-01-01

    A method of non-interacting thermodynamic calculation of the phase diagrams of binary systems containing p disordered phases with variable composition and q phases with constant composition is developed for the case (p, q) ≤ 10. The method determines all possible solutions of the phase equilibrium equations. Application examples of the computer-implemented method for T-x thermodynamic calculation on a PC are given for the Cr-W, Ni-W, Ni-Al and Ni-Re binary systems

  3. Regional and site-specific absolute humidity data for use in tritium dose calculations

    International Nuclear Information System (INIS)

    Etnier, E.L.

    1980-01-01

    Due to the potential variability in average absolute humidity over the continental U.S., and the dependence of atmospheric ³H specific activity on absolute humidity, the availability of regional absolute humidity data is of value in estimating the radiological significance of ³H releases. Most climatological data are in the form of relative humidity, which must be converted to absolute humidity for dose calculations. Absolute humidity was calculated for 218 points across the U.S., using the 1977 annual summary of U.S. Climatological Data, and is given in a table. Mean regional values are shown on a map. (author)
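    The relative-to-absolute humidity conversion mentioned above is straightforward; a sketch using the Magnus approximation for saturation vapor pressure and the ideal gas law follows. The Magnus coefficients are one common choice and are not necessarily those used in the report.

    ```python
    import numpy as np

    def absolute_humidity(temp_c, rh_percent):
        """Convert relative humidity to absolute humidity (g water per m3 of air)."""
        # Saturation vapor pressure in hPa (Magnus formula, one common fit)
        es = 6.112 * np.exp(17.62 * temp_c / (243.12 + temp_c))
        e = rh_percent / 100.0 * es              # actual vapor pressure, hPa
        # Ideal-gas conversion: rho_v = e / (R_v * T), with R_v = 461.5 J/(kg K)
        return 1000.0 * (e * 100.0) / (461.5 * (temp_c + 273.15))

    # Example: 25 degC at 60 % RH gives roughly 13.8 g/m3
    print(absolute_humidity(25.0, 60.0))
    ```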

  4. Improved core protection calculator system algorithm

    International Nuclear Information System (INIS)

    Yoon, Tae Young; Park, Young Ho; In, Wang Kee; Bae, Jong Sik; Baeg, Seung Yeob

    2009-01-01

    The Core Protection Calculator System (CPCS) is a digitized core protection system which provides core protection functions based on two reactor core operation parameters, Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD). It generates a reactor trip signal when the core condition exceeds the DNBR or LPD design limit. It consists of four independent channels which adopt a two-out-of-four trip logic. This paper describes the CPCS algorithm improvement for the newly designed core protection calculator system, RCOPS (Reactor COre Protection System). New features include an improved DNBR algorithm for thermal margin, the addition of pre-trip alarm generation for the auxiliary trip function, VOPT (Variable Over Power Trip) prevention during RPCS (Reactor Power Cutback System) actuation, and an improved CEA (Control Element Assembly) signal checking algorithm. To verify the improved CPCS algorithm, CPCS algorithm verification tests, 'Module Test' and 'Unit Test', would be performed on the RCOPS single-channel facility. It is expected that the improved CPCS algorithm will increase the DNBR margin and enhance plant availability by reducing unnecessary reactor trips

  5. OCOPTR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation. DRVOCR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation

    International Nuclear Information System (INIS)

    Nazareth, J. L.

    1979-01-01

    1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: Rⁿ → R with n-dimensional domains. OCOPTR requires that the user only provide function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n²/2) + O(n) storage locations where n is the problem dimension
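    For orientation, here is a generic variable-metric (quasi-Newton) minimizer with a finite-difference gradient, in the spirit of OCOPTR's derivative-free mode. It uses the widely known BFGS inverse-Hessian update and a crude backtracking line search rather than Davidon's optimally conditioned update and factored storage, so it is a sketch of the method family, not of these codes.

    ```python
    import numpy as np

    def fd_grad(f, x, h=1e-6):
        """Forward-difference gradient estimate (derivative-free use)."""
        g, fx = np.empty_like(x), f(x)
        for i in range(x.size):
            xp = x.copy(); xp[i] += h
            g[i] = (f(xp) - fx) / h
        return g

    def variable_metric_minimize(f, x0, n_iter=200, tol=1e-8):
        """Quasi-Newton minimization with the BFGS inverse-Hessian update."""
        x = np.asarray(x0, dtype=float)
        H = np.eye(x.size)                     # inverse-Hessian approximation
        g = fd_grad(f, x)
        for _ in range(n_iter):
            p = -H @ g                         # search direction
            t = 1.0                            # crude backtracking line search
            while f(x + t * p) > f(x) and t > 1e-12:
                t *= 0.5
            x_new = x + t * p
            g_new = fd_grad(f, x_new)
            s, y = x_new - x, g_new - g
            sy = s @ y
            if abs(sy) > 1e-12:                # BFGS update of H
                rho, I = 1.0 / sy, np.eye(x.size)
                H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                    + rho * np.outer(s, s)
            if np.linalg.norm(g_new) < tol:
                return x_new
            x, g = x_new, g_new
        return x

    # Example: minimize the Rosenbrock function
    rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
    print(variable_metric_minimize(rosen, [-1.2, 1.0]))
    ```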

  6. (Super Variable Costing-Throughput Costing)

    OpenAIRE

    Çakıcı, Cemal

    2006-01-01

    (Super Variable Costing-Throughput Costing) The aim of this study is to explain the super-variable costing method, which is a new subject in cost and management accounting, and to show how it works in practice. Briefly, super-variable costing can be defined as a costing method which uses only direct material costs in calculating product costs and treats all costs except these (direct labor and overhead) as period costs or operating costs. By using the super-variable costing method, product costs ar...

  7. Dealing with variability in water availability: the case of the Verde Grande River basin, Brazil

    Directory of Open Access Journals (Sweden)

    B. Collischonn

    2014-09-01

    Full Text Available This paper presents a water resources management strategy developed by the Brazilian National Water Agency (ANA) to cope with conflicts between water users in the Verde Grande River basin, located at the southern border of the Brazilian semi-arid region. The basin is dominated by water-demanding fruit irrigation agriculture, which has grown significantly, and without adequate water use control, over the last 30 years. The current water demand for irrigation exceeds water availability (understood as the 95 % percentile of the flow duration curve) in a ratio of three to one, meaning that downstream water users experience more frequent water shortages than upstream ones. The management strategy implemented in 2008 has the objective of equalizing risk for all water users and consists of a set of rules designed to restrict water withdrawals according to the current river water level (indicative of water availability) and water demand. Under that rule, larger farmers have proportionally larger reductions in water use, preserving small subsistence irrigators. Moreover, dry season streamflow is forecast at strategic points by the end of every rainy season, providing an evaluation of shortage risk. Thus, water users are informed about the forecasts and corresponding restrictions well in advance, allowing for anticipated planning of irrigated areas and practices. In order to enforce the restriction rules, water meters were installed for all larger water users, and inefficient farmers were obligated to improve their irrigation systems' performance. Finally, increases in irrigated area are only allowed in the case of annual crops and during months of higher water availability (November to June). The strategy differs from conventional approaches based only on water use priority and has been successful in dealing with the natural variability of water availability, allowing more water to be used in wet years and managing risk in an isonomic manner during dry years.

  8. Understanding surface-water availability in the Central Valley as a means to projecting future groundwater storage with climate variability

    Science.gov (United States)

    Goodrich, J. P.; Cayan, D. R.

    2017-12-01

    California's Central Valley (CV) relies heavily on diverted surface water and groundwater pumping to supply irrigated agriculture. However, understanding the spatiotemporal character of water availability in the CV is difficult because of the number of individual farms and local, state, and federal agencies involved in using and managing water. Here we use the Central Valley Hydrologic Model (CVHM), developed by the USGS, to understand the relationships between climatic variability, surface water inputs, and resulting groundwater use over the historical period 1970-2013. We analyzed monthly surface water diversion data from >500 CV locations. Principal component analyses were applied to drivers constructed from meteorological data, surface reservoir storage, ET, land use cover, and upstream inflows, to feed multiple regressions and identify the factors most important in predicting surface water diversions. Two thirds of the diversion locations (about 80% of total diverted water) can be predicted to within 15%. Along with monthly inputs, representations of cumulative precipitation over the previous 3 to 36 months can explain an additional 10% of variance, depending on location, compared to results that excluded this information. Diversions in the southern CV are highly sensitive to inter-annual variability in precipitation (R2 = 0.8), whereby more surface water is used during wet years. Until recently, this was not the case in the northern and mid-CV, where diversions were relatively constant annually, suggesting relative insensitivity to drought. In contrast, this has important implications for drought response in southern regions (e.g., the Tulare Basin) where extended dry conditions can severely limit surface water supplies and lead to excess groundwater pumping, storage loss, and subsidence. In addition to fueling our understanding of spatiotemporal variability in diversions, our ability to predict these water balance components allows us to update CVHM predictions before

  9. The calculation of the field of the three-electrode transaxial lens

    Directory of Open Access Journals (Sweden)

    Duseinova A.G.

    2017-04-01

    Full Text Available The article offers a method for calculating transaxial fields based on partitioning the potential into two terms. The main term is a harmonic function of two variables and satisfies the given boundary conditions. The harmonic component of the potential is found analytically using methods of complex analysis. The second term is the solution of an inhomogeneous equation with zero Dirichlet boundary conditions and can be found numerically to the required accuracy.

  10. Pulsation properties of Mira long period variables

    International Nuclear Information System (INIS)

    Cahn, J.H.

    1980-01-01

    A matter of great interest to variable star students concerns the mode of pulsation of Mira long period variables. In this report we first give observational evidence for the pulsation constant Q. We then compare the observations with calculations. Next, we review two interesting groups of papers dealing with hydrodynamic properties of long period variables. In the first, a fully dynamic nonlinear calculation maps out the Mira instability domain. In the second, special attention is paid to shock propagation beyond the photosphere which in large measure accounts for the complex spectra from this region. (orig./WL)

  11. Particle water and pH in the eastern Mediterranean: source variability and implications for nutrient availability

    Directory of Open Access Journals (Sweden)

    A. Bougiatioti

    2016-04-01

    Full Text Available Particle water (liquid water content, LWC) and aerosol pH are important parameters of the aerosol phase, affecting heterogeneous chemistry and bioavailability of nutrients that profoundly impact cloud formation, atmospheric composition, and atmospheric fluxes of nutrients to ecosystems. Few measurements of in situ LWC and pH, however, exist in the published literature. Using concurrent measurements of aerosol chemical composition, cloud condensation nuclei activity, and tandem light scattering coefficients, the particle water mass concentrations associated with the aerosol inorganic (Winorg) and organic (Worg) components are determined for measurements conducted at the Finokalia atmospheric observation station in the eastern Mediterranean between June and November 2012. These data are interpreted using the ISORROPIA-II thermodynamic model to predict the pH of aerosols originating from the various sources that influence air quality in the region. On average, closure between predicted aerosol water and that determined by comparison of ambient with dry light scattering coefficients was achieved to within 8% (slope = 0.92, R² = 0.8, n = 5201 points). Based on the scattering measurements, a parameterization is also derived, capable of reproducing the hygroscopic growth factor f(RH) to within 15% of the measured values. The highest aerosol water concentrations are observed during nighttime, when relative humidity is highest and the collapse of the boundary layer increases the aerosol concentration. A significant diurnal variability is found for Worg, with morning and afternoon average mass concentrations being 10–15 times lower than nighttime concentrations, thus rendering Winorg the main form of particle water during daytime. The average value of total aerosol water was 2.19 ± 1.75 µg m⁻³, contributing on average up to 33% of the total submicron mass concentration. Average aerosol water associated with

  12. Determinants of cell-to-cell variability in protein kinase signaling.

    Directory of Open Access Journals (Sweden)

    Matthias Jeschke

    Full Text Available Cells reliably sense environmental changes despite internal and external fluctuations, but the mechanisms underlying robustness remain unclear. We analyzed how fluctuations in signaling protein concentrations give rise to cell-to-cell variability in protein kinase signaling using analytical theory and numerical simulations. We characterized the dose-response behavior of signaling cascades by calculating the stimulus level at which a pathway responds ('pathway sensitivity') and the maximal activation level upon strong stimulation. Minimal kinase cascades with gradual dose-response behavior show strong variability, because the pathway sensitivity and the maximal activation level cannot be simultaneously invariant. Negative feedback regulation resolves this trade-off and coordinately reduces fluctuations in the pathway sensitivity and maximal activation. Feedbacks acting at different levels in the cascade control different aspects of the dose-response curve, thereby synergistically reducing the variability. We also investigated more complex, ultrasensitive signaling cascades capable of switch-like decision making, and found that these can be inherently robust to protein concentration fluctuations. We describe how the cell-to-cell variability of ultrasensitive signaling systems can be actively regulated, e.g., by altering the expression of phosphatase(s) or by feedback/feedforward loops. Our calculations reveal that slow transcriptional negative feedback loops allow for variability suppression while maintaining switch-like decision making. Taken together, we describe design principles of signaling cascades that promote robustness. Our results may explain why certain signaling cascades like the yeast pheromone pathway show switch-like decision making with little cell-to-cell variability.

  13. Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.

    Science.gov (United States)

    Olsen, Thomas

    2007-02-01

    This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound, and the accuracy of the results of each technique was compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other offset errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method, a statistically significant improvement over ultrasound. In conclusion, the accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest-generation ACD prediction algorithms.

  14. FRELIB, Failure Reliability Index Calculation

    International Nuclear Information System (INIS)

    Parkinson, D.B.; Oestergaard, C.

    1984-01-01

    1 - Description of problem or function: Calculation of the reliability index given the failure boundary. A linearization point (design point) is found on the failure boundary corresponding to a stationary (minimum) reliability index and a stationary failure probability density function along the failure boundary, provided that the basic variables are normally distributed. 2 - Method of solution: Iteration along the failure boundary, which must be specified - together with its partial derivatives with respect to the basic variables - by the user in a subroutine FSUR. 3 - Restrictions on the complexity of the problem: No distribution information is included (first-order second-moment method). Up to 20 basic variables (could be extended)
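    A common modern counterpart of this first-order second-moment approach is the Hasofer-Lind reliability index computed with the HL-RF iteration, sketched below for standard-normal basic variables. The user supplies the limit-state function and its gradient, mirroring FRELIB's subroutine FSUR; the example limit state is illustrative only.

    ```python
    import numpy as np

    def form_beta(g, grad_g, x0, n_iter=100, tol=1e-8):
        """Hasofer-Lind reliability index via the HL-RF iteration.

        g      : limit-state function of standard-normal variables u (failure: g <= 0)
        grad_g : its gradient with respect to u
        x0     : starting point in standard-normal space
        Returns (beta, design_point).
        """
        u = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            gu, grad = g(u), np.asarray(grad_g(u), dtype=float)
            # HL-RF update: project onto the linearized failure surface
            u_new = (grad @ u - gu) / (grad @ grad) * grad
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        return np.linalg.norm(u), u

    # Example: linear limit state g(u) = 3 - u1 - u2, so beta = 3/sqrt(2)
    beta, u_star = form_beta(lambda u: 3 - u[0] - u[1],
                             lambda u: np.array([-1.0, -1.0]),
                             np.zeros(2))
    print(beta)
    ```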

  15. Investigation of Cycle-to-Cycle Variability of NO in Homogeneous Combustion

    Directory of Open Access Journals (Sweden)

    Karvountzis-Kontakiotis A.

    2015-01-01

    Full Text Available Cyclic variability of spark ignition engines is recognized as scatter in the recorded combustion parameters during actual steady-state operation. Combustion variability may arise from fluctuations both in early flame kernel development and in turbulent flame propagation, with an impact on fuel consumption and emissions. In this study, a detailed chemistry model for the prediction of NO formation under homogeneous engine conditions is presented. The Wiebe parameterization is used for the prediction of heat release; the calculated thermodynamic data are then fed into the chemistry model to predict NO evolution at each degree of crank angle. Experimental data obtained from literature studies were used to validate the calculated mean NO levels. The model was then applied to predict the impact of cyclic variability on mean NO and the amplitude of its variation. The cyclic variability was simulated by introducing random perturbations, following a normal distribution, to the Wiebe function parameters. The results of this approach show that the proposed model predicts mean NO formation better than earlier methods. They also show that, owing to the nonlinear dependence of the NO formation rate on temperature, cycle-to-cycle variation leads to higher mean NO emission levels than one would predict without taking cyclic variation into account.
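    To illustrate the Wiebe parameterization and the perturbation scheme described above, here is a short sketch; the a and m values, burn start, and burn duration are illustrative placeholders, and the downstream thermodynamic and NO-kinetics steps are only indicated in a comment.

    ```python
    import numpy as np

    def wiebe_burn_fraction(theta, theta0=-10.0, duration=60.0, a=5.0, m=2.0):
        """Wiebe mass fraction burned vs. crank angle (degrees).

        x(theta) = 1 - exp(-a * ((theta - theta0)/duration)**(m+1))
        """
        z = np.clip((theta - theta0) / duration, 0.0, None)
        return 1.0 - np.exp(-a * z ** (m + 1))

    # Cycle-to-cycle variability: perturb Wiebe parameters with normal noise
    rng = np.random.default_rng(1)
    theta = np.linspace(-30, 90, 121)
    for _ in range(5):
        x = wiebe_burn_fraction(theta,
                                theta0=rng.normal(-10.0, 2.0),
                                duration=rng.normal(60.0, 5.0))
        # x would feed a heat-release / thermodynamic model, whose
        # temperatures then drive the detailed NO chemistry per crank angle
    ```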

  16. The impact of obstructive sleep apnea variability measured in-lab versus in-home on sample size calculations

    Directory of Open Access Journals (Sweden)

    Levendowski Daniel

    2009-01-01

    Full Text Available Abstract Background When conducting a treatment intervention, it is assumed that variability associated with measurement of the disease can be controlled sufficiently to reasonably assess the outcome. In this study we investigate the variability of the Apnea-Hypopnea Index obtained by polysomnography and by in-home portable recording in untreated mild to moderate obstructive sleep apnea (OSA) patients at a four- to six-month interval. Methods Thirty-seven adult patients serving as placebo controls underwent a baseline polysomnography and in-home sleep study followed by a second set of studies under the same conditions. The polysomnography studies were acquired and scored at three independent American Academy of Sleep Medicine accredited sleep laboratories. The in-home studies were acquired by the patient and scored using validated auto-scoring algorithms. The initial in-home study was conducted on average two months prior to the first polysomnography; the follow-up polysomnography and in-home studies were conducted approximately five to six months after the initial polysomnography. Results When comparing the test-retest Apnea-Hypopnea Index (AHI) and apnea index (AI), the in-home results were more highly correlated (r = 0.65 and 0.68) than the comparable PSG results (r = 0.56 and 0.58). The in-home results provided approximately 50% less test-retest variability than the comparable polysomnography AHI and AI values. Both the overall polysomnography AHI and AI showed a substantial bias toward increased severity upon retest (8 and 6 events/hr, respectively) while the in-home bias was essentially zero. The in-home percentage of time supine showed a better correlation compared to polysomnography (r = 0.72 vs. 0.43). Patients biased toward more time supine during the initial polysomnography; no trends in time supine for in-home studies were noted. Conclusion Night-to-night variability in sleep-disordered breathing can be a confounding factor in assessing

  17. Approximate method of calculation of non-equilibrium flow parameters of chemically reacting nitrogen tetroxide in the variable cross-section channels with energy exchange

    International Nuclear Information System (INIS)

    Bazhin, M.A.; Fedosenko, G.Eh.; Shiryaeva, N.M.; Mal'ko, M.V.

    1986-01-01

    It is shown that adiabatic non-equilibrium chemically reacting gas flow with energy exchange in a variable cross-section channel may be subdivided into five possible types: 1) quasi-equilibrium flow; 2) flow in the linear region of deviation from the equilibrium state; 3) quasi-frozen flow; 4) flow in the linear region of deviation from the frozen state; 5) non-equilibrium flow. Criteria for quasi-equilibrium and quasi-frozen flows, including factors of external action on the chemically reacting gas flow, allow one to obtain a simple but sufficiently reliable approximate method for calculating the flow parameters. The considered method for solving the problem of chemically reacting nitrogen tetroxide flow in a variable cross-section channel with energy exchange can be used to evaluate the effect of chemical reaction kinetics on the flow parameters in the stages of axial-flow and radial-flow turbines and in other practical problems

  18. A Study on the Consistency of Discretization Equation in Unsteady Heat Transfer Calculations

    Directory of Open Access Journals (Sweden)

    Wenhua Zhang

    2013-01-01

    Full Text Available Previous studies on the consistency of discretization equations mainly focused on the finite difference method, but the issue of consistency remains, with several problems far from fully solved in actual numerical computation. For instance, a consistency problem arises in numerical cases where the boundary variables are solved explicitly while the variables away from the boundary are solved implicitly. And when the coefficients of the discretization equation of a nonlinear numerical case are functions of the variables, calculating the coefficients explicitly and the variables implicitly might also give rise to consistency problems. Thus the present paper mainly investigates the consistency problems involved in the explicit treatment of the second and third boundary conditions, and in the explicit treatment of a thermal conductivity that is a function of temperature. The numerical results indicate that the consistency problem should be paid more attention and not be neglected in practical computation.

  19. Enhancing automatic closed-loop glucose control in type 1 diabetes with an adaptive meal bolus calculator - in silico evaluation under intra-day variability.

    Science.gov (United States)

    Herrero, Pau; Bondia, Jorge; Adewuyi, Oloruntoba; Pesl, Peter; El-Sharkawy, Mohamed; Reddy, Monika; Toumazou, Chris; Oliver, Nick; Georgiou, Pantelis

    2017-07-01

    Current prototypes of closed-loop systems for glucose control in type 1 diabetes mellitus, also referred to as artificial pancreas systems, require a pre-meal insulin bolus to compensate for delays in subcutaneous insulin absorption in order to avoid initial post-prandial hyperglycemia. Computing such a meal bolus is a challenging task due to the high intra-subject variability of insulin requirements. Most closed-loop systems compute this pre-meal insulin dose by a standard bolus calculation, as is commonly found in insulin pumps. However, the performance of these calculators is limited by a lack of adaptiveness in the face of dynamic changes in insulin requirements. Despite some initial attempts to include adaptation within these calculators, challenges remain. In this paper we present a new technique to automatically adapt the meal-priming bolus within an artificial pancreas. The technique consists of using a novel adaptive bolus calculator based on Case-Based Reasoning and Run-To-Run control within a closed-loop controller. Coordination between the adaptive bolus calculator and the controller was required to achieve the desired performance. For testing purposes, the clinically validated Imperial College Artificial Pancreas controller was employed. The proposed system was evaluated against itself but without bolus adaptation. The UVa-Padova T1DM v3.2 system was used to carry out a three-month in silico study on 11 adult and 11 adolescent virtual subjects, taking into account inter- and intra-subject variability of insulin requirements and uncertainty in carbohydrate intake. Overall, the closed-loop controller enhanced by an adaptive bolus calculator improves glycemic control when compared to its non-adaptive counterpart. In particular, the following statistically significant improvements were found (non-adaptive vs. adaptive). Adults: mean glucose 142.2 ± 9.4 vs. 131.8 ± 4.2 mg/dl; percentage time in target [70, 180] mg/dl, 82.0 ± 7.0 vs. 89.5 ± 4

  20. Empirical Formulas for the Calculations of the Hardness of Steels Cooled From the Austenitizing Temperature

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2016-09-01

    Full Text Available In this paper, equations are presented for calculating the hardness of continuously cooled structural steels on the basis of the austenitizing temperature. The independent variables of the hardness model were the mass concentrations of the elements, the austenitizing temperature, and the cooling rate. The equations were developed with the application of the following methods: multiple regression and logistic regression. Attention is paid to preparing the data for the calculations, to the methodology of the calculations, and also to the assessment of the quality of the developed formulas. The collection of empirical data was prepared on the basis of more than 500 CCT diagrams.

  1. Performance of a Predictive Model for Calculating Ascent Time to a Target Temperature

    Directory of Open Access Journals (Sweden)

    Jin Woo Moon

    2016-12-01

    Full Text Available The aim of this study was to develop an artificial neural network (ANN) prediction model for controlling building heating systems. This model was used to calculate the ascent time of indoor temperature from the setback period (when a building was not occupied) to a target setpoint temperature (when a building was occupied). The calculated ascent time was applied to determine the proper moment to start increasing the temperature from the setback temperature to reach the target temperature at an appropriate time. Three major steps were conducted: (1) model development; (2) model optimization; and (3) performance evaluation. Two software programs—Matrix Laboratory (MATLAB) and Transient Systems Simulation (TRNSYS)—were used for model development, performance tests, and numerical simulation methods. Correlation analysis between input variables and the output variable of the ANN model revealed that two input variables (current indoor air temperature and temperature difference from the target setpoint temperature) presented relatively strong relationships with the ascent time to the target setpoint temperature. These two variables were used as input neurons. Analyzing the difference between the simulated and predicted values from the ANN model provided the optimal number of hidden neurons (9), hidden layers (3), moment (0.9), and learning rate (0.9). At the study's conclusion, the optimized model proved its prediction accuracy with acceptable errors.

  2. Program for the calculation of the semiempirical radial wave functions by means of the variable Thomas-Fermi potential and for the determination of the radial integrals of the dipole transitions

    International Nuclear Information System (INIS)

    Kuzmitskite, L.L.

    1980-01-01

    The program is meant for the determination of the semiempirical radial wave functions of positive ions and the calculation of the radial integrals of the dipole transitions. The semiempirical wave functions are calculated using the Thomas-Fermi potential with a variable parameter, chosen so that the energy obtained coincides with the ionization energy of the state under consideration. The program is written in the FORTRAN language for the BESM-6 computer

  3. Description of a stable scheme for steady-state coupled Monte Carlo–thermal–hydraulic calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Eduard Hoogenboom, J.

    2014-01-01

    Highlights: • A stable coupling scheme for steady-state MC–TH calculations is described. • The coupling scheme is based on the stochastic approximation method. • The neutron flux (or power) distribution is relaxed using a variable step-size. - Abstract: We provide a detailed description of a numerically stable and efficient coupling scheme for steady-state Monte Carlo neutronic calculations with thermal–hydraulic feedback. While we have previously derived and published the stochastic approximation based method for coupling the Monte Carlo criticality and thermal–hydraulic calculations, its possible implementation has not been described in a step-by-step manner. As the simple description of the coupling scheme was repeatedly requested from us, we have decided to make it available via this note

  4. Variability and Comprehensiveness of North American Online Available Physical Therapy Protocols Following Hip Arthroscopy for Femoroacetabular Impingement and Labral Repair.

    Science.gov (United States)

    Cvetanovich, Gregory L; Lizzio, Vincent; Meta, Fabien; Chan, Derek; Zaltz, Ira; Nho, Shane J; Makhni, Eric C

    2017-11-01

    To assess comprehensiveness and variability of postoperative physical therapy protocols published online following hip arthroscopy for femoroacetabular impingement (FAI) and/or labral repair. Surgeons were identified by the International Society for Hip Arthroscopy "Find a Surgeon" feature in North America (http://www.isha.net/members/, search August 10, 2016). Exclusion criteria included nonsurgeons and protocols for conditions other than hip arthroscopy for FAI and/or labral tear. Protocols were identified by review of surgeons' personal and departmental websites and evaluated for postoperative restrictions, rehabilitation components, and the time points for ending restrictions and initiating activities. Of 111 surgeons available online, 31 (27.9%) had postoperative hip arthroscopy physical therapy protocols available online. Bracing was used in 54.8% (17/31) of protocols for median 2-week duration (range, 1-6 weeks). Most protocols specified the initial postoperative weight-bearing status (29/31, 93.5%), most frequently partial weight-bearing with 20 pounds foot flat (20/29, 69.0%). The duration of weight-bearing restriction was median 3 weeks (range, 2-6) for FAI and median 6 weeks (range, 3-8) for microfracture. The majority of protocols specified initial range of motion limitations (26/31, 83.9%) for median 3 weeks (range, 1.5-12). There was substantial variation in the rehabilitation activities and time points for initiating activities. Time to return to running was specified by 20/31 (64.5%) protocols at median 12 weeks (range, 6-19), and return to sport timing was specified by 13/31 (41.9%) protocols at median 15.5 weeks (range, 9-23). There is considerable variability in postoperative physical therapy protocols available online following hip arthroscopy for FAI, including postoperative restrictions, rehabilitation activities, and time points for activities. This information offers residents, fellows, and established hip arthroscopists a centralized

  5. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    Science.gov (United States)

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, reducing the variation of its measurement errors remained desirable. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals was proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy and significantly improved foot inclination angle measurement, while improving shank and thigh inclination angles slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than results seen in other studies that used markers of a camera-based motion measurement system fixed on a rigid plate together with a sensor or on the sensor directly. The proposed method was found to be effective in angle measurement with inertial sensors. PMID:24282442
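    The following is a minimal single-axis sketch of the idea: a Kalman filter that integrates the gyro and corrects with the accelerometer-derived angle, inflating the measurement variance (and hence lowering the gain) when the acceleration-based angle error is large. The state model and the constants q, r0, and k_err are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def estimate_inclination(gyro, acc, dt, q=1e-4, r0=1e-2, k_err=50.0):
        """1-D inclination estimate fusing gyro rate and accelerometer angle.

        gyro : (n,) angular rate, rad/s
        acc  : (n, 2) accelerometer (a_forward, a_vertical), used for the angle
        """
        n = len(gyro)
        angle, P = 0.0, 1.0
        out = np.empty(n)
        for i in range(n):
            # predict with the gyro
            angle += gyro[i] * dt
            P += q
            # accelerometer-derived angle measurement
            z = np.arctan2(acc[i, 0], acc[i, 1])
            err = abs(z - angle)
            R = r0 * (1.0 + k_err * err**2)   # variable gain: distrust large errors
            K = P / (P + R)
            angle += K * (z - angle)
            P *= (1.0 - K)
            out[i] = angle
        return out
    ```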

  6. Associations between bolus infusion of hydrocortisone, glycemic variability and insulin infusion rate variability in critically Ill patients under moderate glycemic control

    NARCIS (Netherlands)

    van Hooijdonk, Roosmarijn T. M.; Binnekade, Jan M.; Bos, Lieuwe D. J.; Horn, Janneke; Juffermans, Nicole P.; Abu-Hanna, Ameen; Schultz, Marcus J.

    2015-01-01

    We retrospectively studied associations between bolus infusion of hydrocortisone and variability of the blood glucose level and changes in insulin rates in intensive care unit (ICU) patients. 'Glycemic variability' and 'insulin infusion rate variability' were calculated from and expressed as the

  7. Theoretical statistics of zero-age cataclysmic variables

    International Nuclear Information System (INIS)

    Politano, M.J.

    1988-01-01

    The distribution of the white dwarf masses, the distribution of the mass ratios and the distribution of the orbital periods in cataclysmic variables which are forming at the present time are calculated. These systems are referred to as zero-age cataclysmic variables. The results show that 60% of the systems being formed contain helium white dwarfs and 40% contain carbon-oxygen white dwarfs. The mean white dwarf mass in those systems containing helium white dwarfs is 0.34. The mean white dwarf mass in those systems containing carbon-oxygen white dwarfs is 0.75. The orbital period distribution identifies four main classes of zero-age cataclysmic variables: (1) short-period systems containing helium white dwarfs, (2) systems containing carbon-oxygen white dwarfs whose secondaries are convectively stable against rapid mass transfer to the white dwarf, (3) systems containing carbon-oxygen white dwarfs whose secondaries are radiatively stable against rapid mass transfer to the white dwarf and (4) long-period systems with evolved secondaries. The white dwarf mass distribution in zero-age cataclysmic variables has direct application to the calculation of the frequency of outburst in classical novae as a function of the mass of the white dwarf. The method developed in this thesis to calculate the distributions of the orbital parameters in zero-age cataclysmic variables can be used to calculate theoretical statistics of any class of binary systems. This method provides a theoretical framework from which to investigate the statistical properties and the evolution of the orbital parameters of binary systems

  8. Improved flux calculations for viscous incompressible flow by the variable penalty method

    International Nuclear Information System (INIS)

    Kheshgi, H.; Luskin, M.

    1985-01-01

    The Navier-Stokes system for viscous, incompressible flow is considered, with the continuity equation replaced by a perturbed continuity equation. The introduction of this approximation allows the pressure variable to be eliminated, yielding a system of equations for the approximate velocity. The penalty approximation is often applied in numerical discretizations since it reduces the size and bandwidth of the system of equations. Attention is given to error estimates and to two numerical experiments that illustrate them. It is found that the variable penalty method provides an accurate solution over a much wider range of epsilon than the classical penalty method. 8 references

  9. A Variable Impacts Measurement in Random Forest for Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Jae-Hee Hur

    2017-01-01

    Full Text Available Recently, the importance of mobile cloud computing has increased. Mobile devices can collect personal data from various sensors within a short period of time, and sensor-based data contain valuable information about users. Advanced computation power and data analysis technology based on cloud computing provide an opportunity to classify massive sensor data into given labels. The random forest algorithm is known as a black-box model whose internal process is hard to interpret. In this paper, we propose a method that analyzes variable impact in the random forest algorithm to clarify which variables affect classification accuracy the most. We apply the Shapley Value with random forest to analyze variable impact. Under the assumption that every variable cooperates as a player in a cooperative game situation, the Shapley Value fairly distributes the payoff among variables. Our proposed method calculates the relative contributions of the variables within the classification process. In this paper, we analyze the influence of variables and list the priority of variables that affect the classification result. Our proposed method proves its suitability for data interpretation in a black-box model like a random forest, so the algorithm is applicable in mobile cloud computing environments.
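    To make the Shapley-based attribution concrete, the sketch below estimates Shapley values by Monte Carlo sampling of feature orderings, the standard permutation approximation. The `model_score` callable is a hypothetical stand-in for retraining or masking a random forest on a feature subset; nothing here reproduces the paper's implementation.

    ```python
    import numpy as np

    def shapley_importance(model_score, n_features, n_perm=200, rng=None):
        """Monte Carlo Shapley values for per-feature contributions.

        model_score(subset) must return a performance score (e.g. accuracy)
        for a model restricted to the features in `subset` (a frozenset).
        """
        rng = rng if rng is not None else np.random.default_rng(0)
        phi = np.zeros(n_features)
        for _ in range(n_perm):
            order = rng.permutation(n_features)
            prev, subset = model_score(frozenset()), set()
            for j in order:
                subset.add(j)
                cur = model_score(frozenset(subset))
                phi[j] += cur - prev       # marginal contribution of feature j
                prev = cur
        return phi / n_perm                # average over sampled orderings
    ```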

  10. Plant response to nutrient availability across variable bedrock geologies

    Science.gov (United States)

    Castle, S.C.; Neff, J.C.

    2009-01-01

    We investigated the role of rock-derived mineral nutrient availability on the nutrient dynamics of overlying forest communities (Populus tremuloides and Picea engelmanni-Abies lasiocarpa v. arizonica) across three parent materials (andesite, limestone, and sandstone) in the southern Rocky Mountains of Colorado. Broad geochemical differences were observed between bedrock materials; however, bulk soil chemistries were remarkably similar between the three different sites. In contrast, soil nutrient pools were considerably different, particularly for P, Ca, and Mg concentrations. Despite variations in nutrient stocks and nutrient availability in soils, we observed relatively inflexible foliar concentrations and foliar stoichiometries for both deciduous and coniferous species. Foliar nutrient resorption (P and K) in the deciduous species followed patterns of nutrient content across substrate types, with higher resorption corresponding to lower bedrock concentrations. Work presented here indicates a complex plant response to available soil nutrients, wherein plant nutrient use compensates for variations in supply gradients and results in the maintenance of a narrow range in foliar stoichiometry. © 2008 Springer Science+Business Media, LLC.

  11. The variable effects of soil nitrogen availability and insect herbivory on aboveground and belowground plant biomass in an old-field ecosystem

    DEFF Research Database (Denmark)

    Blue, Jarrod D.; Souza, Lara; Classen, Aimée T.

    2011-01-01

    in an old-field ecosystem. In 2004, we established 36 experimental plots in which we manipulated soil nitrogen (N) availability and insect abundance in a completely randomized plot design. In 2009, after 6 years of treatments, we measured aboveground biomass and assessed root production at peak growth...... not be limiting primary production in this ecosystem. Insects reduced the aboveground biomass of subdominant plant species and decreased coarse root production. We found no statistical interactions between N availability and insect herbivory for any response variable. Overall, the results of 6 years of nutrient...

  12. Comparison of thick-target (alpha,n) yield calculation codes

    Directory of Open Access Journals (Sweden)

    Fernandes Ana C.

    2017-01-01

    Full Text Available Neutron production yields and energy distributions from (α,n) reactions in light elements were calculated using three different codes (SOURCES, NEDIS and USD) and compared with the existing experimental data in the 3.5-10 MeV alpha energy range. SOURCES and NEDIS display an agreement between calculated and measured yields in the decay series of 235U, 238U and 232Th within ±10% for most materials. The discrepancy increases with alpha energy, but an agreement of ±20% still applies to all materials with reliable elemental production yields (the few exceptions are identified). The calculated neutron energy distributions describe the experimental data, with NEDIS retrieving the detailed features very well. USD generally underestimates the measured yields, in particular for compounds with heavy elements and/or at high alpha energies. The energy distributions exhibit sharp peaks that do not match the observations. These findings may be caused by a poor accounting of the alpha particle energy loss by the code. A large variability was found among the calculated neutron production yields for alphas from Sm decay; the lack of yield measurements for low-energy (~2 MeV) alphas does not allow conclusions on the codes' accuracy in this energy region.

  13. "Shut up and calculate": the available discursive positions in quantum physics courses

    Science.gov (United States)

    Johansson, Anders; Andersson, Staffan; Salminen-Karlsson, Minna; Elmgren, Maja

    2018-03-01

    Educating new generations of physicists is often seen as a matter of attracting good students, teaching them physics, and making sure that they stay at the university. Sometimes, questions are also raised about what could be done to increase diversity in recruitment. Using a discursive perspective, in this study of three introductory quantum physics courses at two Swedish universities, we instead ask what it means to become a physicist, and whether certain ways of becoming a physicist and doing physics are privileged in this process. Asking what discursive positions are made accessible to students, we use observations of lectures and problem-solving sessions together with interviews with students to characterize the discourse in the courses. Many students seem to have high expectations for the quantum physics course and generally express that they appreciate the course more than other courses. Nevertheless, our analysis shows that the ways of being a "good quantum physics student" are limited by the dominating focus on calculating quantum physics in the courses. We argue that this could have negative consequences both for the education of future physicists and for the discipline of physics itself, in that it may reproduce an instrumental "shut up and calculate" culture of physics, as well as an elitist physics education. Additionally, many students who take the courses are not future physicists, and the limitation of discursive positions may also affect these students significantly.

  14. Numerical Calculation of Transient Thermal Characteristics in Gas-Insulated Transmission Lines

    Directory of Open Access Journals (Sweden)

    Hongtao Li

    2013-11-01

    Full Text Available For further knowledge of the thermal characteristics of gas-insulated transmission lines (GILs) installed above ground, a finite-element model coupling the fluid and thermal fields is established, in which the corresponding assumptions and boundary conditions are given. Transient temperature rise processes of the GIL under conditions of variable ambient temperature, wind velocity and solar radiation are respectively investigated. The equivalent surface convective heat transfer coefficient and heat flux boundary conditions are updated during the analysis. Unlike traditional finite element methods (FEM), the variation of the thermal properties with temperature is considered. The calculation results are validated against test results reported in the literature. The conclusions provide a methodological and theoretical basis for understanding the transient temperature rise characteristics of GILs in an open environment.

  15. About hidden influence of predictor variables: Suppressor and mediator variables

    Directory of Open Access Journals (Sweden)

    Milovanović Boško

    2013-01-01

    Full Text Available In this paper, a procedure for investigating the hidden influence of predictor variables in regression models and for detecting suppressor and mediator variables is presented. It is also shown that the detection of suppressor and mediator variables can provide refined information about the research problem. As an example of applying this procedure, the relation between Atlantic atmospheric centers and air temperature and precipitation amounts in Serbia is chosen. [Projekat Ministarstva nauke Republike Srbije, br. 47007]

  16. Broyden's method in nuclear structure calculations

    International Nuclear Information System (INIS)

    Baran, Andrzej; Bulgac, Aurel; Forbes, Michael McNeil; Hagen, Gaute; Nazarewicz, Witold; Schunck, Nicolas; Stoitsov, Mario V.

    2008-01-01

    Broyden's method, widely used in quantum chemistry electronic-structure calculations for the numerical solution of nonlinear equations in many variables, is applied in the context of the nuclear many-body problem. Examples include the unitary gas problem, the nuclear density functional theory with Skyrme functionals, and the nuclear coupled-cluster theory. The stability of the method, its ease of use, and its rapid convergence rates make Broyden's method a tool of choice for large-scale nuclear structure calculations
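    For reference, the following is a minimal sketch of the root-finding form of Broyden's (good) method, which maintains a rank-one-updated secant approximation to the Jacobian so that no analytic derivatives are required. Self-consistent-field applications like those in the paper typically use a modified, memory-limited variant, which is not reproduced here.

    ```python
    import numpy as np

    def broyden(F, x0, n_iter=100, tol=1e-10):
        """Broyden's (good) method for solving F(x) = 0 in many variables."""
        x = np.asarray(x0, dtype=float)
        Fx = np.asarray(F(x), dtype=float)
        J = np.eye(x.size)                    # initial Jacobian guess
        for _ in range(n_iter):
            dx = np.linalg.solve(J, -Fx)      # quasi-Newton step
            x_new = x + dx
            F_new = np.asarray(F(x_new), dtype=float)
            if np.linalg.norm(F_new) < tol:
                return x_new
            dF = F_new - Fx
            # rank-one secant update so that J dx matches dF
            J += np.outer(dF - J @ dx, dx) / (dx @ dx)
            x, Fx = x_new, F_new
        return x

    # Example: solve x0^2 + x1^2 = 1 and x0 - x1 = 0, root near (1/sqrt(2), 1/sqrt(2))
    sol = broyden(lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]]), [0.8, 0.6])
    print(sol)
    ```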

  17. County-Scale Spatial Variability of Macronutrient Availability Ratios in Paddy Soils

    Directory of Open Access Journals (Sweden)

    Mingkai Qu

    2014-01-01

    Full Text Available Macronutrients (N, P, and K) are essential to plants but can also be harmful to the environment when their available concentrations in soil are excessive. Availability ratios (available concentration/total concentration) of macronutrients may reflect their transformation potential between fixed and available forms in soil. Understanding their spatial distributions and impact factors can therefore help in applying specific measures to modify the availability of macronutrients for agricultural and environmental management purposes. In this study, 636 topsoil samples (0–15 cm) were collected from paddy fields in Shayang County, Central China, for measuring soil properties. Factors influencing macronutrient availability ratios were investigated, and total and available concentrations of macronutrients were mapped using a geostatistical method. Spatial distribution maps of macronutrient availability ratios were further derived. Results show that (1) availability of macronutrients is controlled by multiple factors, and (2) macronutrient availability ratios are spatially varied and may not always have spatial patterns identical to those of their corresponding total and available concentrations. These results are more useful than traditional soil macronutrient average content data for guiding site-specific field management for agricultural production and environmental protection.

  18. The Pattern Across the Continental United States of Evapotranspiration Variability Associated with Water Availability

    Science.gov (United States)

    Koster, Randal D.; Salvucci, Guido D.; Rigden, Angela J.; Jung, Martin; Collatz, G. James; Schubert, Siegfried D.

    2015-01-01

    The spatial pattern across the continental United States of the interannual variance of warm-season water-dependent evapotranspiration, a pattern of relevance to land-atmosphere feedback, cannot be measured directly. Alternative and indirect approaches to estimating the pattern, however, do exist, and given the uncertainty of each, we use several such approaches here. We first quantify the water-dependent evapotranspiration variance pattern inherent in two derived evapotranspiration datasets available from the literature. We then search for the pattern in proxy geophysical variables (air temperature, streamflow, and NDVI) known to have strong ties to evapotranspiration. The variances inherent in all of the different (and mostly independent) data sources show some differences but are generally strongly consistent: they all show a large variance signal down the center of the U.S., with lower variances toward the east and (for the most part) toward the west. The robustness of the pattern across the datasets suggests that it indeed represents the pattern operating in nature. Using Budyko's hydroclimatic framework, we show that the pattern can largely be explained by the relative strength of water and energy controls on evapotranspiration across the continent.

  19. Variability in millimeter wave scattering properties of dendritic ice crystals

    International Nuclear Information System (INIS)

    Botta, Giovanni; Aydin, Kültegin; Verlinde, Johannes

    2013-01-01

    A detailed electromagnetic scattering model for ice crystals is necessary for calculating radar reflectivity from cloud resolving model output in any radar simulator. The radar reflectivity depends on the backscattering cross sections and size distributions of particles in the radar resolution volume. The backscattering cross section depends on the size, mass and distribution of mass within the crystal. Most of the available electromagnetic scattering data for ice hydrometeors rely on simple ice crystal types and a single mass–dimensional relationship for a given type. However, a literature survey reveals that the mass–dimensional relationships for dendrites cover a relatively broad region in the mass–dimensional plane. This variability of mass and mass distribution of dendritic ice crystals cause significant variability in their backscattering cross sections, more than 10 dB for all sizes (0.5–5 mm maximum dimension) and exceeding 20 dB for the larger ones at X-, Ka-, and W-band frequencies. Realistic particle size distributions are used to calculate radar reflectivity and ice water content (IWC) for three mass–dimensional relationships. The uncertainty in the IWC for a given reflectivity spans an order of magnitude in value at all three frequencies because of variations in the unknown mass–dimensional relationship and particle size distribution. The sensitivity to the particle size distribution is reduced through the use of dual frequency reflectivity ratios, e.g., Ka- and W-band frequencies, together with the reflectivity at one of the frequencies for estimating IWC. -- Highlights: • Millimeter wave backscattering characteristics of dendritic crystals are modeled. • Natural variability of dendrite shapes leads to large variability in their mass. • Dendrite mass variability causes large backscattering cross section variability. • Reflectivity–ice water content relation is sensitive to mass and size distribution. • Dual frequency

  20. Artificial neural networks applied to DNBR calculation in digital core protection systems

    International Nuclear Information System (INIS)

    Lee, H. C.; Chang, S. H.

    2003-01-01

    The nuclear power plant has to be operated with a sufficient margin from the specified DNBR limit to assure its safety. The digital core protection system calculates on-line real-time DNBR using a complex subchannel analysis program and triggers a reliable reactor shutdown if the calculated DNBR approaches the specified limit. However, the calculation takes a relatively long time even for a steady-state condition, which may have an adverse effect on operational flexibility. To overcome this drawback, a method using artificial neural networks is studied in this paper. A nonparametric training approach is utilized, which dramatically reduces the training time, requires no tedious heuristic process for optimizing parameters, and avoids the local minima problem during training. The test results show that the predicted DNBR is within about ±2% deviation from the target DNBR for the fixed axial flux shape case. For the variable axial flux case, including severely skewed shapes appearing during accidents, the deviation is about ±10–15%. The suggested method could be an alternative that calculates DNBR very quickly while increasing plant availability

  1. Low-Frequency Temporal Variability in Mira and Semiregular Variables

    Science.gov (United States)

    Templeton, Matthew R.; Karovska, M.; Waagen, E. O.

    2012-01-01

    We investigate low-frequency variability in a large sample of Mira and semiregular variables with long-term visual light curves from the AAVSO International Database. Our aim is to determine whether we can detect and measure long-timescale variable phenomena in these stars, for example photometric variations that might be associated with supergranular convection. We analyzed the long-term light curves of 522 variable stars of the Mira and SRa, b, c, and d classes. We calculated their low-frequency time-series spectra to characterize red noise via the power density spectrum index, and then correlated this index with other observable characteristics such as spectral type and primary pulsation period. In our initial analysis of the sample, we see that the semiregular variables have a much broader range of spectral index than the Mira types, with the SRb subtype having the broadest range. Among Mira variables we see that the M- and S-type Miras have similarly wide ranges of index, while the C-types have the narrowest, with generally shallower slopes. There is also a trend of steeper slope with larger amplitude, but at a given amplitude a wide range of slopes is seen. The ultimate goal of the project is to identify stars with strong intrinsic red noise components as possible targets for resolved surface imaging with interferometry.
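    A power density spectrum index of the kind described above can be estimated, under simple assumptions, as the log-log slope of a periodogram at low frequencies. The sketch below uses a plain discrete Fourier sum (usable on the unevenly sampled visual data) and a least-squares fit; the frequency grid and any detrending choices are left to the user, and nothing here reproduces the study's actual pipeline.

    ```python
    import numpy as np

    def power_spectrum_index(times, mags, freqs):
        """Slope of the low-frequency power density spectrum in log-log space.

        times, mags : unevenly sampled light curve (days, magnitudes)
        freqs       : low-frequency grid (1/day) over which to fit the slope
        Returns the power-law index: ~0 for white noise, more negative for
        steeper red noise.
        """
        t = np.asarray(times) - np.mean(times)
        y = np.asarray(mags) - np.mean(mags)
        # crude periodogram via a direct Fourier sum on the irregular grid
        power = np.array([np.abs(np.sum(y * np.exp(-2j * np.pi * f * t)))**2
                          for f in freqs])
        slope, _ = np.polyfit(np.log10(freqs), np.log10(power + 1e-300), 1)
        return slope
    ```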

  2. Particle water and pH in the Eastern Mediterranean: sources variability and implications for nutrients availability

    Science.gov (United States)

    Nikolaou, P.; Bougiatioti, A.; Stavroulas, I.; Kouvarakis, G.; Nenes, A.; Weber, R.; Kanakidou, M.; Mihalopoulos, N.

    2015-10-01

    during early morning, late evening and nighttime hours. The aerosol was found to be highly acidic, with calculated aerosol pH varying from 0.5 to 2.8 throughout the study period. Biomass burning aerosol presented the highest values of pH in the submicron fraction and the lowest values in total water mass concentration. The low pH values observed in the submicron mode, independently of air mass origin, could increase nutrient availability and especially P solubility, which is the nutrient limiting sea water productivity of the eastern Mediterranean.

  3. Particle water and pH in the eastern Mediterranean: source variability and implications for nutrient availability

    Science.gov (United States)

    Bougiatioti, Aikaterini; Nikolaou, Panayiota; Stavroulas, Iasonas; Kouvarakis, Giorgos; Weber, Rodney; Nenes, Athanasios; Kanakidou, Maria; Mihalopoulos, Nikolaos

    2016-04-01

    contributed about 27.5 % to the total aerosol water, mostly during early morning, late evening, and nighttime hours. The aerosol was found to be highly acidic with calculated aerosol pH varying from 0.5 to 2.8 throughout the study period. Biomass burning aerosol presented the highest values of pH in the submicron fraction and the lowest values in total water mass concentration. The low pH values observed in the submicron mode and independently of air mass origin could increase nutrient availability and especially P solubility, which is the nutrient limiting sea water productivity of the eastern Mediterranean.

  4. Artificial neural network based model to calculate the environmental variables of the tobacco drying process; Modelo basado en redes neuronales artificiales para el cálculo de parámetros ambientales en el proceso de curado del tabaco

    Directory of Open Access Journals (Sweden)

    Víctor Martínez-Martínez

    2013-06-01

    Full Text Available This paper presents an Artificial Neural Network (ANN) based model for environmental variables related to the tobacco drying process. A fitting ANN was used to estimate and predict temperature and relative humidity inside the tobacco dryer: estimation consists of calculating the value of these variables at different locations in the dryer, and prediction consists of forecasting the value of these variables over different time horizons. The proposed model has been validated with temperature and relative humidity data obtained from a real tobacco dryer using a Wireless Sensor Network (WSN). On the one hand, an error under 2% was achieved in the estimation task, obtaining temperature as a function of temperature and relative humidity at other locations. On the other hand, in the prediction task, an error around 1.5 times lower than that obtained with an interpolation method was achieved when the temperature inside the tobacco mass was predicted, with time horizons over 2.5 hours, as a function of its present and past values. These results show that ANN-based models can be used to improve the tobacco drying process, because such models can predict the values of environmental variables in the near future and estimate them at other locations with low error.
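
    A rough sketch of the prediction task (not the authors' network): build lagged temperature features and train a small fitting network to forecast a fixed horizon ahead. The sensor series, lag count and horizon below are invented:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(9)
        t = np.arange(3000)
        temp = 60 + 8 * np.sin(2 * np.pi * t / 720) + rng.normal(0, 0.3, t.size)

        lags, horizon = 6, 150            # predict 150 samples ahead
        m = temp.size - lags - horizon + 1
        X = np.array([temp[j:j + lags] for j in range(m)])
        y = np.array([temp[j + lags - 1 + horizon] for j in range(m)])

        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X[:2000], y[:2000])
        print("test MAE:", np.abs(net.predict(X[2000:]) - y[2000:]).mean())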

  5. Radiation-use efficiency and gas exchange responses to water and nutrient availability in irrigated and fertilized stands of sweetgum and sycamore

    Science.gov (United States)

    Christopher B. Allen; Rodney E. Will; Robert C. McGravey; David R. Coyle; Mark D. Coleman

    2005-01-01

    We investigated how water and nutrient availability affect radiation-use efficiency (e) and assessed leaf gas exchange as a possible mechanism for shifts in e. We measured aboveground net primary production (ANPP) and annual photosynthetically active radiation (PAR) capture to calculate e as well as leaf-level physiological variables (light-saturated net photosynthesis...

  6. Risk assessment of groundwater level variability using variable Kriging methods

    Science.gov (United States)

    Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2015-04-01

    Assessment of the spatial variability of the water table level in aquifers provides useful information for optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in the Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging, and the third calculates, by means of Indicator Kriging, the probability of the groundwater level falling below a predefined minimum value that could cause significant problems for groundwater resource availability. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be optimal, and in combination with the Kriging methodologies provides the most accurate cross-validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the risk that certain locations exhibit with respect to a predefined minimum value set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the
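
    A minimal ordinary-kriging sketch in plain NumPy, assuming an exponential variogram with illustrative nugget, sill and range (the study itself found Matérn models optimal and Box-Cox-transformed its data):

        import numpy as np

        def exp_variogram(h, nugget=0.1, sill=1.0, rang=500.0):
            # Exponential variogram gamma(h) with practical range `rang` (m).
            return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rang))

        def ordinary_kriging(xy, z, x0):
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = exp_variogram(d)
            np.fill_diagonal(A[:n, :n], 0.0)      # gamma(0) = 0
            A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = exp_variogram(np.linalg.norm(xy - x0, axis=1))
            w = np.linalg.solve(A, b)             # weights + Lagrange multiplier
            return w[:n] @ z, w @ b               # estimate, kriging variance

        xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 150.0], [200.0, 200.0]])
        z = np.array([12.3, 11.8, 13.1, 10.9])    # water table levels (m)
        print(ordinary_kriging(xy, z, np.array([50.0, 50.0])))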

  7. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy
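
    The modelling pattern (logistic model fitted on a development cohort, discrimination checked by AUC on a validation cohort) can be sketched with scikit-learn; the four predictors and all data below are synthetic stand-ins:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)

        def cohort(n):
            # Four standardized predictors (e.g. age, PSA, volume, DRE score).
            X = rng.normal(size=(n, 4))
            p = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 1] + 0.5 * X[:, 3] - 0.6)))
            return X, rng.binomial(1, p)

        X_dev, y_dev = cohort(3482)               # development cohort
        X_val, y_val = cohort(1112)               # validation cohort

        model = LogisticRegression().fit(X_dev, y_dev)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print("validation AUC:", round(auc, 3))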

  8. Neutron transport calculations of some fast critical assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Val Penalosa, J A

    1976-07-01

    To analyse the influence of the input variables of the transport codes upon the neutronic results (eigenvalues, generation times, ...), four benchmark calculations have been performed. Sensitivity analyses have been applied to express these dependences in a useful way, and also to gain the experience needed to carry out calculations that achieve the required accuracy in practical computing times. (Author) 29 refs.

  9. Neutron transport calculations of some fast critical assemblies

    International Nuclear Information System (INIS)

    Martinez-Val Penalosa, J. A.

    1976-01-01

    To analyse the influence of the input variables of the transport codes upon the neutronic results (eigenvalues, generation times, ...), four benchmark calculations have been performed. Sensitivity analyses have been applied to express these dependences in a useful way, and also to gain the experience needed to carry out calculations that achieve the required accuracy in practical computing times. (Author) 29 refs

  10. There Is No Further Gain from Calculating Disease Activity Score in 28 Joints with High Sensitivity Assays of C-Reactive Protein Because of High Intraindividual Variability of CRP

    DEFF Research Database (Denmark)

    Jensen Hansen, Inger Marie; Asmussen Andreasen, Rikke; Antonsen, Steen

    2016-01-01

    Background/Purpose: The threshold for reporting of C-reactive protein (CRP) differs from laboratory to laboratory. Moreover, CRP values are affected by intraindividual biological variability [1]. With respect to the disease activity score in 28 joints (DAS28) and rheumatoid arthritis (RA), a precise threshold for reporting CRP is important due to the direct effect of CRP on the calculated DAS28, patient classification and subsequent treatment decisions [2]. Methods: This study consists of two sections: a theoretical consideration discussing the performance of CRP in calculating DAS28 with regard to the biological variation and reporting limit for CRP, and a cross-sectional study of all RA patients from our department (n=876) applying our theoretical results. In the second section, we calculate DAS28 twice, with the actual CRP and with CRP=9, the latter to elucidate the positive consequences of changing the lower...
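
    The arithmetic behind this sensitivity can be seen in the commonly used DAS28-CRP formula, sketched below (assumed here to be the formula in question; CRP in mg/L, GH = patient global health on a 0-100 mm scale):

        import math

        def das28_crp(tjc28, sjc28, crp_mg_l, gh_vas):
            # CRP enters through ln(CRP + 1), so the reporting threshold for
            # low CRP values shifts every score built on it.
            return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
                    + 0.36 * math.log(crp_mg_l + 1.0) + 0.014 * gh_vas + 0.96)

        # Same patient, CRP reported as 1 vs. 9 mg/L:
        print(das28_crp(4, 2, 1.0, 35))   # ~3.22
        print(das28_crp(4, 2, 9.0, 35))   # ~3.79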

  11. KENO-IV code benchmark calculation, (6)

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Naito, Yoshitaka; Yamakawa, Yasuhiro.

    1980-11-01

    A series of benchmark tests has been undertaken at JAERI in order to examine the capability of JAERI's criticality safety evaluation system, consisting of the Monte Carlo calculation code KENO-IV and the newly developed multigroup constants library MGCL. The present report describes the results of a benchmark test using criticality experiments with plutonium fuel in various shapes. In all, 33 cases have been calculated for Pu(NO3)4 aqueous solution, Pu metal or PuO2-polystyrene compacts in various shapes (sphere, cylinder, rectangular parallelepiped). The effective multiplication factors calculated for the 33 cases are widely distributed between 0.955 and 1.045 due to the wide range of system variables. (author)

  12. The impact of spatial variability of hydrogeological parameters - Monte Carlo calculations using SITE-94 data

    International Nuclear Information System (INIS)

    Pereira, A.; Broed, R.

    2002-03-01

    In this report, several issues related to the probabilistic methodology for performance assessments of repositories for high-level nuclear waste and spent fuel are addressed. Random Monte Carlo sampling is used to make uncertainty analyses for the migration of four nuclides and a decay chain in the geosphere. The nuclides studied are cesium, chlorine, iodine and carbon, and radium from a decay chain. A procedure is developed to take advantage of the information contained in the hydrogeological data obtained from a three-dimensional discrete fracture model as the input data for one-dimensional transport models for use in Monte Carlo calculations. This procedure retains the original correlations between parameters representing different physical entities, namely, between the groundwater flow rate and the hydrodynamic dispersion in fractured rock, in contrast with the approach commonly used that assumes that all parameters supplied for the Monte Carlo calculations are independent of each other. A small program is developed to allow the above-mentioned procedure to be used if the available three-dimensional data are scarce for Monte Carlo calculations. The program allows random sampling of data from the 3-D data distribution in the hydrogeological calculations. The impact of correlations between the groundwater flow and the hydrodynamic dispersion on the uncertainty associated with the output distribution of the radionuclides' peak releases is studied. It is shown that for the SITE-94 data, this impact can be disregarded. A global sensitivity analysis is also performed on the peak releases of the radionuclides studied. The results of these sensitivity analyses, using several known statistical methods, show discrepancies that are attributed to the limitations of these methods. The reason for the difficulties is to be found in the complexity of the models needed for the predictions of radionuclide migration, models that deliver results covering variation of several
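
    The resampling idea can be sketched as drawing whole records (rows) of the 3-D model output, so that the flow-dispersion correlation survives, instead of sampling each parameter independently; the data below are synthetic stand-ins for the discrete-fracture results:

        import numpy as np

        rng = np.random.default_rng(3)

        # Stand-in for (groundwater flow, hydrodynamic dispersion) pairs from
        # a 3-D discrete-fracture simulation; the two are correlated.
        flow = rng.lognormal(mean=-2.0, sigma=0.5, size=400)
        disp = 0.1 * flow ** 0.8 * rng.lognormal(0.0, 0.1, size=400)
        records = np.column_stack([flow, disp])

        # Correlated Monte Carlo input: resample rows jointly (bootstrap).
        idx = rng.integers(0, len(records), size=10_000)
        correlated_sample = records[idx]

        # Naive alternative that destroys the correlation: sample each
        # column separately, as if the parameters were independent.
        independent_sample = np.column_stack([
            rng.choice(flow, 10_000), rng.choice(disp, 10_000)])

        print(np.corrcoef(correlated_sample.T)[0, 1],
              np.corrcoef(independent_sample.T)[0, 1])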

  13. Availability of potassium in biomass combustion ashes and gasification biochars after application to soils with variable pH and clay content

    DEFF Research Database (Denmark)

    Li, Xiaoxi; Rubæk, Gitte Holton; Sørensen, Peter

    2017-01-01

    The objective of this study was to evaluate the potassium (K) availability in various types of biomass ashes and gasification biochars (GBs) derived from straw, wood, sewage sludge and poultry manure when mixed with soil. A 16-week incubation study was conducted with three contrasting soils of variable pH (5.8–7.8) and clay contents (3–17%). Exchangeable K in the product-soil mixture was determined, and the K recovery rate from the applied products varied from 31 to 86%. The relative recovery compared to applied KCl was used to indicate K availability and was 50–86% across all soils, but lower for two sewage sludge...

  14. Intensive variable and its application

    CERN Document Server

    Zheng, Xinqi; Yuan, Zhiyuan

    2014-01-01

    Opening with intensive variables theory, using a combination of static and dynamic GIS and integrating numerical calculation and spatial optimization, this book creates a framework and methodology for evaluating land use effect, among other concepts.

  15. Modeling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Science.gov (United States)

    Reichstein, Markus; Rey, Ana; Freibauer, Annette; Tenhunen, John; Valentini, Riccardo; Banza, Joao; Casals, Pere; Cheng, Yufu; Grünzweig, Jose M.; Irvine, James; Joffre, Richard; Law, Beverly E.; Loustau, Denis; Miglietta, Franco; Oechel, Walter; Ourcival, Jean-Marc; Pereira, Joao S.; Peressotti, Alessandro; Ponti, Francesca; Qi, Ye; Rambal, Serge; Rayment, Mark; Romanya, Joan; Rossi, Federica; Tedeschi, Vanessa; Tirone, Giampiero; Xu, Ming; Yakir, Dan

    2003-12-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal to develop a model describing seasonal, interannual and spatial variability of soil respiration as affected by water availability, temperature, and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer expressed as a fraction of field capacity was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g., leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, it was evidenced that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical nonlinear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content, and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and intersite variability of soil respiration with a mean absolute error of 0.82 μmol m-2 s-1. The parameterized model exhibits the following principal properties: (1) At a relative amount of upper-layer soil water of 16% of field capacity, half-maximal soil respiration rates are reached. (2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. (3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly timescale, we employed the approach by Raich et al. [2002] that used monthly precipitation and air temperature to globally predict soil respiration (T&P model). While this model was able to

  16. Modelling temporal and large-scale spatial variability of soil respiration from soil water availability, temperature and vegetation productivity indices

    Science.gov (United States)

    Reichstein, M.; Rey, A.; Freibauer, A.; Tenhunen, J.; Valentini, R.; Soil Respiration Synthesis Team

    2003-04-01

    Field-chamber measurements of soil respiration from 17 different forest and shrubland sites in Europe and North America were summarized and analyzed with the goal to develop a model describing seasonal, inter-annual and spatial variability of soil respiration as affected by water availability, temperature and site properties. The analysis was performed at a daily and at a monthly time step. With the daily time step, the relative soil water content in the upper soil layer expressed as a fraction of field capacity was a good predictor of soil respiration at all sites. Among the site variables tested, those related to site productivity (e.g. leaf area index) correlated significantly with soil respiration, while carbon pool variables like standing biomass or the litter and soil carbon stocks did not show a clear relationship with soil respiration. Furthermore, it was evidenced that the effect of precipitation on soil respiration stretched beyond its direct effect via soil moisture. A general statistical non-linear regression model was developed to describe soil respiration as dependent on soil temperature, soil water content and site-specific maximum leaf area index. The model explained nearly two thirds of the temporal and inter-site variability of soil respiration with a mean absolute error of 0.82 µmol m-2 s-1. The parameterised model exhibits the following principal properties: 1) At a relative amount of upper-layer soil water of 16% of field capacity half-maximal soil respiration rates are reached. 2) The apparent temperature sensitivity of soil respiration measured as Q10 varies between 1 and 5 depending on soil temperature and water content. 3) Soil respiration under reference moisture and temperature conditions is linearly related to maximum site leaf area index. At a monthly time-scale we employed the approach by Raich et al. (2002, Global Change Biol. 8, 800-812) that used monthly precipitation and air temperature to globally predict soil respiration (T

  17. Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations

    International Nuclear Information System (INIS)

    De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.; Celis-Lopez, M. A.

    2010-01-01

    In this work, the impact of incorporating high-Z materials (embolization material) into the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations is studied. A statistical analysis is performed to establish the variables that may affect the dose calculation. For the comparison, pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used. The comparison between the two dose calculations shows that PB overestimates the deposited dose. The statistical analysis, for the number of patients in the study (20), shows that the variable that may affect the dose calculation is the volume of high-Z material in the arteriovenous malformation. Further studies have to be done to establish the clinical impact on the radiosurgery outcome.

  18. Optimization Shape of Variable Capacitance Micromotor Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2010-01-01

    Full Text Available A new method for optimum shape design of a variable capacitance micromotor (VCM) using Differential Evolution (DE), a stochastic search algorithm, is presented. In this optimization exercise, the objective function aims to maximize the torque value and minimize the torque ripple, with the geometric parameters considered as the variables. The optimization process is carried out using a combination of the DE algorithm and FEM analysis. The fitness value is calculated by FEM analysis using COMSOL 3.4, and the DE algorithm is realized in MATLAB 7.4. The proposed method is applied to a VCM with 8 poles at the stator and 6 poles at the rotor. The results show that the micromotor optimized using the DE algorithm has a higher torque value and lower torque ripple, indicating the validity of this methodology for VCM design.
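
    A minimal sketch of the optimization loop using SciPy's differential_evolution; the real study evaluated fitness with COMSOL FEM runs, so the analytic torque/ripple objective and the two geometric parameters here are stand-ins:

        import numpy as np
        from scipy.optimize import differential_evolution

        def fitness(geom):
            pole_arc, pole_depth = geom             # illustrative parameters
            torque = pole_arc * (1.0 - 0.3 * (pole_depth - 0.5) ** 2)
            ripple = 0.2 + 0.5 * abs(pole_arc - 0.6)
            return -(torque - 2.0 * ripple)         # DE minimizes

        bounds = [(0.3, 0.9), (0.1, 1.0)]           # geometric limits
        result = differential_evolution(fitness, bounds, seed=0, tol=1e-8)
        print(result.x, -result.fun)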

  19. Options to Improve the Quality of Wind Generation Output Forecasting with the Use of Available Information as Explanatory Variables

    Directory of Open Access Journals (Sweden)

    Rafał Magulski

    2015-06-01

    Full Text Available Development of wind generation, besides its positive aspects related to the use of renewable energy, is a challenge from the point of view of power systems' operational security and economy. The uncertain and variable nature of wind generation sources entails the need for the TSO to provide adequate power reserves, necessary to maintain the grid's stable operation, and the actors involved in trading energy from these sources incur additional costs from balancing unplanned output deviations. The paper presents the results of analyses concerning the options to forecast a selected wind farm's output by means of different prediction methods, using a different range of measurement and forecasting data available at the farm and in its surroundings. The analyses focused on the evaluation of forecast errors, the selection of input data for forecasting models, and the assessment of their impact on prediction quality improvement.

  20. Practical calculation of amplitudes for electron-impact ionization

    International Nuclear Information System (INIS)

    McCurdy, C. William; Horner, Daniel A.; Rescigno, Thomas N.

    2001-01-01

    An integral expression that is formally valid only for short-range potentials is applied to the problem of calculating the amplitude for electron-impact ionization. It is found that this expression provides a practical and accurate path to the calculation of singly differential cross sections for electron-impact ionization. Calculations are presented for the Temkin-Poet and collinear models for ionization of hydrogen by electron impact. An extension of the finite-element approach using the discrete-variable representation, appropriate for potentials with discontinuous derivatives like the Temkin-Poet interaction, is also presented

  1. Spatial variability of soil carbon, pH, available phosphorous and potassium in organic farm located in Mediterranean Croatia

    Science.gov (United States)

    Bogunović, Igor; Pereira, Paulo; Šeput, Miranda

    2016-04-01

    Soil organic carbon (SOC), pH, available phosphorus (P), and potassium (K) are some of the most important factors in soil fertility. These soil parameters are highly variable in space and time, with implications for crop production. The aim of this work is to study the spatial variability of SOC, pH, P and K in an organic farm located in the river Rasa valley (Croatia). A regular grid (100 x 100 m) was designed and 182 samples were collected on a silty clay loam soil. P, K and SOC showed moderate heterogeneity, with coefficients of variation (CV) of 21.6%, 32.8% and 51.9%, respectively. Soil pH recorded low spatial variability, with a CV of 1.5%. Soil pH, P and SOC did not follow a normal distribution; only after a Box-Cox transformation did the data meet the normality requirements. Directional exponential models were the best fits and were used to describe spatial autocorrelation. Soil pH, P and SOC showed strong spatial dependence, with nugget-to-sill ratios of 13.78%, 0.00% and 20.29%, respectively. Only K recorded moderate spatial dependence. Semivariogram ranges indicate that a future sampling interval could be 150-200 m in order to reduce sampling costs. Fourteen different interpolation models for mapping soil properties were tested. For each variable, the method with the lowest root mean square error was considered the most appropriate for mapping. The results showed that radial basis function models (Spline with Tension and Completely Regularized Spline) were the best predictors for P and K, while Thin Plate Spline and inverse distance weighting models were the least accurate. The best interpolator for pH and SOC was the local polynomial with power 1, while the least accurate was Thin Plate Spline. According to the soil nutrient maps, the investigated area has a very rich supply of K, while the P supply was insufficient over the largest part of the area. Soil pH maps showed a mostly neutral reaction, while individual parts with alkaline soil indicate the possibility of seawater penetration and salt accumulation in the

  2. Water Availability in a Warming World

    Science.gov (United States)

    Aminzade, Jennifer

    As climate warms during the 21st century, the resultant changes in water availability are a vital issue for society, perhaps even more important than the magnitude of warming itself. Yet our climate models disagree in their forecasts of water availability, limiting our ability to plan accordingly. This thesis investigates future water availability projections from Coupled Ocean-Atmosphere General Circulation Models (GCMs), primarily using two water availability measures: soil moisture and the Supply Demand Drought Index (SDDI). Chapter One introduces methods of measuring water availability and explores some of the fundamental differences between soil moisture, SDDI and the Palmer Drought Severity Index (PDSI). SDDI and PDSI tend to predict more severe future drought conditions than soil moisture; 21st century projections of SDDI show conditions rivaling North American historic mega-droughts. We compare multiple potential evapotranspiration (EP) methods in New York using input from the GISS Model ER GCM and local station data from Rochester, NY, and find that they compare favorably with local pan evaporation measurements. We calculate SDDI and PDSI values using various EP methods, and show that changes in future projections are largest when using EP methods most sensitive to global warming, not necessarily methods producing EP values with the largest magnitudes. Chapter Two explores the characteristics and biases of the five GCMs and their 20th and 21st century climate projections. We compare atmospheric variables that drive water availability changes globally, zonally, and geographically among models. All models show increases in both dry and wet extremes for SDDI and soil moisture, but increases are largest for extreme drying conditions using SDDI. The percentage of gridboxes that agree on the sign of change of soil moisture and SDDI between models is very low, but does increase in the 21st century. Still, differences between models are smaller than differences

  3. Intra-basin variability of snowmelt water balance calculations in a subarctic catchment

    Science.gov (United States)

    McCartney, Stephen E.; Carey, Sean K.; Pomeroy, John W.

    2006-03-01

    The intra-basin variability of snowmelt and melt-water runoff hydrology in an 8 km2 subarctic alpine tundra catchment was examined for the 2003 melt period. The catchment, Granger Creek, is within the Wolf Creek Research Basin, Yukon, which is typical of mountain subarctic landscapes in northwestern Canada. The study catchment was segmented into nine internally uniform zones termed hydrological response units (HRUs) based on their similar hydrological, physiographic, vegetation and soil properties. Snow accumulation exhibited significant variability among the HRUs, with greatest snow water equivalent in areas of tall shrub vegetation. Melt began first on southerly exposures and at lower elevations, yet average melt rates for the study period varied little among HRUs with the exception of those with steep aspects. In HRUs with capping organic soils, melt water first infiltrated this surface horizon, satisfying its storage capacity, and then percolated into the frozen mineral substrate. Infiltration and percolation into frozen mineral soils was restricted where melt occurred rapidly and organic soils were thin; in this case, melt-water delivery rates exceeded the frozen mineral soil infiltration rate, resulting in high runoff rates. In contrast, where there were slower melt rates and thick organic soils, infiltration was unlimited and runoff was suppressed. The snow water equivalent had a large impact on runoff volume, as soil storage capacity was quickly surpassed in areas of deep snow, diverting the bulk of melt water laterally to the drainage network. A spatially distributed water balance indicated that the snowmelt freshet was primarily controlled by areas with tall shrub vegetation that accumulate large quantities of snow and by alpine areas with no capping organic soils. The intra-basin water balance variability has important implications for modelling freshet in hydrological models.

  4. Constraint-Led Changes in Internal Variability in Running

    OpenAIRE

    Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich

    2012-01-01

    We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36 % ...

  5. Variable displacement alpha-type Stirling engine

    Science.gov (United States)

    Homutescu, V. M.; Bălănescu, D. T.; Panaite, C. E.; Atanasiu, M. V.

    2016-08-01

    The basic design and construction of an alpha-type Stirling engine with on-load variable displacement is presented. The variable displacement is obtained through a planar quadrilateral linkage with one ground link movable on load. The physico-mathematical model used for analyzing the behavior of the variable-displacement alpha-type Stirling engine is an isothermal model that takes into account the real movement of the pistons. The performance and power-adjustment capabilities of such an alpha-type Stirling engine are calculated and analyzed, and exemplified through numerical simulation.

  6. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principles and the practical difficulties of uncertainty propagation methodologies, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example, which he generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory.
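
    A toy illustration of the distinction, assuming a one-line model: the variable input is sampled by Monte Carlo, while the poorly known input is swept over an interval (one alpha-cut of a fuzzy number), so each output percentile becomes an interval rather than a single number:

        import numpy as np

        def model(a, b):
            return a * b + a ** 2

        rng = np.random.default_rng(4)
        a = rng.normal(1.0, 0.2, 20_000)    # variability: known distribution

        # Lack of knowledge: b only known to lie in [0.5, 2.0] (one alpha-cut
        # of a fuzzy number); evaluate the model at the interval endpoints.
        out_lo = model(a, 0.5)
        out_hi = model(a, 2.0)

        # Each percentile becomes an interval instead of a single number.
        p95 = (np.percentile(np.minimum(out_lo, out_hi), 95),
               np.percentile(np.maximum(out_lo, out_hi), 95))
        print("95th percentile lies in", p95)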

  7. Improving the accuracy of dynamic mass calculation

    Directory of Open Access Journals (Sweden)

    Oleksandr F. Dashchenko

    2015-06-01

    Full Text Available With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determination of the weight of moving transport is only possible by appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
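
    The preliminary-filtering step can be sketched with a simple moving average standing in for whatever filter the authors used; the strain-gauge signal below is invented:

        import numpy as np

        rng = np.random.default_rng(8)
        t = np.linspace(0.0, 2.0, 2000)               # s
        # Static mass plus oscillation from wagon motion plus sensor noise.
        signal = 25_000 + 800 * np.sin(40 * t) + 300 * rng.standard_normal(t.size)

        window = 200                                  # samples
        kernel = np.ones(window) / window
        smoothed = np.convolve(signal, kernel, mode="valid")
        print("mass estimate (kg):", smoothed.mean().round(1))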

  8. A Fast Time-Delay Calculation Method in Through-Wall-Radar Detection Scenario

    Directory of Open Access Journals (Sweden)

    Zhang Qi

    2016-01-01

    Full Text Available In the TWR (Through-Wall Radar) signal processing procedure, time-delay estimation is one of the key steps in target localization and high-resolution imaging. In time-domain imaging procedures such as the back-projection imaging algorithm, the round-trip propagation time delay along the path transmitter-target-receiver needs to be calculated for each pixel in the imaging region. In a typical TWR scenario, the transmitter and receiver are on one side of a wall and the targets on the other. Traditional time-delay calculation algorithms, based on a two-dimensional search or on solving a fourth-order equation in two variables, are complex and time-consuming, and cannot be used in real-time imaging. In this paper, a new fast time-delay (FTD) algorithm is presented. Because the incident angle where the ray enters the wall equals the refraction angle where it exits, an equation for the lateral distance through the wall can be established. Solving this equation yields the lateral distance, from which the total propagation time delay is calculated. Results from processing simulated data show that the new algorithm can be applied effectively to real-time time-delay calculation in TWR signal processing.
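
    A sketch of the underlying geometry, assuming a homogeneous wall and one-way propagation: Snell's law reduces the search for the refraction point to a one-dimensional root solve, after which the delay follows directly. All dimensions and the wall index are illustrative:

        import numpy as np
        from scipy.optimize import brentq

        C = 0.3    # m/ns, speed of light in air
        N = 2.5    # wall refractive index (sqrt of relative permittivity)

        def delay_through_wall(standoff, wall, depth, lateral):
            # One-way delay: antenna -> wall (standoff away) -> target
            # located `depth` behind the wall, `lateral` off axis.
            def lateral_mismatch(theta_air):
                theta_wall = np.arcsin(np.sin(theta_air) / N)
                x = ((standoff + depth) * np.tan(theta_air)
                     + wall * np.tan(theta_wall))
                return x - lateral

            theta = brentq(lateral_mismatch, 0.0, np.pi / 2 - 1e-6)
            air = (standoff + depth) / np.cos(theta)
            inside = wall / np.cos(np.arcsin(np.sin(theta) / N))
            return air / C + inside * N / C       # ns

        print(delay_through_wall(standoff=1.0, wall=0.3, depth=3.0, lateral=1.5))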

  9. Calculating Cumulative Binomial-Distribution Probabilities

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    The cumulative-binomial computer program CUMBIN, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557) are used independently of one another. Reliabilities and availabilities of k-out-of-n systems are analyzed. The program is used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts for calculations of reliability and availability. Program written in C.
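
    The quantity such a program computes can be sketched in a few lines (in Python here, rather than the program's C):

        from math import comb

        def k_out_of_n(k, n, p):
            # P(X >= k) for X ~ Binomial(n, p): the reliability of a
            # k-out-of-n system of independent components with reliability p.
            return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))

        # Example: 2-out-of-3 system with 0.9-reliable components.
        print(k_out_of_n(2, 3, 0.9))   # 0.972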

  10. Parameter analysis calculation on characteristics of portable FAST reactor

    International Nuclear Information System (INIS)

    Otsubo, Akira; Kowata, Yasuki

    1998-06-01

    In this report, we performed a parameter survey analysis using the analysis code STEDFAST (Space, TErrestrial and Deep-sea FAST reactor-gas turbine system). For the deep-sea fast reactor-gas turbine system, calculations with many variable parameters were performed on the base case of a NaK-cooled reactor of 40 kWe. We focused on the total equipment weight and the surface area necessary to remove heat from the system as important characteristics of the system. The electric generation power and the material of the pressure hull were especially influential for the weight; the electric generation power, the reactor outlet/inlet temperatures and the natural-convection heat transfer coefficient of sea water were especially influential for the area. For the space reactor-gas turbine system, calculations with the compressor inlet temperature, reactor outlet/inlet temperatures and turbine inlet pressure as variable parameters were performed on the base case of a Na-cooled reactor of 40 kWe. The first two parameters were especially influential for the total equipment weight, the important characteristic of this system. For the terrestrial fast reactor-gas turbine system, calculations were performed on the base case of a Pb-cooled reactor of 100 MWt, varying the number of heat-transfer pipes in a heat exchanger producing hot water of 100°C for cogeneration, the number of compressor stages, and the kind of primary coolant material. Comparing the results for Pb and Na as primary coolant material, the primary coolant weight flow rate was naturally larger in the former case than in the latter because the densities differ greatly. (J.P.N.)

  11. Simple multicomponent batch distillation procedure with a variable reflux policy

    Directory of Open Access Journals (Sweden)

    A. N. García

    2014-06-01

    Full Text Available This paper describes a shortcut procedure for batch distillation simulation with a variable reflux policy. The procedure starts from a shortcut method developed by Sundaram and Evans in 1993 and uses an iterative cycle to calculate the reflux ratio at each moment. The functional relationship between the concentrations at the bottom and at the top of the column is evaluated using the Fenske equation and is complemented with the equations proposed by Underwood and Gilliland. The results of this procedure are consistent with those obtained using a fast method widely validated in the relevant literature.
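
    The Fenske step can be sketched as below, under the usual constant-relative-volatility assumption; the inverted form that recovers the bottoms composition is shown alongside (generic symbols, not the paper's notation):

        import math

        def fenske_nmin(xD, xB, alpha):
            # Minimum number of stages for a light-key split xB -> xD.
            return math.log((xD / (1 - xD)) * ((1 - xB) / xB)) / math.log(alpha)

        def fenske_xB(xD, nmin, alpha):
            # Invert the Fenske equation for the bottoms composition.
            r = (xD / (1 - xD)) / alpha ** nmin
            return r / (1 + r)

        print(fenske_nmin(0.95, 0.05, 2.5))   # ~6.4 stages
        print(fenske_xB(0.95, 6.4, 2.5))      # ~0.05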

  12. THE CALCULATION OF WOODEN CONSTRUCTIONS TAKING INTO ACCOUNT THE CREEP OF WOOD ON THE EXAMPLE OF A STATICALLY INDETERMINATE LENTICULAR BLOCKED TRUSS

    Directory of Open Access Journals (Sweden)

    I. S. Inzhutov

    2017-01-01

    Full Text Available Objectives The aim of the study is to refine the calculation of wooden constructions, in particular to use a variable elastic modulus for the calculation of the second group of the limiting state in order to predict the deformations more accurately. Methods The study is carried out using a method of creep consideration based on the use of either a variable elastic modulus or the "modulus of total deformations" for the calculations. These moduli, besides the elastic deformations, account for residual deformations, whose fraction increases with increasing stress levels in the wooden elements. Results The calculation of a statically indeterminate spatial timber-metallic lenticular block-truss loaded with a uniformly distributed load is carried out. At the first stage, the construction was calculated using an elastic modulus of all wooden elements E = 10000 MPa in accordance with the set of rules SP 64.13330.2011 (updated version of SNiP II-25-80). At the second stage, the elastic modulus was replaced by a variable one, i.e., matched to the level of stresses in the elements by means of interpolation. The obtained deflection values are analysed and compared to the limiting value for the construction. The study was conducted without taking into account the flexibility of node connections and defects of the wood, which can also have a significant effect on the deflection value. Conclusion The use of a variable elastic modulus for calculations significantly influences the magnitude of deformations (in our case, deflections increased by 30%). The study confirms the need to take into account the creep of wood and to refine the calculations of wooden structures. Such an approximating dependence at different moisture levels of wood will allow the calculation of wooden structures to be performed at a higher theoretical level.

  13. Spatial variability of correlated color temperature of lightning channels

    Directory of Open Access Journals (Sweden)

    Nobuaki Shimoji

    Full Text Available In this paper, we present the spatial variability of the correlated color temperature of a lightning channel shown in a digital still image. In order to analyze the correlated color temperature, we calculated the chromaticity coordinates of the lightning channel in the digital still image. The results confirm spatial variation of the correlated color temperature along the lightning channel. Moreover, the results suggest that the correlated color temperature and the peak current of the lightning channel are related to each other. Keywords: Lightning, Color analysis, Correlated color temperature, Chromaticity coordinate, CIE 1931 xy-chromaticity diagram
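
    For illustration, McCamy's standard approximation (not necessarily the method used in the paper) maps a pixel's CIE 1931 chromaticity coordinates to a correlated color temperature:

        def mccamy_cct(x, y):
            # Approximate CCT (K) from CIE 1931 (x, y);
            # valid roughly between 2000 K and 30000 K.
            n = (x - 0.3320) / (0.1858 - y)
            return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

        print(mccamy_cct(0.3127, 0.3290))   # ~6500 K for D65 white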

  14. Sequence determinants of human microsatellite variability

    Directory of Open Access Journals (Sweden)

    Jakobsson Mattias

    2009-12-01

    Full Text Available Abstract Background Microsatellite loci are frequently used in genomic studies of DNA sequence repeats and in population studies of genetic variability. To investigate the effect of sequence properties of microsatellites on their level of variability we have analyzed genotypes at 627 microsatellite loci in 1,048 worldwide individuals from the HGDP-CEPH cell line panel together with the DNA sequences of these microsatellites in the human RefSeq database. Results Calibrating PCR fragment lengths in individual genotypes by using the RefSeq sequence enabled us to infer repeat number in the HGDP-CEPH dataset and to calculate the mean number of repeats (as opposed to the mean PCR fragment length), under the assumption that differences in PCR fragment length reflect differences in the numbers of repeats in the embedded repeat sequences. We find the mean and maximum numbers of repeats across individuals to be positively correlated with heterozygosity. The size and composition of the repeat unit of a microsatellite are also important factors in predicting heterozygosity, with tetra-nucleotide repeat units high in G/C content leading to higher heterozygosity. Finally, we find that microsatellites containing more separate sets of repeated motifs generally have higher heterozygosity. Conclusions These results suggest that sequence properties of microsatellites have a significant impact in determining the features of human microsatellite variability.
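
    The variability measure in question can be sketched as the expected heterozygosity of a locus computed from its allele (repeat-count) frequencies; the observations below are invented:

        from collections import Counter

        def expected_heterozygosity(alleles):
            # H = 1 - sum(p_i^2) over allele frequencies p_i.
            n = len(alleles)
            return 1.0 - sum((c / n) ** 2 for c in Counter(alleles).values())

        # Repeat numbers observed at one microsatellite locus:
        print(expected_heterozygosity([12, 12, 13, 14, 14, 14, 15, 16]))  # 0.75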

  15. Some novel inequalities for fuzzy variables on the variance and its rational upper bound

    Directory of Open Access Journals (Sweden)

    Xiajie Yi

    2016-02-01

    Full Text Available Abstract Variance is of great significance in measuring the degree of deviation, which has gained extensive usage in many fields in practical scenarios. The definition of the variance on the basis of the credibility measure was first put forward in 2002. Following this idea, the calculation of the accurate value of the variance for some special fuzzy variables, like the symmetric and asymmetric triangular fuzzy numbers and the Gaussian fuzzy numbers, is presented in this paper, which turns out to be far more complicated. Thus, in order to better implement variance in real-life projects like risk control and quality management, we suggest a rational upper bound of the variance based on an inequality, together with its calculation formula, which can largely simplify the calculation process within a reasonable range. Meanwhile, some discussions between the variance and its rational upper bound are presented to show the rationality of the latter. Furthermore, two inequalities regarding the rational upper bound of variance and standard deviation of the sum of two fuzzy variables and their individual variances and standard deviations are proved. Subsequently, some numerical examples are illustrated to show the effectiveness and the feasibility of the proposed inequalities.

  16. Daily affect variability and context-specific alcohol consumption.

    Science.gov (United States)

    Mohr, Cynthia D; Arpin, Sarah; McCabe, Cameron T

    2015-11-01

    Research explored the effects of variability in negative and positive affect on alcohol consumption, specifying daily fluctuation in affect as a critical form of emotion dysregulation. Using daily process methodology allows for a more objective calculation of affect variability relative to traditional self-reports. The present study models within-person negative and positive affect variabilities as predictors of context-specific consumption (i.e. solitary vs. social drinking), controlling for mean levels of affect. A community sample of moderate-to-heavy drinkers (n = 47; 49% women) from a US metropolitan area reported on affect and alcohol consumption thrice daily for 30 days via a handheld electronic interviewer. Within-person affect variability was calculated using daily standard deviations in positive and negative affect. Within person, greater negative and positive variabilities are related to greater daily solitary and social consumption. Across study days, mean levels of negative and positive affect variabilities related to greater social consumption between persons; yet, aggregated negative affect variability was related to less solitary consumption. Results affirm affect variability as a unique predictor of alcohol consumption, independent of mean affect levels. Yet, it is important to differentiate social context of consumption, as well as type of affect variability, particularly at the between-person level. These distinctions help clarify inconsistencies in the self-medication literature regarding associations between average levels of affect and consumption. Importantly, consistent within-person relationships for both variabilities support arguments that both negative and positive affect variabilities are detrimental and reflect an inability to regulate emotional experience. © 2015 Australasian Professional Society on Alcohol and other Drugs.
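
    A minimal sketch of the variability measure, assuming a long-format table of repeated affect reports (column names invented): the within-person standard deviation separates a labile reporter from a stable one even when their means are similar:

        import pandas as pd

        reports = pd.DataFrame({
            "person": ["p1"] * 6 + ["p2"] * 6,
            "neg_affect": [1.2, 3.5, 1.0, 4.0, 1.1, 3.8,   # labile reporter
                           2.2, 2.4, 2.3, 2.1, 2.5, 2.2],  # stable reporter
        })

        variability = reports.groupby("person")["neg_affect"].agg(["mean", "std"])
        print(variability)   # similar means, very different SDs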

  17. Implicit thermohydraulic coupling of two-phase flow calculations

    International Nuclear Information System (INIS)

    Lekach, S.; Kaufman, J.M.

    1980-01-01

    A numerical scheme that implicitly couples the hydraulic variables with the thermal variables during a one- or two-phase transient calculation in a one-dimensional pipe is presented. The transients are performed to achieve a steady-state condition. It is shown that, by preserving with an implicit flux treatment the strong interdependence that exists between the hydraulic and thermal variables, it is possible to achieve a greater degree of numerical stability in less computer time than with an explicit treatment. The method is slightly more complex, but the large-time-step advantage more than pays for the overhead.

  18. Short- and long-term variability of radon progeny concentration in dwellings in the Czech Republic.

    Science.gov (United States)

    Slezáková, M; Navrátilová Rovenská, K; Tomásek, L; Holecek, J

    2013-03-01

    In this paper, repeated measurements of radon progeny concentration in dwellings in the Czech Republic are described. Two distinct data sets are available: one based on present measurements in 170 selected dwellings in the Central Bohemian Pluton with a primary measurement carried out in the 1990s and the other based on 1920 annual measurements in 960 single-family houses in the Czech Republic in 1992 and repeatedly in 1993. The analysis of variance model with random effects is applied to data to evaluate the variability of measurements. The calculated variability attributable to repeated measurements is compared with results from other countries. In epidemiological studies, ignoring the variability of measurements may lead to biased estimates of risk of lung cancer.
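
    The variance decomposition can be sketched as a balanced one-way random-effects ANOVA, assuming two repeats per dwelling and a log-scale analysis; the data are synthetic:

        import numpy as np

        rng = np.random.default_rng(5)
        n_dwellings, n_repeats = 200, 2
        true_levels = rng.lognormal(5.0, 0.8, n_dwellings)
        obs = true_levels[:, None] * rng.lognormal(0.0, 0.3,
                                                   (n_dwellings, n_repeats))
        logs = np.log(obs)                  # analyse on the log scale

        grand = logs.mean()
        msb = n_repeats * ((logs.mean(axis=1) - grand) ** 2).sum() \
              / (n_dwellings - 1)
        msw = ((logs - logs.mean(axis=1, keepdims=True)) ** 2).sum() \
              / (n_dwellings * (n_repeats - 1))

        var_within = msw                    # repeated-measurement variability
        var_between = (msb - msw) / n_repeats
        print(var_between, var_within)      # ~0.64 and ~0.09 here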

  19. Calculation of climate factors as an additional criteria to determine agriculturally less favoured areas

    Directory of Open Access Journals (Sweden)

    Tjaša POGAČAR

    2016-04-01

    Full Text Available Climate factors proposed for determining agriculturally less favoured areas (LFA) in Slovenia were analyzed for the period 1981–2010. Following the European Commission instructions prepared by the Joint Research Centre (JRC), 30-year averages of the low-air-temperature criterion (the duration of the vegetation period and the sums of effective air temperatures) and of the aridity criterion (the aridity index, AI) have to be calculated. Calculations were additionally done using the Slovenian Environment Agency (ARSO) method, which differs slightly in how temperature thresholds are determined. Only hilly areas fall below the LFA low-air-temperature threshold, the lowest-lying meteorological station being Rateče. According to the aridity criterion, no area in Slovenia is below the threshold, so the meteorological water balance was also examined. The average water balance in the period 1981–2010 was at most locations lower than in the period 1971–2000. Climate change impacts are already expressed as trends in the time series of the studied variables, so it is recommended to calculate trends and take them into account, or to repeat the calculations at regular intervals.
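
    The two criteria can be sketched as below; the 5 °C base temperature and the P/PET form of the aridity index are common conventions, not necessarily the exact JRC definitions:

        import numpy as np

        def effective_temperature_sum(daily_mean_t, base=5.0):
            # Sum of (T - base) over days with T > base.
            t = np.asarray(daily_mean_t)
            return np.sum(np.clip(t - base, 0.0, None))

        def aridity_index(annual_precip_mm, annual_pet_mm):
            # UNEP-style AI = P / PET; smaller means drier.
            return annual_precip_mm / annual_pet_mm

        t = 10.0 + 9.0 * np.sin(np.linspace(0, 2 * np.pi, 365))  # synthetic year
        print(effective_temperature_sum(t), aridity_index(1100.0, 850.0))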

  20. Interrelationships between morphometric variables and rounded fish body yields evaluated by path analysis

    Directory of Open Access Journals (Sweden)

    Rafael Vilhena Reis Neto

    2012-07-01

    Full Text Available The objective of this study was to verify which morphometric measures and ratios are most directly associated with the weight and body yields of rounded fish. A total of 225 specimens of rounded fish (59 pacus, 61 tambaquis, 52 tambacus and 53 paquis) with an average weight of 972.43 g (±115.52 g) were sampled, stunned, slaughtered, weighed, measured, and processed for morphometric and processing-yield analysis. The morphometric measures taken were: standard length (CP), head length (CC), head height (AC), body height (A1), and body width (L1). For completeness, the following morphometric ratios were calculated: CC/CP, AC/CP, A1/CP, L1/CP, CC/A1, AC/A1, L1/A1, CC/AC and L1/CC. The yields of carcass, filet, rib and filet with rib were estimated after processing. Initially, a "stepwise" procedure was performed in order to eliminate multicollinearity problems among the morphometric variables, and the phenotypic correlations were then calculated for the dependent variables (weight and body yields) and independent variables (morphometric measurements and ratios). These correlations were later decomposed into direct and indirect effects through path analysis, and the direct and indirect contributions of each variable were measured in percentage terms. The CC and A1 measures were important for determining the weight of rounded fish. The CC/A1 ratio was the variable most directly associated with carcass yield. For filet, filet-with-rib and rib yields, the L1/CC ratio was found to be more appropriate and can be used directly.
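
    The path-analysis step can be sketched as solving the normal equations on the correlation scale: the direct effects follow from R_xx·beta = r_xy, and each correlation with the response then splits into a direct part and a summed indirect part. The values below are invented:

        import numpy as np

        # Correlations among three standardized predictors (e.g. CC, A1, L1)...
        R_xx = np.array([[1.0, 0.6, 0.4],
                         [0.6, 1.0, 0.5],
                         [0.4, 0.5, 1.0]])
        # ...and their correlations with the response (e.g. body weight).
        r_xy = np.array([0.7, 0.65, 0.5])

        beta = np.linalg.solve(R_xx, r_xy)    # direct effects (path coefficients)
        indirect = r_xy - beta                # summed indirect effects
        for name, b, i in zip(["CC", "A1", "L1"], beta, indirect):
            print(f"{name}: direct {b:+.3f}, indirect {i:+.3f}")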

  1. Consensus structures of the Mo(v) sites of sulfite-oxidizing enzymes derived from variable frequency pulsed EPR spectroscopy, isotopic labelling and DFT calculations.

    Science.gov (United States)

    Enemark, John H

    2017-10-10

    Sulfite-oxidizing enzymes from eukaryotes and prokaryotes have five-coordinate distorted square-pyramidal coordination about the molybdenum atom. The paramagnetic Mo(v) state is easily generated, and over the years four distinct CW EPR spectra have been identified, depending upon enzyme source and the reaction conditions, namely high and low pH (hpH and lpH), phosphate-inhibited (Pi) and sulfite (or blocked). Extensive studies of these paramagnetic forms of sulfite-oxidizing enzymes using variable frequency pulsed electron spin echo (ESE) spectroscopy, isotopic labeling and density functional theory (DFT) calculations have led to the consensus structures that are described here.

  2. Calculation of locomotive traction force in transient rolling contact

    Directory of Open Access Journals (Sweden)

    Voltr P.

    2017-06-01

    Full Text Available To represent thewheel-rail contact in numerical simulations of rail vehicles, simplified models (Fastsim, Pola´ch etc. are usually employed. These models are designed for steady rolling only, which is perfectly suitable in many cases. However, it is shown to be limiting for simulations at very low vehicle speeds, and therefore it does not actually allow simulation of vehicle running at arbitrarily variable speed. The simplified model of transient rolling, which involves calculation of the stress distribution in the discretised contact area, overcomes this disadvantage but might be unnecessarily complex for more simple simulations. In this paper, an approximative creep force computation method for transient rolling is presented. Its purpose is not to study the transient phenomena themselves but provide a simple and readily available way to prevent incorrect results of the numerical simulation when the vehicle speed approaches zero. The proper function of the proposed method is demonstrated by a simulation of start-up and interrupted sliding of a four-axle locomotive.

  3. Calculated Low-Speed Steady and Time-Dependent Aerodynamic Derivatives for Some Airfoils Using a Discrete Vortex Method

    Science.gov (United States)

    Riley, Donald R.

    2015-01-01

    This paper contains a collection of some results of four individual studies presenting calculated numerical values for airfoil aerodynamic stability derivatives in unseparated inviscid incompressible flow due separately to angle-of-attack, pitch rate, flap deflection, and airfoil camber using a discrete vortex method. Both steady conditions and oscillatory motion were considered. Variables include the number of vortices representing the airfoil, the pitch axis / moment center chordwise location, flap chord to airfoil chord ratio, and circular or parabolic arc camber. Comparisons with some experimental and other theoretical information are included. The calculated aerodynamic numerical results obtained using a limited number of vortices provided in each study compared favorably with thin airfoil theory predictions. Of particular interest are those aerodynamic results calculated herein (such as induced drag) that are not readily available elsewhere.

  4. PCC/SRC, PCC and SRC Calculation from Multivariate Input for Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.; Johnson, J.D.

    1995-01-01

    1 - Description of program or function: PCC/SRC is designed for use in conjunction with sensitivity analyses of complex computer models. PCC/SRC calculates the partial correlation coefficients (PCC) and the standardized regression coefficients (SRC) from the multivariate input to, and output from, a computer model. 2 - Method of solution: PCC/SRC calculates the coefficients on either the original observations or on the ranks of the original observations. These coefficients provide alternative measures of the relative contribution (importance) of each of the various input variables to the observed variations in output. Relationships between the coefficients and differences in their interpretations are identified. If the computer model output has an associated time or spatial history, PCC/SRC will generate a graph of the coefficients over time or space for each input-variable/output-variable combination of interest, indicating the importance of each input value over time or space. 3 - Restrictions on the complexity of the problem - Maxima of: 100 observations, 100 different time steps or intervals between successive dependent-variable readings, 50 independent variables (model input), 20 dependent variables (model output), 10 ordered triples specifying intervals between dependent-variable readings.
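
    What the program computes for one output variable can be sketched as follows (a NumPy illustration, not the original implementation):

        import numpy as np

        rng = np.random.default_rng(6)
        X = rng.normal(size=(100, 3))                 # model inputs
        y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.5 * rng.normal(size=100)

        # SRC: least-squares coefficients on standardized data.
        Z = (X - X.mean(0)) / X.std(0)
        w = (y - y.mean()) / y.std()
        src, *_ = np.linalg.lstsq(Z, w, rcond=None)

        def pcc(j):
            # Correlation of input j with the output after regressing
            # the other inputs out of both.
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(y)), others])
            rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
            ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            return np.corrcoef(rx, ry)[0, 1]

        print("SRC:", src.round(2),
              "PCC:", [round(pcc(j), 2) for j in range(3)])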

  5. Elementary exact calculations of degree growth and entropy for discrete equations.

    Science.gov (United States)

    Halburd, R G

    2017-05-01

    Second-order discrete equations are studied over the field of rational functions C(z), where z is a variable not appearing in the equation. The exact degree of each iterate as a function of z can be calculated easily using the standard calculations that arise in singularity confinement analysis, even when the singularities are not confined. This produces elementary yet rigorous entropy calculations.
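
    To make the degree-growth calculation concrete, here is a minimal sympy sketch; the example recurrence and initial conditions are arbitrary illustrative choices, not taken from the paper.

        import sympy as sp

        z = sp.symbols('z')

        def deg(r):
            """Exact degree of a rational function of z (max of num/den degrees)."""
            num, den = sp.fraction(sp.cancel(r))
            return max(sp.degree(num, z), sp.degree(den, z))

        def degree_sequence(step, x0, x1, n):
            """Iterate x_{k+1} = step(x_k, x_{k-1}) over C(z); return exact degrees."""
            xs = [sp.sympify(x0), sp.sympify(x1)]
            degs = [deg(xs[0]), deg(xs[1])]
            for _ in range(n):
                xs.append(sp.cancel(step(xs[-1], xs[-2])))
                degs.append(deg(xs[-1]))
            return degs

        # McMillan-type map: confined singularities, polynomial degree growth,
        # hence zero algebraic entropy (entropy = lim (1/n) log d_n).
        print(degree_sequence(lambda x, y: (x**2 + 1) / y, z, z + 1, 8))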

  6. Estimating annoyance to calculated wind turbine shadow flicker is improved when variables associated with wind turbine noise exposure are considered.

    Science.gov (United States)

    Voicescu, Sonia A; Michaud, David S; Feder, Katya; Marro, Leonora; Than, John; Guay, Mireille; Denning, Allison; Bower, Tara; van den Berg, Frits; Broner, Norm; Lavigne, Eric

    2016-03-01

    The Community Noise and Health Study conducted by Health Canada included randomly selected participants aged 18-79 yrs (606 males, 632 females, response rate 78.9%), living between 0.25 and 11.22 km from operational wind turbines. Annoyance to wind turbine noise (WTN) and other features, including shadow flicker (SF), was assessed. The current analysis reports on the degree to which estimating high annoyance to wind turbine shadow flicker (HAWTSF) was improved when variables known to be related to WTN exposure were also considered. HAWTSF increased with SF exposure [calculated as maximum minutes per day (SFm)], from 3.8% in the lowest exposure category. Variables retained in the final model included wind turbine-related features, concern for physical safety, and noise sensitivity. Reported dizziness was also retained in the final model at p = 0.0581. Study findings add to the growing science base in this area and may be helpful in identifying factors associated with community reactions to SF exposure from wind turbines.

  7. Evaluating variability and uncertainty in radiological impact assessment using SYMBIOSE

    International Nuclear Information System (INIS)

    Simon-Cornu, M.; Beaugelin-Seiller, K.; Boyer, P.; Calmon, P.; Garcia-Sanchez, L.; Mourlon, C.; Nicoulaud, V.; Sy, M.; Gonze, M.A.

    2015-01-01

    SYMBIOSE is a modelling platform that accounts for variability and uncertainty in radiological impact assessments when simulating the environmental fate of radionuclides and assessing doses to human populations. The default database of SYMBIOSE is partly based on parameter values that are summarized within International Atomic Energy Agency (IAEA) documents. To characterize the uncertainty on the transfer parameters, 331 Probability Distribution Functions (PDFs) were defined from the summary statistics provided within the IAEA documents (i.e. sample size, minimum and maximum values, arithmetic and geometric means, standard and geometric standard deviations) and are made available as spreadsheet files. The methods used to derive the PDFs without complete data sets, but merely the summary statistics, are presented. Then, a simple case study illustrates the use of the database in a second-order Monte Carlo calculation, separating parametric uncertainty and inter-individual variability. - Highlights: • Parametric uncertainty in radioecology was derived from IAEA documents. • 331 Probability Distribution Functions were defined for transfer parameters. • Parametric uncertainty and inter-individual variability were propagated
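
    The second-order Monte Carlo structure mentioned above separates the two kinds of randomness in nested loops. The following is a minimal sketch with an entirely hypothetical one-parameter dose model and distributions standing in for the SYMBIOSE database.

        import numpy as np

        rng = np.random.default_rng(0)
        n_outer, n_inner = 200, 1000   # parameter draws vs. individual draws

        doses = np.empty((n_outer, n_inner))
        for i in range(n_outer):
            # Outer loop: parametric uncertainty, one draw of a transfer
            # parameter from its uncertainty PDF (hypothetical lognormal).
            transfer = rng.lognormal(mean=np.log(0.1), sigma=0.5)
            # Inner loop: inter-individual variability, e.g. intake rates.
            intake = rng.lognormal(mean=np.log(500.0), sigma=0.3, size=n_inner)
            doses[i] = transfer * intake   # hypothetical dose model

        # Each outer iteration yields one population distribution; the spread of
        # a percentile across outer iterations expresses parametric uncertainty.
        p95 = np.percentile(doses, 95, axis=1)
        print(np.median(p95), np.percentile(p95, [5, 95]))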

  8. Radioactive cloud dose calculations

    International Nuclear Information System (INIS)

    Healy, J.W.

    1984-01-01

    Radiological dosage principles, as well as methods for calculating external and internal dose rates following the dispersion and deposition of radioactive materials in the atmosphere, are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, the methods for calculating dose rates from ingestion are discussed. Brief descriptions of several computer programs are included for information on radionuclides. There has been no attempt to be comprehensive, and only a sampling of programs has been selected to illustrate the variety available

  9. Ozone risk assessment in three oak species as affected by soil water availability.

    Science.gov (United States)

    Hoshika, Yasutomo; Moura, Barbara; Paoletti, Elena

    2018-03-01

    To derive ozone (O3) dose-response relationships for three European oak species (Quercus ilex, Quercus pubescens, and Quercus robur) under a range of soil water availability, an experiment was carried out with 2-year-old potted seedlings exposed to three levels of water availability in the soil and three levels of O3 pollution for one growing season in an ozone free-air controlled exposure (FACE) facility. Total biomass losses were estimated relative to a hypothetical clean air at the pre-industrial age, i.e., at 10 ppb as daily average (M24). A stomatal conductance model was parameterized with inputs from the three species for calculating the stomatal O3 flux. Exposure-based (M24, W126, and AOT40) and flux-based (phytotoxic O3 dose, POD0-3) dose-response relationships were estimated and critical levels (CL) were calculated for a 5% decline of total biomass. Results show that water availability can significantly affect O3 risk assessment. In fact, dose-response relationships calculated per individual species at each water availability level resulted in very different CLs and best metrics. In a simplified approach where species were aggregated on the basis of their O3 sensitivity, the best metric was POD0.5, with a CL of 6.8 mmol m-2 for the less O3-sensitive species Q. ilex and Q. pubescens and of 3.5 mmol m-2 for the more O3-sensitive species Q. robur. The performance of POD0, however, was very similar to that of POD0.5, and thus CLs of 6.9 mmol m-2 POD0 and 3.6 mmol m-2 POD0 for the less and more O3-sensitive oak species may also be recommended. These CLs can be applied to oak ecosystems at variable water availability in the soil. We conclude that PODy is able to reconcile the effects of O3 and soil water availability on species-specific oak productivity.
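
    The flux-based metric PODy is conventionally defined as the stomatal O3 flux accumulated above an hourly threshold y. A minimal sketch of that accumulation follows (the definition is standard in the literature; the flux values are made up).

        import numpy as np

        def pod_y(flux_nmol, y, dt_hours=1.0):
            """POD_y: stomatal O3 flux (nmol m-2 s-1) accumulated above y, in mmol m-2."""
            above = np.clip(np.asarray(flux_nmol) - y, 0.0, None)
            return above.sum() * dt_hours * 3600.0 * 1e-6

        hourly_flux = np.array([0.2, 1.4, 3.1, 4.0, 2.2, 0.6])  # made-up values
        print(pod_y(hourly_flux, y=0.5))   # POD0.5 for this short series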

  10. Relationship between neighbourhood socioeconomic position and neighbourhood public green space availability: An environmental inequality analysis in a large German city applying generalized linear models.

    Science.gov (United States)

    Schüle, Steffen Andreas; Gabriel, Katharina M A; Bolte, Gabriele

    2017-06-01

    The environmental justice framework states that, besides environmental burdens, resources may also be socially unequally distributed, both on the individual and on the neighbourhood level. This ecological study investigated whether neighbourhood socioeconomic position (SEP) was associated with neighbourhood public green space availability in a large German city with more than 1 million inhabitants. Two different measures were defined for green space availability. Firstly, the percentage of green space within neighbourhoods was calculated with the additional consideration of various buffers around the boundaries. Secondly, the percentage of green space was calculated based on various radii around the neighbourhood centroid. An index of neighbourhood SEP was calculated with principal component analysis. Log-gamma regression from the group of generalized linear models was applied in order to account for the non-normal distribution of the response variable. All models were adjusted for population density. Low neighbourhood SEP was associated with decreasing neighbourhood green space availability for 200 m up to 1000 m buffers around the neighbourhood boundaries. Low neighbourhood SEP was also associated with decreasing green space availability based on catchment areas measured from neighbourhood centroids with different radii (1000 m up to 3000 m). With an increasing radius, the strength of the associations decreased. Socially unequally distributed green space may amplify environmental health inequalities in an urban context. Thus, the identification of vulnerable neighbourhoods and population groups plays an important role for epidemiological research and healthy city planning. As a methodological aspect, log-gamma regression offers an adequate parametric modelling strategy for positively distributed environmental variables. Copyright © 2017 Elsevier GmbH. All rights reserved.

  11. Calculation of Critical Temperatures by Empirical Formulae

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2016-06-01

    Full Text Available The paper presents formulas used to calculate the critical temperatures of structural steels. Equations for calculating the temperatures Ac1, Ac3, Ms and Bs were elaborated based on the chemical composition of the steel. The multiple regression method was used to derive the equations. Particular attention was paid to the collection of the experimental data required to calculate the regression coefficients, including preparation of the data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and was prepared based on information available in the literature on the subject.
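
    As an illustration of the kind of fit involved, here is a minimal multiple-regression sketch; the composition columns, data values, and resulting coefficients are placeholders, not the paper's data set or its published equations.

        import numpy as np

        # Hypothetical training table: wt.% C, Mn, Si, Cr, Ni and measured Ac1 (deg C).
        composition = np.array([
            [0.20, 0.70, 0.25, 0.10, 0.05],
            [0.45, 0.60, 0.30, 1.00, 0.20],
            [0.10, 1.20, 0.40, 0.05, 0.10],
            [0.35, 0.80, 0.20, 0.50, 1.50],
            [0.55, 0.50, 0.35, 0.90, 0.30],
            [0.25, 1.00, 0.15, 0.20, 0.60],
            [0.30, 0.90, 0.28, 0.15, 0.08],
        ])
        ac1_measured = np.array([727.0, 740.0, 715.0, 710.0, 745.0, 720.0, 730.0])

        # Least-squares fit of Ac1 = b0 + b1*C + b2*Mn + ... (multiple regression).
        design = np.column_stack([np.ones(len(ac1_measured)), composition])
        coeffs = np.linalg.lstsq(design, ac1_measured, rcond=None)[0]

        new_steel = np.array([1.0, 0.30, 0.90, 0.25, 0.40, 0.30])  # leading 1 = intercept
        print(round(new_steel @ coeffs))   # predicted Ac1 for a new composition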

  12. Rainfall Variability and Landuse Conversion Impacts to Sensitivity of Citarum River Flow

    Directory of Open Access Journals (Sweden)

    Dyah Marganingrum

    2013-07-01

    Full Text Available The objective of this study is to determine the sensitivity of Citarum river flow to climate change and land conversion, providing the flow information required for water resources sustainability. Saguling reservoir is one of the strategic reservoirs; 75% of its water comes from the inflow of the Upper Citarum measured at Nanjung station. Climate variability was identified as rainfall variability. Sensitivity was calculated as the elasticity value of discharge using a three-variate statistical model. The landuse conversion was calculated using GIS for 1994 and 2004. The results showed that the elasticity at the Nanjung station and Saguling station decreased from 1.59 and 1.02 to 0.68 and 0.62, respectively. The decrease occurred from the period before the dam was built (1950-1980) to the period after the reservoir began operating (1986-2008). This value indicates that: 1) Citarum river flow is more sensitive to the rainfall variability recorded at Nanjung station than at Saguling station; 2) the rainfall character is more difficult to predict. The landuse analysis shows that the forest area decreased by ±27% and the built-up area increased by ±26%. These changes implied a reduction in minimum rainfall of ±8% and in minimum flow of ±46%. These were caused by land conversion, showing that vegetation functions to maintain the base flow for sustainable water resource infrastructure.

  13. STUDY OF GENETIC VARIABILITY OF TRITICALE VARIETIES BY SSR MARKERS

    Directory of Open Access Journals (Sweden)

    Jana Ondroušková

    2013-04-01

    Full Text Available For the detection of genetic variability, ten genotypes of winter triticale (×Triticosecale Wittmack, 2n = 6x = 42; BBAARR) were selected: nine varieties and one breeding line with good bread-making quality, KM 4-09, with the chromosome translocation 1R.1D 5+10-2. Twenty-five microsatellite markers located in the A, B, D and R genomes were chosen for the analysis. Eighty-four alleles were detected, with an average of 3.36 alleles per locus. For each microsatellite, the diversity index (DI), probability of identity (PI) and polymorphic information content (PIC) were calculated; the average values were DI 0.55, PI 0.27 and PIC 0.50. The overall dendrogram based on the UPGMA method (Jaccard's similarity coefficient) clearly distinguished two groups of genotypes, and these groups were divided into sub-clusters. A set of 5 SSR markers (Xwms0752, Xbarc128, Xrems1237, Xwms0861 and Xbrac170), each with a calculated PIC value higher than 0.68, was described as sufficient for the identification of the analyzed genotypes.

  14. Calculation of Rydberg interaction potentials

    DEFF Research Database (Denmark)

    Weber, Sebastian; Tresp, Christoph; Menke, Henri

    2017-01-01

    This tutorial covers the calculation of the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials, including all features discussed in this tutorial, is available as open source.

  15. Variable & Recode Definitions - SEER Documentation

    Science.gov (United States)

    Resources that define variables and provide documentation for reporting using SEER and related datasets. Choose from SEER coding and staging manuals plus instructions for recoding behavior, site, stage, cause of death, insurance, and several additional topics. Also guidance on months survived, calculating Hispanic mortality, and site-specific surgery.

  16. Validity and reliability of the "German Utilization Questionnaire-Dissemination and Use of Research" to measure attitude, availability, and support toward implementation of research in nursing practice.

    Science.gov (United States)

    Haslinger-Baumann, Elisabeth; Lang, Gert; Müller, Gerhard

    2014-01-01

    In nursing practice, research results have to undergo a systematic process of transformation. Currently in Austria, there are no empirical data available concerning the actual implementation of research results. A validated English questionnaire was translated into German and tested for validity and reliability. A survey of 178 registered nurses (n = 178) was conducted in a multicenter, quantitative, cross-sectional study in Austria in 2011. Cronbach's alpha values (.82-.92) were calculated for 4 variables ("use," "attitude," "availability," "support") after the reduction of 7 irrelevant items. Exploratory factor analysis was calculated with Kaiser-Meyer-Olkin (KMO) values ranging from .78 to .92; the total variance explained ranged from 46% to 56%. A validated German questionnaire concerning the implementation of research results is now available for nursing practice.

  17. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability.

    Science.gov (United States)

    Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine; Lichtenstein, Alice H

    2016-10-01

    The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. We examined the intra- and inter-individual variability in glycemic response to a single food challenge and the methodologic and biological factors that potentially mediate this response. The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18-85 y), and body mass index [BMI (in kg/m2): 20-35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing sample size, replication of reference and test foods, and length of blood sampling, as well as the AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in the mean GI value for white bread, respectively. These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that it is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status significantly contributes to the variability in GI value determinations.

  18. CALCULATION OF LASER CUTTING COSTS

    Directory of Open Access Journals (Sweden)

    Bogdan Nedic

    2016-09-01

    Full Text Available The paper presents a description of metal cutting methods and a calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables resolving the problem of calculating the cost of laser cutting, comparison of the costs incurred by other unconventional methods, and provides documentation that consists of reports on estimated costs.

  19. Assessment of regional air pollution variability in Istanbul

    International Nuclear Information System (INIS)

    Sen, Z.; Oztopal, A.

    2001-01-01

    Air pollution concentrations have temporal and spatial variations depending on the prevailing weather conditions, topographic features, city building heights and locations. When measurements of air pollutants are available at a set of measurement sites, the degree of regional variability of air pollutants can be quantified using the point cumulative semi-variogram (PCSV). This technique provides a systematic method for calculating the changes in the concentrations of air pollutants with distance from a specific site. Regional variations of sulphur dioxide (SO2) and total suspended particulate (TSP) matter concentrations in Istanbul were evaluated using the PCSV concept. The data were available from 16 different air pollution measurement stations scattered all over the city for the period from 1988 to 1994. Monthly regional variation maps were drawn in and around the city at different radii of influence. These maps provide a reference for measuring future changes of air pollution in the city. (author)

  20. Associations between Whole-Grain Intake, Psychosocial Variables, and Home Availability among Elementary School Children

    Science.gov (United States)

    Rosen, Renee A.; Burgess-Champoux, Teri L.; Marquart, Len; Reicks, Marla M.

    2012-01-01

    Objective: Develop, refine, and test psychosocial scales for associations with whole-grain intake. Methods: A cross-sectional survey was conducted in a Minneapolis/St. Paul suburban elementary school with children in fourth through sixth grades (n = 98) and their parents (n = 76). Variables of interest were child whole-grain intake, self-efficacy,…

  1. Minaret, a deterministic neutron transport solver for nuclear core calculations

    Energy Technology Data Exchange (ETDEWEB)

    Moller, J-Y.; Lautard, J-J., E-mail: jean-yves.moller@cea.fr, E-mail: jean-jacques.lautard@cea.fr [CEA - Centre de Saclay , Gif sur Yvette (France)

    2011-07-01

    We present here MINARET, a deterministic transport solver for nuclear core calculations that solves the steady-state Boltzmann equation. The code follows the multigroup formalism to discretize the energy variable. It uses the discrete ordinates method for the angular variable and a DGFEM to solve the Boltzmann equation spatially. The mesh is unstructured in 2D and semi-unstructured in 3D (cylindrical). Curved triangles can be used to fit the exact geometry. For the curved elements, two different sets of basis functions can be used. The transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 benchmark, the JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)

  2. Minaret, a deterministic neutron transport solver for nuclear core calculations

    International Nuclear Information System (INIS)

    Moller, J-Y.; Lautard, J-J.

    2011-01-01

    We present here MINARET, a deterministic transport solver for nuclear core calculations that solves the steady-state Boltzmann equation. The code follows the multigroup formalism to discretize the energy variable. It uses the discrete ordinates method for the angular variable and a DGFEM to solve the Boltzmann equation spatially. The mesh is unstructured in 2D and semi-unstructured in 3D (cylindrical). Curved triangles can be used to fit the exact geometry. For the curved elements, two different sets of basis functions can be used. The transport solver is accelerated with a DSA method. Diffusion and SPN calculations are made possible by skipping the transport sweep in the source iteration. The transport calculations are parallelized with respect to the angular directions. Numerical results are presented for simple geometries and for the C5G7 benchmark, the JHR reactor and the ESFR (in 2D and 3D). Straight and curved finite element results are compared. (author)
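
    To illustrate the source iteration referred to in both records (the loop whose transport sweep is skipped to obtain diffusion or SPN solutions), here is a minimal one-group, 1D discrete-ordinates sketch with diamond differencing. This is a generic textbook scheme, not MINARET's DGFEM discretization.

        import numpy as np

        def source_iteration(nx=100, length=10.0, sigma_t=1.0, sigma_s=0.5, q=1.0,
                             n_angles=8, tol=1e-8, max_iter=500):
            """1D slab, one energy group, isotropic scattering, vacuum boundaries."""
            dx = length / nx
            mu, w = np.polynomial.legendre.leggauss(n_angles)  # ordinates, weights
            phi = np.zeros(nx)                                 # scalar flux
            for it in range(max_iter):
                emission = 0.5 * (sigma_s * phi + q)           # isotropic source
                phi_new = np.zeros(nx)
                for m in range(n_angles):                      # sweep each direction
                    cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
                    psi_in, am = 0.0, abs(mu[m])               # vacuum inflow
                    for i in cells:
                        # diamond difference for mu*dpsi/dx + sigma_t*psi = emission
                        psi_c = (emission[i] + 2 * am / dx * psi_in) / (sigma_t + 2 * am / dx)
                        psi_in = 2 * psi_c - psi_in            # outgoing edge value
                        phi_new[i] += w[m] * psi_c
                if np.max(np.abs(phi_new - phi)) < tol * np.max(phi_new):
                    return phi_new, it + 1
                phi = phi_new
            return phi, max_iter

        flux, iters = source_iteration()
        print(iters, flux[50])   # interior flux -> q/(sigma_t - sigma_s) = 2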

  3. A Validated Smartphone-Based Assessment of Gait and Gait Variability in Parkinson's Disease.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available A well-established connection exists between increased gait variability and greater fall likelihood in Parkinson's disease (PD); however, a portable, validated means of quantifying gait variability (and testing the efficacy of any intervention) remains lacking. Furthermore, although rhythmic auditory cueing continues to receive attention as a promising gait therapy for PD, its widespread delivery remains bottlenecked. The present paper describes a smartphone-based mobile application ("SmartMOVE") to address both needs. The accuracy of smartphone-based gait analysis (utilizing the smartphone's built-in tri-axial accelerometer and gyroscope to calculate successive step times and step lengths) was validated against two heel contact-based measurement devices: heel-mounted footswitch sensors (to capture step times) and an instrumented pressure sensor mat (to capture step lengths). 12 PD patients and 12 age-matched healthy controls walked along a 26-m path during self-paced and metronome-cued conditions, with all three devices recording simultaneously. Four outcome measures of gait and gait variability were calculated. Mixed-factorial analysis of variance revealed several instances in which between-group differences (e.g., increased gait variability in PD patients relative to healthy controls) yielded medium-to-large effect sizes (eta-squared values), and cueing-mediated changes (e.g., decreased gait variability when PD patients walked with auditory cues) yielded small-to-medium effect sizes, while at the same time device-related measurement error yielded small-to-negligible effect sizes. These findings highlight specific opportunities for smartphone-based gait analysis to serve as an alternative to conventional gait analysis methods (e.g., footswitch systems or sensor-embedded walkways), particularly when those methods are cost-prohibitive, cumbersome, or inconvenient.

  4. Point kinetics improvements to evaluate three-dimensional effects in transients calculation

    International Nuclear Information System (INIS)

    Castellotti, U.

    1987-01-01

    A calculation method which accounts for axial flux perturbations in the reactivity-related parameters within a point kinetics model is described. The method uses axial weighting factors which act on the thermohydraulic variables included in the reactivity calculation. The PUMA three-dimensional code is used as the reference model for the comparisons. The limitations inherent to the reactivity balance of the point models used in transient calculations are given. (Author)
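
    For reference, the standard point kinetics equations that such a model builds on are shown below in LaTeX, with a generic feedback-type reactivity term; the feedback coefficients and temperature terms are placeholders, and the paper's axial factors would act on the thermohydraulic inputs to ρ(t).

        \begin{align}
        \frac{dn}{dt} &= \frac{\rho(t)-\beta}{\Lambda}\, n(t) + \sum_{i=1}^{6} \lambda_i C_i(t), \\
        \frac{dC_i}{dt} &= \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i C_i(t), \\
        \rho(t) &= \rho_{\mathrm{ext}}(t) + \alpha_f\,\Delta T_f(t) + \alpha_m\,\Delta T_m(t)
        \end{align}

    Here n is the neutron density, C_i are the delayed-neutron precursor concentrations, β = Σ β_i, Λ is the generation time, and α_f, α_m are generic fuel and moderator feedback coefficients.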

  5. Cliff´s Delta Calculator: A non-parametric effect size program for two groups of observations

    Directory of Open Access Journals (Sweden)

    Guillermo Macbeth

    2011-05-01

    Full Text Available The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-value interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of the behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software, which performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of their equivalence is formally presented. Two worked examples in cognitive psychology are commented on. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
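
    Cliff's Delta itself is simple to compute from its definition, δ = (#{x_i > y_j} − #{x_i < y_j}) / (n_x · n_y). A minimal sketch follows (this is the generic statistic, not the calculator program described in the paper).

        import numpy as np

        def cliffs_delta(x, y):
            """Cliff's Delta: P(X > Y) - P(X < Y), estimated over all pairs.

            Ranges from -1 to 1; 0 indicates complete overlap of the groups.
            """
            x = np.asarray(x)[:, None]
            y = np.asarray(y)[None, :]
            return ((x > y).sum() - (x < y).sum()) / (x.size * y.size)

        print(cliffs_delta([2, 4, 6, 8], [1, 3, 5, 7]))   # 0.25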

  6. Groundwater availability as constrained by hydrogeology and environmental flows.

    Science.gov (United States)

    Watson, Katelyn A; Mayer, Alex S; Reeves, Howard W

    2014-01-01

    Groundwater pumping from aquifers in hydraulic connection with nearby streams has the potential to cause adverse impacts by decreasing flows to levels below those necessary to maintain aquatic ecosystems. The recent passage of the Great Lakes-St. Lawrence River Basin Water Resources Compact has brought attention to this issue in the Great Lakes region. In particular, the legislation requires the Great Lakes states to enact measures for limiting water withdrawals that can cause adverse ecosystem impacts. This study explores how both hydrogeologic and environmental flow limitations may constrain groundwater availability in the Great Lakes Basin. A methodology for calculating maximum allowable pumping rates is presented. Groundwater availability across the basin may be constrained by a combination of hydrogeologic yield and environmental flow limitations varying over both local and regional scales. The results are sensitive to factors such as pumping time, regional and local hydrogeology, streambed conductance, and streamflow depletion limits. Understanding how these restrictions constrain groundwater usage and which hydrogeologic characteristics and spatial variables have the most influence on potential streamflow depletions has important water resources policy and management implications. © 2013, National Ground Water Association.

  7. Relativistic Calculations for Be-like Iron

    International Nuclear Information System (INIS)

    Yang Jianhui; Zhang Jianping; Li Ping; Li Huili

    2008-01-01

    Relativistic configuration interaction calculations for the states of the 1s²2s², 1s²2s3l (l = s,p,d) and 1s²2p3l (l = s,p,d) configurations of iron are carried out using the relativistic configuration interaction (RCI) and multi-configuration Dirac-Fock (MCDF) methods in the active interaction approach. In the present calculation, a large-scale configuration expansion was used in describing the target states. These results are extensively compared with other available calculated, experimental and observed values; the present results are in good agreement with the experimental and observed values, and some differences are found with other available calculated values. Because more relativistic effects are considered than before, the present results should be more accurate and reliable

  8. Representation and calculation of economic uncertainties

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2002-01-01

    Management and decision making when certain information is available may be a matter of rationally choosing the optimal alternative by calculation of the utility function. When only uncertain information is available (which is most often the case), decision making calls for more complex methods of representation and calculation, and the basis for choosing the optimal alternative may become obscured by uncertainties of the utility function. In practice, several sources of uncertainty in the required information impede optimal decision making in the classical sense. In order to be able to better handle these uncertainties, different approaches to uncertain economic numbers are discussed. When solving economic models for decision-making purposes, the calculation of uncertain functions will have to be carried out in addition to the basic arithmetical operations. This is a challenging numerical problem, since improper methods of calculation may introduce …

  9. Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.

    Science.gov (United States)

    Tong, Liping; Thompson, Elizabeth

    2008-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel

  10. Climatological variability in regional air pollution

    International Nuclear Information System (INIS)

    Shannon, J.D.; Trexler, E.C. Jr.

    1995-01-01

    Although some air pollution modeling studies examine events that have already occurred (e.g., the Chernobyl plume) with relevant meteorological conditions largely known, most pollution modeling studies address expected or potential scenarios for the future. Future meteorological conditions, the major pollutant forcing function other than emissions, are inherently uncertain although much relevant information is contained in past observational data. For convenience in our discussions of regional pollutant variability unrelated to emission changes, we define meteorological variability as short-term (within-season) pollutant variability and climatological variability as year-to-year changes in seasonal averages and accumulations of pollutant variables. In observations and in some of our simulations the effects are confounded because for seasons of two different years both the mean and the within-season character of a pollutant variable may change. Effects of climatological and meteorological variability on means and distributions of air pollution parameters, particularly those related to regional visibility, are illustrated. Over periods of up to a decade climatological variability may mask or overstate improvements resulting from emission controls. The importance of including climatological uncertainties in assessing potential policies, particularly when based partly on calculated source-receptor relationships, is highlighted

  11. Variable mechanical ventilation.

    Science.gov (United States)

    Fontela, Paula Caitano; Prestes, Renata Bernardy; Forgiarini, Luiz Alberto; Friedman, Gilberto

    2017-01-01

    To review the literature on the use of variable mechanical ventilation and the main outcomes of this technique. Search, selection, and analysis of all original articles on variable ventilation, without restriction on the period of publication and language, available in the electronic databases LILACS, MEDLINE®, and PubMed, by searching the terms "variable ventilation" OR "noisy ventilation" OR "biologically variable ventilation". A total of 36 studies were selected. Of these, 24 were original studies, including 21 experimental studies and three clinical studies. Several experimental studies reported the beneficial effects of distinct variable ventilation strategies on lung function using different models of lung injury and healthy lungs. Variable ventilation seems to be a viable strategy for improving gas exchange and respiratory mechanics and preventing lung injury associated with mechanical ventilation. However, further clinical studies are necessary to assess the potential of variable ventilation strategies for the clinical improvement of patients undergoing mechanical ventilation.

  12. Matching fully differential NNLO calculations and parton showers

    International Nuclear Information System (INIS)

    Alioli, Simone; Bauer, Christian W.; Berggren, Calvin; Walsh, Jonathan R.; Zuberi, Saba

    2013-11-01

    We present a general method to match fully differential next-to-next-to-leading order (NNLO) calculations to parton shower programs. We discuss in detail the perturbative accuracy criteria a complete NNLO+PS matching has to satisfy. Our method is based on consistently improving a given NNLO calculation with the leading-logarithmic (LL) resummation in a chosen jet resolution variable. The resulting NNLO+LL calculation is cast in the form of an event generator for physical events that can be directly interfaced with a parton shower routine, and we give an explicit construction of the input "Monte Carlo cross sections" satisfying all required criteria. We also show how other proposed approaches naturally arise as special cases in our method.

  13. Spatial variability of N, P, and K in rice field in Sawah Sempadan, Malaysia

    Directory of Open Access Journals (Sweden)

    Saeed Mohamed Eltaib

    2002-04-01

    Full Text Available The variability of soil chemical properties such as total N, available P, and exchangeable K was examined on a 1.2 ha rice (Oryza sativa) field. The soil samples (n = 72) were systematically taken from individual fields in Sawah Sempadan at thirty-six locations and two depths (0-20 and 20-30 cm). The Differential Global Positioning System (DGPS) was used for locating the sample positions. Geostatistical techniques were used to analyze the variability of the measured soil chemical properties to assist in site-specific management of the field. Results showed that areas of similarity were much greater for the soil chemical properties measured at the upper depth (0-20 cm) than at the lower depth (20-30 cm). The ranges of the semivariogram for total N, available P, and exchangeable K were 12 and 13 m (0-20 cm) and 12 and 38 m (20-30 cm), respectively. Point kriging calculated from the semivariogram was employed for the spatial distribution map. The results suggested that the measured soil chemical properties may be spatially dependent even within a small field.
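
    A minimal sketch of the empirical semivariogram underlying such an analysis, γ(h) = (1/2N(h)) Σ (z_i − z_j)² over the N(h) point pairs separated by roughly h; this is generic geostatistics with made-up coordinates, not the study's data.

        import numpy as np

        def empirical_semivariogram(coords, values, lags, tol):
            """gamma(h): mean of 0.5*(z_i - z_j)**2 over pairs with |d_ij - h| < tol."""
            coords = np.asarray(coords, float)
            values = np.asarray(values, float)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)   # count each pair once
            d, sq = d[iu], sq[iu]
            return np.array([sq[np.abs(d - h) < tol].mean() for h in lags])

        rng = np.random.default_rng(1)
        pts = rng.uniform(0, 100, size=(72, 2))   # 72 made-up sample locations (m)
        z = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.standard_normal(72)  # correlated field
        print(empirical_semivariogram(pts, z, lags=[10, 20, 30, 40], tol=5.0))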

  14. The ability of non-computer tasks to increase biomechanical exposure variability in computer-intensive office work.

    Science.gov (United States)

    Barbieri, Dechristian França; Srinivasan, Divya; Mathiassen, Svend Erik; Nogueira, Helen Cristina; Oliveira, Ana Beatriz

    2015-01-01

    Postures and muscle activity in the upper body were recorded from 50 academic office workers during 2 hours of normal work, categorised by observation into computer work (CW) and three non-computer (NC) tasks (NC seated work, NC standing/walking work, and breaks). NC tasks differed significantly in exposures from CW, with standing/walking NC tasks representing the largest contrasts for most of the exposure variables. For the majority of workers, exposure variability was larger in their present job than in CW alone, as measured by the job variance ratio (JVR), i.e. the ratio between min-min variabilities in the job and in CW. Calculations of JVRs for simulated jobs containing different proportions of CW showed that variability could, indeed, be increased by redistributing available tasks, but that substantial increases could only be achieved by introducing more vigorous tasks into the job, illustrated here by cleaning.

  15. Calculating the Probability of Returning a Loan with Binary Probability Models

    Directory of Open Access Journals (Sweden)

    Julian Vasilev

    2014-12-01

    Full Text Available The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability. In this article, using statistical and econometric models, several influencing factors are tested. The main approach is concerned with applying probit and logit models in loan management institutions, giving a new aspect to credit risk analysis. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is shown that the month of signing a contract, the year of signing a contract, and the gender and age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the increase of the given sum, decreases with the proximity of the customer, increases for people born in the beginning of the year and decreases for people born at the end of the year.
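
    A minimal sketch of fitting such a binary logit model with statsmodels; the data are synthetic and the variable names are placeholders, so the coefficients do not reproduce the study's results.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n = 1000

        # Synthetic stand-ins for the contract and borrower fields.
        loan_sum = rng.uniform(1_000, 50_000, n)
        distance_km = rng.uniform(0, 300, n)
        birth_month = rng.integers(1, 13, n).astype(float)

        # Hypothetical data-generating model for the outcome (1 = loan returned).
        lin = -1.0 + 4e-5 * loan_sum + 3e-3 * distance_km - 0.05 * birth_month
        returned = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

        exog = sm.add_constant(np.column_stack([loan_sum, distance_km, birth_month]))
        result = sm.Logit(returned, exog).fit(disp=False)
        print(result.summary(xname=["const", "loan_sum", "distance_km", "birth_month"]))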

  16. Material variability and repetitive member factors for the allowable properties of engineered wood products

    Science.gov (United States)

    Steve Verrill; David E. Kretschmann

    2009-01-01

    It has been argued that repetitive member allowable property adjustments should be larger for high-variability materials than for low-variability materials. We report analytic calculations and simulations that suggest that the order of such adjustments should be reversed, that is, given the manner in which allowable properties are currently calculated, as the...

  17. Non disponible / Not available

    OpenAIRE

    Pierre , Cuny

    2007-01-01

    Non disponible / Not available; The restoration of anterior teeth with bonded ceramics requires success in terms of biocompatibility and shape, but also in the colour of the final result. Because of the variable opacity of the ceramics used, manufacturers offer luting polymers with a so-called "dual" curing mechanism. A wide range of colours for these luting polymers, together with variable opacity levels, makes it possible to best match ...

  18. Parameter definition for reactor physics calculation of Obrigheim KWO PWR type reactor using the Gels and Erebus codes

    International Nuclear Information System (INIS)

    Faya, A.G.; Nakata, H.; Rodrigues, V.G.; Oosterkamp, W.J.

    1974-01-01

    The main variables for diffusion theory calculations of the Obrigheim reactor (KWO), using the EREBUS code, were defined. The variables under consideration were: the mesh spacing for the reactor description, the time step in the burn-up calculation, and the temperatures of both the moderator and the fuel. The best mesh spacing and time step were defined considering the relative deviations and the computer time expended in each case. It has been verified that the error involved in the mean fuel temperature calculation (1317 K as given by SIEMENS and 1028 K as calculated by Dr. Penndorf) does not substantially change the calculation results

  19. Common characterization of variability and forecast errors of variable energy sources and their mitigation using reserves in power system integration studies

    Energy Technology Data Exchange (ETDEWEB)

    Menemenlis, N.; Huneault, M. [IREQ, Varennes, QC (Canada); Robitaille, A. [Dir. Plantif. de la Production Eolienne, Montreal, QC (Canada). HQ Production; Holttinen, H. [VTT Technical Research Centre of Finland, VTT (Finland)

    2012-07-01

    In this paper we define and characterize the two random variables, variability and forecast error, over which uncertainty in power system operations is characterized and mitigated. We show that the characterization of both these variables can be carried out with the same mathematical tools. Furthermore, this common characterization of random variables lends itself to a common methodology for the calculation of the non-contingency reserves required to mitigate their effects. A parallel comparison of these two variables demonstrates similar inherent statistical properties. They depend on imminent conditions, evolve with time, and can be asymmetric. Correlation is an important factor when aggregating individual wind farm characteristics to form the distribution of the total wind generation for imminent conditions. (orig.)

  20. Audit calculation for the LOCA methodology for KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Un Chul; Park, Chang Hwan; Choi, Yong Won; Yoo, Jun Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2006-11-15

    The objective of this research is to perform the audit regulatory calculation for the LOCA methodology for KSNP. For the LBLOCA calculation, several uncertainty variables and new ranges for them are added to those of the previous KINS-REM to improve its applicability for KSNP LOCA, and these results are applied to the LBLOCA audit calculation by a statistical method. For the SBLOCA calculation, after selecting BATHSY9.1.b, which is not used by KHNP, the results of RELAP5/Mod3.3 and RELAP5/MOD3.3ef-sEM for KSNP SBLOCA are compared to evaluate the conservativeness and applicability of the RELAP5/MOD3.3ef-sEM code for KSNP SBLOCA. The results of this research can be used to support the activities of KINS in reviewing the LOCA methodology for KSNP proposed by KHNP.

  1. A method to calculate fission-fragment yields Y(Z,N) versus proton and neutron number in the Brownian shape-motion model. Application to calculations of U and Pu charge yields

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, Peter [Los Alamos National Laboratory, Theoretical Division, Los Alamos, NM (United States); Ichikawa, Takatoshi [Kyoto University, Yukawa Institute for Theoretical Physics, Kyoto (Japan)

    2015-12-15

    We propose a method to calculate the two-dimensional (2D) fission-fragment yield Y(Z,N) versus both proton and neutron number, with inclusion of odd-even staggering effects in both variables. The approach is to use the Brownian shape-motion method on a macroscopic-microscopic potential-energy surface which, for a particular compound system, is calculated versus four shape variables (elongation, i.e. the quadrupole moment Q2; neck d; left nascent-fragment spheroidal deformation εf1; right nascent-fragment deformation εf2) and two asymmetry variables, namely the proton and neutron numbers in each of the two fragments. The extension of previous models 1) introduces a method to calculate this generalized potential-energy function and 2) allows the correlated transfer of nucleon pairs in one step, in addition to sequential transfer. In the previous version the potential energy was calculated as a function of Z and N of the compound system and its shape, including the asymmetry of the shape. We outline here how to generalize the model from the "compound-system" model to a model where the emerging fragment proton and neutron numbers also enter, over and above the compound system composition. (orig.)

  2. An exploratory clinical study to determine the utility of heart rate variability analysis in the assessment of dosha imbalance

    Directory of Open Access Journals (Sweden)

    P. Ram Manohar

    2018-04-01

    Full Text Available The present study compares data from spectral analysis of heart rate variability with clinical evaluation of the pathological state of the doshas. The calculated cardiointervalography values are combined into three integral indexes which, in the authors' opinion, reflect the influence on heart rhythm of vata, pitta and kapha, the regulatory systems of the body known as doshas in Ayurveda. Seven gross dosha imbalances were assessed to test the agreement between the two methods in this study. Heart rate variability (HRV) spectral data were collected from 42 participants for comparison with the clinical assessment of dosha imbalance. The clinical method of dosha assessment and the method of calculating integral indexes from cardiointervalography data showed substantial agreement by the kappa coefficient statistic (k = 0.78) in the assessment of gross dosha imbalance. The results of the data generated from this pilot study warrant further studies to rigorously validate the algorithms of HRV analysis in understanding dosha imbalance in Ayurvedic clinical practice and research settings. Keywords: Heart rate variability, Ayurveda, Spectral analysis

  3. Impaired neural networks for approximate calculation in dyscalculic children: a functional MRI study

    Directory of Open Access Journals (Sweden)

    Dosch Mengia

    2006-09-01

    Full Text Available Background: Developmental dyscalculia (DD) is a specific learning disability affecting the acquisition of mathematical skills in children with otherwise normal general intelligence. The goal of the present study was to examine cerebral mechanisms underlying DD. Methods: Eighteen children with DD aged 11.2 ± 1.3 years and twenty age-matched typically achieving schoolchildren were investigated using functional magnetic resonance imaging (fMRI) during trials testing approximate and exact mathematical calculation, as well as magnitude comparison. Results: Children with DD showed greater inter-individual variability and had weaker activation in almost the entire neuronal network for approximate calculation, including the intraparietal sulcus and the middle and inferior frontal gyrus of both hemispheres. In particular, the left intraparietal sulcus, the left inferior frontal gyrus and the right middle frontal gyrus seem to play crucial roles in correct approximate calculation, since brain activation correlated with accuracy rate in these regions. In contrast, no differences between groups could be found for exact calculation and magnitude comparison. In general, fMRI revealed similar parietal and prefrontal activation patterns in DD children compared to controls for all conditions. Conclusion: There is evidence for a deficient recruitment of neural resources in children with DD when processing analog magnitudes of numbers.

  4. Calculation of the Local Free Energy Landscape in the Restricted Region by the Modified Tomographic Method.

    Science.gov (United States)

    Chen, Changjun

    2016-03-31

    The free energy landscape is the most important information in the study of the reaction mechanisms of molecules. However, it is difficult to calculate. In a large collective variable space, a molecule requires a long time to achieve sufficient sampling during the simulation. To reduce the computational cost, it is necessary in practice to shrink the sampling region and construct a local free energy landscape. However, the restricted region in the collective variable space may have an irregular shape, and simply restricting one or more collective variables of the molecule cannot satisfy the requirement. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region by hyperplanes and connects the centers of the hyperplanes together by a curve. Second, it forces the molecule to sample on the curve and the hyperplanes in the simulation and calculates the free energy data on them. Finally, all the free energy data are combined together to form the local free energy landscape. Without consideration of the area outside the restricted region, this free energy calculation can be more efficient. By this method, one can quickly optimize the path in the collective variable space.

  5. Influence of the biomechanical variables of the gait cycle in running economy. [Influencia de variables biomecánicas del ciclo de paso en la economía de carrera].

    Directory of Open Access Journals (Sweden)

    Jordan Santos-Concejero

    2014-04-01

    Full Text Available The aim of this study was to investigate the relationships between biomechanical variables and running economy (RE). Eleven recreational (RR) and 14 well-trained runners (WT) completed 4-min stages on a treadmill at different speeds. During the test, biomechanical variables such as ground contact time (tc), swing time (tsw), stride length, frequency and angle, and the length of the different subphases of ground contact were calculated using an optical measurement system. VO2 was measured in order to calculate RE. The WT runners were more economical than the RR at all speeds and presented lower tc, higher tsw, longer strides, lower stride frequencies and higher stride angles (P …)

  6. Modeling and Design Optimization of Variable-Speed Wind Turbine Systems

    Directory of Open Access Journals (Sweden)

    Ulas Eminoglu

    2014-01-01

    Full Text Available As a result of the increase in energy demand and government subsidies, the usage of wind turbine systems (WTS) has increased dramatically. Due to the higher energy production of a variable-speed WTS compared to a fixed-speed WTS, the demand for this type of WTS has increased. In this study, a new method for the calculation of the power output of variable-speed WTSs is proposed. The proposed model is developed from the S-type curve used for population growth and is only a function of the rated power and the rated (nominal) wind speed. It has the advantage of enabling the user to calculate power output without using the rotor power coefficient. Additionally, using the developed model, a mathematical method is proposed to calculate the rated wind speed in terms of the turbine capacity factor and the scale parameter of the Weibull distribution for a given wind site. Design optimization studies are performed using the particle swarm optimization (PSO) and artificial bee colony (ABC) algorithms, which are applied to this type of problem for the first time. Different sites, such as Northern and Mediterranean sites of Europe, have been studied. Analyses of various parameters are also presented in order to evaluate the effect of rated wind speed on the design parameters and the produced energy cost. Results show that the proposed models are reliable and very useful for modeling and optimizing WTS design, taking into account the wind potential of the region. Results also show that the PSO algorithm performs better than the ABC algorithm for this type of problem.
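
    To illustrate the capacity-factor calculation such a model feeds into, here is a minimal sketch. The logistic ("S-type") power curve below is an assumed stand-in for the paper's model, and all site parameters are made up.

        import numpy as np

        def power_logistic(v, p_rated=2000.0, v_rated=12.0, steepness=0.8):
            """Assumed S-type power curve (kW): logistic rise toward rated power."""
            return p_rated / (1.0 + np.exp(-steepness * (v - 0.5 * v_rated)))

        def weibull_pdf(v, k=2.0, c=7.5):
            """Weibull wind-speed density with shape k and scale c (m/s)."""
            return (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

        # Capacity factor = expected power / rated power, integrated numerically.
        v = np.linspace(0.0, 30.0, 3001)
        dv = v[1] - v[0]
        cf = np.sum(power_logistic(v) * weibull_pdf(v)) * dv / 2000.0
        print(round(cf, 2))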

  7. Matching fully differential NNLO calculations and parton showers

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone; Bauer, Christian W.; Berggren, Calvin; Walsh, Jonathan R.; Zuberi, Saba [California Univ., Berkeley, CA (United States). Ernest Orlando Lawrence Berkeley National Laboratory; Tackmann, Frank J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-11-15

    We present a general method to match fully differential next-to-next-to-leading order (NNLO) calculations to parton shower programs. We discuss in detail the perturbative accuracy criteria a complete NNLO+PS matching has to satisfy. Our method is based on consistently improving a given NNLO calculation with the leading-logarithmic (LL) resummation in a chosen jet resolution variable. The resulting NNLO+LL calculation is cast in the form of an event generator for physical events that can be directly interfaced with a parton shower routine, and we give an explicit construction of the input "Monte Carlo cross sections" satisfying all required criteria. We also show how other proposed approaches naturally arise as special cases in our method.

  8. Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration

    Science.gov (United States)

    Scott, James R.; Martini, Michael C.

    2011-01-01

    A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th-order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and …
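
    The core trick is generating Taylor coefficients by recurrence rather than symbolic differentiation. A minimal sketch for the scalar ODE y' = y², whose coefficients follow from a Cauchy product, with a crude error-based step-size rule; this illustrates the idea only, not SNAP's implementation.

        import numpy as np

        def taylor_step(y0, order=12, tol=1e-12):
            """One Taylor step for y' = y**2, returning (y(t0 + h), h).

            Coefficients obey (k+1)*c[k+1] = sum_j c[j]*c[k-j] (Cauchy product).
            """
            c = np.zeros(order + 1)
            c[0] = y0
            for k in range(order):
                c[k + 1] = np.dot(c[:k + 1], c[k::-1]) / (k + 1)
            # pick h so the last retained term is ~tol (crude variable-step rule)
            h = (tol / abs(c[order])) ** (1.0 / order) if c[order] != 0 else 0.1
            return sum(ck * h**k for k, ck in enumerate(c)), h

        t, y = 0.0, 1.0
        while t < 0.5:          # a production code would clip the final step
            y, h = taylor_step(y)
            t += h
        print(y, 1.0 / (1.0 - t))   # compare with the exact solution y = 1/(1-t)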

  9. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

    Science.gov (United States)

    Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

    2012-04-01

    Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data are not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally, soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L, however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function, the soil drainage rate, the ratio of drainage (D) over storage (S), varies exponentially with relative water content. The function is controlled by three parameters: the soil water storage at field capacity (S_FC), the drainage fraction at field capacity (K_FC) and a drainage function exponent (β): D/S = K_FC exp[-β (1 - S/S_FC)]. To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results. This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non-linear Richards' equation.
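
    A minimal sketch of the fitting step described above, with synthetic drainage data standing in for the HYDRUS-1D output.

        import numpy as np
        from scipy.optimize import curve_fit

        def drainage_rate(s_rel, k_fc, beta):
            """Drainage fraction D/S as a function of relative storage S/S_FC."""
            return k_fc * np.exp(-beta * (1.0 - s_rel))

        # Synthetic stand-in for HYDRUS-1D output: relative storage vs. D/S.
        s_rel = np.linspace(0.3, 1.0, 15)
        rng = np.random.default_rng(3)
        observed = drainage_rate(s_rel, 0.2, 4.0) * (1 + 0.05 * rng.standard_normal(15))

        (k_fc, beta), _ = curve_fit(drainage_rate, s_rel, observed, p0=[0.1, 1.0])
        print(k_fc, beta)   # should recover ~0.2 and ~4.0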

  10. The Calculator of Anti-Alzheimer’s Diet. Macronutrients

    Science.gov (United States)

    Studnicki, Marcin; Woźniak, Grażyna; Stępkowski, Dariusz

    2016-01-01

    The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time nutritional sciences failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea of how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer’s Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925–2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (R_original). A question thus arises: what caused the oscillatory behavior of R_original? What historical events in the life of 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929–2005, we can mathematically explain the variability of R_original for each quarter of a human life. On the basis of multiple regression of R_original with regard to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as “the calculator” throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age and late age. The results obtained with the use of “the calculator” are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing R_predicted, i.e., minimizing the strength of correlation between PCPI and future AADR. PMID:27992612

  11. Global variables and the dynamics of relativistic nucleus-nucleus collisions

    International Nuclear Information System (INIS)

    Cugnon, J.; L'Hote, D.

    1983-01-01

    Various global variables providing a simple description of high multiplicity events are reviewed. Many of them are calculated in the framework of an intra-nuclear cascade model, which describes the collision process as a series of binary on-shell relativistic baryon-baryon collisions and which includes inelasticity through the production of Δ-resonances. The calculations are first made for the Ar+KCl system at 0.8 GeV/A, with global variables including either all the nucleons or only the participant nucleons. The shape and the orientation of the ellipsoid of sphericity are particularly investigated. For both cases, on the average, the large axis of the ellipsoid is found to point in the beam direction. This result is discussed in comparison with hydrodynamics predictions and in relation to the mean free path. A kind of small 'bounce-off effect' is detected for intermediate impact parameters. The possibility of extracting the value of the impact parameter b from the value of a global variable is shown to depend upon the variation of this variable with b and upon the fluctuation of the global variable for a given impact parameter. A quality factor is defined to quantify this possibility. No current global variable seems to be more appropriate than the number of participant nucleons for the impact parameter selection. The physical origin of the fluctuations inside the intranuclear cascade model is discussed and the possibility of extracting useful information on the dynamics of the system from the fluctuations is pointed out. The energy dependence of our results is discussed. Some results of the calculations at 250 and 400 MeV/A are also presented for the same system Ar+KCl. (orig.)
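
    The "ellipsoid of sphericity" above comes from diagonalizing a momentum-space tensor built from the participant momenta; the eigenvector of the largest eigenvalue gives the large axis, whose angle to the beam is the flow angle. A minimal sketch on toy momenta (the unit weights, Gaussian momentum model, and beam-axis elongation are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "participant" momenta (GeV/c), elongated along the beam (z) axis,
# standing in for intranuclear-cascade output.
p = rng.normal(size=(200, 3)) * np.array([0.2, 0.2, 0.5])

# Sphericity tensor S_ij = sum_n p_i p_j (unit weights here; 1/(2m) or
# 1/|p| weightings are common variants in the literature).
S = p.T @ p
eigvals, eigvecs = np.linalg.eigh(S)
major = eigvecs[:, np.argmax(eigvals)]      # large axis of the ellipsoid
theta = np.degrees(np.arccos(abs(major[2])))
print(f"flow angle relative to beam: {theta:.1f} deg")
```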

  12. Radionuclide data bases available for bioaccumulation factors for freshwater biota

    International Nuclear Information System (INIS)

    Blaylock, B.G.

    1982-01-01

    Aquatic models currently in use for dose assessment simulate the transfer of radionuclides in aquatic environments and the transfer to man. In these models the assimilation of a radionuclide in aquatic biota is calculated by using a simple empirical relationship known as the bioaccumulation factor (BF) to represent the transfer of the radionuclide from water to organism. The purpose of this article is to review data bases that are available for BFs for freshwater biota and to identify the uncertainties associated with them. Data bases for radioisotopes of Co, Cs, C, H, I, Pu, Ra, Ru, Sr, and U are reviewed. With the exception of ruthenium and carbon, the review is restricted to BFs determined for natural freshwater systems. Factors influencing the variability of BFs are identified, uncertainties associated with the validation of BFs are discussed, and some guidance is given for collecting data and measuring BFs.

  13. An Extended TOPSIS Method for Multiple Attribute Decision Making based on Interval Neutrosophic Uncertain Linguistic Variables

    Directory of Open Access Journals (Sweden)

    Said Broumi

    2015-03-01

    Full Text Available The interval neutrosophic uncertain linguistic variables can easily express the indeterminate and inconsistent information in the real world, and TOPSIS is a very effective decision-making method that has found more and more extensive applications. In this paper, we extend the TOPSIS method to deal with interval neutrosophic uncertain linguistic information, and propose an extended TOPSIS method to solve multiple attribute decision making problems in which the attribute values take the form of interval neutrosophic uncertain linguistic variables and the attribute weights are unknown. Firstly, the operational rules and properties for the interval neutrosophic variables are introduced. Then the distance between two interval neutrosophic uncertain linguistic variables is proposed, the attribute weights are calculated by the maximizing deviation method, and the closeness coefficients to the ideal solution are computed for each alternative. Finally, an illustrative example is given to illustrate the decision making steps and the effectiveness of the proposed method.
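
    For orientation, the sketch below runs the classical crisp TOPSIS pipeline — vector normalisation, weighting, distances to the ideal and anti-ideal solutions, closeness coefficient — which the paper generalises by swapping crisp entries for interval neutrosophic uncertain linguistic values, a matching distance measure, and maximizing-deviation weights. The decision matrix and weights are invented for illustration:

```python
import numpy as np

# Decision matrix: rows = alternatives, columns = attributes (all benefit
# criteria here), with assumed attribute weights.
X = np.array([[7., 9., 8.],
              [8., 7., 6.],
              [9., 6., 7.]])
w = np.array([0.4, 0.35, 0.25])

R = X / np.linalg.norm(X, axis=0)         # vector-normalise each attribute
V = R * w                                 # weighted normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)  # rank by descending closeness
print("ranking of alternatives:", np.argsort(-closeness) + 1)
```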

  14. Improved Genetic Algorithm with Two-Level Approximation for Truss Optimization by Using Discrete Shape Variables

    Directory of Open Access Journals (Sweden)

    Shen-yan Chen

    2015-01-01

    Full Text Available This paper presents an Improved Genetic Algorithm with Two-Level Approximation (IGATA) to minimize truss weight by simultaneously optimizing size, shape, and topology variables. On the basis of a previously presented truss sizing/topology optimization method based on two-level approximation and a genetic algorithm (GA), a new method for adding shape variables is presented, in which the nodal positions correspond to a set of coordinate lists. A uniform optimization model including size/shape/topology variables is established. First, a first-level approximate problem is constructed to transform the original implicit problem into an explicit problem. To solve this explicit problem, which involves size/shape/topology variables, GA is used to optimize individuals which include discrete topology variables and shape variables. When calculating the fitness value of each member in the current generation, a second-level approximation method is used to optimize the continuous size variables. With the introduction of shape variables, the original optimization algorithm was improved in its individual coding strategy as well as its GA execution techniques. Meanwhile, the update strategy of the first-level approximation problem was also improved. The results of numerical examples show that the proposed method is effective in dealing with the three kinds of design variables simultaneously, and the required computational cost for structural analysis is quite small.

  15. Water availability as a driver of spatial and temporal variability in vegetation in the La Mancha plain (Spain): Implications for the land-surface energy, water and carbon budget

    Science.gov (United States)

    Los, Sietse

    2017-04-01

    Vegetation is water limited in large areas of Spain and therefore a close link exists between vegetation greenness observed from satellite and moisture availability. Here we exploit this link to infer spatial and temporal variability in moisture from MODIS NDVI data and thermal data. Discrepancies in the precipitation - vegetation relationship indicate areas with an alternative supply of water (i.e., not rainfall); this can be natural, where moisture is supplied by upwelling groundwater, or artificial, where crops are irrigated. As a result, spatial and temporal variability in vegetation in the La Mancha Plain appears closely linked to topography, geology, rainfall and land use. Crop land shows large variability in year-to-year vegetation greenness; for some areas this variability is linked to variability in rainfall but in other cases it is linked to irrigation. The differences in irrigation treatment within one plant functional type, in this case crops, will lead to errors in land surface models when ignored. The magnitude of these effects on the energy, carbon and water balance is assessed at scales of 250 m to 200 km. Estimating the water balance correctly is of particular importance since in some areas of Spain more water is used for irrigation than is supplied by rainfall.

  16. Calculating Puerto Rico’s Ecological Footprint (1970–2010) Using Freely Available Data

    Directory of Open Access Journals (Sweden)

    Matthew E. Hopton

    2015-07-01

    Full Text Available Ecological Footprint Analysis (EFA) is appealing as a metric of sustainability because it is straightforward in theory and easy to conceptualize. However, EFA is difficult to implement because it requires extensive data. A simplified approach to EFA that requires fewer data can serve as a perfunctory analysis allowing researchers to examine a system with relatively little cost and effort. We examined whether a simplified approach using freely available data could be applied to Puerto Rico, a densely populated island with limited land resources. Forty-one years of data were assembled to compute the ecological footprint from 1970 to 2010. According to EFA, individuals in Puerto Rico were moving toward sustainability over time, as the per capita ecological footprint decreased from 3.69 ha per capita (ha/ca) in 1970 to 3.05 ha/ca in 2010. However, due to population growth, the population’s footprint rose from 1.00 × 10⁷ ha in 1970 to 1.14 × 10⁷ ha in 2010, indicating Puerto Rico as a whole was moving away from sustainability. Our findings demonstrate the promise for conducting EFA using a simplified approach with freely available data, and we discuss potential limitations on data quality and availability that should be addressed to further improve the science.

  17. Constraint-led changes in internal variability in running.

    Science.gov (United States)

    Haudum, Anita; Birklbauer, Jürgen; Kröll, Josef; Müller, Erich

    2012-01-01

    We investigated the effect of a one-time application of elastic constraints on movement-inherent variability during treadmill running. Eleven males ran two 35-min intervals while surface EMG was measured. In one of the two 35-min intervals, after 10 min of running without tubes, elastic tubes (between hip and heels) were attached, followed by another 5 min of running without tubes. To assess variability, stride-to-stride iEMG variability was calculated. Significant increases in variability (36% to 74%) were observed during tube running, whereas running without tubes after the tube running block showed no significant differences. Results show that elastic tubes affect variability on a muscular level despite the constant environmental conditions, and underline the nervous system's adaptability to cope with somewhat unpredictable constraints, since stride duration was unaltered.

  18. Axial shape index calculation for the 3-level excore detector

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Han Gon; Kim, Yong Hee; Kim, Byung Sop; Lee, Sang Hee; Cho, Sung Jae [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A new method based on the alternating conditional expectation (ACE) algorithm is developed to calculate the axial shape index (ASI) for the 3-level excore detector. The ACE algorithm, a type of nonparametric regression algorithm, yields an optimal relationship between a dependent variable and multiple independent variables. In this study, a simple correlation between ASI and the excore detector signals is developed using Younggwang nuclear power plant unit 3 (YGN-3) data, without any preprocessing of the relationships between the independent variables and the dependent variable. The numerical results show that simple correlations exist between the three excore signals and the ASI of the core. The accuracy of the new method is much better than those of the current CPC and COLSS algorithms. 5 refs., 2 figs., 2 tabs. (Author)
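
    A rough sketch of the idea of regressing ASI on the three excore signals is given below, using plain linear least squares as a stand-in for ACE (which instead derives optimal nonparametric transformations of each signal before regressing). The ASI definition, signal ranges, and synthetic data are assumptions, not YGN-3 plant values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy excore signals (columns: top, middle, bottom detectors).
D = rng.uniform(0.8, 1.2, size=(500, 3))
# One common ASI definition: (bottom - top) / total signal.
asi_true = (D[:, 2] - D[:, 0]) / D.sum(axis=1)

# Plain linear least squares as a placeholder for the ACE correlation.
A = np.column_stack([D, np.ones(len(D))])
coef, *_ = np.linalg.lstsq(A, asi_true, rcond=None)
asi_fit = A @ coef
print("max abs fit error:", np.max(np.abs(asi_fit - asi_true)))
```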

  20. Flexible Design and Operation of Multi-Stage Flash (MSF) Desalination Process Subject to Variable Fouling and Variable Freshwater Demand

    Directory of Open Access Journals (Sweden)

    Said Alforjani Said

    2013-10-01

    Full Text Available This work describes how the design and operation parameters of the Multi-Stage Flash (MSF) desalination process are optimised when the process is subject to variation in seawater temperature, fouling and freshwater demand throughout the day. Simple polynomial-based dynamic seawater temperature and variable freshwater demand correlations are developed based on actual data and incorporated in the MSF mathematical model using gPROMS ModelBuilder 3.0.3. In addition, a fouling model based on stage temperature is considered. The fouling and the effect of noncondensable gases are incorporated into the calculation of the overall heat transfer coefficient for the condensers. Finally, an optimisation problem is developed in which the total daily operating cost of the MSF process is minimised by optimising the design (number of stages) and operating (seawater rejected flowrate and brine recycle flowrate) parameters.
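
    The polynomial seawater-temperature correlation can be illustrated in a few lines: fit a low-order polynomial to an assumed daily temperature cycle and use the fitted coefficients as the dynamic model input. All values below are illustrative, not the plant data used in the paper:

```python
import numpy as np

# Synthetic hourly seawater temperatures over one day (deg C).
hours = np.arange(24)
T_sea = 28 + 3 * np.sin((hours - 14) * np.pi / 12)

# Polynomial correlation of temperature vs. time of day.
coeffs = np.polyfit(hours, T_sea, deg=4)
T_fit = np.polyval(coeffs, hours)
print(f"max fit error: {np.max(np.abs(T_fit - T_sea)):.3f} deg C")
```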

  1. Metronome Cueing of Walking Reduces Gait Variability after a Cerebellar Stroke.

    Science.gov (United States)

    Wright, Rachel L; Bevins, Joseph W; Pratt, David; Sackley, Catherine M; Wing, Alan M

    2016-01-01

    Cerebellar stroke typically results in increased variability during walking. Previous research has suggested that auditory cueing reduces excessive variability in conditions such as Parkinson's disease and post-stroke hemiparesis. The aim of this case report was to investigate whether the use of a metronome cue during walking could reduce excessive variability in gait parameters after a cerebellar stroke. An elderly female with a history of cerebellar stroke and recurrent falling undertook three standard gait trials and three gait trials with an auditory metronome. A Vicon system was used to collect 3-D marker trajectory data. The coefficient of variation was calculated for temporal and spatial gait parameters. SDs of the joint angles were calculated and used to give a measure of joint kinematic variability. Step time, stance time, and double support time variability were reduced with metronome cueing. Variability in the sagittal hip, knee, and ankle angles was reduced to normal values when walking to the metronome. In summary, metronome cueing resulted in a decrease in variability for step, stance, and double support times and joint kinematics. Further research is needed to establish whether a metronome may be useful in gait rehabilitation after cerebellar stroke and whether this leads to a decreased risk of falling.
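
    The variability measure used here is easy to reproduce: the coefficient of variation (CV) of a stride-timing series, compared between uncued and cued walking. A minimal sketch with invented step times (not the case-report data):

```python
import numpy as np

# Stride-to-stride step times (s) before and with metronome cueing
# (illustrative values only).
baseline = np.array([0.62, 0.55, 0.71, 0.58, 0.66, 0.52, 0.69])
cued     = np.array([0.60, 0.61, 0.59, 0.62, 0.60, 0.61, 0.59])

def cv(x):
    """Coefficient of variation in %: sample SD over mean."""
    return 100.0 * x.std(ddof=1) / x.mean()

print(f"CV baseline: {cv(baseline):.1f}%, CV cued: {cv(cued):.1f}%")
```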

  2. Quality and Variability of Online Available Physical Therapy Protocols From Academic Orthopaedic Surgery Programs for Anterior Cruciate Ligament Reconstruction.

    Science.gov (United States)

    Makhni, Eric C; Crump, Erica K; Steinhaus, Michael E; Verma, Nikhil N; Ahmad, Christopher S; Cole, Brian J; Bach, Bernard R

    2016-08-01

    To assess the quality and variability found across anterior cruciate ligament (ACL) rehabilitation protocols published online by academic orthopaedic programs. Web-based ACL physical therapy protocols from United States academic orthopaedic programs available online were included for review. Main exclusion criteria included concomitant meniscus repair, protocols aimed at pediatric patients, and failure to provide time points for the commencement or recommended completion of any protocol components. A comprehensive, custom scoring rubric was created that was used to assess each protocol for the presence or absence of various rehabilitation components, as well as when those activities were allowed to be initiated in each protocol. Forty-two protocols were included for review from 155 U.S. academic orthopaedic programs. Only 13 protocols (31%) recommended a prehabilitation program. Five protocols (12%) recommended continuous passive motion postoperatively. Eleven protocols (26%) recommended routine partial or non-weight bearing immediately postoperatively. Ten protocols (24%) mentioned utilization of a secondary/functional brace. There was considerable variation in the desired timing of full-weight-bearing initiation (a 9-week range), as well as in the types of strength and proprioception exercises specifically recommended. Only 8 different protocols (19%) recommended return to sport after achieving certain strength and activity criteria. Many ACL rehabilitation protocols recommend treatment modalities not supported by current reports. Moreover, high variability in the composition and time ranges of rehabilitation components may lead to confusion among patients and therapists. Level of evidence: II. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  3. Using variable combination population analysis for variable selection in multivariate calibration.

    Science.gov (United States)

    Yun, Yong-Huan; Wang, Wei-Ting; Deng, Bai-Chuan; Lai, Guang-Bi; Liu, Xin-bo; Ren, Da-Bing; Liang, Yi-Zeng; Fan, Wei; Xu, Qing-Song

    2015-03-03

    Variable (wavelength or feature) selection techniques have become a critical step for the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), was proposed. This strategy consists of two crucial procedures. First, the exponentially decreasing function (EDF), which embodies the simple and effective principle of 'survival of the fittest' from Darwin's natural evolution theory, is employed to determine the number of variables to keep and continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy that gives each variable the same chance to be selected and generates different variable combinations, is used to produce a population of subsets to construct a population of sub-models. Then, model population analysis (MPA) is employed to find the variable subsets with the lowest root mean square error of cross-validation (RMSECV). The frequency of each variable appearing in the best 10% of sub-models is computed. The higher the frequency is, the more important the variable is. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research on the website: http://www.mathworks.com/matlabcentral/fileexchange/authors/498750. Copyright © 2015 Elsevier B.V. All rights reserved.
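
    The EDF step is easy to sketch: before any modelling, it fixes how many variables survive each run as the space shrinks exponentially from all p variables down to a final handful, while BMS gives every surviving variable the same inclusion probability in each sampled combination. A sketch under assumed settings (the value of p, the run count, and the population size are not the paper's values):

```python
import numpy as np

def edf_schedule(p, runs, keep_final=2):
    """Exponentially decreasing function: number of variables kept after
    each EDF run, shrinking from p down to keep_final."""
    i = np.arange(runs + 1)
    k = np.log(p / keep_final) / runs
    return np.maximum(np.round(p * np.exp(-k * i)).astype(int), keep_final)

print(edf_schedule(p=200, runs=50)[:8])  # monotone decline from 200 toward 2

# Binary matrix sampling: each surviving variable gets the same 50/50
# chance of entering each of the sampled variable combinations.
rng = np.random.default_rng(0)
B = rng.random((1000, 200)) < 0.5        # rows = candidate sub-models
```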

  4. Distributed intelligence improves availability

    International Nuclear Information System (INIS)

    Einholf, C.W.; Ciaramitaro, W.

    1982-01-01

    The new generation of instrumentation which is being developed to monitor critical variables in nuclear power plants is described. Powerful, compact microprocessors have been built into monitors to simplify data display. Some of the benefits of digital systems are improved plant availability, reduction in maintenance costs, reduction in manpower, lessening of test times and less frequent inspection and overhaul. (U.K.)

  5. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  6. On the K-term and dispersion ratios of semi-regular variables

    International Nuclear Information System (INIS)

    Aslan, Z.

    1981-01-01

    Optical velocities of semi-regular (SR) and irregular (Lb) variables are analysed for a K-term. There is evidence for a dependence upon stellar period. Absorption lines in shorter period non-emission SR variables are blue-shifted relative to the centre-of-mass velocity by about 6 ± 3 km s⁻¹. Emission-line SR variables give a non-negative absorption K-term and Lb variables give no K-terms other than zero. Comparison is made with the K-terms implied by the OH velocity pattern in long-period variables. Dispersion ratios are also calculated. (author)

  7. A protocol for measuring spatial variables in soft-sediment tide pools

    Directory of Open Access Journals (Sweden)

    Marina R. Brenha-Nunes

    2016-01-01

    Full Text Available ABSTRACT We present a protocol for measuring spatial variables in large (>50 m²) soft-sediment tide pools. Secondarily, we present the fish capture efficiency of a sampling protocol that is based on such spatial variables to calculate relative abundances. The area of the pool is estimated by summing areas of basic geometric forms; the depth, by taking representative measurements of the depth variability of each pool's sector, previously determined according to its perimeter; and the volume, by considering the pool as a prism. These procedures were a trade-off between the acquisition of reliable estimates and the minimization of both the cost of operating and the time spent in the field. The fish sampling protocol is based on two consecutive stages: (1) two people search for fishes under structures (e.g., rocks and litter) on the pool and capture them with hand seines; (2) these structures are removed and then a beach seine is hauled over the whole pool. Our method is cheaper than others and fast to operate considering the time available during low tides. The method to sample fish is quite efficient, resulting in a capture efficiency of 89%.

  8. Variability aware compact model characterization for statistical circuit design optimization

    Science.gov (United States)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
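
    Linear propagation of variance itself is compact: the variance of an electrical metric is approximated by gᵀΣg, with g the gradient of the metric with respect to the model parameters and Σ their covariance. A toy sketch with a quadratic drain-current expression and invented parameter statistics (not the EKV-EPFL model or measured data):

```python
import numpy as np

def metric(p):
    """Toy electrical metric: saturation current I ~ K*(Vgs - Vth)^2,
    with Vgs fixed at 1 V for illustration."""
    K, Vth = p
    return K * (1.0 - Vth) ** 2

p0 = np.array([2e-4, 0.4])                    # nominal K (A/V^2), Vth (V)
Sigma = np.diag([(1e-5) ** 2, (0.01) ** 2])   # assumed parameter covariance

# Numerical gradient, then first-order variance propagation Var ~ g' Sigma g.
eps = 1e-6
g = np.array([(metric(p0 + eps * e) - metric(p0 - eps * e)) / (2 * eps)
              for e in np.eye(2)])
var_I = g @ Sigma @ g
print(f"sigma(I) ≈ {np.sqrt(var_I):.3e} A")
```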

  9. Costs of solar and wind power variability for reducing CO2 emissions.

    Science.gov (United States)

    Lueken, Colleen; Cohen, Gilbert E; Apt, Jay

    2012-09-04

    We compare the power output from a year of electricity generation data from one solar thermal plant, two solar photovoltaic (PV) arrays, and twenty Electric Reliability Council of Texas (ERCOT) wind farms. The analysis shows that solar PV electricity generation is approximately one hundred times more variable at frequencies on the order of 10⁻³ Hz than solar thermal electricity generation, and the variability of wind generation lies between that of solar PV and solar thermal. We calculate the cost of variability of the different solar power sources and wind by using the costs of ancillary services and the energy required to compensate for its variability and intermittency, and the cost of variability per unit of displaced CO₂ emissions. We show the costs of variability are highly dependent on both technology type and capacity factor. California emissions data were used to calculate the cost of variability per unit of displaced CO₂ emissions. Variability cost is greatest for solar PV generation at $8-11 per MWh. The cost of variability for solar thermal generation is $5 per MWh, while that of wind generation in ERCOT was found to be on average $4 per MWh. Variability adds ~$15/tonne CO₂ to the cost of abatement for solar thermal power, $25 for wind, and $33-$40 for PV.

  10. Evaluation of the value of availability and dispatchability in IPP contracts

    International Nuclear Information System (INIS)

    Camporeale, R.J.

    1990-01-01

    Consolidated Edison's resource plan relies on power from Independent Power Producers (IPPs) for a portion of future generation requirements. The additional restriction of obtaining this capacity through a bidding process requires the utility to evaluate a large number of potential contracts with different combinations of price, availability, and dispatchability. This paper discusses the theoretical considerations and outlines the methodology adopted for Consolidated Edison's first request for proposal. The value of an IPP contract is a function of the variable energy cost compared to the system avoided costs. For example, there is a value for availability only in those hours when contract cost is below the avoided cost and there is a penalty for non-dispatchability only in those hours when the contract cost is higher than the avoided cost. The best method to determine the value of an IPP contract would be to simulate the operation of the system with and without the IPP purchase using a perfect production cost model. In reality no model is perfect and there are trade-offs because not all aspects of system operation are captured. Performing a detailed production cost simulation for every proposal would be burdensome. Therefore, it was decided that a simplified methodology was needed. An additional benefit of a simplified approach is that the IPPs could score their own proposals and use this value as input into their final pricing scheme. The method developed relies on detailed production cost simulations to generate hourly avoided costs. A comparison of these avoided costs to the IPP variable costs becomes the basis for the calculation of the value of availability or dispatchability. This methodology can be applied consistently to all supply-side resources: baseload or peaking; gas-, oil- or coal-fired. This allows for the evaluation of all bid proposals on an equal basis.
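
    The hourly comparison described above reduces to positive-part sums. A minimal sketch with synthetic hourly avoided costs and an assumed contract price (all values illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
avoided = rng.uniform(20, 80, size=8760)   # hourly system avoided cost, $/MWh
contract = 45.0                            # IPP variable energy cost, $/MWh

# Availability has value only in hours where the contract beats avoided
# cost; non-dispatchability is penalised only in the remaining hours.
value_avail = np.maximum(avoided - contract, 0.0).sum()        # $ per MW-year
penalty_nondispatch = np.maximum(contract - avoided, 0.0).sum()
print(f"availability value: ${value_avail:,.0f}/MW-yr, "
      f"non-dispatch penalty: ${penalty_nondispatch:,.0f}/MW-yr")
```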

  11. Study of event shape variables at LEP

    CERN Document Server

    Sarkar, Subir

    1997-01-01

    We present the LEP results on the study of the hadronic event shape variables. Excellent detector performance and improved theoretical calculations make it possible to study quantum chromodynamics with small experimental and theoretical uncertainties. QCD predictions describe data well at energies above the Z peak.

  12. Alaska Village Electric Load Calculator

    Energy Technology Data Exchange (ETDEWEB)

    Devine, M.; Baring-Gould, E. I.

    2004-10-01

    As part of designing a village electric power system, the present and future electric loads must be defined, including both seasonal and daily usage patterns. However, in many cases, detailed electric load information is not readily available. NREL developed the Alaska Village Electric Load Calculator to help estimate the electricity requirements in a village given basic information about the types of facilities located within the community. The purpose of this report is to explain how the load calculator was developed and to provide instructions on its use so that organizations can then use this model to calculate expected electrical energy usage.

  13. Calculation of the viscosity of nuclear waste glass systems

    International Nuclear Information System (INIS)

    Shah, R.; Behrman, E.C.; Oksoy, D.

    1990-01-01

    Viscosity is one of the most important processing parameters and one of the most difficult to calculate theoretically, particularly for multicomponent systems like nuclear waste glasses. Here, the authors propose a semi-empirical approach based on the Fulcher equation, involving identification of key variables, for which coefficients are then determined by regression analysis. Results are presented for two glass systems, and compared to results of previous workers and to experiment. The authors also sketch a first-order statistical mechanical perturbation theory calculation for the effects on viscosity of a change in composition of the melt
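
    The Fulcher form itself, log10(η) = A + B/(T − T0), is a three-parameter regression; in the semi-empirical approach the coefficients are then regressed on melt composition. A minimal sketch fitting the three coefficients to synthetic viscosity data (illustrative values, not waste-glass measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def fulcher(T, A, B, T0):
    # Fulcher (Vogel-Fulcher-Tammann) form: log10(viscosity) = A + B/(T - T0)
    return A + B / (T - T0)

# Synthetic melt viscosities over a glass-processing temperature range.
T = np.linspace(1100, 1500, 9)                      # K
log_eta = fulcher(T, -2.5, 4500.0, 500.0)           # log10(Pa.s), invented
(A, B, T0), _ = curve_fit(fulcher, T, log_eta, p0=(-3, 4000, 400))
print(f"A = {A:.2f}, B = {B:.0f} K, T0 = {T0:.0f} K")
```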

  14. Practical applications of internal dose calculations

    International Nuclear Information System (INIS)

    Carbaugh, E.H.

    1994-06-01

    Accurate estimates of intake magnitude and internal dose are the goal for any assessment of an actual intake of radioactivity. When only one datum is available on which to base estimates, the choices for internal dose assessment become straightforward: apply the appropriate retention or excretion function, calculate the intake, and calculate the dose. The difficulty comes when multiple data and different types of data become available. Then practical decisions must be made on how to interpret conflicting data, or how to adjust the assumptions and techniques underlying internal dose assessments to give results consistent with the data. This article describes nine types of adjustments which can be incorporated into calculations of intake and internal dose, and then offers several practical insights to dealing with some real-world internal dose puzzles.

  15. Pile Load Capacity – Calculation Methods

    Directory of Open Access Journals (Sweden)

    Wrana Bogumił

    2015-12-01

    Full Text Available The article is a review of the current problems of foundation pile capacity calculations. The article considers the main principles of pile capacity calculations presented in Eurocode 7 and other methods, with adequate explanations. Two main methods are presented: the α-method, used to calculate the short-term load capacity of piles in cohesive soils, and the β-method, used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on CPTu cone penetration test results are presented, as well as the pile capacity problem based on static tests.

  16. Assessment of hydric balance through climatic variables, in the Cazones River Basin, Veracruz, Mexico

    Directory of Open Access Journals (Sweden)

    Eduardo Santillán Gutiérrez

    2014-09-01

    Full Text Available The hydrologic regime and the water catchment capacity of a hydrographic basin depend on the temporal and spatial variation patterns of climatic variables and on the physiographic characteristics of the watershed. In certain regions, where the availability of water depends on the catchment capacity of the watershed, the utilization of effective methods such as the hydric balance has become more frequently used because it enables an estimate of the hydrologic regime, the catchment capacity, and the water flows. It also enables an estimate of the hydrologic processes and the period in which they occurred. In the present work, assessments of the Climatic Hydric Balance (CHB) and of potential evapotranspiration were performed in the Cazones river basin. The calculations followed the Thornthwaite and Mather method based on climatic variables such as temperature and precipitation during the period from 1981 to 2010. As a result of these assessments, it was found that the excess layer of water and the annual runoff were 638.63 mm and 637.02 mm, respectively. Further, the work identified the months that comprise the humid and dry periods, the regime of the climatic variables, and surpluses and deficits of water in the basin during an annual cycle.
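
    The Thornthwaite step of such a balance can be sketched briefly: monthly PET from mean temperatures via the heat index, then surpluses as the positive part of P − PET. The version below omits the day-length correction factors, and the monthly values are invented, not Cazones data:

```python
import numpy as np

def thornthwaite_pet(T_monthly):
    """Unadjusted monthly potential evapotranspiration (mm) from mean
    monthly temperatures (deg C), following Thornthwaite; the day-length
    correction factors are omitted in this sketch."""
    T = np.maximum(np.asarray(T_monthly, dtype=float), 0.0)
    I = np.sum((T / 5.0) ** 1.514)                      # annual heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return 16.0 * (10.0 * T / I) ** a

T = [22, 23, 25, 27, 28, 28, 27, 27, 27, 26, 24, 23]        # deg C
P = [40, 30, 50, 70, 120, 260, 210, 180, 300, 220, 90, 50]  # mm
pet = thornthwaite_pet(T)
surplus = np.maximum(np.array(P) - pet, 0.0)                 # monthly excess
print(f"annual surplus ≈ {surplus.sum():.0f} mm")
```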

  17. The Importance of Considering the Temporal Distribution of Climate Variables for Ecological-Economic Modeling to Calculate the Consequences of Climate Change for Agriculture

    Science.gov (United States)

    Plegnière, Sabrina; Casper, Markus; Hecker, Benjamin; Müller-Fürstenberger, Georg

    2014-05-01

    Many models that calculate and assess climate change and its consequences are based on annual means of temperature and precipitation. This method leads to many uncertainties, especially at the regional or local level, where the results are unrealistic or too coarse. Particularly in agriculture, single events and the distribution of precipitation and temperature during the growing season have an enormous influence on plant growth. Therefore, the temporal distribution of climate variables should not be ignored. To reach this goal, a high-resolution ecological-economic model was developed which combines a complex plant growth model (STICS) and an economic model. In this context, the input data of the plant growth model are daily climate values for a specific climate station calculated by the statistical climate model (WETTREG). The economic model is deduced from the results of the plant growth model STICS. The chosen plant is corn because corn is often cultivated and used in many different ways. First of all, a sensitivity analysis showed that the plant growth model STICS is suitable to calculate the influences of different cultivation methods and climate on plant growth or yield, as well as on soil fertility, e.g. by nitrate leaching, in a realistic way. Additional simulations helped to assess a production function that is the key element of the economic model. Thereby the problems of using mean values of temperature and precipitation to compute a production function by linear regression are pointed out. Several examples show why a linear regression to assess a production function based on mean climate values or a smoothed natural distribution leads to imperfect results and why it is not possible to deduce a unique climate factor in the production function. One solution to this problem is the additional consideration of stress indices that show the impairment of plants by water or nitrate shortage. Thus, the resulting model takes into account not only the ecological

  18. UV Reconstruction Algorithm And Diurnal Cycle Variability

    Science.gov (United States)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method of estimating surface UV with the use of available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To develop an improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: NILU-UV multi channel radiometer, Kipp & Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for the reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.

  19. Development of a simplified statistical methodology for nuclear fuel rod internal pressure calculation

    International Nuclear Information System (INIS)

    Kim, Kyu Tae; Kim, Oh Hwan

    1999-01-01

    A simplified statistical methodology is developed in order to both reduce the over-conservatism of deterministic methodologies employed for PWR fuel rod internal pressure (RIP) calculation and simplify the complicated calculation procedure of the widely used statistical methodology, which employs the response surface method and Monte Carlo simulation. The simplified statistical methodology employs the system moment method with a deterministic approach in determining the maximum variance of RIP. The maximum RIP variance is determined as the square sum, over all input variables considered, of the maximum value of the mean RIP times the corresponding RIP sensitivity factor. This approach makes the simplified statistical methodology much more efficient in routine reload core design analysis since it eliminates the numerous calculations required for the power history-dependent RIP variance determination. This simplified statistical methodology is shown to be more conservative in generating the RIP distribution than the widely used statistical methodology. Comparison of the significances of each input variable to RIP indicates that the fission gas release model is the most significant input variable. (author). 11 refs., 6 figs., 2 tabs
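
    The square-sum step is simple to write down. A sketch with invented sensitivity factors (not plant or licensing values), where the largest factor plays the role of the fission gas release model:

```python
import numpy as np

# Square-sum form of the system moment method: the maximum RIP variance is
# the sum over input variables of (mean RIP x maximum sensitivity factor)^2.
rip_mean = 10.0                                  # nominal RIP, MPa (assumed)
s_max = np.array([0.050, 0.022, 0.015, 0.008])   # max relative sensitivity per
                                                 # input variable (illustrative)
var_max = np.sum((rip_mean * s_max) ** 2)
print(f"bounding RIP std dev ≈ {np.sqrt(var_max):.2f} MPa")
```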

  20. Percolation with first- and second-neighbour bonds: a renormalization-group calculation of critical exponents

    International Nuclear Information System (INIS)

    Riera, R.; Oliveira, P.M.C. de; Chaves, C.M.G.F.; Queiroz, S.L.A. de.

    1980-04-01

    A real-space renormalization group approach for the bond percolation problem in a square lattice with first- and second-neighbour bonds is proposed. The respective probabilities are treated as independent variables. Two types of cells are constructed. In one of them the lattice is considered as two interpenetrating sublattices, first-neighbour bonds playing the role of intersublattice links. This allows the calculation of both critical exponents ν and γ without resorting to any external field. Values found for the critical indices are in good agreement with data available in the literature. The phase diagram in parameter space is also obtained in each case. (Author) [pt

  1. MORSE/STORM: A generalized albedo option for Monte Carlo calculations

    International Nuclear Information System (INIS)

    Gomes, I.C.; Stevens, P.N.

    1991-09-01

    The advisability of using the albedo procedure for the Monte Carlo solution of deep penetration shielding problems that have ducts and other penetrations has been investigated. The use of albedo data can dramatically improve the computational efficiency of certain Monte Carlo calculations. However, the accuracy of these results may be unacceptable because of lost information during the albedo event and serious errors in the available differential albedo data. This study was done to evaluate and appropriately modify the MORSE/BREESE package, to develop new methods for generating the required albedo data, and to extend the adjoint capability to the albedo-modified calculations. Major modifications to MORSE/BREESE include an option to save for further use information that would be lost at the albedo event, an option to displace the point of emergence during an albedo event, and an option to use spatially dependent albedo data for both forward and adjoint calculations, which includes the point of emergence as a new random variable to be selected during an albedo event. The theoretical basis for using TORT-generated forward albedo information to produce adjuncton albedos was derived. The MORSE/STORM package was developed to perform both forward and adjoint modes of analysis using spatially dependent albedo data. Results obtained with MORSE/STORM for both forward and adjoint modes were compared with benchmark solutions. Excellent agreement and improved computational efficiency were achieved, demonstrating the full utilization of the albedo option in the MORSE code. 7 refs., 17 figs., 15 tabs

  2. Global phase equilibrium calculations: Critical lines, critical end points and liquid-liquid-vapour equilibrium in binary mixtures

    DEFF Research Database (Denmark)

    Cismondi, Martin; Michelsen, Michael Locht

    2007-01-01

    A general strategy for global phase equilibrium calculations (GPEC) in binary mixtures is presented in this work along with specific methods for calculation of the different parts involved. A Newton procedure using composition, temperature and volume as independent variables is used for calculation

  3. Performance of commercially available solar and heat pump water heaters

    International Nuclear Information System (INIS)

    Lloyd, C.R.; Kerr, A.S.D.

    2008-01-01

    Many countries are using policy incentives to encourage the adoption of energy-efficient hot water heating as a means of reducing greenhouse gas emissions. Such policies rely heavily on assumed performance factors for such systems. In-situ performance data for solar and heat pump hot water systems, however, are not copious in the literature. Otago University has been testing some systems available in New Zealand for a number of years. The results obtained are compared to international studies of in-situ performance of solar hot water systems and heat pump hot water systems, by converting the results from the international studies into a single index suitable for both solar and heat pump systems (COP). Variability in the international data is investigated, as are comparisons to model results. The conclusions suggest that there is not too much difference in performance between solar systems that have a permanently connected electric boost backup and heat pump systems over a wide range of environmental temperatures. The energy payback time was also calculated for electric boost solar flat plate systems as a function of both COP and hot water usage for a given value of embodied energy. The calculations generally bode well for solar systems, but ensuring adequate system performance is paramount. In addition, such systems generally favour high usage rates to obtain good energy payback times.

  4. Trends and inter-annual variability of methane emissions derived from 1979-1993 global CTM simulations

    Directory of Open Access Journals (Sweden)

    F. Dentener

    2003-01-01

    Full Text Available The trend and interannual variability of methane sources are derived from multi-annual simulations of tropospheric photochemistry using a 3-D global chemistry-transport model. Our semi-inverse analysis uses the fifteen years (1979–1993) of re-analysis ECMWF meteorological data and annually varying emissions including photo-chemistry, in conjunction with observed CH4 concentration distributions and trends derived from the NOAA-CMDL surface stations. Dividing the world into four zonal regions (45–90° N, 0–45° N, 0–45° S, 45–90° S) we find good agreement in each region between (top-down) calculated emission trends from model simulations and (bottom-up) estimated anthropogenic emission trends based on the EDGAR global anthropogenic emission database, which amount to 2.7 Tg CH4 yr⁻¹ for the period 1979–1993. Also the top-down determined total global methane emission compares well with the total of the bottom-up estimates. We use the difference between the bottom-up and top-down determined emission trends to calculate residual emissions. These residual emissions represent the inter-annual variability of the methane emissions. Simulations have been performed in which the year-to-year meteorology, the emissions of ozone precursor gases, and the stratospheric ozone column distribution are either varied or kept constant. In studies of methane trends it is most important to include the trends and variability of the oxidant fields. The analysis reveals that the variability of the emissions is of the order of 8 Tg CH4 yr⁻¹, and likely related to wetland emissions and/or biomass burning.

  5. REVIEW OF METHODOLOGIES FOR COSTS CALCULATING OF RUMINANTS IN SLOVAKIA

    Directory of Open Access Journals (Sweden)

    Zuzana KRUPOVÁ

    2012-09-01

    Full Text Available The objective of this work was to synthesise and analyse the methodologies and the biological aspects of cost calculation in ruminants in Slovakia. According to the literature, the account classification of cost items is most often considered for the construction of the costing formula. The costs are mostly divided into fixed costs (independent of the volume of the herd's production) and variable ones (connected with improvement of breeding conditions). Costs for feed and bedding, labour costs, other direct costs and depreciation were found to be the most important cost items in ruminants. It can be assumed that including depreciation in the costs of the basic herd takes into consideration the real costs simultaneously invested in raising young animals in the given period. Costs are calculated per unit of the main product and by-products, and their classification is influenced mainly by the type of livestock and the production system. In dairy cows, milk is usually defined as the main product, and the by-products are the live-born calf and manure. The base calculation units are the kilogram of milk (basic herd of cows) and the kilogram of gain and kilogram of live weight (young breeding cattle). In suckler cows, a live-born calf is the main product and manure is the by-product. The costs are mostly calculated per suckler cow, per live-born calf and per kilogram of live weight of weaned calf. A similar division of products into main and by-products also applies to cost calculation for sheep categories. The difference is that clotted cheese is also considered a main product of the basic herd in dairy sheep, and greasy wool a by-product in all categories. Definition of the base calculation units in sheep categories followed the mentioned classification. The value of a by-product in cattle and sheep is usually set according to its quantity and the intra-plant price of the by-product. In the calculation of the costs for sheep and cattle the “structural ewe” and “structural cow

  6. Diagnostic for two-mode variable valve activation device

    Science.gov (United States)

    Fedewa, Andrew M

    2014-01-07

    A method is provided for diagnosing a multi-mode valve train device which selectively provides high lift and low lift to a combustion valve of an internal combustion engine having a camshaft phaser actuated by an electric motor. The method includes applying a variable electric current to the electric motor to achieve a desired camshaft phaser operational mode and commanding the multi-mode valve train device to a desired valve train device operational mode selected from a high lift mode and a low lift mode. The method also includes monitoring the variable electric current and calculating a first characteristic of the monitored parameter. The method also includes comparing the calculated first characteristic against a predetermined value of the first characteristic measured when the multi-mode valve train device is known to be in the desired valve train device operational mode.

  7. Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems

    Science.gov (United States)

    Swinburne, Thomas D.; Marinica, Mihai-Cosmin

    2018-03-01

    The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when possible, the requirement of descriptors for each mechanism under study prevents implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method is demonstrated on a complex transformation of C15 interstitial defects in iron and double kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120 000 atoms. Both cases exhibit significant anharmonicity under experimentally relevant temperatures.

  8. Reflections on early Monte Carlo calculations

    International Nuclear Information System (INIS)

    Spanier, J.

    1992-01-01

    Monte Carlo methods for solving various particle transport problems developed in parallel with the evolution of increasingly sophisticated computer programs implementing diffusion theory and low-order moments calculations. In these early years, Monte Carlo calculations and high-order approximations to the transport equation were seen as too expensive to use routinely for nuclear design but served as invaluable aids and supplements to design with less expensive tools. The earliest Monte Carlo programs were quite literal; i.e., neutron and other particle random walk histories were simulated by sampling from the probability laws inherent in the physical system without distortion. Use of such analogue sampling schemes resulted in a good deal of time being spent in examining the possibility of lowering the statistical uncertainties in the sample estimates by replacing simple, and intuitively obvious, random variables by those with identical means but lower variances

  9. Analysis and Prediction of Micromilling Stability with Variable Tool Geometry

    Directory of Open Access Journals (Sweden)

    Ziyang Cao

    2014-11-01

    Full Text Available Micromilling can fabricate miniaturized components using micro-end mills at high rotational speeds. The analysis of machining stability in micromilling plays an important role in characterizing the cutting process, estimating the tool life, and optimizing the process. A numerical analysis and experimental method are presented to investigate the chatter stability in the micro-end milling process with variable milling tool geometry. The schematic model of the micromilling process is constructed and the calculation formula to predict cutting force and displacements is derived. This is followed by a detailed numerical analysis of micromilling forces between helical ball and square end mills through time-domain and frequency-domain methods, and the results are compared. Furthermore, a detailed time-domain simulation for micro end milling with straight-tooth and helical-tooth end mills is conducted based on the machine-tool system frequency response function obtained through a modal experiment. The forces and displacements are predicted and the simulation results for the two cutter geometries are compared in depth. The simulation results have important significance for the actual milling process.

  10. Darboux invariants of integrable equations with variable spectral parameters

    International Nuclear Information System (INIS)

    Shin, H J

    2008-01-01

    The Darboux transformation for integrable equations with variable spectral parameters is introduced. Darboux invariant quantities are calculated, which are used in constructing the Lax pair of integrable equations. This approach serves as a systematic method for constructing inhomogeneous integrable equations and their soliton solutions. The structure functions of variable spectral parameters determine the integrability and nonlinear coupling terms. Three cases of integrable equations are treated as examples of this approach

  11. Variability of consumer impacts from energy efficiency standards

    Energy Technology Data Exchange (ETDEWEB)

    McMahon, James E.; Liu, Xiaomin

    2000-06-15

    A typical prospective analysis of the expected impact of energy efficiency standards on consumers is based on average economic conditions (e.g., energy price) and operating characteristics. In fact, different consumers face different economic conditions and exhibit different behaviors when using an appliance. A method has been developed to characterize the variability among individual households and to calculate the life-cycle cost of appliances taking into account those differences. Using survey data, this method is applied to a distribution of consumers representing the U.S. Examples of clothes washer standards are shown for which 70-90% of the population benefit, compared to 10-30% who are expected to bear increased costs due to new standards. In some cases, sufficient data exist to distinguish among demographic subgroups (for example, low income or elderly households) who are impacted differently from the general population. Rank order correlations between the sampled input distributions and the sampled output distributions are calculated to determine which variability inputs are main factors. This "importance analysis" identifies the key drivers contributing to the range of results. Conversely, the importance analysis identifies variables that, while uncertain, make so little difference as to be irrelevant in deciding a particular policy. Examples will be given from analysis of water heaters to illustrate the dominance of the policy implications by a few key variables.
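
    The importance analysis reduces to rank-order (Spearman) correlation between each sampled variability input and the sampled output. A toy sketch with an invented life-cycle-cost model for a clothes washer (all distributions and coefficients are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
# Sampled variability inputs across a consumer population.
energy_price = rng.lognormal(-2.5, 0.3, 5000)    # $/kWh
loads_per_wk = rng.poisson(7, 5000) + 1          # washer uses per week

# Toy life-cycle cost output for a candidate standard.
lcc = 400 + 1.5 * 52 * loads_per_wk * energy_price + rng.normal(0, 20, 5000)

# Rank-order correlation of each input with the output flags the key
# drivers of variability ("importance analysis").
for name, x in [("energy price", energy_price), ("loads/week", loads_per_wk)]:
    rho, _ = spearmanr(x, lcc)
    print(f"{name}: rho = {rho:+.2f}")
```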

  12. Natural variability in the surface ocean carbonate ion concentration

    Directory of Open Access Journals (Sweden)

    N. S. Lovenduski

    2015-11-01

    Full Text Available We investigate variability in the surface ocean carbonate ion concentration ([CO3²⁻]) on the basis of a long control simulation with an Earth System Model. The simulation is run with a prescribed, pre-industrial atmospheric CO2 concentration for 1000 years, permitting investigation of natural [CO3²⁻] variability on interannual to multi-decadal timescales. We find high interannual variability in surface [CO3²⁻] in the tropical Pacific and at the boundaries between the subtropical and subpolar gyres in the Northern Hemisphere, and relatively low interannual variability in the centers of the subtropical gyres and in the Southern Ocean. Statistical analysis of modeled [CO3²⁻] variance and autocorrelation suggests that significant anthropogenic trends in the saturation state of aragonite (Ω_aragonite) are already or nearly detectable at the sustained, open-ocean time series sites, whereas several decades of observations are required to detect anthropogenic trends in Ω_aragonite in the tropical Pacific, North Pacific, and North Atlantic. The detection timescale for anthropogenic trends in pH is shorter than that for Ω_aragonite, due to smaller noise-to-signal ratios and lower autocorrelation in pH. In the tropical Pacific, the leading mode of surface [CO3²⁻] variability is primarily driven by variations in the vertical advection of dissolved inorganic carbon (DIC) in association with El Niño–Southern Oscillation. In the North Pacific, surface [CO3²⁻] variability is caused by circulation-driven variations in surface DIC and strongly correlated with the Pacific Decadal Oscillation, with peak spectral power at 20–30-year periods. North Atlantic [CO3²⁻] variability is also driven by variations in surface DIC, and exhibits weak correlations with both the North Atlantic Oscillation and the Atlantic Multidecadal Oscillation. As the scientific community seeks to detect the anthropogenic influence on ocean carbonate chemistry, these results

  13. 42 CFR 102.82 - Calculation of death benefits.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Calculation of death benefits. 102.82 Section 102... COMPENSATION PROGRAM Calculation and Payment of Benefits § 102.82 Calculation of death benefits. (a... paragraph (d) of this section for the death benefit available to dependents. (2) Deceased person means an...

  14. Rain Scattering and Co-ordinate Distance Calculation

    Directory of Open Access Journals (Sweden)

    M. Hajny

    1998-12-01

    Full Text Available Calculations of the field scattered by rain objects are based on the Multiple MultiPole (MMP) numerical method. Both the bi-static scattering function and the bi-static scattering cross section are calculated in the plane parallel to the Earth's surface. The co-ordination area was determined using a simple model of the scattering volume [1]. Calculations were performed for a frequency of 9.595 GHz and an antenna elevation of 25°. The results obtained are compared with calculations made in accordance with the ITU-R recommendation.

  15. TORCAPP: time-dependent cyclotron orbit calculation and plotting package

    International Nuclear Information System (INIS)

    Maddox, L.B.; McNeilly, G.S.

    1979-11-01

    TORCAPP calculates the motion of charged particles in electromagnetic fields with time as the independent variable, and produces a variety of printed and plotted output of results. Finite-size beam behavior is studied conveniently by following groups of particles which define an appropriate phase-space area. Since time is the independent variable, general motion in the near-median-plane region may be followed. This includes, for example, loops not enclosing the origin and strongly radial motions. Thus, TORCAPP is particularly useful for injection studies for isochronous cyclotrons, or other devices with near-median-plane charged particle motion.

  16. Fatigue life assessment under multiaxial variable amplitude loading

    International Nuclear Information System (INIS)

    Morilhat, P.; Kenmeugne, B.; Vidal-Salle, E.; Robert, J.L.

    1996-06-01

    A variable amplitude multiaxial fatigue life prediction method is presented in this paper. It is based on a stress approach: the input data are the stress tensor histories, which may be calculated by FEM analysis or measured directly on the structure during service loading. The different steps of the method are first presented, then its experimental validation is carried out for long and finite fatigue lives through biaxial variable amplitude loading tests using cruciform steel samples. (authors). 9 refs., 7 figs

  17. Quantifying the Effectiveness of Dose Individualization by Simulation for a Drug With Moderate Pharmacokinetic Variability.

    Science.gov (United States)

    Liefaard, Lia; Chen, Chao

    2015-10-01

    Dose individualization can reduce variability in exposure. The objective of this work was to quantify, through pharmacokinetic (PK) simulation, the potential for reducing the variability in exposure by dose individualization for a drug with moderate PK variability between subjects and between occasions within a subject, and a narrow therapeutic window. Using a population PK model that includes between-subject and between-occasion variability for apparent clearance, individual PK profiles in a trial of 300 subjects after a test dose were simulated. From the simulated data, datasets were created mimicking various sampling regimens (from a single predose sample to full profile samples over 12 hours) on 1 or more occasions (1, 2, 3, 5, or 10 visits). Using these datasets, individual apparent clearance values were estimated, which were then used to calculate an individualized dose for a predefined target area under the concentration-time curve (AUC), based on the available formulation strengths. The proportion of people whose mean AUC was within a predefined therapeutic AUC range was calculated for the test dose (before) and the individualized dose (after), and compared between the different sampling scenarios. The maximum increase in proportion of subjects with an AUC within the range was 20%. To achieve this benefit, PK samples over 4 hours from 10 dosing occasions were required. As a result of the dose adjustment, the AUC of 7.3% of the subjects moved from inside the therapeutic range to outside of the range. This work shows how modeling and simulation can help assess the benefit and risk of dose individualization for a compound with variability between subjects and between occasions. The framework can be applied to similar situations with a defined set of conditions (eg, therapeutic window, tablet strengths, and PK and/or pharmacodynamic sampling scheme) to inform dose change and to assess the utility of dose individualization against certain success criteria.
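
    The final dose-selection step lends itself to a compact illustration: the individualized dose is the target AUC times the estimated clearance, rounded to a dose realizable from the available tablet strengths. The clearance value, target AUC, and strengths below are illustrative assumptions:

```python
# Dose-individualization sketch: choose the feasible dose whose predicted
# AUC (= dose / CL) is closest to a target. Values are illustrative.
def individualized_dose(cl_i, target_auc, strengths=(25, 100)):
    """Return the realizable dose (mg) closest to target_auc * cl_i.

    cl_i: estimated apparent clearance (L/h); target_auc in mg*h/L.
    strengths: assumed available tablet strengths (mg).
    """
    ideal = target_auc * cl_i                      # since AUC = dose / CL
    # All doses composable from up to 8 tablets of each strength.
    doses = sorted({a * strengths[0] + b * strengths[1]
                    for a in range(9) for b in range(9)} - {0})
    return min(doses, key=lambda d: abs(d - ideal))

print(individualized_dose(cl_i=4.2, target_auc=50.0))  # ideal 210 -> 200 mg
```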

  18. Dynamic simulation of variable capacity refrigeration systems under abnormal conditions

    International Nuclear Information System (INIS)

    Liang Nan; Shao Shuangquan; Tian Changqing; Yan Yuying

    2010-01-01

    There are often abnormal working conditions at the evaporator outlet of a refrigeration system, such as a two-phase state during transient processes, and it is essential to investigate such transient behaviours for system design and control strategy. In this paper, a dynamic lumped parameter model is developed to simulate the transient behaviours of a refrigeration system with variable capacity in both normal and abnormal working conditions. An appropriate discriminant method is adopted to switch between the normal and abnormal conditions smoothly and to eliminate oscillation of the simulated data. In order to verify the dynamic model, we built a test system with a variable frequency compressor, water-cooling condenser, evaporator and electronic expansion valve. Calculated values from the mathematical model show reasonable agreement with the experimental data. The simulation results show that the transient behaviours of the variable capacity refrigeration system in abnormal working conditions can be calculated reliably with the dynamic model when the compressor rotary speed or the opening of the electronic expansion valve changes abruptly.

  19. Variable trajectory model for regional assessments of air pollution from sulfur compounds.

    Energy Technology Data Exchange (ETDEWEB)

    Powell, D.C.; McNaughton, D.J.; Wendell, L.L.; Drake, R.L.

    1979-02-01

    This report describes a sulfur oxides atmospheric pollution model that calculates trajectories using single-layer historical wind data as well as chemical transformation and deposition following discrete contaminant air masses. Vertical diffusion under constraints is calculated, but all horizontal dispersion is a function of trajectory variation. Ground-level air concentrations and deposition are calculated over a rectangular area comprising the northeastern United States and southeastern Canada. Calculations for a 29-day assessment period in April 1974 are presented along with a limited verification. Results for the studies were calculated using a source inventory comprising 61% of the anthropogenic SO2 emissions. Using current model parameterization levels, predicted concentration values are most sensitive to variations in dry deposition of SO2, wet deposition of sulfate, and transformation of SO2 to sulfate. Replacing the variable mixed-layer depth and variable stability features of the model with constant definitions of each results in increased ground-level concentration predictions for SO2 and particularly for sulfate.

  20. Evaluation of energy efficiency in street lighting: model proposition considering climate variability

    Directory of Open Access Journals (Sweden)

    Amaury Caruzzo

    2015-12-01

    Full Text Available This paper assesses the impacts of climate variability on efficient electricity consumption in street lighting in Brazil. The Climate Demand Method (CDM) was applied, and the energy savings achieved by Brazil's National Efficient Street Lighting Program (ReLuz) in 2005 were calculated, considering the monthly climatology of sunshine duration, disaggregated by county in Brazil. The total energy savings in street lighting in 2005 were estimated at 63 GWh/year, 1.39% higher than the value determined by ReLuz/Eletrobrás, and there was a 15 MW reduction in demand in Brazil, considering the nearly 393,000 points served by ReLuz in 2005. The results indicate that, besides the difference in latitude, climate variability across counties increases the daily usage of street lighting by up to 19%. Furthermore, Brazil's large size means that seasonality patterns in energy savings are not homogeneous, and there is a correlation between the monthly variability in sunshine duration and the latitude of mesoregions. The CDM was also shown to be suitable for ranking the mesoregions with the highest levels of energy savings in lighting.

  1. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms

    International Nuclear Information System (INIS)

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-01-01

    The aim of the current study was to investigate the way dose is prescribed to lung lesions during SBRT using advanced dose calculation algorithms that take into account electron transport (type B algorithms). As type A algorithms do not take into account secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment was performed, presenting different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned using three different treatment platforms. For each individual case 60 Gy to the PTV was prescribed using a type A algorithm and the dose distribution was recalculated using a type B algorithm in order to evaluate the impact of the secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When using a type A algorithm to prescribe the same dose to the PTV, the differences regarding median GTV doses among platforms and cases were always less than 10% of the prescription dose. The prescription to the PTV based on type B algorithms leads to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the variability observed was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among different cases and treatment platforms of SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the

  2. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    Science.gov (United States)

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89), although in some individuals the hourly-derived DLMO deviated by more than 30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
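
    Both thresholds are simple to compute once the melatonin profile is in hand; a sketch with synthetic values, where linear interpolation locates the threshold crossing:

```python
# DLMO sketch: compare the fixed 3 pg/mL threshold with the variable "3k"
# threshold (mean + 2 SD of the first three low daytime points).
# Times and melatonin values below are synthetic illustrations.
import numpy as np

def dlmo(times_h, melatonin, threshold):
    m = np.asarray(melatonin, dtype=float)
    i = np.nonzero(m >= threshold)[0][0]          # first sample at/above threshold
    t0, t1, m0, m1 = times_h[i - 1], times_h[i], m[i - 1], m[i]
    return t0 + (threshold - m0) * (t1 - t0) / (m1 - m0)   # linear interpolation

t = [18, 19, 20, 21, 22, 23]                  # clock hours, hourly sampling
mel = [0.8, 1.1, 1.0, 2.4, 6.9, 14.2]         # pg/mL

three_k = np.mean(mel[:3]) + 2 * np.std(mel[:3], ddof=1)

print(f"3k threshold    = {three_k:.2f} pg/mL")
print(f"DLMO (3 pg/mL)  = {dlmo(t, mel, 3.0):.2f} h")
print(f"DLMO (3k)       = {dlmo(t, mel, three_k):.2f} h")   # earlier onset
```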

  3. Evaluation of heterogeneity dose distributions for Stereotactic Radiotherapy (SRT): comparison of commercially available Monte Carlo dose calculation with other algorithms

    Directory of Open Access Journals (Sweden)

    Takahashi Wataru

    2012-02-01

    Full Text Available Abstract Background The purpose of this study was to compare dose distributions from three different algorithms with the x-ray Voxel Monte Carlo (XVMC) calculations, in actual computed tomography (CT) scans for use in stereotactic radiotherapy (SRT) of small lung cancers. Methods Slow CT scans of 20 patients were performed and the internal target volume (ITV) was delineated on Pinnacle3. All plans were first calculated with a scatter homogeneous mode (SHM), which is compatible with the Clarkson algorithm, using the Pinnacle3 treatment planning system (TPS). The planned dose was 48 Gy in 4 fractions. In a second step, the CT images, structures and beam data were exported to other treatment planning systems (TPSs). Collapsed cone convolution (CCC) from Pinnacle3, superposition (SP) from XiO, and XVMC from Monaco were used for recalculating. The dose distributions and the Dose Volume Histograms (DVHs) were compared with each other. Results The phantom test revealed that all algorithms could reproduce the measured data within 1% except for the SHM with the inhomogeneous phantom. For the patient study, the SHM greatly overestimated the isocenter (IC) doses and the minimal dose received by 95% of the PTV (PTV95) compared to XVMC. The differences in mean doses were 2.96 Gy (6.17%) for IC and 5.02 Gy (11.18%) for PTV95. The DVHs and dose distributions with CCC and SP were in agreement with those obtained by XVMC. The average differences in IC doses between CCC and XVMC, and between SP and XVMC, were -1.14% (p = 0.17) and -2.67% (p = 0.0036), respectively. Conclusions Our work clearly confirms that the actual practice of relying solely on a Clarkson algorithm may be inappropriate for SRT planning. Meanwhile, CCC and SP were close to XVMC simulations and actual dose distributions obtained in lung SRT.

  4. New theoretical development for the calculating of physical properties of D2O

    International Nuclear Information System (INIS)

    Moreira, Osvaldo

    2011-01-01

    In this work we have developed a new method for calculating the physical properties of heavy water, D2O, using the Helmholtz free energy state function, A = U − TS, exclusively for this molecule. The state function has been calculated as ā = ā0 + ā1 (specific dimensionless values), where ā0 is related to the properties of heavy water in the gaseous state and ā1 describes the liquid state. The canonical variables of the state function are absolute temperature and volume. To calculate the physical properties defined by absolute pressure and temperature, a change-of-variable method was developed, based on the solution of a differential equation (function ζ) using numerical algorithms (scaling and Newton-Raphson). The physical quantities calculated are: density ϱ (specific volume υ), specific enthalpy h and entropy s. The results obtained agree completely with the values calculated by the National Institute of Standards and Technology (NIST). In this report an adjustment function has also been proposed to calculate the saturation absolute temperature of heavy water as a function of pressure: Ts(p) = exp[a·b(p)], where a is a vector of constant coefficients and b a vector function of pressure, using theoretical values and extending the formulation proposed by the Oak Ridge National Laboratory. The new fit has an error of less than 0.03%. (author)
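
    The change-of-variable step can be illustrated generically: because the canonical variables are temperature and volume, recovering a state at a prescribed pressure means inverting a pressure relation numerically, e.g., by Newton-Raphson. The saturation-pressure correlation below is an illustrative stand-in, not the author's fitted Ts(p):

```python
# Newton-Raphson inversion sketch: solve p_sat(T) = p for the saturation
# temperature. The Antoine-type correlation is illustrative only.
import math

def p_sat(T):                 # illustrative correlation, kPa, T in K
    return math.exp(16.6 - 3900.0 / (T - 40.0))

def saturation_temperature(p, T0=350.0, tol=1e-8, it_max=50):
    T = T0
    for _ in range(it_max):
        f = p_sat(T) - p
        dfdT = p_sat(T) * 3900.0 / (T - 40.0) ** 2   # analytic derivative
        step = f / dfdT
        T -= step
        if abs(step) < tol:
            return T
    raise RuntimeError("Newton-Raphson did not converge")

print(f"T_s(101.325 kPa) ~ {saturation_temperature(101.325):.2f} K")
```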

  5. CALCULANDO EL NIVEL DE RECURSOS DISPONIBLES A PARTIR DEL REGISTRO FUNERARIO MESOAMERICANO (Calculating the Available Resource Level from the Mesoamerican Mortuary Record)

    Directory of Open Access Journals (Sweden)

    Pascual Izquierdo-Egea

    2016-03-01

    Full Text Available The calculation of the availability of resources from the mortuary record is possible and yields dramatic results that clarify fundamental questions such as the nature of demographic change. It is a new and important achievement of the archaeology of social phenomena as a truly scientific discipline. Its application to prehispanic Mesoamerica confirms the similarity of the results obtained in the Mayan lowlands and those from the Balsas river basin in Mexico during the Late Classic.

  6. Model and calculations for net infiltration

    International Nuclear Information System (INIS)

    Childs, S.W.; Long, A.

    1992-01-01

    In this paper a conceptual model for calculating net infiltration is developed and implemented. It incorporates the following important factors: variability of climate over the next 10,000 years, areal variability of net infiltration, and important soil/plant factors that affect the soil water budget of desert soils. Model results are expressed in terms of occurrence probabilities for time periods. In addition, the variability of net infiltration is demonstrated both for change with time and for differences among three soil/hydrologic units present at the site modeled

  7. THE CHANDRA VARIABLE GUIDE STAR CATALOG

    International Nuclear Information System (INIS)

    Nichols, Joy S.; Lauer, Jennifer L.; Morgan, Douglas L.; Sundheim, Beth A.; Henden, Arne A.; Huenemoerder, David P.; Martin, Eric

    2010-01-01

    Variable stars have been identified among the optical-wavelength light curves of guide stars used for pointing control of the Chandra X-ray Observatory. We present a catalog of these variable stars along with their light curves and ancillary data. Variability was detected to a lower limit of 0.02 mag amplitude in the 4000-10000 Å range using the photometrically stable Aspect Camera on board the Chandra spacecraft. The Chandra Variable Guide Star Catalog (VGUIDE) contains 827 stars, of which 586 are classified as definitely variable and 241 are identified as possibly variable. Of the 586 definite variable stars, we believe 319 are new variable star identifications. Types of variables in the catalog include eclipsing binaries, pulsating stars, and rotating stars. The variability was detected during the course of normal verification of each Chandra pointing and results from analysis of over 75,000 guide star light curves from the Chandra mission. The VGUIDE catalog represents data from only about 9 years of the Chandra mission. Future releases of VGUIDE will include newly identified variable guide stars as the mission proceeds. An important advantage of the use of space data to identify and analyze variable stars is the relatively long observations that are available. The Chandra orbit allows for observations up to 2 days in length. Also, guide stars were often used multiple times for Chandra observations, so many of the stars in the VGUIDE catalog have multiple light curves available from various times in the mission. The catalog is presented as both online data associated with this paper and as a public Web interface. Light curves with data at the instrumental time resolution of about 2 s, overplotted with the data binned at 1 ks, can be viewed on the public Web interface and downloaded for further analysis. VGUIDE is a unique project using data collected during the mission that would otherwise be ignored. The stars available for use as Chandra guide stars are

  8. There Is No Further Gain from Calculating Disease Activity Score in 28 Joints with High Sensitivity Assays of C-Reactive Protein Because of High Intraindividual Variability of CRP: A Cross Sectional Study and Theoretical Consideration

    DEFF Research Database (Denmark)

    Jensen Hansen, Inger Marie; Asmussen Andreasen, Rikke; Antonsen, Steen

    Background/Purpose: The threshold for reporting of C-reactive protein (CRP) differs from laboratory to laboratory. Moreover, CRP values are affected by intra-individual biological variability.[1] With respect to the disease activity score in 28 joints (DAS28) and Rheumatoid Arthritis (RA), a precise threshold for reporting CRP is important due to the direct effects of CRP on calculating DAS28, patient classification and subsequent treatment decisions[2] Methods: This study consists of two sections: a theoretical consideration discussing the performance of CRP in calculating DAS28 with regard to the biological variation and reporting limit for CRP, and a cross sectional study of all RA patients from our department (n=876) applying our theoretical results. In the second section, we calculate DAS28 twice, with the actual CRP and with CRP=9, the latter to elucidate the positive consequences of changing the lower...

  9. Python-based framework for coupled MC-TH reactor calculations

    International Nuclear Information System (INIS)

    Travleev, A.A.; Molitor, R.; Sanchez, V.

    2013-01-01

    We have developed a set of Python packages to provide a modern programming interface to codes used for the analysis of nuclear reactors. The Python classes can be grouped by functionality into three layers: low-level interfaces, general model classes and high-level interfaces. A low-level interface describes an interface between Python and a particular code. General model classes can be used to describe calculation geometry and meshes to represent system variables. High-level interface classes are used to convert geometry described with general model classes into instances of low-level interface classes and to put results of code calculations (read by low-level interface classes) back into the general model. The implemented Python interfaces to the Monte Carlo neutronics code MCNP and the thermo-hydraulic code SCF allow efficient description of calculation models and provide a framework for coupled calculations. In this paper we illustrate how these interfaces can be used to describe a pin model, and report results of coupled MCNP-SCF calculations performed for a PWR fuel assembly, organized by means of the interfaces
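
    A hedged sketch of the three-layer structure described above; the class and method names are invented for illustration and are not the actual package API:

```python
# Three-layer coupling sketch: low-level code interfaces, a general model,
# and a high-level driver iterating neutronics and thermal-hydraulics.
class McnpInterface:                      # low-level interface (illustrative)
    def write_input(self, model): ...
    def run(self): ...
    def read_power_density(self): ...

class ScfInterface:                       # low-level interface (illustrative)
    def write_input(self, model): ...
    def run(self): ...
    def read_fuel_temperatures(self): ...

class PinModel:                           # general model: geometry + meshes
    def __init__(self, radius, height, nz):
        self.radius, self.height, self.nz = radius, height, nz
        self.power = [0.0] * nz           # system variables on an axial mesh
        self.t_fuel = [600.0] * nz

def couple(model, mc, th, n_iter=5):
    """High-level driver: fixed-point iteration between MC and TH codes."""
    for _ in range(n_iter):
        mc.write_input(model); mc.run()
        model.power = mc.read_power_density()
        th.write_input(model); th.run()
        model.t_fuel = th.read_fuel_temperatures()
    return model
```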

  10. Means and method of sampling flow related variables from a waterway in an accurate manner using a programmable calculator

    Science.gov (United States)

    Rand E. Eads; Mark R. Boolootian; Steven C. [Inventors] Hankin

    1987-01-01

    Abstract - A programmable calculator is connected to a pumping sampler by an interface circuit board. The calculator has a sediment sampling program stored therein and includes a timer to periodically wake up the calculator. Sediment collection is controlled by a Selection At List Time (SALT) scheme in which the probability of taking a sample is proportional to its...
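
    A minimal sketch of the SALT idea, in Python rather than calculator code: at each timed wake-up, the probability of pumping a sample is tied to an auxiliary prediction of sediment flux, so high-flow periods are sampled more heavily. The rating curve and constants are illustrative assumptions, and the real SALT scheme is more elaborate than this simplification:

```python
# SALT-style sampling sketch: sample with probability proportional to an
# auxiliary estimate of sediment flux. All constants are illustrative.
import random

def predicted_flux(stage_m):
    """Auxiliary variable: assumed rating-curve estimate of sediment flux."""
    return 4.0 * stage_m ** 2.1

def maybe_sample(stage_m, flux_at_certainty=250.0):
    """Decide whether to pump a sample on this wake-up.

    flux_at_certainty: assumed flux at which sampling probability reaches 1.
    """
    p = min(1.0, predicted_flux(stage_m) / flux_at_certainty)
    return random.random() < p

random.seed(1)
for stage in (0.3, 0.8, 1.5, 3.0):        # rising stage -> rising probability
    print(f"stage {stage:.1f} m -> sample: {maybe_sample(stage)}")
```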

  11. Calculation and definition of safety indicators

    International Nuclear Information System (INIS)

    Cristian, I.; Branzeu, N.; Vidican, D.; Vladescu, G.

    1997-01-01

    Based on the Cernavoda safety indicators proposal, this paper presents the purpose, definition and calculation formulas for each of the selected safety indicators. Five categories of safety indicators for Cernavoda Unit 1 were identified, namely: overall plant safety performance; initiating events; safety system availability; physical barrier integrity; and indirect indicators. The definition, calculation and use of some safety indicators are shown in tabular form. (authors)

  12. To the proof of manifest relativistic invariance of transverse variables in QED

    International Nuclear Information System (INIS)

    Pervushin, V.N.; Nguyen Suan Han; Azimov, R.A.

    1986-01-01

    The quantization of electrodynamics in terms of transverse physical variables is accomplished. Manifest gauge and relativistic invariance is preserved at all steps of the theory's construction: 1) the choice of transverse variables, 2) the choice of the energy-momentum tensor, 3) quantization, and 4) the Feynman diagram description. For the transverse variables the relativistic-invariant self-energy of the electron is calculated. The results completely solve the problem of renormalization of physical quantities on the mass shell for the physical variables

  13. Variability and uncertainty in Swedish exposure factors for use in quantitative exposure assessments.

    Science.gov (United States)

    Filipsson, Monika; Öberg, Tomas; Bergbäck, Bo

    2011-01-01

    Information on exposure factors used in quantitative risk assessments has previously been compiled and reported for U.S. and European populations. However, due to the advancement of science and knowledge, these reports are in continuous need of updating with new data. Equally important is the change over time of many exposure factors related to both physiological characteristics and human behavior. Body weight, skin surface, time use, and dietary habits are some of the most obvious examples covered here. A wealth of data is available from literature not primarily gathered for the purpose of risk assessment. Here we review a number of key exposure factors and compare these factors between northern Europe--here represented by Sweden--and the United States. Many previous compilations of exposure factor data focus on interindividual variability and variability between sexes and age groups, while uncertainty is mainly dealt with in a qualitative way. In this article variability is assessed along with uncertainty. As estimates of central tendency and interindividual variability, the mean, standard deviation, skewness, kurtosis, and multiple percentiles were calculated, while uncertainty was characterized using 95% confidence intervals for these parameters. The statistics presented are appropriate for use in deterministic analyses using point estimates for each input parameter as well as in probabilistic assessments. © 2010 Society for Risk Analysis.
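
    The reported statistics pair point estimates with 95% confidence intervals; a bootstrap sketch on a synthetic body-weight sample shows one way to produce both:

```python
# Exposure-factor summary sketch: point estimates of central tendency and
# variability, with bootstrap 95% CIs for uncertainty. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
body_weight = rng.lognormal(mean=4.3, sigma=0.2, size=500)   # kg, synthetic

def summary(x):
    return np.array([np.mean(x), np.std(x, ddof=1),
                     stats.skew(x), stats.kurtosis(x),
                     np.percentile(x, 95)])

boot = np.array([summary(rng.choice(body_weight, body_weight.size))
                 for _ in range(2000)])                       # resample w/ replacement
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

for name, est, l, h in zip(["mean", "sd", "skewness", "kurtosis", "P95"],
                           summary(body_weight), lo, hi):
    print(f"{name:9s} {est:7.2f}   95% CI [{l:.2f}, {h:.2f}]")
```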

  14. Assessing the quality of life history information in publicly available databases.

    Science.gov (United States)

    Thorson, James T; Cope, Jason M; Patrick, Wesley S

    2014-01-01

    Single-species life history parameters are central to ecological research and management, including the fields of macro-ecology, fisheries science, and ecosystem modeling. However, there has been little independent evaluation of the precision and accuracy of the life history values in global and publicly available databases. We therefore develop a novel method based on a Bayesian errors-in-variables model that compares database entries with estimates from local experts, and we illustrate this process by assessing the accuracy and precision of entries in FishBase, one of the largest and oldest life history databases. This model distinguishes biases among seven life history parameters, two types of information available in FishBase (i.e., published values and those estimated from other parameters), and two taxa (i.e., bony and cartilaginous fishes) relative to values from regional experts in the United States, while accounting for additional variance caused by sex- and region-specific life history traits. For published values in FishBase, the model identifies a small positive bias in natural mortality and negative bias in maximum age, perhaps caused by unacknowledged mortality caused by fishing. For life history values calculated by FishBase, the model identified large and inconsistent biases. The model also demonstrates greatest precision for body size parameters, decreased precision for values derived from geographically distant populations, and greatest between-sex differences in age at maturity. We recommend that our bias and precision estimates be used in future errors-in-variables models as a prior on measurement errors. This approach is broadly applicable to global databases of life history traits and, if used, will encourage further development and improvements in these databases.

  15. The effects of alignment quality, distance calculation method, sequence filtering, and region on the analysis of 16S rRNA gene-based studies.

    Directory of Open Access Journals (Sweden)

    Patrick D Schloss

    Full Text Available Pyrosequencing of PCR-amplified fragments that target variable regions within the 16S rRNA gene has quickly become a powerful method for analyzing the membership and structure of microbial communities. This approach has revealed and introduced questions that were not fully appreciated by those carrying out traditional Sanger sequencing-based methods. These include the effects of alignment quality, the best method of calculating pairwise genetic distances for 16S rRNA genes, whether it is appropriate to filter variable regions, and how the choice of variable region relates to the genetic diversity observed in full-length sequences. I used a diverse collection of 13,501 high-quality full-length sequences to assess each of these questions. First, alignment quality had a significant impact on distance values and downstream analyses. Specifically, the greengenes alignment, which does a poor job of aligning variable regions, predicted higher genetic diversity, richness, and phylogenetic diversity than the SILVA and RDP-based alignments. Second, the effect of different gap treatments in determining pairwise genetic distances was strongly affected by the variation in sequence length for a region; however, the effect of different calculation methods was subtle when determining the sample's richness or phylogenetic diversity for a region. Third, applying a sequence mask to remove variable positions had a profound impact on genetic distances by muting the observed richness and phylogenetic diversity. Finally, the genetic distances calculated for each of the variable regions did a poor job of correlating with the full-length gene. Thus, while it is tempting to apply traditional cutoff levels derived for full-length sequences to these shorter sequences, it is not advisable. Analysis of beta-diversity metrics showed that each of these factors can have a significant impact on the comparison of community membership and structure. Taken together, these results

  16. Variability, plot size and border effect in lettuce trials in protected environment

    Directory of Open Access Journals (Sweden)

    Daniel Santos

    2018-03-01

    Full Text Available ABSTRACT The variability within rows of cultivation may reduce the accuracy of experiments conducted in a complete randomized block design if the rows are considered as blocks; however, little is known about this variability in protected environments. Thus, our aim was to study the variability of fresh shoot mass in lettuce grown in a protected environment, and to verify the border effect and the size of experimental unit that minimize the productive variability. Data from two uniformity trials carried out in a greenhouse in autumn and spring growing seasons were used. The statistical analyses considered rows of cultivation parallel to the lateral openings of the greenhouse and columns perpendicular to these openings. Different scenarios were simulated by excluding rows and columns to generate several border arrangements and also to use different sizes of the experimental unit. For each scenario, a homogeneity test of variances between the remaining rows and columns was performed, and the variance and coefficient of variation were calculated. There is variability among rows in trials with lettuce in plastic greenhouses, and the use of borders does not bring benefits in terms of reducing the coefficient of variation or minimizing the cases of heterogeneous variances among rows. In experiments with lettuce in a plastic greenhouse, the use of an experimental unit size greater than or equal to two plants provides homogeneity of variances among rows and columns and, therefore, allows the use of a completely randomized design.

  17. Determinants of cell-to-cell variability in protein kinase signaling.

    Science.gov (United States)

    Jeschke, Matthias; Baumgärtner, Stephan; Legewie, Stefan

    2013-01-01

    Cells reliably sense environmental changes despite internal and external fluctuations, but the mechanisms underlying robustness remain unclear. We analyzed how fluctuations in signaling protein concentrations give rise to cell-to-cell variability in protein kinase signaling using analytical theory and numerical simulations. We characterized the dose-response behavior of signaling cascades by calculating the stimulus level at which a pathway responds ('pathway sensitivity') and the maximal activation level upon strong stimulation. Minimal kinase cascades with gradual dose-response behavior show strong variability, because the pathway sensitivity and the maximal activation level cannot be simultaneously invariant. Negative feedback regulation resolves this trade-off and coordinately reduces fluctuations in the pathway sensitivity and maximal activation. Feedbacks acting at different levels in the cascade control different aspects of the dose-response curve, thereby synergistically reducing the variability. We also investigated more complex, ultrasensitive signaling cascades capable of switch-like decision making, and found that these can be inherently robust to protein concentration fluctuations. We describe how the cell-to-cell variability of ultrasensitive signaling systems can be actively regulated, e.g., by altering the expression of phosphatase(s) or by feedback/feedforward loops. Our calculations reveal that slow transcriptional negative feedback loops allow for variability suppression while maintaining switch-like decision making. Taken together, we describe design principles of signaling cascades that promote robustness. Our results may explain why certain signaling cascades like the yeast pheromone pathway show switch-like decision making with little cell-to-cell variability.

  18. Necessary storage as a signature of discharge variability: towards global maps

    Directory of Open Access Journals (Sweden)

    K. Takeuchi

    2017-09-01

    Full Text Available This paper proposes the use of necessary storage to smooth out discharge variability to meet a discharge target as a signature of discharge variability in time. Such a signature has a distinct advantage over other statistical indicators such as standard deviation (SD) or coefficient of variation (CV), as it expresses hydrological variability in human terms, which directly indicates the difficulty and ease of managing discharge variation for water resource management. The signature is presented in the form of geographical distribution, in terms of both necessary storage (km3) and normalized necessary storage (months), and is related to the basin characteristics of hydrological heterogeneity. The signature is analyzed in different basins considering the Hurst equation of range as a reference. The slope of such a relation and the scatter of departures from the average relation are analyzed in terms of their relationship with basin characteristics. As a method of calculating necessary storage, the flood duration curve (FDC) and drought duration curve (DDC) methods are employed in view of their relative advantage over other methods to repeat the analysis over many grid points. The Ganges–Brahmaputra–Meghna (GBM) basin is selected as the case study and the BTOPMC hydrological model with Water and Global Change (WATCH) Forcing Data (WFD) is used for estimating the FDC and DDC. It is concluded that the necessary storage serves as a useful signature of discharge variability, and its analysis could be extended to the entire globe and in this way seek new insights into hydrological variability in the storage domain at a larger range of scales.
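
    The paper derives necessary storage from flood and drought duration curves; as a simple stand-in that conveys the idea of the signature, the classic sequent-peak recursion gives the capacity needed to sustain a target discharge through the driest stretch of a flow series (numbers illustrative):

```python
# Necessary-storage sketch via the sequent-peak recursion: track the running
# deficit incurred whenever flow falls below the target; the worst deficit
# is the required storage. Monthly flows below are illustrative.
def necessary_storage(flows, target):
    deficit, worst = 0.0, 0.0
    for q in flows:
        deficit = max(0.0, deficit + target - q)   # draw on storage when q < target
        worst = max(worst, deficit)
    return worst

monthly_q = [5, 2, 1, 1, 3, 9, 14, 12, 6, 4, 3, 2]   # km3/month
target = sum(monthly_q) / len(monthly_q)             # smooth to the mean flow

storage = necessary_storage(monthly_q, target)
print(f"target = {target:.2f} km3/month")
print(f"necessary storage = {storage:.2f} km3 "
      f"(~{storage / target:.1f} months of target flow)")
```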

  19. Raw and Central Moments of Binomial Random Variables via Stirling Numbers

    Science.gov (United States)

    Griffiths, Martin

    2013-01-01

    We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
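
    For reference, the classical closed form for the raw moments, stated with Stirling numbers of the second kind S(m,k) and falling factorials (the article itself develops a recursive scheme via Stirling numbers of the first kind), is

```latex
\operatorname{E}\!\left[X^{m}\right]
  \;=\; \sum_{k=0}^{m} S(m,k)\, n^{\underline{k}}\, p^{k},
\qquad
n^{\underline{k}} \;=\; n(n-1)\cdots(n-k+1),
\quad X \sim \operatorname{Bin}(n,p).
```

    For m = 2 this gives E[X²] = np + n(n−1)p², recovering the familiar Var(X) = np(1−p).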

  20. Source term calculations - Ringhals 2 PWR

    International Nuclear Information System (INIS)

    Johansson, L.L.

    1998-02-01

    This project was performed within the fifth and final phase of sub-project RAK-2.1 of the Nordic Co-operative Reactor Safety Program, NKS. RAK-2.1 has also included studies of reflooding of a degraded core, recriticality and late phase melt progression. Earlier source term calculations for Swedish nuclear power plants are based on the integral code MAAP. A need was recognised to compare these calculations with calculations done with mechanistic codes. In the present work SCDAP/RELAP5 and CONTAIN were used. Only limited results could be obtained within the frame of RAK-2.1, since many problems were encountered using the SCDAP/RELAP5 code. The main obstacle was the extremely long execution times of the MOD3.1 version, but also some dubious fission product calculations. However, some interesting results were obtained for the studied sequence, a total loss of AC power. The report describes the modelling approach for SCDAP/RELAP5 and CONTAIN, and discusses results for the transient including the event of a surge line creep rupture. The study will probably be completed later, provided that an improved SCDAP/RELAP5 code version becomes available. (au)

  1. Engine performance testing using variable RON95 fuel brands available in Malaysia

    Directory of Open Access Journals (Sweden)

    Mohd Riduan Aizuddin Fahmi

    2017-01-01

    Full Text Available There are various gasoline fuel producers in Malaysia. The effects of fuel variations from different manufacturers on vehicle performance have always been a debate among users, and the facts currently remain inconclusive. Hence, this study focuses on analyzing various RON95 fuel brands available on the Malaysian market and determining their effect on engine performance. In terms of engine output, the important data of power (hp) and torque (Nm) were gathered using an engine dynamometer. Another quantity taken into account is knocking, where the relative knock index can be measured in percentage using the knock sensor accelerometer. Results have shown that the performance of the different fuel brands tested is indeed different, albeit by only a small margin, even though all fuels are categorized with the same octane rating. The power and torque results also imply that both are influenced by the amount of vibration generated due to engine knocking. Based on the overall outcome, consumers need not focus on a particular gasoline brand, as all of them affect engine performance only marginally.

  2. DESIGN OF LIQUID-STORAGE TANK: RESULTS OF SOFTWARE MODELING VS CALCULATIONS ACCORDING TO EUROCODE

    Directory of Open Access Journals (Sweden)

    Matko Gulin

    2017-01-01

    Full Text Available The objective of this article is to show the design process of a liquid-storage tank shell according to Eurocode and compare the results obtained using the norms with those from a finite element method (FEM) analysis. The calculations were performed for an aboveground vertical steel water-storage tank with a variable thickness wall and stiffening ring on top. First, the types of liquid storage tanks are briefly explained. Second, the given tank is described. Third, an analysis of the tank wall according to the Eurocode was carried out. The FEM analysis was performed using the Scia Engineer ver. 17 software. Finally, all the results are presented in tables and compared.
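
    The driver of the variable wall thickness is the membrane (hoop) stress check, σ = p·r/t with hydrostatic pressure p = ρg(H−z); a sketch with illustrative dimensions follows (a real Eurocode design adds partial factors, corrosion allowances, minimum thicknesses, and buckling checks):

```python
# Hoop-stress sketch for a variable-thickness tank wall: each shell course
# must resist the hydrostatic pressure at its base. Values are illustrative.
RHO, G = 1000.0, 9.81          # water density (kg/m3), gravity (m/s2)
R, H = 5.0, 12.0               # tank radius and filling height (m)
F_YD = 235e6                   # assumed design yield strength (Pa)

courses = [(9.0, 12.0), (6.0, 9.0), (3.0, 6.0), (0.0, 3.0)]  # (base, top), m

for base, top in courses:
    p = RHO * G * (H - base)           # pressure at course base (Pa)
    t_req = p * R / F_YD               # required plate thickness from s = p*r/t
    print(f"course {base:4.1f}-{top:4.1f} m: t >= {t_req * 1000:.2f} mm")
```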

  3. Geometric Parameters of Cutting Tools that Can be Used for Forming Sided Surfaces with Variable Profile

    Directory of Open Access Journals (Sweden)

    Razumov M.

    2017-03-01

    Full Text Available This article describes a machining technology for polyhedral surfaces with a varying profile, which is produced by the planetary motion of multi-blade block tools. The features of the technology and the urgency of the problem are indicated. The purpose of the study is to determine the minimum value of the clearance angle of the tool. The study also examines how the values of the front and rear angles change during the formation of a polygonal surface using a planetary gear. A scheme for calculating the impact of various factors on the minimum clearance angle of the tool and on the kinematic front and rear angles of the tool is provided. The mathematical formula for calculating the minimum clearance angle of the tool is given, as is a formula for determining the front and rear angles of the tool during operation. This study can be used in the calculation of design operations forming multifaceted external surfaces with a variable profile by using a planetary gear.

  4. TOGA COARE Satellite data summaries available on the World Wide Web

    Science.gov (United States)

    Chen, S. S.; Houze, R. A., Jr.; Mapes, B. E.; Brodzick, S. R.; Yutler, S. E.

    1995-01-01

    Satellite data summary images and analysis plots from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE), which were initially prepared in the field at the Honiara Operations Center, are now available on the Internet via World Wide Web browsers such as Mosaic. These satellite data summaries consist of products derived from the Japanese Geosynchronous Meteorological Satellite IR data: a time-size series of the distribution of contiguous cold cloudiness areas, weekly percent high cloudiness (PHC) maps, and a five-month time-longitudinal diagram illustrating the zonal motion of large areas of cold cloudiness. The weekly PHC maps are overlaid with weekly mean 850-hPa wind calculated from the European Centre for Medium-Range Weather Forecasts (ECMWF) global analysis field and can be viewed as an animation loop. These satellite summaries provide an overview of spatial and temporal variabilities of the cloud population and a large-scale context for studies concerning specific processes of various components of TOGA COARE.

  5. Removing the Influence of Shimmer in the Calculation of Harmonics-To-Noise Ratios Using Ensemble-Averages in Voice Signals

    Directory of Open Access Journals (Sweden)

    Carlos Ferrer

    2009-01-01

    Full Text Available Harmonics-to-noise ratios (HNRs) are affected by general aperiodicity in voiced speech signals. To specifically reflect a signal-to-additive-noise ratio, the measurement should be insensitive to other periodicity perturbations, like jitter, shimmer, and waveform variability. The ensemble averaging technique is a time-domain method which has been gradually refined in terms of its sensitivity to jitter and waveform variability and the required number of pulses. In this paper, shimmer is introduced into the model of the ensemble average, and a formula is derived which allows the reduction of shimmer effects in HNR calculation. The validity of the technique is evaluated using synthetically shimmered signals, and the prerequisites (glottal pulse positions and amplitudes) are obtained by means of fully automated methods. The results demonstrate the feasibility and usefulness of the correction.
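
    The gist of the method can be sketched as follows: aligned pulses are stacked in a matrix, the periodic component is estimated as their ensemble average, and HNR is the ratio of its power to the residual power. Rescaling each pulse to a common amplitude beforehand is used here as a simplified stand-in for the paper's derived shimmer correction:

```python
# Ensemble-average HNR sketch with a crude shimmer correction: normalize each
# pulse to unit RMS before averaging so amplitude variation does not inflate
# the apparent noise. Synthetic pulses illustrate the effect.
import numpy as np

def hnr_db(pulses, correct_shimmer=True):
    """pulses: (n_pulses, pulse_len) array of aligned glottal cycles."""
    x = np.asarray(pulses, dtype=float)
    if correct_shimmer:                       # rescale pulses to common RMS
        x = x / np.sqrt(np.mean(x ** 2, axis=1, keepdims=True))
    s = x.mean(axis=0)                        # ensemble average (periodic part)
    noise = x - s                             # residual (additive-noise estimate)
    return 10 * np.log10(np.mean(s ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
base = np.sin(2 * np.pi * np.arange(100) / 100)
amps = 1 + 0.2 * rng.standard_normal(30)      # shimmer: pulse-to-pulse amplitude
pulses = amps[:, None] * base + 0.05 * rng.standard_normal((30, 100))

print(f"HNR with correction:    {hnr_db(pulses, True):.1f} dB")
print(f"HNR without correction: {hnr_db(pulses, False):.1f} dB")  # biased low
```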

  6. Mordred: a molecular descriptor calculator.

    Science.gov (United States)

    Moriwaki, Hirotomo; Tian, Yu-Shi; Kawashita, Norihito; Takagi, Tatsuya

    2018-02-06

    Molecular descriptors are widely employed to present molecular characteristics in cheminformatics. Various molecular-descriptor-calculation software programs have been developed. However, users of those programs must contend with several issues, including software bugs, insufficient update frequencies, and software licensing constraints. To address these issues, we propose Mordred, a descriptor-calculation software application that can calculate more than 1800 two- and three-dimensional descriptors. It is freely available via GitHub. Mordred can be easily installed and used in the command line interface, as a web application, or as a high-flexibility Python package on all major platforms (Windows, Linux, and macOS). Performance benchmark results show that Mordred is at least twice as fast as the well-known PaDEL-Descriptor and it can calculate descriptors for large molecules, which cannot be accomplished by other software. Owing to its good performance, convenience, number of descriptors, and a lax licensing constraint, Mordred is a promising choice of molecular descriptor calculation software that can be utilized for cheminformatics studies, such as those on quantitative structure-property relationships.
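
    Basic usage, closely following the examples in the project README (requires RDKit; the SMILES string is an arbitrary illustration):

```python
# Minimal Mordred session: register all 2D descriptors and evaluate them
# for a single molecule parsed from SMILES.
from rdkit import Chem
from mordred import Calculator, descriptors

calc = Calculator(descriptors, ignore_3D=True)   # 2D descriptors only
print(len(calc.descriptors), "descriptors registered")

mol = Chem.MolFromSmiles("c1ccccc1O")            # phenol
result = calc(mol)
print(result[:3])                                # first few descriptor values
```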

  7. Internal emitter dosimetry: are patient-specific calculations necessary?

    International Nuclear Information System (INIS)

    Sgouros, G.

    1996-01-01

    SPECT or PET scan. The image sets must be registered to each other and the voxel values in the SPECT or PET images must be converted to activity or cumulated activity. A radionuclide that can be imaged is required and the distribution following a tracer administration is assumed to reflect the pharmacokinetics associated with the therapeutic administration. Clinical implementation of such detailed approaches to dosimetry must be justified by dose-response data. Convincing evidence must be available to demonstrate that patient-specific dosimetry will have a significant impact in avoiding toxicity while delivering the maximum possible absorbed dose to the tumor. In the case of radiolabeled antibody therapy, these data are just becoming available. A relationship between red marrow or whole-body absorbed dose and hematologic toxicity has been established for antibodies against colorectal and renal cell carcinoma. These data support the pharmacokinetic level of patient-specific dosimetry. Although 3-D dosimetry calculations have highlighted the spatial variability in tumor absorbed dose and the resulting potential loss of efficacy, such work is still at a research stage and dose-response data to justify routine 3-D dosimetry are lacking

  8. Radionuclide migration in the unsaturated zone with a variable hydrology

    International Nuclear Information System (INIS)

    Elert, M.; Collin, M.; Andersson, Birgitta; Lindgren, M.

    1990-01-01

    Radionuclide transport from contaminated ground water to the root zone of a soil has been modelled considering a variable hydrology. Hydrological calculations have been coupled with radionuclide transport calculations in order to study the influence of variations in flow rate and saturation, dispersion, and sorption. For non-sorbing radionuclides important seasonal variations in the root zone concentration were found. The dispersivity parameter proved to be very important for both sorbing and non-sorbing nuclides. In addition, some comparison calculations were made with a simple steady-state compartment model. (au)

  9. [Relations between biomedical variables: mathematical analysis or linear algebra?].

    Science.gov (United States)

    Hucher, M; Berlie, J; Brunet, M

    1977-01-01

    The authors, after a brief reminder of the structure of a model, stress the possible twofold approach to the relations uniting the variables of this model: the use of functions, which falls within the sphere of mathematical analysis, and the use of linear algebra, profiting from the development and automation of matrix calculation. They specify the respective interests of these methods, their limits and the requirements for their use, according to the kind of variables and data and the objective of the work, whether understanding phenomena or helping towards a decision.

  10. IAS15 inflation adjustments and EVA: empirical evidence from a highly variable inflation regime

    Directory of Open Access Journals (Sweden)

    Pierre Erasmus

    2011-08-01

    Full Text Available Inflation can have a pronounced effect on the financial performance of a firm. This study makes inflation adjustments to a firm's cost of sales, depreciation, level of gearing and assets in line with International Accounting Standard 15 (IAS15) in order to calculate an inflation-adjusted version of the economic value added (EVA) measure. The study was conducted using data from South African industrial firms during a period characterised by highly variable inflation levels (1991-2005). The results indicate that during this period there were significant differences between the nominal and real values of the firms' EVAs.
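
    The comparison can be caricatured in a few lines: EVA = NOPAT − WACC × invested capital, computed once on nominal figures and once on inflation-restated ones. The adjustment rules below are crude placeholders for the IAS15 procedure, and all figures are invented:

```python
# Nominal vs. inflation-adjusted EVA sketch. The restatement rules are
# simplified stand-ins for IAS15 adjustments; figures are illustrative.
def eva(nopat, wacc, capital):
    """Economic value added: residual income after a capital charge."""
    return nopat - wacc * capital

nopat, wacc, capital = 150.0, 0.12, 800.0          # currency millions, nominal
inflation = 0.09

capital_real = capital * (1 + inflation)           # restate the asset base
extra_depreciation = 0.10 * capital * inflation    # assumed depreciation uplift
nopat_real = nopat - extra_depreciation

print(f"nominal EVA: {eva(nopat, wacc, capital):7.1f}")
print(f"real EVA:    {eva(nopat_real, wacc, capital_real):7.1f}")  # lower
```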

  11. Calculating the water and heat balances of the Eastern Mediterranean Basin using ocean modelling and available meteorological, hydrological and ocean data

    Directory of Open Access Journals (Sweden)

    Anders Omstedt

    2012-04-01

    Full Text Available Eastern Mediterranean water and heat balances were analysed over 52 years. The modelling uses a process-oriented approach resolving the one-dimensional equations of momentum, heat and salt conservation; turbulence is modelled using a two-equation model. The results indicate that calculated temperature and salinity follow the reanalysed data well. The water balance in the Eastern Mediterranean basin was controlled by the difference between inflows and outflows through the Sicily Channel and by net precipitation. The freshwater component displayed a negative trend over the study period, indicating increasing salinity in the basin. The heat balance was controlled by heat loss from the water surface, solar radiation into the sea and heat flow through the Sicily Channel. Both solar radiation and net heat loss displayed increasing trends, probably due to decreased total cloud cover. In addition, the heat balance indicated a net import of approximately 9 W m-2 of heat to the Eastern Mediterranean Basin from the Western Basin.

  12. Investigating the effect of growth and financial strength variables on the financial leverage: Evidence from the Tehran Stock Exchange

    Directory of Open Access Journals (Sweden)

    Iman Dadashi

    2013-04-01

    Full Text Available The primary objective of this study is to investigate the effect of growth and financial strength variables on the financial leverage of some listed companies in the Tehran Stock Exchange. For this purpose, a sample of 700 firm-years among listed companies in the Tehran Stock Exchange over the period 2006-2010 was examined. In the present study, the growth variables, including asset growth, profit growth and sales growth, and financial strength calculated by the Altman Z-bankruptcy model have been considered as independent variables. In addition, the ratios of long-term debt to total assets, long-term debt to fixed assets, total long-term debt and short-term receivable facilities to equity capital, and total long-term debt and short-term receivable facilities to total assets are used as measures of financial leverage and dependent variables. The results indicate that there is a negative and significant relationship between asset growth and some indexes of financial leverage. There is also a positive and significant relationship between the variables of profit growth, sales growth and financial strength with financial leverage measures.
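
    The financial-strength variable can be illustrated with the classic Altman Z-score for public manufacturing firms (the study's Z-bankruptcy variant may differ); the ratio values below are invented:

```python
# Altman Z-score sketch (original 1968 weights for public manufacturers).
# The study's exact Z-bankruptcy specification may differ.
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Inputs: working capital/TA, retained earnings/TA, EBIT/TA,
    market value of equity/total liabilities, sales/TA."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

z = altman_z(wc_ta=0.15, re_ta=0.22, ebit_ta=0.11,
             mve_tl=0.90, sales_ta=1.30)
zone = "safe" if z > 2.99 else "grey zone" if z > 1.81 else "distress"
print(f"Z = {z:.2f} -> {zone}")
```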

  13. Dysglycemia induces abnormal circadian blood pressure variability

    Directory of Open Access Journals (Sweden)

    Kumarasamy Sivarajan

    2011-11-01

    Full Text Available Abstract Background: Prediabetes (PreDM) in asymptomatic adults is associated with abnormal circadian blood pressure variability (abnormal CBPV). Hypothesis: Systemic inflammation and glycemia influence circadian blood pressure variability. Methods: Dahl salt-sensitive (S) rats (n = 19) were fed after weaning either an American (AD) or a standard (SD) diet. The AD (high-glycemic-index, high-fat) simulated the customary human diet, providing daily overabundant calories which over time lead to body weight gain. The SD (low-glycemic-index, low-fat) mirrored a desirable balanced human diet for maintaining body weight. Body weight and serum concentrations of fasting glucose (FG), adipokines (leptin and adiponectin), and proinflammatory cytokines [monocyte chemoattractant protein-1 (MCP-1) and tumor necrosis factor-α (TNF-α)] were measured. Rats were surgically implanted with C40 transmitters, and blood pressure (both systolic, SBP, and diastolic, DBP) and heart rate (HR) were recorded by telemetry every 5 minutes during both sleep (day) and active (night) periods. Pulse pressure (PP) was calculated (PP = SBP − DBP). Results [mean (SEM)]: The AD-fed group displayed a significant increase in body weight (after 90 days; p ... Conclusion: These data validate our stated hypothesis that systemic inflammation and glycemia influence circadian blood pressure variability. This study, for the first time, demonstrates a cause and effect relationship between caloric excess, enhanced systemic inflammation, dysglycemia, loss of blood pressure control and abnormal CBPV. Our results provide the fundamental basis for examining the relationship between dysglycemia and perturbation of the underlying mechanisms (adipose tissue dysfunction induced local and systemic inflammation, insulin resistance and alteration of adipose tissue precursors for the renin-aldosterone-angiotensin system) which generate abnormal CBPV.

  14. Geospatial models of climatological variables distribution over Colombian territory

    International Nuclear Information System (INIS)

    Baron Leguizamon, Alicia

    2003-01-01

    Several studies have examined the relation between air temperature and precipitation and altitude; however, these have been isolated or regional analyses, and none of them has become a tool that reproduces the spatial distribution of temperature or precipitation, taking orography into account and allowing data on these variables to be obtained for a given place. Building on this relation, and from the multi-annual monthly records of air temperature and precipitation, the vertical gradients of temperature were calculated and precipitation was related to altitude. After that, based on the altitude data provided by the DEM, the values of temperature and precipitation were calculated, and those values were interpolated to generate monthly geospatial models
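
    The core of the method reduces to applying a fitted vertical gradient to a DEM grid; a sketch with an assumed lapse rate and sea-level intercept (not the study's fitted values):

```python
# Lapse-rate mapping sketch: monthly mean temperature over a DEM grid,
# T(z) = T0 - gamma * z. The DEM, T0, and gamma are illustrative.
import numpy as np

dem = np.array([[120.,  480., 1500.],
                [300., 2100., 2600.],
                [ 50.,  900., 3400.]])      # elevation (m)

T0 = 28.5             # assumed sea-level monthly mean temperature (deg C)
gamma = 6.2 / 1000    # assumed vertical gradient (deg C per m)

temperature = T0 - gamma * dem              # broadcast over the whole grid
print(np.round(temperature, 1))
```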

  15. Benchmarking criticality safety calculations with subcritical experiments

    International Nuclear Information System (INIS)

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments

  16. Quantitative analysis of spatial variability of geotechnical parameters

    Science.gov (United States)

    Fang, Xing

    2018-04-01

    Geotechnical parameters are the basic parameters of geotechnical engineering design, yet they have strong regional characteristics. The spatial variability of geotechnical parameters has been recognized and is gradually being introduced into the reliability analysis of geotechnical engineering. Based on the statistical theory of geostatistical spatial information, the spatial variability of geotechnical parameters is quantitatively analyzed, and the correlation coefficients between geotechnical parameters are calculated. A residential district of the Tianjin Survey Institute was selected as the research object. There are 68 boreholes in this area and 9 layers of mechanical stratification. The parameters are water content, natural gravity, void ratio, liquid limit, plasticity index, liquidity index, compressibility coefficient, compressive modulus, internal friction angle, cohesion and SP index. According to the principle of statistical correlation, the correlation coefficients of the geotechnical parameters are calculated, and from them the law governing the geotechnical parameters is obtained.

  17. Psychological variables associated with employment following spinal cord injury: a meta-analysis.

    Science.gov (United States)

    Kent, M L; Dorstyn, D S

    2014-10-01

    Spinal cord injury (SCI) research has highlighted links between psychological variables and employment outcome; however, there remains a need to consolidate the available heterogeneous data. Meta-analytic techniques were used to examine and quantify differences in psychological functioning and employment status among adults with an acquired SCI. Fourteen observational studies (N = 9,868 participants) were identified from an electronic database search. Standardised mean difference scores between employed and unemployed groups were calculated using Cohen's d effect sizes. Additionally, 95% confidence intervals, fail-safe Ns, percentage overlap scores and heterogeneity statistics were used to determine the significance of d. Moderate to large, positive weighted effects were noted across three broad psychological constructs: affective experience or feelings (dw = 3.16), quality of life (dw = 1.06) and life satisfaction (dw = 0.70). However, the most compelling non-heterogeneous finding was associated with life satisfaction, a finding that was also not subject to publication bias. Inconsistent and weak associations between employment and individual measures of vocational attitude, self-efficacy, locus of control, adjustment and personality were also noted. Psychological factors and attributes are linked to employment post-SCI; however, the available data are limited in quantity. Longitudinal research is also needed to determine whether these variables can help to preserve employment over time.
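
    A minimal sketch of the effect-size computation the record describes: Cohen's d from group summary statistics, plus a simple sample-size-weighted pooled effect as a stand-in for dw. All numbers are invented, and the review's actual weighting scheme may differ.

      import numpy as np

      def cohens_d(mean_emp, sd_emp, n_emp, mean_unemp, sd_unemp, n_unemp):
          # Standardised mean difference using the pooled standard deviation.
          pooled_sd = np.sqrt(((n_emp - 1) * sd_emp**2 + (n_unemp - 1) * sd_unemp**2)
                              / (n_emp + n_unemp - 2))
          return (mean_emp - mean_unemp) / pooled_sd

      def weighted_effect(ds, ns):
          # Simple sample-size-weighted mean effect across studies.
          ds, ns = np.asarray(ds, float), np.asarray(ns, float)
          return (ds * ns).sum() / ns.sum()

      print(cohens_d(75.2, 12.1, 120, 68.4, 13.0, 140))      # one study's d
      print(weighted_effect([0.8, 0.6, 0.9], [120, 260, 90]))  # pooled effect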

  18. Potential of vehicle-to-grid ancillary services considering the uncertainties in plug-in electric vehicle availability and service/localization limitations in distribution grids

    International Nuclear Information System (INIS)

    Sarabi, Siyamak; Davigny, Arnaud; Courtecuisse, Vincent; Riffonneau, Yann; Robyns, Benoît

    2016-01-01

    Highlights: • The availability uncertainty of PEVs is modelled using a Gaussian mixture model. • Interdependency of stochastic variables is modelled using a copula function. • V2G bidding capacity is calculated using the Free Pattern Search optimization method. • Localization limitation is considered for V2G service potential assessment. • Competitive services for fleets of V2G-enabled PEVs are identified using fuzzy sets. - Abstract: The aim of the paper is to propose an approach for statistical assessment of the potential of plug-in electric vehicles (PEVs) for vehicle-to-grid (V2G) ancillary services, focusing on PEVs doing daily home-work commuting. In this approach, the possible ancillary services (A/S) for each PEV fleet are identified in terms of its available V2G power (AVP) and flexible intervals. The flexible interval is calculated using a powerful stochastic global optimization technique, the so-called “Free Pattern Search” (FPS). A probabilistic method is also proposed to quantify the impacts of PEVs' availability uncertainty using the Gaussian mixture model (GMM), and the interdependency of the stochastic variables on the AVP of each fleet thanks to multivariate modeling with a copula function. Each fleet is analyzed based on its aggregated PEV numbers at different levels of the distribution grid, in order to satisfy the localization limitation of ancillary services. A case study using the proposed approach evaluates the real potential in Niort, a city in the west of France. In fact, by using the proposed approach an aggregator can analyze the V2G potential of PEVs under its contract.

  19. A study of variable thrust, variable specific impulse trajectories for solar system exploration

    Science.gov (United States)

    Sakai, Tadashi

    A study has been performed to determine the advantages and disadvantages of variable thrust and variable Isp (specific impulse) trajectories for solar system exploration. There have been several numerical research efforts on variable thrust, variable Isp, power-limited trajectory optimization problems. All of these results conclude that variable thrust, variable Isp (variable specific impulse, or VSI) engines are superior to constant thrust, constant Isp (constant specific impulse, or CSI) engines. However, most of these research efforts assume a mission from Earth to Mars, and some of them further assume that the orbits of these planets are circular and coplanar. Hence they still lack generality. This research has been conducted to answer the following questions: (1) Is a VSI engine always better than a CSI engine or a high thrust engine for any mission to any planet with any time of flight, considering lower propellant mass as the sole criterion? (2) If a planetary swing-by is used for a VSI trajectory, is the fuel savings of a VSI swing-by trajectory better than that of a CSI swing-by or high thrust swing-by trajectory? To support this research, a unique, new computer-based interplanetary trajectory calculation program has been created. This program utilizes a calculus of variations algorithm to perform overall optimization of thrust, Isp, and thrust vector direction along a trajectory that minimizes fuel consumption for interplanetary travel. It is assumed that the propulsion system is power-limited, and thus the compromise between thrust and Isp is a variable to be optimized along the flight path. This program is capable of optimizing not only variable thrust trajectories but also constant thrust trajectories in 3-D space using a planetary ephemeris database. It is also capable of conducting planetary swing-bys. Using this program, various Earth-originating trajectories have been investigated and the optimized results have been compared to traditional CSI and high thrust trajectories.

  20. Psychobiological Factors Affecting Cortisol Variability in Human-Dog Dyads.

    Directory of Open Access Journals (Sweden)

    Iris Schöberl

    Full Text Available Stress responses within dyads are modulated by interactions such as mutual emotional support and conflict. We investigated dyadic psychobiological factors influencing intra-individual cortisol variability in response to different challenging situations by testing 132 owners and their dogs in a laboratory setting. Salivary cortisol was measured and questionnaires were used to assess owner and dog personality as well as owners' social attitudes towards the dog and towards other humans. We calculated the individual coefficient of variation of cortisol (iCV = sd/mean x 100) over the different test situations as a parameter representing individual variability of cortisol concentration. We hypothesized that high cortisol variability indicates efficient and adaptive coping and a balanced individual and dyadic social performance. Female owners of male dogs had lower iCV than all other owner gender-dog sex combinations (F = 14.194, p < 0.001), whereas owner Agreeableness (NEO-FFI) scaled positively with owner iCV (F = 4.981, p = 0.028). Dogs of owners high in Neuroticism (NEO-FFI) and of owners who were insecure-ambivalently attached to their dogs (FERT) had low iCV (F = 4.290, p = 0.041 and F = 5.948, p = 0.016), as had dogs of owners with human-directed separation anxiety (RSQ) or dogs of owners with a strong desire for independence (RSQ) (F = 7.661, p = 0.007 and F = 9.192, p = 0.003). We suggest that both owner and dog social characteristics influence dyadic cortisol variability, with the human partner being more influential than the dog. Our results support systemic approaches (i.e., considering the social context) in science and in counselling.
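
    The iCV formula quoted in the record is straightforward to reproduce; a small sketch follows. Whether the study used the sample or population standard deviation is not stated, so ddof=1 is an assumption, and the concentrations are invented.

      import numpy as np

      def icv(cortisol_samples):
          # Individual coefficient of variation across test situations:
          # iCV = sd / mean * 100 (sample SD is an assumption here).
          samples = np.asarray(cortisol_samples, dtype=float)
          return samples.std(ddof=1) / samples.mean() * 100

      # Cortisol concentrations (nmol/L, illustrative) over five situations.
      print(icv([4.1, 6.3, 5.2, 8.0, 3.7]))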

  1. Interobserver Variability of Ki-67 Measurement in Breast Cancer

    Directory of Open Access Journals (Sweden)

    Yul Ri Chung

    2016-03-01

    Full Text Available Background: As measurement of the Ki-67 proliferation index is an important part of breast cancer diagnostics, we conducted a multicenter study to examine the degree of concordance in Ki-67 counting and to find factors that lead to its variability. Methods: Thirty observers from thirty different institutions reviewed Ki-67–stained slides of 20 different breast cancers on whole sections and tissue microarray (TMA) via an online system. Ten of the 20 breast cancers had hot spots of Ki-67 expression. Each observer scored Ki-67 in two different ways: direct counting (average vs. hot spot method) and categorical estimation. The intraclass correlation coefficient (ICC) of the Ki-67 index was calculated for comparative analysis. Results: For direct counting, the ICC of TMA was slightly higher than that of whole sections using the average method (0.895 vs 0.858). The ICC of tumors with hot spots was lower than that of tumors without (0.736 vs 0.874). In tumors with hot spots, observers took an additional count from the hot spot; the ICC of whole sections using the hot spot method was still lower than that of TMA (0.737 vs 0.895). In categorical estimation, the Ki-67 index showed a wide distribution in some cases. Nevertheless, in tumors with hot spots, the range of distribution across Ki-67 categories was decreased with the hot spot method and on the TMA platform. Conclusions: Interobserver variability of the Ki-67 index for direct counting and categorical estimation was relatively high. Tumors with hot spots showed greater interobserver variability than those without, and restricting the measurement area yielded lower interobserver variability.
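
    The record reports ICCs but does not state which ICC form was used; the sketch below computes ICC(2,1), i.e. two-way random effects, absolute agreement, single rater, a common choice for multi-observer concordance, directly from the ANOVA mean squares. The example scores are invented.

      import numpy as np

      def icc_2_1(scores):
          # scores: (n cases x k observers) matrix of Ki-67 indices.
          x = np.asarray(scores, dtype=float)
          n, k = x.shape
          grand = x.mean()
          ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between cases
          ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between observers
          sse = ((x - grand) ** 2).sum() - ssr - ssc       # residual
          msr = ssr / (n - 1)
          msc = ssc / (k - 1)
          mse = sse / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      # Ki-67 indices (%) for 4 tumors scored by 3 observers (invented).
      print(icc_2_1([[10, 12, 11], [35, 40, 33], [22, 25, 21], [55, 60, 58]]))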

  2. Drivers of household food availability in sub-Saharan Africa based on big data from small farms

    Science.gov (United States)

    Frelat, Romain; Lopez-Ridaura, Santiago; Herrero, Mario; Douxchamps, Sabine; Djurfeldt, Agnes Andersson; Erenstein, Olaf; Henderson, Ben; Kassie, Menale; Paul, Birthe K.; Rigolot, Cyrille; Ritzema, Randall S.; Rodriguez, Daniel; van Asten, Piet J. A.; van Wijk, Mark T.

    2016-01-01

    We calculated a simple indicator of food availability using data from 93 sites in 17 countries across contrasted agroecologies in sub-Saharan Africa (>13,000 farm households) and analyzed the drivers of variations in food availability. Crop production was the major source of energy, contributing 60% of food availability. The off-farm income contribution to food availability ranged from 12% for households without enough food available (18% of the total sample) to 27% for the 58% of households with sufficient food available. Using only three explanatory variables (household size, number of livestock, and land area), we were able to correctly predict the agriculturally determined status of food availability for 72% of the households, but the relationships were strongly influenced by the degree of market access. Our analyses suggest that targeting poverty through improving market access and off-farm opportunities is a better strategy to increase food security than focusing on agricultural production and closing yield gaps. This calls for multisectoral policy harmonization, incentives, and diversification of employment sources rather than a singular focus on agricultural development. Recognizing and understanding diversity among smallholder farm households in sub-Saharan Africa is key for the design of policies that aim to improve food security. PMID:26712016

  3. Drivers of household food availability in sub-Saharan Africa based on big data from small farms.

    Science.gov (United States)

    Frelat, Romain; Lopez-Ridaura, Santiago; Giller, Ken E; Herrero, Mario; Douxchamps, Sabine; Andersson Djurfeldt, Agnes; Erenstein, Olaf; Henderson, Ben; Kassie, Menale; Paul, Birthe K; Rigolot, Cyrille; Ritzema, Randall S; Rodriguez, Daniel; van Asten, Piet J A; van Wijk, Mark T

    2016-01-12

    We calculated a simple indicator of food availability using data from 93 sites in 17 countries across contrasted agroecologies in sub-Saharan Africa (>13,000 farm households) and analyzed the drivers of variations in food availability. Crop production was the major source of energy, contributing 60% of food availability. The off-farm income contribution to food availability ranged from 12% for households without enough food available (18% of the total sample) to 27% for the 58% of households with sufficient food available. Using only three explanatory variables (household size, number of livestock, and land area), we were able to correctly predict the agriculturally determined status of food availability for 72% of the households, but the relationships were strongly influenced by the degree of market access. Our analyses suggest that targeting poverty through improving market access and off-farm opportunities is a better strategy to increase food security than focusing on agricultural production and closing yield gaps. This calls for multisectoral policy harmonization, incentives, and diversification of employment sources rather than a singular focus on agricultural development. Recognizing and understanding diversity among smallholder farm households in sub-Saharan Africa is key for the design of policies that aim to improve food security.
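
    As a rough illustration of the prediction step named in the record, classifying food-availability status from household size, livestock number, and land area, here is a logistic-regression sketch on made-up data; it is not the authors' actual model or dataset.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Toy stand-ins for the three explanatory variables: household size,
      # number of livestock, land area (ha). Labels: 1 = sufficient food
      # available, 0 = not. All values are invented.
      X = np.array([[4, 2, 0.5], [7, 0, 0.3], [5, 10, 2.0],
                    [9, 1, 0.8], [3, 6, 1.5], [8, 12, 3.0]])
      y = np.array([0, 0, 1, 0, 1, 1])

      model = LogisticRegression().fit(X, y)
      print(model.predict(X))   # predicted food-availability status
      print(model.score(X, y))  # fraction classified correctly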

  4. Economic Statistical Design of Variable Sampling Interval X̄ Control Chart Based on Surrogate Variable Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lee Tae-Hoon

    2016-12-01

    Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using the surrogate. In this paper, we present a model for the economic statistical design of a VSI (variable sampling interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on the performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the VSS (variable sample size) X̄ control chart, EWMA and CUSUM charts, and so on.

  5. Dissolution comparisons using a Multivariate Statistical Distance (MSD) test and a comparison of various approaches for calculating the measurements of dissolution profile comparison.

    Science.gov (United States)

    Cardot, J-M; Roudier, B; Schütz, H

    2017-07-01

    The f2 test is generally used for comparing dissolution profiles. In cases of high variability, the f2 test is not applicable, and the Multivariate Statistical Distance (MSD) test is frequently proposed as an alternative by the FDA and EMA. The guidelines provide only general recommendations. MSD tests can be performed either on raw data, with or without time as a variable, or on the parameters of fitted models. In addition, data can be limited, as in the case of the f2 test, to dissolutions of up to 85%, or all available data can be used. In the context of the present paper, the recommended calculation included all raw dissolution data up to the first point greater than 85% as variables, without the various times as parameters. The proposed MSD overcomes several drawbacks found in other methods.
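
    For reference, the conventional f2 similarity factor that the record contrasts with the MSD test can be computed as below; the profiles are invented and truncated at the first point above 85% dissolved, per common practice. The MSD test itself, a Mahalanobis-type distance with a confidence region, is more involved and not reproduced here.

      import numpy as np

      def f2(reference, test):
          # Similarity factor between two mean dissolution profiles
          # (percent dissolved at matched time points):
          # f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
          r, t = np.asarray(reference, float), np.asarray(test, float)
          msd = np.mean((r - t) ** 2)
          return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

      # Invented profiles, cut at the first point above 85% dissolved;
      # f2 >= 50 is conventionally taken as similar.
      print(f2([15, 35, 55, 75, 88], [12, 30, 52, 72, 86]))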

  6. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
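
    A minimal sketch of the weighted binary matrix sampling step, assuming each sub-model (row) includes each variable (column) with a probability equal to its current weight; the weights, sizes, and seed are placeholders, and VISSA's weight updating and sub-model evaluation are omitted.

      import numpy as np

      rng = np.random.default_rng(0)

      def wbms(weights, n_submodels):
          # Draw sub-models that include variable j with probability w_j.
          w = np.asarray(weights, dtype=float)
          return (rng.random((n_submodels, w.size)) < w).astype(int)

      # Eight sub-models over five variables, starting from equal weights.
      print(wbms([0.5, 0.5, 0.5, 0.5, 0.5], n_submodels=8))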

  7. Technology for Fissionable Materials Detection by Use of 100 MeV Variable Linac

    CERN Document Server

    Karasyov, Sergey P; Dovbnja, Anatoliy N; Eran, L; Kiryukhin, Nikolay M; Melnik, Yu M; Ran'iuk, Yu; Shlyakhov, Il'ya N; Trubnikov, Sergiy V

    2005-01-01

    A new concept for a two-step facility to increase the accuracy and reliability of detecting heavily shielded fissionable materials (FM) in marine containers is presented. The facility will detect FM in two steps. An existing dual-view, dual-energy X-ray scanner, based on a 7 MeV electron accelerator, will select the suspicious places inside a container. A linac with variable energy (up to 100 MeV) will be used for the second step. The technology will detect fissionable nuclei by gamma-induced fission reactions and delayed neutron registration. Little-known Ukrainian experimental data obtained in the Chernobyl clean-up program will be presented to support the proposed concept. Theoretical calculations of neutron fluxes scale these results to marine container size. A modified GEANT code for electron/gamma penetration and the authors' own software for neutron yield/penetration are used for these calculations. Available facilities (X-ray scanners, linac, detectors), which will be used for proof of concept, are described...

  8. Calculation of Rydberg interaction potentials

    International Nuclear Information System (INIS)

    Weber, Sebastian; Büchler, Hans Peter; Tresp, Christoph; Urvoy, Alban; Hofferberth, Sebastian; Menke, Henri; Firstenberg, Ofer

    2017-01-01

    The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions to the leading dipole–dipole interaction term are no longer sufficient. In this tutorial, we review all relevant aspects of the full calculation of Rydberg interaction potentials. We discuss the derivation of the interaction Hamiltonian from the electrostatic multipole expansion, numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source. (tutorial)

  9. Analysis of the reduced wake effect for available wind power calculation during curtailment

    NARCIS (Netherlands)

    Sanchez Perez Moreno, S.; Ummels, B. C.; Zaayer, M B

    2017-01-01

    With the increase of installed wind power capacity, the contribution of wind power curtailment to power balancing becomes more relevant. Determining the available power during curtailment at the wind farm level is not trivial, as curtailment changes the wake effects in a wind farm. Current best

  10. Optimization method for quantitative calculation of clay minerals in soil

    Indian Academy of Sciences (India)

    However, no reliable method for quantitative analysis of clay minerals has been established so far. In this study, an attempt was made to propose an optimization method for the quantitative ... The mineralogical constitution of soil is rather complex ... K2O, MgO, and TFe as variables for the calculation.

  11. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Directory of Open Access Journals (Sweden)

    Marika T Leving

    Full Text Available It has been suggested that higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability in wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Seventeen participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received an equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups, the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables, with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and to optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration, the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables, with their respective coefficients of variation, were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that improvements in mechanical efficiency and propulsion technique do not necessarily coincide.

  12. Modeling Short-Range Soil Variability and its Potential Use in Variable-Rate Treatment of Experimental Plots

    Directory of Open Access Journals (Sweden)

    A Moameni

    2011-02-01

    Full Text Available Abstract In Iran, experimental plots under fertilizer trials are managed in such a way that the whole plot area uniformly receives agricultural inputs. This could lead to biased research results and hence to suppression of the efforts made by researchers. This research was conducted at a selected site belonging to the Gonbad Agricultural Research Station, located in the semiarid region of northeastern Iran. The aim was to characterize the short-range spatial variability of the inherent and management-dependent soil properties and to determine whether this variation is large and can be managed at practical scales. The soils were sampled on a grid with points 55 m apart. In total, 100 composite soil samples were collected from the topsoil (0-30 cm) and were analyzed for calcium carbonate equivalent, organic carbon, clay, available phosphorus, available potassium, iron, copper, zinc and manganese. Descriptive statistics were applied to check data trends. Geostatistical analysis was applied for variography, model fitting and contour mapping. Sampling at 55 m made it possible to split the area of the selected experimental plot into relatively uniform areas that allow the application of agricultural inputs at variable rates. Keywords: Short-range soil variability, Within-field soil variability, Interpolation, Precision agriculture, Geostatistics

  13. Cepheid pulsation theory and multiperiodic cepheid variables

    International Nuclear Information System (INIS)

    Cox, A.N.; Cox, J.P.

    1975-01-01

    In this review of the multiperiodic Cepheid variables, the subject matter is divided into four parts. The first discusses general causes of pulsation of Cepheids and other variable stars, and their locations on the H-R diagram. In the second section, the linear adiabatic and nonadiabatic theory calculation of radial pulsation periods and their application to the problem of masses and double-mode Cepheids are reviewed. Periodic solutions, and their stability, of the nonlinear radial pulsation equations for Cepheids and RR Lyrae stars are considered in the third section. The last section provides the latest results on nonlinear, nonperiodic, radial pulsations for Cepheids and RR Lyrae stars. (BJG)

  14. A tool for the calculation of rockfall fragility curves for masonry buildings

    Science.gov (United States)

    Mavrouli, Olga

    2017-04-01

    Masonries are common structures in mountainous and coastal areas, and they exhibit substantial vulnerability to rockfalls. For big rockfall events or precarious structures the damage is very high and repair is not cost-effective. Nonetheless, for small or moderate rockfalls, the damage may vary as a function of the characteristics of the impacting rock blocks and of the buildings. The evaluation of the expected damage to masonry buildings for different small and moderate rockfall scenarios is useful for assessing the expected direct loss in constructed areas and its implications for life safety. A tool for the calculation of fragility curves for masonry buildings impacted by rock blocks is presented. The fragility curves provide the probability of exceeding a given damage state (low, moderate and high) for increasing impact energies of the rock blocks on the walls. The damage states are defined according to a damage index equal to the percentage of the damaged area of a wall, taken as proportional to the repair cost. Aleatoric and epistemic uncertainties are incorporated with respect to the (i) rock block velocity, (ii) rock block size, (iii) masonry width, and (iv) masonry resistance. The calculation of the fragility curves is performed using a Monte Carlo simulation. Given user-defined data for the average value of these four parameters and their variability, random scenarios are developed, the respective damage index is assessed for each scenario, and the probability of exceedance of each damage state is calculated. For the assessment of the damage index, a database developed from the results of 576 analytical simulations is used. The ranges of the variables are: wall width 0.4-1.0 m, wall tensile strength 0.1-0.6 MPa, rock velocity 1-20 m/s, rock size 1-20 m3. Nonetheless this tool permits the use of alternative databases, on the condition that they contain data that correlate the damage with the four aforementioned variables. The fragility curves can
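
    A toy version of the Monte Carlo loop described above: sample the uncertain inputs, evaluate a damage index for each scenario, and estimate the probability of exceeding a damage-state threshold at each impact energy. The damage_index rule and the energy values are invented stand-ins for the tool's database of 576 analytical simulations.

      import numpy as np

      rng = np.random.default_rng(1)

      def damage_index(energy, width, strength):
          # Stand-in for the database lookup: damaged wall-area fraction
          # grows with impact energy, shrinks with wall width and tensile
          # strength. The scaling constant is arbitrary.
          return np.clip(energy / (3000.0 * width * strength), 0.0, 1.0)

      def exceedance_prob(energy, n=10_000, threshold=0.3):
          # P(damage index > threshold), sampling width (m) and strength
          # (MPa) over the ranges quoted in the record.
          width = rng.uniform(0.4, 1.0, n)
          strength = rng.uniform(0.1, 0.6, n)
          return np.mean(damage_index(energy, width, strength) > threshold)

      # One fragility curve: exceedance probability vs impact energy (a.u.).
      for e in (50, 200, 500, 1000):
          print(e, exceedance_prob(e))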

  15. Calculation of isotopic profile during band displacement on ion exchange resins

    International Nuclear Information System (INIS)

    Sonwalkar, A.S.; Puranik, V.D.; D'Souza, A.B.

    1981-01-01

    A method has been developed to calculate the isotopic profile during band displacement on ion exchange resins using computer simulation. Persoz had utilized this technique earlier for calculating the isotopic profile during band displacement as well as frontal analysis. The present report deals with a simplification of the method used by Persoz, reducing the number of variables and making certain approximations where the separation factor is not far from unity. Calculations were made for the typical case of boron isotope separation. The results obtained by the modified method were found to be in very good agreement with those obtained by using an exact equation, while requiring considerably less computer time. (author)

  16. Extent of, and variables associated with, blood pressure variability among older subjects.

    Science.gov (United States)

    Morano, Arianna; Ravera, Agnese; Agosta, Luca; Sappa, Matteo; Falcone, Yolanda; Fonte, Gianfranco; Isaia, Gianluca; Isaia, Giovanni Carlo; Bo, Mario

    2018-02-23

    Blood pressure variability (BPV) may have prognostic implications for cardiovascular risk and cognitive decline; however, BPV has yet to be studied in old and very old people. The aim of the present study was to evaluate the extent of BPV and to identify variables associated with BPV among older subjects. A retrospective study of patients aged ≥ 65 years who underwent 24-h ambulatory blood pressure monitoring (ABPM) was carried out. Three different BPV indexes were calculated for systolic and diastolic blood pressure (SBP and DBP): standard deviation (SD), coefficient of variation (CV), and average real variability (ARV). Demographic variables and use of antihypertensive medications were considered. The study included 738 patients. Mean age was 74.8 ± 6.8 years. Mean SBP and DBP SD were 20.5 ± 4.4 and 14.6 ± 3.4 mmHg. Mean SBP and DBP CV were 16 ± 3 and 20 ± 5%. Mean SBP and DBP ARV were 15.7 ± 3.9 and 11.8 ± 3.6 mmHg. On multivariate analysis, older age, female sex and uncontrolled mean blood pressure were associated with both systolic and diastolic BPV indexes. The use of calcium channel blockers and alpha-adrenergic antagonists was associated with lower systolic and diastolic BPV indexes, respectively. Among elderly subjects undergoing 24-h ABPM, we observed remarkably high indexes of BPV, which were associated with older age, female sex, and uncontrolled blood pressure values.
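
    The three indexes are simple to compute from a 24-h series; a sketch follows, with ARV taken as the mean absolute difference between successive readings. Use of the sample SD (ddof=1) is an assumption, and the readings are invented.

      import numpy as np

      def bpv_indexes(bp):
          # SD, CV (%), and average real variability (mean absolute
          # successive difference) for one ABPM series.
          bp = np.asarray(bp, dtype=float)
          sd = bp.std(ddof=1)
          cv = sd / bp.mean() * 100
          arv = np.abs(np.diff(bp)).mean()
          return sd, cv, arv

      # Illustrative systolic readings (mmHg) from one recording.
      print(bpv_indexes([142, 150, 138, 155, 147, 133, 160, 149]))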

  17. Application of monosymmetrical I-beams in light metal frames with variable stiffness

    Directory of Open Access Journals (Sweden)

    I.O. Sklyarov

    2016-05-01

    Full Text Available The article is devoted to the effectiveness of monosymmetrical I-beams with flexible walls in frame structures of variable section, and to the features of their calculation and design. Aim: The aim of this research is to confirm the feasibility of I-beams with flexible walls as bearing elements of light metal skeletons for buildings of universal assignment. Materials and Methods: To reduce metal consumption, a frame is conventionally divided into several sections according to the bending moment diagrams, so that in the more compressed zone of a section the belt of greater area is located, and in the stretched or less stressed zone the smaller belt is installed. The resulting sections have a smaller area compared to symmetric profiles. Bending moments are additionally reduced as a result of the displacement of the axes of elements with variable cross section. Results: The calculations and selection of frame sections have shown that the weight of bearing elements can be reduced by 10% compared to symmetrical profiles of variable stiffness by using monosymmetrical sections. The effectiveness of the proposed constructive solution is confirmed by comparing the designed frame weight with an existing analogue: the symmetrical variable-stiffness frame is 15.3% lighter, and the monosymmetrical frame is 27% lighter. Conclusions: Analysis of the stress-strain state of the structures showed, first, that with an asymmetrical profile the center of gravity of the section shifts, which leads to a redistribution of internal forces in the frame; second, that because of the small cross-sectional area of the stretched zones it is more difficult to ensure the stability of the plane form of bending of the beams, which leads to the need to restrain the beams with bracing at smaller intervals.

  18. Calculations in the weak and crossover regions of SU(2) lattice gauge theory

    International Nuclear Information System (INIS)

    Greensite, J.; Hansson, T.H.; Hari Dass, N.D.; Lauwers, P.G.

    1981-07-01

    A calculational scheme for lattice gauge theory is proposed which interpolates between lowest order mean-field and full Monte-Carlo calculations. The method is to integrate over a restricted set of link variables in the functional integral, with the remainder fixed at their mean-field value. As an application the authors compute small SU(2) Wilson loops near and above the weak-to-strong coupling transition point. (Auth.)

  19. Use of nuclear reaction models in cross section calculations

    International Nuclear Information System (INIS)

    Grimes, S.M.

    1975-03-01

    The design of fusion reactors will require information about a large number of neutron cross sections in the MeV region. Because of the obvious experimental difficulties, it is probable that not all of the cross sections of interest will be measured. Current direct and pre-equilibrium models can be used to calculate non-statistical contributions to neutron cross sections from information available from charged particle reaction studies; these are added to the calculated statistical contribution. Estimates of the reliability of such calculations can be derived from comparisons with the available data. (3 tables, 12 figures) (U.S.)

  20. Quantifying benthic nitrogen fluxes in Puget Sound, Washington: a review of available data

    Science.gov (United States)

    Sheibley, Richard W.; Paulson, Anthony J.

    2014-01-01

    Understanding benthic fluxes is important for understanding the fate of materials that settle to the Puget Sound, Washington, seafloor, as well as the impact these fluxes have on the chemical composition and biogeochemical cycles of marine waters. Existing approaches used to measure benthic nitrogen flux in Puget Sound and elsewhere were reviewed and summarized, and factors for considering each approach were evaluated. Factors for selecting an appropriate approach for gathering information about benthic flux include the availability of resources, the objectives of the project, and which processes each approach actually measures. An extensive literature search was undertaken to summarize known benthic nitrogen fluxes in Puget Sound. A total of 138 individual flux chamber measurements and 38 sets of diffusive fluxes were compiled for this study. Of the diffusive fluxes, 35 new datasets were located, and new flux calculations are presented in this report. About 65 new diffusive flux calculations are provided across all nitrogen species (nitrate, NO3-; nitrite, NO2-; ammonium, NH4+). Analysis of this newly compiled benthic flux dataset showed that fluxes beneath deep (greater than 50 meters) water tended to be lower than those beneath shallow (less than 50 meters) water. Additionally, variability in flux at the shallow depths was greater, possibly indicating a more dynamic interaction between the benthic and pelagic environments. The overall range of bottom temperatures from studies in the Puget Sound area was small (5–16 degrees Celsius), and only NH4+ flux showed any pattern with temperature. For NH4+, flux values and variability increased above about 12 degrees Celsius. Collection of additional study-site metadata about environmental factors (bottom temperature, depth, sediment porosity, sediment type, and sediment organic matter) will help with the development of a broader regional understanding of benthic nitrogen flux in Puget Sound.

  1. A pencil beam dose calculation model for CyberKnife system

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Bin; Li, Yongbao; Liu, Bo; Zhou, Fugen [Image Processing Center, Beihang University, Beijing 100191 (China); Xu, Shouping [Department of Radiation Oncology, PLA General Hospital, Beijing 100853 (China); Wu, Qiuwen, E-mail: Qiuwen.Wu@Duke.edu [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710 (United States)

    2016-10-15

    Purpose: The CyberKnife system was initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of the system, making arbitrarily shaped treatment fields possible. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in irregularly shaped small fields for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom-ratios (TPRs) and off-center-ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs in the penumbra region, and these were further evaluated using the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR in the infield and outfield regions are both 0.5%. The distance to agreement (DTA) in the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variations with the source to axis distance (SAD). In noncircular field validation

  2. Investigation of hydrological variability in the Korean Peninsula with the ENSO teleconnections

    Directory of Open Access Journals (Sweden)

    S. Yoon

    2016-10-01

    Full Text Available This study analyzes nonlinear behavior and links through atmospheric teleconnections between hydrologic variables and climate indices, using statistical models, during the warm season (June to September) over the Korean Peninsula (KP). The major ocean-related climate factor, the El Niño-Southern Oscillation (ENSO), was used to analyze the atmospheric teleconnections by principal component analysis (PCA) and singular spectrum analysis (SSA). The nonlinear lag-time correlations between climate indices and hydrologic variables were calculated by the Mutual Information (MI) technique. The nonlinear correlation coefficients (CCs) by MI were higher than the linear CCs, and ENSO shows a lag-time correlation of a few months. The warm season hydrologic variables in the KP show a significant increasing tendency during warm pool (WP) El Niño decaying years and a significant decreasing tendency during cold tongue (CT) El Niño decaying years, while La Niña years show slightly above normal conditions. A better understanding of the relationship between climate indices and streamflow, and of their local impacts, can help water managers and scientists prepare for river discharge management. Furthermore, these results provide useful data for policy makers and end-users to support long-range water resources prediction and water-related policy.

  3. Water availability and demand in the development regions of South Africa

    Directory of Open Access Journals (Sweden)

    A. B. de Villiers

    1988-03-01

    Full Text Available The availability of water data for the development regions is at present insufficient. This is because water supply and demand are calculated for the physical drainage regions (watersheds), while the development regions do not correspond with the drainage regions. The necessary calculations accordingly cannot currently be made. This paper addresses this problem.

  4. Impact of Reconstruction Algorithms on CT Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm Variability.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo

    2016-01-01

    To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra- and inter-reader and inter-reconstruction algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43±10.56 years) with 42 pulmonary tumors (22.56±8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences among the reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction algorithm variability was significantly greater than inter-reader variability. In summary, inter-reconstruction algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.

  5. Calculations of optical rotation: Influence of molecular structure

    Directory of Open Access Journals (Sweden)

    Yu Jia

    2012-01-01

    Full Text Available The ab initio Hartree-Fock (HF) method and Density Functional Theory (DFT) were used to calculate the optical rotation of 26 chiral compounds. The effects of the level of theory and basis sets used for the calculation, and the influence of solvents on the geometry and on the values of the calculated optical rotation, are all discussed. Including the polarizable continuum model in the calculation did not improve the accuracy effectively, but it was superior to γs. The optical rotation of five- or six-membered cyclic compounds was calculated, and 17 pyrrolidine or piperidine derivatives calculated by the HF and DFT methods gave acceptable predictions. The nitrogen atom affects the calculation results dramatically, and it must be present in the molecular structure to obtain an accurate computation result; namely, when the nitrogen atom was substituted by an oxygen atom in the ring, the calculation results deteriorated.

  6. Willow growing - Methods of calculation and profitability

    International Nuclear Information System (INIS)

    Rosenqvist, H.

    1997-01-01

    The calculation method presented here makes it possible to conduct profitability comparisons between annual and perennial crops while taking the planning situation into account. The method applied is a modified total step calculation. The difference between a traditional total step calculation and the modified version is the way in which payments and disbursements are taken into account over a period of several years. This is achieved by combining the present value method and the annuity method. The choice of interest rate has great bearing on the result in perennial calculations. The various components influencing the interest rate are analysed, and factors relating to the establishment of the interest rate in different situations are described. The risk factor can be an important variable component of the interest rate calculation. Risk is also addressed from an approach in accordance with portfolio theory. The application of the methods sheds light on the profitability of Salix cultivation from the viewpoint of business economics, and also on how different factors influence the profitability of Salix cultivation. Aspects studied are harvesting intervals, the importance of yield level, the competitiveness of Salix versus grain cultivation, the influence of income taxes on profitability, etc. Methods for evaluating activities concerning the cultivation of a perennial crop are described, including the application of nitrogen fertilization to Salix cultivation. Studies using these methods have examined the profitability of nitrogen fertilizer in Salix cultivation during the first rotation period. Nitrogen fertilizer profitability has been investigated using both production functions and cost calculations, taking the year of fertilization into consideration. 72 refs., 2 figs., 52 tabs
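
    A minimal sketch of the present value/annuity combination underlying the method: discount a multi-year Salix cash flow to a present value, then convert it to an equivalent constant annual amount so that the perennial crop can be compared with an annual one. All figures are illustrative, not from the report.

      import numpy as np

      def npv(cash_flows, rate):
          # Present value of yearly net payments (index 0 = year 0).
          cf = np.asarray(cash_flows, dtype=float)
          return np.sum(cf / (1.0 + rate) ** np.arange(cf.size))

      def annuity(present_value, rate, n_years):
          # Constant yearly amount with the same present value.
          return present_value * rate / (1.0 - (1.0 + rate) ** -n_years)

      # Toy Salix cash flow: establishment cost in year 0, harvest income
      # every 4th year over a 20-year stand (all values illustrative).
      cf = np.zeros(21)
      cf[0] = -1200.0
      cf[4::4] = 900.0
      pv = npv(cf, rate=0.05)
      print(pv, annuity(pv, rate=0.05, n_years=20))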

  7. Calculating the C operator in PT-symmetric quantum mechanics

    International Nuclear Information System (INIS)

    Bender, C.M.

    2004-01-01

    It has recently been shown that a non-Hermitian Hamiltonian H possessing an unbroken PT-symmetry (i) has a real spectrum that is bounded below, and (ii) defines a unitary theory of quantum mechanics with positive norm. The proof of unitarity requires a linear operator C, which was originally defined as a sum over the eigenfunctions of H. However, using this definition it is cumbersome to calculate C in quantum mechanics and impossible in quantum field theory. An alternative method is devised here for calculating C directly in terms of the operator dynamical variables of the quantum theory. This new method is general and applies to a variety of quantum mechanical systems having several degrees of freedom. More importantly, this method can be used to calculate the C operator in quantum field theory. The C operator is a new time-independent observable in PT-symmetric quantum field theory. (author)

  8. Availability program: Phase I report

    International Nuclear Information System (INIS)

    Thomson, S.L.; Dabiri, A.; Keeton, D.C.; Riemer, B.W.; Waganer, L.M.

    1985-05-01

    An Availability Working Group was formed within the Office of Fusion Energy in March 1984 to consider the establishment of an availability program for magnetic fusion. The scope of this program is defined to include the development of (1) a comprehensive data base, (2) empirical correlations, and (3) analytical methods for application to fusion facilities and devices. The long-term goal of the availability program is to develop a validated, integrated methodology that will provide (1) projections of plant availability and (2) input to design decisions on maintainability and system reliability requirements. The Phase I study group was commissioned to assess the status of work in progress that is relevant to the availability program. The scope of Phase I included surveys of existing data and data collection programs at operating fusion research facilities, the assessment of existing computer models to calculate system availability, and the review of methods to predict and correlate data on component failure and maintenance. The results of these investigations are reported to the Availability Working Group in this document

  9. Variable gamma-ray sky at 1 GeV

    International Nuclear Information System (INIS)

    Pshirkov, M. S.; Rubtsov, G. I.

    2013-01-01

    We search for long-term variability of the gamma-ray sky in the energy range E > 1 GeV with 168 weeks of data from the gamma-ray telescope Fermi-LAT. We perform a full-sky blind search for regions with variable flux, looking for deviations from uniformity. We bin the sky into 12288 pixels using the HEALPix package and use the Kolmogorov-Smirnov test to compare weekly photon counts in each pixel with the constant flux hypothesis. The weekly exposure of Fermi-LAT for each pixel is calculated with the Fermi-LAT tools. We consider flux variations in a pixel significant if the statistical probability of uniformity is less than 4 × 10⁻⁶, which corresponds to 0.05 false detections in the whole set. We identified 117 variable sources, 27 of which have not been reported as variable before. The sources with previously unidentified variability comprise 25 active galactic nuclei (AGN) belonging to the blazar class (11 BL Lacs and 14 FSRQs), one AGN of an uncertain type, and one pulsar, PSR J0633+1746 (Geminga).
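
    A rough sketch of the per-pixel test: photon "arrival weeks" are compared against the exposure-weighted constant-flux expectation with a KS test. The actual Fermi-LAT analysis details differ, and for discrete weekly bins the KS p-value is only approximate.

      import numpy as np
      from scipy import stats

      def pixel_is_variable(weekly_counts, weekly_exposure, alpha=4e-6):
          # Constant flux => expected photon fraction per week tracks exposure.
          counts = np.asarray(weekly_counts, dtype=int)
          expo = np.asarray(weekly_exposure, dtype=float)
          weeks = np.repeat(np.arange(counts.size), counts)  # per-photon weeks
          model_cdf = lambda w: np.cumsum(expo / expo.sum())[
              np.clip(w.astype(int), 0, counts.size - 1)]
          stat, p = stats.kstest(weeks, model_cdf)
          return p < alpha, p

      rng = np.random.default_rng(2)
      steady = rng.poisson(20, size=168)   # a steady source over 168 weeks
      print(pixel_is_variable(steady, np.ones(168)))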

  10. Build-up Factor Calculation for Ordinary Concrete, Baryte Concrete and Blast-furnace Sludge Concrete as γ Radiation Shielding

    International Nuclear Information System (INIS)

    Isman MT; Elisabeth Supriatni; Tochrul Binowo

    2002-01-01

    Calculations of the build-up factors of ordinary concrete, baryte concrete and blast-furnace sludge concrete have been carried out. The calculations were based on dose rate measurements of a Cs-137 source before and after passing through shielding. The investigated variables were concrete type, thickness of concrete and relative position of the concrete. The concrete type variables were ordinary concrete, baryte concrete and blast-furnace sludge concrete. The thickness variables were 6, 12, 18, 24, 30 and 36 cm. The relative position variables were close to the source and close to the detector. The results showed that concrete type and position did not have a significant effect on the build-up factor value, while the concrete thickness (r) and the attenuation coefficient (μ) did influence the build-up factor: the higher the μr value, the higher the build-up factor. (author)
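
    The build-up factor itself is the ratio of the measured transmitted dose rate to the uncollided-beam prediction D0·exp(-μr); a sketch with invented numbers:

      import numpy as np

      def buildup_factor(dose_with_shield, dose_no_shield, mu, r):
          # Ratio of transmitted dose rate to the uncollided-beam
          # prediction D0 * exp(-mu * r); 1.0 means no build-up.
          return dose_with_shield / (dose_no_shield * np.exp(-mu * r))

      # Invented numbers: mu = 0.15 1/cm, 18 cm of concrete.
      print(buildup_factor(dose_with_shield=5.4, dose_no_shield=40.0,
                           mu=0.15, r=18.0))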

  11. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  12. Sorting variables for each case: a new algorithm to calculate injury severity score (ISS) using SPSS-PC.

    Science.gov (United States)

    Linn, S

    One of the most often used measures of multiple injuries is the injury severity score (ISS). Determination of the ISS is based on the abbreviated injury scale (AIS). This paper suggests a new algorithm to sort the AIS values for each case and calculate the ISS. The program takes the unsorted abbreviated injury scale (AIS) levels for each case and rearranges them in descending order. The first three sorted AIS values, representing the three most severe injuries of a person, are then used to calculate the injury severity score (ISS). This algorithm should be useful for analyses of clusters of injuries, especially when many patients have multiple injuries.
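
    The sorting algorithm the record describes reduces to a few lines, sketched below. Note that the full ISS definition takes the highest AIS per body region, which the plain per-case sort here glosses over, and the AIS-6 convention is an added detail not mentioned in the abstract.

      def iss(ais_scores):
          # Sort a case's AIS values in descending order, keep the three
          # most severe, and sum their squares. (Convention: any AIS of 6
          # sets ISS to the maximum of 75.)
          top3 = sorted(ais_scores, reverse=True)[:3]
          if top3 and top3[0] == 6:
              return 75
          return sum(a * a for a in top3)

      print(iss([3, 2, 5, 1]))  # 25 + 9 + 4 = 38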

  13. An application of the variable-r method to subpopulation growth rates in a 19th century agricultural population

    Directory of Open Access Journals (Sweden)

    Corey Sparks

    2009-07-01

    Full Text Available This paper presents an analysis of the differential growth rates of the farming and non-farming segments of a rural Scottish community during the 19th and early 20th centuries using the variable-r method allowing for net migration. Using this method, I find that the farming population of Orkney, Scotland, showed less variability in their reproduction and growth rates than the non-farming population during a period of net population decline. I conclude by suggesting that the variable-r method can be used in general cases where the relative growth of subpopulations or subpopulation reproduction is of interest.

  14. An Exploration of Wind Stress Calculation Techniques in Hurricane Storm Surge Modeling

    Directory of Open Access Journals (Sweden)

    Kyra M. Bryant

    2016-09-01

    Full Text Available As hurricanes continue to threaten coastal communities, accurate storm surge forecasting remains a global priority. Achieving a reliable storm surge prediction necessitates accurate hurricane intensity and wind field information. The wind field must be converted to wind stress, which represents the air-sea momentum flux component required in storm surge and other oceanic models. This conversion requires a drag coefficient multiplying the air density and the square of the wind speed to represent the air-sea momentum exchange at a given location. Air density is a known parameter and wind speed is a forecasted variable, whereas the drag coefficient is calculated using an empirical correlation. The accuracy of that correlation has brewed a controversy of its own for more than half a century. This review paper examines the lineage of drag coefficient correlations and their acceptance among scientists.
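
    To make the conversion concrete, here is a sketch using one classic empirical correlation, Large and Pond (1981), for the neutral drag coefficient. It is only one of the many correlations the review traces, and it is formally valid to about 25 m/s, so the hurricane-force value below is an extrapolation.

      import numpy as np

      RHO_AIR = 1.225  # kg/m^3

      def drag_coefficient(u10):
          # Large and Pond (1981) neutral 10-m drag coefficient:
          # Cd*1e3 = 1.2 for U < 11 m/s, 0.49 + 0.065*U for 11-25 m/s.
          u10 = np.asarray(u10, dtype=float)
          return np.where(u10 < 11.0, 1.2e-3, (0.49 + 0.065 * u10) * 1e-3)

      def wind_stress(u10):
          # Air-sea momentum flux tau = rho_air * Cd * U10^2 (N/m^2).
          return RHO_AIR * drag_coefficient(u10) * np.asarray(u10) ** 2

      print(wind_stress([5.0, 15.0, 30.0]))  # last value extrapolates Cd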

  15. Massively parallel Fokker-Planck calculations

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1990-01-01

    This paper reports that the Fokker-Planck package FPPAC, which solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space, has been rewritten for the Connection Machine 2. This has involved allocation of variables either to the front end or the CM2, minimization of data flow, and replacement of Cray-optimized algorithms with ones suitable for a massively parallel architecture. Calculations have been carried out on various Connection Machines throughout the country. Results and timings on these machines have been compared to each other and to those on the static memory Cray-2. For large problem size, the Connection Machine 2 is found to be cost-efficient

  16. Behavioral Variables Associated with Obesity in Police Officers

    Science.gov (United States)

    CAN, S. Hakan; HENDY, Helen M.

    2014-01-01

    Past research has documented that non-behavioral variables (such as long work hours and exposure to police stressors) are associated with obesity risk in police officers, but limited research has examined behavioral variables that might be targeted by Employee Assistance Programs for police weight management. The present study compared non-obese and obese officers on behavioral variables found to be associated with obesity in other adult samples: physical activity (cardiovascular, strength-training, stretching), sleep duration, and consumption of alcohol, fruit and vegetables, and snack foods. Participants included 172 male police officers who completed questionnaires to report height and weight, used to calculate body mass index (BMI = kg/m2) and to divide them into “non-obese” and “obese” groups. They also reported the above behaviors and six non-behavioral variables found to be associated with obesity risk: age, health problems, family support, police work hours, police stressors, and police support. ANCOVAs compared each behavioral variable across obesity status (non-obese, obese), with the six non-behavioral variables used as covariates. Results revealed that cardiovascular and strength-training physical activity were the only behavioral variables that differed significantly between non-obese and obese police officers. The use of self-reported height and weight values may offer Employee Assistance Programs savings in cost and time and improved officer participation. PMID:24694574
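
    The BMI grouping step is a one-liner; the WHO cutoff of BMI >= 30 for "obese" is an assumption here, since the abstract only states that BMI was used to split the sample:

        def bmi(weight_kg, height_m):
            # Body mass index: BMI = kg / m**2
            return weight_kg / height_m ** 2

        group = "obese" if bmi(98.0, 1.80) >= 30 else "non-obese"
        print(round(bmi(98.0, 1.80), 2), group)  # 30.25 obese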

  17. Clinical Validity, Understandability, and Actionability of Online Cardiovascular Disease Risk Calculators: Systematic Review.

    Science.gov (United States)

    Bonner, Carissa; Fajardo, Michael Anthony; Hui, Samuel; Stubbs, Renee; Trevena, Lyndal

    2018-02-01

    Online health information is particularly important for cardiovascular disease (CVD) prevention, where lifestyle changes are recommended until risk becomes high enough to warrant pharmacological intervention. Online information is abundant, but the quality is often poor and many people do not have adequate health literacy to access, understand, and use it effectively. This project aimed to review and evaluate the suitability of online CVD risk calculators for use by consumers with low health literacy in terms of clinical validity, understandability, and actionability. This systematic review of public websites from August to November 2016 used an evaluation of clinical validity based on a high-risk patient profile and an assessment of understandability and actionability using the Patient Education Material Evaluation Tool for Print Materials. A total of 67 unique webpages and 73 unique CVD risk calculators were identified. The same high-risk patient profile produced widely variable CVD risk estimates, ranging from as little as 3% to as high as 43% risk of a CVD event over the next 10 years. One-quarter (25%) of the risk calculators did not specify what model their estimates were based on. The most common clinical model was Framingham (44%), and most calculators (77%) provided a 10-year CVD risk estimate. The calculators scored moderately on understandability (mean score 64%) and poorly on actionability (mean score 19%). The absolute percentage risk was stated in most (but not all) calculators (79%), and only 18% included graphical formats consistent with recommended risk communication guidelines. There is a plethora of online CVD risk calculators available, but they are not readily understandable and their actionability is poor. Entering the same clinical information produces widely varying results with little explanation. Developers need to address actionability as well as clinical validity and understandability to improve usefulness to consumers with low health literacy.

  18. Computation of Normal Conducting and Superconducting Linear Accelerator (LINAC) Availabilities

    International Nuclear Information System (INIS)

    Haire, M.J.

    2000-01-01

    A brief study was conducted to roughly estimate the availability of a superconducting (SC) linear accelerator (LINAC) as compared to a normal conducting (NC) one. Potentially, SC radio frequency cavities have substantial reserve capability, which allows them to compensate for failed cavities, thus increasing the availability of the overall LINAC. In the initial SC design, there is a klystron and associated equipment (e.g., power supply) for every cavity of an SC LINAC. On the other hand, a single klystron may service eight cavities in the NC LINAC. This study modeled that portion of the Spallation Neutron Source LINAC (between 200 and 1,000 MeV) that is initially proposed for conversion from NC to SC technology. Equipment common to both designs was not evaluated. Tabular fault-tree calculations and event-driven simulation (EDS) computations were performed. The estimated gain in availability when using the SC option ranges from 3 to 13%, depending on equipment conditions and spatial separation requirements. The availability of an NC LINAC is estimated to be 83%. Tabular fault-tree calculations and EDS modeling gave the same 83% answer to within one-tenth of a percent for the NC case. Tabular fault-tree calculations of the availability of the SC LINAC (where a klystron and associated equipment drive a single cavity) give 97%, whereas EDS calculations give 96%, a disagreement of only 1%. This result may be somewhat fortuitous because of limitations of tabular fault-tree calculations; for example, they cannot handle spatial effects (separation distance between failures), equipment network configurations, and some failure combinations. Various equipment configurations were examined with EDS modeling. When there is a klystron and associated equipment for every cavity and adjacent-cavity failure can be tolerated, the SC availability was estimated to be 96%. SC availability decreased as
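
    The reserve-capability argument can be made concrete with textbook reliability formulas; the component counts and availabilities below are illustrative only, not the paper's data:

        from math import comb

        def series_availability(a, n):
            # No redundancy: all n components must work, as when one klystron
            # chain feeds several NC cavities.
            return a ** n

        def k_of_n_availability(a, k, n):
            # Reserve capability: at least k of n identical, independent
            # components must work, as for SC cavities driven one-per-klystron.
            return sum(comb(n, i) * a ** i * (1 - a) ** (n - i)
                       for i in range(k, n + 1))

        print(series_availability(0.999, 96))      # every cavity needed
        print(k_of_n_availability(0.999, 92, 96))  # four failures tolerated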

  19. Genetic variability available through cell fusion

    Energy Technology Data Exchange (ETDEWEB)

    Smith, H.H.; Mastrangelo-Hough, I.A.

    1977-01-01

    Results are reported for the following studies: plant hybridization through protoplast fusion using species of Nicotiana and Petunia; chromosome instability studies on culture-induced chromosome changes and chromosome elimination; chloroplast distribution in parasexual hybrids; chromosomal introgression following fusion; plant-animal fusion; and microcell-mediated chromosome transfer and chromosome-mediated gene transfer. (HLW)

  20. An improved and explicit surrogate variable analysis procedure by coefficient adjustment.

    Science.gov (United States)

    Lee, Seunggeun; Sun, Wei; Wright, Fred A; Zou, Fei

    2017-06-01

    Unobserved environmental, demographic, and technical factors can negatively affect the estimation and testing of the effects of primary variables. Surrogate variable analysis, proposed to tackle this problem, has been widely used in genomic studies. To estimate hidden factors that are correlated with the primary variables, surrogate variable analysis performs principal component analysis either on a subset of features or on all features, but weighting each differently. However, existing approaches may fail to identify hidden factors that are strongly correlated with the primary variables, and the extra step of feature selection and weight calculation makes the theoretical investigation of surrogate variable analysis challenging. In this paper, we propose an improved surrogate variable analysis using all measured features that has a natural connection with restricted least squares, which allows us to study its theoretical properties. Simulation studies and real data analysis show that the method is competitive to state-of-the-art methods.
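
    A minimal residual-PCA sketch of the surrogate variable idea, not the authors' coefficient-adjusted procedure:

        import numpy as np

        def surrogate_variables(Y, X, k):
            # Y: samples x features data matrix; X: samples x covariates design
            # matrix of primary variables; returns k estimated hidden factors.
            beta, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit primary effects
            R = Y - X @ beta                              # residualize
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            return U[:, :k]  # top left singular vectors = surrogate variables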

  1. Calculation of paleohydraulic parameters of a fluvial system under spatially variable subsidence: the Ericson Sandstone, southwestern Wyoming

    Science.gov (United States)

    Snyder, H.; Leva-Lopez, J.

    2017-12-01

    During the late Campanian age in North America, fluvial systems drained the highlands of the Sevier orogenic belt and travelled east towards the Western Interior Seaway. One such system deposited the Canyon Creek Member (CCM) of the Ericson Formation in southwestern Wyoming. At this time the fluvial system was partially controlled by laterally variable subsidence caused by incipient Laramide uplifts. These uplifts were not yet real topographic features but merely areas of reduced subsidence at the time of deposition of the CCM. Their surface expression must have been minimal, producing only minute changes in slope and accommodation. Outcrops around these Laramide structures, in particular both flanks of the Rock Springs Uplift, the western side of the Rawlins Uplift and the north flank of the Uinta Mountains, have been sampled to study the petrography, grain size, roundness and sorting of the CCM, which along with cross-bed thickness and bar thickness allowed calculation of the hydraulic parameters of the rivers that deposited the CCM. This study reveals how the fluvial system evolved and responded to very small changes in subsidence and slope. Furthermore, the petrography will shed light on the provenance of these sandstones and on the relative importance of Sevier sources versus Laramide sources. This work is framed within a larger study that shows how incipient Laramide structural highs modified the behavior, style and architecture of the fluvial system, affecting its thickness, facies characteristics and net-to-gross both down-dip and along strike across the basin.

  2. The effect of rock electrical parameters on the calculation of reservoir saturation

    International Nuclear Information System (INIS)

    Li, Xiongyan; Qin, Ruibao; Liu, Chuncheng; Mao, Zhiqiang

    2013-01-01

    The error in calculating reservoir saturation caused by errors in the cementation exponent, m, and the saturation exponent, n, needs to be analysed, and the influence of m and n on the calculated saturation discussed. Based on the Archie formula, the effect of the variables m and n on the reservoir saturation is analysed, a formula for the error in the calculated saturation caused by errors in m and n is deduced, and the main factors affecting that error are illustrated. According to the physical meaning of m and n, they can be interpreted as two independent parameters, i.e., there is no connection between m and n. When m and n have the same error, their impacts on the calculated saturation can be compared. Therefore, for errors in m and n of 0.2, 0.4 and 0.6 respectively, the distribution range of the resulting errors in calculated saturation is analysed. In most cases, however, the error of m and n is about 0.2. When the error in m is 0.2, the error in calculating the reservoir saturation ranges from 0% to 35%, whereas when the error in n is 0.2, the error in calculating the reservoir saturation is almost always below 5%. On the basis of loose sandstone, medium sandstone, tight sandstone, conglomerate, tuff, breccia, basalt, andesite, dacite and rhyolite, this paper first analyses the distribution range and change amplitude of m and n, and second elaborates the impact of m and n on the calculation of reservoir saturation. For each lithology, the distribution range and change amplitude of m are greater than those of n; therefore, compared with n, the effect of m on the reservoir saturation is stronger. The influence of m and n on the reservoir saturation is determined, and the error in calculating the reservoir saturation caused by errors in m and n is quantified. This is theoretically and practically significant for
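
    The comparison is easy to reproduce with the Archie equation itself; the petrophysical inputs below are illustrative:

        def archie_sw(rw, rt, phi, m, n, a=1.0):
            # Water saturation: Sw = (a * Rw / (phi**m * Rt)) ** (1/n)
            return (a * rw / (phi ** m * rt)) ** (1.0 / n)

        # Perturbing m and n by the same 0.2 shows their relative influence.
        base = archie_sw(rw=0.05, rt=20.0, phi=0.20, m=2.0, n=2.0)
        dm = archie_sw(rw=0.05, rt=20.0, phi=0.20, m=2.2, n=2.0) - base
        dn = archie_sw(rw=0.05, rt=20.0, phi=0.20, m=2.0, n=2.2) - base
        print(base, dm, dn)  # the shift from m exceeds the shift from n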

  3. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as the example. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved very important for achieving cost efficiency of the research. [Projects of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]
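
    A standard pilot-study formula consistent with this approach (the paper's exact procedure is not reproduced here) computes the required size from the pilot sample's coefficient of variation:

        import math
        from scipy import stats

        def required_n(sd, mean, rel_error=0.10, conf=0.95, n_pilot=10):
            # Smallest n whose mean lies within rel_error of the population
            # mean at the given confidence, from a pilot of n_pilot units.
            t = stats.t.ppf(1 - (1 - conf) / 2, df=n_pilot - 1)
            cv = sd / mean
            return math.ceil((t * cv / rel_error) ** 2)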

  4. State-of-the-art for multiconfiguration Dirac-Fock calculations

    International Nuclear Information System (INIS)

    Desclaux, J.P.

    1981-01-01

    The approximations involved in almost all relativistic calculations are analyzed, and the multiconfiguration Dirac-Fock (MCDF) method, one of the most advanced methods available for carrying out high-quality atomic calculations of bound states, is discussed

  5. Automatic Probabilistic Program Verification through Random Variable Abstraction

    Directory of Open Access Journals (Sweden)

    Damián Barsotti

    2010-06-01

    Full Text Available The weakest pre-expectation calculus has been proved to be a mature theory for analyzing quantitative properties of probabilistic and nondeterministic programs. We present an automatic method for proving quantitative linear properties on any denumerable state space using iterative backwards fixed point calculation in the general framework of abstract interpretation. In order to accomplish this task we present the technique of random variable abstraction (RVA), and we also postulate a sufficient condition for achieving exact fixed point computation in the abstract domain. The feasibility of our approach is shown with two examples, one obtaining the expected running time of a probabilistic program, and the other the expected gain of a gambling strategy. Our method works on general guarded probabilistic and nondeterministic transition systems instead of plain pGCL programs, allowing us to easily model a wide range of systems, including distributed ones and unstructured programs. We present the operational and weakest precondition semantics for these programs and prove their equivalence.

  6. Sensitivity analysis on uncertainty variables affecting the NPP's LUEC with probabilistic approach

    International Nuclear Information System (INIS)

    Nuryanti; Akhmad Hidayatno; Erlinda Muslim

    2013-01-01

    One thing that is quite crucial to review prior to any investment decision on a nuclear power plant (NPP) project is the calculation of project economics, including the Levelized Unit Electricity Cost (LUEC). Infrastructure projects such as NPP projects are vulnerable to a number of uncertainty variables. Information on the uncertainty variables to which the LUEC is most sensitive is necessary so that cost overruns can be avoided. Therefore this study aimed to perform a sensitivity analysis on the variables that affect LUEC with a probabilistic approach. The analysis was done using a Monte Carlo technique that simulates the relationship between the uncertainty variables and their visible impact on LUEC. The sensitivity analysis results show significant changes in the LUEC value of the AP1000 and OPR due to the sensitivity of investment cost and capacity factors, while LUEC changes due to the sensitivity of the U3O8 price look less significant. (author)
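
    A Monte Carlo sensitivity sketch in the spirit of the study; the input distributions, the simplified LUEC formula and the rank-correlation sensitivity measure are all assumptions made for illustration:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 50_000
        capex = rng.triangular(4000, 5000, 7000, N)  # $/kW overnight cost
        cf = rng.uniform(0.80, 0.93, N)              # capacity factor
        fuel = rng.normal(7.0, 1.5, N)               # $/MWh fuel cost

        crf = 0.08  # capital recovery factor (assumed)
        luec = capex * crf * 1000 / (cf * 8760) + fuel  # $/MWh, simplified

        def ranks(x):
            return np.argsort(np.argsort(x))

        for name, x in [("capex", capex), ("cf", cf), ("fuel", fuel)]:
            r = np.corrcoef(ranks(x), ranks(luec))[0, 1]  # Spearman rho
            print(name, round(r, 2))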

  7. Calculating lattice thermal conductivity: a synopsis

    Science.gov (United States)

    Fugallo, Giorgia; Colombo, Luciano

    2018-04-01

    We provide a tutorial introduction to the modern theoretical and computational schemes available to calculate the lattice thermal conductivity in a crystalline dielectric material. While some important topics in thermal transport will not be covered (including thermal boundary resistance, electronic thermal conduction, and thermal rectification), we aim at: (i) framing the calculation of thermal conductivity within the general non-equilibrium thermodynamics theory of transport coefficients, (ii) presenting the microscopic theory of thermal conduction based on the phonon picture and the Boltzmann transport equation, and (iii) outlining the molecular dynamics schemes to calculate heat transport. A comparative and critical addressing of the merits and drawbacks of each approach will be discussed as well.

  8. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    Science.gov (United States)

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
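
    The paper's Excel macros are not reproduced here, but equivalent schedule values can be generated in a few lines of Python under common operational definitions: a uniform spread around the mean for variable-ratio, and a constant hazard (exponential intervals) for random-interval:

        import random

        def variable_ratio(mean_ratio, n, spread=0.5):
            # n response requirements spread uniformly around mean_ratio.
            lo = max(1, round(mean_ratio * (1 - spread)))
            hi = round(mean_ratio * (1 + spread))
            return [random.randint(lo, hi) for _ in range(n)]

        def random_interval(mean_interval, n):
            # Constant probability per unit time => exponential intervals.
            return [random.expovariate(1.0 / mean_interval) for _ in range(n)]

        print(variable_ratio(5, 10))      # e.g. a VR 5 schedule
        print(random_interval(30.0, 10))  # e.g. an RI 30 s schedule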

  9. eDrugCalc: an online self-assessment package to enhance medical students' drug dose calculation skills.

    Science.gov (United States)

    McQueen, Daniel S; Begg, Michael J; Maxwell, Simon R J

    2010-10-01

    Dose calculation errors can cause serious life-threatening clinical incidents. We designed eDrugCalc as an online self-assessment tool to develop and evaluate calculation skills among medical students. We undertook a prospective uncontrolled study involving 1727 medical students in years 1-5 at the University of Edinburgh. Students had continuous access to eDrugCalc and were encouraged to practise. Voluntary self-assessment was undertaken by answering the 20 questions on six occasions over 30 months. Questions remained fixed but numerical variables changed, so each visit required a fresh calculation. Feedback was provided following each answer. Final-year students had a significantly higher mean score in test 6 compared with test 1 [16.6, 95% confidence interval (CI) 16.2, 17.0 vs. 12.6, 95% CI 11.9, 13.4; n = 173]. Scores remained variable in all tests, with 2.7% of final-year students scoring poorly in test 6; the findings suggest students benefit from a formative dose-calculation package and encouragement to develop their numeracy. Further research is required to establish whether eDrugCalc reduces calculation errors made in clinical practice. © 2010 The Authors. British Journal of Clinical Pharmacology © 2010 The British Pharmacological Society.

  10. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
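
    The effect of the SD choice is easy to see with the textbook normal-approximation formula for a two-sample comparison (the paper's own simulation is not reproduced here):

        import math
        from scipy import stats

        def n_per_group(sd, delta, alpha=0.05, power=0.80):
            # Sample size per group to detect a mean difference delta.
            za = stats.norm.ppf(1 - alpha / 2)
            zb = stats.norm.ppf(power)
            return math.ceil(2 * ((za + zb) * sd / delta) ** 2)

        # Substituting an upper-percentile SD for the sample SD guards
        # against underpowering (values illustrative):
        print(n_per_group(sd=12.0, delta=5.0))  # pilot-sample SD
        print(n_per_group(sd=15.0, delta=5.0))  # e.g. 80th-percentile SD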

  11. Calculation of rf fields in axisymmetric cavities

    International Nuclear Information System (INIS)

    Iwashita, Y.

    1985-01-01

    A new code, PISCES, has been developed for calculating a complete set of rf electromagnetic modes in an axisymmetric cavity. The finite-element method is used with up to third-order shape functions. Although two components are enough to express these modes, three components are used as unknown variables to take advantage of the symmetry of the element matrix. The unknowns are taken to be either the electric field components or the magnetic field components. The zero-divergence condition will be satisfied by the shape function within each element

  12. THE EFFECTS OF CLIMATIC VARIABLES AND CROP AREA ON MAIZE YIELD AND VARIABILITY IN GHANA

    Directory of Open Access Journals (Sweden)

    Henry De-Graft Acquah

    2012-10-01

    Full Text Available Climate change tends to have negative effects on crop yield through its influence on crop production. Understanding the relationship between climatic variables and crop area on the mean and variance of crop yield will facilitate development of appropriate policies to cope with climate change. This paper examines the effects of climatic variables and crop area on the mean and variance of maize yield in Ghana. The Just and Pope stochastic production function using the Cobb-Douglas functional form was employed. The results show that average maize yield is positively related to crop area and negatively related to rainfall and temperature. Furthermore, increase in crop area and temperature will enlarge maize yield variability while rainfall increase will decrease the variability in maize yield.

  13. The generalized successive approximation and Padé approximants method for solving an elasticity problem on an elastic foundation with variable coefficients

    Directory of Open Access Journals (Sweden)

    Mustafa Bayram

    2017-01-01

    Full Text Available In this study, we have applied a generalized successive numerical technique to solve an elasticity problem on an elastic foundation with variable coefficients. In the first stage, we calculate the generalized successive approximation of the given boundary value problem, and in the second stage we transform it into a Padé series. At the end of the study a test problem is given to clarify the method.

  14. Geographic variability of fatal road traffic injuries in Spain during the period 2002–2004: an ecological study

    Directory of Open Access Journals (Sweden)

    Jimenez-Puente Alberto

    2007-09-01

    Full Text Available Abstract Background The aim of the present study is to describe the inter-province variability of Road Traffic Injury (RTI mortality on Spanish roads, adjusted for vehicle-kilometres travelled, and to assess the possible role played by the following explicative variables: sociodemographic, structural, climatic and risk conducts. Methods An ecological study design was employed. The mean annual rate of RTI deaths was calculated for the period 2002–2004, adjusted for vehicle-kilometres travelled, in the 50 provinces of Spain. The RTI death rate was related with the independent variables described above, using simple and multiple linear regression analysis with backward step-wise elimination. The level of statistical significance was taken as p < 0.05. Results In the period 2002–2004 there were 12,756 RTI deaths in Spain (an average of 4,242 per year, SD = 356.6). The mean number of deaths due to RTI per 100 million vehicle-kilometres (mvk) travelled was 1.76 (SD = 0.51), with a minimum value of 0.66 (in Santa Cruz de Tenerife) and a maximum of 3.31 (in the province of Lugo). All other variables being equal, a higher proportion of kilometres available on high capacity roads and a higher cultural and education level were associated with lower death rates due to RTI, while the opposite was true for the rate of alcohol consumers and the road traffic volume of heavy vehicles. The variables included in the model accounted for 55.4% of the variability in RTI mortality. Conclusion Adjusting RTI mortality rates for the number of vehicle-kilometres travelled enables us to identify the high variability of this cause of death, and its relation with risk factors other than those inherent to human behaviour, such as the type of roads and the type of vehicles using them.

  15. Calculated apparent yields of rare gas fission products

    International Nuclear Information System (INIS)

    Delucchi, A.A.

    1975-01-01

    The apparent fission yield of the rare gas fission products from four mass chains is calculated as a function of separation time for six different fissioning systems. A plot of the calculated fission yield along with a one standard deviation error band is given for each rare gas fission product and for each fissioning system. Those parameters in the calculation that were major contributors to the calculated standard deviation at each separation time were identified and the results presented on a separate plot. To extend the usefulness of these calculations as new and better values for the input parameters become available, a third plot was generated for each system which shows how sensitive the derived fission yield is to a change in any given parameter used in the calculation. (U.S.)

  16. Review and classification of variability analysis techniques with clinical applications

    Science.gov (United States)

    2011-01-01

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis. PMID:21985357

  17. Review and classification of variability analysis techniques with clinical applications.

    Science.gov (United States)

    Bravi, Andrea; Longtin, André; Seely, Andrew J E

    2011-10-10

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis.

  18. Variability of the Lyman alpha flux with solar activity

    International Nuclear Information System (INIS)

    Lean, J.L.; Skumanich, A.

    1983-01-01

    A three-component model of the solar chromosphere, developed from ground-based observations of the Ca II K chromospheric emission, is used to calculate the variability of the Lyman alpha flux between 1969 and 1980. The Lyman alpha flux at solar minimum is required in the model and is taken as 2.32 x 10^11 photons/cm^2/s. This value occurred during 1975 as well as in 1976 near the commencement of solar cycle 21. The model predicts that the Lyman alpha flux increases to as much as 5 x 10^11 photons/cm^2/s at the maximum of the solar cycle. The ratio of the average fluxes for December 1979 (cycle maximum) and July 1976 (cycle minimum) is 1.9. During solar maximum the 27-day solar rotation is shown to cause the Lyman alpha flux to vary by as much as 40% or as little as 5%. The model also shows that the Lyman alpha flux varies over intermediate time periods of 2 to 3 years, as well as over the 11-year sunspot cycle. We conclude that, unlike the sunspot number and the 10.7-cm radio flux, the Lyman alpha flux had a variability that was approximately the same during each of the past three cycles. Lyman alpha fluxes calculated by the model are consistent with measurements of the Lyman alpha flux made by 11 of a total of 14 rocket experiments conducted during the period 1969-1980. The model explains satisfactorily the absolute magnitude, long-term trends, and the cycle variability seen in the Lyman alpha irradiances by the OSO 5 satellite experiment. The 27-day variability observed by the AE-E satellite experiment is well reproduced. However, the magnitude of the AE-E Lyman alpha irradiances is higher than the model calculations by between 40% and 80%. We suggest that the assumed calibration of the AE-E irradiances is in error

  19. Calculation of the thermodynamic properties of liquid Ag–In–Sb alloys

    Directory of Open Access Journals (Sweden)

    DRAGANA ZIVKOVIC

    2006-03-01

    Full Text Available The results of calculations of the thermodynamic properties of liquid Ag–In–Sb alloys are presented in this paper. The Redlich–Kister–Muggianu model was used for the calculations. Based on known thermodynamic data for constitutive binary systems and available experimental data for the investigated ternary system, the ternary interaction parameter for the liquid phase in the temperature range 1000–1200 K was determined. Comparison between experimental and calculated results showed their good mutual agreement.
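
    A sketch of the model's excess Gibbs energy; the Muggianu-style ternary term shown is one common convention, and the Ag-In-Sb parameter values determined in the paper are not reproduced:

        def gibbs_excess(x, L_bin, L_tern=(0.0, 0.0, 0.0)):
            # x: mole fractions (x1, x2, x3) summing to 1
            # L_bin: {(i, j): [L0, L1, ...]} Redlich-Kister coefficients
            # L_tern: ternary interaction parameters
            g = 0.0
            for (i, j), Ls in L_bin.items():
                for k, L in enumerate(Ls):
                    g += x[i] * x[j] * L * (x[i] - x[j]) ** k
            g += x[0] * x[1] * x[2] * (x[0] * L_tern[0]
                                       + x[1] * L_tern[1]
                                       + x[2] * L_tern[2])
            return g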

  20. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    International Nuclear Information System (INIS)

    Yao, W; Farr, J

    2015-01-01

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following were found to be critical in our RW model: (1) the inverse-square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations

  1. Ti-84 Plus graphing calculator for dummies

    CERN Document Server

    McCalla

    2013-01-01

    Get up to speed on the functionality of your TI-84 Plus calculator. Completely revised to cover the latest updates to the TI-84 Plus calculators, this bestselling guide will help you become the most savvy TI-84 Plus user in the classroom! Exploring the standard device, the updated device with USB plug and upgraded memory (the TI-84 Plus Silver Edition), and the upcoming color screen device, this book provides you with clear, understandable coverage of the TI-84's updated operating system. Details the new apps that are available for download to the calculator via the USB cable

  2. Quality of care and variability in lung cancer management across Belgian hospitals: a population-based study using routinely available data.

    Science.gov (United States)

    Vrijens, France; De Gendt, Cindy; Verleye, Leen; Robays, Jo; Schillemans, Viki; Camberlin, Cécile; Stordeur, Sabine; Dubois, Cécile; Van Eycken, Elisabeth; Wauters, Isabelle; Van Meerbeeck, Jan P

    2018-05-01

    To evaluate the quality of care for all patients diagnosed with lung cancer in Belgium based on a set of evidence-based quality indicators, and to study the variability of care between hospitals. A retrospective study based on linked data from the cancer registry, insurance claims and vital status for all patients diagnosed with lung cancer between 2010 and 2011. Evidence-based quality indicators were identified from a systematic literature search. A specific algorithm to attribute patients to a centre was developed, and funnel plots were used to assess variability of care between centres. None. The proportion of patients who received appropriate care as defined by the indicator; secondary outcomes included the variability of care between centres. Twenty indicators were measured for a total of 12 839 patients. Good results were achieved for 60-day post-surgical mortality (3.9%), histopathological confirmation of diagnosis (93%) and use of PET-CT before treatment with curative intent (94%). Areas to be improved include the reporting of staging information to the Belgian Cancer Registry (80%), the use of brain imaging for clinical stage III patients eligible for curative treatment (79%), and the time between diagnosis and start of first active treatment (median 20 days). High variability between centres was observed for several indicators. Twenty-three indicators were found relevant but could not be measured. This study highlights the feasibility of developing a multidisciplinary set of quality indicators using population-based data. The main advantage of this approach is that no additional registration is required, but the non-measurability of many relevant indicators is a drawback. It does, however, make it easy to point to areas of large variability in care.

  3. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    International Nuclear Information System (INIS)

    Yu, Dequan; Cong, Shu-Lin; Sun, Zhigang

    2015-01-01

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also retains its advantages for treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method

  4. An improved Lobatto discrete variable representation by a phase optimisation and variable mapping method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Dequan [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Cong, Shu-Lin, E-mail: shlcong@dlut.edu.cn [School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024 (China); Sun, Zhigang, E-mail: zsun@dicp.ac.cn [State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical and Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Science, Dalian 116023 (China); Center for Advanced Chemical Physics and 2011 Frontier Center for Quantum Science and Technology, University of Science and Technology of China, 96 Jinzhai Road, Hefei 230026 (China)

    2015-09-08

    Highlights: • An optimised finite element discrete variable representation method is proposed. • The method is tested by solving one- and two-dimensional Schrödinger equations. • The method is quite efficient in solving the molecular Schrödinger equation. • It is very easy to generalise the method to multidimensional problems. - Abstract: The Lobatto discrete variable representation (LDVR) proposed by Manolopoulos and Wyatt (1988) has unique features but has not been generally applied in the field of chemical dynamics. Instead, it has found popular application in solving atomic physics problems in combination with the finite element method (FE-DVR), owing to its inherent ability to treat the Coulomb singularity in spherical coordinates. In this work, an efficient phase optimisation and variable mapping procedure is proposed to improve the grid efficiency of the LDVR/FE-DVR method, which makes it not only competitive with popular DVR methods, such as the Sinc-DVR, but also retains its advantages for treating the Coulomb singularity. The method is illustrated by calculations for a one-dimensional Coulomb potential and the vibrational states of a one-dimensional Morse potential, a two-dimensional Morse potential and a two-dimensional Henon–Heiles potential, which prove the efficiency of the proposed scheme and promise more general applications of the LDVR/FE-DVR method.

  5. Evaluation of students' knowledge about paediatric dosage calculations.

    Science.gov (United States)

    Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak

    2018-01-01

    Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated based on the child's age and weight, the risk of error in dosage calculations is increased. In paediatric patients, an overdose prescribed without regard to the child's weight, age and clinical picture may lead to excessive toxicity and mortality, while low doses may delay treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covered all 148 third-year students of the bachelor's degree programme in May 2015. Drug dose calculation questions from exam papers, comprising 3 open-ended dosage calculation problems addressing 5 variables, were distributed to the students and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and which ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it incorrectly and 3.4% left it blank. 69.6% of the students were successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to ml/dzy calculation were correct. Moreover, the students' skills in the four basic arithmetic operations were assessed, and 68.2% of the students were determined to have found the correct answer. When the relation among the questions on medication was examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimal). This result means that, as dosage calculations are based on decimal values, calculations may be tenfold in error when the decimal point is placed wrongly. Moreover, it
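
    For reference, the weight-based ratio/proportion calculation being examined has the following form (drug, dose and concentration values are hypothetical):

        def dose_volume_ml(weight_kg, dose_mg_per_kg, conc_mg_per_ml):
            # mg/kg dosing converted to the volume to administer.
            dose_mg = weight_kg * dose_mg_per_kg
            return dose_mg / conc_mg_per_ml

        # 15 mg/kg for an 18 kg child, suspension of 120 mg per 5 ml:
        print(dose_volume_ml(18, 15, 120 / 5))  # -> 11.25 ml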

  6. VARIABILITY IN PHENOTYPIC EXPRESSION OF SEED QUALITY TRAITS IN SOYBEAN GERMPLASM

    Directory of Open Access Journals (Sweden)

    Aleksandra Sudarić

    2017-01-01

    Full Text Available The aim of this research was to determine the genetic variability of chosen soybean lines in seed quality by determining diversity in the phenotypic expression of 1000-seed weight, as well as protein and oil concentrations in the seed. Field trials were set up in a randomized complete block design with two replications at the Agricultural Institute Osijek during three growing seasons (2010-2012). Each year, after harvest, 1000-seed weight and protein and oil concentrations in the seed were determined. Statistical analyses of the results included calculating basic measures of variation and analysis of variance. The analyzed data showed the existence of diversity in the plant material's phenotypic expression of the investigated seed quality traits, as well as the existence of statistically significant genotype and year effects.

  7. Lexical and phonological variability in preschool children with speech sound disorder.

    Science.gov (United States)

    Macrae, Toby; Tyler, Ann A; Lewis, Kerry E

    2014-02-01

    The authors of this study examined relationships between measures of word and speech error variability and between these and other speech and language measures in preschool children with speech sound disorder (SSD). In this correlational study, 18 preschool children with SSD, age-appropriate receptive vocabulary, and normal oral motor functioning and hearing were assessed across 2 sessions. Experimental measures included word and speech error variability, receptive vocabulary, nonword repetition (NWR), and expressive language. Pearson product–moment correlation coefficients were calculated among the experimental measures. The correlation between word and speech error variability was slight and nonsignificant. The correlation between word variability and receptive vocabulary was moderate and negative, although nonsignificant. High word variability was associated with small receptive vocabularies. The correlations between speech error variability and NWR and between speech error variability and the mean length of children's utterances were moderate and negative, although both were nonsignificant. High speech error variability was associated with poor NWR and language scores. High word variability may reflect unstable lexical representations, whereas high speech error variability may reflect indistinct phonological representations. Preschool children with SSD who show abnormally high levels of different types of speech variability may require slightly different approaches to intervention.

  8. Calculation of Dancoff correction for cylindrical cells including void

    International Nuclear Information System (INIS)

    Lima, C.P.B.; Martinez, A.S.

    1989-01-01

    This paper presents a method developed for the calculation of an analytical expression for the Dancoff correction for fuel rods surrounded by air gaps. The Dancoff correction plays an important role in the calculation of the multigroup constants. The approximate expression obtained for the Dancoff correction may be used in the available methods for multigroup constants calculation, given its simple and precise form. (author)

  9. Validity of (Ultra-)Short Recordings for Heart Rate Variability Measurements

    NARCIS (Netherlands)

    Muñoz Venegas, Loretto; van Roon, Arie; Riese, Harriette; Thio, Chris; Oostenbroek, Emma; Westrik, Iris; de Geus, Eco J. C.; Gansevoort, Ron; Lefrandt, Joop; Nolte, Ilja M.; Snieder, Harold

    2015-01-01

    Objectives In order to investigate the applicability of routine 10s electrocardiogram (ECG) recordings for time-domain heart rate variability (HRV) calculation we explored to what extent these (ultra-)short recordings capture the "actual" HRV. Methods The standard deviation of normal-to-normal
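
    The record is cut off at the definition of SDNN, the standard deviation of normal-to-normal intervals, which is the time-domain statistic in question; a minimal computation:

        import numpy as np

        def sdnn(nn_ms):
            # Standard deviation (ms) of the normal-to-normal intervals.
            return float(np.std(np.asarray(nn_ms), ddof=1))

        # Roughly ten beats fit in a routine 10 s ECG recording:
        print(sdnn([812, 790, 845, 828, 805, 861, 798, 834, 819, 807]))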

  10. Simulating variable source problems via post processing of individual particle tallies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source
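
    A sketch of the post-processing idea: if each recorded tally keeps the source coordinates it was born with, changing the source distribution only changes the per-particle weights. The array layout and pdf callables here are hypothetical:

        import numpy as np

        def rescore(tallies, p_new, p_old):
            # tallies: array of shape (N, 3), columns = (energy, angle, score)
            # p_new, p_old: vectorized source pdfs over (energy, angle)
            E, mu, score = tallies[:, 0], tallies[:, 1], tallies[:, 2]
            w = p_new(E, mu) / p_old(E, mu)  # ratio of new to old source pdf
            return np.mean(w * score)        # re-weighted tally estimate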

  11. Climate change and water availability for vulnerable agriculture

    Science.gov (United States)

    Dalezios, Nicolas; Tarquis, Ana Maria

    2017-04-01

    Climatic projections for the Mediterranean basin indicate that the area will suffer a decrease in water resources due to climate change. The key climatic trends identified for the Mediterranean region are continuous temperature increase, further drying with precipitation decrease and the accentuation of climate extremes, such as droughts, heat waves and/or forest fires, which are expected to have a profound effect on agriculture. Indeed, the impact of climate variability on agricultural production is important at local, regional, national, as well as global scales. Agriculture of any kind is strongly influenced by the availability of water. Climate change will modify rainfall, evaporation, runoff, and soil moisture storage patterns. Changes in total seasonal precipitation or in its pattern of variability are both important. Similarly, with higher temperatures, the water-holding capacity of the atmosphere and evaporation into the atmosphere increase, and this favors increased climate variability, with more intense precipitation and more droughts. As a result, crop yields are affected by variations in climatic factors, such as air temperature and precipitation, and the frequency and severity of the above mentioned extreme events. The aim of this work is to briefly present the main effects of climate change and variability on water resources with respect to water availability for vulnerable agriculture, namely in the Mediterranean region. Results of undertaken studies in Greece on precipitation patterns and drought assessment using historical data records are presented. Based on precipitation frequency analysis, evidence of precipitation reductions is shown. Drought is assessed through an agricultural drought index, namely the Vegetation Health Index (VHI), in Thessaly, a drought-prone region in central Greece. The results justify the importance of water availability for vulnerable agriculture and the need for drought monitoring in the Mediterranean basin as part of

  12. Daylight calculations using constant luminance curves

    Energy Technology Data Exchange (ETDEWEB)

    Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda

    2005-02-01

    This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed at the Atmospheric Sciences Research Center (ASRC) of the State University of New York by Richard Perez et al. Working with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis for establishing conclusions concerning topics related to energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with those of the method that uses the daylight factor. (author)

  13. Statistical calculation of hot channel factors

    International Nuclear Information System (INIS)

    Farhadi, K.

    2007-01-01

    It is conventional practice in the design of nuclear reactors to introduce hot channel factors to allow for spatial variations of power generation and flow distribution. Consequently, it is not enough to be able to calculate the nominal temperature distributions of the fuel element, cladding, coolant, and central fuel. Indeed, one must be able to calculate the probability that the imposed temperature or heat flux limits anywhere in the core are not exceeded. In this paper, statistical methods are used to calculate hot channel factors for the particular case of a heterogeneous Materials Testing Reactor (MTR), and the results obtained from different statistical methods are compared. It is shown that among the statistical methods available, the semi-statistical method is the most reliable one
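
    As an illustration of the flavor of such methods (not the paper's exact semi-statistical formulation), a deterministic combination multiplies the subfactors, whereas a statistical combination adds independent uncertainties in quadrature:

        import math

        def cumulative_factor(subfactors):
            # Deterministic: every subfactor at its worst simultaneously.
            prod = 1.0
            for f in subfactors:
                prod *= f
            return prod

        def statistical_factor(subfactors):
            # Statistical: independent deviations combined in quadrature.
            return 1.0 + math.sqrt(sum((f - 1.0) ** 2 for f in subfactors))

        hcf = [1.05, 1.08, 1.03]
        print(cumulative_factor(hcf), statistical_factor(hcf))  # ~1.17 vs ~1.10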

  14. Automatic calculations of electroweak processes

    International Nuclear Information System (INIS)

    Ishikawa, T.; Kawabata, S.; Kurihara, Y.; Shimizu, Y.; Kaneko, T.; Kato, K.; Tanaka, H.

    1996-01-01

    The GRACE system is an excellent tool for automatically calculating the cross section of, and generating events for, an elementary process. However, it is not always easy for beginners to use. An interactive version of GRACE is being developed so as to be a user-friendly system. Since it works in exactly the same environment as PAW, all functions of PAW are available for handling any histogram information produced by GRACE. As an application, the cross sections of all elementary processes with up to 5-body final states induced by the e+e- interaction are to be calculated and summarized as a catalogue. (author)

  15. Calculation of electron-helium scattering

    International Nuclear Information System (INIS)

    Fursa, D.V.; Bray, I.

    1994-11-01

    We present the convergent close-coupling (CCC) theory for the calculation of electron-helium scattering and demonstrate its applicability at projectile energies of 1.5 to 500 eV for scattering from the ground state to n ≤ 3 states. Excellent agreement with experiment is obtained for the available differential, integrated, ionization, and total cross sections, as well as for the electron-impact coherence parameters up to and including the 3³D state excitation. Comparison with other theories demonstrates that the CCC theory is the only generally reliable method for the calculation of electron-helium scattering. (authors). 66 refs., 2 tabs., 24 figs

  16. Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication

    Directory of Open Access Journals (Sweden)

    Chien-Sheng Chen

    2015-01-01

    Full Text Available To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning determination error, the subset of measurement units with the smallest GDOP is usually chosen for positioning. The conventional GDOP calculation using the matrix inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit is different, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for the WGDOP calculation is proposed for the case when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication that is easy for hardware implementation is proposed. The proposed method can be used when more than four measurements, not just exactly four, are available. Even when using the all-in-view method for positioning, the proposed method can still reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with the global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems.
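
    For reference, the quantity being computed is WGDOP = sqrt(trace((H^T W H)^-1)); the closed-form matrix-multiplication solution of the paper is not reproduced here:

        import numpy as np

        def wgdop(H, W):
            # H: geometry (design) matrix; W: measurement weight matrix.
            M = H.T @ W @ H
            return float(np.sqrt(np.trace(np.linalg.inv(M))))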

  17. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation where source variability is included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
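
    A minimal version of the RS strategy for the two-source, one-marker case; the signature values and their distributions are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        mu = np.array([-27.0, -20.0])  # hypothetical source signatures
        sd = np.array([1.0, 1.5])      # their variabilities
        observed = -24.0               # signature of the mixed sample

        # Draw source signatures from their distributions, solve the linear
        # mixing equation for each draw, and keep physical fractions only.
        draws = rng.normal(mu, sd, size=(10_000, 2))
        f1 = (observed - draws[:, 1]) / (draws[:, 0] - draws[:, 1])
        f1 = f1[(f1 >= 0.0) & (f1 <= 1.0)]
        print(f1.mean(), np.percentile(f1, [2.5, 97.5]))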

  18. Prediction of university students’ addictability based on some demographic variables, academic procrastination, and interpersonal variables

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Tavakoli

    2014-02-01

    Full Text Available Objectives: This study aimed to predict addictability among students based on demographic variables, academic procrastination, and interpersonal variables, and also to study the prevalence of addictability among these students. Method: The participants were 500 students (260 females, 240 males) selected through stratified random sampling from the students of Islamic Azad University, Abadan Branch. The participants were assessed using an individual specification inventory, the Addiction Potential Scale, and the Aitken Procrastination Inventory. Findings: The findings showed that 23.6% of students were prone to addiction. Men showed higher addictability than women, but age was not a significant factor. Variables such as economic status, age, major, and academic procrastination predicted 13%, and among the interpersonal variables, having friends who use drugs and a dissociated family predicted 13.2% of the variance in addictability. Conclusion: This study has applied implications for addiction prevention.

  19. Achieving High Accuracy in Calculations of NMR Parameters

    DEFF Research Database (Denmark)

    Faber, Rasmus

    quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that make accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical......, but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show...... vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction...

  20. Inverse kinematics for the variable geometry truss manipulator via a Lagrangian dual method

    Directory of Open Access Journals (Sweden)

    Yanchun Zhao

    2016-11-01

    Full Text Available This article studies the inverse kinematics problem of the variable geometry truss manipulator. The problem is cast as an optimization process which can be divided into two steps. Firstly, according to the information about the location of the end effector and fixed base, an optimal center curve and the corresponding distribution of the intermediate platforms along this center line are generated. This procedure is implemented by solving a non-convex optimization problem that has a quadratic objective function subject to quadratic constraints. Then, in accordance with the distribution of the intermediate platforms along the optimal center curve, all lengths of the actuators are calculated via the inverse kinematics of each variable geometry truss module. Hence, the approach that we present is an optimization procedure that attempts to generate the optimal intermediate platform distribution along the optimal central curve, while the performance index and kinematic constraints are satisfied. By using the Lagrangian duality theory, a closed-form optimal solution of the original optimization is given. The numerical simulation substantiates the effectiveness of the introduced approach.

  1. Consequences of Neglecting the Interannual Variability of the Solar Resource: A Case Study of Photovoltaic Power Among the Hawaiian Islands

    Energy Technology Data Exchange (ETDEWEB)

    Brancucci Martinez-Anido, Carlo [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bryce, Richard [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Losada Carreno, Ignacio [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Kumler, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Roberts, Billy J [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-05

    The interannual variability of solar irradiance and meteorological conditions is often ignored in favor of single-year data sets for modeling power generation and evaluating the economic value of photovoltaic (PV) power systems. Yet interannual variability significantly impacts the year-to-year generation of renewable power systems such as wind and PV. Consequently, the interannual variability of power generation translates into interannual variability of the returns on invested capital. The penetration of PV systems within the Hawaiian Electric Companies' portfolio has rapidly accelerated in recent years and is expected to continue to increase given the state's energy objectives laid out by the Hawaii Clean Energy Initiative. We use the National Solar Radiation Database (1998-2015) to characterize the interannual variability of solar irradiance and meteorological conditions across the State of Hawaii. These data sets are passed to the National Renewable Energy Laboratory's System Advisor Model (SAM) to produce an 18-year PV power generation data set and characterize the variability of PV power generation. We calculate an interannual coefficient of variability (COV) for annual average global horizontal irradiance (GHI) on the order of 2% and a COV for annual capacity factor on the order of 3% across the Hawaiian archipelago. Regarding the interannual variability of seasonal trends, we calculate a COV for monthly average GHI values on the order of 5% and a COV for monthly capacity factor on the order of 10%. We model residential-scale and utility-scale PV systems and calculate the economic returns of each system via the payback period and the net present value. We demonstrate that studies based on single-year data sets reach economic conclusions that deviate from the true values realized when interannual variability is accounted for.
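
    The COV metric used here is just the standard deviation over the mean of the annual values. A toy numpy sketch with invented capacity factors (not NREL data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual PV capacity factors for one site, 18 years (1998-2015).
annual_cf = rng.normal(0.185, 0.006, 18)

# Interannual coefficient of variability: std / mean, expressed in percent.
cov = 100.0 * annual_cf.std(ddof=1) / annual_cf.mean()
print(f"interannual COV of capacity factor: {cov:.1f}%")
```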

  2. An exploration of diffusion tensor eigenvector variability within human calf muscles.

    Science.gov (United States)

    Rockel, Conrad; Noseworthy, Michael D

    2016-01-01

    To explore the effect of diffusion tensor imaging (DTI) acquisition parameters on principal and minor eigenvector stability within human lower leg skeletal muscles. Lower leg muscles were evaluated in seven healthy subjects at 3T using an 8-channel transmit/receive coil. Diffusion encoding was performed with nine signal averages (NSA) using 6, 15, and 25 directions (NDD). Individual DTI volumes were combined into aggregate volumes of 3, 2, and 1 NSA according to the number of directions. Tensor eigenvalues (λ1, λ2, λ3), eigenvectors (ε1, ε2, ε3), and DTI metrics (fractional anisotropy [FA] and mean diffusivity [MD]) were calculated for each combination of NSA and NDD. Spatial maps of signal-to-noise ratio (SNR), λ3:λ2 ratio, and zenith angle were also calculated for region of interest (ROI) analysis of vector orientation consistency. ε1 variability was only moderately related to ε2 variability (r = 0.4045). Variation of ε1 was affected by NDD, not NSA (P < 0.0002), while variation of ε2 was affected by NSA, not NDD (P < 0.0003). In terms of tensor shape, vector variability was weakly related to FA (ε1: r = -0.1854; ε2: ns), but had a stronger relation to the λ3:λ2 ratio (ε1: r = -0.5221; ε2: r = -0.1771). Vector variability was also weakly related to SNR (ε1: r = -0.2873; ε2: r = -0.3483). Zenith angle was found to be strongly associated with variability of ε1 (r = 0.8048) but only weakly with that of ε2 (r = 0.2135). The second eigenvector (ε2) displayed higher directional variability relative to ε1, and was only marginally affected by experimental conditions that impacted ε1 variability. © 2015 Wiley Periodicals, Inc.
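
    The derived quantities in this record follow from an eigendecomposition of the 3x3 diffusion tensor. A self-contained sketch using the standard DTI definitions (the example tensor is invented):

```python
import numpy as np

def dti_metrics(D):
    """Eigenvalues, eigenvectors, FA, MD, and the zenith angle of the
    principal eigenvector for a symmetric 3x3 diffusion tensor D."""
    evals, evecs = np.linalg.eigh(D)            # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]  # reorder so l1 >= l2 >= l3
    md = evals.mean()                           # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    zenith = np.degrees(np.arccos(abs(evecs[2, 0])))  # e1 angle from z axis
    return evals, fa, md, zenith

# A prolate tensor roughly aligned with z, as for muscle fibers (mm^2/s).
D = np.diag([1.4e-3, 1.6e-3, 2.0e-3])
print(dti_metrics(D))
```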

  3. Determination of radionuclide solubility limits to be used in SR 97. Uncertainties associated to calculated solubilities

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, J.; Cera, E.; Duro, L.; Jordana, S. [QuantiSci S.L., Barcelona (Spain); Pablo, J. de [DEQ-UPC, Barcelona (Spain); Savage, D. [QuantiSci Ltd., Henley-on-Thames (United Kingdom)

    1997-12-01

    The thermochemical behaviour of 24 critical radionuclides for the forthcoming SR 97 PA exercise is discussed. The available databases are reviewed and updated with new data, and an extended database for aqueous and solid species of the radionuclides of interest is proposed. We have calculated solubility limits for the radionuclides of interest under different groundwater compositions. A sensitivity analysis of the calculated solubilities with respect to the composition of the groundwater is presented. Besides selecting the most likely solubility-limiting phases, in this work we have used coprecipitation approaches in order to calculate more realistic solubility limits for minor radionuclides, such as Ra, Am and Cm. The comparison between the calculated solubilities and the concentrations measured in relevant natural systems (NA) and in spent fuel leaching experiments helps to assess the validity of the methodology used and to derive source term concentrations for the radionuclides studied. The uncertainties associated with the solubilities of the main radionuclides involved in the spent nuclear fuel have also been discussed in this work. The variability of the groundwater chemistry, redox conditions and temperature of the system have been considered the main factors affecting the solubilities. In this case, a sensitivity analysis has been performed in order to study solubility changes as a function of these parameters. The uncertainties have been calculated by including the range of values most commonly found in typical granitic groundwaters. The results obtained from this analysis indicate that there are some radionuclides which are not affected by these parameters, i.e. Ag, Cm, Ho, Nb, Ni, Np, Pu, Se, Sm, Sn, Sr, Tc and U.

  4. Improved method for solving the neutron transport problem by discretization of space and energy variables

    International Nuclear Information System (INIS)

    Bosevski, T.

    1971-01-01

    The polynomial interpolation of neutron flux between the chosen space and energy variables enabled transformation of the integral transport equation into a system of linear equations with constant coefficients. Solutions of this system are the needed values of flux for chosen values of space and energy variables. The proposed improved method for solving the neutron transport problem including the mathematical formalism is simple and efficient since the number of needed input data is decreased both in treating the spatial and energy variables. Mathematical method based on this approach gives more stable solutions with significantly decreased probability of numerical errors. Computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared to results of other very precise calculations. The proposed method was better concerning convergence rate, decreased computing time and needed computer memory. Discretization of variables enabled direct comparison of theoretical and experimental results

  5. Editorial: Challenges and solutions in GW calculations for complex systems

    Science.gov (United States)

    Giustino, F.; Umari, P.; Rubio, A.

    2012-09-01

    We report key advances in the area of GW calculations, review the available software implementations and define standardization criteria to render the comparison between GW calculations from different codes meaningful, and identify future major challenges in the area of quasiparticle calculations. This Topical Issue should be a reference point for further developments in the field.

  6. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso performs identically to the lasso.
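
    A common way to implement the adaptive lasso is to fold the adaptive weights into the design matrix and run a plain lasso; a sketch along those lines (synthetic data; this is the generic technique, not the paper's own algorithm):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
y = X @ beta + rng.normal(size=n)

# Adaptive lasso: weight predictor j by 1/|beta_init_j|^gamma (OLS initial
# fit), fold the weights into the design, fit a plain lasso, then unscale.
gamma = 1.0
beta_init = LinearRegression().fit(X, y).coef_
w = 1.0 / np.abs(beta_init) ** gamma
X_w = X / w                        # column j scaled by 1/w_j
fit = Lasso(alpha=0.1).fit(X_w, y)
beta_adaptive = fit.coef_ / w      # map back to the original scale
print(np.round(beta_adaptive, 2))
```

    Scaling column j by 1/w_j and unscaling the fitted coefficients is algebraically equivalent to penalizing |b_j| with weight w_j, which is what makes this reduction to a plain lasso work.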

  7. NRSC, Neutron Resonance Spectrum Calculation System

    International Nuclear Information System (INIS)

    Leszczynski, Francisco

    2004-01-01

    1 - Description of program or function: The NRSC system is a package of four programs for calculating detailed neutron spectra and related quantities, for homogeneous mixtures of isotopes and cylindrical reactor pin cells, in the energy resonance region, using ENDF/B evaluated nuclear data pre-processed with NJOY or Cullen's codes up to the Doppler broadening and unresolved resonance level. 2 - Methods: NRSC consists of four programs: GEXSCO, RMET21, ALAMBDA and WLUTIL. GEXSCO prepares the nuclear data from ENDF/B evaluated nuclear data pre-processed with NJOY or Cullen's codes up to the Doppler broadening or unresolved resonance level for RMET21 input. RMET21 calculates spectra and related quantities for homogeneous mixtures of isotopes and cylindrical reactor pin cells, in the energy resonance region, using slowing-down algorithms and, in the case of pin cells, the collision probability method. ALAMBDA obtains lambda factors (Goldstein-Cohen intermediate resonance factors in the formalism of the WIMSD code) of different isotopes, for inclusion in WIMSD-type multigroup libraries used by WIMSD or other cell codes, from the output of the RMET21 program. WLUTIL is an auxiliary program for extracting tabulated parameters related to RMET21 calculations from WIMSD libraries for comparison, and for producing new WIMSD libraries with parameters calculated with the RMET21 and ALAMBDA programs. 3 - Restrictions on the complexity of the problem: The GEXSCO program has fixed array dimensions that are suitable for processing all reasonable outputs from nuclear data pre-processing programs. The RMET21 program uses a variable dimension method based on a fixed general array. The ALAMBDA and WLUTIL programs have fixed arrays that are adapted to standard WIMSD libraries. All programs can be easily modified to adapt to special requirements.

  8. National, ready-to-use climate indicators calculation and dissemination

    Science.gov (United States)

    Desiato, F.; Fioravanti, G.; Fraschetti, P.; Perconti, W.; Toreti, A.

    2010-09-01

    In Italy, meteorological data necessary and useful for climate studies are collected, processed and archived by a wide range of national and regional institutions. As a result, the density of the stations, the length and frequency of the observations, the quality control procedures and the database structure vary from one dataset to another. In order to maximize the use of those data for climate knowledge and climate change assessments, a computerized system for the collection, quality control, calculation, regular update and rapid dissemination of climate indicators (named SCIA) was developed. Along with the information provided by complete metadata, climate indicators consist of statistics (mean, extremes, date of occurrence, standard deviation) over ten-day, monthly and yearly periods of meteorological variables, including temperature, precipitation, humidity, wind, water balance, evapotranspiration, degree-days, cloud cover, sea level pressure and solar radiation. In addition, normal values over thirty-year reference climatological periods and yearly anomalies are calculated and made available. All climate indicators, as well as their time series at a single location or spatial distribution at a selected time, are available through a dedicated web site (www.scia.sinanet.apat.it). In addition, secondary products, like high-resolution temperature maps obtained by kriging spatial interpolation, are made available. Over the last three years, about 40000 visitors accessed the SCIA web site, with an average of 45 visitors per day. Most frequent visitors belong to categories like universities and research institutes; private companies and the general public are present as well. Apart from research purposes, climate indicators disseminated through SCIA may be used in several socio-economic sectors like energy consumption, water management, agriculture, tourism and health. With regard to our activity, we rely on these indicators for the estimation of
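
    The indicator pipeline described here (periodic statistics, thirty-year normals, yearly anomalies) maps naturally onto time-series resampling. A toy pandas sketch with a synthetic daily temperature series (not SCIA data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic daily mean temperature, 1961-2010, with a seasonal cycle.
idx = pd.date_range("1961-01-01", "2010-12-31", freq="D")
temp = pd.Series(
    12 + 8 * np.sin(2 * np.pi * (idx.dayofyear - 100) / 365.25)
    + rng.normal(0, 2, len(idx)),
    index=idx,
)

# Monthly and yearly indicators: mean, extremes, standard deviation.
monthly = temp.resample("MS").agg(["mean", "min", "max", "std"])
yearly_mean = temp.resample("YS").mean()

# Normal over the 1961-1990 reference period, and yearly anomalies.
normal = yearly_mean["1961":"1990"].mean()
anomaly = yearly_mean - normal
print(anomaly.tail())
```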

  9. Validation of an online risk calculator for the prediction of anastomotic leak after colon cancer surgery and preliminary exploration of artificial intelligence-based analytics.

    Science.gov (United States)

    Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L

    2017-11-01

    Recently published data support the use of a web-based risk calculator (www.anastomoticleak.com) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. The area under the receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-based predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
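
    The AUROC comparison reported above can be reproduced mechanically once predicted risks and outcomes are in hand. A toy sklearn sketch with simulated (not study) data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical leak outcomes (1 = anastomotic leak) and predicted risks
# from two calculators for the same 402 patients.
y = (rng.random(402) < 0.072).astype(int)
risk_a = np.clip(0.05 + 0.25 * y + rng.normal(0, 0.1, 402), 0, 1)  # stronger
risk_b = np.clip(0.05 + 0.05 * y + rng.normal(0, 0.1, 402), 0, 1)  # weaker

print("AUROC A:", round(roc_auc_score(y, risk_a), 2))
print("AUROC B:", round(roc_auc_score(y, risk_b), 2))
```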

  10. Quasiseparation of variables in the Schroedinger equation with a magnetic field

    International Nuclear Information System (INIS)

    Charest, F.; Hudon, C.; Winternitz, P.

    2007-01-01

    We consider a two-dimensional integrable Hamiltonian system with a vector and scalar potential in quantum mechanics. Contrary to the case of a pure scalar potential, the existence of a second order integral of motion does not guarantee the separation of variables in the Schroedinger equation. We introduce the concept of 'quasiseparation of variables' and show that in many cases it allows us to reduce the calculation of the energy spectrum and wave functions to linear algebra

  11. Review of Variable Generation Integration Charges

    Energy Technology Data Exchange (ETDEWEB)

    Porter, K.; Fink, S.; Buckley, M.; Rogers, J.; Hodge, B. M.

    2013-03-01

    The growth of wind and solar generation in the United States, and the expectation of continued growth of these technologies, dictates that the future power system will be operated in a somewhat different manner because of increased variability and uncertainty. A small number of balancing authorities have attempted to determine an 'integration cost' to account for these changes to their current operating practices. Some balancing authorities directly charge wind and solar generators for integration charges, whereas others add integration charges to projected costs of wind and solar in integrated resource plans or in competitive solicitations for generation. This report reviews the balancing authorities that have calculated variable generation integration charges and broadly compares and contrasts the methodologies they used to determine their specific integration charges. The report also profiles each balancing authority and how they derived wind and solar integration charges.

  12. PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION

    Directory of Open Access Journals (Sweden)

    Marian ŢAICU

    2014-11-01

    Full Text Available Progress in improving production technology requires appropriate measures to achieve an efficient management of costs. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.

  13. Study of the variance of a Monte Carlo calculation. Application to weighting

    Energy Technology Data Exchange (ETDEWEB)

    Lanore, Jeanne-Marie [Commissariat a l' Energie Atomique - CEA, Centre d' Etudes Nucleaires de Fontenay-aux-Roses, Direction des Piles Atomiques, Departement des Etudes de Piles, Service d' Etudes de Protections de Piles (France)

    1969-04-15

    One of the main difficulties in Monte Carlo computations is the estimation of the variance of the results. Generally, only an apparent variance can be observed over a few calculations, often very different from the actual variance. By studying a large number of short calculations, the authors have tried to evaluate the real variance, and then to apply the obtained results to the optimization of the computations. The program used is the Poker one-dimensional Monte Carlo program. Calculations are performed in two types of fictitious media: a body with constant cross section, without absorption, where all collisions are elastic and isotropic; and a body with variable cross section (presenting a very pronounced peak and hole), with anisotropy for high-energy elastic collisions and the possibility of inelastic collisions (this body presents all the features that can appear in a real case).
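
    The paper's empirical strategy, estimating the real run-to-run variance from a large number of short calculations, is easy to mimic. A toy sketch with a heavy-tailed score distribution standing in for a difficult tally (all values invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Apparent" variance inferred from the histories of a single short run...
one_run = rng.lognormal(0.0, 2.0, 1000)
apparent = one_run.var(ddof=1) / 1000

# ...versus the actual variance of the run estimate, over many short runs.
run_means = np.array(
    [rng.lognormal(0.0, 2.0, 1000).mean() for _ in range(2000)]
)
actual = run_means.var(ddof=1)

print(f"apparent variance: {apparent:.4f}, actual variance: {actual:.4f}")
```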

  14. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)]·exp[−(1/λ + 1/θ)t] for t > 0. Also, the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
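
    Under the exponential models above, the maximum likelihood estimates of λ and θ are simply the sample mean time-to-failure and mean time-to-repair, and A(t) follows by plug-in. A minimal sketch (simulated cycles; λ and θ are read as mean times, which makes the quoted expressions for A(t) and A(∞) consistent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated failure-repair cycles: X = times to failure, Y = repair times.
X = rng.exponential(100.0, 25)   # true mean time-to-failure = 100 h
Y = rng.exponential(8.0, 25)     # true mean time-to-repair  = 8 h

lam, theta = X.mean(), Y.mean()  # MLEs of the exponential means

def A(t):
    """A(t) = lam/(lam+theta) + theta/(lam+theta)*exp(-(1/lam + 1/theta)*t)."""
    s = lam + theta
    return lam / s + (theta / s) * np.exp(-(1.0 / lam + 1.0 / theta) * t)

print("A(10 h) =", round(A(10.0), 4))
print("A(inf)  =", round(lam / (lam + theta), 4))
```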

  15. Calculation of quantum-mechanical system energy spectra using path integrals

    International Nuclear Information System (INIS)

    Evseev, A.M.; Dmitriev, V.P.

    1977-01-01

    A solution of the Feynman quantum-mechanical integral connecting a wave function (psi (x, t)) at a moment t+tau (tau → 0) with the wave function at the moment t is provided by complex variable substitution and subsequent path integration. Time dependence of the wave function is calculated by the Monte Carlo method. The Fourier inverse transformation of the wave function by path integration calculated has been applied to determine the energy spectra. Energy spectra are presented of a hydrogen atom derived from wave function psi (x, t) at different x, as well as boson energy spectra of He, Li, and Be atoms obtained from psi (x, t) at X = O

  16. A new reliability measure based on specified minimum distances before the locations of random variables in a finite interval

    International Nuclear Information System (INIS)

    Todinov, M.T.

    2004-01-01

    A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number density which guarantees that the probability of clustering of two or more random variables in a finite interval remains below a maximum acceptable level. It is demonstrated that even for moderate number densities the probability of clustering is substantial and should not be neglected in reliability calculations. In the important special case where the random variables are failure times, models have been proposed for determining the upper bound of the hazard rate which guarantees a set of minimum failure-free operating intervals before the random failures, with a specified probability. A model has also been proposed for determining the upper bound of the hazard rate which guarantees a minimum availability target. Using the models proposed, a new strategy, models and reliability tools have been developed for setting quantitative reliability requirements which consist of determining the intersection of the hazard rate envelopes (hazard rate upper bounds) which deliver a minimum failure-free operating period before random failures, a risk of premature failure below a maximum acceptable level and a minimum required availability. It is demonstrated that setting reliability requirements solely based on an availability target does not necessarily mean a low risk of premature failure. Even at a high availability level, the probability of premature failure can be substantial. For industries characterised by a high cost of failure, the reliability requirements should involve a hazard rate envelope limiting the risk of failure below a maximum acceptable level
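
    The headline claim, that clustering is likely even at moderate densities, can be checked with a few lines of simulation (parameters invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_clustering(rate, length, min_gap, n_trials=50_000):
    """Monte Carlo probability that at least two points of a homogeneous
    Poisson process on [0, length] fall closer together than min_gap."""
    hits = 0
    for _ in range(n_trials):
        n = rng.poisson(rate * length)
        if n >= 2:
            pts = np.sort(rng.uniform(0.0, length, n))
            if np.diff(pts).min() < min_gap:
                hits += 1
    return hits / n_trials

# Five expected events over the interval, minimum gap of 5% of its length.
print(prob_clustering(rate=0.5, length=10.0, min_gap=0.5))
```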

  17. Efficient DoA Tracking of Variable Number of Moving Stochastic EM Sources in Far-Field Using PNN-MLP Model

    Directory of Open Access Journals (Sweden)

    Zoran Stanković

    2015-01-01

    Full Text Available An efficient neural network-based approach for tracking of a variable number of moving electromagnetic (EM) sources in far-field is proposed in the paper. The electromagnetic sources considered here are of stochastic radiation nature, mutually uncorrelated, and at arbitrary angular distance. The neural network model is based on a combination of probabilistic neural network (PNN) and multilayer perceptron (MLP) networks, and it performs real-time calculations in two stages, determining at first the number of moving sources present in an observed space sector at specific moments in time and then calculating their angular positions in the azimuth plane. Once successfully trained, the neural network model is capable of performing accurate and efficient direction of arrival (DoA) estimation within the training boundaries, which is illustrated with an appropriate example.

  18. Ensuring the Availability of Funds (Germany)

    International Nuclear Information System (INIS)

    Warnecke, Ernst; Paul, Michael

    2006-01-01

    1 - Legislation and regulation pertinent to funding: no site / facility specific legislation / regulation (Decommissioning Guideline); the obligation for D+D results from the Atomic Energy Act; the AtG requires a license for D+D of a nuclear facility; the Commercial Code requires reserves for liabilities; the Income Tax Law (EStG) is relevant for the taxation of reserves; the 'Ordinance on Advance Payments' is relevant for the construction of RW disposal facilities; the AtG is relevant for the payment of RW disposal costs. 2 - Financing system: Basic principle: polluter pays; Publicly funded facilities (mainly Federal Government): payment from annual budget; Privately owned facilities: collection of 'reserves' during operation / linear accumulation over 25 years, coverage: processing, storage and disposal of radioactive waste/spent fuel, D+D of nuclear facilities, reserves are in the portfolio of industry, financial risk lies with the operator; Availability of private funds: annual review / revision of the cost calculations by the operator, review of cost calculations by tax authorities. 3 - Costs: Cost calculations by the operator are based on detailed planning and need to be assessed conservatively. D+D cost calculation (as of 1999): ca. 300 × 10⁶ Euro (1200 MW PWR, excl. disposal), ca. 350 × 10⁶ Euro (800 MW BWR, excl. disposal), ca. 700 × 10⁶ Euro (incl. disposal of non-heat generating waste); immediate dismantling is slightly cheaper than deferred dismantling. Review and decision on adequacy of cost calculation by tax authorities. 4 - Experience: A lot of experience (public and private) has been gained; experience is good, funds were available. 5 - Changing conditions - new challenges: Termination of nuclear energy generation, new approach to waste disposal, privatisation of utilities, liberalisation of the energy market. Does the existing funding system need improvement? Reconsideration of the existing situation, exploration of potential improvements.

  19. Agriculture-related radiation dose calculations

    International Nuclear Information System (INIS)

    Furr, J.M.; Mayberry, J.J.; Waite, D.A.

    1987-10-01

    Estimates of radiation dose to the public must be made at each stage in the identification and qualification process leading to siting a high-level nuclear waste repository. Specifically considering the ingestion pathway, this paper examines questions of reliability and adequacy of dose calculations in relation to five stages of data availability (geologic province, region, area, location, and mass balance) and three methods of calculation (population, population/food production, and food production driven). Calculations were done using the model PABLM with data for the Permian and Palo Duro Basins and the Deaf Smith County area. Extra effort expended in gathering agricultural data at succeeding environmental characterization levels does not appear justified, since dose estimates do not differ greatly; that effort would be better spent determining the usage of the food types that contribute most to the total dose. Consumption rate and the air dispersion factor are critical to the assessment of radiation dose via the ingestion pathway. 17 refs., 9 figs., 32 tabs.

  20. Scale-dependent spatial variability in peatland lead pollution in the southern Pennines, UK.

    Science.gov (United States)

    Rothwell, James J; Evans, Martin G; Lindsay, John B; Allott, Timothy E H

    2007-01-01

    Increasingly, within-site and regional comparisons of peatland lead pollution have been undertaken using the inventory approach. The peatlands of the Peak District, southern Pennines, UK, have received significant atmospheric inputs of lead over the last few hundred years. A multi-core study at three peatland sites in the Peak District demonstrates significant within-site spatial variability in industrial lead pollution. Stochastic simulations reveal that 15 peat cores are required to calculate reliable lead inventories at the within-site and within-region scale for this highly polluted area of the southern Pennines. Within-site variability in lead pollution is dominant at the within-region scale. The study demonstrates that significant errors may be associated with peatland lead inventories at sites where only a single peat core has been used to calculate an inventory. Meaningful comparisons of lead inventories at the regional or global scale can only be made if the within-site variability of lead pollution has been quantified reliably.

  1. Nodewise analytical calculation of the transfer function

    International Nuclear Information System (INIS)

    Makai, Mihaly

    1994-01-01

    The space dependence of neutron noise has so far been investigated mostly in homogeneous core models. Application of core diagnostic methods to locate a malfunction requires, however, that the transfer function be calculated for real, inhomogeneous cores. A code suitable for such a purpose must be able to handle complex arithmetic and delta-function sources. Further requirements are analytical dependence in one spatial variable and fast execution. The present work describes the TIDE program, written to fulfil the above requirements. The core is subdivided into homogeneous, square assemblies. An analytical solution is given, which is a generalisation of the inhomogeneous response matrix method. (author)

  2. A Novel Hybrid Similarity Calculation Model

    Directory of Open Access Journals (Sweden)

    Xiaoping Fan

    2017-01-01

    Full Text Available This paper addresses the problems of similarity calculation in traditional recommendation algorithms based on nearest neighbor collaborative filtering, especially their failure to describe dynamic user preferences. Proceeding from the perspective of solving the problem of user interest drift, a new hybrid similarity calculation model is proposed in this paper. This model consists of two parts: on the one hand, the model uses function fitting to describe users' rating behaviors and their rating preferences; on the other hand, it employs the Random Forest algorithm to take user attribute features into account. The paper then combines the two parts to build a new hybrid similarity calculation model for user recommendation. Experimental results show that, for data sets of different sizes, the model's prediction precision is higher than that of traditional recommendation algorithms.

  3. Calculation of Gilbert damping in ferromagnetic films

    Directory of Open Access Journals (Sweden)

    Edwards D. M.

    2013-01-01

    Full Text Available The Gilbert damping constant in the phenomenological Landau-Lifshitz-Gilbert equation, which describes the dynamics of magnetization, is calculated for Fe, Co and Ni bulk ferromagnets, Co films and Co/Pd bilayers within a nine-band tight-binding model with spin-orbit coupling included. The computational efficiency is remarkably improved by introducing finite temperature into the electronic occupation factors and a subsequent summation over the Matsubara frequencies. The calculated dependence of the Gilbert damping constant on scattering rate for bulk Fe, Co and Ni is in good agreement with the results of previous ab initio calculations. Calculations are reported for ferromagnetic Co metallic films and Co/Pd bilayers. The dependence of the Gilbert damping constant on Co film thickness, for various scattering rates, is studied and compared with recent experiments.

  4. One dimensional benchmark calculations using diffusion theory

    International Nuclear Information System (INIS)

    Ustun, G.; Turgut, M.H.

    1986-01-01

    This is a comparative study using different one-dimensional diffusion codes available at our Nuclear Engineering Department. Some modifications have been made to the codes to fit the problems. One of the codes, DIFFUSE, solves the neutron diffusion equation in slab, cylindrical and spherical geometries by using a forward elimination-backward substitution technique. The DIFFUSE code calculates criticality, critical dimensions, critical material concentrations and adjoint fluxes as well. It is used for the space- and energy-dependent neutron flux distribution. The whole scattering matrix can be used if desired. Normalisation of the relative flux distributions to the reactor power, plotting of the flux distributions, and leakage terms for the other two dimensions have been added. Some modifications have also been made to the code output. Two benchmark problems have been calculated with the modified version, and the results are compared with those of the BBD code, which is available at our department and uses the same calculation techniques. Agreement is quite good in results such as k-eff and the flux distributions for the two case studies. (author)

  5. Pseudo-variables method to calculate HMA relaxation modulus through low-temperature induced stress and strain

    International Nuclear Information System (INIS)

    Canestrari, Francesco; Stimilli, Arianna; Bahia, Hussain U.; Virgili, Amedeo

    2015-01-01

    Highlights: • Proposal of a new method to analyze low-temperature cracking of bituminous mixtures. • Reliability of the relaxation modulus master curve modeling through Prony series. • Suitability of the pseudo-variables approach for a closed-form solution. Abstract: Thermal cracking is a critical failure mode for asphalt pavements. The relaxation modulus is the major viscoelastic property that controls the development of thermally induced tensile stresses. Therefore, accurate determination of the relaxation modulus is fundamental for designing long-lasting pavements. This paper proposes a reliable analytical solution for constructing the relaxation modulus master curve by measuring the stress and strain thermally induced in asphalt mixtures. The solution, based on Boltzmann's Superposition Principle and pseudo-variables concepts, accounts for the time and temperature dependency of the modulus of bituminous materials, avoiding complex integral transformations. The applicability of the solution is demonstrated by testing a reference mixture using the Asphalt Thermal Cracking Analyzer (ATCA) device. By applying thermal loadings on restrained and unrestrained asphalt beams, ATCA allows the determination of several parameters, but is still unable to provide reliable estimations of relaxation properties. Without them, the measurements from ATCA cannot be used in modeling of pavement behavior. Thus, the proposed solution successfully integrates the ATCA experimental data. The same methodology can be applied to all test methods that concurrently measure stress and strain. The statistical parameters used to evaluate the goodness of fit show optimum correlation between theoretical and experimental results, demonstrating the accuracy of this mathematical approach.
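
    A Prony series is the standard functional form for such master curves. As a rough, self-contained sketch (all values synthetic, not from the paper): with the relaxation times fixed on a logarithmic grid, the Prony coefficients enter linearly and can be recovered by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic relaxation-modulus data E(t) = E_inf + sum_i E_i exp(-t/tau_i).
t = np.logspace(-2, 4, 60)                     # reduced time, s
E_meas = (200 + 8000 * np.exp(-t / 5.0)
              + 4000 * np.exp(-t / 500.0)
              + rng.normal(0, 50, t.size))     # MPa, with noise

# Fix relaxation times on a log grid; the coefficients are then linear.
tau = np.logspace(-2, 4, 7)
A = np.hstack([np.ones((t.size, 1)), np.exp(-t[:, None] / tau[None, :])])
coef, *_ = np.linalg.lstsq(A, E_meas, rcond=None)

E_inf, E_i = coef[0], coef[1:]   # scipy.optimize.nnls would force E_i >= 0
print("E_inf ~", round(E_inf, 1), "MPa; Prony coefficients:", np.round(E_i, 1))
```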

  6. Clinical implementation and evaluation of the Acuros dose calculation algorithm.

    Science.gov (United States)

    Yan, Chenyu; Combine, Anthony G; Bednarz, Greg; Lalonde, Ronald J; Hu, Bin; Dickens, Kathy; Wynn, Raymond; Pavord, Daniel C; Saiful Huq, M

    2017-09-01

    The main aim of this study is to validate the Acuros XB dose calculation algorithm for a Varian Clinac iX linac in our clinics, and subsequently compare it with the widely used AAA algorithm. The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were validated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central axis and off-axis points at different depths were chosen for the comparison. In addition, the accuracy of Acuros was evaluated for wedge fields with wedge angles from 15 to 60°. Similarly, variable field sizes for an inhomogeneous phantom were chosen to validate the Acuros algorithm. In addition, doses calculated by Acuros and AAA at the center of lung-equivalent tissue from three different VMAT plans were compared to ion chamber-measured doses in a QUASAR phantom, and the dose distributions calculated by the two algorithms, and their differences, were compared on patients. Computation time for VMAT plans was also evaluated for Acuros and AAA. Differences between dose-to-water (calculated by AAA and Acuros XB) and dose-to-medium (calculated by Acuros XB) on patient plans were compared and evaluated. For open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculations were within 1% of measurements. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. Testing on the inhomogeneous phantom demonstrated that AAA overestimated doses by up to 8.96% at a point close to the lung/solid water interface, while Acuros XB reduced that to 1.64%. The test on the QUASAR phantom showed that Acuros achieved better agreement in lung-equivalent tissue, while AAA underestimated dose for all VMAT plans by up to 2.7%. Acuros XB computation time was about three times faster than AAA for VMAT plans, and

  7. A 3D coarse-mesh time dependent code for nuclear reactor kinetic calculations

    International Nuclear Information System (INIS)

    Montagnini, B.; Raffaelli, P.; Sumini, M.; Zardini, D.M.

    1996-01-01

    A coarse-mesh code for time-dependent multigroup neutron diffusion calculations, based on a direct integration scheme for the time dependence and a low-order nodal flux expansion approximation for the space variables, has been implemented as a fast tool for transient analysis. (Author)

  8. Biological Sampling Variability Study

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-11-08

    .9% - dirty vs. 53.6% - clean) (see Figure 4.1). Variance component analysis was used to estimate the amount of variability attributable to each source. There was not much difference in variability between dirty and clean samples, or between materials, so these results were pooled. There was a significant difference by deposited concentration, so results were separated for the 10-spore and 100-spore tests. In each case the within-sampler variability was the largest, with variances of 426.2 for 10 spores and 173.1 for 100 spores. The within-sampler variability constitutes the variability between the four samples of similar material, interfering material, and concentration taken by each sampler. The between-sampler variance was estimated to be 0 for 10 spores and 1.2 for 100 spores. The between-day variance was estimated to be 42.1 for 10 spores and 78.9 for 100 spores. Standard deviations can be calculated in each case by taking the square root of the variance.
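
    The variance component logic used here can be illustrated with method-of-moments estimators for a balanced one-way layout (all numbers synthetic, not the PNNL data):

```python
import numpy as np

rng = np.random.default_rng(0)

# k samplers, m replicate samples each (hypothetical recovery values).
k, m = 6, 4
data = 50 + rng.normal(0, 1, k)[:, None] + rng.normal(0, 20, (k, m))

grand = data.mean()
ms_between = m * np.sum((data.mean(axis=1) - grand) ** 2) / (k - 1)
ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (k * (m - 1))

var_within = ms_within                                # within-sampler
var_between = max(0.0, (ms_between - ms_within) / m)  # truncated at zero,
                                                      # as in the record
print(f"within: {var_within:.1f}, between: {var_between:.1f}, "
      f"sd within: {np.sqrt(var_within):.1f}")
```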

  9. A new approach for modelling variability in residential construction projects

    Directory of Open Access Journals (Sweden)

    Mehrdad Arashpour

    2013-06-01

    Full Text Available The construction industry is plagued by long cycle times caused by variability in the supply chain. Variations or undesirable situations are the result of factors such as non-standard practices, work site accidents, inclement weather conditions and faults in design. This paper uses a new approach for modelling variability in construction by linking relative variability indicators to processes. Mass homebuilding sector was chosen as the scope of the analysis because data is readily available. Numerous simulation experiments were designed by varying size of capacity buffers in front of trade contractors, availability of trade contractors, and level of variability in homebuilding processes. The measurements were shown to lead to an accurate determination of relationships between these factors and production parameters. The variability indicator was found to dramatically affect the tangible performance measures such as home completion rates. This study provides for future analysis of the production homebuilding sector, which may lead to improvements in performance and a faster product delivery to homebuyers.

  11. Short-term Variability of Vitamin D-Related Biomarkers.

    Science.gov (United States)

    Lutsey, Pamela L; Parrinello, Christina M; Misialek, Jeffrey R; Hoofnagle, Andy N; Henderson, Clark M; Laha, Thomas J; Michos, Erin D; Eckfeldt, John H; Selvin, Elizabeth

    2016-12-01

    Quantifying the variability of biomarkers is important, as high within-person variability can lead to misclassification of individuals. Short-term variability of important markers of vitamin D metabolism is relatively unknown. A repeatability study was conducted in 160 Atherosclerosis Risk in Communities study participants (60% female, 28% black, mean age 76 years). Fasting serum was drawn at 2 time points, a median of 6 (range 3-13) weeks apart. Vitamin D binding protein (VDBP) and 25-hydroxyvitamin D [25(OH)D] were measured by LC-MS, fibroblast growth factor 23 (FGF23) and parathyroid hormone (PTH) by enzyme-linked immunoassay, and calcium and phosphorus by Roche Cobas 6000. Free and bioavailable 25(OH)D were calculated. We calculated the within-person CV (CV_W), intraclass correlation coefficient (ICC), Spearman rank correlation coefficient (r), and percent reclassified. The CV_W was lowest for calcium (2.0%), albumin (3.6%), 25(OH)D (6.9%), VDBP (7.0%) and phosphorus (7.6%); intermediate for free 25(OH)D (9.0%) and bioavailable 25(OH)D (9.9%); and highest for PTH (16.7%) and FGF23 (17.8%). Reclassification was highest for PTH, VDBP, and phosphorus (all 7.5%). The ICC and r were highest (≥0.80) for 25(OH)D, free 25(OH)D, bioavailable 25(OH)D and PTH, but somewhat lower (approximately 0.60-0.75) for the other biomarkers. Six-week short-term variability, as assessed by CV_W, was quite low for VDBP, calcium and phosphorus, but fairly high for FGF23 and PTH. As such, multiple measurements of FGF23 and PTH may be needed to minimize misclassification. These results provide insight into the extent of potential misclassification of vitamin D markers in research and clinical settings. © 2016 American Association for Clinical Chemistry.
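
    With duplicate measurements, CV_W and a one-way ICC reduce to a few lines. A sketch with simulated 25(OH)D values (not ARIC data; the formulas are the standard ones for paired replicates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated duplicate 25(OH)D measurements (ng/mL) for 160 participants.
truth = rng.normal(25, 6, 160)
visit1 = truth + rng.normal(0, 1.7, 160)
visit2 = truth + rng.normal(0, 1.7, 160)
pair = np.stack([visit1, visit2], axis=1)

means = pair.mean(axis=1)
sds = pair.std(axis=1, ddof=1)

cv_w = 100 * np.sqrt(np.mean((sds / means) ** 2))  # within-person CV, %

var_w = np.mean(sds ** 2)                          # within-person variance
var_b = max(0.0, means.var(ddof=1) - var_w / 2)    # between-person variance
icc = var_b / (var_b + var_w)
print(f"CV_W = {cv_w:.1f}%, ICC = {icc:.2f}")
```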

  12. The Possibility Using the Power Production Function of Complex Variable for Economic Forecasting

    Directory of Open Access Journals (Sweden)

    Sergey Gennadyevich Svetunkov

    2016-09-01

    Full Text Available The possibility of dynamic analysis and forecasting of production results using power production functions of complex variables with real coefficients is considered. This model expands the arsenal of instrumental methods and allows multivariate production forecasts that are unattainable with methods based on real variables, since functions of complex variables model production differently than real-variable models do. The values of the coefficients of the power production function of complex variables can be calculated for each statistical observation. This makes it possible to consider the change of the coefficients over time, to analyze this trend, and to predict the values of the coefficients for a given term, thereby predicting the form of the production function, which in turn forecasts the operating results. Thus, a model of the production function with variable coefficients is introduced into scientific circulation. With this model, the inverse problem of forecasting can be solved, such as the determination of the quantities of labor and capital necessary to achieve the desired operating results. The study is based on the principles of the modern methodology of complex-valued economics, one of whose sections is the complex-valued modeling of production functions. In the article, the possibility of economic forecasting is tested on the example of the UK economy. The results of this prediction are compared with forecasts obtained by other methods, which leads to a conclusion about the effectiveness of the proposed approach and forecasting method at the macro level of production systems. A complex-valued power model of the production function is recommended for the multivariate prediction of sustainable production systems: the global economy, the economies of individual countries, major industries and regions.

  13. A Unified Pricing of Variable Annuity Guarantees under the Optimal Stochastic Control Framework

    Directory of Open Access Journals (Sweden)

    Pavel V. Shevchenko

    2016-07-01

    Full Text Available In this paper, we review pricing of the variable annuity living and death guarantees offered to retail investors in many countries. Investors purchase these products to take advantage of market growth and protect savings. We present pricing of these products via an optimal stochastic control framework and review the existing numerical methods. We also discuss pricing under complete/incomplete financial market models, stochastic mortality and optimal/sub-optimal policyholder behavior, and in the presence of taxes. For numerical valuation of these contracts in the case of a simple risky asset process, we develop a direct integration method based on Gauss-Hermite quadratures, with a one-dimensional cubic spline for calculation of the expected contract value and a bi-cubic spline interpolation for applying the jump conditions across the contract cashflow event times. This method is easier to implement and faster when compared to the partial differential equation methods if the transition density (or its moments) of the risky asset underlying the contract is known in closed form between the event times. We present accurate numerical results for pricing of a Guaranteed Minimum Accumulation Benefit (GMAB) guarantee available on the market that can serve as a numerical benchmark for practitioners and researchers developing pricing of variable annuity guarantees to assess the accuracy of their numerical implementation.
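
    The Gauss-Hermite step is the computational core: for a lognormal asset, expectations across an event interval collapse to a weighted sum. A stripped-down sketch (single period, no withdrawals or fees; all parameters invented):

```python
import numpy as np

def expected_value(f, s0, r, sigma, T, n=32):
    """E[f(S_T)] for lognormal S_T via Gauss-Hermite quadrature:
    E[g(Z)] ~ pi^(-1/2) * sum_i w_i * g(sqrt(2) * x_i), with Z ~ N(0, 1)."""
    x, w = np.polynomial.hermite.hermgauss(n)  # nodes/weights for e^{-x^2}
    z = np.sqrt(2.0) * x
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return (w @ f(s_T)) / np.sqrt(np.pi)

# Discounted value of a GMAB-style terminal payoff max(S_T, G).
s0, r, sigma, T, G = 100.0, 0.03, 0.2, 10.0, 100.0
value = np.exp(-r * T) * expected_value(lambda s: np.maximum(s, G),
                                        s0, r, sigma, T)
print(round(value, 2))
```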

  14. Laser Beam and Resonator Calculations on Desktop Computers.

    Science.gov (United States)

    Doumont, Jean-Luc

    There is a continuing interest in the design and calculation of laser resonators and optical beam propagation. In particular, interest has recently increased in developing concepts such as one-sided unstable resonators, supergaussian reflectivity profiles, diode laser modes, beam quality concepts, mode competition, excess noise factors, and nonlinear Kerr lenses. To meet these calculation needs, I developed a general-purpose software package named PARAXIA™, aimed at providing optical scientists and engineers with a set of powerful design and analysis tools that provide rapid and accurate results and are extremely easy to use. PARAXIA can handle separable paraxial optical systems in cartesian or cylindrical coordinates, including complex-valued and misaligned ray matrices, with full diffraction effects between apertures. It includes the following programs: ABCD provides complex-valued ray-matrix and gaussian-mode analyses for arbitrary paraxial resonators and optical systems, including astigmatism and misalignment in each element. This program required that I generalize the theory of gaussian beam propagation to the case of an off-axis gaussian beam propagating through a misaligned, complex-valued ray matrix. FRESNEL uses FFT and FHT methods to propagate an arbitrary wavefront through an arbitrary paraxial optical system using Huygens' integral in rectangular or radial coordinates. The wavefront can be multiplied by an arbitrary mirror profile and/or saturable gain sheet on each successive propagation through the system. I used FRESNEL to design a one-sided negative-branch unstable resonator for a free-electron laser, and to show how a variable internal aperture influences the mode competition and beam quality in a stable cavity. VSOURCE implements the virtual source analysis to calculate eigenvalues and eigenmodes for unstable resonators with both circular and rectangular hard-edged mirrors (including misaligned rectangular systems). I used VSOURCE to

  15. THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS

    Directory of Open Access Journals (Sweden)

    Anna CEBOTARI

    2017-11-01

    Full Text Available The accounting of post-employment benefits based on actuarial calculations currently remains a subject studied in Moldova only theoretically. Applying actuarial calculations in accounting practice, in fact, reflects its evolving character. Because national accounting standards have been adapted to international standards, which in turn require the valuation of assets and liabilities at fair value, there is a need for exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is to be reflected in the financial statements and provided to the internal and external users of the entity. Hence arises the need for highly reliable information, which can be provided by applying actuarial calculations.

  16. Phenotypic and genotypic variability of disc flower corolla length and nectar content in sunflower

    Directory of Open Access Journals (Sweden)

    Joksimović Jovan

    2003-01-01

    Full Text Available The nectar content and disc flower corolla length are the two most important parameters of attractiveness to pollinators in sunflower. The phenotypic and genotypic variability of these two traits was studied in four commercially important hybrids and their parental components in a trial with three fertilizer doses over two years. The results showed that, for individual genotypes, the variability of disc flower corolla length was affected the most by year (85.38-97.46%). As the study years were extremely different, the phenotypic variance of the hybrids and parental components was calculated for each year separately. In such conditions, across all crossing combinations, the largest contribution to the phenotypic variance of corolla length was that of genotype: 57.27-61.11% (NS-H-45); 64.51-84.84% (Velja); 96.74-97.20% (NS-H-702); and 13.92-73.17% (NS-H-111). A similar situation was observed for the phenotypic variability of nectar content, where genotype also had the largest influence, namely 39.77-48.25% in NS-H-45; 39.06-42.51% in Velja; 31.97-72.36% in NS-H-702; and 62.13-94.96% in NS-H-111.

  17. Behavioral variability, elimination of responses, and delay-of-reinforcement gradients in SHR and WKY rats

    Directory of Open Access Journals (Sweden)

    Killeen Peter R

    2007-11-01

    Full Text Available Abstract Background Attention-deficit/hyperactivity disorder (ADHD) is characterized by a pattern of inattention, hyperactivity, and impulsivity that is cross-situational, persistent, and produces social and academic impairment. Research has shown that reinforcement processes are altered in ADHD. The dynamic developmental theory has suggested that a steepened delay-of-reinforcement gradient and deficient extinction of behavior produce behavioral symptoms of ADHD and increased behavioral variability. Method The present study investigated behavioral variability and elimination of non-target responses during acquisition in an animal model of ADHD, the spontaneously hypertensive rat (SHR), using Wistar Kyoto (WKY) rats as controls. The study also aimed at providing a novel approach to measuring delay-of-reinforcement gradients in the SHR and the WKY strains. The animals were tested in a modified operant chamber presenting 20 response alternatives. Nose pokes in a target hole produced water according to fixed interval (FI) schedules of reinforcement, while nose pokes in the remaining 19 holes either had no consequences or produced a sound or a short flickering of the houselight. The stimulus-producing holes were included to test whether light and sound act as sensory reinforcers in SHR. Data from the first six sessions testing FI 1 s were used for calculation of the initial distribution of responses. Additionally, Euclidean distance (measured from the center of each hole to the center of the target hole) and entropy (a measure of variability) were also calculated. Delay-of-reinforcement gradients were calculated across sessions by dividing the fixed interval into epochs and determining how much reinforcement of responses in one epoch contributed to responding in the next interval. Results Over the initial six sessions, behavior became clustered around the target hole. There was greater initial variability in SHR behavior, and slower elimination of
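
    The entropy measure described above can be sketched briefly; the response counts below are hypothetical, and the log base (bits) is an assumption, since the abstract does not specify one.

```python
import numpy as np

def response_entropy(counts):
    """Shannon entropy (bits) of the response distribution across holes."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                        # treat 0 * log(0) as 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical session: nose pokes clustered on the target hole (index 0).
counts = [120, 15, 9, 7, 5, 4] + [2] * 14     # 20 response alternatives
h = response_entropy(counts)    # low h = responding clustered on the target
```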

  18. Comparison of calculation methods for estimating annual carbon stock change in German forests under forest management in the German greenhouse gas inventory.

    Science.gov (United States)

    Röhling, Steffi; Dunger, Karsten; Kändler, Gerald; Klatt, Susann; Riedel, Thomas; Stümer, Wolfgang; Brötz, Johannes

    2016-12-01

    The German greenhouse gas inventory in the land use change sector strongly depends on national forest inventory data. As these data were collected periodically (1987, 2002, 2008 and 2012), the time series on emissions shows several "jumps" due to biomass stock change, especially between 2001 and 2002 and between 2007 and 2008, while within the periods the emissions appear constant due to the application of periodic average emission factors. This does not reflect inter-annual variability in the time series, which would be expected since the drivers of carbon stock change fluctuate between years. Therefore additional data, which are available on an annual basis, should be introduced into the calculation of the emission inventories in order to obtain more plausible time series. This article explores the possibility of introducing an annual rather than periodic approach to calculating emission factors with the given data and thus smoothing the trajectory of the time series for emissions from forest biomass. Two approaches are introduced to estimate annual changes derived from periodic data: the so-called logging factor method and the growth factor method. The logging factor method incorporates annual logging data to project annual values from periodic values. This is less complex to implement than the growth factor method, which additionally adds growth data into the calculations. Calculation of the input variables is based on sound statistical methodologies and periodically collected data that cannot be altered. Thus a discontinuous trajectory of the emissions over time remains, even after the adjustments. It is intended to adopt this approach in the German greenhouse gas reporting in order to meet the request for annually adjusted values.
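
    One plausible reading of the logging factor method is sketched below, under the assumption that the periodic stock change is distributed over the years of a period in proportion to reported annual logging volumes; the paper's exact arithmetic may differ, and the numbers are illustrative.

```python
def annualize_by_logging(periodic_change, annual_logging):
    """Spread a periodic carbon stock change over years, weighted by logging.

    periodic_change -- total change over the inventory period
    annual_logging  -- logging volume reported for each year of the period
    """
    total = sum(annual_logging)
    return [periodic_change * v / total for v in annual_logging]

# Illustrative: a 5-year inventory period with fluctuating harvest volumes.
annual_changes = annualize_by_logging(-10.0, [1.8, 2.4, 2.1, 3.0, 2.7])
```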

  19. The modeler's influence on calculated solubilities for performance assessments at the Aespoe hard-rock laboratory

    International Nuclear Information System (INIS)

    Emren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.

    1999-01-01

    Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)₄, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem

  20. Small-scale variability in tropical tropopause layer humidity

    Science.gov (United States)

    Jensen, E. J.; Ueyama, R.; Pfister, L.; Karcher, B.; Podglajen, A.; Diskin, G. S.; DiGangi, J. P.; Thornberry, T. D.; Rollins, A. W.; Bui, T. V.; Woods, S.; Lawson, P.

    2016-12-01

    Recent advances in statistical parameterizations of cirrus cloud processes for use in global models are highlighting the need for information about small-scale fluctuations in upper tropospheric humidity and the physical processes that control the humidity variability. To address these issues, we have analyzed high-resolution airborne water vapor measurements obtained in the Airborne Tropical TRopopause EXperiment over the tropical Pacific between 14 and 20 km. Using accurate and precise 1-Hz water vapor measurements along approximately-level aircraft flight legs, we calculate structure functions spanning horizontal scales ranging from about 0.2 to 50 km, and we compare the water vapor variability in the lower (about 14 km) and upper (16-19 km) Tropical Tropopause Layer (TTL). We also compare the magnitudes and scales of variability inside TTL cirrus versus in clear-sky regions. The measurements show that in the upper TTL, water vapor concentration variance is stronger inside cirrus than in clear-sky regions. Using simulations of TTL cirrus formation, we show that small variability in clear-sky humidity is amplified by the strong sensitivity of ice nucleation rate to supersaturation, which results in highly-structured clouds that subsequently drive variability in the water vapor field. In the lower TTL, humidity variability is correlated with recent detrainment from deep convection. The structure functions indicate approximately power-law scaling with spectral slopes ranging from about -5/3 to -2.
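
    A structure-function calculation of the kind described can be sketched as follows; the synthetic series and lag choices below are placeholders for the 1-Hz water vapor data.

```python
import numpy as np

def structure_function(x, lags, order=2):
    """S_p(r) = <|x(i+r) - x(i)|^p> for a uniformly sampled 1-D series."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean(np.abs(x[r:] - x[:-r]) ** order) for r in lags])

# Synthetic stand-in for a 1-Hz water vapor record along a level flight leg.
rng = np.random.default_rng(0)
h2o = np.cumsum(rng.standard_normal(5000))
lags = np.unique(np.logspace(0, 3, 20).astype(int))    # ~1 s to ~1000 s
s2 = structure_function(h2o, lags)

# The log-log slope gives the scaling exponent; power spectra with slopes of
# -5/3 to -2 correspond to second-order structure-function exponents of
# roughly 2/3 to 1.
slope = np.polyfit(np.log(lags), np.log(s2), 1)[0]
```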

  1. Comparison of Two- and Three-Dimensional Methods for Analysis of Trunk Kinematic Variables in the Golf Swing.

    Science.gov (United States)

    Smith, Aimée C; Roberts, Jonathan R; Wallace, Eric S; Kong, Pui; Forrester, Stephanie E

    2016-02-01

    Two-dimensional methods have been used to compute trunk kinematic variables (flexion/extension, lateral bend, axial rotation) and X-factor (difference in axial rotation between trunk and pelvis) during the golf swing. Recent X-factor studies advocated three-dimensional (3D) analysis due to the errors associated with two-dimensional (2D) methods, but this has not been investigated for all trunk kinematic variables. The purpose of this study was to compare trunk kinematic variables and X-factor calculated by 2D and 3D methods to examine how different approaches influenced their profiles during the swing. Trunk kinematic variables and X-factor were calculated for golfers from vectors projected onto the global laboratory planes and from 3D segment angles. Trunk kinematic variable profiles were similar in shape; however, there were statistically significant differences in trunk flexion (-6.5 ± 3.6°) at top of backswing and trunk right-side lateral bend (8.7 ± 2.9°) at impact. Differences between 2D and 3D X-factor (approximately 16°) could largely be explained by projection errors introduced to the 2D analysis through flexion and lateral bend of the trunk and pelvis segments. The results support the need to use a 3D method for kinematic data calculation to accurately analyze the golf swing.

  2. Development of a model to calculate the economic implications of improving the indoor climate

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge

    on performance. The Bayesian Network uses a probabilistic approach by which a probability distribution can take this variation of the different indoor variables into account. The result from total building economy calculations indicated that depending on the indoor environmental change (improvement...

  3. Heat production in growing pigs calculated according to the RQ and CN methods

    DEFF Research Database (Denmark)

    Christensen, K; Chwalibog, André; Henckel, S

    1988-01-01

    1. Heat production, calculated according to the respiratory quotient method, HE(RQ), and the carbon-nitrogen balance method, HE(CN), was compared using the results from a total of 326 balance trials with 56 castrated male pigs fed different dietary compositions and variable feed levels during...

  4. SU-E-J-176: Characterization of Inter-Fraction Breast Variability and the Implications On Delivered Dose

    Energy Technology Data Exchange (ETDEWEB)

    Sudhoff, M; Lamba, M; Kumar, N; Ward, A; Elson, H [University of Cincinnati, Cincinnati, OH (United States)

    2015-06-15

    Purpose: To systematically characterize inter-fraction breast variability and determine its implications on delivered dose. Methods: Weekly port films were used to characterize breast setup variability. Five evenly spaced representative positions across the contour of each breast were chosen on the electronic port film in reference to the graticule, and window and level were set such that the skin surface of the breast was visible. Measurements from the skin surface to the treatment field edge were taken on each port film at each position and compared to the planning DRR, quantifying the variability. The systematic measurement technique was repeated for all port films for 20 recently treated breast cancer patients. Measured setup variability for each patient was modeled as a normal distribution. The model was randomly sampled and the samples applied as isocentric shifts in the treatment planning computer, representing setup variability for each fraction. Dose was calculated for each shifted fraction and summed to obtain DVHs and BEDs that modeled the dose with daily setup variability. Patients were categorized into relevant groupings chosen to investigate the rigor of immobilization types, treatment techniques, and inherent anatomical difficulties. Mean position differences and dosimetric differences were evaluated between planned and delivered doses. Results: The setup variability was found to follow a normal distribution, with mean position differences between the DRR and port films of −8.6 to 3.5 mm and a sigma range of 5.3–9.8 mm. Setup position was not found to be significantly different from zero. The mean seroma or whole breast PTV dosimetric difference, calculated as BED, ranged from −0.23 to +1.13 Gy. Conclusion: A systematic technique to quantify and model setup variability was used to calculate the dose in 20 breast cancer patients including variable setup. No statistically significant PTV or OAR BED differences were found between
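
    The sampling step described in the Methods can be illustrated with a toy sketch: a fitted normal setup model is sampled once per fraction and the shifted fraction doses are summed. The 1-D dose profile below is a stand-in, not the study's treatment planning calculation, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-patient setup model fitted from the port-film measurements.
mu_mm, sigma_mm, n_fractions = -2.0, 7.0, 25

# One isocentric shift per fraction, drawn from the fitted normal model.
shifts = rng.normal(mu_mm, sigma_mm, size=n_fractions)

# In the study each sampled shift is applied in the treatment planning system
# and the per-fraction doses are summed; a toy 1-D penumbra stands in here.
x = np.linspace(-100.0, 100.0, 201)                   # mm across the field

def fraction_dose(shift):
    return 2.0 / (1.0 + np.exp(-(50.0 - np.abs(x - shift))))  # ~2 Gy inside

delivered = sum(fraction_dose(s) for s in shifts)     # dose with setup error
planned = n_fractions * fraction_dose(0.0)            # dose without it
```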

  5. Breit–Pauli atomic structure calculations for Fe XI

    International Nuclear Information System (INIS)

    Aggarwal, Sunny; Singh, Jagjit; Mohan, Man

    2013-01-01

    Energy levels, oscillator strengths, and transition probabilities are calculated for the 165 lowest-lying energy levels of Fe XI using configuration-interaction wavefunctions. The calculations include all the major correlation effects. Relativistic effects are included in the Breit–Pauli approximation by adding mass-correction, Darwin, and spin–orbit interaction terms to the non-relativistic Hamiltonian. For comparison with the calculated ab initio energy levels, we have also calculated the energy levels by using the fully relativistic multiconfiguration Dirac–Fock method. The calculated results are in close agreement with the National Institute of Standards and Technology compilation and other available results. New results are predicted for many of the levels belonging to the 3s3p⁴3d and 3s3p³3d² configurations, which are very important in astrophysics, relevant, for example, to the recent observations by the Hinode spacecraft. We expect that our extensive calculations will be useful to experimentalists in identifying the fine structure levels in their future work

  6. Comparison of seasonal variability in European domestic radon measurements

    Directory of Open Access Journals (Sweden)

    C. J. Groves-Kirkby

    2010-03-01

    Full Text Available Analysis of published data characterising seasonal variability of domestic radon concentrations in Europe and elsewhere shows significant variability between different countries and between regions where regional data is available. Comparison is facilitated by application of the Gini Coefficient methodology to reported seasonal variation data. Overall, radon-rich sedimentary strata, particularly high-porosity limestones, exhibit high seasonal variation, while radon-rich igneous lithologies demonstrate relatively constant, but somewhat higher, radon concentrations. High-variability regions include the Pennines and South Downs in England, Languedoc and Brittany in France, and especially Switzerland. Low-variability high-radon regions include the granite-rich Cornwall/Devon peninsula in England, and Auvergne and Ardennes in France, all components of the Devonian-Carboniferous Hercynian belt.
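
    A minimal sketch of the Gini coefficient computation applied to seasonal data, assuming monthly mean concentrations as input; the monthly values below are illustrative, not from the paper.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values (0 = perfectly uniform)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    lorenz = np.cumsum(v) / v.sum()
    return float((n + 1 - 2 * lorenz.sum()) / n)

# Hypothetical monthly mean radon concentrations (Bq/m3) for one region.
monthly = [95, 88, 74, 60, 48, 40, 38, 45, 58, 72, 85, 92]
g = gini(monthly)           # larger g = stronger seasonal variation
```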

  7. Eddy current calculations for the tore supra tokamak

    International Nuclear Information System (INIS)

    Blum, J.; Dupas, L.; Leloup, C.; Thooris, B.

    1983-01-01

    This paper deals with the calculation of the eddy currents in the structures of a Tokamak, which can be treated as thin conductors, so that the three-dimensional problem can be reduced mathematically to a two-dimensional one, the variables being two orthogonal coordinates of the considered surface. A variational formulation of the problem in terms of the electric vector potential is then given, and a finite element method has been used, which makes it possible to treat the complicated geometry of the toroidal field magnet, the mechanical structures and the vacuum vessels of Tore Supra

  8. Data acquisition interface for calculating heat diffusion in certain electronic circuits; Interface d'acquisition des donnees permettant le calcul de la diffusion de la chaleur dans certains circuits electroniques

    Energy Technology Data Exchange (ETDEWEB)

    Spiesser, Ph.

    1996-05-01

    A user interface has been developed for geometrical and thermal data acquisition, in order to allow calculations of heat diffusion in certain types of electronic circuits, such as power hybrids and compact electronic modules, using computerized simulations. Data management, data structure and organization, the data acquisition interface program, and variables and sources are described

  9. Manual method for dose calculation in gynecologic brachytherapy

    International Nuclear Information System (INIS)

    Vianello, Elizabeth A.; Almeida, Carlos E. de; Biaggio, Maria F. de

    1998-01-01

    This paper describes a manual method for dose calculation in brachytherapy of gynecological tumors, which allows the calculation of the doses at any plane or point of clinical interest. This method uses basic principles of vector algebra and the simulating orthogonal films taken from the patient with the applicators and dummy sources in place. The results obtained with this method were compared with the values calculated by the treatment planning system (model Theraplan), and the agreement was better than 5% in most cases. The critical points associated with the final accuracy of the proposed method are related to the quality of the image and the appropriate selection of the magnification factors. This method is strongly recommended for radiation oncology centers where no treatment planning systems are available and dose calculations are done manually. (author)
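
    The vector-algebra idea can be sketched as follows, under a deliberately simplified inverse-square dose model with no attenuation or anisotropy corrections (a clinical calculation would use measured dose-rate tables); the coordinates and magnification factor below are hypothetical.

```python
import numpy as np

def point_dose(point, sources, strengths):
    """Sum inverse-square contributions of point sources (simplified physics:
    no tissue attenuation or source anisotropy, unlike a clinical method)."""
    point = np.asarray(point, dtype=float)
    dose = 0.0
    for pos, s in zip(sources, strengths):
        r = np.linalg.norm(point - np.asarray(pos, dtype=float))
        dose += s / r**2
    return dose

# Source coordinates digitized from orthogonal films, divided by the film
# magnification factor to recover true geometry (all values hypothetical).
mag = 1.15
sources = np.array([[10.0, 2.0, 1.0], [14.0, 2.5, 1.2]]) / mag
dose = point_dose([12.0, 40.0, 0.0], sources, strengths=[1.0, 1.0])
```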

  10. Batch calculations in CalcHEP

    International Nuclear Information System (INIS)

    Pukhov, A.

    2003-01-01

    CalcHEP is a clone of the CompHEP project, developed by the author outside of the CompHEP group. CompHEP/CalcHEP are packages for automatic calculation of elementary particle decay and collision properties in the lowest order of perturbation theory. The main idea behind the packages is to make the passage from the Lagrangian to the final distributions effective, with a high level of automation. Accordingly, the packages were created as menu-driven, user-friendly programs for calculations in the interactive mode. On the other hand, long calculations should be done in a non-interactive regime; thus, from the beginning CompHEP has had the problem of batch calculations. In CompHEP 33.23 the batch session was realized by means of an interactive menu which allows the user to formulate the task for the batch; after that, the non-interactive session was launched. This approach is too restricted and inflexible, and leads to duplication in programming. In this article I discuss another approach: how one can force an interactive program to work in non-interactive mode. This approach was realized in CalcHEP 2.1, available at http://theory.sinp.msu.ru/~pukhov/calchep.html

  11. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    Barber, Michael J.; Clark, John W.

    2014-01-01

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  12. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  13. Thermodynamic calculation of the Fe-Zn-Si system

    Energy Technology Data Exchange (ETDEWEB)

    Su Xuping [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Xiangtan 411105, Hunan (China)]. E-mail: sxping@xtu.edu.cn; Yin Fucheng [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Xiangtan 411105, Hunan (China); Li Zhi [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Xiangtan 411105, Hunan (China); Tang, N.-Y. [Teck Cominco Metals Ltd., Product Technology Centre, Mississauga, Ont., L5K 1B4 (Canada); Zhao Manxiu [Institute of Materials Research, School of Mechanical Engineering, Xiangtan University, Xiangtan 411105, Hunan (China)

    2005-06-21

    Silicon in steel significantly affects alloy growth kinetics in the coating in general galvanizing, thereby changing the coating microstructure from the usual stratified Fe-Zn alloy layers to a mass of ζ crystallites surrounded by liquid zinc. The Zn-Fe-Si phase diagram and the relevant thermodynamic information are of great importance for the galvanizing industry in developing remedies for this problem. In this work, the available information on the Fe-Zn-Si system, including all three binary systems, was reviewed and re-evaluated, and ternary parameters were extracted from the available experimental data. By assuming that all the binary intermetallic phases, with the exception of the δ, γ₁ and γ phases, have no ternary solubility, a thermodynamic calculation of the Fe-Zn-Si system was carried out, and the relevant isothermal and isopleth sections were calculated. Its applicability in the galvanizing industry was discussed. There is good agreement between the calculated and the experimentally determined phase boundaries.

  14. Selection of skin dose calculation methodologies

    International Nuclear Information System (INIS)

    Farrell, W.E.

    1987-01-01

    This paper reports that good health physics practice dictates that a dose assessment be performed for any significant skin contamination incident. There are, however, several methodologies that could be used, and while there is probably no single methodology that is proper for all cases of skin contamination, some are clearly more appropriate than others. This can be demonstrated by examining two of the more distinctly different options available for estimating skin dose: the calculational methods. The methods compiled by Healy require separate beta and gamma calculations. The beta calculational method is that derived by Loevinger, while the gamma dose is calculated from the equation for the dose rate from an infinite plane source with an absorber between the source and the detector. Healy has provided these formulas in graphical form to facilitate rapid dose rate determinations at density thicknesses of 7 and 20 mg/cm². These density thicknesses equate to the regulatory definition of the sensitive layer of the skin and a more arbitrary value to account for beta absorption in contaminated clothing

  15. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
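
    Although the paper's implementation is a single FORTRAN statement inside existing z-factor routines, the underlying definition, c_pr = 1/p_pr − (1/z)(∂z/∂p_pr), can be sketched with a numerical derivative wrapped around any z-factor routine; the z-factor stub below is only for testing, not a real correlation.

```python
def pseudo_reduced_compressibility(z_func, ppr, tpr, dp=1e-4):
    """c_pr = 1/p_pr - (1/z) * dz/dp_pr, with the derivative taken
    numerically around an existing z-factor routine z_func(p_pr, T_pr)."""
    z = z_func(ppr, tpr)
    dzdp = (z_func(ppr + dp, tpr) - z_func(ppr - dp, tpr)) / (2.0 * dp)
    return 1.0 / ppr - dzdp / z

# Any z-factor correlation can be plugged in; an ideal-gas stub for testing:
c = pseudo_reduced_compressibility(lambda p, t: 1.0, ppr=2.5, tpr=1.6)
# For z = 1 the result reduces to 1/p_pr = 0.4, as expected.
```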

  16. Improved Ground Hydrology Calculations for Global Climate Models (GCMs): Soil Water Movement and Evapotranspiration.

    Science.gov (United States)

    Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.

    1988-09-01

    A physically based ground hydrology model is developed to improve the land-surface sensible and latent heat calculations in global climate models (GCMs). The processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff are explicitly included in the model. The amount of detail in the hydrologic calculations is restricted to a level appropriate for use in a GCM, but each of the aforementioned processes is modeled on the basis of the underlying physical principles. Data from the Goddard Institute for Space Studies (GISS) GCM are used as inputs for off-line tests of the ground hydrology model in four 8° × 10° regions (Brazil, Sahel, Sahara, and India). Soil and vegetation input parameters are calculated as area-weighted means over the 8° × 10° gridbox. This compositing procedure is tested by comparing resulting hydrological quantities to ground hydrology model calculations performed on the 1° × 1° cells which comprise the 8° × 10° gridbox. Results show that the compositing procedure works well except in the Sahel, where lower soil water levels and a heterogeneous land surface produce more variability in hydrological quantities, indicating that a resolution better than 8° × 10° is needed for that region. Modeled annual and diurnal hydrological cycles compare well with observations for Brazil, where real-world data are available. The sensitivity of the ground hydrology model to several of its input parameters was tested; it was found to be most sensitive to the fraction of land covered by vegetation and least sensitive to the soil hydraulic conductivity and matric potential.

  17. Improvement of Thrust Bearing Calculation Considering the Convectional Heating within the Space between the Pads

    Directory of Open Access Journals (Sweden)

    Monika Chmielowiec-Jablczyk

    2018-02-01

    Full Text Available A modern thrust bearing tool is used to estimate the behavior of tilting pad thrust bearings not only in the oil film between pad and rotating collar, but also in the space between the pads. The oil flow in the space significantly influences the oil film inlet temperature and the heating of pad and collar. For that reason, it is necessary to define an oil mixing model for the space between the pads. In the bearing tool, the solutions of the Reynolds equation (including a cavitation model), the energy equation and the heat transfer equation are obtained iteratively with the finite volume method by considering a constant flow rate. Both effects, laminar/turbulent flow and centrifugal force, are considered. The calculation results are compared with measurements made on a flooded thrust bearing with eight tilting pads and an outer diameter of 180 mm. The heat convection coefficients for the pad surfaces mainly influence the pad temperature field and are adjusted to the measurement results. In the following paper, the calculation results for variable space distances, the influence of different parameters on the bearing behavior, and the operating condition at high load are presented.

  18. Variability of textural features in FDG PET images due to different acquisition modes and reconstruction parameters

    DEFF Research Database (Denmark)

    Galavis, P.E.; Hollensen, Christian; Jallow, N.

    2010-01-01

    Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes...... reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of the features was calculated with respect to the average value. Results. Fifty textural features were...... classified based on the range of variation in three categories: small, intermediate and large variability. Features with small variability (range 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small...

  19. Calculation of the residual bearing capacity of reinforced concrete beams by the rigidity (deflection) criterion

    OpenAIRE

    V.S. Utkin; S.A. Solovyov

    2015-01-01

    The article proposes a method of calculating the bearing capacity of reinforced concrete beams at the operational stage by the rigidity (deflection) criterion. The methods used in the article are integral tests and probabilistic methods for describing random variables. The author offers a new technique for calculating a deflection limit by a criterion of residual deformations. The article exemplifies the usage of the evidence theory for statistical information processing in the f...

  20. Eddy current calculations for the Tore Supra toroidal field magnet

    International Nuclear Information System (INIS)

    Blum, J.

    1983-01-01

    An outline is given of the calculation of the eddy currents in the magnetic structures of a Tokamak, which can be treated as thin conductors, so that the three-dimensional problem can be reduced mathematically to a two-dimensional one, the variables being two orthogonal coordinates of the considered surface. A finite element method has been used in order to treat the complicated geometry of the set of the 18 toroidal field coil casings and mechanical structures of Tore Supra. This eddy current code has been coupled with an axisymmetric equilibrium code in order to simulate typical phases of a Tokamak discharge (plasma current rise, additional heating, disruption, cleaning discharge), and the losses in the toroidal field magnet have thus been calculated. (author)

  1. Cardiovascular Reactivity: its Association with Physical Activity, and Some Hemodynamic and Anthropometric Variables

    Directory of Open Access Journals (Sweden)

    Milagros Lisset León Regal

    2016-09-01

    Full Text Available Background: several studies show the influence of physical activity as a protective factor for the cardiovascular system. New evidence corroborating this is needed to support the prevention of cardiovascular disease. Objective: to determine the relationship between cardiovascular hyperreactivity, physical activity and some hemodynamic and anthropometric variables in normotensive individuals. Methods: a descriptive, correlational, cross-sectional study was conducted. The universe of the study consisted of the population between 15 and 74 years of age of the municipality of Cienfuegos in 2010; the sample was 644. The variables considered were: sex, skin colour, age, height, weight, body mass index, abdominal waist, blood pressures (systolic, diastolic, mean and differential, at baseline and under the sustained weight test), and physical activity. Pearson's chi-square test was calculated, and the t test was applied for comparison of means of independent samples, with a significance level of p = 0.05. Prevalence ratios were determined with a confidence interval of 95%. Results: the prevalence of cardiovascular hyperreactivity was higher in the group aged 65-74 years and in males. Cardiovascular hyperreactive individuals showed higher average values of the hemodynamic variables studied than normoreactive ones. There is an association between physical activity and a better cardiovascular response in normal-weight individuals. Conclusions: there is an association between increased blood pressure and obesity in cardiovascular hyperreactivity. Physical activity is associated with cardiovascular normoreactivity in normal-weight individuals.

  2. Code system BCG for gamma-ray skyshine calculation

    International Nuclear Information System (INIS)

    Ryufuku, Hiroshi; Numakunai, Takao; Miyasaka, Shun-ichi; Minami, Kazuyoshi.

    1979-03-01

    A code system, BCG, has been developed for conveniently and efficiently calculating gamma-ray skyshine doses using the transport calculation codes ANISN and DOT and the point-kernel calculation codes G-33 and SPAN. To simplify the input to the system, the input forms for these codes are unified, twelve geometric patterns are introduced to define material regions, and standard data are available as a library. To treat complex arrangements of source and shield, it is further possible to use the codes successively, such that the results from one code may be used as input data to the same or another code. (author)

  3. Antibiotic Therapy for Acute Infiltrate-Complicated Calculous Cholecystitis

    Directory of Open Access Journals (Sweden)

    Yu. A. Nesterenko

    2007-01-01

    Full Text Available Objective: to summarize the results of treatment in 442 patients of various ages with acute calculous cholecystitis complicated by a compact perivesical infiltrate. Materials and methods. Bile from all the patients was studied bacteriologically. The role of various antibiotics in limiting perivesical fat inflammation was determined. Results. The importance of decompressive treatments for complicated calculous cholecystitis has been ascertained. The advantages of microcholecystostomy have been revealed. There is evidence that it is expedient to use third- and fourth-generation cephalosporins, fluoroquinolones, and dioxidine in the combined treatment of destructive calculous cholecystitis complicated by an infiltrate.

  4. Calculation of ionization within the close-coupling formalism

    International Nuclear Information System (INIS)

    Bray, I.; Fursa, D.V.

    1996-05-01

    A method for calculation of differential ionization cross sections from theories that use the close-coupling expansions for the total wave functions is presented. It is shown how from a single such calculation elastic, excitation, and ionization cross sections may be extracted using solely the T-matrix elements arising from solution of the coupled equations. To demonstrate the applicability of this formalism, the convergent close-coupling (CCC) theory is systematically applied at incident energies of 150-600 eV to the calculation of e-He ionization. Comparison with available measurements is generally very good. 50 refs., 17 figs

  5. Modeling Turbulent Combustion for Variable Prandtl and Schmidt Number

    Science.gov (United States)

    Hassan, H. A.

    2004-01-01

    This report consists of two abstracts submitted for possible presentation at the AIAA Aerospace Science Meeting to be held in January 2005. Since the submittal of these abstracts, we have continued refining the model coefficients derived for the case of a variable turbulent Prandtl number. The test cases being investigated are a Mach 9.2 flow over a degree ramp and a Mach 8.2 3-D calculation of crossing shocks. We have developed an axisymmetric code for treating axisymmetric flows. In addition, the variable Schmidt number formulation was incorporated in the code, and we are in the process of determining the model constants.

  6. Improved variable reduction in partial least squares modelling based on predictive-property-ranked variables and adaptation of partial least squares complexity.

    Science.gov (United States)

    Andries, Jan P M; Vander Heyden, Yvan; Buydens, Lutgarde M C

    2011-10-31

    The calibration performance of partial least squares for one response variable (PLS1) can be improved by elimination of uninformative variables. Many methods are based on so-called predictive variable properties, which are functions of various PLS-model parameters and which may change during the variable reduction process. In these methods variable reduction is performed on the variables ranked in descending order for a given variable property. The methods start with full-spectrum modelling. Iteratively, until a specified number of remaining variables is reached, the variable with the smallest property value is eliminated; a new PLS model is calculated, followed by a renewed ranking of the variables. The Stepwise Variable Reduction methods using Predictive-Property-Ranked Variables are denoted as SVR-PPRV. In the existing SVR-PPRV methods the PLS model complexity is kept constant during the variable reduction process. In this study, three new SVR-PPRV methods are proposed, in which a possibility for decreasing the PLS model complexity during the variable reduction process is built in. We therefore denote our methods as PPRVR-CAM methods (Predictive-Property-Ranked Variable Reduction with Complexity Adapted Models). The selective and predictive abilities of the new methods are investigated and tested, using the absolute PLS regression coefficients as the predictive property. They were compared with two modifications of existing SVR-PPRV methods (with constant PLS model complexity) and with two reference methods: uninformative variable elimination followed by either a genetic algorithm for PLS (UVE-GA-PLS) or an interval PLS (UVE-iPLS). The performance of the methods is investigated in conjunction with two data sets from near-infrared sources (NIR) and one simulated set. The selective and predictive performances of the variable reduction methods are compared statistically using the Wilcoxon signed rank test. The three newly developed PPRVR-CAM methods were able to retain
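
    A minimal sketch of the predictive-property-ranked elimination loop, using the absolute PLS regression coefficients as the property and keeping the model complexity constant (the complexity-adaptation step of the proposed PPRVR-CAM methods is omitted here); the data are simulated, not from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pprv_reduction(X, y, n_keep, n_components=2):
    """Iteratively drop the variable with the smallest |PLS regression
    coefficient|, refitting the PLS1 model after each elimination."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        pls = PLSRegression(n_components=min(n_components, len(keep)))
        pls.fit(X[:, keep], y)
        coef = np.abs(pls.coef_).ravel()
        keep.pop(int(np.argmin(coef)))     # eliminate the least predictive
    return keep

# Simulated data: only the first 5 of 50 variables carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)
selected = pprv_reduction(X, y, n_keep=10)
```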

  7. Interannual variability of mass transport in the Canary region from LADCP data

    Science.gov (United States)

    Comas-Rodríguez, Isis; Hernández-Guerra, Alonso; Vélez-Belchí, Pedro; Fraile-Nuez, Eugenio

    2010-05-01

    The variability of the Canary Current is a widely studied topic regarding its role as the eastern boundary of the North Atlantic Subtropical Gyre. The Canary region indeed provides an interesting study area for estimating the variability scales of the Subtropical Gyre as well as water mass dynamics. RAPROCAN (RAdial PROfunda de CANarias - Canary deep hydrographic section) is a project that pursues these goals by obtaining hydrographic measurements during cruises taking place approximately along 29°N, to the north of the Canary Archipelago, twice a year since 2006. The full-depth sampling carried out allows the study of temperature and salinity distributions and the calculation of mass transports across the section. The transport estimates are compared to those obtained from previous measurements and estimates in the region; transports and their variability through the last decade are thereby quantified. The most significant advance over previous works is the use of LADCP (Lowered Acoustic Doppler Current Profiler) data to inform the initial geostrophic calculations. Corrections are applied to each geostrophic profile considering the reference velocity obtained from LADCP data. ADCP-referenced transport estimates are obtained, providing a successful comparison between the velocity fields obtained from the hydrographic measurements. While this work shows the interannual variability observed in winter since 1997, preliminary results confirm previous hypotheses about the magnitude of the Canary Current. The results including LADCP data also provide new aspects of the circulation distribution across the Canary Archipelago. Moored current meter data were also taken into account in the close-up study of the Current through the Lanzarote Passage. Interesting conclusions were drawn that confirm the usefulness of LADCP data in referencing geostrophic calculations, while corroborating the results obtained through this methodology. Hence

  8. APBSmem: a graphical interface for electrostatic calculations at the membrane.

    Directory of Open Access Journals (Sweden)

    Keith M Callenberg

    2010-09-01

    Full Text Available Electrostatic forces are one of the primary determinants of molecular interactions. They help guide the folding of proteins, increase the binding of one protein to another and facilitate protein-DNA and protein-ligand binding. A popular method for computing the electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation, and there are several easy-to-use software packages available that solve the PB equation for soluble proteins. Here we present a freely available program, called APBSmem, for carrying out these calculations in the presence of a membrane. The Adaptive Poisson-Boltzmann Solver (APBS) is used as a back-end for solving the PB equation, and a Java-based graphical user interface (GUI) coordinates a set of routines that introduce the influence of the membrane, determine its placement relative to the protein, and set the membrane potential. The software Jmol is embedded in the GUI to visualize the protein inserted in the membrane before the calculation and the electrostatic potential after completing the computation. We expect that the ease with which the GUI allows one to carry out these calculations will make this software a useful resource for experimenters and computational researchers alike. Three examples of membrane protein electrostatic calculations are carried out to illustrate how to use APBSmem and to highlight the different quantities of interest that can be calculated.

  9. Statistical Dependence of Pipe Breaks on Explanatory Variables

    Directory of Open Access Journals (Sweden)

    Patricia Gómez-Martínez

    2017-02-01

    Full Text Available Aging infrastructure is the main challenge currently faced by water suppliers. Estimation of asset lifetimes requires reliable criteria to plan asset repair and renewal strategies. To do so, pipe break prediction is one of the most important inputs. This paper analyzes the statistical dependence of pipe breaks on explanatory variables, determining their optimal combination and quantifying their influence on failure prediction accuracy. A large set of registered data from the Madrid water supply network, managed by Canal de Isabel II, has been filtered, classified and studied. Several statistical Bayesian models have been built and validated from the available information with a technique that combines reference periods of time as well as geographical location. Statistical models of increasing complexity are built from zero up to five explanatory variables following two approaches: a set of independent variables or a combination of two joint variables plus an additional number of independent variables. With the aim of finding the variable combination that provides the most accurate prediction, models are compared following an objective validation procedure based on the model's skill in predicting the number of pipe breaks in a large set of geographical locations. As expected, model performance improves as the number of explanatory variables increases. However, the rate of improvement is not constant. Performance metrics improve significantly up to three variables, but the tendency is softened for higher-order models, especially in trunk mains, where performance is reduced. Slight differences are found between trunk mains and distribution lines when selecting the most influential variables and models.

  10. What do we know about variability?

    Directory of Open Access Journals (Sweden)

    Sergey G Inge-Vechtomov

    2010-12-01

    Full Text Available The contemporary phenomenological classification of variability types faces many contradictions. There is a single group of "mutations": gene, chromosomal and genomic ones, which originate through different mechanisms. Ontogenetic variability raises even more questions because it embraces modifications (regulation of gene expression), genetic variations (mutations and recombination) and epigenetic variations (and inheritance in addition), with no clear criteria for defining the latter so far. Modifications and heritable variations appear to be closer to each other than we suspected before. An alternative classification of variability may be proposed based upon the template principle in biology. There is no direct correspondence between the mechanisms and the phenomenology of variation. This is a sign of a new paradigm coming in the understanding of biological variability.

  11. The POINT-AGAPE Survey I: The Variable Stars in M31

    CERN Document Server

    An Jun Hong; Hewett, P C; Baillon, Paul; Calchi-Novati, S; Carr, B J; Creze, M; Giraud-Héraud, Yannick; Gould, A; Jetzer, P; Kaplan, J; Kerins, E; Paulin-Henriksson, S; Smartt, S J; Stalin, C S; Tsapras, Y; An, Jin H.; Jetzer, Ph.

    2004-01-01

    The POINT-AGAPE collaboration has been monitoring M31 for three seasons with the Wide Field Camera on the Isaac Newton Telescope. In each season, data are taken for one hour per night for roughly sixty nights during the six months that M31 is visible. The two fields of view straddle the central bulge, northwards and southwards. We have calculated the locations, periods and amplitudes of 35414 variable stars in M31 as a by-product of our microlensing search. The variables are classified according to their period and amplitude of variation into population I and II Cepheids, Miras and semi-regular long-period variables. The population I Cepheids are associated with the spiral arms, while the central concentration of the Miras and long-period variables varies noticeably, the stars with brighter (and shorter) variations being much more centrally concentrated. A crucial role in the microlensing experiment is played by the asymmetry signal. It was initially assumed that the variable stars would ...

  12. The modulation of EEG variability between internally- and externally-driven cognitive states varies with maturation and task performance.

    Directory of Open Access Journals (Sweden)

    Jessie M H Szostakiwskyj

    Full Text Available Increasing evidence suggests that brain signal variability is an important measure of brain function reflecting information processing capacity and functional integrity. In this study, we examined how maturation from childhood to adulthood affects the magnitude and spatial extent of state-to-state transitions in brain signal variability, and how this relates to cognitive performance. We looked at variability changes between resting-state and task (a symbol-matching task with three levels of difficulty), and within trial (fixation, post-stimulus, and post-response). We calculated variability with multiscale entropy (MSE), and additionally examined spectral power density (SPD) from electroencephalography (EEG) in children aged 8-14 and in adults aged 18-33. Our results suggest that maturation is characterized by increased local information processing (higher MSE at fine temporal scales) and decreased long-range interactions with other neural populations (lower MSE at coarse temporal scales). Children show MSE changes that are similar in magnitude, but greater in spatial extent, when transitioning between internally- and externally-driven brain states. Additionally, we found that in children, greater changes in task difficulty were associated with greater magnitude of modulation in MSE. Our results suggest that the interplay between maturational and state-to-state changes in brain signal variability manifests across different spatial and temporal scales, and influences information processing capacity in the brain.
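
    Multiscale entropy as used here can be sketched briefly: coarse-grain the signal at each scale, then compute sample entropy of each coarse-grained series. The parameter choices below (m = 2, r = 0.15) are common defaults, not necessarily those of the study, and the input is synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy of series x (embedding m, tolerance r * std)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def pair_count(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev dist.
        return ((d <= tol).sum() - len(t)) / 2.0   # exclude self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales):
    """Coarse-grain the signal at each scale, then take sample entropy."""
    x = np.asarray(x, dtype=float)
    curve = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        curve.append(sample_entropy(coarse))
    return curve

# Fine scales index local processing; coarse scales, long-range interactions.
rng = np.random.default_rng(0)
mse = multiscale_entropy(rng.standard_normal(1000), scales=[1, 2, 5, 10])
```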

  13. Variable identification in group method of data handling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia, Guarulhos, SP (Brazil)

    2011-07-01

    The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled by using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables to be used to train an ANN, resulting in a best monitoring variable estimate. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the real input variables used in the Monitoring and Diagnosis System were not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)

  14. Variable identification in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2011-01-01

    The Group Method of Data Handling (GMDH) is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has traditionally been determined using a layer-by-layer pruning process based on a preselected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled by using an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on GMDH and Artificial Neural Network (ANN) methodologies, and applied to the IPEN research reactor IEA-R1. The GMDH was used to study the best set of variables to be used to train an ANN, resulting in a best monitoring variable estimate. The system performs the monitoring by comparing these estimated values with measured ones. The IPEN reactor data acquisition system is composed of 58 variables (process and nuclear variables). As GMDH is a self-organizing methodology, the choice of input variables is made automatically, and the real input variables used in the Monitoring and Diagnosis System were not shown in the final result. This work presents a study of variable identification in the GMDH methodology by means of an algorithm that works in parallel with the GMDH algorithm and traces the paths of the initial variables, resulting in an identification of the variables that compose the best Monitoring and Diagnosis Model. (author)

  15. Variability of femoral muscle attachments.

    Science.gov (United States)

    Duda, G N; Brand, D; Freitag, S; Lierse, W; Schneider, E

    1996-09-01

    Analytical and experimental models of the musculoskeletal system often assume single values rather than ranges for anatomical input parameters. The hypothesis of the present study was that anatomical variability significantly influences the results of biomechanical analyses, specifically regarding the moment arms of the various thigh muscles. Insertions and origins of muscles crossing or attaching to the femur were digitized in six specimens. Muscle volumes were measured; muscle attachment area and centroid location were computed. To demonstrate the influence of inter-individual anatomical variability on a mechanical modeling parameter, the corresponding range of muscle moment arms was calculated. Standard deviations, as a percentage of the mean, were about 70% for attachment area and 80% for muscle volume and attachment centroid location. The resulting moment arms of the m. gluteus maximus and m. rectus femoris were especially sensitive to anatomical variations (SD 65%). The results indicate that sensitivity to anatomical variations should be analyzed in any investigation simulating musculoskeletal interactions. To avoid misinterpretations, investigators should consider using several anatomical configurations rather than relying on a mean data set.

  16. ELECTRIC MOTORS MAINTENANCE PLANNING FROM ITS OPERATING VARIABLES

    Directory of Open Access Journals (Sweden)

    Francisco RODRIGUES

    2017-07-01

    Full Text Available Maintenance planning corresponds to an approach that seeks to maximize the availability of equipment and, consequently, increase the competitiveness of companies by increasing production times. This paper presents a maintenance planning approach based on operating variables (number of hours worked, duty cycles, number of revolutions) to maximize the operational availability of electric motors. The operating variables are read and sampled on predetermined sampling cycles, and the data are subsequently analyzed with time-series algorithms with the aim of launching work orders before the variables reach their limit values. This approach is supported by tools and technologies such as logical applications that provide a graphical user interface for access to relevant information about the physical asset, an HMI (Human Machine Interface), including control and supervision through SCADA (Supervisory Control And Data Acquisition), and the communication protocols among the different logical applications.

  17. Impact analysis of flow variability in sizing kanbans

    Directory of Open Access Journals (Sweden)

    Isaac Pergher

    2014-02-01

    Full Text Available The aim of this paper is to analyze the effects of flow variability, as advocated by Factory Physics, on the sizing of Kanban production systems. Flow variability presupposes that the variability of the activities performed by a process is dissipated throughout the productive flow of the system, causing variations in lead time, work-in-process levels and equipment availability, among others. To conduct the research, we created a didactic model based on discrete event computer simulation. The proposed model aims to present the possible impacts caused by flow variability in a production system with regard to the sizing of the number of Kanban cards, using the results supplied by two different investigated scenarios. The main results of the research allow concluding that, comparing the two scenarios developed in the model, the presence of variability in the production system caused an average increase of 32% in the number of Kanban cards (p = 0.000). This implies that, in real production systems, the study of Kanban sizing should consider the variability of individual operations, a fact often relegated to an assumption in the classical literature on defining the number of Kanbans, thus providing opportunities for the development of future research.
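
    The classical sizing rule the paper tests against can be sketched as follows; the demand, lead time and container numbers are illustrative, and the safety factor alpha is the knob that must absorb flow variability (one plausible reading of the ~32% card increase the simulation quantifies).

```python
import math

def kanban_cards(demand_rate, lead_time, container_size, safety_factor):
    """Classical sizing rule: N = D * L * (1 + alpha) / C, rounded up."""
    return math.ceil(demand_rate * lead_time * (1 + safety_factor)
                     / container_size)

# Illustrative numbers: higher flow variability forces a larger alpha.
low_var  = kanban_cards(demand_rate=100, lead_time=0.5, container_size=10,
                        safety_factor=0.10)
high_var = kanban_cards(demand_rate=100, lead_time=0.5, container_size=10,
                        safety_factor=0.45)
```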

  18. Evaluation of variability in high-resolution protein structures by global distance scoring

    Directory of Open Access Journals (Sweden)

    Risa Anzai

    2018-01-01

    Full Text Available Systematic analysis of the statistical and dynamical properties of proteins is critical to understanding cellular events. Extraction of biologically relevant information from a set of high-resolution structures is important because it can provide mechanistic details behind the functional properties of protein families, enabling rational comparison between families. Most current structural comparisons are pairwise-based, which hampers the global analysis of the increasing contents of the Protein Data Bank. Additionally, pairing of protein structures introduces uncertainty with respect to reproducibility because it frequently involves additional settings for superimposition. This study introduces intramolecular distance scoring for the global analysis of proteins, for each of which at least several high-resolution structures are available. As a pilot study, we tested 300 human proteins and showed that the method can be used comprehensively to overview advances in each protein and protein family at the atomic level. This method, together with the interpretation of the model calculations, provides new criteria for understanding specific structural variation in a protein, enabling global comparison of the variability in proteins from different species.
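
    A superposition-free, distance-based variability score of the general kind described can be sketched as follows; this illustrates the idea, not the paper's exact scoring function, and the input coordinates are synthetic.

```python
import numpy as np

def distance_matrix(coords):
    """Pairwise intramolecular (e.g. C-alpha) distance matrix."""
    c = np.asarray(coords, dtype=float)
    diff = c[:, None, :] - c[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def global_distance_variability(structures):
    """Std of each intramolecular distance across structures, averaged.
    Superposition-free, so no pairing or fitting choices are introduced."""
    mats = np.array([distance_matrix(s) for s in structures])
    return float(mats.std(axis=0).mean())

# Hypothetical input: three models of the same protein (n_atoms x 3 arrays).
rng = np.random.default_rng(3)
base = rng.standard_normal((50, 3)) * 10.0
models = [base + 0.2 * rng.standard_normal((50, 3)) for _ in range(3)]
score = global_distance_variability(models)
```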

  19. Scale-dependent spatial variability in peatland lead pollution in the southern Pennines, UK

    International Nuclear Information System (INIS)

    Rothwell, James J.; Evans, Martin G.; Lindsay, John B.; Allott, Timothy E.H.

    2007-01-01

    Increasingly, within-site and regional comparisons of peatland lead pollution have been undertaken using the inventory approach. The peatlands of the Peak District, southern Pennines, UK, have received significant atmospheric inputs of lead over the last few hundred years. A multi-core study at three peatland sites in the Peak District demonstrates significant within-site spatial variability in industrial lead pollution. Stochastic simulations reveal that 15 peat cores are required to calculate reliable lead inventories at the within-site and within-region scale for this highly polluted area of the southern Pennines. Within-site variability in lead pollution is dominant at the within-region scale. The study demonstrates that significant errors may be associated with peatland lead inventories at sites where only a single peat core has been used to calculate an inventory. Meaningful comparisons of lead inventories at the regional or global scale can only be made if the within-site variability of lead pollution has been quantified reliably. - Multiple peat cores are required for accurate peatland Pb inventories

  20. Use of digital applications in the medicament calculation education for nursing

    Directory of Open Access Journals (Sweden)

    Francisco Gilberto Fernandes Pereira

    Full Text Available Objective. To evaluate the influence of the use of digital applications on medicament calculation education for nursing students. Methods. An experimental study was conducted with a sample of 100 nursing students, who were randomly divided into two groups: study (use of the Calculation of Medicines - CalcMed - application, available free on the Internet; n=50) and control (conventional method using a calculator and prior mathematical skills; n=50). Both groups were assessed before and after application of the teaching strategy through a test with ten specific medicament-calculation questions. Results. The study group showed a mean score of 8.14 versus an average of 5.02 in the control group. The average test execution time was shorter in the study group than in the control group (15.7 versus 38.9 minutes). Conclusion. The strategy of using this application positively influences learning and enables greater safety in performing medicament calculations.

  1. Frequency spectrum analysis of finger photoplethysmographic waveform variability during haemodialysis.

    Science.gov (United States)

    Javed, Faizan; Middleton, Paul M; Malouf, Philip; Chan, Gregory S H; Savkin, Andrey V; Lovell, Nigel H; Steel, Elizabeth; Mackie, James

    2010-09-01

    This study investigates the peripheral circulatory and autonomic response to volume withdrawal in haemodialysis based on spectral analysis of photoplethysmographic waveform variability (PPGV). Frequency spectrum analysis was performed on the baseline and pulse amplitude variabilities of the finger infrared photoplethysmographic (PPG) waveform and on heart rate variability extracted from the ECG signal collected from 18 kidney failure patients undergoing haemodialysis. Spectral powers were calculated from the low frequency (LF, 0.04-0.145 Hz) and high frequency (HF, 0.145-0.45 Hz) bands. In eight stable fluid-overloaded patients (fluid removal of >2 L) not on alpha blockers, progressive reduction in relative blood volume during haemodialysis resulted in a significant increase in LF and HF powers of PPG baseline and amplitude variability (P < 0.05). Spectral analysis of finger PPGV may provide valuable information on the autonomic vascular response to blood volume reduction in haemodialysis, and can potentially be utilized as a non-invasive tool for assessing peripheral circulatory control during routine dialysis procedures.
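
    Band powers over the LF and HF ranges quoted above can be estimated with Welch's method. A minimal sketch follows, using a synthetic 4 Hz variability signal; the sampling rate and signal are invented for illustration.

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_powers(x, fs, bands={"LF": (0.04, 0.145), "HF": (0.145, 0.45)}):
        """Estimate spectral power of a beat-to-beat variability signal
        in the LF and HF bands via Welch's periodogram."""
        f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
        return {name: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
                for name, (lo, hi) in bands.items()}

    # Illustrative: a 4 Hz-resampled amplitude series with a 0.1 Hz component
    fs = 4.0
    t = np.arange(0, 300, 1 / fs)
    x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.random.randn(t.size)
    print(band_powers(x, fs))  # most power should land in the LF band
    ```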

  2. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    KAERI is performing research to calculate coefficients of decommissioning work-unit productivity, in order to estimate the time and cost of decommissioning work on the basis of the decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on data organized in a work breakdown structure (WBS) of codes derived from the KRR-2 decommissioning activity experience data; the defined WBS codes are used by each system to calculate decommissioning costs. In this paper, we develop a program that can calculate decommissioning costs using the decommissioning experience of KRR-2, UCP, and other countries, through the mapping of similar target facilities between NPPs and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method and the mapping method for decommissioning target facilities within the calculating program for decommissioning work productivity. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining decommissioning costs is particularly difficult for facilities such as NPPs because many variables exist, such as the material, size, and radiological conditions of the target facility.

  4. Assessing the Impact of Climate Variability on Cropland Productivity in the Canadian Prairies Using Time Series MODIS FAPAR

    Directory of Open Access Journals (Sweden)

    Taifeng Dong

    2016-03-01

    Full Text Available Cropland productivity is impacted by climate. Knowledge of the spatial-temporal patterns of these impacts at the regional scale is extremely important for improving crop management under limiting climatic factors. The aim of this study was to investigate the effects of climate variability on cropland productivity in the Canadian Prairies between 2000 and 2013, based on time series of the MODIS (Moderate Resolution Imaging Spectroradiometer) FAPAR (Fraction of Absorbed Photosynthetically Active Radiation) product. Key phenological metrics, including the start (SOS) and end of growing season (EOS), and the cumulative FAPAR (CFAPAR) during the growing season (between SOS and EOS), were extracted and calculated from the FAPAR time series with the Parametric Double Hyperbolic Tangent (PDHT) method. The Mann-Kendall test was employed to assess the trends of cropland productivity and climatic variables, and partial correlation analysis was conducted to explore the potential links between climate variability and cropland productivity. An assessment using crop yield statistics showed that CFAPAR can be taken as a surrogate of cropland productivity in the Canadian Prairies. Cropland productivity showed an increasing trend in most areas of the Canadian Prairies during the period from 2000 to 2013. Interannual variability in cropland productivity on the Canadian Prairies was influenced positively by rainfall variation and negatively by mean air temperature.
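
    The Mann-Kendall test used for the trend assessment is easy to implement. A compact sketch without tie correction follows; the CFAPAR values are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    def mann_kendall(y):
        """Nonparametric Mann-Kendall trend test (no tie correction).
        Returns the z statistic and two-sided p-value."""
        y = np.asarray(y)
        n = len(y)
        s = sum(np.sign(y[j] - y[i])
                for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return z, 2 * (1 - norm.cdf(abs(z)))

    # Illustrative: 14 years of cumulative FAPAR for one pixel
    cfapar = [21.0, 22.1, 21.8, 23.0, 23.5, 22.9, 24.1, 24.8,
              24.2, 25.3, 25.9, 25.1, 26.4, 27.0]
    print(mann_kendall(cfapar))  # positive z -> increasing trend
    ```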

  5. Numericware i: Identical by State Matrix Calculator

    Directory of Open Access Journals (Sweden)

    Bongsong Kim

    2017-02-01

    Full Text Available We introduce software, Numericware i, to compute the identical by state (IBS) matrix from genotypic data. Calculating an IBS matrix with a large dataset requires large computer memory and lengthy processing time. Numericware i addresses these challenges with two algorithmic methods: multithreading and forward chopping. Multithreading allows computational routines to run concurrently on multiple central processing unit (CPU) processors. Forward chopping addresses memory limitations by dividing a dataset into appropriately sized subsets. Numericware i allows calculation of the IBS matrix for a large genotypic dataset using a laptop or a desktop computer. For comparison with different software, we calculated genetic relationship matrices using Numericware i, SPAGeDi, and TASSEL with the same genotypic dataset. Numericware i calculates IBS coefficients between 0 and 2, whereas SPAGeDi and TASSEL produce different ranges of values, including negative values. The Pearson correlation coefficient between the matrices from Numericware i and TASSEL was high at .9972, whereas SPAGeDi showed low correlation with Numericware i (.0505) and TASSEL (.0587). With a high-dimensional dataset of 500 entities by 10,000,000 SNPs, Numericware i spent 382 minutes using 19 CPU threads and 64 GB memory by dividing the dataset into 3 pieces, whereas SPAGeDi and TASSEL failed with the same dataset. Numericware i is freely available for Windows and Linux under the CC-BY 4.0 license at https://figshare.com/s/f100f33a8857131eb2db.
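
    An IBS coefficient in the 0-2 range, as Numericware i reports, can be computed from minor-allele-count genotypes as below. This is a minimal sketch, not the package's code; its forward chopping would correspond to splitting the SNP columns of G into chunks.

    ```python
    import numpy as np

    def ibs_matrix(G):
        """IBS matrix for genotypes coded 0/1/2 (minor-allele counts).
        Pairwise IBS = mean over loci of (2 - |g_i - g_j|), in [0, 2]."""
        n = G.shape[0]
        M = np.empty((n, n))
        for i in range(n):
            # vectorized over all individuals against individual i
            M[i] = (2.0 - np.abs(G - G[i])).mean(axis=1)
        return M

    # Illustrative: 4 individuals x 6 SNPs
    G = np.array([[0, 1, 2, 0, 1, 2],
                  [0, 1, 2, 0, 1, 2],
                  [2, 1, 0, 2, 1, 0],
                  [1, 1, 1, 1, 1, 1]])
    print(ibs_matrix(G))  # diagonal is 2.0; identical rows score 2.0
    ```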

  6. Components of genetic variability of ear length of silage maize

    Directory of Open Access Journals (Sweden)

    Sečanski Mile

    2006-01-01

    Full Text Available The objective of this study was to evaluate the following parameters of ear length in silage maize: variability of inbred lines and their diallel hybrids, superior-parent heterosis, and the genetic components of variability and heritability on the basis of a diallel set. The analysis of genetic variance shows that the additive component (D) was lower than the dominant genetic variances (H1 and H2), while the frequency of dominant genes (u) for this trait was greater than the frequency of recessive genes (v). This is also confirmed by the dominant-to-recessive gene ratio in the parental inbreds for ear length (Kd/Kr > 1), which was greater than unity in both investigation years. The calculated value of the average degree of dominance, √(H1/D), is greater than unity, pointing to superdominance in the inheritance of this trait in both years of investigation, which is also confirmed by the results of Vr/Wr regression analysis of the inheritance of ear length. As the presence of non-allelic interaction was established, it is necessary to study the effects of epistasis, as it can have greater significance in certain hybrids. A greater value of dominant than additive variance resulted in high broad-sense heritability for ear length in both investigation years.

  7. Fine particle water and pH in the Eastern Mediterranean: Sources, variability and implications for nutrients availability

    Science.gov (United States)

    Bougiatioti, Aikaterini; Nikolaou, Panayiota; Stavroulas, Iasonas; Kouvarakis, Giorgos; Nenes, Athanasios; Weber, Rodney; Kanakidou, Maria; Mihalopoulos, Nikolaos

    2016-04-01

    total calculated water. Particle pH is also calculated with the help of ISORROPIA-II, and during the studied period, values varied from 0.5 to 2.8, indicating that the aerosol was highly acidic. pH values were also studied depending on the source/origin of the sampled air masses and biomass burning aerosol was found to exhibit the highest values of PM1 pH and the lowest values in total water mass concentrations. The two natural sources, namely mineral and marine origin, contained the largest amounts of total submicron water and the lowest contribution of organic water, as expected. The low pH values estimated for the studied period in the submicron mode and independently of the air masses' origin could potentially have important implications for nutrient availability, especially for phosphorus solubility, which is the nutrient limiting sea water productivity of the Eastern Mediterranean.

  8. An oilspill trajectory analysis model with a variable wind deflection angle

    Science.gov (United States)

    Samuels, W.B.; Huang, N.E.; Amstutz, D.E.

    1982-01-01

    The oilspill trajectory movement algorithm consists of a vector sum of the surface drift component due to wind and the surface current component. In the U.S. Geological Survey oilspill trajectory analysis model, the surface drift component is assumed to be 3.5% of the wind speed and is rotated 20 degrees clockwise to account for Coriolis effects in the Northern Hemisphere. Field and laboratory data suggest, however, that the deflection angle of the surface drift current can be highly variable. An empirical formula, based on field observations and theoretical arguments relating wind speed to deflection angle, was used to calculate a new deflection angle at each time step in the model. Comparisons of oilspill contact probabilities to coastal areas calculated for constant and variable deflection angles showed that the model is insensitive to this changing angle at low wind speeds. At high wind speeds, some statistically significant differences in contact probabilities did appear. © 1982.
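
    The drift construction can be sketched as follows. The 3.5% factor and clockwise rotation come from the abstract; the speed-dependent angle function is purely hypothetical, standing in for the unstated empirical formula.

    ```python
    import numpy as np

    def surface_drift(wind_u, wind_v, deflection_deg):
        """Surface drift as 3.5% of the wind vector, rotated clockwise
        by a deflection angle (20 degrees in the constant-angle model)."""
        theta = np.deg2rad(-deflection_deg)          # clockwise rotation
        u, v = 0.035 * wind_u, 0.035 * wind_v
        return (u * np.cos(theta) - v * np.sin(theta),
                u * np.sin(theta) + v * np.cos(theta))

    def deflection_angle(wind_speed):
        """Hypothetical speed-dependent deflection: larger angles at low
        wind speeds, shrinking as wind strengthens (illustrative form only)."""
        return 25.0 * np.exp(-0.05 * wind_speed) + 5.0

    w_u, w_v = 8.0, 3.0                              # wind components, m/s
    speed = np.hypot(w_u, w_v)
    print(surface_drift(w_u, w_v, deflection_angle(speed)))
    ```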

  9. Calculation of Si(Li) x-ray detector efficiencies

    International Nuclear Information System (INIS)

    Zaluzec, N.; Holton, R.

    1984-01-01

    The calculation of detector efficiency functions is an important step in the quantitative analysis of x-ray spectra when approached by a standardless technique. In this regard, it is essential that the analyst not only model the physical aspects of the absorption and transmission of the various windows present, but also use the most accurate data available for the mass absorption coefficients required in these calculations. Modeling the size and shape of the windows present is beyond the scope of this paper; the authors instead concentrate on the mass absorption coefficients used in the calculations and their implications for efficiency calculations. For the purposes of this paper, the authors consider that the relative detector efficiency function of a conventional Si(Li) detector can be modeled by a simple expression.
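
    A "simple expression" of the kind referred to multiplies the transmission of each window by the absorption of the Si crystal. The sketch below uses placeholder attenuation coefficients and thicknesses, not tabulated data.

    ```python
    import numpy as np

    def relative_efficiency(mu_windows, thicknesses, mu_si, t_si):
        """Relative Si(Li) efficiency at one photon energy: transmission
        through each window times absorption in the Si crystal.
        mu_* are linear attenuation coefficients (1/cm) at that energy."""
        transmission = np.exp(-np.sum(np.asarray(mu_windows)
                                      * np.asarray(thicknesses)))
        absorption = 1.0 - np.exp(-mu_si * t_si)
        return transmission * absorption

    # Illustrative values only (Be window, Au contact, Si dead layer)
    print(relative_efficiency(mu_windows=[1.5, 4000.0, 70.0],
                              thicknesses=[8e-4, 2e-6, 1e-5],  # cm
                              mu_si=70.0, t_si=0.3))
    ```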

  10. SF12: Stata module to validate sf12 input and calculate sf12 version 2 t scores

    DEFF Research Database (Denmark)

    2015-01-01

    sf12 takes 12 variables in correct order (i1 i2a i2b i3a i3b i4a i4b i5 i6a i6b i6c i7) and validates the variables with respect to SF-12 requirements. Only rows that are correct are used for calculating the SF-12 version 2 t scores.

  11. Numerical procedure for the calculation of nonsteady spherical shock fronts with radiation

    International Nuclear Information System (INIS)

    Winkler, K.H.

    The basis of the numerical method is an implicit difference scheme with backward time differences on a freely moving coordinate system. The coordinate system itself is determined simultaneously with the iterative solution of the physical equations, as a function of the physical variables. Shock fronts, even nonsteady ones, are calculated as discontinuities according to the Rankine-Hugoniot equations. The radiation field is obtained from the two-dimensional, static, spherically symmetric transport equation in conjunction with the time-dependent one-dimensional moment equations. No artificial viscosity of any type is used. The applicability of the method developed is demonstrated by an example involving the calculation of protostar collapse. 11 figures

  12. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

  13. Addressing Geographic Variability in the Comparative Toxicity Potential of Copper and Nickel in Soils

    DEFF Research Database (Denmark)

    Owsianiak, Mikolaj; Rosenbaum, Ralph K.; Huijbregts, Mark A. J.

    2013-01-01

    Comparative toxicity potentials (CTP), in life cycle impact assessment also known as characterization factors (CF), of copper (Cu) and nickel (Ni) were calculated for a global set of 760 soils. An accessibility factor (ACF) that takes into account the role of the reactive, solid-phase metal pool...... findings stress the importance of dealing with geographic variability in the calculation of CTPs for terrestrial ecotoxicity of metals....

  14. Continuous-Variable Entanglement Swapping

    Directory of Open Access Journals (Sweden)

    Kevin Marshall

    2015-05-01

    Full Text Available We present a very brief overview of entanglement swapping as it relates to continuous-variable quantum information. The technical background required is discussed and the natural link to quantum teleportation is established before discussing the nature of Gaussian entanglement swapping. The limitations of Gaussian swapping are introduced, along with the general applications of swapping in the context of quantum communication and entanglement distribution. In light of this, we briefly summarize a collection of entanglement swapping schemes which incorporate a non-Gaussian ingredient, and the benefits of such schemes are noted. Finally, we motivate the need to further study and develop such schemes by highlighting the requirements of a continuous-variable repeater.

  15. Solution of heat equation with variable coefficient using derive

    CSIR Research Space (South Africa)

    Lebelo, RS

    2008-09-01

    Full Text Available In this paper, the method of approximating solutions of partial differential equations with variable coefficients is studied. This is done by considering heat flow through a one-dimensional model with variable cross-sections. Two cases...

  16. Criticality calculation method for mixer-settlers

    International Nuclear Information System (INIS)

    Gonda, Kozo; Aoyagi, Haruki; Nakano, Ko; Kamikawa, Hiroshi.

    1980-01-01

    A new criticality calculation code, MACPEX, has been developed to evaluate and manage the criticality of the process in extractors of the mixer-settler type. MACPEX can perform combined calculations with the PUREX process calculation code MIXSET to obtain the neutron flux and the effective multiplication constant in the mixer-settlers. MACPEX solves the one-dimensional diffusion equation by the explicit difference method and the standard source-iteration technique. The characteristics of MACPEX are as follows. 1) Group constants of 4 energy groups for the ²³⁹Pu-H₂O solution, water, polyethylene and SUS 28 are provided. 2) The group constants of the ²³⁹Pu-H₂O solution are given by functional formulae of the plutonium concentration, which is less than 50 g/l. 3) Two boundary conditions, vacuum and reflective, are available in this code. 4) The geometrical bucklings can be calculated for a given energy group and/or region by using the three-dimensional neutron flux profiles obtained by CITATION. 5) A buckling correction search can be carried out in order to obtain a desired k_eff. (author)

  17. Intervariability and intravariability of bone morphogenetic proteins in commercially available demineralized bone matrix products.

    Science.gov (United States)

    Bae, Hyun W; Zhao, Li; Kanim, Linda E A; Wong, Pamela; Delamarter, Rick B; Dawson, Edgar G

    2006-05-20

    Enzyme-linked immunosorbent assay (ELISA) was used to detect bone morphogenetic proteins (BMPs) 2, 4, and 7 in 9 commercially available ("off the shelf") demineralized bone matrix (DBM) product formulations, using 3 different manufacturer's production lots of each DBM formulation. The objectives were to evaluate and compare the quantity of BMPs among several different DBM formulations (inter-product variability), as well as to examine the variability of these proteins among different production lots within the same DBM formulation (intra-product variability). DBMs are commonly used to augment available bone graft in spinal fusion procedures. Surgeons are presented with an ever-increasing variety of commercially available human DBMs from which to choose, yet there is limited information on a specific DBM product's osteoinductive efficacy, potency, and constancy. Protein extracts from each DBM sample were separately dialyzed 4 times against distilled water at 4 degrees C for 48 hours. The amounts of BMP-2, BMP-4, and BMP-7 were determined using ELISA. Results. The concentrations of detected BMP-2 and BMP-7 were low for all DBM formulations; only nanograms of BMP were extracted from each gram of DBM (20.2-120.6 ng BMP-2/g DBM; 54.2-226.8 ng BMP-7/g DBM). The variability of BMP concentrations among different lots of the same DBM formulation (intra-product variability) was higher than the variability of concentrations among different DBM formulations (inter-product variability; coefficient of variation range for BMP-2, 16.34% to 76.01%; P < 0.05). The amounts of BMPs available in DBMs are low, on the order of 1 × 10⁻⁹ g of BMP per gram of DBM. There is higher variability in the concentration of BMPs among 3 different lots of the same DBM formulation than among different DBM formulations. This variability calls into question DBM products' reliability and, possibly, their efficacy in providing consistent osteoinduction.

  18. Generating Variable and Random Schedules of Reinforcement Using Microsoft Excel Macros

    Science.gov (United States)

    Bancroft, Stacie L.; Bourret, Jason C.

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time.…

  19. Variable dead time counters. 1 - theoretical responses and the effects of neutron multiplication

    International Nuclear Information System (INIS)

    Lees, E.W.; Hooton, B.W.

    1978-10-01

    A theoretical expression is derived for calculating the response of any variable dead time counter (VDC) used in the passive assay of plutonium by neutron counting of the natural spontaneous fission activity. The effects of neutron multiplication in the sample arising from interactions of the original spontaneous fission neutrons is shown to modify the linear relationship between VDC signal and Pu mass. Numerical examples are shown for the Euratom VDC and a systematic investigation of the various factors affecting neutron multiplication is reported. Limited comparisons between the calculations and experimental data indicate provisional validity of the calculations. (author)

  20. Preparation of small group constants for calculation of shielding

    International Nuclear Information System (INIS)

    Khokhlov, V.F.; Shejno, I.N.; Tkachev, V.D.

    1979-01-01

    The shielding calculation error caused by neglecting the angular and spatial dependences of the neutron flux when determining small-group constants from many-group constants is studied, and an economical method that allows for these dependences is proposed. The spatial dependence is replaced by average values over zones singled out within regions of the same composition; the angular dependence of the cross sections is replaced by average values over the half-ranges of the angular variable. To solve the transport equation, the ALGOL-ROSA-M program, using the method of characteristic interpolation and a trial-run method, was developed; the program correctly accounts for unscattered and singly scattered radiation. Results of 21-group calculations of neutron transmission (10.5 MeV-0.01 eV) are compared with 3-group calculations for water (layer thickness 30 cm) and with 5-group calculations for a heterogeneous shield of alternating stainless steel layers (3 layers, 16 cm thick each) and graphite layers (2 layers, 20 cm thick each). The analysis shows that the proposed method yields rather accurate results in the preparation of the small-group cross sections, while considerably reducing the number of groups (from 21 to 3-5) and saving machine time.

  1. Studying the role of latent variables in lost working days by a Structural Equation Modeling approach

    Directory of Open Access Journals (Sweden)

    Meysam Heydari

    2016-12-01

    Full Text Available Background: Based on estimations, each year about 250 million work-related injuries and many temporary or permanent disabilities occur, most of which are preventable. The oil and gas industries are among the industries with the highest incidence of injuries in the world. The aim of this study was to investigate the role and effect of different risk management variables on lost working days (LWD) in seismic projects. Methods: This was a retrospective, cross-sectional and systematic analysis of occupational accidents between 2008 and 2015 (an 8-year period) in different seismic projects for oilfield exploration at Dana Energy (Iranian Seismic Company). The preliminary sample size of the study was 487 accidents. A systems analysis approach was applied using root cause analysis (RCA) and structural equation modeling (SEM). The data analysis tools were SPSS 23 and AMOS 23 software. Results: The mean of lost working days (LWD) was calculated to be 49.57. The final structural equation model showed that the latent variables of safety and health training (-0.33), risk assessment (-0.55) and risk control (-0.61), as direct causes, significantly affected lost working days (LWD) in the seismic industry (p < 0.05). Conclusion: The findings of the present study revealed that a combination of variables affects lost working days (LWD). Therefore, the role of these variables in accidents should be investigated and suitable programs should be developed to address them.

  2. A Source Area Approach Demonstrates Moderate Predictive Ability but Pronounced Variability of Invasive Species Traits.

    Directory of Open Access Journals (Sweden)

    Günther Klonner

    Full Text Available The search for traits that make alien species invasive has mostly concentrated on comparing successful invaders and different comparison groups with respect to average trait values. By contrast, little attention has been paid to trait variability among invaders. Here, we combine an analysis of trait differences between invasive and non-invasive species with a comparison of multidimensional trait variability within these two species groups. We collected data on biological and distributional traits for 1402 species of the native, non-woody vascular plant flora of Austria. We then compared the subsets of species recorded and not recorded as invasive aliens anywhere in the world, respectively, first, with respect to the sampled traits using univariate and multiple regression models; and, second, with respect to their multidimensional trait diversity by calculating functional richness and dispersion metrics. Attributes related to competitiveness (strategy type, nitrogen indicator value, habitat use (agricultural and ruderal habitats, occurrence under the montane belt, and propagule pressure (frequency were most closely associated with invasiveness. However, even the best multiple model, including interactions, only explained a moderate fraction of the differences in invasive success. In addition, multidimensional variability in trait space was even larger among invasive than among non-invasive species. This pronounced variability suggests that invasive success has a considerable idiosyncratic component and is probably highly context specific. We conclude that basing risk assessment protocols on species trait profiles will probably face hardly reducible uncertainties.

  3. A review of instrumental variable estimators for Mendelian randomization.

    Science.gov (United States)

    Burgess, Stephen; Small, Dylan S; Thompson, Simon G

    2017-10-01

    Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.

  4. Automated calculation of point A coordinates for CT-based high-dose-rate brachytherapy of cervical cancer

    Directory of Open Access Journals (Sweden)

    Hyejoo Kang

    2017-07-01

    Full Text Available Purpose: The goal is to develop a stand-alone application that automatically and consistently computes the coordinates of the dose calculation point recommended by the American Brachytherapy Society (i.e., point A) based solely on the implanted applicator geometry for cervical cancer brachytherapy. Material and methods: The application calculates point A coordinates from the source dwell geometries in the computed tomography (CT) scans, and outputs the 3D coordinates in the left and right directions. The algorithm was tested on 34 CT scans of 7 patients treated with high-dose-rate (HDR) brachytherapy using tandem and ovoid applicators. A single experienced user retrospectively and manually inserted point A into each CT scan, whose coordinates were used as the "gold standard" for all comparisons. The gold standard was subtracted from the automatically calculated points, a second manual placement by the same experienced user, and the clinically used point coordinates inserted by multiple planners. Coordinate differences and corresponding variances were compared using nonparametric tests. Results: Automatically calculated, manually placed, and clinically used points agree with the gold standard to < 1 mm, 1 mm, and 2 mm, respectively. When compared to the gold standard, the average and standard deviation of the 3D coordinate differences were 0.35 ± 0.14 mm for automatically calculated points, 0.38 ± 0.21 mm for the second manual placement, and 0.71 ± 0.44 mm for the clinically used point coordinates. Both the means and standard deviations of the 3D coordinate differences were statistically significantly different from the gold standard when point A was placed by multiple users (p < 0.05), but not when placed repeatedly by a single user or when calculated automatically. There were no statistical differences in doses, which agree to within 1-2% on average for all three groups. Conclusions: The study demonstrates that the automated algorithm
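
    Assuming the usual construction (a point 2 cm superior to the flange along the tandem axis, then 2 cm perpendicular to it on each side), a geometric sketch might look like the following. The choice of lateral direction is an assumption here, and the paper's actual algorithm may differ.

    ```python
    import numpy as np

    def point_a(tip, flange):
        """Points A_right/A_left from tandem geometry: 2 cm superior to
        the flange along the tandem axis, then 2 cm perpendicular to it.
        Coordinates in cm; the lateral axis below is an assumed choice."""
        axis = (tip - flange) / np.linalg.norm(tip - flange)  # tandem direction
        lateral = np.cross(axis, [0.0, 0.0, 1.0])             # assumed lateral axis
        lateral /= np.linalg.norm(lateral)
        base = flange + 2.0 * axis
        return base + 2.0 * lateral, base - 2.0 * lateral

    tip = np.array([0.0, 6.0, 1.0])      # illustrative dwell geometry, cm
    flange = np.array([0.0, 0.0, 0.0])
    print(point_a(tip, flange))
    ```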

  5. Predictive Variable Gain Iterative Learning Control for PMSM

    Directory of Open Access Journals (Sweden)

    Huimin Xu

    2015-01-01

    Full Text Available A predictive variable gain strategy in iterative learning control (ILC) is introduced. Predictive variable gain iterative learning control is constructed to improve the performance of trajectory tracking. A scheme based on predictive variable gain iterative learning control for eliminating undesirable vibrations of the PMSM system is proposed. The basic idea is that undesirable vibrations of the PMSM system are eliminated from two aspects: the iterative domain and the time domain. The predictive method is utilized to determine the learning gain in the ILC algorithm. The contraction mapping principle is used to prove the convergence of the algorithm. Simulation results demonstrate that the predictive variable gain is superior to constant gain and other variable gains.

  6. Calculating the true level of predictors significance when carrying out the procedure of regression equation specification

    Directory of Open Access Journals (Sweden)

    Nikita A. Moiseev

    2017-01-01

    Full Text Available The paper is devoted to a new randomization method that yields unbiased adjustments of p-values for linear regression model predictors by incorporating the number of potential explanatory variables, their variance-covariance matrix and its uncertainty, based on the number of observations. This adjustment helps to control type I errors in scientific studies, significantly decreasing the number of publications that report false relations as authentic ones. Comparative analysis with such existing methods as the Bonferroni correction and the Shehata and White adjustments explicitly shows their imperfections, especially in the case when the number of observations and the number of potential explanatory variables are approximately equal. The comparative analysis also showed that when the variance-covariance matrix of a set of potential predictors is diagonal, i.e., the data are independent, the proposed simple correction is the best and easiest way to obtain unbiased corrections of traditional p-values. However, in the presence of strongly correlated data, the simple correction overestimates the true p-values, which can lead to type II errors. It was also found that the corrected p-values depend on the number of observations, the number of potential explanatory variables and the sample variance-covariance matrix. For example, if there are only two potential explanatory variables competing for one position in the regression model, then if they are weakly correlated, the corrected p-value will be lower than when the number of observations is smaller and vice versa; if the data are highly correlated, the case with a larger number of observations will show a lower corrected p-value. With increasing correlation, all corrections, regardless of the number of observations, tend to the original p-value. This phenomenon is easy to explain: as the correlation coefficient tends to one, two variables almost linearly depend on each

  7. METHOD OF CALCULATION OF THE NON-STATIONARY TEMPERATURE FIELD INSIDE OF THERMAL PACKED BED ENERGY STORAGE

    Directory of Open Access Journals (Sweden)

    Ermuratschii V.V.

    2014-04-01

    Full Text Available e paper presents a method of the approximate calculation of the non-stationary temperature field inside of thermal packed bed energy storages with feasible and latent heat. Applying thermoelectric models and computational methods in electrical engineering, the task of computing non-stationary heat transfer is resolved with respect to third type boundary conditions without applying differential equations of the heat transfer. For sub-volumes of the energy storage the method is executed iteratively in spatiotemporal domain. Single-body heating is modeled for each sub-volume, and modeling conditions are assumed to be identical for remained bod-ies, located in the same sub-volume. For each iteration step the boundary conditions will be represented by re-sults at the previous step. The fulfillment of the first law of thermodynamics for system “energy storage - body” is obtained by the iterative search of the mean temperature of the energy storage. Under variable boundary con-ditions the proposed method maybe applied to calculating temperature field inside of energy storages with packed beds consisted of solid material, liquid and phase-change material. The method may also be employed to compute transient, power and performance characteristics of packed bed energy storages.

  8. Vigorous physical activity predicts higher heart rate variability among younger adults.

    Science.gov (United States)

    May, Richard; McBerty, Victoria; Zaky, Adam; Gianotti, Melino

    2017-06-14

    Baseline heart rate variability (HRV) is linked to prospective cardiovascular health. We tested intensity and duration of weekly physical activity as predictors of heart rate variability in young adults. Time and frequency domain indices of HRV were calculated based on 5-min resting electrocardiograms collected from 82 undergraduate students. Hours per week of both moderate and vigorous activity were estimated using the International Physical Activity Questionnaire. In regression analyses, hours of vigorous physical activity, but not moderate activity, significantly predicted greater time domain and frequency domain indices of heart rate variability. Adjusted for weekly frequency, greater daily duration of vigorous activity failed to predict HRV indices. Future studies should test direct measurements of vigorous activity patterns as predictors of autonomic function in young adulthood.

  9. O(α²L²) radiative corrections to deep inelastic ep scattering for different kinematical variables

    International Nuclear Information System (INIS)

    Bluemlein, J.

    1994-03-01

    The QED radiative corrections are calculated in the leading-log approximation up to O(α²) for different definitions of the kinematical variables using jet measurement, the 'mixed' variables, the double-angle method, and a measurement based on θ_e and y_JB. Higher-order contributions due to exponentiation of soft radiation are included. (orig.)

  10. Statistical Model Calculations for (n,γ) Reactions

    Directory of Open Access Journals (Sweden)

    Beard Mary

    2015-01-01

    Full Text Available Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
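
    For reference, the Maxwellian-averaged capture cross section being computed is conventionally defined as

    ```latex
    \sigma_{\mathrm{MACS}}(kT) \;=\; \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}}
    \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, \mathrm{d}E ,
    ```

    so that uncertainties in σ(E) from the nuclear inputs propagate directly into the averaged cross section at a given thermal energy kT.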

  11. Evaluation of energy savings potential of variable refrigerant flow (VRF from variable air volume (VAV in the U.S. climate locations

    Directory of Open Access Journals (Sweden)

    Dongsu Kim

    2017-11-01

    Full Text Available Variable refrigerant flow (VRF systems are known for their high energy performance and thus can improve energy efficiency both in residential and commercial buildings. The energy savings potential of this system has been demonstrated in several studies by comparing the system performance with conventional HVAC systems such as rooftop variable air volume systems (RTU-VAV and central chiller and boiler systems. This paper evaluates the performance of VRF and RTU-VAV systems in a simulation environment using widely-accepted whole building energy modeling software, EnergyPlus. A medium office prototype building model, developed by the U.S. Department of Energy (DOE, is used to assess the performance of VRF and RTU-VAV systems. Each system is placed in 16 different locations, representing all U.S. climate zones, to evaluate the performance variations. Both models are compliant with the minimum energy code requirements prescribed in ASHRAE standard 90.1-2010 — energy standard for buildings except low-rise residential buildings. Finally, a comparison study between the simulation results of VRF and RTU-VAV models is made to demonstrate energy savings potential of VRF systems. The simulation results show that the VRF systems would save around 15–42% and 18–33% for HVAC site and source energy uses compared to the RTU-VAV systems. In addition, calculated results for annual HVAC cost savings point out that hot and mild climates show higher percentage cost savings for the VRF systems than cold climates mainly due to the differences in electricity and gas use for heating sources.

  12. PaCAL: A Python Package for Arithmetic Computations with Random Variables

    Directory of Open Access Journals (Sweden)

    Marcin Korze?

    2014-05-01

    Full Text Available In this paper we present PaCAL, a Python package for arithmetical computations on random variables. The package is capable of performing the four arithmetic operations: addition, subtraction, multiplication and division, as well as computing many standard functions of random variables. Summary statistics, random number generation, plots, and histograms of the resulting distributions can easily be obtained and distribution parameter fitting is also available. The operations are performed numerically and their results interpolated, allowing for arbitrary arithmetic operations on random variables following practically any probability distribution encountered in practice. The package is easy to use, as operations on random variables are performed just as they are on standard Python variables. Independence of random variables is, by default, assumed on each step, but some computations on dependent random variables are also possible. We demonstrate on several examples that the results are very accurate, often close to machine precision. Practical applications include statistics, physical measurements or estimation of error distributions in scientific computations.
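
    A short session in the spirit of the package's documented interface (assuming PaCAL is installed, e.g. via pip; exact names may vary between versions):

    ```python
    # Arithmetic directly on distributions rather than on samples
    from pacal import NormalDistr, UniformDistr

    X = NormalDistr(0, 1)      # standard normal random variable
    Y = UniformDistr(0, 2)     # uniform on [0, 2]
    Z = X * Y + 1              # operations as on ordinary Python variables

    print(Z.mean(), Z.std())   # summary statistics of the result
    print(Z.cdf(1.0))          # distribution function, evaluated numerically
    Z.plot()                   # density plot (requires matplotlib)
    ```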

  13. HEU benchmark calculations and LEU preliminary calculations for IRR-1

    International Nuclear Information System (INIS)

    Caner, M.; Shapira, M.; Bettan, M.; Nagler, A.; Gilat, J.

    2004-01-01

    We performed neutronics calculations for the Soreq Research Reactor, IRR-1. The calculations were done for the purpose of upgrading and benchmarking our codes and methods. The codes used were mainly WIMS-D/4 for cell calculations and the three-dimensional diffusion code CITATION for full core calculations. The experimental flux was obtained by gold wire activation methods and compared with our calculated flux profile. The IRR-1 is loaded with highly enriched uranium fuel assemblies of the plate type. In the framework of preparation for conversion to low-enrichment fuel, additional calculations were done assuming the presence of fresh LEU fuel. In these preliminary calculations we investigated the effect of the increased U-238 loading, and the corresponding uranium density, on the criticality and flux distributions. (author)

  14. Expanding the calculation of activation volumes: Self-diffusion in liquid water

    Science.gov (United States)

    Piskulich, Zeke A.; Mesele, Oluwaseun O.; Thompson, Ward H.

    2018-04-01

    A general method for calculating the dependence of dynamical time scales on macroscopic thermodynamic variables from a single set of simulations is presented. The approach is applied to the pressure dependence of the self-diffusion coefficient of liquid water as a particularly useful illustration. It is shown how the activation volume associated with diffusion can be obtained directly from simulations at a single pressure, avoiding approximations that are typically invoked.
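
    For the diffusion example, the activation volume in question is conventionally defined through the pressure derivative of the diffusion coefficient,

    ```latex
    \Delta V^{\ddagger} \;=\; -\,k_{B} T
    \left( \frac{\partial \ln D}{\partial P} \right)_{T} ,
    ```

    so obtaining the derivative from simulations at a single pressure, as described, yields the activation volume directly.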

  15. Machine learning techniques to select variable stars

    Directory of Open Access Journals (Sweden)

    García-Varela Alejandro

    2017-01-01

    Full Text Available In order to perform a supervised classification of variable stars, we propose and evaluate a set of six features extracted from the magnitude density of the light curves. They are used to train automatic classification systems using state-of-the-art classifiers implemented in the R statistical computing environment. We find that random forests are the most successful method for selecting variable stars.
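
    The study's classifiers were trained in R; an equivalent sketch with scikit-learn is shown below, with random stand-ins for the six magnitude-density features (everything here is illustrative):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))      # stand-in for the six features
    y = rng.integers(0, 2, size=200)   # 1 = variable star, 0 = not

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # mean accuracy
    ```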

  16. Response surfaces and sensitivity analyses for an environmental model of dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Iooss, Bertrand [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)]. E-mail: bertrand.iooss@cea.fr; Van Dorpe, Francois [CEA Cadarache, DEN/DTN/SMTM/LMTE, 13108 Saint Paul lez Durance, Cedex (France); Devictor, Nicolas [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)

    2006-10-15

    A parametric sensitivity analysis is carried out on GASCON, a radiological impact software package describing radionuclide transfer to humans following a chronic gaseous release from a nuclear facility. An effective dose received by age group can thus be calculated for a specific radionuclide and release duration. In this study, we are concerned with 18 output variables, each depending on approximately 50 uncertain input parameters. First, 1000 Monte Carlo simulations allow us to calculate correlation coefficients between input parameters and output variables, which give a first overview of the important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computational cost; these response surfaces are very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate the total sensitivity indices of Sobol by the Monte Carlo method. We demonstrate the application of this method for one study site and one reference group near the Cadarache nuclear research center (France), for two radionuclides: iodine-129 and uranium-238. It is thus shown that the most influential parameters are all related to the goat's milk food chain, in decreasing order of importance: the 'effective ingestion' dose coefficient, the goat's milk ration of the individuals of the reference group, the grass ration of the goat, the dry deposition velocity, and the transfer factor to goat's milk.
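
    Total Sobol indices on a cheap surrogate can be estimated along these lines. The sketch uses the SALib package; the surrogate function, parameter names and bounds are invented for illustration.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Toy response surface standing in for the fitted polynomial dose model
    def surrogate(X):
        return 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 0] * X[:, 2]

    problem = {"num_vars": 3,
               "names": ["ingestion_dose_coeff", "milk_ration",
                         "deposition_velocity"],
               "bounds": [[0.0, 1.0]] * 3}
    X = saltelli.sample(problem, 1024)   # Saltelli scheme for Sobol indices
    St = sobol.analyze(problem, surrogate(X))["ST"]
    print(dict(zip(problem["names"], St)))  # total sensitivity indices
    ```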

  17. R-matrix calculations for few-quark bound states

    International Nuclear Information System (INIS)

    Shalchi, M.A.; Hadizadeh, M.R.

    2016-01-01

    The R-matrix method is implemented to study heavy charm and bottom diquarks, triquarks, tetraquarks, and pentaquarks in configuration space, as bound states of quark-antiquark, diquark-quark, diquark-antidiquark, and diquark-antitriquark systems, respectively. The mass spectrum and the size of these systems are calculated for different partial wave channels. The calculated masses are compared with recent theoretical results obtained by other methods in momentum and configuration space, and with available experimental data. (orig.)

  18. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    Science.gov (United States)

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.

  19. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenization to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for solving very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. Monte Carlo methods would also be better for Accelerator Driven Systems (ADS), which could have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code.
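
    The buildup-and-decay step such a code performs amounts to integrating linear depletion equations over a burnup step. A toy two-nuclide sketch via the matrix exponential follows; the rates are illustrative, not real data.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Toy chain: parent transmutes to daughter under flux, daughter
    # decays; dN/dt = A N is solved over one burnup step via expm.
    sigma_phi = 1e-9   # transmutation rate (1/s), illustrative
    lam = 1e-8         # daughter decay constant (1/s), illustrative

    A = np.array([[-sigma_phi, 0.0],
                  [sigma_phi, -lam]])
    N0 = np.array([1.0e24, 0.0])      # initial nuclide densities
    dt = 30 * 24 * 3600.0             # one-month burnup step, seconds
    print(expm(A * dt) @ N0)          # densities after the step
    ```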

  20. Complexation of Plutonium (IV) With Sulfate At Variable Temperatures

    International Nuclear Information System (INIS)

    Y. Xia; J.I. Friese; D.A. Moore; P.P. Bachelor; L. Rao

    2006-01-01

    The complexation of plutonium(IV) with sulfate at variable temperatures has been investigated by a solvent extraction method. A NaBrO₃ solution was used as a holding oxidant to maintain the plutonium(IV) oxidation state throughout the experiments. The distribution ratio of Pu(IV) between the organic and aqueous phases was found to decrease as the concentration of sulfate was increased. Stability constants of the 1:1 and 1:2 Pu(IV)-HSO₄⁻ complexes, dominant in the aqueous phase, were calculated from the effect of [HSO₄⁻] on the distribution ratio. The enthalpy and entropy of complexation were calculated from the stability constants at different temperatures using the van 't Hoff equation.
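
    The van 't Hoff analysis mentioned amounts to a linear fit of ln K against 1/T,

    ```latex
    \ln K \;=\; -\,\frac{\Delta H^{\circ}}{R}\,\frac{1}{T} \;+\; \frac{\Delta S^{\circ}}{R} ,
    ```

    so the slope of the fit gives -ΔH°/R and the intercept gives ΔS°/R.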