The rating reliability calculator
Solomon David J
2004-04-01
Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago, Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server, and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run, it displays the reliability, the number of subjects rated, the harmonic mean of the number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple Web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
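The Spearman-Brown prophecy formula mentioned in this abstract has a simple closed form; a minimal sketch in Python (the function name is mine, not from the described utility):

```python
def spearman_brown(r_single, k):
    """Project the reliability of an average of k ratings from the
    reliability r_single of a single rating (Spearman-Brown prophecy formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

# If one judge's rating has reliability 0.40, averaging 4 judges gives:
print(round(spearman_brown(0.40, 4), 3))  # 0.727
```

The formula shows why averaging more judges raises reliability with diminishing returns: each additional judge adds to both numerator and denominator.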
Assessing the reliability of calculated catalytic ammonia synthesis rates
Medford, Andrew James; Wellendorff, Jess; Vojvodic, Aleksandra
2014-01-01
We introduce a general method for estimating the uncertainty in calculated materials properties based on density functional theory calculations. We illustrate the approach for a calculation of the catalytic rate of ammonia synthesis over a range of transition-metal catalysts. The correlation...
Włodzimierz Korniluk
2013-12-01
Full Text Available The report presents the use of Bayesian networks in the calculation of symbolic indicators of the reliability and unreliability of the electric power supplying a load point. The calculation of the reliability indicators is determined by analytical dependencies. These dependencies are used to estimate: the probability of the up or down state of the power system components supplying the load point; the total probability distribution; the conditional probabilities of the appearance of a power or no-power state; the intensity of current interruptions and the average time of their duration; and the contributions of individual power system components to the service reliability. This report describes how to obtain these analytical dependencies using the Mathematica (ver. 8) application for symbolic computations. In this paper we discuss the results of the symbolic computations for a selected supply power system and methods for reducing the duration of symbolic computations of indicators for multiple-compound electrical power systems.
Electronics reliability calculation and design
Dummer, Geoffrey W A; Hiller, N
1966-01-01
Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, and scatter or dispersion of measurements.
2014-01-01
related metrics for detecting sepsis and multiorgan failure, improvement of HRC calculations may help detect significant changes from baseline values...calculations. Equivalence tests between mean HRC values derived from manually verified sequences and those derived from automatically detected peaks...assessment of HRC in critically ill patients. Keywords: signal detection analysis; electrocardiography; heart rate; clinical decision support
Nielsen, Ulrik Dam
2010-01-01
Mean outcrossing rates can be used as a basis for decision support for ships in severe sea. The article describes a procedure for calculating the mean outcrossing rate of non-Gaussian processes with stochastic input parameters. The procedure is based on the first-order reliability method (FORM), and stochastic parameters are incorporated by carrying out a number of FORM calculations corresponding to combinations of specific values of the stochastic parameters. Subsequently, the individual FORM calculations are weighted according to the joint probability with which the specific combination of parameters occurs. The results of the procedure are compared with brute-force results obtained by Monte Carlo simulation (MCS), and good agreement is observed. Importantly, the procedure requires significantly less CPU time than MCS to produce mean outcrossing rates.
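The weighting step this abstract describes, combining conditional FORM results by the joint probability of each parameter combination, reduces to a probability-weighted average; a minimal sketch (function name and values are illustrative, not from the article):

```python
def mean_outcrossing_rate(conditional_rates, joint_probs):
    """Combine outcrossing rates computed at specific parameter
    combinations (e.g. by individual FORM runs), weighting each by the
    joint probability of that combination. Probabilities must sum to 1."""
    assert abs(sum(joint_probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(r * p for r, p in zip(conditional_rates, joint_probs))

# Two equally likely parameter combinations with rates 1e-4 and 5e-4:
print(mean_outcrossing_rate([1e-4, 5e-4], [0.5, 0.5]))  # 0.0003
```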
Calculation reliability in vehicle accident reconstruction.
Wach, Wojciech
2016-06-01
The reconstruction of vehicle accidents is subject to assessment in terms of the reliability of a specific system of engineering and technical operations. In the article [26] a formalized concept of the reliability of vehicle accident reconstruction, defined using Bayesian networks, was proposed. The current article is focused on the calculation reliability since that is the most objective section of this model. It is shown that calculation reliability in accident reconstruction is not another form of calculation uncertainty. The calculation reliability is made dependent on modeling reliability, adequacy of the model and relative uncertainty of calculation. All the terms are defined. An example is presented concerning the analytical determination of the collision location of two vehicles on the road in the absence of evidential traces. It has been proved that the reliability of this kind of calculations generally does not exceed 0.65, despite the fact that the calculation uncertainty itself can reach only 0.05. In this example special attention is paid to the analysis of modeling reliability and calculation uncertainty using sensitivity coefficients and weighted relative uncertainty.
Calculating reliability measures for ordinal data.
Gamsu, C V
1986-11-01
Establishing the reliability of measures taken by judges is important in both clinical and research work. Calculating the statistic of choice, the kappa coefficient, unfortunately is not a particularly quick and simple procedure. Two much-needed practical tools have been developed to overcome these difficulties: a comprehensive and easily understood guide to the manual calculation of the most complex form of the kappa coefficient, weighted kappa for ordinal data, has been written; and a computer program to run under CP/M, PC-DOS and MS-DOS has been developed. With simple modification the program will also run on a Sinclair Spectrum home computer.
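The weighted kappa described in this abstract is now straightforward to compute without a manual guide or a CP/M-era program; a minimal sketch using linear disagreement weights (function name is mine):

```python
import numpy as np

def weighted_kappa(conf_matrix, weights="linear"):
    """Cohen's weighted kappa for ordinal categories from a confusion
    matrix of counts (rows: judge 1, columns: judge 2).
    Disagreement weights: |i-j| (linear) or (i-j)^2 (quadratic)."""
    m = np.asarray(conf_matrix, dtype=float)
    n = m.sum()
    k = m.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    observed = m / n                                          # joint proportions
    expected = np.outer(m.sum(axis=1), m.sum(axis=0)) / n**2  # chance agreement
    return 1.0 - (w * observed).sum() / (w * expected).sum()
```

Perfect agreement (all counts on the diagonal) yields kappa = 1; systematic complete disagreement yields kappa = -1.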
Rate calculation with colored noise
Bartsch, Thomas; Benito, R M; Borondo, F
2016-01-01
The usual identification of reactive trajectories for the calculation of reaction rates requires very time-consuming simulations, particularly if the environment presents memory effects. In this paper, we develop a new method that permits the identification of reactive trajectories in a system under the action of a stochastic colored driving force. This method is based on the perturbative computation of the invariant structures that act as separatrices for reactivity. Furthermore, using this perturbative scheme, we have obtained a formally exact expression for the reaction rate in multidimensional systems coupled to colored noisy environments.
Semiclassical calculation of decay rates
Bessa, A; Fraga, E S
2008-01-01
Several relevant aspects of quantum-field processes can be well described by semiclassical methods. In particular, the knowledge of non-trivial classical solutions of the field equations, and the thermal and quantum fluctuations around them, provide non-perturbative information about the theory. In this work, we discuss the calculation of the one-loop effective action from the semiclassical viewpoint. We intend to use this formalism to obtain an accurate expression for the decay rate of non-static metastable states.
Towards reliable calculations of the correlation function
Maj, Radoslaw (DOI: 10.1142/S0218301307009221)
2008-01-01
The correlation function of two identical pions interacting via the Coulomb potential is computed for the general case of an anisotropic particle source of finite lifetime. The effect of the halo is taken into account as an additional particle source of large spatial extension. Due to the Coulomb interaction, the effect of the halo is not limited to very small relative momenta but influences the correlation function in a relatively large domain. The relativistic effects are discussed in detail, and it is argued that the calculations have to be performed in the center-of-mass frame of the particle pair, where the (nonrelativistic) wave function of the particles' relative motion is meaningful. The Bowler-Sinyukov procedure to remove the Coulomb interaction is tested and shown to significantly underestimate the source's lifetime.
Calculation and Updating of Reliability Parameters in Probabilistic Safety Assessment
Zubair, Muhammad; Zhang, Zhijian; Khan, Salah Ud Din
2011-02-01
The internal events of a nuclear power plant are complex and include equipment maintenance, equipment damage, etc. These events affect the probability of the current risk level of the system as well as the reliability parameter values of the equipment, so such events serve as an important basis for systematic analysis and calculation. This paper presents a method for calculating reliability parameters and updating them. The method is based on the binomial likelihood function and its conjugate beta distribution. Bayes' theorem is used to update the parameters. To implement the proposed method, a computer-based program was designed that helps estimate reliability parameters.
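The beta-binomial conjugate update this abstract relies on has a closed form: a Beta(a, b) prior combined with f failures in n demands gives a Beta(a + f, b + n - f) posterior. A minimal sketch (function name and prior values are illustrative, not from the paper):

```python
def update_failure_probability(alpha_prior, beta_prior, failures, demands):
    """Conjugate Bayesian update of a failure-on-demand probability:
    Beta(a, b) prior + Binomial(failures | demands) likelihood
    -> Beta(a + failures, b + demands - failures) posterior."""
    a = alpha_prior + failures
    b = beta_prior + demands - failures
    posterior_mean = a / (a + b)
    return a, b, posterior_mean

# Generic prior Beta(1, 99) (mean 0.01), updated with 2 failures in 100 demands:
a, b, p = update_failure_probability(1.0, 99.0, 2, 100)
print(a, b, round(p, 4))  # 3.0 197.0 0.015
```

No numerical integration is needed, which is why the conjugate pair is popular in probabilistic safety assessment parameter estimation.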
Calculating transient rates from surveys
Carbone, Dario; Wijers, Ralph A M J; Rowlinson, Antonia
2016-01-01
We have developed a method to determine the transient surface density and transient rate for any given survey, using Monte Carlo simulations. This method allows us to determine the transient rate as a function of both the flux and the duration of the transients in the whole flux-duration plane, rather than at one or a few points as currently available methods do. It is applicable to every survey strategy that is monitoring the same part of the sky, regardless of the instrument or wavelength of the survey, or the target sources. We have simulated both top-hat and Fast Rise Exponential Decay light curves, highlighting how the shape of the light curve might affect the detectability of transients. Another application for this method is to estimate the number of transients of a given kind that are expected to be detected by a survey, provided that their rate is known.
Calculating transient rates from surveys
Carbone, D.; van der Horst, A. J.; Wijers, R. A. M. J.; Rowlinson, A.
2017-03-01
We have developed a method to determine the transient surface density and transient rate for any given survey, using Monte Carlo simulations. This method allows us to determine the transient rate as a function of both the flux and the duration of the transients in the whole flux-duration plane, rather than at one or a few points as currently available methods do. It is applicable to every survey strategy that is monitoring the same part of the sky, regardless of the instrument or wavelength of the survey, or the target sources. We have simulated both top-hat and Fast Rise Exponential Decay light curves, highlighting how the shape of the light curve might affect the detectability of transients. Another application for this method is to estimate the number of transients of a given kind that are expected to be detected by a survey, provided that their rate is known.
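The Monte Carlo idea behind this method can be illustrated for the simplest case, a top-hat transient observed by a survey of discrete snapshots: draw random onset times and count how often the transient overlaps at least one epoch. This is my own toy sketch of the approach, not the authors' code:

```python
import random

def detection_fraction(survey_times, duration, n_trials=10000, seed=1):
    """Monte Carlo estimate of the probability that a top-hat transient
    of the given duration overlaps at least one snapshot of a survey
    observed at the epochs in survey_times (same time units)."""
    random.seed(seed)
    t_max = max(survey_times)
    hits = 0
    for _ in range(n_trials):
        # Random onset anywhere that could overlap the survey window:
        start = random.uniform(-duration, t_max)
        if any(start <= t <= start + duration for t in survey_times):
            hits += 1
    return hits / n_trials
```

A transient much longer than the survey span is always caught (fraction 1.0), while short transients are missed in the gaps between epochs, which is how light-curve duration enters the detectability.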
Lifeline system network reliability calculation based on GIS and FTA
TANG Ai-ping; OU Jin-ping; LU Qin-nian; ZHANG Ke-xu
2006-01-01
Lifelines, such as pipeline, transportation, communication, electric transmission and medical rescue systems, are complicated networks that always distribute spatially over large geological and geographic units. The quantification of their reliability under an earthquake occurrence should be highly regarded, because the performance of these systems during a destructive earthquake is vital in order to estimate direct and indirect economic losses from lifeline failures, and is also related to laying out a rescue plan. The research in this paper aims to develop a new earthquake reliability calculation methodology for lifeline systems. The methodology of the network reliability for lifeline systems is based on fault tree analysis (FTA) and geographic information systems (GIS). The interactions existing in a lifeline system are considered herein. The lifeline systems are idealized as equivalent networks, consisting of nodes and links, and are described by network analysis in GIS. First, the nodes are divided into two types, simple nodes and complicated nodes, where the reliability of a complicated node is calculated by FTA and interaction is regarded as one factor affecting the performance of the nodes. The reliability of simple nodes and links is evaluated by code. Then, the reliability of the entire network is assessed based on GIS and FTA. Lastly, an illustration is given to show the methodology.
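Once node and link reliabilities are known, network reliability for the basic topologies reduces to the classical series/parallel rules; a minimal sketch (the feeder-path numbers are invented for illustration and are not from the paper):

```python
from math import prod

def series(reliabilities):
    """All components must work, e.g. the nodes and links along one supply path."""
    return prod(reliabilities)

def parallel(reliabilities):
    """At least one of several redundant branches must work."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Two redundant feeder paths, each a series of a node and a link:
path1 = series([0.99, 0.95])
path2 = series([0.98, 0.90])
print(round(parallel([path1, path2]), 4))  # 0.993
```

Real lifeline networks are neither purely series nor purely parallel, which is why the paper resorts to FTA for the complicated nodes; the rules above are only the building blocks.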
Composite system reliability evaluation by stochastic calculation of system operation
Haubrick, H.-J.; Hinz, H.-J.; Landeck, E. [Dept. of Power Systems and Power Economics (Germany)
1994-12-31
This report describes a newly developed probabilistic approach for steady-state composite system reliability evaluation and its exemplary application to a bulk power test system. The new computer program, called PHOENIX, takes into consideration transmission limitations, outages of lines and power stations and, as a central element, a highly sophisticated model of the dispatcher performing remedial actions after disturbances. The kernel of the new method is a procedure for optimal power flow calculation that has been specially adapted for use in reliability evaluations under the above mentioned conditions. (author) 11 refs., 8 figs., 1 tab.
Reliability and validity of the Wolfram Unified Rating Scale (WURS)
Nguyen Chau
2012-11-01
Full Text Available Abstract Background Wolfram syndrome (WFS) is a rare neurodegenerative disease that typically presents with childhood-onset insulin-dependent diabetes mellitus, followed by optic atrophy, diabetes insipidus, deafness, and neurological and psychiatric dysfunction. There is no cure for the disease, but recent advances in research have improved understanding of the disease course. Measuring disease severity and progression with reliable and validated tools is a prerequisite for clinical trials of any new intervention for neurodegenerative conditions. To this end, we developed the Wolfram Unified Rating Scale (WURS) to measure the severity and individual variability of WFS symptoms. The aim of this study is to develop and test the reliability and validity of the WURS. Methods A rating scale of disease severity in WFS was developed by modifying a standardized assessment for another neurodegenerative condition (Batten disease). WFS experts scored the representativeness of WURS items for the disease. The WURS was administered to 13 individuals with WFS (6-25 years of age). Motor function, balance, mood and quality of life were also evaluated with standard instruments. Inter-rater reliability, internal consistency reliability, and concurrent, predictive and content validity of the WURS were calculated. Results The WURS had high inter-rater reliability (ICCs > .93), moderate to high internal consistency reliability (Cronbach's α = 0.78-0.91) and demonstrated good concurrent and predictive validity. There were significant correlations between the WURS Physical Assessment and motor and balance tests (rs > .67) and between WURS scores and mood and quality-of-life measures (rs = -.86, p = .001). The WURS demonstrated acceptable content validity (Scale-Content Validity Index = 0.83). Conclusions These preliminary findings demonstrate that the WURS has acceptable reliability and validity and captures individual differences in disease severity in children and young adults with WFS.
Fair and Reasonable Rate Calculation Data -
Department of Transportation — This dataset provides guidelines for calculating the fair and reasonable rates for U.S. flag vessels carrying preference cargoes subject to regulations contained at...
Craniosacral rhythm: reliability and relationships with cardiac and respiratory rates.
Hanten, W P; Dawson, D D; Iwata, M; Seiden, M; Whitten, F G; Zink, T
1998-03-01
Craniosacral rhythm (CSR) has long been the subject of debate, both over its existence and its use as a therapeutic tool in evaluation and treatment. Origins of this rhythm are unknown, and palpatory findings lack scientific support. The purpose of this study was to determine the intra- and inter-examiner reliabilities of the palpation of the rate of the CSR and the relationship between the rate of the CSR and the heart or respiratory rates of subjects and examiners. The rates of the CSR of 40 healthy adults were palpated twice by each of two examiners. The heart and respiratory rates of the examiners and the subjects were recorded while the rates of the subjects' CSR were palpated by the examiners. Intraclass correlation coefficients were calculated to determine the intra- and inter-examiner reliabilities of the palpation. Two multiple regression analyses, one for each examiner, were conducted to analyze the relationships between the rate of the CSR and the heart and respiratory rates of the subjects and the examiners. The intraexaminer reliability coefficients were 0.78 for examiner A and 0.83 for examiner B, and the interexaminer reliability coefficient was 0.22. The result of the multiple regression analysis for examiner A was R = 0.46 and adjusted R2 = 0.12 (p = 0.078) and for examiner B was R = 0.63 and adjusted R2 = 0.32 (p = 0.001). The highest bivariate correlation was found between the CSR and the subject's heart rate (r = 0.30) for examiner A and between the CSR and the examiner's heart rate (r = 0.42) for examiner B. The results indicated that a single examiner may be able to palpate the rate of the CSR consistently, if that is what we truly measured. It is possible that the perception of CSR is illusory. The rate of the CSR palpated by two examiners is not consistent. The results of the regression analysis of one examiner offered no validation to those of the other. It appears that a subject's CSR is not related to the heart or respiratory rates of the
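Intraclass correlation coefficients like those reported in this study can be computed from a subjects-by-raters table; a minimal one-way random-effects ICC(1,1) sketch, following the Shrout and Fleiss formulation (function name is mine, and the study's own ICC variant may differ):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n_subjects x k_raters array:
    (MS_between - MS_within) / (MS_between + (k - 1) * MS_within)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Raters who agree perfectly on every subject give ICC = 1; raters whose disagreement swamps between-subject differences drive the ICC toward or below zero, as in the interexaminer coefficient of 0.22 above.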
Reliability of Multi-Category Rating Scales
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.
2013-01-01
The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…
Microcircuit Device Reliability. Digital Failure Rate Data
1981-01-01
Cooling rate calculations for silicate glasses.
Birnie, D. P., III; Dyar, M. D.
1986-03-01
Series solution calculations of cooling rates are applied to a variety of samples with different thermal properties, including an analog of an Apollo 15 green glass and a hypothetical silicate melt. Cooling rates for the well-studied green glass and a generalized silicate melt are tabulated for different sample sizes, equilibration temperatures and quench media. Results suggest that cooling rates are heavily dependent on sample size and quench medium and are less dependent on values of physical properties. Thus cooling histories for glasses from planetary surfaces can be estimated on the basis of size distributions alone. In addition, the variation of cooling rate with sample size and quench medium can be used to control quench rate.
RELIABILITY ANALYSIS FOR A REPAIRABLE PARALLEL SYSTEM WITH TIME-VARYING FAILURE RATES
TangShengdao; WangFengquan
2005-01-01
To solve a real problem in industrial systems, namely how to calculate the reliability of a system with time-varying failure rates, this paper studies a model for the load-sharing parallel system with time-varying failure rates and obtains calculation formulas for the reliability and availability of the system by solving differential equations. In this paper, the failure rates are expressed in polynomial form. The constant, linear and Weibull failure rates are special cases. The polynomial failure rates provide flexibility in modeling practical time-varying failure rates.
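For a single component with a polynomial failure rate, the survival probability follows directly from the cumulative hazard, R(t) = exp(-∫₀ᵗ λ(u) du), and the integral of a polynomial is closed-form. This is a sketch of that single-component building block, not the paper's load-sharing system solution:

```python
from math import exp

def reliability_polynomial_hazard(coeffs, t):
    """R(t) = exp(-integral of lambda(u) from 0 to t) for a polynomial
    failure rate lambda(t) = c0 + c1*t + c2*t**2 + ...
    (coeffs = [c0, c1, c2, ...])."""
    cumulative_hazard = sum(c * t ** (i + 1) / (i + 1) for i, c in enumerate(coeffs))
    return exp(-cumulative_hazard)

# A constant rate (coeffs = [0.01]) reproduces the exponential law:
print(round(reliability_polynomial_hazard([0.01], 100.0), 4))  # 0.3679
```

The load-sharing aspect of the paper's model requires solving coupled differential equations, because surviving components' failure rates change when a partner fails; the closed form above covers only the independent-component case.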
Glass dissolution rate measurement and calculation revisited
Fournier, Maxime; Ull, Aurélien; Nicoleau, Elodie; Inagaki, Yaohiro; Odorico, Michaël; Frugier, Pierre; Gin, Stéphane
2016-08-01
Aqueous dissolution rate measurements of nuclear glasses are a key step in the long-term behavior study of such waste forms. These rates are routinely normalized to the glass surface area in contact with solution, and experiments are very often carried out using crushed materials. Various methods have been implemented to determine the surface area of such glass powders, leading to differing values, with the notion of the reactive surface area of crushed glass remaining vague. In this study, around forty initial dissolution rate measurements were conducted following static and flow rate (SPFT, MCFT) measurement protocols at 90 °C, pH 10. The international reference glass (ISG), in the forms of powders with different particle sizes and polished monoliths, and soda-lime glass beads were examined. Although crushed glass grains clearly cannot be assimilated with spheres, it is when using the sample's geometric surface (Sgeo) that the rates measured on powders are closest to those found for monoliths. Overestimation of the reactive surface when using the BET model (SBET) may be due to small physical features at the atomic scale, contributing to BET surface area but not to AFM surface area. Such features are very small compared with the thickness of water ingress in glass (a few hundred nanometers) and should not be considered in rate calculations. With a SBET/Sgeo ratio of 2.5 ± 0.2 for ISG powders, it is shown here that rates measured on powders and normalized to Sgeo should be divided by 1.3 and rates normalized to SBET should be multiplied by 1.9 in order to be compared with rates measured on a monolith. The use of glass beads indicates that the geometric surface gives a good estimation of glass reactive surface if sample geometry can be precisely described. Although data clearly shows the repeatability of measurements, results must be given with a high uncertainty of approximately ±25%.
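The correction factors reported in this abstract (divide Sgeo-normalized powder rates by 1.3, multiply SBET-normalized rates by 1.9) are easy to apply programmatically; a minimal sketch (function name is mine, and the factors are specific to ISG powders as stated above):

```python
def rate_monolith_equivalent(rate, normalization):
    """Convert a dissolution rate measured on glass powder to its
    monolith-equivalent value, using the correction factors reported
    for ISG powders: rate/Sgeo divided by 1.3, rate/SBET multiplied by 1.9."""
    if normalization == "geo":
        return rate / 1.3
    if normalization == "bet":
        return rate * 1.9
    raise ValueError("normalization must be 'geo' or 'bet'")
```

Note the stated ±25% uncertainty dominates the ~1.3x and ~1.9x corrections in any downstream comparison, so corrected rates should carry that error band.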
Glass dissolution rate measurement and calculation revisited
Fournier, Maxime, E-mail: maxime.fournier@cea.fr [CEA, DEN, DTCD, SECM, F-30207, Bagnols sur Cèze (France); Ull, Aurélien; Nicoleau, Elodie [CEA, DEN, DTCD, SECM, F-30207, Bagnols sur Cèze (France); Inagaki, Yaohiro [Department of Applied Quantum Physics & Nuclear Engineering, Kyushu University, Fukuoka, 819-0395 (Japan); Odorico, Michaël [ICSM-UMR5257 CEA/CNRS/UM2/ENSCM, Site de Marcoule, BP17171, F-30207, Bagnols sur Cèze (France); Frugier, Pierre; Gin, Stéphane [CEA, DEN, DTCD, SECM, F-30207, Bagnols sur Cèze (France)
2016-08-01
Aqueous dissolution rate measurements of nuclear glasses are a key step in the long-term behavior study of such waste forms. These rates are routinely normalized to the glass surface area in contact with solution, and experiments are very often carried out using crushed materials. Various methods have been implemented to determine the surface area of such glass powders, leading to differing values, with the notion of the reactive surface area of crushed glass remaining vague. In this study, around forty initial dissolution rate measurements were conducted following static and flow rate (SPFT, MCFT) measurement protocols at 90 °C, pH 10. The international reference glass (ISG), in the forms of powders with different particle sizes and polished monoliths, and soda-lime glass beads were examined. Although crushed glass grains clearly cannot be assimilated with spheres, it is when using the sample's geometric surface (Sgeo) that the rates measured on powders are closest to those found for monoliths. Overestimation of the reactive surface when using the BET model (SBET) may be due to small physical features at the atomic scale, contributing to BET surface area but not to AFM surface area. Such features are very small compared with the thickness of water ingress in glass (a few hundred nanometers) and should not be considered in rate calculations. With a SBET/Sgeo ratio of 2.5 ± 0.2 for ISG powders, it is shown here that rates measured on powders and normalized to Sgeo should be divided by 1.3 and rates normalized to SBET should be multiplied by 1.9 in order to be compared with rates measured on a monolith. The use of glass beads indicates that the geometric surface gives a good estimation of glass reactive surface if sample geometry can be precisely described. Although data clearly shows the repeatability of measurements, results must be given with a high uncertainty of approximately ±25%.
Rating reliability and representation validity in scenic landscape assessments
James F. Palmer; Robin E. Hoffman
2001-01-01
The US Supreme Court recently determined that experts from all fields of knowledge must demonstrate the reliability and validity of their testimony. While the broader implications of this finding have yet to manifest themselves, it clearly has the potential to challenge all manner of professional practices. This paper explores the reliability of visual quality ratings of...
Can peers rate reliably as experts in small CSCL groups?
Magnisalis, Ioannis; Demetriadis, Stavros; Papadopoulos, Pantelis M.
2016-01-01
of objective anonymous peer rating through a rubric, and (b) provision of peer rating summary information during collaboration. The case study utilized an asynchronous CSCL tool with the two aforementioned capabilities. Initial results showed that peer rating, when anonymous and guided, can be as reliable...
Reliability of the calculated maximal lactate steady state in amateur cyclists
Jennifer Adam
2015-01-01
Full Text Available Complex performance diagnostics in sports medicine should contain maximal aerobic and maximal anaerobic performance. The requirements on appropriate stress protocols are high. To validate a test protocol, quality criteria like objectivity and reliability are necessary. Therefore, the present study was performed with the intention to analyze the reliability of the maximal lactate production rate (VLamax) by using a sprint test, of maximum oxygen consumption (VO2max) by using a ramp test and, based on these data, of the resulting power at the calculated maximum lactate steady state (PMLSS), especially for amateur cyclists. All subjects (n=23, age 26 ± 4 years) were leisure cyclists. On three different days, they first completed a sprint test to approximate VLamax. After 60 min of recreation time, a ramp test to assess VO2max was performed. The results of the VLamax test and the VO2max test and the body weight were used to calculate PMLSS for all subjects. The intraclass correlation (ICC) for VLamax and VO2max was 0.904 and 0.987, respectively; the coefficient of variation (CV) was 6.3% and 2.1%, respectively. Between the measurements, the reliable change index of 0.11 mmol·l-1·s-1 for VLamax and 3.3 ml·kg-1·min-1 for VO2max achieved significance. The mean of the calculated PMLSS was 237 ± 72 W with an RCI of 9 W and, with ICC = 0.985, reached a very high reliability. Both metabolic performance tests and the calculated PMLSS are reliable for leisure cyclists.
Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis
Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen
2012-05-01
The correlation coefficients of random variables of mechanical structures are generally chosen from experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection problem of the correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity, and the criterion of correlation among random variables is shown. The values of the correlation coefficients are obtained according to the proposed principle, and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of the correlation coefficient ρ is very small (on the order of 0.00001), then the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) However, when the difference between ρs, the coefficient that is most sensitive to the reliability, and ρR, the coefficient giving the smallest reliability, is less than 0.001, ρs is suggested to model the dependency of random variables. This ensures the robust quality of the system without loss of the safety requirement. (3) In the case of |Eabs| > 0.001 and also |Erel| > 0.001, ρR should be employed to quantify the correlation among random variables in order to ensure the accuracy of reliability analysis. Application of the proposed approach could provide a practical routine for mechanical design and manufacture to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.
Nuclear reaction rates and opacity in massive star evolution calculations
Bahena, D [Astronomical Institute of the Academy of Sciences, BocnI II 1401, 14131 Praha 4 (Czech Republic); Klapp, J [Instituto Nacional de Investigaciones Nucleares, Km. 36.5 Carr. Mexico-Toluca, 52750 Edo. de Mexico (Mexico); Dehnen, H, E-mail: jaime.klapp@inin.gob.m [Universitaet Konstanz, Fachbereich Physik, Fach M568, D-78457 Konstanz (Germany)
2010-07-01
Nuclear reaction rates and opacity are important parameters in stellar evolution. The input physics in a stellar evolution code determines the main theoretical characteristics of the stellar structure, evolution and nucleosynthesis of a star. For different input physics, in this work we calculate stellar evolution models of very massive first stars during the hydrogen and helium burning phases. We have considered 100 and 200 M_sun galactic and pregalactic stars with metallicity Z = 10^-6 and 10^-9, respectively. The results show important differences from old to new formulations for the opacity and nuclear reaction rates; in particular, the evolutionary tracks are significantly affected, which indicates the importance of using up-to-date and reliable input physics. The triple-alpha reaction activates sooner for pregalactic than for galactic stars.
Reliability and agreement in student ratings of the class environment.
Nelson, Peter M; Christ, Theodore J
2016-09-01
The current study estimated the reliability and agreement of student ratings of the classroom environment obtained using the Responsive Environmental Assessment for Classroom Teaching (REACT; Christ, Nelson, & Demers, 2012; Nelson, Demers, & Christ, 2014). Coefficient alpha, class-level reliability, and class agreement indices were evaluated as each index provides important information for different interpretations and uses of student rating scale data. Data for 84 classes across 29 teachers in a suburban middle school were sampled to derive reliability and agreement indices for the REACT subscales across 4 class sizes: 25, 20, 15, and 10. All participating teachers were White and a larger number of 6th-grade classes were included (42%) relative to 7th- (33%) or 8th- (23%) grade classes. Teachers were responsible for a variety of content areas, including language arts (26%), science (26%), math (20%), social studies (19%), communications (6%), and Spanish (3%). Coefficient alpha estimates were generally high across all subscales and class sizes (α = .70-.95); class-mean estimates were greatly impacted by the number of students sampled from each class, with class-level reliability values generally falling below .70 when class size was reduced from 25 to 20. Further, within-class student agreement varied widely across the REACT subscales (mean agreement = .41-.80). Although coefficient alpha and test-retest reliability are commonly reported in research with student rating scales, class-level reliability and agreement are not. The observed differences across coefficient alpha, class-level reliability, and agreement indices provide evidence for evaluating students' ratings of the class environment according to their intended use (e.g., differentiating between classes, class-level instructional decisions).
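Coefficient alpha, the index reported above, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch (the ratings below are invented, not REACT data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy ratings: 5 students x 4 items on a hypothetical subscale
ratings = [[4, 4, 5, 4],
           [2, 3, 2, 2],
           [5, 5, 4, 5],
           [3, 3, 3, 2],
           [1, 2, 1, 1]]
alpha = cronbach_alpha(ratings)
```

Note that alpha says nothing about class-level reliability or within-class agreement, which is exactly the distinction the study draws.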
Reliability of Calculated Low-Density Lipoprotein Cholesterol.
Meeusen, Jeffrey W; Snozek, Christine L; Baumann, Nikola A; Jaffe, Allan S; Saenger, Amy K
2015-08-15
Aggressive low-density lipoprotein cholesterol (LDL-C)-lowering strategies are recommended for prevention of cardiovascular events in high-risk populations. Guidelines recommend a 30% to 50% reduction in at-risk patients even when LDL-C concentrations are between 70 and 130 mg/dl (1.8 to 3.4 mmol/L). However, calculation of LDL-C by the Friedewald equation is the primary laboratory method for routine LDL-C measurement. We compared the accuracy and reproducibility of calculated LDL-C <130 mg/dl (3.4 mmol/L) to LDL-C measured by β quantification (considered the gold standard method) in 15,917 patients with fasting triglyceride concentrations <400 mg/dl (4.5 mmol/L). Both variation and bias of calculated LDL-C increased at lower values of measured LDL-C. The 95% confidence intervals for a calculated LDL-C of 70 mg/dl (1.8 mmol/L) and 30 mg/dl (0.8 mmol/L) were 60 to 86 mg/dl (1.6 to 2.2 mmol/L) and 24 to 60 mg/dl (0.6 to 1.6 mmol/L), respectively. Previous recommendations have emphasized the requirement for a fasting sample with triglycerides <400 mg/dl (4.5 mmol/L) to calculate LDL-C by the Friedewald equation. However, no recommendations have addressed the appropriate lower reportable limit for calculated LDL-C. In conclusion, calculated LDL-C <30 mg/dl (0.8 mmol/L) should not be reported because of significant deviation from the gold standard measured LDL-C results, and caution is advised when using calculated LDL-C values <70 mg/dl (1.8 mmol/L) to make treatment decisions.
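The Friedewald calculation and the reporting limit the authors argue for can be sketched as follows; the function paraphrases the abstract's conclusions and is not a clinical implementation.

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Calculated LDL-C (mg/dL) by the Friedewald equation:
    LDL-C = TC - HDL-C - TG/5, valid only for fasting TG < 400 mg/dL.
    Returns None where the abstract advises against reporting a value."""
    if triglycerides >= 400:
        return None  # equation not valid; measure LDL-C directly
    ldl = total_chol - hdl - triglycerides / 5.0
    if ldl < 30:
        return None  # deviates too far from beta-quantification results
    return ldl
```

For example, TC 200, HDL 50, TG 150 gives 200 − 50 − 30 = 120 mg/dL, squarely in the range where the calculation is considered dependable.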
Construct Validity and Reliability of the Ethical Behavior Rating Scale.
Hill, Gloria; Swanson, H. Lee
1985-01-01
Results of factor and correlational analyses of the Ethical Behavior Rating Scale (EBRS) are reported. The test-retest method and internal consistency estimates yielded reliability coefficients. Construct validity was determined by correlating the EBRS with items from the Ethical Reasoning Inventory. The EBRS reflects the behavioral aspects of…
Rating Scales for Dystonia in Cerebral Palsy: Reliability and Validity
Monbaliu, E.; Ortibus, E.; Roelens, F.; Desloovere, K.; Deklerck, J.; Prinzie, P.; De Cock, P.; Feys, H.
2010-01-01
Aim: This study investigated the reliability and validity of the Barry-Albright Dystonia Scale (BADS), the Burke-Fahn-Marsden Movement Scale (BFMMS), and the Unified Dystonia Rating Scale (UDRS) in patients with bilateral dystonic cerebral palsy (CP). Method: Three raters independently scored videotapes of 10 patients (five males, five females;…
High rate, high reliability Li/SO2 cells
Chireau, R.
1982-03-01
The use of the lithium/sulfur dioxide system for aerospace applications is discussed. The high rate density of the system is compared with that of some primary systems: mercury-zinc, silver-zinc, and magnesium oxide. Estimates are provided of the storage life and shelf life of typical lithium sulfur dioxide batteries. The design of lithium cells is presented and criteria are given for improving the output of cells in order to achieve high rate and high reliability.
UFMG Sydenham's chorea rating scale (USCRS): reliability and consistency.
Teixeira, Antônio Lúcio; Maia, Débora P; Cardoso, Francisco
2005-05-01
Despite the renewed interest in Sydenham's chorea (SC) in recent years, there were no valid and reliable scales to rate the several signs and symptoms of patients with SC and related disorders. The Universidade Federal de Minas Gerais (UFMG) Sydenham's Chorea Rating Scale (USCRS) was designed to provide a detailed quantitative description of the performance of activities of daily living, behavioral abnormalities, and motor function of subjects with SC. The scale comprises 27 items and each one is scored from 0 (no symptom or sign) to 4 (severe disability or finding). Data from 84 subjects, aged 4.9 to 33.6 years, support the interrater reliability and internal consistency of the scale. The USCRS is a promising instrument for rating the clinical features of SC as well as their functional impact in children and adults.
Calculating lunar retreat rates using tidal rhythmites
Kvale, E.P.; Johnson, H.W.; Sonett, C.P.; Archer, A.W.; Zawistoski, A.N.N.
1999-01-01
Tidal rhythmites are small-scale sedimentary structures that can preserve a hierarchy of astronomically induced tidal periods. They can also preserve a record of periodic nontidal sedimentation. If properly interpreted and understood, tidal rhythmites can be an important component of paleoastronomy and can be used to extract information on ancient lunar orbital dynamics, including changes in Earth-Moon distance through geologic time. Herein we present techniques that can be used to calculate ancient Earth-Moon distances. Each of these techniques, when used on a modern high-tide data set, results in calculated estimates of lunar orbital periods and an Earth-Moon distance that fall well within 1 percent of the actual values. Comparisons to results from modern tidal data indicate that ancient tidal rhythmite data as short as 4 months can provide suitable estimates of lunar orbital periods if these tidal records are complete. An understanding of basic tidal theory allows for the evaluation of completeness of the ancient tidal record as derived from an analysis of tidal rhythmites. Utilizing the techniques presented herein, it appears from the rock record that lunar orbital retreat slowed sometime during the mid-Paleozoic. Copyright © 1999, SEPM (Society for Sedimentary Geology).
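One step in such calculations, converting a lunar orbital period recovered from rhythmites into an Earth-Moon distance, follows from Kepler's third law. A sketch with present-day constants (the 25-day "ancient" month below is purely illustrative, not a value from the paper):

```python
import math

def earth_moon_distance(sidereal_month_s,
                        m_earth=5.972e24, m_moon=7.35e22, g=6.674e-11):
    """Semi-major axis of the lunar orbit from the sidereal month via
    Kepler's third law: a^3 = G (M_E + M_M) T^2 / (4 pi^2)."""
    mu = g * (m_earth + m_moon)
    return (mu * sidereal_month_s**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# The modern sidereal month (~27.32 days) should recover ~3.84e8 m
a_now = earth_moon_distance(27.32 * 86400)

# A shorter ancient month implies a smaller Earth-Moon distance
a_ancient = earth_moon_distance(25.0 * 86400)
```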
Sakamoto, Y
2002-01-01
For the prevention of nuclear disaster, information is needed on the dose equivalent rate distribution inside and outside the site, and on energy spectra. A three-dimensional radiation transport calculation code is a useful tool for site-specific detailed analysis that takes facility structures into account. For predicting individual doses in future countermeasures, it is important to confirm the reliability of methods that evaluate dose equivalent rate distributions and energy spectra with a Monte Carlo radiation transport calculation code, and to identify the factors that influence the dose equivalent rate distribution outside the site. The reliability of the radiation transport calculation code and the factors influencing the dose equivalent rate distribution were examined through analyses of the criticality accident at JCO's uranium processing plant that occurred on September 30, 1999. The radiation transport calculations, including burn-up calculations, were performed using the structural info...
Reliability of single-item ratings of quality in higher education: a replication.
Ginns, Paul; Barrie, Simon
2004-12-01
Single-item ratings of the quality of instructors or subjects are widely used by higher education institutions, yet such ratings are commonly assumed to have inadequate psychometric properties. Recent research has demonstrated that reliability of such ratings can indeed be estimated, using either the correction for attenuation formula or factor analytic methods. This study replicates prior research on the reliability of single-item ratings of quality of instruction, using a different, more student-focussed approach to teaching and learning evaluation than used by previous researchers. Class average data from 1,097 classes, representing responses from 59,815 students, were analysed. At the "class" level of analysis, both methods of estimation suggested the single item of quality had high reliability: .96 using the correction for attenuation formula, and .94 using the factor analytic method. An alternative method of calculating reliability, which takes into account the hierarchical nature of the data, likewise suggested high estimated reliability (.92) of the single-item rating. These results indicate the suitability of the overall class rating for quality improvement in higher education, with a large sample.
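The two estimation ideas named above can be sketched numerically. The correction for attenuation divides an observed correlation by the square root of the product of the two reliabilities; assuming the single item and the full scale measure the same construct (true-score correlation of 1), the same formula can be rearranged to give the item's reliability. The numbers below are illustrative, not the study's.

```python
import math

def corrected_correlation(r_xy, rel_x, rel_y):
    """Correction for attenuation: estimated true-score correlation
    r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

def single_item_reliability(r_item_scale, rel_scale):
    """Rearrangement under the assumption that item and scale measure
    the same construct: rel_item = r_item_scale**2 / rel_scale."""
    return r_item_scale**2 / rel_scale

# Illustrative values: a single overall-quality item correlating .90
# with a multi-item scale whose reliability is .85
rel_item = single_item_reliability(0.90, 0.85)
```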
V. Rusan
2012-01-01
The paper considers calculation methods for the reliability of agricultural distribution power networks using Boolean algebra functions and an analytical method. The reliability of 10 kV overhead line circuits with automatic sectionalizing points and automatic standby activation has been investigated in the paper.
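The analytical building blocks of such network reliability calculations are the series and parallel (redundancy) rules. A minimal sketch with illustrative component reliabilities, not values from the paper:

```python
from functools import reduce

def series_reliability(rs):
    """All components must work: R = product of the R_i."""
    return reduce(lambda acc, r: acc * r, rs, 1.0)

def parallel_reliability(rs):
    """At least one component works: R = 1 - product of (1 - R_i)."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)

# Two line sections in series vs. a line backed by automatic standby
r_series = series_reliability([0.9, 0.9])      # both sections needed
r_redundant = parallel_reliability([0.9, 0.9]) # either path suffices
```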
Kleijn van Willigen, G.K.; Meerveld, H. van
2016-01-01
The reliability and availability of the Dutch storm surge barriers are calculated by probabilistic risk assessment and various underlying risk analysis methods. These calculations, however, focus on the numerical probability of the storm surge barrier functioning adequately, and the implementation o
Cooper, D.K.; Cooper, J.A.; Ferson, S.
1999-01-21
Calculating safety and reliability probabilities with functions of uncertain variables can yield incorrect or misleading results if some precautions are not taken. One important consideration is the application of constrained mathematics for calculating probabilities for functions that contain repeated variables. This paper includes a description of the problem and develops a methodology for obtaining an accurate solution.
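The repeated-variable problem is easy to demonstrate. In the hypothetical expression R = p + p(1 − p), naive interval arithmetic treats each occurrence of p as independent and even produces "probabilities" above 1, while constrained evaluation of the simplified monotone form 2p − p² gives exact bounds. The expression and interval below are my illustration, not the paper's example.

```python
class Interval:
    """Naive interval arithmetic; every occurrence of a variable is
    treated as independent, which inflates bounds when variables repeat."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __rsub__(self, scalar):  # scalar - interval
        return Interval(scalar - self.hi, scalar - self.lo)

def r_naive(p):
    # R = p + p*(1 - p), evaluated with naive interval arithmetic
    return p + p * (1.0 - p)

def r_constrained(lo, hi):
    # The same expression simplifies to R(p) = 2p - p^2, monotone on
    # [0, 1], so exact ("constrained") bounds come from the endpoints.
    f = lambda p: 2.0 * p - p * p
    return f(lo), f(hi)

naive = r_naive(Interval(0.7, 0.9))   # upper bound exceeds 1.0
exact = r_constrained(0.7, 0.9)       # (0.91, 0.99)
```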
Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission
Sajid Nazir
2012-01-01
A reliable video communication system is proposed based on the data partitioning feature of H.264/AVC, used to create a layered stream, and LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive low-complexity solution to varying channel conditions. The comparison of the results of the proposed scheme is also provided for slice-partitioned H.264/AVC data. Simulation results show competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. The simulation results also demonstrate that a high visual quality video transmission can be maintained despite the adverse effect of varying channel conditions and the number of decoding failures can be reduced.
Validation and Implementation of Uncertainty Estimates of Calculated Transition Rates
Jörgen Ekman
2014-05-01
Uncertainties of calculated transition rates in LS-allowed electric dipole transitions in boron-like O IV and carbon-like Fe XXI are estimated using an approach in which differences in line strengths calculated in length and velocity gauges are utilized. Estimated uncertainties are compared and validated against several high-quality theoretical data sets in O IV, and implemented in large scale calculations in Fe XXI.
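A gauge-disagreement uncertainty of this kind is often taken as the relative difference between the length- and velocity-gauge line strengths; the exact estimator used in the paper may differ, so the function below is a hedged sketch with invented numbers.

```python
def gauge_uncertainty(s_length, s_velocity):
    """Relative uncertainty estimate from gauge disagreement:
    dS = |S_l - S_v| / max(S_l, S_v). Identical gauges give 0."""
    return abs(s_length - s_velocity) / max(s_length, s_velocity)

# Illustrative line strengths in the two gauges (arbitrary units)
du = gauge_uncertainty(1.00, 0.95)  # 5% disagreement
```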
Dose Rate Calculations for Rotary Mode Core Sampling Exhauster
Foust, D J
2000-01-01
This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.
Walker, Benjamin; Alavifard, Sepand; Roberts, Surain; Lanes, Andrea; Ramsay, Tim; Boet, Sylvain
2016-06-01
We investigated the inter-rater reliability of Web of Science (WoS) and Scopus when calculating the h-index of 25 senior scientists in the Clinical Epidemiology Program of the Ottawa Hospital Research Institute. Bibliometric information and the h-indices for the subjects were computed by four raters using the automatic calculators in WoS and Scopus. Correlation and agreement between ratings was assessed using Spearman's correlation coefficient and a Bland-Altman plot, respectively. Data could not be gathered from Google Scholar due to feasibility constraints. The Spearman's rank correlation between the h-index of scientists calculated with WoS was 0.81 (95% CI 0.72-0.92) and with Scopus was 0.95 (95% CI 0.92-0.99). The Bland-Altman plot showed no significant rater bias in WoS and Scopus; however, the agreement between ratings is higher in Scopus compared to WoS. Our results showed a stronger relationship and increased agreement between raters when calculating the h-index of a scientist using Scopus compared to WoS. The higher inter-rater reliability and simple user interface used in Scopus may render it the more effective database when calculating the h-index of senior scientists in epidemiology. © 2016 Health Libraries Group.
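The h-index itself is simple to compute from a list of citation counts, which is essentially what the WoS and Scopus calculators automate:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

The inter-rater disagreement studied above comes not from this arithmetic but from the underlying citation records the two databases return for the same author.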
Experiences with leak rate calculations methods for LBB application
Grebner, H.; Kastner, W.; Hoefler, A.; Maussner, G. [and others]
1997-04-01
In this paper, three leak rate computer programs for the application of leak before break analysis are described and compared. The programs are compared to each other and to results of an HDR Reactor experiment and two real crack cases. The programs analyzed are PIPELEAK, FLORA, and PICEP. Generally, the different leak rate models are in agreement. To obtain reasonable agreement between measured and calculated leak rates, it was necessary to also use data from detailed crack investigations.
Calculation of Spot Reliability Evaluation Scores (SRED) for DNA Microarray Data.
Shimokawa, Kazuro; Kodzius, Rimantas; Matsumura, Yonehiro; Hayashizaki, Yoshihide
2008-02-01
INTRODUCTION: In terms of cost per measurement, the use of DNA microarrays for comprehensive and quantitative expression measurements is vastly superior to other methods such as Northern blotting or quantitative reverse transcriptase polymerase chain reaction (QRT-PCR). However, the output values of DNA microarrays are not always highly reliable or accurate compared with other techniques, and the output data sometimes consist of measurements of relative expression (treated sample vs. untreated) rather than absolute expression values as desired. In effect, some measurements from some laboratories do not represent absolute expression values (such as the number of transcripts) and as such are experimentally deficient. This protocol addresses one problem in some microarray data: the absence of accurate measurements. The spot reliability evaluation score for DNA microarrays (SRED) offers a reliability value for each spot in the microarray. SRED does not require an entire microarray to assess the reliability, but rather analyzes the reliability of individual spots of the microarray. The calculation of a reliability index can be used for different microarray systems, which facilitates the analysis of multiple microarray data sets from different experimental platforms.
An interval-valued reliability model with bounded failure rates
Kozine, Igor; Krymsky, Victor
2012-01-01
The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure … function if only partial failure information is available. An example is provided. © 2012 Taylor and Francis Group, LLC.
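Under the simplest reading, a constant failure rate known only to lie in an interval translates directly into interval-valued survival probabilities, since R(t) = exp(−λt) is monotone in λ. A sketch with illustrative numbers (the paper's model is more general than this):

```python
import math

def reliability_bounds(t, lam_lo, lam_hi):
    """With a constant failure rate known only to lie in
    [lam_lo, lam_hi], the survival probability R(t) = exp(-lam * t)
    is bounded by evaluating at the endpoint rates."""
    return math.exp(-lam_hi * t), math.exp(-lam_lo * t)

# Failure rate between 1e-4 and 5e-4 per hour, mission time 100 h
lo, hi = reliability_bounds(t=100.0, lam_lo=1e-4, lam_hi=5e-4)
```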
Revisiting Frequency Reuse towards Supporting Ultra-Reliable Ubiquitous-Rate Communication
Park, Jihong; Kim, Dong Min; Popovski, Petar
2017-01-01
One of the goals of 5G wireless systems stated by the NGMN alliance is to provide moderate rates (50+ Mbps) everywhere and with very high reliability. We term this service Ultra-Reliable Ubiquitous-Rate Communication (UR2C). This paper investigates the role of frequency reuse in supporting UR2C … downlink rate. To fairly capture this reliability-rate tradeoff, we propose the ubiquitous rate, defined as the maximum downlink rate whose required SIR can be achieved with ultra-reliability. By using stochastic geometry, we derive the closed-form ubiquitous rate as well as the optimal frequency reuse rules for UR2C…
Improved resonance reaction rate calculation for lattice physics subsystem
Finch, D.R.
1974-02-08
The resonance capture calculations of the HAMMER System and HAMBUR System are derived from a consistent statement of the integral slowing down equation and definitions of the resonance integral. The assumptions made in these treatments are explicitly stated, and an attempt is made to estimate the possible error in the resonance integral arising from these assumptions. This analysis is made to pinpoint those parts of the calculation that can be improved and updated. Based on the analysis of existing calculations, a method of calculation is derived which avoids most of the problems encountered in HAMMER and HAMBUR. The chief improvements that result are as follows: Careful attention is paid to calculation of the resonance flux, as most errors in existing calculations result from consistently overpredicting fluxes in all regions of a lattice cell. The calculation can be modified to produce as crude or detailed a resonance calculation, at the expense of computer time, as required by the user. Resonances that overlap group boundaries contribute the correct contribution to each group's reaction rates. Overlap between resonances of different isotopes is correctly accounted for. Up-to-date resonance formalisms are used, including the Adler-Adler multi-level formulations. Provision is made to easily add new formalisms when required. Streaming effects from neutrons leaking into a cell may optionally be included in the calculation of resonance reaction rates. A complete description of the physics contained in this new computational module is provided along with additional information on the numerical techniques employed in the module.
SEU rate calculation with GEANT4 (comparison with CREME 86)
Inguimbert, C
2004-01-01
This paper reports on single-event upset (SEU) rate calculations using the GEANT4 code. Single-event effect rate modeling can be performed using various approaches. In this paper, we propose to compare the standard rectangular parallelepiped (RPP) Cosmic Ray Effects on Micro-Electronics code (CREME86) model with our direct Monte Carlo simulation using the GEANT4 (radiation transport code developed by CERN) software. The results obtained on two device types are in good agreement with CREME86. (14 refs).
Subdaily evapotranspiration rate calculation from streamflow summer diel signal
Gribovszki, Z.; Kalicz, P.; Szilágyi, J.
2009-04-01
Diel signals of hydrological variables (e.g., shallow groundwater level or streamflow rate) are rarely investigated in the hydrologic literature, although these short-term fluctuations may incorporate useful information for the characterization of hydro-ecological systems. Riparian vegetation (especially forest) typically has a great influence on groundwater level and groundwater-sustained baseflow; therefore, calculation of correct evapotranspiration rates is very important for nature protection tasks and water resources management. We recently developed a new technique to calculate daily or even subdaily evapotranspiration rates from groundwater-level measurements, and that method is now modified to estimate evapotranspiration rates from the baseflow diel signal only. The method was successfully tested with hydro-meteorological data from the Hidegvíz Valley experimental catchment in the Sopron Hills at the western border of Hungary. The evapotranspiration rates calculated from the groundwater signal only are typically an order of magnitude higher than those obtained with an already existing method. With the application of our new technique exploiting the baseflow diel signal of the stream, evapotranspiration rates very similar to those gained from groundwater level readings and the Penman-Monteith equation can be obtained. Keywords: baseflow diel signal, evapotranspiration, riparian zone
Assessing validity and reliability of Resting Metabolic Rate in six gas analysis systems
Cooper, Jamie A.; Watras, Abigail C.; O’Brien, Matthew J.; Luke, Amy; Dobratz, Jennifer R.; Earthman, Carrie P.; Schoeller, Dale A.
2008-01-01
The Deltatrac Metabolic Monitor (DTC), one of the most popular indirect calorimetry systems for measuring resting metabolic rate (RMR) in human subjects, is no longer being manufactured. This study compared five different gas analysis systems to the DTC. Resting metabolic rate was measured by the DTC and at least one other instrument at three study sites for a total of 38 participants. The five indirect calorimetry systems included: MedGraphics CPX Ultima, MedGem, Vmax Encore 29 System, TrueOne 2400, and Korr ReeVue. Validity was assessed using paired t-tests to compare means while reliability was assessed by using both paired t-tests and root mean square calculations with F tests for significance. Within-subject comparisons for validity of RMR revealed a significant difference between the DTC and Ultima. Bland-Altman plot analysis showed significant bias with increasing RMR values for the Korr and MedGem. Respiratory exchange ratio (RER) analysis showed a significant difference between the DTC and the Ultima and a trend for a difference with the Vmax (p = 0.09). Reliability assessment for RMR revealed that all instruments had a significantly larger coefficient of variation (CV) (ranging from 4.8% to 10.9%) for RMR compared to the 3.0% CV for the DTC. Reliability assessment for RER data showed none of the instrument CVs were significantly larger than the DTC CV. The results were quite disappointing, with none of the instruments equaling the within-person reliability of the DTC. The TrueOne and Vmax were the most valid instruments in comparison with the DTC for both RMR and RER assessment. Further testing is needed to identify an instrument with the reliability and validity of the DTC. PMID:19103333
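A root-mean-square pooling of within-subject CVs is one common way to express the kind of reliability comparison described above; the sketch below uses invented RMR values, not study data, and may differ in detail from the authors' calculation.

```python
import statistics

def rms_cv(repeated_measures):
    """Within-subject coefficient of variation pooled across subjects by
    root mean square: CV_i = sd_i / mean_i, RMS-CV = sqrt(mean(CV_i^2))."""
    cvs_sq = []
    for measures in repeated_measures:
        cv = statistics.stdev(measures) / statistics.mean(measures)
        cvs_sq.append(cv * cv)
    return (sum(cvs_sq) / len(cvs_sq)) ** 0.5

# Hypothetical RMR measurements (kcal/day), three visits per subject
subjects = [[1500, 1550, 1480], [1800, 1740, 1820], [1320, 1400, 1350]]
cv_fraction = rms_cv(subjects)  # multiply by 100 for a percentage
```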
A model for reliability analysis and calculation applied in an example from chemical industry
Pejović Branko B.
2010-01-01
The subject of the paper is reliability design in polymerization processes that occur in reactors of the chemical industry. The designed model is used to determine the characteristics and indicators of reliability, which enables the determination of the basic factors that result in poor development of a process. This would reduce the anticipated losses through the ability to control them, as well as enable the improvement of production quality, which is the major goal of the paper. The reliability analysis and calculation use the deductive method, based on designing a fault tree scheme for the system built on inductive conclusions. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper eventually gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which served to obtain complete information on the probability of the top event in the process, as well as objective decision making and alternative solutions.
Methods for calculating SEU rates for bipolar and NMOS circuits
McNulty, P. J.; Abdel-Kader, W. G.; Bisgrove, J. M.
1985-12-01
Computer codes developed at Clarkson for simulating charge generation by proton-induced nuclear reactions in well-defined silicon microstructures can be used to calculate SEU rates for specific devices when the critical charge and the dimensions of all SEU sensitive junctions on the device are known, provided one can estimate the contribution from externally-generated charge which enters the sensitive junction by drift and diffusion. Calculations for two important bipolar devices, the AMD 2901B bit slice and the Fairchild 93L422 RAM, for which the dimensions of the sensitive volumes were estimated from available heavy-ion test data, have been found to be in agreement with experimental data. Circuit data for the Intel 2164A, an alpha sensitive dRAM, was provided by the manufacturer. Calculations based on crude assumptions regarding which nuclear recoils and which alphas trigger upsets in the 2164A were found to agree with experimental data.
Production rate calculations for a secondary beam facility
Jiang, C.L.; Back, B.B.; Rehm, K.E.
1995-08-01
In order to select the most cost-effective method for the production of secondary ion beams, yield calculations for a variety of primary beams were performed ranging in mass from protons to {sup 18}O with energies of 100-200 MeV/u. For comparison, production yields for 600-1000 MeV protons were also calculated. For light-ion (A ≤ {sup 4}He) induced reactions at energies above 50 MeV/u the LAHET code was used, while the low energy calculations were performed with LPACE. Heavy-ion-induced production rates were calculated with the ISAPACE program. The results of these codes were checked against each other and wherever possible a comparison with experimental data was performed. These comparisons extended to very exotic reaction channels, such as the production of {sup 100}Sn from {sup 112}Sn and {sup 124}Xe induced fragmentation reactions. These comparisons indicate that the codes are able to predict production rates to within one order of magnitude.
Fine-grid calculations for stellar electron and positron capture rates on Fe isotopes
Nabi, Jameel-Un, E-mail: jameel@giki.edu.pk [Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Faculty of Engineering Sciences (Pakistan); Tawfik, Abdel Nasser, E-mail: a.tawfik@eng.mti.edu.eg [MTI University, Egyptian Center for Theoretical Physics (ECTP) (Egypt)
2013-03-15
The acquisition of precise and reliable nuclear data is a prerequisite to success for stellar evolution and nucleosynthesis studies. Core-collapse simulators find it challenging to generate an explosion from the collapse of the core of massive stars. It is believed that a better understanding of the microphysics of core-collapse can lead to successful results. The weak interaction processes are able to trigger the collapse and control the lepton-to-baryon ratio (Y{sub e}) of the core material. It is suggested that the temporal variation of Y{sub e} within the core of a massive star has a pivotal role to play in the stellar evolution and a fine-tuning of this parameter at various stages of presupernova evolution is the key to generate an explosion. During the presupernova evolution of massive stars, isotopes of iron, mainly {sup 54-56}Fe, are considered to be key players in controlling the Y{sub e} ratio via electron capture on these nuclides. Recently an improved microscopic calculation of weak-interaction-mediated rates for iron isotopes was introduced using the proton-neutron quasiparticle random-phase-approximation (pn-QRPA) theory. The pn-QRPA theory allows a microscopic state-by-state calculation of stellar capture rates, which greatly increases the reliability of calculated rates. The results were suggestive of some fine-tuning of the Y{sub e} ratio during various phases of stellar evolution. Here we present for the first time the fine-grid calculation of the electron and positron capture rates on {sup 54-56}Fe. The sensitivity of the pn-QRPA calculated capture rates to the deformation parameter is also studied in this work. Core-collapse simulators may find this calculation suitable for interpolation purposes and for necessary incorporation in the stellar evolution codes.
Excellent reliability of the Hamilton Depression Rating Scale (HDRS-21) in Indonesia after training
Istriana, E.; Kurnia, A.; Weijers, A.; Hidayat, T.; Pinxten, W.J.L.; Jong, C.A.J. de; Schellekens, A.F.A.
2013-01-01
Introduction: The Hamilton Depression Rating Scale (HDRS) is the most widely used depression rating scale worldwide. Reliability of the HDRS has been reported mainly from Western countries. The current study tested the reliability of HDRS ratings among psychiatric residents in Indonesia, before and after training.
Efficient calculation of rate constants: Downhill versus uphill sampling
Klenin, Konstantin V.
2014-08-01
The classical transition state theory (TST), together with the notion of the transmission coefficient, provides a useful tool for the calculation of rate constants for rare events. However, in complex biomolecular reactions, such as protein folding, it is difficult to find a good reaction coordinate, so the transition state is ill-defined. In this case, other approaches are more popular, such as transition interface sampling (TIS) and forward flux sampling (FFS). Here, we show that the algorithms developed within the frameworks of TIS and FFS can be successfully applied, after a modification, to the calculation of the transmission coefficient. The new procedure (which we call "downhill sampling") is more efficient in comparison with traditional TIS and FFS ("uphill sampling") even if the reaction coordinate is bad. We also propose a new computational scheme that combines the advantages of TST, TIS, and FFS.
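In FFS, the rate constant is assembled as the flux through the first interface times the product of conditional interface-crossing probabilities, and analogous bookkeeping underlies the TST rate corrected by a transmission coefficient. A sketch of that arithmetic with invented numbers:

```python
def ffs_rate(flux_0, crossing_probs):
    """Forward flux sampling rate estimate:
    k = Phi_0 * product over interfaces of P(lambda_{i+1} | lambda_i)."""
    k = flux_0
    for p in crossing_probs:
        k *= p
    return k

# Illustrative: flux of 0.1/ns through the first interface, then
# conditional crossing probabilities for three further interfaces
k = ffs_rate(0.1, [0.5, 0.2, 0.1])  # 1e-3 events per ns
```

The efficiency question the paper addresses is how cheaply those conditional probabilities (or the transmission coefficient) can be estimated, not the final multiplication.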
Calculating Outcrossing Rates used in Decision Support Systems for Ships
Nielsen, Ulrik Dam
2008-01-01
Onboard decision support systems (DSS) are used to increase the operational safety of ships. Ideally, DSS can estimate, in the statistical sense, future ship responses on a time scale of the order of 1-3 hours, taking into account speed and course changes. The calculations depend on both operational and environmental parameters that are known only in the statistical sense. The present paper suggests a procedure to incorporate random variables and associated uncertainties in calculations of outcrossing rates, which are the basis for risk-based DSS. The procedure is based on parallel system analysis, and the paper derives and describes the main ideas. The concept is illustrated by an example, where the limit state of a non-linear ship response is considered. The results from the parallel system analysis are in agreement with corresponding Monte Carlo simulations. However, the computational…
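At the core of such methods is counting (or predicting, e.g. via Rice's formula for Gaussian processes) upward crossings of a response threshold per unit time. A minimal numerical sketch on a deterministic signal, purely for illustration:

```python
import numpy as np

def count_upcrossings(x, threshold):
    """Number of upward crossings of a threshold in a sampled signal:
    sample i is below the threshold and sample i+1 is at or above it."""
    below = x[:-1] < threshold
    above = x[1:] >= threshold
    return int(np.sum(below & above))

# Deterministic check: a sine over three periods upcrosses 0.5 three times
t = np.linspace(0.0, 3.0, 3001)
x = np.sin(2.0 * np.pi * t)
n_up = count_upcrossings(x, 0.5)
rate = n_up / (t[-1] - t[0])  # outcrossings per unit time
```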
Neutrons in the moon. [neutron flux and production rate calculations
Kornblum, J. J.; Fireman, E. L.; Levine, M.; Aronson, A.
1973-01-01
Neutron fluxes for energies between 15 MeV and thermal at depths of 0 to 300 g/sq cm in the moon are calculated by the discrete ordinates method with the ANISN code, using the energy spectrum of Lingenfelter et al. (1972). A total neutron-production rate for the moon of 26 plus or minus neutrons/sq cm sec is determined from the Ar-37 activity measurements in the Apollo 16 drill string, which are found to have a depth dependence in accordance with a neutron source function that decreases exponentially with an attenuation length of 155 g/sq cm.
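The exponentially decreasing source function mentioned above can be normalized so that it integrates to the quoted total production rate; the normalization step below is my illustration of that statement, not the paper's fit.

```python
import math

def production_rate_at_depth(total_rate, attenuation_length, depth):
    """Depth profile of a source decaying exponentially with depth:
    S(d) = S0 * exp(-d / L), with S0 = total_rate / L chosen so the
    profile integrates over depth to the total column production rate."""
    s0 = total_rate / attenuation_length
    return s0 * math.exp(-depth / attenuation_length)

# Numbers from the abstract: 26 n/(cm^2 s) total, 155 g/cm^2 attenuation
surface = production_rate_at_depth(26.0, 155.0, 0.0)
deep = production_rate_at_depth(26.0, 155.0, 300.0)
```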
Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA
Mareuil, Fabien [Institut Pasteur, Cellule d' Informatique pour la Biologie (France); Malliavin, Thérèse E.; Nilges, Michael; Bardiaux, Benjamin, E-mail: bardiaux@pasteur.fr [Institut Pasteur, Unité de Bioinformatique Structurale, CNRS UMR 3528 (France)
2015-08-15
In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for the distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD–NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD–NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.
Reliable method for calculating the center of rotation in parallel-beam tomography.
Vo, Nghia T; Drakopoulos, Michael; Atwood, Robert C; Reinhard, Christina
2014-08-11
High-throughput processing of parallel-beam X-ray tomography at synchrotron facilities is lacking a reliable and robust method to determine the center of rotation in an automated fashion, i.e. without the need for a human scorer. Well-known techniques based on center-of-mass calculation, image registration, or reconstruction evaluation work well under favourable conditions, but they fail in cases where samples are larger than the field of view, when the projections show low signal-to-noise, or when optical defects dominate the contrast. Here we propose an alternative technique which is based on the Fourier analysis of the sinogram. Our technique shows excellent performance, particularly on challenging data.
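The center-of-mass baseline the authors mention can be sketched in a few lines: for a parallel-beam sinogram, the row centroids trace a sinusoid whose constant offset is the rotation axis. The sketch below is a minimal illustration of that baseline (plain NumPy, a hypothetical synthetic sinogram), not the authors' Fourier-based method.

```python
import numpy as np

def cor_from_centroids(sinogram, angles):
    """Estimate the center of rotation of a parallel-beam sinogram
    from its row centroids, which trace c(theta) = c0 + A*sin(theta)
    + B*cos(theta); the offset c0 is the rotation axis in pixels."""
    pixels = np.arange(sinogram.shape[1])
    centroids = (sinogram * pixels).sum(axis=1) / sinogram.sum(axis=1)
    # Least-squares fit of the sinusoidal centroid trajectory.
    design = np.column_stack(
        [np.ones_like(angles), np.sin(angles), np.cos(angles)])
    coeffs, *_ = np.linalg.lstsq(design, centroids, rcond=None)
    return coeffs[0]  # c0, the center of rotation

# Synthetic sinogram: a point object circling a known axis at pixel 70.
angles = np.linspace(0, np.pi, 180, endpoint=False)
sino = np.zeros((180, 128))
for i, th in enumerate(angles):
    sino[i, int(round(70.0 + 25.0 * np.cos(th)))] = 1.0
print(round(cor_from_centroids(sino, angles), 1))
```

This is exactly the kind of method that degrades when the sample leaves the field of view (the centroid is then biased), which is the failure mode the abstract describes.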
Accurate, robust, and reliable calculations of Poisson-Boltzmann binding energies.
Nguyen, Duc D; Wang, Bao; Wei, Guo-Wei
2017-05-15
The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and the binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with solvent-excluded surfaces for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at a grid spacing of 1.0 Å, compared to ΔGel at 0.2 Å, averaged over 153 molecules is less than 0.2%. Our results indicate that a grid spacing of 0.6 Å ensures accuracy and reliability in ΔΔGel calculations. In fact, a grid spacing of 1.1 Å appears to deliver adequate accuracy for high-throughput screening. © 2017 Wiley Periodicals, Inc.
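The grid-dependence check described above amounts to comparing coarse-grid energies against fine-grid references and averaging the relative absolute error over a molecule set. A minimal sketch, with illustrative numbers rather than actual MIBPB output:

```python
def mean_relative_error(coarse, fine):
    """Average relative absolute error of coarse-grid energies
    against fine-grid reference energies (same units for both)."""
    errs = [abs(c - f) / abs(f) for c, f in zip(coarse, fine)]
    return sum(errs) / len(errs)

# Hypothetical solvation energies (kcal/mol) for three molecules at
# 1.0 A and 0.2 A grid spacing -- invented numbers for illustration.
dG_coarse = [-120.3, -85.1, -240.9]
dG_fine = [-120.1, -85.2, -241.2]
print(mean_relative_error(dG_coarse, dG_fine))
```

A threshold on this quantity (e.g. the 0.2% reported in the abstract) is what justifies running production screens at the coarser, cheaper spacing.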
Relativistic collision rate calculations for electron-air interactions
Graham, G. [EG and G Energy Measurements, Inc., Los Alamos, NM (United States); Roussel-Dupre, R. [Los Alamos National Lab., NM (United States). Space Science and Technologies
1992-12-16
The most recent data available on differential cross sections for electron-air interactions are used to calculate the avalanche, momentum transfer, and energy loss rates that enter into the fluid equations. Data for the important elastic, inelastic, and ionizing processes are generally available out to electron energies of 1-10 keV. Prescriptions for extending these cross sections to the relativistic regime are presented. The angular dependence of the cross sections is included where data are available, as is the doubly differential cross section for ionizing collisions. The collision rates are computed by taking moments of the Boltzmann collision integrals with the assumption that the electron momentum distribution function is given by the Juettner distribution function, which satisfies the relativistic H-theorem and reduces to the familiar Maxwellian velocity distribution in the nonrelativistic regime. The distribution function is parameterized in terms of the electron density, mean momentum, and thermal energy, and the rates are therefore computed on a two-dimensional grid as a function of mean kinetic energy and thermal energy.
Optimal power flow calculation for power system with UPFC considering load rate equalization
Liu, Jiankun; Chen, Jing; Zhang, Qingsong
2017-06-01
The unified power flow controller (UPFC) can rapidly and flexibly change system electrical quantities (such as voltage, impedance, and phase angle) while maintaining the security, stability, and reliability of the power system, thereby increasing transmission power and transmission line utilization and enhancing the supply capacity of the power grid. Based on a thorough study of the steady-state model of the UPFC, an optimal power flow model with UPFC is established, taking load rate equalization as the objective function, and a simplified interior point method is used to solve it. Finally, the optimal power flow is calculated for 24 consecutive sections of actual data from a typical day on the Nanjing network. The results show that optimal power flow calculation with UPFC can optimize load rate equalization while eliminating line overload and improving the voltage level of the local power network.
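The abstract does not state its objective function explicitly, but "load rate equalization" is commonly formulated as minimizing the dispersion of per-line loading ratios. A hedged sketch of one plausible formulation (names and numbers are illustrative, not the paper's model):

```python
def load_rate_equalization(flows_mva, ratings_mva):
    """One plausible load-rate-equalization objective: the variance
    of per-line loading ratios (flow / thermal rating). Smaller
    values mean more evenly loaded lines. The paper's exact
    objective may differ."""
    rates = [f / r for f, r in zip(flows_mva, ratings_mva)]
    mean = sum(rates) / len(rates)
    return sum((x - mean) ** 2 for x in rates) / len(rates)

# Same total transfer, two dispatches: evenly vs. unevenly loaded.
balanced = load_rate_equalization([50, 52, 48], [100, 100, 100])
unbalanced = load_rate_equalization([90, 40, 20], [100, 100, 100])
assert balanced < unbalanced
```

Minimizing such an objective pushes flow off heavily loaded corridors, which is consistent with the overload elimination reported in the results.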
Precision decay rate calculations in quantum field theory
Andreassen, Anders; Frost, William; Schwartz, Matthew D
2016-01-01
Tunneling in quantum field theory is worth understanding properly, not least because it controls the long-term fate of our universe. There are, however, a number of features of tunneling rate calculations which lack a desirable transparency, such as the necessity of analytic continuation, the appropriateness of using an effective instead of a classical potential, and the sensitivity to short-distance physics. This paper attempts to review in pedagogical detail the physical origin of tunneling and its connection to the path integral. Both the traditional potential-deformation method and a recent more direct propagator-based method are discussed. Some new insights from using approximate semi-classical solutions are presented. In addition, we explore the sensitivity of the lifetime of our universe to short-distance physics, such as quantum gravity, emphasizing a number of important subtleties.
Examining Reliability of Reading Comprehension Ratings of Fifth Grade Students' Oral Retellings
Bernfeld, L. Elizabeth Shirley; Morrison, Timothy G.; Sudweeks, Richard R.; Wilcox, Brad
2013-01-01
The purpose of this study was to rate oral retellings of fifth graders to determine how passages, raters, and rating occasions affect those ratings, and to identify what combination of those elements produce reliable retelling ratings. A group of 36 fifth grade students read and orally retold three contemporary realistic fiction passages. Two…
Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.
2016-07-01
This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.
Simoes Filho, Salvador [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)
2003-07-01
This report presents a calculation method for the failure rate of submarine flexible pipes at PETROBRAS using the Weibull distribution, in comparison with the traditional PARLOC method, which uses the exponential distribution. PARLOC (Pipeline and Riser Loss of Containment Study) is an offshore pipeline reliability database covering pipes of the North Sea. (author)
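The substantive difference between the two approaches is the hazard function: a Weibull hazard can rise or fall with time (wear-out or infant mortality), while the exponential model behind PARLOC-style analyses assumes a constant failure rate. A small sketch (parameters are illustrative, not PETROBRAS data):

```python
def weibull_hazard(t, shape, scale):
    """Hazard (instantaneous failure rate) of a Weibull life
    distribution: h(t) = (shape/scale) * (t/scale)**(shape - 1).
    With shape == 1 this reduces to the constant exponential
    hazard 1/scale."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Illustrative comparison: a wear-out mode (shape > 1) vs. the
# constant-rate exponential assumption, over a 10-year scale.
for t in (1.0, 5.0, 10.0):
    wearing = weibull_hazard(t, shape=2.5, scale=10.0)
    constant = weibull_hazard(t, shape=1.0, scale=10.0)
    assert constant == 0.1          # exponential: flat hazard
print("Weibull wear-out hazard grows with age; exponential stays flat")
```

For aging flexible pipes, a shape parameter above 1 makes the Weibull fit predict rising failure rates that a constant-rate model would understate.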
Absolute and Relative Reliability of Percentage of Syllables Stuttered and Severity Rating Scales
Karimi, Hamid; O'Brian, Sue; Onslow, Mark; Jones, Mark
2014-01-01
Purpose: Percentage of syllables stuttered (%SS) and severity rating (SR) scales are measures in common use to quantify stuttering severity and its changes during basic and clinical research conditions. However, their reliability has not been assessed with indices measuring both relative and absolute reliability. This study was designed to provide…
Reliability and Validity of a Breast Self-Examination Proficiency Rating Instrument.
Wood, Robin Y.
1994-01-01
The reliability and validity of a newly constructed instrument, the Breast Self-Examination Proficiency Rating Instrument, was tested with 84 instructed and 80 uninstructed nursing students. Results support beginning reliability and preliminary validity when the instrument is used in a controlled setting. (SLD)
MacDonell, Christopher William; Ivanova, Tanya Dimitrova; Garland, S Jayne
2007-05-15
The reliability of the afterhyperpolarization (AHP) time course, as estimated by the interval death rate (IDR) analysis, was evaluated both within and between investigators. The IDR analysis uses the firing history of a single motor unit train at low tonic firing rates to calculate an estimate of the AHP time course [Matthews PB. Relationship of firing intervals of human motor units to the trajectory of post-spike after-hyperpolarization and synaptic noise. J Physiol 1996;492:597-628]. Single motor unit trains were collected from the tibialis anterior (TA) to determine intra-rater reliability (within investigator). Data from the first dorsal interosseus (FDI), collected in a previous investigation [Gossen ER, Ivanova TD, Garland SJ. The time course of the motoneurone afterhyperpolarization is related to motor unit twitch speed in human skeletal muscle. J Physiol 2003;552:657-64], were used to examine the inter-rater reliability (between investigators). The lead author was blinded to the original time constants and file identities for the re-analysis. The intra-rater reliability of the AHP time constant in the TA data was high (r(2)=0.88), and the inter-rater reliability in the FDI data was also strong (r(2)=0.92). It is concluded that the interval death rate analysis is a reliable tool for estimating the AHP time course with experienced investigators.
Combining of different data pools for calculating a reliable POD for real defects
Kanzler, Daniel; Müller, Christina; Pitkänen, Jorma
2015-03-01
Real defects are essential for the evaluation of the reliability of non-destructive testing (NDT) methods, especially in relation to the integrity of components. However, in most cases the number of available real defects is not sufficient to evaluate the system. Model-assisted approaches and transfer functions are one way to handle that challenge. This study is focused on combining different data pools to create a sufficient amount of data for the reliability estimation. A widespread approach for calculating the Probability of Detection (POD) was applied to a radiographic testing (RT) method. The highest contrast-to-noise ratio (CNR) of each indication is usually selected as the signal in the "â vs. a" (signal-response) approach for RT. When combining real and artificial defects (flat bottom holes, side drilled holes, flat bottom squares, notches, etc.) in RT, the highest signals are close to each other, but the process of creating and evaluating real defects is much more complex. The solution is seen in the combination of real and artificial data using a weighted least squares approach. The weights for real or artificial data were based on the importance, the value, and the different detection behavior of the different data. For comparison, the alternative combination through Bayesian updating was also applied. As verification, a data pool with a large amount of real data was available. In an advanced approach for evaluating the digital RT data, the size of the indication (perpendicular to the X-ray beam) was introduced as additional information. The signal now consists of the CNR and the area of the indication. The detectability changes depending on the area of the indication, a fact that was ignored in the previous POD calculations for RT. This points out that a weighted least squares approach to pool the data might no longer be adequate. The Bayesian updating of the estimated parameters of the relationship between the signal field (the area of the indication) and
Reliability of familiarity rating of ordinary Japanese words for different years and places.
Amano, Shigeaki; Kasahara, Kaname; Kondo, Tadahisa
2007-11-01
Two sets of word-familiarity ratings were measured in different years (1995 and 2002) and places (Kanto and Kinki, in Japan) for a large number of Japanese words, to examine the reliability of familiarity ratings. The correlation between the word familiarities of the two sets was extremely high (r = .958, N = 10,515). It is suggested that familiarity rating, at least for ordinary words found in a dictionary, is very reliable and not greatly affected by differences in years and places.
Calculation of Reactive-evaporation Rates of Chromia
Holcomb, G.R.
2008-04-01
A methodology is developed to calculate Cr-evaporation rates from Cr2O3 with a flat planar geometry. Variables include temperature, total pressure, gas velocity, and gas composition. The methodology was applied to solid-oxide fuel cell conditions for metallic interconnects and to advanced steam-turbine conditions. The high velocities and pressures of the advanced steam turbine led to evaporation predictions as high as 5.18 × 10^-8 kg/m2/s of CrO2(OH)2(g) at 760 °C and 34.5 MPa. This is equivalent to 0.080 mm per year of solid Cr loss. Chromium evaporation is expected to be an important oxidation mechanism with the types of nickel-base alloys proposed for use above 650 °C in advanced steam boilers and turbines. It is shown that laboratory experiments, with much lower steam velocities and usually much lower total pressure than found in advanced steam turbines, would best reproduce chromium-evaporation behavior with atmospheres that approach either O2 + H2O or air + H2O with 57% H2O.
Araszkiewicz, Andrzej; Jarosiński, Marek
2013-04-01
In this research we aimed to check whether GPS observations can be used to calculate a reliable deformation pattern of the intracontinental lithosphere in seismically inactive areas, such as the territory of Poland. For this purpose we used data mainly from the ASG-EUPOS permanent network and the solutions developed by the MUT CAG team (Military University of Technology: Centre of Applied Geomatics). Of the 128 analyzed stations, almost 100 are mounted on buildings. Daily observations were processed in the Bernese 5.0 software, and the weekly solutions were then used to determine the station velocities expressed in ETRF2000. The strain rates were determined for almost 200 triangles with GPS stations in their corners, constructed using Delaunay triangulation. The scattered directions of deformation and highly variable strain rates obtained point to antenna stabilization that is insufficient for geodynamic studies. In order to identify badly stabilized stations, we carried out a benchmark test to show the effect that the drift of a single station might have on deformations in the adjoining triangles. Based on the benchmark results, we eliminated from our network the stations that showed a deformation pattern characteristic of an unstable station. After several rounds of strain rate calculations and eliminations of dubious points, we reduced the number of stations to 60. The refined network revealed a more consistent deformation pattern across Poland. The deformations, compared with the recent stress field of the study area, show good correlation in some places and significant discrepancies in others, which will be the subject of future research.
Calculating the Rate of Senescence From Mortality Data
Koopman, Jacob J E; Rozing, Maarten P; Kramer, Anneke
2016-01-01
The rate of senescence can be inferred from the acceleration by which mortality rates increase over age. Such a senescence rate is generally estimated from parameters of a mathematical model fitted to these mortality rates. However, such models have limitations and underlying assumptions. Notably...
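One common concrete instance of such a model is the Gompertz law, mu(x) = a·exp(b·x), in which the slope b of log mortality against age serves as the senescence rate. The sketch below fits b by ordinary least squares on synthetic data; the abstract does not commit to the Gompertz model specifically, so this is an illustration of the general approach, not the paper's method.

```python
import math

def gompertz_slope(ages, mortality_rates):
    """Estimate the Gompertz parameter b (a common proxy for the
    rate of senescence) by least squares on log mu(x) = log a + b*x."""
    n = len(ages)
    logs = [math.log(m) for m in mortality_rates]
    mx = sum(ages) / n
    my = sum(logs) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(ages, logs))
    sxx = sum((x - mx) ** 2 for x in ages)
    return sxy / sxx

# Synthetic mortality doubling roughly every 8 years: b = ln(2)/8.
ages = list(range(40, 91, 5))
rates = [0.001 * math.exp(0.0866 * (a - 40)) for a in ages]
print(round(gompertz_slope(ages, rates), 4))  # → 0.0866
```

The limitations the abstract alludes to show up here directly: the estimate is only as good as the assumed exponential form of the age-mortality curve.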
Staggs, Vincent S
2015-12-31
The purpose of this study was to develop methods for assessing the reliability of scores on a widely disseminated hospital quality measure based on nursing unit fall rates. Poisson regression interactive multilevel modeling was adapted to account for clustering of units within hospitals. Three signal-noise reliability measures were computed. Squared correlations between the hospital score and true hospital fall rate averaged 0.52 ± 0.18 for total falls (0.68 ± 0.18 for injurious falls). Reliabilities on the other two measures averaged at least 0.70 but varied widely across hospitals. Parametric bootstrap data reflecting within-unit noise in falls were generated to evaluate percentile-ranked hospital scores as estimators of true hospital fall rate ranks. Spearman correlations between bootstrap hospital scores and true fall rates averaged 0.81 ± 0.01 (0.79 ± 0.01). Bias was negligible, but ranked hospital scores were imprecise, varying across bootstrap samples with average SD 11.8 (14.9) percentiles. Across bootstrap samples, hospital-measure scores fell in the same decile as the true fall rate in about 30% of cases. Findings underscore the importance of thoroughly assessing reliability of quality measurements before deciding how they will be used. Both the hospital measure and the reliability methods described can be adapted to other contexts involving clustered rates of adverse patient outcomes. © The Author(s) 2015.
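The parametric-bootstrap step can be illustrated with a single draw: simulate Poisson fall counts around hypothetical true hospital rates and correlate the resulting scores with the truth. The sketch below is a simplification of the study's design (no unit-within-hospital clustering; all parameters invented):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (assumes no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

rng = np.random.default_rng(0)
n_hosp = 50
true_rate = rng.uniform(1.0, 6.0, n_hosp)   # falls per 1000 patient-days
exposure = rng.uniform(5.0, 20.0, n_hosp)   # thousands of patient-days
# One parametric bootstrap draw: observed counts are Poisson noise
# around each hospital's true expected number of falls.
counts = rng.poisson(true_rate * exposure)
observed_rate = counts / exposure
rho = spearman(observed_rate, true_rate)
print(round(rho, 2))
```

Repeating the draw many times and ranking the observed rates each time is what lets the study quantify how much a hospital's percentile rank wobbles from sample to sample.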
The Reliability and Validity of Self- and Investigator Ratings of ADHD in Adults
Adler, Lenard A.; Faraone, Stephen V.; Spencer, Thomas J.; Michelson, David; Reimherr, Frederick W.; Glatt, Stephen J.; Marchant, Barrie K.; Biederman, Joseph
2008-01-01
Objective: Little information is available comparing self- versus investigator ratings of symptoms in adult ADHD. The authors compared the reliability, validity, and utility in a sample of adults with ADHD and also as an index of clinical improvement during treatment of self- and investigator ratings of ADHD symptoms via the Conners Adult ADHD…
Carlo Jurth
2014-01-01
BACKGROUND: The endogenous modulation of pain can be assessed through conditioned pain modulation (CPM), which can be quantified using subjective pain ratings or nociceptive flexion reflexes. However, to date, the test-retest reliability has only been investigated for subjective pain ratings.
How reliable are the equations for predicting maximal heart rate values in military personnel?
Sporis, Goran; Vucetic, Vlatko; Jukic, Igor; Omrcen, Darija; Bok, Daniel; Custonja, Zrinko
2011-03-01
The purpose of this study was to evaluate the validity and reliability of equations for predicting maximal values of heart rate (HR) in military personnel. Five hundred and nine members of the Croatian Armed Forces (age 29.1 +/- 5.5 years; height 180.1 +/- 6.6 cm; body mass 83.4 +/- 11.3 kg; maximal oxygen uptake [VO2(max)] 49.7 +/- 6.9 mL O2/kg/min) were tested. The graded exercise test with gas exchange measurements was used to determine VO2(max) and maximum HR (HR(max)). Analysis of variance was used to determine the differences between the equations for calculating HR(max), and yielded statistically significant differences between the seven HR equations. Stevens Creek's (HR(max) = 205 - [age/2]) and Fox and Haskell's (HR(max) = 220 - age) equations had the highest correlation with the HR(max) obtained by the graded exercise test. The authors recommend using the HR(max) values from the Stevens Creek and the Fox and Haskell equations for the purpose of training, testing, and daily exercise routine in military personnel.
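The two recommended equations are simple enough to state directly; at the sample's mean age of about 29 years their predictions nearly coincide:

```python
def hr_max_fox_haskell(age):
    """Fox and Haskell equation: HR(max) = 220 - age."""
    return 220 - age

def hr_max_stevens_creek(age):
    """Stevens Creek equation: HR(max) = 205 - age/2."""
    return 205 - age / 2

# At the sample's mean age (~29 years) the two predictions are
# within a beat per minute of each other.
print(hr_max_fox_haskell(29))    # → 191
print(hr_max_stevens_creek(29))  # → 190.5
```

The equations diverge for older personnel (at 60, Fox and Haskell gives 160 vs. Stevens Creek's 175), which is why validation against a measured graded exercise test matters.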
Comment on mesic-atom Auger-rate calculation
Altman, A.; Fried, Z.
1983-07-01
Auger rates for a mesic atom consisting of a lithium nucleus and two electrons are presented. It is shown that the results are sensitive to the screening of the initial and final state of the ejected electron by the spectator electron. These results are compared to the transition rates one would obtain by following the procedure used by Burbridge and de Borde, which neglects the screening of one electron by the others. Our results show a 40% reduction in transition rates.
Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne
2008-03-01
The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
Cooke, Roger M.; van Noortwijk, Jan M.
1999-03-01
We define local probabilistic sensitivity measures as proportional to ∂E(X_i | Z = z)/∂z, where Z is a function of random variables X_1, …, X_n. These measures are local in that they depend only on the neighborhood of Z = z, but unlike other local sensitivity measures, the local probabilistic sensitivity of X_i does not depend on the values of other input variables. For the independent linear normal model, or indeed for any model for which X_i has linear regression on Z, the above measure equals σ_{X_i} ρ(Z, X_i)/σ_Z. When linear regression does not hold, the new sensitivity measures can be compared with the correlation coefficients to indicate the degree of departure from linearity. We say that Z is probabilistically dissonant in X_i at Z = z if Z is increasing (decreasing) in X_i at z, but probabilistically decreasing (increasing) at z. Probabilistic dissonance is rather common in complicated models. The new measures are able to pick up this probabilistic dissonance. These notions are illustrated with data from an ongoing uncertainty analysis of dike ring reliability.
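For the independent linear normal model the claimed identity is easy to verify numerically: with Z = a1·X1 + a2·X2, both σ_{X_i} ρ(Z, X_i)/σ_Z and the regression slope ∂E(X_i | Z = z)/∂z reduce to a_i σ_i²/σ_Z². A small check (coefficients chosen for illustration):

```python
import math

# Linear normal model: Z = a1*X1 + a2*X2, X_i independent N(0, sigma_i^2).
a = [2.0, -1.0]
sigma = [1.5, 3.0]
sigma_z = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))

measures = []
for ai, si in zip(a, sigma):
    rho = ai * si / sigma_z              # corr(Z, X_i)
    measure = si * rho / sigma_z         # sigma_Xi * rho(Z, X_i) / sigma_Z
    slope = ai * si ** 2 / sigma_z ** 2  # regression slope dE(X_i|Z=z)/dz
    assert abs(measure - slope) < 1e-12  # the two expressions agree
    measures.append(measure)
print(measures)  # approximately [0.25, -0.5]
```

Note that the measure is signed: X2 here is probabilistically decreasing in Z even though its marginal spread is larger, which is the kind of behavior the dissonance concept is built to detect.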
2010-07-23
...-Power System; and Standards for Business Practices and Communications Protocols for Public Utilities..., Order No. 729-A, 131 FERC ] 61,109 (2010). \\2\\ Standards for Business Practices and Communication... of Available Transfer Capability, Capacity Benefit Margins, Transmission Reliability Margins,...
Reliability and validity of a Portuguese version of the Young Mania Rating Scale
J.A.A. Vilela
2005-09-01
The reliability and validity of a Portuguese version of the Young Mania Rating Scale were evaluated. The original scale was translated into and adapted to Portuguese by the authors. Definitions of clinical manifestations, a semi-structured anchored interview, and more explicit rating criteria were added to the scale. Fifty-five adult subjects, aged 18 to 60 years, with a diagnosis of Current Manic Episode according to DSM-III-R criteria were assessed using the Young Mania Rating Scale as well as the Brief Psychiatric Rating Scale in two sessions held at intervals of 7 to 10 days. Good reliability ratings were obtained, with an intra-class correlation coefficient of 0.97 for total scores and levels of agreement above 0.80 (P < 0.001) for all individual items. Internal consistency analysis resulted in an alpha = 0.67 for the scale as a whole, and an alpha = 0.72 for each standardized item (P < 0.001). For concurrent validity, a correlation of 0.78 was obtained by the Pearson coefficient between the total scores of the Young Mania Rating Scale and the Brief Psychiatric Rating Scale. The results are similar to those reported for the English version, indicating that the Portuguese version of the scale constitutes a reliable and valid instrument for the assessment of manic patients.
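The internal-consistency statistic reported above, Cronbach's alpha, is computed from the item variances and the variance of the total score. A minimal sketch with toy ratings (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns, where
    items[j][s] is the score of subject s on item j:
    alpha = k/(k-1) * (1 - sum(item variances)/var(total score))."""
    k = len(items)

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in items)
    totals = [sum(col[s] for col in items) for s in range(len(items[0]))]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Toy data: 4 scale items rated for 5 subjects (hypothetical scores).
items = [
    [2, 3, 4, 5, 5],
    [1, 3, 4, 5, 5],
    [2, 2, 3, 4, 4],
    [1, 2, 4, 4, 5],
]
alpha = cronbach_alpha(items)
assert 0.0 < alpha <= 1.0
```

Here the items move together across subjects, so alpha is high; uncorrelated items would drag it toward zero, which is how a value like the study's 0.67 arises.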
Benchmarks to supplant export FPDR (Floating Point Data Rate) calculations
Bailey, D.; Brooks, E.; Dongarra, J.; Hayes, A.; Lyon, G.
1988-06-01
Because modern computer architectures render application of the FPDR (Floating Point Data Processing Rate) increasingly difficult, there has been increased interest in export evaluation via actual system performances. The report discusses benchmarking of uniprocessor (usually vector) machines for scientific computation (SIMD array processors are not included), and parallel processing and its characterization for export control.
The Effects of Participation Rate on the Internal Reliability of Peer Nomination Measures
Marks, Peter E. L.; Babcock, Ben; Cillessen, Antonius H. N.; Crick, Nicki R.
2013-01-01
Although low participation rates have historically been considered problematic in peer nomination research, some researchers have recently argued that small proportions of participants can, in fact, provide adequate sociometric data. The current study used a classical measurement perspective to investigate the internal reliability (Cronbach's…
Reliability of DSM-IV Symptom Ratings of ADHD: Implications for DSM-V
Solanto, Mary V.; Alvir, Jose
2009-01-01
Objective: The objective of this study was to examine the intrarater reliability of "DSM-IV" ADHD symptoms. Method: Two hundred two children referred for attention problems and 49 comparison children (all 7-12 years) were rated by parents and teachers on the identical "DSM-IV" items presented in two different formats, the SNAP-IV and Conners'…
Reliability and discriminant validity of ataxia rating scales in early onset ataxia
Brandsma, Rick; Lawerman, Tjitske F.; Kuiper, Marieke J; Lunsing, Roelineke J; Burger, Huibert; Sival, Deborah A
2016-01-01
AIM: To determine whether ataxia rating scales are reliable disease biomarkers for early onset ataxia (EOA). METHOD: In 40 patients clinically identified with EOA (28 males, 12 females; mean age 15y 3mo [range 5-34y]), we determined interobserver and intraobserver agreement (interclass correlation
Reliability of DSM-IV Symptom Ratings of ADHD: Implications for DSM-V
Solanto, Mary V.; Alvir, Jose
2009-01-01
Objective: The objective of this study was to examine the intrarater reliability of "DSM-IV" ADHD symptoms. Method: Two-hundred-two children referred for attention problems and 49 comparison children (all 7-12 years) were rated by parents and teachers on the identical "DSM-IV" items presented in two different formats, the SNAP-IV and Conners'…
Reliability and discriminant validity of ataxia rating scales in early onset ataxia
Brandsma, Rick; Lawerman, Tjitske F; Kuiper, Marieke J; Lunsing, Roelineke J; Burger, Huibert; Sival, Deborah A
2016-01-01
AIM: To determine whether ataxia rating scales are reliable disease biomarkers for early onset ataxia (EOA). METHOD: In 40 patients clinically identified with EOA (28 males, 12 females; mean age 15y 3mo [range 5-34y]), we determined interobserver and intraobserver agreement (interclass correlation c
Factorial Validity and Reliability of the Devereux Elementary School Behavior Rating Scale.
Reynolds, William M.; Bernstein, Sydna M.
1982-01-01
The factorial validity and internal consistency reliability of the Devereux Elementary School Behavior Rating Scale were examined with a random sample of elementary school children. Given the problem of multicollinearity that was shown to exist among subscales, the authors suggest caution in the interpretation of Devereux subscales as discrete…
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations.
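As a rough illustration of how such an explicit formula works, here is a minimal Python sketch. It tests the log rate ratio and approximates the per-group variance of the log rate by 1/(t*r) + k; using this alternative-hypothesis variance in both z terms is a simplifying assumption made here for illustration, not one of the paper's three null-variance variants.

```python
from math import ceil, log
from statistics import NormalDist

def nb_sample_size(r0, r1, k, t=1.0, alpha=0.05, power=0.8):
    """Per-group sample size for comparing two negative binomial rates.

    Illustrative sketch: a two-sided z-test on log(r1/r0), with the
    variance of each group's log rate approximated by 1/(t*r) + k,
    where k is the dispersion parameter and t the exposure time.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)        # two-sided critical value
    z_b = NormalDist().inv_cdf(power)                # power term
    var = 1.0 / (t * r0) + 1.0 / (t * r1) + 2.0 * k  # summed per-group variances
    return ceil((z_a + z_b) ** 2 * var / log(r1 / r0) ** 2)
```

As expected, more overdispersion (larger k) raises the required sample size and longer exposure (larger t) lowers it.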
Reliability and validity of the Chinese version of the Scale for Assessment and Rating of Ataxia
TAN Song; NIU Hui-xia; ZHAO Lu; GAO Yuan; LU Jia-meng; SHI Chang-he; Chandra Avinash
2013-01-01
Background The Scale for the Assessment and Rating of Ataxia (SARA) was shown to be a reliable and valid measurement for patients with spinocerebellar ataxia (SCA). The Brazilian and Japanese versions of SARA showed good reliability and validity. This study aimed to translate SARA into Chinese and test its reliability and validity in the measurement of cerebellar ataxia. Methods SARA was translated into Chinese. A total of 39 patients with degenerative cerebellar ataxia were evaluated independently by two neurologists with the Chinese version of SARA. The patients were then evaluated by one of the above neurologists with the International Cooperative Ataxia Rating Scale (ICARS). Statistical analyses were performed using SPSS 17.0 for Windows. Results The Cronbach's alpha coefficient of the Chinese version of SARA was 0.78, representing good internal consistency. The correlation coefficient of the Chinese version of SARA scores between the two evaluators was 0.86, indicating good inter-rater reliability. The correlation coefficient between the Chinese version of SARA and ICARS was 0.91, indicating good criterion validity. Conclusions The Chinese version of SARA is reliable and effective for the assessment of degenerative cerebellar ataxia. Compared with ICARS, evaluation with the Chinese version of SARA is more objective, takes less time, and is easier to administer.
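The Cronbach's alpha reported above (0.78) is the standard internal-consistency statistic; a self-contained sketch (the item scores passed in are up to the caller; nothing here comes from the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item).

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    using sample variances (denominator n-1).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per subject, summed across items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```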
Rate Constant Calculation for Thermal Reactions Methods and Applications
DaCosta, Herbert
2011-01-01
Providing an overview of the latest computational approaches to estimating rate constants for thermal reactions, this book addresses the theories behind various first-principle and approximation methods that have emerged in the last twenty years, with validation examples. It presents in-depth applications of those theories to a wide range of basic and applied research areas. When doing modeling and simulation of chemical reactions (as in many other cases), one often has to compromise between higher-accuracy/higher-precision approaches (which are usually time-consuming) and approximate/lower-precision…
Calculations on decay rates of various proton emissions
Qian, Yibin [Nanjing University of Science and Technology, Department of Applied Physics, Nanjing (China); Nanjing University, Department of Physics and Key Laboratory of Modern Acoustics, Nanjing (China); Ren, Zhongzhou [Nanjing University, Department of Physics and Key Laboratory of Modern Acoustics, Nanjing (China); Kavli Institute for Theoretical Physics China, Beijing (China); National Laboratory of Heavy-Ion Accelerator, Center of Theoretical Nuclear Physics, Lanzhou (China)
2016-03-15
Proton radioactivity of neutron-deficient nuclei around the dripline has been systematically studied within the deformed density-dependent model. The crucial proton-nucleus potential is constructed via the single-folding integral of the density distribution of daughter nuclei and the effective M3Y nucleon-nucleon interaction or the proton-proton Coulomb interaction. After the decay width is obtained by the modified two-potential approach, the final decay half-lives can be achieved by involving the spectroscopic factors from the relativistic mean-field (RMF) theory combined with the BCS method. Moreover, a simple formula along with only one adjusted parameter is tentatively proposed to evaluate the half-lives of proton emitters, where the introduction of nuclear deformation is somewhat discussed as well. It is found that the calculated results are in satisfactory agreement with the experimental values and consistent with other theoretical studies, indicating that the present approach can be applied to the case of proton emission. Predictions on half-lives are made for possible proton emitters, which may be useful for future experiments. (orig.)
Dutton, Diane M
2008-10-01
A 46-item rating scale was used to obtain personality ratings from 75 captive chimpanzees (Pan troglodytes) from 7 zoological parks. Factor analysis revealed five personality dimensions similar to those found in previous research on primate personality: Agreeableness, Dominance, Neuroticism, Extraversion and Intellect. There were significant sex and age differences in ratings on these dimensions, with males rated more highly on Dominance and older chimpanzees rated as more agreeable but less extraverted than younger chimpanzees. Interobserver agreement for most individual trait items was high, but tended to be less reliable for trait terms expressing more subtle social or cognitive abilities. Personality ratings for one zoo were found to be largely stable across a 3-year period, but highlighted the effects of environmental factors on the expression of personality in captive chimpanzees.
钟绍鹏; 邓卫
2015-01-01
A reliability-based stochastic system optimum congestion pricing (SSOCP) model with endogenous market penetration and compliance rate in an advanced traveler information systems (ATIS) environment was proposed. All travelers were divided into two classes: guided travelers, the equipped travelers who follow ATIS advice, and unguided travelers, comprising the unequipped travelers and those equipped travelers who do not follow the ATIS advice (also referred to as non-complying travelers). Travelers were assumed to take travel time, congestion pricing, and travel time reliability into account when making route choice decisions. In order to arrive on time, travelers needed to add a safety margin to their trip. The market penetration of ATIS was determined by a continuously increasing function of the information benefit, and the ATIS compliance rate of equipped travelers was given as the probability that the actually experienced travel costs of guided travelers were less than or equal to those of unguided travelers. The analysis results enhance our understanding of the effect of travel demand level and travel time reliability confidence level on ATIS market penetration and compliance rate, and of the effect of travel time perception variation of guided and unguided travelers on the mean travel cost savings (MTCS) of the equipped travelers, the ATIS market penetration, compliance rate, and the total network effective travel time (TNETT).
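The "safety margin" in this abstract is the usual travel-time-budget construction; a minimal sketch that, purely for illustration, assumes normally distributed travel times (the abstract does not commit to a distribution):

```python
from statistics import NormalDist

def travel_time_budget(mean_tt, sd_tt, on_time_prob):
    """Budgeted travel time so that arrival is on time with probability
    on_time_prob: the mean plus a reliability safety margin z * sd."""
    z = NormalDist().inv_cdf(on_time_prob)
    return mean_tt + z * sd_tt
```

A traveler demanding 90% on-time arrival budgets noticeably more than the mean travel time; at 50% the margin vanishes.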
The case for using the repeatability coefficient when calculating test-retest reliability.
Sharmila Vaz
Full Text Available The use of standardised tools is an essential component of evidence-based practice. Reliance on standardised tools places demands on clinicians to understand their properties, strengths, and weaknesses in order to interpret results and make clinical decisions. This paper makes a case for clinicians to consider measurement error (ME) indices, the Coefficient of Repeatability (CR) or the Smallest Real Difference (SRD), over relative reliability coefficients like Pearson's r and the Intraclass Correlation Coefficient (ICC) when selecting tools to measure change and inferring change as true. The authors present statistical methods that are part of the current approach to evaluating the test-retest reliability of assessment tools and outcome measurements. Selected examples from a previous test-retest study are used to elucidate the added advantages, in clinical decision making, of knowing the ME of an assessment tool. The CR is computed in the same units as the assessment tool and sets the boundary of the minimal detectable true change that can be measured by the tool.
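The CR described above is conventionally computed from test-retest differences. A minimal sketch of the Bland-Altman form (1.96 times the SD of the differences); note that some authors instead use 2.77 times the within-subject SD, which is nearly equivalent:

```python
from math import sqrt

def repeatability_coefficient(test, retest):
    """Coefficient of Repeatability, in the units of the assessment tool:
    1.96 * SD of the test-retest differences. An observed change larger
    than this is unlikely (at roughly 95%) to be pure measurement error."""
    diffs = [a - b for a, b in zip(test, retest)]
    m = sum(diffs) / len(diffs)
    sd = sqrt(sum((d - m) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd
```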
2011-02-01
... Weighted Average Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings AGENCY: Import... regarding the calculation of the weighted average dumping margin and antidumping duty assessment rate in certain antidumping duty proceedings...
Reliability Analysis-Based Numerical Calculation of Metal Structure of Bridge Crane
Wenjun Meng
2013-01-01
Full Text Available The study introduced a finite element model of the DQ75t-28m bridge crane metal structure and performed a finite element static analysis to obtain the stress response at the dangerous point of the metal structure under the most extreme condition. Simulated samples of the random variables and the stress at the dangerous point were obtained through orthogonal design. We then trained a BP neural network to obtain, via its nonlinear mapping capability, an explicit expression for the stress in response to the random variables. Combined with random perturbation theory and the first-order second-moment (FOSM) method, the study analyzed the reliability of the metal structure and its sensitivity. In conclusion, we established a novel method for accurate quantitative analysis and design of bridge crane metal structures.
Accurate, robust and reliable calculations of Poisson-Boltzmann solvation energies
Wang, Bao
2016-01-01
Developing accurate solvers for the Poisson-Boltzmann (PB) model is the first step toward making the PB model suitable for implicit solvent simulation. Reducing the influence of grid size on solver performance helps increase solver speed and provides accurate electrostatics analysis for solvated molecules. In this work, we explore an accurate coarse-grid PB solver based on the Green's function treatment of the singular charges, the matched interface and boundary (MIB) method for treating geometric singularities, and posterior electrostatic potential field extension for calculating the reaction field energy. We made our previous PB software, MIBPB, robust, and it now provides almost grid-size-independent reaction field energy calculation. A large number of numerical tests verify the grid-size independence of the MIBPB software. This advantage accelerates the PB solver through the numerical algorithm itself rather than through advanced computer architectures...
Naghdi, S; Ansari, N Nakhostin; Yazdanpanah, M; Feise, R J; Fakhari, Z
2015-12-01
The purpose of the present study was to determine the reliability and validity of the Functional Rating Index (FRI) for athletes with low back pain (LBP). In this cross-sectional and prospective cohort study, the validated Persian FRI (PFRI) was tested in 100 athletes with LBP and 50 healthy athletes. Data were collected again from 50 of the athletes with LBP after a 7-day interval to examine test-retest reliability. The content validity was excellent, and the athletes with LBP responded to all items with no floor or ceiling effects. The discriminative validity was supported by a statistically significant difference in PFRI total scores between the athletes with LBP and healthy athletes. The concurrent criterion validity was good (rho = 0.72). The construct (convergent) validity was good (r = 0.83). The internal consistency reliability estimate was high (Cronbach's α = 0.90). Factor analysis demonstrated a single-factor structure with an explained variance of 52.22%. The test-retest reliability was excellent, indicated by an ICC(agreement) of 0.97, and the agreement observed in the Bland and Altman plot demonstrated no systematic bias. It is concluded that the PFRI has excellent psychometric properties for assessing athletes with LBP.
Reliability assessment to determine the optimal forced outage rate of components
Habib Daryabad
2014-03-01
Full Text Available Determining the optimal forced outage rate (FOR) of components can lead to reduced operational and maintenance costs in electric power systems. FOR is closely associated with two factors: the number of outages and the duration of outages. Therefore, it is possible to decrease the FOR by decreasing the number of outages or reducing their duration. Decreasing the number of outages is usually achieved through reinforcement of the network, and reducing the duration of outages is mainly achieved by increasing the repair and maintenance crews. Both of these methods of decreasing the FOR incur costs. Therefore, it is very useful to find the optimal FOR and avoid unnecessary costs. This paper presents a new methodology to find the optimal FOR. In this regard, system reliability is assessed and evaluated from the viewpoint of FOR, and the optimal FOR is determined for all components.
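The two levers the abstract identifies (number of outages and duration of outages) enter the FOR definition directly; a minimal sketch, with an illustrative one-year (8760 h) period:

```python
def forced_outage_rate(n_outages, mean_duration_h, period_h=8760.0):
    """FOR as the fraction of the period spent on forced outage.

    Reducing n_outages (network reinforcement) or mean_duration_h
    (more repair/maintenance crews) both lower the FOR, each at a cost.
    """
    return n_outages * mean_duration_h / period_h
```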
The reliability of a severity rating scale to measure stuttering in an unfamiliar language.
Hoffman, Laura; Wilson, Linda; Copley, Anna; Hewat, Sally; Lim, Valerie
2014-06-01
With increasing multiculturalism, speech-language pathologists (SLPs) are likely to work with stuttering clients from linguistic backgrounds that differ from their own. No research to date has estimated SLPs' reliability when measuring severity of stuttering in an unfamiliar language. Therefore, this study was undertaken to estimate the reliability of SLPs' use of a 9-point severity rating (SR) scale, to measure severity of stuttering in a language that was different from their own. Twenty-six Australian SLPs rated 20 speech samples (10 Australian English [AE] and 10 Mandarin) of adults who stutter using a 9-point SR scale on two separate occasions. Judges showed poor agreement when using the scale to measure stuttering in Mandarin samples. Results also indicated that 50% of individual judges were unable to reliably measure the severity of stuttering in AE. The results highlight the need for (a) SLPs to develop intra- and inter-judge agreement when using the 9-point SR scale to measure severity of stuttering in their native language (in this case AE) and in unfamiliar languages; and (b) research into the development and evaluation of practice and/or training packages to assist SLPs to do so.
Microscopic Calculation of Total Ordinary Muon Capture Rates for Medium-weight and Heavy Nuclei
Kuzmin, V A; Junker, K; Ovchinnikova, A A
2002-01-01
Total Ordinary Muon Capture (OMC) rates are calculated on the basis of the Quasiparticle Random Phase Approximation for several spherical nuclei from ^90Zr to ^208Pb. It is shown that total OMC rates calculated with the free value of the axial-vector coupling constant g_A agree well with the experimental data for medium-size nuclei and exceed considerably the experimental rates for heavy nuclei. The sensitivity of theoretical OMC rates to the nuclear residual interactions is discussed.
The development, reliability, and validity of a clinical rating scale for codependency.
Harkness, D; Swenson, M; Madsen-Hampton, K; Hale, R
2001-01-01
This investigation examined the reliability and validity of a rating scale for codependency in substance abuse treatment. The investigators developed an example-anchored rating scale to operationalize codependency as substance abuse counselors construe it in practice, and recruited 27 counselors for a counterbalanced multiple-treatment experiment. Counselors were randomly assigned to one of four continuing education workshops for rating-scale training, and asked to evaluate codependency in five videotaped cases. Semistructured case interviews were videotaped with a male and a female from five adult populations to vary the gender and codependency of cases: (1) outpatients in treatment for addiction, (2) outpatient spouses, (3) members of Codependents Anonymous, (4) United States Bureau of Land Management smoke jumpers, and (5) college students majoring in business or economics. To control for gender effects, one workshop presented male cases, one workshop presented female cases, and two workshops presented cases of both genders. To control for order effects, the assignment of videotapes to workshops was randomized to counterbalance the order in which counselors viewed them. The findings suggest that the rating scale yields reliable and valid evaluations of codependency without appreciable gender bias.
Reliability and discriminant validity of ataxia rating scales in early onset ataxia.
Brandsma, Rick; Lawerman, Tjitske F; Kuiper, Marieke J; Lunsing, Roelineke J; Burger, Huibert; Sival, Deborah A
2017-04-01
To determine whether ataxia rating scales are reliable disease biomarkers for early onset ataxia (EOA). In 40 patients clinically identified with EOA (28 males, 12 females; mean age 15y 3mo [range 5-34y]), we determined interobserver and intraobserver agreement (intraclass correlation coefficient [ICC]) and discriminant validity of ataxia rating scales (International Cooperative Ataxia Rating Scale [ICARS], Scale for Assessment and Rating of Ataxia [SARA], and Brief Ataxia Rating Scale [BARS]). Three paediatric neurologists independently scored ICARS, SARA, and BARS performances recorded on video, and also phenotyped the primary and secondary movement disorder features. When ataxia was the primary movement disorder feature, we assigned patients to the subgroup 'EOA with core ataxia' (n=26). When ataxia concurred with other prevailing movement disorders (such as dystonia, myoclonus, and chorea), we assigned patients to the subgroup 'EOA with comorbid ataxia' (n=12). ICC values were similar in both EOA subgroups of 'core' and 'comorbid' ataxia (0.92-0.99; ICARS, SARA, and BARS). Independent of the phenotype, the severity of the prevailing movement disorder predicted the ataxia rating scale scores (β=0.83-0.88). The interobserver and intraobserver agreement of ataxia rating scales is thus high. However, the discriminative validity for 'ataxia' is low. For adequate interpretation of ataxia rating scale scores, application in uniform movement disorder phenotypes is essential. © 2016 Mac Keith Press.
HU, T.A.
2005-10-27
Assess the steady-state flammability level at normal and off-normal ventilation conditions. The hydrogen generation rate was calculated for 177 tanks using the rate equation model. Flammability calculations based on hydrogen, ammonia, and methane were performed for 177 tanks for various scenarios.
HU TA
2009-10-26
Assess the steady-state flammability level at normal and off-normal ventilation conditions. The hydrogen generation rate was calculated for 177 tanks using the rate equation model. Flammability calculations based on hydrogen, ammonia, and methane were performed for 177 tanks for various scenarios.
Staggs, Vincent S; Cramer, Emily
2016-08-01
Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital-acquired pressure ulcer rates and evaluate a standard signal-noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step-down, medical, surgical, and medical-surgical nursing units from 1,299 US hospitals were analyzed. Using beta-binomial models, we estimated between-unit variability (signal) and within-unit variability (noise) in annual unit pressure ulcer rates. Signal-noise reliability was computed as the ratio of between-unit variability to the total of between- and within-unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal-noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal-noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc.
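The signal-noise reliability used in this study is the share of total variance attributable to true between-unit differences. A minimal sketch (the study estimated these variance components with beta-binomial models, which this toy function does not reproduce):

```python
def signal_noise_reliability(between_var, within_var):
    """Signal-noise reliability = signal / (signal + noise): the fraction
    of observed variance in unit rates that reflects true between-unit
    differences rather than within-unit sampling noise."""
    return between_var / (between_var + within_var)
```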
Reliability of DSM-IV Symptom Ratings of ADHD: implications for DSM-V.
Solanto, Mary V; Alvir, Jose
2009-09-01
The objective of this study was to examine the intrarater reliability of DSM-IV ADHD symptoms. Two-hundred-two children referred for attention problems and 49 comparison children (all 7-12 years) were rated by parents and teachers on the identical DSM-IV items presented in two different formats, the SNAP-IV and Conners' Revised Questionnaires, at two closely spaced points in time. For the combined sample, weighted kappa scores for intrarater agreement ranged from .30 ("fair") to .77 ("good") across symptoms. Kappa scores were good with respect to agreement on the DSM-IV criterion of endorsement of at least six symptoms in a given cluster for Inattention (.60 and .76, for parents and teachers, respectively) and Hyperactivity-Impulsivity (.72 and .75, respectively). Kappas for identification of cases as AD/HD or not AD/HD were good to excellent (.67 and .79 for parents and teachers, respectively). Classification as AD/HD or not AD/HD changed from the first to the second rating in 12% and 10% of cases rated by parents and teachers, respectively. Reliability of individual ADHD symptoms appears to be suboptimal for clinical and research use and is improved, although less than ideal, at the levels of cluster endorsement and case classification.
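The weighted kappa scores above credit near-misses on the rating scale; a sketch with linear disagreement weights (whether the study used linear or quadratic weights is not stated in this record, so linear is an assumption here):

```python
def weighted_kappa(r1, r2, n_cats):
    """Linear-weighted Cohen's kappa for two ratings of the same cases,
    with categories coded 0..n_cats-1. Returns 1 for perfect agreement
    and 0 for chance-level agreement."""
    n = len(r1)
    obs = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(row) for row in obs]                                  # rating-1 marginals
    p2 = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]
    w = lambda i, j: abs(i - j) / (n_cats - 1)                      # disagreement weight
    d_obs = sum(w(i, j) * obs[i][j] for i in range(n_cats) for j in range(n_cats))
    d_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(n_cats) for j in range(n_cats))
    return 1.0 - d_obs / d_exp
```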
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
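The interval-valued AND, OR and NOT operators the abstract describes can be sketched directly from the Fréchet inequalities (this toy version handles a single interval per event and ignores the nested confidence-level stacking discussed above):

```python
def frechet_and(p, q):
    """Best-possible bounds on P(A and B) from interval probabilities for
    A and B, with no assumption about their dependence."""
    (pl, pu), (ql, qu) = p, q
    return (max(0.0, pl + ql - 1.0), min(pu, qu))

def frechet_or(p, q):
    """Best-possible bounds on P(A or B) under unknown dependence."""
    (pl, pu), (ql, qu) = p, q
    return (max(pl, ql), min(1.0, pu + qu))

def prob_not(p):
    """Complement of an interval probability."""
    pl, pu = p
    return (1.0 - pu, 1.0 - pl)
```

For example, two events each known only to within an interval still yield informative, best-possible bounds on their conjunction.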
Bräutigam, Klaus-Rainer; Jörissen, Juliane; Priefer, Carmen
2014-08-01
The reduction of food waste is seen as an important societal issue with considerable ethical, ecological and economic implications. The European Commission aims at cutting food waste in half by 2020. However, implementing effective prevention measures requires knowledge of the reasons for and scale of food waste generation along the food supply chain. The available data for Europe are very heterogeneous, and doubts about their reliability are legitimate. This mini-review gives an overview of available data on food waste generation in EU-27 and discusses their reliability against the results of our own model calculations. These calculations are based on a methodology developed on behalf of the Food and Agriculture Organization of the United Nations and provide data on food waste generation for each of the EU-27 member states, broken down to the individual stages of the food chain and differentiated by product group. The analysis shows that the results differ significantly, depending on the data sources chosen and the assumptions made. Further research is much needed in order to improve the data stock, which forms the basis for the monitoring and management of food waste.
Doughty, M J; Fonn, D; Trang Nguyen, K
1993-09-01
In endothelial morphometry, uncertainty exists concerning how many cells should be measured. A study was undertaken to calculate mean cell area and the coefficient of variation (COV) of cell areas using different numbers of cells from photo-slitlamp pictures and published micrographs. Groups of 65, 95, or 165 tessellated cells were measured and area and COV values calculated in progressive sets of 5 cells; each pair of values was compared to that obtained using all cells in each group. The results show that, for both normal (homomegethous) and irregular (polymegethous) endothelia, even cell counts as low as 50 cells can usually provide average cell area values that are within 1 to 2% of the values estimated from larger groups of cells. A similar reliability was observed for estimates of COV for normal endothelia. However, for polymegethous endothelia, even with 100 cells analyzed, the estimates of COV generally only approached a +/- 4% reliability. This uncertainty in COV estimates should be considered in both comparative studies and in regression analyses of COV changes over time or other variables.
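The two morphometry statistics compared in the study, mean cell area and the COV of cell areas, can be sketched as:

```python
from math import sqrt

def cell_area_stats(areas):
    """Mean cell area and coefficient of variation (COV, in %):
    COV = 100 * sample SD of cell areas / mean cell area."""
    n = len(areas)
    mean = sum(areas) / n
    sd = sqrt(sum((a - mean) ** 2 for a in areas) / (n - 1))
    return mean, 100.0 * sd / mean
```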
Using a Calculated Pulse Rate with an Artificial Neural Network to Detect Irregular Interbeats.
Yeh, Bih-Chyun; Lin, Wen-Piao
2016-03-01
Heart rate is an important clinical measure that is often used in pathological diagnosis and prognosis. Valid detection of irregular heartbeats is crucial in clinical practice. We propose an artificial neural network that uses the calculated pulse rate to detect irregular interbeats. The proposed system measures the calculated pulse rate to determine an "irregular interbeat on" or "irregular interbeat off" event. If an irregular interbeat is detected, the proposed system produces a danger warning, which is helpful for clinicians. If no irregular interbeat is detected, the proposed system displays the calculated pulse rate. We include a flow chart of the proposed software. In an experiment, we measured the calculated pulse rates with a low error percentage; using the calculated pulse rates to detect irregular interbeats, we found such irregular interbeats in eight participants.
Madewell, Zachary J; Wester, Robert B; Wang, Wendy W; Smith, Tyler C; Peddecord, K Michael; Morris, Jessica; DeGuzman, Heidi; Sawyer, Mark H; McDonald, Eric C
Accurate data on immunization coverage levels are essential to public health program planning. Reliability of coverage estimates derived from immunization information systems (IISs) in states where immunization reporting by medical providers is not mandated by the state may be compromised by low rates of participation. To overcome this problem, data on coverage rates are often acquired through random-digit-dial telephone surveys, which require substantial time and resources. This project tested both the reliability of voluntarily reported IIS data and the feasibility of using these data to estimate regional immunization rates. We matched telephone survey records for 553 patients aged 19-35 months obtained in 2013 to 430 records in the San Diego County IIS. We assessed concordance between survey data and IIS data using κ to measure the degree of nonrandom agreement. We used multivariable logistic regression models to investigate differences among demographic variables between the 2 data sets. These models were used to construct weights that enabled us to predict immunization rates in areas where reporting is not mandated. We found moderate agreement between the telephone survey and the IIS for the diphtheria, tetanus, and acellular pertussis (κ = 0.49), pneumococcal conjugate (κ = 0.49), and Haemophilus influenzae type b (κ = 0.46) vaccines; fair agreement for the varicella (κ = 0.39), polio (κ = 0.39), and measles, mumps, and rubella (κ = 0.35) vaccines; and slight agreement for the hepatitis B vaccine (κ = 0.17). Consistency in factors predicting immunization coverage levels in a telephone survey and IIS data confirmed the feasibility of using voluntarily reported IIS data to assess immunization rates in children aged 19-35 months.
Soriano-Maldonado, Alberto; Ruiz, Jonatan R; Álvarez-Gallardo, Inmaculada C; Segura-Jiménez, Víctor; Santalla, Alfredo; Munguía-Izquierdo, Diego
2015-01-01
The present study aimed (1) to assess the validity and reliability of the Borg category-ratio (CR-10) scale for monitoring exercise intensity in women with fibromyalgia (FM) and (2) to examine whether women with FM can discriminate between perceived exertion and exercise-induced pain. Thirty-three women with FM performed two incremental treadmill tests separated by 1 week. Heart rate, oxygen uptake, minute ventilation and respiratory quotient were measured. The ratings of perceived exertion (RPE: CR-10 scale) and exercise-induced pain were obtained at each workload. Spearman's correlation of RPE with the physiological responses ranged from 0.69 to 0.79. The regression models explained ~50% of the variability of the studied physiological responses. We found "perfect acceptable" agreement in 69% of the observations. Weighted Kappa was 0.66 (95% confidence interval [CI]: 0.59-0.72). There were differences between RPE and pain at workloads 3 (1.50; 95% CI: 0.85-2.16), 4 (2.10; 95% CI: 1.23-2.96), 5 (3.40; 95% CI: 1.29-5.51) and 6 (3.97; 95% CI: 1.61-6.33). The main findings of the present study suggest that the Borg CR-10 scale is valid and moderately reliable for monitoring exercise intensity in women with FM, and that these patients were able to discriminate between exertion and exercise-induced pain.
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-02-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
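The quoted spectral efficiency follows directly from the modulation order and the ensemble code rate; a quick check of the figure given in the abstract:

```python
from math import log2

bits_per_symbol = log2(8)  # 8PSK carries 3 bits per symbol
code_rate = 8 / 9          # ensemble Reed-Solomon rate
spectral_efficiency = bits_per_symbol * code_rate
print(round(spectral_efficiency, 2))  # 2.67 information bps/Hz
```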
Calculation of the rate of coagulation of hydrophobic colloids in the non-steady state
Roebersen, G.J.; Wiersema, P.H.
1974-01-01
In accurate coagulation measurements, the observed coagulation rate should be extrapolated to time zero to find the rate of formation of doublets from singlet particles. In the theoretical calculation of coagulation rates, generally a steady state is assumed. At the onset of coagulation, however, a
Reynders, Alexandre; Scheerder, Gert; Van Audenhove, Chantal
2011-06-01
National suicide data are an underestimation of the actual number of suicides but are often assumed to be reliable and useful for scientific research. The aim of this study is to contribute to the discussion of the reliability of suicide mortality data by comparing railway suicides from two data sources. Data for the railway suicides and the concurrent causes of death of fifteen European countries were collected from the European Detailed Mortality Database and the European Railway Agency (ERA). Suicide rates, odds ratios and confidence intervals were calculated. The suicide data from the ERA were significantly higher than the national data for six out of fifteen countries. In three countries, the ERA registered significantly more railway suicides compared to the sum of the national suicides and undetermined deaths. In Italy and France, the ERA statistics recorded significantly more railway related fatalities than the national statistical offices. In total the ERA statistics registered 34% more suicides and 9% more railway fatalities compared with the national statistics. The findings of this study concern railway suicides and they cannot be extrapolated to all types of suicides. Further, the national suicide statistics and the ERA data are not perfectly comparable, due to the different categorisations of the causes of death. Based on the data for railway suicides, it seems that the underestimation of suicide rates is significant for some countries, and that the degree of underestimation differs substantially among countries. Caution is needed when comparing national suicide rates. There is a need for standardisation of national death registration procedures at the European level. Copyright © 2010 Elsevier B.V. All rights reserved.
王艳; 钱英; 冯文林; 刘若庄
2003-01-01
An implementation of the variational quantum RRKM program is presented to utilize the direct ab initio dynamics approach for calculating k(E, J), k(E) and k(T) within the framework of the microcanonical transition state (μTST) and microcanonical variational TST (μVT) theories. An algorithm including tunneling contributions in the Beyer-Swinehart method for calculating microcanonical rate constants is also proposed. An efficient piece-wise interpolation method is developed to evaluate the Boltzmann integral in the calculation of thermal rate constants. Calculations on several test reactions, namely the H(D)2CO→H(D)2 + CO, CH2CO→CH2 + CO and CH4 + H→CH3 + H2 reactions, show that the results are in good agreement with previous rate constant calculations. This approach requires far fewer computational resources.
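The classic Beyer-Swinehart direct count referenced above (without the proposed tunneling extension) can be sketched in a few lines; the frequencies and grain size below are purely illustrative:

```python
def beyer_swinehart(freqs_cm1, emax_cm1, grain_cm1=10.0):
    """Direct count of harmonic vibrational states on an energy grid.

    Returns t, where t[i] is the number of states in the i-th energy
    grain; a running sum of t gives the sum of states W(E)."""
    n = int(emax_cm1 / grain_cm1) + 1
    t = [0.0] * n
    t[0] = 1.0  # the zero-point (ground) state
    for f in freqs_cm1:
        k = max(1, int(round(f / grain_cm1)))
        # ascending in-place update accumulates any number of quanta
        for i in range(k, n):
            t[i] += t[i - k]
    return t

# Two illustrative oscillators at 1000 and 2000 cm^-1
t = beyer_swinehart([1000.0, 2000.0], emax_cm1=3000.0)
W = sum(t)  # sum of states up to 3000 cm^-1
print(int(W))  # 6
```

The six states counted here are the (n1, n2) pairs with 1000·n1 + 2000·n2 ≤ 3000 cm⁻¹, which is easy to verify by hand.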
Stemke, JA; Santiago, LS
2011-01-01
The proportional light absorptance by photosynthetic tissue (α) is used with chlorophyll (Chl) fluorescence methods to calculate electron transport rate (ETR). Although a value of α of 0.84 is often used as a standard for calculating ETR, many succulent plant species and species with crassulacean acid metabolism (CAM) have photosynthetic tissues that vary greatly in color or are highly reflective, and could have values of α that differ from 0.84, thus affecting the calculation of ETR. We meas...
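The ETR calculation at issue is the standard fluorescence-based formula, ETR = ΦPSII × PPFD × α × f(PSII). A small sketch showing how a non-standard absorptance shifts the result; all numbers are illustrative, not from the study:

```python
def etr(phi_psii, ppfd, alpha=0.84, f_psii=0.5):
    """Electron transport rate (umol e- m^-2 s^-1) from chlorophyll
    fluorescence: effective PSII yield * incident PPFD * tissue
    absorptance * fraction of absorbed light reaching PSII."""
    return phi_psii * ppfd * alpha * f_psii

# Same fluorescence signal, standard vs measured absorptance
standard = etr(0.6, 1000, alpha=0.84)
measured = etr(0.6, 1000, alpha=0.70)  # e.g. a highly reflective CAM tissue
print(round(standard, 1), round(measured, 1))
```

As the abstract argues, assuming α = 0.84 for a reflective succulent tissue with a true α of, say, 0.70 would overstate ETR by the same proportion.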
40 CFR 1065.642 - SSV, CFV, and PDP molar flow rate calculations.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false SSV, CFV, and PDP molar flow rate calculations. 1065.642 Section 1065.642 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.642...
Margarita eStolarova
2014-06-01
Full Text Available This report has two main purposes. First, we combine well-known analytical approaches to conduct a comprehensive assessment of agreement and correlation of rating pairs and to disentangle these often confused concepts, providing a best-practice example on concrete data and a tutorial for future reference. Second, we explore whether a screening questionnaire developed for use with parents can be reliably employed with daycare teachers when assessing early expressive vocabulary. A total of 53 vocabulary rating pairs (34 parent-teacher and 19 mother-father pairs) collected for two-year-old children (12 bilingual) are evaluated. First, inter-rater reliability both within and across subgroups is assessed using the intra-class correlation coefficient (ICC). Next, based on this analysis of reliability and on the test-retest reliability of the employed tool, inter-rater agreement is analyzed, and the magnitude and direction of rating differences are considered. Finally, Pearson correlation coefficients of standardized vocabulary scores are calculated and compared across subgroups. The results underline the necessity of distinguishing between reliability measures, agreement and correlation. They also demonstrate the impact of the employed reliability measure on agreement evaluations. This study provides evidence that parent-teacher ratings of children's early vocabulary can achieve agreement and correlation comparable to those of mother-father ratings on the assessed vocabulary scale. Bilingualism of the evaluated child decreased the likelihood of raters' agreement. We conclude that future reports of agreement, correlation and reliability of ratings will benefit from better definition of terms and stricter methodological approaches. The methodological tutorial provided here holds the potential to increase comparability across empirical reports and can help improve research practices and knowledge transfer to educational and therapeutic settings.
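One common inter-rater ICC, the two-way random single-measure ICC(2,1), can be computed directly from the subject-by-rater table. The abstract does not specify which ICC form was used, so the form choice and the rating pairs below are illustrative assumptions:

```python
def icc_2_1(X):
    """Two-way random-effects, single-measure ICC(2,1), computed
    from the standard two-way ANOVA mean squares."""
    n, k = len(X), len(X[0])          # subjects x raters
    grand = sum(map(sum, X)) / (n * k)
    row = [sum(r) / k for r in X]     # per-subject means
    col = [sum(X[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col) / (k - 1)   # raters
    sse = sum((X[i][j] - row[i] - col[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical vocabulary scores: rows = children, columns = two raters
pairs = [[20, 22], [35, 33], [50, 55], [12, 14], [40, 38]]
print(round(icc_2_1(pairs), 2))  # 0.98
```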
Evidence-informed case rates: paying for safer, more reliable care.
De Brantes, François; Rastogi, Amita
2008-06-01
There is widespread dissatisfaction with the current modes of paying for health care. Created by Prometheus Payment, evidence-informed case rates (ECRs) are designed to create fair payments for all providers delivering care to a patient for a particular condition. ECRs would combine global fees with an allowance for complications and performance incentives. The authors model ECRs for two scenarios, acute myocardial infarction and diabetes. Their analysis shows that, under fee-for-service payments, a high proportion of the costs of care go toward potentially avoidable complications--some 30 percent of payments for acute myocardial infarctions and 60 percent of payments for diabetes care. They conclude that ECRs would hold the delivery system accountable for the technical risk it imposes on the total costs of care--for medical errors and potentially avoidable complications. Further, ECRs would create incentives for providers to deliver care that is safer, more reliable, and consistent with evidence-based guidelines.
Reliability of Urinary Excretion Rate Adjustment in Measurements of Hippuric Acid in Urine
Annamaria Nicolli
2014-07-01
Full Text Available The urinary excretion rate is calculated from short-term, defined-time sample collections with a known sample mass, and this measurement can be used to remove the variability in urine concentrations due to urine dilution. Adjustment to the urinary excretion rate of hippuric acid was evaluated in 31 healthy volunteers (14 males and 17 females). Urine was collected as short-term or spot samples and tested for specific gravity, creatinine and hippuric acid. Hippuric acid values were unadjusted or adjusted to measurements of specific gravity, creatinine or urinary excretion rate. Hippuric acid levels were partially independent of urinary volume and urinary flow rate, in contrast to specific gravity and creatinine, which were both highly dependent on them. Accordingly, hippuric acid was independent of urinary specific gravity and creatinine excretion. Unadjusted and adjusted values for specific gravity or creatinine were generally closely correlated, especially in spot samples. Values adjusted to the urinary excretion rate appeared well correlated with those unadjusted and with those adjusted to specific gravity or creatinine. Thus, adjustment of crude hippuric acid values to the urinary excretion rate is a valid procedure but is difficult to apply in the field of occupational medicine and does not improve the information derived from values determined in spot urine samples, either unadjusted or adjusted to specific gravity and creatinine.
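The excretion-rate adjustment described here reduces to multiplying the analyte concentration by the urine flow rate of a timed collection. A sketch with hypothetical numbers (units and values are assumptions for illustration):

```python
def excretion_rate(conc_mg_per_l, volume_ml, minutes):
    """Analyte excretion rate (mg/h) from a timed urine collection:
    concentration times urine flow rate."""
    flow_l_per_h = (volume_ml / 1000.0) / (minutes / 60.0)
    return conc_mg_per_l * flow_l_per_h

# Hypothetical timed sample: 120 mL over 90 min at 450 mg/L hippuric acid
print(round(excretion_rate(450.0, 120.0, 90.0), 1))  # 36.0 mg/h
```

Unlike a raw spot concentration, this quantity is insensitive to how dilute the sample happens to be, which is the rationale for the adjustment.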
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut-down dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross-section library. The intention was to tightly couple the neutron-induced production of a radio-isotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed that there is good agreement: the differences range between 5-35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Re-examining the Dissolution of Spent Fuel: A Comparison of Different Methods for Calculating Rates
Hanson, B D; Stout, R B
2004-04-09
Dissolution rates for spent fuel have typically been reported as a rate normalized to the surface area of the specimen. Recent evidence has shown that neither the geometric surface area nor that measured with BET accurately predicts the effective surface area of spent fuel. Dissolution rates calculated from results obtained by flow-through tests were re-examined by comparing the cumulative releases and surface-area-normalized rates. While the initial surface area is important for comparison of different rates, it appears that normalizing to the surface area introduces unnecessary uncertainty compared to using cumulative or fractional release rates. Discrepancies in past data analyses are mitigated using this alternative method.
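The two normalizations being compared can be stated in a few lines; the masses, area and duration below are hypothetical, chosen only to show the contrast in units:

```python
def sa_normalized_rate(release_mg, area_m2, days):
    """Classical rate: mass released per unit surface area per day
    (mg m^-2 d^-1); inherits any error in the surface-area estimate."""
    return release_mg / (area_m2 * days)

def fractional_release_rate(release_mg, inventory_mg, days):
    """Alternative rate: fraction of total inventory released per day
    (d^-1); no surface-area estimate required."""
    return release_mg / (inventory_mg * days)

# Hypothetical flow-through result: 2 mg released in 10 days
print(sa_normalized_rate(2.0, 0.05, 10.0))         # mg m^-2 d^-1
print(fractional_release_rate(2.0, 1000.0, 10.0))  # d^-1
```

The fractional form sidesteps the effective-surface-area problem the abstract raises, at the cost of being harder to compare with historical surface-normalized data.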
汤胜道; 汪凤泉
2005-01-01
To solve a real industrial problem, namely how to calculate the reliability of a system with time-varying failure rates, this paper studies a model for the load-sharing parallel system with time-varying failure rates and obtains calculating formulas for the reliability and availability of the system by solving differential equations. The failure rates are expressed in polynomial form; the constant, linear and Weibull failure rates arise as special cases. The polynomial failure rates provide flexibility in modeling practical time-varying failure rates.
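For a polynomial failure rate λ(t), the survival probability of a single component follows from R(t) = exp(−∫₀ᵗ λ(u) du), with the integral available in closed form. A single-component sketch (the paper's load-sharing system solution is more involved; coefficients here are illustrative):

```python
from math import exp

def reliability(poly_coeffs, t):
    """R(t) = exp(-integral_0^t lambda(u) du) for a polynomial failure
    rate lambda(u) = c0 + c1*u + c2*u**2 + ...  (ascending coefficients).
    The integral of each term c_i * u**i is c_i * t**(i+1) / (i+1)."""
    integral = sum(c * t ** (i + 1) / (i + 1)
                   for i, c in enumerate(poly_coeffs))
    return exp(-integral)

# Constant rate reduces to the exponential law: lambda = 0.01 -> R(100) = e^-1
print(round(reliability([0.01], 100.0), 4))
# Linearly increasing (wear-out) rate: lambda(t) = 0.001*t -> R(100) = e^-5
print(round(reliability([0.0, 0.001], 100.0), 4))
```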
Feasibility, Reliability and Predictive Value Of In-Ambulance Heart Rate Variability Registration.
Laetitia Yperzeele
Full Text Available Heart rate variability (HRV) is a parameter of autonomic nervous system function. A decrease of HRV has been associated with disease severity, risk of complications and prognosis in several conditions. We aim to investigate the feasibility and the reliability of in-ambulance HRV registration during emergency interventions, and to evaluate the association between prehospital HRV parameters, patient characteristics, vital parameters and short-term outcome. We conducted a prospective study using a non-invasive 2-lead ECG registration device in 55 patients transported by the paramedic intervention team of the Universitair Ziekenhuis Brussel. HRV assessment included time domain parameters, frequency domain parameters, nonlinear analysis, and time-frequency analysis. The correlation between HRV parameters and patient and outcome characteristics was analyzed and compared to controls. Artifact and ectopic detection rates were higher in patients during ambulance transportation compared to controls in resting conditions, yet technical reasons precluding in-ambulance HRV analysis occurred in only 9.6% of cases. HRV acquisition was possible without safety issues or interference with routine emergency care. Reliability of the results was considered sufficient for sample entropy (SampEn), good for the ratio of low-frequency and high-frequency components (LF/HF ratio) in the frequency and the time-frequency domains, and excellent for the triangular interpolation of the NN interval histogram (TINN) and for the short-term scaling exponent of the detrended fluctuation analysis (DFA α1). HRV indices were significantly reduced in patients with unfavorable outcome compared to patients with favorable outcome and controls. Multivariate analysis identified lower DFA α1 as an independent predictor of unfavorable outcome (OR, 0.155; 95% CI 0.024-0.966; p = 0.049). In-ambulance HRV registration is technically and operationally feasible and produces reliable results for parameters
HU, T.A.
2000-04-27
This work assesses the steady-state flammability level under normal and off-normal ventilation conditions in the tank dome space for the 177 double-shell and single-shell tanks at Hanford. The hydrogen generation rate was calculated for the 177 tanks using a recently developed rate-equation model.
42 CFR 413.337 - Methodology for calculating the prospective payment rates.
2010-10-01
... data that account for the relative resource utilization of different resident types; and (v) Medicare... associated case-mix indices that account for the relative resource utilization of different patient types... prospective payment rates. (a) Data used. (1) To calculate the prospective payment rates, CMS uses—...
Nesterenok, A V
2012-01-01
The propagation of cosmic rays in the Earth's atmosphere is simulated. Calculations of the omnidirectional differential flux of neutrons for different solar activity levels are presented. The effect of solar activity on the production rate of cosmogenic radiocarbon by the nuclear-interacting and muon components of cosmic rays in polar ice is studied. The calculations show that the $^{14}C$ production rate in ice by the cosmic-ray nuclear-interacting component is 30% lower or higher than the average value during periods of solar activity maxima or minima, respectively. Calculations of the altitudinal dependence of the radiocarbon production rate in ice by the cosmic-ray components are illustrated.
A hybrid approach to calculate the Shielding Failure-Caused Trip-out Rate
Zhou Liang
2016-01-01
Full Text Available Lightning has become a major threat to the safe operation of main transmission lines. Reasonable and accurate calculation of the shielding failure rate plays an important role in transmission line and tower design. This paper proposes a hybrid approach to calculate the shielding failure-caused trip-out rate, based on the typical electro-geometric model and the regulation method. A case study proves the validity and correctness of this approach by comparison with the actual operating shielding failure rate.
Takemine, S.; Rikimaru, A.; Takahashi, K.
Rice is one of the staple foods of the world. High-quality rice production requires periodically collecting rice growth data to control the growth of the crop. The height of the plants, the number of stems and the color of the leaves are well-known parameters that indicate rice growth. A rice growth diagnosis method based on these parameters is used operationally in Japan, although collecting these parameters by field survey requires a lot of labor and time. Recently, a labor-saving method for rice growth diagnosis was proposed, based on the vegetation cover rate of rice. The vegetation cover rate of rice is calculated by discriminating rice plant areas in a digital camera image photographed in the nadir direction. Discrimination of rice plant areas in the image was done by automatic binarization processing. However, with a vegetation cover rate calculation method that depends on automatic binarization alone, there is a possibility that the calculated vegetation cover rate decreases even as the rice grows. In this paper, a calculation method for the vegetation cover rate is proposed that is based on the automatic binarization process and refers to growth hysteresis information. For several images obtained by field survey during the rice growing season, the vegetation cover rate was calculated by the conventional automatic binarization processing and by the proposed method, respectively, and the vegetation cover rate of both methods was compared with a reference value obtained by visual interpretation. As a result of the comparison, the accuracy of discriminating rice plant areas was increased by the proposed
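Automatic binarization for cover-rate calculation is commonly done with an automatic threshold, such as Otsu's method, on a greenness index. A self-contained sketch on a synthetic nadir image; the abstract does not state which index or threshold it uses, so the excess-green index (ExG = 2G − R − B) and Otsu's method are assumptions:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's automatic threshold on a 1-D array of index values:
    pick the histogram bin maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)            # cumulative class-0 weight
    m = np.cumsum(p * centers)   # cumulative mean
    mt = m[-1]                   # grand mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(var_between)]

def cover_rate(rgb):
    """Fraction of pixels classified as vegetation via ExG = 2G - R - B
    and an Otsu threshold."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    exg = 2 * g - r - b
    return float((exg > otsu_threshold(exg.ravel())).mean())

# Synthetic nadir image: a green "rice" patch on brown "soil"
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[...] = (120, 90, 60)           # soil
img[25:75, 25:75] = (60, 160, 60)  # vegetation (25% of the frame)
print(cover_rate(img))  # 0.25
```

The growth-hysteresis correction the paper proposes would sit on top of such a binarization, constraining the cover rate so it cannot fall as the crop grows.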
Bellumori, Maria; Jaric, Slobodan; Knight, Christopher A
2011-07-01
Performing a set of isometric muscular contractions to varied amplitudes with instructions to generate force most rapidly reveals a strong linear relationship between peak forces (PF) achieved and corresponding peak rates of force development (RFD). The slope of this relationship, termed the RFD scaling factor (RFD-SF), quantifies the extent to which RFD scales with contraction amplitude. Such scaling allows relative invariance in the time required to reach PF regardless of contraction size. Considering the increasing use of this relationship to study quickness and consequences of slowness in older adults and movement disorders, our purpose was to further develop the protocol to measure RFD-SF. Fifteen adults (19-28 years) performed 125 rapid isometric contractions to a variety of force levels in elbow extensors, index finger abductors, and knee extensors, on 2 days. Data were used to determine (1) how the number of pulses affects computation of the RFD-SF, (2) day-to-day reliability of the RFD-SF, and (3) the nature of RFD-SF differences between diverse muscle groups. While sensitive to the number of pulses used in its computation, the RFD-SF was reliable when computed from 50 pulses (ICC>.7) and more so with 100-125 pulses (ICC=.8-.92). Despite differences in size and function across muscles, RFD-SF was generally similar (i.e., only 8.5% greater in elbow extensors than in index finger abductors and knee extensors; P=.049). Results support this protocol as a reliable means to assess how RFD scales with PF in rapid isometric contractions as well as a simple, non-invasive probe into neuromuscular health.
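The RFD-SF itself is simply the slope of an ordinary least-squares fit of peak RFD against peak force across the set of pulses. A sketch on synthetic pulses; the "true" slope of 8 s⁻¹ and the noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rapid pulses: peak force (N) and peak RFD (N/s) generated
# with an underlying scaling factor of ~8 s^-1 plus measurement noise
peak_force = rng.uniform(50, 500, size=125)
peak_rfd = 8.0 * peak_force + rng.normal(0, 100, size=125)

# RFD scaling factor = slope of the peak-force -> peak-RFD regression
rfd_sf, intercept = np.polyfit(peak_force, peak_rfd, 1)
print(round(rfd_sf, 1))
```

Running the fit on subsets of 50 vs. 125 pulses and on repeat sessions is what the reliability analysis above (ICC) quantifies.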
A comparison of measured and calculated values of air kerma rates from 137Cs in soil
V. P. Ramzaev
2015-01-01
Full Text Available In 2010, a study was conducted to determine the air gamma dose rate from 137Cs deposited in soil. The gamma dose rate measurements and soil sampling were performed at 30 reference plots in the south-west districts of the Bryansk region (Russia) that had been heavily contaminated as a result of the Chernobyl accident. The 137Cs inventory in the top 20 cm of soil ranged from 260 kBq m–2 to 2800 kBq m–2. Vertical distributions of 137Cs in soil cores (6 samples per plot) were determined after sectioning them into ten horizontal layers of 2 cm thickness. The vertical distributions of 137Cs in soil were employed to calculate air kerma rates, K, using two independent methods proposed by Saito and Jacob [Radiat. Prot. Dosimetry, 1995, Vol. 58, P. 29–45] and Golikov et al. [Contaminated Forests – Recent Developments in Risk Identification and Future Perspective. Kluwer Academic Publishers, 1999. P. 333–341]. Very good agreement between the methods was observed (Spearman's rank coefficient of correlation = 0.952; P<0.01); on average, the difference between the kerma rates calculated with the two methods did not exceed 3%. The calculated air kerma rates agreed very well with the measured dose rates in air (Spearman's coefficient of correlation = 0.952; P<0.01). For large grassland plots (n=19), the measured dose rates were on average 6% less than the calculated kerma rates. The tested methods for calculating the air dose rate from 137Cs in soil can be recommended for practical studies in radiology and radioecology.
Ab Initio Calculation of Rate Constants for Molecule–Surface Reactions with Chemical Accuracy
Piccini, GiovanniMaria; Alessio, Maristella
2016-01-01
Abstract The ab initio prediction of reaction rate constants for systems with hundreds of atoms with an accuracy that is comparable to experiment is a challenge for computational quantum chemistry. We present a divide‐and‐conquer strategy that departs from the potential energy surfaces obtained by standard density functional theory with inclusion of dispersion. The energies of the reactant and transition structures are refined by wavefunction‐type calculations for the reaction site. Thermal effects and entropies are calculated from vibrational partition functions, and the anharmonic frequencies are calculated separately for each vibrational mode. This method is applied to a key reaction of an industrially relevant catalytic process, the methylation of small alkenes over zeolites. The calculated reaction rate constants (free energies), pre‐exponential factors (entropies), and enthalpy barriers show that our computational strategy yields results that agree with experiment within chemical accuracy limits (less than one order of magnitude). PMID:27008460
Anonymous
2002-01-01
This paper introduces software for calculating the contribution rate of mechanization in agriculture, using economic-mathematical methods, computer technology and Visual Basic 6.0. The software package has a friendly interface, simple operation, and an accurate, feasible calculation method. It greatly improves on past practice, in which large volumes of data and miscellaneous, trivial methods made answers hard to obtain. It therefore has very high practical value.
Large-scale calculations of the beta-decay rates and r-process nucleosynthesis
Borzov, I.N.; Goriely, S. [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium); Pearson, J.M. [Inst. d'Astronomie et d'Astrophysique, Univ. Libre de Bruxelles, Campus Plaine, Bruxelles (Belgium)]|[Lab. de Physique Nucleaire, Univ. de Montreal, Montreal (Canada)
1998-06-01
An approximation to a self-consistent model of the ground state and β-decay properties of neutron-rich nuclei is outlined. The structure of the β-strength functions in stable and short-lived nuclei is discussed. The results of large-scale calculations of the β-decay rates for spherical and slightly deformed nuclides of relevance to the r-process are analysed and compared with the results of existing global calculations and recent experimental data. (orig.)
Phan, Ngoc Quan; Blome, Christine; Fritz, Fleur; Gerss, Joachim; Reich, Adam; Ebata, Toshiya; Augustin, Matthias; Szepietowski, Jacek C; Ständer, Sonja
2012-09-01
The most commonly used tool for self-report of pruritus intensity is the visual analogue scale (VAS). Similar tools are the numerical rating scale (NRS) and verbal rating scale (VRS). In the present study, initiated by the International Forum for the Study of Itch to assess the reliability of these tools, 471 randomly selected patients with chronic itch (200 males, 271 females, mean age 58.44 years) recorded their pruritus intensity on VAS (100-mm line), NRS (0-10) and VRS (four-point) scales. Re-test reliability was analysed in a subgroup of 250 patients after one hour. Statistical analysis showed high reliability and concurrent validity (r>0.8); the scales showed a high correlation with one another. In conclusion, high reliability and concurrent validity were found for VAS, NRS and VRS. On re-test, higher correlation and fewer missing values were observed. A training session before starting a clinical trial is recommended.
Miller, James; Leggett, Jay; Kramer-White, Julie
2008-01-01
A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human-rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human-rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations and does not duplicate or update any existing references. Neither does it intend to replace existing standards and policy.
Calculating the rate of exothermic energy release for catalytic converter efficiency monitoring
Hepburn, J.S.; Meitzler, A.H. [Ford Motor Co., Dearborn, MI (United States)
1995-12-31
This paper reports on the development of a new methodology for OBD-II catalyst efficiency monitoring. Temperature measurements taken from the center of the catalyst substrate or near the exterior surface of the catalyst brick were used in conjunction with macroscopic energy balances to calculate the instantaneous rate of exothermic energy generation within the catalyst. The total calculated rate of exothermic energy release over the FTP test cycle was within 10% of the actual or theoretical value and provided a good indicator of catalyst light-off for a variety of aged catalytic converters. Normalization of the rate of exothermic energy release in the front section of the converter by the mass flow rate of air inducted through the engine was found to provide a simple yet practical means of monitoring the converter under both FTP and varying types of road driving.
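The macroscopic energy balance underlying the monitor reduces, in its simplest form, to mass flow times heat capacity times temperature rise across the brick. A sketch with hypothetical operating values; the cp figure is an assumed typical exhaust-gas value, not a number from the paper:

```python
def exotherm_rate_kw(mdot_kg_s, t_in_c, t_out_c, cp_kj_kg_k=1.09):
    """Instantaneous exothermic release (kW) from a macroscopic energy
    balance on the catalyst: exhaust mass flow * specific heat *
    temperature rise across the brick. cp is an assumed typical value."""
    return mdot_kg_s * cp_kj_kg_k * (t_out_c - t_in_c)

# Hypothetical operating point: 0.02 kg/s exhaust flow, 60 C rise
print(round(exotherm_rate_kw(0.02, 420.0, 480.0), 2))  # 1.31 kW
```

Normalizing such a rate by inducted air mass flow, as the paper does for the front brick, yields a flow-independent efficiency indicator.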
Panthere V2: Multipurpose Simulation Software for 3D Dose Rate Calculations
Penessot, Gaël; Bavoil, Éléonore; Wertz, Laurent; Malouch, Fadhel; Visonneau, Thierry; Dubost, Julien
2017-09-01
PANTHERE is a multipurpose radiation protection software package developed by EDF to calculate gamma dose rates in complex 3D environments. PANTHERE plays a key role in the EDF ALARA process, making it possible to predict dose rates and to organize and optimize operations in high-radiation environments. PANTHERE is also used for nuclear waste characterization, transport of nuclear materials, etc. It is used in most of the EDF engineering units and by their design service providers and industrial partners.
Ziyun Deng
2016-09-01
Full Text Available In order to develop a Supercomputing Cloud Platform (SCP prototype system using Service-Oriented Architecture (SOA and Petri nets, we researched some technologies for Web service composition. Specifically, in this paper, we propose a reliability calculation method for Web service compositions, which uses Fuzzy Reasoning Colored Petri Net (FRCPN to verify the Web service compositions. We put forward a definition of semantic threshold similarity for Web services and a formal definition of FRCPN. We analyzed five kinds of production rules in FRCPN, and applied our method to the SCP prototype. We obtained the reliability value of the end Web service as an indicator of the overall reliability of the FRCPN. The method can test the activity of FRCPN. Experimental results show that the reliability of the Web service composition has a correlation with the number of Web services and the range of reliability transition values.
Chuang, Yao-Yuan
2007-08-01
Variational transition state theory with multidimensional tunneling (VTST/MT) is used to calculate reaction rate constants. Updated Hessians reduce the computational cost of both geometry optimization and trajectory-following procedures; in this paper, updated Hessians are used to reduce the cost of rate constant calculations with VTST/MT. Although directly applying updated Hessians does not generate good vibrational frequencies along the minimum energy path (MEP), we can either re-compute the full Hessian matrices at fixed intervals or calculate Block Hessians, constructed by one-sided numerical differences for the Hessian elements in the "critical" region and the Bofill updating scheme for the remaining elements. Because the Bofill update method is numerically unstable near the saddle point, we suggest a simple strategy: follow the MEP with full Hessians until the potential has dropped by a certain percentage of the classical barrier height from the barrier top, then perform the rate constant calculation on the extended MEP using Block Hessians. With full Hessians computed down to 80% of the classical barrier height, this strategy yields a mean unsigned percentage deviation (MUPD) of around 10% for the four reactions studied. The proposed strategy is attractive not only because it can be implemented as an automatic procedure, but also because it speeds up the VTST/MT calculation through embarrassingly parallel execution on a personal computer cluster.
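The Bofill updating scheme mentioned above blends the SR1 and PSB Hessian updates with a weighting factor. A minimal sketch of the generic update follows; the guard thresholds are my own choices, and this is not the authors' implementation:

```python
import numpy as np

def bofill_update(B, s, y):
    """Bofill-updated Hessian approximation.

    B : current symmetric Hessian approximation (n x n)
    s : coordinate step, x_new - x_old
    y : gradient change, g_new - g_old

    Blends the SR1 and PSB updates with the standard mixing factor
    phi = (s.xi)^2 / (|s|^2 |xi|^2), where xi = y - B s.
    """
    xi = y - B @ s
    ss = s @ s
    sx = s @ xi
    # SR1 contribution (guarded against a vanishing denominator)
    dB_sr1 = np.outer(xi, xi) / sx if abs(sx) > 1e-12 else np.zeros_like(B)
    # PSB contribution
    dB_psb = (np.outer(xi, s) + np.outer(s, xi)) / ss \
             - sx * np.outer(s, s) / ss**2
    xx = xi @ xi
    phi = sx**2 / (ss * xx) if xx > 1e-24 else 1.0
    return B + phi * dB_sr1 + (1.0 - phi) * dB_psb
```

Both constituent updates satisfy the secant condition B_new·s = y, so the blend does too, and symmetry is preserved.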
Accuracy of heart strain rate calculation derived from Doppler tissue velocity data
Santos, Andres; Ledesma-Carbayo, Maria J.; Malpica, Norberto; Desco, Manuel; Antoranz, Jose C.; Marcos-Alberca, Pedro; Garcia-Fernandez, Miguel A.
2001-05-01
Strain Rate (SR) Imaging is a recent imaging technique that provides information about regional myocardial deformation by measuring local compression and expansion rates. SR can be obtained by calculating the local in-plane velocity gradients along the ultrasound beam from Doppler Tissue velocity data. However, SR calculations are very dependent on image noise and artifacts, and different calculation algorithms may provide inconsistent results. This paper compares techniques to calculate SR. 2D Doppler Tissue Images (DTI) are acquired with an Acuson Sequoia scanner. Noise was measured with the aid of a rotating phantom. Processing is performed in polar coordinates. For each image, after removal of black spot artifacts by a selective median filter, two different SR calculation methods have been implemented. In the first one, SR is computed as the discrete velocity derivative, and noise is reduced with a variable-width Gaussian filter. In the second method, a smoothing cubic spline is calculated for every scan line according to the noise level, and the derivative is obtained from an analytical expression. Both methods have been tested with DTI data from synthetic phantoms and normal volunteers. Results show that noise characteristics, border effects and an adequate scale are critical to obtain meaningful results.
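The two SR estimators compared above can be sketched in Python; the Gaussian width, the spline smoothing factor, and the link between noise level and smoothing are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import UnivariateSpline

def strain_rate_discrete(v, dr, sigma=2.0):
    """Method 1: SR as the discrete spatial derivative of tissue
    velocity along the beam, Gaussian-filtered to suppress noise."""
    return gaussian_filter1d(np.gradient(v, dr), sigma)

def strain_rate_spline(r, v, noise_level=1.0):
    """Method 2: SR from the analytic derivative of a smoothing cubic
    spline fitted to one scan line; tying the smoothing factor to the
    estimated noise level is an assumed heuristic."""
    spl = UnivariateSpline(r, v, k=3, s=noise_level**2 * len(r))
    return spl.derivative()(r)
```

For a noise-free linear velocity profile both estimators recover the constant gradient; they differ only in how noise is attenuated.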
Reliability of the peer-review process for adverse event rating.
Alan J Forster
Full Text Available BACKGROUND: Adverse events are poor patient outcomes caused by medical care. Their identification requires the peer-review of poor outcomes, which may be unreliable. Combining physician ratings might improve the accuracy of adverse event classification. OBJECTIVE: To evaluate the variation in peer-reviewer ratings of adverse outcomes; determine the impact of this variation on estimates of reviewer accuracy; and determine the number of reviewers judging that an adverse event occurred that is required to ensure that the true probability of an adverse event exceeds 50%, 75% or 95%. METHODS: Thirty physicians rated 319 case reports giving details of poor patient outcomes following hospital discharge. They rated whether medical management caused the outcome using a six-point ordinal scale. We conducted latent class analyses to estimate the prevalence of adverse events as well as the sensitivity and specificity of each reviewer. We used this model and Bayesian calculations to determine the probability that an adverse event truly occurred for each patient as a function of the number of positive ratings. RESULTS: The overall median score on the 6-point ordinal scale was 3 (IQR 2, 4), but the individual rater median score ranged from a minimum of 1 (in four reviewers) to a maximum median score of 5. The overall percentage of cases rated as an adverse event was 39.7% (3798/9570). The median kappa for all pair-wise combinations of the 30 reviewers was 0.26 (IQR 0.16, 0.42; Min = -0.07, Max = 0.62). Reviewer sensitivity and specificity for adverse event classification ranged from 0.06 to 0.93 and 0.50 to 0.98, respectively. The estimated prevalence of adverse events using a latent class model with a common sensitivity and specificity for all reviewers (0.64 and 0.83, respectively) was 47.6%. For patients to have a 95% chance of truly having an adverse event, at least 3 of 3 reviewers are required to deem the outcome an adverse event. CONCLUSION: Adverse event
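Under the latent-class model's assumptions (conditionally independent reviewers sharing a common sensitivity and specificity), the Bayesian calculation described above reduces to a one-line posterior. The sketch below plugs in the point estimates quoted in the abstract:

```python
def p_adverse_event(k, n, prev=0.476, sens=0.64, spec=0.83):
    """Posterior probability that an adverse event truly occurred,
    given that k of n independent reviewers rated it as one.

    Defaults are the latent-class estimates reported above:
    prevalence 47.6%, common sensitivity 0.64, specificity 0.83.
    """
    like_ae = sens**k * (1.0 - sens)**(n - k)   # P(ratings | adverse event)
    like_no = (1.0 - spec)**k * spec**(n - k)   # P(ratings | no adverse event)
    return prev * like_ae / (prev * like_ae + (1.0 - prev) * like_no)
```

With these estimates, the posterior for 3 of 3 positive ratings exceeds 0.95 while 2 of 2 falls just short, consistent with the abstract's conclusion.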
Thermal Fission Rate Calculated Numerically by Particles Multi-passing over Saddle Point
LIU Ling; BAO Jing-Dong
2004-01-01
Langevin simulation of particles multi-passing over the saddle point is proposed as a way to calculate the thermal fission rate. Due to finite friction and the corresponding thermal fluctuation, backstreaming exists during the particles' descent from the saddle to scission, so the diffusion behind the saddle point influences the stationary flow across it. A dynamical correction factor, defined as the ratio of the flux from multiple passages over the saddle point to that from first passage, is evaluated analytically. The results show that the fission rate calculated from particles multi-passing over the saddle point is lower than the one calculated from first passage over the saddle point, and that the former approaches the result obtained at the scission point.
2010-12-28
... Weighted Average Dumping Margin and Assessment Rate in Certain Antidumping Duty Proceedings AGENCY: Import... comments regarding the calculation of the weighted average dumping margin and antidumping duty assessment...-specific export prices and average normal values and does not offset any dumping that is found with...
31 CFR 356.21 - How are awards at the high yield or discount rate calculated?
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are awards at the high yield or discount rate calculated? 356.21 Section 356.21 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE, DEPARTMENT OF THE TREASURY BUREAU OF THE PUBLIC DEBT SALE...
Ma Shwe Zin Nyunt
2013-10-01
Full Text Available Background: The Clinical Dementia Rating (CDR) scale is widely used to assess cognitive impairment in Alzheimer's disease. It requires collateral information from a reliable informant, who is not available in many instances. We adapted the original CDR scale for use with elderly subjects without an informant (CDR-NI) and evaluated its reliability and validity for assessing mild cognitive impairment (MCI) and dementia among community-dwelling elderly subjects. Method: At two consecutive visits 1 week apart, nurses trained in CDR assessment interviewed, observed and rated cognitive and functional performance according to a protocol in 90 elderly subjects with suboptimal cognitive performance [Mini-Mental State Examination (MMSE) ...]. Results: The CDR-NI scores (0, 0.5, 1) showed good internal consistency (Cronbach's α 0.83-0.84), inter-rater reliability (κ 0.77-1.00 for six domains and 0.95 for global rating), test-retest reliability (κ 0.75-1.00 for six domains and 0.80 for global rating), good agreement (κ 0.79) with the clinical assessment status of MCI (n = 37) and dementia (n = 4), and significant differences in the mean scores for MMSE, MOCA and Instrumental Activities of Daily Living (ANOVA global p ...). Conclusion: Owing to the protocol of interviews, assessments and structured observations gathered during the two visits, the CDR-NI provides valid and reliable assessment of MCI and dementia in community-living elderly subjects without an informant.
The impact of different sampling rates and calculation time intervals on ROTI values
Jacobsen Knut Stanley
2014-01-01
Full Text Available The ROTI (Rate of TEC Index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, vary from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase awareness of this issue. We have investigated the effect of different parameters on the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that ROTI values calculated using different parameter choices are strongly positively correlated; the values themselves, however, are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rates is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline "quiet" or "noisy" level for one location or choice of parameters may not be valid for another location or choice of parameters.
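The ROTI algorithm referred to above is simple to state: ROT is the time derivative of TEC (conventionally in TECU/min) and ROTI is the standard deviation of ROT over a calculation interval. A sketch, with the interval handling as an illustrative choice:

```python
import numpy as np

def roti(tec, sample_interval, window):
    """ROTI from an evenly sampled TEC time series.

    tec             : TEC values in TECU
    sample_interval : seconds between samples (e.g. 1 or 30)
    window          : ROTI calculation interval in seconds (e.g. 300)

    Returns one ROTI value (TECU/min) per whole calculation interval.
    """
    rot = np.diff(tec) / (sample_interval / 60.0)   # ROT in TECU/min
    n = int(window // sample_interval)
    m = (len(rot) // n) * n          # trim to whole intervals
    chunks = rot[:m].reshape(-1, n)
    return chunks.std(axis=1)
```

Downsampling the TEC series before differencing removes the high-frequency part of the ROT spectrum and therefore lowers ROTI, which is exactly the sample-rate effect the paper quantifies.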
Calculation of expected rates of fisheries‐induced evolution in data‐poor situations
Andersen, Ken Haste
A central part of an impact assessment of the evolutionary effects of fishing is a calculation of the expected evolutionary rates induced by current fishing practice and an evaluation of how alternative fishing patterns may reduce the evolutionary impacts of fishing. Here a general size-based framework for modeling the demography of fish, based on size-based prescriptions of natural mortality, growth, and fishing, is presented. Life history theory is used to reduce the necessary parameter set by utilizing relations between parameters, making the framework particularly well suited for data-poor situations where only the size at maturation or the asymptotic size is known. The framework is applied to perform the modeling part of an evolutionary impact assessment, using basic quantitative genetics to calculate expected rates of evolution in size at maturation, growth rate, and investment in gonads. A sensitivity...
CHARADE: A characteristic code for calculating rate-dependent shock-wave response
Johnson, J.N.; Tonks, D.L.
1991-01-01
In this report we apply spatially one-dimensional methods and simple shock-tracking techniques to the solution of rate-dependent material response under flat-plate-impact conditions. This method of solution eliminates potential confusion of material dissipation with artificial dissipative effects inherent in finite-difference codes, and thus lends itself to accurate calculation of elastic-plastic deformation, shock-to-detonation transition in solid explosives, and shock-induced structural phase transformation. Equations are presented for rate-dependent thermoelastic-plastic deformation for (100) planar shock-wave propagation in materials of cubic symmetry (or higher). Specific numerical calculations are presented for polycrystalline copper using the mechanical threshold stress model of Follansbee and Kocks with transition to dislocation drag. A listing of the CHARADE (for characteristic rate dependence) code and sample input deck are given. 26 refs., 11 figs.
L. Chubuk
2013-09-01
Full Text Available The existing methods of determining discount and capitalization rates for the valuation of income-producing real estate are analyzed in terms of their prevalence, advantages and drawbacks. Alternative methods for setting discount rates are selected (Galasyuk's method). Recommendations are given for bringing the calculated discount and capitalization rates closer to actual market data; in particular, wider use of the actually attained level of expected return on invested capital is proposed. Actual capitalization rates in the office, retail and warehouse real estate markets of the Ukrainian capital are examined for the period from 2008 to 2013, showing considerable variability of investment return indexes. The necessity of increasing the corrections to supply prices to 18-20% when calculating capitalization rates by the market extraction method is confirmed. Accounting for additional market factors when constructing recapitalization rates by Inwood's method is also proposed: annual growth (or decline) of the lease rates obtained from the income property; annual growth (or decline) of the property's value; and the percentage decrease of the property's value resulting from all kinds of depreciation (when residual value differs from zero).
Clouvas, A; Antonopoulos-Domis, M; Silva, J
2000-01-01
The dose rate conversion factors D_CF (absorbed dose rate in air per unit activity per unit soil mass, nGy h⁻¹ per Bq kg⁻¹) are calculated 1 m above ground for photon emitters of natural radionuclides uniformly distributed in the soil. Three Monte Carlo codes are used: 1) the MCNP code of Los Alamos; 2) the GEANT code of CERN; and 3) a Monte Carlo code developed in the Nuclear Technology Laboratory of the Aristotle University of Thessaloniki. The accuracy of the Monte Carlo results is tested by comparing the unscattered flux obtained by the three codes with an independent straightforward calculation. All codes, and particularly MCNP, accurately calculate the absorbed dose rate in air due to the unscattered radiation. For the total radiation (unscattered plus scattered), the D_CF values calculated from the three codes are in very good agreement with one another. The comparison between these results and the results deduced previously by other authors indicates a good ag...
A Precise Analytic Delayed Coincidence Efficiency and Accidental Coincidence Rate Calculation
Yu, Jingyi; Chen, Shaomin
2013-01-01
In a delayed coincidence experiment, for example, the recent reactor neutrino oscillation experiments, a precise analytic determination of the delayed coincidence signal efficiency and the accidental coincidence background rate is important for the high accuracy measurement of the oscillation parameters and to understand systematic uncertainties associated with fluctuations in muon rate and random background rate. In this work, a data model is proposed to describe the full time sequence of all possible events on the live time axis. The acceptance of delayed coincidence signals, the rate of accidental backgrounds and other coincidence possibilities are calculated by assuming that all of the `net muons' are uniformly distributed on the live time axis. The intrinsic relative uncertainties in the event rates are at the $10^{-5}$ level for all combinations. The model and predictions are verified with a high statistics Monte Carlo study with a set of realistic parameters.
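For two independent Poisson event streams paired within a fixed coincidence window, the lowest-order accidental rate is the product of the singles rates and the window length. A sketch with a small Monte Carlo cross-check in the spirit of the paper's verification; the rates, window, and live time used below are arbitrary illustrative values:

```python
import numpy as np

def accidental_rate(r_prompt, r_delayed, window):
    """First-order accidental coincidence rate (Hz) for independent
    Poisson streams with singles rates r_prompt, r_delayed (Hz) and a
    coincidence window of `window` seconds."""
    return r_prompt * r_delayed * window

def mc_accidentals(r_prompt, r_delayed, window, live_time, seed=0):
    """Monte Carlo estimate: generate both streams on the live-time
    axis and count delayed events inside (t, t + window] after each
    prompt event."""
    rng = np.random.default_rng(seed)
    t_p = np.sort(rng.uniform(0.0, live_time, rng.poisson(r_prompt * live_time)))
    t_d = np.sort(rng.uniform(0.0, live_time, rng.poisson(r_delayed * live_time)))
    counts = np.searchsorted(t_d, t_p + window) - np.searchsorted(t_d, t_p)
    return counts.sum() / live_time
```

The Monte Carlo estimate converges to the analytic product formula as the live time grows, which is the kind of consistency check the high-statistics study in the paper performs at far higher precision.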
Sharmila Vaz
Full Text Available The Social Skills Rating System (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports, not just student), as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective minimum clinically important difference (MCID).
Song, L.; Balakrishnan, N.; Walker, K. M.; Stancil, P. C.; Thi, W. F.; Kamp, I.; van der Avoird, A.; Groenenboom, G. C.
2015-11-01
We present calculated rate coefficients for ro-vibrational transitions of CO in collisions with H atoms for a gas temperature range of 10 K ≤ T ≤ 3000 K, based on the recent three-dimensional ab initio H-CO interaction potential of Song et al. Rate coefficients for ro-vibrational v = 1, j = 0-30 → v′ = 0, j′ transitions were obtained from scattering cross sections previously computed with the close-coupling (CC) method by Song et al. Combining these with the rate coefficients for vibrational v = 1-5 → v′ < v quenching obtained with the infinite-order sudden approximation, we propose a new extrapolation scheme that yields the rate coefficients for ro-vibrational v = 2-5, j = 0-30 → v′, j′ de-excitation. Cross sections and rate coefficients for ro-vibrational v = 2, j = 0-30 → v′ = 1, j′ transitions calculated with the CC method confirm the effectiveness of this extrapolation scheme. Our calculated and extrapolated rates are very different from those that have been adopted in the modeling of many astrophysical environments. The current work provides the most comprehensive and accurate set of ro-vibrational de-excitation rate coefficients for the astrophysical modeling of the H-CO collision system. The application of the previously available and new data sets in astrophysical slab models shows that the line fluxes typically change by 20%-70% in high-temperature environments (800 K) with an H/H2 ratio of 1; larger changes occur for lower temperatures.
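Rate coefficients are obtained from cross sections by a thermal average over the Maxwell-Boltzmann collision-energy distribution. A generic sketch of that step; the energy grid, cross section, and reduced mass used below are illustrative, not the paper's values:

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rate_coefficient(energies, sigmas, mu, T):
    """Thermally averaged rate coefficient
    k(T) = sqrt(8/(pi*mu)) * (kB*T)**(-3/2) * integral sigma(E) E exp(-E/kB T) dE

    energies : collision energies in J, ascending
    sigmas   : cross sections in m^2 on the same grid
    mu       : reduced mass of the colliding pair, kg
    T        : temperature, K
    Returns k(T) in m^3/s."""
    kt = K_B * T
    integrand = sigmas * energies * np.exp(-energies / kt)
    # trapezoidal rule, kept explicit for portability across NumPy versions
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(energies))
    return np.sqrt(8.0 / (np.pi * mu)) * kt**-1.5 * integral
```

For a constant cross section this reduces to sigma times the mean relative speed, sqrt(8 kB T / (pi mu)), which makes a convenient sanity check.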
Kimberley, Teresa Jacobson; Borich, Michael R; Prochaska, Kristina D; Mundfrom, Shannon L; Perkins, Ariel E; Poepping, Joseph M
2009-10-23
The purpose of this paper is to describe a clearly defined manual method for calculating cortical silent period (CSP) length that can be employed successfully and reliably by raters after minimal training in subjects with focal hand dystonia (FHD) and healthy subjects. A secondary purpose was to explore intra-subject variability of the CSP in subjects with FHD vs. healthy subjects. Two raters previously naïve to CSP identification and one experienced rater independently analyzed 170 CSP measurements collected in 6 subjects with FHD and 9 healthy subjects. The intraclass correlation coefficient (ICC) was calculated to quantify inter-rater reliability within the two groups of subjects. The relative variability of CSP in each group was calculated via the coefficient of variation (CV), as was the relative variation between raters within repeated measures of individual subjects. Reliability measures were as follows (mean of three raters): all subjects, ICC = 0.976; healthy subjects, ICC = 0.965; subjects with FHD, ICC = 0.956. The median within-subject variability was CV = 7.33% for the healthy group and CV = 11.78% for subjects with FHD. The median variability of calculating individual subject CSP duration between raters was CV = 10.23% in subjects with dystonia and CV = 10.46% in healthy subjects. Manual calculation of the CSP results in excellent reliability between raters of varied levels of experience. Healthy subjects display less variability in CSP. Despite greater variability, the CSP in impaired subjects can be reliably calculated across raters.
Iftimie, R; Schofield, J P; Iftimie, Radu; Salahub, Dennis; Schofield, Jeremy
2003-01-01
In this article, we propose an efficient method for sampling the relevant state space in condensed-phase reactions. In the present method, the reaction is described by solving the electronic Schrödinger equation for the solute atoms in the presence of explicit solvent molecules. The sampling algorithm uses a molecular mechanics guiding potential in combination with simulated tempering ideas and allows thorough exploration of the solvent state space in the context of an ab initio calculation, even when the dielectric relaxation time of the solvent is long. The method is applied to the study of the double proton transfer reaction that takes place between a molecule of acetic acid and a molecule of methanol in tetrahydrofuran. It is demonstrated that calculations of rates of chemical transformations occurring in solvents of medium polarity can be performed with an increase in CPU time by factors ranging from 4 to 15 with respect to gas-phase calculations.
Niu, YiFei; Vretenar, Dario; Meng, Jie
2011-01-01
We introduce a self-consistent microscopic theoretical framework for modelling the process of electron capture on nuclei in the stellar environment, based on relativistic energy density functionals. The finite-temperature relativistic mean-field model is used to calculate the single-nucleon basis and the occupation factors in a target nucleus, and $J^{\pi} = 0^{\pm}$, $1^{\pm}$, $2^{\pm}$ charge-exchange transitions are described by the self-consistent finite-temperature relativistic random-phase approximation. Cross sections and rates are calculated for electron capture on 54,56Fe and 76,78Ge in the stellar environment, and the results are compared with predictions of similar and complementary model calculations.
Maathuis, KGB; van der Schans, CP; van Iperen, A; Rietman, HS; Geertzen, JHB
2005-01-01
The aim of this study was to test the inter- and intra-observer reliability of the Physician Rating Scale (PRS) and the Edinburgh Visual Gait Analysis Interval Testing (GAIT) scale for use in children with cerebral palsy (CP). Both assessment scales are quantitative observational scales, evaluating
van Ark, Mathijs; Zwerver, Johannes; Diercks, Ronald L; van den Akker-Scheek, Inge
2014-01-01
Background: Lateral Epicondylalgia (LE) is a common injury for which no reliable and valid measure exists to determine severity in the Dutch language. The Patient-Rated Tennis Elbow Evaluation (PRTEE) is the first questionnaire specifically designed for LE but in English. The aim of this study was t
Dalla Pozza, Robert; Kleinmann, Arne; Bechtold, Susanne; Kozlik-Feldmann, R; Daebritz, S; Netz, Heinrich
2006-06-01
Assessing sympathovagal balance by calculating the LF/HF ratio from power spectral analysis (PSA) of heart rate variability (HRV) may be difficult in adolescents, as chaotic breathing leads to methodological bias and metronomic breathing is not easy to perform. Diastolic blood pressure variability (dBPV) is less influenced by breathing and may therefore offer more stable values for calculations. The present study was performed on 72 paediatric subjects to investigate possible alternative LF/HF calculations from PSA of HRV and dBPV. The 72 paediatric individuals comprised three groups: 12 controls, 17 heart- and heart-lung-transplanted children (TX), and 43 adolescents born small for gestational age (SGA). Short-term beat-to-beat HRV and BP recordings were made supine and during active standing. The ratios calculated were: LF/HF from HRV, LF/HF from dBPV, LF-dBPV/HF-HRV and LF-HRV/HF-dBPV. LF/HF from dBPV and LF-HRV/HF-dBPV did not correlate with LF/HF-HRV. Correlation of LF/HF from HRV and LF-dBPV/HF-HRV was high, especially in TX patients and in patients with resting heart rates above 90 beats per minute. In adolescents, the ratio LF-dBPV/HF-HRV may be an alternative method for calculating sympathovagal balance that is less influenced by breathing patterns. In younger patients with elevated resting heart rates, and also in patients with very low HRV such as TX patients, this method could be a supplemental diagnostic tool whenever autonomic nervous control of the cardiocirculatory system has to be assessed.
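The LF/HF ratios compared in this study are band-power ratios from power spectral analysis. A generic sketch using Welch's method on an evenly resampled beat-to-beat series; the band limits are the standard short-term HRV conventions, assumed here rather than taken from the paper:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(x, fs, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    """LF/HF ratio of an evenly resampled tachogram or diastolic BP
    series sampled at fs Hz.

    LF and HF are integrated band powers of the Welch power spectral
    density over the standard short-term bands (Hz)."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return pxx[m].sum()
    return band_power(*lf) / band_power(*hf)
```

A synthetic series dominated by a 0.1 Hz oscillation yields a large ratio; one dominated by a 0.3 Hz oscillation yields a small one.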
Direct Calculation of Ice Homogeneous Nucleation Rate for a Molecular Model of Water
Haji-Akbari, Amir
2015-01-01
Ice formation is ubiquitous in nature, with important consequences in a variety of systems and environments, including biological cells [1], soil [2], aircraft [3], transportation infrastructure [4] and atmospheric clouds [5,6]. However, its intrinsic kinetics and microscopic mechanism are difficult to discern with current experiments. Molecular simulations of ice nucleation are also challenging, and direct rate calculations have only been performed for coarse-grained models of water [7-9]. For the more realistic molecular models, only indirect estimates have been obtained, e.g.~by assuming the validity of classical nucleation theory [10]. Here, we use a path sampling approach to perform the first direct rate calculation of homogeneous nucleation of ice in a molecular model of water. We use TIP4P/Ice [11], the most accurate among the existing molecular models for studying ice polymorphs. By using a novel topological order parameter for distinguishing different polymorphs, we are able to identify a freezing me...
Voropaeva, Z. I.
2010-01-01
The comparative assessment of methods for the calculation of the gypsum application rates based on the exchangeable sodium (Gedroits, Schollenberger), the estimated sodium (Schoonover), and the soil’s requirement for calcium (the version of the Omsk State Agrarian University) showed that, for the chemical amelioration of solonetzes with different contents of exchangeable sodium in Western Siberia, it is economically and ecologically advisable to calculate the ameliorant application rates from the estimated sodium. It was experimentally shown that the content of displaced magnesium used by Schoonover is a more efficient unified criterion than the value of the calcium adsorption by zonal soils. For improving the method’s accuracy, it was proposed to change the conditions of the soil preparation by regulating the concentration of the displacing solution, the interaction time, and the temperature.
External dose-rate conversion factors for calculation of dose to the public
1988-07-01
This report presents a tabulation of dose-rate conversion factors for external exposure to photons and electrons emitted by radionuclides in the environment. This report was prepared in conjunction with criteria for limiting dose equivalents to members of the public from operations of the US Department of Energy (DOE). The dose-rate conversion factors are provided for use by the DOE and its contractors in performing calculations of external dose equivalents to members of the public. The dose-rate conversion factors for external exposure to photons and electrons presented in this report are based on a methodology developed at Oak Ridge National Laboratory. However, some adjustments of the previously documented methodology have been made in obtaining the dose-rate conversion factors in this report. 42 refs., 1 fig., 4 tabs.
Simone, Angela; Kolarik, Jakub; Iwamatsu, Toshiya
2011-01-01
... occupants, it is reasonable to consider both the exergy flows in the building and those within the human body. Until now, no data have been available on the relation between human-body exergy consumption rates and subjectively assessed thermal sensation. The objective of the present work was to relate thermal sensation to the human-body exergy consumption rate. Generally, the relationship between air temperature and the exergy consumption rate, as a first approximation, shows an increasing trend. Taking account of both convective and radiative heat exchange between the human body and the surrounding environment by using the calculated operative temperature, exergy consumption rates increase as the operative temperature increases above 24 °C or decreases below 22 °C. With the data available so far, a second-order polynomial relationship between thermal sensation and the exergy consumption rate was established.
Assessments of fluid friction factors for use in leak rate calculations
Chivers, T.C. [Berkeley Technology Centre, Glos (United Kingdom)
1997-04-01
Leak-before-Break procedures require estimates of leakage, and these in turn need fluid friction to be assessed. In this paper, available data on flow rates through idealized and real crack geometries are reviewed in terms of a single friction factor λ. It is shown that for λ < 1, flow rates can be bounded using correlations in terms of surface R_a values. For λ > 1 the database is less precise, but λ ≈ 4 is an upper bound; hence in this region flow calculations can be assessed using 1 < λ < 4.
Window Energy Rating System and Calculation of Energy Performance of Windows
Laustsen, Jacob Birck; Svendsen, Svend
The goal of reducing the energy consumption in buildings is the background for the introduction of an energy rating system for fenestration products in Denmark. The energy rating system requires that producers declare, among other things, the heat loss coefficient, U, and the total solar energy transmittance, g; for reference conditions the resulting net energy gain is used for classification of panes. In actual situations the net energy gain depends on window orientation, climate and shadows. Therefore a simple calculation program has been made that takes these circumstances into account. The program is combined with a database of Danish...
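For reference conditions, the classification rests on a net energy gain computed from the two declared quantities: solar gain minus transmission heat loss over the heating season. A minimal sketch; the reference solar-gain and degree-hour values are the commonly quoted Danish ones and should be treated as my assumption, not figures from this paper:

```python
def net_energy_gain(g, u, solar_ref=196.4, degree_hours_ref=90.36):
    """Net energy gain of a pane over the heating season, kWh/m^2 per year.

    g : total solar energy transmittance (g-value, dimensionless)
    u : heat loss coefficient U, W/(m^2 K)

    solar_ref (kWh/m^2) and degree_hours_ref (kKh) are assumed Danish
    reference-condition values, not taken from this paper.
    """
    return solar_ref * g - degree_hours_ref * u
```

For example, a pane with g = 0.63 and U = 1.1 W/(m^2 K) comes out at roughly 24 kWh/m^2 per year under these assumed reference values, i.e. it gains more solar heat than it loses.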
Arbib, Zouhayr; de Godos Crespo, Ignacio; Corona, Enrique Lara; Rogalla, Frank
2017-06-01
Microalgae culture in high rate algae ponds (HRAP) is an environmentally friendly technology for wastewater treatment. However, for the implementation of these systems, a better understanding of the oxygenation potential and the influence of climate conditions is required. In this work, the rates of oxygen production, consumption, and exchange with the atmosphere were calculated under varying conditions of solar irradiance and dilution rate during six months of operation in a real-scale unit. This analysis made it possible to determine the biological response of these dynamic systems. The rates of oxygen consumption measured were considerably higher than the values calculated from the organic loading rate. The response to light intensity, in terms of oxygen production in the bioreactor, was described with one of the models proposed for microalgae culture at dense concentrations. This model is based on the availability of light inside the culture and the specific response of the microalgae to this parameter. The specific response to solar radiation intensity showed reasonable stability in spite of fluctuations due to meteorological conditions. The methodology developed is a useful tool for optimization and prediction of the performance of these systems.
Calculation of the similarity rate between images based on the local minima present Therein
K. Hourany
2016-12-01
Full Text Available Hourany, K., Benmeddour, F., Moulin, E., Assaad, J. and Zaatar, Y. Calculation of the similarity rate between images based on the local minima present therein. 2016. Lebanese Science Journal, 17(2): 177-192. Image processing is a vast field that draws on both computer science and applied mathematics. It studies the enhancement and transformation of digital images, permitting the improvement of image quality and the extraction of information. The comparison of digital images is a central problem that has been addressed in several studies because of its many applications, especially in control and surveillance, such as Structural Health Monitoring using acoustic waves. The digital representation of images makes it possible to compare them automatically, notably by detecting and quantifying the differences between them. In this study we present an algorithm for calculating the similarity rate between images based on the local minima present therein. The algorithm has two main parts: the first extracts the local minima from an image, and the second calculates the similarity rate between two images.
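The abstract above names the two stages (minima extraction, then comparison) without giving the details. A minimal sketch of that pipeline, under the assumption that minima are strict 4-neighbourhood minima and that the similarity rate is the Jaccard overlap of the two minima sets (the paper's exact matching rule is not stated in the abstract):

```python
def local_minima(img):
    """Coordinates of strict local minima of a 2-D grid (4-neighbourhood)."""
    h, w = len(img), len(img[0])
    minima = set()
    for i in range(h):
        for j in range(w):
            neighbours = [img[i + di][j + dj]
                          for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= i + di < h and 0 <= j + dj < w]
            if all(img[i][j] < v for v in neighbours):
                minima.add((i, j))
    return minima

def similarity_rate(img_a, img_b):
    """Percentage overlap (Jaccard index) of the two minima sets."""
    a, b = local_minima(img_a), local_minima(img_b)
    if not (a | b):
        return 100.0
    return 100.0 * len(a & b) / len(a | b)
```

Identical images score 100%; images whose minima fall in disjoint positions score 0%.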
Adam, J; Tater, M; Truhlik, E; Epelbaum, E; Machleidt, R; Ricci, P
2011-01-01
The doublet capture rate of the negative muon capture in deuterium is calculated employing the nuclear wave functions generated from accurate nucleon-nucleon potentials constructed at next-to-next-to-next-to-leading order of heavy-baryon chiral perturbation theory and the weak meson exchange current operator derived within the same formalism. All but one of the low-energy constants that enter the calculation were fixed from pion-nucleon and nucleon-nucleon scattering data. The low-energy constant d^R (c_D), which cannot be determined from the purely two-nucleon data, was extracted recently from the triton beta-decay and the binding energies of the three-nucleon systems. The calculated values of the doublet capture rates show a rather large spread for the used values of the d^R. Precise measurement of the doublet capture rate in the future will not only help to constrain the value of d^R, but also provide a highly nontrivial test of the nuclear chiral EFT framework. Besides, the precise knowledge of the constant d^R will allow for consistent calculations of other two-nucleon weak processes, such as proton-proton fusion and solar neutrino scattering on deuterons, which are important for astrophysics.
Riefenberg, J.; Wuest, W.J.
1994-01-01
A family of personal computer programs that calculate the Coal Mine Roof Rating (CMRR) have been developed by the U.S. Bureau of Mines. The CMRR, a rock mass classification system, was recently developed by Bureau researchers to provide a link between the qualitative geologists' description of coal mine roof and the quantitative mine engineers' needs for mine design, roof support selection, and hazard detection. The program, CMRR, is a user-friendly, interactive program into which raw field data are input, and a CMRR is calculated and output along with two graphic displays. The first graphic display is a plan view map with the roof ratings displayed on a color-coded scale, and the second display shows a stratigraphic section of the bolted roof interval and its resultant roof rating. In addition, a Lotus 1-2-3 worksheet, BOM-CMRR.WK3, has been developed for easy storage of field data. The worksheet also includes macros developed for calculation and storage of the CMRR. Summary reports for analysis of site-specific information are readily generated using Lotus. These programs help engineers apply the CMRR in ground control studies.
Resolving an ostensible inconsistency in calculating the evaporation rate of sessile drops.
Chini, S F; Amirfazli, A
2016-06-04
This paper resolves an ostensible inconsistency in the literature in calculating the evaporation rate for sessile drops in a quiescent environment. The earlier models in the literature have shown that adapting the evaporation flux model for a suspended spherical drop to calculate the evaporation rate of a sessile drop needs a correction factor; the correction factor was shown to be a function of the drop contact angle, i.e. f(θ). However, there seemed to be a problem as none of the earlier models explicitly or implicitly mentioned the evaporation flux variations along the surface of a sessile drop. The more recent evaporation models include this variation using an electrostatic analogy, i.e. the Laplace equation (steady-state continuity) in a domain with a known boundary condition value, or known as the Dirichlet problem for Laplace's equation. The challenge is that the calculated evaporation rates using the earlier models seemed to differ from that of the recent models (note both types of models were validated in the literature by experiments). We have reinvestigated the recent models and found that the mathematical simplifications in solving the Dirichlet problem in toroidal coordinates have created the inconsistency. We also proposed a closed form approximation for f(θ) which is valid in a wide range, i.e. 8°≤θ≤131°. Using the proposed model in this study, theoretically, it was shown that the evaporation rate in the CWA (constant wetted area) mode is faster than the evaporation rate in the CCA (constant contact angle) mode for a sessile drop.
Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas
2011-01-01
Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.
Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data
Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.
2006-01-01
The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by striation patterns multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to 2-D analysis because it did not account for lateral heat conduction in the model.
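The "standard one-dimensional semi-infinite technique" mentioned above is commonly implemented with the Cook-Felderman discretization, which converts a measured surface-temperature history into a heat-flux history. A self-contained sketch (material properties in the example are assumed steel-like values, not data from the paper):

```python
from math import sqrt, pi

def cook_felderman_heat_flux(times_s, temps_K, rho, c, k):
    """Surface heat flux q(t_n) in W/m^2 for a 1-D semi-infinite solid,
    from a sampled surface temperature trace, via the Cook-Felderman
    discretization of Duhamel's integral."""
    n = len(times_s) - 1
    coeff = 2.0 * sqrt(rho * c * k / pi)
    total = 0.0
    for i in range(1, n + 1):
        dT = temps_K[i] - temps_K[i - 1]
        denom = (sqrt(times_s[n] - times_s[i - 1])
                 + sqrt(times_s[n] - times_s[i]))
        total += dT / denom
    return coeff * total
```

A useful check: for a linear temperature ramp T = a*t, the sum telescopes and the discrete result reproduces the analytic flux 2*sqrt(ρck/π)*a*sqrt(t) exactly.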
A systematic calculation of muon capture rates in the number projected QRPA
Santos, Danilo Sande; Samana, Arturo Rodolfo; Dimarco, Alejandro Javier [Universidade Estadual de Santa Cruz (UESC), Itabuna, BA (Brazil); Krmpotic, Francisco [Universidad Nacional de La Plata (UNLP), Buenos Aires (Argentina)
2011-07-01
Full text: The pairing correlations at the level of the one-body transition matrix elements were introduced ad hoc by Zinner et al. in the evaluation of the total muon capture rates in a large number of nuclei with 6 < Z < 94, employing the random phase approximation (RPA). The quasiparticle RPA (QRPA) formalism is a fully self-consistent procedure that describes the short-range correlations, such as pairing, together with the long-range correlations handled by the RPA. In this way, the relativistic QRPA (RQRPA) was applied in the calculation of total muon capture rates on a large set of nuclei from {sup 12}C to {sup 244}Pu, for which experimental values are available. Moreover, it was shown that the conservation of the number of particles plays an important role in weak interaction processes in nuclei with N {approx_equal} Z. This issue was not taken into account in the previous QRPA calculations. We show that the projection procedure is important for the muon capture rate and the neutrino-nucleus cross section in {sup 56}Fe. Therefore, in the present work we perform a systematic study of the muon capture rates in nuclei with masses 12 {<=} A {<=} 56 within the projected QRPA (PQRPA), because it is the only RPA model that treats the Pauli principle correctly. (author)
Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao
2017-05-01
To describe the complicated nonlinear process of the fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on the replica fatigue short crack test with nine smooth funnel-shaped specimens and the observation of the replica films according to the effective short fatigue cracks principle. Due to the fast decay and the nonlinear approximation ability of wavelet analysis, the self-learning ability of neural network, and the macroscopic searching and global optimization of genetic algorithm, the genetic wavelet neural network can reflect the implicit complex nonlinear relationship when considering multi-influencing factors synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared by the Genetic Wavelet Neural Network. The simulation results show that Genetic Wavelet Neural Network is a rational and available method for studying the evolution behavior of fatigue short crack propagation rate. Meanwhile, a traditional data fitting method for a short crack growth model is also utilized for fitting the test data. It is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects by these two methods is interpreted.
Suess, Christian J; Hirst, Jonathan D; Besley, Nicholas A
2017-04-01
The development of optical multidimensional spectroscopic techniques has opened up new possibilities for the study of biological processes. Recently, ultrafast two-dimensional ultraviolet spectroscopy experiments have determined the rates of tryptophan → heme electron transfer and excitation energy transfer for the two tryptophan residues in myoglobin (Consani et al., Science, 2013, 339, 1586). Here, we show that accurate prediction of these rates can be achieved using Marcus theory in conjunction with time-dependent density functional theory. Key intermediate residues between the donor and acceptor are identified, and in particular the residues Val68 and Ile75 play a critical role in calculations of the electron coupling matrix elements. Our calculations demonstrate how small changes in structure can have a large effect on the rates, and show that the different rates of electron transfer are dictated by the distance between the heme and tryptophan residues, while for excitation energy transfer the orientation of the tryptophan residues relative to the heme is important. © 2017 The Authors Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
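The Marcus-theory rate used above has a simple closed form once the electronic coupling, reorganization energy, and driving force are known (in the paper these come from time-dependent density functional theory). A sketch of the nonadiabatic Marcus expression; the parameter values in the example are hypothetical, not the myoglobin values from the paper:

```python
from math import exp, pi, sqrt

HBAR_EV_S = 6.582119569e-16   # reduced Planck constant, eV*s
KB_EV_K = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(coupling_eV, reorg_eV, dG_eV, temp_K=300.0):
    """Nonadiabatic Marcus electron-transfer rate (1/s):
    k = (2*pi/hbar) * |H|^2 * (4*pi*lambda*kT)^(-1/2)
        * exp(-(dG + lambda)^2 / (4*lambda*kT))."""
    kT = KB_EV_K * temp_K
    prefactor = (2.0 * pi / HBAR_EV_S) * coupling_eV**2
    franck_condon = (exp(-(dG_eV + reorg_eV)**2 / (4.0 * reorg_eV * kT))
                     / sqrt(4.0 * pi * reorg_eV * kT))
    return prefactor * franck_condon
```

The strong sensitivity of the rate to small structural changes noted in the abstract enters through the |H|² factor, which decays roughly exponentially with donor-acceptor distance.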
Patient-specific dose calculation methods for high-dose-rate iridium-192 brachytherapy
Poon, Emily S.
In high-dose-rate 192Ir brachytherapy, the radiation dose received by the patient is calculated according to the AAPM Task Group 43 (TG-43) formalism. This table-based dose superposition method uses dosimetry parameters derived with the radioactive 192Ir source centered in a water phantom. It neglects the dose perturbations caused by inhomogeneities, such as the patient anatomy, applicators, shielding, and radiographic contrast solution. In this work, we evaluated the dosimetric characteristics of a shielded rectal applicator with an endocavitary balloon injected with contrast solution. The dose distributions around this applicator were calculated by the GEANT4 Monte Carlo (MC) code and measured by ionization chamber and GAFCHROMIC EBT film. A patient-specific dose calculation study was then carried out for 40 rectal treatment plans. The PTRAN_CT MC code was used to calculate the dose based on computed tomography (CT) images. This study involved the development of BrachyGUI, an integrated treatment planning tool that can process DICOM-RT data and create PTRAN_CT input initialization files. BrachyGUI also comes with dose calculation and evaluation capabilities. We proposed a novel scatter correction method to account for the reduction in backscatter radiation near tissue-air interfaces. The first step requires calculating the doses contributed by primary and scattered photons separately, assuming a full scatter environment. The scatter dose in the patient is subsequently adjusted using a factor derived by MC calculations, which depends on the distances between the point of interest, the 192Ir source, and the body contour. The method was validated for multicatheter breast brachytherapy, in which the target and skin doses for 18 patient plans agreed with PTRAN_CT calculations to better than 1%. Finally, we developed a CT-based analytical dose calculation method. It corrects for photon attenuation and scatter based upon the radiological paths determined by ray tracing.
Calculation of the Distribution Rule of Equivalent Strain Rate near Explosive Welding Interface
李晓杰; 闫鸿浩; 李瑞勇; 王金相
2004-01-01
The objectives of this study were to analyze the distribution of the equivalent strain rate near the stagnation point and to probe the effect of the collision angle on the strain rate. An ideal-fluid model of symmetric collision was used. The calculations showed that, for a given material geometry and explosive velocity, the equivalent strain rate and the collision half-angle are closely related; the equivalent strain has a large gradient within a few jet thicknesses of the stagnation point; and the points of maximal strain lie along a straight line away from the stagnation point but along a curve near it. For different collision angles the results can be fitted with an exponential curve, which can therefore be regarded as the characteristic curve in explosive welding.
Small groups, large profits: Calculating interest rates in community-managed microfinance
Rasmussen, Ole Dahl
2012-01-01
Savings groups are a widely used strategy for women's economic resilience - over 80% of members worldwide are women, and in the case described here, 72.5%. In these savings groups it is common to see the interest rate on savings reported as "20-30% annually". Using panel data from 204 groups in Malawi, I show that the right figure is likely to be at least twice this figure. For these groups, the annual return is 62%. The difference comes from the sector-wide application of a non-standard interest rate calculation and unrealistic assumptions about the savings profile in the groups. As a result, it is impossible to compare returns in savings groups with returns elsewhere. Moreover, the interest on savings is incomparable to the interest rate on loans. I argue for the use of a standardized comparable metric and suggest easy ways to implement it. Development of new tools and standards along these lines...
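The abstract does not spell out the two calculations, but the mechanism it describes can be illustrated with a sketch: the sector convention divides interest by the end-of-cycle savings, while members actually have, on average, only about half that amount outstanding during the cycle. The group figures below are hypothetical, chosen only to show the effect:

```python
def naive_rate(interest, balances):
    """Sector-style figure: interest divided by end-of-cycle savings."""
    return interest / balances[-1]

def return_on_average_balance(interest, balances):
    """Interest divided by the average balance actually outstanding."""
    avg = sum(balances) / len(balances)
    return interest / avg

# Hypothetical group: 12 equal monthly deposits of 10, and 13 in
# interest distributed at the end of the cycle.
balances = [10 * m for m in range(1, 13)]   # 10, 20, ..., 120
interest = 13.0
```

Here the naive figure is about 10.8% while the return on the average balance is 20.0%, close to double, which mirrors the "at least twice this figure" claim.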
Fischer, Herbert Felix; Schirmer, Nicole; Tritt, Karin; Klapp, Burghard F; Fliege, Herbert
2011-03-01
An assessment of the retest-reliability and sensitivity to change of the ICD-10-Symptom-Rating (ISR) is provided. The ISR was filled out repeatedly by a non-clinical sample as well as different samples of psychosomatic patients. Between the two measurements, either no treatment or an integrated psychosomatic treatment took place. During the treatment-free phase a high degree of stability of the test scores was expected, whereas a significant improvement of test scores was expected for the respective scales over the treatment phase. The retest-reliability for the individual scales ranges from 0.70 to 0.94. Between admission to a psychosomatic treatment and discharge, significant differences were found for all scales. The retest-reliability showed satisfactory results comparable to similar, symptom-oriented instruments. Furthermore, the instrument reproduces symptomatic changes consistently and is - from our point of view - suitable for the assessment of change.
Calculating inspector probability of detection using performance demonstration program pass rates
Cumblidge, Stephen; D'Agostino, Amy
2016-02-01
The United States Nuclear Regulatory Commission (NRC) staff has been working since the 1970s to ensure that nondestructive testing performed on nuclear power plants in the United States will provide reasonable assurance of structural integrity of the nuclear power plant components. One tool used by the NRC has been the development and implementation of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code Section XI Appendix VIII[1] (Appendix VIII) blind testing requirements for ultrasonic procedures, equipment, and personnel. Some concerns have been raised over the years by the relatively low pass rates for the Appendix VIII qualification testing. The NRC staff has applied statistical tools and simulations to determine the expected probability of detection (POD) for ultrasonic examinations under ideal conditions based on the pass rates for the Appendix VIII qualification tests for the ultrasonic testing personnel. This work was primarily performed to answer three questions. First, given a test design and pass rate, what is the expected overall POD for inspectors? Second, can we calculate the probability of detection for flaws of different sizes using this information? Finally, if a previously qualified inspector fails a requalification test, does this call their earlier inspections into question? The calculations have shown that one can expect good performance from inspectors who have passed Appendix VIII testing in a laboratory-like environment, and the requalification pass rates show that the inspectors have maintained their skills between tests. While these calculations showed that the PODs for the ultrasonic inspections are very good under laboratory conditions, the field inspections are conducted in a very different environment. The NRC staff has initiated a project to systematically analyze the human factors differences between qualification testing and field examinations. This work will be used to evaluate and prioritize
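The first question above (test design plus pass rate implies a POD) has a natural binomial reading: if a candidate must detect at least k of n flaws to pass, the per-flaw POD can be backed out from the observed pass rate by inverting the binomial tail. The test design numbers below (10 flaws, pass at 8) are assumptions for illustration, not the actual Appendix VIII design:

```python
from math import comb

def pass_probability(pod, n_flaws, min_detected):
    """P(pass) for a candidate with per-flaw detection probability pod,
    who must detect at least min_detected of n_flaws flaws."""
    return sum(comb(n_flaws, i) * pod**i * (1.0 - pod)**(n_flaws - i)
               for i in range(min_detected, n_flaws + 1))

def implied_pod(pass_rate, n_flaws, min_detected, tol=1e-9):
    """Invert pass_probability by bisection (it is monotone in pod)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if pass_probability(mid, n_flaws, min_detected) < pass_rate:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

This also shows why a modest pass rate can coexist with a good POD: requiring 8 of 10 detections means even strong inspectors fail a noticeable fraction of the time.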
Adam, J.; Tater, M.; Truhlík, E.; Epelbaum, E.; Machleidt, R.; Ricci, P.
2012-03-01
The doublet capture rate Λ1 / 2 of the negative muon capture in deuterium is calculated employing the nuclear wave functions generated from accurate nucleon-nucleon (NN) potentials constructed at next-to-next-to-next-to-leading order of heavy-baryon chiral perturbation theory and the weak meson exchange current operator derived within the same formalism. All but one of the low-energy constants that enter the calculation were fixed from pion-nucleon and nucleon-nucleon scattering data. The low-energy constant dˆR (cD), which cannot be determined from the purely two-nucleon data, was extracted recently from the triton β-decay and the binding energies of the three-nucleon systems. The calculated values of Λ1 / 2 show a rather large spread for the used values of the dˆR. Precise measurement of Λ1 / 2 in the future will not only help to constrain the value of dˆR, but also provide a highly nontrivial test of the nuclear chiral EFT framework. Besides, the precise knowledge of the constant dˆR will allow for consistent calculations of other two-nucleon weak processes, such as proton-proton fusion and solar neutrino scattering on deuterons, which are important for astrophysics.
An Analytic Approach for Calculating Frame Erasure Rate in Cellular GSM Networks
Ahmed M. Alaa
2013-11-01
Full Text Available The Quality of Service (QoS of a GSM system is quantified in terms of Bit Error Rate (BER and Frame Erasure Rate (FER observed by the user. The problem of obtaining analytical expressions for BER and FER in a fading channel with multiple cochannel interferers (CCI is an extremely complex mathematical problem. The reason for this complexity is that the involvement of several GSM physical layer modules is required to obtain an expression for the probability of bit error. Besides, one needs to obtain the statistical properties of faded cochannel interferers in order to obtain the raw BER of GMSK modulation. Thus, error rate metrics are usually obtained by simulating the GSM physical layer rather than treating the problem analytically. A reliable interface between system and link level models can be obtained by evaluating the BER and FER in terms of the Signal-to-Interference Ratio (SIR analytically, instead of the pre-defined statistical mapping data usually used in literature. In this work, bounds on the uplink BER and FER are obtained for the GSM physical layer assuming a CCI limited system where both the desired and interference signals are subjected to Rayleigh fading. The analysis considers GMSK modulation, convolutional coding and Frequency Hopping.
How reliable is estimation of glomerular filtration rate at diagnosis of type 2 diabetes?
Chudleigh, Richard A; Dunseath, Gareth; Evans, William; Harvey, John N; Evans, Philip; Ollerton, Richard; Owens, David R
2007-02-01
The Cockcroft-Gault (CG) and Modification of Diet in Renal Disease (MDRD) equations have previously been recommended to estimate glomerular filtration rate (GFR). We compared both estimates with true GFR, measured by the isotopic 51Cr-EDTA method, in newly diagnosed, treatment-naïve subjects with type 2 diabetes. A total of 292 mainly normoalbuminuric (241 of 292) subjects were recruited. Subjects were classified as having mild renal impairment (group 1, GFR <90 ml/min per 1.73 m²) or normal renal function (group 2, GFR ≥90 ml/min per 1.73 m²). Estimated GFR (eGFR) was calculated by the CG and MDRD equations. Blood samples drawn at 44, 120, 180, and 240 min after administration of 1 MBq of 51Cr-EDTA were used to measure isotopic GFR (iGFR). For subjects in group 1, mean (±SD) iGFR was 83.8 ± 4.3 ml/min per 1.73 m². eGFR was 78.0 ± 16.5 or 73.7 ± 12.0 ml/min per 1.73 m² using the CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -11.1 to -0.6 using CG and -14.4 to -7.0 using MDRD. Ninety-five percent limits of agreement (mean bias ± 2 SD) were -37.2 to 25.6 and -33.1 to 11.7, respectively. In group 2, iGFR was 119.4 ± 20.3 ml/min per 1.73 m². eGFR was 104.4 ± 26.3 or 92.3 ± 18.7 ml/min per 1.73 m² using the CG and MDRD equations, respectively. Ninety-five percent CIs for method bias were -17.4 to -12.5 using CG and -29.1 to -25.1 using MDRD. Ninety-five percent limits of agreement were -54.4 to 24.4 and -59.5 to 5.3, respectively. In newly diagnosed type 2 diabetic patients, particularly those with a GFR ≥90 ml/min per 1.73 m², both the CG and MDRD equations significantly underestimate iGFR. This highlights a limitation in the use of eGFR in the majority of diabetic subjects outside the setting of chronic kidney disease.
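The two estimating equations compared above are standard closed-form formulas. A sketch of both (using the original four-variable MDRD with the 186 coefficient; a re-expressed 175 coefficient exists for standardized creatinine assays, and the study does not say in the abstract which it used; note CG yields ml/min, not BSA-normalized ml/min per 1.73 m²):

```python
def cockcroft_gault(age_y, weight_kg, scr_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance, ml/min (not BSA-normalized)."""
    crcl = (140.0 - age_y) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_egfr(age_y, scr_mg_dl, female=False, black=False):
    """Four-variable MDRD eGFR, ml/min per 1.73 m^2 (186 coefficient)."""
    egfr = 186.0 * scr_mg_dl**-1.154 * age_y**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.210
    return egfr
```

For example, a 60-year-old 72 kg man with serum creatinine 1.0 mg/dl gets a CG clearance of 80 ml/min and an MDRD eGFR of roughly 81 ml/min per 1.73 m².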
Ravesloot, M J L; de Vries, N
2011-01-01
Various treatment methods exist to treat obstructive sleep apnea (OSA); continuous positive airway pressure (CPAP) is considered the gold standard. It is however a clinical reality that the use of CPAP is often cumbersome. CPAP treatment is considered compliant when used ≥ 4 h per night as an average over all nights observed. Surgery, on the other hand, is regarded as successful when the apnea hypopnea index (AHI) drops at least 50% and is reduced below 20/h postoperatively in patients whose preoperative AHI was > 20/h. The effectiveness of CPAP compliance criteria can be questioned, just as the effectiveness of surgical success criteria has often been questioned. The aim of the study was to compare non-optimal use of optimal therapy (CPAP) with the continuous effect (100%) of often non-optimal therapy (surgery). Using mathematical function formulas, the effect on the AHI of various treatment modalities and their respective compliance and success criteria was calculated. The more severe the AHI, the greater the percentage of total sleep time (TST) for which CPAP must be used to significantly reduce the AHI. Patients with moderate OSA reduce the AHI by 33.3% to 48.3% when using CPAP 4 h/night (AHI 0-5, respectively). The required nightly percentage of use rises as one reduces the AHI target to < 5. CPAP must be used for 66.67% to 83.33% of the night to reduce the AHI below 5 (assuming an AHI of 0 while using CPAP). Using a mean AHI in CPAP therapy is more realistic than using arbitrary compliance rates, which, in fact, hide insufficient reductions in AHI.
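The "mean AHI" argument above is a time-weighted average over the night, and the required-use figures follow directly from it. A sketch of that arithmetic (the exact formulas of the paper are not given in the abstract; this assumes a simple linear time weighting with AHI 0 while on CPAP):

```python
def mean_ahi(baseline_ahi, ahi_on_cpap, fraction_on_cpap):
    """Time-weighted mean AHI over total sleep time."""
    return (fraction_on_cpap * ahi_on_cpap
            + (1.0 - fraction_on_cpap) * baseline_ahi)

def required_fraction(baseline_ahi, ahi_on_cpap, target_mean_ahi):
    """Fraction of sleep time CPAP must be worn to reach a target mean AHI."""
    return (baseline_ahi - target_mean_ahi) / (baseline_ahi - ahi_on_cpap)
```

With moderate OSA (baseline AHI 15 to 30) and a target mean AHI below 5, the required fraction is 2/3 to 5/6 of the night, matching the 66.67% to 83.33% range quoted above.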
New reaction rates for improved primordial D /H calculation and the cosmic evolution of deuterium
Coc, Alain; Petitjean, Patrick; Uzan, Jean-Philippe; Vangioni, Elisabeth; Descouvemont, Pierre; Iliadis, Christian; Longland, Richard
2015-12-01
Primordial or big bang nucleosynthesis (BBN) is one of the three historically strong evidences for the big bang model. Standard BBN is now a parameter-free theory, since the baryonic density of the Universe has been deduced with an unprecedented precision from observations of the anisotropies of the cosmic microwave background radiation. There is a good agreement between the primordial abundances of 4He, D, 3He, and 7Li deduced from observations and from primordial nucleosynthesis calculations. However, the calculated 7Li abundance is significantly higher than the one deduced from spectroscopic observations and remains an open problem. In addition, recent deuterium observations have drastically reduced the uncertainty on D/H, to reach a value of 1.6%. It needs to be matched by BBN predictions whose precision is now limited by thermonuclear reaction rate uncertainties. This is especially important as many attempts to reconcile Li observations with models lead to an increased D prediction. Here, we reevaluate the d(p,γ)3He, d(d,n)3He, and d(d,p)3H reaction rates that govern deuterium destruction, incorporating new experimental data and carefully accounting for systematic uncertainties. Contrary to previous evaluations, we use theoretical ab initio models for the energy dependence of the S factors. As a result, these rates increase at BBN temperatures, leading to a reduced value of D/H = (2.45 ± 0.10) × 10^-5 (2σ), in agreement with observations.
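Why the S-factor energy dependence matters at "BBN temperatures" can be made concrete with the Gamow peak, the energy window where charged-particle reactions are most effective. A sketch using the standard formula E0 = (E_G (kT)² / 4)^(1/3), with the Gamow energy E_G = 2 μc² (π α Z1 Z2)²; the d+p reduced mass value below is an approximation for illustration:

```python
from math import pi

ALPHA = 1.0 / 137.035999   # fine-structure constant

def gamow_peak_keV(mu_c2_keV, z1, z2, kT_keV):
    """Most effective energy (Gamow peak) for a charged-particle
    reaction at temperature kT; all energies in keV."""
    e_gamow = 2.0 * mu_c2_keV * (pi * ALPHA * z1 * z2) ** 2
    return (e_gamow * kT_keV**2 / 4.0) ** (1.0 / 3.0)

# d + p: reduced mass mu*c^2 = m_p*m_d/(m_p + m_d) ~ 625,600 keV.
# A BBN temperature of T9 ~ 1 corresponds to kT ~ 86.17 keV.
e0 = gamow_peak_keV(625600.0, 1, 1, 86.17)
```

This lands near 100 keV, i.e. the S factors must be known at energies of order 100 keV, which is exactly where the evaluation above applies its ab initio energy dependence.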
Goldie, John; Schwartz, Lisa; McConnachie, Alex; Jolly, Brian; Morrison, Jillian
2004-12-01
Although ethics is an important part of modern curricula, measures of students' ethical disposition have not been easy to develop. A potential method is to assess students' written justifications for selecting one option from a preset range of answers to vignettes and compare these justifications with predetermined 'expert' consensus. We describe the development of and reliability estimation for such a method -- the Ethics in Health Care Instrument (EHCI). Seven raters classified the responses of ten subjects to nine vignettes, on two occasions. The first stage of analysis involved raters' judging how consistent with consensus were subjects' justifications using generalizability theory, and then rating consensus responses on the action justification and values recognition hierarchies. The inter-rater reliability was 0.39 for the initial rating. Differential performance on questions was identified as the largest source of variance. Hence reliability was investigated also for students' total scores over the nine consensus vignettes. Rater effects were the largest source of variance identified. Examination of rater performance showed lack of rater consistency. D-studies were performed which showed acceptable reliability could nevertheless be obtained using four raters per EHCI. This study suggests that the EHCI has potential as an assessment instrument although further testing is required of all components of the methodology.
Calculation of the Actual Failure Rate of a DSP Board Using the FMEDA
Keum, Jong Yong; Suh, Yong Suk; Jang, Gwi Sook; Park, Je Yun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2008-10-15
Most components have multiple failure modes, and these failure modes are more or less important depending on how they are used within a particular design. Even when a component is part of a safety function, it can have a particular failure mode that has no effect on that safety function. This failure mode is called 'no effect'. This paper presents a method to calculate a DSP (Digital Signal Processor) board failure rate and failure mode data using an FMEDA (Failure Modes, Effects and Diagnostic Analysis).
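The FMEDA bookkeeping described above can be sketched as an aggregation over failure modes: safe and dangerous rates are summed, dangerous rates are split by diagnostic coverage into detected and undetected parts, and 'no effect' modes are excluded from the safety-related totals. The mode data in the example are hypothetical, and the safe-failure-fraction output is a common FMEDA summary, not necessarily the exact quantity the paper reports:

```python
def fmeda_failure_rates(modes):
    """Aggregate per-mode FMEDA data.

    modes: list of (rate_FIT, category, diagnostic_coverage), with
    category one of 'safe', 'dangerous', 'no_effect'.
    Returns (total_rate, dangerous_undetected_rate, safe_failure_fraction).
    """
    lam_s = lam_dd = lam_du = 0.0
    for rate, category, dc in modes:
        if category == 'no_effect':
            continue                      # excluded, per the abstract
        if category == 'dangerous':
            lam_dd += rate * dc           # detected by diagnostics
            lam_du += rate * (1.0 - dc)   # undetected
        else:
            lam_s += rate
    total = lam_s + lam_dd + lam_du
    sff = (lam_s + lam_dd) / total
    return total, lam_du, sff
```

For modes of 100 FIT safe, 50 FIT dangerous with 90% coverage, and 20 FIT no-effect, the safety-related total is 150 FIT with only 5 FIT dangerous-undetected.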
Calculation of in-target production rates for radioactive isotope beam production at TRIUMF
Garcia, Fatima; Andreoiu, Corina; Kunz, Peter; Laxdal, Aurelia
2016-09-01
Rare Isotope Beam (RIB) facilities around the world, such as TRIUMF, work towards the development of new target materials to generate exotic species. Access to these rare radioactive isotopes is key for applications in nuclear medicine, astrophysics and fundamental nuclear science. To better understand production from these and other materials, we have built a computer simulation of the RIB targets used at the TRIUMF Isotope Separation and ACceleration (ISAC) facility, to support new target material development. Built at Simon Fraser University, the simulation runs in the GEANT4 nuclear transport toolkit, and can simulate the production rate of isotopes from a given set of beam and target characteristics. The simulation models the bombardment of a production target by an incident high-energy proton beam and calculates in-target isotope production rates for different nuclear reactions. Results from the simulation will be presented, along with an evaluation of various nuclear reaction models and experimentally determined RIB yields at the ISAC Yield Station.
Window Energy Rating System and Calculation of Energy Performance of Windows
Laustsen, Jacob Birck; Svendsen, Svend
The goal of reducing the energy consumption in buildings is the background for the introduction of an energy rating system of fenestration products in Denmark. The energy rating system requires that producers declare, among other things, the heat loss coefficient, U, and the total solar energy transmittance, g, for the panes and whole windows. The energy balance of a window can be described by the net energy gain, which is the solar gain minus the heat loss during the heating season. The net energy gain is a suitable quantity for evaluating the energy performance of windows in a simple and direct way, and for reference conditions it is used for classification of panes. In actual situations the net energy gain is dependent on window orientation, climate and shadows. Therefore a simple calculation program is made that takes these circumstances into account. The program is combined with a database of Danish…
Fine-Grid Calculations for Stellar Electron and Positron Capture Rates on Fe-Isotopes
Nabi, Jameel-Un
2011-01-01
The acquisition of precise and reliable nuclear data is a prerequisite to success for stellar evolution and nucleosynthesis studies. Core-collapse simulators find it challenging to generate an explosion from the collapse of the core of massive stars. It is believed that a better understanding of the microphysics of core-collapse can lead to successful results. The weak interaction processes are able to trigger the collapse and control the lepton-to-baryon ratio ($Y_{e}$) of the core material. It is suggested that the temporal variation of $Y_{e}$ within the core of a massive star has a pivotal role to play in stellar evolution, and a fine-tuning of this parameter at various stages of presupernova evolution is the key to generating an explosion. During the presupernova evolution of massive stars, isotopes of iron, mainly $^{54,55,56}$Fe, are considered to be key players in controlling $Y_{e}$ via electron capture on these nuclides. Recently an improved microscopic calculation of weak interaction mediated...
When can Electrochemical Techniques give Reliable Corrosion Rates on Carbon Steel in Sulfide Media?
Hilbert, Lisbeth Rischel; Hemmingsen, Tor; Nielsen, Lars Vendelbo
2005-01-01
Effects of film formation on carbon steel in hydrogen sulfide media may corrupt corrosion rate monitoring by electrochemical techniques. Electrochemical data from hydrogen sulfide solutions, biological sulfide media and natural sulfide-containing geothermal water have been collected, and the process … in combination with ferrous sulfide corrosion products cover the steel surface. Corrosion rates can be overestimated by a factor of 10 to 100 with electrochemical techniques, both by linear polarization resistance (LPR) and electrochemical impedance spectroscopy (EIS). Oxygen entering the system accelerates corrosion rates, but this effect may not be detected if rates are already overestimated. It is concluded that electrochemical techniques can be used for corrosion rate monitoring in some hydrogen sulfide media, but care must be taken when choosing the scan rates, and it is important to realize when direct…
Toyoshima, Kuniyoshi; Fujii, Yutaka; Mitsui, Nobuyuki; Kako, Yuki; Asakura, Satoshi; Martinez-Aran, Anabel; Vieta, Eduard; Kusumi, Ichiro
2017-08-01
In Japan, there are currently no reliable rating scales for the evaluation of subjective cognitive impairment in patients with bipolar disorder. We studied the relationship between the Japanese version of the Cognitive Complaints in Bipolar Disorder Rating Assessment (COBRA) and objective cognitive assessments in patients with bipolar disorder. We further assessed the reliability and validity of the COBRA. Forty-one patients, aged 16-64, in a remission period of bipolar disorder were recruited from Hokkaido University Hospital in Sapporo, Japan. The COBRA (Japanese version) and Frankfurt Complaint Questionnaire (FCQ), the gold standard in subjective cognitive assessment, were administered. A battery of neuropsychological tests was employed to measure objective cognitive impairment. Correlations among the COBRA, FCQ, and neuropsychological tests were determined using Spearman's correlation coefficient. The Japanese version of the COBRA had high internal consistency, good retest reliability, and concurrent validity-as indicated by a strong correlation with the FCQ. A significant correlation was also observed between the COBRA and objective cognitive measurements of processing speed. These findings are the first to demonstrate that the Japanese version of the COBRA may be clinically useful as a subjective cognitive impairment rating scale in Japanese patients with bipolar disorder. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Comfort, Paul
2013-05-01
Although there has been extensive research regarding the power clean, its application to sports performance, and its use as a measure for assessing changes in performance, no research has determined the reliability of assessing the kinetics of the power clean across testing sessions. The aim of this study was to determine the within- and between-session reliability of kinetic variables during the power clean. Twelve professional rugby league players (age 24.5 ± 2.1 years; height 182.86 ± 6.97 cm; body mass 92.85 ± 5.67 kg; 1 repetition maximum [1RM] power clean 102.50 ± 10.35 kg) performed 3 sets of 3 repetitions of power cleans at 70% of their 1RM, while standing on a force plate, to determine within-session reliability; this was repeated on 3 separate occasions to determine between-session reliability. Intraclass correlation coefficients revealed high reliability within (r ≥ 0.969) and between sessions (r ≥ 0.988). Repeated-measures analysis of variance showed no significant difference (p > 0.05) in peak vertical ground reaction force, rate of force development, and peak power between sessions, with small standard errors of measurement and smallest detectable differences for each kinetic variable (3.13 and 8.68 N; 84.39 and 233.93 N·s(-1); 24.54 and 68.01 W, respectively). Therefore, to identify a meaningful change in performance, the strength and conditioning coach should look for a change in peak force ≥8.68 N, rate of force development ≥233.93 N·s(-1), and peak power ≥68.01 W to signify an adaptive response to training that is greater than the variance between sessions, in trained athletes proficient at performing the power clean.
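The paired values quoted above are consistent with the usual relation between the standard error of measurement (SEM) and the smallest detectable difference, SDD = 1.96 · √2 · SEM. A quick check, using the SEM values quoted in the abstract:

```python
import math

# Smallest detectable difference from the standard error of measurement:
# SDD = 1.96 * sqrt(2) * SEM. SEM values are those quoted in the abstract.
def sdd(sem: float) -> float:
    return 1.96 * math.sqrt(2) * sem

for name, sem in [("peak force (N)", 3.13),
                  ("rate of force development (N/s)", 84.39),
                  ("peak power (W)", 24.54)]:
    print(f"{name}: SEM = {sem}, SDD = {sdd(sem):.2f}")
```

Running this reproduces the three SDD values in the abstract (8.68 N, 233.93, 68.01 W) to within rounding.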
Lin, Changyu; Djordjevic, Ivan B; Zou, Ding
2015-06-29
We propose a method to estimate the lower bound of achievable information rates (AIRs) of high-speed orthogonal frequency-division multiplexing (OFDM) in spatial division multiplexing (SDM) optical long-haul transmission systems. The estimation of the AIR is based on the forward recursion of the multidimensional super-symbol efficient sliding-window Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm. We consider most of the degradations of fiber links, including nonlinear effects in few-mode fiber (FMF). This method does not treat SDM as a simple multiplexer of independent data streams, but provides a super-symbol version of the AIR calculation over spatial channels. This super-symbol AIR calculation algorithm can, in principle, be used for an arbitrary multiple-input multiple-output (MIMO) SDM system with channel memory taken into account. We illustrate the method by performing Monte Carlo simulations in a complete FMF model. Both the channel model and the algorithm for calculating the AIRs are described in detail. We also compare the AIR results for QPSK/16QAM in both single-mode fiber (SMF)- and FMF-based optical OFDM transmission.
Bing Li
2015-01-01
Full Text Available To improve the process of determining expert weights in evaluation, this paper proposes a new expert weight calculation method. First, an electric propulsion simulation evaluation system is established, and the AHP method is used to calculate the initial index weights. Then D-S evidence theory is used to fuse the experts' evaluation information; combined with the weight vector, an objective function for the expert weights is constructed and solved with a genetic algorithm. From the resulting expert weight vector, the final weight vector is calculated. This method not only makes full use of the experts' information and analyzes its similarity effectively, but also calculates each expert's weight objectively. At the same time, subjective factors in the evaluation are reduced by the adoption of this new method.
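The first step above, obtaining initial index weights with AHP, can be sketched as follows. This uses the common geometric-mean (row) approximation rather than the principal eigenvector, and the 3×3 pairwise comparison matrix is a made-up example, not data from the paper.

```python
from math import prod

# AHP initial weights via the geometric-mean approximation:
# each criterion's weight is the geometric mean of its comparison row,
# normalized so the weights sum to 1.
def ahp_weights(matrix):
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

# Illustrative pairwise comparison matrix (criterion i vs criterion j):
pairwise = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]
w = ahp_weights(pairwise)
print([round(x, 3) for x in w])  # weights sum to 1, largest first here
```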
Jingyu Zhang
2016-01-01
Full Text Available In water-cooled reactors, the dominant radioactive source term under normal operation is activated corrosion products (ACPs), which have an important impact on reactor inspection and maintenance. A three-node transport model of ACPs was introduced into the new version of the ACPs source term code CATE in this paper, which makes CATE capable of theoretically simulating the variation and distribution of ACPs in a water-cooled reactor and suitable for more operating conditions. For code testing, the MIT PWR coolant chemistry loop was simulated, and the calculation results from CATE are close to the experimental results from MIT, which indicates that CATE is applicable and credible for ACP analysis of water-cooled reactors. Then ACPs in the blanket cooling loop of the water-cooled fusion reactor ITER, now under construction, were analyzed using CATE, and the results showed that the major contributors are short-lived nuclides, especially Mn-56. Finally, a point kernel integration code, ARShield, was coupled with CATE, and the dose rate around the ITER blanket cooling loop was calculated. Results showed that after shutting down the reactor for only 8 days, the dose rate decreased by nearly one order of magnitude, caused by the rapid decay of the short-lived ACPs.
A simple parameterization of ozone infrared absorption for atmospheric heating rate calculations
Rosenfield, Joan E.
1991-01-01
A simple parameterization of ozone absorption in the 9.6-micron region which is suitable for two- and three-dimensional stratospheric and tropospheric models is presented. The band is divided into two parts, a band center region and a band wing region, grouping together regions for which the temperature dependence of absorption is similar. Each of the two regions is modeled with a function having the form of the Goody random model, with pressure and temperature dependent band parameters chosen by empirically fitting line-by-line equivalent widths for pressures between 0.25 and 1000 mbar and ozone absorber amounts between 1.0 × 10^-7 and 1.0 cm atm. The model has been applied to calculations of atmospheric heating rates using an absorber amount weighted mean pressure and temperature along the inhomogeneous paths necessary for flux computations. In the stratosphere, maximum errors in the heating rates relative to line-by-line calculations are 0.1 K/d, or 5 percent of the peak cooling at the stratopause. In the troposphere the errors are at most 0.005 K/d.
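For reference, one common form of the Goody random band model is sketched below. This is an assumption about the functional form only; the paper's fitted band parameters for the two 9.6-micron sub-regions are not reproduced here, and the input values are illustrative.

```python
import math

# Goody random model mean band transmittance (one common form):
#   T = exp( -k*u / sqrt(1 + k*u/beta) )
# where k = S/d (mean line strength over spacing), u is the absorber
# amount, and beta = pi*alpha/d combines line half-width and spacing.
def goody_transmittance(u, k, beta):
    return math.exp(-k * u / math.sqrt(1.0 + k * u / beta))

# Illustrative values only (not the paper's fitted parameters):
print(f"{goody_transmittance(u=0.01, k=50.0, beta=0.2):.4f}")
```

In the weak-line limit (beta large) this reduces to Beer's law, T = exp(-k·u), which is a quick sanity check on the implementation.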
Comparison of measured and calculated dose rates near nuclear medicine patients.
Yi, Y; Stabin, M G; McKaskle, M H; Shone, M D; Johnson, A B
2013-08-01
Widely used release criteria for patients receiving radiopharmaceuticals (NUREG-1556, Vol. 9, Rev. 1, Appendix U) are known to be overly conservative. The authors measured external exposure rates near patients treated with (131)I, (99m)Tc, and (18)F and compared the measurements to calculated values using point and line source models. The external exposure dose rates for 231, 11, and 52 patients scanned or treated with (131)I, (99m)Tc, and (18)F, respectively, were measured at 0.3 m and 1.0 m shortly after radiopharmaceutical administration. Calculated values were always higher than measured values and suggested the application of "self-shielding factors," as suggested by Siegel et al. in 2002. The self-shielding factors of point and line source models for (131)I at 1 m were 0.60 ± 0.16 and 0.73 ± 0.20, respectively. For (99m)Tc patients, the self-shielding factors for point and line source models were 0.44 ± 0.19 and 0.55 ± 0.23, and the values were 0.50 ± 0.09 and 0.60 ± 0.12, respectively, for (18)F (all FDG) patients. Treating patients as unshielded point sources of radiation is clearly inappropriate. In reality, they are volume sources, but treating their exposures using a line source model with appropriate self-shielding factors produces a more realistic, but still conservative, approach for managing patient release.
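The two geometric models compared above can be sketched as follows. The point model is Γ·A/r²; the line model integrates a uniform line of total activity A over its length. The exposure rate constant and administered activity below are illustrative assumptions, not the study's data; only the 0.60 self-shielding factor is taken from the abstract.

```python
import math

# Point-source exposure rate at distance r: X = Gamma * A / r^2.
def point_dose_rate(gamma, activity, r):
    return gamma * activity / r**2

# Uniform line source of length L, observed opposite its midpoint:
# X = (Gamma * A / (L * r)) * 2 * atan(L / (2r)).
def line_dose_rate(gamma, activity, r, length):
    return (gamma * activity / (length * r)) * 2.0 * math.atan(length / (2.0 * r))

GAMMA = 0.06   # mSv*m^2/(GBq*h); assumed order-of-magnitude constant for I-131
A = 5.0        # GBq administered (assumed)
r = 1.0        # metres

point = point_dose_rate(GAMMA, A, r)
line = line_dose_rate(GAMMA, A, r, 1.7)  # ~patient height, assumed

print(f"unshielded point model        : {point:.3f} mSv/h")
print(f"line model                    : {line:.3f} mSv/h")
# Applying the abstract's mean point-model self-shielding factor for (131)I:
print(f"point model, self-shielded 0.6: {0.60 * point:.3f} mSv/h")
```

Note that the line model already predicts a lower rate than the point model at 1 m, consistent with the abstract's larger line-model self-shielding factors.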
Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.
Srinivasan, Jayaraman; Adithya, V
2015-01-01
Electroencephalogram (EEG) signal artifacts are caused by various factors, such as electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement artifacts, and line interference. The relatively high electrical energy of cardiac activity causes EEG artifacts. In EEG signal processing the general approach is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using the wavelet transform and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compare the HR calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can be used for ECG extraction from single-channel EEG and applied in clinical diagnosis, such as stress analysis, fatigue estimation, and sleep stage classification, as part of a multi-modal system.
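Two pieces of the pipeline above are simple enough to sketch directly: the discrete Teager-Kaiser energy operator, ψ[n] = x[n]² − x[n−1]·x[n+1], and heart rate from R-R intervals. The wavelet preprocessing and the peak-detection thresholds are omitted, so this is a sketch of the operators only, not the paper's full method.

```python
# Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
# It emphasizes sharp, high-frequency transients such as R-peaks.
def teager_kaiser(x):
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# Heart rate (beats per minute) from detected R-peak times in seconds:
# HR = 60 / mean R-R interval.
def heart_rate_bpm(r_peak_times_s):
    rr = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    return 60.0 / (sum(rr) / len(rr))

print(teager_kaiser([0.0, 1.0, 0.0, -1.0]))          # energy peaks at the transient
print(round(heart_rate_bpm([0.0, 0.8, 1.6, 2.4]), 1))  # -> 75.0
```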
Ghods, P.; Isgor, O.B.; Pour-Ghaz, M. [Carleton University, Department of Civil and Environmental Engineering, Mackenzie Engineering Building, Ottawa, K1S 5B6 ON (Canada)
2007-04-15
The quantification of the active corrosion rate of steel in concrete structures through nondestructive methods is a crucial task for scheduling maintenance/repair operations and for achieving accurate service life predictions. Measuring the polarization resistance of corroding systems and using the Stern-Geary equation to calculate the corrosion current density of active steel is a widely used method for this purpose. However, these measurements are greatly influenced by environmental factors; therefore, accurate monitoring of corrosion requires integrating the instantaneous corrosion rates over time. Although advanced numerical models are helpful in research settings, they remain too computationally expensive and complex to be adopted by the general engineering community. In this paper, a practical numerical model for predicting the corrosion rate of uniformly depassivated steel in concrete is developed. The model is built on Stern's earlier finding that an optimum anode-to-cathode ratio exists for which the corrosion current on the metal surface reaches a maximum value. The developed model, which represents the corrosion rate as a function of concrete resistivity and oxygen concentration, is validated using experimental data obtained from the literature. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
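The Stern-Geary calculation mentioned above, as commonly used in linear polarization resistance monitoring, is: i_corr = B / Rp, with B = βa·βc / (2.303·(βa + βc)). The Tafel slopes and the measured polarization resistance below are illustrative assumptions, not values from the paper.

```python
# Stern-Geary constant from anodic and cathodic Tafel slopes (V/decade):
# B = (beta_a * beta_c) / (2.303 * (beta_a + beta_c)), in volts.
def stern_geary_B(beta_a, beta_c):
    return (beta_a * beta_c) / (2.303 * (beta_a + beta_c))

# Corrosion current density from polarization resistance (ohm*cm^2):
def corrosion_current_density(B, Rp):
    return B / Rp  # A/cm^2

B = stern_geary_B(0.12, 0.12)              # symmetric 120 mV/decade slopes, assumed
i_corr = corrosion_current_density(B, 5.0e3)  # Rp = 5 kohm*cm^2, assumed

print(f"B = {B * 1000:.1f} mV, i_corr = {i_corr * 1e6:.2f} uA/cm^2")
```

With symmetric 120 mV/decade slopes B comes out near the often-quoted 26 mV.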
The, Bertram; Reininga, Inge H. F.; El Moumni, Mostafa; Eygendaal, Denise
2013-01-01
Background: The modern standard of evaluating treatment results includes the use of rating systems. Elbow-specific rating systems are frequently used in studies aiming at elbow-specific pathology. However, proper validation studies seem to be relatively sparse. In addition, these scoring systems mig
Rating Performance Assessments of Students with Disabilities: A Study of Reliability and Bias
Mastergeorge, Ann M.; Martinez, Jose Felipe
2010-01-01
Inclusion of students with disabilities in district-wide and state assessments is mandated by federal regulations, and teachers sometimes play an important role in rating these students' work. In this study, trained teachers rated student proficiency in performance assessments in language arts and mathematics in third, fifth, and ninth grades. The…
ESTIMATION OF FLEXIBILITY OF AN ORGANIZATION ON THE GROUND OF THE CALCULATION OF PROFIT MARGIN RATE
Olga Gennadevna Rybakova
2016-12-01
Full Text Available The article deals with the problem of the flexibility of an organization as its ability to adapt effectively to the external environment. The authors have identified and investigated different approaches to estimating the flexibility of an organization, based on flexibility grading, calculation of a general index of flexibility, and calculation of a flexibility ranking score, and have identified the advantages and disadvantages of these approaches. A new method of estimating an organization's flexibility based on the calculation of the relative profit margin has been developed. This method is a multifunctional tool for assessing an enterprise's ability to function in the current context of a difficult and volatile economic environment. It allows negative trends in production and financial figures to be identified at an early stage, enabling the organization's leadership to take steps in advance to avert a crisis in its activity. Keeping the profit margin at the same rate during a forced contraction of output, caused by the negative impact of external factors, confirms that the organization has adapted to the external environment and is therefore flexible. An organization whose margin rate falls toward zero can be considered to have an insufficient level of flexibility; it is in the "zone of crisis", characterized by the depletion of reserve funds and the reduction of current assets. A loss-making organization is not flexible: the presence of losses is an evident sign of crisis, and the organization may go bankrupt.
A simple algebraic cancer equation: calculating how cancers may arise with normal mutation rates
Shibata Darryl
2010-01-01
Full Text Available Abstract Background The purpose of this article is to present a relatively easy to understand cancer model where transformation occurs when the first cell, among many at risk within a colon, accumulates a set of driver mutations. The analysis of this model yields a simple algebraic equation, which takes as inputs the number of stem cells, mutation and division rates, and the number of driver mutations, and makes predictions about cancer epidemiology. Methods The equation [p = 1 - (1 - (1 - (1 - u)^d)^k)^(N·m)] calculates the probability of cancer (p) and contains five parameters: the number of divisions (d), the number of stem cells (N × m), the number of critical rate-limiting pathway driver mutations (k), and the mutation rate (u). In this model progression to cancer "starts" at conception and mutations accumulate with cell division. Transformation occurs when a critical number of rate-limiting pathway mutations first accumulates within a single stem cell. Results When applied to several colorectal cancer data sets, parameter values consistent with crypt stem cell biology and normal mutation rates were able to match the increase in cancer with aging, and the mutation frequencies found in cancer genomes. The equation can help explain how cancer risks may vary with age, height, germline mutations, and aspirin use. APC mutations may shorten pathways to cancer by effectively increasing the numbers of stem cells at risk. Conclusions The equation illustrates that age-related increases in cancer frequencies may result from relatively normal division and mutation rates. Although this equation does not encompass all of the known complexity of cancer, it may be useful, especially in a teaching setting, to help illustrate relationships between small and large cancer features.
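In the teaching spirit the conclusion suggests, the equation evaluates directly. The nesting follows the model's logic: one driver mutated in one cell, all k drivers in that cell, then at least one transformed cell among N × m. The parameter values below are illustrative assumptions, not the paper's fitted values.

```python
# Evaluate p = 1 - (1 - (1 - (1 - u)^d)^k)^(N*m), building it up term by term.
def cancer_probability(u, d, k, N_times_m):
    p_mut  = 1.0 - (1.0 - u) ** d        # one specific driver mutated after d divisions
    p_cell = p_mut ** k                  # all k drivers in a single stem cell
    return 1.0 - (1.0 - p_cell) ** N_times_m  # at least one such cell among N*m

# Illustrative inputs: near-normal mutation rate, ~10^4 lifetime divisions,
# 6 drivers, 10^8 stem cells at risk.
p = cancer_probability(u=3e-6, d=10000, k=6, N_times_m=1e8)
print(f"lifetime cancer probability ~ {p:.3f}")
```

Even with a normal per-division mutation rate, the large number of cells at risk yields a few-percent lifetime probability, which is the abstract's central point.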
Serel Arslan, S; Demir, N; Karaduman, A A
2017-02-01
This study aimed to develop a scale called the Tongue Thrust Rating Scale (TTRS), which categorises tongue thrust in children in terms of its severity during swallowing, and to investigate its validity and reliability. The study describes the developmental phase of the TTRS and presents its content and criterion-based validity and interobserver and intra-observer reliability. For content validation, seven experts assessed the steps in the scale over two Delphi rounds. Two physical therapists evaluated videos of 50 children with cerebral palsy (mean age, 57·9 ± 16·8 months), using the TTRS to test criterion-based validity, interobserver and intra-observer reliability. The Karaduman Chewing Performance Scale (KCPS) and Drooling Severity and Frequency Scale (DSFS) were used for criterion-based validity. All the TTRS steps were deemed necessary. The content validity index was 0·857. A very strong positive correlation was found between the two examinations by one physical therapist, which indicated intra-observer reliability (r = 0·938, P …) … tongue thrust in children. © 2016 John Wiley & Sons Ltd.
Moseley, J.; Miller, D.; Shah, Q.-U.-A. S. J.; Sakurai, K.; Kempe, M.; Tamizhmani, G.; Kurtz, S.
2011-10-01
Use of thermoplastic materials as encapsulants in photovoltaic (PV) modules presents a potential concern in terms of high-temperature creep, which should be evaluated before thermoplastics are qualified for use in the field. Historically, the issue of creep has been avoided by using thermosetting polymers as encapsulants, such as crosslinked ethylene-co-vinyl acetate (EVA). Because they lack crosslinked networks, however, thermoplastics may be subject to phase transitions and visco-elastic flow at the temperatures and mechanical stresses encountered by modules in the field, creating the potential for a number of reliability and safety issues. Thermoplastic materials investigated in this study include PV-grade uncured EVA (without curing agents and therefore not crosslinked); polyvinyl butyral (PVB); thermoplastic polyurethane (TPU); and three polyolefins (PO), which have been proposed for use as PV encapsulation. Two approaches were used to evaluate the performance of these materials as encapsulants: module-level testing and material-level testing.
Hilbert, Lisbeth Rischel; Hemmingsen, T.; Nielsen, Lars Vendelbo
2007-01-01
Effects of film formation on carbon steel in hydrogen sulfide (H2S) media may corrupt corrosion rate monitoring by electrochemical techniques. Electrochemical data from H2S solutions, biological sulfide media, and natural sulfide-containing geothermal water have been collected, and the process … Oxygen entering the system accelerates corrosion rates, but this effect may not be detected if rates are already overestimated. It is concluded that electrochemical techniques can be used for corrosion rate monitoring in some H2S media, but care must be taken in the choice of scan rate; it is important to realize when direct techniques like electrical resistance or mass loss should be used instead.
The Effects of Participation Rate on the Internal Reliability of Peer Nomination Measures
Marks, P.E.L.; Babcock, B.; Cillessen, A.H.N.; Crick, N.R.
2013-01-01
Although low participation rates have historically been considered problematic in peer nomination research, some researchers have recently argued that small proportions of participants can, in fact, provide adequate sociometric data. The current study used a classical measurement perspective to inve
2011-07-13
DEPARTMENT OF HOMELAND SECURITY, U.S. Customs and Border Protection: Quarterly IRS Interest Rates Used in Calculating Interest on Overdue Accounts and Refunds on Customs Duties. AGENCY: Customs and Border Protection, Department of Homeland Security. … Internal Revenue Service interest rates used to calculate interest on overdue accounts (underpayments)…
Shun-Wei Liu
2014-01-01
Full Text Available We demonstrated a fabrication technique to reduce the driving voltage, increase the current efficiency, and extend the operating lifetime of an organic light-emitting diode (OLED) by simply controlling the deposition rate of bis(10-hydroxybenzo[h]quinolinato)beryllium (Bebq2), used as the emitting layer and the electron-transport layer. In our optimized device, 55 nm of Bebq2 was first deposited at a faster deposition rate of 1.3 nm/s, followed by the deposition of a thin Bebq2 (5 nm) layer at a slower rate of 0.03 nm/s. The Bebq2 layer with the faster deposition rate exhibited higher photoluminescence efficiency and was suitable for use in light emission. The thin Bebq2 layer with the slower deposition rate was used to modify the interface between the Bebq2 and the cathode and hence improve the injection efficiency and lower the driving voltage. The operating lifetime of such a two-step deposition OLED was 1.92 and 4.6 times longer than that of devices with a single deposition rate of 1.3 and 0.03 nm/s, respectively.
Ayala, F; De Ste Croix, M; Sainz de Baranda, P; Santonja, F
2012-11-01
The main purpose of this study was to determine the absolute reliability of conventional (H/Q(CONV)) and functional (H/Q(FUNC)) hamstring to quadriceps strength imbalance ratios calculated using peak torque values, 3 different joint angle-specific torque values (10°, 20° and 30° of knee flexion) and 4 different joint ROM-specific average torque values (0-10°, 11-20°, 21-30° and 0-30° of knee flexion), adopting a prone position, in recreational athletes. A total of 50 recreational athletes completed the study. H/Q(CONV) and H/Q(FUNC) ratios were recorded at 3 different angular velocities (60, 180 and 240°/s) on 3 different occasions with a 72-96 h rest interval between consecutive testing sessions. Absolute reliability was examined through typical percentage error (CV(TE)), percentage change in the mean (CM) and intraclass correlations (ICC), as well as their respective confidence limits. H/Q(CONV) and H/Q(FUNC) ratios calculated using peak torque values showed moderate reliability, with CM scores lower than 2.5%, CV(TE) values ranging from 16 to 20% and ICC values ranging from 0.3 to 0.7. However, poor absolute reliability was shown for H/Q(CONV) and H/Q(FUNC) ratios calculated using joint angle-specific torque values and joint ROM-specific average torque values, especially for H/Q(FUNC) ratios (CM: 1-23%; CV(TE): 22-94%; ICC: 0.1-0.7). Therefore, the present study suggests that the CV(TE) values reported for H/Q(CONV) and H/Q(FUNC) (≈18%) calculated using peak torque values may be sensitive enough to detect the large changes usually observed after rehabilitation programmes, but not acceptable for examining the effect of preventive training programmes in healthy individuals. The clinical reliability of hamstring to quadriceps strength ratios calculated using joint angle-specific torque values and joint ROM-specific average torque values is questioned and should be re-evaluated in future research.
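The two core quantities above are easy to sketch: the conventional H/Q ratio (concentric hamstring over concentric quadriceps torque), and the typical error between two sessions expressed as a percentage of the mean. The typical-error formula used here (SD of the between-session differences divided by √2) is a common convention, assumed rather than taken from the paper, and the data are invented.

```python
import math
import statistics

# Conventional H/Q ratio: concentric hamstring / concentric quadriceps torque.
def hq_conventional(hamstring_peak_con, quadriceps_peak_con):
    return hamstring_peak_con / quadriceps_peak_con

# Typical percentage error (CV_TE) between two testing sessions:
# typical error = SD of paired differences / sqrt(2), expressed as a
# percentage of the grand mean.
def cv_typical_error(session1, session2):
    diffs = [a - b for a, b in zip(session1, session2)]
    te = statistics.stdev(diffs) / math.sqrt(2)
    grand_mean = statistics.mean(session1 + session2)
    return 100.0 * te / grand_mean

print(round(hq_conventional(90.0, 160.0), 2))  # -> 0.56

# Illustrative H/Q ratios for 4 athletes measured in two sessions:
s1 = [0.55, 0.60, 0.52, 0.66]
s2 = [0.50, 0.71, 0.49, 0.60]
print(f"CV_TE = {cv_typical_error(s1, s2):.1f}%")
```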
Morgado, Fabiane F Rocha; Ferreira, Maria Elisa C; Campana, Angela N N B; Rigby, Alan S; Tavares, Maria da Consolação G C F
2013-02-01
Research on body dissatisfaction has grown significantly. However, valid and reliable instruments for measuring body dissatisfaction in the congenitally blind have yet to be developed. In three studies, we report on the development, test-retest reliability, and concurrent and content validity of the Three-dimensional Body Rating Scale (3BRS) for the congenitally blind. In Study 1, 58 people with congenital blindness (28 women, 30 men; M age = 36.7, SD = 13.1) numerically ordered models of the 3BRS and models of the Two-dimensional Body Rating Scale (2BRS), from very thin to very fat. In Study 2, the construct validity and reliability of the 3BRS were assessed. The same participants from Study 1 chose the 3BRS model that represented their ideal body and the 3BRS model that represented their actual body. Two weeks later, a re-test was done. In Study 3, 16 experts judged the content validity of the 3BRS. The psychometric properties of the 3BRS, its utility, and its limitations are discussed along with considerations for future research.
Implications of imprecision in kinetic rate data for photochemical model calculations
Stewart, R.W.; Thompson, A.M. [National Aeronautics and Space Administration, Greenbelt, MD (United States). Goddard Space Flight Center
1997-12-31
Evaluation of uncertainties in photochemical model calculations is of great importance to scientists performing assessment modeling. A major source of uncertainty is the measurement imprecision inherent in photochemical reaction rate data that modelers rely on. A rigorous method of evaluating the impact of data imprecision on computational uncertainty is the study of error propagation using Monte Carlo techniques. There are two problems with the current implementation of the Monte Carlo method. First, there is no satisfactory way of accounting for the variation of imprecision with temperature in 1, 2, or 3D models; second, due to its computational expense, it is impractical in 3D model studies. These difficulties are discussed. (author) 4 refs.
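The Monte Carlo error propagation discussed above can be sketched in a few lines: perturb a rate coefficient with a lognormal factor representing its evaluated imprecision and look at the spread of the resulting rates. The Arrhenius parameters and the 1.3 uncertainty factor below are illustrative assumptions, not evaluated data.

```python
import math
import random

random.seed(1)  # reproducible sampling

# Sample an Arrhenius-type rate k(T) = A * exp(-E/R / T), perturbed by a
# lognormal factor whose 1-sigma multiplicative width is f.
def sample_rate(T, A=1.0e-11, E_over_R=2000.0, f=1.3, n=10000):
    ln_f = math.log(f)
    k0 = A * math.exp(-E_over_R / T)
    return [k0 * math.exp(random.gauss(0.0, ln_f)) for _ in range(n)]

samples = sample_rate(T=250.0)
k0 = 1.0e-11 * math.exp(-2000.0 / 250.0)
mean_k = sum(samples) / len(samples)

print(f"nominal k          : {k0:.3e}")
print(f"Monte Carlo mean k : {mean_k:.3e}")
```

The temperature dependence noted as problematic in the abstract enters through E/R: the same multiplicative uncertainty factor f is rarely valid across all temperatures, which is exactly the first difficulty the authors raise.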
Systematic comparison of ISOLDE-SC yields with calculated in-target production rates
Lukic, S.; Gevaert, F.; Kelic, A.; Ricciardi, M.V.; Schmidt, K.H.; Yordanov, O.
2006-02-15
Recently, a series of dedicated inverse-kinematics experiments performed at GSI, Darmstadt, has brought an important progress in our understanding of proton and heavy-ion induced reactions at relativistic energies. The nuclear reaction code ABRABLA that has been developed and benchmarked against the results of these experiments has been used to calculate nuclide production cross sections at different energies and with different targets and beams. These calculations are used to estimate nuclide production rates by protons in thick targets, taking into account the energy loss and the attenuation of the proton beam in the target, as well as the low-energy fission induced by the secondary neutrons. The results are compared to the yields of isotopes of various elements obtained from different targets at CERN-ISOLDE with 600 MeV protons, and the overall extraction efficiencies are deduced. The dependence of these extraction efficiencies on the nuclide half-life is found to follow a simple pattern in many different cases. A simple function is proposed to parameterize this behavior in a way that quantifies the essential properties of the extraction efficiency for the element and the target - ion-source system in question. (orig.)
New reaction rates for improved primordial D/H calculation and the cosmic evolution of deuterium
Coc, Alain; Uzan, Jean-Philippe; Vangioni, Elisabeth; Descouvemont, Pierre; Illiadis, Christian; Longland, Richard
2015-01-01
Primordial or big bang nucleosynthesis (BBN) is one of the three historical strong evidences for the big bang model. Standard BBN is now a parameter free theory, since the baryonic density of the Universe has been deduced with an unprecedented precision from observations of the anisotropies of the cosmic microwave background (CMB) radiation. There is a good agreement between the primordial abundances of 4He, D, 3He and 7Li deduced from observations and from primordial nucleosynthesis calculations. However, the 7Li calculated abundance is significantly higher than the one deduced from spectroscopic observations and remains an open problem. In addition, recent deuterium observations have drastically reduced the uncertainty on D/H, to reach a value of 1.6%. It needs to be matched by BBN predictions whose precision is now limited by thermonuclear reaction rate uncertainties. This is especially important as many attempts to reconcile Li observations with models lead to an increased D prediction. Here, we re-evalua...
Kai, Tetsuya; Maekawa, Fujio; Kasugai, Yoshimi; Takada, Hiroshi; Ikeda, Yujiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki [Sumitomo Atomic Energy Industries, Ltd., Tokyo (Japan)
2002-03-01
Reliability assessment for the high-energy-particle-induced radioactivity calculation code DCHAIN-SP 2001 was carried out through analysis of integral activation experiments with 14-MeV neutrons, aiming at validating the cross section and decay data revised from the previous version. The following three kinds of experiments conducted at the D-T neutron source facility, FNS, in JAERI were employed: (1) the decay gamma-ray measurement experiment for fusion reactor materials, (2) the decay heat measurement experiment for 32 fusion reactor materials, and (3) the integral activation experiment on mercury. It was found that the calculations with DCHAIN-SP 2001 predicted the experimental data for (1) - (3) within several tens of percent. It was concluded that the cross section data below 20 MeV and the associated decay data, as well as the calculation algorithm for solving the Bateman equation that is the master equation of DCHAIN-SP, were adequate. (author)
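The Bateman equation named above as DCHAIN-SP's master equation has a well-known closed-form solution for a linear decay chain with distinct decay constants. The sketch below illustrates that classic solution only; it is not DCHAIN-SP code, which additionally handles branching and production terms numerically.

```python
import math

def bateman_last(n0, lambdas, t):
    """Number of atoms of the last nuclide in a linear decay chain
    1 -> 2 -> ... -> n at time t, starting from n0 atoms of nuclide 1.
    Classic Bateman solution; requires all decay constants distinct."""
    n = len(lambdas)
    prefactor = 1.0
    for lam in lambdas[:-1]:
        prefactor *= lam            # product of the feeding decay constants
    total = 0.0
    for i in range(n):
        den = 1.0
        for j in range(n):
            if j != i:
                den *= lambdas[j] - lambdas[i]
        total += math.exp(-lambdas[i] * t) / den
    return n0 * prefactor * total
```

For a single nuclide the formula reduces to simple exponential decay, and for a two-member chain it reproduces the familiar parent-daughter expression.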
Mathews, Alyssa
Emissions from the combustion of fossil fuels are a growing pollution concern throughout the global community, as they have been linked to numerous health issues. The freight transportation sector is a large source of these emissions and is expected to continue growing as globalization persists. Within the US, the expanding development of the natural gas industry is helping to support many industries and leading to increased transportation. The process of High Volume Hydraulic Fracturing (HVHF) is one of the newer advanced extraction techniques that is increasing natural gas and oil reserves dramatically within the US; however, the technique is very resource-intensive. HVHF requires large volumes of water and sand per well, which is primarily transported by trucks in rural areas. Trucks are also used to transport waste away from HVHF well sites. This study focused on the emissions generated from the transportation of HVHF materials to remote well sites, dispersion, and subsequent health impacts. The Geospatial Intermodal Freight Transport (GIFT) model was used in this analysis within ArcGIS to identify roadways with high volume traffic and emissions. High traffic road segments were used as emissions sources to determine the atmospheric dispersion of particulate matter using AERMOD, an EPA model that calculates geographic dispersion and concentrations of pollutants. Output from AERMOD was overlaid with census data to determine which communities may be impacted by increased emissions from HVHF transport. The anticipated number of mortalities within the impacted communities was calculated, and mortality rates from these additional emissions were computed to be 1 in 10 million people for a simulated truck fleet meeting stricter 2007 emission standards, representing a best case scenario. Mortality rates due to increased truck emissions from average, in-use vehicles, which represent a mixed age truck fleet, are expected to be higher (1 death per 341,000 people annually).
The Reliability of Methodological Ratings for speechBITE Using the PEDro-P Scale
Murray, Elizabeth; Power, Emma; Togher, Leanne; McCabe, Patricia; Munro, Natalie; Smith, Katherine
2013-01-01
Background: speechBITE (http://www.speechbite.com) is an online database established in order to help speech and language therapists gain faster access to relevant research that can be used in clinical decision-making. In addition to containing more than 3000 journal references, the database also provides methodological ratings on the PEDro-P (an…
Reliability and discriminant validity of ataxia rating scales in early onset ataxia
Brandsma, R.; Lawerman, T. F.; Kuiper, M. J.; Geffen, van Joke; Lunsing, I. J.; Burger, H.; de Koning, T. J.; de Vries, J. J.; de Koning-Tijssen, M. A. J.; Sival, D. A.
2015-01-01
Objective: To determine observer agreement and discriminant validity of ataxia rating scales. Background: In children and young adults, Early Onset Ataxia (EOA) is frequently concurrent with other Movement Disorders, resulting in moderate inter-observer agreement among Movement Disorder professionals. To
Lauritsen, Jakob; Gundgaard, Maria G; Mortensen, Mette S
2014-01-01
Estimates of glomerular filtration rate (eGFR) are widely used when administering nephrotoxic chemotherapy. No studies performed in oncology patients have shown whether eGFR can safely substitute a measured GFR (mGFR) based on a marker method. We aimed to assess the validity of four major formula...
When can Electrochemical Techniques give Reliable Corrosion Rates on Carbon Steel in Sulfide Media?
Hilbert, Lisbeth Rischel; Hemmingsen, Tor; Nielsen, Lars Vendelbo
2005-01-01
in combination with ferrous sulfide corrosion products cover the steel surface. Corrosion rates can be overestimated by a factor of 10 to 100 with electrochemical techniques - both by linear polarization resistance (LPR) and electrochemical impedance spectroscopy (EIS). Oxygen entering the system accelerates...
Hilbert, Lisbeth Rischel; Hemmingsen, T.; Nielsen, Lars Vendelbo
2007-01-01
if the biofilm in combination with ferrous sulfide corrosion products covers the steel surface. Corrosion rates can be overestimated by a factor of 10 to 100 with electrochemical techniques - both by linear polarization resistance (LPR) and electrochemical impedance spectroscopy (EIS). Oxygen entering the system...
Koopman, Jacob J E; Rozing, Maarten P; Kramer, Anneke; Abad, José M; Finne, Patrik; Heaf, James G; Hoitsma, Andries J; De Meester, Johan M J; Palsson, Runolfur; Postorino, Maurizio; Ravani, Pietro; Wanner, Christoph; Jager, Kitty J; van Bodegom, David; Westendorp, Rudi G J
2016-04-01
The rate of senescence can be inferred from the acceleration by which mortality rates increase over age. Such a senescence rate is generally estimated from the parameters of a mathematical model fitted to these mortality rates. However, such models have limitations and underlying assumptions. Notably, they do not fit mortality rates at young and old ages. Therefore, we developed a method to calculate senescence rates from the acceleration of mortality directly, without modeling the mortality rates. We applied the different methods to age-group-specific mortality data from the European Renal Association-European Dialysis and Transplant Association Registry, including patients with end-stage renal disease on dialysis, who are known to suffer from increased senescence rates (n = 302,455), and patients with a functioning kidney transplant (n = 74,490). From age 20 to 70, senescence rates were comparable when calculated with or without a model. However, using non-modeled mortality rates yielded senescence rates at young and old ages that remained concealed when using modeled mortality rates. At young ages senescence rates were negative, while at old ages senescence rates declined. In conclusion, the rate of senescence can be calculated directly from non-modeled mortality rates, overcoming the disadvantages of an indirect estimation based on modeled mortality rates.
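The model-free idea described above, taking the acceleration of mortality directly from the data rather than from a fitted model, can be sketched as a finite-difference slope of log-mortality over age. This is a minimal illustration under that reading of the abstract, not the registry study's actual procedure; the Gompertz example data are synthetic.

```python
import math

def senescence_rates(ages, mortality):
    """Model-free senescence rate: the local slope of log-mortality,
    d ln m(x)/dx, estimated by central differences over age groups.
    For Gompertz mortality m(x) = a*exp(b*x) this recovers b exactly."""
    out = []
    for i in range(1, len(ages) - 1):
        slope = (math.log(mortality[i + 1]) - math.log(mortality[i - 1])) \
                / (ages[i + 1] - ages[i - 1])
        out.append((ages[i], slope))
    return out

# Synthetic Gompertz mortality: the slope should equal b = 0.085 at every age
ages = list(range(20, 95, 5))
m = [1e-4 * math.exp(0.085 * x) for x in ages]
```

Because no parametric model is fitted, negative or declining slopes at the youngest and oldest ages survive in the output instead of being smoothed away, which is the behavior the abstract reports.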
Montero-Cabrera, Luis Alberto; Röhrig, Ute; Padrón-Garcia, Juan A; Crespo-Otero, Rachel; Montero-Alejo, Ana L; Garcia de la Vega, José M; Chergui, Majed; Rothlisberger, Ursula
2007-10-14
Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and only using a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term named CNDOL/21 shows great consistency and easier SCF convergence when used together with an appropriate function for charge repulsion energies that is derived from traditional formulas. It is possible to obtain a priori molecular orbitals and electron excitation properties after the configuration interaction of singly excited determinants with reliability, maintaining interpretative possibilities even though the Hamiltonian is simplified. Tests with some unequivocal gas phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) ratify the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, show reliability, yielding a first allowed transition at 483 nm, very similar to the known experimental value of 500 nm of the "dark state." In this very important case, our model gives a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.
HU TA
2007-10-26
Assess the steady-state flammability level at normal and off-normal ventilation conditions. A methodology of flammability analysis for Hanford tank waste is developed. The hydrogen generation rate model was applied to calculate the gas generation rate for 177 tanks. Flammability concentrations, the time to reach 25% and 100% of the lower flammability limit (LFL), and the minimum ventilation rate needed to stay below 100% of the LFL are calculated for 177 tanks under various scenarios.
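For a single well-mixed headspace, a simplifying assumption and not the report's tank-specific methodology, the quantities named above follow from the mass balance dC/dt = G/V - (Q/V)C, where G is the gas generation rate, Q the ventilation rate, and V the headspace volume. A hedged sketch:

```python
import math

def time_to_fraction_lfl(gen_rate, vent_rate, volume, c_lfl, fraction):
    """Time for a well-mixed headspace to reach `fraction` of the LFL.
    dC/dt = G/V - (Q/V)*C  =>  C(t) = (G/Q)*(1 - exp(-Q*t/V)).
    Returns math.inf if the steady state (G/Q) never reaches the target."""
    c_target = fraction * c_lfl
    c_ss = gen_rate / vent_rate          # steady-state concentration G/Q
    if c_target >= c_ss:
        return math.inf
    return -(volume / vent_rate) * math.log(1.0 - c_target / c_ss)

def min_vent_rate(gen_rate, c_lfl):
    """Minimum ventilation that keeps the steady state below 100% of the LFL."""
    return gen_rate / c_lfl
```

With illustrative numbers (G = 1 m3/h of hydrogen, Q = 50 m3/h, V = 1000 m3, LFL of hydrogen about 4 vol%), the 25%-of-LFL target of 1 vol% is reached after V/Q times ln 2 hours, since the steady state is 2 vol%.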
Field theoretic calculation of energy cascade rates in non-helical magnetohydrodynamic turbulence
Mahendra K Verma
2003-09-01
Energy cascade rates and Kolmogorov's constant for non-helical steady magnetohydrodynamic turbulence have been calculated by solving the flux equations to the first order in perturbation. For zero cross helicity and space dimension d = 3, magnetic energy cascades from large length-scales to small length-scales (forward cascade). In addition, there are energy fluxes from large-scale magnetic field to small-scale velocity field, large-scale velocity field to small-scale magnetic field, and large-scale velocity field to large-scale magnetic field. Kolmogorov's constant for magnetohydrodynamics is approximately equal to that for fluid turbulence (≈ 1.6) for Alfvén ratio 0.5 ≤ rA ≤ ∞. For higher space dimensions, the energy fluxes are qualitatively similar, and Kolmogorov's constant varies as d^{1/3}. For the normalized cross helicity σc → 1, the cascade rates are proportional to (1-σc)/(1+σc), and the Kolmogorov's constants vary significantly with σc.
Field theoretic calculation of energy cascade rates in non-helical magnetohydrodynamic turbulence
Mahendra K Verma
2004-06-01
Energy cascade rates and Kolmogorov's constant for non-helical steady magnetohydrodynamic turbulence have been calculated by solving the flux equations to the first order in perturbation. For zero cross helicity and space dimension $d = 3$, magnetic energy cascades from large length-scales to small length-scales (forward cascade). In addition, there are energy fluxes from large-scale magnetic field to small-scale velocity field, large-scale velocity field to small-scale magnetic field, and large-scale velocity field to large-scale magnetic field. Kolmogorov's constant for magnetohydrodynamics is approximately equal to that for fluid turbulence ($\approx 1.6$) for Alfvén ratio $0.5 \le r_{A} \le \infty$. For higher space dimensions, the energy fluxes are qualitatively similar, and Kolmogorov's constant varies as $d^{1/3}$. For the normalized cross helicity $\sigma_{c} \to 1$, the cascade rates are proportional to $(1-\sigma_{c})/(1+\sigma_{c})$, and the Kolmogorov's constants vary significantly with $\sigma_{c}$.
Limitations of the TG-43 formalism for skin high-dose-rate brachytherapy dose calculations
Granero, Domingo, E-mail: dgranero@eresa.com [Department of Radiation Physics, ERESA, Hospital General Universitario, 46014 Valencia (Spain); Perez-Calatayud, Jose [Radiotherapy Department, La Fe University and Polytechnic Hospital, Valencia 46026 (Spain); Vijande, Javier [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100, Spain and IFIC (UV-CSIC), Paterna 46980 (Spain); Ballester, Facundo [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Rivard, Mark J. [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States)
2014-02-15
Purpose: In skin high-dose-rate (HDR) brachytherapy, sources are located outside, in contact with, or implanted at some depth below the skin surface. Most treatment planning systems use the TG-43 formalism, which is based on single-source dose superposition within an infinite water medium without accounting for the true geometry in which conditions for scattered radiation are altered by the presence of air. The purpose of this study is to evaluate the dosimetric limitations of the TG-43 formalism in HDR skin brachytherapy and the potential clinical impact. Methods: Dose rate distributions of typical configurations used in skin brachytherapy were obtained: a 5 cm × 5 cm superficial mould; a source inside a catheter located at the skin surface with and without backscatter bolus; and a typical interstitial implant consisting of an HDR source in a catheter located at a depth of 0.5 cm. Commercially available HDR ⁶⁰Co and ¹⁹²Ir sources and a hypothetical ¹⁶⁹Yb source were considered. The Geant4 Monte Carlo radiation transport code was used to estimate dose rate distributions for the configurations considered. These results were then compared to those obtained with the TG-43 dose calculation formalism. In particular, the influence of adding bolus material over the implant was studied. Results: For a 5 cm × 5 cm ¹⁹²Ir superficial mould and 0.5 cm prescription depth, dose differences in comparison to the TG-43 method were about −3%. When the source was positioned at the skin surface, dose differences were smaller than −1% for ⁶⁰Co and ¹⁹²Ir, yet −3% for ¹⁶⁹Yb. For the interstitial implant, dose differences at the skin surface were −7% for ⁶⁰Co, −0.6% for ¹⁹²Ir, and −2.5% for ¹⁶⁹Yb. Conclusions: This study indicates the following: (i) for the superficial mould, no bolus is needed; (ii) when the source is in contact with the skin surface, no bolus is needed for either ⁶⁰Co or ¹⁹²Ir. For
Ahmadpanah M
2016-03-01
Mohammad Ahmadpanah,1 Meisam Sheikhbabaei,1 Mohammad Haghighi,1 Fatemeh Roham,1 Leila Jahangard,1 Amineh Akhondi,2 Dena Sadeghi Bahmani,3 Hafez Bajoghli,4 Edith Holsboer-Trachsler,3 Serge Brand3,5 1Behavioral Disorders and Substances Abuse Research Center, Hamadan University of Medical Sciences, Hamadan, Iran; 2Hamadan Educational Organization, Ministry of Education, Hamadan, Iran; 3Center for Affective, Stress, and Sleep Disorders, Psychiatric Clinics of the University of Basel, Basel, Switzerland; 4Iranian National Center for Addiction Studies (INCAS), Tehran University of Medical Sciences, Tehran, Iran; 5Department of Sport, Exercise and Health Science, Sport Science Section, University of Basel, Basel, Switzerland Background and aims: The Montgomery–Asberg Depression Rating Scale (MADRS) is an expert's rating tool to assess the severity and symptoms of depression. The aim of the present two studies was to validate the Persian version of the MADRS and determine its test–retest reliability in patients diagnosed with major depressive disorder (MDD). Methods: In study 1, the translated MADRS and the Hamilton Depression Rating Scale (HDRS) were applied to 210 patients diagnosed with MDD and 100 healthy adults. In study 2, 200 patients diagnosed with MDD were assessed with the MADRS in face-to-face interviews. Thereafter, 100 patients were assessed 3–14 days later, again via face-to-face interviews, while the other 100 patients were assessed 3–14 days later via a telephone interview. Results: Study 1: The MADRS and HDRS scores between patients with MDD and healthy controls differed significantly. Agreement between scoring of the MADRS and HDRS was high (r=0.95). Study 2: The intraclass correlation coefficient (test–retest reliability) was r=0.944 for the face-to-face interviews, and r=0.959 for the telephone interviews. Conclusion: The present data suggest that the Persian MADRS has high validity and excellent test–retest reliability over
Vandenberg, Justin M; George, Deanna R; O'Leary, Andrea J; Olson, Lindsay C; Strassburg, Kaitlyn R; Hollman, John H
2015-01-01
Individuals with conversion disorder have neurologic symptoms that are not identified by an underlying organic cause. Often the symptoms manifest as gait disturbances. The modified gait abnormality rating scale (GARS-M) may be useful for quantifying gait abnormalities in these individuals. The purpose of this study was to examine the reliability, responsiveness and concurrent validity of GARS-M scores in individuals with conversion disorder. Data from 27 individuals who completed a rehabilitation program were included in this study. Pre- and post-intervention videos were obtained and walking speed was measured. Five examiners independently evaluated gait performance according to the GARS-M criteria. Inter- and intrarater reliability of GARS-M scores were estimated with intraclass correlation coefficients (ICCs). Responsiveness was estimated with the minimum detectable change (MDC). Pre- to post-treatment changes in GARS-M scores were analyzed with a dependent t-test. The correlation between GARS-M scores and walking speed was analyzed to assess concurrent validity. GARS-M scores were quantified with good-to-excellent inter- (ICC = 0.878) and intrarater reliability (ICC = 0.989). The MDC was 2 points. Mean GARS-M scores decreased from 7 ± 5 at baseline to 1 ± 2 at discharge (t26 = 7.411, p < 0.001) in individuals with conversion disorder. GARS-M scores provide objective measures upon which treatment effects can be assessed.
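A minimum detectable change like the one reported above is conventionally derived from the ICC and the between-subject standard deviation. A generic sketch of those standard formulas follows; the numbers in the test are illustrative, not the study's data.

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sd, icc):
    """Minimum detectable change at 95% confidence for a test-retest
    design: MDC95 = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem_from_icc(sd, icc)
```

The sqrt(2) reflects that a change score carries measurement error from both occasions; higher ICCs shrink the SEM and hence the MDC.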
Sherbini, S; Tamasanis, D; Sykes, J; Porter, S W
1986-12-01
A program was developed to calculate the exposure rate resulting from airborne gases inside a reactor containment building. The calculations were performed at the location of a wall-mounted area radiation monitor. The program uses Monte Carlo techniques and accounts for both the direct and scattered components of the radiation field at the detector. The scattered component was found to contribute about 30% of the total exposure rate at 50 keV and dropped to about 7% at 2000 keV. The results of the calculations were normalized to unit activity per unit volume of air in the containment. This allows the exposure rate readings of the area monitor to be used to estimate the airborne activity in containment in the early phases of an accident. Such estimates, coupled with containment leak rates, provide a method to obtain a release rate for use in offsite dose projection calculations.
Bendell, A
1986-01-01
Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo
Parshall Mark B
2012-07-01
Background: Dyspnea is among the most common reasons for emergency department (ED) visits by patients with cardiopulmonary disease, who are commonly asked to recall the symptoms that prompted them to come to the ED. The reliability of recalled dyspnea has not been systematically investigated in ED patients. Methods: Patients with chronic or acute cardiopulmonary conditions who came to the ED with dyspnea (N = 154) completed the Multidimensional Dyspnea Profile (MDP) several times during the visit and in a follow-up visit 4 to 6 weeks later (n = 68). The MDP has 12 items with numerical ratings of intensity, unpleasantness, sensory qualities, and emotions associated with how breathing felt when participants decided to come to the ED (recall MDP) or at the time of administration (“now” MDP). The recall MDP was administered twice in the ED and once during the follow-up visit. Principal components analysis (PCA) with varimax rotation was used to assess the domain structure of the recall MDP. Internal consistency reliability was assessed with Cronbach’s alpha. Test–retest reliability was assessed with intraclass correlation coefficients (ICCs) for absolute agreement for individual items and domains. Results: PCA of the recall MDP was consistent with two domains (Immediate Perception, 7 items, Cronbach’s alpha = .89 to .94; Emotional Response, 5 items, Cronbach’s alpha = .81 to .85). Test–retest ICCs for the recall MDP during the ED visit ranged from .70 to .87 for individual items and were .93 and .94 for the Immediate Perception and Emotional Response domains. ICCs were much lower for the interval between the ED visit and follow-up, both for individual items (.28 to .66) and for the Immediate Perception and Emotional Response domains (.72 and .78, respectively). Conclusions: During an ED visit, recall MDP ratings of dyspnea at the time participants decided to seek care in the ED are reliable and sufficiently stable, both for
Rogers, G; Oosthuyse, T
2000-02-01
The standard equation used to calculate mean arterial pressure (MAP) assumes that diastole persists for 2/3 and systole for 1/3 of each cardiac cycle. This ratio is altered when heart rate increases, and therefore we investigated the efficacy of predicting MAP during exercise using non-invasive indirect methods. Eight subjects exercised on a cycle ergometer for 3-minute intervals to elicit heart rates between 100-110, 120-130, 140-150, 160-170, and 180-190 beats/min. In the last minute of each 3-minute interval an ECG recording was taken and systolic (SP) and diastolic (DP) blood pressure was measured by manual auscultation. MAP was calculated for each heart rate interval by: MAP = DP + 1/3(SP - DP) (method A), and MAP = DP + Fs(SP - DP) (method B), where Fs is the fraction of the cardiac cycle comprising systole, measured from the ECG. Fs increased from 0.35+/-0.049 at rest to 0.47+/-0.039 at a heart rate of 180-190 beats/min. MAP measured by method B was consistently greater than MAP calculated by method A at all heart rates greater than resting heart rate (p<0.05). The error of using the standard equation (method A) to derive MAP during exercise (measured as the percentage difference between method A and B) increased linearly with heart rate (r=0.98). The standard MAP equation should not be applied during exercise, as it does not account for the change in the systolic:diastolic period ratio as heart rate increases.
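The two formulas compared in the abstract are easy to reproduce; the Fs values used below are the rest and peak-exercise figures quoted above, and the blood pressures are illustrative.

```python
def map_standard(sp, dp):
    """Method A: assumes systole occupies 1/3 of the cardiac cycle."""
    return dp + (sp - dp) / 3.0

def map_measured_fs(sp, dp, fs):
    """Method B: uses the systolic fraction Fs measured from the ECG."""
    return dp + fs * (sp - dp)
```

At rest (Fs about 0.35) the two estimates nearly coincide, while at 180-190 beats/min (Fs about 0.47) method B exceeds method A, matching the direction of the reported discrepancy.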
Vegge, Tejs
2004-01-01
The dissociation of molecular hydrogen on a Mg(0001) surface and the subsequent diffusion of atomic hydrogen into the magnesium substrate is investigated using Density Functional Theory (DFT) calculations and rate theory. The minimum energy path and corresponding transition states are located using the nudged elastic band method, and rates of the activated processes are calculated within the harmonic approximation to transition state rate theory, using both classical and quantum partition functions based on atomic vibrational frequencies calculated by DFT. The dissociation/recombination of H2 is found to be rate-limiting for the ab- and desorption of hydrogen, respectively. Zero-point energy contributions are found to be substantial for the diffusion of atomic hydrogen, but classical rates are still found to be within an order of magnitude at room temperature.
Carlos Janssen Gomes da Cruz
The objective of this study was to evaluate the reproducibility of the heart rate variability threshold (HRVT) and parasympathetic reactivation in physically active men (n = 16, 24.3 ± 5.1 years). During the test, HRVT was assessed by SD1 and r-MSSD dynamics. Immediately after exercise, r-MSSD was analyzed in segments of 60 seconds for a period of five minutes. High absolute and relative reproducibility of HRVT was observed, as assessed by SD1 and r-MSSD dynamics (ICC = 0.92, CV = 10.8, SEM = 5.8). During the recovery phase, moderate to high reproducibility was observed for r-MSSD from the first to the fifth minute (ICC = 0.69-0.95, CV = 7.5-14.2, SEM = 0.07-1.35). We conclude that HRVT and r-MSSD analysis after a submaximal stress test are highly reproducible measures that might be used to assess the acute and chronic effects of exercise training on cardiac autonomic modulation during and/or after a submaximal stress test.
Bentley, T William
2015-01-01
.... Third order rate constants (k3) are calculated for solvolytic reactions in a wide range of compositions of acetone-water mixtures, and are shown to be either approximately constant or correlated with the Grunwald-Winstein Y parameter...
Research on the improved calculation method for mechanical structural reliability
Li Sichao; Zhang Daiguo; Zhang Qiang
2011-01-01
The calculation method for mechanical structural reliability was studied. Based on the stress-strength interference model, a general method for determining strength and stress was put forward. Aiming at the limitations of the traditional model, an improved calculation method was developed, which further widens the application range of the stress-strength interference model.
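For the stress-strength interference model that this record builds on, reliability has a standard closed form when stress and strength are independent normal variables: R = Φ(β) with β = (μ_strength - μ_stress) / sqrt(σ_strength² + σ_stress²). A minimal sketch of that textbook case follows; the paper's improved method is not reproduced here.

```python
import math

def interference_reliability(mu_str, sd_str, mu_load, sd_load):
    """R = P(strength > stress) for independent normal strength and stress.
    beta = (mu_str - mu_load) / sqrt(sd_str^2 + sd_load^2); R = Phi(beta)."""
    beta = (mu_str - mu_load) / math.hypot(sd_str, sd_load)
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))
```

For example, a strength of 500 ± 40 MPa against a stress of 380 ± 30 MPa gives β = 2.4 and a reliability above 0.99; when the means coincide, the reliability is exactly 0.5.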
Validity and reliability of the Menopause Rating Scale in a Colombian indigenous population
Álvaro Monterrosa-Castro
2017-01-01
The Menopause Rating Scale (MRS) measures quality of life in menopausal women. It comprises three dimensions that assess somatic, psychological and urogenital menopause-related symptoms. However, the validity of the scale may vary according to population characteristics, and there are to date no validations of the MRS in American indigenous populations. The aim was to assess the validity of the MRS in indigenous Colombian women during menopause. The study included a sample of 914 indigenous women, 507 postmenopausal and 407 premenopausal, between 40-49 years old, with a mean age of 59.3 ± 5.9 years. The MRS was applied to all enrolled women. Cronbach's alpha was computed for the originally proposed dimensions, and for the dimensions resulting from factor analysis with maximum likelihood methods. A Promax rotation was applied in the analysis. The MRS showed a Cronbach's alpha of 0.86; the somatic dimension: 0.63, the psychological dimension: 0.75, and the urogenital: 0.84. The score was greater in postmenopausal than in premenopausal women, 14.4 (SD 6.4) versus 8.4 (SD 5.9) (P<0.001). The factor analysis showed two dimensions. The first dimension included items 1, 7, 8, 9, 10, 11 and accounted for 39.9% of variance. The second dimension included items 2, 3, 4, 5, 6, explaining 14.2% of variance. Cronbach's alpha was 0.86 for the first dimension and 0.81 for the second dimension. The MRS showed high internal consistency and adequate nomological validity. The factor analysis resulted in two dimensions. These results evidence the need to better assess the validity of instruments in different populations.
van Harrevelt, Rob; Honkala, Johanna Karoliina; Nørskov, Jens Kehlet
2005-01-01
Quantum-mechanical calculations of the reaction rate for dissociative adsorption of N-2 on stepped Ru(0001) are presented. Converged six-dimensional quantum calculations for this heavy-atom reaction have been performed using the multiconfiguration time-dependent Hartree method. A potential...
The reliability of [C II] as an indicator of the star formation rate
De Looze, Ilse; Baes, Maarten; Bendo, George J.; Cortese, Luca; Fritz, Jacopo
2011-10-01
The [C II] 157.74 μm line is an important coolant for the neutral interstellar gas. Since [C II] is the brightest spectral line for most galaxies, it is a potentially powerful tracer of star formation activity. In this paper, we present a calibration of the star formation rate (SFR) as a function of the [C II] luminosity for a sample of 24 star-forming galaxies in the nearby Universe. This sample includes objects classified as H II regions or low-ionization nuclear emission-line regions, but omits all Seyfert galaxies with a significant contribution from the active galactic nucleus to the mid-infrared photometry. In order to calibrate the SFR against the line luminosity, we rely on both Galaxy Evolution Explorer far-ultraviolet data, which is an ideal tracer of the unobscured star formation, and MIPS 24 μm, to probe the dust-enshrouded fraction of star formation. In the case of normal star-forming galaxies, the [C II] luminosity correlates well with the SFR. However, the extension of this relation to more quiescent (Hα EW ≤ 10 Å) or ultraluminous galaxies should be handled with caution, since these objects show a non-linearity in the L[C II]-to-LFIR ratio as a function of LFIR (and thus, their star formation activity). We provide two possible explanations for the origin of the tight correlation between the [C II] emission and the star formation activity on a global galaxy-scale. A first interpretation could be that the [C II] emission from photodissociation regions (PDRs) arises from the immediate surroundings of star-forming regions. Since PDRs are neutral regions of warm dense gas at the boundaries between H II regions and molecular clouds and they provide the bulk of [C II] emission in most galaxies, we believe that a more or less constant contribution from these outer layers of photon-dominated molecular clumps to the [C II] emission provides a straightforward explanation for this close link between the [C II] luminosity and SFR. Alternatively, we consider the
Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo
2014-01-01
The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.
Analysis and calculation method of reliability of anchored retaining wall
唐仁华; 陈昌富
2012-01-01
A series-parallel model of the rib beam-anchor system for reliability analysis of anchored retaining walls is presented. Regarding the rib beam as a continuous beam and the anchors as elastic supports, and introducing a composite stiffness coefficient for the anchor and the soil around the anchorage section, the loads acting on each anchor are obtained by the displacement method. Considering the correlation between performance functions and based on system reliability theory, a series system of the three failure modes of a single anchor and a parallel system of multiple anchors are developed, and the calculation method of system reliability for the two systems is derived. A program implementing the new method is used to compute the reliability in an engineering example. The results show that the three failure modes of a single anchor are correlated and affect the anchor's reliability differently, while the failure probability of the three-anchor parallel system is approximately equal to the sum of the individual anchors' failure probabilities, each conditioned on the other anchors not having failed.
Baumgarten, Werner; Thiele, Holger; Ruprecht, Benjamin; Phlippen, Peter-W.; Schlömer, Luc
2017-09-01
Dose rate calculations are important for judging the shielding performance of transport casks for radioactive material. Therefore it is important to have reliable calculation tools. We report on measured and calculated dose rates near a thick-walled transport and storage cask of ductile cast iron with lead inserts and a Co-60 source inside. In a series of experiments the thickness of the inserts was varied, and measured dose rates near the cask were compared with SCALE/MAVRIC 6.1.3 and SCALE/MAVRIC 6.2 calculation results. Deviations from the measurements were found to be higher for increased lead thicknesses. Furthermore, it is shown how the shielding material density, air scattering and accounting for the floor influence the quality of the calculation.
Chen, Y.; García de Abajo, F. J.; Chassé, A.; Ynzunza, R. X.; Kaduwela, A. P.; van Hove, M. A.; Fadley, C. S.
1998-11-01
The Rehr-Albers (RA) separable Green's-function formalism, which is based on an expansion series, has been successful in speeding up multiple-scattering cluster calculations for photoelectron diffraction simulations, particularly in its second-order version. The performance of this formalism is explored here in terms of computational speed, convergence over orders of multiple scattering, over orders of approximation, and over cluster size, by comparison with exact cluster-based formalisms. It is found that the second-order RA approximation [characterized by (6×6) scattering matrices] is adequate for many situations, particularly if the initial state from which photoemission occurs is of s or p type. For the most general and quantitative applications, higher-order versions of RA may become necessary for d initial states [third-order, i.e., (10×10) matrices] and f initial states [fourth-order, i.e., (15×15) matrices]. However, the required RA order decreases as an electron wave proceeds along a multiple-scattering path, and this can be exploited, together with the selective and automated cutoff of weakly contributing matrix elements and paths, to yield computer time savings of at least an order of magnitude with no significant loss of accuracy. Cluster sizes of up to approximately 100 atoms should be sufficient for most problems that require about 5% accuracy in diffracted intensities. Excellent sensitivity to structure is seen in comparisons of second-order theory with variable geometry to exact theory as a fictitious ``experiment.'' Our implementation of the Rehr-Albers formalism thus represents a versatile, quantitative, and efficient method for the accurate simulation of photoelectron diffraction.
New method of calculating fuzzy reliability of hydraulic cylinder stability
龚相超; 胡百鸣; 韩芳
2011-01-01
There are many factors that influence the critical pressure of a hydraulic cylinder. The mechanism by which the various parts of the hydraulic cylinder affect its stability is not entirely clear; moreover, the load on the hydraulic cylinder is both fuzzy and random. In this paper, the fuzzy reliability of hydraulic cylinder stability is calculated according to mechanical fuzzy reliability theory, which provides an important reference for hydraulic cylinder design. A linear membership function is used for the critical pressure of the hydraulic cylinder, and the load is treated as a normally distributed random variable. A calculation example shows that the method is effective.
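One common formulation of fuzzy reliability consistent with the description above integrates a linear "safe" membership function against the normal probability density of the random load; the parameter values below are hypothetical, not taken from the paper:

```python
import math

def fuzzy_reliability(load_mean, load_sd, p_lower, p_upper, n=20000):
    """Fuzzy reliability R = integral of mu(x) * f(x) dx, where f is the normal
    pdf of the random load and mu is a linear membership function for 'safe':
    mu = 1 below p_lower, falling linearly to 0 at the critical value p_upper."""
    def mu(x):
        if x <= p_lower:
            return 1.0
        if x >= p_upper:
            return 0.0
        return (p_upper - x) / (p_upper - p_lower)

    def pdf(x):
        z = (x - load_mean) / load_sd
        return math.exp(-0.5 * z * z) / (load_sd * math.sqrt(2.0 * math.pi))

    # Trapezoidal integration over +/- 8 standard deviations
    a, b = load_mean - 8.0 * load_sd, load_mean + 8.0 * load_sd
    h = (b - a) / n
    total = 0.5 * (mu(a) * pdf(a) + mu(b) * pdf(b))
    for i in range(1, n):
        x = a + i * h
        total += mu(x) * pdf(x)
    return total * h

# Load well below the fuzzy critical band -> reliability close to 1
print(round(fuzzy_reliability(10.0, 1.0, 20.0, 25.0), 4))  # -> 1.0
```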
Markus Klimscheffskij
2015-05-01
Full Text Available In the EU, electricity suppliers are obliged to disclose to their customers the energy origin and environmental impacts of sold electricity. To this end, guarantees of origin (GOs) are used to explicitly track electricity generation attributes to individual electricity consumers. When part of a reliable electricity disclosure system, GOs provide consumers with an important means of supporting renewable power. In order to be considered reliable, GOs require the support of an implicit disclosure system, a residual mix, which prevents once explicitly tracked attributes from being double counted in a default energy mix. This article outlines the key problems in implicit electricity disclosure: (1) uncorrected generation statistics used for implicit disclosure; (2) contract-based tracking; (3) uncoordinated calculation within Europe; (4) overlapping regions for implicit disclosure; (5) active GOs. The improvements achieved during the RE-DISS project (04/2010-10/2012) with regard to these problems have reduced the total implicit disclosure error by 168 TWh and double counting of renewable generation attributes by 70 TWh, in 16 selected countries. Quantitatively, the largest individual improvements were achieved in Norway, Germany and Italy. Within the 16 countries, a total disclosure error of 75 TWh and double counting of renewable generation attributes of 36 TWh still remain at the national level after the end of the project. Regarding the residual mix calculation methodology, the article justifies the implementation of a shifted transaction-based method instead of a production year-based method.
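The role of the residual mix can be illustrated with toy arithmetic: attributes explicitly tracked by cancelled GOs are subtracted from total generation before the default mix is computed, which is what prevents double counting. The volumes below are invented for illustration:

```python
def residual_mix(generation, tracked):
    """Residual mix shares after removing explicitly tracked (GO-cancelled)
    attributes from total generation. Volumes in TWh; illustrative only."""
    residual = {src: generation[src] - tracked.get(src, 0.0) for src in generation}
    total = sum(residual.values())
    return {src: vol / total for src, vol in residual.items()}

gen = {"renewable": 60.0, "fossil": 120.0, "nuclear": 20.0}
gos = {"renewable": 40.0}  # GOs cancelled for renewable attributes
mix = residual_mix(gen, gos)
print(round(mix["renewable"], 3))  # -> 0.125
```

A consumer buying untracked electricity is then assigned this diluted residual share rather than the full renewable share of total generation.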
Cassell, K.J. (Saint Luke's Hospital, Guildford (UK))
1983-02-01
A method, developed from the Quantisation Method, of calculating dose-rate distributions around uniformly and non-uniformly loaded brachytherapy sources is described. It allows accurate and straightforward corrections for oblique filtration and self-absorption to be made. Using this method, dose-rate distributions have been calculated for sources of radium 226, gold 198, iridium 192, caesium 137 and cobalt 60, all of which show very good agreement with existing measured and calculated data. This method is now the basis of the Interstitial and Intracavitary Dosimetry (IID) program on the General Electric RT/PLAN computerised treatment planning system.
Frankel, Arthur; Mueller, Charles
2008-01-01
One of the key issues in the development of an earthquake recurrence model for California and adjacent portions of Nevada and Mexico is the comparison of the predicted rates of earthquakes with the observed rates. Therefore, it is important to make an accurate determination of the observed rate of M>6.5 earthquakes in California and the adjacent region. We have developed a procedure to calculate observed earthquake rates from an earthquake catalog, accounting for magnitude uncertainty and magnitude rounding. We present a Bayesian method that corrects for the effect of the magnitude uncertainty in calculating the observed rates. Our recommended determination of the observed rate of M>6.5 in this region is 0.246 ± 0.085 (for two sigma) per year, although this rate is likely to be underestimated because of catalog incompleteness and this uncertainty estimate does not include all sources of uncertainty.
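As a zeroth-order illustration of an observed-rate estimate (deliberately ignoring the magnitude-uncertainty and rounding corrections that are the paper's actual contribution), a count of events over a catalog duration gives a rate with a Poisson counting uncertainty; the numbers below are hypothetical:

```python
import math

def observed_rate(n_events, years):
    """Naive observed event rate and its two-sigma Poisson counting uncertainty.
    This omits magnitude uncertainty and catalog incompleteness corrections."""
    rate = n_events / years
    two_sigma = 2.0 * math.sqrt(n_events) / years
    return rate, two_sigma

rate, err = observed_rate(20, 80.0)
print(f"{rate:.3f} +/- {err:.3f} per year")  # -> 0.250 +/- 0.112 per year
```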
Gholamreza Jandaghi
2008-07-01
Full Text Available The purpose of this research is to determine high school teachers' skill in designing exam questions in the subject of mathematics. The statistical population was all mathematics exam sheets for two semesters of one school year, from which a sample of 364 exam sheets was drawn using multistage cluster sampling. Two experts assessed the sheets, and the data were analyzed using appropriate indices, the z-test and the chi-squared test. We found that the designed exams have suitable coefficients of validity and reliability. The level of difficulty of the exams was high. No significant relationship was found between male and female teachers in terms of the coefficients of validity and reliability, but a significant difference in difficulty level between male and female teachers was found (P<.001), meaning that female teachers had designed more difficult questions. We did not find any significant relationship between teachers' gender and the coefficient of discrimination of the exams.
This paper analyzes the accuracy of metabolic rate calculations performed in the whole room indirect calorimeter using the molar balance equations. The equations are treated from the point of view of cause-effect relationship where the gaseous exchange rates representing the unknown causes need to b...
Dourado, V.Z.; Guerra, R.L.F. [Laboratório de Estudos da Motricidade Humana, Departamento de Ciências do Movimento Humano, Universidade Federal de São Paulo, Santos, SP (Brazil)
2013-02-01
Studies on the assessment of heart rate variability threshold (HRVT) during walking are scarce. We determined the reliability and validity of HRVT assessment during the incremental shuttle walk test (ISWT) in healthy subjects. Thirty-one participants aged 57 ± 9 years (17 females) performed 3 ISWTs. During the 1st and 2nd ISWTs, instantaneous heart rate variability was calculated every 30 s and HRVT was measured. Walking velocity at HRVT in these tests (WV-HRVT1 and WV-HRVT2) was registered. During the 3rd ISWT, physiological responses were assessed. The ventilatory equivalents were used to determine ventilatory threshold (VT) and the WV at VT (WV-VT) was recorded. The difference between WV-HRVT1 and WV-HRVT2 was not statistically significant (median and interquartile range = 4.8; 4.8 to 5.4 vs 4.8; 4.2 to 5.4 km/h); the correlation between WV-HRVT1 and WV-HRVT2 was significant (r = 0.84); the intraclass correlation coefficient was high (0.92; 0.82 to 0.96), and the agreement was acceptable (-0.08 km/h; -0.92 to 0.87). The difference between WV-VT and WV-HRVT2 was not statistically significant (4.8; 4.8 to 5.4 vs 4.8; 4.2 to 5.4 km/h) and the agreement was acceptable (0.04 km/h; -1.28 to 1.36). HRVT assessment during walking is a reliable measure and permits the estimation of VT in adults. We suggest the use of the ISWT for the assessment of exercise capacity in middle-aged and older adults.
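The bias and limits of agreement quoted above are of the Bland-Altman type (mean difference ± 1.96 standard deviations of the differences); a minimal sketch with invented walking-velocity pairs:

```python
import math

def limits_of_agreement(x, y):
    """Bland-Altman bias and 95% limits of agreement between two
    paired measurements of the same quantity."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical walking velocities (km/h) from two repeated tests
wv1 = [4.8, 5.4, 4.2, 4.8, 5.4, 4.8]
wv2 = [4.8, 4.8, 4.2, 5.4, 5.4, 4.8]
bias, lo, hi = limits_of_agreement(wv1, wv2)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```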
Kusano, Maggie; Caldwell, Curtis B
2014-07-01
A primary goal of nuclear medicine facility design is to keep public and worker radiation doses As Low As Reasonably Achievable (ALARA). To estimate dose and shielding requirements, one needs to know both the dose equivalent rate constants for soft tissue and barrier transmission factors (TFs) for all radionuclides of interest. Dose equivalent rate constants are most commonly calculated using published air kerma or exposure rate constants, while transmission factors are most commonly calculated using published tenth-value layers (TVLs). Values can be calculated more accurately using the radionuclide's photon emission spectrum and the physical properties of lead, concrete, and/or tissue at these energies. These calculations may be non-trivial due to the polyenergetic nature of the radionuclides used in nuclear medicine. In this paper, the effects of dose equivalent rate constant and transmission factor on nuclear medicine dose and shielding calculations are investigated, and new values based on up-to-date nuclear data and thresholds specific to nuclear medicine are proposed. To facilitate practical use, transmission curves were fitted to the three-parameter Archer equation. Finally, the results of this work were applied to the design of a sample nuclear medicine facility and compared to doses calculated using common methods to investigate the effects of these values on dose estimates and shielding decisions. Dose equivalent rate constants generally agreed well with those derived from the literature with the exception of those from NCRP 124. Depending on the situation, Archer fit TFs could be significantly more accurate than TVL-based TFs. These results were reflected in the sample shielding problem, with unshielded dose estimates agreeing well, with the exception of those based on NCRP 124, and Archer fit TFs providing a more accurate alternative to TVL TFs and a simpler alternative to full spectral-based calculations. The data provided by this paper should assist
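The three-parameter Archer equation mentioned above has the standard form B(x) = [(1 + β/α)·e^(αγx) − β/α]^(−1/γ) for broad-beam transmission through a shield of thickness x; the fit parameters below are hypothetical, not values from the paper:

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission B(x) through a shield of thickness x using the
    three-parameter Archer equation. alpha, beta, gamma depend on the
    radionuclide and shield material; the values used below are hypothetical."""
    ratio = beta / alpha
    return ((1.0 + ratio) * math.exp(alpha * gamma * x) - ratio) ** (-1.0 / gamma)

# Zero thickness transmits everything; transmission then falls monotonically.
print(round(archer_transmission(0.0, 1.5, -1.0, 0.8), 6))  # -> 1.0
print(archer_transmission(1.0, 1.5, -1.0, 0.8) < 1.0)      # -> True
```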
Smati, A.; Younsi, K.; Zeraibi, N.; Zemmour, N. [Universite de Boumerdes, Faculte des Hydrocarbures, Dept. Transport et Equipement, Boumerdes (Algeria)
2003-07-01
LNG plants are characterized by their relatively small number worldwide, the diversity of processes involved, and very high investment and operating costs. The fuel consumption of this type of facility (about 15%) may double in some cases when the frequency of unplanned and voluntary shutdowns is high. Improving the reliability of the LNG chain as a whole will therefore lead to a substantial decrease in energy costs. For repairable systems, availability is most often used as the reliability indicator. From a reliability point of view, the LNG chain must be treated as a single complex system. However, modeling complex systems, for reliability or otherwise, is always difficult owing to the large dimension of the space of phases. In this paper, a systemic approach is used to reduce the space of phases. Representing the subsystems by reliability diagrams permits easier calculation of the probabilities associated with every phase, and a bottom-up technique allows reconstitution of the global reliability model of the chain. In an environment characterized by scarce statistical data, a Bayesian estimation approach is used to define the failure and repair rates of the different pieces of equipment composing the LNG chain. Some results concerning the Algerian LNG chains Hassi R'mel-Skikda are furnished. (authors)
Basu, Asit P; Basu, Sujit K
1998-01-01
This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul
Reliability computation from reliability block diagrams
Chelson, P. O.; Eckstein, E. Y.
1975-01-01
A computer program computes system reliability for a very general class of reliability block diagrams. Four factors are considered in calculating the probability of system success: active block redundancy, standby block redundancy, partial redundancy, and the presence of equivalent blocks in the diagram.
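For the active-redundancy case, reliability block diagrams reduce to the usual series/parallel composition rules, which can be sketched as follows (block reliabilities are hypothetical; standby and partial redundancy need the fuller treatment the program provides):

```python
def series(blocks):
    """Series arrangement: the system works only if every block works."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(blocks):
    """Active parallel redundancy: the system works if at least one block works."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

# Two redundant pumps (reliability 0.9 each) in series with a 0.99 controller
print(round(series([parallel([0.9, 0.9]), 0.99]), 4))  # -> 0.9801
```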
Liu, Dapeng
2017-01-10
Reaction rate coefficients for the reaction of hydroxyl (OH) radicals with nine large branched alkanes (i.e., 2-methyl-3-ethyl-pentane, 2,3-dimethyl-pentane, 2,2,3-trimethylbutane, 2,2,3-trimethyl-pentane, 2,3,4-trimethyl-pentane, 3-ethyl-pentane, 2,2,3,4-tetramethyl-pentane, 2,2-dimethyl-3-ethyl-pentane, and 2,4-dimethyl-3-ethyl-pentane) are measured at high temperatures (900-1300 K) using a shock tube and narrow-line-width OH absorption diagnostic in the UV region. In addition, room-temperature measurements of six out of these nine rate coefficients are performed in a photolysis cell using high repetition laser-induced fluorescence of OH radicals. Our experimental results are combined with previous literature measurements to obtain three-parameter Arrhenius expressions valid over a wide temperature range (300-1300 K). The rate coefficients are analyzed using the next-nearest-neighbor (N-N-N) methodology to derive nine tertiary (T003, T012, T013, T022, T023, T111, T112, T113, and T122) site-specific rate coefficients for the abstraction of H atoms by OH radicals from branched alkanes. Derived Arrhenius expressions, valid over 950-1300 K, are given as (the subscripts denote the number of carbon atoms connected to the next-nearest-neighbor carbon): T003 = 1.80 × 10^-10 exp(-2971 K/T) cm^3 molecule^-1 s^-1; T012 = 9.36 × 10^-11 exp(-3024 K/T) cm^3 molecule^-1 s^-1; T013 = 4.40 × 10^-10 exp(-4162 K/T) cm^3 molecule^-1 s^-1; T022 = 1.47 × 10^-10 exp(-3587 K/T) cm^3 molecule^-1 s^-1; T023 = 6.06 × 10^-11 exp(-3010 K/T) cm^3 molecule^-1 s^-1; T111 = 3.98 × 10^-11 exp(-1617 K/T) cm^3 molecule^-1 s^-1; T112 = 9.08 × 10^-12 exp(-3661 K/T) cm^3 molecule^-1 s^-1; T113 = 6.74 × 10^-9 exp(-7547 K/T) cm^3 molecule^-1 s^-1; T122 = 3.47 × 10^-11 exp(-1802 K/T) cm^3 molecule^-1 s^-1.
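Each site-specific expression above is a two-parameter Arrhenius form with the activation term quoted directly in kelvin, so evaluating one at a temperature inside its stated validity range is direct:

```python
import math

def site_rate(A, theta, T):
    """Two-parameter Arrhenius form k = A * exp(-theta / T), with the
    activation 'energy' theta given in kelvin as in the abstract."""
    return A * math.exp(-theta / T)

# T111 site-specific coefficient at 1000 K (inside the 950-1300 K range)
k = site_rate(3.98e-11, 1617.0, 1000.0)
print(f"{k:.2e} cm3 molecule-1 s-1")  # -> 7.90e-12 cm3 molecule-1 s-1
```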
Spruck, K; Krantz, C; Novotný, O; Becker, A; Bernhardt, D; Grieser, M; Hahn, M; Repnow, R; Savin, D W; Wolf, A; Müller, A; Schippers, S
2014-01-01
We present new experimentally measured and theoretically calculated rate coefficients for the electron-ion recombination of W$^{18+}$([Kr] $4d^{10}$ $4f^{10}$) forming W$^{17+}$. At low electron-ion collision energies, the merged-beam rate coefficient is dominated by strong, mutually overlapping, recombination resonances. In the temperature range where the fractional abundance of W$^{18+}$ is expected to peak in a fusion plasma, the experimentally derived Maxwellian recombination rate coefficient is 5 to 10 times larger than that which is currently recommended for plasma modeling. The complexity of the atomic structure of the open-$4f$-system under study makes the theoretical calculations extremely demanding. Nevertheless, the results of new Breit-Wigner partitioned dielectronic recombination calculations agree reasonably well with the experimental findings. This also gives confidence in the ability of the theory to generate sufficiently accurate atomic data for the plasma modeling of other complex ions.
Gkionis, Konstantinos; Kruse, Holger; Šponer, Jiří
2016-04-12
Modern dispersion-corrected DFT methods have made it possible to perform reliable QM studies on complete nucleic acid (NA) building blocks having hundreds of atoms. Such calculations, although still limited to investigations of potential energy surfaces, enhance the portfolio of computational methods applicable to NAs and offer considerably more accurate intrinsic descriptions of NAs than standard MM. However, in practice such calculations are hampered by the use of implicit solvent environments and truncation of the systems. Conventional QM optimizations are spoiled by spurious intramolecular interactions and severe structural deformations. Here we compare two approaches designed to suppress such artifacts: partially restrained continuum solvent QM and explicit solvent QM/MM optimizations. We report geometry relaxations of a set of diverse double-quartet guanine quadruplex (GQ) DNA stems. Both methods provide neat structures without major artifacts. However, each one also has distinct weaknesses. In restrained optimizations, all errors in the target geometries (i.e., low-resolution X-ray and NMR structures) are transferred to the optimized geometries. In QM/MM, the initial solvent configuration causes some heterogeneity in the geometries. Nevertheless, both approaches represent a decisive step forward compared to conventional optimizations. We refine earlier computations that revealed sizable differences in the relative energies of GQ stems computed with AMBER MM and QM. We also explore the dependence of the QM/MM results on the applied computational protocol.
Development of wide-range constitutive equations for calculations of high-rate deformation of metals
Preston D.
2011-01-01
Full Text Available For the development of models of strength and compressibility of metals over a wide range of pressures (up to several megabars) and strain rates (~1 to 10^8 s^-1), the method of dynamic tests is used. Since direct measurement of strength is impossible under complicated intensive high-rate loading, a formal model is created first and then updated based on comparison with many experiments that are sensitive to shear strength. Elastic-plastic, visco-elastic-plastic and relaxation integral models have become the most commonly used. The basic unsolved problems in the simulation of high-rate deformation of metals are discussed in the paper.
Wang, Kaicun; Zhou, Chunlüe
2016-04-01
Global analyses of surface mean air temperature (Tm) are key datasets for climate change studies and provide fundamental evidence for global warming. However, the causes of regional contrasts in the warming rate revealed by such datasets, i.e., enhanced warming rates over the northern high latitudes and the "warming hole" over the central U.S., are still under debate. Here we show that these regional contrasts depend on the calculation method of Tm. Existing global analyses calculate Tm from daily minimum and maximum temperatures (T2). We found that T2 has a significant standard deviation error of 0.23 °C/decade in depicting the regional warming rate from 2000 to 2013, which can be reduced by two-thirds using Tm calculated from observations at four specific times (T4), which samples the diurnal cycle of land surface air temperature more frequently. From 1973 to 1997, compared with T4, T2 significantly underestimated the warming rate over the central U.S. and overestimated the warming rate over the northern high latitudes. The ratio of the warming rate over China to that over the U.S. is reduced from 2.3 by T2 to 1.4 by T4. This study shows that studies of regional warming can be substantially improved by using T4 instead of T2.
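The difference between the two definitions of Tm can be reproduced with a toy diurnal cycle (the functional form below is invented purely for illustration): the min/max average (T2) is biased relative to the true daily mean whenever the cycle is asymmetric, while averaging four fixed observation times (T4) tracks it more closely:

```python
import math

def diurnal_temp(hour):
    """Toy asymmetric diurnal temperature cycle in deg C (illustrative only)."""
    return (15.0
            + 8.0 * math.sin(math.pi * (hour - 6.0) / 12.0) ** 2
            - 2.0 * math.sin(2.0 * math.pi * hour / 24.0))

# "True" daily mean from dense sampling of the cycle
hours = [h / 10.0 for h in range(240)]
temps = [diurnal_temp(h) for h in hours]
true_mean = sum(temps) / len(temps)

# T2: average of the daily minimum and maximum
t2 = (min(temps) + max(temps)) / 2.0

# T4: average of four fixed synoptic observation times
t4 = sum(diurnal_temp(h) for h in (0.0, 6.0, 12.0, 18.0)) / 4.0

print(round(true_mean, 2), round(t2, 2), round(t4, 2))  # T2 is biased low here; T4 is not
```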
Lane, Kathleen Lynne; Kalberg, Jemma Robertson; Bruhn, Allison Leigh; Driscoll, Steven A.; Wehby, Joseph H.; Elliott, Stephen N.
2009-01-01
This study provides initial evidence for the reliability and structural validity of scores from the Primary Intervention Rating Scale (Lane, Robertson, & Wehby, 2002), an adapted version of the Intervention Rating Profile-15 (Witt & Elliott, 1985) designed to assess faculty's perceptions of social validity of primary prevention plans prior…
Hot electron mediated desorption rates calculated from excited state potential energy surfaces
Olsen, Thomas; Schiøtz, Jakob
2008-01-01
We present a model for Desorption Induced by (Multiple) Electronic Transitions (DIET/DIMET) based on potential energy surfaces calculated with the Delta Self-Consistent Field extension of Density Functional Theory. We calculate potential energy surfaces of CO and NO molecules adsorbed on various transition metal surfaces, and show that classical nuclear dynamics does not suffice for propagation in the excited state. We present a simple Hamiltonian describing the system, with parameters obtained from the excited state potential energy surface, and show that this model can describe desorption dynamics in both the DIET and DIMET regimes, and reproduce the power law behavior observed experimentally. We observe that the internal stretch degree of freedom in the molecules is crucial for the energy transfer between the hot electrons and the molecule when the coupling to the surface is strong.
Calculation of expected rates of fisheries‐induced evolution in data‐poor situations
Andersen, Ken Haste
for modeling the demography of fish based on size-based prescriptions of natural mortality, growth, and fishing is presented. Life history theory is used to reduce the necessary parameter set by utilizing relations between parameters, making the framework particularly well suited for data-poor situations where...... analysis of the parameter values is performed, and calculations of how different fishing patterns influence the results are presented....
Phillips, R.L.; London, E.D.; Links, J.M.; Cascella, N.G. (NIDA Addiction Research Center, Baltimore, MD (USA))
1990-12-01
A program was developed to align positron emission tomography images from multiple studies on the same subject. The program allowed alignment of two images with a fineness of one-tenth the width of a pixel. The indications and effects of misalignment were assessed in eight subjects from a placebo-controlled double-blind crossover study on the effects of cocaine on regional cerebral metabolic rates for glucose. Visual examination of a difference image provided a sensitive and accurate tool for assessing image alignment. Image alignment within 2.8 mm was essential to reduce variability of measured cerebral metabolic rates for glucose. Misalignment by this amount introduced errors on the order of 20% in the computed metabolic rate for glucose. These errors propagate to the difference between metabolic rates for a subject measured in basal versus perturbed states.
Quantum Tunneling Rates of Gas-Phase Reactions from On-the-Fly Instanton Calculations.
Beyer, Adrian N; Richardson, Jeremy O; Knowles, Peter J; Rommel, Judith; Althorpe, Stuart C
2016-11-03
The instanton method obtains approximate tunneling rates from the minimum-action path (known as the instanton) linking reactants to the products at a given temperature. An efficient way to find the instanton is to search for saddle-points on the ring-polymer potential surface, which is obtained by expressing the quantum Boltzmann operator as a discrete path-integral. Here we report a practical implementation of this ring-polymer form of instanton theory into the Molpro electronic-structure package, which allows the rates to be computed on-the-fly, without the need for a fitted analytic potential-energy surface. As a test case, we compute tunneling rates for the benchmark H + CH4 reaction, showing how the efficiency of the instanton method allows the user systematically to converge the tunneling rate with respect to the level of electronic-structure theory.
45 CFR 261.36 - Do welfare reform waivers affect the calculation of a State's participation rates?
2010-10-01
45 CFR 261.36 (Title 45, Public Welfare, Vol. 2, revised as of 2010-10-01): Do welfare reform waivers affect the calculation of a State's participation rates? Section 261.36, Regulations Relating to Public Welfare, Office of Family Assistance (Assistance Programs), Administration for Children and...
CALCULATION STUDIES OF SPATIAL DISTRIBUTION OF THE ABSORBED DOSE RATE FOR VARIOUS SEEDS
N. A. Nerozin
2015-01-01
Full Text Available Purpose. To conduct computational studies of the dosimetric characteristics of microsources containing the radionuclide I-125, pilot production of which has been established at the research and production complex for isotopes and radiopharmaceuticals, JSC "State Scientific Centre of the Russian Federation — Institute for Physics and Power Engineering named after A. I. Leypunsky" (SSC RF IPPE). The IPPE sources are similar to model 6711 of the company Nicomed Amersham, whose dosimetric characteristics are standardized in accordance with the TG43 AAPM formalism. Materials and methods. The microsource «SEED No. 6711» (model of the company Nicomed Amersham) is a silver rod covered with a thin layer of radioactive I-125, hermetically sealed in a titanium capsule. The half-life of iodine-125 is 59.43 days; in the process of decay, I-125 is converted into Te-125. The parameters of the microsources were calculated and compared with the standard model 6711 using the MCNP computer code. Results. A method for calculating the basic dosimetric characteristics of the SSC RF IPPE microsource in accordance with the TG43 formalism was developed. A comparative analysis of the experimental data and the MCNP calculation results was performed, which made it possible to identify likely reasons for the differences. The calculated dose characteristics were compared with the recommended standard data for the microsource «SEED No. 6711». Conclusions. There are two possible reasons for the differences between the experimental and calculated results. The first may be the roughness of the surface of the silver rod or diffusion of radioactive iodine into the silver. The second may be differences in the cross sections for the characteristic radiation of silver used in the MCNP code. In the comparison of the calculated dose characteristics with the recommended standard, the applied activity plays a very important role. In compliance with the standard
Lazzaroni, Massimo
2012-01-01
This book gives a practical guide for designers and users in the Information and Communication Technology (ICT) context. In particular, in the first section, definitions of the fundamental terms according to the international standards are given. Then some theoretical concepts and reliability models are presented in Chapters 2 and 3, with the aim of evaluating performance and reliability growth for components and systems. Chapter 4 introduces laboratory tests, highlighting the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be
Dmitrijus Styra
2011-04-01
Full Text Available Equivalent dose rate measurements were carried out on the Baltic Sea coast near Juodkrantė. The measurements were performed at ground level and 1 m above it at 63 points within a territory of 2.0 × 0.2 km on 2 July 2008 and 10 July 2008, under conditions of northern and southern wind directions respectively. The extreme values of the equivalent dose rate were 51 and 90 nSv/h respectively, which means that the structure of the equivalent dose rate field was inhomogeneous. The method of optimal interpolation was used to calculate and evaluate the structure of the equivalent dose rate field. This method was applied in three cases, using 63, 33 and 18 measurement points, and the structures of the equivalent dose rate field obtained in each case were essentially identical. Using 18 measurement points, the coincidence between measured and calculated values of the equivalent dose rate was satisfactory: the difference does not exceed 15% in 80% of the measurement points. Article in Lithuanian.
Quantum three-body calculation of the nonresonant triple-α reaction rate at low temperatures
Ogata, Kazuyuki; Kamimura, Masayasu
2009-01-01
The triple-α reaction rate is re-evaluated by directly solving the three-body Schrödinger equation. The resonant and nonresonant processes are treated on the same footing using the continuum-discretized coupled-channels method for three-body scattering. Accurate description of the α-α nonresonant states significantly quenches the Coulomb barrier between the two α particles and the third α particle. Consequently, the α-α nonresonant continuum states below the resonance at 92.04 keV, i.e., the ground state of 8Be, give a markedly larger contribution at low temperatures than in previous studies. We find about 20 orders of magnitude enhancement of the triple-α reaction rate around 10^7 K compared to the rate of the NACRE compilation.
Preliminary Calculations of Shutdown Dose Rate for the CTS Diagnostics System
Klinkby, Esben Bryndt; Nonbøl, Erik; Lauritzen, Bent
2015-01-01
DTU and IST are partners in the design of a Collective Thomson Scattering (CTS) diagnostic for ITER through a contract with F4E. The CTS diagnostic utilizes probing radiation of ~60 GHz emitted into the plasma and, using a mirror, collects the scattered radiation with an array of receivers. Having a direct and unshielded view of the plasma, the first mirror will be subject to significant radiation, and among the first tasks in the CTS design is to determine whether the mirror will need active cooling. At present the CTS is in the conceptual design phase and the related neutronics calculations focus...
Biological shielding assessment and dose rate calculation for a neutron inspection portal
Donzella, A.; Bonomi, G.; Giroletti, E.; Zenoni, A.
2012-04-01
With reference to the prototype of neutron inspection portal built and successfully tested in the Rijeka seaport (Croatia) within the EURITRACK (EURopean Illicit Trafficking Countermeasures Kit) project, an assessment of the biological shielding in different set-up configurations of a future portal has been calculated with MCNP Monte Carlo code in the frame of the Eritr@C (European Riposte against Illicit TR@ffiCking) project. In the configurations analyzed the compliance with the dose limits for workers and the population stated by the European legislation is provided by appropriate shielding of the neutron sources and by the delimitation of a controlled area.
Variational RRKM calculation of the thermal rate constant for the C–H bond fission reaction of nitromethane
Afshin Taghva Manesh
2017-02-01
Full Text Available The present work provides quantitative results for the rate constants of unimolecular C–H bond fission reactions in nitromethane at elevated temperatures up to 2000 K. There are three different hydrogen atoms in nitromethane. The potential energy surface for each C–H bond fission reaction of nitromethane was investigated by ab initio calculations. The geometries and vibrational frequencies of the species involved in this process were optimized at the MP2 level of theory, using the cc-pVDZ basis set. Since the C–H bond fission channel is a barrierless reaction, we used variational RRKM theory to predict rate coefficients. From the rate coefficients calculated at different temperatures, the Arrhenius expression of the channel over the temperature range of 100–2000 K is k(T) = 5.9E19·exp(−56274.6/T).
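The fitted Arrhenius expression above can be evaluated directly. A minimal sketch, with temperatures chosen purely for illustration:

```python
import math

def rate_constant(T):
    """Arrhenius fit from the abstract: k(T) = 5.9e19 * exp(-56274.6 / T)."""
    return 5.9e19 * math.exp(-56274.6 / T)

# rate rises steeply with temperature, as expected for a barrierless fission
for T in (1000.0, 1500.0, 2000.0):
    print(f"T = {T:6.0f} K   k = {rate_constant(T):.3e} 1/s")
```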
Yoshida, Yutaka; Yokoyama, Kiyoko; Ishii, Naohiro
Many reports show that the dominant frequency of the high frequency component (HF: 0.15–0.4 Hz) of the heart rate time series is synchronized with the respiratory frequency. In this paper, we propose a method for continuously estimating the condition of respiration using the dominant frequency and power of the HF component (HFP) of the heart rate time series. The dominant frequency and HFP are calculated from the interval between two neighboring extreme points and the difference between two neighboring extremal values. In the experimental results, the high frequency components did not disappear completely during breath-holding; this finding differs from the previous study. Subjects were classified into two groups. In one group, the dominant frequency of the HF component significantly increased during breath-holding compared with normal breathing; in the other group, this phenomenon was not observed. On the other hand, the HFP of all subjects significantly decreased during breath-holding compared with normal breathing. The correct rate during breath-holding and the error rate during rest and recovery were calculated using HFP. The average and S.D. of the correct rate during breath-holding were 65.0±26.3%; the correct rate of 18 subjects was 80.0±14.1% and that of the other 8 subjects was 31.5±11.9%. Our method is expected to apply to the development of a respiratory monitor that can measure respiratory condition continuously and without restraint.
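The extreme-point estimate described above can be sketched as follows. The paper's data and implementation are not given, so the sampling rate, synthetic signal, and function names below are my own assumptions; the idea is only that one max-to-min interval is half a period and the max-min difference approximates peak-to-peak amplitude:

```python
import math

def extrema_indices(x):
    """Indices of local maxima and minima of a sampled signal."""
    return [i for i in range(1, len(x) - 1)
            if (x[i] - x[i-1]) * (x[i+1] - x[i]) < 0]

def hf_freq_and_amp(x, fs):
    """Instantaneous frequency and amplitude of an oscillatory component,
    estimated from neighboring extreme points: the spacing of two extrema
    is half a period, and half their difference is the amplitude."""
    idx = extrema_indices(x)
    freqs = [fs / (2.0 * (j - i)) for i, j in zip(idx, idx[1:])]
    amps = [abs(x[j] - x[i]) / 2.0 for i, j in zip(idx, idx[1:])]
    return freqs, amps

# synthetic HF component: a 0.3 Hz oscillation sampled at 4 Hz
fs = 4.0
x = [math.sin(2 * math.pi * 0.3 * n / fs) for n in range(200)]
freqs, amps = hf_freq_and_amp(x, fs)
print(f"median estimated frequency ~ {sorted(freqs)[len(freqs)//2]:.2f} Hz")
```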
Small groups, large profits: Calculating interest rates in community-managed microfinance
Rasmussen, Ole Dahl
2012-01-01
Savings groups are a widely used strategy for women’s economic resilience – over 80% of members worldwide are women, and in the case described here, 72.5%. In these savings groups it is common to see the interest rate on savings reported as "20-30% annually". Using panel data from 204 groups in M...
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
LA_BASELINE - Offshore Baseline for Louisiana Generated to Calculate Shoreline Change Rates
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
AL_BASELINE - Offshore Baseline for Alabama Generated to Calculate Shoreline Change Rates
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
TX_BASELINE - Offshore Baseline for Texas Generated to Calculate Shoreline Change Rates
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
FL_BASELINE - Offshore Baseline for Florida Generated to Calculate Shoreline Change Rates
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
Should Thermostatted Ring Polymer Molecular Dynamics be used to calculate reaction rates?
Hele, Timothy J H
2015-01-01
We apply Thermostatted Ring Polymer Molecular Dynamics (TRPMD), a recently-proposed approximate quantum dynamics method, to the computation of thermal reaction rates. Its short-time Transition-State Theory (TST) limit is identical to rigorous Quantum Transition-State Theory, and we find that its long-time limit is independent of the location of the dividing surface. TRPMD rate theory is then applied to one-dimensional model systems, the atom-diatom bimolecular reactions H+H$_2$, D+MuH and F+H$_2$, and the prototypical polyatomic reaction H+CH$_4$. Above the crossover temperature, the TRPMD rate is virtually invariant to the strength of the friction applied to the internal ring-polymer normal modes, and beneath the crossover temperature the TRPMD rate generally decreases with increasing friction, in agreement with the predictions of Kramers theory. We therefore find that TRPMD is less accurate than Ring Polymer Molecular Dynamics (RPMD) for symmetric reactions, and in certain asymmetric systems closer to the q...
Should thermostatted ring polymer molecular dynamics be used to calculate thermal reaction rates?
Hele, Timothy J. H., E-mail: tjhh2@cam.ac.uk [Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW (United Kingdom); Suleimanov, Yury V. [Computation-based Science and Technology Research Center, Cyprus Institute, 20 Kavafi St., Nicosia 2121 (Cyprus); Department of Chemical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, Massachusetts 02139 (United States)
2015-08-21
We apply Thermostatted Ring Polymer Molecular Dynamics (TRPMD), a recently proposed approximate quantum dynamics method, to the computation of thermal reaction rates. Its short-time transition-state theory limit is identical to rigorous quantum transition-state theory, and we find that its long-time limit is independent of the location of the dividing surface. TRPMD rate theory is then applied to one-dimensional model systems, the atom-diatom bimolecular reactions H + H{sub 2}, D + MuH, and F + H{sub 2}, and the prototypical polyatomic reaction H + CH{sub 4}. Above the crossover temperature, the TRPMD rate is virtually invariant to the strength of the friction applied to the internal ring-polymer normal modes, and beneath the crossover temperature the TRPMD rate generally decreases with increasing friction, in agreement with the predictions of Kramers theory. We therefore find that TRPMD is approximately equal to, or less accurate than, ring polymer molecular dynamics for symmetric reactions, and for certain asymmetric systems and friction parameters closer to the quantum result, providing a basis for further assessment of the accuracy of this method.
MS_BASELINE - Offshore Baseline for Mississippi Generated to Calculate Shoreline Change Rates
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS using the Digital Shoreline Analysis System (DSAS) version 3.0, an ArcGIS extension for...
Concept for calculating dose rates from activated groundwater at accelerator sites
Prolingheuer, N; Vanderborght, J; Schlögl, B; Nabbi, R; Moormann, R
Licensing of particle accelerators requires proof that the groundwater outside the site will not be significantly contaminated by activation products formed below the accelerator and target. In order to reduce the effort required for this proof, a site-independent, simplified but conservative method is under development. The conventional approach for calculating the activation of soil and groundwater is briefly described using the example of a site close to Forschungszentrum Juelich, Germany. Additionally, an updated overview of a data library of partition coefficients for relevant nuclides transported in the aquifer at the site is presented. The approximate model for nuclide transport with groundwater is described, including exemplary results for nuclide concentrations outside the site boundary and the resulting effective doses. Further applications and developments are finally outlined.
LIN Sheng-Lu; ZHOU Hui; XU Xue-You; JIA Zheng-Mao; DENG Shan-Hong
2008-01-01
By using a semiclassical method, we present theoretical computations of the ionization rate of Rydberg lithium atoms in parallel electric and magnetic fields at different scaled energies above the classical saddle point. The resulting irregular pulse trains of the escaping electrons are recorded as a function of emission time, which allows them to be related to the recurrence periods of the photoabsorption. This illustrates the dynamical mechanism by which the electron pulses are stochastically generated. Comparing our computations with previous results, we deduce that the complicated chaos under consideration consists of two kinds of self-similar fractal structures, corresponding to the contributions of the applied magnetic field and of core scattering events. Furthermore, the effect of the magnetic field plays the major role in the profile of the autoionization rate curves, while the contribution of core scattering is critical for specifying the positions of the pulse peaks.
Change Rate Control of Photovoltaic Generation Output and Calculation of Necessary Capacitance
Satoh, Hiroyuki; Takayama, Satoshi; Nakamura, Koichi; Kakimoto, Naoto
The photovoltaic (PV) generator changes its power output with the weather. If the PV output changes fast, the power system may require more load-following capability and spinning-reserve. This paper proposes a method of controlling the change rate of the PV output. The PV generator is combined with an electric double layer capacitor (EDLC). The moving average is used to eliminate short period fluctuations of the PV output. The output of the power conversion system (PCS) is determined by the moving average. The output changes within a limited rate. The capacitor voltage is maintained at a constant value to make the capacitor as small as possible. The necessary capacitance is theoretically derived. The effectiveness of this method is verified by the experiment.
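The moving-average smoothing described above can be sketched in a few lines; the window length, power values, and function names below are illustrative, not taken from the paper. The EDLC absorbs the difference between the raw PV output and the smoothed PCS reference:

```python
from collections import deque

def smooth_pv(pv_output, window):
    """PCS reference = moving average of recent PV output samples; the
    capacitor absorbs (+) or supplies (-) the raw-minus-reference power."""
    buf = deque(maxlen=window)
    ref, edlc = [], []
    for p in pv_output:
        buf.append(p)
        avg = sum(buf) / len(buf)
        ref.append(avg)
        edlc.append(p - avg)
    return ref, edlc

# a step drop in irradiance: raw output falls instantly, reference ramps down
pv = [100.0] * 10 + [40.0] * 10
ref, edlc = smooth_pv(pv, window=5)
print([round(r, 1) for r in ref[8:15]])
```

A longer window limits the output change rate further but requires a larger capacitance, which is the trade-off the paper quantifies.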
I$_{2}$ molecule for neutrino mass spectroscopy: ab initio calculation of spectral rate
Tashiro, Motomichi; Kuma, Susumu; Miyamoto, Yuki; Sasao, Noboru; Uetake, Satoshi; Yoshimura, Motohiko
2014-01-01
It has recently been argued that atoms and molecules may become good targets for determining neutrino parameters that are still undetermined, if the atomic/molecular process is enhanced by a new kind of coherence. We compute the photon energy spectrum rate arising from coherent radiative neutrino pair emission from metastable excited states of I$_2$ and its iso-valent molecules, $|Av \rangle \rightarrow |Xv' \rangle + \gamma + \
Statistical properties of the maximum Lyapunov exponent calculated via the divergence rate method.
Franchi, Matteo; Ricci, Leonardo
2014-12-01
The embedding of a time series provides a basic tool to analyze dynamical properties of the underlying chaotic system. To this purpose, the choice of the embedding dimension and lag is crucial. Although several methods have been devised to tackle the issue of the optimal setting of these parameters, a conclusive criterion to make the most appropriate choice is still lacking. An accepted procedure to rank different embedding methods relies on the evaluation of the maximum Lyapunov exponent (MLE) out of embedded time series that are generated by chaotic systems with explicit analytic representation. The MLE is evaluated as the local divergence rate of nearby trajectories. Given a system, embedding methods are ranked according to how close such MLE values are to the true MLE. This is provided by the so-called standard method in a way that exploits the mathematical description of the system and does not require embedding. In this paper we study the dependence of the finite-time MLE evaluated via the divergence rate method on the embedding dimension and lag in the case of time series generated by four systems that are widely used as references in the scientific literature. We develop a completely automatic algorithm that provides the divergence rate and its statistical uncertainty. We show that the uncertainty can provide useful information about the optimal choice of the embedding parameters. In addition, our approach allows us to find which systems provide suitable benchmarks for the comparison and ranking of different embedding methods.
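The divergence-rate estimate of the MLE can be illustrated on a system with a known exponent. The sketch below uses the logistic map at r = 4, whose true MLE is ln 2, and skips the embedding step since the trajectory is generated directly; all names and parameter values are my own:

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

def mle_divergence(x0, delta=1e-9, steps=200):
    """Finite-time MLE: average exponential separation rate of two initially
    nearby trajectories, with the separation renormalized at every step."""
    a, b = x0, x0 + delta
    total = 0.0
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        d = abs(b - a)
        total += math.log(d / delta)
        b = a + delta * (1 if b >= a else -1)  # reset separation to delta
    return total / steps

print(f"estimated MLE ~ {mle_divergence(0.2):.3f}, true value ln 2 = {math.log(2):.3f}")
```

The finite-time estimate fluctuates around ln 2; the statistical uncertainty of exactly this kind of estimate is what the paper turns into a criterion for choosing embedding parameters.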
Sotiropoulou, Maria; Florou, Heleny; Manolopoulou, Metaxia
2016-06-01
In the present study, the radioactivity levels to which terrestrial non-human biota were exposed are examined. Organisms (grass and herbivorous mammals) and abiotic components (soil) were collected during the period 2010 to 2014 from grasslands where sheep and goats were free-range grazing. Natural background radionuclides ((226)Ra, (228)Ra, (228)Th) and artificial radionuclides ((137)Cs, (134)Cs, (131)I) were detected in the collected samples using gamma spectrometry. The measured activity concentrations and site-specific data of the studied organisms were imported into the ERICA Assessment Tool (version 1.2.0) in order to provide insight into the radiological dose rates. The highest activity concentrations were detected in samples collected from Lesvos island and the lowest in samples collected from the Attiki and Etoloakarnania prefectures. The highest contribution to the total dose rate clearly derived from internal exposure and is closely related to exposure to alpha emitters of the natural background ((226)Ra and (228)Th). The Fukushima-derived traces of (137)Cs, (134)Cs, and (131)I, along with the residual (137)Cs, contributed rather little to the total dose rate. The obtained results may strengthen the adaptation of software tools to a wider range of ecosystems and may prove useful in further research regarding the possible impact of protracted low-level ionizing radiation on non-human biota. Such studies may contribute to the effective incorporation of dosimetry tools in the development of integrated environmental and radiological impact assessment policies.
HU, T.A.
2003-09-30
Flammable gases such as hydrogen, ammonia, and methane are observed in the tank dome space of the Hanford Site high-level waste tanks. This report assesses the steady-state flammability level under normal and off-normal ventilation conditions in the tank dome space for 177 double-shell tanks and single-shell tanks at the Hanford Site. The steady-state flammability level was estimated from the gas concentration of the mixture in the dome space using estimated gas release rates, Le Chatelier's rule and lower flammability limits of fuels in an air mixture. A time-dependent equation of gas concentration, which is a function of the gas release and ventilation rates in the dome space, has been developed for both soluble and insoluble gases. With this dynamic model, the time required to reach the specified flammability level at a given ventilation condition can be calculated. In the evaluation, hydrogen generation rates can be calculated for a given tank waste composition and its physical condition (e.g., waste density, waste volume, temperature, etc.) using the empirical rate equation model provided in Empirical Rate Equation Model and Rate Calculations of Hydrogen Generation for Hanford Tank Waste, HNF-3851. The release rate of other insoluble gases and the mass transport properties of the soluble gas can be derived from the observed steady-state gas concentration under normal ventilation conditions. The off-normal ventilation rate is assumed to be natural barometric breathing only. A large body of data is required to do both the hydrogen generation rate calculation and the flammability level evaluation. For tank waste that does not have sample-based data, a statistical-based value from probability distribution regression was used based on data from tanks belonging to a similar waste group. This report (Revision 3) updates the input data of hydrogen generation rates calculation for 177 tanks using the waste composition information in the Best-Basis Inventory Detail
Dodson, Christopher M
2012-01-01
Given growing interest in optical-frequency magnetic dipole transitions, we use intermediate coupling calculations to identify strong magnetic dipole emission lines that are well suited for experimental study. The energy levels for all trivalent lanthanide ions in the 4f^n configuration are calculated using a detailed free ion Hamiltonian, including electrostatic and spin-orbit terms as well as two-body, three-body, spin-spin, spin-other-orbit, and electrostatically correlated spin-orbit interactions. These free ion energy levels and eigenstates are then used to calculate the oscillator strengths for all ground-state magnetic dipole absorption lines and the spontaneous emission rates for all magnetic dipole emission lines, including transitions between excited states. A large number of strong magnetic dipole transitions are predicted throughout the visible and near-infrared spectrum, including many at longer wavelengths that would be ideal for experimental investigation of magnetic light-matter interactions wit...
Kim, T. W.; Yarnell, S. M.; Yager, E.; Leidman, S. Z.
2015-12-01
Caspar Creek is a gravel-bedded stream located in the Jackson Demonstration State Forest in the coast range of California. The Caspar Creek Experimental Watershed has been actively monitored and studied by the Pacific Southwest Research Station and the California Department of Forestry and Fire Protection for over five decades. Although total annual sediment yield has been monitored through time, sediment transport during individual storm events is less certain. At a study site on North Fork Caspar Creek, cross-section averaged sediment flux was measured throughout two storm events in December 2014 and February 2015 to determine whether two commonly used sediment transport equations (Meyer-Peter-Müller and Wilcock) approximated observed bedload transport. Cross-section averaged bedload samples were collected approximately every hour during each storm event using a Helley-Smith bedload sampler. Five-minute composite samples were collected at five equally spaced locations along a cross-section and then sieved to half-phi sizes to determine the grain size distribution. The measured sediment flux values varied widely throughout the storm hydrographs and were consistently lower than the calculated values, by up to two orders of magnitude. Armored bed conditions, changing hydraulic conditions during each storm, and variable sediment supply may have contributed to the observed differences.
Owlia, P; Vasei, M; Goliaei, B; Nassiri, I
2011-04-01
Interest in the journal impact factor (JIF) within scientific communities has grown over the last decades. JIFs are used to evaluate the quality of journals and of the papers published therein. The JIF is a discipline-specific measure, and comparison between the JIFs of different disciplines is inadequate unless a normalization process is performed. In this study, the normalized impact factor (NIF) is introduced as a relatively simple method enabling JIFs to be used when evaluating the quality of journals and research works in different disciplines. The NIF index was established based on the multiplication of the JIF by a constant factor. The constants were calculated for all 54 disciplines of the biomedical field for the years 2005 through 2009. Also, rankings of 393 journals in different biomedical disciplines according to the NIF and the JIF were compared to illustrate how the NIF index can be used for the evaluation of publications in different disciplines. The findings show that the use of the NIF enhances equality in assessing the quality of research works produced by researchers who work in different disciplines.
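The abstract states only that the NIF is the JIF multiplied by a discipline-specific constant, without specifying how the constants were derived. One plausible sketch, under the assumption that each discipline's constant scales its mean JIF to a common reference value; the discipline names and JIF values are invented:

```python
def nif_constants(jifs_by_discipline, reference=1.0):
    """Constant per discipline chosen so the discipline's mean JIF maps to a
    common reference value. (How the paper derived its 54 constants is not
    stated; this scaling is an assumption for illustration.)"""
    return {d: reference * len(js) / sum(js)
            for d, js in jifs_by_discipline.items()}

def nif(jif, discipline, constants):
    """Normalized impact factor: JIF times the discipline's constant."""
    return jif * constants[discipline]

jifs = {"microbiology": [2.0, 4.0, 6.0], "oncology": [5.0, 10.0, 15.0]}
k = nif_constants(jifs)
# a journal at its discipline's mean JIF gets the same NIF in both fields
print(nif(4.0, "microbiology", k), nif(10.0, "oncology", k))
```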
Urbina-Villalba, German; García-Sucre, Máximo; Toro-Mendoza, Jhoan
2003-12-01
In order to account for the hydrodynamic interaction (HI) between suspended particles in an average way, Honig et al. [J. Colloid Interface Sci. 36, 97 (1971)] and more recently Heyes [Mol. Phys. 87, 287 (1996)] proposed different analytical forms for the diffusion constant. While the formalism of Honig et al. strictly applies to a binary collision, the one from Heyes accounts for the dependence of the diffusion constant on the local concentration of particles. However, the analytical expression of the latter approach is more complex and depends on the particular characteristics of each system. Here we report a combined methodology, which incorporates the formula of Honig et al. at very short distances and a simple local volume-fraction correction at longer separations. As will be shown, the flocculation behavior calculated from Brownian dynamics simulations employing the present technique, is found to be similar to that of Batchelor’s tensor [J. Fluid. Mech. 74, 1 (1976); 119, 379 (1982)]. However, it corrects the anomalous coalescence found in concentrated systems as a result of the overestimation of many-body HI.
Introducing a Quantitative Method to Calculate the Rate of Primary Infertility
MM Akhondi
2012-12-01
Full Text Available Background: In previous studies, the rate of primary infertility has been reported differently; the main reasons appear to relate to differing methods of data collection and analysis. Therefore, introducing a precise method to determine the infertile couples and the population exposed to the risk of infertility is an important issue in studying primary infertility. Methods: The proposed methodology for assessing the primary infertility rate was designed and applied by Avicenna Research Institute in a national survey. Sampling was conducted using the probability-proportional-to-size cluster method. In this survey, after reviewing former studies, the reproductive history was used as the basis for data collection, and every reproductive event was recorded with a code and a date in the questionnaire. To obtain a precise method, all possible events were considered thoroughly, and for each situation it was determined whether the case should be counted in the numerator or the denominator, or eliminated from the study. In situations where a correct diagnosis of infertility was not possible, a sensitivity analysis was recommended to examine the variability of results under different scenarios. Conclusion: The proposed methodology can precisely define the infertile women and the population exposed to the risk of infertility, and is thus more accurate than other available data collection strategies. To avoid bias and ensure a consistent methodology, using this method is recommended in future prevalence studies.
Introducing a Quantitative Method to Calculate the Rate of Primary Infertility
Akhondi, MM; Kamali, K; Ranjbar, F; Shafeghati, S; Ardakani, Z Behjati; Shirzad, M; Eslamifar, M; Mohammad, K; Parsaeian, M
2012-01-01
Background In the previous studies, the rate of primary infertility was reported differently. It seems the main reasons are related to the different methods of data collection and information analysis. Therefore, introducing a precise method to determine the infertile couples and the population exposed to the risk of infertility is an important issue to study primary infertility. Methods: The proposed methodology for assessing primary infertility rate has been designed and applied by Avicenna Research Institute in a national survey. Sampling was conducted based on probability proportional to size cluster method. In this survey, after reviewing the former studies, the reproductive history was used as a basis for data collection. Every reproductive event was recorded with a code and a date in the questionnaire. To introduce a precise method, all possible events were considered thoroughly and for each situation, it was determined whether these cases should be considered in numerator, denominator or it should be eliminated from the study. Also in some situations where the correct diagnosis of infertility was not possible, a sensitivity analysis was recommended to see the variability of results under different scenarios. Conclusion: The proposed methodology can precisely define the infertile women and the population exposed to the risk of infertility. So, this method is more accurate than other available data collection strategies. To avoid bias and make a consistent methodology, using this method is recommended in future prevalence studies. PMID:23641391
Ayman A. El-Abnoudy⇑; Sayed F. Hassan
2016-01-01
Potential alpha emitters are of prime concern to the ventilation engineer because their concentration increases rapidly once radon is released into the mine atmosphere, causing tissue irradiation and lung cancer. Studying the time-based variations of natural ventilation in tunnels and their relationship to external parameters contributes to the assessment of air circulation. Because the meteorological conditions affecting air circulation and intensity through the underground workings fluctuate continuously and strongly, natural ventilation is difficult to assess from ordinary meteorological measurements alone. In this paper, therefore, the possibility of using radioactive measurements, which allow air aging and ventilation quality to be assessed, is investigated in three different underground structures. Relative to the most confined of these structures, the results show that one structure has a better air-exchange rate by a factor of 1.8 and the other has the best rate, by a factor of 2.1. This parameter can be linked to the operating costs and size of a future ventilation system.
Lim, J. T.; Raper, C. D. Jr; Gold, H. J.; Wilkerson, G. G.; Raper CD, J. r. (Principal Investigator)
1989-01-01
A simple mathematical model for calculating the concentration of mobile carbon skeletons in the shoot of soya bean plants [Glycine max (L.) Merrill cv. Ransom] was built to examine the suitability of measured net photosynthetic rates (PN) for calculation of saccharide flux into the plant. The results suggest that either measurement of instantaneous PN overestimated saccharide influx or respiration rates utilized in the model were underestimated. If neither of these is the case, end-product inhibition of photosynthesis or waste respiration through the alternative pathway should be included in modelling of CH2O influx or efflux; and even if either of these is the case, the model output at a low coefficient of leaf activity indicates that PN still may be controlled by either end-product inhibition or alternative respiration.
Savukov, I.; Safronova, U. I.; Safronova, M. S.
2015-11-01
Excitation energies, term designations, g factors, transition rates, and lifetimes of U²⁺ are determined using a relativistic configuration interaction (CI) + linearized-coupled-cluster (LCC) approach. The CI-LCC energies are compared with CI + many-body-perturbation-theory (MBPT) and available experimental energies. Close agreement has been found with experiment, within hundreds of cm⁻¹. In addition, lifetimes of higher levels have been calculated for comparison with three experimentally measured lifetimes, and close agreement has been found within the experimental error. CI-LCC calculations constitute a benchmark test of the CI + all-order method in complex relativistic systems such as actinides and their ions with many valence electrons. The theory yields many energy levels, g factors, transition rates, and lifetimes of U²⁺ that are not available from experiment. The theory can be applied to other multivalence atoms and ions, which would be of interest to many applications.
Stephen Carstens
2008-11-01
Full Text Available Companies tend to outsource transport to fleet management companies to increase efficiencies when transport is a non-core activity. The provision of fleet management services on contract introduces a certain amount of financial risk to the fleet management company, specifically through fixed-rate maintenance contracts. The quoted rate needs to be sufficient and also competitive in the market. Currently, quoted maintenance rates are based on the maintenance specifications of the manufacturer and the risk management approach of the fleet management company, usually reflected in a contingency that is included in the quoted maintenance rate. An alternative methodology for calculating the average maintenance cost for a vehicle fleet is proposed, based on the actual maintenance expenditures of the vehicles and accepted statistical techniques. The proposed methodology results in accurate estimates (with associated confidence limits) of the true average maintenance cost and can be used as a basis for the maintenance quote.
On the symmetric α-stable distribution with application to symbol error rate calculations
Soury, Hamza
2016-12-24
The probability density function (PDF) of the symmetric α-stable distribution is investigated using the inverse Fourier transform of its characteristic function. For general values of the stable parameter α, it is shown that the PDF and the cumulative distribution function of the symmetric stable distribution can be expressed in closed form in terms of the Fox H function. As an application, the probability of error of single-input single-output communication systems using different modulation schemes with an α-stable perturbation is studied. In more detail, a generic formula is derived for a generalized fading distribution, such as the extended generalized-k distribution. Simpler expressions of these error rates are then deduced for some selected special cases, and compact approximations are derived using asymptotic expansions.
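The inversion step described in this abstract can be sketched numerically. The following is a minimal illustration (not the paper's closed-form Fox H-function result): it recovers the standard symmetric α-stable PDF from its characteristic function exp(-|t|^α) via f(x) = (1/π) ∫₀^∞ exp(-t^α) cos(xt) dt, and checks the two special cases with known closed forms.

```python
import numpy as np
from scipy.integrate import quad

def sas_pdf(x, alpha):
    """PDF of the standard symmetric alpha-stable law, computed by
    numerically inverting its characteristic function exp(-|t|^alpha):
    f(x) = (1/pi) * integral_0^inf exp(-t**alpha) * cos(x*t) dt."""
    integrand = lambda t: np.exp(-t ** alpha) * np.cos(x * t)
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value / np.pi

# Sanity checks against the two closed-form special cases:
# alpha = 2 is Gaussian with variance 2, so f(0) = 1/(2*sqrt(pi));
# alpha = 1 is standard Cauchy, so f(0) = 1/pi and f(1) = 1/(2*pi).
print(sas_pdf(0.0, 2.0), 1 / (2 * np.sqrt(np.pi)))
print(sas_pdf(0.0, 1.0), 1 / np.pi)
```

For other values of α (e.g. 1.5) no elementary closed form exists, which is why the paper's Fox H-function representation is useful.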
Semiclassical calculation of ionisation rate for Rydberg helium atoms in an electric field
Wang De-Hua
2011-01-01
The ionisation of Rydberg helium atoms in an electric field above the classical ionisation threshold has been examined using the semiclassical method, with particular emphasis on the influence of core scattering on the escape dynamics of electrons. The results show that Rydberg helium atoms ionise by emitting a train of electron pulses. Unlike the case of the ionisation of a Rydberg hydrogen atom in parallel electric and magnetic fields, where the electron pulses are caused by the external magnetic field, the pulse trains for Rydberg helium atoms are created through core scattering. Each peak in the ionisation rate corresponds to the contribution of one core-scattered combination trajectory. This fact further illustrates that ionic core scattering leads to the chaotic properties of the Rydberg helium atom in external fields. Our studies provide a simple explanation of the escape dynamics in the ionisation of nonhydrogenic atoms in external fields.
LDA+DMFT calculations of the Knight shift and relaxation rate in VOMoO₄
Kiani, Amin; Pavarini, Eva [Institute for Advanced Simulation and JARA, Forschungszentrum Juelich, 52425 Juelich (Germany)
2013-07-01
By using the LDA+DMFT approach and the local vertex approximation, we calculate the magnetic linear response function of strongly correlated transition-metal oxides. From the susceptibility we obtain the Knight shift and relaxation rate. We present results for the frustrated system VOMoO₄. In particular, we investigate how the Knight shift and the relaxation time behave in different temperature and correlation regimes.
Pan, Wenxiao; Daily, Michael D.; Baker, Nathan A.
2015-12-01
We demonstrate the accuracy and effectiveness of a Lagrangian particle-based method, smoothed particle hydrodynamics (SPH), to study diffusion in biomolecular systems by numerically solving the time-dependent Smoluchowski equation for continuum diffusion. The numerical method is first verified in simple systems and then applied to the calculation of ligand binding to an acetylcholinesterase monomer. Unlike previous studies, a reactive Robin boundary condition (BC), rather than the absolute absorbing (Dirichlet) boundary condition, is considered on the reactive boundaries. This new boundary condition treatment allows for the analysis of enzymes with "imperfect" reaction rates. Rates for inhibitor binding to mAChE are calculated at various ionic strengths and compared with experiment and other numerical methods. We find that imposition of the Robin BC improves agreement between calculated and experimental reaction rates. Although this initial application focuses on a single monomer system, our new method provides a framework to explore broader applications of SPH in larger-scale biomolecular complexes by taking advantage of its Lagrangian particle-based nature.
Benslimane-Bouland, A
1997-09-01
The framework of this study was the evaluation of the nuclear data requirements for Actinides and Fission Products applied to current nuclear reactors as well as future applications. This last item includes extended irradiation campaigns, 100 % Mixed Oxide fuel, transmutation or even incineration. The first part of this study presents different types of integral measurements which are available for capture rate measurements, as well as the methods used for reactor core calculation route design and nuclear data library validation. The second section concerns the analysis of three specific irradiation experiments. The results have shown the extent of the current knowledge on nuclear data as well as the associated uncertainties. The third and last section shows both the coherency between all the results, and the statistical method applied for nuclear data library adjustment. A relevant application of this method has demonstrated that only specifically chosen integral experiments can be of use for the validation of nuclear data libraries. The conclusion is reached that even if co-ordinated efforts between reactor and nuclear physicists have made possible a huge improvement in the knowledge of capture cross sections of the main nuclei such as uranium and plutonium, some improvements are currently necessary for the minor actinides (Np, Am and Cm). Both integral and differential measurements are recommended to improve the knowledge of minor actinide cross sections. As far as integral experiments are concerned, a set of criteria to be followed during the experimental conception have been defined in order to both reduce the number of required calculation approximations, and to increase as much as possible the maximum amount of extracted information. (author)
Piepsz, Amy; Tondeur, Marianne [CHU St. Pierre, Department of Radioisotopes, Brussels (Belgium); Ham, Hamphrey [University Hospital Ghent, Department of Nuclear Medicine, Ghent (Belgium)
2008-09-15
⁵¹Cr ethylene diamine tetraacetic acid (⁵¹Cr EDTA) clearance is nowadays considered an accurate and reproducible method for measuring glomerular filtration rate (GFR) in children. Normal values as a function of age, corrected for body surface area, have recently been updated. However, much criticism has been expressed about the validity of the body surface area correction. The aim of the present paper was to present normal GFR values, not corrected for body surface area, with the associated percentile curves. For that purpose, the same patients as in the previous paper were selected, namely those with no recent urinary tract infection, a normal left-to-right ⁹⁹ᵐTc MAG3 uptake ratio and normal kidney morphology on the early parenchymal images. A single blood sample method was used for the ⁵¹Cr EDTA clearance measurement. Clearance values, not corrected for body surface area, increased progressively up to adolescence. The percentile curves were determined and allow, for a single patient, an accurate estimate of the level of non-corrected clearance and its evolution with time, whatever the age. (orig.)
Ho, Chih-Hsiang; Smith, Eugene I.; Feuerbach, Daniel L.; Naumann, Terry R.
1991-12-01
Investigations are currently underway to evaluate the impact of potentially adverse conditions (e.g. volcanism, faulting, seismicity) on the waste-isolation capability of the proposed nuclear waste repository at Yucca Mountain, Nevada, USA. This paper is the first in a series that will examine the probability of disruption of the Yucca Mountain site by volcanic eruption. In it, we discuss three estimating techniques for determining the recurrence rate of volcanic eruption (λ), an important parameter in the Poisson probability model. The first method is based on the number of events occurring over a certain observation period, the second is based on repose times, and the final is based on magma volume. All three require knowledge of the total number of eruptions in the Yucca Mountain area during the observation period (E). Following this discussion we then propose an estimate of E which takes into account the possibility of polygenetic and polycyclic volcanism at all the volcanic centers near the Yucca Mountain site.
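The first (event-count) estimator and the resulting Poisson disruption probability can be sketched as follows. The event count, observation period, and time horizon below are hypothetical placeholders for illustration, not the paper's values.

```python
import math

def recurrence_rate(n_events, observation_years):
    """Event-count estimator of the annual recurrence rate lambda:
    lambda = E / T for E eruptions over a T-year observation period."""
    return n_events / observation_years

def prob_at_least_one(lam, horizon_years):
    """Poisson probability of one or more eruptions in the horizon:
    P = 1 - exp(-lambda * t)."""
    return 1.0 - math.exp(-lam * horizon_years)

# Hypothetical figures for illustration only: 8 eruptive events
# over a 1.6-million-year observation period, 10^4-year horizon.
lam = recurrence_rate(8, 1.6e6)        # 5e-6 events/year
p = prob_at_least_one(lam, 1.0e4)
print(f"lambda = {lam:.2e}/yr, P(>=1 event in 10 kyr) = {p:.4f}")
```

The repose-time and magma-volume estimators mentioned in the abstract would replace the `recurrence_rate` step; the Poisson probability calculation is the same in all three cases.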
Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund
2015-09-01
The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute SD scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training were assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a master's or doctoral degree.
Gao, R. S.; Hall, S. R.; Swartz, W. H.; Spackman, J. R.; Watts, L. A.; Fahey, D. W.; Aikin, K. C.; Shetter, R. E.; Bui, T. P.
2008-01-01
Results for the solar heating rates in ambient air due to absorption by black-carbon (BC) containing particles and ozone are presented as calculated from airborne observations made in the tropical tropopause layer (TTL) in January-February 2006. The method uses airborne in situ observations of BC particles, ozone and actinic flux. Total BC mass is obtained along the flight track by summing the masses of individually detected BC particles in the range 90 to 600-nm volume-equivalent diameter, which includes most of the BC mass. Ozone mixing ratios and upwelling and partial downwelling solar actinic fluxes were measured concurrently with BC mass. Two estimates used for the BC wavelength-dependent absorption cross section yielded similar heating rates. For mean altitudes of 16.5, 17.5, and 18.5 km (0.5 km) in the tropics, average BC heating rates were near 0.0002 K/d. Observed BC coatings on individual particles approximately double derived BC heating rates. Ozone heating rates exceeded BC heating rates by approximately a factor of 100 on average and at least a factor of 4, suggesting that BC heating rates in this region are negligible in comparison.
Dolinar, E. K.; Dong, X.; Xi, B.
2015-12-01
One-dimensional radiative transfer models (RTM) are a common tool used for calculating atmospheric heating rates and radiative fluxes. In the forward sense, RTMs use known (or observed) quantities of the atmospheric state and surface characteristics to determine the appropriate surface and top-of-atmosphere (TOA) radiative fluxes. The NASA CERES science team uses the modified Fu-Liou RTM to calculate atmospheric heating rates and surface and TOA fluxes using the CERES observed TOA shortwave (SW) and longwave (LW) fluxes as constraints to derive global surface and TOA radiation budgets using a reanalyzed atmospheric state (e.g. temperature and various greenhouse gases) from the newly developed MERRA-2. However, closure studies have shown that using the reanalyzed state as input to the RTM introduces some disparity between the RTM calculated fluxes and surface observed ones. The purpose of this study is to generate a database of observed atmospheric state profiles, from satellite and ground-based sources, at several permanent Atmospheric Radiation Measurement (ARM) Program sites, including the Southern Great Plains (SGP), Northern Slope of Alaska (NSA), Tropical Western Pacific Nauru (TWP-C2), and Eastern North Atlantic (ENA) facilities. Since clouds are a major modulator of radiative transfer within the Earth's atmosphere, we will focus on clear-sky conditions in this study, which will set the baseline for our cloudy-sky studies in the future. Clear-sky flux profiles are calculated using the Edition 4 NASA LaRC modified Fu-Liou RTM. The aforementioned atmospheric profiles generated in-house are used as input into the RTM, as well as profiles from reanalyses. The calculated surface and TOA fluxes are compared with ARM surface measured and CERES satellite observed SW and LW fluxes, respectively. Clear-sky cases are identified by the ARM radar-lidar observations, as well as satellite observations, at the select ARM sites.
Corruble, E; Purper, D; Payan, C; Guelfi, J
1998-08-01
The inter-rater reliability of the French versions of the MADRS and the DRRS was studied on the basis of 58 videotape records of structured standardised interviews of depressed inpatients under antidepressant treatment. Each patient was assessed by two trained raters, from the same videotape recording. The inter-rater reliability of total scores was high with both scales (intra-class correlation coefficients: 0.86 for MADRS and 0.77 for DRRS). However, the inter-rater reliability for individual items was higher and more homogeneous for the MADRS than for the DRRS. Finally, the structured interview in French appears to be relevant for the MADRS, but it should be improved for the DRRS.
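An intra-class correlation coefficient of the kind reported in this abstract can be computed from an n-subjects-by-k-raters matrix. The sketch below is a generic two-way, absolute-agreement, single-rater ICC(2,1); the study does not specify which ICC variant it used, so this is an illustration of the general technique, checked against the classic Shrout and Fleiss (1979) example data.

```python
import numpy as np

def icc2_1(ratings):
    """Single-rater, absolute-agreement intra-class correlation
    ICC(2,1) for an (n subjects x k raters) matrix, built from the
    usual two-way ANOVA mean squares."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    msr = k * np.sum((r.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((r.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((r - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Classic Shrout & Fleiss (1979) example: 6 subjects, 4 raters.
sf = [[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
      [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]]
print(round(icc2_1(sf), 2))  # 0.29
```

For two raters scoring 58 patients, as in the study, the input would simply be a 58-by-2 matrix of total scores.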
Topin, Jérémie; Diharce, Julien; Fiorucci, Sébastien; Antonczak, Serge; Golebiowski, Jérôme
2014-01-23
Hydrogenases are promising candidates for the catalytic production of green energy by biological means. The major impediment to such production is rooted in their inhibition under aerobic conditions. In this work, we model dioxygen migration rates in mutants of a hydrogenase of Desulfovibrio fructusovorans. The approach relies on the calculation of the whole potential of mean force for O2 migration within the wild-type as well as the V74M, V74F, and V74Q mutant channels. The three free-energy barriers along the entire migration pathway are converted into chemical rates through modeling based on Transition State Theory. The use of such a model recovers the trend of O2 migration rates among the series.
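The barrier-to-rate conversion mentioned above can be illustrated with the Eyring form of Transition State Theory, k = (k_B T / h) exp(-ΔG‡ / RT). This is a generic sketch assuming a transmission coefficient of 1; the barrier value used below is hypothetical, not one of the paper's computed barriers.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(dg_barrier_kj_mol, temperature_k=300.0):
    """Eyring/TST rate constant (1/s) from a free-energy barrier
    in kJ/mol, assuming a transmission coefficient of 1."""
    prefactor = KB * temperature_k / H   # ~6.25e12 s^-1 at 300 K
    return prefactor * math.exp(-dg_barrier_kj_mol * 1e3
                                / (R * temperature_k))

# Hypothetical 30 kJ/mol barrier for illustration only:
print(f"{tst_rate(30.0):.3e} s^-1")
```

Because the rate depends exponentially on the barrier, even the modest barrier differences between the V74 mutants translate into large changes in migration rate, which is the trend the paper recovers.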
Trippolini, M. A.; Dijkstra, P. U.; Jansen, B.; Oesch, P.; Geertzen, J. H. B.; Reneman, M. F.
2014-01-01
Introduction Functional capacity evaluation (FCE) can be used to make clinical decisions regarding fitness-for-work. During FCE the evaluator attempts to assess the amount of physical effort of the patient. The aim of this study is to analyze the reliability of physical effort determination using ob
Romanets, Y; Kadi, Y; Luis, R; Goncalves, I F; Tecchio, L; Kharoua, C; Vaz, P; Ene, D; David, J C; Rocca, R; Negoita, F
2010-01-01
One of the objectives of the EURISOL (EURopean Isotope Separation On-Line Radioactive Ion Beam) Design Study consisted of providing a safe and reliable facility layout and design for the following operational parameters and characteristics: (a) a 4 MW proton beam of 1 GeV energy impinging on a mercury target (the converter); (b) high neutron fluxes (~3 × 10¹⁶ neutrons/s) generated by spallation reactions of the protons impinging on the converter and (c) a fission rate on fissile U-235 targets in excess of 10¹⁵ fissions/s. In this work, the state-of-the-art Monte Carlo codes MCNPX (Pelowitz, 2005) and FLUKA (Vlachoudis, 2009; Ferrari et al., 2008) were used to characterize the neutronics performance and to perform the shielding assessment (Herrera-Martinez and Kadi, 2006; Cornell, 2003) of the EURISOL Target Unit and to provide estimations of dose rate and activation of different components, in view of the radiation safety assessment of the facility. Dosimetry and activation calculations were perfor...
Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear
2011-07-01
A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)
Copeland, Kyle
2015-07-01
The superposition approximation was commonly employed in atmospheric nuclear transport modeling until recent years and is incorporated into flight dose calculation codes such as CARI-6 and EPCARD. The useful altitude range for this approximation is investigated using Monte Carlo transport techniques. CARI-7A simulates atmospheric radiation transport of elements H-Fe using a database of precalculated galactic cosmic radiation showers calculated with MCNPX 2.7.0 and is employed here to investigate the influence of the superposition approximation on effective dose rates, relative to full nuclear transport of galactic cosmic ray primary ions. Superposition is found to produce results less than 10% different from nuclear transport at current commercial and business aviation altitudes while underestimating dose rates at higher altitudes. The underestimate sometimes exceeds 20% at approximately 23 km and exceeds 40% at 50 km. Thus, programs employing this approximation should not be used to estimate doses or dose rates for high-altitude portions of the commercial space and near-space manned flights that are expected to begin soon.
Nabi, Jameel-Un
2014-01-01
A few white dwarfs located in binary systems may acquire sufficiently high mass accretion rates, resulting in the burning of carbon and oxygen under nondegenerate conditions and forming an O+Ne+Mg core. These O+Ne+Mg cores are gravitationally less bound than those of more massive progenitor stars and can release more energy through nuclear burning. They are also among the probable candidates for low-entropy r-process sites. Recent observations of subluminous Type II-P supernovae (e.g., 2005cs, 2003gd, 1999br, 1997D) have rekindled interest in 8-10 M☉ stars, which develop O+Ne+Mg cores. Microscopic calculations of capture rates on ²⁴Mg, which may contribute significantly to the collapse of O+Ne+Mg cores, using the shell model and proton-neutron quasiparticle random phase approximation (pn-QRPA) theory, were performed earlier and comparisons made. Simulators, however, may require these capture rates on a fine scale. For the first time a detailed microscopic calculation of the electron and positron captur...
Denis-Petit, David; Gosselin, Gilbert; Hannachi, Fazia; Tarisien, Medhi; Bonnet, Thomas; Comet, Maxime; Gobet, Franck; Versteegen, Maud; Morel, Pascal; Méot, Vincent; Matea, Iolanda
2017-08-01
One promising candidate for the first detection of nuclear excitation in plasma is the 463-keV, 20.26-min-lifetime isomeric state in 84Rb, which can be excited via a 3.5-keV transition to a higher lying state. According to our preliminary calculations, under specific plasma conditions, nuclear excitation by electron transition (NEET) may be its strongest excitation process. Evaluating a reliable NEET rate requires, in particular, a thorough examination of all atomic transitions contributing to the rate under plasma conditions. We report the results of a detailed evaluation of the NEET rate based on multiconfiguration Dirac Fock (MCDF) atomic calculations, in a rubidium plasma at local thermodynamic equilibrium with a temperature of 400 eV and a density of 10⁻² g/cm³, and based on a more precise energy measurement of the nuclear transition involved in the excitation.
Ligero, R.A., E-mail: rufino.ligero@uca.e [Departamento de Fisica Aplicada, Universidad de Cadiz, 11510 Puerto Real, Cadiz (Spain); Casas-Ruiz, M. [Departamento de Fisica Aplicada, Universidad de Cadiz, 11510 Puerto Real, Cadiz (Spain); Barrera, M. [Departamento de Medio Ambiente, CIEMAT, Madrid 28040 (Spain); Barbero, L. [Departamento de Ciencias de la Tierra, Universidad de Cadiz, 11510 Puerto Real (Spain); Melendez, M.J. [Departamento de Fisica Aplicada, Universidad de Cadiz, 11510 Puerto Real, Cadiz (Spain)
2010-09-15
A new method, based on the inventory determined for the activity of the radionuclide ¹³⁷Cs from global radioactive fallout, has been used to calculate sedimentation rates. The method has been applied in a wide intertidal region in the Bay of Cadiz Natural Park (SW Spain). The sedimentation rates estimated by the ¹³⁷Cs inventory method ranged from 0.26 cm/year to 1.72 cm/year. The average sedimentation rate obtained is 0.59 cm/year, and this rate has been compared with those resulting from the application of the ²¹⁰Pb dating technique; good agreement between the two procedures has been found. From the study carried out, it has been possible for the first time to draw a map of sedimentation rates for this zone, where numerous physico-chemical, oceanographic and ecological studies converge, since it is situated in a region of great environmental interest. This area, which is representative of common coastal environments, is particularly sensitive to perturbations related to climate change, and the results of the study will allow short- and medium-term evaluations of this change.
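The abstract does not give the details of the inventory calculation, but the simpler, related horizon-dating calculation used by ²¹⁰Pb/¹³⁷Cs chronologies illustrates how a dated ¹³⁷Cs marker yields a sedimentation rate. The depths and years below are hypothetical placeholders, not the study's measurements.

```python
def sedimentation_rate(horizon_depth_cm, horizon_year, core_year):
    """Sedimentation rate (cm/year) from the burial depth of a dated
    137Cs horizon, e.g. the 1963 global fallout maximum."""
    return horizon_depth_cm / (core_year - horizon_year)

# Hypothetical core for illustration: 1963 fallout peak found at
# 26 cm depth in a core collected in 2008.
rate = sedimentation_rate(26.0, 1963, 2008)
print(f"{rate:.2f} cm/year")  # 0.58 cm/year
```

The inventory method itself instead compares the total measured ¹³⁷Cs activity per unit area with the known atmospheric deposition, but the output quantity, cm of sediment per year, is the same.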
Crawford, C. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); King, W. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-08-14
Savannah River Remediation (SRR) personnel requested that the Savannah River National Laboratory (SRNL) evaluate available data and determine its applicability to defining the impact of planned glycolate anion additions to Savannah River Site (SRS) High Level Waste (HLW) on Tank Farm flammability (primarily with regard to H₂ production). Flammability evaluations of formate anion, which is already present in SRS waste, were also needed. This report describes the impacts of glycolate and formate radiolysis and thermolysis on Hydrogen Generation Rate (HGR) calculations for the SRS Tank Farm.
Amy Lansky
Full Text Available This study estimated the proportions and numbers of heterosexuals in the United States (U.S.) to calculate rates of heterosexually acquired human immunodeficiency virus (HIV) infection. Quantifying the burden of disease can inform effective prevention planning and resource allocation. Heterosexuals were defined as males and females who ever had sex with an opposite-sex partner, excluding those with other HIV risks: persons who ever injected drugs and males who ever had sex with another man. We conducted a meta-analysis using data from 3 national probability surveys that measured lifetime (ever) sexual activity and injection drug use among persons aged 15 years and older to estimate the proportion of heterosexuals in the United States population. We then applied the proportion of heterosexual persons to census data to produce population size estimates. National HIV infection rates among heterosexuals were calculated using surveillance data (cases attributable to heterosexual contact) in the numerators and the heterosexual population size estimates in the denominators. Adult and adolescent heterosexuals comprised an estimated 86.7% (95% confidence interval: 84.1%-89.3%) of the U.S. population. The estimate for males was 84.1% (95% CI: 81.2%-86.9%) and for females was 89.4% (95% CI: 86.9%-91.8%). The HIV diagnosis rate for 2013 was 5.2 per 100,000 heterosexuals, and the rate of persons living with diagnosed HIV infection in 2012 was 104 per 100,000 heterosexuals aged 13 years or older. Rates of HIV infection were >20 times as high among black heterosexuals compared to white heterosexuals, indicating considerable disparity. Rates among heterosexual men demonstrated higher disparities than overall population rates for men. The best available data must be used to guide decision-making for HIV prevention. HIV rates among heterosexuals in the U.S. are important additions to cost effectiveness and other data used to make critical decisions about resources for
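The rate construction described above, surveillance case counts over survey-derived population denominators, reduces to simple arithmetic. The counts below are hypothetical placeholders for illustration, not the study's figures.

```python
def heterosexual_population(total_population, heterosexual_proportion):
    """Denominator: population-size estimate obtained by applying a
    survey-based proportion to a census count."""
    return total_population * heterosexual_proportion

def rate_per_100k(case_count, population_size):
    """Diagnosis rate per 100,000 population:
    numerator cases over the estimated denominator."""
    return case_count / population_size * 1e5

# Hypothetical inputs for illustration (not the study's figures):
# a 250M total population, the 86.7% heterosexual proportion from
# the abstract, and 11,000 hypothetical diagnoses.
denom = heterosexual_population(250_000_000, 0.867)
print(f"{rate_per_100k(11_000, denom):.1f} per 100,000")
```

The same two functions applied separately to black and white subpopulations would produce the rate ratio (>20x) reported in the abstract.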
Franc, Jeffrey Micheal; Verde, Manuela; Gallardo, Alba Ripoll; Carenzo, Luca; Ingrassia, Pier Luigi
2017-08-01
Objective measurement of simulation performance requires a validated and reliable tool. However, no published Italian-language assessment tool is available. Translation of a published English-language tool, the Ottawa Crisis Resource Management Global Rating Scale (GRS), may lead to a validated and reliable tool. After developing an Italian-language translation of the English-language tool, the study measured the reliability of the new tool by comparison with the English-language tool used independently in the same simulation scenarios. In addition, the validity of the Italian-language tool was measured by comparison to a skills score, also applied independently. The correlation coefficient between the Italian-language overall GRS and the English-language overall GRS was 0.82 (adjusted 95% confidence interval: 0.62-0.92). The correlation coefficient between the Italian-language overall GRS and the skills score was 0.85 (adjusted 95% confidence interval: 0.68-0.94). This study demonstrated that the Italian-language GRS has acceptable reliability when compared with the English-language tool, suggesting that it can be used reliably to evaluate performance during simulated emergencies, and acceptable validity for assessing simulation performance as measured against the skills scores. The data suggest that the instrument is adequately reliable for informal and formative examinations, but may require further confirmation before use in high-stakes examinations such as licensing.
Shehla, Romana; Khan, Athar Ali
2016-01-01
Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, whose hazard can be increasing as well as bathtub-shaped, is studied. The article makes a Bayesian study of the model and shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inference focuses on the posterior distribution of non-linear functions of the parameters. The model is also extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.
Betzler, Benjamin R., E-mail: betzlerbr@ornl.gov [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Kiedrowski, Brian C., E-mail: bckiedro@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Brown, Forrest B., E-mail: fbrown@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS A143, Los Alamos, NM 87545 (United States); Martin, William R., E-mail: wrm@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States)
2015-12-15
Highlights:
• A transition rate matrix method for calculating α-eigenvalues is formulated.
• Verification of this method is performed using multigroup infinite-medium problems.
• Applications to continuous-energy media examine the slowing down of neutrons.
• The effect of the α-eigenvalue spectrum on the short-time flux behavior is discussed.
Abstract: The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. For this, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.
Sarriguren, P
2013-01-01
Electron-capture rates at different density and temperature conditions are evaluated for a set of pf-shell nuclei representative of the constituents of presupernova formations. The nuclear-structure part of the problem is described within a quasiparticle random-phase approximation based on a deformed Skyrme Hartree-Fock self-consistent mean field with pairing correlations and residual interactions in the particle-hole and particle-particle channels. The energy distributions of the Gamow-Teller strength are evaluated and compared to benchmark shell-model calculations and experimental data extracted from charge-exchange reactions. The model dependence of the weak rates is discussed, and the sensitivities to both density and temperature are analyzed.
Losa-Iglesias, Marta Elena; Becerro-de-Bengoa-Vallejo, Ricardo; Becerro-de-Bengoa-Losa, Klark Ricardo
2016-06-01
There are downloadable applications (Apps) for cell phones that can measure heart rate in a simple and painless manner. The aim of this study was to assess the reliability of this type of App on a Smartphone running Android, compared with the radial pulse and a portable pulse oximeter. We performed a randomized pilot observational study of diagnostic accuracy in 46 healthy volunteers. The patients' demographic data and cardiac pulse were collected. Heart rate was measured in three ways: by palpation of the radial artery with three fingers at the wrist over the radius; with a low-cost portable liquid-crystal-display finger pulse oximeter; and with the Heart Rate Plus App on a Samsung Galaxy Note®. The study demonstrated high reliability and consistency among the three systems with respect to the heart rate of healthy adults. For all parameters, the ICC was >0.93, indicating excellent reliability. Moreover, CVME values for all parameters were between 1.66% and 4.06%. We found significant correlation coefficients, no systematic differences between radial pulse palpation and the pulse oximeter, and high precision. Low-cost pulse oximeters and App systems can serve as valid instruments for the assessment of heart rate in healthy adults.
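The intraclass correlation coefficients reported above can be computed from a two-way ANOVA decomposition. As a minimal sketch of a two-way random, single-measure ICC(2,1) (the device-by-subject data below are invented for illustration, not the study's measurements):

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random, single-measure ICC(2,1) from an n_subjects x k_raters
    array, computed from the standard ANOVA mean squares. A sketch, not a
    reimplementation of whatever software the study used."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater (device) means
    ssr = k * np.sum((row_means - grand) ** 2)   # between-subject sum of squares
    ssc = n * np.sum((col_means - grand) ** 2)   # between-rater sum of squares
    sse = np.sum((ratings - grand) ** 2) - ssr - ssc
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three hypothetical "devices" in near-perfect agreement across five subjects:
r = np.array([[60, 60, 61], [72, 71, 72], [80, 80, 80],
              [65, 66, 65], [90, 90, 91]], float)
print(round(icc_2_1(r), 3))
```

With large between-subject variance and only ±1 bpm disagreement between devices, the ICC comes out well above 0.9, mirroring the pattern the abstract reports.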
Yu-Wei Chen; Han-Hsiang Chen; Tsang-En Wang; Ching-Wei Chang; Chen-Wang Chang; Chih-Jen Wu
2011-01-01
AIM: To evaluate the difference between the performance of the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) and Modification of Diet in Renal Disease (MDRD) equations in cirrhotic patients. METHODS: From Jan 2004 to Oct 2008, 4127 cirrhotic patients were reviewed. Patients with incomplete data with respect to renal function were excluded; thus, a total of 3791 patients were included in the study. The glomerular filtration rate (GFR) was estimated by the 4-variable MDRD (MDRD-4), 6-variable MDRD (MDRD-6), and CKD-EPI equations. RESULTS: When serum creatinine was 0.7-6.8 mg/dL and 0.6-5.3 mg/dL in men and women, respectively, a significantly lower GFR was estimated by the MDRD-6 than by the CKD-EPI. Similar GFRs were calculated by both equations when creatinine was > 6.9 mg/dL and > 5.4 mg/dL in men and women, respectively. In predicting in-hospital mortality, estimated GFR obtained by the MDRD-6 showed better accuracy [81.72%; 95% confidence interval (CI), 0.94-0.95] than that obtained by the MDRD-4 (80.22%; 95% CI, 0.96-0.97), CKD-EPI (79.93%; 95% CI, 0.96-0.96), and creatinine (77.50%; 95% CI, 2.27-2.63). CONCLUSION: GFR calculated by the 6-variable MDRD equation may be closer to the true GFR than that calculated by the CKD-EPI equation.
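For reference, the 4-variable MDRD equation mentioned above has a standard closed form. The sketch below uses the IDMS-traceable 175 coefficient; the abstract does not state which creatinine calibration the study used, so treat this as illustrative only:

```python
def egfr_mdrd4(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD estimate of GFR (mL/min/1.73 m^2).

    Uses the IDMS-traceable 175 coefficient (the original equation used 186);
    an illustrative form, not necessarily the calibration used in the study.
    """
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742  # standard female adjustment factor
    if black:
        gfr *= 1.212  # standard adjustment factor for black patients
    return gfr

# Hypothetical patient: creatinine 1.0 mg/dL, age 50, male, non-black.
print(round(egfr_mdrd4(1.0, 50), 1))
```

The 6-variable MDRD and CKD-EPI equations add further terms (urea, albumin, and piecewise creatinine coefficients, respectively) and are omitted here.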
Shizgal, Bernie D.
2016-08-01
Nonclassical quadratures based on a new set of half-range polynomials, T_n(x), orthogonal with respect to the weight w(x) = e^{−x − b/√x} for x ∈ [0, ∞), are employed in the efficient calculation of nuclear fusion reaction rate coefficients from cross section data. The parameter b = B/√(k_B T) in the weight function is temperature dependent, and B is the Gamow factor. The polynomials T_n(x) satisfy a three-term recurrence relation defined by two sets of recurrence coefficients, α_n and β_n. These recurrence coefficients define in turn the tridiagonal Jacobi matrix whose eigenvalues are the quadrature points; the weights are calculated from the first components of the eigenvectors. For nonresonant nuclear reactions for which the astrophysical function can be expressed as a lower-order polynomial in the relative energy, the convergence of the thermal average of the reactive cross section with this nonclassical quadrature is extremely rapid, requiring in many cases only 2-4 quadrature points. The results are compared with other libraries of nuclear reaction rate coefficient data reported in the literature.
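The Jacobi-matrix construction described here is the classic Golub-Welsch procedure. The paper's T_n(x) recurrence coefficients are not given in the abstract, so the sketch below demonstrates the same machinery with the known monic Laguerre coefficients (weight e^{−x}, i.e. the b = 0 limit):

```python
import numpy as np

def golub_welsch(alpha, beta, mu0):
    """Gauss quadrature nodes/weights from three-term recurrence coefficients.

    alpha, beta are the recurrence coefficients of the monic orthogonal
    polynomials (beta[0] is unused in the matrix); mu0 is the integral of
    the weight function. Nodes are the Jacobi-matrix eigenvalues; weights
    come from the first components of the eigenvectors.
    """
    n = len(alpha)
    off = np.sqrt(beta[1:n])
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = mu0 * vecs[0, :] ** 2
    return nodes, weights

# Gauss-Laguerre demonstration: monic recurrence alpha_k = 2k+1, beta_k = k^2.
n = 5
alpha = np.array([2 * k + 1 for k in range(n)], float)
beta = np.array([k ** 2 for k in range(n)], float)
x, w = golub_welsch(alpha, beta, mu0=1.0)
print(np.sum(w * x ** 2))  # should be ≈ 2, since ∫ x² e⁻ˣ dx = 2
```

A 5-point rule is exact for polynomials up to degree 9, which is why low-order astrophysical S-factors converge with very few points.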
Sutor, Malinda M.; Dagg, Michael J.
2008-06-01
The effects of vertical sampling resolution on estimates of plankton biomass and grazing calculations were examined using data collected in two different areas with vertically stratified water columns. Data were collected from one site in the upwelling region off Oregon and from four sites in the Northern Gulf of Mexico, three within the Mississippi River plume and one in adjacent oceanic waters. Plankton were found to be concentrated in discrete layers with sharp vertical gradients at all the stations. Phytoplankton distributions were correlated with gradients in temperature and salinity, but microzooplankton and mesozooplankton distributions were not. Layers of zooplankton were sometimes collocated with layers of phytoplankton, but this was not always the case. Simulated calculations demonstrate that when averages are taken over the water column, or coarser scale vertical sampling resolution is used, biomass and mesozooplankton grazing and filtration rates can be greatly underestimated. This has important implications for understanding the ecological significance of discrete layers of plankton and for assessing rates of grazing and production in stratified water columns.
Yang, Mingxin; He, Jingsha; Zhang, Yuqiang
2014-01-01
Due to limited resources in wireless sensor nodes, energy efficiency is considered as one of the primary constraints in the design of the topology of wireless sensor networks (WSNs). Since data that are collected by wireless sensor nodes exhibit the characteristics of temporal association, data fusion has also become a very important means of reducing network traffic as well as eliminating data redundancy as far as data transmission is concerned. Another reason for data fusion is that, in many applications, only some of the data that are collected can meet the requirements of the sink node. In this paper, we propose a method to calculate the number of cluster heads or data aggregators during data fusion based on the rate-distortion function. In our discussion, we will first establish an energy consumption model and then describe a method for calculating the number of cluster heads from the point of view of reducing energy consumption. We will also show through theoretical analysis and experimentation that the network topology design based on the rate-distortion function is indeed more energy-efficient.
Hanssen-Bauer, Ketil; Gowers, Simon; Aalen, Odd O
2007-01-01
Assessment Scale (CGAS) and the Global Assessment of Psychosocial Disability (GAPD). Thirty clinicians from 5 nations independently rated 20 written vignettes. The national groups afterwards established national consensus ratings. There were no cross-national differences in independent scores, but there were...
Lisa A Simpson
OBJECTIVE: To develop a brief, valid and reliable tool [the Rating of Everyday Arm-use in the Community and Home (REACH) scale] to classify affected upper limb use after stroke outside the clinical setting. METHODS: Focus groups with clinicians, patients and caregivers (n = 33) and a literature review were employed to develop the REACH scale. A sample of community-dwelling individuals with stroke was used to assess the validity (n = 96) and inter-rater reliability (n = 73) of the new scale. RESULTS: The REACH consists of separate scales for the dominant and non-dominant affected upper limbs, and takes five minutes to administer. Each scale consists of six categories that capture 'no use' to 'full use'. The intraclass correlation coefficient and weighted kappa for inter-rater reliability were 0.97 (95% confidence interval: 0.95-0.98) and 0.91 (0.89-0.93), respectively. REACH scores correlated with external measures of upper extremity use, function and impairment (rho = 0.64-0.94). CONCLUSIONS: The REACH scale is a reliable, quick-to-administer tool that has strong relationships to other measures of upper limb use, function and impairment. By providing a rich description of how the affected upper limb is used outside of the clinical setting, the REACH scale fills an important gap among current measures of upper limb use and is useful for understanding the long-term effects of stroke rehabilitation.
McCarthy, K.
2008-01-01
Semipermeable membrane devices (SPMDs) were deployed in the Columbia Slough, near Portland, Oregon, on three separate occasions to measure the spatial and seasonal distribution of dissolved polycyclic aromatic hydrocarbons (PAHs) and organochlorine compounds (OCs) in the slough. Concentrations of PAHs and OCs in SPMDs showed spatial and seasonal differences among sites and indicated that unusually high flows in the spring of 2006 diluted the concentrations of many of the target contaminants. However, the same PAHs - pyrene, fluoranthene, and the alkylated homologues of phenanthrene, anthracene, and fluorene - and OCs - polychlorinated biphenyls, pentachloroanisole, chlorpyrifos, dieldrin, and the metabolites of dichlorodiphenyltrichloroethane (DDT) - predominated throughout the system during all three deployment periods. The data suggest that storm washoff may be a predominant source of PAHs in the slough but that OCs are ubiquitous, entering the slough by a variety of pathways. Comparison of SPMDs deployed on the stream bed with SPMDs deployed in the overlying water column suggests that even for the very hydrophobic compounds investigated, bed sediments may not be a predominant source in this system. Perdeuterated phenanthrene (phenanthrene-d10), spiked at a rate of 2 μg per SPMD, was shown to be a reliable performance reference compound (PRC) under the conditions of these deployments. Post-deployment concentrations of the PRC revealed differences in sampling conditions among sites and between seasons, but indicate that for SPMDs deployed throughout the main slough channel, differences in sampling rates were small enough to make site-to-site comparisons of SPMD concentrations straightforward.
Chuang, Ching-Cheng; Tsai, Jui-che; Chen, Chung-Ming; Yu, Zong-Han; Sun, Chia-Wei
2012-04-01
Diffuse optical tomography (DOT) is an emerging technique for functional biological imaging. The imaging quality of DOT depends on the image reconstruction algorithm. The simultaneous iterative reconstruction technique (SIRT) has been widely used for DOT image reconstruction, but there is no criterion for truncating the iteration based on any kind of residual parameter; the number of iteration loops is usually decided by experimental rule. This work presents a convergence rate (CR) calculation that can be of great help for SIRT optimization. In this paper, four inhomogeneities with various shapes of absorption distributions are simulated as imaging targets, and the images are reconstructed and analyzed using the SIRT method. To optimize the trade-off between time consumption and imaging accuracy in the reconstruction process, the number of iteration loops needs to be optimized with a criterion in the algorithm; that is, the root mean square error (RMSE) should be minimized in limited iterations. For clinical applications of DOT, the RMSE cannot be obtained because the measured targets are unknown. Thus, the correlations between the RMSE and the CR in the SIRT algorithm are analyzed in this paper. From the simulation results, the parameter CR reveals the related RMSE value of reconstructed images, so the CR calculation offers an optimized criterion for the iteration process in the SIRT algorithm for DOT imaging. Based on this result, the SIRT can be modified with the CR calculation for self-optimization, and CR serves as an indicator of SIRT image reconstruction in clinical DOT measurement. Based on the comparison between RMSE and CR, a threshold value of CR (CRT) can offer an optimized number of iteration steps for DOT image reconstruction. This paper presents a feasibility study of the CR criterion for SIRT in simulation; the clinical application to DOT measurement relies on further investigation.
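The SIRT iteration itself is simple to state. Below is a minimal Cimmino-type sketch with the residual RMSE recorded at each step, as a CR-style stopping rule would require; the toy linear system stands in for the DOT forward model, which the abstract does not specify:

```python
import numpy as np

def sirt(A, b, n_iter=500):
    """Simultaneous Iterative Reconstruction Technique (Cimmino-type SIRT).

    Update: x_{k+1} = x_k + C A^T R (b - A x_k), where R and C hold the
    inverse row and column sums of A. Returns the final iterate and the
    per-step RMSE of the residual, which a CR criterion could monitor.
    """
    R = 1.0 / A.sum(axis=1)   # inverse row sums
    C = 1.0 / A.sum(axis=0)   # inverse column sums
    x = np.zeros(A.shape[1])
    rmse = []
    for _ in range(n_iter):
        r = b - A @ x
        rmse.append(np.sqrt(np.mean(r ** 2)))
        x = x + C * (A.T @ (R * r))
    return x, rmse

# Tiny consistent test system with true solution [1, 2]:
A = np.array([[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]])
x, rmse = sirt(A, A @ np.array([1.0, 2.0]))
print(np.round(x, 4))  # → [1. 2.]
```

In a clinical setting the true solution (and hence the RMSE against it) is unknown, which is exactly why the paper substitutes a convergence-rate criterion computed from quantities like this residual history.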
Fonseca Diaz, Nestor [Universidad Tecnologica de Pereira, Facultad de Ingenieria Mecanica, Pereira (Colombia); University of Liege, Campus du Sart Tilman, Bat: B49, P33, B-4000 Liege (Belgium)
2009-09-15
This article presents the general procedure for calculating the uncertainty of the net total cooling effect estimated when rating room air conditioners and packaged terminal air conditioners, by means of measurements carried out in a test bench specially designed for this purpose. The uncertainty analysis presented in this work seeks to establish a degree of confidence in the experimental results. This is particularly important considering that international standards related to this type of analysis are ambiguous on the subject. The uncertainty analysis is, moreover, an indispensable requirement of international standard ISO 17025 [ISO, 2005. International Standard 17025. General Requirements for the Competence of Testing and Calibration Laboratories. International Organization for Standardization, Geneva.], which must be applied to obtain the required quality levels according to the World Trade Organization (WTO).
Seyed Mohsen Hosseini Daghigh
2012-03-01
Introduction: Intraluminal brachytherapy is one of the important methods of esophageal cancer treatment. The effect of applicator attenuation is not considered in the dose calculation method released by AAPM TG-43. In this study, the effect of a High-Dose-Rate (HDR) brachytherapy esophageal applicator on dose distribution was investigated. Materials and Methods: A cylindrical PMMA phantom was built to accommodate esophageal applicators of various sizes. EDR2 films were placed at 33 mm from the Ir-192 source and irradiated with 1.5 Gy after planning with the treatment planning system for all applicators. Results: The film dosimetry results at the reference point for the 6, 8, 10, and 20 mm applicators were 1.54, 1.53, 1.48, and 1.50 Gy, respectively. The difference between the measured and treatment planning system results was 0.023 Gy (
Reliability Evaluation Method for IP Multicast Communication under QoS Constraints
Dai Fusheng; Bao Xuecai; Han Weizhan
2011-01-01
In order to estimate the reliability performance of multicast communication under multiple constraint conditions, the weight of service rate and the reliability index are defined, together with their calculation methods. Firstly, according to the Quality of Service requirements, the appropriate routings between the central node and the target nodes that meet the requirements are calculated using an iterative method in the weighted network. Then, the disjoint set of network states and the coefficients of the weighted service rate are calculated by decomposition and merge methods. Lastly, the formula for calculating the service rate is obtained from the disjoint set of network states, completing the calculation of the reliability index. The simulation results show that the reliability of multicast communication can be appropriately reflected by the weight of service rate and the calculation method, which can provide a theoretical basis for the reliability evaluation of multicast communication.
Ruslan, Siti Zaharah Mohd; Jaffar, Maheran Mohd
2017-05-01
Islamic banking in Malaysia offers a variety of products based on Islamic principles. One of these concepts is diminishing musyarakah, which helps Muslims avoid transactions based on riba. Diminishing musyarakah can be defined as an agreement between a capital provider and an entrepreneur that enables the entrepreneur to buy equity in instalments, where profits and losses are shared based on an agreed ratio. The objective of this paper is to determine the internal rate of return (IRR) for a diminishing musyarakah model by applying a numerical method. There are several numerical methods for calculating the IRR, such as an interpolation method and a trial-and-error method using Microsoft Office Excel. In this paper we use a bisection method and a secant method as alternative ways of calculating the IRR. It was found that the diminishing musyarakah model can be adapted in managing the performance of joint venture investments. Therefore, this paper will encourage more companies to use the concept of joint venture in managing their investment performance.
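As an illustration of the bisection approach (with hypothetical cash flows, not figures from the paper), the IRR is the rate at which the net present value of the instalment cash flows crosses zero:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of period t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr_bisection(cashflows, lo=-0.99, hi=10.0, tol=1e-10):
    """Internal rate of return by bisection: repeatedly halve the bracket
    [lo, hi], keeping the half on which NPV changes sign."""
    f_lo = npv(lo, cashflows)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        f_mid = npv(mid, cashflows)
        if abs(f_mid) < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid   # sign change lies in the upper half
        else:
            hi = mid                # sign change lies in the lower half
    return (lo + hi) / 2.0

# Hypothetical instalment-style cash flow: outlay of 100, two receipts of 60.
r = irr_bisection([-100.0, 60.0, 60.0])
print(round(r, 4))  # ≈ 0.1307
```

The secant method replaces the midpoint with the root of the line through the last two NPV evaluations; it usually converges faster but, unlike bisection, is not guaranteed to stay within the bracket.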
Assuring reliability program effectiveness.
Ball, L. W.
1973-01-01
An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.
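For the failure-rate tabulation and reliability-prediction tasks described, the standard constant-failure-rate (exponential) model gives a simple parts-count prediction for a series system. The part failure rates below are invented for illustration:

```python
import math

def series_reliability(failure_rates_per_hr, hours):
    """Reliability of a series system of parts with constant (exponential)
    failure rates: R(t) = exp(-t * sum(lambda_i)). A textbook parts-count
    sketch of the prediction task the article describes, with made-up rates.
    """
    lam = sum(failure_rates_per_hr)   # series system: rates add
    return math.exp(-lam * hours)

# Three parts at 2, 5, and 3 failures per million hours, 1000-hour mission:
print(round(series_reliability([2e-6, 5e-6, 3e-6], 1000.0), 4))  # → 0.99
```

Functional redundancy, mentioned among the improvement tasks, changes the combination rule: two redundant parts fail only if both fail, so their joint unreliability is the product of the individual unreliabilities.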
Pasalich, Dave S; Dadds, Mark R; Hawes, David J; Brennan, John
2011-02-28
Direct observational assessment of parent-child interaction is important in clinical intervention with conduct-problem children, but is costly and resource-intensive. We examined the reliability and validity of a brief measure of parents' relational schemas (RSs) regarding their child. Children (aged 4 to 11 years) and their families receiving treatment at a clinic for externalizing behavior problems (n=150) or mood/developmental disorders (n=28) were assessed using a multi-method, multi-informant procedure. RSs were coded from Five-Minute Speech Samples (FMSS) using the Family Affective Attitude Rating Scale (FAARS), and were compared with directly observed parent-child interaction and questionnaire measures of family and parental dysfunction and conduct problems. Mothers' and fathers' RS scales were internally consistent and could be reliably coded in under 10 min. Less positive RSs and more negative RSs were associated with higher rates of child conduct problems, and were more characteristic of the speech samples of parents of children with externalizing disorders, compared with clinic control parents. RSs demonstrated some associations with parenting behavior and measures of family functioning and symptoms of parental psychopathology, and predicted conduct problems independently of observed parental criticism. The results demonstrate the reliability and validity of the FAARS assessment of parental RSs in clinic-referred families. This brief measure of parent-child dynamics appears well-suited to 'real-world' (i.e., community) clinical settings in which intensive methods of observation are often not feasible.
Chibani, Omar, E-mail: omar.chibani@fccc.edu; C-M Ma, Charlie [Fox Chase Cancer Center, Philadelphia, Pennsylvania 19111 (United States)
2014-05-15
Purpose: To present a new accelerated Monte Carlo code for CT-based dose calculations in high-dose-rate (HDR) brachytherapy. The new code (HDRMC) accounts for both tissue and nontissue heterogeneities (applicator and contrast medium). Methods: HDRMC uses a fast ray-tracing technique and detailed physics algorithms to transport photons through a 3D mesh of voxels representing the patient anatomy with applicator and contrast medium included. A precalculated phase-space file for the ^{192}Ir source is used as the source term. HDRMC is calibrated to calculate absolute dose for real plans. A postprocessing technique is used to include the exact density and composition of nontissue heterogeneities in the 3D phantom. Dwell positions and angular orientations of the source are reconstructed using data from the treatment planning system (TPS). Structure contours are also imported from the TPS to recalculate dose-volume histograms. Results: HDRMC was first benchmarked against the MCNP5 code for a single source in homogeneous water and for a loaded gynecologic applicator in water. The accuracy of the voxel-based applicator model used in HDRMC was also verified by comparing 3D dose distributions and dose-volume parameters obtained using 1-mm^{3} versus 2-mm^{3} phantom resolutions. HDRMC can calculate the 3D dose distribution for a typical HDR cervix case with 2-mm resolution in 5 min on a single CPU. Examples of heterogeneity effects for two clinical cases (cervix and esophagus) were demonstrated using HDRMC. Neglecting tissue heterogeneity for the esophageal case overestimates CTV D90, CTV D100, and spinal cord maximum dose by 3.2%, 3.9%, and 3.6%, respectively. Conclusions: A fast Monte Carlo code for CT-based dose calculations that does not require a prebuilt applicator model is developed for those HDR brachytherapy treatments that use CT-compatible applicators. Tissue and nontissue heterogeneities should be taken into account in modern HDR
Takeuchi, Hiroyoshi; Fervaha, Gagan; Remington, Gary
2016-10-30
This study aimed to assess patients' capacity to perform a patient-reported outcome (PRO) measure (i.e., a self-rating scale) and examine its relationship with clinical characteristics including cognition. Fifty patients with schizophrenia were asked to rate the Subjective Well-being under Neuroleptics scale - Short form (SWNS) twice; the second rating was started immediately after they completed the first to minimize the gap between ratings. At the same time, the Positive and Negative Syndrome Scale (PANSS) and Brief Neurocognitive Assessment (BNA) were administered. The correlations between the two ratings for the SWNS total and each item score were high (rs = 0.94 and rs = 0.60-0.84, respectively); however, for 16 (80%) of the 20 items, 5 or more patients (i.e., ≥10%) demonstrated a score difference of more than 1 point. There was no significant correlation between the SWNS total score difference and any clinical characteristic, including age, education duration, illness duration, antipsychotic dose, psychopathology, and cognition. In contrast, the number of items with a score difference of more than 1 point was significantly correlated with disorganized symptoms and overall severity (rs = 0.29 for both), as well as with working memory and global cognition (rs = -0.41 and rs = -0.40, respectively). These findings suggest that PROs should be interpreted with caution in patients with schizophrenia with prominent disorganization and cognitive impairment.
R. H. R. Stanley
2011-10-01
We present three years of Apparent Oxygen Utilization Rates (AOUR) estimated from oxygen and tracer data collected over the ocean thermocline at monthly resolution between 2003 and 2006 at the Bermuda Atlantic Time-series Study (BATS) site. We estimate water ages by calculating a transit time distribution from tritium and helium-3 data. The vertically integrated AOUR over the upper 500 m, which is a regional estimate of export, during the three years is 3.1 ± 0.5 mol O_{2} m^{−2} yr^{−1}. This is comparable to previous AOUR-based estimates of export production at the BATS site but is several times larger than export estimates derived from sediment traps or ^{234}Th fluxes. We compare AOUR determined in this study to AOUR measured in the 1980s and show AOUR is significantly greater today than decades earlier because of changes in AOU, rather than changes in ventilation rates. The changes in AOU may be a methodological artefact associated with problems with early oxygen measurements.
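Transit time distributions estimated from tritium-helium data are commonly taken to be inverse Gaussian in form, parameterized by a mean age Γ and width Δ. The paper's parameter values are not given in the abstract, so the sketch below uses an assumed form with a hypothetical Γ = Δ = 10 yr and simply checks that it behaves as a probability density with mean age Γ:

```python
import numpy as np

def ttd_inverse_gaussian(t, gamma, delta):
    """Inverse-Gaussian transit time distribution with mean age gamma and
    width delta, a form commonly used in ocean tracer studies (assumed
    here; the abstract does not state the parameterization actually used).
    """
    return np.sqrt(gamma ** 3 / (4.0 * np.pi * delta ** 2 * t ** 3)) * \
        np.exp(-gamma * (t - gamma) ** 2 / (4.0 * delta ** 2 * t))

# Sanity checks: integrates to ~1 and has mean age ~gamma (10 yr here).
dt = 0.01
t = np.arange(dt, 2000.0, dt)
g = ttd_inverse_gaussian(t, gamma=10.0, delta=10.0)
print(round(float(np.sum(g) * dt), 3), round(float(np.sum(t * g) * dt), 2))
```

Given such a distribution, the AOUR follows as the AOU of a sample divided by its TTD-mean age rather than by a single advective age.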
Cattania, C.; Khalid, F.
2016-09-01
The estimation of space- and time-dependent earthquake probabilities, including aftershock sequences, has received increased attention in recent years, and Operational Earthquake Forecasting systems are currently being implemented in various countries. Physics-based earthquake forecasting models compute time-dependent earthquake rates based on Coulomb stress changes, coupled with seismicity evolution laws derived from rate-state friction. While early implementations of such models typically performed poorly compared to statistical models, recent studies indicate that significant performance improvements can be achieved by considering the spatial heterogeneity of the stress field and secondary sources of stress. However, the major drawback of these methods is a rapid increase in computational costs. Here we present a code to calculate seismicity induced by time-dependent stress changes. An important feature of the code is the possibility to include aleatoric uncertainties due to the existence of multiple receiver faults and to the finite grid size, as well as epistemic uncertainties due to the choice of input slip model. To compensate for the growth in computational requirements, we have parallelized the code for shared-memory systems (using OpenMP) and distributed-memory systems (using MPI). Performance tests indicate that these parallelization strategies lead to a significant speedup for problems with different degrees of complexity, ranging from those which can be solved on standard multicore desktop computers, to those requiring a small cluster, to a large simulation that can be run using up to 1500 cores.
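The rate-state seismicity evolution law referred to here is usually the Dieterich (1994) formulation, in which a Coulomb stress step Δτ causes an immediate jump in seismicity rate that decays back to background over an aftershock time t_a = Aσ/τ̇. A hedged sketch (parameter values invented, not taken from the paper's code):

```python
import math

def dieterich_rate(t, dtau, Asigma, tau_rate, r=1.0):
    """Dieterich (1994) seismicity rate at time t after a Coulomb stress
    step dtau, relative to background rate r. Asigma is the rate-state
    parameter A*sigma and tau_rate the background stressing rate, giving
    an aftershock decay time t_a = Asigma / tau_rate. A sketch of the
    evolution law the abstract refers to, not the published code itself.
    """
    ta = Asigma / tau_rate
    gamma = (math.exp(-dtau / Asigma) - 1.0) * math.exp(-t / ta) + 1.0
    return r / gamma

# Positive stress step: immediate rate jump, then Omori-like decay to background.
print(round(dieterich_rate(0.0, dtau=0.5, Asigma=0.1, tau_rate=0.01), 1))  # e^5-fold jump ≈ 148.4
```

The aleatoric uncertainty mentioned in the abstract enters because Δτ differs between candidate receiver-fault orientations, so the forecast rate is an average of such curves over the fault population.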
Nabi, Jameel-Un
2014-01-01
Accurate estimates of the neutrino cooling rates are required in order to study the various stages of stellar evolution of massive stars. Neutrino losses from proto-neutron stars play a crucial role in deciding whether these stars are crushed into black holes or explode as supernovae. Both purely leptonic and weak-interaction processes contribute to the neutrino energy losses in stellar matter. At low temperatures and densities, characteristic of the early phase of presupernova evolution, cooling through neutrinos produced via the weak interaction is important. The proton-neutron quasiparticle random phase approximation (pn-QRPA) theory has recently been used with success for the calculation of stellar weak-interaction rates of fp-shell nuclides. The lepton-to-baryon ratio (Y_e) during early phases of stellar evolution of massive stars changes substantially due solely to electron captures on ^{56}Ni. The stellar matter is transparent to the neutrinos produced during the presupernova evolution of massive star...
M. O. B. Olaogun
2003-02-01
Full Text Available The objective of this study was to determine the reliability and concurrent validity of two pain rating scales: the Visual Analogue Scale (VAS) and the Verbal Rating Scale (VRS). The verbal rating scale was modified by translating the English description of subjective pain experience into vernacular (Yoruba) equivalents and rating the knee pain when the patient was standing with the knee flexed. Twenty-seven patients who were clinically and radiologically diagnosed with osteoarthritis (OA) and with knee pain were purposively selected for the study. Two testers (physiotherapists) independently rated the pain experienced by patients, when bearing full weight while standing on the affected leg with slight knee flexion, over a period of several days. For each patient pain was rated with the VAS and the modified VRS (MVRS). There were significant correlations between VAS and MVRS by the same tester (tester 1 and tester 2: r = 0.92, p < 0.01; r = 0.89, p < 0.01, respectively), and between VAS and MVRS between tester 1 and tester 2 (r = 0.91, p < 0.01). There were no significant differences between VAS for tester 1 and VAS for tester 2, or between MVRS for tester 1 and MVRS for tester 2 (p > 0.01). According to this study, the two pain rating scales for knee OA are reliable. Our use of VAS and MVRS together with the procedure involving the flexed knee posture is, therefore, recommended for wider clinical trials.
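The correlation analysis behind figures such as r = 0.92 can be sketched with the standard Pearson product-moment formula; the rating values below are invented for illustration, not study data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical VAS and MVRS scores (0-10) from the same tester:
vas = [7, 4, 6, 8, 5, 3, 9]
mvrs = [7, 5, 6, 8, 4, 3, 9]
r = pearson_r(vas, mvrs)
```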
Hale, Leigh; McIlraith, Lucy; Miller, Clare; Stanley-Clarke, Terri; George, Rebecca
2010-01-01
Background: Researching falls in persons with ID is limited by difficulties in applying standardised balance outcome measures. The modified Gait Abnormality Rating Scale (GARS-M), developed to identify falls risk in older adults, requires only that the participant walks and thus may be a feasible falls research tool to use with people with ID. In…
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video Photoplethysmography (VPPG) is a numerical technique that processes standard RGB video data of exposed human skin to extract the heart rate (HR) from the imaged skin areas. Being a non-contact, sensor-free technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
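Once a VPPG algorithm has detected pulse peaks in the skin signal, the mean HR follows from the inter-beat intervals; the peak timestamps below are invented for illustration.

```python
def heart_rate_bpm(peak_times_s):
    """Mean heart rate (beats/min) from pulse-peak timestamps:
    60 divided by the mean inter-beat interval in seconds."""
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Hypothetical peaks detected every 0.8 s in the video signal:
hr = heart_rate_bpm([0.0, 0.8, 1.6, 2.4, 3.2])
```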
Reliability of 4 Rating Scales for White Matter Lesions
魏娜; 王拥军; 张玉梅
2012-01-01
Objective: To study the reliability of four widely used rating scales for white matter lesions. Methods: 260 consecutive inpatients with white matter lesions, admitted to the Department of Neurology of Beijing Tiantan Hospital between August 2007 and October 2008, were rated with each of the four scales. Test-retest and inter-rater reliability were analyzed using kappa correlation, and internal consistency was expressed with Cronbach's α. Results: The test-retest reliability, inter-rater reliability, and Cronbach's α of all four scales reached statistical significance. The Ylikoski scale had the best inter-rater reliability and internal consistency (inter-rater kappa = 0.656, P < 0.01; Cronbach's α = 0.901). Conclusion: Each of the four rating scales for white matter lesions has its advantages and disadvantages; the appropriate scale should be chosen according to the focus of the study.
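The internal-consistency statistic used above can be sketched with the standard Cronbach's α formula (population variances); the ratings below are invented for illustration.

```python
def cronbach_alpha(raters):
    """Cronbach's alpha for k raters (or items) scoring the same n subjects.

    raters: list of k lists, one score per subject.
    """
    k, n = len(raters), len(raters[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(var(r) for r in raters)
    totals = [sum(r[j] for r in raters) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# Three hypothetical raters scoring four subjects on a 0-3 scale:
alpha = cronbach_alpha([[1, 2, 3, 0], [1, 3, 3, 0], [2, 2, 3, 1]])
```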
Volotka, A.V.
2006-07-01
Studies of the hyperfine splitting in hydrogen are strongly motivated by the level of accuracy achieved in recent atomic physics experiments, which finally yield model-independent information about nuclear structure parameters with utmost precision. Considering the current status of the determination of corrections to the hyperfine splitting of the ground state in hydrogen, this thesis provides further improved calculations by taking into account the most recent value for the proton charge radius. By comparing theoretical and experimental data of the hyperfine splitting in hydrogen, the proton-size contribution is extracted and a relativistic formula for this contribution is derived in terms of moments of the nuclear charge and magnetization distributions. An iterative scheme for the determination of the Zemach and magnetic radii of the proton is proposed. As a result, the Zemach and magnetic radii are determined and the values are compared with the corresponding ones deduced from data obtained in electron-proton scattering experiments. The extraction of the Zemach radius from a rescaled difference between the hyperfine splitting in hydrogen and in muonium is considered as well. Investigations of forbidden radiative transitions in few-electron ions within ab initio QED provide a most sensitive tool for probing the influence of relativistic electron-correlation and QED corrections on the transition rates. Accordingly, a major part of this thesis is devoted to detailed studies of radiative and interelectronic-interaction effects on the transition probabilities. The renormalized expressions for the corresponding corrections in one- and two-electron ions as well as for ions with one electron over closed shells are derived employing the two-time Green's function method. Numerical results for the correlation corrections to magnetic transition rates in He-like ions are presented. For the first time also the frequency-dependent contribution is calculated, which has to be
Osaka, Motohisa; Murata, Hiroshige; Tateoka, Katsuhiko; Katoh, Takao
2007-07-01
Some traffic accidents are assumed to be due to cardiac events occurring during driving, which are thought to be induced by an imbalance of autonomic nervous activity. This imbalance can be measured by analyzing heart rate variability. We therefore developed a new steering-wheel electrocardiogram system with software to remove noise. We compared the trends of sympathetic and parasympathetic nerve activity measured from the steering-wheel electrocardiograms with those recorded simultaneously from chest leads. For each parameter (instantaneous heart rate and the low- and high-frequency components of heart rate variability), in all cases the trend from the steering-wheel electrocardiogram resembled that from the chest-lead electrocardiogram. In 3 of 7 subjects, the trend of LF/HF showed a strong relationship between the steering-wheel electrocardiogram and the chest-lead electrocardiogram. Our system opens the door to a new strategy for keeping drivers out of risk by warning them while driving.
Fu S
2013-02-01
Full Text Available Shihui Fu, Yuan Liu, Bing Zhu, Tiehui Xiao, Shuangyan Yi, Yongyi Bai, Ping Ye, Leiming Luo; Department of Geriatric Cardiology, Chinese PLA General Hospital, Beijing, People's Republic of China. Objective: As a standard indicator of renal function, the glomerular filtration rate (GFR) is vital for the prognostic analysis of elderly patients with coronary artery disease (CAD). Thus, the search for the GFR calculation equation with the best prognostic ability is an important task. The most commonly used Modification of Diet in Renal Disease (MDRD) equation and the Chinese version of the MDRD equation (CMDRD) have many shortcomings. The newly developed Mayo Clinic quadratic (Mayo) and Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations may overcome these shortcomings. Because the populations involved in the studies behind these equations were almost completely devoid of subjects over 70 years of age, there is ongoing debate about the performance of these equations in the elderly. This study was designed to compare the prognostic abilities of different calculation formulas for the GFR in elderly Chinese patients with CAD. Methods: This study included 1050 patients (≥60 years of age) with CAD. The endpoint was all-cause mortality over a mean follow-up period of 417 days. Results: The median age was 86 years (range 60–104 years). The median values for the MDRD-GFR, CMDRD-GFR, CKD-EPI-GFR, and Mayo-GFR were 66.0, 69.2, 65.6, and 75.8 mL/minute/1.73 m2, respectively. The prevalence of GFR < 60 mL/minute/1.73 m2 based on these measures was 39.3%, 35.4%, 43.0%, and 28.7%, respectively. Their area under the curve values for predicting death were 0.611, 0.610, 0.625, and 0.632, respectively. Their cut-off points for predicting death were 54.1, 53.5, 48.0, and 57.4 mL/minute/1.73 m2, respectively. Compared with the MDRD-GFR, the net reclassification improvement values of the CMDRD-GFR, CKD-EPI-GFR, and Mayo-GFR were 0.02, 0.10, and 0.14, respectively.
Scaled CMOS Technology Reliability Users Guide
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is
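The arithmetic behind accelerated stress testing and FIT rates of the kind described can be sketched as follows; the activation energy, temperatures, and failure counts below are assumed values for illustration, not figures from the study.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

def fit_rate(failures, devices, test_hours, accel_factor):
    """Failure rate in FIT: failures per 1e9 equivalent device-hours at use conditions."""
    return failures / (devices * test_hours * accel_factor) * 1e9

af = arrhenius_af(0.7, 55.0, 125.0)   # assumed Ea = 0.7 eV, 55 C use, 125 C stress
fit = fit_rate(2, 1000, 1000, af)     # 2 fails in 1000 devices over 1000 h of stress
```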
Inoue, Yuichi; Oka, Yasunori; Kagimura, Tatsuo; Kuroda, Kenji; Hirata, Koichi
2013-09-01
This study was conducted to verify the reliability, validity, and responsiveness of the Japanese version of the International Restless Legs Syndrome Study Group Rating Scale for restless legs syndrome (J-IRLS) as a sub-study of a clinical trial of pramipexole against restless legs syndrome. After evaluating the test-retest reliability, concurrent validity and construct validity were analyzed. The responsiveness of the J-IRLS was confirmed by evaluating the correlations between the changes in J-IRLS total score after treatment, the Clinical Global Impression Improvement Scale (CGI-I), and the Patient Global Impression. Test-retest reliability of the J-IRLS was good (intra-class correlation coefficient, 0.877; 95% confidence interval, 0.802-0.925). The correlation coefficient of the J-IRLS total score and the CGI-S score for the first and second visit was 0.804 and 0.796, respectively (both statistically significant); the J-IRLS thus appears suitable for evaluating restless legs syndrome and for assessing drug efficacy. © 2013 The Authors. Psychiatry and Clinical Neurosciences © 2013 Japanese Society of Psychiatry and Neurology.
Kopáček Jaroslav
2016-01-01
Full Text Available This paper focuses on the importance of determining reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is treated as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, along with their application to calculations for serial, parallel, and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
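The serial and parallel calculations mentioned above follow directly from the standard product rules for independent elements; the element reliabilities below are illustrative.

```python
def series_reliability(elements):
    """System works only if every element works: R = product of R_i."""
    r = 1.0
    for ri in elements:
        r *= ri
    return r

def parallel_reliability(elements):
    """System works if any element works: R = 1 - product of (1 - R_i)."""
    q = 1.0
    for ri in elements:
        q *= (1.0 - ri)
    return 1.0 - q

r_series = series_reliability([0.95, 0.99, 0.90])      # 0.84645
r_parallel = parallel_reliability([0.95, 0.99, 0.90])  # 0.99995
```

Note how redundancy (the parallel case) lifts the system reliability far above the best single element, while a serial chain is always weaker than its weakest link.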
Gaffney, J. E., Jr.; Judge, R. W.
1981-01-01
A model of a software development process is described. The software development process is seen to consist of a sequence of activities, such as 'program design' and 'module development' (or coding). A manpower estimate is made by multiplying code size by the rates (man months per thousand lines of code) for each of the activities relevant to the particular case of interest and summing up the results. The effect of four objectively determinable factors (organization, software product type, computer type, and code type) on productivity values for each of nine principal software development activities was assessed. Four factors were identified which account for 39% of the observed productivity variation.
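The estimation rule described above (code size times the sum of per-activity rates) can be sketched directly; the activity names and rate values below are hypothetical, not taken from the study.

```python
def manpower_months(kloc, activity_rates):
    """Total effort in man-months: code size (KLOC) times the sum of
    per-activity rates (man-months per thousand lines of code)."""
    return kloc * sum(activity_rates.values())

# Hypothetical per-activity rates, man-months per KLOC:
rates = {"program design": 0.6, "module development": 1.2, "integration test": 0.8}
effort = manpower_months(10.0, rates)  # 10 KLOC project
```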
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, Shannon information theory, being based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delays and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates the information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of both errors, based on discovering, and then using, the window size at which their absolute values are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.
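The Shannon information rate referred to above is commonly computed from the coherence function via the standard lower-bound formula R = -∫ log2(1 - γ²(f)) df, discretized over frequency bins; the coherence values below are invented for illustration.

```python
import math

def shannon_rate(coherences, df_hz):
    """Information rate (bits/s) from squared coherence gamma^2 per frequency bin:
    R = -sum(log2(1 - gamma^2)) * df."""
    return -sum(math.log2(1.0 - g2) for g2 in coherences) * df_hz

# Hypothetical coherence spectrum sampled in 10 Hz bins:
rate = shannon_rate([0.9, 0.75, 0.5, 0.2], df_hz=10.0)
```

Because log2(1 - γ²) diverges as γ² approaches 1, a positive bias in the coherence estimate (random error) directly inflates the rate, matching the overestimation effect described in the abstract.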
U.S. Geological Survey, Department of the Interior — This dataset includes a reference baseline used by the Digital Shoreline Analysis System (DSAS) to calculate rate-of-change statistics for the exposed north coast of...
U.S. Geological Survey, Department of the Interior — This dataset includes a reference baseline used by the Digital Shoreline Analysis System (DSAS) to calculate rate-of-change statistics for the sheltered north coast...
U.S. Geological Survey, Department of the Interior — This dataset includes a reference baseline used by the Digital Shoreline Analysis System (DSAS) to calculate rate-of-change statistics for the sheltered north coast...
U.S. Geological Survey, Department of the Interior — This dataset includes a reference baseline used by the Digital Shoreline Analysis System (DSAS) to calculate rate-of-change statistics for the sheltered north coast...
U.S. Geological Survey, Department of the Interior — This dataset includes a reference baseline used by the Digital Shoreline Analysis System (DSAS) to calculate rate-of-change statistics for the exposed north coast of...
U.S. Geological Survey, Department of the Interior — This dataset includes a reference baseline used by the Digital Shoreline Analysis System (DSAS) to calculate rate-of-change statistics for the exposed north coast of...
Wang, K; Dang, W; Jönsson, P; Guo, X L; Li, S; Chen, Z B; Zhang, H; Long, F Y; Liu, H T; Li, D F; Hutton, R; Chen, C Y; Yan, J
2016-01-01
Combined relativistic configuration interaction and many-body perturbation calculations are performed for the 359 fine-structure levels of the $2s^2 2p^3$, $2s 2p^4$, $2p^5$, $2s^2 2p^2 3l$, $2s 2p^3 3l$, $2p^4 3l$, and $2s^2 2p^2 4l$ configurations in N-like ions from Ar XII to Zn XXIV. A complete and consistent data set of energies, wavelengths, radiative rates, oscillator strengths, and line strengths for all possible electric dipole, magnetic dipole, electric quadrupole, and magnetic quadrupole transitions among the 359 levels is given for each ion. The present work significantly increases the amount of accurate data for ions in the nitrogen-like sequence, and the accuracy of the energy levels is high enough to support the identification and interpretation of observed spectra involving the $n=3,4$ levels, for which experimental values are scarce. The results should also be of great help in modeling and diagnosing astrophysical and fusion plasmas.
T. William Bentley
2015-05-01
Full Text Available Hydrolyses of acid derivatives (e.g., carboxylic acid chlorides and fluorides, fluoro- and chloroformates, sulfonyl chlorides, phosphorochloridates, anhydrides) exhibit pseudo-first-order kinetics. Reaction mechanisms vary from those involving a cationic intermediate (SN1) to concerted SN2 processes, and further to third-order reactions, in which one solvent molecule acts as the attacking nucleophile and a second molecule acts as a general base catalyst. A unified framework is discussed, in which there are two reaction channels: an SN1-SN2 spectrum and an SN2-SN3 spectrum. Third-order rate constants (k3) are calculated for solvolytic reactions in a wide range of compositions of acetone-water mixtures, and are shown to be either approximately constant or correlated with the Grunwald-Winstein Y parameter. These data, together with kinetic solvent isotope effects, provide the experimental evidence for the SN2-SN3 spectrum (e.g., for chloro- and fluoroformates, chloroacetyl chloride, p-nitrobenzoyl p-toluenesulfonate, and sulfonyl chlorides). Deviations from linearity lead to U- or V-shaped plots, which assist in identifying the point at which the reaction channel changes from SN2-SN3 to SN1-SN2 (e.g., for benzoyl chloride).
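The pseudo-first-order behaviour mentioned above follows the standard exponential decay law; the rate constant used below is an arbitrary example, not a value from the paper.

```python
import math

def conc_remaining(c0, k_obs, t):
    """Pseudo-first-order decay: [A](t) = [A]0 * exp(-k_obs * t)."""
    return c0 * math.exp(-k_obs * t)

def half_life(k_obs):
    """First-order half-life: t_1/2 = ln(2) / k_obs."""
    return math.log(2.0) / k_obs

t_half = half_life(0.01)               # k_obs = 0.01 s^-1 (assumed)
c = conc_remaining(1.0, 0.01, t_half)  # half the initial concentration remains
```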
Song, Lei
2016-01-01
Investigating how formamide forms in the interstellar medium is a hot topic in astrochemistry, which can contribute to our understanding of the origin of life on Earth. We have constructed a QM/MM model to simulate the hydrogenation of isocyanic acid on amorphous solid water (ASW) surfaces to form formamide. The binding energy of HNCO on the ASW surface varies significantly between binding sites; we found values between $\sim$0 and 100 kJ mol$^{-1}$. The barrier for the hydrogenation reaction is almost independent of the binding energy, though. We calculated tunneling rate constants of H + HNCO $\rightarrow$ NH$_2$CO at temperatures down to 103 K, combining QM/MM with instanton theory. Tunneling dominates the reaction at such low temperatures. The tunneling reaction is hardly accelerated by the amorphous solid water surface compared to the gas phase for this system, even though the activation energy of the surface reaction is lower than that of the gas-phase reaction. Both the height and width of the ba...
Power electronics reliability analysis.
Smith, Mark A.; Atcitty, Stanley
2009-12-01
This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
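The fault-tree approach described above combines component failure probabilities through AND/OR gates; this sketch assumes independent components, and the tree structure and probabilities are illustrative, not from the report.

```python
def and_gate(failure_probs):
    """Top event requires all inputs to fail: product of probabilities."""
    p = 1.0
    for q in failure_probs:
        p *= q
    return p

def or_gate(failure_probs):
    """Top event occurs if any input fails: 1 - product of (1 - q_i)."""
    p = 1.0
    for q in failure_probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical tree: the converter fails if its controller fails OR both
# redundant power modules fail.
p_top = or_gate([0.001, and_gate([0.02, 0.02])])
```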
Tsuchiyagaito A
2017-05-01
Full Text Available Aki Tsuchiyagaito,1–3 Satoshi Horiuchi,4 Toko Igarashi,5 Yoshiya Kawanori,4 Yoshiyuki Hirano,1,3 Hirooki Yabe,2 Akiko Nakagawa1,3 1Research Center for Child Mental Development, Chiba University, Chiba, 2Department of Neuropsychiatry, Fukushima Medical University, Fukushima, 3United Graduate School of Child Development, Osaka University, Kanazawa University, Hamamatsu University School of Medicine, Chiba University and University of Fukui, Osaka, 4Faculty of Social Welfare, Iwate Prefectural University, Iwate, 5Graduate School of Education, Joetsu University of Education, Niigata, Japan. Background: The Hoarding Rating Scale-Self-Report (HRS-SR) is a five-item scale that assesses the symptoms of hoarding. These symptoms include excessive acquisition, difficulty in discarding, and excessive clutter that causes distress. We conducted three studies to examine the factor structure, reliability, and validity of the Japanese version of the HRS-SR (HRS-SR-J). Methods: Study 1 examined its reliability; 193 college students and 320 adolescents and adults completed the HRS-SR-J and, of the college students, 32 took it again 2 weeks later. Study 2 aimed to confirm that its scores in a sample of 210 adolescents and adults are independent of social desirability. Study 3 aimed to validate the HRS-SR-J in the aspects of convergent and discriminant validity in a sample of 550 adults. Results: The HRS-SR-J showed good internal consistency and 2-week test–retest reliability. Based on the nonsignificant correlations between the HRS-SR-J and social desirability, the HRS-SR-J was not strongly affected by social desirability. In addition, it had good convergent validity with the Japanese version of the Saving Inventory-Revised (SI-R-J) and the hoarding subscale of the Obsessive-Compulsive Inventory, while having a significantly weaker correlation with the five subscales of the Obsessive-Compulsive Inventory, except for the hoarding subscale. In addition, the
Tanakamaru, Shuhei; Fukuda, Mayumi; Higuchi, Kazuhide; Esumi, Atsushi; Ito, Mitsuyoshi; Li, Kai; Takeuchi, Ken
2011-04-01
A dynamic codeword transition ECC scheme is proposed for highly reliable solid-state drives (SSDs). By monitoring the number of errors or the write/erase cycles, the ECC codeword dynamically increases from 512 Byte (+parity) to 1 KByte, 2 KByte, 4 KByte…32 KByte. The proposed ECC with a larger codeword decreases the failure rate after ECC. As a result, the acceptable raw bit error rate (BER) before ECC is enhanced. Assuming a NAND Flash memory which requires 8-bit correction in a 512 Byte codeword ECC, a 17-times higher acceptable raw BER than the conventional fixed 512 Byte codeword ECC is realized for the mobile phone application without interleaving. For the MP3 player, digital still camera, and high-speed memory card applications with dual-channel interleaving, a 15-times higher acceptable raw BER is achieved. Finally, for the SSD application with 8-channel interleaving, a 13-times higher acceptable raw BER is realized. Because the ratio of user data to parity bits is the same in each ECC codeword, no additional memory area is required. Note that the reliability of the SSD is improved after manufacturing without cost penalty. Compared with the conventional ECC with a fixed large 32 KByte codeword, the proposed scheme achieves lower power consumption by introducing a "best-effort" type of operation. In the proposed scheme, during most of the lifetime of the SSD, a weak ECC with a shorter codeword such as 512 Byte (+parity), 1 KByte, or 2 KByte is used, and 98% lower power consumption is realized. At the end of the SSD's life, a strong ECC with a 32 KByte codeword is used and highly reliable operation is achieved. The random read performance, estimated by the latency, is also discussed. The latency is below 1.5 ms for ECC codewords up to 32 KByte, which is below the 2 ms average latency of a 15,000 rpm HDD.
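The relation between raw BER and the post-ECC failure rate can be sketched with a simple independent-error binomial model; the codeword size and correction strength follow the 512 Byte / 8-bit example in the text, while the raw BER value is an assumption.

```python
from math import comb

def post_ecc_fail_prob(n_bits, t_correct, raw_ber):
    """Probability a codeword is uncorrectable: more than t bit errors
    among n bits, assuming independent bit errors (binomial model)."""
    p_ok = sum(comb(n_bits, i) * raw_ber**i * (1.0 - raw_ber)**(n_bits - i)
               for i in range(t_correct + 1))
    return 1.0 - p_ok

# 512 Byte codeword (4096 data bits, parity ignored), 8-bit correction,
# assumed raw BER of 1e-4:
p_fail = post_ecc_fail_prob(4096, 8, 1e-4)
```

Increasing t (a longer, stronger codeword) drives the failure probability down sharply, which is exactly why the scheme can tolerate a higher raw BER late in the drive's life.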
de Werd, Maartje M E; Hoelzenbein, Angela C; Boelen, Daniëlle H E; Rikkert, Marcel G M Olde; Hüell, Michael; Kessels, Roy P C; Voigt-Radloff, Sebastian
2016-12-01
Errorless learning (EL) is an instructional procedure involving error reduction during learning. Errorless learning is mostly examined by counting correctly executed task steps or by rating them using a Task Performance Scale (TPS). Here, we explore the validity and reliability of a new assessment procedure, the core elements method (CEM), which rates essential building blocks of activities rather than individual steps. Task performance was assessed in 35 patients with Alzheimer's dementia recruited from the Relearning methods on Daily Living task performance of persons with Dementia (REDALI-DEM) study using TPS and CEM independently. Results showed excellent interrater reliabilities for both measure methods (CEM: intraclass coefficient [ICC] = .85; TPS: ICC = .97). Also, both methods showed a high agreement (CEM: mean of measurement difference [MD] = -3.44, standard deviation [SD] = 14.72; TPS: MD = -0.41, SD = 7.89) and correlated highly (>.75). Based on these results, TPS and CEM are both valid for assessing task performance. However, since TPS is more complicated and time consuming, CEM may be the preferred method for future research projects.
DeVries, R. J.; Hann, D. A.; Schramm, H.L.
2015-01-01
This study evaluated the effects of environmental parameters on the probability of capturing endangered pallid sturgeon (Scaphirhynchus albus) using trotlines in the lower Mississippi River. Pallid sturgeon were sampled by trotlines year round from 2008 to 2011. A logistic regression model indicated that water temperature (T) and water depth (D) were significant predictors of capture probability (Y = −1.75 − 0.06T + 0.10D). Habitat type, surface current velocity, river stage, stage change, and non-sturgeon bycatch were not significant predictors (P = 0.26–0.63). Although pallid sturgeon were caught throughout the year, the model predicted that sampling should focus on times when the water temperature is less than 12°C and on deeper water to maximize capture probability; these water temperature conditions commonly occur from November to March in the lower Mississippi River. Further, the significant effects of water temperature, which varies widely over time, and of water depth indicate that any efforts to use the catch rate to infer population trends will require the consideration of temperature and depth in standardized sampling efforts or adjustment of estimates.
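The reported model can be turned into a probability with the logistic link; this sketch assumes Y in the abstract is the logit (log-odds), with T in °C and D in metres.

```python
import math

def capture_probability(temp_c, depth_m):
    """Capture probability from the reported model Y = -1.75 - 0.06*T + 0.10*D,
    assuming Y is the logit."""
    y = -1.75 - 0.06 * temp_c + 0.10 * depth_m
    return 1.0 / (1.0 + math.exp(-y))

p_cold_deep = capture_probability(10.0, 15.0)    # cooler, deeper water
p_warm_shallow = capture_probability(25.0, 5.0)  # warmer, shallower water
```

The negative temperature coefficient and positive depth coefficient reproduce the paper's recommendation: capture probability is highest in cold, deep water.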
El-Minshawy Osama
2010-01-01
Full Text Available Glomerular Filtration Rate (GFR) is considered the best overall index of renal function currently in use. Measurement of the 24-hour urine/plasma creatinine ratio (UV/P) is usually used to estimate GFR, but little is known about its accuracy in different stages of Chronic Kidney Disease (CKD). Aim: to evaluate the performance of UV/P in the classification of CKD by comparing it with isotopic GFR (iGFR). 136 patients with CKD were enrolled in this study; 80 (59%) were males and 48 (35%) were diabetics. Mean age was 46 ± 13 years. Creatinine clearance (CrCl) estimated by UV/P and by Cockcroft-Gault (CG) was obtained for all patients, with iGFR as the reference value. Accuracy of UV/P was 10%, 31%, and 49% within ±10%, ±30%, and ±50% error, respectively (r2 = 0.44). CG gave a better performance, even when the analysis was restricted to diabetics: its accuracy was 19%, 47%, and 72% within ±10%, ±30%, and ±50% error, respectively (r2 = 0.63). Both equations classified CKD poorly. In conclusion, UV/P has poor accuracy in the estimation of GFR, and the accuracy worsens as kidney disease becomes more severe. We conclude that the 24-hour CrCl is not a good substitute for measurement of GFR in patients with CKD.
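The two clearance estimates compared above can be sketched as follows; the Cockcroft-Gault form is the standard published equation, and all input values below are invented for illustration.

```python
def crcl_urine(urine_cr, plasma_cr, urine_vol_ml, minutes=1440):
    """Measured creatinine clearance (mL/min) from a timed urine collection:
    CrCl = (U_cr * V) / (P_cr * t), with U_cr and P_cr in the same units."""
    return (urine_cr * urine_vol_ml) / (plasma_cr * minutes)

def cockcroft_gault(age_years, weight_kg, serum_cr_mg_dl, female=False):
    """Cockcroft-Gault estimated creatinine clearance (mL/min)."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_cr_mg_dl)
    return crcl * 0.85 if female else crcl

cg = cockcroft_gault(46, 72.0, 1.0)          # hypothetical 46-year-old male
measured = crcl_urine(100.0, 1.0, 1440.0)    # hypothetical 24 h collection
```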
Schloemer, Luc Laurent Alexander
2014-12-17
The compliance with the dose rate limits for transport and storage casks (TLB) for spent nuclear fuel from pressurised water reactors can be proved by calculation. This includes the determination of the radioactive sources and the shielding capability of the cask. In this thesis the entire computational chain, which extends from the determination of the source terms to the final Monte Carlo transport calculation, is analysed, and the arising uncertainties are quantified not only by benchmarks but also by variational calculations. The background of these analyses is that comparison with measured dose rates at different TLBs shows an overestimation by the calculated values. According to the studies performed, the overestimation can be explained mainly by the detector characteristics for the measurement of the neutron dose rate and, in the case of the gamma dose rates, additionally by the energy group structure on which the calculation is based. It turns out that consideration of the uncertainties occurring along the computational chain can lead to even greater overestimation. For dose rate calculations on cask loadings with spent uranium fuel assemblies, an uncertainty of (+21/−28 ± 2)% (rel.) for the total gamma dose rate and of (+28±23/−55±4)% (rel.) for the total neutron dose rate is estimated. For mixed loadings with spent uranium and MOX fuel assemblies, an uncertainty of (+24±3/−27±2)% (rel.) for the total gamma dose rate and of (+28±23/−55±4)% (rel.) for the total neutron dose rate is quantified. The results show that the computational chain does not have to be modified, because the calculations performed lead to conservative dose rate predictions, even if high uncertainties arise in neutron dose rate measurements. Thus the uncertainties of the neutron dose rate measurement first have to be decreased to enable a subsequent reduction of the overestimation of the calculated dose rate. In the present thesis
Stancil, Phillip
We propose to compute accurate collisional excitation rate coefficients for rovibrational transitions of CS, SiO, SO, NO, H_2O, and HCN due to H_2, He, or H impact. This extends our previous grant which focused on 3- and 4-atom systems to 4- and 5-atom collision complexes, with dynamics to be performed on 6-9 dimensional potential energy surfaces (PESs). This work, which uses fully quantum mechanical methods for inelastic scattering and incorporates full-dimensional PESs, pushes beyond the state-of-the-art for such calculations, as recently established by our group for rovibrational transitions in CO-H_2 in 6D. Many of the required PESs will be computed as part of this project using ab initio theory and basis sets of the highest level feasible and particular attention will be given to the long range form of the PESs. The completion of the project will result in 6 new global PESs and state-to-state rate coefficients for a large range of initial rovibrational levels for temperatures between 1 and 3000 K. The chosen collision systems correspond to cases where data are limited or lacking, are important coolants or diagnostics, and result in observable emission features in the infrared (IR). The final project results will be important for the analysis of a variety of interstellar and extragalactic environments in which the local conditions of gas density, radiation field, and/or shocks drive the level populations out of equilibrium. In such cases, collisional excitation data are critical to the accurate prediction and interpretation of observed molecular IR emission lines in protoplanetary disks, star-forming regions, planetary nebulae, embedded protostars, photodissociation regions, etc. The use of the proposed collisional excitation data will lead to deeper examination and understanding of the properties of many astrophysical environments, hence elevating the scientific return from the upcoming JWST, as well as from current (SOFIA, Herschel, HST) and past IR missions
Morisato, T.; Ohno, K.; Ohtsuki, T.; Hirose, K.; Sluiter, M.; Kawazoe, Y.
2008-01-01
Carrying out a first-principles calculation assuming a linear relationship between the electron density at the Be nucleus and the electron-capture (EC) decay rate, we explained why 7Be@C60 shows a higher EC decay rate than 7Be crystal, a result originally found experimentally by Ohtsuki et al. [Phys. Rev.
Results from the LHC Beam Dump Reliability Run
Uythoven, J; Carlier, E; Castronuovo, F; Ducimetière, L; Gallet, E; Goddard, B; Magnin, N; Verhagen, H
2008-01-01
The LHC Beam Dumping System is one of the vital elements of the LHC Machine Protection System and has to operate reliably every time a beam dump request is made. Detailed dependability calculations have been made, resulting in expected rates for the different system failure modes. A 'reliability run' of the whole system, installed in its final configuration in the LHC, has been made to discover infant mortality problems and to compare the occurrence of the measured failure modes with the calculated rates.
Kim, Seon-Ha; Lee, Sang-Il; Jo, Min-Woo
2017-08-11
The standard gamble (SG) method is the gold standard for valuing health states as a utility, although it is acknowledged to be a difficult method for valuing health states. This study was conducted in order to compare the SG with the rating scale (RS) and time trade-off (TTO) techniques in terms of their feasibility, comparability, and reliability in a valuation survey of the general Korean population. Five hundred members of the general Korean population were recruited using a multi-stage quota sampling method in Seoul and its surrounding areas, Korea. Respondents evaluated 9 EQ-5D-5L health states using a visual analogue scale (VAS), SG, and TTO during a personal interview. Feasibility was assessed in terms of the level of difficulty, administration time, and inconsistent responses. Comparability was evaluated using the intraclass correlation coefficient (ICC) and the Bland-Altman approach. Test-retest reliability was analyzed using the ICC. Of the three methods, the VAS was the easiest and quickest to answer. The SG method did not differ significantly from the TTO method in administration time or level of difficulty. The SG and TTO values were highly correlated (r = 0.992), and the average mean difference between the SG and TTO values was 0.034. The ICCs of the VAS, SG, and TTO scores were 0.906, 0.841, and 0.827, respectively. This study suggests that the SG method, compared with the VAS and TTO methods, was feasible and offered a reliable tool for population-based health state valuation studies in Korea.
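Test-retest reliability of the kind reported above is typically summarized with an intraclass correlation coefficient. As an illustration only (the abstract does not state which ICC form was used), a minimal one-way random-effects ICC(1,1) can be sketched as:

```python
def icc_oneway(scores):
    """One-way random-effects ICC(1,1) from repeated measurements.

    scores: one list per subject, each containing k repeated ratings.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    # Between-subject and within-subject mean squares
    ms_between = k * sum((sum(row) / k - grand) ** 2 for row in scores) / (n - 1)
    ms_within = sum((x - sum(row) / k) ** 2
                    for row in scores for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Values near 1 (such as the 0.906 reported for the VAS) indicate that repeated administrations rank respondents almost identically.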
Tsuchiyagaito, Aki; Horiuchi, Satoshi; Igarashi, Toko; Kawanori, Yoshiya; Hirano, Yoshiyuki; Yabe, Hirooki; Nakagawa, Akiko
2017-01-01
The Hoarding Rating Scale-Self-Report (HRS-SR) is a five-item scale that assesses the symptoms of hoarding. These symptoms include excessive acquisition, difficulty in discarding, and excessive clutter that causes distress. We conducted three studies to examine the factor structure, reliability, and validity of the Japanese version of the HRS-SR (HRS-SR-J). Study 1 examined its reliability; 193 college students and 320 adolescents and adults completed the HRS-SR-J and, of the college students, 32 took it again 2 weeks later. Study 2 aimed to confirm that its scores in a sample of 210 adolescents and adults are independent of social desirability. Study 3 aimed to validate the HRS-SR-J in the aspects of convergent and discriminant validity in a sample of 550 adults. The HRS-SR-J showed good internal consistency and 2-week test-retest reliability. Based on the nonsignificant correlations between the HRS-SR-J and social desirability, the HRS-SR-J was not strongly affected by social desirability. In addition, it also had a good convergent validity with the Japanese version of the Saving Inventory-Revised (SI-R-J) and the hoarding subscale of the Obsessive-Compulsive Inventory, while having a significantly weaker correlation with the five subscales of the Obsessive-Compulsive Inventory, except for the hoarding subscale. In addition, the strength of the correlation between the HRS-SR-J and the Japanese version of the Patient Health Questionnaire-9 and that between the HRS-SR-J and the Generalized Anxiety Disorder-7 were significantly weaker than the correlation between the HRS-SR-J and the SI-R-J. These results demonstrate that the HRS-SR-J has good convergent and discriminant validity. The HRS-SR-J is a notable self-report scale for examining the severity of hoarding symptoms.
Danikiewicz, Witold
2009-08-01
Gas-phase proton affinities (PA) of a series of 25 small aliphatic carbanions were computed using different Gaussian-3 methods: G3, G3(B3LYP), G3(MP2) and G3(MP2, B3LYP), and Complete Basis Set extrapolation methods: CBS-4M, CBS-Q, CBS-QB3, and CBS-APNO. The results were compared with critically selected experimental data. The analysis shows that for the majority of the studied molecules all compound methods (Gaussian-3 and CBS), except for CBS-4M, give comparable results, which differ by no more than ±2 kcal mol-1 from the experimental data. Taking the calculation time into account, the G3(MP2) and G3(MP2, B3LYP) methods offer the best compromise between accuracy and computational cost. As an additional check, the results obtained by these two methods were compared with values obtained using the CCSD(T) ab initio method with a large basis set. It was also found that some of the published experimental data are erroneous and should be corrected. The results described in this work show that for the majority of the studied compounds, PA values calculated using compound methods can be used with the same or even higher confidence than the experimental ones, because even the largest differences between the Gaussian-3 and CBS methods listed above are still comparable with the accuracy of typical PA measurements.
Xiao-mei LUO
2011-07-01
Full Text Available Objective To compare the value of 3 empirical formulae, namely the Modification of Diet in Renal Disease (MDRD) study equation, the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation, and the cystatin C (Cys C) single-variable equation (eGFR-Cys), in predicting the glomerular filtration rate (GFR) of patients with chronic kidney disease. Methods Ninety-three patients with chronic kidney disease were enrolled in the present study. The plasma clearance of 99mTc-diethylenetriamine pentaacetic acid (DTPA) was measured as the gold standard of GFR (rGFR), and estimated GFR (eGFR) was calculated with the MDRD equation, CKD-EPI equation and eGFR-Cys equation, respectively. The rGFR was then compared with the various eGFR values. Results Compared with rGFR, the mean biases of eGFR from the CKD-EPI equation, eGFR-Cys equation and MDRD study equation were -3.4±10.7, -4.8±11.9 and -5.4±10.4 ml/(min·1.73m2), respectively, with no significant difference among the 3 values. The 30% accuracies of the 3 equations were 74.2%, 72.0% and 64.5%, respectively, again with no significant difference. The 30% accuracy of the CKD-EPI equation was higher than that of the MDRD study equation (75.7%±5.1% vs 54.1%±7.7%) in patients with rGFR above 60 ml/(min·1.73m2). With 60 ml/(min·1.73m2) as the diagnostic cut-off point of GFR damage, the areas under the receiver operating characteristic (ROC) curves were 0.862 for the MDRD study equation, 0.863 for the CKD-EPI equation and 0.877 for the eGFR-Cys equation, with no significant difference among the 3 values. Conclusions There are no significant differences among the 3 equations in predicting the GFR of patients with CKD. However, further studies are needed to investigate whether the MDRD study equation could be replaced by the CKD-EPI equation and the eGFR-Cys equation.
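For concreteness, the 4-variable MDRD study equation discussed above can be sketched as follows; the coefficients are the widely published IDMS-traceable ones (constant 175), not values from this study:

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    # 4-variable MDRD study equation (IDMS-traceable form),
    # eGFR in ml/(min.1.73 m^2); serum creatinine in mg/dl
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr
```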
Reliability and Validity of Self-Rated General Health
齐亚强
2014-01-01
Using data from the 2008 Survey of Internal Migration and Health in China (IMHC), this study examines the reliability and validity of self-rated general health for the Chinese population. Results show that self-rated general health is a highly reliable measure of individual health. Two repeated measures of self-rated general health in the survey are quite consistent, and the difference between the two answers reflects random variation rather than any systematic bias. Nonetheless, there is also some evidence that self-rated general health is likely to be affected by question order in a survey. In addition, this study examines the validity and potential reporting bias of self-rated general health by fitting HOPIT models. Results show that self-rated general health is a valid summary measure of an individual's self-perceived and known health conditions, although it does not reflect bodily functional changes that can hardly be perceived. Because groups of different ages and socioeconomic statuses apply different standards, expectations and perceptions in evaluating health, the comparability of responses across groups is debatable. The response of self-rated general health is strongly correlated with the respondent's chronic medical conditions, the occurrence of acute illness, self-perceived pains
Lutsyshyn, Y.; Halley, J. W.
2011-01-01
We present the results of diffusion Monte Carlo calculations of the elastic transmission of a low-energy beam of helium atoms through a suspended slab of superfluid helium. These calculations represent a significant improvement on variational Monte Carlo methods which were previously used to study this problem. The results are consistent with the existence of a condensate-mediated transmission mechanism, which would result in very fast transmission of pulses through a slab.
Kepler Reliability and Occurrence Rates
Bryson, Steve
2016-10-01
The Kepler mission has produced tables of exoplanet candidates (the "KOI table"), as well as tables of transit detections (the "TCE table"), hosted at the Exoplanet Archive (http://exoplanetarchive.ipac.caltech.edu). Transit detections in the TCE table that are plausibly due to a transiting object are selected for inclusion in the KOI table. KOI table entries that have not been identified as false positives (FPs) or false alarms (FAs) are classified as planet candidates (PCs, Mullally et al. 2015). A subset of PCs have been confirmed as planetary transits with greater than 99% probability, but most PCs have estimated probabilities closer to 90% (Morton & Johnson 2011).
殷青云; 陈劲梅; 罗学荣; 李雪荣
2011-01-01
Objective To evaluate the reliability and validity of two clinical rating scales commonly used for autism, the Childhood Autism Rating Scale (CARS) and the Autism Behavior Checklist (ABC), in our clinical practice. Methods The study examined the reliability of CARS and ABC with Cronbach's α coefficient and the mean inter-item correlation (MIIC), and analysed the construct and criterion-related validity of CARS and ABC with principal component analysis and inter-scale correlations. Results The Cronbach α coefficients of CARS and ABC were 0.781 and 0.810, respectively, and the MIICs were 0.324 and 0.254. The Cronbach α coefficients of the five main ABC dimensions ranged from 0.396 to 0.639. The principal component analysis of the CARS items identified five main factors, which explained 73.00% of the total variance. The principal component analysis of the ABC items identified 19 factors, of which the 5 main factors explained only 38.9% of the total variance. Conclusion Overall, both CARS and ABC have good reliability and validity.
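The internal-consistency statistic reported above, Cronbach's α, has a short closed form; a plain-Python sketch (illustrative, not the authors' code) is:

```python
def cronbach_alpha(items):
    """Cronbach's alpha.

    items: one list per scale item, each containing one score per respondent.
    """
    k = len(items)                 # number of items
    n = len(items[0])              # number of respondents

    def svar(xs):                  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-respondent total scores across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(svar(it) for it in items) / svar(totals))
```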
STEGEMAN, CA; HUISMAN, RM; DEROUW, B; JOOSTEMA, A; DEJONG, PE
1995-01-01
We assessed the agreement between different methods of determining protein catabolic rate (PCR) in hemodialysis patients and the possible influence of postdialysis urea rebound and the length of the interdialytic interval on the PCR determination. Protein catabolic rate derived from measured total u
2015-05-12
He, Shuxiang; Zang, Qiyong; Zhang, Jingyu; Zhang, Han; Wang, Mengqi; Chen, Yixue
2016-05-31
The point kernel integration (PKI) method is widely used for visualization of radiation fields in engineering applications because it can quickly deal with large-scale, geometrically complicated problems. However, traditional PKI programs have many restrictions regarding complicated modeling, complicated source setting, 3D fine-mesh result statistics and large-scale computing efficiency. To overcome these restrictions on visualization of radiation fields, ARShield was developed. The results show that ARShield can handle complicated plant radiation shielding problems for visualization of the radiation field. Comparison with SuperMC and QAD shows that the program is reliable and efficient. ARShield also meets the demands of fast calculation and of interactive modeling and display of 3D geometries on a graphical user interface, avoiding modeling errors in calculation and visualization.
Keney, G.S.
1981-08-01
A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV.
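The core quantity such an activation code evaluates, per target nuclide and neutron group, is the textbook activation-decay relation. A minimal single-nuclide sketch (this is not the ACDOS1 source, whose multigroup and geometry handling is far more involved) might look like:

```python
import math

def induced_activity(n_atoms, sigma_cm2, flux_n_cm2_s,
                     t_irr_s, t_cool_s, half_life_s):
    """Activity (decays/s) of one activation product after irradiating
    for t_irr_s seconds and cooling for t_cool_s seconds."""
    lam = math.log(2) / half_life_s
    production = n_atoms * sigma_cm2 * flux_n_cm2_s   # reactions per second
    # Build-up toward saturation during irradiation, then free decay
    return production * (1.0 - math.exp(-lam * t_irr_s)) * math.exp(-lam * t_cool_s)
```

With irradiation times much longer than the half-life the activity saturates at the production rate, and each subsequent half-life of cooling halves it.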
Hartzell, Allyson L; Shea, Herbert R
2010-01-01
This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMs for reliability and provides detailed information on the different types of failure modes and how to avoid them.
王正刚
2013-01-01
The conventional response surface methodology widely used for structural reliability computation suffers from a large computational burden, limited accuracy, a tendency to produce singular solutions, and the requirement that the performance function be known explicitly. To address these problems, this paper first proposes, based on the principle of least squares, a method that requires only the performance function values corresponding to the design checking points. Exploiting the compact support, non-negativity, smoothness and monotone decrease of the weight function, and selecting design checking points within the domain of influence, a moving least squares fit is used to generate the response surface function iteratively. Combined with the first-order reliability method, the most probable failure point and the reliability index of the structure are then computed, iterating until two successive reliability indexes agree within a given error. A feasible algorithm is given that does not require knowledge of the performance function, or even of its type. Examples show that this method obtains the most probable failure point and reliability index of the structure with higher accuracy and fewer iterations.
Bryans, P; Savin, D W
2008-01-01
We have reanalyzed SUMER observations of a parcel of coronal gas using new collisional ionization equilibrium (CIE) calculations. These improved CIE fractional abundances were calculated using state-of-the-art electron-ion recombination data for K-shell, L-shell, Na-like, and Mg-like ions of all elements from H through Zn and, additionally, Al- through Ar-like ions of Fe. Improved CIE calculations based on these data are presented here. We have also developed a new systematic method for determining the average emission measure (EM) and electron temperature (T_e) of an emitting plasma. With our new CIE data and our new approach for determining the average EM and T_e we have reanalyzed SUMER observations of the solar corona. We have compared our results with those of previous studies and found some significant differences for the derived EM and T_e. We have also calculated the enhancement of coronal elemental abundances compared to their photospheric abundances, using the SUMER observations themselves to determ...
Cunnah, David
2014-07-01
In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture.
2010-10-01
... Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE... Disease (ESRD) Services and Organ Procurement Costs § 413.220 Methodology for calculating the per... factor to account for the most recent estimate of increases in the prices of an appropriate market...
Pedro A. Hidalgo Menéndez
2010-09-01
Full Text Available Abstract Introduction and Objectives: Different mechanisms are involved in the uptake, transportation, delivery and utilization of oxygen in living organisms, and each of them may be affected in the severely ill patient. The purpose of this study was to determine the reliability of the special calculations of oxygenation derived from central venous samples. Methods: A prospective study was performed on 22 patients who underwent cardiac surgery, in which the special calculations obtained from central venous samples were compared to those from mixed venous samples. Results: A statistically significant correlation was found among the arteriovenous oxygen difference, the shunt and the venous oxyhemoglobin saturation. However, a low percentage of reliability was found when the protocolized criteria were applied; a highly significant correction (p < 0.01) obtained through regression equations raised the reliability to more than 90%. Conclusions: Central venous samples are a recommendable alternative for obtaining special calculations of oxygenation during cardiac surgery.
Hernández S, A., E-mail: h.s.alfonso@gmail.com, E-mail: meduardo2001@hotmail.com; Cano, M. E., E-mail: h.s.alfonso@gmail.com, E-mail: meduardo2001@hotmail.com [Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán, Jalisco (Mexico); Torres-Arenas, J., E-mail: torresare@gmail.com [Division de Ciencias e Ingenierías, Universidad de Guanajuato, León, Guanajuato (Mexico)
2014-11-07
Currently the absorption of electromagnetic radiation by magnetic nanoparticles is studied for biomedical applications in cancer thermotherapy. Several experiments are conducted following the framework of the Rosensweig model in order to estimate the specific absorption rate of the particles. Nevertheless, this linear approximation involves strong simplifications which constrain its accuracy and validity range. The main aim of this work is to incorporate deviations from the sphericity assumption for the particle shapes, to improve the determination of the specific absorption rate. The correction to the effective particle volume is computed as a measure of the apparent amount of magnetic material interacting with the external AC magnetic field. Preliminary results using the physical properties of Fe3O4 nanoparticles exhibit an important correction to the estimated specific absorption rate as a function of the apparent mean particle radius. Indeed, for a small deviation (6% of the apparent radius) we have observed changes of up to 40% in the specific absorption rate predicted by the Rosensweig linear approximation.
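The Rosensweig linear-response dissipation that the abstract builds on can be sketched compactly; the parameter values used below are arbitrary illustrations, not the Fe3O4 properties from this work:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def rosensweig_sar(chi0, freq_hz, h_field_a_m, tau_s, density_kg_m3):
    """Specific absorption rate (W/kg) in the Rosensweig linear-response model:
    volumetric dissipation P = pi * mu0 * chi'' * f * H^2, divided by density."""
    wt = 2.0 * math.pi * freq_hz * tau_s            # omega * tau
    chi_imag = chi0 * wt / (1.0 + wt ** 2)          # out-of-phase susceptibility
    power = math.pi * MU0 * chi_imag * freq_hz * h_field_a_m ** 2
    return power / density_kg_m3
```

Dissipation peaks when the field period matches the relaxation time (ωτ = 1), which is why shape-induced changes in the effective volume, and hence in τ, shift the predicted SAR.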
1977-02-01
A method for calculating the freezing rate of homogeneous water-permeable soils by a single column; the method has been developed in the works of B. V. Proskuryakov [1]. In the notation used, L is the length of the freezing column, r is the radius of the frozen-ground cylinder, and τ is time.
Zhu, Lin-Fa; Kim, Soo; Chattopadhyay, Aditi; Goldberg, Robert K.
2004-01-01
A numerical procedure has been developed to investigate the nonlinear and strain rate dependent deformation response of polymer matrix composite laminated plates under high strain rate impact loadings. A recently developed strength of materials based micromechanics model, incorporating a set of nonlinear, strain rate dependent constitutive equations for the polymer matrix, is extended to account for the transverse shear effects during impact. Four different assumptions of transverse shear deformation are investigated in order to improve the developed strain rate dependent micromechanics model. The validities of these assumptions are investigated using numerical and theoretical approaches. A method to determine through the thickness strain and transverse Poisson's ratio of the composite is developed. The revised micromechanics model is then implemented into a higher order laminated plate theory which is modified to include the effects of inelastic strains. Parametric studies are conducted to investigate the mechanical response of composite plates under high strain rate loadings. Results show the transverse shear stresses cannot be neglected in the impact problem. A significant level of strain rate dependency and material nonlinearity is found in the deformation response of representative composite specimens.
A Practical Reliability Calculation Method Based on Uniform-Test Artificial Neural Networks
高博
2012-01-01
This paper presents a new reliability calculation method based on uniform test design, organically combined with artificial neural networks. The artificial neural network replaces the finite element method (FEM) in this procedure chiefly because doing so greatly reduces the amount of computation. First, according to the distributions of the random variables, a limited set of samples is extracted by uniform test design and used as the input parameters for finite element analysis. Second, based on the finite element analysis results, these limited training samples are used to construct an optimal artificial neural network. The generalization ability of the optimal network is applied to obtain valid responses, from which the reliability index of the structural system is calculated. Finally, this method provides a new approach for reliability analysis in practical tests of complex systems, and it is shown to be practical and effective.
Reliability Calculation for Integral Rotary Stirling Cryocoolers Using the Weibull Distribution
罗高乔; 范仙红; 何世安
2011-01-01
This article describes the calculation procedure of the Weibull distribution and analyses the reliability test data of Thales integral rotary Stirling cryocoolers. The results of the numerical-analytic and graphical-estimation approaches to the Weibull reliability calculation are compared. The reliability calculation process and the determination of the acceleration factor for the Thales integral rotary Stirling cryocooler are summarized, providing a reference for reliability test plans and lifetime calculation methods for similar domestic products.
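The Weibull quantities underlying such a reliability analysis are compact; a minimal two-parameter sketch (illustrative only, not the Thales data analysis) is:

```python
import math

def weibull_reliability(t, beta, eta):
    # Survival probability R(t) = exp(-(t/eta)^beta)
    # beta: shape parameter, eta: scale (characteristic life)
    return math.exp(-((t / eta) ** beta))

def weibull_mttf(beta, eta):
    # Mean time to failure: eta * Gamma(1 + 1/beta)
    return eta * math.gamma(1.0 + 1.0 / beta)
```

Interpreting the shape parameter: beta < 1 indicates infant mortality, beta ≈ 1 random failures, and beta > 1 wear-out; an acceleration factor rescales the test time axis before eta is estimated.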
高茜; 管莹; 米其利; 李雪梅; 缪明明; 夭建华
2012-01-01
Using CHO bioengineering cells as the target, the application of a flow cytometer to cell counting and cell survival rate calculation was explored in this paper. The results showed that cell counting and survival rate calculation could be performed accurately by the flow cytometer through the setting of three parameters: side scatter (SS), electronic volume (EV), and fluorescence intensity (FL3). Compared with the blood cell counting plate (hemocytometer) method, the flow cytometer method was more efficient and stable, with faster operation and a lower SD value. Therefore, to improve production efficiency and the reliability of toxicological evaluation, the flow cytometer method is recommended for cell counting and survival rate calculation in large-scale experiments.
The molar balance equations of indirect calorimetry are treated from the point of view of a cause-effect relationship, where the gaseous exchange rates representing the unknown causes need to be inferred from a known noisy effect – gaseous concentrations. Two methods of such inversion are analyzed. Th...
Stevens, Thomas; Lu, HY
2009-01-01
over the late Pleistocene and Holocene. The results demonstrate that sedimentation rates are site specific, extremely variable over millennial timescales and that this variation is often not reflected in grain-size changes. In the central part of the Loess Plateau, the relationship between grain...
U.S. Geological Survey, Department of the Interior — Rates of long-term and short-term shoreline change were generated in a GIS with the Digital Shoreline Analysis System (DSAS) version 2.0, an ArcView extension...
Wang, Jie; Xu, Yanhui; Wang, Yong
2015-01-01
The vacuum chamber of an accelerator storage ring needs a clean ultra-high vacuum environment. A TiZrV getter film deposited on the interior wall of the vacuum chamber can realize distributed pumping, effectively improving the vacuum degree and reducing the longitudinal pressure gradient. However, accumulation of contaminants such as N2 and O2 decreases the adsorption ability of the non-evaporable getter (NEG), which shortens the NEG lifetime. Therefore, coating the NEG thin film with a layer of Pd, which has a high diffusion rate and absorption ability for H2, can extend the service life of the NEG and at the same time improve the pumping rate for H2. With argon as the discharge gas, magnetron sputtering was adopted to prepare TiZrV-Pd films in a long straight pipe. Based on the scanning electron microscope (SEM) results, the deposition rates of the TiZrV-Pd films were analyzed under different deposition parameters: the magnetic field strength, the gas flow rate, discharge current, discharge voltage and working pressu...
Cristinel Popescu
2015-09-01
Full Text Available The paper aims to show how to determine the dielectric loss angle tangent of the electric transformers in transformer stations. The authors present a case study on the dielectric between the high- and medium-voltage windings of an electrical transformer rated 40 MVA.
Fabiana Loddo
2010-05-01
Full Text Available A GPS-based geodetic study at a regional scale requires the availability of a dense network, characterized by 10 km to 30 km spacing, typically realized with a few continuous GPS stations (CGPSs) and several non-permanent GPS stations (NPSs). As short observation times do not allow adequate noise modeling, NPS data need specific processing in which the main differences between NPSs and CGPSs are taken into account: primarily time-series length and antenna repositioning error. The GPS data collected in the 1999-2007 time-span from non-permanent measurement campaigns in the central Apennine area (Italy), recently hit by the Mw 6.3 L'Aquila earthquake (April 6, 2009), are here further analyzed to compute a reliable strain-rate field at a regional scale. Moreover, areas characterized by different kinematics are recognized, and a complete characterization of the regional-scale kinematics is attempted. These new data can be interpreted as indicators from the viewpoint of seismic risk assessment.
Govardhani.Immadi
2014-05-01
Full Text Available With the increasing demand for long-distance telecommunication, satellite communication systems were developed. Satellite communications utilize the L, C, Ku and Ka frequency bands to fulfil all the requirements. Utilization of higher frequencies causes severe attenuation due to rain. Rain attenuation is noticeable for frequencies above 10 GHz. The amount of attenuation depends on whether the operating wavelength is comparable with the rain drop diameter. In this paper the main focus is on drop size distribution using empirical methods, especially the Marshall and Palmer distribution. Empirical methods deal with a power law relation between the rain rate (mm/h) and the radar reflectivity (dBZ). Finally, the rain rate variation, radar reflectivity and drop size distribution are discussed for two rain events at K L University, Vijayawada, on 4th September 2013 and on 18th August 2013.
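The Marshall-Palmer relations referred to above are a simple exponential and a power law; a sketch with the classic published constants (not values fitted to the K L University rain events) is:

```python
import math

def marshall_palmer_nd(d_mm, rain_rate_mm_h):
    # Drop-size distribution N(D) = N0 * exp(-Lambda * D), per m^3 per mm,
    # with N0 = 8000 m^-3 mm^-1 and Lambda = 4.1 * R^-0.21 mm^-1
    # (D in mm, R in mm/h)
    return 8000.0 * math.exp(-4.1 * rain_rate_mm_h ** -0.21 * d_mm)

def reflectivity_dbz(rain_rate_mm_h):
    # Empirical Marshall-Palmer Z-R power law Z = 200 * R^1.6
    # (Z in mm^6/m^3), returned on the logarithmic dBZ scale
    return 10.0 * math.log10(200.0 * rain_rate_mm_h ** 1.6)
```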
Poryazov, V. A.; Krainov, A. Yu.
2016-05-01
A physicomathematical model of combustion of a metallized composite solid propellant based on ammonium perchlorate has been presented. The model takes account of the thermal effect of decomposition of a condensed phase (c phase), convection, diffusion, the exothermal chemical reaction in a gas phase, the heating and combustion of aluminum particles in the gas flow, and the velocity lag of the particles behind the gas. The influence of the granulometric composition of aluminum particles escaping from the combustion surface on the linear rate of combustion has been investigated. It has been shown that information not only on the kinetics of chemical reactions in the gas phase, but also on the granulometric composition of aluminum particles escaping from the surface of the c phase into the gas, is of importance for determination of the linear rate of combustion.
Malleson, N; Andresen, MA
2015-01-01
The crime rate is a statistic used to summarize the risk of criminal events. However, research has shown that choosing the appropriate denominator is non-trivial. Different crime types exhibit different spatial opportunities, and so does the population at risk. The residential population is the most commonly used population at risk, but it is unlikely to be suitable for crimes that involve mobile populations. In this article, we use "crowd-sourced" data in Leeds, England, to measure the population at...
Amy Lansky
Full Text Available BACKGROUND: Injection drug use provides an efficient mechanism for transmitting bloodborne viruses, including human immunodeficiency virus (HIV) and hepatitis C virus (HCV). Effective targeting of resources for prevention of HIV and HCV infection among persons who inject drugs (PWID) is based on knowledge of the population size and disparity in disease burden among PWID. This study estimated the number of PWID in the United States to calculate rates of HIV and HCV infection. METHODS: We conducted a meta-analysis using data from 4 national probability surveys that measured lifetime (3 surveys) or past-year (3 surveys) injection drug use to estimate the proportion of the United States population that has injected drugs. We then applied these proportions to census data to produce population size estimates. To estimate the disease burden among PWID, we calculated rates of disease using lifetime population size estimates of PWID as denominators and estimates of HIV and HCV infection from national HIV surveillance and survey data, respectively, as numerators. We calculated rates of HIV among PWID by gender, age, and race/ethnicity. RESULTS: Lifetime PWID comprised 2.6% (95% confidence interval: 1.8%-3.3%) of the U.S. population aged 13 years or older, representing approximately 6,612,488 PWID (range: 4,583,188-8,641,788) in 2011. The population estimate of past-year PWID was 0.30% (95% confidence interval: 0.19%-0.41%), or 774,434 PWID (range: 494,605-1,054,263). Among lifetime PWID, the 2011 HIV diagnosis rate was 55 per 100,000 PWID; the rate of persons living with a diagnosis of HIV infection in 2010 was 2,147 per 100,000 PWID; and the 2011 HCV infection rate was 43,126 per 100,000 PWID. CONCLUSION: Estimates of the number of PWID and disease rates among PWID are important for program planning and addressing health inequities.
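The two arithmetic steps of the study (scaling a survey proportion to a census denominator, then expressing surveillance counts per 100,000 PWID) can be sketched as below. The 254 million 13-and-over population and the 3,637 diagnosis count are illustrative assumptions chosen to roughly reproduce the published figures, not values taken from the paper:

```python
def rate_per_100k(cases, population):
    """Crude rate: cases per 100,000 members of the population at risk."""
    return cases / population * 100_000

# Population-size step: survey proportion applied to census data.
lifetime_proportion = 0.026          # 2.6% of the U.S. population aged >= 13
us_pop_13_plus = 254_000_000         # approximate 2011 figure (assumption)
pwid_lifetime = lifetime_proportion * us_pop_13_plus   # ~6.6 million PWID

# Disease-burden step: surveillance numerator over the PWID denominator.
hiv_diagnoses_2011 = 3_637           # illustrative count, not from the paper
print(round(rate_per_100k(hiv_diagnoses_2011, pwid_lifetime)))
```

With these inputs the computed rate lands at 55 per 100,000, matching the order of the published diagnosis rate.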
Anonymous
2008-01-01
In this paper we study the analytical and statistical results of estimating the gamma dose rate at the pool access floor in TRR when the core shield accidentally decreases to non-permitted levels. Due to the risk of experimental techniques, we use analytical and statistical methods. In normal conditions (no risk), the discrepancies between experiment and the two methods are justified, and it is found that for such problems these methods have to be normalized to experimental results as follows: the analytical method by a factor of 0.13 and MCNP by 1.7.
Mathiesen, Jonathan M; Secher, Anna L; Ringholm, Lene
2014-01-01
and HbA1c were recorded. Results were compared with 96 women with type 1 diabetes on multiple daily injection therapy. RESULTS: Throughout pregnancy, the carbohydrate-to-insulin ratio decreased at all three main meals. The most pronounced decrease was observed at breakfast, where the carbohydrate-to-insulin ratio was reduced from a median of 12 (range 4-20) in early pregnancy to 3 (2-10) g carbohydrate per unit insulin in late pregnancy. Basal insulin delivery increased by ∼50%, i.e. from 0.8 (0.5-2.2) to 1.2 (0.6-2.5) IU/h at 5 a.m. and from 1.0 (0.6-1.5) to 1.3 (0.2-2.3) IU/h at 5 p.m. during pregnancy. HbA1c levels during pregnancy, the occurrence of severe hypoglycemia and pregnancy outcomes were similar in the two groups. CONCLUSIONS: In women with type 1 diabetes on insulin pump therapy with a bolus calculator, the carbohydrate-to-insulin ratio declined 4-fold from early to late pregnancy, whereas...
Reliability and Its Quantitative Measures
Alexandru ISAIC-MANIU
2010-01-01
Full Text Available This article provides an introduction to software reliability issues through a wide range of statistical indicators designed on the basis of information collected from operation or testing (samples). Reliability issues are also developed for the main reliability laws (exponential, normal, Weibull), which, once validated for a particular system, allow the calculation of reliability indicators with a higher degree of accuracy and trustworthiness.
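For the Weibull law named above, the standard reliability indicators have closed forms; a minimal sketch (using the two-parameter Weibull with shape beta and scale eta, illustrative values only):

```python
import math

def weibull_reliability(t, beta, eta):
    """Survival function R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def weibull_mtbf(beta, eta):
    """Mean time between failures: MTBF = eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1.0 + 1.0 / beta)

# With beta = 1 the Weibull law reduces to the exponential law:
# constant hazard 1/eta and MTBF = eta.
print(weibull_mtbf(1.0, 500.0))
```

The shape parameter beta encodes the failure regime (beta < 1 infant mortality, beta = 1 random failures, beta > 1 wear-out), which is why validating the law before computing indicators matters.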
Coppola, Diego; Laiolo, Marco; Cigolini, Corrado
2016-04-01
The rate at which lava is erupted is a crucial parameter to be monitored during any volcanic eruption. However, its accurate and systematic measurement throughout the whole duration of an event remains a big challenge, even for volcanologists working on highly studied and well-monitored volcanoes. The thermal approach (also known as the thermal proxy) is currently one of the most promising techniques adopted during effusive eruptions, since it allows Time Averaged lava Discharge Rates (TADR) to be estimated from remotely sensed infrared data acquired several times per day. However, due to the complexity of the physics behind the effusive phenomenon and the difficulty of obtaining field validations, the application of the thermal proxy is still debated and limited to a few volcanoes only. Here we present the analysis of MODIS Middle InfraRed data collected during several distinct eruptions, in order to show how an alternative, empirical method (called the radiant density approach; Coppola et al., 2013) permits the estimation of TADRs over a wide range of emplacement styles and lava compositions. We suggest that the simplicity of this empirical approach allows its rapid application during eruptive crises, and provides the basis for more complex models based on the cooling and spreading processes of the active lava bodies.
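The core of the thermal proxy is a single division: the satellite-derived Volcanic Radiative Power is scaled by an empirically calibrated radiant density coefficient. The sketch below assumes a generic coefficient value; the actual c_rad must be calibrated per lava composition and is not taken from the paper:

```python
def tadr_from_vrp(vrp_watts, c_rad):
    """Time Averaged lava Discharge Rate (m^3/s) from Volcanic Radiative
    Power (W) via the radiant density approach: TADR = VRP / c_rad, where
    c_rad (J/m^3) is an empirically calibrated coefficient that depends
    mainly on lava composition and emplacement style."""
    return vrp_watts / c_rad

# Hypothetical numbers for illustration only: a 2 GW thermal anomaly and
# a c_rad of 2.5e8 J/m^3 gives a TADR of 8 m^3/s.
print(tadr_from_vrp(2.0e9, 2.5e8))
```

The appeal for crisis response is exactly this simplicity: once c_rad is calibrated, every MODIS overpass yields a discharge-rate estimate with one multiplication-free step.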
Bulut, Niyazi; Kłos, Jacek; Roncero, Octavio
2015-06-07
We present accurate state-to-state quantum wave packet calculations of integral cross sections and rate constants for the title reaction. Calculations are carried out on the best available ground 1(2)A' global adiabatic potential energy surface of Deskevich et al. [J. Chem. Phys. 124, 224303 (2006)]. Converged state-to-state reaction cross sections have been calculated for collision energies up to 0.5 eV and different initial rotational and vibrational excitations, DCl(v = 0, j = 0 - 1; v = 1, j = 0). Also, initial-state-resolved rate constants of the title reaction have been calculated in the temperature range of 100-400 K. It is found that initial rotational excitation of the DCl molecule does not enhance reactivity, in contrast to the reaction with the isotopologue HCl, in which initial rotational excitation produces an important enhancement. These differences between the isotopologue reactions are analyzed in detail and attributed to the presence of resonances for HCl(v = 0, j), absent in the case of DCl(v = 0, j). For vibrationally excited DCl(v = 1, j), however, the reaction cross section increases noticeably, which is also explained by another resonance.
Torabi, Korosh; Corti, David S
2013-10-17
In the present paper, we develop a method to calculate the rate of homogeneous bubble nucleation within a superheated L-J liquid based on the (n,v) equilibrium embryo free energy surface introduced in the first paper (DOI: 10.1021/jp404149n). We express the nucleation rate as the product of the concentration of critical nuclei within the metastable liquid phase and the relevant forward rate coefficient. We calculate the forward rate coefficient of the critical nuclei from their average lifetime as determined from MD simulations of a large number of embryo trajectories initiated from the transitional region of the metastable liquid configuration space. Therefore, the proposed rate coefficient does not rely on any predefined reaction coordinate. In our model, the critical nuclei belong to the region of the configuration space where the committor probability is about one-half, guaranteeing the dynamical relevance of the proposed embryos. One novel characteristic of our approach is that we define a limit for the configuration space of the equilibrium metastable phase and do not include the configurations that have zero committor probability in the nucleation free energy surface. Furthermore, in order to take into account the transitional degrees of freedom of the critical nuclei, we develop a simulation-based approach for rigorously mapping the free energy of the (n,v) equilibrium embryos to the concentration of the critical nuclei within the bulk metastable liquid phase.
Romanets, Y; Vaz, P; Herrera-Martinez, A; Kadi, Y; Kharoua, C; Lettry, J; Lindroos, M
The EURISOL (EURopean Isotope Separation On-Line Radioactive Ion Beam) project aims at producing high-intensity radioactive ion beams by neutron-induced fission of a fissile target (235U) surrounding a liquid mercury converter. A proton beam of 1 GeV and 4 MW impinges on the Hg converter, generating high neutron fluxes by spallation reactions. In this work the state-of-the-art Monte Carlo codes MCNPX and FLUKA were used to assess the neutronics performance of the system, whose geometry, inspired by the MAFF concept, allows a versatile manipulation of the fission targets. The objective of the study was to optimize the geometry of the system and the materials used in the fuel and reflector elements, in order to achieve the highest possible fission rate.
Gifford, Kent A; Price, Michael J; Horton, John L; Wareing, Todd A; Mourtada, Firas
2008-06-01
The goal of this work was to calculate the dose distribution around a high-dose-rate 192Ir brachytherapy source using a multi-group discrete ordinates code and then to compare the results with a Monte Carlo calculated dose distribution. The unstructured tetrahedral mesh discrete ordinates code Attila version 6.1.1 was used to calculate the photon kerma rate distribution in water around the Nucletron microSelectron mHDRv2 source. MCNPX 2.5.c was used to compute the Monte Carlo water photon kerma rate distribution. Two hundred million histories were simulated, resulting in standard errors of the mean of less than 3% overall. The number of energy groups, S(n) (angular order), P(n) (scattering order), and mesh elements were varied, in addition to the method of analytic ray tracing, to assess their effects on the deterministic solution. Water photon kerma rate matrices were exported from both codes into in-house data analysis software. This software quantified the percent dose difference distribution, the number of points within +/- 3% and +/- 5%, and the mean percent difference between the two codes. The data demonstrated that a 5 energy-group cross-section set reproduced the results of a 15-group cross-section set to within 0.5%. S12 was sufficient to resolve the solution in angle. P2 expansion of the scattering cross-section was necessary to compute accurate distributions. A computational mesh with 55 064 tetrahedral elements in a 30 cm diameter phantom resolved the solution spatially. An efficiency factor of 110 with the above parameters was realized in comparison to MC methods. The Attila code provided an accurate and efficient solution of the Boltzmann transport equation for the mHDRv2 source.
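The comparison metrics computed by the in-house analysis software can be sketched as below; the kerma values are hypothetical placeholders, and the real software operates on full 2-D/3-D matrices rather than short lists:

```python
def compare_kerma(deterministic, monte_carlo):
    """Point-by-point percent difference between two kerma rate grids,
    plus the fraction of points within +/-3% and +/-5% and the mean
    percent difference, mirroring the comparison described above."""
    diffs = [100.0 * (d - m) / m for d, m in zip(deterministic, monte_carlo)]
    within3 = sum(1 for x in diffs if abs(x) <= 3.0) / len(diffs)
    within5 = sum(1 for x in diffs if abs(x) <= 5.0) / len(diffs)
    mean_diff = sum(diffs) / len(diffs)
    return within3, within5, mean_diff

attila = [1.00, 0.52, 0.26, 0.13]   # hypothetical deterministic kerma rates
mcnpx = [1.01, 0.50, 0.26, 0.14]    # hypothetical Monte Carlo reference
print(compare_kerma(attila, mcnpx))
```

Normalizing to the Monte Carlo value treats the stochastic result as the reference, which is the usual convention when benchmarking a deterministic solver.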
Noguchi, Kyotaro; Tanikawa, Toko; Inagaki, Yoshiyuki; Ishizuka, Shigehiro
2017-06-01
Several recent studies have used the net sheet method to estimate fine root production rates in forest ecosystems, wherein net sheets are inserted into the soil and fine roots growing through them are observed. Although this method has advantages in terms of its easy handling and low cost, there are uncertainties in the estimates per unit soil volume or unit stand area, because the net sheet is a two-dimensional material. Therefore, this study aimed to establish calculation procedures for estimating fine root production rates from two-dimensional fine root data on net sheets. This study was conducted in a hinoki cypress (Chamaecyparis obtusa (Sieb. & Zucc.) Endl.) stand in western Japan. We estimated fine root production rates in length and volume from the number (RN) and cross-sectional area (RCSA) densities, respectively, for fine roots crossing the net sheets, which were then converted to dry mass values. For these calculations, we used empirical regression equations or theoretical equations between the RN or RCSA densities on the vertical walls of soil pits and fine root densities in length or volume, respectively, in the soil, wherein the theoretical equations assumed random orientation of the growing fine roots. The estimates of mean fine root (diameter sheets using these calculation procedures, with the empirical regression equations reflecting fine root orientation in the study site.
Grebner, H.; Wang, Y.; Schmipfke, T.; Sievers, J.
2010-06-15
Within the framework of research project RS 1163, the computer code PROST for the quantitative assessment of the structural reliability of pipe components has been further developed. Models were provided and tested for the consideration of the damage mechanism 'corrosion' in determining leak and break probabilities in cylindrical structures of ferritic and austenitic reactor steels. These models are now available in addition to the model for the damage mechanism 'fatigue'. Furthermore, the application range of the code was extended to complex geometries with regard to loading and boundary conditions. Additional code modules were developed to include the results of finite element (FE) calculations. The extended analysis method was tested, amongst others, in the context of calculations for a cracked feedwater nozzle of a steam generator under thermal-mechanical cyclic loading. The stress on cracks was calculated with the FE method. For the determination of leak probabilities, the crack growth due to fatigue was estimated taking into account the 'mixed-mode' loading within the J-integral vector approach. Altogether, the analyses show that with the provided flexible probabilistic analysis method, the quantitative determination of leak probabilities of a detected or postulated crack in a complex structure geometry under thermal-mechanical loading, as a function of operating time and over a range from very small probability values (< 1.0E-8) to large values (> 1.0E-2), is possible. The next development steps should comprise especially the improvement of the accuracy of the method for determining break probabilities, as well as the consideration of approaches to crack formation due to the damage mechanisms 'fatigue' and 'corrosion', based on evaluations of national and international operating experience.
Hillenbrand, Lynne A
2015-01-01
An enigmatic and rare type of young stellar object is the FU Orionis class. Its members are interpreted as "outbursting," that is, currently in a state of accretion enhanced by several orders of magnitude relative to the more modest disk-to-star accretion rates measured in typical T Tauri stars. They are key to our understanding of the history of stellar mass assembly and pre-main-sequence evolution, as well as critical to consider in the chemical and physical evolution of the circumstellar environment -- where planets form. A common supposition is that *all* T Tauri stars undergo repeated such outbursts, more frequently in their earlier evolutionary stages when the disks are more massive, so as to build up the requisite amount of stellar mass on the required time scale. However, the actual data supporting this traditional picture of episodically enhanced disk accretion are limited, and the observational properties of the known sample of FU Ori objects are quite diverse. To improve our understanding of these rare...
Marsodi
2006-01-01
Full Text Available Calculation and analysis of the B/T (burning and/or transmutation) rate of MA (minor actinides) and Pu (plutonium) has been performed for a fast B/T reactor. The study was based on the assumption that a shift of the neutron flux spectrum toward higher neutron energies has a potentially significant role in designing the fast B/T reactor and a remarkable effect on increasing the B/T rate of MA and/or Pu. The spectrum shifts were achieved by changing from MOX to metallic fuel. The blending fraction of MA and/or Pu in the B/T fuel and the volume ratio of fuel to coolant (F/C) in the reactor core were also considered. The performance of the fast B/T reactor was evaluated theoretically based on the results of neutronics and burn-up calculations. In this study, the B/T rate of MA and/or Pu increased with increasing blending fraction of MA and/or Pu and with changes in the F/C ratio. According to the results, the total B/T rate, i.e. [B/T rate]MA + [B/T rate]Pu, could be kept nearly constant under the critical condition if the sum of the MA and Pu inventory in the core is nearly constant. The effect of loading structure was examined for inner or outer loading of concentric geometry and for homogeneous loading. Homogeneous loading of B/T fuel was the best structure for obtaining a higher B/T rate, rather than inner or outer loading.
Reliability and construction control
Sherif S. AbdelSalam
2016-06-01
Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using the EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. Among the major outcomes, the lowest coefficient of variation is associated with Davisson's criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economical design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.
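Reliability-based resistance factor calibration of the kind described rests on a reliability index computed from the bias and scatter of capacity and load. A minimal sketch using the standard closed-form index for lognormal resistance and load (the pile numbers are hypothetical, and the paper's actual calibration procedure may differ):

```python
import math

def lognormal_reliability_index(mean_r, cov_r, mean_q, cov_q):
    """First-order reliability index for lognormal resistance R and load Q,
    a standard closed form used in LRFD calibration:
    beta = ln[(R/Q) * sqrt((1 + COVq^2) / (1 + COVr^2))]
           / sqrt(ln[(1 + COVr^2) * (1 + COVq^2)])"""
    num = math.log((mean_r / mean_q) *
                   math.sqrt((1.0 + cov_q ** 2) / (1.0 + cov_r ** 2)))
    den = math.sqrt(math.log((1.0 + cov_r ** 2) * (1.0 + cov_q ** 2)))
    return num / den

# Hypothetical pile: mean capacity twice the mean load, with COV of 0.4
# on capacity (typical of dynamic formulas) and 0.2 on load.
print(lognormal_reliability_index(2000.0, 0.4, 1000.0, 0.2))
```

A lower coefficient of variation on capacity (as found for Davisson's criterion) raises the index for the same mean margin, which is what drives the resistance factor upward.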
Renshaw, A A
1999-12-25
Rescreening negative Papanicolaou (Pap) smears alone is the most commonly employed method of determining the false-negative rate (FNR), or the false-negative proportion, for a laboratory. Acceptable FNRs have been proposed, and the number of slides needing rescreening to demonstrate a statistically significant difference in FNRs has been determined. The authors sought to determine the range of FNRs this method can measure and, by implication, the value of the method. A literature review and an analysis of the FNRs this method can generate were performed. If one assumes that the FNR of review is the same as that of initial screening, the maximum measured FNR is only 25%, even with a true FNR of anywhere from 0-100%. In fact, as a laboratory's FNR increases beyond 50%, the measured FNR decreases back toward zero. This range of FNRs corresponds very closely to the published range of FNRs of 1.6-28%. Because many authorities believe that 5% may be the lowest achievable FNR, the entire possible range of measured FNRs is only 5-25%. In this setting, a statistically significant difference of 20% is meaningless, and a statistically significant difference of 10% can only be achieved by laboratories with an initial FNR of less than 15% and actual changes in FNR that are much greater than 10%. FNRs determined by review of negative smears without abnormal smears generate unreliable and potentially seriously misleading results. Current methodologies exist for more accurately determining the FNR of Pap smear screening by incorporating abnormal smears into the review process. There is little justification for further review of negative Pap smears alone as a method for determining the FNR of a laboratory. Cancer (Cancer Cytopathol) Copyright 1999 American Cancer Society.
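The non-monotonic behavior described can be reproduced by a simple model in which the rescreen misses abnormal slides at the same rate f as the initial screen, so the measured FNR is proportional to f(1 - f). This is a sketch consistent with the figures quoted in the abstract (maximum 25% at a true FNR of 50%), not necessarily the authors' exact derivation:

```python
def measured_fnr(true_fnr):
    """Fraction of all abnormal slides that are missed initially (rate f)
    but caught on rescreen (rate 1 - f): f * (1 - f). This peaks at 25%
    when the true FNR is 50% and falls back toward zero as the true FNR
    approaches 100%, because the rescreen misses them too."""
    return true_fnr * (1.0 - true_fnr)

grid = [i / 100 for i in range(101)]
peak = max(measured_fnr(f) for f in grid)
print(peak)
print(measured_fnr(0.9) < measured_fnr(0.5))
```

The perverse consequence is visible directly: a laboratory with a 90% true FNR measures a lower FNR than one at 50%, so the statistic cannot distinguish good performance from catastrophic performance.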
Tomlinson, Sean
2016-04-01
The calculation and comparison of physiological characteristics of thermoregulation have provided insight into patterns of ecology and evolution for over half a century. Thermoregulation has typically been explored using linear techniques; I explore the application of non-linear scaling to more accurately calculate and compare characteristics and thresholds of thermoregulation, including the basal metabolic rate (BMR), peak metabolic rate (PMR) and the lower (Tlc) and upper (Tuc) critical limits of the thermo-neutral zone (TNZ) for Australian rodents. An exponentially-modified logistic function accurately characterised the response of metabolic rate to ambient temperature, while evaporative water loss was accurately characterised by a Michaelis-Menten function. When these functions were used to resolve unique parameters for the nine species studied here, the estimates of BMR and TNZ were consistent with previously published estimates. The approach resolved differences in rates of metabolism and water loss between subfamilies of Australian rodents that have not been quantified before. I suggest that non-linear scaling is not only more effective than the established segmented linear techniques, but also more objective. This approach may allow broader and more flexible comparison of characteristics of thermoregulation, but it needs testing with a broader array of taxa than those used here.
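The two function families named above can be sketched as below. The Michaelis-Menten form is standard; the exponentially-modified logistic shown is one plausible parameterization (logistic decline toward a basal plateau plus an exponential term), an assumption rather than the paper's exact formula:

```python
import math

def michaelis_menten(t_ambient, vmax, km):
    """Saturating Michaelis-Menten form used here for evaporative water
    loss versus ambient temperature: EWL = Vmax * T / (Km + T)."""
    return vmax * t_ambient / (km + t_ambient)

def exp_modified_logistic(t_ambient, base, amplitude, k, t_mid, w_exp):
    """A plausible 'exponentially modified logistic' shape (assumption):
    metabolic rate declines logistically toward a basal plateau as
    temperature rises, with a small exponential upturn at high
    temperatures standing in for heat stress above Tuc."""
    logistic = amplitude / (1.0 + math.exp(k * (t_ambient - t_mid)))
    exponential = w_exp * math.exp(0.2 * t_ambient)
    return base + logistic + exponential

# Half-saturation check: at T = Km the Michaelis-Menten EWL is Vmax / 2.
print(michaelis_menten(15.0, 2.0, 15.0))
```

Fitting such forms to the raw data yields Tlc, Tuc and BMR as fitted parameters with confidence intervals, rather than as breakpoints imposed by segmented linear regression; that is the objectivity argument being made.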
王云; 王晓冬; 佘治成
2014-01-01
The primary cause of gas well integrity failure is pipe string leakage, which leads to anomalous annular pressure in gas wells and hence threatens their safe production. The annular leakage rate is the core parameter for judging whether gas well integrity has failed. At present, some companies abroad have instruments that can measure the annular leakage rate in the field, but there is no reliable domestic method to determine it. This paper presents two ways to calculate the annular leakage rate: one is the safety valve method, which makes reference to the leakage rate criteria for downhole safety valves; the other is the differential method, which establishes a theoretical model for gas leaking into the annulus and determines the boundary conditions from the small-hole leakage model for gas pipelines. Because of the effect of field pressure relief data, the downhole safety valve method is not applicable on site, so the differential method may be used to calculate the annular leakage rate. The differential method was applied for example verification in the DN2 gas field of the Tarim Oilfield, and the result agreed well with the real situation, showing that this method for annular leakage rate calculation is reliable and of some reference significance for the field.
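Small-hole leakage models of the kind used for the boundary condition typically assume choked (critical) orifice flow when the pressure ratio is high. The sketch below uses the standard choked-flow formula with rough methane gas properties; the hole size, pressure and temperature are hypothetical, not values from the paper:

```python
import math

def choked_mass_flow(p0_pa, t0_k, orifice_area_m2,
                     gamma=1.3, r_specific=518.3, cd=0.9):
    """Choked mass flow of gas through a small hole, the standard
    boundary condition in small-hole pipeline leakage models:
    mdot = Cd * A * p0 * sqrt(gamma / (R * T0))
           * (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))
    Defaults are rough methane values; all parameters are illustrative."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * orifice_area_m2 * p0_pa * math.sqrt(gamma / (r_specific * t0_k)) * term

# A 1 mm^2 hole at 20 MPa upstream pressure and 350 K (hypothetical well).
print(choked_mass_flow(20e6, 350.0, 1e-6))
```

Under choking the mass flow is independent of the downstream (annulus) pressure and linear in the upstream pressure, which is what makes the differential model tractable to integrate over time.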
Wang, K.; Li, S.; Jönsson, P.; Fu, N.; Dang, W.; Guo, X. L.; Chen, C. Y.; Yan, J.; Chen, Z. B.; Si, R.
2017-01-01
Extensive self-consistent multi-configuration Dirac-Fock (MCDF) calculations and second-order many-body perturbation theory (MBPT) calculations are performed for the lowest 272 states belonging to the 2s22p3, 2s2p4, 2p5, 2s22p23l, and 2s2p33l (l=s, p, d) configurations of N-like Kr XXX. Complete and consistent data sets of level energies, wavelengths, line strengths, oscillator strengths, lifetimes, AJ and BJ hyperfine interaction constants, Landé gJ-factors, and electric dipole (E1), magnetic dipole (M1), electric quadrupole (E2), and magnetic quadrupole (M2) transition rates among all these levels are given. The present MCDF and MBPT results are compared with each other and with other available experimental and theoretical results. The mean relative difference between our two sets of level energies is only about 0.003% for these 272 levels. The accuracy of the present calculations is high enough to facilitate the identification of many observed spectral lines. These accurate data can serve as benchmarks for other calculations and can be useful for fusion plasma research and astrophysical applications.
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (> 1 x 10(exp -20) moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R2 are obtained.
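The pipeline from correlated chemical time to reactor conversion can be sketched as below. The exponential-fit coefficients are placeholders, not the regression values from the report; only the functional shape (exponential in inverse temperature, power laws in the other conditions, first-order conversion) follows the description:

```python
import math

def chemical_time(phi, temperature_k, pressure_atm,
                  a=1.0e-8, b=12000.0, c=-0.5, d=-1.0):
    """Exponential-fit correlation for the chemical kinetic time (s) as a
    function of equivalence ratio, temperature and pressure. The
    coefficients a, b, c, d are hypothetical placeholders."""
    return a * math.exp(b / temperature_k) * (phi ** c) * (pressure_atm ** d)

def first_order_conversion(residence_time_s, tau_chem_s):
    """Simple first-order reaction expression for reactor conversion:
    X = 1 - exp(-t_res / tau_chem)."""
    return 1.0 - math.exp(-residence_time_s / tau_chem_s)

tau = chemical_time(phi=0.6, temperature_k=1800.0, pressure_atm=10.0)
print(first_order_conversion(2e-3, tau))
```

Comparing tau against the turbulent mixing time (a Damkoehler-number check) then tells whether the reactor is kinetics-limited or mixing-limited, which is the "limiting properties" comparison the abstract refers to.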
Marek, C. John; Molnar, Melissa
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first-order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time-step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (greater than 1 x 10(exp -20) moles per cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T(sub 4)). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T(sub 4)) as a function of overall fuel/air ratio, pressure and initial temperature (T(sub 3)). High values of the regression coefficient R squared are obtained.
Rivero, Paulo C.M.; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear
2000-07-01
Nowadays, probabilistic approaches are employed for calculating the reliability of steam generators as a function of defects in their tubes, without any deterministic association with warranty assurance. Unfortunately, probabilistic models produce large failure values, as opposed to the recommendation of the U.S. Code of Federal Regulations that failure probabilities must be as small as possible. In this paper, we propose the association of the deterministic methodology with the probabilistic one. At first, the failure probability evaluation of steam generators follows a probabilistic methodology: to find the failure probability, critical cracks - obtained from Monte Carlo simulations - are limited to lengths in the interval defined by their lower value and the plugging limit, so as to obtain a failure probability of at most 1%. The distribution employed for modeling the observed (measured) cracks considers the same interval. Any length outside the mentioned interval is not considered for the probability evaluation: it is approached by the deterministic model. The deterministic approach is to plug the tube when any anomalous crack is detected in it. Such a crack is an observed one placed in the third region of the plot of the logarithmic time derivative of crack length versus the mode I stress intensity factor, while for normal cracks the plugging of tubes occurs in the second region of that plot - if they are dangerous, of course, considering their random evolution. A methodology for identifying anomalous cracks is also presented. (author)
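The hybrid scheme (probabilistic evaluation inside a truncated crack-length interval, deterministic plugging outside it) can be sketched as below. The crack-length distribution, interval bounds and failure criterion are all illustrative assumptions, not the paper's calibrated model:

```python
import random

def failure_probability(n_samples, lower_mm, plugging_limit_mm,
                        mean_mm=8.0, sigma_mm=3.0, seed=42):
    """Monte Carlo sketch of the hybrid scheme: crack lengths are sampled
    from an assumed normal distribution, but only lengths inside
    [lower, plugging limit] enter the probabilistic failure count.
    Anything outside the interval is handed to the deterministic rule
    (plug the tube), so it never contributes to the computed failure
    probability. All numbers are illustrative."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        length = rng.gauss(mean_mm, sigma_mm)
        if lower_mm <= length <= plugging_limit_mm:
            # inside the probabilistic interval: apply a failure criterion
            if length > 0.9 * plugging_limit_mm:   # hypothetical criterion
                failures += 1
        # outside the interval: deterministic plugging, not counted
    return failures / n_samples

print(failure_probability(100_000, 2.0, 12.0))
```

Truncating the sampled lengths at the plugging limit is what keeps the computed failure probability small: the long cracks that would dominate a purely probabilistic estimate are removed deterministically instead.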
Safronova, U I; Safronova, A S; Beiersdorfer, P
2007-10-08
Transition rates and line strengths are calculated for electric-multipole (E2 and E3) and magnetic-multipole (M1, M2, and M3) transitions between 3s{sup 2}3p{sup 6}3d{sup 10}, 3s{sup 2}3p{sup 6}3d{sup 9}4l, 3s{sup 2}3p{sup 5}3d{sup 10}4l, and 3s3p{sup 6}3d{sup 10}4l states (with 4l = 4s, 4p, 4d, and 4f) in Ni-like ions with the nuclear charges ranging from Z = 34 to 100. Relativistic many-body perturbation theory (RMBPT), including the Breit interaction, is used to evaluate retarded multipole matrix elements. Transition energies used in the calculation of line strengths and transition rates are from second-order RMBPT. Lifetimes of the 3s{sup 2}3p{sup 6}3d{sup 9}4s levels are given for Z = 34-100. Taking into account that calculations were performed in a very broad range of Z, most of the data are presented in graphs as Z-dependencies. The full set of data is given only for Ni-like W ion. In addition, we also give complete results for the 3d4s{sup 3}D{sub 2}-3d4s {sup 3}D{sub 1} magnetic-dipole transition, as the transition may be observed in future experiments, which measure both transition energies and radiative rates. These atomic data are important in the modeling of radiation spectra from Ni-like multiply-charged ions generated in electron beam ion trap experiments as well as for laboratory plasma diagnostics including fusion research.
Wei, Wei; Lv, Zhaofeng; Yang, Gan; Cheng, Shuiyuan; Li, Yue; Wang, Litao
2016-11-01
This study aimed to apply an inverse-dispersion calculation method (IDM) to estimate the emission rate of volatile organic compounds (VOCs) for complicated industrial area sources, through a case study of a petroleum refinery in Northern China. The IDM comprised on-site monitoring of ambient VOCs concentrations and meteorological parameters around the source, calculation of the relationship coefficient γ between the source's emission rate and the ambient VOCs concentration using the ISC3 model, and estimation of the actual VOCs emission rate from the source. Targeting the studied refinery, 10 tests and 8 tests were conducted in March and June of 2014, respectively. The monitoring showed large differences in VOCs concentrations between background and downwind receptors, reaching 59.7 ppbv in March and 248.6 ppbv in June on average. The VOCs increases at receptors mainly consisted of ethane (3.1%-22.6%), propane (3.8%-11.3%), isobutane (8.5%-10.2%), n-butane (9.9%-13.2%), isopentane (6.1%-12.9%), n-pentane (5.1%-9.7%), propylene (6.1%-11.1%) and 1-butylene (1.6%-5.4%). The chemical composition of the VOCs increases in this field monitoring was similar to that reported for VOCs emissions from China's refineries, which indicated that the ambient VOCs increases were predominantly contributed by this refinery. We therefore used the ISC3 model to derive the relationship coefficient γ for each receptor in each test. As a result, the monthly VOCs emissions from this refinery were calculated to be 183.5 ± 89.0 ton in March and 538.3 ± 281.0 ton in June. The estimate in June was much higher than in March, chiefly because the higher environmental temperature in summer produced more VOCs emissions from evaporation and fugitive processes of the refinery. Finally, VOCs emission factors (g VOCs/kg crude oil refined) of 0.73 ± 0.34 (in March) and 2.15 ± 1.12 (in June) were deduced for this refinery, being of the same order as previous direct
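The core of the inverse-dispersion calculation reduces to two steps: derive the source-receptor coefficient γ from a dispersion-model run with a known emission rate, then invert it against the measured concentration excess at the receptor. A minimal sketch with illustrative numbers, not the study's actual ISC3 outputs:

```python
def relationship_coefficient(modeled_conc, model_emission_rate):
    """gamma: modeled receptor concentration per unit emission rate, obtained
    from a dispersion model (such as ISC3) run with a nominal source strength."""
    return modeled_conc / model_emission_rate

def emission_rate(c_downwind, c_background, gamma):
    """Invert the source-receptor relationship: Q = (C_downwind - C_background) / gamma."""
    return (c_downwind - c_background) / gamma
```

For example, if a model run with a nominal 1 g/s source predicts 0.5 ppbv at a receptor (γ = 0.5 ppbv per g/s), a measured excess of 59.7 ppbv implies Q of roughly 119.4 g/s.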
Vianello, E.A.; Biaggio, M.F.; Dr, M.F.; Almeida, C.E. de [Laboratorio de Ciencias Radiologicas- (L.C.R.)-D.B.B.- UERJ- R. Sao Francisco Xavier, 524- Pav. HLC- sala 136- CEP 20550-013 Rio de Janeiro (Brazil)
1998-12-31
In radiation treatments of gynecologic tumors it is necessary to evaluate the quality of the results obtained by different methods for calculating the dose rates at the points of clinical interest (A, rectal, vesical). The present work compares the results obtained by two methods: the Manual Calibration Method (MCM), tri-dimensional (Vianello E. et al., 1998), using orthogonal radiographs for each patient in treatment, and the Theraplan/TP-11 planning system (Theratronics International Limited, 1990), the latter verified experimentally (Vianello et al., 1996). The results show that MCM can be used in physical-clinical practice with a percentage difference comparable to that of the computerized programs. (Author)
Ballester, Facundo; Carlsson Tedgren, Åsa; Granero, Domingo; Haworth, Annette; Mourtada, Firas; Fonseca, Gabriel Paiva; Zourari, Kyveli; Papagiannis, Panagiotis; Rivard, Mark J; Siebert, Frank-André; Sloboda, Ron S; Smith, Ryan L; Thomson, Rowan M; Verhaegen, Frank; Vijande, Javier; Ma, Yunzhi; Beaulieu, Luc
2015-06-01
In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) (192)Ir source and a virtual water phantom were designed, which can be imported into a TPS. A hypothetical, generic HDR (192)Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic (192)Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra(®) Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™ ]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of (201)(3) voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR (192)Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then
Ballester, Facundo, E-mail: Facundo.Ballester@uv.es [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Carlsson Tedgren, Åsa [Department of Medical and Health Sciences (IMH), Radiation Physics, Faculty of Health Sciences, Linköping University, Linköping SE-581 85, Sweden and Department of Medical Physics, Karolinska University Hospital, Stockholm SE-171 76 (Sweden); Granero, Domingo [Department of Radiation Physics, ERESA, Hospital General Universitario, Valencia E-46014 (Spain); Haworth, Annette [Department of Physical Sciences, Peter MacCallum Cancer Centre and Royal Melbourne Institute of Technology, Melbourne, Victoria 3000 (Australia); Mourtada, Firas [Department of Radiation Oncology, Helen F. Graham Cancer Center, Christiana Care Health System, Newark, Delaware 19713 (United States); Fonseca, Gabriel Paiva [Instituto de Pesquisas Energéticas e Nucleares – IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW, School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Zourari, Kyveli; Papagiannis, Panagiotis [Medical Physics Laboratory, Medical School, University of Athens, 75 MikrasAsias, Athens 115 27 (Greece); Rivard, Mark J. [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel, Kiel 24105 (Germany); Sloboda, Ron S. [Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta T6G 1Z2, Canada and Department of Oncology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada); and others
2015-06-15
Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) {sup 192}Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR {sup 192}Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic {sup 192}Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra{sup ®} Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS{sup TM}]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of (201){sup 3} voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR {sup 192}Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by
M. Dalwai
2013-12-01
Conclusion: The SATS has been shown to be a reliable triage scale for a developing country such as Pakistan. With accuracy being acceptable in the context of Timergara, we would suggest further validation studies looking at simple ways of validating the triage scale, bearing in mind the challenges facing a developing-country ED.
Dewey, H M; Donnan, G A; Freeman, E J; Sharples, C M; Macdonell, R A; McNeil, J J; Thrift, A G
1999-01-01
The reliability of the National Institutes of Health Stroke Scale (NIHSS) for use by trained neurologists in clinical trials of acute stroke has been established in several hospital-based studies. However, it also has the potential for application in community-based settings and to be used by nonneurologists: issues which have not been explored before. Hence, we aimed to determine the reliability of the NIHSS when administered by research nurses within the existing North Eastern Melbourne Stroke Incidence Study. Using the NIHSS, thirty-one consecutively registered stroke patients were assessed by 2 neurologists and 1 of 2 trained research nurses. The interrater reliability of observations was compared using weighted and unweighted kappa statistics and intraclass correlation coefficients (ICC). There was a high level of agreement for total scores between the 2 neurologists (ICC = 0.95) and between each neurologist and research nurse (ICC = 0.92 and 0.96). While there was moderate to excellent agreement among neurologists and research nurse (weighted kappa > 0.4) for the majority of the NIHSS items, there was poor agreement for the component 'limb ataxia'. Overall, agreement between nurse and neurologist for individual items was not significantly different from agreement between neurologists. It appears that in both hospital and community settings, trained research nurses can administer the NIHSS with a reliability similar to stroke-trained neurologists. This ability could be used to advantage in large community-based trials and epidemiological studies.
Bae, Kyung Oh; Kim, Dae Woong; Shin, Hyung Seop [Andong National Univ., Andong (Korea, Republic of); Park, Lee Ju; Kim, Hyung Won [Agency for Defense Development, Daejeon (Korea, Republic of)
2016-06-15
Studies on the deformation behavior of materials subjected to impact loads have been carried out in various fields of engineering and industry. The deformation and fracture of members of such machines and structures are known to occur in the intermediate strain-rate region. Therefore, for structural design it is necessary to consider the dynamic deformation behavior in this intermediate strain-rate range. However, there have been few reports with useful data on the deformation and fracture behavior at intermediate strain rates. Because the intermediate strain-rate region lies between the quasi-static and high strain-rate regions, it is difficult to reach with conventional test equipment. To solve this problem, in this study, the measurement reliability of the constructed drop-bar impact tensile test apparatus was established, and the dynamic behavior of carbon steels in the intermediate strain-rate range was evaluated using the apparatus.
Bouazza, Safa; Palmeri, Patrick; Quinet, Pascal
2017-09-01
We present a semi-empirical determination of Mo II radiative parameters over a wide wavelength range, 1716-8789 Å. Our fitting procedure to experimental oscillator strengths available in the literature permits us to provide reliable values for a large number of Mo II lines, predicting previously unmeasured oscillator strengths of lines involving the 4d^4 5p and 4d^3 5s5p odd-parity configurations. The extracted transition radial integral values are compared with ab initio calculations: on average they are 0.88 times the values obtained with the basic pseudo-relativistic Hartree-Fock method, and they agree well when core-polarization effects are included. Surveying our present and previous studies, together with those in the literature, we observe as a general trend a decrease of the transition radial integral values as the nd shell of the same principal quantum number fills, for nd^k(n+1)s → nd^k(n+1)p transitions.
U.S. Geological Survey, Department of the Interior — This dataset consists of short-term (~33 years) shoreline change rates for the north coast of Alaska between Point Barrow and Icy Cape. Rate calculations were...
U.S. Geological Survey, Department of the Interior — This dataset consists of short-term (~33 years) shoreline change rates for the north coast of Alaska between the Colville River and Point Barrow. Rate calculations...
U.S. Geological Survey, Department of the Interior — This dataset consists of long-term (~65 years) shoreline change rates for the north coast of Alaska between the Colville River and Point Barrow. Rate calculations...
Buenker, Robert J; Liebermann, Heinz-Peter
2012-07-15
Ab initio multireference single- and double-excitation configuration interaction calculations have been performed to compute potential curves for ground and excited states of the CaO and SrO molecules and their positronic complexes, e(+)CaO, and e(+)SrO. The adiabatic dissociation limit for the (2)Σ(+) lowest states of the latter systems consists of the positive metal ion ground state (M(+)) and the OPs complex (e(+)O(-)), although the lowest energy limit is thought to be e(+)M + O. Good agreement is found between the calculated and experimental spectroscopic constants for the neutral diatomics wherever available. The positron affinity of the closed-shell X (1)Σ(+) ground states of both systems is found to lie in the 0.16-0.19 eV range, less than half the corresponding values for the lighter members of the alkaline earth monoxide series, BeO and MgO. Annihilation rates (ARs) have been calculated for all four positronated systems for the first time. The variation with bond distance is generally similar to what has been found earlier for the alkali monoxide series of positronic complexes, falling off gradually from the OPs AR value at their respective dissociation limits. The e(+)SrO system shows some exceptional behavior, however, with its AR value reaching a minimum at a relatively large bond distance and then rising to more than twice the OPs value close to its equilibrium distance. Copyright © 2012 Wiley Periodicals, Inc.
Christiansen, Jessie L.; Clarke, Bruce D.; Burke, Christopher J.; Jenkins, Jon M.; Bryson, Stephen T.; Coughlin, Jeffrey L.; Mullally, Fergal; Thompson, Susan E.; Twicken, Joseph D.; Batalha, Natalie M.; Haas, Michael R.; Catanzarite, Joseph; Campbell, Jennifer R.; Kamal Uddin, AKM; Zamudio, Khadeejah; Smith, Jeffrey C.; Henze, Christopher E.
2016-09-01
With each new version of the Kepler pipeline and resulting planet candidate catalog, an updated measurement of the underlying planet population can only be recovered with a corresponding measurement of the Kepler pipeline detection efficiency. Here we present measurements of the sensitivity of the pipeline (version 9.2) used to generate the Q1-Q17 DR24 planet candidate catalog. We measure this by injecting simulated transiting planets into the pixel-level data of 159,013 targets across the entire Kepler focal plane, and examining the recovery rate. Unlike previous versions of the Kepler pipeline, we find a strong period dependence in the measured detection efficiency, with longer (>40 day) periods having a significantly lower detectability than shorter periods, introduced in part by an incorrectly implemented veto. Consequently, the sensitivity of the 9.2 pipeline cannot be cast as a simple one-dimensional function of the signal strength of the candidate planet signal, as was possible for previous versions of the pipeline. We report on the implications for occurrence rate calculations based on the Q1-Q17 DR24 planet candidate catalog, and offer important caveats and recommendations for performing such calculations. As before, we make available the entire table of injected planet parameters and whether they were recovered by the pipeline, enabling readers to derive the pipeline detection sensitivity in the planet and/or stellar parameter space of their choice.
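The injection-and-recovery measurement described above reduces to a binned recovery fraction. A schematic sketch, simplified to orbital-period bins only (the released injection tables support arbitrary planet and stellar parameter spaces):

```python
def detection_efficiency(injections, period_edges):
    """Fraction of injected transit signals recovered per orbital-period bin.
    `injections` is a list of (period_days, recovered_bool) pairs;
    `period_edges` are bin boundaries in days."""
    effs = []
    for lo, hi in zip(period_edges[:-1], period_edges[1:]):
        in_bin = [recovered for period, recovered in injections
                  if lo <= period < hi]
        # NaN marks empty bins rather than a spurious 0% efficiency
        effs.append(sum(in_bin) / len(in_bin) if in_bin else float("nan"))
    return effs
```

A period-dependent drop in these fractions, as reported for the >40 day bins, is exactly what prevents casting the sensitivity as a one-dimensional function of signal strength.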
[Interrater reliability of the Braden scale].
Kottner, Jan; Tannen, Antje; Dassen, Theo
2008-04-01
Pressure ulcer risk assessment scales can assist nurses in determining the individual pressure ulcer risk. Although the Braden scale is widely used throughout Germany, its psychometric properties are as yet unknown. The aim of the study was to determine the interrater reliability of the Braden scale and to compare the results with those of published data. A literature review was conducted: 20 studies measuring the interrater reliability of the Braden scale were evaluated, only three of which investigated the interrater reliability of single items. The Pearson product-moment correlation coefficient (0.80 to 1.00) was calculated in most studies for an evaluation of the Braden scale as a whole. However, the use of correlation coefficients is inappropriate for measuring the interrater reliability of the Braden scale. Measures of the intraclass correlation coefficient varied from 0.83 to 0.99. The investigation of the interrater reliability of the Braden scale's German version was conducted in a German nursing home in 2006. Nurses independently rated 18 and 32 residents twice. Nurses achieved the highest agreement when rating the items "friction and shear" and "activity" (overall proportion of agreement = 0.67 to 0.84, Cohen's Kappa = 0.57 to 0.73). The lowest agreement was achieved when the item "nutrition" (overall proportion of agreement = 0.47 to 0.51, Cohen's Kappa = 0.28 to 0.30) was rated. For 66% of the rated residents the difference in the obtained Braden scores was equal to or less than one point. Intraclass correlation coefficients were 0.91 (95% confidence interval 0.82 to 0.96) and 0.88 (95% confidence interval 0.61 to 0.96). This indicates that the interrater reliability of the Braden scale was high in the examined setting.
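The two agreement statistics quoted above, overall proportion of agreement and Cohen's kappa, can be computed for two raters as follows. This is the standard textbook definition, not code from the study:

```python
from collections import Counter

def proportion_agreement(ratings1, ratings2):
    """Overall proportion of agreement: fraction of identical ratings."""
    return sum(a == b for a, b in zip(ratings1, ratings2)) / len(ratings1)

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa: observed agreement corrected for chance agreement.
    Undefined (division by zero) when chance agreement is exactly 1."""
    n = len(ratings1)
    p_observed = proportion_agreement(ratings1, ratings2)
    counts1, counts2 = Counter(ratings1), Counter(ratings2)
    p_chance = sum(counts1[k] * counts2.get(k, 0) for k in counts1) / (n * n)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

Kappa values near 0.3, as for the "nutrition" item, indicate agreement only modestly better than chance even when raw agreement approaches 50%.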
Fernandez, Rafael P. [INFIQC, Centro Laser de Ciencias Moleculares, Departamento de Fisico Quimica, Facultad de Ciencias Quimicas, Universidad Nacional de Cordoba, 5000, Cordoba (Argentina); Palancar, Gustavo G. [INFIQC, Centro Laser de Ciencias Moleculares, Departamento de Fisico Quimica, Facultad de Ciencias Quimicas, Universidad Nacional de Cordoba, 5000, Cordoba (Argentina)]. E-mail: palancar@fcq.unc.edu.ar; Madronich, Sasha [Atmospheric Chemistry Division, National Center for Atmospheric Research, 1850 Table mesa Drive, Boulder, CO, 80303 (United States); Toselli, Beatriz M. [INFIQC, Centro Laser de Ciencias Moleculares, Departamento de Fisico Quimica, Facultad de Ciencias Quimicas, Universidad Nacional de Cordoba, 5000, Cordoba (Argentina)]. E-mail: tosellib@fcq.unc.edu.ar
2007-03-15
A line-by-line (LBL) method to calculate highly resolved O{sub 2} absorption cross sections in the Schumann-Runge (SR) bands region was developed and integrated in the widely used Tropospheric Ultraviolet Visible (TUV) model to calculate accurate photolysis rate coefficients (J values) in the upper atmosphere at both small and large solar zenith angles (SZA). In order to obtain the O{sub 2} cross section between 49,000 and 57,000 cm{sup -1}, an algorithm which considers the position, strength, and half width of each spectral line was used. Every transition was calculated by using the HIgh-resolution TRANsmission molecular absorption database (HITRAN) and a Voigt profile. The temperature dependence of both the strengths and the half widths was considered within the range of temperatures characteristic of the US standard atmosphere, although the results also show very good agreement at 79 K. The cross section calculation was carried out on a 0.5 cm{sup -1} grid, and for every wavelength the contributions from all lines lying within +/-500 cm{sup -1} were considered. Both the SR and the Herzberg continua were included. By coupling the LBL method to the TUV model, full radiative transfer calculations that compute J values including Rayleigh scattering at high altitudes and large SZA can now be done. Thus, the J value calculations were performed for altitudes from 0 to 120 km and for SZA up to 89{sup o}. The results show, in the J{sub O{sub 2}} case, differences of more than +/-10% (e.g. at 96 km and 30{sup o}) when compared against the last version of the TUV model (4.4), which uses the Koppers and Murtagh parameterization for the O{sub 2} cross section. Consequently, the J values of species with cross sections overlapping the SR band region show variable differences at lower altitudes. Although many species have been analyzed, results are presented for only four of them (O{sub 2}, N{sub 2}O, HNO{sub 3}, CFC12). Due to the fact that the HNO{sub 3} absorption cross
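The LBL bookkeeping, summing line contributions on a fixed wavenumber grid while keeping only lines within +/-500 cm{sup -1} of each grid point, can be sketched as below. For brevity a Lorentzian stands in for the Voigt profile the paper uses, and all line parameters are invented for illustration rather than taken from HITRAN:

```python
import math

def lorentzian(nu, nu0, hwhm):
    """Pressure-broadened line shape, normalized to unit area (cm)."""
    return (hwhm / math.pi) / ((nu - nu0) ** 2 + hwhm ** 2)

def cross_section(grid, lines, cutoff=500.0):
    """Sum line contributions on a wavenumber grid (cm^-1), keeping only lines
    within +/- cutoff cm^-1 of each grid point, as in the LBL scheme.
    `lines` is a list of (center, strength, half-width) tuples."""
    sigma = []
    for nu in grid:
        total = 0.0
        for nu0, strength, hwhm in lines:
            if abs(nu - nu0) <= cutoff:
                total += strength * lorentzian(nu, nu0, hwhm)
        sigma.append(total)
    return sigma
```

In the real scheme each line's strength and half-width would first be rescaled to the local temperature before the summation.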
Jacob, Dayee; Lamberto, Melissa; DeSouza Lawrence, Lana; Mourtada, Firas
To retrospectively compare clinical dosimetry of CT-based tandem-and-ring treatment plans using a model-based dose calculation algorithm (MBDCA) with the standard TG-43-based dose formalism. A cohort of 10 cervical cancer cases treated using tandem-and-ring high-dose-rate applicators was evaluated. The original treatment plans were created using the department's CT-based volume optimization clinical standards. All plans originally calculated with the TG-43 dose calculation formalism were recalculated using the MBDCA. The gross target volume and organs at risk (OARs) were contoured on each data set, along with significant heterogeneities such as air in the cavity and high-density plastic tandem and ring components. The patient tissue was modeled as homogeneous liquid water. D90, D95, and D100 for the gross target volume, and D0.1cm(3), D1.0cm(3), and D2.0cm(3) for bladder, rectum, and sigmoid, were extracted from dose-volume histograms for TG-43 and MBDCA calculated plans. The mean absolute difference ± 2σ in the above metrics was calculated for each plan. Using the manual applicator contouring method, MBDCA plans (n = 10) showed a 2.1 ± 1.1% reduction in dose to Point A on average, a 2.6 ± 0.9% reduction in target D90 dose, and a 2.1 ± 0.3% dose reduction to OARs. Plans using vendor-supplied solid applicator models (n = 5) showed a 2.2 ± 1.1% reduction in dose to Point A on average, a 2.7 ± 0.2% reduction in target D90 dose, and a 2.7 ± 1.0% dose reduction on average to OARs. For unshielded plastic gynecologic applicators, minimal dosimetric changes (<5%) were found using the MBDCA relative to standard TG-43. Use of the solid applicator model is more efficient than manual applicator contouring and yielded similar MBDCA dosimetric results. Currently, TG-186 dose calculations should be reported alongside TG-43 until studies with larger cohorts fully establish the potential of MBDCA dosimetry. Copyright © 2017 American Brachytherapy Society. Published by
Ying He (Vattenfall Research and Development AB, Stockholm (SE))
2007-09-15
In risk analysis of a power system, the risk that the system fails to supply power is calculated from knowledge of the reliability data of individual system components. Meaningful risk analysis requires reasonable and acceptable data; the quality of the data is of fundamental importance for the analysis. However, valid data are expensive to collect, and component reliability performance statistics are not easy to obtain. This report documents the distribution equipment reliability data developed by the project 'Component Reliability Data for Risk Analysis of Distribution Systems' within the Elforsk R&D program 'Risk Analysis 06-10'. The project analyzed a large sample of distribution outages recorded by more than a hundred power utilities in Sweden during 2004-2005, and derived equipment reliability data nationwide. Detailed summaries of these data are presented in the appendices of the report. Component reliability was also investigated at a number of power utilities including Vattenfall Eldistribution AB, Goeteborg Energi Naet AB, E.ON Elnaet Sverige AB, Fortum Distribution, and Linde Energi AB, and reliability data were derived for the individual utilities. Detailed data lists and failure statistics are summarized in the appendices for each participating company. The data provided in this report are developed from a large sample of field outage records and can therefore be used as generic data in system risk analysis and reliability studies. To provide further references and complementary data, the equipment reliability surveys conducted by IEEE were studied in the project. The most significant results obtained by the IEEE surveys are provided in the report, and a summary of the reliability data surveyed by IEEE is presented in the appendix; these data are suggested for use in the absence of better data. The reliability data estimates were derived for sustained failure rates
Reliability estimates for flawed mortar projectile bodies
Cordes, J.A. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)], E-mail: jennifer.cordes@us.army.mil; Thomas, J.; Wong, R.S.; Carlucci, D. [US Army ARDEC, AMSRD-AAR-MEF-E, Analysis and Evaluation Division, Fuze and Precision Armaments Technology Directorate, US Army Armament Research Development and Engineering Center, Picatinny Arsenal, NJ 07806-5000 (United States)
2009-12-15
The Army routinely screens mortar projectiles for defects in safety-critical parts. In 2003, several lots of mortar projectiles had a relatively high defect rate, 0.24%. Before releasing the projectiles, the Army reevaluated the chance of a safety-critical failure. Limit state functions and Monte Carlo simulations were used to estimate reliability. Measured distributions of wall thickness, defect rate, material strength, and applied loads were used with calculated stresses to estimate the probability of failure. The results predicted less than one failure in one million firings. As of 2008, the mortar projectiles have been used without any safety-critical incident.
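The limit-state approach described above, failure whenever the sampled load effect exceeds the sampled strength, is straightforward to sketch with Monte Carlo. The distributions here are illustrative normals, not the measured wall-thickness, defect, strength, and load distributions of the study:

```python
import random

def probability_of_failure(n_samples, strength_mu, strength_sigma,
                           load_mu, load_sigma, seed=1):
    """Monte Carlo on the limit state g = strength - load; failure when g < 0."""
    rng = random.Random(seed)
    failures = sum(
        rng.normalvariate(strength_mu, strength_sigma)
        < rng.normalvariate(load_mu, load_sigma)
        for _ in range(n_samples)
    )
    return failures / n_samples
```

Estimating very small probabilities (such as the reported less-than-one-in-a-million level) requires either a very large sample count or variance-reduction techniques such as importance sampling.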
Cao Xun; Zhao Dong'e; Li Zhicheng; Zhang Bin
2016-01-01
To meet the fragment velocity measurement system's requirements for high data-storage rate and reliability, a fast data storage scheme based on a pipeline design is presented, together with a method of building a virtual memory on the FPGA chip to manage the FLASH bad-block list. The method reduces the storage system's average response time and roughly doubles the data-stream storage rate; the virtual memory also manages the FLASH bad-block list easily and screens out bad blocks effectively, ensuring the reliability of fragment data storage. Tests show that the data storage rate is raised to 2.4 Mbyte/s, 3 times the original rate, and the reliability of data storage is 100%. The method thus effectively improves the storage rate and reliability of the velocity measurement system.
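The "virtual memory" over the FLASH bad-block list amounts to a logical-to-physical block remapping that skips blocks marked bad. A minimal sketch of such a table (block counts and bad-block positions are invented for illustration; the real design lives in FPGA logic, not software):

```python
def build_remap_table(total_blocks, bad_blocks):
    """Logical-to-physical block map that skips known bad FLASH blocks,
    emulating the FPGA-side virtual memory over the bad-block list."""
    bad = set(bad_blocks)
    return [b for b in range(total_blocks) if b not in bad]

def logical_to_physical(table, logical_block):
    """Writers address contiguous logical blocks; the table hides bad ones."""
    return table[logical_block]
```

Because the writer always sees a contiguous logical address space, the pipelined storage path never stalls on a bad-block skip.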
He Jie; Zhang Binbin
2013-01-01
In the probabilistic safety assessment (PSA) of nuclear power plants, there are few historical records in industry for some initiating-event frequencies or component failures. In order to determine noninformative priors for such reliability parameters and initiating-event frequencies, the Jeffreys method of Bayesian statistics can be employed. This paper elaborates the mathematical basis of the Jeffreys prior and the simplified constrained noninformative distribution (SCNID), and derives the Jeffreys noninformative prior formulas and credible intervals for the Gamma-Poisson and Beta-Binomial models. The small-break loss-of-coolant accident (SLOCA) is used as an example to show the application of the Jeffreys prior in determining an initiating-event frequency. The result shows that the Jeffreys method is an effective method for noninformative prior calculation.
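Under the Jeffreys prior, both conjugate updates have closed forms: for a Poisson rate observed over exposure T, the posterior after x events is Gamma(x + 1/2, T); for a binomial failure probability after x failures in n demands, it is Beta(x + 1/2, n - x + 1/2). A sketch with illustrative numbers (the paper's actual SLOCA data are not reproduced here):

```python
def jeffreys_poisson_posterior(events, exposure_time):
    """Jeffreys prior for a Poisson rate is proportional to rate**(-1/2), so the
    posterior is Gamma(events + 1/2, exposure_time).
    Returns (shape, rate, posterior mean)."""
    shape = events + 0.5
    rate = exposure_time
    return shape, rate, shape / rate

def jeffreys_binomial_posterior(failures, demands):
    """Jeffreys prior for a binomial probability is Beta(1/2, 1/2), so the
    posterior is Beta(failures + 1/2, demands - failures + 1/2).
    Returns (alpha, beta, posterior mean)."""
    alpha = failures + 0.5
    beta = demands - failures + 0.5
    return alpha, beta, alpha / (alpha + beta)
```

For instance, zero initiating events in a hypothetical 5000 reactor-years of experience still yields a nonzero posterior mean frequency of 0.5/5000 = 1e-4 per reactor-year, which is the practical point of the noninformative prior.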
2017-01-17
[Report fragment] Accelerated testing for reliability prediction of devices exhibiting multiple failure mechanisms is presented. Also presented was an integrated accelerating and measuring ...
A Reliability Based Model for Wind Turbine Selection
A.K. Rajeevan
2013-06-01
Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated, and cut-out wind speed parameters; hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speeds. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available in the market. From a reliability point of view, to obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for reliability-based site-matching turbine selection.
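The expected power from a Weibull wind-speed model combined with an idealized cut-in/rated/cut-out power curve can be sketched as below. The cubic rise between cut-in and rated speed is a common textbook form, and all turbine and Weibull parameters are illustrative, not taken from the paper:

```python
import math

def turbine_power(v, v_in, v_rated, v_out, p_rated):
    """Idealized power curve: zero outside [cut-in, cut-out), cubic rise
    between cut-in and rated speed, flat at rated power above it."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)

def expected_power(k, c, v_in, v_rated, v_out, p_rated, dv=0.01):
    """Numerically integrate the power curve against the Weibull(k, c) pdf."""
    total, v = 0.0, 0.0
    while v < v_out:
        pdf = (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))
        total += turbine_power(v, v_in, v_rated, v_out, p_rated) * pdf * dv
        v += dv
    return total
```

Dividing the expected power by the rated power gives the capacity factor, the quantity that makes turbines with different cut-in/rated/cut-out parameters comparable for site matching.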
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed for use in numerical combustion codes, such as the National Combustor Code (NCC), or even simple FORTRAN codes. The two-time-step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10(exp -20) moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two-time-step method is used, as opposed to the one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first, time-averaged step is used at initial times for smaller water concentrations; it gives the average chemical kinetic time as a function of initial overall fuel-air ratio, initial water-to-fuel mass ratio, temperature, and pressure. The second, instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentrations, pressure, and temperature (T4). The simple correlations can then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide, and NOx are obtained for Jet-A fuel and methane, with and without water injection, up to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium with Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water-to-fuel mass ratio, pressure, and temperature (T3). The temperature of the gas entering
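The two-step selection logic, a time-averaged correlation below the water-concentration switch point and an instantaneous one above it, plus the comparison against the turbulent mixing time, can be sketched as below. The correlation values themselves are placeholders, since the fitted functional forms are not reproduced here:

```python
WATER_SWITCH = 1e-20  # moles/cc, the switch point quoted in the text

def chemical_kinetic_time(water_conc, tau_time_averaged, tau_instantaneous):
    """Step one (time-averaged correlation) at low water concentration,
    step two (instantaneous correlation) otherwise."""
    if water_conc < WATER_SWITCH:
        return tau_time_averaged
    return tau_instantaneous

def limiting_time(tau_chem, tau_mix):
    """The slower of chemistry and turbulent mixing limits the overall rate."""
    return max(tau_chem, tau_mix)
```

In a combustor code the two tau arguments would come from the fitted correlations evaluated at the local state (fuel-air ratio, water loading, pressure, temperature).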
Maurin, D; Derome, L; Ghelfi, A; Hubert, G
2014-01-01
Particle count rates at a given Earth location and altitude result from the convolution of (i) the interstellar (IS) cosmic-ray fluxes outside the solar cavity, (ii) the time-dependent modulation of IS fluxes into Top-of-Atmosphere (TOA) fluxes, (iii) the rigidity cut-off (or geomagnetic transmission function) and grammage at the counter location, (iv) the atmosphere's response to incoming TOA cosmic rays (shower development), and (v) the counter's response to the various particles/energies in the shower. Count rates from neutron monitors or muon counters are therefore a proxy for solar activity. In this paper, we review all of these ingredients, discuss how their uncertainties affect count rate calculations, and how they translate into variations/uncertainties in the level of solar modulation $\phi$ (in the simple force-field approximation). The main uncertainty for neutron monitors is related to the yield function. However, many other effects have a significant impact, at the 5-10% level, on $\phi$ values. We find no clear ranking...
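For reference, the simple force-field approximation mentioned in the abstract maps an interstellar flux to a TOA flux with the single parameter $\phi$; a minimal sketch for protons (charge-to-mass number 1), with an illustrative power-law IS spectrum assumed for the example, is:

```python
def force_field_toa_flux(j_is, e_kin_gev, phi_gv, m_p=0.938):
    """Force-field approximation for protons.

    The TOA flux at kinetic energy E is the IS flux evaluated at E + phi,
    scaled by the ratio of (relativistic) momenta squared.
    """
    e_is = e_kin_gev + phi_gv  # kinetic energy before entering the heliosphere
    factor = (e_kin_gev * (e_kin_gev + 2.0 * m_p)) / (e_is * (e_is + 2.0 * m_p))
    return j_is(e_is) * factor

# Illustrative (not measured) power-law interstellar proton spectrum:
j_is_example = lambda e: e ** -2.7
```

Because the IS spectrum falls with energy and the scaling factor is below unity, the modulated TOA flux is suppressed relative to the IS flux at the same kinetic energy, which is the qualitative behavior the count-rate proxies trace.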
OSS reliability measurement and assessment
Yamada, Shigeru
2016-01-01
This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the fundamentals of OSS quality/reliability measurement and assessment; the practical applications of OSS reliability modelling; and recent developments in OSS reliability modelling. Offering an ideal reference guide for graduate students and researchers in OSS reliability and modelling, the book introduces several methods of reliability assessment for OSS, including component-oriented reliability analysis based on the analytic hierarchy process (AHP), the analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, as well as stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality, reliable systems using OSS.
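Among the model families listed, the NHPP is the most common starting point for software reliability growth; a minimal sketch of the classic Goel-Okumoto NHPP mean value function (one standard NHPP instance, not necessarily the book's exact formulation) is:

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function.

    Expected cumulative number of faults detected by time t, where
    a = total expected faults and b = per-fault detection rate.
    """
    return a * (1.0 - math.exp(-b * t))

def go_intensity(t, a, b):
    """Failure intensity: the derivative of the mean value function."""
    return a * b * math.exp(-b * t)
```

Fitting `a` and `b` to observed fault-count data (e.g. by maximum likelihood) is what turns such a model into a quantitative reliability assessment for an OSS project.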
Farrar, John T; Troxel, Andrea B; Stott, Colin; Duncombe, Paul; Jensen, Mark P
2008-05-01
The measurement of spasticity as a symptom of neurologic disease is an area of growing interest. Clinician-rated measures of spasticity purport to be objective but do not measure the patient's experience and may not be sensitive to changes that are meaningful to the patient. In a patient with clinical spasticity, the best judge of the perceived severity of the symptom is the patient. The aim of this study was to assess the validity and reliability, and determine the clinical importance, of change on a 0-10 numeric rating scale (NRS) as a patient-rated measure of the perceived severity of spasticity. Using data from a large, randomized, double-blind, placebo-controlled study of an endocannabinoid system modulator in patients with multiple sclerosis-related spasticity, we evaluated the test-retest reliability and comparison-based validity of a patient-reported 0-10 NRS measure of spasticity severity against the Ashworth Scale and the Spasm Frequency Scale. We estimated the level of change from baseline on the 0-10 NRS spasticity scale that constituted a clinically important difference (CID) and a minimal CID (MCID), as anchored to the patient's global impression of change (PGIC). Data from a total of 189 patients were included in this assessment (114 women, 75 men; mean age, 49.1 years). The test-retest reliability analysis found an intraclass correlation coefficient of 0.83. Significant correlations were found between change on the 0-10 NRS and change in the Spasm Frequency Scale (r = 0.63), and between change on the 0-10 NRS and the PGIC (r = 0.47); a change of 18% constituted the MCID. The measurement of the symptom of spasticity using a patient-rated 0-10 NRS was found to be both reliable and valid. The definitions of CID and MCID will facilitate the use of appropriate responder analyses and help clinicians interpret the significance of future results.
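The MCID definition above lends itself to a simple responder check; in this sketch the 18% figure comes from the abstract, while treating improvement as a reduction on the 0-10 NRS is an assumption of the illustration, not a claim from the study:

```python
MCID_PERCENT = 18.0  # minimal clinically important difference (from the abstract)

def percent_change(baseline, follow_up):
    """Percent change from baseline on the 0-10 NRS."""
    return 100.0 * (follow_up - baseline) / baseline

def meets_mcid(baseline, follow_up):
    # Assumption of this sketch: improvement in spasticity is a REDUCTION
    # on the 0-10 NRS, so the MCID is an 18% decrease from baseline.
    return percent_change(baseline, follow_up) <= -MCID_PERCENT
```

A responder analysis would then report the proportion of patients in each arm for whom `meets_mcid` holds, rather than comparing group means alone.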
Reliable predictions of waste performance in a geologic repository
Pigford, T.H.; Chambre, P.L.
1985-08-01
Establishing reliable estimates of the long-term performance of a waste repository requires emphasis on valid theories to predict performance. Predicting the rates at which radionuclides are released from waste packages cannot rest upon empirical extrapolations of laboratory leach data. Reliable predictions can be based on simple bounding theoretical models, such as solubility-limited bulk flow, if the assumed parameters are reliably known or defensibly conservative. Wherever possible, performance analysis should proceed beyond simple bounding calculations to obtain more realistic, and usually more favorable, estimates of expected performance. The desire for greater realism must be balanced against increasing uncertainties in prediction and loss of reliability. Theoretical predictions of release rate based on mass-transfer analysis are bounding, and the theory can be verified. Postulated repository analogues to simulate laboratory leach experiments introduce arbitrary and fictitious repository parameters and are shown not to agree with well-established theory. 34 refs., 3 figs., 2 tabs.
Reliability Analysis of Wireless Sensor Networks Using Markovian Model
Jin Zhu
2012-01-01
This paper investigates the reliability analysis of wireless sensor networks whose topology switches among possible connections governed by a Markov chain. We give quantitative relations between network topology, data acquisition rate, nodes' computational ability, and network reliability. By applying the Lyapunov method, sufficient conditions for network reliability are proposed for such topology-switching networks with constant or varying data acquisition rates. When these conditions are satisfied, the quantity of data transported through a wireless network node will not exceed the node's capacity, so that reliability is ensured. Our theoretical work helps provide a deeper understanding of real-world wireless sensor networks and may find application in network design and topology control.
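The reliability notion described, data backlog at a node staying within capacity while the topology switches according to a Markov chain, can be illustrated with a toy simulation; the transition matrix, rates, and the function itself are hypothetical, not taken from the paper:

```python
import random

def simulate_backlog(P, service_rates, arrival_rate, capacity, steps, seed=0):
    """Toy check: does a node's data backlog stay within capacity
    while the topology mode evolves as a Markov chain?

    P             : row-stochastic transition matrix over topology modes
    service_rates : data the node can forward per step, in each mode
    arrival_rate  : data arriving at the node per step
    """
    rng = random.Random(seed)
    mode, backlog = 0, 0.0
    for _ in range(steps):
        backlog = max(0.0, backlog + arrival_rate - service_rates[mode])
        if backlog > capacity:
            return False  # reliability violated: node buffer overflow
        # Markovian switch to the next topology mode
        r, acc = rng.random(), 0.0
        for j, p in enumerate(P[mode]):
            acc += p
            if r < acc:
                mode = j
                break
    return True
```

The paper's Lyapunov conditions play the role of proving this property analytically for all sample paths, rather than checking it on simulated ones.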