Energy Technology Data Exchange (ETDEWEB)
Rodricks, J.V. (Environ International Corp., Arlington, VA (United States))
1992-01-01
This book is a clear, practical, and balanced view of toxicology and risk management. The introduction argues the case for risk assessment and outlines the benefits and problems associated with chemical exposure. The first part of the book covers the basic science and the sources of human exposure to chemicals. Absorption, distribution, metabolism, and excretion are covered in some detail. The subsequent chapter gives a lively discussion of toxicity studies and then describes slow and fast poisons. The author gives the arguments both for and against animal testing. There is much public bewilderment caused by reports of cancer-causing pesticides in apple juice and poisons emanating from nearby hazardous waste sites. The author believes that too much has been written in an attempt to expose governmental and corporate ignorance, negligence, and corruption. This book is less a polemic and more a clear, unbiased clarification of the scientific basis for our concerns and uncertainties. It should serve to refocus the debate.
DEFF Research Database (Denmark)
Røder, Martin Andreas; Berg, Kasper Drimer; Loft, Mathias Dyrberg
2017-01-01
BACKGROUND: It can be challenging to predict the risk of biochemical recurrence (BR) during follow-up after radical prostatectomy (RP) in men who have undetectable prostate-specific antigen (PSA), even years after surgery. OBJECTIVE: To establish and validate a contemporary nomogram that predicts the absolute risk of BR every year after RP in men with undetectable PSA while accounting for competing risks of death. DESIGN, SETTING, AND PARTICIPANTS: A total of 3746 patients from Rigshospitalet (Copenhagen, Denmark) and Stanford Urology (Stanford, CA, USA) who underwent RP between 1995 and 2013 were included. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Time to BR was defined as the first PSA result ≥0.2 ng/ml. BR risk was computed using multiple cause-specific Cox regression including preoperative PSA, pT category, RP Gleason score (GS), and surgical margin (R) status. Death without BR…
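The competing-risks setup in this study, where BR and death without BR are mutually exclusive first events, can be sketched with a minimal Aalen-Johansen cumulative incidence estimator. This is a simplified stand-in for the paper's cause-specific Cox regression, and the data and function names are illustrative, not the study's:

```python
def cuminc(records, cause=1):
    """Aalen-Johansen cumulative incidence for one cause in the presence of
    competing risks. records: list of (time, event) tuples with event codes
    0 = censored, 1 = cause of interest, 2 = competing event."""
    event_times = sorted({t for t, e in records if e != 0})
    surv = 1.0   # probability of being event-free just before time t
    cif = 0.0    # cumulative incidence of the cause of interest
    curve = []
    for t in event_times:
        at_risk = sum(1 for ti, _ in records if ti >= t)
        d_cause = sum(1 for ti, e in records if ti == t and e == cause)
        d_any = sum(1 for ti, e in records if ti == t and e != 0)
        cif += surv * d_cause / at_risk      # increment for cause of interest
        surv *= 1.0 - d_any / at_risk        # update overall event-free survival
        curve.append((t, cif))
    return curve

# Toy data: years to first event after RP; 1 = BR, 2 = death without BR
records = [(1.0, 1), (2.0, 2), (3.0, 0), (4.0, 1)]
curve = cuminc(records, cause=1)
```

Unlike a naive one-minus-Kaplan-Meier on BR alone, this estimator does not overstate BR risk when patients die before a possible recurrence, which is the point of the competing-risks analysis described above.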
Cardiovascular risk calculation
African Journals Online (AJOL)
James A. Ker
2014-08-20
Introduction. Cardiovascular disease remains a major cause of global mortality and morbidity. Atherosclerosis is the main underlying cause in the majority of cardiovascular disease events. Traditional independent risk factors for cardiovascular disease include age, abnormal lipid levels, elevated blood…
Calculating Risk: Radiation and Chernobyl.
Gale, Robert Peter
1987-01-01
Considers who is at risk in a disaster such as Chernobyl. Assesses the difficulty in translating information regarding radiation to the public and in determining the acceptability of technological risks. (NKA)
The new pooled cohort equations risk calculator
DEFF Research Database (Denmark)
Preiss, David; Kristensen, Søren L
2015-01-01
…total cardiovascular risk score. During development of joint guidelines released in 2013 by the American College of Cardiology (ACC) and American Heart Association (AHA), the decision was taken to develop a new risk score. This resulted in the ACC/AHA Pooled Cohort Equations Risk Calculator. This risk… disease and any measure of social deprivation. An early criticism of the Pooled Cohort Equations Risk Calculator has been its alleged overestimation of ASCVD risk which, if confirmed in the general population, is likely to result in statin therapy being prescribed to many individuals at lower risk than…
CALCULATING ECONOMIC RISK AFTER HANFORD CLEANUP
Energy Technology Data Exchange (ETDEWEB)
Scott, M.J.
2003-02-27
Since late 1997, researchers at the Hanford Site have been engaged in the Groundwater Protection Project (formerly, the Groundwater/Vadose Zone Project), developing a suite of integrated physical and environmental models and supporting data to trace the complex path of Hanford legacy contaminants through the environment for the next thousand years, and to estimate corresponding environmental, human health, economic, and cultural risks. The linked set of models and data is called the System Assessment Capability (SAC). The risk mechanism for economics consists of "impact triggers" (sequences of physical and human behavior changes in response to, or resulting from, human health or ecological risks), and processes by which particular trigger mechanisms induce impacts. Economic impacts stimulated by the trigger mechanisms may take a variety of forms, including changes in either costs or revenues for economic sectors associated with the affected resource or activity. An existing local economic impact model was adapted to calculate the resulting impacts on output, employment, and labor income in the local economy (the Tri-Cities Economic Risk Model or TCERM). The SAC researchers ran a test suite of 25 realization scenarios for future contamination of the Columbia River after site closure for a small subset of the radionuclides and hazardous chemicals known to be present in the environment at the Hanford Site. These scenarios of potential future river contamination were analyzed in TCERM. Although the TCERM model is sensitive to river contamination under a reasonable set of assumptions concerning reactions of the authorities and the public, the scenarios show low enough future contamination that the impacts on the local economy are small.
Perceived and calculated health risks: do the impacts differ
Energy Technology Data Exchange (ETDEWEB)
Payne, B.A.; Williams, R.G.
1986-01-23
In many cases of radioactive and hazardous waste management, some members of the general public perceive that human health risks associated with the wastes are higher than the calculated risks. Calculated risks are projections that have been derived from models, and it is these risks that are usually used as the basis for waste management. However, for various reasons, the calculated risks are often considered by the public as too low or inappropriate. The reasons that calculated risks are not perceived as accurate and the factors that affect these perceptions are explored in this paper. Also discussed are the impacts related to the perceived and calculated health risks: what they are, and if and how they differ. The kinds of potential impacts examined are health effects, land value changes, and social, transportation, and economic effects. The paper concludes with a discussion of the implications of incorporating these different risk perspectives in decisions on waste management.
Earthquake Hazards Program: Risk-Targeted Ground Motion Calculator
U.S. Geological Survey, Department of the Interior — This tool is used to calculate risk-targeted ground motion values from probabilistic seismic hazard curves in accordance with the site-specific ground motion...
Risk calculations in the manufacturing technology selection process
DEFF Research Database (Denmark)
Farooq, S.; O'Brien, C.
2010-01-01
Purpose - The purpose of this paper is to present results obtained from a developed technology selection framework and provide a detailed insight into the risk calculations and their implications in the manufacturing technology selection process. Design/methodology/approach - The results illustrated… in the shape of opportunities and threats in different decision-making environments. Practical implications - The research quantifies the risk associated with different available manufacturing technology alternatives. This quantification of risk crystallises the process of technology selection decision making and supports an industrial manager in achieving objective and comprehensive decisions regarding selection of a manufacturing technology. Originality/value - The paper explains the process of risk calculation in manufacturing technology selection by dividing the decision-making environment into manufacturing…
Girerd, X; Hanon, O; Pannier, B; Vaïsse, B
2017-06-01
To investigate the determinants of non-compliance with antihypertensive treatments among participants in the FLAHS 2015 survey and to develop a risk calculator for drug compliance in a hypertensive population. The FLAHS surveys are carried out by self-questionnaire sent by mail to individuals from the TNS SOFRES sampling frame (a representative panel of the population living in metropolitan France). In 2015, FLAHS was performed in subjects aged 55 years and older. Using the Girerd questionnaire, "perfect observance" was defined as a score of 0 and "non-observance" as a score of 1 or higher. A Poisson regression was conducted in univariate and multivariate analyses to estimate risk ratios for each determinant, and a non-compliance risk calculator was constructed from the multivariate analysis. For each sex, a probability table was produced from the equation of the multivariate analysis, followed by calculation of a non-observance probability ratio (PR) using the profile with the best probability as a reference. Each subject was then classified into one of three classes of risk of non-compliance: low (PR < 1.5), intermediate (PR ≥ 1.5 and < 2) and high (PR ≥ 2). The determinants of non-compliance are: male sex, young age, number of antihypertensive tablets, treatment for a metabolic disease (diabetes, dyslipidemia), presence of other chronic illness, and secondary prevention of cardiovascular disease. To obtain the risk class of non-observance, a web page is available at http://www.comitehta.org/flahs-observance-hta/. The FLAHS Compliance Test is a tool that can be used during an office visit. Its free availability to French doctors will be one of the actions undertaken as part of the "call for action for adherence in hypertension" proposed by the French League Against Hypertension in 2017. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
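The three-way risk classification described in this record can be sketched as a small lookup function. The cut-offs of 1.5 and 2 are reconstructed from the partially garbled source text and should be checked against the published calculator:

```python
def compliance_risk_class(pr):
    """Classify the non-observance probability ratio (PR) into the three
    risk classes described for the FLAHS Compliance Test.
    Assumed cut-offs: PR < 1.5 low, 1.5 <= PR < 2 intermediate, PR >= 2 high."""
    if pr >= 2.0:
        return "high"
    if pr >= 1.5:
        return "intermediate"
    return "low"
```

In use, the PR for a given patient profile would come from the sex-specific probability table produced by the multivariate Poisson model; the function above only encodes the final binning step.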
Calculation of risk for workers in dust working site
Directory of Open Access Journals (Sweden)
Geldová Erika
2004-03-01
Fibrogenic dust is considered a specific harmful substance in the mine working site. This kind of dust accumulates in the lungs, usually resulting in dust-induced lung disease, so-called pneumoconiosis. Thus, dustiness risk expresses the probability of lung damage by pneumoconiosis. Calculating the dustiness risk requires the following data: the value of average dustiness kc in the working site over a definite time period, the dispersivity of dust D (which determines the portion of dust particles with a diameter under 5 µm, the so-called respirable particles), and the percentage content of quartz Qr in the respirable grain-size fraction. The contribution presents the calculation of dustiness risk R according to equation (1), where R is a percentage, a is the analytically specific harmfulness, and KDc is the total cumulative dust dose received by a worker during the time of dust exposure. The total cumulative dust dose is calculated on the basis of equation (4), where kc is the average dust concentration in the assessed time period, t is the time of exposure, V is the average amount of air inspired by the exposed worker per time unit (standardized to a value of 1.2 m3 h-1), and 10-6 is the conversion from mg to kg for KDc. If the values of Qr, D and kc during the worker's exposure at a definite workplace are constant, the dustiness risk R is calculated according to equations (1) and (5), respectively. In the case of n time intervals in which the values of Qr, D and kc are known, the dustiness risk R is calculated according to equation (7). The total personal risk of a worker is given by equation (8). Finally, the influence of changes in the parameters Qr, D and kc on the value of dustiness risk per equal time period is reported.
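The abstract names the variables of equations (1) and (4) without reproducing the equations themselves. A minimal sketch of the dose calculation, assuming the stated unit conversion and a simple multiplicative form R = a x KDc for equation (1) (the paper's exact functional form for a, which depends on Qr and D, is not given here):

```python
def cumulative_dust_dose(kc_mg_m3, t_hours, v_m3_per_h=1.2):
    """Total cumulative dust dose KDc in kg, per equation (4) as described:
    average concentration kc [mg/m3] x exposure time t [h] x inspired air
    volume V [m3/h], with 1e-6 converting mg to kg."""
    return kc_mg_m3 * t_hours * v_m3_per_h * 1e-6

def dustiness_risk(a, kdc):
    """Dustiness risk R in percent, assuming the multiplicative form of
    equation (1): analytically specific harmfulness a x cumulative dose KDc."""
    return a * kdc

# Illustrative values: 2 mg/m3 average dustiness over 1000 exposure hours
kdc = cumulative_dust_dose(2.0, 1000.0)
risk = dustiness_risk(5.0, kdc)  # 'a' value is purely illustrative
```

For n intervals with differing kc, D and Qr, the doses from each interval would be summed before applying the harmfulness factor, matching the partitioned calculation the abstract attributes to equation (7).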
Pancreatectomy risk calculator: an ACS-NSQIP resource
Parikh, Purvi; Shiloach, Mira; Cohen, Mark E; Bilimoria, Karl Y; Ko, Clifford Y; Hall, Bruce L; Pitt, Henry A
2010-01-01
Background: The morbidity of pancreatoduodenectomy remains high and the mortality may be significantly increased in high-risk patients. However, a method to predict post-operative adverse outcomes based on readily available clinical data has not been available. Therefore, the objective was to create a 'Pancreatectomy Risk Calculator' using the American College of Surgeons-National Surgical Quality Improvement Program (ACS-NSQIP) database. Methods: The 2005-2008 ACS-NSQIP data on 7571 patients undergoing proximal (n=4621), distal (n=2552) or total pancreatectomy (n=177) as well as enucleation (n=221) were analysed. Pre-operative variables (n=31) were assessed for prediction of post-operative mortality, serious morbidity and overall morbidity using a logistic regression model. Statistically significant variables were ranked and weighted to create a common set of predictors for risk models for all three outcomes. Results: Twenty pre-operative variables were statistically significant predictors of post-operative mortality (2.5%), serious morbidity (21%) or overall morbidity (32%). Ten of the 20 significant pre-operative variables were employed to produce the three mortality and morbidity risk models. The risk factors included age, gender, obesity, sepsis, functional status, American Society of Anesthesiologists (ASA) class, coronary heart disease, dyspnoea, bleeding disorder and extent of surgery. Conclusion: The ACS-NSQIP 'Pancreatectomy Risk Calculator' employs 10 easily assessable clinical parameters to assist patients and surgeons in making an informed decision regarding the risks and benefits of undergoing pancreatic resection. A risk calculator based on this prototype will become available in the future as an online ACS-NSQIP resource. PMID:20815858
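A calculator built from a logistic regression model, as described above, maps a weighted sum of risk factors to a predicted probability. A generic sketch of that final step, with illustrative coefficients rather than the published ACS-NSQIP weights:

```python
import math

def predicted_risk(intercept, coefs, x):
    """Predicted event probability from a logistic regression risk model:
    p = 1 / (1 + exp(-(intercept + sum(b_i * x_i)))).
    Coefficients here are illustrative, not the published model's."""
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient: age-band indicator, ASA >= 3 flag, sepsis flag
p = predicted_risk(-3.0, [0.5, 0.9, 1.2], [1, 1, 0])
```

The balanced case (intercept 0, no risk factors) yields 0.5, and each positive coefficient shifts the predicted risk upward; the published calculator applies the same mechanics with its 10 fitted parameters.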
Compliance with biopsy recommendations of a prostate cancer risk calculator.
van Vugt, Heidi A; Roobol, Monique J; Busstra, Martijn; Kil, Paul; Oomens, Eric H; de Jong, Igle J; Bangma, Chris H; Steyerberg, Ewout W; Korfage, Ida
2012-05-01
Study Type - Diagnostic (cohort) Level of Evidence 2b What's known on the subject? and What does the study add? So far, few publications have shown that a prediction model influences the behaviour of both physicians and patients. To our knowledge, it was unknown whether urologists and patients are compliant with the recommendations of a prostate cancer risk calculator, and what their reasons for non-compliance are. Recommendations of the European Randomized study of Screening for Prostate Cancer risk calculator (ERSPC RC) about the need for a prostate biopsy were followed in most patients. In most cases of non-compliance with 'no biopsy' recommendations, a PSA level ≥ 3 ng/mL was decisive in opting for biopsy. Before implementing the ERSPC RC in urological practices at a large scale, it is important to obtain insight into the use of guidelines that might counteract adoption of the RC as a result of opposing recommendations. To assess both urologist and patient compliance with a 'no biopsy' or 'biopsy' recommendation of the European Randomized study of Screening for Prostate Cancer (ERSPC) Risk Calculator (RC), as well as their reasons for non-compliance, and to assess determinants of patient compliance. The ERSPC RC calculates the probability of a positive sextant prostate biopsy (P(posb)) using serum prostate-specific antigen (PSA) level, outcomes of digital rectal examination and transrectal ultrasonography, and ultrasonographically assessed prostate volume. A biopsy was recommended if P(posb) ≥20%. Between 2008 and 2011, eight urologists from five Dutch hospitals included 443 patients (aged 55-75 years) after a PSA test with no previous biopsy. Urologists calculated the P(posb) using the RC in the presence of patients and completed a questionnaire about compliance. Patients completed a questionnaire about prostate cancer knowledge, attitude towards prostate biopsy, self-rated health (12-Item Short Form Health Survey), anxiety (State Trait Anxiety…
Calculating Least Risk Paths in 3d Indoor Space
Vanclooster, A.; De Maeyer, Ph.; Fack, V.; Van de Weghe, N.
2013-08-01
Over the last couple of years, research on indoor environments has gained fresh impetus; more specifically, applications that support navigation and wayfinding have become one of the booming industries. Indoor navigation research currently covers the technological aspect of indoor positioning and the modelling of indoor space. The algorithmic development to support navigation has so far been left mostly untouched, as most applications mainly rely on adapting Dijkstra's shortest path algorithm to an indoor network. However, alternative algorithms for outdoor navigation have been proposed that add a more cognitive notion to the calculated paths and as such adhere to natural wayfinding behaviour (e.g. simplest paths, least risk paths). These algorithms are currently restricted to outdoor applications. The need for indoor cognitive algorithms is highlighted by the more challenging navigation and orientation indoors due to the specific indoor structure (e.g. fragmentation, less visibility, confined areas). As such, the clarity and easiness of route instructions is of paramount importance when distributing indoor routes. A shortest or fastest path indoors does not necessarily align with the cognitive mapping of the building. Therefore, the aim of this research is to extend those richer cognitive algorithms to three-dimensional indoor environments. More specifically for this paper, we focus on the application of the least risk path algorithm of Grum (2005) to an indoor space. The algorithm as proposed by Grum (2005) is duplicated and tested in a complex multi-storey building. The results of several least risk path calculations are compared to the shortest paths in indoor environments in terms of total length, improvement in route description complexity and number of turns. Several scenarios are tested in this comparison: paths covering a single floor, and paths crossing several building wings and/or floors. Adjustments to the algorithm are proposed to be more aligned to the…
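The baseline all of these comparisons share is Dijkstra's shortest path on the indoor network. A minimal sketch follows; a least risk variant in the spirit of Grum (2005) would replace the pure length weights with a composite cost that penalizes risky decision points (the graph and weights here are illustrative, not from the paper):

```python
import heapq

def least_cost_path(graph, start, goal):
    """Dijkstra's algorithm over an adjacency map {node: [(neighbour, cost)]}.
    With length-only costs this yields the shortest path; adding a risk
    penalty per decision point to each edge cost approximates a least risk
    path without changing the algorithm."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Tiny illustrative indoor network: corridor A-B-C vs direct risky edge A-C
graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)], "C": []}
path, cost = least_cost_path(graph, "A", "C")
```

Because only the edge-cost definition changes between the shortest and least risk variants, the comparison metrics the paper reports (total length, description complexity, number of turns) can be evaluated over paths produced by the same routine.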
National Research Council Canada - National Science Library
Lei Wen
2017-01-01
…How to weigh the benefits and hazards? The current study aimed to assess the feasibility of a cardiovascular/gastrointestinal risk calculator, AsaRiskCalculator, in predicting gastrointestinal events in Chinese patients with myocardial infarction (MI)…
Johnson, Cassandra; Campwala, Insiyah; Gupta, Subhas
2017-03-01
The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) created the Surgical Risk Calculator to allow physicians to offer patients a risk-adjusted 30-day surgical outcome prediction. This tool has not yet been validated in plastic surgery. A retrospective analysis of all plastic surgery-specific complications from a quality assurance database from September 2013 through July 2015 was performed. Patient preoperative risk factors were entered into the ACS Surgical Risk Calculator, and predicted outcomes were compared with actual morbidities. The difference between the average predicted complication rate and the actual rate of complication within this population was examined. Within the study population of patients with complications (n=104), the calculator accurately predicted an above-average risk for 20.90% of serious complications. For surgical site infections, the average predicted risk for the study population was 3.30%; this prediction proved only 24.39% accurate. The actual incidence of any complication within the 4924 patients treated in our plastic surgery practice from September 2013 through June 2015 was 1.89%. The most common plastic surgery complications include seroma, hematoma, dehiscence and flap-related complications; the ACS Risk Calculator does not present rates for these risks. While the most frequent outcomes fall into general risk calculator categories, the difference in predicted versus actual complication rates indicates that this tool does not accurately predict outcomes in plastic surgery. The ACS Surgical Risk Calculator is not a valid tool for the field of plastic surgery without further research to develop accurate risk stratification tools. Copyright © 2017 American Federation for Medical Research.
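The predicted-versus-actual comparison this study performs is, in essence, a calibration check. A minimal observed/expected ratio sketch (the data are illustrative, not the study's):

```python
def calibration_ratio(predicted, observed):
    """Observed/expected (O/E) ratio for a risk calculator: the observed
    event rate divided by the mean predicted risk. Values far from 1.0
    indicate miscalibration of the kind the study reports."""
    expected = sum(predicted) / len(predicted)
    rate = sum(observed) / len(observed)
    return rate / expected

# Illustrative cohort: predicted risks vs binary outcomes (1 = complication)
oe = calibration_ratio([0.1, 0.3], [0, 1])
```

An O/E ratio of 2.5, as in this toy example, would mean complications occur two and a half times as often as the calculator predicts; a well-calibrated tool sits near 1.0 across risk strata.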
Cardiovascular risk calculation | Ker | South African Family Practice
African Journals Online (AJOL)
Cardiovascular disease remains a major cause of global mortality and morbidity. Atherosclerosis is the main underlying cause in the majority of cardiovascular disease events. Traditional independent risk factors for cardiovascular disease include age, abnormal lipid levels, elevated blood pressure, smoking and elevated…
Losina, Elena; Michl, Griffin L; Smith, Karen C; Katz, Jeffrey N
2017-08-01
Young adults, in general, are not aware of their risk of knee osteoarthritis (OA). Understanding risk and risk factors is critical to knee OA prevention. We tested the efficacy of a personalized risk calculator on accuracy of knee OA risk perception and willingness to change behaviors associated with knee OA risk factors. We conducted a randomized controlled trial of 375 subjects recruited using Amazon Mechanical Turk. Subjects were randomized to either use a personalized risk calculator based on demographic and risk-factor information (intervention), or to view general OA risk information (control). At baseline and after the intervention, subjects estimated their 10-year and lifetime risk of knee OA and responded to contemplation ladders measuring willingness to change diet, exercise, or weight-control behaviors. Subjects in both arms had an estimated 3.6% 10-year and 25.3% lifetime chance of developing symptomatic knee OA. Both arms greatly overestimated knee OA risk at baseline, estimating a 10-year risk of 26.1% and a lifetime risk of 47.8%. After the intervention, risk calculator subjects' perceived 10-year risk decreased by 12.9 percentage points to 12.5% and perceived lifetime risk decreased by 19.5 percentage points to 28.1%. Control subjects' perceived risks remained unchanged. Risk calculator subjects were more likely to move to an action stage on the exercise contemplation ladder (relative risk 2.1). There was no difference between the groups for diet or weight-control ladders. The risk calculator is a useful intervention for knee OA education and may motivate some exercise-related behavioral change. © 2016, American College of Rheumatology.
A.N. Strobl (Andreas N.); A.J. Vickers (Andrew); B. Van Calster (Ben); E.W. Steyerberg (Ewout); R.J. Leach (Robin J.); I.M. Thompson (Ian); D. Ankerst (Donna)
2015-01-01
textabstractClinical risk calculators are now widely available but have generally been implemented in a static and one-size-fits-all fashion. The objective of this study was to challenge these notions and show via a case study concerning risk-based screening for prostate cancer how calculators can
How Suitable Are Registry Data for Recurrence Risk Calculations?
DEFF Research Database (Denmark)
Ellesøe, Sabrina Gade; Jensen, Anders Boeck; Ängquist, Lars Henrik
2016-01-01
BACKGROUND: Congenital heart disease (CHD) occurs in approximately 1% of all live births, and 3% to 8% of these have until now been considered familial cases, defined as the occurrence of two or more affected individuals in a family. The validity of CHD diagnoses in Danish administrative registry data has only been studied previously in highly selected patient populations. These studies identified high positive predictive values (PPVs) and recurrence risk ratios (RRRs: the ratio between the probabilities of CHD given a family history of CHD and no family history). However, the RRR can be distorted if registry data are used indiscriminately. Here, we investigated the consequences of misclassifications for the RRR using validated diagnoses on Danish patients with familial CHD. METHODS: Danish citizens are assigned a civil registration number (CPR number) at birth or immigration, which acts as a unique…
Basel II Approaches for the Calculation of the Regulatory Capital for Operational Risk
Directory of Open Access Journals (Sweden)
Ivana Valová
2011-01-01
The final version of the New Capital Accord, which includes operational risk, was released by the Basel Committee on Banking Supervision in June 2004. The article "Basel II approaches for the calculation of the regulatory capital for operational risk" is devoted to the issue of operational risk in credit financial institutions. The paper discusses the methods of operational risk calculation and the advantages and disadvantages of the particular methods.
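The simplest of the Basel II methods, the Basic Indicator Approach, sets the capital charge at 15% of the average positive annual gross income over the previous three years, excluding years with negative or zero income from both the numerator and the denominator. A sketch:

```python
def bia_capital(gross_income_3y, alpha=0.15):
    """Basel II Basic Indicator Approach: capital charge = alpha (15%) times
    the average of positive annual gross income over the previous three
    years; negative or zero years are excluded from numerator and
    denominator."""
    positive = [gi for gi in gross_income_3y if gi > 0]
    if not positive:
        return 0.0
    return alpha * sum(positive) / len(positive)

# Three years of gross income (currency units); one loss-making year
charge = bia_capital([100.0, -50.0, 300.0])
```

The Standardised Approach refines this by applying business-line-specific factors (betas of 12% to 18%) instead of a single alpha, while the Advanced Measurement Approaches let banks use internal loss models, which is the trade-off between simplicity and risk sensitivity the article examines.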
Current state of copper stabilizers and methodology towards calculating risk
Koratzinos, M
2011-01-01
The talk will start by reviewing the landscape: a brief mention of the results of the warm copper stabilizer measurements and the results of the splice measurements at cold will be shown. The preliminary results of the recent RRR measurements will then be presented. Then, together with the limits presented in talk no. 2, the probability of an incident will be presented for beam energies between 3.5 and 5 TeV. The available methods at our disposal for addressing the limiting factors and operating at a higher energy will then be reviewed: a complete circuit qualification method, coined the Thermal Amplifier, can define the maximum safe energy of the LHC in case of a quench next to a defective joint. Ways of avoiding magnet quenches, another critical element of the analysis, for instance by optimizing BLM settings, will then be shown. Finally, a proposal of a strategy for running at the highest possible energy compatible with a pre-defined level of risk will be presented. As a case study, the method will also be a…
Method to calculate additional ramps explicitly (CARE) in quantitative risk analysis for road tunnels
Nelisse, R.M.L.; Vrouwenvelder, A.C.W.M.
2014-01-01
Article 13 of the EU Directive on minimum safety requirements for tunnels in the Trans-European Road Network states that a risk analysis, where necessary, shall be carried out. In the Netherlands, the fatality risk for road users in a tunnel (internal risk) is calculated with a model for…
Johnson, Sara B; Dariotis, Jacinda K; Wang, Constance
2012-08-01
Adolescent risk taking may result from heightened susceptibility to environmental cues, particularly emotion and potential rewards. This study evaluated the impact of social stress on adolescent risk taking, accounting for individual differences in risk taking under nonstressed conditions. Eighty-nine older adolescents completed a computerized risk-taking and decision-making battery at baseline. At follow-up, participants were randomized to a control condition, which repeated this battery, or an experimental condition, which included a social and cognitive stressor before the battery. Baseline risk-taking data were cluster-analyzed to create groups of adolescents with similar risk-taking tendencies. The degree to which these risk-taking tendencies predicted risk taking by stress condition was assessed at follow-up. Participants in the stress condition took more risks than those in the no-stress condition. However, differences in risk taking under stress were related to baseline risk-taking tendencies. We observed three types of risk-takers: conservative, calculated, and impulsive. Impulsives were less accurate and planful under stress; calculated risk takers took fewer risks; and conservatives engaged in low risk taking regardless of stress. As a group, adolescents are more likely to take risks in "hot cognitive" than in "cold cognitive" situations. However, there is significant variability in adolescents' behavioral responses to stress related to trait-level risk-taking tendencies. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Calculating alcohol risk in a visualization tool for promoting healthy behavior.
Bissett, Scott; Wood, Sharon; Cox, Richard; Scott, Donia; Cassell, Jackie
2013-08-01
To investigate effective methods for communicating the personalized risks of alcohol consumption, particularly to young people. An interactive computerized blood alcohol content calculator was implemented in Flash based on literature findings for effectively communicating risk. Young people were consulted on attitudes to the animation features and visualization techniques used to display personalized risk based on disclosed alcohol consumption. Preliminary findings reveal the calculator is relatively enjoyable to use for its genre. However, the primary aims of the visualization tool to effectively communicate personalized risk were undermined for some users by technical language. Transparency of risk calculations might further enhance the tool for others. Worryingly, user feedback revealed a tension between accurate presentation of risk and its consequent lack of sensationalism in terms of personal risk to the individual. Initial findings suggest the tool may provide a relatively engaging vehicle for exploring the link between action choices and risk outcomes. Suggestions for enhancing risk communication include using intelligent techniques for selecting data presentation formats and for demonstrating the effects of sustained risky behavior. Effective communication of risk contributes only partially to effecting behavior change; the role of the tool in influencing contributing attitudinal factors is also discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
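The paper does not specify the formula behind its blood alcohol content calculator; tools of this kind are typically based on the classic Widmark equation, sketched here as a generic stand-in (the distribution ratio r and the elimination rate are population averages, not the tool's actual parameters):

```python
def widmark_bac(alcohol_g, weight_kg, r, hours, beta=0.015):
    """Estimated blood alcohol content in g/100 mL via the Widmark formula:
    BAC = A / (r * W * 10) - beta * t, floored at zero.
    r is the Widmark distribution ratio (~0.68 men, ~0.55 women) and
    beta the hourly elimination rate; both are population averages."""
    bac = alcohol_g / (r * weight_kg * 10.0) - beta * hours
    return max(bac, 0.0)

# One standard drink (~14 g alcohol) for a 70 kg man, measured immediately
bac_now = widmark_bac(14.0, 70.0, 0.68, 0.0)
```

Exposing a calculation like this directly in the interface speaks to the transparency issue the study raises: users who can see how disclosed consumption maps to an estimate may trust the personalized risk figure more.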
Comparison of Two Prostate Cancer Risk Calculators that Include the Prostate Health Index
M.J. Roobol-Bouts (Monique); M.M. Vedder (Moniek); D. Nieboer (Daan); A. Houlgatte (Alain); S. Vincendeau (Sébastien); M. Lazzeri (Massimo); G. Guazzoni (Giorgio); C. Stephan (Carsten); A. Semjonow (Axel); A. Haese (Alexander); M. Graefen (Markus); E.W. Steyerberg (Ewout)
2015-01-01
textabstractBackground: Risk prediction models for prostate cancer (PCa) have become important tools in reducing unnecessary prostate biopsies. The Prostate Health Index (PHI) may increase the predictive accuracy of such models. Objectives: To compare two PCa risk calculators (RCs) that include PHI.
Directory of Open Access Journals (Sweden)
Chang Wook Jeong
OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, AUC was significantly higher for the SNUPC-RC (0.811) than for ERSPC-RC (0.768, p<0.001) and PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy…
Jeong, Chang Wook; Lee, Sangchul; Jung, Jin-Woo; Lee, Byung Ki; Jeong, Seong Jin; Hong, Sung Kyu; Byun, Seok-Soo; Lee, Sang Eun
2014-01-01
We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, AUC was significantly higher for the SNUPC-RC (0.811) than for ERSPC-RC (0.768, p<0.001) and PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators, and it is easy to use as a mobile application for smart devices.
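At prediction time, a risk calculator of this form reduces to a logistic function of a weighted sum of the predictors. A minimal sketch with made-up coefficients (the published SNUPC-RC model coefficients are not reproduced here, and the feature set is only an illustration of the kind listed above):

```python
import math

def predicted_risk(intercept, coefs, features):
    """Logistic-regression risk: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for illustration only -- NOT the published SNUPC-RC model.
# features: [age (yr), log PSA (ng/ml), prostate volume (ml), abnormal DRE/TRUS (0/1)]
b0 = -4.0
coefs = [0.03, 0.9, -0.02, 1.1]
risk = predicted_risk(b0, coefs, [65, math.log(6.5), 40, 1])
```

The same template fits any of the calculators compared above; only the fitted intercept and coefficients differ between cohorts.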
Park, Jae Young; Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang; Byun, Seok-Soo
2017-01-01
We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001) but not different from that of the ERSPCRC-HG (0.83) on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings.
Directory of Open Access Journals (Sweden)
S.A. Maksimov
2017-09-01
Full Text Available Our research goal was a comparative analysis of regression analysis and tree classification for calculating additional population risk, using ischemic heart disease (IHD) as the example. The study object was a random population sample of men and women aged 25-64 in the Kemerovo region (1,628 people) within the ESSE-RF multi-center epidemiologic study. We considered the following IHD risk factors: lipid metabolism parameters, arterial hypertension, lifestyle factors, psychoemotional characteristics, and social parameters. IHD occurrence was assessed by the sum of three epidemiologic criteria: ECG changes coded according to the Minnesota code, the Rose questionnaire, and myocardial infarction in the case history. We calculated the additional population IHD risk attributable to risk factors using unified original algorithms but different statistical techniques: logistic regression analysis and classification trees. We built mathematical models for IHD probability as a function of risk factors, with predictive accuracy of 83.8% for logistic regression analysis and 71.9% for classification trees. The two statistical techniques attribute different contributions to the risk factors in IHD prevalence, which results from the absence of correlation between the factors. The IHD risk additional to the population risk, as determined by both techniques, changed across sex-age groups from negative values in groups younger than 45 to positive values in older people. The increase in additional IHD risk with age was practically linear for both techniques, with slight deviations. The difference in additional population risk calculated by the two statistical techniques was insignificant and as a rule did not exceed 1.5%. Consequently, both techniques give similar results and can be used interchangeably in calculating IHD population risk.
A clustering approach to segmenting users of internet-based risk calculators.
Harle, C A; Downs, J S; Padman, R
2011-01-01
Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimater cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimaters were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions much. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
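The segmentation step described above can be sketched with a plain k-means over two features per consumer, perceived risk and an objective risk estimate. The data points, feature scaling, and k below are invented for illustration, not the study's dataset or pipeline:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points (perceived risk, objective risk), both in [0, 1].
    A toy sketch of the clustering idea, not the study's actual analysis."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # recompute centers as cluster means (keep old center if a cluster empties)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical users: two overestimaters (perceived >> objective) and three underestimaters
points = [(0.9, 0.6), (0.85, 0.55), (0.2, 0.7), (0.25, 0.75), (0.3, 0.8)]
centers, clusters = kmeans(points, 2)
```

On well-separated toy data like this, any initialization converges to the same over/underestimater split; real survey data would need the candidate-variable search the abstract describes.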
Directory of Open Access Journals (Sweden)
Jae Young Park
Full Text Available We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG that predicts the probability of prostate cancer (PC of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/. In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort.Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG. The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC and calibration plots.PC was detected in 172 (28.6% men, 120 (19.9% of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84 was higher than that of the PCPTRC-HG (0.79, p<0.001 but not different from that of the ERSPCRC-HG (0.83 on external validation. Calibration plots also revealed better performance of KPCRC-HG and ERSPCRC-HG than that of PCPTRC-HG on external validation. At a cut-off of 5% for KPCRC-HG, 253 of the 2,313 men (11% would not have been biopsied, and 14 of the 614 PC cases with Gleason score 7 or higher (2% would not have been diagnosed.KPCRC-HG is the first web-based high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG in a Korean population and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in
Easy calculations of lod scores and genetic risks on small computers.
Lathrop, G M; Lalouel, J M
1984-01-01
A computer program that calculates lod scores and genetic risks for a wide variety of both qualitative and quantitative genetic traits is discussed. An illustration is given of the joint use of a genetic marker, affection status, and quantitative information in counseling situations regarding Duchenne muscular dystrophy. PMID:6585139
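For the simplest fully informative case, a lod score is the base-10 log of the likelihood ratio between a candidate recombination fraction and the null value of 0.5. A toy sketch of that calculation (not the program the abstract describes, which handles far more general pedigrees and quantitative traits):

```python
import math

def lod(theta, recombinants, nonrecombinants):
    """LOD score for recombination fraction theta against the null theta = 0.5,
    assuming phase-known, fully informative meioses (a deliberately simple case)."""
    n = recombinants + nonrecombinants
    likelihood = (theta ** recombinants) * ((1 - theta) ** nonrecombinants)
    null = 0.5 ** n
    return math.log10(likelihood / null)

# 2 recombinants out of 10 informative meioses, evaluated at theta = 0.2
score = lod(0.2, 2, 8)
```

A lod of 3 or more is the conventional threshold for declaring linkage; here the small family yields only weak support.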
A study of the absence of arbitrage opportunities without calculating the risk-neutral probability
Directory of Open Access Journals (Sweden)
Dani S.
2016-12-01
Full Text Available In this paper, we establish the conditional full support property for two processes, the Ornstein-Uhlenbeck process and the stochastic integral with the Brownian bridge as integrator, and we deduce the absence of arbitrage opportunities without calculating the risk-neutral probability.
Calculating operational value-at-risk (OpVaR) in a retail bank
Directory of Open Access Journals (Sweden)
Ja'nel Esterhuysen
2012-05-01
Full Text Available The management of operational value-at-risk (OpVaR) in financial institutions is presented by means of a novel, robust calculation technique, together with the influence of this value on the capital held by a bank for operational risk. A clear distinction is made between economic and regulatory capital, as well as the way OpVaR models may be used to calculate both types of capital. Under the Advanced Measurement Approach (AMA), banks may employ OpVaR models to calculate regulatory capital; this article therefore illustrates, by means of an example, the differences in regulatory capital when using the AMA and the Standardised Approach (SA). Economic capital is found to converge with regulatory capital using the AMA, but not if the SA is used.
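The core of any VaR-style figure is a high quantile of the loss distribution. A generic historical-simulation sketch, not the article's technique, with invented loss data:

```python
def op_var(losses, confidence):
    """Operational VaR by historical simulation: the loss at the given confidence
    quantile of the empirical loss distribution (nearest-rank convention)."""
    ordered = sorted(losses)
    idx = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[idx]

# Hypothetical annual operational loss observations (currency units)
losses = [120, 35, 8, 900, 15, 60, 2500, 42, 310, 75]
var_80 = op_var(losses, 0.80)   # loss not exceeded in roughly 80% of observed years
```

Regulatory OpVaR under the AMA is typically taken at a much higher confidence level (e.g. 99.9%), where the estimate is dominated by the few largest losses, which is why robust calculation techniques of the kind the article proposes matter.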
RESRAD for Radiological Risk Assessment. Comparison with EPA CERCLA Tools - PRG and DCC Calculators
Energy Technology Data Exchange (ETDEWEB)
Yu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, J. -J. [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, S. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-07-01
The purpose of this report is two-fold. First, the risk assessment methodology for both RESRAD and the EPA’s tools is reviewed. This includes a review of the EPA’s justification for using a dose-to-risk conversion factor to reduce the dose-based protective ARAR from 15 to 12 mrem/yr. Second, the models and parameters used in RESRAD and the EPA PRG and DCC Calculators are compared in detail, and the results are summarized and discussed. Although there are suites of software tools in the RESRAD family of codes and the EPA Calculators, the scope of this report is limited to the RESRAD (onsite) code for soil contamination and the EPA’s PRG and DCC Calculators, likewise for soil contamination.
MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM
Directory of Open Access Journals (Sweden)
A. G. Korobeynikov
2015-05-01
Full Text Available Subject of research. The paper deals with a mathematical model for calculating the information risks that arise during the transport and distribution of material resources under uncertainty. Information risks here mean the danger of losses or damage resulting from the company's use of information technologies. Method. The solution is based on the transport problem in a stochastic statement, drawing on methods of mathematical modeling, graph theory, probability theory, and Markov chains. The mathematical model is built in several stages. At the initial stage, the capacity of different sites is calculated as a function of time; on the basis of information received from the information and logistics system, the weight matrix is formed and the digraph is constructed. Then the minimum route covering all specified vertices is found using Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probability that resources are located at a particular vertex as a function of time. At the third stage, the overall probability of passing the whole route as a function of time is calculated using the multiplication theorem of probabilities. Information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of passing the whole route. In this case information risk is measured in units of damage, corresponding to the monetary unit in which the information and logistics system operates. Main results. The operability of the presented mathematical model is shown on a concrete example of transporting material resources, where the places of shipment and delivery, the routes and their capacity, the greatest possible damage, and the admissible risk are specified. The calculations, presented in a diagram, showed
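The route-then-risk pipeline described above, a minimum route found with Dijkstra's algorithm followed by risk as the product of the greatest possible damage and the route-passing probability (multiplication theorem), can be sketched as follows. The graph, segment probabilities, and damage figure are invented for illustration, not the paper's case study:

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra's algorithm over a weighted digraph {u: {v: weight}};
    returns the vertex sequence of the minimum-weight route."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def route_risk(path, segment_prob, max_damage):
    """Risk = greatest possible damage x probability of passing the whole route,
    with the route probability as the product of per-segment probabilities."""
    p = 1.0
    for edge in zip(path, path[1:]):
        p *= segment_prob[edge]
    return max_damage * p

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1}, "C": {}}
path = shortest_route(graph, "A", "C")          # A -> B -> C, total weight 3
prob = {("A", "B"): 0.95, ("B", "C"): 0.9}
risk = route_risk(path, prob, max_damage=10_000)
```

In the paper the per-segment probabilities come from solving the Kolmogorov equations over time rather than being fixed constants as here.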
Auffenberg, Gregory B; Merdan, Selin; Miller, David C; Singh, Karandeep; Stockton, Benjamin R; Ghani, Khurshid R; Denton, Brian T
2017-06-01
To compare the predictive performance of a logistic regression model developed with contemporary data from a diverse group of urology practices to that of the Prostate Cancer Prevention Trial (PCPT) Risk Calculator version 2.0. With data from all first-time prostate biopsies performed between January 2012 and March 2015 across the Michigan Urological Surgery Improvement Collaborative (MUSIC), we developed a multinomial logistic regression model to predict the likelihood of finding high-grade cancer (Gleason score ≥7), low-grade cancer (Gleason score ≤6), or no cancer on prostate biopsy. The performance of the MUSIC model was evaluated in out-of-sample data using 10-fold cross-validation. Discrimination and calibration statistics were used to compare the performance of the MUSIC model to that of the PCPT risk calculator in the MUSIC cohort. Of the 11,809 biopsies included, 4289 (36.3%) revealed high-grade cancer; 2027 (17.2%) revealed low-grade cancer; and the remaining 5493 (46.5%) were negative. In the MUSIC model, prostate-specific antigen level, rectal examination findings, age, race, and family history of prostate cancer were significant predictors of finding high-grade cancer on biopsy. The 2 models, based on similar predictors, had comparable discrimination (multiclass area under the curve = 0.63 for the MUSIC model and 0.62 for the PCPT calculator). Calibration analyses demonstrated that the MUSIC model more accurately predicted observed outcomes, whereas the PCPT risk calculator substantively overestimated the likelihood of finding no cancer while underestimating the risk of high-grade cancer in this population. The PCPT risk calculator may not be a good predictor of individual biopsy outcomes for patients seen in contemporary urology practices. Copyright © 2017 Elsevier Inc. All rights reserved.
Trudell, Amanda S; Tuuli, Methodius G; Colditz, Graham A; Macones, George A; Odibo, Anthony O
2017-01-01
To generate a clinical prediction tool for stillbirth that combines maternal risk factors to provide an evidence-based approach for the identification of women who will benefit most from antenatal testing for stillbirth prevention. Retrospective cohort study. Midwestern United States quaternary referral center. Singleton pregnancies undergoing second trimester anatomic survey from 1999-2009. Pregnancies with incomplete follow-up were excluded. Candidate predictors were identified from the literature and univariate analysis. Backward stepwise logistic regression with statistical comparison of model discrimination, calibration and clinical performance was used to generate final models for the prediction of stillbirth. Internal validation was performed using bootstrapping with 1,000 repetitions. A stillbirth risk calculator and stillbirth risk score were developed for the prediction of stillbirth at or beyond 32 weeks excluding fetal anomalies and aneuploidy. Statistical and clinical cut-points were identified and the tools compared using the Integrated Discrimination Improvement. Antepartum stillbirth. 64,173 women met inclusion criteria. The final stillbirth risk calculator and score included maternal age, black race, nulliparity, body mass index, smoking, chronic hypertension and pre-gestational diabetes. The stillbirth calculator and simple risk score demonstrated modest discrimination but clinically significant performance, with no difference in overall performance between the tools (AUC 0.66, 95% CI 0.60-0.72 vs. AUC 0.64, 95% CI 0.58-0.70; p = 0.25). A stillbirth risk score was developed incorporating maternal risk factors easily ascertained during prenatal care to determine an individual woman's risk for stillbirth and provide an evidence-based approach to the initiation of antenatal testing for the prediction and prevention of stillbirth.
Directory of Open Access Journals (Sweden)
Tomáš Mikita
2015-01-01
Full Text Available This paper outlines the idea of a precision forestry tool for optimizing clearcut size and shape within the process of forest recovery, and its publication as a web processing service for forest owners on the Internet. The designed tool, titled COWRAS (Clearcut Optimization and Wind Risk Assessment), optimizes clearcuts (their location, shape, size, and orientation) with subsequent wind risk assessment. The tool primarily works with airborne LiDAR data previously processed into a digital surface model (DSM) and a digital elevation model (DEM). In the first step, the growing stock on the planned clearcut, determined by its location and area in a feature class, is calculated by the method of individual tree detection. Subsequently, tree heights are extracted from the canopy height model (CHM), and diameters at breast height (DBH) and wood volume are calculated using regressions. Information about the wood volume of each tree in the clearcut is exported and summarized in a table. In the next step, all trees in the clearcut are removed and a new DSM without trees in the clearcut is generated. This canopy model subsequently serves as an input for the evaluation of wind damage risk by the MAXTOPEX tool (Mikita et al., 2012). In the final raster, the predisposition of uncovered forest stand edges (around the clearcut) to wind risk is calculated from this analysis. The entire tool works in the background of ArcGIS Server as a spatial decision support system for foresters.
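The per-tree growing-stock step, height from the CHM, DBH from a height-DBH regression, then stem volume summed over the clearcut, can be sketched as a simple allometric chain. The regression coefficients and form factor below are hypothetical placeholders, not the COWRAS regressions:

```python
import math

def stand_volume(tree_heights_m):
    """Sum per-tree stem volumes for a clearcut: CHM height -> DBH via a
    height-DBH regression, then volume from basal area, height, and a form factor.
    Coefficients are made-up placeholders for illustration."""
    total = 0.0
    for h in tree_heights_m:
        dbh_m = 0.004 * h ** 1.3              # hypothetical height -> DBH regression
        basal_area = math.pi * (dbh_m / 2) ** 2   # cross-section at breast height, m^2
        total += 0.5 * basal_area * h         # volume with a crude form factor of 0.5
    return total

# Heights (m) of trees detected inside the planned clearcut (invented values)
volume_m3 = stand_volume([22.0, 25.5, 18.3, 30.1])
```

In the real tool, the regression coefficients are fitted per species and region, and the per-tree volumes are exported to a table rather than only summed.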
Literacy skills and calculated 10-year risk of coronary heart disease.
Martin, Laurie T; Schonlau, Matthias; Haas, Ann; Derose, Kathryn Pitkin; Rudd, Rima; Loucks, Eric B; Rosenfeld, Lindsay; Buka, Stephen L
2011-01-01
Coronary heart disease (CHD) is a leading cause of morbidity and mortality. Reducing the disease burden requires an understanding of factors associated with the prevention and management of CHD. Literacy skills may be one such factor. To examine the independent and interactive effects of four literacy skills: reading, numeracy, oral language (speaking) and aural language (listening) on calculated 10-year risk of CHD and to determine whether the relationships between literacy skills and CHD risk were similar for men and women. We used multivariable linear regression to assess the individual, combined, and interactive effects of the four literacy skills on risk of CHD, adjusting for education and race. Four hundred and nine English-speaking adults in Boston, MA and Providence, RI. Ten-year risk of coronary heart disease was calculated using the Framingham algorithm. Reading, oral language and aural language were measured using the Woodcock Johnson III Tests of Achievement. Numeracy was assessed through a modified version of the numeracy scale by Lipkus and colleagues. When examined individually, reading (p = 0.007), numeracy (p = 0.001) and aural language (p = 0.004) skills were significantly associated with CHD risk among women; no literacy skills were associated with CHD risk in men. When examined together, there was some evidence for an interaction between numeracy and aural language among women, suggesting that higher skills in one area (e.g., aural language) may compensate for difficulties in another, resulting in an equally low risk of CHD. Results of this study not only provide important insight into the independent and interactive effects of literacy skills on risk of CHD, they also highlight the need for the development of easy-to-use assessments of the oral exchange in the health care setting and the need to better understand which literacy skills are most important for a given health outcome.
Subclinical atherosclerosis in menopausal women with low to medium calculated cardiovascular risk.
Lambrinoudaki, Irene; Armeni, Eleni; Georgiopoulos, Georgios; Kazani, Maria; Kouskouni, Evangelia; Creatsa, Maria; Alexandrou, Andreas; Fotiou, Stylianos; Papamichael, Christos; Stamatelopoulos, Kimon
2013-03-20
The menopausal status is closely related with cardiovascular disease (CVD). Nevertheless, it is still not included in risk stratification by total cardiovascular risk estimation systems. The present study aimed to evaluate the extent of subclinical vascular disorders in young healthy postmenopausal women. This cross-sectional study consecutively recruited 120 healthy postmenopausal women without clinically overt CVD or diabetes, aged 41-60 years and classified as not high-risk by the Heartscore (<5%), who were evaluated for subclinical atherosclerosis. Subclinical atherosclerosis and the presence of at least one plaque were identified in 55% and 28% of women, respectively. Subjects with subclinical atherosclerosis had higher age, years since menopause, HOMA-IR and blood pressure. By multivariate analysis, years since menopause and systolic blood pressure independently determined subclinical atherosclerosis, while 79% of intermediate-risk women (Heartscore 2-4.9%) who had been in menopause for at least 4 years would be reclassified to a higher risk category based on the presence of atherosclerosis. Subclinical atherosclerosis was highly prevalent in postmenopausal women with low to medium Heartscore. Thus our data suggest that menopausal status and associated risk factors should be additionally weighted in risk calculations regarding primary prevention strategies in this population. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
DSTiPE Algorithm for Fuzzy Spatio-Temporal Risk Calculation in Wireless Environments
Energy Technology Data Exchange (ETDEWEB)
Kurt Derr; Milos Manic
2008-09-01
Time and location data play a very significant role in a variety of factory automation scenarios, from automated vehicles and robots and their navigation, tracking, and monitoring, to optimization and security services. In addition, pervasive wireless capabilities combined with time and location information are enabling new applications in areas such as transportation systems, health care, elder care, military, emergency response, critical infrastructure, and law enforcement. A person or object in proximity to certain areas for specific durations of time may pose a risk hazard to themselves, to others, or to the environment. This paper presents DSTiPE, a novel fuzzy-based method for calculating the spatio-temporal risk that an object with wireless communications presents to its environment. The accompanying Matlab-based application for fuzzy spatio-temporal risk cluster extraction is verified on a diagonal vehicle movement example.
Mazonakis, Michalis; Berris, Theocharris; Lyraraki, Efrossyni; Damilakis, John
2015-03-01
This study was conducted to calculate the peripheral dose to critical structures and assess the radiation risks from modern radiotherapy for stage IIA/IIB testicular seminoma. A Monte Carlo code was used for treatment simulation on a computational phantom representing an average adult. The initial treatment phase involved anteroposterior and posteroanterior modified dog-leg fields exposing para-aortic and ipsilateral iliac lymph nodes, followed by a cone-down phase for nodal mass irradiation. Peripheral doses were calculated using different modified dog-leg field dimensions and an extended conventional dog-leg portal. The risk models of the BEIR-VII report and ICRP-103 were combined with dosimetric calculations to estimate the probability of developing stochastic effects. Radiotherapy for stage IIA seminoma with a target dose of 30 Gy resulted in a range of 23.0-603.7 mGy to non-targeted peripheral tissues and organs. The corresponding range for treatment of stage IIB disease to a cumulative dose of 36 Gy was 24.2-633.9 mGy. A dose variation of less than 13% was found by altering the field dimensions. Radiotherapy with the conventional instead of the modern modified dog-leg field increased the peripheral dose up to 8.2 times. The calculated heart doses of 589.0-632.9 mGy may increase the risk for developing cardiovascular diseases, whereas the testicular dose of more than 231.9 mGy may lead to temporary infertility. The probability of birth abnormalities in the offspring of cancer survivors was below 0.13%, which is much lower than the spontaneous mutation rate. Abdominopelvic irradiation may increase the lifetime intrinsic risk for the induction of secondary malignancies by 0.6-3.9% depending upon the site of interest, patient’s age and tumor dose. Radiotherapy for stage IIA/IIB seminoma with restricted fields and low doses is associated with an increased morbidity. These data may allow the definition of a risk-adapted follow-up scheme for long
Esteve-Pastor, M A; Marín, F; Bertomeu-Martinez, V; Roldán-Rabadán, I; Cequier-Fillat, Á; Badimon, L; Muñiz-García, J; Valdés, M; Anguita-Sánchez, M
2016-05-01
Clinical risk scores, the CHADS2 and CHA2DS2-VASc scores, are the established tools for assessing stroke risk in patients with atrial fibrillation (AF). The aim of this study is to assess concordance between manual and computer-based calculation of CHADS2 and CHA2DS2-VASc scores, as well as to analyse the patient categories using CHADS2 and the potential improvement in stroke risk stratification with the CHA2DS2-VASc score. We linked data from the Atrial Fibrillation Spanish registry FANTASIIA. Between June 2013 and March 2014, 1318 consecutive outpatients were recruited. We explore the concordance between manual scoring and computer-based calculation, and compare the distribution of embolic risk of patients using both the CHADS2 and CHA2DS2-VASc scores. The mean age was 73.8 ± 9.4 years, and 758 (57.5%) were male. For the CHADS2 score, concordance between manual scoring and computer-based calculation was 92.5%, whereas for the CHA2DS2-VASc score it was 96.4%. With the CHADS2 score, 6.37% of patients with AF changed indication for antithrombotic therapy (3.49% of patients changed from no treatment to needing antithrombotic treatment, and 2.88% of patients the reverse). Using the CHA2DS2-VASc score, only 0.45% of patients with AF needed a change in the recommendation of antithrombotic therapy. We have found a strong concordance between manual and computer-based calculation of both the CHADS2 and CHA2DS2-VASc risk scores, with minimal changes in anticoagulation recommendations. The use of the CHA2DS2-VASc score significantly improves classification of AF patients at low and intermediate risk of stroke into a higher grade of thromboembolic score. Moreover, the CHA2DS2-VASc score could identify 'truly low risk' patients compared with the CHADS2 score. © 2016 Royal Australasian College of Physicians.
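The score being cross-checked between manual and computer-based calculation is a fixed point sum. The standard published CHA2DS2-VASc assignment can be encoded directly (this is the general scoring rule, not the registry's software):

```python
def cha2ds2_vasc(chf, hypertension, age, diabetes, stroke_tia, vascular, female):
    """CHA2DS2-VASc: congestive heart failure (1), hypertension (1), age >=75 (2),
    diabetes (1), prior stroke/TIA/thromboembolism (2), vascular disease (1),
    age 65-74 (1), sex category female (1). Binary inputs are 0/1."""
    score = chf + hypertension + diabetes + vascular + 2 * stroke_tia
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if female else 0
    return score

# Example: a 73-year-old woman with hypertension and no other risk factors
print(cha2ds2_vasc(0, 1, 73, 0, 0, 0, True))  # prints 3
```

Discordance between manual and automated scoring, as the abstract reports, typically comes from data-entry or criteria-interpretation differences rather than from the arithmetic itself.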
McAuley, Claire; Dersch, Ave; Kates, Lisa N; Sowan, Darryel R; Ollson, Christopher A
2016-12-01
As industrial development is increasing near northern Canadian communities, human health risk assessments (HHRA) are conducted to assess the predicted magnitude of impacts of chemical emissions on human health. One exposure pathway assessed for First Nations communities is the consumption of traditional plants, such as muskeg tea (Labrador tea) (Ledum/Rhododendron groenlandicum) and mint (Mentha arvensis). These plants are used to make tea and are not typically consumed in their raw form. Traditional practices were used to harvest muskeg tea leaves and mint leaves by two First Nations communities in northern Alberta, Canada. Under the direction of community elders, community youth collected and dried plants to make tea. Soil, plant, and tea decoction samples were analyzed for inorganic elements using inductively coupled plasma-mass spectrometry. Concentrations of inorganic elements in the tea decoctions were orders of magnitude lower than in the vegetation (e.g., manganese 0.107 mg/L in tea, 753 mg/kg in leaves). For barium, the practice of assessing ingestion of raw vegetation would have resulted in a hazard quotient (HQ) greater than the benchmark of 0.2. Using measured tea concentrations it was determined that exposure would result in risk estimates orders of magnitude below the HQ benchmark of 0.2 (HQ = 0.0049 and 0.017 for muskeg and mint tea, respectively). An HHRA calculating exposure to tea vegetation through direct ingestion of the leaves may overestimate risk. The results emphasize that food preparation methods must be considered when conducting an HHRA. This study illustrates how collaboration between Western scientists and First Nations communities can add greater clarity to risk assessments. © 2016 Society for Risk Analysis.
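The hazard-quotient logic the study applies, an estimated daily dose divided by a reference dose and compared against the 0.2 benchmark, can be sketched as follows. Only the 0.107 mg/L manganese tea concentration comes from the text above; the intake rate, body weight, and reference dose are hypothetical assumptions for illustration:

```python
def hazard_quotient(conc_mg_per_l, intake_l_per_day, body_weight_kg, rfd_mg_per_kg_day):
    """HQ = average daily dose / reference dose (non-cancer screening metric)."""
    daily_dose = conc_mg_per_l * intake_l_per_day / body_weight_kg  # mg/kg-day
    return daily_dose / rfd_mg_per_kg_day

# Manganese in tea decoction: 0.107 mg/L (reported above); assumed 0.5 L/day intake,
# 70 kg adult, and a hypothetical oral reference dose of 0.14 mg/kg-day
hq = hazard_quotient(0.107, 0.5, 70, 0.14)
```

Running the same formula with the leaf concentration (753 mg/kg) and an ingestion-of-raw-vegetation assumption is what produces the orders-of-magnitude overestimate the authors warn about.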
Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments
Directory of Open Access Journals (Sweden)
Anders Skovgård Olsen
2015-01-01
Full Text Available Estimating the expected annual damage (EAD due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how vulnerable it is to flood risk and how much can be gained by implementing e.g., climate change adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD based on a few extreme events by assuming a log-linear relationship between cost of an event and the corresponding return period. This method is compared to methods that are either more complicated or require more calculations. The choice of method by which the EAD is calculated appears to be of minor importance. At all three case study areas it seems more important that there is a shift in the damage costs as a function of the return period. The shift occurs approximately at the 10 year return period and can perhaps be related to the design criteria for sewer systems. Further, it was tested if the EAD estimation could be simplified by assuming a single unit cost per flooded area. The results indicate that within each catchment this may be a feasible approach. However the unit costs varies substantially between different case study areas. Hence it is not feasible to develop unit costs that can be used to calculate EAD, most likely because the urban landscape is too heterogeneous.
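Common to all three compared methods, EAD is the area under the damage-exceedance curve, i.e. damage integrated over exceedance probability p = 1/T. A textbook trapezoid sketch over a few design events, with invented damage figures rather than the case-study costs:

```python
def expected_annual_damage(return_periods, damages):
    """EAD ~= sum over adjacent events of (p_i - p_{i+1}) * (D_i + D_{i+1}) / 2,
    where p = 1/T is the annual exceedance probability of the T-year event.
    A generic trapezoid sketch, not any one of the paper's three methods exactly."""
    pairs = sorted(zip(return_periods, damages))   # ascending return period T
    probs = [1.0 / t for t, _ in pairs]            # exceedance probabilities
    dmgs = [d for _, d in pairs]
    ead = 0.0
    for i in range(len(pairs) - 1):
        ead += (probs[i] - probs[i + 1]) * (dmgs[i] + dmgs[i + 1]) / 2.0
    return ead

# Hypothetical damage costs for the 2-, 10-, and 100-year rain events
ead = expected_annual_damage([2, 10, 100], [0.0, 1.2e6, 5.0e6])
```

The methods the paper compares differ mainly in how damage is interpolated between the few simulated events (e.g. assuming a log-linear cost-return period relationship); the shift in unit costs around the 10-year event noted above makes that interpolation choice matter.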
Kuy, SreyRam; Romero, Ramon A L
2017-07-01
The Overton Brooks VA Medical Center Surgical Service had a high mortality. In an effort to reduce surgical mortality, we implemented a series of quality improvement interventions, including utilization of the ACS Surgical Risk Calculator to identify high-risk surgical patients for discussion in a multidisciplinary Pre-Operative Consultation Committee. Retrospective study describing the implementation of a risk stratification intervention incorporating the ACS Surgical Risk Calculator Tool and a multidisciplinary Pre-Operative Consultation Committee to target high-risk patients. Measurement of 30 day surgical mortality and risk adjusted Observed to Expected (O/E) mortality ratio. From May 2013 to September 2014, 614 high-risk patients were selected utilizing the ACS Risk Calculator and presented at the Pre-Operative Consultation Committee. Following implementation of this risk stratification intervention, 30-day mortality decreased by 66% from 0.9% to 0.3%, and risk adjusted O/E mortality ratio decreased from 2.5 to 0.8. Among the high risk patients presented, there was no increase in referrals to other facilities. There was a significant increase in cases requiring further preoperative optimization, from 6.3% at the beginning of the study period to 17.5% at the end of the study period. Implementation of a preoperative risk stratification intervention utilizing the ACS Surgical Risk Calculator along with a multidisciplinary Pre-Operative Consultation Committee can be successfully accomplished, with a significant decrease in 30-day surgical mortality. This is the first published report of utilization of the ACS Risk calculator as part of a systematic quality improvement tool to decrease surgical mortality. Published by Elsevier Inc.
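The risk-adjusted O/E ratio reported above is, in its simplest form, observed deaths divided by an expected count obtained by summing per-patient predicted mortality probabilities. A sketch with invented numbers (the probabilities below are illustrative, not actual ACS Risk Calculator output):

```python
def oe_ratio(observed_deaths, predicted_probs):
    """Risk-adjusted observed-to-expected mortality ratio. The expected
    count is the sum of per-patient predicted mortality probabilities,
    e.g. from a surgical risk calculator."""
    return observed_deaths / sum(predicted_probs)

# Hypothetical cohort of 50 patients with risk-calculator probabilities:
probs = [0.01, 0.02, 0.05, 0.10, 0.20] * 10
print(round(oe_ratio(3, probs), 2))  # 3 observed deaths vs 3.8 expected
```

An O/E below 1, as achieved post-intervention (0.8), means fewer deaths occurred than the case-mix predicted.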
Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments
DEFF Research Database (Denmark)
Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen
2015-01-01
van Vugt, Heidi A; Roobol, Monique J; van der Poel, Henk G; van Muilekom, Erik H A M; Busstra, Martijn; Kil, Paul; Oomens, Eric H; Leliveld-Kors, Anna; Bangma, Chris H; Korfage, Ida; Steyerberg, Ewout W
Study Type - Prognosis (cohort series). Level of Evidence 2a. What's known on the subject? and What does the study add? The present study is one of the first to investigate urologists' and patients' compliance with recommendations based on a risk calculator that calculates the …
From Risk Models to Loan Contracts: Austerity as the Continuation of Calculation by Other Means
Directory of Open Access Journals (Sweden)
Pierre Pénet
2014-06-01
Full Text Available This article analyses how financial actors sought to minimise financial uncertainties during the European sovereign debt crisis by employing simulations as legal instruments of market regulation. We first contrast two roles that simulations can play in sovereign debt markets: ‘simulation-hypotheses’, which work as bundles of constantly updated hypotheses with the goal of better predicting financial risks; and ‘simulation-fictions’, which provide fixed narratives about the present with the purpose of postponing the revision of market risks. Using ratings reports published by Moody’s on Greece and European Central Bank (ECB) regulations, we show that Moody’s stuck to a simulation-fiction and displayed rating inertia on Greece’s trustworthiness to prevent the destabilising effects that further downgrades would have on Greek borrowing costs. We also show that the multi-notch downgrade issued by Moody’s in June 2010 followed the ECB’s decision to remove ratings from its collateral eligibility requirements. Then, as regulators moved from ‘regulation through model’ to ‘regulation through contract’, ratings stopped functioning as simulation-fictions. Indeed, the conditions of the Greek bailout implemented in May 2010 replaced the CRAs’ models as the main simulation-fiction, which market actors employed to postpone the prospect of a Greek default. We conclude by presenting austerity measures as instruments of calculative governance rather than ideological compacts.
Nair, Kalyani P.; Harkness, Elaine F.; Gadde, Soujanye; Lim, Yit Y.; Maxwell, Anthony J.; Moschidis, Emmanouil; Foden, Philip; Cuzick, Jack; Brentnall, Adam; Evans, D. Gareth; Howell, Anthony; Astley, Susan M.
2017-03-01
Personalised breast screening requires assessment of individual risk of breast cancer, of which one contributory factor is weight. Self-reported weight has been used for this purpose, but may be unreliable. We explore the use of volume of fat in the breast, measured from digital mammograms. Volumetric breast density measurements were used to determine the volume of fat in the breasts of 40,431 women taking part in the Predicting Risk Of Cancer At Screening (PROCAS) study. Tyrer-Cuzick risk using self-reported weight was calculated for each woman. Weight was also estimated from the relationship between self-reported weight and breast fat volume in the cohort, and used to re-calculate Tyrer-Cuzick risk. Women were assigned to risk categories according to 10 year risk, ranging from below average to high (the highest category corresponding to a 10 year risk of ≥8%), and the original and re-calculated Tyrer-Cuzick risks were compared. Of the 716 women diagnosed with breast cancer during the study, 15 (2.1%) moved into a lower risk category, and 37 (5.2%) moved into a higher category when using weight estimated from breast fat volume. Of the 39,715 women without a cancer diagnosis, 1009 (2.5%) moved into a lower risk category, and 1721 (4.3%) into a higher risk category. The majority of changes were between below average and average risk categories (38.5% of those with a cancer diagnosis, and 34.6% of those without). No individual moved more than one risk group. Automated breast fat measures may provide a suitable alternative to self-reported weight for risk assessment in personalised screening.
Directory of Open Access Journals (Sweden)
Denzil O’Brien
2016-02-01
Full Text Available All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps.
Fuchs, Lynn S.; Schumacher, Robin F.; Long, Jessica; Namkung, Jessica; Malone, Amelia S.; Wang, Amber; Hamlett, Carol L.; Jordan, Nancy C.; Siegler, Robert S.; Changas, Paul
2016-01-01
The purposes of this study were to (a) investigate the efficacy of a core fraction intervention program on understanding and calculation skill and (b) isolate the effects of different forms of fraction word-problem (WP) intervention delivered as part of the larger program. At-risk 4th graders (n = 213) were randomly assigned at the individual…
Namkung, Jessica M.; Fuchs, Lynn S.
2016-01-01
The purpose of this study was to examine the cognitive predictors of calculations and number line estimation with whole numbers and fractions. At-risk 4th-grade students (N = 139) were assessed on 6 domain-general abilities (i.e., working memory, processing speed, concept formation, language, attentive behavior, and nonverbal reasoning) and…
de Lima, Mário Maciel; da Silva, Glaciane Rocha; Jensem Filho, Sebastião Salazar; Granja, Fabiana
2016-01-01
Cardiovascular disease is the major cause of morbidity and mortality across the world. Despite health campaigns to improve awareness of cardiovascular risk factors, there has been little improvement in cardiovascular mortality. In this study, we sought to examine the association between cardiovascular risk factors and people's perception of cardiovascular risk. This was an epidemiological, cross-sectional, descriptive, prospective study of Masonic men aged >40 years in Boa Vista, Brazil. Participants completed a health survey, which included three questions about perception of their stress level, overall health status, and risk of a heart attack. In addition, demographic and biological data were collected. A total of 101 Masonic men took part in the study; their mean age (± standard deviation) was 55.35±9.17 years and mean body mass index was 28.77±4.51 kg/m². Answers to the lifestyle questionnaire suggested an overall healthy lifestyle, including good diet and moderate exercise; despite this, approximately 80% were classified as overweight or obese. The majority of participants felt that they had a low stress level (66.3%), good overall general health (63.4%), and were at low risk of having a heart attack (71.3%). Masons who were overweight were significantly more likely to perceive themselves to be at risk of a heart attack (P=0.025). Despite over half of participants having a moderate to high risk of cardiovascular disease according to traditional risk factors, less than a third perceived themselves to be at high risk. Public health campaigns need to better communicate the significance of traditional cardiovascular risk in order to improve awareness of risk among the general population.
Energy Technology Data Exchange (ETDEWEB)
Walsh, Linda [Federal Office for Radiation Protection, Department of Radiation Protection and Health, Oberschleissheim (Germany); University of Manchester, The Faculty of Medical and Human Sciences, Manchester (United Kingdom); Schneider, Uwe [University of Zurich, Vetsuisse Faculty, Zurich (Switzerland); Radiotherapy Hirslanden AG, Aarau (Switzerland)
2013-03-15
Radiation-related risks of cancer can be transported from one population to another population at risk, for the purpose of calculating lifetime risks from radiation exposure. Transfer via excess relative risks (ERR) or excess absolute risks (EAR) or a mixture of both (i.e., from the life span study (LSS) of Japanese atomic bomb survivors) has been done in the past based on qualitative weighting. Consequently, the values of the weights applied and the method of application of the weights (i.e., as additive or geometric weighted means) have varied both between reports produced at different times by the same regulatory body and also between reports produced at similar times by different regulatory bodies. Since the gender and age patterns are often markedly different between EAR and ERR models, it is useful to have an evidence-based method for determining the relative goodness of fit of such models to the data. This paper identifies a method, using Akaike model weights, which could aid expert judgment and be applied to help to achieve consistency of approach and quantitative evidence-based results in future health risk assessments. The results of applying this method to recent LSS cancer incidence models are that the relative EAR weighting by solid cancer site, on a scale of 0-1, is zero for breast and colon, 0.02 for all solid, 0.03 for lung, 0.08 for liver, 0.15 for thyroid, 0.18 for bladder and 0.93 for stomach. The EAR weighting for female breast cancer increases from 0 to 0.3, if a generally observed change in the trend between female age-specific breast cancer incidence rates and attained age, associated with menopause, is accounted for in the EAR model. Application of this method to preferred models from a study of multi-model inference from many models fitted to the LSS leukemia mortality data results in an EAR weighting of 0. From these results it can be seen that lifetime risk transfer is most highly weighted by EAR only for stomach cancer. However …
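Akaike model weights, as used above to weight EAR against ERR models, follow directly from the models' AIC values. A minimal sketch with hypothetical AIC values (not taken from the paper):

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-dAIC_i/2) / sum_j exp(-dAIC_j/2),
    where dAIC_i = AIC_i - min(AIC). Weights sum to 1 and give the
    relative evidence for each model."""
    amin = min(aics)
    terms = [math.exp(-0.5 * (a - amin)) for a in aics]
    total = sum(terms)
    return [t / total for t in terms]

# Hypothetical AICs for an EAR model and an ERR model fitted to the same data:
w_ear, w_err = akaike_weights([1005.2, 1000.0])
print(round(w_ear, 3), round(w_err, 3))  # the better-fitting ERR model dominates
```

A site-specific EAR weight near zero (as found for breast and colon) simply means the ERR model fits those incidence data far better.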
Calculation-based definition of the risk of ground fatalities due to unintentional airplane crashes
Directory of Open Access Journals (Sweden)
І.Л. Государська
2005-04-01
Full Text Available Estimates are given of the expected number of ground fatalities likely to arise from accidents involving air carriers, air taxis, and general aviation. Measures for regulating this risk are considered.
Comparison of the historic recycling risk for BSE in three European countries by calculating R0.
Schwermer, H.; Koeijer, de A.A.; Brülisauer, F.; Heim, D.
2007-01-01
A deterministic model of BSE transmission is used to calculate the R0 values for specific years of the BSE epidemics in the United Kingdom (UK), the Netherlands (NL), and Switzerland (CH). In all three countries, the R0 values decreased below 1 after the introduction of a ban on feeding meat and bone meal …
Development of a risk-based mine closure cost calculation model
CSIR Research Space (South Africa)
Du Plessis, A
2006-06-01
Full Text Available This research is important because currently there are a number of mines that do not have sufficient financial provision to close and rehabilitate the mines. The magnitude of the lack of funds could be reduced or eliminated if the closure cost calculation...
Directory of Open Access Journals (Sweden)
Pavlos A. Kassomenos
2009-02-01
Full Text Available The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorology, and amount of fuel traded) determined by the appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was involved at crucial points of both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of that occupational population group. On assessing the estimated leukemia risk under the scope of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.
On the problems regarding the risk calculation used in IEC 62305
Energy Technology Data Exchange (ETDEWEB)
Gellen, T B; Szedenik, N; Kiss, I; Nemeth, B, E-mail: szedenik.norbert@vet.bme.hu [Budapest University of Technology and Economics, Egry J.u.18, Budapest (Hungary)
2011-06-23
The 2nd part of the international standard on lightning protection (IEC 62305) deals with risk management. The explanations of the mathematical principles and the basic terms of this part facilitate the proper application of the standard. This paper gives additional information for better understanding of the standard and highlights some issues that might occur in its practical application.
Root, Martin M; Dawson, Hannah R
2013-01-01
Weight-loss diets with varying proportions of macronutrients have had varying effects on weight loss, components of metabolic syndrome, and risk factors for vascular diseases. However, little work has examined the effect of weight-neutral dietary changes in macronutrients on these factors. This is an investigation using the OMNI Heart datasets available from the NHLBI BioLINCC program. This study compared a DASH-like diet high in carbohydrates with similar diets high in protein and high in unsaturated fats. Measures of metabolic syndrome, except waist, and measures of risk factors for vascular diseases were taken at the end of each dietary period. All 3 diets significantly lowered the number of metabolic syndrome components (p ≤ 0.002), with a standardized measure of changes in metabolic syndrome components suggesting that the high-protein, high-fat diet was most efficacious overall (p = 0.035). All 3 diets lowered a calculated 10-year risk of cardiovascular disease, with the high-protein and unsaturated-fat diets being the most efficacious; the unsaturated-fat diet also showed a slightly decreased calculated 9-year risk of diabetes (p = 0.11). Of the 3 weight-neutral diets, those high in protein and unsaturated fats appeared partially or wholly most beneficial.
Ellman, R.; Sibonga, J. D.; Bouxsein, M. L.
2010-01-01
The factor-of-risk (Phi), defined as the ratio of applied load to bone strength, is a biomechanical approach to hip fracture risk assessment that may be used to identify subjects who are at increased risk for fracture. The purpose of this project was to calculate the factor of risk in long-duration astronauts after return from a mission on the International Space Station (ISS), which is typically 6 months in duration. The load applied to the hip was calculated for a sideways fall from standing height based on the individual height and weight of the astronauts. The soft tissue thickness overlying the greater trochanter was measured from the DXA whole body scans and used to estimate attenuation of the impact force provided by soft tissues overlying the hip. Femoral strength was estimated from femoral areal bone mineral density (aBMD) measurements by dual-energy x-ray absorptiometry (DXA), which were performed between 5 and 32 days after landing. All long-duration NASA astronauts from Expeditions 1 to 18 were included in this study, where repeat flyers were treated as separate subjects. Male astronauts (n=20) had a significantly higher factor of risk for hip fracture (Phi) than females (n=5), with preflight values of 0.83+/-0.11 and 0.36+/-0.07, respectively, but there was no significant difference between preflight and postflight Phi (Figure 1). Femoral aBMD measurements were not found to be significantly different between men and women. Three men and no women exceeded the theoretical fracture threshold of Phi=1 immediately postflight, indicating that they would likely suffer a hip fracture if they were to experience a sideways fall with impact to the greater trochanter. These data suggest that male astronauts may be at greater risk for hip fracture than women following spaceflight, primarily due to relatively less soft tissue thickness and subsequently greater impact force.
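The factor-of-risk calculation above divides a fall-induced impact load by femoral strength. A sketch using a standard single-degree-of-freedom mass-spring fall model; the stiffness, effective-mass, and drop-height fractions below, and the femoral strength value, are illustrative assumptions rather than the study's parameters:

```python
import math

def sideways_fall_load(mass_kg, height_m, stiffness_n_per_m=71000.0):
    """Peak impact force on the greater trochanter from a sideways fall,
    via a one-degree-of-freedom mass-spring model:
    F = sqrt(2 * g * h_eff * k * m_eff).
    Effective height/mass fractions are illustrative assumptions."""
    g = 9.81
    h_eff = 0.51 * height_m   # assumed effective drop height of the hip
    m_eff = 0.50 * mass_kg    # assumed effective mass at impact
    return math.sqrt(2.0 * g * h_eff * stiffness_n_per_m * m_eff)

def factor_of_risk(applied_load_n, bone_strength_n):
    """Phi = applied load / femoral strength; Phi >= 1 suggests likely fracture."""
    return applied_load_n / bone_strength_n

load = sideways_fall_load(80.0, 1.78)   # hypothetical astronaut: 80 kg, 1.78 m
phi = factor_of_risk(load, 7000.0)      # hypothetical femoral strength in newtons
print(round(load), round(phi, 2))
```

Trochanteric soft tissue would further attenuate the applied load before the ratio is taken, which is why the thinner soft tissue of the male astronauts raises their Phi.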
Aoki, Shuichi; Miyata, Hiroaki; Konno, Hiroyuki; Gotoh, Mitsukazu; Motoi, Fuyuhiko; Kumamaru, Hiraku; Wakabayashi, Go; Kakeji, Yoshihiro; Mori, Masaki; Seto, Yasuyuki; Unno, Michiaki
2017-05-01
The morbidity rate after pancreaticoduodenectomy remains high. The objectives of this retrospective cohort study were to clarify the risk factors associated with serious morbidity (Clavien-Dindo classification grades IV-V), and to create complication risk calculators using the Japanese National Clinical Database. Between 2011 and 2012, data from 17,564 patients who underwent pancreaticoduodenectomy at 1,311 institutions in Japan were recorded in this database. The morbidity rate and associated risk factors were analyzed. The overall and serious morbidity rates were 41.6% and 4.5%, respectively. A pancreatic fistula (PF) of International Study Group of Pancreatic Fistula (ISGPF) grade C was significantly associated with serious morbidity, as were obesity, functional status, smoking status, the presence of a comorbidity, non-pancreatic cancer, combined vascular resection, and several abnormal laboratory results. C-indices of the risk models for serious morbidity and grade C PF were 0.708 and 0.700, respectively. Preventing a grade C PF is important for decreasing the serious morbidity rate, and these risk calculators contribute to adequate patient selection. © 2017 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Risk Management for Complex Calculations: EuSpRIG Best Practices in Hybrid Applications
Cernauskas, Deborah; Kumiega, Andrew; VanVliet, Ben
2008-01-01
As the need for advanced, interactive mathematical models has increased, user/programmers are increasingly choosing the MatLab scripting language over spreadsheets. However, applications developed in these tools have high error risk, and no best practices exist. We recommend that advanced, highly mathematical applications incorporate these tools with spreadsheets into hybrid applications, where developers can apply EuSpRIG best practices. Development of hybrid applications can reduce the pote...
Calculations of risk: regulation and responsibility for asbestos in social housing.
Waldman, Linda; Williams, Heather
2013-01-01
This paper examines questions of risk, regulation, and responsibility in relation to asbestos lodged in UK social housing. Despite extensive health and safety legislation protecting against industrial exposure, very little regulatory attention is given to asbestos present in domestic homes. The paper argues that this lack of regulatory oversight, combined with the informal, contractual, and small-scale work undertaken in domestic homes weakens the basic premise of occupational health and safety, namely that rational decision-making, technical measures, and individual safety behavior lead concerned parties (workers, employers, and others) to minimize risk and exposure. The paper focuses on UK council or social housing, examining how local housing authorities - as landlords - have a duty to provide housing, to protect and to care for residents, but points out that these obligations do not extend to health and safety legislation in relation to DIY undertaken by residents. At the same time, only conventional occupational health and safety, based on rationality, identification, containment, and protective measures, cover itinerant workmen entering these homes. Focusing on asbestos and the way things work in reality, this paper thus explores the degree to which official health and safety regulation can safeguard maintenance and other workers in council homes. It simultaneously examines how councils advise and protect tenants as they occupy and shape their homes. In so doing, this paper challenges the notion of risk as an objective, scientific, and effective measure. In contrast, it demonstrates the ways in which occupational risk - and the choice of appropriate response - is more likely situational and determined by wide-ranging and often contradictory factors.
Hörmansdörfer, C; Scharf, A; Golatta, M; Vaske, B; Corral, A; Hillemanns, P; Schmidt, P
2009-02-01
In February 2007 new software, Prenatal Risk Calculation (PRC), for calculating the risk of fetal aneuploidy was introduced in Germany. Our aim was to investigate its test performance and compare it with that of the PIA Fetal Database (PIA) software developed and used by The Fetal Medicine Foundation. Between 31 August 1999 and 30 June 2004 at the Women's Hospital of the Medical University of Hanover in Germany, 3120 singleton pregnancies underwent combined first-trimester screening at 11 + 0 to 13 + 6 weeks of gestation. Calculation of risk for fetal aneuploidy was computed prospectively using the PIA software. In a subsequent retrospective analysis, we recalculated risks for the 2653 of these datasets with known fetal outcome using the PRC software and compared the results. Of the 2653 datasets analyzed, 17 were cases of aneuploidy. At a cut-off of 1 : 230, for the detection of fetal aneuploidy, the respective sensitivity, false-positive rate and positive predictive value were 70.6%, 4.1% and 9.9% for PRC and 76.5%, 2.9% and 14.6% for PIA. At a cut-off of 1 : 300, the equivalent values were 70.6%, 5.6% and 7.5% for PRC and 76.5%, 4.0% and 11.0% for PIA. The differences in test performance between the two types of software were highly significant, with the sensitivity of PRC for the detection of fetal aneuploidy being lower and the false-positive rate higher. Had PRC been employed prospectively in our study, 40% more of the women examined would have been unnecessarily offered an invasive procedure for fetal karyotyping.
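The test-performance figures above follow from a 2 x 2 confusion table. The counts below are back-calculated from the reported PRC results at the 1 : 230 cut-off (12 of 17 aneuploidies detected; roughly 109 false positives among the 2636 unaffected pregnancies) and are therefore approximate:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity = TP/(TP+FN); false-positive rate = FP/(FP+TN);
    positive predictive value = TP/(TP+FP)."""
    return tp / (tp + fn), fp / (fp + tn), tp / (tp + fp)

# Approximate counts reconstructed from the reported PRC rates:
sens, fpr, ppv = screening_metrics(tp=12, fp=109, fn=5, tn=2527)
print(f"{sens:.1%} {fpr:.1%} {ppv:.1%}")
```

With only 17 affected pregnancies, even one extra detected case shifts sensitivity by almost 6 percentage points, which is worth bearing in mind when comparing the two programs.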
Keller, Deborah S; Kroll, Donald; Papaconstantinou, Harry T; Ellis, C Neal
2017-04-01
To identify patients with a high risk of 30-day mortality after elective surgery, who may benefit from referral for tertiary care, an institution-specific process using the Veterans Affairs Surgical Quality Improvement Program (VASQIP) Risk Calculator was developed. The goal was to develop and validate the methodology. Our hypothesis was that the process could optimize referrals and reduce mortality. A VASQIP risk score was calculated for all patients undergoing elective noncardiac surgery at a single Veterans Affairs (VA) facility. After statistical analysis, a VASQIP risk score of 3.3% predicted mortality was selected as the institutional threshold for referral to a tertiary care center. The model predicted that 16% of patients would require referral, and 30-day mortality would be reduced by 73% at the referring institution. The main outcomes measures were the actual vs predicted referrals and mortality rates at the referring and receiving facilities. The validation included 565 patients; 90 (16%) had VASQIP risk scores greater than 3.3% and were identified for referral; 60 consented. In these patients, there were 16 (27%) predicted mortalities, but only 4 actual deaths (p = 0.007) at the receiving institution. When referral was not indicated, the model predicted 4 mortalities (1%), but no actual deaths (p = 0.1241). These data validate this methodology to identify patients for referral to a higher level of care, reducing mortality at the referring institutions and significantly improving patient outcomes. This methodology can help guide decisions on referrals and optimize patient care. Further application and studies are warranted. Copyright © 2017 American College of Surgeons. All rights reserved.
Application of Risk within Net Present Value Calculations for Government Projects
Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson
2007-01-01
In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
The Development of a Liver Abscess after Screening Colonoscopy: A Calculated Risk?
Directory of Open Access Journals (Sweden)
Simon Bac
2017-09-01
Full Text Available We present the case of a patient who developed a liver abscess following screening colonoscopy. A colorectal screening program was introduced in the Netherlands in 2014 in order to reduce mortality from colorectal cancer. The patient in this report, a 63-year-old man with no significant medical history, underwent polypectomy of two polyps. Four days afterwards he presented to our emergency department with fever, nausea and vomiting. He was diagnosed with a Klebsiella pneumoniae liver abscess and was successfully treated with antibiotics for 6 weeks. This case highlights one of the risks of screening colonoscopy. Given the high number of colonoscopies due to the colorectal screening programs, we should be aware of complications in this mostly asymptomatic group of patients.
Energy Technology Data Exchange (ETDEWEB)
Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)
1993-02-01
This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.
Energy Technology Data Exchange (ETDEWEB)
Yuan, Y.C. [Square Y Consultants, Orchard Park, NY (US); Chen, S.Y.; Biwer, B.M.; LePoire, D.J. [Argonne National Lab., IL (US)
1995-11-01
This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident.
Energy Technology Data Exchange (ETDEWEB)
Dana L. Kelly; Nathan O. Siu
2010-06-01
As the U.S. Nuclear Regulatory Commission (NRC) continues its efforts to increase its use of risk information in decision making, the detailed, quantitative results of probabilistic risk assessment (PRA) calculations are coming under increased scrutiny. Where once analysts and users were not overly concerned with figure-of-merit variations of less than an order of magnitude, factors of two or even less can now spark heated debate regarding modeling approaches and assumptions. The philosophical and policy-related aspects of this situation are well recognized by the PRA community. On the other hand, the technical implications for PRA methods and modeling have not been as widely discussed. This paper illustrates the potential numerical effects of choices as to the details of models and methods for parameter estimation with three examples: 1) the selection of the time period of data used for parameter estimation, and issues related to component boundary and failure mode definitions; 2) the selection of alternative diffuse prior distributions, including the constrained noninformative prior distribution, in Bayesian parameter estimation; and 3) the impact of uncertainty in calculations for recovery of offsite power.
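The second example, prior sensitivity in Bayesian parameter estimation, can be made concrete with a conjugate gamma-Poisson update. The failure count, exposure time, and the two "diffuse" priors below are illustrative choices, not the NRC's models:

```python
# Sketch: posterior mean of a Poisson failure rate under a gamma(a, b) prior.
# The posterior is gamma(a + x, b + T), so the posterior mean is (a + x)/(b + T).
# Data and prior parameters are invented to show the prior sensitivity
# discussed in the text.

def posterior_mean(a, b, x, T):
    """Posterior mean of a failure rate given x failures in exposure time T."""
    return (a + x) / (b + T)

x, T = 2, 1000.0                                 # 2 failures in 1000 h (toy data)
jeffreys_mean = posterior_mean(0.5, 0.0, x, T)   # Jeffreys-like prior
diffuse_mean = posterior_mean(1.0, 0.001, x, T)  # another diffuse prior

# With sparse data the two "noninformative" priors disagree by about 20%,
# the kind of sub-order-of-magnitude difference that now attracts scrutiny.
```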
Karagiozoglou-Lampoudi, Thomais; Daskalou, Efstratia; Lampoudis, Dimitrios; Apostolou, Aggeliki; Agakidis, Charalampos
2015-05-01
The study aimed to test the hypothesis that computer-based calculation of malnutrition risk may enhance the ability to identify pediatric patients at malnutrition-related risk for an unfavorable outcome. The Pediatric Digital Scaled MAlnutrition Risk screening Tool (PeDiSMART), incorporating the World Health Organization (WHO) growth reference data and malnutrition-related parameters, was used. This was a prospective cohort study of 500 pediatric patients aged 1 month to 17 years. Upon admission, the PeDiSMART score was calculated and anthropometry was performed. The Pediatric Yorkhill Malnutrition Score (PYMS), Screening Tool Risk on Nutritional Status and Growth (STRONGkids), and Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP) malnutrition screening tools were also applied. PeDiSMART's association with the clinical outcome measures (weight loss/nutrition support and hospitalization duration) was assessed and compared with the other screening tools. The PeDiSMART score was inversely correlated with anthropometry and bioelectrical impedance phase angle (BIA PhA). The score's grading scale was based on BIA PhA quartiles. Weight loss/nutrition support during hospitalization was significantly and independently associated with malnutrition risk group allocation on admission, after controlling for anthropometric parameters and age. Receiver operating characteristic curve analysis showed a sensitivity of 87%, a specificity of 75%, and a significant area under the curve, which differed significantly from those of STRONGkids and STAMP. In the subgroups of patients whose PeDiSMART-based risk allocation differed from that based on the other tools, PeDiSMART allocation was more closely related to the outcome measures. PeDiSMART, applicable to the full age range of patients hospitalized in pediatric departments, graded according to BIA PhA, and embeddable in electronic medical records, enhances efficacy and reproducibility in identifying pediatric patients at malnutrition-related risk.
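For reference, the reported sensitivity and specificity are simple ratios from a 2x2 screening table. A sketch with invented counts chosen only to reproduce the reported 87%/75% figures (these are not the study's data):

```python
# Sketch: sensitivity and specificity from a 2x2 confusion table.
# tp/fn/tn/fp counts below are invented for illustration.

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sensitivity, specificity = sens_spec(tp=87, fn=13, tn=75, fp=25)
```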
Adam, Ahmed; Hellig, Julian C; Perera, Marlon; Bolton, Damien; Lawrentschuk, Nathan
2017-12-08
The use of mobile phone applications (Apps) has modernised the conventional practice of medicine. The diagnostic ability of the currently available Apps for prostate-specific antigen monitoring and within prostate cancer (PCa) risk calculators has not yet been appraised. We aimed to review, rate, and assess the everyday functionality and utility of all currently available PCa risk calculator Apps. A systematic search of iTunes, Google Play Store, BlackBerry World and Windows Apps Store was performed on 23/11/2017, using the search term 'prostate cancer risk calculator'. After applying the exclusion criteria, each App was individually assessed and rated using pre-set criteria, and grading was performed using the validated uMARS scale. In total, 83 Apps were retrieved. After applying our exclusion criteria, only 9 Apps were relevant; 2 were duplicates, and the remaining 7 were suitable for critical review. Data sizes ranged from 414 kB to 10.1 MB. The cost of the Apps ranged from South African rand (ZAR) 0.00 to ZAR 29.99. The overall mean category uMARS scores ranged from 2.8/5 to 4.5/5. Apps such as the Rotterdam Prostate Cancer Risk Calculator, Coral-Prostate Cancer Nomogram Calculator and CPC Risk Calculator performed the best. The currently available PCa risk calculator mobile Apps may be beneficial in counselling the concerned at-risk patient. These Apps have the potential to assist both the patient and the urologist alike. The 'predictability' of PCa risk calculator Apps may be further enhanced by the incorporation of newly validated risk factors and predictors for PCa.
Directory of Open Access Journals (Sweden)
Eunmi Kim
2014-09-01
Recently, flood damage caused by frequent localized downpours in cities has been increasing on account of abnormal climate phenomena and the growth of impermeable areas due to urbanization. This study suggests a method to estimate real-time flood risk on roads for drivers based on accumulated rainfall. Because rainfall is not measured directly on roads, the rainfall over a road link is calculated using a method adapted from meteorological techniques for estimating missing rainfall data. To process the data in real time, we use the inverse distance weighting (IDW) method, which is computationally simple, well suited to a real-time system, and commonly used for precipitation. With the real-time accumulated rainfall, the flooding history, the rainfall ranges that caused flooding in previous rainfall records and the frequency probability of precipitation are used to determine the flood risk on roads. A simulation using the suggested algorithms shows a high concordance rate between actual flooded areas in the past and the flooded areas derived from the simulation for the research region in Busan, Korea.
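The interpolation step can be sketched as follows; gauge positions and rainfall readings are invented for illustration, and the distance power p = 2 is a common default rather than the paper's calibrated choice:

```python
# Sketch: inverse distance weighting (IDW). The value at an ungauged point is
# a weighted mean of nearby gauge readings, with weights proportional to 1/d^p.
# Gauge coordinates and rainfall values are illustrative.

def idw(target, gauges, readings, power=2.0):
    """Interpolate a value at `target` from (x, y) gauge points via IDW."""
    num = den = 0.0
    for (x, y), r in zip(gauges, readings):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return r                      # target coincides with a gauge
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance^power
        num += w * r
        den += w
    return num / den

gauges = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
rain_mm = [10.0, 20.0, 30.0]
estimate = idw((1.0, 1.0), gauges, rain_mm)   # equidistant gauges -> plain mean
```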
van Vugt, Heidi A; Roobol, Monique J; van der Poel, Henk G; van Muilekom, Erik H A M; Busstra, Martijn; Kil, Paul; Oomens, Eric H; Leliveld, Annemarie; Bangma, Chris H; Korfage, Ida; Steyerberg, Ewout W
2012-07-01
Study Type - Prognosis (cohort series). Level of Evidence 2a. What's known on the subject? and What does the study add? The present study is one of the first to investigate urologists' and patients' compliance with recommendations based on a risk calculator that calculates the probability of indolent prostate cancer. A threshold was set for a recommendation of active surveillance vs active treatment. Active surveillance recommendations based on a prostate cancer risk calculator were followed by most patients, but 30% with active treatment recommendations chose active surveillance instead. This indicates that the threshold may be too high for urologists and patients. • To assess urologists' and patients' compliance with treatment recommendations based on a prostate cancer risk calculator (RC) and the reasons for non-compliance. • To assess the difference between patients who were compliant and non-compliant with recommendations based on this RC. • Eight urologists from five Dutch hospitals included 240 patients with prostate cancer (PCa), aged 55-75 years, from December 2008 to February 2011. • The urologists used the European Randomized Study of Screening for Prostate Cancer RC which predicts the probability of potentially indolent PCa (P[indolent]), using serum prostate-specific antigen (PSA), prostate volume and pathological findings on biopsy. • Inclusion criteria were PSA sextant biopsy cores, ≤ 20 mm cancer tissue, ≥ 40 mm benign tissue and Gleason ≤ 3 + 3. If the P(indolent) was >70%, active surveillance (AS) was recommended, and active treatment (AT) otherwise. • After the treatment decision, patients completed a questionnaire about their treatment choice, related (dis)advantages, and validated measurements of other factors, e.g. anxiety. • Most patients (45/55, 82%) were compliant with an AS recommendation. Another 54 chose AS despite an AT recommendation (54/185, 29%). • The most common reason for non-compliance with AT
Piacentini, Rubén D.; Cede, Alexander; Luccini, Eduardo; Stengel, Fernando
2004-01-01
The connection between ultraviolet (UV) radiation and various skin diseases is well known. In this work, we present the computer program "UVARG", developed to help prevent sunburn in persons exposed to solar UV radiation in Argentina, a country that extends from low (tropical) to high southern hemisphere latitudes. The software calculates the so-called "erythemal irradiance", i.e., the spectral irradiance weighted by the McKinlay and Diffey action spectrum for erythema and integrated over wavelength. The erythemal irradiance depends mainly on the following geophysical parameters: solar elevation, total ozone column, surface altitude, surface albedo, total aerosol optical depth and Sun-Earth distance. Minor corrections are due to the variability in the vertical ozone, aerosol, pressure, humidity and temperature profiles and in the extraterrestrial spectral solar UV irradiance. A key parameter in the software is a total ozone column climatology incorporating monthly averages, standard deviations and tendencies for the particular geographical situation of Argentina, obtained from TOMS/NASA satellite data from 1978 to 2000. Different skin types are considered in order to determine the sunburn risk at any time of the day and any day of the year, with and without sunscreen protection. We present examples of the software for three different regions: the high-altitude tropical Puna of Atacama desert in the North-West, Tierra del Fuego in the South when the ozone hole event passes over, and low summertime ozone conditions over Buenos Aires, the largest populated city in the country. In particular, we analyzed the maximum exposure time for persons having different skin types during representative days of the year (southern hemisphere equinoxes and solstices). This work was made possible by the collaboration between the Argentine Skin Cancer Foundation, the Institute of Physics Rosario (CONICET-National University of Rosario, Argentina) and the Institute of
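The erythemal irradiance defined above is a weighted integral over wavelength. A trapezoid-rule sketch with illustrative spectral values (the actual solar spectrum and the full McKinlay-Diffey action spectrum are not reproduced here):

```python
# Sketch: erythemal irradiance as the action-spectrum-weighted spectral
# irradiance integrated over wavelength (trapezoid rule). The three spectral
# points below are invented for illustration.

def erythemal_irradiance(wavelengths_nm, spectral_irr, action_spectrum):
    """Trapezoidal integral of action-weighted spectral irradiance."""
    weighted = [s * w for s, w in zip(spectral_irr, action_spectrum)]
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        step = wavelengths_nm[i + 1] - wavelengths_nm[i]
        total += 0.5 * (weighted[i] + weighted[i + 1]) * step
    return total

wl = [300.0, 310.0, 320.0]       # nm
irr = [0.01, 0.05, 0.10]         # W m^-2 nm^-1 (illustrative)
action = [0.65, 0.02, 0.003]     # erythemal weights (illustrative)
e_irr = erythemal_irradiance(wl, irr, action)
```

The real calculation runs over the whole UV band with the standardized action spectrum; the structure of the computation is the same.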
Borque-Fernando, Á; Esteban-Escaño, L M; Rubio-Briones, J; Lou-Mercadé, A C; García-Ruiz, R; Tejero-Sánchez, A; Muñoz-Rivero, M V; Cabañuz-Plo, T; Alfaro-Torres, J; Marquina-Ibáñez, I M; Hakim-Alonso, S; Mejía-Urbáez, E; Gil-Fabra, J; Gil-Martínez, P; Ávarez-Alegret, R; Sanz, G; Gil-Sanz, M J
2016-04-01
To prevent the overdiagnosis and overtreatment of prostate cancer (PC), therapeutic strategies have been established such as active surveillance and focal therapy, as well as methods for clarifying the diagnosis of high-grade prostate cancer (HGPC) (defined as a Gleason score ≥7), such as multiparametric magnetic resonance imaging and new markers such as the 4Kscore test (4KsT). By means of a pilot study, we aim to test the ability of the 4KsT to identify HGPC in prostate biopsies (Bx) and compare the test with other multivariate prognostic models such as the Prostate Cancer Prevention Trial Risk Calculator 2.0 (PCPTRC 2.0) and the European Research Screening Prostate Cancer Risk Calculator 4 (ERSPC-RC 4). Fifty-one patients underwent a prostate Bx according to standard clinical practice, with a minimum of 10 cores. The diagnosis of HGPC was agreed upon by 4 uropathologists. We compared the predictions from the various models by using the Mann-Whitney U test, area under the ROC curve (AUC) (DeLong test), probability density function (PDF), box plots and clinical utility curves. Forty-three percent of the patients had PC, and 23.5% had HGPC. The medians of probability for the 4KsT, PCPTRC 2.0 and ERSPC-RC 4 were significantly different between the patients with HGPC and those without HGPC (p≤.022) and were more differentiated in the case of 4KsT (51.5% for HGPC [25-75 percentile: 25-80.5%] vs. 16% [P 25-75: 8-26.5%] for non-HGPC; p=.002). All models presented AUCs above 0.7, with no significant differences between any of them and 4KsT (p≥.20). The PDF and box plots showed good discriminative ability, especially in the ERSPC-RC 4 and 4KsT models. The utility curves showed how a cutoff of 9% for 4KsT identified all cases of HGPC and provided a 22% savings in biopsies, which is similar to what occurs with the ERSPC-RC 4 models and a cutoff of 3%. The assessed predictive models offer good discriminative ability for HGPCs in Bx. The 4KsT is a good classification
Energy Technology Data Exchange (ETDEWEB)
Iwai, P; Lins, L Nadler [AC Camargo Cancer Center, Sao Paulo (Brazil)
2016-06-15
Purpose: There is a lack of studies with significant cohort data about patients with a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to these cardiac implantable electronic devices (CIED) as calculated by different algorithms, nor studies comparing doses calculated with and without heterogeneity correction. The aim of this study was to evaluate the influence of the Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB) algorithms, as well as of heterogeneity correction, on the risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using the three calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose to the device. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were 29% and 4% higher, respectively, than those calculated by PBC. The maximum difference between doses calculated by the algorithms was about 1 Gy, whether heterogeneity correction was used or not. Maximum doses calculated with heterogeneity correction were equal to or higher than those obtained without it in 84% of the cases with PBC, 77% with AAA and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used, with heterogeneity correction, to predict the dose at the CIED.
DEFF Research Database (Denmark)
Sørensen, Steen; Momsen, Günther; Sundberg, Karin
2011-01-01
Reliable individual risk calculation for trisomy (T) 13, 18, and 21 in first-trimester screening depends on good estimates of the medians for fetal nuchal translucency thickness (NT), free ß-subunit of human chorionic gonadotropin (hCGß), and pregnancy-associated plasma protein-A (PAPP...... calculation programs to assess whether the screening efficacies for T13, T18, and T21 could be improved by using our locally estimated medians....
Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L
2017-11-01
Recently published data support the use of a web-based risk calculator ( www.anastomoticleak.com ) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program ® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
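For context, the AUROC values compared above have a simple probabilistic reading: the AUROC equals the probability that a randomly chosen leak case is assigned a higher predicted risk than a randomly chosen non-leak case, with ties counted as half. A sketch with invented scores (not calculator outputs):

```python
# Sketch: AUROC via its rank-sum (Mann-Whitney) formulation.
# Predicted risks below are invented for illustration.

def auroc(pos_scores, neg_scores):
    """P(random positive scores above random negative), ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

leak_risks = [0.9, 0.7, 0.4]           # predicted risk for actual leaks
no_leak_risks = [0.5, 0.3, 0.2, 0.1]   # predicted risk for non-leaks
area = auroc(leak_risks, no_leak_risks)
```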
Hunter, Nezahat; Muirhead, Colin R; Bochicchio, Francesco; Haylock, Richard G E
2015-09-01
The risk of lung cancer mortality up to 75 years of age due to radon exposure has been estimated for both male and female continuing, ex- and never-smokers, based on various radon risk models and exposure scenarios. We used risk models derived from (i) the BEIR VI analysis of cohorts of radon-exposed miners, (ii) cohort and nested case-control analyses of a European cohort of uranium miners and (iii) the joint analysis of European residential radon case-control studies. Estimates of the lifetime lung cancer risk due to radon varied between these models by just over a factor of 2, and risk estimates based on models from analyses of European uranium miners exposed at comparatively low rates and of people exposed to radon in homes were broadly compatible. For a given smoking category, there was not much difference in lifetime lung cancer risk between males and females. The estimated lifetime risk of radon-induced lung cancer for exposure to a concentration of 200 Bq m⁻³ was in the range 2.98-6.55% for male continuing smokers and 0.19-0.42% for male never-smokers, depending on the model used and assuming a multiplicative relationship for the joint effect of radon and smoking. Stopping smoking at age 50 years decreases the lifetime risk due to radon by around a half relative to continuing smoking, but the risk for ex-smokers remains about a factor of 5-7 higher than that for never-smokers. Under a sub-multiplicative model for the joint effect of radon and smoking, the lifetime risk of radon-induced lung cancer was still estimated to be substantially higher for continuing smokers than for never-smokers. Radon mitigation, used to reduce radon concentrations in homes, can also have a substantial impact on lung cancer risk, even for persons in their 50s; for each of continuing smokers, ex-smokers and never-smokers, radon mitigation at age 50 would lower the lifetime risk of radon-induced lung cancer by about one-third. To maximise risk reductions, smokers in high
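The multiplicative joint-effect assumption mentioned above means the radon-induced excess risk scales with each smoking category's baseline lung cancer risk. A toy calculation with illustrative round numbers (not the fitted values of any of the cited models) happens to land inside the reported ranges:

```python
# Sketch of the multiplicative model: excess lifetime risk = baseline lifetime
# lung cancer risk x excess relative risk (ERR) from radon.
# Baselines and ERR below are invented round numbers.

def radon_induced_risk(baseline_risk, err_per_100_bqm3, concentration_bqm3):
    """Excess lifetime risk under a multiplicative radon-smoking interaction."""
    err = err_per_100_bqm3 * concentration_bqm3 / 100.0
    return baseline_risk * err

smoker_baseline = 0.15   # illustrative lifetime lung cancer risk, smoker
never_baseline = 0.01    # illustrative, never-smoker
err100 = 0.16            # illustrative ERR per 100 Bq/m^3

risk_smoker = radon_induced_risk(smoker_baseline, err100, 200.0)
risk_never = radon_induced_risk(never_baseline, err100, 200.0)
```

Because the same ERR multiplies very different baselines, the smoker-to-never-smoker ratio of radon-induced risk mirrors the baseline ratio, which is why the reported smoker risks are more than an order of magnitude larger.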
Yao, Siu-Sun; Supariwala, Azhar; Yao, Amanda; Dukkipati, Sai Sreenija; Wyne, Jamshad; Chaudhry, Farooq A
2015-09-01
This study evaluates the prognostic value of stress echocardiography (Secho) in short-term (10-year) and lifetime atherosclerotic cardiovascular disease risk-defined groups according to the American College of Cardiology/American Heart Association 2013 cardiovascular risk calculator. The ideal risk assessment and management of patients with low-to-intermediate or high short-term versus low or high lifetime risk is unclear. The purpose of this study was to evaluate the prognostic value of Secho in short-term and lifetime CV risk-defined groups. We evaluated 4,566 patients (60 ± 13 years; 46% men) who underwent Secho (41% treadmill and 59% dobutamine): patients with low-intermediate short-term risk were divided into low and high lifetime risk groups, with a third group at high short-term risk (≥20%, n = 3,537). Follow-up (3.2 ± 1.5 years) for nonfatal myocardial infarction (n = 102) and cardiac death (n = 140) was obtained. By univariate analysis, age was associated with outcome, and event rates were higher in the high short-term CV risk group (3.5% vs 1.0% per year). Secho provides incremental risk assessment in patients with low-intermediate or high short-term versus low or high lifetime cardiovascular risk. The event rate with a normal Secho is low (≤1% per year) but higher in patients with high short-term CV risk by the American College of Cardiology/American Heart Association 2013 cardiovascular risk calculator.
DEFF Research Database (Denmark)
Petersen, Kurt Erling
1986-01-01
Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...
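The Monte Carlo simulation approach to systems reliability mentioned above can be illustrated in miniature: estimate the failure probability of a 2-out-of-3 redundant system by sampling component states. The component failure probability is an arbitrary example value, not taken from the text:

```python
# Sketch: Monte Carlo estimate of the failure probability of a 2-out-of-3
# redundant system (the system fails when at least 2 of its 3 components fail).
# With p_fail = 0.1 the exact answer is 3(0.1^2)(0.9) + 0.1^3 = 0.028.

import random

def two_of_three_failure_prob(p_fail, trials, seed=1):
    """Fraction of sampled trials in which >= 2 of 3 components fail."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        down = sum(rng.random() < p_fail for _ in range(3))
        if down >= 2:
            failures += 1
    return failures / trials

estimate = two_of_three_failure_prob(0.1, 100_000)
```

Real tools handle far larger fault trees and rare events, where variance-reduction techniques matter, but the sampling principle is the same.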
Directory of Open Access Journals (Sweden)
Brenton A
2017-05-01
Ashley Brenton,1 Steven Richeimer,2,3 Maneesh Sharma,4 Chee Lee,1 Svetlana Kantorovich,1 John Blanchard,1 Brian Meshkin1 1Proove Biosciences, Irvine, CA, 2Keck School of Medicine, University of Southern California, Los Angeles, CA, 3Departments of Anesthesiology and Psychiatry, University of Southern California, Los Angeles, CA, 4Interventional Pain Institute, Baltimore, MD, USA Background: Opioid abuse in chronic pain patients is a major public health issue, with rapidly increasing addiction rates and deaths from unintentional overdose more than quadrupling since 1999. Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated single-nucleotide polymorphisms (SNPs). Patients and methods: The Proove Opioid Risk (POR) algorithm determines the predictability of aberrant opioid behavior using a comprehensive scoring algorithm incorporating phenotypic risk factors and neuroscience-associated SNPs. In a validation study with 258 subjects with diagnosed opioid use disorder (OUD) and 650 controls who reported using opioids, the POR successfully categorized patients at high and moderate risk of opioid misuse or abuse with 95.7% sensitivity. Regardless of changes in the prevalence of opioid misuse or abuse, the sensitivity of the POR remained >95%. Conclusion: The POR correctly stratifies patients into low-, moderate-, and high-risk categories to appropriately identify patients in need of additional guidance, monitoring, or treatment changes. Keywords: opioid use disorder, addiction, personalized medicine, pharmacogenetics, genetic testing, predictive algorithm
Directory of Open Access Journals (Sweden)
Anders Chen
BACKGROUND: Oral pre-exposure prophylaxis (PrEP) can be clinically effective and cost-effective for HIV prevention in high-risk men who have sex with men (MSM). However, individual patients have different risk profiles, real-world populations vary, and no practical tools exist to guide clinical decisions or public health strategies. We introduce a practical model of HIV acquisition, including both a personalized risk calculator for clinical management and a cost-effectiveness calculator for population-level decisions. METHODS: We developed a decision-analytic model of PrEP for MSM. The primary clinical effectiveness and cost-effectiveness outcomes were the number needed to treat (NNT) to prevent one HIV infection, and the cost per quality-adjusted life-year (QALY) gained. We characterized patients according to risk factors including PrEP adherence, condom use, sexual frequency, background HIV prevalence and antiretroviral therapy use. RESULTS: With standard PrEP adherence and national epidemiologic parameters, the estimated NNT was 64 (95% uncertainty range: 26, 176) at a cost of $160,000 (cost saving, $740,000) per QALY, comparable to other published models. With high (35%) HIV prevalence, the NNT was 35 (21, 57) and the cost per QALY was $27,000 (cost saving, $160,000); with high PrEP adherence, the NNT was 30 (14, 69) and the cost per QALY was $3,000 (cost saving, $200,000). In contrast, for monogamous, serodiscordant relationships with partner antiretroviral therapy use, the NNT was 90 (39, 157) and the cost per QALY was $280,000 ($14,000, $670,000). CONCLUSIONS: PrEP results vary widely across individuals and populations. Risk calculators may aid in patient education, clinical decision-making, and cost-effectiveness evaluation.
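The NNT and cost-per-QALY figures above follow standard definitions: NNT is the reciprocal of the absolute risk reduction, and cost per QALY is net cost divided by QALYs gained. A sketch with illustrative inputs (not the published model's calibrated parameters):

```python
# Sketch of the arithmetic behind NNT and cost-effectiveness ratios.
# Baseline risk, efficacy, and cost inputs are invented for illustration.

def nnt(risk_without, risk_with):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (risk_without - risk_with)

def cost_per_qaly(net_cost, qalys_gained):
    """Incremental cost-effectiveness ratio."""
    return net_cost / qalys_gained

baseline_risk = 0.025   # illustrative HIV risk over the analytic horizon
efficacy = 0.625        # illustrative relative risk reduction from PrEP

n = nnt(baseline_risk, baseline_risk * (1.0 - efficacy))
icer = cost_per_qaly(320_000.0, 2.0)
```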
Jiang, Henry Y; Kohtakangas, Erica L; Asai, Kengo; Shum, Jeffrey B
2017-05-02
The NSQIP Risk Calculator was developed to allow surgeons to inform their patients about their individual risks for surgery. Its ability to predict complication rates and length of stay (LOS) has made it an appealing tool for both patients and surgeons. However, the NSQIP Risk Calculator has been criticized for its generality and lack of detail for surgical subspecialties, including hepatopancreaticobiliary (HPB) surgery. We wished to determine whether the NSQIP Risk Calculator is predictive of post-operative complications and LOS with respect to Whipple's resections for our patient population, and to identify strategies to optimize early surgical outcomes in patients with pancreatic cancer. We conducted a retrospective review of patients who underwent an elective Whipple's procedure for benign or malignant pancreatic head lesions at Health Sciences North (Sudbury, Ontario), a tertiary care center, from February 2014 to August 2016. LOS and post-operative complications were compared between NSQIP-predicted and actual values. NSQIP-predicted complication rates were obtained using the NSQIP Risk Calculator from pre-defined preoperative risk factors. Clinical outcomes examined, at 30 days post-operation, included pneumonia, cardiac events, surgical site infection (SSI), urinary tract infection (UTI), venous thromboembolism (VTE), renal failure, readmission, and reoperation for procedural complications. Mortality, disposition to nursing or rehabilitation facilities, and LOS were also assessed. A total of 40 patients underwent Whipple's procedure at our center from February 2014 to August 2016. The average age was 68 (range 50-85), and there were 22 males and 18 females. The majority of patients had independent baseline functional status (39/40) with minimal pre-operative comorbidities. The overall post-operative morbidity was 47.5% (19/40). The rate of serious complications was 17.5%, with four Clavien grade II, two grade III, and one grade
Zee, D.C. van der; Vieira Travassos, D.; Jong, J.R. de; Tytgat, S.H.A.J.
2008-01-01
Purpose: This study was designed to determine the risk of anastomotic leakage after thoracoscopic repair for esophageal atresia by digitally measuring the length of the proximal esophagus and distance of carina to proximal esophagus. Methods: With the use of Picture Archiving and
Neslo, R E J; Oei, W; Janssen, M P
2017-09-01
Increasing identification of transmissions of emerging infectious diseases (EIDs) by blood transfusion raised the question which of these EIDs poses the highest risk to blood safety. For a number of the EIDs that are perceived to be a threat to blood safety, evidence on actual disease or transmission characteristics is lacking, which might render measures against such EIDs disputable. On the other hand, the fact that we call them "emerging" implies almost by definition that we are uncertain about at least some of their characteristics. So what is the relative importance of various disease and transmission characteristics, and how are these influenced by the degree of uncertainty associated with their actual values? We identified the likelihood of transmission by blood transfusion, the presence of an asymptomatic phase of infection, prevalence of infection, and the disease impact as the main characteristics of the perceived risk of disease transmission by blood transfusion. A group of experts in the field of infectious diseases and blood transfusion ranked sets of (hypothetical) diseases with varying degrees of uncertainty associated with their disease characteristics, and used probabilistic inversion to obtain probability distributions for the weight of each of these risk characteristics. These distribution weights can be used to rank both existing and newly emerging infectious diseases with (partially) known characteristics. Analyses show that in case there is a lack of data concerning disease characteristics, it is the uncertainty concerning the asymptomatic phase and the disease impact that are the most important drivers of the perceived risk. On the other hand, if disease characteristics are well established, it is the prevalence of infection and the transmissibility of the disease by blood transfusion that will drive the perceived risk. The risk prioritization model derived provides an easy to obtain and rational expert assessment of the relative importance of
Salas, S; Resseguier, N; Blay, J Y; Le Cesne, A; Italiano, A; Chevreau, C; Rosset, P; Isambert, N; Soulie, P; Cupissol, D; Delcambre, C; Bay, J O; Dubray-Longeras, P; Krengli, M; De Bari, B; Villa, S; Kaanders, J H A M; Torrente, S; Pasquier, D; Thariat, J O; Myroslav, L; Sole, C V; Dincbas, H F; Habboush, J Y; Zilli, T; Dragan, T; Khan R, K; Ugurluer, G; Cena, T; Duffaud, F; Penel, N; Bertucci, F; Ranchere-Vince, D; Terrier, P; Bonvalot, S; Macagno, N; Lemoine, C; Lae, M; Coindre, J M; Bouvier, C
2017-08-01
Solitary fibrous tumors (SFT) are rare, ubiquitous soft tissue tumors that are presumed to be of fibroblastic differentiation. At present, the challenge is to establish accurate prognostic factors. A total of 214 consecutive patients with SFT diagnosed in 24 participating cancer centers were entered into the European database (www.conticabase.org) to perform univariate and multivariate analyses for overall survival (OS), local recurrence incidence (LRI) and metastatic recurrence incidence (MRI), taking competing risks into account. A prognostic model was constructed for LRI and MRI. Internal and external validations of the prognostic models were carried out. An individual risk calculator was developed to quantify the risk of both local and metastatic recurrence. We restricted our analysis to 162 patients with local disease. Twenty patients (12.3%) were deceased at the time of analysis and the median OS was not reached. The LRI rates at 10 and 20 years were 19.2% and 38.6%, respectively. The MRI rates at 10 and 20 years were 31.4% and 49.8%, respectively. In multivariate analysis, age was retained for predicting OS and mitotic count tended toward significance. The factors influencing LRI were visceral localization, radiotherapy and age. Mitotic count, tumor localization other than limb, and age had independent value for MRI. Three prognostic groups for OS were defined based on the number of unfavorable prognostic factors, and calculations were carried out to predict the risk of local and metastatic recurrence for individual patients. LRI and MRI rates increased between 10 and 20 years, so relapses were delayed, suggesting that long-term monitoring is useful. This study also shows that different prognostic SFT sub-groups could benefit from different therapeutic strategies and that use of a survival calculator could become standard practice in SFTs to individualize treatment based on the clinical situation.
Crispin, Alexander; Klinger, Carsten; Rieger, Anna; Strahwald, Brigitte; Lehmann, Kai; Buhr, Heinz-Johannes; Mansmann, Ulrich
2017-10-01
The purpose of this study is to provide a web-based calculator predicting complication probabilities of patients undergoing colorectal cancer (CRC) surgery in Germany. Analyses were based on records of first-time CRC surgery between 2010 and February 2017, documented in the database of the Study, Documentation, and Quality Center (StuDoQ) of the Deutsche Gesellschaft für Allgemein- und Viszeralchirurgie (DGAV), a registry of CRC surgery in hospitals throughout Germany, covering demography, medical history, tumor features, comorbidity, behavioral risk factors, surgical procedures, and outcomes. Using logistic ridge regression, separate models were developed in learning samples of 6729 colon and 4381 rectum cancer patients and evaluated in validation samples of sizes 2407 and 1287. Discrimination was assessed using c statistics. Calibration was examined graphically by plotting observed versus predicted complication probabilities and numerically using Brier scores. We report validation results regarding 15 outcomes such as any major complication, surgical site infection, anastomotic leakage, bladder voiding disturbance after rectal surgery, abdominal wall dehiscence, various internistic complications, 30-day readmission, 30-day reoperation rate, and 30-day mortality. When applied to the validation samples, c statistics ranged between 0.60 for anastomosis leakage and 0.85 for mortality after rectum cancer surgery. Brier scores ranged from 0.003 to 0.127. While most models showed satisfactory discrimination and calibration, this does not preclude overly optimistic or pessimistic individual predictions. To avoid misinterpretation, one has to understand the basic principles of risk calculation and risk communication. An e-learning tool outlining the appropriate use of the risk calculator is provided.
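As a concrete illustration of one of the validation metrics reported above: the Brier score is simply the mean squared difference between predicted complication probabilities and observed 0/1 outcomes. A minimal sketch with made-up data (not taken from the StuDoQ registry):

```python
def brier_score(outcomes, predictions):
    """Mean squared difference between predicted probabilities and observed
    0/1 outcomes; 0 is perfect, 0.25 matches a constant 50% guess."""
    pairs = list(zip(outcomes, predictions))
    return sum((p - y) ** 2 for y, p in pairs) / len(pairs)

# Toy validation sample: observed complications vs. model probabilities
y = [0, 0, 1, 0, 1, 0, 0, 1]
p = [0.1, 0.2, 0.8, 0.1, 0.6, 0.3, 0.2, 0.9]
print(round(brier_score(y, p), 3))  # → 0.05
```

For rare outcomes a low Brier score mostly reflects the low base rate, which is why the authors report it alongside the c statistic and calibration plots.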
Martínez-Morillo, Eduardo; García, Belén Prieto; Calvo, Francisco Moreno; Alvarez, Francisco V
2012-03-01
To evaluate the population parameters applied to the calculation of risk for Down syndrome (DS) in the first trimester screening (FTS) and the comparison of performance obtained including or excluding maternal age from the mathematical algorithm. Three different calculation engines for prenatal risk of DS were developed on the basis of the population parameters from the Serum, Urine and Ultrasound Screening Study, the Fetal Medicine Foundation, and a combination of both of them. These calculators were evaluated in 14,645 first trimester pregnant women, including 59 DS affected fetuses, comparing their performance with that obtained by our commercial software Elipse® (Perkin Elmer Life and Analytical Sciences, Turku, Finland). Advanced first trimester screening (AFS) strategy was also analyzed, and a hybrid strategy (FTS + AFS) was evaluated. By selecting population parameters from the Serum, Urine and Ultrasound Screening Study, the detection rate increased from 76% (Elipse) to 86% with a small increase in the false positive rate (FPR), from 3.3% to 3.7%, respectively. DS screening performance significantly improved by using the hybrid strategy (AFS in pregnant women under 35 years and FTS in pregnant women over 35 years), with a 92% detection rate (FPR: 3.9%). In the present study, a new hybrid screening strategy has been proposed to achieve DS detection rates higher than 90%, for a convenient <4% FPR. © 2012 John Wiley & Sons, Ltd.
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
DEFF Research Database (Denmark)
Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.
2013-01-01
injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case-control studies...... and cell counts were the most frequently observed errors in the six DRUID case-control studies. Therefore, it is recommended that epidemiological studies that assess the risk of psychoactive substances in traffic pay specific attention to avoid these potential sources of random and systematic errors...
Chaikh, Abdulhamid; Balosso, Jacques
2017-06-01
During the past decades, in radiotherapy, dose distributions were calculated using density correction methods with pencil beam as a type 'a' algorithm. The objectives of this study are to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR), using modern advanced dose calculation algorithms (point kernel, type 'b'), which consider changes in lateral electron transport. Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types, respectively. A gamma index (γ) analysis was used to compare dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as EAR = OED type 'b' / OED type 'a'. The γ analysis results indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not reflect the difference, up to 15%, in the predictions of SCR resulting from alternative algorithms. Considering that modern algorithms are more accurate, showing
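The three OED models named above follow the usual dose-response formalism (Schneider's): OED is a volume-weighted average over the DVH, with the per-bin dose term linear, linear-exponential, or plateau-shaped. A sketch with two invented lung DVHs standing in for the type 'a' and type 'b' calculations; the alpha and delta parameters are organ-specific fitted values used here purely as illustration, not taken from this study:

```python
import math

def oed_linear(dvh):
    """dvh: list of (fractional_volume, dose_Gy) bins; volumes sum to 1."""
    return sum(v * d for v, d in dvh)

def oed_linear_exp(dvh, alpha=0.085):
    """Linear-exponential dose-response; alpha is organ-specific (illustrative)."""
    return sum(v * d * math.exp(-alpha * d) for v, d in dvh)

def oed_plateau(dvh, delta=0.317):
    """Plateau dose-response; delta is organ-specific (illustrative)."""
    return sum(v * (1 - math.exp(-delta * d)) / delta for v, d in dvh)

# Hypothetical lung DVHs from a type 'a' and a slightly shifted type 'b' plan
dvh_a = [(0.5, 2.0), (0.3, 10.0), (0.2, 20.0)]
dvh_b = [(0.5, 2.5), (0.3, 11.0), (0.2, 22.0)]
ear_ratio = oed_linear(dvh_b) / oed_linear(dvh_a)
print(round(ear_ratio, 3))  # → 1.119
```

With real DVHs the ratio depends on which of the three response models is used, which is why the abstract reports a range (1.08 to 1.13).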
Kim, Eric H; Weaver, John K; Shetty, Anup S; Vetter, Joel M; Andriole, Gerald L; Strope, Seth A
2017-04-01
To determine the added value of prostate magnetic resonance imaging (MRI) to the Prostate Cancer Prevention Trial risk calculator. Between January 2012 and December 2015, 339 patients underwent prostate MRI prior to biopsy at our institution. MRI was considered positive if there was at least 1 Prostate Imaging Reporting and Data System 4 or 5 MRI suspicious region. Logistic regression was used to develop 2 models: biopsy outcome as a function of the (1) Prostate Cancer Prevention Trial risk calculator alone and (2) combined with MRI findings. When including all patients, the Prostate Cancer Prevention Trial with and without MRI models performed similarly (area under the curve [AUC] = 0.74 and 0.78, P = .06). When restricting the cohort to patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%, the model with MRI outperformed the Prostate Cancer Prevention Trial alone model (AUC = 0.69 and 0.60, P = .01). Within this cohort of patients, there was no significant difference in discrimination between models for those with previous negative biopsy (AUC = 0.61 vs 0.63, P = .76), whereas there was a significant improvement in discrimination with the MRI model for biopsy-naïve patients (AUC = 0.72 vs 0.60, P = .01). The use of prostate MRI in addition to the Prostate Cancer Prevention Trial risk calculator provides a significant improvement in clinical risk discrimination for patients with estimated risk of high-grade (Gleason ≥7) prostate cancer ≤10%. Prebiopsy prostate MRI should be strongly considered for these patients. Copyright © 2016 Elsevier Inc. All rights reserved.
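The c statistic (AUC) used to compare the models above is the probability that a randomly chosen biopsy-positive patient receives a higher predicted risk than a randomly chosen biopsy-negative patient. A brute-force sketch with toy risk-calculator outputs (invented, not study data):

```python
def c_statistic(scores_pos, scores_neg):
    """Probability a random positive case is ranked above a random negative;
    ties count half. O(n*m), fine for a sketch."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Toy predicted risks for biopsy-positive vs. biopsy-negative patients
print(c_statistic([0.8, 0.6, 0.55], [0.5, 0.4, 0.6]))
```

A value of 0.5 means no discrimination; the 0.60 vs. 0.72 contrast reported for biopsy-naïve patients is a difference on exactly this scale.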
[Understanding dosage calculations].
Benlahouès, Daniel
2016-01-01
The calculation of dosages in paediatrics is the concern of the whole medical and paramedical team. This activity must generate a minimum of risks in order to prevent care-related adverse events. In this context, the calculation of dosages is a practice which must be understood by everyone. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
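Most paediatric dosage calculations reduce to weight-based proportionality: a daily amount in mg/kg/day, split into a number of doses. A deliberately simple sketch (the drug, dose and schedule are invented for illustration):

```python
def single_dose_mg(weight_kg, mg_per_kg_per_day, doses_per_day):
    """Weight-based paediatric dosing: daily amount split into equal doses."""
    return weight_kg * mg_per_kg_per_day / doses_per_day

# Invented prescription: 15 mg/kg/day for a 12 kg child, given in 3 doses
print(single_dose_mg(12, 15, 3))  # → 60.0
```

The arithmetic is trivial by design; the safety issue the article addresses is that every team member must be able to reproduce and verify it.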
Wals, Amadeo; Contreras, Jorge; Macías, José; Fortes, Inmaculada; Rivas, Daniel; González, Pedro; Herruzo, Ismael
2006-04-01
To calculate the normal tissue complication probabilities (NTCP) for the liver, right kidney, left kidney and spinal cord, as well as the global uncomplicated tumour control probability (UTCP), in gastric cancer patients who underwent treatment with radiotherapy after radical surgery in our environment. In April 2000, a postoperative chemoradiotherapy (QT-RT) protocol was started in the province of Malaga for gastric adenocarcinomas with postsurgical stage II or higher (pT3-4 and/or pN+). This clinical protocol served as the base for our retrospective theoretical NTCP and UTCP study. A virtual simulation and a 3D planning were made in all cases. The differential DVHs selected for each patient were obtained for the 4 organs at risk (OAR). Histogram reduction was made by Kutcher and Burman's effective volume method, and NTCP was calculated by Lyman's model. The following variables were calculated: maximal dose for each organ (Dmax), effective volume (Veff), TD50 (Veff/Vref), NTCP for each organ of the patient, and global UTCP for each patient. Differences between the 2 treatment techniques were analysed (2-field versus 4-field technique). For the NTCP calculations the computer application Albireo 1.0(R) was used. 29 patients were assessed, with a mean age of 54 +/- 10 years (range: 38-71); 65.5% men/34.5% women. The technique used was the AP-PA field technique in 51.7% (15) and the 4-field technique in 48.3% (14) of the cases. The global damage is estimated at 16%, with a range between 0 and 37%. This goes up to 25% with the 2-field technique, with a wide range between 2 and 48%, and it remains reduced to 4%, within a range between 0 and 12%, when 4 fields are used. There were significant differences concerning the estimated damage probability (NTCP) on liver, spinal cord and left kidney, depending on the use of two or four fields. The NTCP and global UTCP values of the organs at risk allow the net benefit of one technique to be compared with another in each particular case, although in our theoretical
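The Lyman model with Kutcher-Burman histogram reduction can be sketched compactly: the DVH is collapsed into an effective volume uniformly irradiated at the maximum dose, the volume-dependent tolerance dose TD50(v) = TD50(1)/v^n is derived, and NTCP is a standard normal CDF in the reduced variable t = (D - TD50(v))/(m·TD50(v)). A Python sketch with an invented DVH; the liver-like parameters TD50(1) = 40 Gy, m = 0.15, n = 0.32 are commonly cited Burman fits used only as illustration, not values from this study:

```python
import math

def kutcher_burman_veff(dvh, n):
    """Collapse a differential DVH [(fractional_volume, dose_Gy), ...]
    into an effective volume uniformly irradiated at the maximum dose."""
    d_max = max(d for _, d in dvh)
    veff = sum(v * (d / d_max) ** (1.0 / n) for v, d in dvh)
    return veff, d_max

def lyman_ntcp(dvh, td50_1, m, n):
    """Lyman model: NTCP = Phi(t) with t = (D - TD50(v)) / (m * TD50(v))
    and TD50(v) = TD50(1) / v**n (n is the volume-effect parameter)."""
    veff, d_max = kutcher_burman_veff(dvh, n)
    td50_v = td50_1 / veff ** n
    t = (d_max - td50_v) / (m * td50_v)
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))  # standard normal CDF

# Invented 3-bin liver DVH (fractional volume, dose in Gy)
dvh = [(0.4, 10.0), (0.4, 25.0), (0.2, 38.0)]
print(round(lyman_ntcp(dvh, 40.0, 0.15, 0.32), 3))
```

A large n spreads tolerance strongly with volume (parallel-like organs such as liver), while n near 0 makes the maximum dose dominate (serial-like organs such as spinal cord).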
SRD 166 MEMS Calculator (Web, free access) This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
Directory of Open Access Journals (Sweden)
Hendriek C. Boshuizen
2017-02-01
Abstract Background Disability Adjusted Life Years (DALYs) quantify the loss of healthy years of life due to dying prematurely and due to living with diseases and injuries. Current methods of attributing DALYs to underlying risk factors fall short on two main points. First, risk factor attribution methods often unjustly apply incidence-based population attributable fractions (PAFs) to prevalence-based data. Second, they mix two conceptually distinct approaches targeting different goals, namely an attribution method aiming to attribute uniquely to a single cause, and an elimination method aiming to describe a counterfactual situation without exposure. In this paper we describe dynamic modeling as an alternative, completely counterfactual approach and compare this to the approach used in the Global Burden of Disease 2010 study (GBD2010). Methods Using data on smoking in the Netherlands in 2011, we demonstrate how an alternative method of risk factor attribution using a pure counterfactual approach results in different estimates for DALYs. This alternative method is carried out using the dynamic multistate disease table model DYNAMO-HIA. We investigate the differences between our alternative method and the method used by the GBD2010 by doing additional analyses using data from a synthetic population in steady state. Results We observed important differences between the outcomes of the two methods: in an artificial situation where dynamics play a limited role, DALYs are a third lower as compared to those calculated with the GBD2010 method (398,000 versus 607,000 DALYs). The most important factor is newly occurring morbidity in life years gained, which is ignored in the GBD2010 approach. Age-dependent relative risks and exposures lead to additional differences between methods as they distort the results of prevalence-based DALY calculations, but the direction and magnitude of the distortions depend on the particular situation. Conclusions We argue that the
Boshuizen, Hendriek C; Nusselder, Wilma J; Plasmans, Marjanne H D; Hilderink, Henk H; Snijders, Bianca E P; Poos, René; van Gool, Coen H
2017-02-14
Disability Adjusted Life Years (DALYs) quantify the loss of healthy years of life due to dying prematurely and due to living with diseases and injuries. Current methods of attributing DALYs to underlying risk factors fall short on two main points. First, risk factor attribution methods often unjustly apply incidence-based population attributable fractions (PAFs) to prevalence-based data. Second, they mix two conceptually distinct approaches targeting different goals, namely an attribution method aiming to attribute uniquely to a single cause, and an elimination method aiming to describe a counterfactual situation without exposure. In this paper we describe dynamic modeling as an alternative, completely counterfactual approach and compare this to the approach used in the Global Burden of Disease 2010 study (GBD2010). Using data on smoking in the Netherlands in 2011, we demonstrate how an alternative method of risk factor attribution using a pure counterfactual approach results in different estimates for DALYs. This alternative method is carried out using the dynamic multistate disease table model DYNAMO-HIA. We investigate the differences between our alternative method and the method used by the GBD2010 by doing additional analyses using data from a synthetic population in steady state. We observed important differences between the outcomes of the two methods: in an artificial situation where dynamics play a limited role, DALYs are a third lower as compared to those calculated with the GBD2010 method (398,000 versus 607,000 DALYs). The most important factor is newly occurring morbidity in life years gained, which is ignored in the GBD2010 approach. Age-dependent relative risks and exposures lead to additional differences between methods as they distort the results of prevalence-based DALY calculations, but the direction and magnitude of the distortions depend on the particular situation. We argue that the GBD2010 approach is a hybrid of an attributional and
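The attribution step that PAF-based approaches rely on is, for a single binary exposure, Levin's formula PAF = p(RR-1)/(1+p(RR-1)); the burden assigned to the risk factor is the total burden multiplied by this fraction. The dynamic counterfactual approach argued for above instead re-simulates the population without the exposure. A sketch of the PAF side only, with illustrative numbers (not the Dutch smoking data):

```python
def levin_paf(prevalence, relative_risk):
    """Levin's population attributable fraction for a binary exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Illustrative figures: 25% exposure prevalence, relative risk 10
print(round(levin_paf(0.25, 10.0), 3))  # → 0.692
```

The paper's point is that multiplying prevalence-based DALYs by such an incidence-based fraction, and ignoring morbidity newly arising in the life years gained, is what drives the 398,000 vs. 607,000 discrepancy.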
DEFF Research Database (Denmark)
Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René
2013-01-01
injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case–control studies....... Relevant information was gathered from the DRUID-reports for eleven indicators for errors. The results showed that differences between the odds ratios in the DRUID case–control studies may indeed be (partially) explained by random and systematic errors. Selection bias and errors due to small sample sizes...... and cell counts were the most frequently observed errors in the six DRUID case–control studies. Therefore, it is recommended that epidemiological studies that assess the risk of psychoactive substances in traffic pay specific attention to avoid these potential sources of random and systematic errors...
Choi, Hansol; Shim, Jee-Seon; Lee, Myung Ha; Yoon, Young Mi; Choi, Dong Phil; Kim, Hyeon Chang
2016-09-01
Low-density lipoprotein cholesterol (LDL-C), an established cardiovascular risk factor, can be generally determined by calculation from total cholesterol, high-density lipoprotein cholesterol, and triglyceride concentrations. The aim of this study was to compare LDL-C estimations using various formulas with directly measured LDL-C in a community-based group and hospital-based group among the Korean population. A total of 1498 participants were classified into four groups according to triglyceride concentrations as follows: <100, 100-199, 200-299, and ≥300 mg/dL. LDL-C was calculated using the Friedewald, Chen, Vujovic, Hattori, de Cordova, and Anandaraja formulas and directly measured using a homogenous enzymatic method. Pearson's correlation coefficients, intraclass correlation coefficients (ICC), Passing & Bablok regression, and Bland-Altman plots were used to evaluate the performance of six formulas. The Friedewald formula had the highest accuracy (ICC=0.977; 95% confidence interval 0.974-0.979) of all the triglyceride ranges, while the Vujovic formula had the highest accuracy (ICC=0.876; 98.75% confidence interval 0.668-0.951) in people with triglycerides ≥300 mg/dL. The mean difference was the lowest for the Friedewald formula (0.5 mg/dL) and the percentage error was the lowest for the Vujovic formula (30.2%). However, underestimation of the LDL-C formulas increased with triglyceride concentrations. The accuracy of the LDL-C formulas varied considerably with differences in triglyceride concentrations. The Friedewald formula outperformed other formulas for estimating LDL-C against a direct measurement and the Vujovic formula was suitable for hypertriglyceridemic samples; it could be used as an alternative cost-effective tool to measure LDL-C when the direct measurement cannot be afforded.
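The Friedewald estimate evaluated above is TC - HDL-C - TG/5 (all in mg/dL), which approximates VLDL-C as TG/5 and is conventionally not applied at TG ≥ 400 mg/dL. A minimal sketch:

```python
def ldl_friedewald(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL-C in mg/dL (VLDL-C approximated as TG/5).
    Conventionally not applied when TG >= 400 mg/dL."""
    if triglycerides >= 400:
        raise ValueError("Friedewald estimate invalid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

print(ldl_friedewald(200, 50, 150))  # → 120.0
```

The TG/5 term is exactly where the underestimation at high triglycerides comes from, which is why the study finds an alternative formula preferable for hypertriglyceridemic samples.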
1994-01-01
MathSoft Plus 5.0 is a calculation software package for electrical engineers and computer scientists who need advanced math functionality. It incorporates SmartMath, an expert system that determines a strategy for solving difficult mathematical problems. SmartMath was the result of the integration into Mathcad of CLIPS, a NASA-developed shell for creating expert systems. By using CLIPS, MathSoft, Inc. was able to save the time and money involved in writing the original program.
McCarty, George
1982-01-01
How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.5^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, in the functio...
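The sequence in that quick example, 1.1^10, 1.01^100, 1.001^1000, ..., is the calculator-friendly route to the limit defining e: (1 + h)^(1/h) as h → 0. Reproducing the keystrokes in Python:

```python
import math

# (1 + h) ** (1 / h) for successively smaller h approaches e = 2.71828...
for h in (0.1, 0.01, 0.001, 0.0001):
    print(round((1 + h) ** (1 / h), 6))
# → 2.593742, 2.704814, 2.716924, 2.718146
print(round(math.e, 6))  # → 2.718282
```

As the text stresses, no finite stage of this table proves anything about the limit, but watching the digits settle is exactly the kind of numerical evidence the book builds its intuition on.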
Energy Technology Data Exchange (ETDEWEB)
Manning, Karessa L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dolislager, Fredrick G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bellamy, Michael B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-11-01
The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening-level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify isotopes contributing the highest risk and dose as well as to establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil-to-plant interactions require updates to be implemented into the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil-to-tissue transfer factors (TFs) for animals and soil-to-plant transfer factors (BVs). New biota have been added in both the produce and animal products categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.
Energy Technology Data Exchange (ETDEWEB)
1975-10-01
Information is presented concerning the radioactive releases from the containment following accidents; radioactive inventory of the reactor core; atmospheric dispersion; reactor sites and meteorological data; radioactive decay and deposition from plumes; finite distance of plume travel; dosimetric models; health effects; demographic data; mitigation of radiation exposure; economic model; and calculated results with consequence model.
Directory of Open Access Journals (Sweden)
Renata Rajtar-Salwa
2017-01-01
The aim of this study was to assess the relationship between biomarkers (high-sensitivity troponin I [hs-TnI] and N-terminal pro-brain natriuretic peptide [NT-proBNP]) and the calculated 5-year percentage risk score of sudden cardiac death (SCD) in hypertrophic cardiomyopathy (HCM). Methods. In 46 HCM patients (mean age 39 ± 7 years, 24 males and 22 females), echocardiographic examination, including the stimulating maneuvers to provoke a maximized LVOT gradient, was performed and ECG Holter monitoring was immediately started. After 24 hours, the ECG Holter was finished and hs-TnI and NT-proBNP were measured. Patients were divided according to (1) the value of both biomarkers (hs-TnI-positive and hs-TnI-negative subgroups) and (2) NT-proBNP (lower and higher subgroups divided by the median). Results. In the comparison between 19 patients (hs-TnI positive) and 27 patients (hs-TnI negative), the calculated 5-year percentage risk of SCD in HCM was significantly greater (6.38 ± 4.17% versus 3.81 ± 3.23%, P < 0.05). Conclusions. Patients with HCM and a positive hs-TnI test have a higher risk of SCD estimated according to the SCD calculator recommended by the ESC Guidelines 2014 than patients with a negative hs-TnI test.
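The ESC 2014 HCM Risk-SCD calculator referenced above has the standard Cox survival-model form P(SCD within 5 yr) = 1 - S0^exp(PI), where PI is a linear prognostic index over clinical predictors (wall thickness, left atrial size, LVOT gradient, family history of SCD, NSVT, syncope, age) and S0 is the baseline 5-year survival. A sketch of the functional form only; the S0 value and the PI inputs below are illustrative stand-ins, not the published coefficients:

```python
import math

def risk_scd_5yr(prognostic_index, s0=0.998):
    """Cox survival-model form used by such risk calculators:
    P(event within horizon) = 1 - S0 ** exp(PI).
    s0 and the coefficients inside PI are model-specific published
    values; 0.998 here is illustrative, not an endorsement."""
    return 1 - s0 ** math.exp(prognostic_index)

# Invented prognostic-index values, NOT the published HCM coefficients
for pi in (1.0, 2.0, 3.0):
    print(round(100 * risk_scd_5yr(pi), 2))  # 5-yr risk in percent
```

The exponential link is why modest biomarker-associated shifts in the predictors can translate into the roughly doubled risk score (6.38% vs. 3.81%) seen between the hs-TnI subgroups.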
Energy Technology Data Exchange (ETDEWEB)
Negrisoli, Manoel E.M.
1989-12-31
This work aims to present a method which permits an optimized design of transmission tower insulating structures for a specified risk of failure, under the different types of electrical stresses to which they might be submitted, considering statistical probabilities of unfavorable atmospheric conditions (pressure, temperature, humidity and wind speed), the amplitudes of voltage surges and their distribution along the line, as well as the number of parallel tower gaps and insulator swings. Conventional design methods are used for a preliminary design based on all types of overvoltage stresses. The risk of failure is calculated considering the atmospheric conditions along the line route. Adjustments should be made until the design can be considered satisfactory and its risk of failure complies with the specified one. In order to complete the tower insulation coordination project, an evaluation of lightning impulse performance should be made. 24 refs., 41 figs., 6 tabs.
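In conventional insulation coordination, the risk of failure is the convolution of the overvoltage stress distribution with the insulation strength distribution, R = ∫ f(V)·P(V) dV, and with n parallel tower gaps the line withstands only if every gap withstands. A numerical sketch under invented Gaussian assumptions (the surge statistics, CFO and deviations below are hypothetical, not from this work):

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def risk_of_failure(surge_mu, surge_sigma, cfo, cfo_sigma, n_gaps=1, steps=2000):
    """Midpoint-rule integral of surge-voltage density times the flashover
    probability; one gap flashes over with probability Phi((v-CFO)/sigma),
    and the line fails if any of the n parallel gaps flashes over."""
    lo, hi = surge_mu - 4 * surge_sigma, surge_mu + 4 * surge_sigma
    dv = (hi - lo) / steps
    risk = 0.0
    for i in range(steps):
        v = lo + (i + 0.5) * dv
        p_flash = 1 - (1 - norm_cdf(v, cfo, cfo_sigma)) ** n_gaps
        risk += norm_pdf(v, surge_mu, surge_sigma) * p_flash * dv
    return risk

# Invented figures: switching surges ~ N(900 kV, 100 kV), gap CFO 1100 kV (5% sigma)
print(risk_of_failure(900, 100, 1100, 55, n_gaps=1))
print(risk_of_failure(900, 100, 1100, 55, n_gaps=500))
```

The multi-gap term is why the number of towers exposed to a surge matters as much as the single-gap strength: risk grows rapidly with n for the same gap design.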
Energy Technology Data Exchange (ETDEWEB)
Kaplan, D
2005-08-31
The purpose of this document is to provide a technically defensible list of distribution coefficients, or Kd values, for use in performance assessment (PA) and special analysis (SA) calculations at the Savannah River Site (SRS). Only Kd values for radionuclides that have new information related to them, or that have recently been recognized as being important, are discussed in this report. Some 150 Kd values are provided in this report for various waste-disposal or tank-closure environments: soil, corrosion in grout, oxidizing grout waste, gravel, clay, and reducing concrete environments. Documentation and justification for the selection of each Kd value are provided.
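In transport calculations a Kd value enters most simply through the retardation factor R = 1 + (ρ_b/θ)·Kd, the factor by which a sorbing radionuclide migrates more slowly than the pore water. A sketch with generic soil properties (the bulk density and porosity below are illustrative defaults, not SRS-specific values):

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_per_cm3=1.6, porosity=0.4):
    """R = 1 + (rho_b / theta) * Kd: ratio of pore-water velocity to
    contaminant velocity. Illustrative generic soil parameters."""
    return 1 + (bulk_density_g_per_cm3 / porosity) * kd_ml_per_g

# A non-sorbing tracer (Kd = 0) vs. a moderately sorbing nuclide (Kd = 10 mL/g)
print(retardation_factor(0))    # → 1.0
print(retardation_factor(10))   # → 41.0
```

This is why the choice of Kd for each disposal environment (soil, grout, clay, reducing concrete) dominates predicted travel times in PA and SA calculations.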
Directory of Open Access Journals (Sweden)
Ana María Rubio Lorente
2017-02-01
Echogenic intracardiac foci are a second-trimester marker associated with aneuploidy in high-risk populations. The objective of this study is to assess the validity of echogenic intracardiac foci for Down syndrome detection in the second-trimester ultrasound scan. A systematic search in major bibliographic databases was carried out (MEDLINE, EMBASE, CINAHL). Twenty-five studies about echogenic intracardiac foci were selected for statistical synthesis in this systematic review. The 25 studies considered to be relevant were subjected to critical reading, following the Critical Appraisal Skills Programme criteria, by at least three independent observers. The published articles were then subjected to a meta-analysis. A global sensitivity of 21.8% and a 4.1% false-positive rate were obtained. The positive likelihood ratio was 5.08 (95% confidence interval, 4.04-6.41). The subgroup analysis did not reveal statistically significant differences. In conclusion, echogenic intracardiac foci as an isolated marker could be a tool to identify, rather than exclude, the high-risk group for Down syndrome, although it should be noted that it shows low sensitivity.
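A positive likelihood ratio is sensitivity divided by the false-positive rate, and it updates a pre-test probability through the odds form of Bayes' theorem. A sketch using the abstract's pooled estimates (note the naive ratio 0.218/0.041 ≈ 5.32 differs slightly from the meta-analytically pooled 5.08, since pooling is not a simple ratio of pooled rates); the 1-in-200 pre-test risk is an illustrative assumption:

```python
def lr_positive(sensitivity, false_positive_rate):
    """Positive likelihood ratio: factor by which a positive finding raises the odds."""
    return sensitivity / false_positive_rate

def post_test_probability(pre_test_p, lr):
    odds = pre_test_p / (1 - pre_test_p) * lr
    return odds / (1 + odds)

lr = lr_positive(0.218, 0.041)
print(round(lr, 2))  # → 5.32
# Applied to an illustrative 1-in-200 pre-test risk of Down syndrome
print(round(post_test_probability(1 / 200, lr), 4))  # → 0.026
```

An LR+ around 5 raises a 0.5% background risk only to about 2.6%, which matches the conclusion: the marker flags a higher-risk group but, with 21.8% sensitivity, cannot rule Down syndrome out.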
Romero, Jose-María; Bover, Jordi; Fite, Joan; Bellmunt, Sergi; Dilmé, Jaime-Félix; Camacho, Mercedes; Vila, Luis; Escudero, Jose-Román
2012-11-01
Risk prediction is important in medical management, especially to optimize patient management before surgical intervention. No quantitative risk scores or predictors are available for patients with peripheral arterial disease (PAD). Surgical risk and prognosis are usually based on anesthetic scores or clinical evaluation. We suggest that renal function is a better predictor of risk than other cardiovascular parameters. This study used the four-variable Modification of Diet in Renal Disease (MDRD-4)-calculated glomerular filtration rate (GFR) to compare classical cardiovascular risk factors with prognosis and cardiovascular events of hospitalized PAD patients. The study evaluated 204 patients who were admitted for vascular intervention and diagnosed with grade IIb, III, or IV PAD or with carotid or renal stenosis. Those with carotid or renal stenosis were excluded, leaving 188 patients who were randomized from 2004 to 2005 and monitored until 2010. We performed a life-table analysis with a 6-year follow-up period and one final checkpoint. The following risk factors were evaluated: age, sex, ischemic heart disease, ictus (as a manifestation of cerebrovascular disease related to systemic arterial disease), diabetes, arterial hypertension, dyslipidemia, smoking, chronic obstructive pulmonary disease, type of vascular intervention, and urea and creatinine plasma levels. The GFR was calculated using the MDRD-4 equation. Death, major cardiovascular events, and reintervention for arterial disease were recorded during the follow-up. Patients (73% men) had a mean age of 71.38 ± 11.43 (standard deviation) years. PAD grade IIb was diagnosed in 41 (20%) and grade III-IV in 147 (72%). Forty-two minor amputations (20.6%), 21 major amputations (10.3%), and 102 revascularizations (50%) were performed. A major cardiovascular event occurred in 60 patients (29.4%), and 71 (34.8%) died. Multivariate logistic regression analysis showed that the MDRD-4 GFR, age, and male sex were
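The MDRD-4 estimate used in the study is GFR = 175 × Scr^-1.154 × age^-0.203 × 0.742 (if female) × 1.212 (if Black), in mL/min/1.73 m²; the constant 175 applies to IDMS-traceable creatinine assays (the original calibration used 186). A sketch with an invented patient near the cohort's mean age:

```python
def mdrd4_gfr(serum_creatinine_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD eGFR in mL/min/1.73 m^2 (IDMS-traceable constant 175)."""
    gfr = 175 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Illustrative 71-year-old male with creatinine 1.2 mg/dL
print(round(mdrd4_gfr(1.2, 71), 1))
```

Because the equation is dominated by the creatinine power term, modest creatinine elevations in elderly PAD patients translate into large eGFR drops, which is consistent with renal function acting as a sensitive prognostic marker here.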
Day, Emily Ruth; Lefkowitz, Daniel K; Marshall, Elizabeth G; Hovinga, Mary
2010-10-01
Commercial fishing has high rates of work-related injury and death and needs preventive strategies. Work-related fatal and nonfatal injury rates for New Jersey (NJ) commercial fishermen who suffered unintentional traumatic injuries from 2001 to 2007 are calculated using data from the United States Coast Guard (USCG) Marine Safety and Pollution Database and estimated denominator data. Fatalities were compared to those ascertained by the NJ Fatality Assessment Control and Evaluation (FACE) surveillance system. For the study years, 225 nonfatal injuries and 31 fatal injuries were reported. Among nonfatal injuries, the causes by frequency were fall onto surface, crushed between objects, struck by moving object, line handling/caught in lines, collision with fixed objects, fall into water, and other noncontact injuries. The distribution of fatal injuries differed, with the most frequent cause as crushed between objects. Falls into water and several noncontact injuries accounted for most of the other fatalities. The large majority (96%) of nonfatal injuries were contact injuries, whereas only 68% of fatalities were classified as contact. The overall incidence rate of nonfatal injuries was 1188 per 100,000 full-time equivalents (FTEs) per year. The rate varied considerably by year, from a low of 286 per 100,000 FTEs in 2001 and 2007 to 3806 per 100,000 FTEs in 2003. The overall occupational fatality rate over the period 2001-2007 was 164 per 100,000 FTEs per year. These results can aid in targeting the commercial fishing industry for injury prevention strategies and interventions, especially for falls, crushing injuries, and drownings.
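The rates quoted above are simple person-time incidences: events divided by estimated full-time-equivalent years, scaled to 100,000. With a hypothetical denominator of about 2700 FTEs (the abstract does not give its denominator estimates), the 31 fatalities over 7 years reproduce a rate of about 164 per 100,000 FTEs per year:

```python
def rate_per_100k_fte_year(events, fte, years):
    """Incidence rate per 100,000 full-time equivalents per year."""
    return events / (fte * years) * 100_000

# Hypothetical denominator chosen to match the reported fatality rate
print(round(rate_per_100k_fte_year(31, 2700, 7)))  # → 164
```

The wide year-to-year swings in the nonfatal rate (286 to 3806 per 100,000 FTEs) are what such small numerators over small denominators produce.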
Barshi, Immanuel
2016-01-01
Speaking up, i.e., expressing one's concerns, is a critical piece of effective communication. Yet we see many situations in which crew members have concerns and still remain silent. Why would that be the case? How can we assess the risks of speaking up versus the risks of keeping silent? And once we do make up our minds to speak up, how should we go about it? Our workshop aims to answer these questions and to provide us all with practical tools for effective risk assessment and effective speaking-up strategies.
Pedicini, Piernicola; Strigari, Lidia; Benassi, Marcello; Caivano, Rocchina; Fiorentino, Alba; Nappi, Antonio; Salvatore, Marco; Storto, Giovanni
2014-01-01
To increase the efficacy of radiotherapy for non-small cell lung cancer (NSCLC), many schemes of dose fractionation were assessed by a new "toxicity index" (I), which allows one to choose the fractionation schedules that produce less toxic treatments. Thirty-two patients affected by nonresectable NSCLC were treated by standard 3-dimensional conformal radiotherapy (3DCRT) with a strategy of limited treated volume. Computed tomography datasets were employed to replan by simultaneous integrated boost intensity-modulated radiotherapy (IMRT). The dose distributions from the plans were used to test various schemes of dose fractionation, in 3DCRT as well as in IMRT, by transforming the dose-volume histogram (DVH) into a biologically equivalent DVH (BDVH) and by varying the overall treatment time. The BDVHs were obtained through the toxicity index, which was defined for each of the organs at risk (OAR) by a linear-quadratic model keeping an equivalent radiobiological effect on the target volume. The least toxic fractionation consisted of severe/moderate hyperfractionation for the volume including the primary tumor and lymph nodes, followed by hypofractionation for the reduced volume of the primary tumor. The 3DCRT and IMRT plans resulted in 4.7% and 4.3% dose sparing for the spinal cord, respectively, without significant changes in combined-lung toxicity (p < 0.001). Schedules with reduced overall treatment time (accelerated fractionations) led to 12.5% dose sparing for the spinal cord (7.5% in IMRT), 8.3% dose sparing for V20 in the combined lungs (5.5% in IMRT), and also significant dose sparing for all the other OARs (p < 0.001). The toxicity index allows one to choose fractionation schedules with reduced toxicity for all the OARs and an equivalent radiobiological effect for the tumor in 3DCRT as well as in IMRT treatments of NSCLC. Copyright © 2014 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Pedicini, Piernicola, E-mail: ppiern@libero.it [Service of Medical Physics, I.R.C.C.S. Regional Cancer Hospital C.R.O.B, Rionero in Vulture (Italy); Strigari, Lidia [Laboratory of Medical Physics and Expert Systems, Regina Elena National Cancer Institute, Rome (Italy); Benassi, Marcello [Service of Medical Physics, Scientific Institute of Tumours of Romagna I.R.S.T., Meldola (Italy); Caivano, Rocchina [Service of Medical Physics, I.R.C.C.S. Regional Cancer Hospital C.R.O.B, Rionero in Vulture (Italy); Fiorentino, Alba [U.O. of Radiotherapy, I.R.C.C.S. Regional Cancer Hospital C.R.O.B., Rionero in Vulture (Italy); Nappi, Antonio [U.O. of Nuclear Medicine, I.R.C.C.S. Regional Cancer Hospital C.R.O.B., Rionero in Vulture (Italy); Salvatore, Marco [U.O. of Nuclear Medicine, I.R.C.C.S. SDN Foundation, Naples (Italy); Storto, Giovanni [U.O. of Nuclear Medicine, I.R.C.C.S. Regional Cancer Hospital C.R.O.B., Rionero in Vulture (Italy)
2014-04-01
To increase the efficacy of radiotherapy for non-small cell lung cancer (NSCLC), many schemes of dose fractionation were assessed by a new "toxicity index" (I), which allows one to choose the fractionation schedules that produce less toxic treatments. Thirty-two patients affected by nonresectable NSCLC were treated by standard 3-dimensional conformal radiotherapy (3DCRT) with a strategy of limited treated volume. Computed tomography datasets were employed to replan by simultaneous integrated boost intensity-modulated radiotherapy (IMRT). The dose distributions from the plans were used to test various schemes of dose fractionation, in 3DCRT as well as in IMRT, by transforming the dose-volume histogram (DVH) into a biologically equivalent DVH (BDVH) and by varying the overall treatment time. The BDVHs were obtained through the toxicity index, which was defined for each of the organs at risk (OAR) by a linear-quadratic model keeping an equivalent radiobiological effect on the target volume. The least toxic fractionation consisted of severe/moderate hyperfractionation for the volume including the primary tumor and lymph nodes, followed by hypofractionation for the reduced volume of the primary tumor. The 3DCRT and IMRT plans resulted in 4.7% and 4.3% dose sparing for the spinal cord, respectively, without significant changes in combined-lung toxicity (p < 0.001). Schedules with reduced overall treatment time (accelerated fractionations) led to 12.5% dose sparing for the spinal cord (7.5% in IMRT), 8.3% dose sparing for V20 in the combined lungs (5.5% in IMRT), and also significant dose sparing for all the other OARs (p < 0.001). The toxicity index allows one to choose fractionation schedules with reduced toxicity for all the OARs and an equivalent radiobiological effect for the tumor in 3DCRT as well as in IMRT treatments of NSCLC.
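The toxicity index itself is the authors' construction, but it rests on standard linear-quadratic (LQ) arithmetic; a minimal sketch of the underlying BED/EQD2 quantities (the alpha/beta value and the two schedules below are illustrative assumptions, not the paper's):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n fractions of d Gy each,
    from the linear-quadratic model: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

def eqd2(n, d, alpha_beta):
    """Equivalent total dose delivered in standard 2-Gy fractions."""
    return bed(n, d, alpha_beta) / (1 + 2 / alpha_beta)

# Same 60 Gy physical dose, conventional vs hypofractionated, for a
# late-responding tissue with an assumed alpha/beta = 3 Gy:
print(round(bed(30, 2.0, 3.0), 1), round(bed(15, 4.0, 3.0), 1))  # 100.0 140.0
```

The larger BED of the hypofractionated schedule for low-alpha/beta tissue is exactly why fraction size matters for organs at risk.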
National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...
NIAAA Alcohol Calorie Calculator: an online tool (among NIAAA's Special Features calculators) that tallies a weekly calorie total from alcoholic drinks.
Directory of Open Access Journals (Sweden)
Dylan Collins
2017-03-01
The World Health Organisation and International Society of Hypertension (WHO/ISH) cardiovascular disease (CVD) risk assessment charts have been implemented in many low- and middle-income countries as part of the WHO Package of Essential Non-Communicable Disease (PEN) Interventions for Primary Health Care in Low-Resource Settings. Evaluation of the WHO/ISH cardiovascular risk charts and their use is a key priority, and since they existed only in paper or PDF formats, we developed an R implementation of the charts for all epidemiological subregions of the world. The main strengths of this implementation are that it is built in a free, open-source coding language with simple syntax, can be downloaded from GitHub as a package ("whoishRisk"), and can be used on a standard computer.
2016-06-10
Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined.
Williams, David E.
1981-01-01
This short quiz for teachers is intended to help them brush up on their calculator operating skills and to prepare for the types of questions their students will ask about calculator idiosyncrasies. (SJL)
Bahr, Patrick; Hutton, Graham
2015-01-01
In this article we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language f...
Autistic Savant Calendar Calculators.
Patti, Paul J.
This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these…
Threlfall, John
2002-01-01
Suggests that strategy choice is a misleading characterization of efficient mental calculation and that teaching mental calculation methods as a whole is not conducive to flexibility. Proposes an alternative in which calculation is thought of as an interaction between noticing and knowledge. Presents an associated teaching approach to promote…
Directory of Open Access Journals (Sweden)
Haraldo Claus-Hermberg
2009-10-01
nature of the proposed endpoint, a new calculator has been proposed: the Fracture Risk Assessment Tool (FRAX™), which follows the same objectives as previous models but integrates and combines several of those factors according to their relative weight. It can estimate the absolute risk of hip fracture (or of a combination of osteoporotic fractures) for the following 10 years. The calculator can be adapted for use in any country by incorporating that country's hip fracture incidence and age- and sex-adjusted life expectancy. This instrument has been presented as a new paradigm to assist in clinical and therapeutic decision-making. In the present review some of its characteristics are discussed, such as: the purported applicability to different populations, the convenience of using 10-year absolute fracture risk for the whole age range under consideration, and whether pharmacological treatment for the prevention of bone fractures in osteoporotic patients can be expected to be equally effective among patients selected for treatment on the basis of this model. Finally, we would like to call attention to the fact that risk thresholds for intervention are not yet clearly defined; those thresholds can obviously be expected to have a profound impact on the number of patients amenable to treatment.
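FRAX reports a 10-year absolute risk that accounts for competing mortality; a minimal sketch of that idea with constant hazards (the hazard values are placeholders, and FRAX's actual model is far more elaborate):

```python
import math

def ten_year_absolute_risk(fracture_hazard, mortality_hazard, years=10.0):
    """Cumulative incidence of fracture with death as a competing risk,
    assuming both annual hazards are constant over the horizon."""
    total = fracture_hazard + mortality_hazard
    return fracture_hazard / total * (1.0 - math.exp(-total * years))

# Placeholder hazards: 1%/yr fracture, 2%/yr all-cause mortality.
# Competing mortality pulls the absolute risk below the naive
# 1 - exp(-0.01 * 10) that ignores death.
print(round(ten_year_absolute_risk(0.01, 0.02), 4))
```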
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment
1998-03-01
In materials testing reactors like the JMTR (Japan Materials Testing Reactor, 50 MW) of the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and neutron energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. To advance core calculation in the JMTR, the application of MCNP to the assessment of core reactivity, neutron flux, and spectra has been investigated. In this study, in order to reduce calculation time and variance, results obtained with the K-code (criticality) mode were compared with those from fixed-source calculations, and the use of weight windows was investigated. As to the calculation method, the modeling of the whole JMTR core, the conditions for the calculation, and the adopted variance reduction techniques are explained, and the results of the calculations are shown. No significant difference was observed in the calculated neutron fluxes arising from the different modeling of the fuel region in the K-code and fixed-source calculations. The method of assessing the results of the neutron flux calculation is also described. (K.I.)
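Weight-window variance reduction in MCNP is considerably more involved, but the basic idea of trading analog sampling for weighted scoring can be sketched on a toy 1-D, purely absorbing slab (this is schematic, not MCNP's algorithm):

```python
import math, random

random.seed(1)

def analog_transmission(sigma_t, thickness, n):
    """Analog Monte Carlo: sample a free path from the exponential
    distribution; the particle either crosses the slab or is absorbed."""
    hits = sum(1 for _ in range(n)
               if -math.log(random.random()) / sigma_t > thickness)
    return hits / n

def weighted_transmission(sigma_t, thickness, n):
    """Weighted estimate: every history scores the survival weight
    exp(-sigma_t * x) instead of a 0/1 outcome. For this degenerate toy
    case the weight is analytic, so the variance collapses to zero."""
    return sum(math.exp(-sigma_t * thickness) for _ in range(n)) / n

# sigma_t = 2 /cm, 3 cm slab: transmission exp(-6) ~ 2.5e-3, a "deep
# penetration" tally where analog sampling wastes most histories.
exact = math.exp(-2.0 * 3.0)
print(exact, analog_transmission(2.0, 3.0, 100_000), weighted_transmission(2.0, 3.0, 10))
```

Real weight windows split and roulette particles to keep weights inside bounds per phase-space cell; the toy above only shows why weighted scoring needs far fewer histories for the same variance.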
Energy Technology Data Exchange (ETDEWEB)
Kmetyk, L.N.; Brown, T.D. [Sandia National Labs., Albuquerque, NM (United States)
1995-03-01
To gain a better understanding of the risk significance of low power and shutdown modes of operation, the Office of Nuclear Regulatory Research at the NRC established programs to investigate the likelihood and severity of postulated accidents that could occur during low power and shutdown (LP&S) modes of operation at commercial nuclear power plants. To investigate the likelihood of severe core damage accidents during off-power conditions, probabilistic risk assessments (PRAs) were performed for two nuclear plants: Unit 1 of the Grand Gulf Nuclear Station, which is a BWR-6 Mark III boiling water reactor (BWR), and Unit 1 of the Surry Power Station, which is a three-loop, subatmospheric, pressurized water reactor (PWR). The analysis of the BWR was conducted at Sandia National Laboratories, while the analysis of the PWR was performed at Brookhaven National Laboratory. This multi-volume report presents and discusses the results of the BWR analysis. This part presents the deterministic code calculations, performed with the MELCOR code, that were used to support the development and quantification of the PRA models. The background for the work documented in this report is summarized, including how deterministic codes are used in PRAs, why the MELCOR code is used, what the capabilities and features of MELCOR are, and how the code has been used by others in the past. Brief descriptions are given of the Grand Gulf plant and its configuration during LP&S operation and of the MELCOR input model developed for the Grand Gulf plant in its LP&S configuration.
Multiphase flow calculation software
Fincke, James R.
2003-04-15
Multiphase flow calculation software and computer-readable media carrying computer executable instructions for calculating liquid and gas phase mass flow rates of high void fraction multiphase flows. The multiphase flow calculation software employs various given, or experimentally determined, parameters in conjunction with a plurality of pressure differentials of a multiphase flow, preferably supplied by a differential pressure flowmeter or the like, to determine liquid and gas phase mass flow rates of the high void fraction multiphase flows. Embodiments of the multiphase flow calculation software are suitable for use in a variety of applications, including real-time management and control of an object system.
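The patented multiphase correlations are not spelled out in the abstract; as a hedged stand-in, a single-phase, ISO-5167-style orifice relation shows how a mass flow rate follows from one measured pressure differential (the multiphase software combines several such differentials with void-fraction parameters):

```python
import math

def orifice_mass_flow(dp_pa, density, d_orifice_m, d_pipe_m, cd=0.61):
    """Single-phase mass flow (kg/s) through an orifice plate from one
    differential pressure, using the incompressible ISO-5167-style form
    q_m = Cd / sqrt(1 - beta^4) * A * sqrt(2 * dp * rho)."""
    beta = d_orifice_m / d_pipe_m          # diameter ratio
    area = math.pi / 4.0 * d_orifice_m ** 2
    return cd / math.sqrt(1.0 - beta ** 4) * area * math.sqrt(2.0 * dp_pa * density)

# Water (1000 kg/m^3), 25 kPa across a 50 mm orifice in a 100 mm pipe;
# Cd = 0.61 is a typical sharp-edged orifice discharge coefficient.
print(round(orifice_mass_flow(25e3, 1000.0, 0.05, 0.10), 2))  # ~8.75 kg/s
```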
Radar Signature Calculation Facility
Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...
Waste Package Lifting Calculation
Energy Technology Data Exchange (ETDEWEB)
H. Marr
2000-05-11
The objective of this calculation is to evaluate the structural response of the waste package during horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, the naval waste package, the 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel)-short waste package, and the 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, Calculations, is used to develop and document this calculation.
Electrical installation calculations advanced
Kitcher, Christopher
2013-01-01
All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of the electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installatio
Evapotranspiration Calculator Desktop Tool
The Evapotranspiration Calculator estimates evapotranspiration time series data for hydrological and water quality models for the Hydrologic Simulation Program - Fortran (HSPF) and the Stormwater Management Model (SWMM).
Electronics Environmental Benefits Calculator
U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...
Electrical installation calculations basic
Kitcher, Christopher
2013-01-01
All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of the electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. Fo
Chemical calculations and chemicals that might calculate
Barnett, Michael P.
I summarize some applications of symbolic calculation to the evaluation of molecular integrals over Slater orbitals, and discuss some spin-offs of this work that have wider potential. These include the exploration of the mechanized use of analogy. I explain the methods that I use to do this, in relation to mathematical proofs and to modeling step by step processes such as organic syntheses and NMR pulse sequences. Another spin-off relates to biological information processing. Some challenges and opportunities in the information infrastructure of interdisciplinary research are discussed.
Lei, S.; Osborne, P.
2016-12-01
The Scoping of Options and Analyzing Risk (SOAR) model was developed by the U.S. Nuclear Regulatory Commission staff to assist in their evaluation of potential high-level radioactive waste disposal options. It is a 1-D contaminant transport code that contains a biosphere module to calculate mass fluxes and radiation dose to humans. As part of the Canadian Nuclear Safety Commission (CNSC)'s Coordinated Assessment Program to assist with the review of proposals for deep geological repositories (DGRs) for nuclear fuel wastes, CNSC conducted a research project to find out whether SOAR can be used by CNSC staff as an independent scoping tool to assist review of proponents' submissions related to safety assessment for DGRs. In the research, SOAR was applied to the post-closure safety assessment for a hypothetical DGR in sedimentary rock, as described in the 5th Case Study report by the Nuclear Waste Management Organization (NWMO) of Canada (2011). The report contains, among other things, modeling of the transport and releases of radionuclides at various locations within the geosphere and of the radiation dose to humans over a period of one million years. One aspect covered was 1-D modeling of various scenarios and sensitivity cases, with both deterministic and probabilistic approaches, using SYVAC3-CC4, which stands for Systems Variability Analysis Code (generation 3, Canadian Concept generation 4), developed by Atomic Energy of Canada Limited (Kitson et al., 2000). Radionuclide fluxes and radiation doses to humans calculated using SOAR were compared with those from NWMO's modeling. Overall, the results from the two models were similar, although SOAR gave lower mass fluxes and peak dose, mainly due to differences in modeling the waste package configurations. Sensitivity analyses indicate that both models are most sensitive to the diffusion coefficient of the geological media. The research leads to the conclusion that SOAR is a robust, user-friendly, and flexible scoping tool that
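SOAR's governing equations are not given in the abstract; a generic sketch of 1-D advection-dispersion of a decaying pulse illustrates the kind of closed-form solution such transport codes are benchmarked against (all parameter values below are placeholders, not DGR data):

```python
import math

def pulse_concentration(x, t, v, D, lam, m0=1.0):
    """1-D advection-dispersion of an instantaneous unit-mass pulse with
    first-order radioactive decay (infinite domain, constant properties):
    C(x,t) = m0/sqrt(4 pi D t) * exp(-(x - v t)^2 / (4 D t)) * exp(-lam t)."""
    gauss = math.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)
    return m0 * gauss * math.exp(-lam * t)

# Placeholder parameters: v = 1 cm/yr, D = 1e-4 m^2/yr, decay 1e-3 /yr.
# The plume peak travels with the water (x = v*t); decay scales amplitude.
v, D, lam, t = 0.01, 1e-4, 1e-3, 500.0
print(round(pulse_concentration(v * t, t, v, D, lam), 4))  # ~0.765
```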
The Dental Trauma Internet Calculator
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; Lauridsen, Eva Fejerskov; Christensen, Søren Steno Ahrensburg
2012-01-01
Background/Aim Prediction tools are increasingly used to inform patients about their future dental health outcome. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods The Internet risk calculator at the Dental Trauma Guide provides prognoses for teeth with traumatic injuries based on the Copenhagen trauma database: http://www.dentaltraumaguide.org. The database includes 2191 traumatized permanent teeth from 1282 patients who were treated at the dental trauma unit at the University Hospital in Copenhagen (Denmark...
Vedder, Moniek M.; de Bekker-Grob, Esther W.; Lilja, Hans G.; Vickers, Andrew J.; van Leenders, G.J.L.H.; Steyerberg, Ewout W.; Roobol, Monique J.
2015-01-01
Background Prostate-specific antigen (PSA) testing has limited accuracy for the early detection of prostate cancer (PCa). Objective To assess the value added by percentage of free to total PSA (%fPSA), prostate cancer antigen 3 (PCA3), and a kallikrein panel (4k-panel) to the European Randomised Study of Screening for Prostate Cancer (ERSPC) multivariable prediction models: risk calculator (RC) 4, including transrectal ultrasound, and RC 4 plus digital rectal examination (4+DRE) for prescreened men. Design, setting, and participants Participants were invited for rescreening between October 2007 and February 2009 within the Dutch part of the ERSPC study. Biopsies were taken in men with a PSA level ≥3.0 ng/ml or a PCA3 score ≥10. Additional analyses of the 4k-panel were done on serum samples. Outcome measurements and statistical analysis Outcome was defined as PCa detectable by sextant biopsy. Receiver operating characteristic curve and decision curve analyses were performed to compare the predictive capabilities of %fPSA, PCA3, 4k-panel, the ERSPC RCs, and their combinations in logistic regression models. Results and limitations PCa was detected in 119 of 708 men. The %fPSA did not perform better univariately or added to the RCs compared with the RCs alone. In 202 men with an elevated PSA, the 4k-panel discriminated better than PCA3 when modelled univariately (area under the curve [AUC]: 0.78 vs 0.62; p = 0.01). The multivariable models with PCA3 or the 4k-panel were equivalent (AUC: 0.80 for RC 4+DRE). In the total population, PCA3 discriminated better than the 4k-panel (univariate AUC: 0.63 vs 0.56; p = 0.05). There was no statistically significant difference between the multivariable model with PCA3 (AUC: 0.73) versus the model with the 4k-panel (AUC: 0.71; p = 0.18). The multivariable model with PCA3 performed better than the reference model (0.73 vs 0.70; p = 0.02). Decision curves confirmed these patterns, although numbers were small. Conclusions Both PCA3
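The AUC comparisons above can be reproduced mechanically: the AUC equals the Mann-Whitney probability that a randomly chosen case outranks a randomly chosen control. A minimal rank-based sketch with toy marker values (illustrative numbers, not study data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    fraction of (case, control) pairs where the case scores higher,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical marker values for 4 cancers vs 5 non-cancers:
cases, controls = [4.1, 6.0, 7.2, 9.5], [2.0, 3.3, 4.1, 5.0, 8.0]
print(auc(cases, controls))  # 0.775
```

Comparing two markers then reduces to comparing two such numbers on the same subjects, which is what the DeLong-style tests behind the reported p-values formalize.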
DEFF Research Database (Denmark)
Frederiksen, Morten
2014-01-01
Williamson's characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson's argument from the perspective of Løgstrup's phenomenological theory of trust. Contrary to Williamson, however, Løgstrup's contention is that trust, not calculativeness, is the default attitude, and only when suspicion is awoken does trust falter. The paper argues that while Williamson's distinction between calculativeness and trust is supported by phenomenology, the analysis needs to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...
Unit Cost Compendium Calculations
U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...
Magnetic Field Grid Calculator
National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator will compute the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...
National Stormwater Calculator
EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico).
Calculation Tool for Engineering
Lampinen, Samuli
2016-01-01
The study was conducted as qualitative research for K-S Konesuunnittelu Oy. The company provides mechanical engineering for technology suppliers in the Finnish export industries. The main objective was to study whether the competitiveness of the case company could be improved using a self-made calculation tool (Excel Tool). The mission was to clarify processes in the case company, to see the possibilities of the Excel Tool, and to compare it with other potential calculation applications. In addition,...
Current interruption transients calculation
Peelo, David F
2014-01-01
Provides an original, detailed and practical description of current interruption transients, their origins, the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,
Methods for Melting Temperature Calculation
Hong, Qi-Jun
Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
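Widom's particle insertion estimates the excess chemical potential from trial insertions into existing configurations; a minimal hard-sphere sketch (using a random Poisson configuration rather than a real liquid, so the dilute-limit answer is known analytically; the thesis's cavity-sampling improvement is not reproduced here):

```python
import math, random

random.seed(42)

def widom_mu_ex(positions, box, sigma, n_insert, kT=1.0):
    """Widom test-particle insertion for hard spheres of diameter sigma:
    mu_ex = -kT * ln(fraction of trial insertions with no overlap)."""
    accepted = 0
    for _ in range(n_insert):
        tx, ty, tz = (random.uniform(0, box) for _ in range(3))
        ok = True
        for (x, y, z) in positions:
            # minimum-image distances in the periodic cubic box
            dx = min(abs(tx - x), box - abs(tx - x))
            dy = min(abs(ty - y), box - abs(ty - y))
            dz = min(abs(tz - z), box - abs(tz - z))
            if dx * dx + dy * dy + dz * dz < sigma * sigma:
                ok = False
                break
        accepted += ok
    return -kT * math.log(accepted / n_insert)

# 200 randomly placed spheres in a 10^3 box (rho = 0.2). For a Poisson
# configuration mu_ex should be near rho * (4/3) pi sigma^3 ~ 0.84 kT.
box, n_part, sigma = 10.0, 200, 1.0
positions = [tuple(random.uniform(0, box) for _ in range(3)) for _ in range(n_part)]
mu_ex = widom_mu_ex(positions, box, sigma, n_insert=5000)
print(mu_ex)
```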
Wisniewski, H.; Gourdain, P.-A.
2017-10-01
APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins; it also speeds up calculations compared with PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift toward SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as by graduate students and researchers who need a quick reference calculation.
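Several of the quantities such a calculator returns reduce to one-line formulas; a minimal sketch in SI units with temperature in eV, matching the convention described above (the constants are CODATA values; the function set is our illustration, not APOLLO's actual code):

```python
import math

E    = 1.602176634e-19     # elementary charge, C
ME   = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m

def plasma_frequency(n_e):
    """Electron plasma (angular) frequency, rad/s, for density n_e in m^-3."""
    return math.sqrt(n_e * E ** 2 / (EPS0 * ME))

def debye_length(n_e, t_e_ev):
    """Electron Debye length in m; t_e_ev * E converts T[eV] to k_B*T in J."""
    return math.sqrt(EPS0 * t_e_ev * E / (n_e * E ** 2))

def thermal_speed(t_e_ev):
    """Electron thermal speed sqrt(k_B T_e / m_e), m/s."""
    return math.sqrt(t_e_ev * E / ME)

# A 10 eV, 1e18 m^-3 laboratory plasma:
print(f"omega_pe = {plasma_frequency(1e18):.3e} rad/s, "
      f"lambda_D = {debye_length(1e18, 10.0):.3e} m")
```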
LHC Bellows Impedance Calculations
Dyachkov, M
1997-01-01
To compensate for thermal expansion, the LHC ring has to accommodate about 2500 bellows which, together with beam position monitors, are the main contributors to the LHC broad-band impedance budget. In order to reduce this impedance to an acceptable value, the bellows have to be shielded. In this paper we compare different designs proposed for the bellows and calculate their transverse and longitudinal wakefields and impedances. Owing to the 3D geometry of the bellows, the code MAFIA was used for the wakefield calculations; when possible, the MAFIA results were compared to those obtained with ABCI. The results presented in this paper indicate that the latest bellows design, in which shielding is provided by sprung fingers which can slide along the beam screen, has impedances smaller than those previously estimated according to a rather conservative scaling of SSC calculations and LEP measurements. Several failure modes, such as missing fingers and imperfect RF contact, have also been studied.
INVAP's Nuclear Calculation System
Directory of Open Access Journals (Sweden)
Ignacio Mochi
2011-01-01
Since its origins in 1976, INVAP has continuously developed the calculation system used for the design and optimization of nuclear reactors. The calculation codes have been polished and enhanced with new capabilities as these were needed or useful for the new challenges that the market imposed. The current state of the code packages enables INVAP to design nuclear installations with complex geometries using a set of easy-to-use input files that minimize user errors due to confusion or misinterpretation. A set of intuitive graphic postprocessors has also been developed, providing a fast and complete visualization tool for the parameters obtained in the calculations. The capabilities and general characteristics of this deterministic software package are presented throughout the paper, including several examples of its recent application.
Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim
2003-01-01
We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus colli...
Graphing Calculator Mini Course
Karnawat, Sunil R.
1996-01-01
The "Graphing Calculator Mini Course" project provided a mathematically-intensive technologically-based summer enrichment workshop for teachers of American Indian students on the Turtle Mountain Indian Reservation. Eleven such teachers participated in the six-day workshop in summer of 1996 and three Sunday workshops in the academic year. The project aimed to improve science and mathematics education on the reservation by showing teachers effective ways to use high-end graphing calculators as teaching and learning tools in science and mathematics courses at all levels. In particular, the workshop concentrated on applying TI-82's user-friendly features to understand the various mathematical and scientific concepts.
Gravitational constant calculation methodologies
Shakhparonov, V. M.; Karagioz, O. V.; Izmailov, V. P.
2011-01-01
We consider the gravitational constant calculation methodologies for a rectangular block of the torsion balance body presented in the papers Phys. Rev. Lett. 102, 240801 (2009) and Phys. Rev. D 82, 022001 (2010). We have established the influence of non-equilibrium gas flows on the obtained values of G.
PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION
Directory of Open Access Journals (Sweden)
Marian ŢAICU
2014-11-01
Full Text Available Progress in improving production technology requires appropriate measures to achieve an efficient management of costs. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.
Calculation of collisional mixing
Koponen, I.; Hautala, M.
1990-06-01
Collisional mixing of markers is calculated by splitting the calculation into two parts. Relocation cross sections have been calculated using a realistic potential in a Monte Carlo simulation. The cross sections are used in the computation of marker relocation. The cumulative effect of successive relocations is assumed to be an uncorrelated transport process and it is treated as a weighted random walk. Matrix relocation was not included in the calculations. The results from this two-step simulation model are compared with analytical models. A fit to the simulated differential relocation cross sections has been found which makes the numerical integration of the Bothe formula feasible. The influence of primaries has been treated in this way. When all the recoils are included the relocation profiles are nearly Gaussian and the Pearson IV distributions yield acceptable profiles in the studied cases. The approximations and cut-off procedures which cause the major uncertainties in calculations are pointed out. The choice of the cut-off energy is shown to be the source of the largest uncertainty, whereas the mathematical approximations can be used with good accuracy. The methods are used to study the broadening of a Pt marker in Si mixed by 300 keV Xe ions, broadening of a Ti marker in Al mixed by 300 keV Xe ions and broadening of a Ti marker in Hf mixed by 750 keV Kr ions. The fluence in each case is 2 × 10^16 ions/cm^2. The calculated averages of half widths at half maximum vary between 11-18, 9-12 and 10-15 nm, respectively, depending on the cut-off energy, and the mixing efficiencies vary between 11-29, 6-11 and 6-14 Å^5/eV, respectively. The broadenings of Pt in Si and Ti in Al are about two times smaller than the measured values and the broadening of Ti in Hf is in agreement with the measured values.
Microcomputer calculations in physics
Killingbeck, J. P.
1985-01-01
The use of microcomputers to carry out computations in an interactive manner allows the judgement of the operator to be allied with the calculating power of the machine in a synthesis which speeds up the creation and testing of mathematical techniques for physical problems. This advantage is accompanied by a disadvantage, in that microcomputers are limited in capacity and power, and special analysis is needed to compensate for this. These two features together mean that there is a fairly recognisable body of methods which are particularly appropriate for interactive microcomputing. This article surveys a wide range of mathematical methods used in physics, indicating how they can be applied using microcomputers and giving several original calculations which illustrate the value of the microcomputer in stimulating the exploration of new methods. Particular emphasis is given to methods which use iteration, recurrence relation or extrapolation procedures which are well adapted to the capabilities of modern microcomputers.
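As an illustration of the iteration-and-extrapolation style of method the article surveys (a generic sketch, not an example taken from the article itself), Aitken's Δ² process accelerates a slowly convergent sequence using only a short recurrence, well within a small machine's capacity:

```python
def aitken(seq):
    """Aitken delta-squared acceleration of a convergent sequence."""
    out = []
    for n in range(len(seq) - 2):
        s0, s1, s2 = seq[n], seq[n + 1], seq[n + 2]
        denom = s2 - 2 * s1 + s0
        # fall back to the raw term if the denominator vanishes
        out.append(s2 - (s2 - s1) ** 2 / denom if denom else s2)
    return out

# Partial sums of the Leibniz series 4*(1 - 1/3 + 1/5 - ...) -> pi,
# a classic slowly convergent example
partial = []
s = 0.0
for k in range(10):
    s += 4 * (-1) ** k / (2 * k + 1)
    partial.append(s)

accelerated = aitken(partial)
```

With only ten terms, the raw partial sums are still off by roughly 0.1, while the accelerated sequence is orders of magnitude closer to π.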
CONVEYOR FOUNDATIONS CALCULATION
Energy Technology Data Exchange (ETDEWEB)
S. Romanos
1995-03-10
The purpose of these calculations is to design foundations for all conveyor supports for the surface conveyors that transport the muck resulting from the TBM operation, from the belt storage to the muck stockpile. These conveyors consist of: (1) Conveyor W-TO3, from the belt storage, at the starter tunnel, to the transfer tower. (2) Conveyor W-SO1, from the transfer tower to the material stacker, at the muck stockpile.
Calculations in furnace technology
Davies, Clive; Hopkins, DW; Owen, WS
2013-01-01
Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi
Bhatnagar, Shalabh
2017-01-01
Sound is an emerging source of renewable energy, but it has some limitations. The main limitation is that the amount of energy that can be extracted from sound is very small, and that is because of the velocity of the sound. The velocity of sound changes with the medium. If we could increase the velocity of sound in a medium, we would probably be able to extract a larger amount of energy from sound and transfer it at a higher rate. To increase the velocity of sound we should know the speed of sound. According to classical mechanics, speed is the distance travelled by a particle divided by time, whereas velocity is the displacement of a particle divided by time. The speed of sound in dry air at 20 °C (68 °F) is considered to be 343.2 meters per second, and it would not be wrong to say that 343.2 meters per second is the velocity of sound rather than the speed, as it is based on the displacement of the sound, not the total distance the sound wave covered. Sound travels in the form of a mechanical wave, so when calculating the speed of sound the whole path of the wave should be considered, not just the distance traveled by the sound. In this paper I focus on calculating the actual speed of the sound wave, which can help us extract more energy and make sound travel with a faster velocity.
Multilayer optical calculations
Byrnes, Steven J
2016-01-01
When light hits a multilayer planar stack, it is reflected, refracted, and absorbed in a way that can be derived from the Fresnel equations. The analysis is treated in many textbooks, and implemented in many software programs, but certain aspects of it are difficult to find explicitly and consistently worked out in the literature. Here, we derive the formulas underlying the transfer-matrix method of calculating the optical properties of these stacks, including oblique-angle incidence, absorption-vs-position profiles, and ellipsometry parameters. We discuss and explain some strange consequences of the formulas in the situation where the incident and/or final (semi-infinite) medium are absorptive, such as calculating $T>1$ in the absence of gain. We also discuss some implementation details like complex-plane branch cuts. Finally, we derive modified formulas for including one or more "incoherent" layers, i.e. very thick layers in which interference can be neglected. This document was written in conjunction with ...
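The core of the transfer-matrix method the paper derives can be sketched for the simplest case, a single lossless film at normal incidence (a minimal illustration, not the paper's accompanying code, which also handles oblique incidence, absorption, and incoherent layers):

```python
import cmath

def film_reflectance(n0, n1, n2, d, lam):
    """Reflectance of a single homogeneous film (index n1, thickness d)
    between semi-infinite media n0 and n2, at normal incidence and
    vacuum wavelength lam, via the 2x2 characteristic (transfer) matrix."""
    delta = 2 * cmath.pi * n1 * d / lam          # phase thickness of the film
    m11 = cmath.cos(delta)
    m12 = 1j * cmath.sin(delta) / n1
    m21 = 1j * n1 * cmath.sin(delta)
    m22 = cmath.cos(delta)
    B = m11 + m12 * n2                            # [B, C] = M @ [1, n2]
    C = m21 + m22 * n2
    r = (n0 * B - C) / (n0 * B + C)               # amplitude reflection coefficient
    return abs(r) ** 2

# Quarter-wave antireflection coating on glass: R drops to ~0
n_glass = 1.5
n_film = n_glass ** 0.5
lam = 550e-9
R = film_reflectance(1.0, n_film, n_glass, lam / (4 * n_film), lam)
```

A quarter-wave film with n1 = sqrt(n0·n2) makes the two interface reflections cancel, so R vanishes; this is a standard sanity check on any transfer-matrix implementation.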
Clinical calculators in hospital medicine: Availability, classification, and needs.
Dziadzko, Mikhail A; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly
2016-09-01
Clinical calculators are widely used in modern clinical practice, but are not generally applied to electronic health record (EHR) systems. Important barriers to the application of these clinical calculators into existing EHR systems include the need for real-time calculation, human-calculator interaction, and data source requirements. The objective of this study was to identify, classify, and evaluate the use of available clinical calculators for clinicians in the hospital setting. Dedicated online resources with medical calculators and providers of aggregated medical information were queried for readily available clinical calculators. Calculators were mapped by clinical categories, mechanism of calculation, and the goal of calculation. Online statistics from selected Internet resources and clinician opinion were used to assess the use of clinical calculators. One hundred seventy-six readily available calculators in 4 categories, 6 primary specialties, and 40 subspecialties were identified. The goals of calculation included prediction, severity, risk estimation, diagnostic, and decision-making aid. A combination of summation logic with cutoffs or rules was the most frequent mechanism of computation. Combined results, online resources, statistics, and clinician opinion identified 13 most utilized calculators. Although not an exhaustive list, a total of 176 validated calculators were identified, classified, and evaluated for usefulness. Most of these calculators are used for adult patients in the critical care or internal medicine settings. Thirteen of 176 clinical calculators were determined to be useful in our institution. All of these calculators have an interface for manual input. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
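The "summation logic with cutoffs" mechanism the study found most common can be sketched generically; the criteria, weights, and thresholds below are purely illustrative placeholders, not any validated clinical score:

```python
def summation_score(items, cutoffs):
    """Generic 'summation logic with cutoffs' calculator: sum weighted
    criteria, then map the total to a category via descending thresholds."""
    total = sum(items.values())
    for threshold, label in cutoffs:          # cutoffs sorted descending
        if total >= threshold:
            return total, label
    return total, "low risk"

# Hypothetical three-criterion score (names and weights are invented)
patient = {"age_over_65": 1, "abnormal_vitals": 1, "comorbidity": 0}
score, category = summation_score(patient, [(3, "high risk"), (2, "moderate risk")])
```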
Directory of Open Access Journals (Sweden)
MEDAR LUCIAN-ION
2011-12-01
Full Text Available The management of credit institutions must be concerned with identifying the internal and external risks of banking operations, estimating their size and importance, assessing the possibility of their occurrence, and imposing measures for their management. On the one hand, the identification, analysis, and mitigation of banking risks can reduce inconvenient and uneconomical costs and generate income that acts as a shock absorber against reduced profits; on the other hand, ignoring them can lead to losses reflected in decreased profit, thus affecting the bank's performance.
IPC - Isoelectric Point Calculator.
Kozlowski, Lukasz P
2016-10-21
Accurate estimation of the isoelectric point (pI) based on the amino acid sequence is useful for many analytical biochemistry and proteomics techniques such as 2-D polyacrylamide gel electrophoresis, or capillary isoelectric focusing used in combination with high-throughput mass spectrometry. Additionally, pI estimation can be helpful during protein crystallization trials. Here, I present the Isoelectric Point Calculator (IPC), a web service and a standalone program for the accurate estimation of protein and peptide pI using different sets of dissociation constant (pKa) values, including two new computationally optimized pKa sets. According to the presented benchmarks, the newly developed IPC pKa sets outperform previous algorithms by at least 14.9 % for proteins and 0.9 % for peptides (on average, 22.1 % and 59.6 %, respectively), which corresponds to an average error of the pI estimation equal to 0.87 and 0.25 pH units for proteins and peptides, respectively. Moreover, the prediction of pI using the IPC pKa sets leads to fewer outliers, i.e., predictions affected by errors greater than a given threshold. The IPC service is freely available at http://isoelectric.ovh.org. Peptide and protein datasets used in the study and the precalculated pI for the PDB and some of the most frequently used proteomes are available for large-scale analysis and future development. This article was reviewed by Frank Eisenhaber and Zoltán Gáspári.
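The underlying computation is a root-find on the peptide's net charge as a function of pH. A minimal sketch using one generic, approximate pKa set (the values below are illustrative textbook-style constants, not IPC's optimized sets):

```python
POS = {"Nterm": 8.6, "K": 10.8, "R": 12.5, "H": 6.5}   # approximate pKa values
NEG = {"Cterm": 3.6, "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}

def net_charge(seq, ph):
    """Henderson-Hasselbalch net charge of a peptide at a given pH."""
    groups = [("Nterm", 1), ("Cterm", 1)] + [(aa, seq.count(aa)) for aa in "KRHDECY"]
    q = 0.0
    for g, n in groups:
        if n == 0:
            continue
        if g in POS:
            q += n / (1 + 10 ** (ph - POS[g]))      # protonated (positive) fraction
        else:
            q -= n / (1 + 10 ** (NEG[g] - ph))      # deprotonated (negative) fraction
    return q

def isoelectric_point(seq, tol=1e-4):
    """Bisection on pH: net charge is monotonically decreasing in pH."""
    lo, hi = 0.0, 14.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 2)
```

Basic peptides land above pH 7 and acidic ones below, as expected; the accuracy of any such calculator rests almost entirely on the chosen pKa set, which is the point of IPC's optimization.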
The rating reliability calculator
Directory of Open Access Journals (Sweden)
Solomon David J
2004-04-01
Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
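One piece of the program's output is directly reproducible from its description: the Spearman-Brown prophecy step that predicts the reliability of an average of several ratings. A sketch:

```python
def spearman_brown(r_single, k):
    """Predicted reliability of the average of k ratings, given the
    reliability r_single of a single rating (Spearman-Brown prophecy formula)."""
    return k * r_single / (1 + (k - 1) * r_single)
```

For example, a single-rating reliability of 0.5 rises to 2/3 when two ratings are averaged; adding judges always raises the predicted reliability, with diminishing returns.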
Point Defect Calculations in Tungsten
National Research Council Canada - National Science Library
Danilowicz, Ronald
1968-01-01
The vacancy migration energy for tungsten was calculated. The calculated value of 1.73 electron volts, together with experimental data, suggests that vacancies migrate in stage III recovery in tungsten...
relline: Relativistic line profiles calculation
Dauser, Thomas
2015-05-01
relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).
Calculated neutron intensities for SINQ
Energy Technology Data Exchange (ETDEWEB)
Atchison, F
1998-03-01
A fully detailed calculation of the performance of the SINQ neutron source, using the PSI version of the HETC code package, was made in 1996 to provide information useful for source commissioning. Relevant information about the formulation of the problem, cascade analysis and some of the results are presented. Aspects of the techniques used to verify the results are described and discussed together with a limited comparison with earlier results obtained from neutron source design calculations. A favourable comparison between the measured and calculated differential neutron flux in one of the guides gives further indirect evidence that such calculations can give answers close to reality in absolute terms. Due to the complex interaction between the many nuclear (and other) models involved, no quantitative evaluation of the accuracy of the calculational method in general terms can be given. (author) refs., 13 figs., 9 tabs.
Alaska Village Electric Load Calculator
Energy Technology Data Exchange (ETDEWEB)
Devine, M.; Baring-Gould, E. I.
2004-10-01
As part of designing a village electric power system, the present and future electric loads must be defined, including both seasonal and daily usage patterns. However, in many cases, detailed electric load information is not readily available. NREL developed the Alaska Village Electric Load Calculator to help estimate the electricity requirements in a village given basic information about the types of facilities located within the community. The purpose of this report is to explain how the load calculator was developed and to provide instructions on its use so that organizations can then use this model to calculate expected electrical energy usage.
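The calculator's general approach, estimating energy use from counts of facility types, can be sketched as follows; the facility categories, per-facility loads, and peak factor here are invented placeholders, not NREL's actual coefficients:

```python
# Illustrative only: facility types and average loads (kW) are invented,
# not the coefficients used in NREL's calculator.
AVG_LOAD_KW = {"residence": 1.2, "school": 15.0, "clinic": 8.0, "water_plant": 20.0}

def village_load(facilities, peak_factor=2.5):
    """Estimate average and peak community load (kW) from facility counts."""
    avg = sum(AVG_LOAD_KW[f] * n for f, n in facilities.items())
    return avg, avg * peak_factor

avg_kw, peak_kw = village_load({"residence": 60, "school": 1, "clinic": 1, "water_plant": 1})
```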
Practical astronomy with your calculator
Duffett-Smith, Peter
1989-01-01
Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr
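Routines of the kind the book presents reduce cleanly to calculator or computer arithmetic. As a representative example (the standard Gregorian conversion, not necessarily the book's exact routine), the Julian day number that underlies most time calculations:

```python
def julian_day_number(year, month, day):
    """Julian day number of a Gregorian calendar date, using the standard
    integer-arithmetic conversion (the day beginning at noon UT)."""
    a = (14 - month) // 12          # 1 for Jan/Feb, else 0
    y = year + 4800 - a             # shift year so it starts in March
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)
```

The conversion is easily checked against well-known epochs: 2000 January 1 gives 2451545 and the Unix epoch, 1970 January 1, gives 2440588.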
Calculate Your Body Mass Index
An online body mass index (BMI) calculator provided by the National Heart, Lung, and Blood Institute (NHLBI), with links to related healthy-weight tools such as menu plans and portion-distortion resources.
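The calculation behind such a tool is the standard body mass index formula, weight in kilograms divided by the square of height in metres, with the usual adult categories; a minimal sketch:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Standard adult cutoffs
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"
```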
Computer Calculation of Fire Danger
William A. Main
1969-01-01
This paper describes a computer program that calculates National Fire Danger Rating Indexes. Fuel moisture, buildup index, and drying factor are also available. The program is written in FORTRAN and is usable on even the smallest compilers.
Landfill Gas Energy Benefits Calculator
This page contains the LFG Energy Benefits Calculator to estimate direct, avoided, and total greenhouse gas reductions, as well as environmental and energy benefits, for a landfill gas energy project.
Nursing students' mathematic calculation skills.
Rainboth, Lynde; DeMasi, Chris
2006-12-01
This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing students' learning outcomes. During a 4-week student teaching period, a convenience sample of 54 sophomore-level nursing students was required to complete calculation assignments, taught one calculation method, and mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention student group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.
Transfer Area Mechanical Handling Calculation
Energy Technology Data Exchange (ETDEWEB)
B. Dianda
2004-06-23
This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use
Calculation of Rydberg interaction potentials
Weber, Sebastian; Tresp, Christoph; Menke, Henri; Urvoy, Alban; Firstenberg, Ofer; Büchler, Hans Peter; Hofferberth, Sebastian
2017-07-01
The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions to the leading dipole-dipole interaction term are no longer sufficient. In this tutorial, we review all relevant aspects of the full calculation of Rydberg interaction potentials. We discuss the derivation of the interaction Hamiltonian from the electrostatic multipole expansion, numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source.
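The tutorial's central operation, diagonalization of the pair-interaction Hamiltonian, can be illustrated with a deliberately minimal two-channel model whose 2x2 Hamiltonian diagonalizes in closed form (all quantities in arbitrary units; a real Rydberg calculation involves thousands of pair states and higher multipole terms):

```python
import math

def pair_potential(R, c3, delta):
    """Lower eigenvalue of the minimal two-channel pair Hamiltonian
    H = [[0, V], [V, delta]], where V = c3 / R**3 is the dipole-dipole
    coupling to a single pair state offset by the Forster defect delta."""
    V = c3 / R ** 3
    return (delta - math.sqrt(delta ** 2 + 4 * V ** 2)) / 2

# At large distance the exact eigenvalue reduces to the perturbative
# van der Waals form -C6/R^6 with C6 = c3**2 / delta
c3, delta = 1.0, 10.0
c6 = c3 ** 2 / delta
R = 5.0
E = pair_potential(R, c3, delta)
```

Comparing the exact eigenvalue with the perturbative tail shows directly why perturbation theory fails at short range, where V becomes comparable to the defect, which is the regime the tutorial's full diagonalization addresses.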
A Romberg Integral Spreadsheet Calculator
Directory of Open Access Journals (Sweden)
Kim Gaik Tay
2015-04-01
Full Text Available Motivated by the work of Richardson's extrapolation spreadsheet calculator up to level 4 to approximate definite differentiation, we have developed a Romberg integral spreadsheet calculator to approximate definite integrals. The main feature of this version of the spreadsheet calculator is a friendly graphical user interface developed to capture the information needed to solve the integral by the Romberg method. Users simply need to enter the variable in the integral, the function to be integrated, the lower and upper limits of the integral, select the desired accuracy of computation, select the exact function if it exists, and lastly click the Compute button, which is associated with VBA programming written to compute the Romberg integral table. The full solution of the Romberg integral table up to any level can be obtained quickly and easily using this method. The attached spreadsheet calculator together with this paper helps educators to prepare their marking schemes easily and assists students in checking their answers instead of reconstructing the answers from scratch. A summative evaluation of this Romberg spreadsheet calculator has been conducted involving a sample of 36 students. The data were collected using a questionnaire. The findings showed that the majority of the students agreed that the Romberg spreadsheet calculator provides a structured learning environment that allows learners to be guided through a step-by-step solution.
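The table the spreadsheet builds follows the standard Romberg recurrence: halve the trapezoid step down the first column, then Richardson-extrapolate across each row. A compact sketch of the same computation:

```python
def romberg(f, a, b, levels=5):
    """Romberg integration of f over [a, b]: trapezoid estimates in the
    first column, Richardson extrapolation across rows. Returns the
    bottom-right (most refined) entry of the triangular table."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = h * (f(a) + f(b)) / 2
    for i in range(1, levels):
        h /= 2
        # composite trapezoid at step h, reusing the previous level's sum
        R[i][0] = R[i - 1][0] / 2 + h * sum(
            f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]
```

Three levels already integrate a quadratic exactly, mirroring how quickly the spreadsheet's table converges for smooth integrands.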
Recursive calculation of Hansen coefficients
Branham, Richard L., Jr.
1990-06-01
Hansen coefficients are used in expansions of the elliptic motion. Three methods for calculating the coefficients are studied: Tisserand's method, the Von Zeipel-Andoyer (VZA) method with explicit representation of the polynomials required to compute the Hansen coefficients, and the VZA method with the values of the polynomials calculated recursively. The VZA method with explicit polynomials is by far the most rapid, but the tabulation of the polynomials only extends to 12th order in powers of the eccentricity, and unless one has access to the polynomials in machine-readable form their entry is laborious and error-prone. The recursive calculation of the VZA polynomials needed to compute the Hansen coefficients, while slower, is faster than the calculation of the Hansen coefficients by Tisserand's method up to 10th order in the eccentricity, and is still relatively efficient for higher orders. The main advantages of the recursive calculation are the simplicity of the program and the ability to extend the expansions to any order of eccentricity with ease. Because FORTRAN does not implement recursive procedures, this paper used C for all of the calculations. The most important conclusion is recursion's genuine usefulness in scientific computing.
Procedures for Calculating Residential Dehumidification Loads
Energy Technology Data Exchange (ETDEWEB)
Winkler, Jon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Booten, Chuck [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-06-01
Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% dew point (DP) design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
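The ventilation component of the latent load the report discusses follows a standard psychrometric rule of thumb in IP units; a sketch under the usual standard-air assumptions (this is the generic textbook factor, not the report's specific procedure):

```python
def ventilation_latent_load(cfm, w_outdoor_gr, w_indoor_gr):
    """Latent load (Btu/h) of a ventilation airstream, using the common
    rule-of-thumb 0.68 * CFM * (moisture difference in grains/lb).
    The 0.68 factor ~= 60 min/h * 0.075 lb/ft^3 * 1060 Btu/lb / 7000 gr/lb."""
    return 0.68 * cfm * (w_outdoor_gr - w_indoor_gr)
```

For example, 100 CFM of outdoor air at 100 gr/lb against an indoor condition of 65 gr/lb adds roughly 2,380 Btu/h of latent load, independent of the sensible load, which is exactly why efficient envelopes shift the sensible/latent balance.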
Rumpf, Clemens
2016-01-01
Asteroid impacts are a hazard to human populations. A method to assess the impact risk of hazardous asteroids was developed in this work, making use of the universal concept of risk culminating in the Asteroid Risk Mitigation Optimization and Research (ARMOR) tool. Using this tool, the global spatial risk distribution of a threatening asteroid can be calculated and expressed in the units of expected casualties (= fatalities). Risk distribution knowledge enables disaster managers to plan for a...
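The risk metric can be sketched in one line: expected casualties are the impact probability weighted by the exposed population and a lethality fraction in each cell of the impact corridor. The numbers below are purely illustrative, not outputs of ARMOR:

```python
def expected_casualties(impact_probability, populations, lethality):
    """Risk in 'expected casualties': impact probability times, for each
    populated cell in the impact corridor, population times lethality fraction."""
    return impact_probability * sum(p * l for p, l in zip(populations, lethality))

# Illustrative numbers only: two populated cells, invented lethality fractions
risk = expected_casualties(1e-4, [50000, 120000], [0.3, 0.1])
```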
Mordred: a molecular descriptor calculator.
Moriwaki, Hirotomo; Tian, Yu-Shi; Kawashita, Norihito; Takagi, Tatsuya
2018-02-06
Molecular descriptors are widely employed to represent molecular characteristics in cheminformatics. Various molecular-descriptor-calculation software programs have been developed. However, users of those programs must contend with several issues, including software bugs, insufficient update frequencies, and software licensing constraints. To address these issues, we propose Mordred, a newly developed descriptor-calculation software application that can calculate more than 1800 two- and three-dimensional descriptors. It is freely available via GitHub. Mordred can be easily installed and used in the command line interface, as a web application, or as a high-flexibility Python package on all major platforms (Windows, Linux, and macOS). Performance benchmark results show that Mordred is at least twice as fast as the well-known PaDEL-Descriptor, and it can calculate descriptors for large molecules, which cannot be accomplished by other software. Owing to its good performance, convenience, number of descriptors, and lax licensing constraints, Mordred is a promising choice of molecular-descriptor-calculation software for cheminformatics studies, such as those on quantitative structure-property relationships.
SEECAL: Program to calculate age-dependent
Energy Technology Data Exchange (ETDEWEB)
Cristy, M.; Eckerman, K.F.
1993-12-01
This report describes the computer program SEECAL, which calculates specific effective energies (SEE) to specified target regions for ages newborn, 1 y, 5 y, 10 y, 15 y, a 70-kg adult male, and a 58-kg adult female. The dosimetric methodology is that of the International Commission on Radiological Protection (ICRP) and is generally consistent with the schema of the Medical Internal Radiation Dose committee of the US Society of Nuclear Medicine. Computation of SEEs is necessary in the computation of equivalent dose rate in a target region, for occupational or public exposure to radionuclides taken into the body. Program SEECAL replaces the program SEE that was previously used by the Dosimetry Research Group at Oak Ridge National Laboratory. The program SEE was used in the dosimetric calculations for occupational exposures for ICRP Publication 30 and is limited to adults. SEECAL was used to generate age-dependent SEEs for ICRP Publication 56, Part 1. SEECAL is also incorporated into DCAL, a radiation dose and risk calculational system being developed for the Environmental Protection Agency. Electronic copies of the program and data files and this report are available from the Radiation Shielding Information Center at Oak Ridge National Laboratory.
Precision Calculations in Supersymmetric Theories
Directory of Open Access Journals (Sweden)
L. Mihaila
2013-01-01
Full Text Available In this paper we report on the newest developments in precision calculations in supersymmetric theories. An important issue related to this topic is the construction of a regularization scheme preserving simultaneously gauge invariance and supersymmetry. In this context, we discuss in detail dimensional reduction in component field formalism as it is currently the preferred framework employed in the literature. Furthermore, we set special emphasis on the application of multi-loop calculations to the analysis of gauge coupling unification, the prediction of the lightest Higgs boson mass, and the computation of the hadronic Higgs production and decay rates in supersymmetric models. Such precise theoretical calculations up to the fourth order in perturbation theory are required in order to cope with the expected experimental accuracy on the one hand and to enable us to distinguish between the predictions of the Standard Model and those of supersymmetric theories on the other hand.
Energy Technology Data Exchange (ETDEWEB)
Uruena Llinares, A.; Santos Rubio, A.; Luis Simon, F. J.; Sanchez Carmona, G.; Herrador Cordoba, M.
2006-07-01
The objective of this paper is to compare, for thirty lung cancer treatments, the absorbed doses in organs at risk and target volumes obtained with the two calculation algorithms of our treatment planning system Oncentra Masterplan: Pencil Beam vs. Collapsed Cone. For this we use a set of measured indicators (D1 and D99 of the tumor volume, V20 of the lung, the homogeneity index defined as (D5-D95)/D prescribed, and others). Analysing the data with descriptive statistics and applying the non-parametric Wilcoxon signed-rank test, we find that the Pencil Beam algorithm underestimates the dose in the zone of the PTV that includes low-density regions, as well as the maximum dose values in the spinal cord. We therefore conclude that the Collapsed Cone algorithm must be used systematically in treatments in which the spinal cord dose is near the maximum permissible limit, or in which the PTV includes pulmonary tissue; in any case, an analysis must be made to choose between computation time and precision for the two algorithms. (Authors)
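As a small illustration of one of the indicators named above, the homogeneity index (D5-D95)/D prescribed can be computed from a dose distribution as sketched below; the voxel doses are invented for the example:

```python
import math

# Homogeneity index HI = (D5 - D95) / D_prescribed computed from a set of
# voxel doses; D5 (D95) is the dose covering the hottest 5% (95%) of the
# volume. The dose values are illustrative, not data from the study.

def percentile_dose(doses, p):
    """Dose covering p percent of the volume (sorted-lookup approximation)."""
    s = sorted(doses, reverse=True)
    idx = max(0, math.ceil(p / 100.0 * len(s)) - 1)
    return s[idx]

def homogeneity_index(doses, d_prescribed):
    d5 = percentile_dose(doses, 5)     # near-maximum dose
    d95 = percentile_dose(doses, 95)   # near-minimum dose
    return (d5 - d95) / d_prescribed

doses = [58.0, 59.0, 59.5, 60.0, 60.2, 60.5, 60.8, 61.0, 61.5, 62.0]
hi = homogeneity_index(doses, 60.0)    # smaller HI = more homogeneous
```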
Rochkind, Jonathan
2008-01-01
Deciding whether to go with a particular open source product is an exercise in risk management: understanding the risks of one's possible actions and choices, calculating when a certain level of risk is appropriate to reach a desired outcome, and planning for how to handle negative outcomes. To be sure, opting to go with a particular proprietary…
Directory of Open Access Journals (Sweden)
Montserrat Hernández Solís
2014-07-01
Full Text Available A common practice of insurance companies is to modify the instantaneous mortality rates when applying the net premium principle, in order to cope with unfavorable deviations in claims. This paper provides a mathematical answer to this question through the application of Wang's power distortion function. Both the net premium principle and Wang's distortion function are coherent risk measures, the latter being applied here for the first time to the field of life insurance. Using the Gompertz and Makeham laws, we first calculate the premium at a general level; in a second part, the premium calculation principle based on Wang's power distortion function is applied to calculate the loading on the adjusted risk premium. The single risk premium has been applied to a form of survival insurance coverage, the life annuity. The main conclusion that can be drawn is that, by using the distortion function, the new instantaneous mortality rate is directly proportional to a multiple, which is precisely the exponent of this function, and this makes the longevity risk greater. This is the reason why the adjusted risk premium is higher than the net premium.
Directory of Open Access Journals (Sweden)
Montserrat Hernández Solís
2013-12-01
Full Text Available A common practice of insurance companies is to modify the instantaneous mortality rates when applying the net premium principle, in order to cope with unfavorable deviations in claims. This paper provides a mathematical answer to this question through the application of Wang's power distortion function. Both the net premium principle and Wang's distortion function are coherent risk measures, the latter being applied here for the first time to the field of life insurance. Using the Gompertz and Makeham laws, we first calculate the premium at a general level; in a second part, the premium calculation principle based on Wang's power distortion function is applied to calculate the loading on the adjusted risk premium. The single risk premium has been applied to a form of survival insurance coverage, the life annuity. The main conclusion that can be drawn is that, by using the distortion function, the new instantaneous mortality rate is directly proportional to a multiple, which is precisely the exponent of this function, and this makes the longevity risk greater. This is the reason why the adjusted risk premium is higher than the net premium.
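A minimal numerical sketch of the pricing idea in this abstract follows. It assumes a Gompertz mortality law and uses the fact that applying Wang's power distortion g(u) = u**k to the survival function multiplies the instantaneous mortality rate by k; for an annuity, an exponent k < 1 raises the adjusted survival probabilities and hence the premium. All parameter values are invented for the illustration, not taken from the paper:

```python
import math

# Single premium of an n-year annuity-due of 1 per year under a Gompertz
# law, with Wang power distortion S(t) -> S(t)**k of the survival function
# (k = 1 recovers the net premium). Parameters are illustrative assumptions.

def gompertz_survival(x, t, b=0.0003, c=1.07):
    """P(alive at x+t | alive at x) for hazard mu(x) = b * c**x."""
    return math.exp(-b * c**x * (c**t - 1.0) / math.log(c))

def annuity_single_premium(x, n, i=0.03, k=1.0):
    """Expected discounted value of survival-contingent payments at 0..n-1."""
    v = 1.0 / (1.0 + i)                           # annual discount factor
    return sum(v**t * gompertz_survival(x, t)**k for t in range(n))

net = annuity_single_premium(65, 20)              # net premium (k = 1)
loaded = annuity_single_premium(65, 20, k=0.9)    # longevity-loaded premium
```

Since 0 < S(t) < 1 for t > 0, S(t)**0.9 > S(t), so the loaded premium exceeds the net premium, matching the paper's conclusion.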
Monte Carlo calculations for HTRs
Energy Technology Data Exchange (ETDEWEB)
Hogenbirk, A. [ECN Nuclear Research, Petten (Netherlands)
1998-09-01
From a neutronics point of view, pebble-bed HTRs are completely different from standard LWRs. The most important differences are to be found in the reactor geometry, the properties of the moderator (graphite instead of water) and the self-shielding of the fuel regions. Therefore, computer packages normally used for core analyses should be validated with experimental data before they can be used for HTR analyses. This especially holds for deterministic computer codes, in which approximations are made that may not be valid in pebble-bed HTRs. Monte Carlo codes, which are based more closely on first principles, suffer much less from this problem. In the late 1980s an IAEA Coordinated Research Programme (CRP) was started in order to study small- and medium-sized LEU-HTR systems. This CRP was mainly directed at the effects of water ingress and neutron streaming. The PROTEUS facility at the Paul Scherrer Institute (PSI) in Villigen, Switzerland, played a central role in this CRP. Benchmark-quality measurements were provided in clean, easy-to-interpret critical configurations using pebble-type fuel. ECN in Petten, Netherlands, contributed to the CRP by performing deterministic reactor calculations with the WIMS code system. However, a need was felt for reference calculations in which as few approximations as possible were made. These analyses were performed with the Monte Carlo code MCNP-4A. In this contribution the results of the main MCNP calculations are given. In these analyses a detailed model of the PROTEUS experimental set-up was used, together with high-quality continuous-energy cross-section data. Attention was focused on the calculation of the value of k{sub eff} and of streaming effects in the pebble-bed core. 15 refs.
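The flavour of a Monte Carlo criticality estimate can be conveyed with a toy example, far simpler than an MCNP analysis: an analog estimate of k-infinity in an infinite homogeneous medium, where each absorbed neutron causes fission with probability sigma_f/sigma_a. The cross sections and nu below are invented for the sketch, not real HTR data:

```python
import random

# Analog Monte Carlo estimate of k_inf = nu * sigma_f / sigma_a in an
# infinite homogeneous medium: sample the fate of each absorbed neutron
# and average the number of fission neutrons produced. The macroscopic
# cross sections and nu are illustrative assumptions.

def k_inf_estimate(n_histories, sigma_f=0.05, sigma_c=0.03, nu=2.4, seed=1):
    random.seed(seed)
    sigma_a = sigma_f + sigma_c          # absorption = fission + capture
    produced = 0.0
    for _ in range(n_histories):
        if random.random() < sigma_f / sigma_a:   # absorption ends in fission
            produced += nu
    return produced / n_histories

k = k_inf_estimate(200_000)
# analytic value: 2.4 * 0.05 / 0.08 = 1.5, recovered within statistics
```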
Friction and wear calculation methods
Kragelsky, I V; Kombalov, V S
1981-01-01
Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is a
Algorithmes Efficaces en Calcul Formel
Bostan, Alin; Chyzak, Frédéric; Giusti, Marc; Lebreton, Romain; Lecerf, Grégoire; Salvy, Bruno; Schost, Eric
2017-01-01
See the book's page at \url{https://hal.archives-ouvertes.fr/AECF/}; International audience; Computer algebra deals with exact mathematical objects from a computational point of view. The book "Algorithmes efficaces en calcul formel" explores two directions: computability and complexity. Computability studies the classes of mathematical objects for which answers can be obtained algorithmically. Complexity then provides tools for comparing algo...
Ab initio calculations of biomolecules
Leś, Andrzej; Adamowicz, Ludwik
1995-08-01
Ab initio quantum mechanical calculations are valuable tools for the interpretation and elucidation of elemental processes in biochemical systems. With the ab initio approach one can calculate data that are sometimes difficult to obtain by experimental techniques. The most popular computational theoretical methods include the Hartree-Fock method as well as some lower-level variational and perturbational post-Hartree-Fock approaches, which allow one to predict molecular structures and to calculate spectral properties. We have been involved in a number of joint theoretical and experimental studies in the past, and some examples of these studies are given in this presentation. The systems chosen cover a wide variety of simple biomolecules, such as precursors of nucleic acids, double-proton-transferring molecules, and simple systems involved in processes related to the first stages of substrate-enzyme interactions. In particular, examples of some ab initio calculations used in the assignment of IR spectra of matrix-isolated pyrimidine nucleic bases are shown. Some radiation-induced transformations in model chromophores are also presented. Lastly, we demonstrate how the ab initio approach can be used to determine the first several steps of the molecular mechanism of thymidylate synthase inhibition by dUMP analogues.
Dead reckoning calculating without instruments
Doerfler, Ronald W
1993-01-01
No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner
CALCULATION OF PHYSISORPTION ENERGIES OF
African Journals Online (AJOL)
o-Fe:Os (I) SURFACE USING A CRYSTAL FIELD CLUSTER MODEL. A. Uzairu ... is of considerable interest in industry and theoretical calculations of ... This choice of cluster naturally assumes an oxygen vacancy at the physisorption site and the adoption of at least three top Llli planes of atomic layers as the surface region.
QCD calculations for jet substructure
Dasgupta, Mrinal; Salam, Gavin P.
2014-01-01
We present results on novel analytic calculations to describe invariant mass distributions of QCD jets with three substructure algorithms: trimming, pruning and the mass-drop taggers. These results not only lead to considerable insight into the behaviour of these tools, but also show how they can be improved. As an example, we discuss the remarkable properties of the modified mass-drop tagger.
Affect and Graphing Calculator Use
McCulloch, Allison W.
2011-01-01
This article reports on a qualitative study of six high school calculus students designed to build an understanding about the affect associated with graphing calculator use in independent situations. DeBellis and Goldin's (2006) framework for affect as a representational system was used as a lens through which to understand the ways in which…
Algorithm Calculates Cumulative Poisson Distribution
Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.
1992-01-01
Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
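The underflow/overflow problem and the factoring idea can be illustrated with a generic log-domain evaluation of the cumulative Poisson distribution. This is a sketch of the general technique, not the CUMPOIS implementation itself:

```python
import math

# Cumulative Poisson P(X <= k) computed in log space: each term
# exp(-lam) * lam**i / i! is kept as a logarithm, and the largest term is
# factored out before summing, so neither lam**i nor i! is ever formed.

def poisson_cdf(k, lam):
    log_terms = [-lam + i * math.log(lam) - math.lgamma(i + 1)
                 for i in range(k + 1)]
    m = max(log_terms)                    # factor out the dominant term
    return math.exp(m) * sum(math.exp(t - m) for t in log_terms)

p = poisson_cdf(5, 2.0)            # moderate case, ~0.9834
p_big = poisson_cdf(1000, 1000.0)  # parameters that overflow a naive sum
```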
Heat transfer, insulation calculations simplified
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, V.
1985-08-19
Determination of heat transfer coefficients for air, water, and steam flowing in tubes and calculation of heat loss through multilayered insulated surfaces have been simplified by two computer programs. The programs, written in BASIC, have been developed for the IBM and equivalent personal computers.
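The multilayer heat-loss part of such a calculation reduces to a sum of series thermal resistances; a minimal sketch for a flat wall, with invented layer data, is:

```python
# Heat loss per unit area through a multilayer flat wall using series
# conduction resistances: q = (T_hot - T_cold) / sum(t_i / k_i).
# Layer thicknesses and conductivities are illustrative assumptions.

def heat_loss_per_area(t_hot, t_cold, layers):
    """layers: iterable of (thickness_m, conductivity_W_per_m_K) tuples."""
    resistance = sum(t / k for t, k in layers)   # m^2*K/W
    return (t_hot - t_cold) / resistance         # W/m^2

layers = [(0.05, 0.04),    # 50 mm mineral wool
          (0.002, 45.0)]   # 2 mm steel jacket
q = heat_loss_per_area(300.0, 25.0, layers)      # ~220 W/m^2
```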
Calculating the Number of Tunnels
Li, Fajie; Klette, Reinhard; RuizShulcloper, J; Kropatsch, WG
2008-01-01
This paper considers 2-regions of grid cubes and proposes an algorithm for calculating the number of tunnels of such a region. The graph-theoretical algorithm proceeds layer by layer; a proof of its correctness is provided, and its time complexity is also given.
Monte Carlo calculations of nuclei
Energy Technology Data Exchange (ETDEWEB)
Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.
1997-10-01
Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.
The "Intelligence" of Calendrical Calculators.
Young, R. L.; Nettelbeck, T.
1994-01-01
The strategies of four men with mild mental retardation when performing calendar calculations were investigated. Results suggested that subjects were aware of calendar rules and regularities, including knowledge of the 14 different calendar templates. Their strategies were rigidly applied and relied heavily on memory, with little manipulation of…
Ab Initio Calculations of Oxosulfatovanadates
DEFF Research Database (Denmark)
Frøberg, Torben; Johansen, Helge
1996-01-01
Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with second-order perturbation theory have been used to study the geometry, the electron density, and the electronic spectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stabl...
Assessment of cardiovascular risk.
LENUS (Irish Health Repository)
Cooney, Marie Therese
2010-10-01
Atherosclerotic cardiovascular disease (CVD) is the most common cause of death worldwide. Usually atherosclerosis is caused by the combined effects of multiple risk factors. For this reason, most guidelines on the prevention of CVD stress the assessment of total CVD risk. The most intensive risk factor modification can then be directed towards the individuals who will derive the greatest benefit. To assist the clinician in calculating the effects of these multiple interacting risk factors, a number of risk estimation systems have been developed. This review addresses several issues regarding total CVD risk assessment: Why should total CVD risk be assessed? What risk estimation systems are available? How well do these systems estimate risk? What are the advantages and disadvantages of the current systems? What are the current limitations of risk estimation systems and how can they be resolved? What new developments have occurred in CVD risk estimation?
Distorted Wave Calculations and Applications
Bhatia, A. K.; Fisher, Richard R. (Technical Monitor)
2000-01-01
Physical properties such as temperature and electron density of solar plasma and other astrophysical objects can be inferred from EUV and X-ray emission lines observed from space. These lines are emitted when the higher states of an ion are excited by electron impact and then decay by photon emission. Excitation cross sections are required for the spectroscopic analyses of the observations and various approximations have been used to calculate the scattering functions. One of them which has been widely used is a distorted wave approximation. This approximation, along with its applications to solar observations, is discussed. The Bowen fluorescence mechanism and optical depth effects are also discussed. It is concluded that such calculations are reliable for highly charged ions and for high electron temperatures.
CONTRIBUTION FOR MINING ATMOSPHERE CALCULATION
Directory of Open Access Journals (Sweden)
Franica Trojanović
1989-12-01
Full Text Available Humid air is an unavoidable feature of the mining atmosphere, which plays a significant role in defining the climate conditions as well as the permitted circumstances for normal mining work. Saturated humid air prevents heat conduction from the human body by means of evaporation. Consequently, it is of primary interest in mining practice to establish the relative air humidity by either direct or indirect methods. The percentage of water in the surrounding air may be determined by various procedures, including tables, diagrams or particular calculations, where each technique has its specific advantages and disadvantages. The classical calculation is done according to Sprung's formula, in which case the partial steam pressure should also be taken from the steam table. A new method without the use of diagrams or tables, based on the functional relation of pressure and temperature on the saturation line, is presented here for the first time (the paper is published in Croatian).
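The classical calculation via Sprung's formula mentioned above can be sketched as follows. The Magnus approximation stands in for the steam table here, and its constants, as well as the psychrometric coefficient 0.000662, are standard values assumed for the illustration rather than taken from the paper:

```python
import math

# Relative humidity from dry- and wet-bulb temperatures: Sprung's formula
# e = e_w(t_wet) - 0.000662 * p * (t_dry - t_wet) for the actual vapour
# pressure, with the Magnus approximation replacing the steam table.

def saturation_pressure_hpa(t_c):
    """Magnus approximation for saturation vapour pressure over water."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(t_dry, t_wet, p_hpa=1013.25):
    e_w = saturation_pressure_hpa(t_wet)                 # saturation at wet bulb
    e = e_w - 0.000662 * p_hpa * (t_dry - t_wet)         # Sprung's formula
    return 100.0 * e / saturation_pressure_hpa(t_dry)    # percent

rh = relative_humidity(25.0, 18.0)    # roughly 50% for these readings
```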
Algorithm project weight calculation aircraft
Directory of Open Access Journals (Sweden)
Г. В. Абрамова
2013-07-01
Full Text Available The paper describes the design process of a complex technical object, using the aircraft as an example and information technologies such as CAD/CAM/CAE systems. It presents the basic models of the aircraft that are developed during the design process and that reflect different aspects of its structure and function. The concept of a controlling parametric model for the design of a complex technical object is introduced: a set of initial data for the development of design stations that enables optimal control of the complex technical object at all stages of design using modern computer technology. The paper describes the weight-design process, which is associated with all stages of aircraft development and production, and the use of a scheduling algorithm that allows weight calculations to be organized at the various planning stages and weighing options to be optimized using the available database of formulas and calculation methods.
MARKOV MODELS IN CALCULATING CLV
DECEWICZ, Anna
2015-01-01
The paper presents a method of calculating customer lifetime value and finding the optimal remarketing strategy based on a Markov model with short-term memory of the client's activity. Furthermore, a sensitivity analysis of the optimal strategy is conducted for two types of functional form of the retention rate defining the transition probabilities.
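The CLV computation with such a Markov model can be sketched as an expected discounted profit sum over state-occupancy probabilities. The transition matrix, per-state profit vector and discount factor below are invented for the illustration, not taken from the paper:

```python
# Customer lifetime value with a Markov model of client activity:
# CLV = sum over t of d**t * (p(t) . r), where p(t) = p0 * P**t is the
# state distribution and r the per-state profit. All numbers are assumptions.

def clv(p0, P, r, discount, horizon):
    state = list(p0)
    total = 0.0
    for t in range(horizon):
        total += discount**t * sum(pi * ri for pi, ri in zip(state, r))
        # one transition step: state <- state @ P
        state = [sum(state[i] * P[i][j] for i in range(len(P)))
                 for j in range(len(P[0]))]
    return total

# States: 0 = active, 1 = lapsed, 2 = churned (absorbing, no profit)
P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.0, 0.0, 1.0]]
r = [100.0, 20.0, 0.0]                 # expected profit per period per state
value = clv([1.0, 0.0, 0.0], P, r, discount=0.9, horizon=50)
```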
Parallel plasma fluid turbulence calculations
Leboeuf, J. N.; Carreras, B. A.; Charlton, L. A.; Drake, J. B.; Lynch, V. E.; Newman, D. E.; Sidikman, K. L.; Spong, D. A.
The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.
Calculation of minimum miscibility pressure
Energy Technology Data Exchange (ETDEWEB)
Wang, Y.; Orr, F.M. [Department of Petroleum Engineering, Stanford University, Mitchell Bldg., Room 360, 94305-2220 Stanford, CA (United States)
2000-09-01
A method is described and tested for calculation of minimum miscibility pressure (MMP) that makes use of an analytical theory for one-dimensional, dispersion-free flow of multicomponent mixtures. The theory shows that in a displacement of an oil by a gas with n{sub c} components, the behavior of the displacement is controlled by a sequence of n{sub c}-1 key tie lines. Besides the tie lines that extend through the initial oil and injection gas compositions, there are n{sub c}-3 tie lines, known as crossover tie lines, that can be found from a set of conditions that require the extensions of the appropriate tie lines to intersect each other. The MMP is calculated as the pressure at which one of the key tie lines becomes a tie line of zero length that is tangent to the critical locus. The numerical approach for solving the tie-line intersection equations is described; slim-tube test and compositional simulation data reported in the literature are used to show that the proposed approach can be used to calculate MMP accurately for displacements with an arbitrary number of components present.
Prenatal radiation exposure. Dose calculation; Praenatale Strahlenexposition. Dosisermittlung
Energy Technology Data Exchange (ETDEWEB)
Scharwaechter, C.; Schwartz, C.A.; Haage, P. [University Hospital Witten/Herdecke, Wuppertal (Germany). Dept. of Diagnostic and Interventional Radiology; Roeser, A. [University Hospital Witten/Herdecke, Wuppertal (Germany). Dept. of Radiotherapy and Radio-Oncology
2015-05-15
The unborn child requires special protection. In this context, the indication for an X-ray examination is to be checked critically. If radiation of the lower abdomen including the uterus cannot be avoided, the examination should be postponed until the end of pregnancy or alternative examination techniques should be considered. Under certain circumstances, either accidentally or in unavoidable cases after a thorough risk assessment, radiation exposure of the unborn may take place. In some of these cases an expert radiation hygiene consultation may be required. This consultation should cover the expected risks for the unborn while not unduly alarming the mother or the involved medical staff. For the risk assessment in case of an in-utero X-ray exposure, deterministic damages with a defined threshold dose are distinguished from stochastic damages without a definable threshold dose. The occurrence of deterministic damages depends on the dose and the developmental stage of the unborn at the time of radiation. To calculate the risks of an in-utero radiation exposure, a three-stage concept is commonly applied. Depending on the amount of radiation, the radiation dose is either estimated, roughly calculated using standard tables or, in critical cases, accurately calculated based on the individual event. The complexity of the calculation thereby increases from stage to stage. An estimation based on stage one is easily feasible, whereas calculations based on stages two and especially three are more complex and often necessitate execution by specialists. This article demonstrates in detail the risks for the unborn child pertaining to its developmental phase and explains the three-stage concept as an evaluation scheme. It should be noted that all risk estimations are subject to considerable uncertainties.
Cognitive Reflection Versus Calculation in Decision Making
Directory of Open Access Journals (Sweden)
Aleksandr eSinayev
2015-05-01
Full Text Available Scores on the three-item Cognitive Reflection Test (CRT) have been linked with dual-system theory and normative decision making (Frederick, 2005). In particular, the CRT is thought to measure monitoring of System 1 intuitions such that, if cognitive reflection is high enough, intuitive errors will be detected and the problem will be solved. However, CRT items also require numeric ability to be answered correctly, and it is unclear how much numeric ability vs. cognitive reflection contributes to better decision making. In two studies, CRT responses were used to calculate Cognitive Reflection and numeric ability; a numeracy scale was also administered. Numeric ability, measured on the CRT or the numeracy scale, accounted for the CRT's ability to predict more normative decisions (a subscale of decision-making competence), incentivized measures of impatient and risk-averse choice, and self-reported financial outcomes; Cognitive Reflection contributed no independent predictive power. Results were similar whether the two abilities were modeled (Study 1) or calculated using proportions (Studies 1 and 2). These findings demonstrate numeric ability as a robust predictor of superior decision making across multiple tasks and outcomes. They also indicate that correlations of decision performance with the CRT are insufficient evidence to implicate overriding intuitions in the decision-making biases and outcomes we examined. Numeric ability appears to be the key mechanism instead.
AGING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2004-09-10
The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the "Project Design Criteria Document" (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the "SNF Aging System Description Document" (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current "SNF Aging System Description Document" (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the "Categorization of Event Sequences for License Application" (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as the MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the
Band calculation of lonsdaleite Ge
Chen, Pin-Shiang; Fan, Sheng-Ting; Lan, Huang-Siang; Liu, Chee Wee
2017-01-01
The band structure of Ge in the lonsdaleite phase is calculated using first principles. Lonsdaleite Ge has a direct band gap at the Γ point. For the conduction band, the Γ valley is anisotropic with the low transverse effective mass on the hexagonal plane and the large longitudinal effective mass along the c axis. For the valence band, both heavy-hole and light-hole effective masses are anisotropic at the Γ point. The in-plane electron effective mass also becomes anisotropic under uniaxial tensile strain. The strain response of the heavy-hole mass is opposite to the light hole.
Yet another partial wave calculator
Energy Technology Data Exchange (ETDEWEB)
Greenwald, Daniel; Rauch, Johannes [TUM, Munich (Germany)
2016-07-01
We will present a new C++ library for partial wave analysis: YAP - yet another partial wave calculator. YAP is intended for amplitude analyses of the decays of spin-0 heavy mesons (principally B and D) to multiple (3, 4, etc.) pseudoscalar mesons but is not hard coded for such situations and is flexible enough to handle other decay scenarios. The library allows for both model dependent and model independent analysis methods. We introduce the software, and demonstrate examples for generating Monte Carlo data efficiently, and for analyzing data (both with the aid of the Bayesian Analysis Toolkit).
Entanglement entropy: a perturbative calculation
Energy Technology Data Exchange (ETDEWEB)
Rosenhaus, Vladimir; Smolkin, Michael [Center for Theoretical Physics and Department of Physics,University of California, Berkeley, CA 94720 (United States)
2014-12-31
We provide a framework for a perturbative evaluation of the reduced density matrix. The method is based on a path integral in the analytically continued spacetime. It suggests an alternative to the holographic and ‘standard’ replica trick calculations of entanglement entropy. We implement this method within solvable field theory examples to evaluate leading order corrections induced by small perturbations in the geometry of the background and entangling surface. Our findings are in accord with Solodukhin’s formula for the universal term of entanglement entropy for four dimensional CFTs.
Calculation of confined swirling jets
Chen, C. P.
1986-01-01
Computations of a confined coaxial swirling jet are carried out using a standard two-equation (k-epsilon) model and two modifications of this model based on Richardson-number corrections of the length-scale (epsilon) governing equation. To avoid any uncertainty involved in the setting up of inlet boundary conditions, actual measurements are used at the inlet plane of this calculation domain. The results of the numerical investigation indicate that the k-epsilon model is inadequate for the predictions of confined swirling flows. Although marginal improvement of the flow predictions can be achieved by these two corrections, neither can be judged satisfactory.
Calculation of Rydberg interaction potentials
DEFF Research Database (Denmark)
Weber, Sebastian; Tresp, Christoph; Menke, Henri
2017-01-01
The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence...... for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up...
Electronics reliability calculation and design
Dummer, Geoffrey W A; Hiller, N
1966-01-01
Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea
Digital calculations of engine cycles
Starkman, E S; Taylor, C Fayette
1964-01-01
Digital Calculations of Engine Cycles is a collection of seven papers which were presented before technical meetings of the Society of Automotive Engineers during 1962 and 1963. The papers cover the spectrum of the subject of engine cycle events, ranging from an examination of composition and properties of the working fluid to simulation of the pressure-time events in the combustion chamber. The volume has been organized to present the material in a logical sequence. The first two chapters are concerned with the equilibrium states of the working fluid. These include the concentrations of var
Energy Technology Data Exchange (ETDEWEB)
Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V.E., E-mail: wssantos@ipen.br, E-mail: lpneves@ipen.br, E-mail: aperini@ipen.br, E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Maia, Ana F., E-mail: afmaia@ufs.br [Universidade Federal de Sergipe (UFS), Sao Cristovao, SE (Brazil). Dept. de Fisica
2014-07-01
Cardiac procedures are among the most common procedures in interventional radiology (IR) and can lead to high medical and occupational exposures, as in most cases the procedures are complex and long lasting. In this work, conversion coefficients (CC) for the risk of cancer, normalized by the kerma-area product (KAP), were calculated for the patient, the cardiologist and the nurse using Monte Carlo simulation. The patient and the cardiologist were represented by MESH anthropomorphic simulators, and the nurse by the FASH anthropomorphic phantom. The simulators were incorporated into the Monte Carlo code MCNPX. Two scenarios were created: in the first (1), the lead curtain and suspended protective equipment were not included, and in the second (2) these devices were inserted. The radiographic parameters employed in the Monte Carlo simulations were: tube voltages of 60 kVp and 120 kVp, beam filtration of 3.5 mmAl, and a beam area of 10 x 10 cm{sup 2}. The average values of the CCs over eight projections (in 10{sup -4}/Gy.cm{sup 2}) were 1.2 for the patient, 2.6E-03 (scenario 1) and 4.9E-04 (scenario 2) for the cardiologist, and 5.2E-04 (scenario 1) and 4.0E-04 (scenario 2) for the nurse. The results show a significant reduction in the CCs for the professionals when the lead curtain and suspended protective equipment are employed. The evaluation method used in this work can provide important information on the cancer risk to patients and professionals, and thus improve the protection of workers in cardiac IR procedures.
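The reported coefficients lend themselves to a simple dose-audit sketch. The following hypothetical Python illustration (the function name and workflow are our own, not from the paper) multiplies a measured KAP by the abstract's mean CCs to estimate a per-procedure risk figure:

```python
# Mean conversion coefficients from the abstract, in units of 1e-4 per Gy.cm^2.
CC = {
    "patient": 1.2,
    "cardiologist_unshielded": 2.6e-03,  # scenario 1: no lead curtain/shield
    "cardiologist_shielded": 4.9e-04,    # scenario 2: protective devices in place
    "nurse_unshielded": 5.2e-04,
    "nurse_shielded": 4.0e-04,
}

def cancer_risk(role: str, kap_gy_cm2: float) -> float:
    """Estimated cancer risk for one procedure: CC * KAP."""
    return CC[role] * 1e-4 * kap_gy_cm2

# Example: a procedure delivering a KAP of 50 Gy.cm^2
unshielded = cancer_risk("cardiologist_unshielded", 50.0)
shielded = cancer_risk("cardiologist_shielded", 50.0)
reduction = 1.0 - shielded / unshielded  # roughly 81% lower with shielding
```

The ~81% drop for the cardiologist mirrors the scenario 1 vs. scenario 2 comparison reported above.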
Calculational Tool for Skin Contamination Dose Assessment
Hill, R L
2002-01-01
A spreadsheet calculational tool was developed to automate the calculations performed for dose assessment of skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.
Dissecting Reactor Antineutrino Flux Calculations
Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.
2017-09-01
Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu and 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we quantitatively investigate some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possible origins of the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed-shape assumption used in the conversion method. We observe that including a shape correction of about +6% MeV-1 in the conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out; we conclude that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare-ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.
Calculation of sound propagation in fibrous materials
DEFF Research Database (Denmark)
Tarnow, Viggo
1996-01-01
Calculations of attenuation and velocity of audible sound waves in glass wools are presented. The calculations use only the diameters of the fibres and the mass density of the glass wools as parameters. The calculations are compared with measurements.
Risk measurement with equivalent utility principles
Denuit, M.; Dhaene, J.; Goovaerts, M.; Kaas, R.; Laeven, R.
2006-01-01
Risk measures have been studied for several decades in the actuarial literature, where they appeared under the guise of premium calculation principles. Risk measures and properties that risk measures should satisfy have recently received considerable attention in the financial mathematics
Energy Technology Data Exchange (ETDEWEB)
Park, Jong Min; Park, So Yeon; Kim, Jung In; Kim, Jin Ho [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Wu, Hong Gyun [Dept. of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of)
2015-10-15
Since the eye lens and optic apparatus are small in volume, dose calculation for these organs is more susceptible to the calculation grid size in the treatment planning system (TPS). Moreover, since they are highly radio-sensitive organs, especially the eye lens, they should be considered carefully in radiotherapy. In the treatment of head and neck (H and N) cancer or brain tumors, which generally involves radiation exposure to the eye lens and optic apparatus, intensity-modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT) techniques are frequently used because of the proximity of various radio-sensitive normal organs to the target volumes. Since IMRT and VMAT can deliver the prescription dose to target volumes while minimizing dose to nearby organs at risk (OARs) by generating steep dose gradients near the target volumes, a high dose gradient sometimes occurs near or at the eye lenses and optic apparatus. In this case, the effect of dose calculation resolution on the accuracy of the calculated dose to the eye lens and optic apparatus might be significant. Therefore, the effect of the dose calculation grid size on the accuracy of calculated doses for the eye lens and optic apparatus was investigated in this study. If an inappropriate calculation resolution is applied for dose calculation of the eye lens and optic apparatus, considerable errors can occur due to the volume averaging effect in high dose gradient regions.
Energy Technology Data Exchange (ETDEWEB)
Fung, Jimmy [Los Alamos National Laboratory; Schofield, Sam [LLNL; Shashkov, Mikhail J. [Los Alamos National Laboratory
2012-06-25
We did not run with a 'cylindrically painted region'. However, we did compute two general variants of the original problem: refinement studies where a single zone at each level of refinement contains the entire internal energy at t=0, and a 'finite' energy source which has the same physical dimensions as that for the 91 x 46 mesh but consists of an increasing number of zones with refinement. Nominal mesh resolution: 91 x 46. Other mesh resolutions: 181 x 92 and 361 x 184. Note that this is not identical to the original specification: to maintain symmetry for the 'fixed' energy source, the mesh resolution was adjusted slightly. FLAG Lagrange or full (Eulerian) ALE was used with various options for each simulation. Observation: for either Lagrange or ALE, point or 'fixed' source, the calculations converge in density and pressure with mesh resolution, but not in energy (nor in vorticity).
Dyscalculia and the Calculating Brain.
Rapin, Isabelle
2016-08-01
Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder like dyslexia, attention-deficit disorder, anxiety disorder, visual and spatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more item. This nonverbal approximate number system extends mostly to single digit sets as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers and school agers to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate number system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visual and spatial and visual and verbal networks. It is less strongly left lateralized than language, with approximate number system activation somewhat more right sided and exact number and arithmetic activation more left sided. Maturation and increasing number skill decrease associated widespread non-numerical brain activations that persist in some individuals with dyscalculia, which has no single, universal neurological cause or underlying mechanism in all affected individuals. Copyright © 2016 Elsevier Inc. All rights reserved.
Evaluation of students' knowledge about paediatric dosage calculations.
Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak
2017-09-19
Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated based on the child's age and weight, the risk of error in dosage calculations is increased. In paediatric patients, an overdose prescribed without regard to the child's weight, age and clinical picture may lead to excessive toxicity and mortality, while low doses may delay the treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covers a population consisting of all 148 third-year bachelor's degree students in May 2015. Drug dose calculation questions in exam papers, including 3 open-ended questions on dosage calculation problems addressing 5 variables, were distributed to the students and their responses were evaluated by the researchers. In the evaluation of the data, figures and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and which ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it wrongly and 3.4% left it blank. 69.6% of the students were successful in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers with regard to ml/dzy calculation were correct. Moreover, the students' four-operation skills were assessed, and 68.2% of the students were determined to have found the correct answer. When the relation among the questions on medication was examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimal). This result means that, as dosage calculations are based on decimal values, calculations may be ten times erroneous when the decimal point is placed wrongly. Moreover, it
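The calculation tasks the students faced can be made concrete in code. This is a hypothetical sketch of weight-based dosing, safe-range checking, and infusion-rate conversion; all drug numbers are invented for illustration, not clinical guidance:

```python
def weight_based_dose_mg(weight_kg: float, mg_per_kg: float) -> float:
    """Single dose from a per-kilogram prescription."""
    return weight_kg * mg_per_kg

def within_safe_range(dose_mg, weight_kg, low_mg_per_kg, high_mg_per_kg):
    """Check a prescribed dose against the drug's safe per-kg range."""
    return low_mg_per_kg * weight_kg <= dose_mg <= high_mg_per_kg * weight_kg

def infusion_rate_ml_per_h(dose_mg, concentration_mg_per_ml, duration_h):
    """Volume to draw up, divided by the infusion time."""
    return (dose_mg / concentration_mg_per_ml) / duration_h

# Example: a 12 kg child, 15 mg/kg prescription, safe range 10-20 mg/kg
dose = weight_based_dose_mg(12, 15)          # 180 mg
ok = within_safe_range(dose, 12, 10, 20)     # True
rate = infusion_rate_ml_per_h(180, 50, 0.5)  # 3.6 ml over 0.5 h -> 7.2 ml/h
```

A decimal-point slip in any of these steps scales the result by a factor of ten, which is exactly the error mode the study highlights.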
Smartphone apps for calculating insulin dose: a systematic assessment.
Huckvale, Kit; Adomaviciute, Samanta; Prieto, José Tomás; Leow, Melvin Khee-Shing; Car, Josip
2015-05-06
Medical apps are widely available, increasingly used by patients and clinicians, and are being actively promoted for use in routine care. However, there is little systematic evidence exploring possible risks associated with apps intended for patient use. Because self-medication errors are a recognized source of avoidable harm, apps that affect medication use, such as dose calculators, deserve particular scrutiny. We explored the accuracy and clinical suitability of apps for calculating medication doses, focusing on insulin calculators for patients with diabetes as a representative use case for a prevalent long-term condition. We performed a systematic assessment of all English-language rapid/short-acting insulin dose calculators available for iOS and Android. Searches identified 46 calculators that performed simple mathematical operations using planned carbohydrate intake and measured blood glucose. While 59% (n = 27/46) of apps included a clinical disclaimer, only 30% (n = 14/46) documented the calculation formula. 91% (n = 42/46) lacked numeric input validation, 59% (n = 27/46) allowed calculation when one or more values were missing, 48% (n = 22/46) used ambiguous terminology, 9% (n = 4/46) did not use adequate numeric precision and 4% (n = 2/46) did not store parameters faithfully. 67% (n = 31/46) of apps carried a risk of inappropriate output dose recommendation that either violated basic clinical assumptions (48%, n = 22/46), did not match a stated formula (14%, n = 3/21), or did not correctly update in response to changing user inputs (37%, n = 17/46). Only one app, for iOS, was issue-free according to our criteria. No significant differences were observed in issue prevalence by payment model or platform. The majority of insulin dose calculator apps provide no protection against, and may actively contribute to, incorrect or inappropriate dose recommendations that put current users at risk of both catastrophic overdose and more
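The failure modes listed above (no input validation, calculation with missing values) map directly onto code. Below is a minimal defensive sketch of the generic textbook carbohydrate-ratio plus correction-factor bolus formula; it is not taken from any surveyed app, and real dosing parameters must come from a clinician:

```python
import math

def insulin_bolus_units(carbs_g, glucose_mmol_l, target_mmol_l,
                        carb_ratio_g_per_unit, sensitivity_mmol_per_unit):
    """Meal bolus + correction bolus, with the input validation many apps lack."""
    inputs = {
        "carbs_g": carbs_g,
        "glucose_mmol_l": glucose_mmol_l,
        "target_mmol_l": target_mmol_l,
        "carb_ratio_g_per_unit": carb_ratio_g_per_unit,
        "sensitivity_mmol_per_unit": sensitivity_mmol_per_unit,
    }
    for name, value in inputs.items():
        if not isinstance(value, (int, float)) or math.isnan(value):
            raise ValueError(f"{name} must be a number, got {value!r}")
    if carbs_g < 0:
        raise ValueError("carbohydrate intake cannot be negative")
    if carb_ratio_g_per_unit <= 0 or sensitivity_mmol_per_unit <= 0:
        raise ValueError("carb ratio and sensitivity must be positive")

    meal = carbs_g / carb_ratio_g_per_unit
    correction = max(0.0, (glucose_mmol_l - target_mmol_l) / sensitivity_mmol_per_unit)
    return round(meal + correction, 1)  # round to a deliverable precision

# Example: 60 g carbs, glucose 9.0 vs target 6.0 mmol/L,
# ratio 10 g/unit, sensitivity 2.0 mmol/L per unit
bolus = insulin_bolus_units(60, 9.0, 6.0, 10, 2.0)  # 6.0 + 1.5 = 7.5 units
```

Rejecting missing or non-numeric inputs outright, rather than silently computing with defaults, addresses the two most prevalent issues the assessment found.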
Calculation of plantar pressure time integral, an alternative approach.
Melai, Tom; IJzerman, T Herman; Schaper, Nicolaas C; de Lange, Ton L H; Willems, Paul J B; Meijer, Kenneth; Lieverse, Aloysius G; Savelberg, Hans H C M
2011-07-01
In plantar pressure measurement, both peak pressure and pressure time integral are used as variables to assess plantar loading. However, the pressure time integral shows a high concordance with peak pressure. Many researchers and clinicians use Novel software (Novel GmbH Inc., Munich, Germany), which calculates this variable as the summation of the products of peak pressure and duration per time sample; this is not a genuine integral of pressure over time. Therefore, an alternative calculation method was introduced. The aim of this study was to explore the relevance of this alternative method in different populations. Plantar pressure variables were measured in 76 people with diabetic polyneuropathy, 33 diabetic controls without polyneuropathy and 19 healthy subjects. Peak pressure and pressure time integral were obtained using Novel software. The quotient of the genuine force time integral over the contact area was obtained as the alternative pressure time integral calculation. This new alternative method correlated less with peak pressure than the pressure time integral as calculated by Novel. The two methods differed significantly, and these differences varied between foot sole areas and between groups. The largest differences were found under the metatarsal heads in the group with diabetic polyneuropathy. From a theoretical perspective, the alternative approach provides a more valid calculation of the pressure time integral. In addition, this study showed that the alternative calculation is of added value, alongside peak pressure calculation, for interpreting adapted plantar pressure patterns, in particular in patients at risk of foot ulceration. Copyright © 2011 Elsevier B.V. All rights reserved.
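The difference between the two calculations can be sketched in a few lines. Assuming sampled per-frame peak pressure, total force, and contact area (the variable names and synthetic data are ours, not from the paper or the Novel software):

```python
import numpy as np

def pti_novel_style(peak_pressure_kpa: np.ndarray, dt_s: float) -> float:
    """Summation of per-sample peak pressure times sample duration
    (not a genuine integral of local pressure over time)."""
    return float(np.sum(peak_pressure_kpa) * dt_s)

def pti_alternative(force_n: np.ndarray, contact_area_cm2: float, dt_s: float) -> float:
    """Genuine force-time integral (trapezoidal rule) divided by contact area."""
    fti = float(np.sum(0.5 * (force_n[1:] + force_n[:-1])) * dt_s)
    return fti / contact_area_cm2

# Synthetic stance phase: constant 50 N on 10 cm^2 for 0.5 s (6 samples, dt = 0.1 s)
force = np.full(6, 50.0)
pti_alt = pti_alternative(force, 10.0, 0.1)  # (50 N * 0.5 s) / 10 cm^2 = 2.5
```

Because the first method tracks the per-frame maximum while the second averages force over the whole contact area, the two diverge most where loading is concentrated, consistent with the metatarsal-head findings above.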
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D.; Wang, Weimin; Katipamula, Srinivas
2014-03-31
Over the past two years, the Department of Energy's Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has previously also funded development of the RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Weimin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-07-01
Over the past two years, the Department of Energy's Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has previously also funded development of the RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
Selfconsistent calculations for hyperdeformed nuclei
Energy Technology Data Exchange (ETDEWEB)
Molique, H.; Dobaczewski, J.; Dudek, J.; Luo, W.D. [Universite Louis Pasteur, Strasbourg (France)
1996-12-31
Properties of the hyperdeformed nuclei in the A ~ 170 mass range are re-examined using the self-consistent Hartree-Fock method with the SOP parametrization. A comparison with previous predictions that were based on a non-self-consistent approach is made. The existence of "hyperdeformed shell closures" at the proton and neutron numbers Z=70 and N=100 and their very weak dependence on the rotational frequency is suggested; the corresponding single-particle energy gaps are predicted to play a role similar to that of the Z=66 and N=86 gaps in the superdeformed nuclei of the A ~ 150 mass range. The self-consistent calculations also suggest that the A ~ 170 hyperdeformed structures have negligible mass asymmetry in their shapes. Very importantly for experimental studies, both the fission barriers and the "inner" barriers (which separate the hyperdeformed structures from those with smaller deformations) are predicted to be relatively high, up to a factor of ~2 higher than the corresponding barriers in the {sup 152}Dy superdeformed nucleus used as a reference.
Marek, Repka
2015-01-01
The original McEliece PKC proposal is interesting thanks to its resistance against all known attacks, even using quantum cryptanalysis, in an IND-CCA2 secure conversion. Here we present a generic implementation of the original McEliece PKC proposal, which provides test vectors (for all important intermediate results) and in which a measurement tool for side-channel analysis is employed. To the best of our knowledge, this is the first such implementation. This Calculator is valuable for implementation optimization, for further investigation of the properties of McEliece/Niederreiter-like PKCs, and also for teaching. Thanks to it, one can, for example, examine the side-channel vulnerability of a certain implementation, or find and test particular parameters of the cryptosystem in order to make them appropriate for an efficient hardware implementation. The implementation is available [1] in executable binary format, as a static C++ library, and in the form of source code, for Linux and Windows operating systems.
[IOL power calculation after refractive surgery].
Rabsilber, T M; Auffarth, G U
2010-08-01
Cataract surgery is evolving more and more into a refractive procedure with high expectations in terms of visual rehabilitation. Patients presenting after previous excimer laser corneal surgery in particular are used to being independent of glasses. Unfortunately, some of these patients experienced unexpected hyperopic surprises after cataract surgery in the past. Changes in the corneal radii and keratometer index, as well as inaccurate prediction of the postoperative intraocular lens (IOL) position by different formulas, were identified as error sources leading to an underestimated IOL power in dioptres. Several methods have been proposed to solve this problem; they can be divided into two groups. On the one hand, there are methods that depend on refraction and biometry values obtained before the initial treatment (e.g., the clinical history, Feiz-Mannis, double-K, adjusted effective refractive power [EffRadj], and cornea bypass/Wake Forest methods, as well as correction factors to adjust K-values), and on the other hand procedures that need only current pre-cataract-surgery measurements (e.g., the contact lens method, corneal topography systems, ray tracing, the aphakic refraction technique, correction factors to adjust K-values, and new formulas including Haigis-L or BESSt and, recently, a novel pachymetry method). This review describes these procedures and analyses their strengths and weaknesses. The number of presented methods already emphasises that no perfect solution valid for every patient has been found so far. Some methods do provide good predictability; however, individual deviations can occur. In general, it is advisable to inform the patient about the higher risk of an inaccurate IOL power calculation. It can be helpful to compare the results of different methods, which underlines the importance of the refractive surgeon providing all required individual data in the first place. Georg Thieme Verlag KG Stuttgart, New York.
Multidimensional Risk Analysis: MRISK
McCollum, Raymond; Brown, Douglas; O'Shea, Sarah Beth; Reith, William; Rabulan, Jennifer; Melrose, Graeme
2015-01-01
Multidimensional Risk (MRISK) calculates a combined multidimensional score using the Mahalanobis distance. MRISK accounts for covariance between consequence dimensions, which de-conflicts their interdependencies, providing a clearer depiction of risks. Additionally, in the event the dimensions are not correlated, the Mahalanobis distance reduces to the Euclidean distance normalized by the variance and therefore represents the most flexible and optimal method to combine dimensions. MRISK is currently being used in NASA's Environmentally Responsible Aviation (ERA) project to assess risk and prioritize scarce resources.
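The core computation is compact. A sketch of a Mahalanobis-distance risk score over consequence dimensions (our own illustration, not NASA's implementation), showing the collapse to variance-normalized Euclidean distance when the covariance is diagonal:

```python
import numpy as np

def mrisk_score(consequence: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis distance of a risk's consequence vector from the mean."""
    d = consequence - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Two correlated consequence dimensions (e.g. cost and schedule)
cov = np.array([[4.0, 1.2],
                [1.2, 1.0]])
score = mrisk_score(np.array([3.0, 2.0]), np.zeros(2), cov)

# With a diagonal covariance the same score is just Euclidean distance
# normalized by the per-dimension standard deviations:
diag_score = mrisk_score(np.array([2.0, 3.0]), np.zeros(2),
                         np.diag([4.0, 9.0]))  # sqrt(4/4 + 9/9) = sqrt(2)
```

The off-diagonal term in `cov` is what "de-conflicts" correlated dimensions: a risk that is extreme on two highly correlated axes is not double-counted the way a plain Euclidean score would.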
Three-dimensional calculation and visualization of fault gouge ratio
Energy Technology Data Exchange (ETDEWEB)
Hoffman, Karen S.; Neave, John W. [Dynamic Graphics, Inc., Alameda, CA (United States)
2000-07-01
Understanding the sealing characteristics of faults is critical in assessing the hydrocarbon potential of traps formed by faults. Fault gouge ratio and juxtaposition analysis have often been limited to a single cross section (a two-dimensional approach) or to a single, isolated fault surface (a partial three-dimensional approach). We have now developed a full three-dimensional solution for calculating fault gouge ratio. This method uses a continually varying clay volume fraction, a network of faults (isolated, dying and/or branching), and displacement along the fault surface (instead of just the dip component). The structural model used as the framework for this calculation is based on geometric reconstruction techniques that construct faults and horizons in three-dimensional space, allowing easy and rigorous calculation of juxtaposition and displacement. These last two items are necessary input to the fault gouge ratio calculation. Rigorous calculation of fault gouge ratio depends on a robust structural model. With the model described herein, a variety of scenarios may be investigated, thus incorporating uncertainty into the calculation. Determining whether a fault will act as a seal, or whether there is potential for development of leaks during the production of the reservoir depends on many variables. Minimizing the uncertainty in this analysis may provide increased confidence in assessing risk. (author)
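A common formulation of the quantity being computed is the shale (fault) gouge ratio: the clay fraction of the beds that have slipped past a point on the fault. A hedged one-function sketch is below; the published 3-D method works on a full structural model with true displacement along the fault surface, whereas this toy version uses discrete beds and throw only:

```python
def fault_gouge_ratio(vclay_fractions, thicknesses_m, throw_m):
    """Gouge ratio at a point on a fault: clay volume swept past the point,
    as a fraction of the throw window: sum(Vclay_i * dz_i) / throw."""
    if throw_m <= 0:
        raise ValueError("throw must be positive")
    swept_clay = sum(v * t for v, t in zip(vclay_fractions, thicknesses_m))
    return swept_clay / throw_m

# Two beds slipped past the evaluation point: 10 m at 20% clay, 10 m at 50% clay
sgr = fault_gouge_ratio([0.2, 0.5], [10.0, 10.0], 20.0)  # 0.35
```

Empirically, gouge ratios above roughly 0.15-0.2 are often taken to indicate a clay smear capable of sealing, though such thresholds are calibrated per field; varying the clay model and displacement, as the authors suggest, turns this single number into a distribution for risk assessment.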
76 FR 71431 - Civil Penalty Calculation Methodology
2011-11-17
... TRANSPORTATION Federal Motor Carrier Safety Administration Civil Penalty Calculation Methodology AGENCY: Federal... its civil penalty methodology. Part of this evaluation includes a forthcoming explanation of the... methodology for calculation of certain civil penalties. To induce compliance with federal regulations, FMCSA...
Dissociated brain potentials for two calculation strategies.
Luo, Wenbo; Liu, Dianzhi; He, Weiqi; Tao, Weidong; Luo, Yuejia
2009-03-04
Event-related brain potentials were used to investigate the shortcut and nonshortcut calculation strategies in performing addition using mental arithmetic. Results showed that the shortcut calculation strategy elicited a larger P220 than the nonshortcut calculation strategy in the 180-280 ms time window. Dipole source analysis of the difference wave (shortcut calculation minus nonshortcut calculation) indicated that a generator was localized in the posterior cingulate cortex, which reflected the evaluation effect of number in the use of the shortcut strategy. In the 320-500 ms time window, a greater N400 was found for the nonshortcut calculation than for the shortcut calculation. Dipole source analysis of the difference wave indicated that a generator was localized in the anterior cingulate cortex. The N400 might reflect the greater working memory load.
Pressure Vessel Calculations for VVER-440 Reactors
Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.
2003-06-01
Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by the MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements and in most cases fairly good agreement was found.
46 CFR 154.429 - Calculations.
2010-10-01
... § 154.429 Calculations. The tank design load calculations for a membrane tank must include the following... submitted to meet this paragraph. (c) The combined strains from static, dynamic, and thermal loads. ... 46 Shipping 5 2010-10-01 2010-10-01 false Calculations. 154.429 Section 154.429 Shipping COAST...
47 CFR 1.1623 - Probability calculation.
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Probability calculation. 1.1623 Section 1.1623... Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be computed to no less than three significant digits. Probabilities will be truncated to the number of...
Mathematical Creative Activity and the Graphic Calculator
Duda, Janina
2011-01-01
Teaching mathematics using graphic calculators has been a subject of didactic discussion for years. The focus of this article is finding ways in which graphic calculators can enrich the development of creative activity in mathematically gifted students aged 16-17. Research was conducted using graphic calculators with…
Recursive Delay Calculation Unit for Parametric Beamformer
DEFF Research Database (Denmark)
Nikolov, Svetoslav; Jensen, Jørgen Arendt; Tomov, Borislav Gueorguiev
2006-01-01
This paper presents a recursive approach to parametric delay calculation for a beamformer. The suggested calculation procedure is capable of calculating the delays for any image line defined by an origin and an arbitrary direction. It involves only add and shift operations, making it suitable...
How to calculate sample size and why.
Kim, Jeehyoung; Seo, Bong Soo
2013-09-01
Calculating the sample size is essential for reducing the cost of a study and proving the hypothesis effectively. Referring to pilot studies and previous research, we can choose a proper hypothesis and simplify the study by using a website or a Microsoft Excel sheet that contains formulas for calculating sample size at the beginning stage of the study. There are numerous formulas for calculating the sample size for complicated statistics and studies, but most studies can use basic calculation methods for sample size determination.
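As an example of the "basic calculating methods" mentioned, the standard per-group sample size for a two-sided, two-sample comparison of means, n = 2 (z_{1-a/2} + z_{1-b})^2 s^2 / d^2, can be computed with the Python standard library (this is the generic textbook formula, not code from the paper):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta: float, sigma: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for a two-sided two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a one-standard-deviation difference with 80% power at alpha = 0.05
n = n_per_group(delta=1.0, sigma=1.0)  # 16 per group
```

Plugging in a pilot-study estimate of sigma and the minimum clinically meaningful difference delta is exactly the workflow the abstract describes.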
Energy Technology Data Exchange (ETDEWEB)
Ryan, G.W., Westinghouse Hanford
1996-07-12
This document supports the development and presentation of the following accident scenario in the TWRS Final Safety Analysis Report: Subsurface Leak Remaining Subsurface. The calculations needed to quantify the risk associated with this accident scenario are included within.
Energy Technology Data Exchange (ETDEWEB)
Ryan, G.W., Westinghouse Hanford
1996-09-19
This document supports the development and presentation of the following accident scenario in the TWRS Final Safety Analysis Report: Subsurface Leak Remaining Subsurface. The calculations needed to quantify the risk associated with this accident scenario are included within.
Calculation notes for surface leak resulting in pool, TWRS FSAR accident analysis
Energy Technology Data Exchange (ETDEWEB)
Hall, B.W.
1996-09-25
This document includes the calculations performed to quantify the risk associated with the unmitigated and mitigated accident scenarios described in the TWRS FSAR for the accident analysis titled: Surface Leaks Resulting in Pool.
Mcmanus, John
2009-01-01
Few projects are completed on time, on budget, and to their original requirements or specifications. Focusing on what project managers need to know about risk in the pursuit of delivering projects, Risk Management covers key components of the risk management process and the software development process, as well as best practices for risk identification, risk planning, and risk analysis. The book examines risk planning, risk analysis, responses to risk, the tracking and modelling of risks, intel...
Mathematics, Pricing, Market Risk Management and Trading Strategies for Financial Derivatives (2/3)
CERN. Geneva; Coffey, Brian
2009-01-01
Market Trading and Risk Management of Vanilla FX Options - Measures of Market Risk - Implied Volatility - FX Risk Reversals, FX Strangles - Valuation and Risk Calculations - Risk Management - Market Trading Strategies
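Two of the quoted market-risk measures have simple arithmetic definitions under the usual 25-delta FX quoting convention; this is a rough sketch with invented volatility numbers, not material from the lectures:

```python
def risk_reversal(call_vol, put_vol):
    """25-delta risk reversal: implied vol of the OTM call minus the OTM put."""
    return call_vol - put_vol

def strangle_margin(call_vol, put_vol, atm_vol):
    """25-delta (smile) strangle: average wing vol minus the ATM vol."""
    return 0.5 * (call_vol + put_vol) - atm_vol

# Illustrative quotes in vol points: puts trading over calls.
rr = risk_reversal(10.5, 11.0)         # negative: market pays up for puts
bf = strangle_margin(10.5, 11.0, 10.0) # positive: wings rich vs ATM
```

A negative risk reversal with a positive strangle margin is the typical signature of a skewed, convex smile.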
The Calculator of Anti-Alzheimer's Diet. Macronutrients.
Directory of Open Access Journals (Sweden)
Marcin Studnicki
Full Text Available The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time nutritional sciences failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer's Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925-2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (Roriginal). A question thus arises what caused the oscillatory behavior of Roriginal? What historical events in the life of 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929-2005, we can mathematically explain the variability of Roriginal for each quarter of a human life. On the basis of multiple regression of Roriginal with regard to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as "the calculator" throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age and late age. The results obtained with the use of "the calculator" are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing Rpredicted-i.e., minimizing the strength of correlation between PCPI and future AADR.
The Calculator of Anti-Alzheimer's Diet. Macronutrients.
Studnicki, Marcin; Woźniak, Grażyna; Stępkowski, Dariusz
2016-01-01
The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time nutritional sciences failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer's Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925-2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (Roriginal). A question thus arises what caused the oscillatory behavior of Roriginal? What historical events in the life of 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929-2005, we can mathematically explain the variability of Roriginal for each quarter of a human life. On the basis of multiple regression of Roriginal with regard to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as "the calculator" throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age and late age. The results obtained with the use of "the calculator" are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing Rpredicted-i.e., minimizing the strength of correlation between PCPI and future AADR.
DEFF Research Database (Denmark)
Birot, Sophie
… Allergen and Allergy Management) aims at developing strategies for food allergies based on evidence. In particular, food allergen risk assessment helps food producers or authorities to make decisions on withdrawing a food product from the market or adding more information on the label when allergen presence … is unintended. The risk assessment method has three different kinds of input. The exposure is calculated from the product consumption and the allergen contamination in the food product. The exposure is then compared to the thresholds to which allergic individuals react in order to calculate the chance … for all the methods using uncertainty analysis [11]. The recommended approach for the allergen risk assessment was implemented in a Shiny application with the R software. Thus, allergen risk assessment can be performed easily by non-statisticians with the interactive application.
The Band Structure of Polymers: Its Calculation and Interpretation. Part 2. Calculation.
Duke, B. J.; O'Leary, Brian
1988-01-01
Details ab initio crystal orbital calculations using all-trans-polyethylene as a model. Describes calculations based on various forms of translational symmetry. Compares these calculations with ab initio molecular orbital calculations discussed in a preceding article. Discusses three major approximations made in the crystal case. (CW)
MATNORM: Calculating NORM using composition matrices
Pruseth, Kamal L.
2009-09-01
This paper discusses the implementation of an entirely new set of formulas to calculate the CIPW norm. MATNORM does not involve any sophisticated programming skill and has been developed using Microsoft Excel spreadsheet formulas. These formulas are easy to understand and a mere knowledge of the if-then-else construct in MS-Excel is sufficient to implement the whole calculation scheme outlined below. The sequence of calculation used here differs from that of the standard CIPW norm calculation, but the results are very similar. The use of MS-Excel macro programming and other high-level programming languages has been deliberately avoided for simplicity.
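The if-then-else style of allocation described can be illustrated with a deliberately simplified, norm-like fragment; this is not the full CIPW sequence (which involves dozens of ordered steps), and only the orthoclase/quartz stoichiometry is shown:

```python
def normative_minerals(sio2_moles, al2o3_moles, k2o_moles):
    """Toy CIPW-style allocation: form orthoclase (K2O.Al2O3.6SiO2),
    then assign leftover silica to quartz. Illustrative only."""
    or_moles = min(k2o_moles, al2o3_moles)   # orthoclase limited by K2O or Al2O3
    sio2_left = sio2_moles - 6 * or_moles    # 6 SiO2 consumed per orthoclase unit
    al2o3_left = al2o3_moles - or_moles
    # Excel equivalent: =IF(sio2_left > 0, sio2_left, 0)
    quartz = max(sio2_left, 0.0)
    return {"orthoclase": or_moles, "quartz": quartz, "al2o3_left": al2o3_left}
```

Each step in the real scheme is a chain of such conditional subtractions, which is why plain spreadsheet formulas suffice.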
The conundrum of calculating carbon footprints
DEFF Research Database (Denmark)
Strobel, Bjarne W.; Erichsen, Anders Christian; Gausset, Quentin
2016-01-01
A pre-condition for reducing global warming is to minimise the emission of greenhouse gasses (GHGs). A common approach to informing people about the link between behaviour and climate change rests on developing GHG calculators that quantify the ‘carbon footprint’ of a product, a sector or an actor....... There is, however, an abundance of GHG calculators that rely on very different premises and give very different estimates of carbon footprints. In this chapter, we compare and analyse the main principles of calculating carbon footprints, and discuss how calculators can inform (or misinform) people who wish...
Methodology for embedded transport core calculation
Ivanov, Boyan D.
The progress in the nuclear engineering field leads to developing new generations of Nuclear Power Plants (NPPs) with complex reactor core designs, such as cores loaded partially with mixed-oxide (MOX) fuel, high burn-up loadings, and cores with advanced designs of fuel assemblies and control rods. Such heterogeneous cores introduce challenges for the diffusion theory that has been used for several decades for calculations of current Pressurized Water Reactor (PWR) cores. To address the difficulties the diffusion approximation encounters, new core calculation methodologies need to be developed that improve accuracy while preserving the efficiency of current reactor core calculations. In this thesis, an advanced core calculation methodology is introduced, based on embedded transport calculations. Two different approaches are investigated. The first approach is based on an embedded finite element method (FEM), simplified P3 approximation (SP3), fuel assembly (FA) homogenization calculation within the framework of the diffusion core calculation with the NEM (Nodal Expansion Method) code. The second approach involves an embedded FA lattice physics eigenvalue calculation based on the collision probability method (CPM), again within the framework of the NEM diffusion core calculation. The second approach is superior to the first because most of the uncertainties introduced by the off-line cross-section generation are eliminated.
Pile Load Capacity – Calculation Methods
Directory of Open Access Journals (Sweden)
Wrana Bogumił
2015-12-01
Full Text Available The article is a review of current problems in foundation pile capacity calculations. It considers the main principles of pile capacity calculation presented in Eurocode 7 and other methods, with adequate explanations. Two main methods are presented: the α-method, used to calculate the short-term load capacity of piles in cohesive soils, and the β-method, used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on CPTu cone penetration test results are presented, as well as the pile capacity problem based on static load tests.
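A minimal sketch of the α-method for short-term (undrained) capacity in clay, assuming a cylindrical pile, a single uniform layer, and the commonly used bearing factor N_c = 9; real designs use depth-varying α values and the partial safety factors of Eurocode 7:

```python
import math

def alpha_method_capacity(c_u, diameter, length, alpha=0.5, N_c=9.0):
    """Ultimate short-term pile capacity (kN) in cohesive soil:
    shaft resistance alpha*c_u*A_s plus base resistance N_c*c_u*A_b.
    c_u in kPa, diameter and length in m. Illustrative sketch only."""
    A_s = math.pi * diameter * length      # shaft surface area, m^2
    A_b = math.pi * diameter ** 2 / 4.0    # base area, m^2
    Q_s = alpha * c_u * A_s                # shaft resistance, kN
    Q_b = N_c * c_u * A_b                  # base resistance, kN
    return Q_s + Q_b
```

For example, a 0.5 m diameter, 10 m long pile in clay with c_u = 50 kPa gives roughly 481 kN ultimate capacity under these assumptions.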
Directory of Open Access Journals (Sweden)
Matić Vesna
2016-01-01
Full Text Available Concentration risk has been gaining a special dimension in the contemporary financial and economic environment. Financial institutions are exposed to this risk mainly in the field of lending, mostly through their credit activities and the concentration of credit portfolios. This refers to the concentration of different exposures within a single risk category (credit risk, market risk, operational risk, liquidity risk).
RISK PREMIUM IN MOTOR VEHICLE INSURANCE
Directory of Open Access Journals (Sweden)
BANU ÖZGÜREL
2013-06-01
Full Text Available The pure premium or risk premium is the premium that would exactly meet the expected cost of the risk covered, ignoring management expenses, commissions, contingency loading, etc. The claim frequency rate and mean claim size are required for estimating and calculating risk premiums. In this study, we discuss several methods of estimating the claim frequency rate and mean claim size, and we calculate risk premiums. The data supporting our study were provided by an insurance company dealing with motor vehicle insurance.
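The pure premium described above is simply the product of the two estimated quantities; a toy example with made-up portfolio figures:

```python
def pure_premium(n_claims, n_policies, total_claim_amount):
    """Pure (risk) premium = claim frequency rate x mean claim size."""
    frequency = n_claims / n_policies          # expected claims per policy
    severity = total_claim_amount / n_claims   # mean claim size
    return frequency * severity
```

With 150 claims on 1000 policies totalling 300,000 in payments, the pure premium is 0.15 x 2000 = 300 per policy, before any expense or contingency loading.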
Directory of Open Access Journals (Sweden)
T. S. Il'ina
2016-01-01
Full Text Available The constant development of modern society sets ever higher requirements for specialist training. In this connection, risk management concepts need to be developed in order to make important decisions in educational establishment management. To create a qualitative instrument for managing educational risks, quantitative techniques for risk assessment in higher education are considered in the paper. Risk assessment has been performed by experts, and the data received have been used for minimizing educational risks in managerial decision making. When forming the expert panel, absence of personal interest in the matter has been taken into account to increase the quality of decision making. Expert grouping has been based on evaluating their familiarity with the issue in question. The experts then assessed the educational risks on the proposed scale. Overall expert assessments have been calculated using mathematical statistics, and the degree of agreement has been determined. For this purpose, the average rank and the deviation of each risk's rank sum from the average have been determined, and a multivariable rank correlation coefficient has been calculated; this coefficient shows the degree of expert agreement. Its significance has been assessed in order to evaluate the quality of the decision made and to draw conclusions from the data obtained. As a result, the most relevant risks in education have been identified and adequate measures have been taken to minimize those risks.
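The degree of agreement among ranked expert assessments is commonly measured with Kendall's coefficient of concordance W, a plausible reading of the multivariable rank correlation coefficient mentioned in the abstract; this sketch assumes untied ranks:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m experts ranking the
    same n items (no ties). rankings: list of m rank lists of length n.
    W = 1 means perfect agreement, W = 0 means no agreement."""
    m = len(rankings)
    n = len(rankings[0])
    # Rank sum of each item across experts.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    S = sum((t - mean_total) ** 2 for t in totals)  # squared deviations
    return 12 * S / (m ** 2 * (n ** 3 - n))
```

A significance test for W (e.g. via its chi-squared approximation) would correspond to the significance assessment the abstract describes.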
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers the question of filling the relevant SIEM nodes based on calculations of objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. The technique is also intended for establishing real-time operational information protection in enterprise information systems. Risk calculations are based on objective estimates of the probabilities of adverse events and predictions of the magnitude of damage from information security violations. Calculations of objective assessments are necessary to increase the reliability of the expert assessments.
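The core quantity described (probability of an adverse event times damage magnitude) reduces to an annualized loss expectancy; a minimal sketch with invented figures, not the authors' SIEM-specific model:

```python
def risk_exposure(threats):
    """Total annualized loss expectancy for a list of threats.
    threats: iterable of (annual_probability, damage) pairs."""
    return sum(p * damage for p, damage in threats)

# Hypothetical threat register: 10%/yr event costing 50k, 2%/yr event costing 200k.
total_risk = risk_exposure([(0.1, 50000.0), (0.02, 200000.0)])
```

Comparing such objective estimates against expert scores is the cross-check the methodology proposes.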
Calculation reliability in vehicle accident reconstruction.
Wach, Wojciech
2016-06-01
The reconstruction of vehicle accidents is subject to assessment in terms of the reliability of a specific system of engineering and technical operations. In the article [26] a formalized concept of the reliability of vehicle accident reconstruction, defined using Bayesian networks, was proposed. The current article is focused on the calculation reliability since that is the most objective section of this model. It is shown that calculation reliability in accident reconstruction is not another form of calculation uncertainty. The calculation reliability is made dependent on modeling reliability, adequacy of the model and relative uncertainty of calculation. All the terms are defined. An example is presented concerning the analytical determination of the collision location of two vehicles on the road in the absence of evidential traces. It has been proved that the reliability of this kind of calculations generally does not exceed 0.65, despite the fact that the calculation uncertainty itself can reach only 0.05. In this example special attention is paid to the analysis of modeling reliability and calculation uncertainty using sensitivity coefficients and weighted relative uncertainty. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Calculated optical absorption of different perovskite phases
DEFF Research Database (Denmark)
Castelli, Ivano Eligio; Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2015-01-01
We present calculations of the optical properties of a set of around 80 oxides, oxynitrides, and organometal halide cubic and layered perovskites (Ruddlesden-Popper and Dion-Jacobson phases) with a bandgap in the visible part of the solar spectrum. The calculations show that for different classes...
Impedance Calculations of Induction Machine Rotor Conductors ...
African Journals Online (AJOL)
The exact calculation of the impedance of induction machine rotor conductors at several operating frequencies is necessary if the dynamic behaviour of the machine is to give a good correlation between the simulated starting torque and current and the experimental results. This paper describes a method of calculating ...
46 CFR 154.520 - Piping calculations.
2010-10-01
46 CFR 154.520 (Shipping; Process Piping Systems), Piping calculations: A piping system must be designed to meet: (a) pipe weight loads; (b) acceleration loads; (c) internal pressure loads; (d) thermal loads; and (e) ...
Calculated Atomic Volumes of the Actinide Metals
DEFF Research Database (Denmark)
Skriver, H.; Andersen, O. K.; Johansson, B.
1979-01-01
The equilibrium atomic volume is calculated for the actinide metals. It is possible to account for the localization of the 5f electrons taking place in americium.
Numerical calculations of turbulent swirling flow
Kubo, I.; Gouldin, F. C.
1974-01-01
Description of a numerical technique for solving axisymmetric, incompressible, turbulent swirling flow problems. Isothermal flow calculations are presented for a coaxial flow configuration of special interest. The calculation results are discussed in regard to their implications for the design of gas turbine combustors.
47 CFR 54.609 - Calculating support.
2010-10-01
47 CFR 54.609 (Telecommunication; Universal Service Support for Health Care Providers), Calculating support: (a) Except with ... the amount of universal service support for an eligible service provided to a public or non-profit rural ...
Calculation of the Poisson cumulative distribution function
Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.
1990-01-01
A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.
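One standard way to achieve the stated goal (no underflow or overflow) is to accumulate the Poisson pmf recurrence entirely in log space; this is a sketch of the general technique, not the authors' actual program:

```python
import math

def log_add(a, b):
    """log(exp(a) + exp(b)) computed without overflow."""
    hi, lo = max(a, b), min(a, b)
    return hi + math.log1p(math.exp(lo - hi))

def poisson_log_cdf(k, lam):
    """log P(X <= k) for X ~ Poisson(lam), using the recurrence
    P(X = i) = P(X = i - 1) * lam / i carried in log space."""
    log_pmf = -lam            # log P(X = 0)
    log_cdf = log_pmf
    for i in range(1, k + 1):
        log_pmf += math.log(lam / i)
        log_cdf = log_add(log_cdf, log_pmf)
    return log_cdf
```

The inverse problem mentioned in the abstract (finding the Poisson parameter that yields a specified cdf value) could then be solved by bisection on `lam`, since the cdf is monotone in the parameter.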
Data base to compare calculations and observations
Energy Technology Data Exchange (ETDEWEB)
Tichler, J.L.
1985-01-01
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
Lewis Carroll's Formula for Calendar Calculating.
Spitz, Herman H.
1993-01-01
This paper presents Lewis Carroll's formula for mentally calculating the day of the week of a given date. The paper concludes that such formulas are too complex for individuals of low intelligence to learn by themselves, and thus "idiots savants" who perform such calendar calculations must be using other systems. (JDD)
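Carroll's actual formula is not reproduced in the abstract; a closely related mental-calculation rule, Zeller's congruence, can illustrate the flavor of such day-of-week methods:

```python
def day_of_week(year, month, day):
    """Zeller's congruence for the Gregorian calendar.
    Returns the weekday name; internally 0=Saturday ... 6=Friday."""
    if month < 3:               # treat Jan/Feb as months 13/14 of prior year
        month += 12
        year -= 1
    K, J = year % 100, year // 100
    h = (day + 13 * (month + 1) // 5 + K + K // 4 + J // 4 + 5 * J) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]
```

The arithmetic is short, but the month offsets and century corrections must be memorized exactly, which supports the paper's point about the difficulty of learning such formulas.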
BURDEN OF DISEASE CALCULATION, COST OF ILLNESS ...
African Journals Online (AJOL)
… individual's relatives and the society can channel such resources and energy to other uses that would … European countries. Useful steps in BoD calculation and CoI analysis: the first useful step in the calculation is the outcome tree; others are the perspective of evaluation, … illustrating their conditional dependency. …
10 CFR 766.102 - Calculation methodology.
2010-01-01
10 CFR 766.102 (Energy; Department of Energy; Uranium Enrichment Decontamination and Decommissioning Fund; Procedures for Special Assessment of Domestic Utilities), Calculation methodology: (a) ...
Sniderman, A.D.; Tremblay, A.J.; Graaf, J. de; Couture, P.
2014-01-01
OBJECTIVES: This study tests the validity of the Hattori formula to calculate LDL apoB based on plasma lipids and total apoB. METHODS: In 2178 patients in a tertiary care lipid clinic, LDL apoB calculated as suggested by Hattori et al. was compared to directly measured LDL apoB isolated by
5 CFR 1653.4 - Calculating entitlements.
2010-01-01
5 CFR 1653.4 (Administrative Personnel; Federal Retirement Thrift Investment Board; Court Orders and Legal Processes Affecting Thrift Savings Plan Accounts; Retirement Benefits, Court Orders), Calculating entitlements.
Calculation of Temperature Rise in Calorimetry.
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
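The extrapolation-to-zero-time method mentioned can be sketched numerically: fit straight lines to the pre- and post-reaction temperature drift and take their difference at the mixing time. The data below are invented for illustration:

```python
def linfit(xs, ys):
    """Least-squares straight line y = a*x + b through the points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def temperature_rise(t_pre, T_pre, t_post, T_post, t_mix):
    """Corrected calorimetric temperature rise: extrapolate both drift
    lines to the mixing time t_mix and take their difference."""
    a1, b1 = linfit(t_pre, T_pre)
    a2, b2 = linfit(t_post, T_post)
    return (a2 * t_mix + b2) - (a1 * t_mix + b1)
```

With a flat pre-period at 25.0 C and a post-period drifting down from 27.0 C, the extrapolated rise at the mixing time comes out larger than the raw maximum-minus-start difference, which is exactly the correction the method provides.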
Direct calculation of wind turbine tip loss
DEFF Research Database (Denmark)
Wood, D.H.; Okulov, Valery; Bhattacharjee, D.
2016-01-01
The usual method to account for a finite number of blades in blade element calculations of wind turbine performance is through a tip loss factor. Most analyses use the tip loss approximation due to Prandtl which is easily and cheaply calculated but is known to be inaccurate at low tip speed ratio...
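The Prandtl approximation referred to has a simple closed form, F = (2/pi) * arccos(exp(-f)) with f = (B/2)(R - r)/(r sin(phi)); a sketch:

```python
import math

def prandtl_tip_loss(B, r, R, phi):
    """Prandtl tip loss factor F for a wind turbine blade element.
    B: number of blades, r: local radius, R: tip radius,
    phi: local inflow angle in radians."""
    f = (B / 2.0) * (R - r) / (r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))
```

F falls to zero at the tip (r = R) and approaches one inboard, which is the correction applied to the blade element momentum equations; the abstract's point is that this cheap formula loses accuracy at low tip speed ratio.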
Fuzzy-probabilistic calculations of water-balance uncertainty
Energy Technology Data Exchange (ETDEWEB)
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code, to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of models of uncertainty incorporated into various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and estimates from field observations at the Hanford site.
HP-67 calculator programs for thermodynamic data and phase diagram calculations
Energy Technology Data Exchange (ETDEWEB)
Brewer, L.
1978-05-25
This report is a supplement to a tabulation of the thermodynamic and phase data for the 100 binary systems of Mo with the elements from H to Lr. The calculations of thermodynamic data and phase equilibria were carried out from 5000 K down to low temperatures. This report presents the methods of calculation used. The thermodynamics involved is rather straightforward, and the reader is referred to any advanced thermodynamics text. The calculations were largely carried out using an HP-65 programmable calculator. In this report, those programs are reformulated for use with the HP-67 calculator; the result is a great reduction in the number of programs required to carry out the calculations.
Calculation of the disease burden associated with environmental chemical exposures
DEFF Research Database (Denmark)
Grandjean, Philippe; Bellanger, Martine
2017-01-01
, and is hampered by gaps in environmental exposure data, especially from industrializing countries. For these reasons, a recently calculated environmental BoD of 5.18% of the total DALYs is likely underestimated. We combined and extended cost calculations for exposures to environmental chemicals, including...... neurotoxicants, air pollution, and endocrine disrupting chemicals, where sufficient data were available to determine dose-dependent adverse effects. Environmental exposure information allowed cost estimates for the U.S. and the EU, for OECD countries, though less comprehensive for industrializing countries...... is that they are available for few environmental chemicals and primarily based on mortality and impact and duration of clinical morbidity, while less serious conditions are mostly disregarded. Our economic estimates based on available exposure information and dose-response data on environmental risk factors need to be seen...
Calculating Outcrossing Rates used in Decision Support Systems for Ships
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2008-01-01
Onboard decision support systems (DSS) are used to increase the operational safety of ships. Ideally, DSS can estimate - in the statistical sense - future ship responses on a time scale of the order of 1-3 hours, taking into account speed and course changes. The calculations depend on both operational and environmental parameters that are known only in the statistical sense. The present paper suggests a procedure to incorporate random variables and associated uncertainties in calculations of outcrossing rates, which are the basis for risk-based DSS. The procedure is based on parallel system analysis, and the paper derives and describes the main ideas. The concept is illustrated by an example, where the limit state of a non-linear ship response is considered. The results from the parallel system analysis are in agreement with corresponding Monte Carlo simulations. However, the computational...
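For a stationary, zero-mean Gaussian response, the outcrossing rate of a level u has the classical closed form due to Rice; the nonlinear, non-Gaussian case treated in the paper needs the parallel-system machinery, but the Gaussian rate is the usual reference point:

```python
import math

def upcrossing_rate(u, sigma_x, sigma_v):
    """Rice's formula: mean rate of upcrossings of level u by a
    stationary zero-mean Gaussian process with response s.d. sigma_x
    and derivative-process s.d. sigma_v."""
    nu0 = sigma_v / (2.0 * math.pi * sigma_x)   # zero-level upcrossing rate
    return nu0 * math.exp(-u ** 2 / (2.0 * sigma_x ** 2))
```

Multiplying such a rate by the exposure time gives a first-order estimate of the probability of exceeding the limit state within the 1-3 hour horizon the abstract mentions.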
DEFF Research Database (Denmark)
Blicher-Mathiesen, Gitte; Andersen, Hans Estrup; Carstensen, Jacob
2014-01-01
risk mapping part of the tool, we combined a modelled root zone N leaching with a catchment-specific N reduction factor which in combination determines the N load to the marine recipient. N leaching was calculated using detailed information of agricultural management from national databases as well...
Risk and risk perception of knee osteoarthritis in the US: a population-based study.
Michl, G L; Katz, J N; Losina, E
2016-04-01
We sought to investigate risk perception among an online cohort of younger US adults compared with calculated risk estimates. We recruited a population-based cohort 25-44 years of age with no history of knee osteoarthritis (OA) using Amazon's Mechanical Turk, an online marketplace used extensively for behavioral research. After collecting demographic and risk factor information, we asked participants to estimate their 10-year and lifetime risk of knee OA. We compared perceived risk with risk derived from the OA risk calculator (OA Risk C), an online tool built on the basis of the validated OA Policy Model. 375 people completed the study. 21% reported having 3+ risk factors for OA, 25% reported two risk factors, and 32% reported one risk factor. Using the OA Risk C, we calculated a mean lifetime OA risk of 25% and 10-year risk of 4% for this sample. Participants overestimated their lifetime and 10-year OA risk at 48% and 26%, respectively. We found that obesity, female sex, family history of OA, history of knee injury, and occupational exposure were all significantly associated with greater perceived lifetime risk of OA. Risk factors are prevalent in this relatively young cohort. Participants consistently overestimated their lifetime risk and showed even greater overestimation of their 10-year risk, suggesting a lack of knowledge about the timing of OA onset. These data offer insights for awareness and risk interventions among younger persons at risk for knee OA. Copyright © 2015 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
Normal mode calculations of trigonal selenium
DEFF Research Database (Denmark)
Hansen, Flemming Yssing; McMurry, H. L.
1980-01-01
… With such coordinates, a potential energy calculated with only a diagonal force matrix is equivalent to one calculated with both off-diagonal and diagonal elements when conventional coordinates are used. Another advantage is that often some force constants may be determined directly from frequencies at points of high … In this way we have eliminated the ambiguity in the choice of valence coordinates, which has been a problem in previous models that used valence-type interactions. Calculated sound velocities and elastic moduli are also given.
Spreadsheet Based Scaling Calculations and Membrane Performance
Energy Technology Data Exchange (ETDEWEB)
Wolfe, T D; Bourcier, W L; Speth, T F
2000-12-28
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product 'Q'. The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI
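The Saturation Index compared at the end of the abstract has a standard definition, SI = log10(Q / Ksp); a minimal sketch:

```python
import math

def saturation_index(Q, Ksp):
    """SI = log10(Q / Ksp) for an effective ion product Q and a
    (temperature-adjusted) solubility product Ksp. SI > 0 indicates
    oversaturation and hence scaling risk; SI < 0 is undersaturated."""
    return math.log10(Q / Ksp)
```

A feedwater whose effective ion product is ten times the solubility product of a mineral gives SI = 1, flagging that mineral as a likely scalant at the membrane surface.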
Ti-84 Plus graphing calculator for dummies
McCalla
2013-01-01
Get up-to-speed on the functionality of your TI-84 Plus calculator Completely revised to cover the latest updates to the TI-84 Plus calculators, this bestselling guide will help you become the most savvy TI-84 Plus user in the classroom! Exploring the standard device, the updated device with USB plug and upgraded memory (the TI-84 Plus Silver Edition), and the upcoming color screen device, this book provides you with clear, understandable coverage of the TI-84's updated operating system. Details the new apps that are available for download to the calculator via the USB cabl
Hamming generalized corrector for reactivity calculation
Energy Technology Data Exchange (ETDEWEB)
Suescun-Diaz, Daniel; Ibarguen-Gonzalez, Maria C.; Figueroa-Jimenez, Jorge H. [Pontificia Universidad Javeriana Cali, Cali (Colombia). Dept. de Ciencias Naturales y Matematicas
2014-06-15
This work presents the generalized Hamming corrector method for numerically solving the differential equations of delayed neutron precursor concentration from the point kinetics equations for reactivity calculation, without using the nuclear power history or the Laplace transform. Several correctors, with their respective modifiers, were studied at different calculation time steps to offer stability and greater precision. Some correctors give better results than other existing methods. Reactivity can be calculated with a precision of order h^5, where h is the time step. (orig.)
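A minimal sketch of a Hamming predictor-corrector step, applied here to the toy equation y' = -y rather than to the precursor-concentration equations of the abstract (variable names and the test problem are illustrative):

```python
import math

def hamming_step(f, t, h, y, fvals):
    """One PECE step of Hamming's fourth-order predictor-corrector.

    y: last four solution values [y_{n-3}, y_{n-2}, y_{n-1}, y_n]
    fvals: matching derivatives [f_{n-3}, f_{n-2}, f_{n-1}, f_n]
    Returns y_{n+1}.
    """
    # Milne predictor, local error O(h^5)
    p = y[0] + (4.0 * h / 3.0) * (2.0 * fvals[3] - fvals[2] + 2.0 * fvals[1])
    # Hamming corrector, evaluated with the predicted derivative
    fp = f(t + h, p)
    return (9.0 * y[3] - y[1]) / 8.0 + (3.0 * h / 8.0) * (fp + 2.0 * fvals[3] - fvals[2])

# Toy problem y' = -y, y(0) = 1 (exact solution exp(-t)),
# bootstrapped with exact starting values.
f = lambda t, y: -y
h = 0.01
ts = [i * h for i in range(4)]
ys = [math.exp(-t) for t in ts]
fs = [f(t, y) for t, y in zip(ts, ys)]
t = ts[-1]
for _ in range(100):
    y_next = hamming_step(f, t, h, ys, fs)
    t += h
    ys = ys[1:] + [y_next]
    fs = fs[1:] + [f(t, y_next)]
```

The h^5 local error quoted in the abstract is that of the corrector; the modifiers the authors study refine the predictor-corrector pair beyond this plain PECE form.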
NASCAP/LEO calculations of current collection
Mandell, Myron J.; Katz, Ira; Davis, Victoria A.; Kuharski, Robert A.
1990-12-01
NASCAP/LEO is a 3-dimensional computer code for calculating the interaction of a high-voltage spacecraft with the cold dense plasma found in Low Earth Orbit. Although based on a cubic grid structure, NASCAP/LEO accepts object definition input from standard computer aided design (CAD) programs so that a model may be correctly proportioned and important features resolved. The potential around the model is calculated by solving the finite element formulation of Poisson's equation with an analytic space charge function. Five previously published NASCAP/LEO calculations for three ground test experiments and two space flight experiments are presented. The three ground test experiments are a large simulated panel, a simulated pinhole, and a 2-slit experiment with overlapping sheaths. The two space flight experiments are a solar panel biased up to 1000 volts, and a rocket-mounted sphere biased up to 46 kilovolts. In all cases, the authors find good agreement between calculation and measurement.
Carbon Footprint Calculator | Climate Change | US EPA
2016-12-12
An interactive calculator to estimate your household's carbon footprint. This tool will estimate carbon pollution emissions from your daily activities and show how to reduce your emissions and save money through simple steps.
Calculated Leaf Carbon and Nitrogen, 1992 (ACCP)
National Aeronautics and Space Administration — Study plot canopy chemistry values were calculated from leaf chemistry and litterfall weight values. Average leaf concentrations of nitrogen and carbon were used to...
Calculating Employee Compensation Using An Economic Principle
National Research Council Canada - National Science Library
Puneet Jaiprakash
2015-01-01
.... This paper develops an intuitive method for calculating the minimum amount by which an employee's compensation must be adjusted taking into account changes in economic conditions since the start of employment...
Teaching Graphing Concepts with Graphing Calculators.
Mercer, Joseph
1995-01-01
Presents five lessons to demonstrate how graphing calculators can be used to teach the slope-intercept concept of linear equations and to establish more general principles about two-dimensional graphs. Contains a reproducible student quiz. (MKR)
Numerical calculations in quantum field theories
Energy Technology Data Exchange (ETDEWEB)
Rebbi, C.
1984-01-01
Four lecture notes are included: (1) motivation for numerical calculations in Quantum Field Theory; (2) numerical simulation methods; (3) Monte Carlo studies of Quantum Chromo Dynamics; and (4) systems with fermions. 23 references. (WHK)
NUMERICAL CALCULATIONS IN THE GENERAL DYNAMICAL ...
African Journals Online (AJOL)
DR. AMINU
Correspondence Author ... of Moving Bodies", the following postulates were introduced: ... Table 1: calculated values of the ratio of coordinate time to proper time for both general relativity and dynamical theory of gravitation. Body. Mass (M) kg.
Fair and Reasonable Rate Calculation Data
Department of Transportation — This dataset provides guidelines for calculating the fair and reasonable rates for U.S. flag vessels carrying preference cargoes subject to regulations contained at...
Temperature calculation in fire safety engineering
Wickström, Ulf
2016-01-01
This book provides a consistent scientific background to engineering calculation methods applicable to analyses of materials reaction-to-fire, as well as fire resistance of structures. Several new and unique formulas and diagrams which facilitate calculations are presented. It focuses on problems involving high temperature conditions and, in particular, defines boundary conditions in a suitable way for calculations. A large portion of the book is devoted to boundary conditions and measurements of thermal exposure by radiation and convection. The concepts and theories of adiabatic surface temperature and measurements of temperature with plate thermometers are thoroughly explained. Also presented is a renewed method for modeling compartment fires, with the resulting simple and accurate prediction tools for both pre- and post-flashover fires. The final chapters deal with temperature calculations in steel, concrete and timber structures exposed to standard time-temperature fire curves. Useful temperature calculat...
IOL Power Calculation after Corneal Refractive Surgery
Maddalena De Bernardo; Luigi Capasso; Luisa Caliendo; Francesco Paolercio; Nicola Rosa
2014-01-01
Purpose. To describe the different formulas that try to overcome the problem of calculating the intraocular lens (IOL) power in patients that underwent corneal refractive surgery (CRS). Methods. A Pubmed literature search review of all published articles, on keyword associated with IOL power calculation and corneal refractive surgery, as well as the reference lists of retrieved articles, was performed. Results. A total of 33 peer reviewed articles dealing with methods that try to overcome the...
Flow calculation of a bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Goede, E.; Pestalozzi, J.
1987-01-01
In recent years remarkable progress has been made in the field of theoretical flow calculation. Studying the relevant literature one might receive the impression that most problems have been solved. But probing more deeply into details one becomes aware that by no means all questions are answered. The report tries to point out what may be expected of the quasi-three-dimensional flow calculation method employed and - much more important - what it must not be expected to accomplish. (orig.)
Providing driving rain data for hygrothermal calculations
DEFF Research Database (Denmark)
Kragh, Mikkel Kristian
1996-01-01
Due to a wish for driving rain data as input for hygrothermal calculations, this report deals with utilizing commonly applied empirical relations and standard meteorological data, in an attempt to provide realistic estimates rather than exact correlations.
DOWNSCALE APPLICATION OF BOILER THERMAL CALCULATION APPROACH
Zelený, Zbynĕk; Hrdlička, Jan
2016-01-01
Commonly used thermal calculation methods are intended primarily for large-scale boilers. Hot water small-scale boilers, which are commonly used for home heating, have many specifics that distinguish them from large-scale boilers, especially steam boilers. This paper is focused on the application of a thermal calculation procedure designed for large-scale boilers to a small-scale biomass combustion boiler of load capacity 25 kW. A special issue solved here is the influence of the formation of dep...
A revised calculational model for fission
Energy Technology Data Exchange (ETDEWEB)
Atchison, F.
1998-09-01
A semi-empirical parametrization has been developed to calculate the fission contribution to evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation-energy and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: Nuclei from Ta to Cf, interactions involving nucleons up to medium energy and light ions. (author)
Efficient Finite Element Calculation of Nγ
DEFF Research Database (Denmark)
Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.
2007-01-01
This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.
Internal Mechanism of a Schoonschip Calculation
Strubbe, H
1973-01-01
Schoonschip is a general purpose "Algebraic Manipulation" program. It is designed to do long - but in principle straightforward - analytic calculations. It can be used interactively. It is very fast in execution and very economical in storage (25K). This is achieved by writing the program almost entirely in (CDC 6000) machine code. Therefore, the representation of the algebraic formulae (list structure) and the calculation mechanism can be set up very efficiently. Input, Output and a few numerical tasks are done in Fortran.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
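EFASC itself is VBA; purely as an illustration of the kind of daily-flow statistics such a tool computes, here is a sketch in Python. The choice of statistics and the simple rank-based Q90 estimate are assumptions for illustration, not EFASC's algorithms:

```python
def flow_statistics(daily_flows):
    """A few common daily-streamflow statistics (illustrative set)."""
    n = len(daily_flows)
    mean_flow = sum(daily_flows) / n
    # 7-day minimum flow: lowest mean over any 7 consecutive days
    min7 = min(sum(daily_flows[i:i + 7]) / 7.0 for i in range(n - 6))
    # Q90: flow exceeded on ~90% of days (simple rank estimate)
    ranked = sorted(daily_flows)
    q90 = ranked[int(0.10 * (n - 1))]
    return {"mean": mean_flow, "min7": min7, "q90": q90}

stats = flow_statistics([10.0, 12.0, 8.0, 7.0, 9.0, 11.0, 13.0, 6.0, 5.0, 14.0])
```

As the abstract warns for EFASC itself, percentile conventions vary; any real use should match the publishing agency's definitions.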
Modern biogeochemistry environmental risk assessment
Bashkin, Vladimir N
2006-01-01
Most books deal mainly with various technical aspects of ERA description and calculations. This book aims at generalizing the modern ideas of both biogeochemical and environmental risk assessment developed during recent years, and at supplementing the existing books by providing a modern understanding of the mechanisms that are responsible for the ecological risk for human beings and ecosystems.
Exploration Health Risks: Probabilistic Risk Assessment
Rhatigan, Jennifer; Charles, John; Hayes, Judith; Wren, Kiley
2006-01-01
Maintenance of human health on long-duration exploration missions is a primary challenge to mission designers. Indeed, human health risks are currently the largest risk contributors to the risks of evacuation or loss of the crew on long-duration International Space Station missions. We describe a quantitative assessment of the relative probabilities of occurrence of the individual risks to human safety and efficiency during space flight to augment qualitative assessments used in this field to date. Quantitative probabilistic risk assessments will allow program managers to focus resources on those human health risks most likely to occur with undesirable consequences. Truly quantitative assessments are common, even expected, in the engineering and actuarial spheres, but that capability is just emerging in some arenas of life sciences research, such as identifying and minimizing the hazards to astronauts during future space exploration missions. Our expectation is that these results can be used to inform NASA mission design trade studies in the near future with the objective of preventing the most significant of the human health risks. We identify and discuss statistical techniques to provide this risk quantification based on relevant sets of astronaut biomedical data from short- and long-duration space flights as well as relevant analog populations. We outline critical assumptions made in the calculations and discuss the rationale for these. Our efforts to date have focused on quantifying the probabilities of medical risks that are qualitatively perceived as relatively high: radiation sickness, cardiac dysrhythmias, medically significant renal stone formation due to increased calcium mobilization, decompression sickness as a result of EVA (extravehicular activity), and bone fracture due to loss of bone mineral density. We present these quantitative probabilities in order-of-magnitude comparison format so that relative risk can be gauged. We address the effects of
Biases for current FFTF calculational methods
Energy Technology Data Exchange (ETDEWEB)
Ombrellaro, P.A.; Bennett, R.A.; Daughtry, J.W.; Dobbin, K.D.; Harris, R.A.; Nelson, J.V.; Peterson, R.E.; Rothrock, R.B.
1978-01-01
Uncertainties in nuclear data and approximate calculational methods used in safety design, and operational support of a reactor yield biased as well as uncertain results. Experimentally based biases for use in Fast Flux Test Facility (FFTF) core calculations have been evaluated and are presented together with a description of calculational methods. Experimental data for these evaluations were obtained from an Engineering Mockup Critical (EMC) of the FFTF core built at the Argonne National Laboratory (ANL). The experiments were conceived and planned by the Hanford Engineering Development Laboratory (HEDL) in cooperation with the Westinghouse Advanced Reactors Division (WARD) and ANL personnel, and carried out by the ANL staff. All experiments were designed specifically to provide data for evaluation of current FFTF core calculational methods. These comprehensive experiments were designed to allow simultaneous evaluations of biases and uncertainties in calculated reactivities, fuel sub-assembly and material reactivity worths, small sample worths, absorber rod worths, spatial fission rate distributions, power tilting effects and spatial neutron spectra. Modified source multiplication and reactivity anomaly methods have also been evaluated. Uncertainties in the biases have been established and are sufficiently small to attain a high degree of confidence in the design, safety and operational aspects of the FFTF core.
Good Practices in Free-energy Calculations
Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher
2013-01-01
As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices are followed. For the most part, the theory upon which these good practices rely has been known for many years, but often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve efficiency and accuracy of free energy calculations without incurring any additional computational expense.
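The exponential-average estimator underlying the energy-perturbation and nonequilibrium-work methods mentioned above can be sketched in a few lines; the synthetic Gaussian work sample below is illustrative only, not data from any simulation:

```python
import math
import random

def fep_delta_f(work_values, kT=1.0):
    """Free-energy difference from forward work values via the
    exponential (Zwanzig/Jarzynski) average:
    dF = -kT * ln < exp(-W / kT) >."""
    avg = sum(math.exp(-w / kT) for w in work_values) / len(work_values)
    return -kT * math.log(avg)

# Synthetic check: Gaussian work values with mean dF + sigma^2/(2 kT)
# and variance sigma^2 should average back to dF (here dF = 1 kT).
random.seed(1)
work = [random.gauss(1.5, 1.0) for _ in range(20000)]
dF_est = fep_delta_f(work)
```

Monitoring the work distribution and running the transformation bidirectionally, as the abstract recommends, guards against the poor tail sampling that makes this estimator biased in practice.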
Paramedics’ Ability to Perform Drug Calculations
Directory of Open Access Journals (Sweden)
Eastwood, Kathryn J
2009-11-01
Background: The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics' drug calculation abilities was first published in 2000, and for nurses' abilities the research dates back to the late 1930s. Yet, there have been no studies investigating an undergraduate paramedic student's ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. Methods: A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We reviewed references from articles retrieved. Results: The electronic database search located 1,154 articles for review. Six additional articles were identified from reference lists of retrieved articles. Of these, 59 were considered relevant. After reviewing the 59 articles, only three met the inclusion criteria. All articles noted some level of mathematical deficiencies amongst their subjects. Conclusions: This study identified only three articles. Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify if undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting. [WestJEM. 2009;10:240-243.]
Paramedics’ Ability to Perform Drug Calculations
Eastwood, Kathryn J; Boyle, Malcolm J; Williams, Brett
2009-01-01
Background: The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics’ drug calculation abilities was first published in 2000 and for nurses’ abilities the research dates back to the late 1930s. Yet, there have been no studies investigating an undergraduate paramedic student’s ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. Methods: A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We reviewed references from articles retrieved. Results: The electronic database search located 1,154 articles for review. Six additional articles were identified from reference lists of retrieved articles. Of these, 59 were considered relevant. After reviewing the 59 articles only three met the inclusion criteria. All articles noted some level of mathematical deficiencies amongst their subjects. Conclusions: This study identified only three articles. Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify if undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting. PMID:20046240
Paramedics' ability to perform drug calculations.
Eastwood, Kathryn J; Boyle, Malcolm J; Williams, Brett
2009-11-01
The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics' drug calculation abilities was first published in 2000 and for nurses' abilities the research dates back to the late 1930s. Yet, there have been no studies investigating an undergraduate paramedic student's ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We reviewed references from articles retrieved. The electronic database search located 1,154 articles for review. Six additional articles were identified from reference lists of retrieved articles. Of these, 59 were considered relevant. After reviewing the 59 articles only three met the inclusion criteria. All articles noted some level of mathematical deficiencies amongst their subjects. This study identified only three articles. Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify if undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting.
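As an illustration of the kind of drug calculations these studies test, here is a sketch of three standard formulas (weight-based dose, volume from stock concentration, and drip rate); the example order and numbers are hypothetical, not taken from the articles:

```python
def dose_mg(weight_kg, dose_per_kg):
    """Total dose (mg) from a weight-based order (mg/kg)."""
    return weight_kg * dose_per_kg

def volume_ml(dose, concentration_mg_per_ml):
    """Volume (mL) to draw up for a dose (mg) at a stock concentration."""
    return dose / concentration_mg_per_ml

def drip_rate(volume, time_min, drop_factor):
    """Infusion rate in drops/min from volume (mL), time (min) and
    giving-set drop factor (gtt/mL)."""
    return volume * drop_factor / time_min

# Hypothetical order: 1 mg/kg for an 80 kg patient from a
# 100 mg / 10 mL (10 mg/mL) preparation.
d = dose_mg(80.0, 1.0)   # 80.0 mg
v = volume_ml(d, 10.0)   # 8.0 mL
```

The studies above concern exactly this arithmetic performed mentally or on paper under time pressure, not in software.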
The vulnerability index calculation for determination of groundwater quality
Energy Technology Data Exchange (ETDEWEB)
Kurtz, D.A.; Parizek, R.R. [Penn State Univ., University Park, PA (United States)
1995-12-01
Non-point source pollutants, such as pesticides, enter groundwater systems by a variety of means and at wide-ranging concentrations. The risks of using groundwater for human consumption vary depending on the amounts of contaminants, the type of groundwater aquifer, and various use factors. We have devised a method of determining the vulnerability of an aquifer to contamination: the Vulnerability Index. The index can be used either as a comparative or an absolute index (comparative with a pure water source or aquifer spring, or without comparison, assuming no peaks in the compared sample). Data for the calculation are obtained by extraction of a given water sample followed by analysis by gas chromatography with a nitrogen/phosphorus detector. The index is computed from the sum of peak heights; an additional peak-number factor is added to emphasize higher numbers of compounds found in a given sample. Karst aquifers are considered highly vulnerable due to the large solution openings in their structure. Examples will be given of Vulnerability Indices from springs emanating from karst, intermediate, and diffuse-flow aquifers, sampled at various times during the 1992 sampling year and compared with rainfall during that time. Comparisons will be made of the index vs. rainfall events and vs. pesticide application data. The risk of using contaminated drinking water sources can be evaluated with this index.
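A sketch of the index as described, i.e. a sum of peak heights plus a peak-number factor; the abstract does not specify how that factor is weighted, so the linear form below is an assumption, and the peak heights are invented:

```python
def vulnerability_index(peak_heights, peak_number_weight=1.0):
    """Sum of chromatographic peak heights plus a peak-number factor.
    The weighting of the peak-number factor is an assumption here."""
    return sum(peak_heights) + peak_number_weight * len(peak_heights)

# Comparative use: a (hypothetical) karst spring with three detected
# compounds vs. a diffuse-flow spring with one.
karst = vulnerability_index([120.0, 45.0, 30.0])    # 198.0
diffuse = vulnerability_index([15.0])               # 16.0
```

In comparative mode one would subtract the index of the pure-water reference sample before comparing sites.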
Common breast cancer risk alleles and risk assessment
DEFF Research Database (Denmark)
Näslund-Koch, C; Nordestgaard, B G; Bojesen, S E
2017-01-01
BACKGROUND: We hypothesized that common breast cancer risk alleles are associated with incidences of breast cancer and other cancers in the general population, and identify low-risk women among those invited for screening mammography. PARTICIPANTS AND METHODS: 35,441 individuals from the Danish general population were followed in Danish health registries for up to 21 years after blood sampling. After genotyping 72 breast cancer risk loci, each with 0-2 alleles, the sum for each individual was calculated. We used the simple allele sum instead of the conventional polygenic risk score... cancer risks ≤ 1.5%. Using the polygenic risk score led to similar results. CONCLUSION: Common breast cancer risk alleles are associated with incidence and mortality of breast cancer in the general population, but not with other cancers. After including the breast cancer allele sum in risk assessment, 25...
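The allele sum used in the study, contrasted with a conventional weighted polygenic risk score, reduces to a few lines; the example genotypes and weights below are invented for illustration:

```python
def allele_sum(genotypes):
    """Simple risk-allele sum: each locus (72 in the study)
    contributes 0, 1 or 2 risk alleles."""
    assert all(g in (0, 1, 2) for g in genotypes)
    return sum(genotypes)

def polygenic_risk_score(genotypes, per_allele_log_or):
    """Conventional weighted score, shown for contrast; weights are
    per-allele log odds ratios."""
    return sum(g * b for g, b in zip(genotypes, per_allele_log_or))

# Invented five-locus example.
s = allele_sum([0, 1, 2, 2, 0])   # 5
```

The study's point is that the unweighted sum, which needs no effect-size estimates, performed comparably to the weighted score.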
A practical alternative to calculating unmet need for family planning.
Sinai, Irit; Igras, Susan; Lundgren, Rebecka
2017-01-01
The standard approach for measuring unmet need for family planning calculates actual, physiological unmet need and is useful for tracking changes at the population level. We propose to supplement it with an alternate approach that relies on individual perceptions and can improve program design and implementation. The proposed approach categorizes individuals by their perceived need for family planning: real met need (current users of a modern method), perceived met need (current users of a traditional method), real no need, perceived no need (those with a physiological need for family planning who perceive no need), and perceived unmet need (those who realize they have a need but do not use a method). We tested this approach using data from Mali (n=425) and Benin (n=1080). We found that traditional method use was significantly higher in Benin than in Mali, resulting in different perceptions of unmet need in the two countries. In Mali, perceived unmet need was much higher. In Benin, perceived unmet need was low because women believed (incorrectly) that they were protected from pregnancy. Perceived no need - women who believed that they could not become pregnant despite the fact that they were fecund and sexually active - was quite high in both countries. We posit that interventions that address perceptions of unmet need, in addition to physiological risk of pregnancy, will more likely be effective in changing behavior. The suggested approach for calculating unmet need supplements the standard calculations and is helpful for designing programs to better address women's and men's individual needs in diverse contexts.
D & D screening risk evaluation guidance
Energy Technology Data Exchange (ETDEWEB)
Robers, S.K.; Golden, K.M.; Wollert, D.A.
1995-09-01
The Screening Risk Evaluation (SRE) guidance document is a set of guidelines provided for the uniform implementation of SREs performed on decontamination and decommissioning (D&D) facilities. Although this method has been developed for D&D facilities, it can be used for transition (EM-60) facilities as well. The SRE guidance produces screening risk scores reflecting levels of risk through the use of risk ranking indices. Five types of possible risk are calculated from the SRE: current releases, worker exposures, future releases, physical hazards, and criticality. The Current Release Index (CRI) calculates the current risk to human health and the environment, exterior to the building, from ongoing or probable releases within a one-year time period. The Worker Exposure Index (WEI) calculates the current risk to workers, occupants and visitors inside contaminated D&D facilities due to contaminant exposure. The Future Release Index (FRI) calculates the hypothetical risk of future releases of contaminants, after one year, to human health and the environment. The Physical Hazards Index (PHI) calculates the risks to human health due to factors other than that of contaminants. Criticality is approached as a modifying factor to the entire SRE, due to the fact that criticality issues are strictly regulated under DOE. Screening risk results will be tabulated in matrix form, and Total Risk will be calculated (weighted equation) to produce a score on which to base early action recommendations. Other recommendations from the screening risk scores will be made based either on individual index scores or from reweighted Total Risk calculations. All recommendations based on the SRE will be made based on a combination of screening risk scores, decision drivers, and other considerations, as determined on a project-by-project basis.
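A sketch of the weighted Total Risk combination described above. The actual SRE weights are not stated in the abstract, so the equal weights below are purely illustrative, and the multiplicative form of the criticality modifier is likewise an assumption:

```python
def total_risk(cri, wei, fri, phi, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted combination of the four SRE indices: Current Release
    (CRI), Worker Exposure (WEI), Future Release (FRI) and Physical
    Hazards (PHI). Equal weights here are illustrative only."""
    return sum(w * s for w, s in zip(weights, (cri, wei, fri, phi)))

def apply_criticality_modifier(total, modifier=1.0):
    """Criticality enters as a modifier on the whole SRE rather than
    as a fifth index, per the abstract; the form is an assumption."""
    return total * modifier

score = total_risk(4.0, 2.0, 3.0, 1.0)   # 2.5 with equal weights
```

Recommendations would then be based on this combined score together with the individual index scores and decision drivers.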
Automated one-loop calculations with GOSAM
Energy Technology Data Exchange (ETDEWEB)
Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)
2011-11-15
We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)
Resolving resonances in R-matrix calculations
Energy Technology Data Exchange (ETDEWEB)
Ramirez, J.M.; Bautista, Manuel A. [Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas (IVIC), Caracas (Venezuela)
2002-10-28
We present a technique to obtain detailed resonance structures from R-matrix calculations of atomic cross sections for both collisional and radiative processes. The resolving resonances (RR) method relies on the QB method of Quigley-Berrington (Quigley L, Berrington K A and Pelan J 1998 Comput. Phys. Commun. 114 225) to find the position and width of resonances directly from the reactance matrix. Then one determines the symmetry parameters of these features and generates an energy mesh whereby fully resolved cross sections are calculated with minimum computational cost. The RR method is illustrated with the calculation of the photoionization cross sections and the unified recombination rate coefficients of Fe XXIV, O VI, and Fe XVII. The RR method reduces numerical errors arising from unresolved R-matrix cross sections in the computation of synthetic bound-free opacities, thermally averaged collision strengths and recombination rate coefficients. (author)
Green's function calculation from equipartition theorem.
Perton, Mathieu; Sánchez-Sesma, Francisco José
2016-08-01
A method is presented to calculate the elastodynamic Green's functions by using the equipartition principle. The imaginary parts are calculated as the average cross correlations of the displacement fields generated by the incidence of body and surface waves with amplitudes weighted by partition factors. The real part is retrieved using the Hilbert transform. The calculation of the partition factors is discussed for several geometrical configurations in two-dimensional space: the full space, a basin in a half-space, and layered media. For the last case, the method results in a fast computation of the full Green's functions. Additionally, if the contribution of only selected states is desired, for instance the surface-wave part, the computation is even faster. Its use for full waveform inversion may then be advantageous.
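The final step of the method, recovering the real part from the imaginary part via the Hilbert transform, can be sketched with an FFT-based discrete Hilbert transform; this is a generic sketch, not the authors' code:

```python
import numpy as np

def hilbert_transform(x):
    """Discrete (periodic) Hilbert transform via the FFT:
    multiply by -i*sign(omega) in the frequency domain. Applied to
    the imaginary part of a causal Green's function, this yields the
    real part (zero and Nyquist components excepted)."""
    n = len(x)
    X = np.fft.fft(x)
    sgn = np.zeros(n)
    sgn[1:n // 2] = 1.0       # positive frequencies
    sgn[n // 2 + 1:] = -1.0   # negative frequencies
    return np.real(np.fft.ifft(-1j * sgn * X))

# Sanity check of the sign convention: the Hilbert transform of cos is sin.
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 4 * t / n)
hx = hilbert_transform(x)
```

The heavier part of the method, building the imaginary part from partition-factor-weighted cross correlations of incident wave states, depends on the geometry and is not sketched here.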
Daylight calculations using constant luminance curves
Energy Technology Data Exchange (ETDEWEB)
Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda
2005-02-01
This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed by Richard Perez et al. at the Atmospheric Sciences Research Center (ASRC) of the State University of New York. Working with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis for establishing conclusions concerning energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with the method that uses the daylight factor. (author)
Using inverted indices for accelerating LINGO calculations.
Kristensen, Thomas G; Nielsen, Jesper; Pedersen, Christian N S
2011-03-28
The ever-growing size of chemical databases calls for the development of novel methods for representing and comparing molecules. One such method, called LINGO, is based on fragmenting the SMILES string representation of molecules. Molecules can then be compared by calculating the Tanimoto coefficient, which is called LINGOsim when applied to LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets, which makes it possible to transform them into sparse fingerprints so that fingerprint data structures and algorithms can be used to accelerate queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialized hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices, it is possible to calculate LINGOsim similarity matrices roughly 2.6 times faster than existing methods without relying on specialized hardware.
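The fragmentation-and-Tanimoto idea can be sketched in a few lines. This is a toy version using raw q = 4 substrings of the SMILES string; the published LINGO method also normalizes ring-closure digits and certain atom labels, which is omitted here, and the function names are illustrative:

```python
from collections import Counter

def lingo_multiset(smiles, q=4):
    """Fragment a SMILES string into its multiset of overlapping q-substrings."""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingosim(a, b):
    """Tanimoto coefficient on multisets: |A & B| / |A | B| (min / max counts)."""
    inter = sum((a & b).values())
    union = sum((a | b).values())
    return inter / union if union else 1.0

# Two similar five-atom chains differing in the last atom:
print(lingosim(lingo_multiset("CCCCC"), lingo_multiset("CCCCN")))
```

An inverted index over these substrings then lets a query molecule skip every database entry that shares no LINGO with it, which is where the reported speedup comes from.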
Note about socio-economic calculations
DEFF Research Database (Denmark)
Landex, Alex; Andersen, Jonas Lohmann Elkjær; Salling, Kim Bang
2006-01-01
This note gives a short introduction to making socio-economic evaluations in connection with the teaching at the Centre for Traffic and Transport (CTT). It is not a manual for making socio-economic calculations in transport infrastructure projects – in this context we refer to the guidelines for socio-economic calculations within the transportation area (Ministry of Traffic, 2003). The note also explains the theory of socio-economic calculations – reference is here made to "Road Infrastructure Planning – a Decision-oriented Approach" (Leleur, 2000). Socio-economic evaluations of infrastructure projects are common and can be made at different levels of detail depending on the type of project and the decision-making phase. A common feature of the different levels of detail of the socio-economic analysis is that the planned project(s) is compared with a baseline: the basic alternative or a null...
eQuilibrator—the biochemical thermodynamics calculator
Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron
2012-01-01
The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find, and manual calculations with them are cumbersome. Even simple thermodynamic questions like ‘how much Gibbs energy is released by ATP hydrolysis at pH 5?’ are made excessively complicated by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use. PMID:22064852
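The kind of concentration adjustment eQuilibrator automates is the standard relation ΔG = ΔG°′ + RT ln Q. In the sketch below, the ΔG°′ for ATP hydrolysis is a commonly quoted textbook figure near pH 7, not a value taken from the eQuilibrator database, and the concentrations are illustrative:

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol*K)
T = 298.15     # temperature, K

def delta_g(delta_g0_prime, reaction_quotient):
    """Transformed reaction Gibbs energy at given metabolite concentrations:
    dG = dG_prime_standard + R*T*ln(Q)."""
    return delta_g0_prime + R * T * math.log(reaction_quotient)

# ATP + H2O -> ADP + Pi with illustrative concentrations:
# [ADP] = 0.1 mM, [Pi] = 1 mM, [ATP] = 1 mM  ->  Q = 1e-4
print(round(delta_g(-30.5, 1e-4), 1))  # about -53.3 kJ/mol
```

The point of the example is that physiological concentrations shift the released energy well away from the standard value, which is exactly the correction the web interface performs for arbitrary pH and ionic strength.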
The Calculator of Anti-Alzheimer’s Diet. Macronutrients
Studnicki, Marcin; Woźniak, Grażyna; Stępkowski, Dariusz
2016-01-01
The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time nutritional sciences have failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea of how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer’s Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925–2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (Roriginal). A question thus arises: what caused the oscillatory behavior of Roriginal? What historical events in the life of 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929–2005, we can mathematically explain the variability of Roriginal for each quarter of a human life. On the basis of multiple regression of Roriginal with regard to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as “the calculator” throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age and late age. The results obtained with the use of “the calculator” are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing Rpredicted, i.e., minimizing the strength of correlation between PCPI and future AADR. PMID:27992612
Undergraduate paramedic students cannot do drug calculations
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and computational errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they ‘did not have any drug calculations issues’. On average, those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067
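To make the error taxonomy concrete: a conceptual error is failing to set up the governing equation at all, while an arithmetical or computational error occurs while operating it. A sketch of the single most common equation involved (the function name is illustrative, not taken from the study):

```python
def volume_to_draw(desired_dose_mg, stock_dose_mg, stock_volume_ml):
    """The classic 'desired over stock times volume' drug calculation:
    the equation a student must formulate from the information given."""
    return desired_dose_mg / stock_dose_mg * stock_volume_ml

# 2.5 mg required from an ampoule containing 10 mg in 4 ml:
print(volume_to_draw(2.5, 10.0, 4.0))  # -> 1.0 (ml)
```

Getting from the scenario to this one-line formula is the conceptual step; dividing 2.5 by 10 correctly is the computational one.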
Calculation of water activation for the LHC
Vollaire, Joachim; Brugger, Markus; Forkel-Wirth, Doris; Roesler, Stefan; Vojtyla, Pavol
2006-06-01
The management of activated water in the Large Hadron Collider (LHC) at CERN is a key concern for radiation protection. For this reason, the induced radioactivity of the different water circuits is calculated using the Monte-Carlo (MC) code FLUKA. The results lead to the definition of procedures to be taken into account during the repair and maintenance of the machine, as well as to measures being necessary for a release of water into the environment. In order to assess the validity of the applied methods, a benchmark experiment was carried out at the CERN-EU High Energy Reference Field (CERF) facility, where a hadron beam (120 GeV) is impinging on a copper target. Four samples of water, as used at the LHC, and different in their chemical compositions, were irradiated near the copper target. In addition to the tritium activity measured with a liquid scintillation counter, the samples were also analyzed using gamma spectroscopy in order to determine the activity of the gamma emitting isotopes such as Be7 and Na24. While for the latter an excellent agreement between simulation and measurement was found, for the calculation of tritium a correction factor is derived to be applied for future LHC calculations in which the activity is calculated by direct scoring of produced nuclei. A simplified geometry representing the LHC Arc sections is then used to evaluate the different calculation methods with FLUKA. By comparing these methods and by taking into account the benchmark results, a strategy for the environmental calculations can be defined.
First principles calculations for lithiated manganese oxides
Energy Technology Data Exchange (ETDEWEB)
Benedek, R; Prasad, R; Thackeray, M; Wills, J M; Yang, L H
1998-12-22
First principles calculations using the local-spin-density-functional theory are presented of densities of electronic states for MnO, LiMnO{sub 2} in the monoclinic and orthorhombic structures, cubic LiMn{sub 2}O{sub 4} spinel, and {lambda}-MnO{sub 2} (delithiated spinel), all in antiferromagnetic spin configurations. The changes in energy spectra as the Mn oxidation state varies between 2+ and 4+ are illustrated. Preliminary calculations for Co-doped LiMnO{sub 2} are presented, and the destabilization of a monoclinic relative to a rhombohedral structure is discussed.
Transmission pipeline calculations and simulations manual
Menon, E Shashi
2014-01-01
Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f
Calculation of a Shock Response Spectra
Directory of Open Access Journals (Sweden)
Jiri Tuma
2011-11-01
As stated in the ISO 18431-4 standard, a shock response spectrum is defined as the response to a given acceleration acting on a set of mass-damper-spring oscillators, which are adjusted to different resonance frequencies while their resonance gains (Q-factors) are set to the same value. The maxima of the absolute values of the calculated responses, as a function of the resonance frequencies, compose the shock response spectrum (SRS). The paper deals with employing Signal Analyzer, a software package for signal processing, for calculation of the SRS. The theory is illustrated by examples.
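The oscillator-bank definition can be sketched directly. This is a toy time-stepping version (semi-implicit Euler, Q = 10, an illustrative half-sine input), not the digital-filter procedure that ISO 18431-4 or the Signal Analyzer software actually prescribes:

```python
import math

def srs(accel, dt, freqs, q_factor=10.0):
    """Maximax shock response spectrum of an acceleration time history."""
    zeta = 1.0 / (2.0 * q_factor)          # damping ratio from the Q-factor
    spectrum = []
    for fn in freqs:
        wn = 2.0 * math.pi * fn            # oscillator natural frequency, rad/s
        z = v = peak = 0.0                 # relative displacement and velocity
        for a in accel:
            # semi-implicit Euler on  z'' + 2*zeta*wn*z' + wn^2*z = -a(t)
            v += (-a - 2.0 * zeta * wn * v - wn * wn * z) * dt
            z += v * dt
            # absolute acceleration of the oscillator mass
            resp = abs(2.0 * zeta * wn * v + wn * wn * z)
            peak = max(peak, resp)
        spectrum.append(peak)
    return spectrum

# 11 ms half-sine pulse of unit amplitude, followed by a quiet tail
dt = 1e-4
pulse = [math.sin(math.pi * i / 110.0) for i in range(110)] + [0.0] * 2000
print(srs(pulse, dt, [50.0, 400.0]))
```

At frequencies well above the pulse content the SRS approaches the peak input acceleration, which is a quick sanity check on any implementation.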
Histidine in Continuum Electrostatics Protonation State Calculations
Couch, Vernon; Stuchebruckhov, Alexei
2014-01-01
A modification to the standard continuum electrostatics approach to calculate protein pKas which allows for the decoupling of histidine tautomers within a two state model is presented. Histidine with four intrinsically coupled protonation states cannot be easily incorporated into a two state formalism because the interaction between the two protonatable sites of the imidazole ring is not purely electrostatic. The presented treatment, based on a single approximation of the interrelation between histidine’s charge states, allows for a natural separation of the two protonatable sites associated with the imidazole ring as well as the inclusion of all protonation states within the calculation. PMID:22072521
Ab initio valence calculations in chemistry
Cook, D B
1974-01-01
Ab Initio Valence Calculations in Chemistry describes the theory and practice of ab initio valence calculations in chemistry and applies the ideas to a specific example, linear BeH2. Topics covered include the Schrödinger equation and the orbital approximation to atomic orbitals; molecular orbital and valence bond methods; practical molecular wave functions; and molecular integrals. Open shell systems, molecular symmetry, and localized descriptions of electronic structure are also discussed. This book is comprised of 13 chapters and begins by introducing the reader to the use of the Schrödinge
BASIC program calculates flue gas energy balance
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, V. (ABCO Industries, Inc., Abilene, TX (United States))
1993-10-01
Engineers always seek cost-cutting, energy-efficient ways to operate boilers and waste-heat recovery systems. The starting point in the design or performance evaluation of any heat transfer equipment is an energy balance calculation. This easy-to-use BASIC program tackles this problem. Using the gas stream analysis as percent weight or volume, the program calculates inlet and exit temperatures, heat duty, the gas stream's molecular weight, etc. This program is a definite must for the plant engineering notebook.
Criticality calculation of non-ordinary systems
Energy Technology Data Exchange (ETDEWEB)
Kalugin, A. V., E-mail: Kalugin-AV@nrcki.ru; Tebin, V. V. [National Research Centre Kurchatov Institute (Russian Federation)
2016-12-15
The specific features of calculation of the effective multiplication factor using the Monte Carlo method for weakly coupled and non-asymptotic multiplying systems are discussed. Particular examples are considered and practical recommendations on detection and Monte Carlo calculation of systems typical in numerical substantiation of nuclear safety for VVER fuel management problems are given. In particular, the problems of the choice of parameters for the batch mode and the method for normalization of the neutron batch, as well as finding and interpretation of the eigenvalue spectrum for the integral fission matrix, are discussed.
Calculating Buffer Zones: A Guide for Applicators
Buffer zones provide distance between the application block (i.e., edge of the treated field) and bystanders, in order to control pesticide exposure risk from soil fumigants. Distance requirements may be reduced by credits such as tarps.
Energy risk management and value at risk modeling
Energy Technology Data Exchange (ETDEWEB)
Sadeghi, Mehdi [Economics department, Imam Sadiq University, P.B. 14655-159, Tehran (Iran, Islamic Republic of)]. E-mail: sadeghi@isu.ac.ir; Shavvalpour, Saeed [Economics department, Imam Sadiq University, P.B. 14655-159, Tehran (Iran, Islamic Republic of)]. E-mail: shavalpoor@isu.ac.ir
2006-12-15
The value of energy trades can change over time with market conditions and underlying price variables. The rise of competition and deregulation in energy markets has led to relatively free energy markets that are characterized by high price shifts. Within oil markets, the volatile oil price environment after the OPEC agreements in the 1970s requires a risk quantification. 'Value-at-risk' has become an essential tool for this end when quantifying market risk. There are various methods for calculating value-at-risk. The methods introduced in this paper are historical simulation with ARMA forecasting (HSAF) and variance-covariance based on GARCH modeling. The results show that among the various approaches the HSAF methodology presents more efficient results: if the level of confidence is 99%, the value-at-risk calculated through the HSAF methodology is greater than the actual price changes in almost 97.6 percent of the forecasting period.
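The historical-simulation component of the HSAF approach amounts to reading a quantile off the empirical loss distribution; a minimal sketch of that step alone (the ARMA forecasting and GARCH stages are not reproduced here, and the returns are illustrative):

```python
def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss level exceeded only
    (1 - confidence) of the time in the empirical return distribution."""
    losses = sorted(-r for r in returns)   # losses, ascending
    k = min(int(confidence * len(losses)), len(losses) - 1)
    return losses[k]

# 100 illustrative daily returns spanning -5% to +4.9%
returns = [i / 1000.0 - 0.05 for i in range(100)]
print(historical_var(returns))  # -> 0.05, a 5% one-day loss at 99% confidence
```

The paper's backtest criterion corresponds to checking how often realized losses exceed this threshold over the forecasting period.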
Engineering calculations in radiative heat transfer
Gray, W A; Hopkins, D W
1974-01-01
Engineering Calculations in Radiative Heat Transfer is a six-chapter book that first explains the basic principles of thermal radiation and direct radiative transfer. Total exchange of radiation within an enclosure containing an absorbing or non-absorbing medium is then described. Subsequent chapters detail the radiative heat transfer applications and measurement of radiation and temperature.
Calculators and the SAT: A Status Report.
Rigol, Gretchen W.
1991-01-01
The College Entrance Examination Board has not permitted calculator use on the Scholastic Aptitude Test because of unresolved concerns about equity, implications for test content, and logistical and security issues. Those issues no longer seem insurmountable, and significant changes are being introduced on many tests. (MSE)
Cubic scaling GW: Towards fast quasiparticle calculations
Czech Academy of Sciences Publication Activity Database
Liu, P.; Kaltak, M.; Klimeš, Jiří; Kresse, G.
2016-01-01
Roč. 94, č. 16 (2016), s. 165109 ISSN 2469-9950 Institutional support: RVO:61388955 Keywords : MEAN-FIELD THEORY * ELECTRONIC-STRUCTURE CALCULATIONS * AUGMENTED-WAVE METHOD Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.836, year: 2016
IOL Power Calculation after Corneal Refractive Surgery
Directory of Open Access Journals (Sweden)
Maddalena De Bernardo
2014-01-01
Purpose. To describe the different formulas that try to overcome the problem of calculating the intraocular lens (IOL) power in patients who underwent corneal refractive surgery (CRS). Methods. A PubMed literature search review of all published articles, on keywords associated with IOL power calculation and corneal refractive surgery, as well as the reference lists of retrieved articles, was performed. Results. A total of 33 peer-reviewed articles dealing with methods that try to overcome the problem of calculating the IOL power in patients who underwent CRS were found. According to the information needed to overcome this problem, the methods were divided into two main categories: 18 methods based on knowledge of the patient's clinical history and 15 methods that do not require such knowledge. The first group was further divided into five subgroups based on the parameters needed to make the calculation. Conclusion. In the light of our findings, to avoid postoperative nasty surprises, we suggest using only those methods that have shown good results in a large number of patients, possibly by averaging the results obtained with these methods.
IOL power calculation after corneal refractive surgery.
De Bernardo, Maddalena; Capasso, Luigi; Caliendo, Luisa; Paolercio, Francesco; Rosa, Nicola
2014-01-01
To describe the different formulas that try to overcome the problem of calculating the intraocular lens (IOL) power in patients who underwent corneal refractive surgery (CRS). A PubMed literature search review of all published articles, on keywords associated with IOL power calculation and corneal refractive surgery, as well as the reference lists of retrieved articles, was performed. A total of 33 peer-reviewed articles dealing with methods that try to overcome the problem of calculating the IOL power in patients who underwent CRS were found. According to the information needed to overcome this problem, the methods were divided into two main categories: 18 methods based on knowledge of the patient's clinical history and 15 methods that do not require such knowledge. The first group was further divided into five subgroups based on the parameters needed to make the calculation. In the light of our findings, to avoid postoperative nasty surprises, we suggest using only those methods that have shown good results in a large number of patients, possibly by averaging the results obtained with these methods.
Ab initio calculations of biomolecules
Energy Technology Data Exchange (ETDEWEB)
Les, A. [Department of Chemistry, University of Warsaw, 02-093 Warsaw (Poland)]|[Department of Chemistry, University of Arizona, Tucson, Arizona 85721 (United States); Adamowicz, L. [Department of Theoretical Chemistry, University of Lund, Lund, S-22100 (Sweden)]|[Department of Chemistry, University of Arizona, Tucson, Arizona 85721 (United States)
1995-08-01
Ab initio quantum mechanical calculations are valuable tools for the interpretation and elucidation of elemental processes in biochemical systems. With the ab initio approach one can calculate data that are sometimes difficult to obtain by experimental techniques. The most popular computational theoretical methods include the Hartree-Fock method as well as some lower-level variational and perturbational post-Hartree-Fock approaches, which allow one to predict molecular structures and to calculate spectral properties. We have been involved in a number of joint theoretical and experimental studies in the past, and some examples of these studies are given in this presentation. The systems chosen cover a wide variety of simple biomolecules, such as precursors of nucleic acids, double-proton-transferring molecules, and simple systems involved in processes related to the first stages of substrate-enzyme interactions. In particular, examples of some ab initio calculations used in the assignment of IR spectra of matrix-isolated pyrimidine nucleic bases are shown. Some radiation-induced transformations in model chromophores are also presented. Lastly, we demonstrate how the ab initio approach can be used to determine the initial several steps of the molecular mechanism of thymidylate synthase inhibition by dUMP analogues.
24 CFR 3280.811 - Calculations.
2010-04-01
... neutral load determined by Article 220.61 of the National Electrical Code, NFPA No. 70-2005. The loads... DEVELOPMENT MANUFACTURED HOME CONSTRUCTION AND SAFETY STANDARDS Electrical Systems § 3280.811 Calculations. (a... motors and heater loads (exhaust fans, air conditioners, electric, gas, or oil heating). Omit smaller of...
COMPARISON OF CALCULATED AND DIRECT LOW DENSITY ...
African Journals Online (AJOL)
2004-03-03
Mar 3, 2004 ... linear regression analyses using SPSS (VER 10.0). To assess the degree of agreement ... Summary of cholesterol, TGs, HDL-C and LDL-C; and correlation between calculated and direct LDL-C among the groups ... associated with hyperlipidaemia, including diabetes mellitus, nephrotic syndrome and ...
Work Function Calculation For Hafnium- Barium System
Directory of Open Access Journals (Sweden)
K.A. Tursunmetov
2015-08-01
The adsorption process of barium atoms on hafnium is considered. A structural model of the system is presented, and the dependence of the work function on the coating is obtained on the basis of a calculation of the interaction between the ions of the dipole system.
molecular dynamics simulations and quantum chemical calculations ...
African Journals Online (AJOL)
KEYWORDS: Molecular dynamics simulation; iron surface; adsorption; imidazoline derivatives; quantum chemical calculations ... break any bond. This means that the closer the nuclei of the bonding atoms are, the greater the supply of energy needed to separate them, owing to the large force of attraction between the atoms.
Simple Calculation Programs for Biology Other Methods
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology: Other Methods. Hemolytic potency of drugs: Raghava et al. (1994) Biotechniques 17: 1148. FPMAP: methods for classification and identification of microorganisms by 16S rRNA. Graphical display of restriction and fragment map of ...
Molecular transport calculations with Wannier Functions
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2005-01-01
We present a scheme for calculating coherent electron transport in atomic-scale contacts. The method combines a formally exact Green's function formalism with a mean-field description of the electronic structure based on the Kohn-Sham scheme of density functional theory. We use an accurate plane-wave...
Unified approach to alpha decay calculations
Indian Academy of Sciences (India)
2014-05-02
A small error in E can cause a much bigger error in τ. For this reason, many analyses of α decay use the experimentally measured E rather than the theoretically calculated one, even though a more satisfactory theoretical approach should generate both E and τ within a unified framework. Further ...
A New Iterative Method to Calculate [pi
Dion, Peter; Ho, Anthony
2012-01-01
For at least 2000 years people have been trying to calculate the value of [pi], the ratio of the circumference to the diameter of a circle. People know that [pi] is an irrational number; its decimal representation goes on forever. Early methods were geometric, involving the use of inscribed and circumscribed polygons of a circle. However, real…
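The classical geometric approach the article contrasts with can be written as a compact iteration. This is Archimedes' polygon-doubling scheme in its harmonic/geometric-mean form, not the new method the article proposes:

```python
import math

def archimedes_pi(iterations=20):
    """Archimedes' polygon doubling: a and b are the semi-perimeters of the
    circumscribed and inscribed polygons, starting from hexagons."""
    a, b = 2.0 * math.sqrt(3.0), 3.0
    for _ in range(iterations):
        a = 2.0 * a * b / (a + b)   # harmonic mean: new circumscribed value
        b = math.sqrt(a * b)        # geometric mean: new inscribed value
    return (a + b) / 2.0

print(archimedes_pi())  # converges to 3.14159265...
```

Each iteration doubles the number of polygon sides, squeezing π between the inscribed and circumscribed perimeters; twenty doublings already agree with math.pi to better than nine decimal places.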
Auger yield calculations for medical radioisotopes
Directory of Open Access Journals (Sweden)
Lee Boon Q.
2015-01-01
Auger yields from the decays of 71Ge, 99mTc, 111In and 123–125I have been calculated using a Monte Carlo model of the Auger cascade developed at the ANU. In addition, progress has been made on improving the model's input data with the Multiconfiguration Dirac-Hartree-Fock method.
Attitudes towards Graphing Calculators in Developmental Mathematics
Rajan, Shaun Thomas
2013-01-01
The purpose of this exploratory study was to examine instructor and student attitudes towards the use of the graphing calculator in the developmental mathematics classroom. A focus of the study was to see if instructors or students believed there were changes in the conceptual understanding of mathematics as a result of graphing calculator…
Calculation of the CIPW norm: New formulas
Indian Academy of Sciences (India)
A completely new set of formulas, based on matrix algebra, has been suggested for the calculation of the CIPW norm for igneous rocks to achieve highly consistent and accurate norms. The suggested sequence of derivation of the normative minerals greatly deviates from the sequence followed in the classical scheme.
7 CFR 1416.704 - Payment calculation.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Payment calculation. 1416.704 Section 1416.704 Agriculture Regulations of the Department of Agriculture (Continued) COMMODITY CREDIT CORPORATION, DEPARTMENT... necessary to ensure successful plant survival; (3) Chemicals and nutrients necessary for successful...
Conductance calculations with a wavelet basis set
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel
2003-01-01
The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...
Simple Calculation Programs for Biology Immunological Methods
Indian Academy of Sciences (India)
Simple Calculation Programs for Biology: Immunological Methods. Computation of Ab/Ag concentration from ELISA data: Graphical Method; Raghava et al. (1992) J. Immunol. Methods 153: 263. Determination of affinity of monoclonal antibody using non-competitive ...
Gaseous Nitrogen Orifice Mass Flow Calculator
Ritrivi, Charles
2013-01-01
The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data were used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and puncture diameter that simulates the measured GN2 system pressure drop.
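For a 40 psia source venting to ambient, the pressure ratio exceeds the critical value, so the governing relation is presumably the standard isentropic choked-orifice formula. The sketch below assumes that formula with illustrative inputs (discharge coefficient, puncture diameter, temperature), not the actual WSB parameters:

```python
import math

def choked_mass_flow(cd, orifice_d_m, p0_pa, t0_k, gamma=1.4, r_gas=296.8):
    """Choked (sonic) mass flow of nitrogen through an orifice, in kg/s.
    r_gas is the specific gas constant of N2 in J/(kg*K)."""
    area = math.pi * orifice_d_m ** 2 / 4.0
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0_pa * math.sqrt(gamma / (r_gas * t0_k)) * crit

# Illustrative only: 1 mm puncture, Cd = 0.9, 40 psia (275.8 kPa), 300 K
print(choked_mass_flow(0.9, 1e-3, 275790.0, 300.0))  # kg/s
```

Because choked flow scales linearly with upstream pressure, dividing the remaining source mass by this rate gives a quick depletion-time estimate for a given leak scenario.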
Bullet Design of MEMS Cantilever - Hand Calculation
Directory of Open Access Journals (Sweden)
Abhijeet V. KSHIRSAGAR
2008-04-01
The present article describes the basic hand calculations for the design of a MEMS cantilever for beginners. MATLAB code was written to analyse all the formulae. Further, the article gives insight into the important parameters, their dependences, and considerations for a good design.
Model calculations in correlated finite nuclei
Energy Technology Data Exchange (ETDEWEB)
Guardiola, R.; Ros, J. (Granada Univ. (Spain). Dept. de Fisica Nuclear); Polls, A. (Tuebingen Univ. (Germany, F.R.). Inst. fuer Theoretische Physik)
1980-10-21
In order to study the convergence condition of the FAHT cluster expansion several model calculations are described and numerically tested. It is concluded that this cluster expansion deals properly with the central part of the two-body distribution function, but presents some difficulties for the exchange part.
Total energy calculations and bonding at interfaces
Energy Technology Data Exchange (ETDEWEB)
Louie, S.G.
1984-08-01
Some of the concepts and theoretical techniques employed in recent ab initio studies of the electronic and structural properties of surfaces and interfaces are discussed. Results of total energy calculations for the 2 x 1 reconstructed diamond (111) surface and for stacking faults in Si are reviewed. 30 refs., 8 figs.
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
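The constrained-simulation variant the authors compare against (integrating the average force over the coordinate) can be sketched numerically. The mean-force profile below is synthetic, chosen only so the result can be checked analytically:

```python
import numpy as np

# Synthetic mean force <F(xi)> on a grid of the generalized coordinate xi.
xi = np.linspace(0.0, np.pi, 200)
mean_force = -2.0 * np.sin(2.0 * xi)   # assumed profile, <F> = -dA/dxi

# Free energy profile by thermodynamic integration: A(xi) = -integral of <F> d(xi),
# accumulated with the trapezoid rule between sampled points.
increments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
free_energy = -np.concatenate(([0.0], np.cumsum(increments)))
# For this profile, A(xi) = 1 - cos(2*xi): maximum of 2 at xi = pi/2, back to 0 at pi.
```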
Block Tridiagonal Matrices in Electronic Structure Calculations
DEFF Research Database (Denmark)
Petersen, Dan Erik
This thesis focuses on some of the numerical aspects of the treatment of the electronic structure problem, in particular that of determining the ground state electronic density for the non–equilibrium Green’s function formulation of two–probe systems and the calculation of transmission...
Deconstructing Calculation Methods, Part 3: Multiplication
Thompson, Ian
2008-01-01
In this third of a series of four articles, the author deconstructs the primary national strategy's approach to written multiplication. The approach to multiplication, as set out on pages 12 to 15 of the primary national strategy's "Guidance paper" "Calculation" (DfES, 2007), is divided into six stages: (1) mental…
Calculated Bulk Properties of the Actinide Metals
DEFF Research Database (Denmark)
Skriver, Hans Lomholt; Andersen, O. K.; Johansson, B.
1978-01-01
Self-consistent relativistic calculations of the electronic properties for seven actinides (Ac-Am) have been performed using the linear muffin-tin orbitals method within the atomic-sphere approximation. Exchange and correlation were included in the local spin-density scheme. The theory explains t...
Unified approach to alpha decay calculations
Indian Academy of Sciences (India)
2014-05-02
We describe the analytic S-matrix (SM) method, which gives a procedure for the calculation of decay energy and mean life in an integrated way by evaluating the resonance pole of the S-matrix in the complex momentum or energy plane. We make an illustrative comparative study of WKB and S-matrix ...
CALCULATION OF THE PROCESS OF BURDEN HEATING
Directory of Open Access Journals (Sweden)
S. L. Rovin
2009-01-01
Full Text Available An original method for calculating the duration of burden heating to a predetermined temperature is presented. The results of numerical modeling of nonstationary heating of a fixed bed are given. Experimental verification of the results was carried out at full-scale plants.
5 CFR 1653.14 - Calculating entitlements.
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Calculating entitlements. 1653.14 Section 1653.14 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD COURT ORDERS AND LEGAL PROCESSES AFFECTING THRIFT SAVINGS PLAN ACCOUNTS Legal Process for the Enforcement of a Participant's Legal...
30 CFR 5.30 - Fee calculation.
2010-07-01
... MINING PRODUCTS FEES FOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS § 5.30 Fee calculation. (a) MSHA bases fees under this subchapter on the direct and indirect costs of the services provided, except... product. (d) If the actual cost of processing the application is less than MSHA's maximum fee estimate...
Calculating track thrust with track functions
Chang, Hsi-Ming; Procura, Massimiliano; Thaler, Jesse; Waalewijn, Wouter J.
2013-08-01
In e+e- event shapes studies at LEP, two different measurements were sometimes performed: a “calorimetric” measurement using both charged and neutral particles and a “track-based” measurement using just charged particles. Whereas calorimetric measurements are infrared and collinear safe, and therefore calculable in perturbative QCD, track-based measurements necessarily depend on nonperturbative hadronization effects. On the other hand, track-based measurements typically have smaller experimental uncertainties. In this paper, we present the first calculation of the event shape “track thrust” and compare to measurements performed at ALEPH and DELPHI. This calculation is made possible through the recently developed formalism of track functions, which are nonperturbative objects describing how energetic partons fragment into charged hadrons. By incorporating track functions into soft-collinear effective theory, we calculate the distribution for track thrust with next-to-leading logarithmic resummation. Due to a partial cancellation between nonperturbative parameters, the distributions for calorimeter thrust and track thrust are remarkably similar, a feature also seen in LEP data.
Relativistic calculations of coalescing binary neutron stars
Indian Academy of Sciences (India)
Faber, Joshua; Grandclément, Philippe; Rasio, Frederic (Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208-0834, USA). E-mail: rasio@mac.com
We have designed and tested a new relativistic Lagrangian hydrodynamics code, which treats gravity in the conformally flat approximation to general relativity. We have tested the resulting code extensively, finding that it performs well for calculations of equilibrium single-star models, collapsing relativistic dust clouds, and ...
Fast calculation of best focus position
Bezzubik, V.; Belashenkov, N.; Vdovin, G.V.
2015-01-01
A new computational technique based on linear-scale differential analysis (LSDA) of the digital image is proposed to find the best focus position in digital microscopy by means of defocus estimation in two near-focal positions only. The method is based on the calculation of local gradients of the image on
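A gradient-based sharpness metric of the general kind described (local image gradients increase as focus improves) can be sketched as follows. This is a generic focus measure for illustration, not the authors' LSDA method:

```python
import numpy as np

def focus_measure(image):
    """Sum of squared first differences; larger values indicate a sharper image."""
    img = image.astype(float)
    gx = np.diff(img, axis=1)   # horizontal local gradients
    gy = np.diff(img, axis=0)   # vertical local gradients
    return float((gx ** 2).sum() + (gy ** 2).sum())

# A sharp step edge scores higher than the same edge smoothed over 3 columns.
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = (np.roll(sharp, 1, axis=1) + sharp + np.roll(sharp, -1, axis=1)) / 3.0
```

Comparing such a metric at two near-focal positions gives the direction and, with a model of the defocus response, an estimate of the best focus position.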
Synthesis, characterization, ab initio calculations, thermal behaviour ...
Indian Academy of Sciences (India)
Administrator
through titration of the ligands with the metal ions at constant ionic strength (0⋅1 M NaClO4) and at 25°C. According to the thermodynamic studies, as the steric character of the ligand increases, the complexation tendency to VO(IV) center decreases. Also, the ab initio calculations were carried out to determine the structural ...
A Tabular Approach to Titration Calculations
Lim, Kieran F.
2012-01-01
Titrations are common laboratory exercises in high school and university chemistry courses, because they are easy, relatively inexpensive, and they illustrate a number of fundamental chemical principles. While students have little difficulty with calculations involving a single titration step, there is a significant leap in conceptual difficulty…
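The single-step calculation that students find straightforward can be sketched as follows (a hypothetical strong acid-strong base titration with 1:1 stoichiometry; the volumes and concentration are illustrative):

```python
def analyte_concentration(c_titrant_m, v_titrant_ml, v_analyte_ml, mole_ratio=1.0):
    """Single-step titration: c_analyte = mole_ratio * c_titrant * V_titrant / V_analyte."""
    return mole_ratio * c_titrant_m * v_titrant_ml / v_analyte_ml

# 25.00 mL of 0.100 M NaOH neutralizes 20.00 mL of HCl (1:1 stoichiometry)
c_hcl = analyte_concentration(0.100, 25.00, 20.00)   # 0.125 M
```

The conceptual leap the article addresses comes when several such steps must be chained (back-titrations, dilutions), which is where a tabular layout of moles at each stage helps.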
Ammonia synthesis from first principles calculations
DEFF Research Database (Denmark)
Honkala, Johanna Karoliina; Hellman, Anders; Remediakis, Ioannis
2005-01-01
The rate of ammonia synthesis over a nanoparticle ruthenium catalyst can be calculated directly on the basis of a quantum chemical treatment of the problem using density functional theory. We compared the results to measured rates over a ruthenium catalyst supported on magnesium aluminum spinel...
Why Do Calculators Have Rubber Feet?
Heavers, Richard M.
2007-01-01
Our students like using the covers of their TI graphing calculators in an inquiry-based extension of a traditional exercise that challenges their preconceived ideas about friction. Biology major Fiona McGraw (Fig. 1) is obviously excited about the large coefficient of static friction ([mu][subscript s] = 1.3) for the four little rubber feet on her…
On Calculating the Current-Voltage Characteristic of Multi-Diode Models for Organic Solar Cells
Roberts, Ken; Valluri, S.R.
2015-01-01
We provide an alternative formulation of the exact calculation of the current-voltage characteristic of solar cells which have been modeled with a lumped parameters equivalent circuit with one or two diodes. Such models, for instance, are suitable for describing organic solar cells whose current-voltage characteristic curve has an inflection point, also known as an S-shaped anomaly. Our formulation avoids the risk of numerical overflow in the calculation. It is suitable for implementation in ...
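For the standard single-diode model (not the authors' reformulation), the current can be written explicitly with the Lambert W function; the sketch below uses a small Newton iteration for W so it is self-contained, and the parameter values are illustrative:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function for x >= 0, by Newton iteration
    on w * exp(w) = x, starting from w0 = log(1 + x)."""
    w = math.log1p(x)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def diode_current(v, i_ph, i_0, rs, rsh, n_vt):
    """Explicit Lambert-W solution of the single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
    Note the exp() argument here can overflow for large V; avoiding exactly
    this risk is the point of the paper's alternative formulation."""
    arg = (rs * i_0 * rsh / (n_vt * (rs + rsh))) * math.exp(
        rsh * (rs * i_ph + rs * i_0 + v) / (n_vt * (rs + rsh)))
    return (rsh * (i_ph + i_0) - v) / (rs + rsh) - (n_vt / rs) * lambert_w(arg)

# Illustrative operating point: Iph = 3 A, I0 = 1 nA, Rs = 0.1 ohm, Rsh = 100 ohm
i = diode_current(0.4, 3.0, 1e-9, 0.1, 100.0, 0.05)
```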
Calculation of U-value for Concrete Element
DEFF Research Database (Denmark)
Rose, Jørgen
1997-01-01
This report presents a U-value calculation of a typical concrete element used in industrial buildings. The calculations are performed using a 2-dimensional finite difference calculation programme.
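For comparison, the simple one-dimensional layered U-value, which such 2-D finite difference calculations refine by capturing thermal bridges, can be sketched as below. The layer build-up and material data are illustrative assumptions, not the element from the report:

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """U = 1 / (R_si + sum(d / lambda) + R_se) for a plane multilayer element [W/(m2*K)].

    layers: sequence of (thickness_m, conductivity_W_per_mK) tuples;
    r_si / r_se: conventional internal/external surface resistances [m2*K/W].
    """
    return 1.0 / (r_si + sum(d / lam for d, lam in layers) + r_se)

# Illustrative sandwich element: 100 mm concrete / 150 mm mineral wool / 100 mm concrete
u = u_value([(0.10, 1.8), (0.15, 0.037), (0.10, 1.8)])   # ~0.23 W/(m2*K)
```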
Radionuclide release calculations for SAR-08
Energy Technology Data Exchange (ETDEWEB)
Thomson, Gavin; Miller, Alex; Smith, Graham; Jackson, Duncan (Enviros Consulting Ltd, Wolverhampton (United Kingdom))
2008-04-15
Following a review by the Swedish regulatory authorities of the post-closure safety assessment of the SFR 1 disposal facility for low and intermediate level waste (L/ILW), SAFE, SKB has prepared an updated assessment called SAR-08. This report describes the radionuclide release calculations that have been undertaken as part of SAR-08. The information, assumptions and data used in the calculations are reported and the results are presented. The calculations address issues raised in the regulatory review, but also take account of new information, including revised inventory data. The scenarios considered include the main case of expected behaviour of the system, with variants; low probability releases; and so-called residual scenarios. Apart from these scenario uncertainties, data uncertainties have been examined using a probabilistic approach. Calculations have been made using the AMBER software, which allows all the component features of the assessment model to be included in one place. AMBER has previously been used to reproduce the results of the corresponding calculations in the SAFE assessment. It has also been used in demonstrations of the IAEA's near-surface disposal assessment methodology ISAM, has been subject to very substantial verification tests, and has been used in verifying other assessment codes. Results are presented as a function of time for the release of radionuclides from the near field, and then from the far field into the biosphere. Radiological impacts of the releases are reported elsewhere. Consideration is given to each radionuclide and to each component part of the repository. The releases from the entire repository are also presented. The peak release rates are, for most scenarios, due to organic C-14. Other radionuclides which contribute to peak release rates include inorganic C-14, Ni-59 and Ni-63. (author)
Energy Technology Data Exchange (ETDEWEB)
Liljenzin, J.O.; Rydberg, J. [Radiochemistry Consultant Group, Vaestra Froelunda (Sweden)
1996-11-01
The first part of this review discusses the importance of risk. If there is any relation between the emotional and rational risk perceptions (for example, it is believed that increased knowledge will decrease emotions), it will be a desirable goal for society, and the nuclear industry in particular, to improve the understanding by the laymen of the rational risks from nuclear energy. This review surveys various paths to a more common comprehension - perhaps a consensus - of the nuclear waste risks. The second part discusses radioactivity as a risk factor and concludes that it has no relation in itself to risk, but must be connected to exposure leading to a dose risk, i.e. a health detriment, which is commonly expressed in terms of cancer induction rate. Dose-effect relations are discussed in light of recent scientific debate. The third part of the report describes a number of hazard indexes for nuclear waste found in the literature and distinguishes between absolute and relative risk scales. The absolute risks as well as the relative risks have changed over time due to changes in radiological and metabolic data and by changes in the mode of calculation. To judge from the literature, the risk discussion is huge, even when it is limited to nuclear waste. It would be very difficult to make a comprehensive review and extract the essentials from that. Therefore, we have chosen to select some publications, out of the over 100, which we summarize rather comprehensively; in some cases we also include our remarks. 110 refs, 22 figs.
Coupled-cluster calculations of nucleonic matter
Hagen, G.; Papenbrock, T.; Ekström, A.; Wendt, K. A.; Baardsen, G.; Gandolfi, S.; Hjorth-Jensen, M.; Horowitz, C. J.
2014-01-01
Background: The equation of state (EoS) of nucleonic matter is central for the understanding of bulk nuclear properties, the physics of neutron star crusts, and the energy release in supernova explosions. Because nuclear matter exhibits a finely tuned saturation point, its EoS also constrains nuclear interactions. Purpose: This work presents coupled-cluster calculations of infinite nucleonic matter using modern interactions from chiral effective field theory (EFT). It assesses the role of correlations beyond particle-particle and hole-hole ladders, and the role of three-nucleon forces (3NFs) in nuclear matter calculations with chiral interactions. Methods: This work employs the optimized nucleon-nucleon (NN) potential NNLOopt at next-to-next-to leading order, and presents coupled-cluster computations of the EoS for symmetric nuclear matter and neutron matter. The coupled-cluster method employs up to selected triples clusters and the single-particle space consists of a momentum-space lattice. We compare our results with benchmark calculations and control finite-size effects and shell oscillations via twist-averaged boundary conditions. Results: We provide several benchmarks to validate the formalism and show that our results exhibit a good convergence toward the thermodynamic limit. Our calculations agree well with recent coupled-cluster results based on a partial wave expansion and particle-particle and hole-hole ladders. For neutron matter at low densities, and for simple potential models, our calculations agree with results from quantum Monte Carlo computations. While neutron matter with interactions from chiral EFT is perturbative, symmetric nuclear matter requires nonperturbative approaches. Correlations beyond the standard particle-particle ladder approximation yield non-negligible contributions. The saturation point of symmetric nuclear matter is sensitive to the employed 3NFs and the employed regularization scheme. 3NFs with nonlocal cutoffs exhibit a
Risks of multiple sclerosis in relatives of patients in Flanders, Belgium
Carton, H; Vlietinck, R; Debruyne, J; DeKeyser, J; DHooghe, MB; Loos, R; Medaer, R; Truyen, L; Yee, IML; Sadovnick, AD
Objectives - To calculate age adjusted risks for multiple sclerosis in relatives of Flemish patients with multiple sclerosis. Methods - Lifetime risks were calculated using the maximum likelihood approach. Results - Vital information was obtained on 674 probands with multiple sclerosis in Flanders
Presents the results of the degradation kinetics project and describes a general approach for calculating and selecting representative half-life values from soil and aquatic transformation studies for risk assessment and exposure modeling purposes.
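Such representative half-life selection commonly rests on first-order kinetics. A minimal sketch of fitting the rate constant k from log-transformed concentration data, and deriving DT50 = ln 2 / k, follows; the data below are synthetic, for illustration only:

```python
import math

def fit_first_order_dt50(times, concentrations):
    """Least-squares fit of ln(C) = ln(C0) - k*t, returning DT50 = ln(2)/k."""
    n = len(times)
    logs = [math.log(c) for c in concentrations]
    t_mean = sum(times) / n
    y_mean = sum(logs) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs)) \
        / sum((t - t_mean) ** 2 for t in times)
    return math.log(2.0) / (-slope)

# Synthetic first-order decline with a true DT50 of 10 days
times = [0.0, 5.0, 10.0, 20.0, 30.0]
conc = [100.0 * math.exp(-math.log(2.0) / 10.0 * t) for t in times]
dt50 = fit_first_order_dt50(times, conc)   # recovers ~10 days
```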
2016 WSES guidelines on acute calculous cholecystitis.
LENUS (Irish Health Repository)
Ansaloni, L
2016-01-01
Acute calculous cholecystitis is a very common disease with several areas of uncertainty. The World Society of Emergency Surgery developed extensive guidelines in order to cover these grey areas. The diagnostic criteria, the antimicrobial therapy, the evaluation of associated common bile duct stones, the identification of "high risk" patients, the surgical timing, the type of surgery, and the alternatives to surgery are discussed. Moreover, an algorithm is proposed: as soon as the diagnosis is made, and after the evaluation of choledocholithiasis risk, laparoscopic cholecystectomy should be offered to all patients, with the exception of those at high risk of morbidity or mortality. These guidelines must be considered an adjunctive tool for decision-making, but they are not a substitute for clinical judgement for the individual patient.
Black hole entropy calculations based on symmetries
Dreyer, O; Wísniewski, J A; Dreyer, Olaf; Ghosh, Amit; Wisniewski, Jacek
2001-01-01
Symmetry based approaches to the black hole entropy problem have a number of attractive features; in particular they are very general and do not depend on the details of the quantization method. However we point out that, of the two available approaches, one faces conceptual problems (also emphasized by others), while the second contains certain technical flaws. We correct these errors and, within the new, improved scheme, calculate the entropy of 3-dimensional black holes. We find that, while the new symmetry vector fields are well-defined on the "stretched horizon," and lead to well-defined Hamiltonians satisfying the expected Lie algebra, they fail to admit a well-defined limit to the horizon. This suggests that, although the formal calculation can be carried out at the classical level, its real, conceptual origin probably lies in the quantum theory.
On the Origins of Calculation Abilities
Directory of Open Access Journals (Sweden)
A. Ardila
1993-01-01
Full Text Available A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000–6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimination disturbances, semantic aphasia, and acalculia are proposed to comprise a single neuropsychological syndrome associated with left angular gyrus damage. A classification of calculation disturbances resulting from brain damage is presented. It is emphasized that, using historical/anthropological analysis, it becomes evident that acalculia, finger agnosia, and disorders in right–left discrimination (as, in general, in the use of spatial concepts) must constitute a single clinical syndrome, resulting from the disruption of some common brain activity and the impairment of common cognitive mechanisms.
Labview virtual instruments for calcium buffer calculations.
Reitz, Frederick B; Pollack, Gerald H
2003-01-01
Labview VIs based upon the calculator programs of Fabiato and Fabiato (J. Physiol. Paris 75 (1979) 463) are presented. The VIs comprise the necessary computations for the accurate preparation of multiple-metal buffers, for the back-calculation of buffer composition given known free metal concentrations and stability constants used, for the determination of free concentrations from a given buffer composition, and for the determination of apparent stability constants from absolute constants. As implemented, the VIs can concurrently account for up to three divalent metals, two monovalent metals and four ligands thereof, and the modular design of the VIs facilitates further extension of their capacity. As Labview VIs are inherently graphical, these VIs may serve as useful templates for those wishing to adapt this software to other platforms.
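For a single metal and a single 1:1 ligand, the free-metal back-calculation reduces to a quadratic in the free concentration; the multi-metal, multi-ligand case handled by the VIs generalizes this to a coupled nonlinear system. A minimal sketch, with hypothetical totals and apparent stability constant:

```python
import math

def free_metal(m_total, l_total, k_app):
    """Free metal concentration for 1:1 binding M + L <-> ML with apparent
    stability constant k_app = [ML]/([M][L]).

    Mass balance gives K*[M]^2 + (1 + K*(Lt - Mt))*[M] - Mt = 0;
    the positive root is the free metal concentration."""
    a = k_app
    b = 1.0 + k_app * (l_total - m_total)
    c = -m_total
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Hypothetical buffer: 1 mM total metal, 2 mM total ligand, K_app = 1e6 M^-1
m_free = free_metal(1e-3, 2e-3, 1e6)   # ~1 uM free metal
```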
A priori calculations for the rotational stabilisation
Directory of Open Access Journals (Sweden)
Iwata Yoritaka
2013-12-01
Full Text Available The synthesis of chemical elements is mostly realised by low-energy heavy-ion reactions. Synthesis of exotic and heavy nuclei, as well as that of superheavy nuclei, is essential not only to find the origin and the limit of the chemical elements but also to clarify the historical/chemical evolution of our universe. Although the lifetimes of exotic nuclei are not so long, their indispensable roles in chemical evolution have been pointed out. Here we are interested in examining the rotational stabilisation. In this paper an a priori calculation (before microscopic density functional calculations) is carried out for the rotational stabilisation effect, in which the balance between the nuclear force, the Coulomb force and the centrifugal force is taken into account.
A corrector for spacecraft calculated electron moments
Directory of Open Access Journals (Sweden)
J. Geach
2005-03-01
Full Text Available We present the application of a numerical method to correct electron moments calculated on-board spacecraft from the effects of potential broadening and energy range truncation. Assuming a shape for the natural distribution of the ambient plasma and employing the scalar approximation, the on-board moments can be represented as non-linear integral functions of the underlying distribution. We have implemented an algorithm which inverts this system successfully over a wide range of parameters for an assumed underlying drifting Maxwellian distribution. The outputs of the solver are the corrected electron plasma temperature Te, density Ne and velocity vector Ve. We also make an estimation of the temperature anisotropy A of the distribution. We present corrected moment data from Cluster's PEACE experiment for a range of plasma environments and make comparisons with electron and ion data from other Cluster instruments, as well as the equivalent ground-based calculations using full 3-D distribution PEACE telemetry.
CALCULATION ALGORITHM TRUSS UNDER CRANE BEAMS
Directory of Open Access Journals (Sweden)
N. K. Akaev
2016-01-01
Full Text Available Aim. The task of reducing the deflection and increasing the rigidity of single-span beams is addressed, and a calculation algorithm for trussed crane girders is determined. Methods. To identify the internal forces required for the selection of element cross sections, the design uses the Green's function. Results. It was found that the simplest truss system reduces deflection and increases the strength of the design. The upper crossbar is subjected not only to bending and shear but also to compression due to the tie tension. For the preliminary determination of the geometrical characteristics of the crane truss elements, a comparison with previous truss configurations of similar geometry is proposed, using simple approximate calculation methods. Conclusion. A method of sequentially moving (incrementally) the two bridge cranes along the length of the upper crossbar of the trussed beam is suggested. The corresponding formulas and safety conditions are given.
Fastlim: a fast LHC limit calculator
Papucci, Michele; Weiler, Andreas; Zeune, Lisa
2014-01-01
Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the exclusion p-value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios...
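The core bookkeeping described above (visible cross section from pre-tabulated efficiencies, normalized to the reported upper limit) can be sketched as follows. The cross section, efficiencies, signal-region names, and limits are hypothetical placeholders, not values from any experiment:

```python
def exclusion_ratios(sigma_pb, branching, efficiencies, upper_limits_pb):
    """For each signal region: sigma_vis = sigma * BR * efficiency, reported as
    sigma_vis / (observed upper limit). A ratio above 1 means the point is excluded."""
    return {sr: sigma_pb * branching * eff / upper_limits_pb[sr]
            for sr, eff in efficiencies.items()}

# Hypothetical model point evaluated against two hypothetical signal regions
ratios = exclusion_ratios(
    sigma_pb=0.5, branching=0.8,
    efficiencies={"SRA": 0.12, "SRB": 0.03},
    upper_limits_pb={"SRA": 0.02, "SRB": 0.05})
# Here SRA excludes the point (ratio > 1) while SRB does not.
```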
Calculation of coherent synchrotron radiation using mesh
Directory of Open Access Journals (Sweden)
T. Agoh
2004-05-01
Full Text Available We develop a new method to simulate coherent synchrotron radiation numerically. It is based on the mesh calculation of the electromagnetic field in the frequency domain. We make an approximation in the Maxwell equation which allows a mesh size much larger than the relevant wavelength so that the computing time is tolerable. Using the equation, we can perform a mesh calculation of coherent synchrotron radiation in transient states with shielding effects by the vacuum chamber. The simulation results obtained by this method are compared with analytic solutions. Though, for the comparison with theories, we adopt simplifications such as longitudinal Gaussian distribution, zero-width transverse distribution, horizontal uniform bend, and a vacuum chamber with rectangular cross section, the method is applicable to general cases.
Comparative Study of Daylighting Calculation Methods
Directory of Open Access Journals (Sweden)
Mandala Ariani
2018-01-01
Full Text Available The aim of this study is to assess five daylighting calculation methods commonly used in architectural study. The methods include hand calculation methods (the SNI/DPMB method and BRE Daylighting Protractors), scale models studied in an artificial sky simulator, and computer programs using the Dialux and Velux lighting software. The test room is conditioned by uniform sky conditions and simple room geometry, with variations of the room reflectance (black, grey, and white). The analyses compare the results (including daylight factor, illuminance, and coefficient of uniformity values), examining their similarities and differences. The colour variation trials are used to analyse the contribution of the internal reflection factor to the results.
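The two quantities compared across the methods can be sketched directly from their definitions; the illuminance values below are illustrative:

```python
def daylight_factor(e_internal_lux, e_external_lux):
    """DF = 100 * E_in / E_out under an overcast (uniform) sky, in percent."""
    return 100.0 * e_internal_lux / e_external_lux

def uniformity(illuminances):
    """Coefficient of uniformity: minimum over mean illuminance on the work plane."""
    return min(illuminances) / (sum(illuminances) / len(illuminances))

# Illustrative point: 200 lux indoors under a 10,000 lux unobstructed sky -> DF = 2%
df = daylight_factor(200.0, 10000.0)
cu = uniformity([150.0, 200.0, 250.0])   # min 150 / mean 200 = 0.75
```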
Low-energy calculations for nuclear photodisintegration
Directory of Open Access Journals (Sweden)
Deflorian S.
2016-01-01
Full Text Available In the Standard Solar Model a central role in the nucleosynthesis is played by reactions of the kind ${}_{Z_1}^{A_1}X_1 + {}_{Z_2}^{A_2}X_2 \to {}_{Z_1+Z_2}^{A_1+A_2}Y + \gamma$, which enter the proton-proton chains. These reactions can also be studied through the inverse photodisintegration reaction. One option is to use the Lorentz Integral Transform approach, which transforms the continuum problem into a bound-state-like one. A way to check the reliability of such methods is a direct calculation, for example using the Kohn Variational Principle to obtain the scattering wave function and then directly calculate the response function of the reaction.
Integral dependent spin couplings in CI calculations
Iberle, K.; Davidson, E. R.
1982-01-01
Although the number of ways to combine Slater determinants to form spin eigenfunctions increases rapidly with the number of open shells, most of these spin couplings will make only a small contribution to a given state, provided the spin coupling is chosen judiciously. The technique of limiting calculations to the interacting subspace, pioneered by Bunge (1970), was applied by Munch and Davidson (1975) to the vanadium atom. The use of an interacting space loses its advantage in more complex cases. However, the problem can always be reduced to only one interacting spin coupling by making the coefficients integral dependent. The present investigation is concerned with the performance of integral-dependent interacting couplings, taking into account the results of three test calculations.
A Methodology for Calculating Radiation Signatures
Energy Technology Data Exchange (ETDEWEB)
Klasky, Marc Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wilcox, Trevor [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bathke, Charles G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); James, Michael R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-05-01
A rigorous formalism is presented for calculating radiation signatures from both Special Nuclear Material (SNM) and radiological sources. The use of MCNP6 in conjunction with CINDER/ORIGEN is described to allow for the determination of both neutron and photon leakages from objects of interest. In addition, a description of the use of MCNP6 to properly model the background neutron and photon sources is also presented. Examinations of the physics issues encountered in the modeling are presented so as to guide the user in discerning the relevant physics to incorporate into general radiation signature calculations. Furthermore, examples are provided to assist in delineating the pertinent physics that must be accounted for. Finally, examples of detector modeling utilizing MCNP are provided, along with a discussion of the generation of Receiver Operating Curves, which are the suggested means by which to determine the detectability of radiation signatures emanating from objects.
Reading and Calculating Billing Through Mobile Devices
Directory of Open Access Journals (Sweden)
Pilar Alexandra Moreno
2013-06-01
Full Text Available This article broadly describes the analysis, design and development of a utilitarian system called "Reading and billing calculation on site through mobile devices." The application is oriented to public utility companies, primarily water services, to perform part of the billing process "in place" through phones or any mobile devices compatible with Android. It enables taking readings of service consumption, recording new gaugings, and updating and controlling user and billing information online. This is considered an on-site billing method because the device connects through the Internet with the database of the company, sending and receiving data, which makes possible the calculation of the billing for the reading period, bringing benefits to the client and the service company.
Improving the calculation of interdiffusion coefficients
Kapoor, Rakesh R.; Eagar, Thomas W.
1990-12-01
Least-squares spline interpolation techniques are reviewed and presented as a mathematical tool for noise reduction and interpolation of diffusion profiles. Numerically simulated diffusion profiles were interpolated using a sixth-order spline. The spline fit data were successfully used in conjunction with the Boltzmann-Matano treatment to compute the interdiffusion coefficient, demonstrating the usefulness of splines as a numerical tool for such calculations. Simulations conducted on noisy data indicate that the technique can extract the correct diffusivity data given compositional data that contain only three digits of information and are contaminated with a noise level of 0.001. Splines offer a reproducible and reliable alternative to graphical evaluation of the slope of a diffusion profile, which is used in the Boltzmann-Matano treatment. Hence, use of splines reduces the numerical errors associated with calculation of interdiffusion coefficients from raw diffusion profile data.
Tearing mode stability calculations with pressure flattening
Ham, C J; Cowley, S C; Hastie, R J; Hender, T C; Liu, Y Q
2013-01-01
Calculations of tearing mode stability in tokamaks split conveniently into an external region, where marginally stable ideal MHD is applicable, and a resonant layer around the rational surface where sophisticated kinetic physics is needed. These two regions are coupled by the stability parameter Δ'. Pressure and current perturbations localized around the rational surface alter the stability of tearing modes. Equations governing the changes in the external solution and Δ' are derived for arbitrary perturbations in axisymmetric toroidal geometry. The relationship of Δ' with and without pressure flattening is obtained analytically for four pressure-flattening functions. Resistive MHD codes do not contain the appropriate layer physics and therefore cannot predict stability directly. They can, however, be used to calculate Δ'. Existing methods (Ham et al. 2012 Plasma Phys. Control. Fusion 54 025009) for extracting Δ' from resistive codes are unsatisfactory when there is a finite pressure gradient at the rational surface ...
Through-Flow Calculations in Axial Turbomachinery
1976-10-01
Glassman, Lewis Research Center, NASA SP-290, 1973. 3. Dzung, L.S.: Schaufelgitter mit dicker Hinterkante, Technical Note BBC, (unpublished) 4...of peak efficiency was taken from - Warner L.S.: ASME Paper 61-WA-37 - Glassman A.J.: NASA TN-D-6702 The method for computing incidence losses is...devise more intelligent flow models which will enable us to do simpler semi-empirical calculations. One of the things that I have in mind and has not
Continuum RPA calculation of escape widths
Energy Technology Data Exchange (ETDEWEB)
Vertse, T. (Inst. of Nuclear Research, Hungarian Academy of Sciences, Debrecen (Hungary)); Curutchet, P.; Liotta, R.J. (Manne Siegbahn Inst. of Physics, Stockholm (Sweden)); Bang, J. (Niels Bohr Inst., Copenhagen (Denmark)); Giai, N. van (Inst. de Physique Nucleaire, 91 - Orsay (France))
1991-07-25
Particle-hole partial decay widths are calculated within the continuum RPA exactly, i.e. without any further approximation, in a square well plus Coulomb potential and using a separable residual interaction. The results are compared with the ones obtained by making pole expansions of the single-particle Green functions (Berggren and Mittag-Leffler). It is found that the Berggren and Mittag-Leffler expansions give results in good agreement with the 'exact' ones. (orig.).
Temperature Calculations in the Coastal Modeling System
2017-04-01
System by Honghai Li and Mitchell E. Brown PURPOSE: This Coastal and Hydraulics Engineering Technical Note (CHETN) describes procedures to calculate...strong tidal signals and sufficient wind energy to provide the vertical mixing. Also, the assumption of sufficient energy to mix over the water...of the Corrotoman River is dominated by tidal processes with occasional passages of meteorological events. Tide and wind provide sufficient energy
Toward a nitrogen footprint calculator for Tanzania
Hutton, Mary Olivia; Leach, Allison M.; Leip, Adrian; Galloway, James N.; Bekunda, Mateete; Sullivan, Clare; Lesschen, Jan Peter
2017-03-01
We present the first nitrogen footprint model for a developing country: Tanzania. Nitrogen (N) is a crucial element for agriculture and human nutrition, but in excess it can cause serious environmental damage. The Sub-Saharan African nation of Tanzania faces a two-sided nitrogen problem: while there is not enough soil nitrogen to produce adequate food, excess nitrogen that escapes into the environment causes a cascade of ecological and human health problems. To identify, quantify, and contribute to solving these problems, this paper presents a nitrogen footprint tool for Tanzania. This nitrogen footprint tool is a concept originally designed for the United States of America (USA) and other developed countries. It uses personal resource consumption data to calculate a per-capita nitrogen footprint. The Tanzania N footprint tool is a version adapted to reflect the low-input, integrated agricultural system of Tanzania. This is reflected by calculating two sets of virtual N factors to describe N losses during food production: one for fertilized farms and one for unfertilized farms. Soil mining factors are also calculated for the first time to address the amount of N removed from the soil to produce food. The average per-capita nitrogen footprint of Tanzania is 10 kg N yr-1. 88% of this footprint is due to food consumption and production, while only 12% of the footprint is due to energy use. Although 91% of farms in Tanzania are unfertilized, the large contribution of fertilized farms to N losses causes unfertilized farms to make up just 83% of the food production N footprint. In a developing country like Tanzania, the main audiences for the N footprint tool are community leaders, planners, and developers who can impact decision-making and use the calculator to plan positive changes for nitrogen sustainability in the developing world.
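The footprint arithmetic described above can be illustrated with a toy calculation. Every number below is hypothetical; the real tool derives per-capita consumption and virtual N factors from Tanzania-specific data, with separate factors for fertilized and unfertilized farms as the abstract describes.

```python
# Hypothetical per-capita N in food consumed (kg N/yr) and virtual N factors
# (kg N lost to the environment per kg N consumed) -- illustrative values only.
food_n = {"cereals": 2.0, "legumes": 0.8, "meat": 0.5}
vnf_fert = {"cereals": 1.5, "legumes": 0.9, "meat": 3.0}
vnf_unfert = {"cereals": 1.1, "legumes": 0.7, "meat": 2.4}
share_fert = 0.09          # 91% of farms are unfertilized

footprint = 0.0
for item, n in food_n.items():
    # blend the two farm types, weighting by their share of production
    vnf = share_fert * vnf_fert[item] + (1 - share_fert) * vnf_unfert[item]
    footprint += n * (1 + vnf)   # N consumed plus N lost during production

print(round(footprint, 2))       # kg N per capita per year (illustrative)
```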
On the Origins of Calculation Abilities
Ardila, A.
1993-01-01
A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000–6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimina...
Thermal Load Calculations of Multilayered Walls
Bashir M. Suleiman
2012-01-01
Thermal load calculations have been performed for multi-layered walls that are composed of three different parts: a common (sand and cement) plaster and two types of locally produced soft and hard bricks. The masonry construction of these layered walls was based on concrete-backed stone masonry made of limestone bricks joined by mortar. These multilayered walls form the outer walls of the building envelope of a typical Libyan house. Based on the periodic seasonal ...
Flow calculation in a bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Goede, E.; Pestalozzi, J.
1987-02-01
In recent years remarkable progress has been made in the field of computational fluid dynamics. Sometimes the impression may arise when reading the relevant literature that most of the problems in this field have already been solved. Upon studying the matter more deeply, however, it is apparent that some questions still remain unanswered. The use of the quasi-3D (Q3D) computational method for calculating the flow in a bulb hydraulic turbine is described.
Calculation methods of the nuclear characteristics
Dubovichenko, S. B.
2010-01-01
The book presents mathematical methods for calculating nuclear cross sections and phases of elastic scattering, as well as the energies and characteristics of bound states in two- and three-particle nuclear systems, when the interaction potentials contain not only a central but also a tensor component. Descriptions are given of the numerical calculation methods, together with computer programs written in BASIC for Borland's Turbo Basic on IBM PC AT-type computers...
Calculation Of Residual Volume By Spirometric Data
Directory of Open Access Journals (Sweden)
R. Hashemi
2005-05-01
Background: The current practice to measure RV is either by BPG or helium dilution methods, which may not be available in all clinics due to their cost. Methods: This paper outlines a method for both direct and indirect calculation of RV via PFT, with acceptable sensitivity (81%, 60%), specificity (71%, 94%), and validity (76%, 78%) for obstructive and restrictive lung disease, respectively, at a much lower cost.
Calculation of Weighted Geometric Dilution of Precision
Directory of Open Access Journals (Sweden)
Chien-Sheng Chen
2013-01-01
To achieve high accuracy in wireless positioning systems, both accurate measurements and a good geometric relationship between the mobile device and the measurement units are required. Geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units, since it represents the geometric effect on the relationship between measurement error and positioning determination error. In the calculation of GDOP value, the maximum volume method does not necessarily guarantee the selection of the optimal four measurement units with minimum GDOP. The conventional matrix inversion method for GDOP calculation demands a large amount of operations and causes high power consumption. To select the subset of the most appropriate location measurement units which give the minimum positioning error, we need to consider not only the GDOP effect but also the error statistics. In this paper, we employ the weighted GDOP (WGDOP), instead of GDOP, to select measurement units and so improve location accuracy. Handheld global positioning system (GPS) devices and mobile phones with GPS chips can provide only limited calculation ability and power capacity. Therefore, it is imperative to obtain WGDOP accurately and efficiently. This paper proposes two formulations of WGDOP with less computation when four measurements are available for location purposes. The proposed formulae reduce the computational complexity required for computing the matrix inversion. The simpler WGDOP formulae for both 2D and 3D location estimation, without inverting a matrix, can be applied not only to GPS but also to wireless sensor networks (WSNs) and cellular communication systems. Furthermore, the proposed formulae provide a precise solution of the WGDOP calculation without incurring any approximation error.
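The matrix form behind (W)GDOP is compact: with geometry matrix H (unit line-of-sight rows plus a clock-bias column) and weight matrix W, WGDOP = sqrt(tr((H^T W H)^-1)). The sketch below uses the conventional matrix-inversion route that the paper's closed-form formulae are designed to avoid; the symmetric four-unit geometry is made up for illustration.

```python
import numpy as np

def wgdop(H, W):
    """Weighted GDOP: sqrt(trace((H^T W H)^{-1})) for geometry matrix H
    and measurement weight matrix W (inverse of the error covariance)."""
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ W @ H))))

# 2D positioning with 4 range measurements: each row is a unit
# line-of-sight vector augmented with 1 for the clock-bias column.
H = np.array([[ 1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0],
              [-1.0,  0.0, 1.0],
              [ 0.0, -1.0, 1.0]])
W = np.eye(4)          # equal weights reduce WGDOP to ordinary GDOP
print(wgdop(H, W))
```

With unequal measurement variances, W = diag(1/sigma_i^2) down-weights noisy units, which is exactly the error-statistics information plain GDOP ignores.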
40 CFR 91.1307 - Credit calculation.
2010-07-01
...) are to be calculated according to the following equation and rounded, in accordance with ASTM E29-93a, to the nearest gram. ASTM E29-93a has been incorporated by reference. See § 91.6. Consistent units... family in grams per kilowatt hour. CL = compliance level of the in-use testing in g/kW-hr. μuse = mean...
Theoretical Calculations of Atomic Data for Spectroscopy
Bautista, Manuel A.
2000-01-01
Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and assess how reliable are spectral models based on those data.
Bias in Dynamic Monte Carlo Alpha Calculations
Energy Technology Data Exchange (ETDEWEB)
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-06
A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
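The mechanism can be seen in a toy simulation. This sketch illustrates the general Jensen's-inequality effect of taking the logarithm of a stochastic quantity, not MCATK itself: the mean of n samples is an unbiased estimate of the true mean, but its logarithm is biased low by roughly sigma^2/(2 n mu^2), a 1/n effect.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, trials = 1.0, 100_000

def log_of_mean_bias(n):
    # Gamma samples with mean 1 and variance 0.25: the sample mean of n
    # draws is unbiased, but E[log(mean)] < log(mu) by Jensen's inequality,
    # with leading-order bias -sigma^2/(2*n*mu^2), i.e. shrinking like 1/n.
    xbar = rng.gamma(shape=4.0, scale=0.25, size=(trials, n)).mean(axis=1)
    return np.log(xbar).mean() - np.log(mu)

print(log_of_mean_bias(10), log_of_mean_bias(100))  # both negative, ~1/n
```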
Characteristic features of calculations of hydrogen generators
Troshen'kin, V. B.
2010-03-01
Among the methods of hydrogen generation that are economically sound for autonomous customers is the silicol method. A technique for calculating the cylinder gas generator circuit is given. The restrictions imposed on the flow velocity in a three-phase reacting system are considered. It is established that the reaction rate in the circuit, as a dissipative structure, is in direct correlation with the change in the Gibbs energy.
Calculation of reactor antineutrino spectra in TEXONO
Chen Dong Liang; Mao Ze Pu; Wong, T H
2002-01-01
In low-energy reactor antineutrino physics experiments, whether for research on antineutrino oscillation and antineutrino reactions or for measurement of the anomalous magnetic moment of the antineutrino, the flux and spectra of reactor antineutrinos must be described accurately. The method of calculating reactor antineutrino spectra is discussed in detail. Furthermore, based on the actual circumstances of the NP2 reactors and the arrangement of detectors, the flux and spectra of reactor antineutrinos in TEXONO were worked out
Activation Product Inverse Calculations with NDI
Energy Technology Data Exchange (ETDEWEB)
Gray, Mark Girard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
NDI based forward calculations of activation product concentrations can be systematically used to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production and approximately, in the least-squares sense, for the full production-depletion chain with and without explicit activation product production. The algorithm is suitable for automation.
TINTE. Nuclear calculation theory description report
Energy Technology Data Exchange (ETDEWEB)
Gerwin, H.; Scherer, W.; Lauer, A. [Forschungszentrum Juelich GmbH (DE). Institut fuer Energieforschung (IEF), Sicherheitsforschung und Reaktortechnik (IEF-6); Clifford, I. [Pebble Bed Modular Reactor (Pty) Ltd. (South Africa)
2010-01-15
The Time Dependent Neutronics and Temperatures (TINTE) code system deals with the nuclear and the thermal transient behaviour of the primary circuit of the High-Temperature Gas-cooled Reactor (HTGR), taking into consideration the mutual feedback effects in two-dimensional axisymmetric geometry. This document contains a complete description of the theoretical basis of the TINTE nuclear calculation, including the equations solved, solution methods and the nuclear data used in the solution. (orig.)
In situ magnetotail magnetic flux calculation
Directory of Open Access Journals (Sweden)
M. A. Shukhtina
2015-06-01
We explore two new modifications of the magnetotail magnetic flux (F) calculation algorithm based on the Petrinec and Russell (1996) (PR96) approach to determining the tail radius. Unlike in the PR96 model, the tail radius value is calculated at each time step based on simultaneous magnetotail and solar wind observations. Our former algorithm, described in Shukhtina et al. (2009), required that the "tail approximation" condition be fulfilled, i.e., it could be applied only tailward of x ∼ −15 RE. The new modifications take into account the approximate uniformity of the magnetic field of external sources in the near and middle tail. Tests based on magnetohydrodynamic (MHD) simulations show that this approach may be applied at smaller distances, up to x ∼ −3 RE. The tests also show that the algorithm fails during long periods of strong positive interplanetary magnetic field (IMF) Bz. A new empirical formula has also been obtained for the tail radius at the terminator (at x = 0), which improves the calculations.
Accurate Calculation of Electric Fields Inside Enzymes.
Wang, X; He, X; Zhang, J Z H
2016-01-01
The specific electric field generated by a protease at its active site is considered an important source of its catalytic power. Accurate calculation of the electric field at the active site of an enzyme has both fundamental and practical importance. Measuring site-specific changes of electric field at internal sites of proteins due to, e.g., mutation has been realized by using molecular probes with CO or CN groups in the context of the vibrational Stark effect. However, theoretical prediction of changes in electric field inside a protein based on a conventional force field, such as AMBER or OPLS, is often inadequate. For such calculation, a quantum chemical approach or a quantum-based polarizable or polarized force field is highly preferable. Compared with the result from a conventional force field, significant improvement is found in predicting experimentally measured mutation-induced electric field changes using quantum-based methods, indicating that quantum effects such as polarization play an important role in the accurate description of electric fields inside proteins. In comparison, the best theoretical prediction comes from fully quantum mechanical calculation, in which both polarization and inter-residue charge transfer effects are included for accurate prediction of electrostatics in proteins. © 2016 Elsevier Inc. All rights reserved.
Excited state electron affinity calculations for aluminum
Hussein, Adnan Yousif
2017-08-01
Excited states of the negative aluminum ion are reviewed, and calculations of the electron affinities of the (3s²3p²)¹D and (3s3p³)⁵S° states relative to the (3s²3p)²P° and (3s3p²)⁴P states, respectively, of the neutral aluminum atom are reported in the framework of the nonrelativistic configuration interaction (CI) method. A priori selected CI (SCI) with truncation energy error (Bunge in J Chem Phys 125:014107, 2006) and CI by parts (Bunge and Carbó-Dorca in J Chem Phys 125:014108, 2006) are used to approximate the valence nonrelativistic energy. Systematic studies of the convergence of the electron affinity with respect to the CI excitation level are reported. The calculated value of the electron affinity for the ¹D state is 78.675(3) meV. Detailed calculations on the ⁵S° state reveal that it is 1216.8166(3) meV below the ⁴P state.
Agriculture-related radiation dose calculations
Energy Technology Data Exchange (ETDEWEB)
Furr, J.M.; Mayberry, J.J.; Waite, D.A.
1987-10-01
Estimates of radiation dose to the public must be made at each stage in the identification and qualification process leading to siting a high-level nuclear waste repository. Specifically considering the ingestion pathway, this paper examines questions of reliability and adequacy of dose calculations in relation to five stages of data availability (geologic province, region, area, location, and mass balance) and three methods of calculation (population, population/food production, and food production driven). Calculations were done using the model PABLM with data for the Permian and Palo Duro Basins and the Deaf Smith County area. Extra effort expended in gathering agricultural data at succeeding environmental characterization levels does not appear justified, since dose estimates do not differ greatly; that effort would be better spent determining usage of the food types that contribute most to the total dose. Consumption rate and the air dispersion factor are critical to assessment of radiation dose via the ingestion pathway. 17 refs., 9 figs., 32 tabs.
Sample size calculations for skewed distributions.
Cundill, Bonnie; Alexander, Neal D E
2015-04-02
Sample size calculations should correspond to the intended method of analysis. Nevertheless, for non-normal distributions, they are often done on the basis of normal approximations, even when the data are to be analysed using generalized linear models (GLMs). For the case of comparison of two means, we use GLM theory to derive sample size formulae, with particular cases being the negative binomial, Poisson, binomial, and gamma families. By simulation we estimate the performance of normal approximations, which, via the identity link, are special cases of our approach, and for common link functions such as the log. The negative binomial and gamma scenarios are motivated by examples in hookworm vaccine trials and insecticide-treated materials, respectively. Calculations on the link function (log) scale work well for the negative binomial and gamma scenarios examined and are often superior to the normal approximations. However, they have little advantage for the Poisson and binomial distributions. The proposed method is suitable for sample size calculations for comparisons of means of highly skewed outcome variables.
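For the negative binomial family with a log link, the recipe described above reduces to a one-line formula. The sketch below reconstructs it from the abstract's description (function and variable names are mine, not the paper's): on the log scale the per-observation variance of the transformed mean is V = 1/mu + 1/k, and the usual two-sample normal-theory formula is applied to log mu0 − log mu1.

```python
import math
from statistics import NormalDist

def n_per_group(mu0, mu1, k0, k1, alpha=0.05, power=0.9):
    """Per-group sample size comparing two negative binomial means on the
    log-link scale: n = (z_{1-a/2} + z_{pow})^2 (V0 + V1) / (log(mu0/mu1))^2,
    where V = 1/mu + 1/k is the per-observation variance on the log scale."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    v0, v1 = 1 / mu0 + 1 / k0, 1 / mu1 + 1 / k1
    return math.ceil(z ** 2 * (v0 + v1) / math.log(mu0 / mu1) ** 2)

# Halving a mean count of 2 with heavy overdispersion (k = 0.5):
print(n_per_group(2.0, 1.0, 0.5, 0.5))
# As k grows the formula approaches the Poisson limit, which needs fewer:
print(n_per_group(2.0, 1.0, 1e12, 1e12))
```

Setting k very large recovers the Poisson special case (V = 1/mu), illustrating the abstract's point that the link-scale calculation unifies the GLM families.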
TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES
Energy Technology Data Exchange (ETDEWEB)
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu [Planetary Sciences Group, Department of Physics, University of Central Florida, Orlando, FL 32816-2385 (United States)
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
Cost Calculation Model for Logistics Service Providers
Directory of Open Access Journals (Sweden)
Zoltán Bokor
2012-11-01
The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore explores ways of improving the cost calculation regimes of logistics service providers and shows how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which enhances the effectiveness of logistics planning and controlling significantly.
Neural correlates of arithmetic calculation strategies.
Rosenberg-Lee, Miriam; Lovett, Marsha C; Anderson, John R
2009-09-01
Recent research into math cognition has identified areas of the brain that are involved in number processing (Dehaene, Piazza, Pinel, & Cohen, 2003) and complex problem solving (Anderson, 2007). Much of this research assumes that participants use a single strategy; yet, behavioral research finds that people use a variety of strategies (LeFevre et al., 1996; Siegler, 1987; Siegler & Lemaire, 1997). In the present study, we examined cortical activation as a function of two different calculation strategies for mentally solving multidigit multiplication problems. The school strategy, equivalent to long multiplication, involves working from right to left. The expert strategy, used by "lightning" mental calculators (Staszewski, 1988), proceeds from left to right. The two strategies require essentially the same calculations, but have different working memory demands (the school strategy incurs greater demands). The school strategy produced significantly greater early activity in areas involved in attentional aspects of number processing (posterior superior parietal lobule, PSPL) and mental representation (posterior parietal cortex, PPC), but not in a numerical magnitude area (horizontal intraparietal sulcus, HIPS) or a semantic memory retrieval area (lateral inferior prefrontal cortex, LIPFC). An ACT-R model of the task successfully predicted BOLD responses in PPC and LIPFC, as well as in PSPL and HIPS.
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to the numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses, as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second-year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (≥ 35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible, with regular (self-)testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
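As an illustration of the calculation types the study tested, two of the standard formulas (solid doses and drip rates) look like this; all values are hypothetical teaching examples, not data from the study.

```python
def drip_rate(volume_ml, hours, drop_factor=20):
    """IV drip rate in drops/min: volume (mL) x drop factor (gtt/mL) / minutes."""
    return volume_ml * drop_factor / (hours * 60.0)

def tablets_needed(prescribed_mg, stock_mg):
    """Solid-dose calculation: prescribed dose / strength of stock tablet."""
    return prescribed_mg / stock_mg

print(drip_rate(500, 4))        # 500 mL over 4 h at 20 gtt/mL
print(tablets_needed(150, 50))  # 150 mg prescribed, 50 mg tablets
```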
Lattice QCD Calculation of Nucleon Structure
Energy Technology Data Exchange (ETDEWEB)
Liu, Keh-Fei [University of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy; Draper, Terrence [University of Kentucky, Lexington, KY (United States). Dept. of Physics and Astronomy
2016-08-30
It is emphasized in the 2015 NSAC Long Range Plan that "understanding the structure of hadrons in terms of QCD's quarks and gluons is one of the central goals of modern nuclear physics." Over the last three decades, lattice QCD has developed into a powerful tool for ab initio calculations of strong-interaction physics. Up until now, it is the only theoretical approach to solving QCD with controlled statistical and systematic errors. Since 1985, we have proposed and carried out first-principles calculations of nucleon structure and hadron spectroscopy using lattice QCD, which entails both algorithmic development and large-scale computer simulation. We started out by calculating the nucleon form factors -- electromagnetic, axial-vector, πNN, and scalar form factors, the quark spin contribution to the proton spin, the strangeness magnetic moment, the quark orbital angular momentum, the quark momentum fraction, and the quark and glue decomposition of the proton momentum and angular momentum. The first round of calculations were done with Wilson fermions in the 'quenched' approximation, where the dynamical effects of the quarks in the sea are not taken into account in the Monte Carlo simulation to generate the background gauge configurations. Beginning in 2000, we started implementing the overlap fermion formulation into the spectroscopy and structure calculations. This is mainly because the overlap fermion honors chiral symmetry as in the continuum. It is going to be more and more important to take the symmetry into account as the simulations move closer to the physical point, where the u and d quark masses are as light as a few MeV only. We began with lattices which have quark masses in the sea corresponding to a pion mass at ~ 300 MeV and obtained the strange form factors, charm and strange quark masses, the charmonium spectrum and the D_s meson decay constant f_Ds, the strangeness and charmness, the meson mass
Černák, Peter
2009-01-01
The Master's Thesis deals with the topic of risk management in a non-financial company. The goal of this Thesis is to create a framework for review of risk management process and to practically apply it in a case study. Objectives of the theoretical parts are: stating the reasons for risk management in non-financial companies, addressing the main parts of risk management and providing guidance for review of risk management process. A special attention is paid to financial risks. The practical...
Calculating classifier calibration performance with a custom modification of Weka
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Martínez, Juan Manuel Montero
2015-02-01
Calibration is often overlooked in machine-learning problem-solving approaches, even in situations where an accurate estimation of predicted probabilities, and not only a discrimination between classes, is critical for decision-making. One of the reasons is the lack of readily available open-source software packages which can easily calculate calibration metrics. In order to provide one such tool, we have developed a custom modification of the Weka data mining software, which implements the calculation of Hosmer-Lemeshow groups of risk and the Pearson chi-square statistic comparison between estimated and observed frequencies for binary problems. We provide calibration performance estimations with Logistic regression (LR), BayesNet, Naïve Bayes, artificial neural network (ANN), support vector machine (SVM), k-nearest neighbors (KNN), decision trees and Repeated Incremental Pruning to Produce Error Reduction (RIPPER) models with six different datasets. Our experiments show that SVMs with RBF kernels exhibit the best results in terms of calibration, while decision trees, RIPPER and KNN are highly unlikely to produce well-calibrated models.
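A sketch of the Hosmer-Lemeshow computation the modified Weka implements; this is an independent Python illustration, not the authors' Java code. Examples are sorted by predicted probability and split into equal-size groups of risk, then observed and expected event counts are compared group by group.

```python
import numpy as np

def hosmer_lemeshow(y_true, p_pred, groups=10):
    """Chi-square comparing observed and expected event counts across
    groups of risk: sum over groups of (O - E)^2 / (n * pbar * (1 - pbar)).
    Assumes no group has a mean predicted probability of exactly 0 or 1."""
    order = np.argsort(p_pred)
    y = np.asarray(y_true, dtype=float)[order]
    p = np.asarray(p_pred, dtype=float)[order]
    chi2 = 0.0
    for g in np.array_split(np.arange(len(y)), groups):
        n, obs, exp = len(g), y[g].sum(), p[g].sum()
        pbar = exp / n
        chi2 += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return chi2

y = [0, 0, 1, 1]
p = [0.1, 0.2, 0.8, 0.9]
print(hosmer_lemeshow(y, p, groups=2))   # small value: well calibrated
```

A small statistic (relative to a chi-square with groups − 2 degrees of freedom) indicates good agreement between estimated and observed frequencies, which is the comparison the paper's Weka modification reports.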
A practical alternative to calculating unmet need for family planning
Directory of Open Access Journals (Sweden)
Sinai I
2017-07-01
Irit Sinai,1,2 Susan Igras,1 Rebecka Lundgren1 1Institute for Reproductive Health, Georgetown University, Washington, DC, USA; 2Palladium, Washington, DC, USA Abstract: The standard approach for measuring unmet need for family planning calculates actual, physiological unmet need and is useful for tracking changes at the population level. We propose to supplement it with an alternate approach that relies on individual perceptions and can improve program design and implementation. The proposed approach categorizes individuals by their perceived need for family planning: real met need (current users of a modern method), perceived met need (current users of a traditional method), real no need, perceived no need (those with a physiological need for family planning who perceive no need), and perceived unmet need (those who realize they have a need but do not use a method). We tested this approach using data from Mali (n=425) and Benin (n=1080). We found that traditional method use was significantly higher in Benin than in Mali, resulting in different perceptions of unmet need in the two countries. In Mali, perceived unmet need was much higher. In Benin, perceived unmet need was low because women believed (incorrectly) that they were protected from pregnancy. Perceived no need – women who believed that they could not become pregnant despite the fact that they were fecund and sexually active – was quite high in both countries. We posit that interventions that address perceptions of unmet need, in addition to physiological risk of pregnancy, are more likely to be effective in changing behavior. The suggested approach for calculating unmet need supplements the standard calculations and is helpful for designing programs to better address women’s and men’s individual needs in diverse contexts. Keywords: unmet need, family planning, contraception, Mali, Benin
Ulusoy, Şükrü
2013-01-01
Several calculation modalities are used today for cardiovascular risk assessment, which should be performed in all hypertensive patients. Basing risk assessment methods on the population in which the patient lives, and including factors such as ethnic variation, socioeconomic status, and medication use, will contribute to improvements in risk assessment. The results should be shared with the patient, and modifiable risk factors must be effectively treated.
Numerical precision calculations for LHC physics
Energy Technology Data Exchange (ETDEWEB)
Reuschle, Christian Andreas
2013-02-05
In this thesis I present aspects of QCD calculations related to the fully numerical evaluation of next-to-leading order (NLO) QCD amplitudes, especially of the one-loop contributions, and the efficient computation of associated collider observables. Two interrelated topics are addressed, giving rise to the thesis's two major parts. The first part focuses on the general group-theoretical behavior of one-loop QCD amplitudes with respect to the underlying SU(N{sub c}) theory, in order to correctly and efficiently handle the color degrees of freedom in QCD one-loop amplitudes. To this end a new method is introduced that expresses color-ordered partial one-loop amplitudes with multiple quark-antiquark pairs as shuffle sums over cyclically ordered primitive one-loop amplitudes. The second part focuses on the local subtraction of divergences from the one-loop integrands of primitive one-loop amplitudes. A method for local UV renormalization has been developed, which uses local UV counterterms and efficient recursive routines. Together with suitable virtual soft and collinear subtraction terms, the subtraction method is extended to the virtual contributions in the calculation of NLO observables, which enables the fully numerical evaluation of the one-loop integrals in the virtual contributions. The method has been successfully applied to the calculation of jet rates in electron-positron annihilation to NLO accuracy in the large-N{sub c} limit.
Rooftop Unit Comparison Calculator User Manual
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-04-30
This document serves as a user manual for the Packaged rooftop air conditioners and heat pump units comparison calculator (RTUCC) and is an aggregation of the calculator’s website documentation. Content ranges from new-user guide material like the “Quick Start” to the more technical/algorithmic descriptions of the “Methods Pages.” There is also a section listing all the context-help topics that support the features on the “Controls” page. The appendix has a discussion of the EnergyPlus runs that supported the development of the building-response models.
Calculating the cost of a healthcare project.
Stichler, Jaynelle F
2008-02-01
Nearly $200 billion of healthcare construction is expected by the year 2015, and nurse leaders must expand their knowledge and capabilities in healthcare design. This bimonthly department prepares nurse leaders to use the evidence-based design process to ensure that new, expanded, and renovated hospitals facilitate optimal patient outcomes, enhance the work environment for healthcare providers, and improve organizational performance. In this article, the author introduces important project budget terms and a method of calculating an estimation of probable cost for a building project.
Calculations in bridge aeroelasticity via CFD
Energy Technology Data Exchange (ETDEWEB)
Brar, P.S.; Raul, R.; Scanlan, R.H. [Johns Hopkins Univ., Baltimore, MD (United States)
1996-12-31
The central focus of the present study is the numerical calculation of flutter derivatives. These aeroelastic coefficients play an important role in determining the stability or instability of long, flexible structures under ambient wind loading. The class of civil engineering structures most susceptible to such instability comprises long-span bridges of the cable-stayed or suspended-span variety. The disastrous collapse of the Tacoma Narrows suspension bridge, due to a flutter instability, has been a major impetus for studies of bridge-deck flutter.
Exact and approximate calculation of giant resonances
Energy Technology Data Exchange (ETDEWEB)
Vertse, T. [Magyar Tudomanyos Akademia, Debrecen (Hungary). Atommag Kutato Intezete; Liotta, R.J. [Royal Inst. of Tech., Stockholm (Sweden); Maglione, E. [Padua Univ. (Italy). Ist. di Fisica
1995-02-13
Energies, sum rules and partial decay widths of giant resonances in {sup 208}Pb are calculated solving exactly the continuum RPA equations corresponding to a central Woods-Saxon potential. For comparison an approximate treatment of those quantities in terms of pole expansions of the Green function (Berggren and Mittag-Leffler) is also performed. It is found that the approximated results agree well with the exact ones. Comparison with experimental data is made and a search for physically meaningful resonances is carried out. ((orig.))
Calculation of persistent currents in superconducting magnets
Directory of Open Access Journals (Sweden)
C. Völlinger
2000-12-01
Full Text Available This paper describes a semianalytical hysteresis model for hard superconductors. The model is based on the critical state model considering the dependency of the critical current density on the varying local field in the superconducting filaments. By combining this hysteresis model with numerical field computation methods, it is possible to calculate the persistent current multipole errors in the magnet taking local saturation effects in the magnetic iron parts into consideration. As an application of the method, the use of soft magnetic iron sheets (coil protection sheets) mounted between the coils and the collars for partial compensation of the multipole errors during the ramping of the magnets is investigated.
Electrical Conductivity Calculations from the Purgatorio Code
Energy Technology Data Exchange (ETDEWEB)
Hansen, S B; Isaacs, W A; Sterne, P A; Wilson, B G; Sonnad, V; Young, D A
2006-01-09
The Purgatorio code [Wilson et al., JQSRT 99, 658-679 (2006)] is a new implementation of the Inferno model describing a spherically symmetric average atom embedded in a uniform plasma. Bound and continuum electrons are treated using a fully relativistic quantum mechanical description, giving the electron-thermal contribution to the equation of state (EOS). The free-electron density of states can also be used to calculate scattering cross sections for electron transport. Using the extended Ziman formulation, electrical conductivities are then obtained by convolving these transport cross sections with externally-imposed ion-ion structure factors.
ICBM vulnerability: Calculations, predictions, and error bars
Hobson, Art
1988-09-01
The theory of intercontinental ballistic missile (ICBM) silo vulnerability is reviewed, and the present and probable future (mid-1990s) vulnerability of US silos is analyzed. The analysis emphasizes methodology, sources of information, and uncertainties. US ICBMs might still be survivable today but they will certainly be vulnerable to ICBM attack, and perhaps even to submarine-launched ballistic missile attack, by the mid-1990s. These calculations are presented not only for their immediate importance but also to introduce other physicists to some of the quantitative methods that can be used to analyze international security topics.
Drift Mode Calculations in Nonaxisymmetric Geometry
Energy Technology Data Exchange (ETDEWEB)
G. Rewoldt; L.-P. Ku; W.A. Cooper; W.M. Tang
1999-07-01
A fully kinetic assessment of the stability properties of toroidal drift modes has been obtained for nonaxisymmetric (stellarator) geometry, in the electrostatic limit. This calculation is a comprehensive solution of the linearized gyrokinetic equation, using the lowest-order ''ballooning representation'' for high toroidal mode number instabilities, with a model collision operator. Results for toroidal drift waves destabilized by temperature gradients and/or trapped particle dynamics are presented, using three-dimensional magnetohydrodynamic equilibria generated as part of a design effort for a quasiaxisymmetric stellarator. Comparisons of these results with those obtained for typical tokamak cases indicate that the basic trends are similar.
Calculations in fundamental physics mechanics and heat
Heddle, T
2013-01-01
Calculations in Fundamental Physics, Volume I: Mechanics and Heat focuses on the mechanisms of heat. The manuscript first discusses motion, including parabolic, angular, and rectilinear motions, relative velocity, acceleration of gravity, and non-uniform acceleration. The book then discusses combinations of forces, such as polygons and resolution, friction, center of gravity, shearing force, and bending moment. The text looks at force and acceleration, energy and power, and machines. Considerations include momentum, horizontal or vertical motion, work and energy, pulley systems, gears and chains.
Speed mathematics secrets skills for quick calculation
Handley, Bill
2011-01-01
Using this book will improve your understanding of math and have you performing like a genius! People who excel at mathematics use better strategies than the rest of us; they are not necessarily more intelligent. Speed Mathematics teaches simple methods that will enable you to make lightning calculations in your head, including multiplication, division, addition, and subtraction, as well as working with fractions, squaring numbers, and extracting square and cube roots. Here's just one example of this revolutionary approach to basic mathematics: 96 x 97 = ? Subtract each number from 100: 96 x 97 gives 4 and 3. Subtract ...
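The complements trick in the quoted 96 × 97 example generalizes to any pair of numbers near 100. A sketch of the rule as code (the function name is illustrative):

```python
def near_100_product(a, b):
    """Multiply two numbers near 100 via their complements from 100.
    The complements (100 - a) and (100 - b) are the book's
    'subtract each number from 100' step."""
    ca, cb = 100 - a, 100 - b
    # Cross-subtract (a - cb equals b - ca) for the hundreds part,
    # then append the product of the complements.
    return (a - cb) * 100 + ca * cb

print(near_100_product(96, 97))  # 9312, matching 96 * 97
```

Here 96 − 3 = 93 gives the hundreds (9300), and 4 × 3 = 12 completes the answer, 9312.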
PyTransport: Calculate inflationary correlation functions
Mulryne, David J.; Ronayne, John W.
2017-10-01
PyTransport calculates the 2-point and 3-point function of inflationary perturbations produced during multi-field inflation. The core of PyTransport is C++ code which is automatically edited and compiled into a Python module once an inflationary potential is specified. This module can then be called to solve the background inflationary cosmology as well as the evolution of correlations of inflationary perturbations. PyTransport includes two additional modules written in Python, one to perform the editing and compilation, and one containing a suite of functions for common tasks such as looping over the core module to construct spectra and bispectra.
Enthalpy Calculation for Pressurized Oxy- coal Combustion
Weihong Wu; Jingli Huang
2012-01-01
Oxy-fuel combustion is recognized as one of the most promising available technologies, with which zero-emission operation may be in the offing. With coal burned under a pressure of 6 MPa in oxygen-enriched conditions, the high-temperature, high-pressure gaseous combustion product is composed of 95% CO2 and water vapor, with the rest O2, N2, and so on. However, the once-lauded classic approach to resolving fuel-gas enthalpy calculation, pertaining to an ideal gas at atmospheric pressure, was rest...
Representation and calculation of economic uncertainties
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2002-01-01
the economic uncertainties involved, different procedures have been suggested. This paper discusses the representation of economic uncertainties by intervals,fuzzy numbers and probabilities, including double, triple and quadruple estimates and the problems of applying the four basic arithmetical operations...... additional uncertainties not present in the original economic problem. The paper will finally discuss the applicability and limitations of a few computational procedures based on available computer programs used for practical economic calculations with uncertain values. (C) 2002 Elsevier Science B.V. All...
Cobalamins uncovered by modern electronic structure calculations
DEFF Research Database (Denmark)
Kepp, Kasper Planeta; Ryde, Ulf
2009-01-01
This review describes how computational methods have contributed to the field of cobalamin chemistry since the start of the new millennium. Cobalamins are cobalt-dependent cofactors that are used for alkyl transfer and radical initiation by several classes of enzymes. Since the entry of modern...... electronic-structure calculations, in particular density functional methods, the understanding of the molecular mechanism of cobalamins has changed dramatically, going from a dominating view of trans-steric strain effects to a much more complex view involving an arsenal of catalytic strategies. Among...
Calculate the moisture content of steam
Energy Technology Data Exchange (ETDEWEB)
Ganapathy, V. (ABCO Industries, Inc., Abilene, TX (United States))
1993-08-01
Water droplets in steam can create serious problems. For example, if the steam is being used to drive turbines, droplets can damage the turbine blades. It is important, therefore, for an engineer to know if steam contains moisture, especially if the steam is generated in low-pressure boilers (under 500 psia). Unlike larger boilers, these units don't have internal separation devices such as cyclones. Calculating the steam's moisture content, or quality, can be a complicated procedure. Now, a simple chart can be used to get the data from one temperature reading. The paper explains the procedure.
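The chart condenses a calculation along these lines. As a generic illustration (not the article's chart), quality follows from an enthalpy balance, here in a throttling-calorimeter setup; the property values are illustrative round numbers, not values from the article.

```python
# Generic throttling-calorimeter sketch of a steam-quality calculation;
# the saturation properties used below are illustrative, not tabulated data.
def steam_quality(h_after_throttle, h_f, h_fg):
    """Quality x from the energy balance h = h_f + x * h_fg at boiler pressure."""
    return (h_after_throttle - h_f) / h_fg

# e.g. enthalpy inferred downstream = 1150 Btu/lb, saturated-liquid
# h_f = 330 Btu/lb, latent heat h_fg = 870 Btu/lb (illustrative numbers):
x = steam_quality(1150.0, 330.0, 870.0)
print(round(x, 3))  # vapor fraction; the moisture content is 1 - x
```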
Motor Torque Calculations For Electric Vehicle
Directory of Open Access Journals (Sweden)
Saurabh Chauhan
2015-08-01
Full Text Available Abstract It is estimated that 25% of the cars across the world will run on electricity by 2025. An important component that is an integral part of all electric vehicles is the motor. The amount of torque that the driving motor delivers is what plays a decisive role in determining the speed, acceleration, and performance of an electric vehicle. The following work aims at simplifying the calculations required to decide the capacity of the motor that should be used to drive a vehicle of particular specifications.
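A road-load torque estimate of the kind such sizing calculations rest on can be sketched as follows; the coefficient defaults and parameter names are illustrative assumptions, not the paper's specification.

```python
import math

# Minimal road-load sketch: wheel torque to overcome rolling resistance,
# aerodynamic drag, grade load, and inertia. All defaults are illustrative.
def wheel_torque(mass_kg, wheel_radius_m, accel_ms2, speed_ms,
                 c_rr=0.01, c_d=0.3, frontal_area_m2=2.2,
                 grade_rad=0.0, rho_air=1.225, g=9.81):
    """Torque at the wheel for the given operating point."""
    f_roll = c_rr * mass_kg * g * math.cos(grade_rad)        # rolling resistance
    f_aero = 0.5 * rho_air * c_d * frontal_area_m2 * speed_ms ** 2  # drag
    f_grade = mass_kg * g * math.sin(grade_rad)              # hill climbing
    f_accel = mass_kg * accel_ms2                            # inertia
    return (f_roll + f_aero + f_grade + f_accel) * wheel_radius_m

# e.g. a 1000 kg vehicle, 0.3 m wheels, accelerating at 1 m/s^2 from rest:
print(round(wheel_torque(1000.0, 0.3, 1.0, 0.0), 2))  # N*m at the wheel
```

A gear ratio between motor and wheel divides this figure to give the required motor torque.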
Atomic Reference Data for Electronic Structure Calculations
Kotochigova, S; Shirley, E L
We have generated data for atomic electronic structure calculations, to provide a standard reference for results of specified accuracy under commonly used approximations. Results are presented here for total energies and orbital energy eigenvalues for all atoms from H to U, at microHartree accuracy in the total energy, as computed in the local-density approximation (LDA); the local-spin-density approximation (LSD); the relativistic local-density approximation (RLDA); and the scalar-relativistic local-density approximation (ScRLDA).
Casamayou, Alexandre; Cohen, Nathann; Connan, Guillaume; Dumont, Thierry; Fousse, Laurent; Maltey, Francois; Meulien, Matthias; Mezzarobba, Marc; Pernet, Clément; Thiéry, Nicolas M.; Zimmermann, Paul
2013-01-01
Electronic version available under a Creative Commons license. Sage is free mathematical software built on the Python programming language. Its authors, an international community of hundreds of teachers and researchers, have taken on the mission of providing a viable alternative to Magma, Maple, Mathematica, and Matlab. To that end, Sage draws on many existing free software packages, such as GAP, Maxima, PARI, and various scientific libraries...
Using reciprocity in Boundary Element Calculations
DEFF Research Database (Denmark)
Juhl, Peter Møller; Cutanda Henriquez, Vicente
2010-01-01
as the reciprocal radiation problem. The present paper concerns the situation of having a point source (which is reciprocal to a point receiver) at or near a discretized boundary element surface. The accuracy of the original and the reciprocal problem is compared in a test case for which an analytical solution......The concept of reciprocity is widely used in both theoretical and experimental work. In Boundary Element calculations reciprocity is sometimes employed in the solution of computationally expensive scattering problems, which sometimes can be more efficiently dealt with when formulated...
Chinese books on Western calendrical calculations and Japanese calendrical calculators in Edo era
Kobayashi, Tatsuhiko
2005-06-01
From the end of the Ming dynasty to the beginning of the Qing, many Western scientific books were translated into Chinese by Jesuit missionaries in cooperation with Chinese intellectuals. The Tokugawa government began to permit their importation, as an exception to the Shogunate's seclusion policy, in 1720. In this paper the author discusses their acceptance, especially of Chinese books on Western calendrical calculations, by Japanese calendrical calculators in the 18th-19th centuries.
Calculated Communications In A Concave World
2016-02-16
their camp rumors and print them as facts. I regard them as spies, which in truth, they are. If I killed them all there would be news from Hell ... concave world is not hindered by normal barriers to communication. From one side of the globe to the other, and even more substantially than a flat ... obvious and glaring differences in casualties, the main-stream media seems hell-bent on continuing to shift the risk aversion bias even further by
Use of risk aversion in risk acceptance criteria
Energy Technology Data Exchange (ETDEWEB)
Griesmeyer, J. M.; Simpson, M.; Okrent, D.
1980-06-01
Quantitative risk acceptance criteria for technological systems must be both justifiable, based upon societal values and objectives, and workable in the sense that compliance is possible and can be demonstrated in a straightforward manner. Societal values have frequently been assessed using recorded accident statistics on a wide range of human activities, assuming that the statistics in some way reflect societal preferences, or by psychometric surveys concerning perceptions and evaluations of risk. Both methods indicate a societal aversion to risk: e.g., many small accidents killing a total of 100 people are preferred over one large accident in which 100 lives are lost. Some of the implications of incorporating risk aversion in acceptance criteria are discussed. Calculated risks of various technological systems are converted to expected social costs using various risk aversion factors. The uncertainties in these assessments are also discussed.
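The risk-aversion weighting described above can be sketched with a power-law aversion factor; the exponent and the scenario numbers below are illustrative assumptions, not the report's values.

```python
# Sketch of converting calculated risks into risk-averse expected social
# cost; the aversion exponent and event list are illustrative assumptions.
def expected_social_cost(events, aversion=1.2):
    """Sum f_i * N_i**alpha over accident scenarios (frequency, fatalities).
    With alpha > 1, one accident killing 100 people is weighted more
    heavily than 100 accidents killing one person each."""
    return sum(freq * fatalities ** aversion for freq, fatalities in events)

many_small = [(1.0e-2, 1)] * 100   # 100 scenarios, 1 fatality each
one_large = [(1.0e-2, 100)]        # same expected fatalities, one event
print(expected_social_cost(many_small) < expected_social_cost(one_large))  # True
```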
The calculation of information and organismal complexity
Directory of Open Access Journals (Sweden)
Xu Cunshuan
2010-10-01
Full Text Available Abstract Background It is difficult to measure precisely the phenotypic complexity of living organisms. Here we propose a method to calculate the minimal amount of genomic information needed to construct an organism (effective information) as a measure of organismal complexity, by using permutation and combination formulas and Shannon's information concept. Results The results demonstrate that the calculated information correlates quite well with the intuitive organismal phenotypic complexity defined by traditional taxonomy and evolutionary theory. From viruses to human beings, the effective information gradually increases, from thousands of bits to hundreds of millions of bits. The simpler the organism, the less the information; the more complex the organism, the more the information. About 13% of the human genome is estimated as effective information or functional sequence. Conclusions The effective information can be used as a quantitative measure of phenotypic complexity of living organisms and also as an estimate of the functional fraction of the genome. Reviewers This article was reviewed by Dr. Lavanya Kannan (nominated by Dr. Arcady Mushegian), Dr. Chao Chen, and Dr. ED Rietman (nominated by Dr. Marc Vidal).
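A toy calculation in the spirit of this measure (though not the paper's exact formulas) is the information needed to specify one functional sequence among all sequences of the same length; the uniform-background assumption and the example length are illustrative.

```python
import math

# Toy sketch: bits needed to single out one sequence of a given length
# over a uniform alphabet, i.e. log2 of the number of possible sequences.
def sequence_information_bits(length, alphabet=4):
    """log2(alphabet**length), computed without forming the huge power."""
    return length * math.log2(alphabet)

# e.g. a 1000-base functional stretch under this toy model:
print(sequence_information_bits(1000))  # 2000.0 bits
```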
ARTc: Anisotropic reflectivity and transmissivity calculator
Malehmir, Reza; Schmitt, Douglas R.
2016-08-01
While seismic anisotropy is known to exist within the Earth's crust and even deeper, isotropic or even highly symmetric elastic anisotropic assumptions in seismic imaging are an over-simplification which may create artifacts in the image, target mis-positioning, and hence flawed interpretation. In this paper, we have developed the ARTc algorithm to solve for reflectivity and transmissivity as well as velocity and particle polarization in the most general case of elastic anisotropy. This algorithm is able to provide the reflectivity solution at the boundary between two anisotropic slabs with arbitrary symmetry and orientation up to triclinic. To achieve this, the algorithm solves the full elastic wave equation to find the polarization, slowness, and amplitude of all six wave-modes generated from the incident plane-wave and welded interface. In the first step of the reflectivity calculation, the algorithm solves properties of the incident wave such as particle polarization and slowness. After calculating the directions of the generated waves, the algorithm solves their respective slowness and particle polarization. With this information, the algorithm then solves a system of equations incorporating the imposed boundary conditions to arrive at the scattered wave amplitudes, and thus reflectivity and transmissivity. Reflectivity results as well as slowness and polarization are then tested in complex computational anisotropic models to ensure their accuracy and reliability. ARTc is coded in MATLAB® and bundled with an interactive GUI and bash script to run on single or multi-processor computers.
Calculating scattering matrices by wave function matching
Energy Technology Data Exchange (ETDEWEB)
Zwierzycki, M. [Institute of Molecular Physics, Polish Academy of Sciences, Smoluchowskiego 17, 60-179 Poznan (Poland); Khomyakov, P.A.; Starikov, A.A.; Talanana, M.; Xu, P.X.; Karpan, V.M.; Marushchenko, I.; Brocks, G.; Kelly, P.J. [Faculty of Science and Technology and MESA+ Institute for Nanotechnology, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands); Xia, K. [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100080 (China); Turek, I. [Institute of Physics of Materials, Academy of Sciences of the Czech Republic, 616 62 Brno (Czech Republic); Bauer, G.E.W. [Kavli Institute of NanoScience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft (Netherlands)
2008-04-15
The conductance of nanoscale structures can be conveniently related to their scattering properties expressed in terms of transmission and reflection coefficients. Wave function matching (WFM) is a transparent technique for calculating transmission and reflection matrices for any Hamiltonian that can be represented in tight-binding form. A first-principles Kohn-Sham Hamiltonian represented on a localized orbital basis or on a real space grid has such a form. WFM is based upon direct matching of the scattering-region wave function to the Bloch modes of ideal leads used to probe the scattering region. The purpose of this paper is to give a pedagogical introduction to WFM and present some illustrative examples of its use in practice. We briefly discuss WFM for calculating the conductance of atomic wires, using a real space grid implementation. A tight-binding muffin-tin orbital implementation very suitable for studying spin-dependent transport in layered magnetic materials is illustrated by looking at spin-dependent transmission through ideal and disordered interfaces. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of or the whole chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, a reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. But unfortunately, only a very small part of the experimental data for fitting the required binary model parameters is available, so very often these models cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why for the development of powerful group contribution methods almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
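The group-contribution idea itself is simple to state: a molecular property is estimated as a base value plus a sum of increments over the molecule's functional groups. The table and base value in this sketch are hypothetical placeholders, not parameters of any published method (UNIFAC, Joback, etc.).

```python
# Generic sketch of a group-contribution estimate; the increment table
# and base value are hypothetical, for illustration only.
GROUP_INCREMENTS = {"CH3": 23.6, "CH2": 22.9, "OH": 92.9}  # hypothetical values

def estimate_property(groups, base=198.0):
    """Estimate a property as base + sum of n_g * increment_g over groups."""
    return base + sum(n * GROUP_INCREMENTS[g] for g, n in groups.items())

# e.g. a molecule decomposed as 2 x CH3 + 1 x CH2 (illustrative only):
print(estimate_property({"CH3": 2, "CH2": 1}))
```

The appeal the review describes is exactly this: a small table of group parameters, fitted once against a comprehensive database, predicts properties for the vast majority of mixtures for which no experimental data exist.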
Calculation of aberration coefficients by ray tracing.
Oral, M; Lencová, B
2009-10-01
In this paper we present an approach for the calculation of aberration coefficients using accurate ray tracing. For a given optical system, intersections of a large number of trajectories with a given plane are computed. In the Gaussian image plane the imaging with the selected optical system can be described by paraxial and aberration coefficients (geometric and chromatic) that can be calculated by least-squares fitting of the analytical model to the computed trajectory positions. An advantage of such a way of computing the aberration coefficients is that, in comparison with the aberration integrals and the differential algebra method, it is relatively easy to use and its complexity stays almost constant with the growing complexity of the optical system. This paper shows a tested procedure for choosing proper initial conditions and computing the coefficients of the fifth-order geometrical and third-order, first-degree chromatic aberrations by ray tracing on the example of a weak electrostatic lens. The results are compared with the values for the same lens from the paper by Liu [Ultramicroscopy 106 (2006) 220-232].
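The fitting step can be illustrated in one dimension: generate synthetic "traced" intersection positions from a known model, then recover the coefficients by least squares. The cubic model and the data below are assumptions for illustration, far simpler than the paper's fifth-order fit.

```python
import numpy as np

# 1-D sketch of recovering paraxial and aberration coefficients by
# least-squares fitting a model x_img = M*x + B*x**3 to ray positions.
# The "traced" data are synthetic, with tiny noise standing in for
# numerical integration error.
rng = np.random.default_rng(0)
x_obj = np.linspace(-1.0, 1.0, 200)
M_true, B_true = -2.0, 0.05
x_img = M_true * x_obj + B_true * x_obj ** 3 + rng.normal(0, 1e-6, x_obj.size)

A = np.column_stack([x_obj, x_obj ** 3])   # design matrix of the model terms
coeffs, *_ = np.linalg.lstsq(A, x_img, rcond=None)
M_fit, B_fit = coeffs
print(M_fit, B_fit)  # recovers roughly -2.0 and 0.05
```

The paper's procedure is the same in spirit, with more model terms (up to fifth order, plus chromatic terms) and positions computed by accurate ray tracing rather than synthesized.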
Radioprotection calculations for the TRADE experiment
Zanini, L; Herrera-Martínez, A; Kadi, Y; Rubbia, Carlo; Burgio, N; Carta, M; Santagata, A; Cinotti, L
2002-01-01
The TRADE project is based on the coupling, in a sub-critical configuration, of a 115 MeV, 2 mA proton cyclotron with a TRIGA research reactor at the ENEA Casaccia centre (Rome). Detailed radioprotection calculations using the FLUKA and EA-MC Monte Carlo codes were performed during the feasibility study. The study concentrated on dose rates due to beam losses in normal operating conditions and on the calculation of activation in the most sensitive components of the experiment. Results show that a shielding of 1.4 m of barytes concrete around the beam line will be sufficient to maintain the effective doses below the level of 10 μSv/h, provided that the beam losses are at the level of 10 nA/m. The activation level around the beam line and in the water will be negligible, while the spallation target will reach an activation level comparable to that of a fuel element at maximum burnup.
Criticality Calculations with MCNP6 - Practical Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications (XCP-3)
2016-11-29
These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: Develop an input model for MCNP; Describe how cross section data impact Monte Carlo and deterministic codes; Describe the importance of validation of computer codes and how it is accomplished; Describe the methodology supporting Monte Carlo codes and deterministic codes; Describe pitfalls of Monte Carlo calculations; Discuss the strengths and weaknesses of Monte Carlo and Discrete Ordinates codes; The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present. In the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.
A new calculation of LAMOST optical vignetting
Li, Shuang; Luo, Ali; Chen, Jianjun; Liu, Genrong; Comte, Georges
2012-09-01
A new method to calculate the optical vignetting of LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) is presented. With the pilot survey of LAMOST, it is necessary to have a thorough and quantitative estimation and analysis of the observing efficiency, which is affected by various factors: the optical system of the telescope and the spectrograph (that is, vignetting), the focal instrument, and the site condition. The wide field and large pupil of LAMOST, fed by a Schmidt reflecting mirror with a fixed optical axis coinciding with the local polar axis, lead to significant telescope vignetting, caused by the effective light-collecting area of the corrector, the light obstruction of the focal plate, and the size of the primary mirror. A calculation of the vignetting was presented by Xue et al. (2007), which considered a 4-meter circle limitation and was based on ray-tracing. In fact, there is no effect of the 4-meter circle limitation, so we compute the vignetting again by obtaining the ratio of the effective projected area of the corrector. All the results are derived with AUTOCAD. Moreover, the vignetting functions and the vignetting variations with the declination at which the telescope is pointed and the position considered in the focal surface are presented and analysed. Finally, compared with the previous ray-tracing method, the validity and availability of the proposed method are illustrated.
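The "ratio of effective projected area" idea can be illustrated generically as the overlap of two circular apertures divided by the full beam area; the radii and offset below are illustrative, not LAMOST dimensions, and the real geometry (AUTOCAD-derived, non-circular obstructions) is more involved.

```python
import math

# Generic projected-area vignetting sketch: overlap of two circular
# apertures (e.g. beam footprint vs. mirror) over the full beam area.
def circle_overlap_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 at center distance d."""
    if d >= r1 + r2:
        return 0.0                                  # disjoint
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2           # one inside the other
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def vignetting_factor(beam_r, mirror_r, offset):
    """Fraction of the beam area still collected at the given offset."""
    return circle_overlap_area(beam_r, mirror_r, offset) / (math.pi * beam_r ** 2)

print(vignetting_factor(1.0, 1.0, 0.0))  # 1.0: no vignetting when aligned
```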
Electronic Structure Calculations and the Ising Hamiltonian.
Xia, Rongxin; Bian, Teng; Kais, Sabre
2017-11-20
Obtaining exact solutions to the Schrödinger equation for atoms, molecules, and extended systems continues to be a "Holy Grail" problem which the fields of theoretical chemistry and physics have been striving to solve since inception. Recent breakthroughs have been made in the development of hardware-efficient quantum optimizers and coherent Ising machines capable of simulating hundreds of interacting spins with an Ising-type Hamiltonian. One of the most vital questions pertaining to these new devices is, "Can these machines be used to perform electronic structure calculations?" Within this work, we review the general procedure used by these devices and prove that there is an exact mapping between the electronic structure Hamiltonian and the Ising Hamiltonian. Additionally, we provide simulation results of the transformed Ising Hamiltonian for H2 , He2 , HeH+, and LiH molecules, which match the exact numerical calculations. This demonstrates that one can map the molecular Hamiltonian to an Ising-type Hamiltonian which could easily be implemented on currently available quantum hardware. This is an early step in developing generalized methods on such devices for chemical physics.
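The Ising side of such a mapping can be made concrete with a brute-force ground-state search over a toy Hamiltonian; the couplings below are illustrative, not the paper's molecular mapping.

```python
import itertools

# Toy sketch: minimize E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i
# over spins s_i in {-1, +1} by exhaustive enumeration. The couplings
# J and fields h below are illustrative, not a molecular Hamiltonian.
def ising_ground_state(J, h):
    n = len(h)
    best = None
    for spins in itertools.product((-1, 1), repeat=n):
        e = sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))
        e += sum(h[i] * spins[i] for i in range(n))
        if best is None or e < best[0]:
            best = (e, spins)
    return best

J = [[0, 1, 0], [0, 0, -1], [0, 0, 0]]
h = [0.5, 0.0, -0.5]
print(ising_ground_state(J, h))  # (-3.0, (-1, 1, 1))
```

Quantum optimizers and coherent Ising machines replace this exponential enumeration with physical minimization of the same energy function.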
Fastlim: a fast LHC limit calculator.
Papucci, Michele; Sakurai, Kazuki; Weiler, Andreas; Zeune, Lisa
Fastlim is a tool to calculate conservative limits on extensions of the Standard Model from direct LHC searches without performing any Monte Carlo event generation. The program reconstructs the visible cross sections (cross sections after event selection cuts) from pre-calculated efficiency tables and cross section tables for simplified event topologies. As a proof of concept of the approach, we have implemented searches relevant for supersymmetric models with R-parity conservation. Fastlim takes the spectrum and coupling information of a given model point and provides, for each signal region of the implemented analyses, the visible cross sections normalised to the corresponding upper limit, reported by the experiments, as well as the [Formula: see text] value. To demonstrate the utility of the program we study the sensitivity of the recent ATLAS missing energy searches to the parameter space of natural SUSY models. The program structure allows the straightforward inclusion of external efficiency tables and can be generalised to R-parity violating scenarios and non-SUSY models. This paper serves as a self-contained user guide and indicates the conventions and approximations used.
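The normalisation Fastlim reports can be sketched as follows; the cross sections, efficiencies, upper limits, and signal-region names below are invented for illustration, not taken from any implemented analysis:

```python
def exclusion_r_values(signal_regions):
    """For each signal region, r = (sigma * efficiency) / sigma_UL: the
    visible cross section normalised to the reported 95% CL upper limit.
    r >= 1 in any region means the model point is conservatively excluded."""
    return {name: sigma * eff / ul
            for name, (sigma, eff, ul) in signal_regions.items()}

regions = {
    # name: (sigma [pb], selection efficiency, upper limit [pb]) -- invented
    "SR-2j-tight": (0.8, 0.05, 0.02),
    "SR-4j-loose": (0.8, 0.01, 0.03),
}
r = exclusion_r_values(regions)
excluded = max(r.values()) >= 1.0
```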
Direct search algorithms for optimization calculations
Powell, M. J. D.
Many different procedures have been proposed for optimization calculations when first derivatives are not available. Several researchers have contributed to the subject, including some who wish to prove convergence theorems and some who wish simply to achieve a reduction in the least calculated value of the objective function. There is not even a key idea that can serve as the foundation of a review, except for the problem itself: the adjustment of variables so that a function becomes least, where each value of the function is returned by a subroutine for each trial vector of variables. The paper is therefore a collection of essays on particular strategies and algorithms, considering the advantages, limitations and theory of several techniques. The subjects addressed are line search methods, the restriction of vectors of variables to discrete grids, the use of geometric simplices, conjugate direction procedures, trust region algorithms that form linear or quadratic approximations to the objective function, and simulated annealing. We study the main features of the methods themselves, instead of providing a catalogue of references to published work, because an understanding of these features may be very helpful to future research.
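A minimal member of this family is compass (pattern) search: poll the coordinate directions, move on improvement, halve the step on failure. This sketch is representative of the direct-search idea, not any one algorithm from the survey:

```python
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Derivative-free compass search: poll the 2n coordinate directions,
    accept any improving point, and halve the step when no poll improves.
    Each function value comes only from calling f, mirroring the
    subroutine-per-trial-vector setting described above."""
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(n):
            for d in (step, -step):
                y = x.copy()
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step /= 2.0
    return x, fx

xmin, fmin = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                            [0.0, 0.0])
```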
Electron mobility calculation for graphene on substrates
Energy Technology Data Exchange (ETDEWEB)
Hirai, Hideki; Ogawa, Matsuto [Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University, 1-1, Rokko-dai, Nada-ku, Kobe 657-8501 (Japan); Tsuchiya, Hideaki, E-mail: tsuchiya@eedept.kobe-u.ac.jp [Department of Electrical and Electronic Engineering, Graduate School of Engineering, Kobe University, 1-1, Rokko-dai, Nada-ku, Kobe 657-8501 (Japan); Japan Science and Technology Agency, CREST, Chiyoda, Tokyo 102-0075 (Japan); Kamakura, Yoshinari; Mori, Nobuya [Japan Science and Technology Agency, CREST, Chiyoda, Tokyo 102-0075 (Japan); Division of Electrical, Electronic and Information Engineering, Graduate School of Engineering, Osaka University, Suita, Osaka 565-0871 (Japan)
2014-08-28
By a semiclassical Monte Carlo method, the electron mobility in graphene is calculated for three different substrates: SiO₂, HfO₂, and hexagonal boron nitride (h-BN). The calculations account for polar and non-polar surface optical phonon (OP) scattering induced by the substrates and for charged impurity (CI) scattering, in addition to the intrinsic phonon scattering of pristine graphene. HfO₂ is found to be unsuitable as a substrate, because its surface OP scattering significantly degrades the electron mobility. The mobility on the SiO₂ and h-BN substrates decreases due to CI scattering; on h-BN, however, the mobility remains as high as 170 000 cm²/(V·s) for electron densities below 10¹² cm⁻². Therefore, h-BN should be an appealing substrate for graphene devices, as confirmed experimentally.
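When scattering mechanisms act independently, their mobility limits combine by Matthiessen's rule, 1/μ_total = Σ 1/μ_i. The values below are placeholder magnitudes, not the paper's Monte Carlo results:

```python
def combined_mobility(mobilities):
    """Matthiessen's rule: independent scattering channels add as inverse
    mobilities, so the weakest channel dominates the total."""
    return 1.0 / sum(1.0 / m for m in mobilities)

# e.g. intrinsic-phonon, surface-OP and charged-impurity limits (invented)
mu = combined_mobility([2.0e5, 8.0e5, 4.0e5])   # cm^2/(V s)
```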
Calculation of fractional electron capture probabilities
Schoenfeld, E
1998-01-01
A 'Table of Radionuclides' is being prepared which will supersede the 'Table de Radionucleides' formerly issued by the LMRI/LPRI (France). In this effort it is desirable to have a uniform basis for calculating theoretical values of fractional electron capture probabilities. A table has been compiled which allows one to calculate conveniently and quickly the fractional probabilities P_K, P_L, P_M, P_N and P_O, their ratios and the assigned uncertainties for allowed and non-unique first forbidden electron capture transitions of known transition energy for radionuclides with atomic numbers from Z=3 to 102. These results have been applied to a total of 28 transitions of 14 radionuclides (⁷Be, ²²Na, ⁵¹Cr, ⁵⁴Mn, ⁵⁵Fe, ⁶⁸Ge, ⁶⁸Ga, ⁷⁵Se, ¹⁰⁹Cd, ¹²⁵I, ¹³⁹Ce, ¹⁶⁹Yb, ¹⁹⁷Hg, ²⁰²Tl). The values are in reasonable agreement with measure...
Calculations of neoclassical impurity transport in stellarators
Mollén, Albert; Smith, Håkan M.; Langenberg, Andreas; Turkin, Yuriy; Beidler, Craig D.; Helander, Per; Landreman, Matt; Newton, Sarah L.; García-Regaña, José M.; Nunami, Masanori
2017-10-01
The new stellarator Wendelstein 7-X has finished its first operational campaign and is restarting operation in summer 2017. To demonstrate that the stellarator concept is a viable candidate for a fusion reactor and to allow for long pulse lengths of 30 min, i.e. ``quasi-stationary'' operation, it will be important to avoid central impurity accumulation, which is typically governed by the radial neoclassical transport. The SFINCS code has been developed to calculate neoclassical quantities such as the radial collisional transport and the ambipolar radial electric field in 3D magnetic configurations. SFINCS is a cutting-edge numerical tool which combines several important features: the ability to model an arbitrary number of kinetic plasma species, the full linearized Fokker-Planck collision operator for all species, and the ability to calculate and account for the variation of the electrostatic potential on flux surfaces. In the present work we use SFINCS to study neoclassical impurity transport in stellarators. We explore how flux-surface potential variations affect the radial particle transport, and how the radial electric field is modified by non-trace impurities and flux-surface potential variations.
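The ambipolarity condition itself — find the E_r at which the net radial current Σ_a Z_a Γ_a(E_r) vanishes — reduces to root finding once the fluxes can be evaluated. The flux model below is a made-up monotone stand-in for a SFINCS evaluation:

```python
def ambipolar_root(net_current, lo, hi, tol=1e-10, max_iter=200):
    """Bisection for the ambipolar radial electric field: the root of the
    net radial current.  Assumes net_current changes sign on [lo, hi]."""
    flo = net_current(lo)
    for _ in range(max_iter):
        if abs(hi - lo) < tol:
            break
        mid = 0.5 * (lo + hi)
        fmid = net_current(mid)
        if (fmid < 0) == (flo < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy net current with its root at E_r = -3.0 (arbitrary units)
E_r = ambipolar_root(lambda e: (e + 3.0) ** 3, -10.0, 10.0)
```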
Alcohol Intake and Risk of Incident Melanoma
Rivera, Andrew Robert
2015-01-01
Alcohol consumption is associated with increased risk of numerous cancers, but has not been definitively associated with risk of melanoma. We used prospectively gathered data from three large cohorts to investigate whether alcohol intake is associated with risk of invasive melanoma and melanoma in situ. Statistical analyses were conducted using the Cox proportional hazards model to calculate multivariate-adjusted risk ratios. 1,496 cases of invasive melanoma and 870 cases of melanoma in situ ...
DEFF Research Database (Denmark)
Lam, Janni Uyen Hoa; Lynge, Elsebeth; Njor, Sisse Helle
2015-01-01
BACKGROUND: The incidence rates of cervical cancer and the coverage in cervical cancer screening are usually reported by including in the denominator all women from the general population. However, after hysterectomy women are no longer at risk of developing cervical cancer. Therefore, it makes sense to determine the indicators also for the true at-risk populations. We described the frequency of total hysterectomy in Denmark and its impact on the calculated incidence of cervical cancer and the screening coverage. MATERIAL AND METHODS: With data from five Danish population-based registries, the incidence rate of cervical cancer and the screening coverage for women aged 23-64 years on 31 December 2010 were calculated with and without adjustment for hysterectomies undertaken for reasons other than cervical cancer. They were calculated as the number of cases divided by 1) the total number of woman...
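The denominator adjustment amounts to restricting the population at risk; the counts below are illustrative, not the registry figures:

```python
def incidence_per_100k(cases, population, hysterectomized=0):
    """Incidence rate per 100 000, with the denominator optionally
    restricted to women who have not undergone hysterectomy and so
    remain at risk of cervical cancer."""
    at_risk = population - hysterectomized
    return 1e5 * cases / at_risk

crude    = incidence_per_100k(350, 1_500_000)
adjusted = incidence_per_100k(350, 1_500_000, hysterectomized=90_000)
```

Removing hysterectomized women from the denominator necessarily raises both the calculated incidence and the calculated coverage.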
Free-Energy Calculations. A Mathematical Perspective
Pohorille, Andrzej
2015-01-01
Ion channels are pore-forming assemblies of transmembrane proteins that mediate and regulate ion transport through cell walls. They are ubiquitous to all life forms. In humans and other higher organisms they play the central role in conducting nerve impulses. They are also essential to cardiac processes, muscle contraction and epithelial transport. Ion channels from lower organisms can act as toxins or antimicrobial agents, and in a number of cases are involved in infectious diseases. Because of their important and diverse biological functions they are frequent targets of drug action. Also, simple natural or synthetic channels find numerous applications in biotechnology. For these reasons, studies of ion channels are at the forefront of biophysics, structural biology and cellular biology. In the last decade, the increased availability of X-ray structures has greatly advanced our understanding of ion channels. However, their mechanism of action remains elusive. This is because, in order to assist controlled ion transport, ion channels are dynamic by nature, but X-ray crystallography captures the channel in a single, sometimes non-native state. To explain how ion channels work, X-ray structures have to be supplemented with dynamic information. In principle, molecular dynamics (MD) simulations can aid in providing this information, as this is precisely what MD has been designed to do. However, MD simulations suffer from their own problems, such as inability to access sufficiently long time scales or limited accuracy of force fields. To assess the reliability of MD simulations it is only natural to turn to the main function of channels - conducting ions - and compare calculated ionic conductance with electrophysiological data, mainly single channel recordings, obtained under similar conditions. If this comparison is satisfactory it would greatly increase our confidence that both the structures and our computational methodologies are sufficiently accurate. Channel
Development of thermodynamic databases for geochemical calculations
Energy Technology Data Exchange (ETDEWEB)
Arthur, R.C. [Monitor Scientific, L.L.C., Denver, Colorado (United States); Sasamoto, Hiroshi; Shibata, Masahiro; Yui, Mikazu [Japan Nuclear Cycle Development Inst., Tokai, Ibaraki (Japan); Neyama, Atsushi [Computer Software Development Corp., Tokyo (Japan)
1999-09-01
Two thermodynamic databases for geochemical calculations supporting research and development on geological disposal concepts for high level radioactive waste are described in this report. One, SPRONS.JNC, is compatible with thermodynamic relations comprising the SUPCRT model and software, which permits calculation of the standard molal and partial molal thermodynamic properties of minerals, gases, aqueous species and reactions from 1 to 5000 bars and 0 to 1000 °C. This database includes standard molal Gibbs free energies and enthalpies of formation, standard molal entropies and volumes, and Maier-Kelly heat capacity coefficients at the reference pressure (1 bar) and temperature (25 °C) for 195 minerals and 16 gases. It also includes standard partial molal Gibbs free energies and enthalpies of formation, standard partial molal entropies, and Helgeson, Kirkham and Flowers (HKF) equation-of-state coefficients at the reference pressure and temperature for 1147 inorganic and organic aqueous ions and complexes. SPRONS.JNC extends similar databases described elsewhere by incorporating new and revised data published in the peer-reviewed literature since 1991. The other database, PHREEQE.JNC, is compatible with the PHREEQE series of geochemical modeling codes. It includes equilibrium constants at 25 °C and 1 bar for mineral-dissolution, gas-solubility, aqueous-association and oxidation-reduction reactions. Reaction enthalpies, or coefficients in an empirical log K(T) function, are also included in this database, which permits calculation of equilibrium constants between 0 and 100 °C at 1 bar. All equilibrium constants, reaction enthalpies, and log K(T) coefficients in PHREEQE.JNC are calculated using SUPCRT and SPRONS.JNC, which ensures that these two databases are mutually consistent. They are also internally consistent insofar as all the data are compatible with basic thermodynamic definitions and functional relations in the SUPCRT model, and because primary
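The simplest log K(T) extrapolation a database like this supports is the van 't Hoff relation, which assumes a temperature-independent reaction enthalpy. This is only the generic shape of such a function; PHREEQE.JNC itself stores fitted coefficients derived from SUPCRT:

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def log_k(T_kelvin, log_k25, delta_h_j_mol):
    """van 't Hoff extrapolation from the 25 degC reference:
        log K(T) = log K(298.15) - (dH / (ln 10 * R)) * (1/T - 1/298.15)
    assuming the reaction enthalpy dH is constant over the interval."""
    return log_k25 - (delta_h_j_mol / (math.log(10) * R)) * (
        1.0 / T_kelvin - 1.0 / 298.15)

# an exothermic reaction (dH < 0): K decreases as temperature rises
lk25 = log_k(298.15, 4.0, -50_000.0)
lk100 = log_k(373.15, 4.0, -50_000.0)
```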
Corrugated Membrane Nonlinear Deformation Process Calculation
Directory of Open Access Journals (Sweden)
A. S. Nikolaeva
2015-01-01
Full Text Available Elastic elements are widely used in instrumentation. They are used to create a particular interference between parts, to accumulate mechanical energy, and as motion transmission elements, elastic supports, and sensing elements of measuring devices. Device reliability and quality depend on the calculation accuracy of the elastic elements. A corrugated membrane is a rather common embodiment of the elastic element. The corrugated membrane's properties depend largely on its profile, i.e. the generatrix of the meridian surface. Unlike other types of elastic pressure members (bellows, tube springs), whose elastic characteristics are close to linear, the elastic characteristic of a corrugated membrane (typical displacement versus external load) is nonlinear. Therefore, corrugated membranes can be used to measure quantities nonlinearly related to pressure (e.g., aircraft air speed, altitude, pipeline fluid or gas flow rate). Another feature of the corrugated membrane is that significant displacements are possible while the material remains elastic. However, the significant nonlinearity of the membrane characteristic severely complicates the calculation. This article is aimed at calculating the corrugated membrane to obtain its elastic characteristic and the deformed shape of the membrane meridian, as well as at investigating buckling processes. The calculation model is a thin-walled axisymmetric shell of revolution with linearly elastic material properties. We consider a corrugated membrane of sinusoidal profile under a uniform pressure load. The algorithm for calculating the mathematical model of an axisymmetric corrugated membrane of constant thickness, based on Reissner's theory of elastic thin shells, was implemented as the author's own program in C. To solve the nonlinear problem were used a method of changing the subspace of control parameters, developed by S. S. Gavriushin, and a parameter marching method
Body Mass Index Genetic Risk Score and Endometrial Cancer Risk.
Prescott, Jennifer; Setiawan, Veronica W; Wentzensen, Nicolas; Schumacher, Fredrick; Yu, Herbert; Delahanty, Ryan; Bernstein, Leslie; Chanock, Stephen J; Chen, Chu; Cook, Linda S; Friedenreich, Christine; Garcia-Closas, Monserrat; Haiman, Christopher A; Le Marchand, Loic; Liang, Xiaolin; Lissowska, Jolanta; Lu, Lingeng; Magliocco, Anthony M; Olson, Sara H; Risch, Harvey A; Shu, Xiao-Ou; Ursin, Giske; Yang, Hannah P; Kraft, Peter; De Vivo, Immaculata
2015-01-01
Genome-wide association studies (GWAS) have identified common variants that predispose individuals to a higher body mass index (BMI), an independent risk factor for endometrial cancer. Composite genotype risk scores (GRS) based on the joint effect of published BMI risk loci were used to explore whether endometrial cancer shares a genetic background with obesity. Genotype and risk factor data were available on 3,376 endometrial cancer case and 3,867 control participants of European ancestry from the Epidemiology of Endometrial Cancer Consortium GWAS. A BMI GRS was calculated by summing the number of BMI risk alleles at 97 independent loci. For exploratory analyses, additional GRSs were based on subsets of risk loci within putative etiologic BMI pathways. The BMI GRS was statistically significantly associated with endometrial cancer risk (P = 0.002). For every 10 BMI risk alleles a woman had a 13% increased endometrial cancer risk (95% CI: 4%, 22%). However, after adjusting for BMI, the BMI GRS was no longer associated with risk (per 10 BMI risk alleles OR = 0.99, 95% CI: 0.91, 1.07; P = 0.78). Heterogeneity by BMI did not reach statistical significance (P = 0.06), and no effect modification was noted by age, GWAS stage, study design or between studies (P ≥ 0.58). In exploratory analyses, the GRS defined by variants at loci containing monogenic obesity syndrome genes was associated with reduced endometrial cancer risk independent of BMI (per BMI risk allele OR = 0.92, 95% CI: 0.88, 0.96; P = 2.1 × 10⁻⁵). Possessing a large number of BMI risk alleles does not increase endometrial cancer risk above that conferred by excess body weight among women of European descent. Thus, the GRS based on all current established BMI loci does not provide added value independent of BMI. Future studies are required to validate the unexpected observed relation between monogenic obesity syndrome genetic variants and endometrial cancer risk.
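An unweighted GRS of this kind is simply an allele count. The sketch below uses the reported 13%-per-10-alleles point estimate for illustration; the allele counts are invented:

```python
def genotype_risk_score(allele_counts):
    """Unweighted genotype risk score: the number of risk alleles
    (0, 1 or 2 per locus) summed across loci, as in the 97-locus
    BMI GRS described above."""
    assert all(c in (0, 1, 2) for c in allele_counts)
    return sum(allele_counts)

def odds_ratio_for_score(grs, or_per_10_alleles=1.13):
    """Odds ratio implied by a per-10-allele estimate, treating the
    log-odds as linear in the allele count."""
    return or_per_10_alleles ** (grs / 10.0)

grs = genotype_risk_score([2, 1, 0, 2])       # 4 hypothetical loci
```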
Energy Technology Data Exchange (ETDEWEB)
Jannik, Tim [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Stagich, Brooke [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-08-28
The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of its updated “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides PRGs for radionuclides that are used as a screening tool at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) sites. These risk-based PRGs establish concentration limits under specific exposure scenarios. The purpose of this verification study is to determine that the calculator has no inherent numerical problems in obtaining solutions and to ensure that the equations are programmed correctly. There are 167 equations used in the calculator. To verify the calculator, all equations for each of seven receptor types (resident, construction worker, outdoor and indoor worker, recreator, farmer, and composite worker) were hand calculated using the default parameters. The same four radionuclides (Am-241, Co-60, H-3, and Pu-238) were used for each calculation for consistency throughout.
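Hand verification of such a tool amounts to re-evaluating each programmed equation with the default inputs and comparing against the calculator's output. Risk-based screening equations share the generic shape below; this is an illustrative form with hypothetical units, not one of the calculator's 167 actual equations:

```python
def screening_concentration(target_risk, slope_factor, intake_per_unit_conc):
    """Generic risk-based screening level:
        C = target risk / (slope factor * intake per unit concentration).
    Real PRG equations add receptor-specific occupancy, decay and
    pathway terms not modelled here."""
    return target_risk / (slope_factor * intake_per_unit_conc)

# a hand calculation with the tool's default inputs should match its output
prg = screening_concentration(1e-6, 0.5, 2e-4)
```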
Multi-compartment iodine calculations with FIPLOC/IMPAIR
Energy Technology Data Exchange (ETDEWEB)
Ewig, F.; Allelein, H.J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Schwarz, S.; Weber, G. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)
1996-12-01
The multi-compartment containment code FIPLOC for the simulation of severe accidents in LWR plants was extended by the integration of the iodine model IMPAIR-3. The iodine model was adapted to arbitrary compartment configurations and tightly coupled to the thermal-hydraulic part. A major advance of the coupled version FIPLOC-3.0 is the more sophisticated modelling of aerosol iodine behaviour. In a PWR accident, most of the iodine is released from the primary circuit in the form of CsI aerosol. In IMPAIR-3 the aerosol behaviour of the species CsI, AgI and IO₃⁻ had been modelled in a very simplified way, causing large uncertainties in the calculated distributions. The behaviour of these three aerosol species is now treated by the aerosol model MAEROS/MGA: agglomeration, particle growth by condensation and all deposition processes are calculated, and the solubility effect for the hygroscopic species CsI and IO₃⁻ is included. Furthermore, the impact of the iodine decay heat on the thermal-hydraulic behaviour is considered. To test the code development, a preliminary FIPLOC-3.0 calculation was performed simulating a German PWR containment for the core melt scenario ND* according to the German risk study phase B. In the calculation, contact of the core melt with the sump water was assumed and the containment vent line was opened after 70 hours. The results show that the different iodine species are distributed inhomogeneously within the containment: the CsI aerosol concentrations differ by two orders of magnitude and the I₂ concentration even by three orders of magnitude. Most of the iodine is released as CsI aerosol from the primary circuit; since it deposits rapidly, its contribution to the release into the environment is minor. CsI is, however, dissolved in the sump, where most of the gaseous I₂ is created, which can react in the containment atmosphere to form IO₃⁻. (author) 11 figs., 3 tabs., 12 refs.
Void growth in metals: Atomistic calculations
Energy Technology Data Exchange (ETDEWEB)
Traiviratana, Sirirat [Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093 (United States); Bringa, Eduardo M. [Materials Science Department, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Benson, David J. [Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093 (United States); Meyers, Marc A. [Department of Mechanical and Aerospace Engineering, University of California, San Diego, La Jolla, CA 92093 (United States); NanoEngineering, University of California, San Diego, La Jolla, CA 92093 (United States)], E-mail: mameyers@ucsd.edu
2008-09-15
Molecular dynamics simulations in monocrystalline and bicrystalline copper were carried out with LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) to reveal void growth mechanisms. The specimens were subjected to tensile uniaxial strains; the results confirm that the emission of (shear) loops is the primary mechanism of void growth. It is observed that many of these shear loops develop along two slip planes (and not one, as previously thought), in a heretofore unidentified mechanism of cooperative growth. The emission of dislocations from voids is the first stage, and their reaction and interaction is the second stage. These loops, forming initially on different {1 1 1} planes, join at the intersection if the Burgers vector of the dislocations is parallel to the intersection of the two {1 1 1} planes: a <1 1 0> direction. Thus, the two dislocations cancel at the intersection and a biplanar shear loop is formed. The expansion of the loops and their cross slip leads to the severely work-hardened region surrounding a growing void. Calculations were carried out on voids with different sizes, and a size dependence of the stress threshold to emit dislocations was obtained by MD, in disagreement with the Gurson model, which is scale independent. This disagreement is most marked for nanometer-sized voids. The scale dependence of the stress required to grow voids is interpreted in terms of the decreasing availability of optimally oriented shear planes and the increased stress required to nucleate shear loops as the void size is reduced. The growth of voids simulated by MD is compared with the Cocks-Ashby constitutive model and significant agreement is found. The density of geometrically necessary dislocations as a function of void size is calculated based on the emission of shear loops and their outward propagation. Calculations are also carried out for a void at the interface between two grains to simulate polycrystalline
Parameterization Impacts on Linear Uncertainty Calculation
Fienen, M. N.; Doherty, J.; Reeves, H. W.; Hunt, R. J.
2009-12-01
Efficient linear calculation of model prediction uncertainty can be an insightful diagnostic metric for decision-making. Specifically, the contributions of parameter uncertainty, or of the location and type of data, to prediction uncertainty can be used to evaluate which types of information are most valuable: information that most significantly reduces prediction uncertainty can be considered to have greater worth. Prediction uncertainty is commonly calculated including or excluding specific information and compared to a base scenario; the quantitative difference in uncertainty with and without the information indicates that information's worth in the decision-making process. These results can be calculated at many hypothetical locations to guide network design (i.e., where to install new wells, stream gages, etc.) or used to indicate which parameters are the most important to understand, and thus likely candidates for future characterization work. We examine a hypothetical case in which an inset model is created from a large regional model in order to better represent a surface stream network and to predict head near, and flux in, a stream due to installation and pumping of a large well near a stream headwater. Although parameterization and edge boundary conditions are inherited from the regional model, the simple act of refining the discretization and stream geometry improves the representation of the streams. Even visual inspection of the simulated head field highlights the need to recalibrate and potentially re-parameterize the inset model. A network of potential head observations is evaluated and contoured in the shallowest two layers of the six-layer model to assess their worth in predicting both flux at a specific gage and head at a specific location near the stream. Three hydraulic conductivity parameterization scenarios are evaluated: a single multiplier acting on the inherited hydraulic conductivity zonation; the
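The data-worth logic — prediction uncertainty with versus without an observation — can be sketched in the linear (first-order) setting. The matrices below are toy examples under the simplifying assumptions of unit observation noise and no prior parameter information:

```python
import numpy as np

def prediction_variance(J, y):
    """Linear prediction variance y^T (J^T J)^(-1) y, where J is the
    observation sensitivity matrix and y the sensitivity of the
    prediction to the parameters (unit noise, no prior)."""
    return float(y @ np.linalg.inv(J.T @ J) @ y)

def data_worth(J, y, row):
    """Worth of one observation: the increase in prediction variance
    incurred when that row of J is removed."""
    J_without = np.delete(J, row, axis=0)
    return prediction_variance(J_without, y) - prediction_variance(J, y)

J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # three observations, two parameters
y = np.array([1.0, 1.0])     # prediction sensitivities
worth = data_worth(J, y, 2)  # worth of the third observation
```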
Relativistic Few-Body Hadronic Physics Calculations
Energy Technology Data Exchange (ETDEWEB)
Polyzou, Wayne [Univ. of Iowa, Iowa City, IA (United States)
2016-06-20
The goal of this research proposal was to use ``few-body'' methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete accurate experimental measurements on these systems. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are ``simple'', both the experiments and computations push the limits of technology. The important property of ``few-body'' systems is that the ``cluster property'' implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies - from zero energy scattering to few GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced, which when compared with calculations of other groups provided an essential check on these complicated calculations. In
Recurrence risk for germinal mosaics revisited
van der Meulen, Martin; te Meerman, G. J.
A formula to calculate recurrence risk for germline mosaicism published by Hartl in 1971 has been updated to include marker information. For practical genetic counselling new, more elaborate tables are given.
Using Angle calculations to demonstrate vowel shifts
DEFF Research Database (Denmark)
Fabricius, Anne
2008-01-01
This paper gives an overview of the long-term trends of diachronic changes evident within the short vowel system of RP during the 20th century. More specifically, it focusses on the changing juxtapositions of the TRAP, STRUT and LOT, FOOT vowel centroid positions. The paper uses geometric calculations to give precise and replicable representations of the vowel system and the generational changes apparent in the data. While FOOT-fronting is well known in British English (Torgersen 1997), less is known about the historical trajectory of the STRUT vowel in response to the encroachment of the TRAP vowel, whose lowering and backing are also well documented (Wells 1982). The discussion draws out differences between 'phonetic' and 'sociolinguistic' stances on the interpretation of acoustic vowel data in formant plots...
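A geometric calculation of the kind described might compute the angle of the line joining two vowel centroids in the formant plane. The formant values below are invented for illustration, not the paper's measurements:

```python
import math

def centroid_angle_deg(centroid_a, centroid_b):
    """Angle (degrees) of the line from vowel centroid a to centroid b
    in the (F2, F1) plane; such angles give a replicable measure of the
    relative juxtaposition of two vowels."""
    (f2a, f1a), (f2b, f1b) = centroid_a, centroid_b
    return math.degrees(math.atan2(f1b - f1a, f2b - f2a))

trap  = (1700.0, 750.0)   # hypothetical (F2, F1) centroid in Hz
strut = (1300.0, 650.0)
angle = centroid_angle_deg(trap, strut)
```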
Distributed Function Calculation over Noisy Networks
Directory of Open Access Journals (Sweden)
Zhidun Zeng
2016-01-01
Full Text Available Considering any connected network with unknown initial states for all nodes, the nearest-neighbor rule is used by each node to update its own state at every discrete-time step. The distributed function calculation problem is for one node to compute some function of the initial values of all the nodes based on its own observations. In this paper, taking into account uncertainties in the network and in the observations, an algorithm is proposed to compute and explicitly characterize the value of the function in question when the number of successive observations is large enough. When the number of successive observations is not large enough, we provide an approach to obtain the tightest possible bounds on the function by using linear programming techniques. Simulations are provided to demonstrate the theoretical results.
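In the noise-free case the nearest-neighbor rule reduces to average consensus, the simplest distributed function calculation: every node converges to the mean of the initial values. A minimal sketch on a toy graph:

```python
import numpy as np

def consensus_average(x0, A, steps=200):
    """Nearest-neighbour averaging on a connected undirected graph.
    With a symmetric, doubly stochastic update matrix W = I - eps * L
    (L the graph Laplacian), every node's state converges to the
    average of the initial values."""
    n = len(x0)
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    eps = 1.0 / (deg.max() + 1.0)       # keeps W nonnegative
    L = np.diag(deg) - A                # graph Laplacian
    W = np.eye(n) - eps * L
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = W @ x
    return x

# path graph on 3 nodes; initial average is 3.0
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
x = consensus_average([3.0, 0.0, 6.0], A)
```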
Molecular orbital calculations using chemical graph theory
Dias, Jerry Ray
1993-01-01
Professor John D. Roberts published a highly readable book on Molecular Orbital Calculations directed toward chemists in 1962. That timely book is the model for this book. The audience this book is directed toward is senior undergraduate and beginning graduate students, as well as practicing bench chemists who have a desire to develop conceptual tools for understanding chemical phenomena. Although ab initio and more advanced semi-empirical MO methods are regarded as more reliable than HMO in an absolute sense, there is good evidence that HMO provides reliable relative answers, particularly when comparing related molecular species. Thus, HMO can be used to rationalize electronic structure in π-systems, aromaticity, and the shapes of simple molecular orbitals. Experimentalists still use HMO to gain insight into subtle electronic interactions for interpretation of UV and photoelectron spectra. Herein, it will be shown that one can use graph theory to streamline HMO computational efforts and to arrive...
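The graph-theoretic core of HMO is that the orbital energies E = α + xβ follow from the eigenvalues x of the molecule's adjacency matrix alone, so the calculation reduces to linear algebra on the molecular graph:

```python
import numpy as np

def huckel_levels(adjacency):
    """Hückel MO energy coefficients from the molecular graph: the
    eigenvalues x of the adjacency matrix give orbital energies
    E = alpha + x * beta, sorted from most bonding downward."""
    A = np.asarray(adjacency, dtype=float)
    return np.sort(np.linalg.eigvalsh(A))[::-1]

# butadiene as a 4-carbon chain: x = +/-1.618, +/-0.618
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
x = huckel_levels(A)
```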
Improving calorimeter resolution using temperature compensation calculations
Smiga, Joseph; Purschke, Martin
2017-01-01
The sPHENIX experiment is an upgrade of the existing PHENIX apparatus at the Relativistic Heavy Ion Collider (RHIC). The new detector improves upon measurements of various physical processes, such as jets of particles created during heavy-ion collisions. Prototypes of various calorimeter components were tested at the Fermilab Test Beam Facility (FTBF). This analysis compensates for the effects of temperature drift in the silicon photomultipliers (SiPMs): temperature data were used to calculate an appropriate compensation factor. The analysis will improve the achievable resolution and will also determine how accurately the temperature must be controlled in the final experiment, improving the performance of the calorimeters in the sPHENIX experiment. This project was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships Program (SULI).
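A first-order version of such a compensation factor divides out the linear gain drift with temperature. The coefficient below is a placeholder, not the value measured in this analysis:

```python
def compensate_signal(raw_signal, temp_c, ref_temp_c=25.0,
                      gain_coeff_per_c=-0.03):
    """First-order SiPM gain correction: the gain drifts roughly
    linearly with temperature, so divide the raw signal by
    (1 + c * dT) to refer it back to the reference temperature."""
    dt = temp_c - ref_temp_c
    return raw_signal / (1.0 + gain_coeff_per_c * dt)

corrected = compensate_signal(97.0, 26.0)   # 1 degC warm, gain down 3%
```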
Thermal calculations of underground oil pipelines
Directory of Open Access Journals (Sweden)
Moiseev Boris
2017-01-01
Operation of oil pipelines in frozen soil causes heat exchange between the pipeline and the soil and forms a melt zone, which leads to deformation of pipelines. The construction and operating conditions of oil pipelines are closely tied to their temperature regime. It is therefore necessary to know the laws governing the formation of thawing halos around oil pipelines. Elucidating these laws and determining the optimal conditions for pipeline installation during construction in the permafrost areas of the northern Tyumen region is thus a very urgent task. The authors developed an algorithm and a computer program for constructing the temperature field of the frozen soil. Several problems were solved based on the obtained dependences, and the corresponding dependence graphs were constructed. The research and calculations performed on underground oil pipeline construction allowed the authors to give recommendations aimed at increasing the reliability of oil pipelines.
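The authors' program handles the full frozen-soil problem; as a much simpler orientation, the steady-state temperature field around a warm buried pipe below an isothermal ground surface has a classical line-source (image-method) solution. A minimal sketch with illustrative parameters (not the paper's algorithm):

```python
import math

# Steady-state soil temperature around a buried warm pipeline, using the
# classical image/line-source solution for a pipe of radius R0 at depth H
# below an isothermal surface.  Parameters are illustrative; the paper's
# own model (frozen soil with a moving thaw boundary) is more involved.
K_SOIL = 1.8    # soil thermal conductivity [W/(m*K)] (assumed)
T_SURF = -5.0   # ground-surface temperature [degC]
T_PIPE = 40.0   # pipe-wall (oil) temperature [degC]
H = 1.5         # burial depth of the pipe axis [m]
R0 = 0.3        # pipe outer radius [m]

# Heat loss per unit length from the shape-factor formula (valid for H >> R0):
Q_LIN = 2 * math.pi * K_SOIL * (T_PIPE - T_SURF) / math.log(2 * H / R0)

def soil_temperature(x, y):
    """Temperature at horizontal offset x [m] and depth y [m] (y > 0 down)."""
    d_pipe = math.hypot(x, y - H)    # distance to the pipe axis
    d_image = math.hypot(x, y + H)   # distance to the mirror-image sink
    return T_SURF + Q_LIN / (2 * math.pi * K_SOIL) * math.log(d_image / d_pipe)
```

The 0 degC isotherm of this field gives a first rough estimate of the thawing halo around the pipe.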
Angular size-redshift: Experiment and calculation
Amirkhanyan, V. R.
2014-10-01
This paper makes a further attempt to clarify the nature of the Euclidean behavior of the boundary in the angular size-redshift cosmological test. It is shown experimentally that this behavior can be explained by selection driven by the anisotropic morphology and anisotropic radiation of extended radio sources. A catalogue of extended radio sources with minimum flux densities of about 0.01 Jy at 1.4 GHz was compiled for conducting the test. Without assuming size evolution, agreement between experiment and calculation was obtained both in the ΛCDM model (Ωm = 0.27, Ωv = 0.73) and in the Friedmann model (Ω = 0.1).
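The test compares observed angular sizes against the prediction theta(z) = L / d_A(z), where d_A is the angular-diameter distance. A hedged sketch for the flat ΛCDM parameters quoted in the abstract (H0 is an assumed value; the abstract does not quote one):

```python
import numpy as np

# Angular size vs. redshift for a fixed proper length in flat LambdaCDM
# (Omega_m = 0.27, Omega_v = 0.73, as in the abstract).
#   theta(z) = L / d_A(z),
#   d_A(z) = [c / (H0 (1+z))] * int_0^z dz'/E(z'),
#   E(z)   = sqrt(Om (1+z)^3 + Ov).
C_KMS = 299792.458
H0 = 70.0           # km/s/Mpc (assumed; not quoted in the abstract)
OM, OV = 0.27, 0.73

def angular_size(z, length_mpc=0.1, n=2000):
    zs = np.linspace(0.0, z, n)
    integrand = 1.0 / np.sqrt(OM * (1.0 + zs) ** 3 + OV)
    # trapezoidal integration of dz/E(z)
    comoving = (C_KMS / H0) * float(
        ((integrand[:-1] + integrand[1:]) / 2 * np.diff(zs)).sum())
    d_a = comoving / (1.0 + z)      # angular-diameter distance [Mpc]
    return length_mpc / d_a          # apparent angular size [rad]
```

The characteristic minimum of theta(z) near z ≈ 1.6 in this model is what any Euclidean-looking boundary in the data must be disentangled from.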
Modulated structure calculated for superconducting hydrogen sulfide
Energy Technology Data Exchange (ETDEWEB)
Majumdar, Arnab; Tse, John S.; Yao, Yansun [Department of Physics and Engineering Physics, University of Saskatchewan, Saskatoon, SK (Canada)
2017-09-11
Compression of hydrogen sulfide using first principles metadynamics and molecular dynamics calculations revealed a modulated structure with high proton mobility which exhibits a diffraction pattern matching well with experiment. The structure consists of a sublattice of rectangular meandering SH{sup -} chains and molecular-like H{sub 3}S{sup +} stacked alternately in tetragonal and cubic slabs forming a long-period modulation. The novel structure offers a new perspective on the possible origin of the superconductivity at very high temperatures in which the conducting electrons in the SH chains are perturbed by the fluxional motions of the H{sub 3}S resulting in strong electron-phonon coupling. (copyright 2017 Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)
Spontaneous Radiation Background Calculation for LCLS
Reiche, Sven
2004-01-01
The intensity of undulator radiation, not amplified by the FEL interaction, can be larger than the maximum FEL signal in the case of an X-ray FEL. In the commissioning of a SASE FEL it is essential to extract an amplified signal early to diagnose possible misalignment of undulator modules or errors in the undulator field strength. We developed a numerical code to calculate the radiation pattern at any position behind a multi-segmented undulator with arbitrary spacing and field profiles. The output can be run through numerical spatial and frequency filters to model the radiation beam transport and diagnostics. In this presentation we estimate the expected background signal for the FEL diagnostics and at what point along the undulator the FEL signal can be separated from the background. We also discuss how much information on the undulator field and alignment can be obtained from the incoherent radiation signal itself.
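Both the spontaneous background and the FEL signal sit at the on-axis undulator resonance, so the resonance condition fixes where in the spectrum the filters must act. A hedged sketch with LCLS-like but purely illustrative parameters:

```python
import math

# On-axis undulator resonance condition:
#   lambda = lambda_u / (2 gamma^2) * (1 + K^2 / 2).
# The numbers below are LCLS-like but illustrative, not taken from the paper.
LAMBDA_U = 0.03      # undulator period [m] (assumed)
K = 3.5              # undulator parameter, dimensionless (assumed)
E_BEAM_GEV = 13.6    # electron beam energy [GeV] (assumed)
M_E_GEV = 0.000511   # electron rest energy [GeV]

gamma = E_BEAM_GEV / M_E_GEV
wavelength = LAMBDA_U / (2.0 * gamma ** 2) * (1.0 + K ** 2 / 2.0)
# ~1.5e-10 m, i.e. hard X-rays of about 1.5 Angstrom
```

A frequency filter centered on this wavelength passes the (narrow-band) FEL line while rejecting most of the broadband spontaneous background.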
COSTS CALCULATION OF TARGET COSTING METHOD
Directory of Open Access Journals (Sweden)
Sebastian UNGUREANU
2014-06-01
The cost information system plays an important role in every organization's decision-making process. An important task of management is ensuring control over operations, processes, sectors, and, not least, costs. Although several control systems (production control, quality control, etc.) contribute to achieving an organization's objectives, the cost information system is important because it monitors the results of the others. Detailed analysis of costs, calculation of production costs, quantification of losses, and estimation of work efficiency provide a solid basis for financial control. Knowledge of costs is a decisive factor in decision making and in planning future activities. Managers are concerned with the costs that will arise in the future, since their level underpins supply and production decisions as well as pricing policy. An important factor is the efficiency of the cost information system, so that the information it provides is genuinely useful for decisions and planning.
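The arithmetic at the heart of target costing is to work backward from the market: the allowable (target) cost is the expected selling price minus the required margin, and the difference from the current estimated cost is the cost-reduction goal. A minimal sketch with illustrative figures:

```python
# Minimal sketch of the target-costing logic: the allowable cost is derived
# from the market price and the required margin, then compared with the
# current estimated cost to obtain the cost-reduction goal.
# All figures are illustrative.
def target_cost(market_price, margin_rate):
    """Allowable cost = price * (1 - required margin rate)."""
    return market_price * (1.0 - margin_rate)

def cost_gap(market_price, margin_rate, estimated_cost):
    """How much cost must be engineered out to hit the target."""
    return estimated_cost - target_cost(market_price, margin_rate)
```

For example, a product expected to sell at 100 with a required 20% margin has a target cost of 80; if it currently costs 95 to make, 15 must be designed out.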
Parallelizing Gaussian Process Calculations in R
Directory of Open Access Journals (Sweden)
Christopher J. Paciorek
2015-02-01
We consider parallel computation for Gaussian process calculations to overcome computational and memory constraints on the size of datasets that can be analyzed. Using a hybrid parallelization approach that uses both threading (shared memory) and message-passing (distributed memory), we implement the core linear algebra operations used in spatial statistics and Gaussian process regression in an R package called bigGP that relies on C and MPI. The approach divides the covariance matrix into blocks such that the computational load is balanced across processes while communication between processes is limited. The package provides an API enabling R programmers to implement Gaussian process-based methods by using the distributed linear algebra operations without any C or MPI coding. We illustrate the approach and software by analyzing an astrophysics dataset with n = 67,275 observations.
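The "core linear algebra" being distributed is the usual Cholesky-based GP machinery: factor the covariance once, then solve triangular systems. A single-process sketch of that pipeline (in Python with an illustrative kernel and data; bigGP itself is an R package that blocks these same operations over MPI):

```python
import numpy as np

# Single-process sketch of the Gaussian-process linear algebra that bigGP
# distributes: build the covariance, Cholesky-factor it (the O(n^3) step
# whose blocks are spread over MPI processes), then solve triangular
# systems for the predictive mean.  Kernel and data are illustrative.
def sq_exp_kernel(x1, x2, scale=1.0, length=0.3):
    d = x1[:, None] - x2[None, :]
    return scale ** 2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise_var=1e-2):
    K = sq_exp_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                       # dominant cost: O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return sq_exp_kernel(x_test, x_train) @ alpha   # predictive mean

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x)
pred = gp_predict(x, y, x)
```

It is exactly the Cholesky factorization and triangular solves that stop scaling on one machine, which motivates the block-distributed layout the abstract describes.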
Shell model calculations for exotic nuclei
Energy Technology Data Exchange (ETDEWEB)
Brown, B.A. (Michigan State Univ., East Lansing, MI (USA)); Warburton, E.K. (Brookhaven National Lab., Upton, NY (USA)); Wildenthal, B.H. (New Mexico Univ., Albuquerque, NM (USA). Dept. of Physics and Astronomy)
1990-02-01
In this paper we review the progress of the shell-model approach to understanding the properties of light exotic nuclei (A < 40). By "shell model" we mean the consistent and large-scale application of the classic methods discussed, for example, in the book of de-Shalit and Talmi. Modern calculations incorporate as many of the important configurations as possible and make use of realistic effective interactions for the valence nucleons. Properties such as the nuclear densities depend on the mean-field potential, which is usually treated separately from the valence interaction. We will discuss results for radii which are based on a standard Hartree-Fock approach with Skyrme-type interactions.
Calculating Outsourcing Strategies and Trials of Strength
DEFF Research Database (Denmark)
Christensen, Mark; Skærbæk, Peter; Tryggestad, Kjell
. The alternative option was an immediate outsourcing strategy with facility services being the object of large cross-functional contracts for all Danish military establishments. By succeeding in presenting ‘internal optimization’ as an outsourcing option (as opposed to the usual ‘make’ option) this case...... demonstrates the power of projects and their use of accounting calculation. We study how the two options emerged and were valued differently by the supra-national outsourcing program and the local Defense projects over 22 years and how that valuation process involved accounting. Drawing on Actor-Network Theory...... outsourcing strategies during a series of trials of strength, 2. develops the concept of ‘trial of strength’ for accounting and organization research by showing how ‘the rules of the game’ for the trials of strength can become challenged and controversial, 3. shows that, in addition to the pervasive role...
Marginal Loss Calculations for the DCOPF
Energy Technology Data Exchange (ETDEWEB)
Eldridge, Brent [Federal Energy Regulatory Commission, Washington, DC (United States); Johns Hopkins Univ., Baltimore, MD (United States); O' Neill, Richard P. [Federal Energy Regulatory Commission, Washington, DC (United States); Castillo, Andrea R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-12-05
The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper will simplify many aspects of practical electric dispatch implementations in use today, but will include sufficient detail to demonstrate a few points with regard to the handling of losses.
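The effect of marginal (rather than average) losses on locational prices can be seen in a two-bus worked example: with quadratic losses, delivering one more MW at the remote bus requires more than one MW of extra generation, so the remote price carries a marginal-loss markup. A hedged sketch with illustrative numbers (not a formulation from the paper):

```python
import math

# Two-bus illustration of marginal losses in a DCOPF setting: a generator
# at bus 1 (cost c1) serves demand d at bus 2 over a line with quadratic
# losses ~ r * f^2.  All numbers are illustrative.
c1 = 20.0       # generator offer at bus 1 [$/MWh]
d = 100.0       # demand at bus 2 [MW]
r = 0.0005      # quadratic loss coefficient [1/MW]

# Power balance at bus 2: f - r*f**2 = d.  Solve for the sending-end flow f.
f = (1.0 - math.sqrt(1.0 - 4.0 * r * d)) / (2.0 * r)

# Differentiating the balance gives df/dd = 1/(1 - 2*r*f), so the price at
# bus 2 is the bus-1 price marked up by the MARGINAL loss rate 2*r*f:
lmp2 = c1 / (1.0 - 2.0 * r * f)
# f is about 105.6 MW (5.6 MW lost) and lmp2 about 22.4 $/MWh, above c1.
```

This is the mechanism by which a loss-aware DCOPF makes generation near demand centers relatively cheaper than remote generation.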
Improving the accuracy of dynamic mass calculation
Directory of Open Access Journals (Sweden)
Oleksandr F. Dashchenko
2015-06-01
With the acceleration of freight transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can only be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determining the weight of moving transport is only possible through appropriate processing of the sensor signal. The aim of the research is to develop a methodology for weighing freight rolling stock that increases the accuracy of dynamic mass measurement, in particular for a wagon in motion. In addition to time-series methods, preliminary filtering is used to improve the accuracy of the calculation. The results of the simulation are presented.
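The basic signal-processing idea can be sketched on synthetic data: the load cell under a moving wagon sees the static weight plus low-frequency bounce and sensor noise, and a simple low-pass (moving-average) filter followed by averaging recovers the static mass. The signal model and coefficients below are illustrative, not the paper's:

```python
import numpy as np

# Sketch of dynamic-mass estimation: simulate a strain-gauge signal for a
# moving wagon (static weight + bounce oscillation + noise), low-pass it
# with a moving average spanning one bounce period, then average.
# All signal parameters are illustrative.
rng = np.random.default_rng(0)
fs = 1000.0                        # sample rate [Hz]
t = np.arange(0.0, 2.0, 1.0 / fs)
true_weight = 25_000.0             # static mass on the sensor [kg]
signal = (true_weight
          + 800.0 * np.sin(2.0 * np.pi * 6.0 * t)   # wagon bounce at 6 Hz
          + rng.normal(0.0, 200.0, t.size))         # sensor noise

window = int(fs / 6.0)             # one bounce period per averaging window
kernel = np.ones(window) / window
filtered = np.convolve(signal, kernel, mode="valid")
estimate = filtered.mean()
```

Matching the filter window to the dominant oscillation period suppresses the bounce almost completely, leaving an estimate close to the static weight.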
Massively parallel self-consistent-field calculations
Energy Technology Data Exchange (ETDEWEB)
Tilson, J.L.
1994-10-29
The advent of supercomputers with many computational nodes, each with its own independent memory, makes possible extremely fast computations. The author's work, as part of the US High Performance Computing and Communications Program (HPCCP), is focused on the development of electronic structure techniques for the solution of Grand Challenge-size molecules containing hundreds of atoms. These efforts have resulted in a fully scalable Direct-SCF program that is portable and efficient. This code, named NWCHEM, is built around a distributed-data model. The distributed data are managed by a software package called Global Arrays, developed within the HPCCP. Performance results are presented for Direct-SCF calculations of interest to the consortium.
Calculations of superconducting parametric amplifiers performances
Goto, T.; Takeda, M.; Saito, S.; Shimakage, H.
2017-07-01
A superconducting parametric amplifier is an electromagnetic wave amplifier with high-quality characteristics such as a wide bandwidth, an extremely low noise, and a high dynamic range. In this paper, we report estimations of the characteristics of a YBCO superconducting parametric amplifier. The YBCO thin films were deposited on an MgO substrate by a pulsed laser deposition method. Based on the measured YBCO thin film parameters, theoretical calculations were implemented to evaluate the kinetic inductance nonlinearities and parametric gains. The nonlinearity of the YBCO thin film was estimated to be stronger than that of a single-crystal NbTiN thin film. This indicates that the YBCO parametric amplifier has the potential to realize an amplifier with high parametric gain. It is also expected that it could operate at high frequencies, at high temperatures, and at low applied currents.
Zero energy scattering calculation in Euclidean space
Energy Technology Data Exchange (ETDEWEB)
Carbonell, J. [Institut de Physique Nucléaire, Université Paris-Sud, IN2P3-CNRS, 91406 Orsay Cedex (France); Karmanov, V.A., E-mail: karmanov@sci.lebedev.ru [Lebedev Physical Institute, Leninsky Prospekt 53, 119991 Moscow (Russian Federation)
2016-03-10
We show that the Bethe–Salpeter equation for the scattering amplitude in the limit of zero incident energy can be transformed into a purely Euclidean form, as is the case for the bound states. The decoupling between Euclidean and Minkowski amplitudes is only possible for zero energy scattering observables and allows determining the scattering length from the Euclidean Bethe–Salpeter amplitude. Such a possibility strongly simplifies the numerical solution of the Bethe–Salpeter equation and suggests an alternative way to compute the scattering length in Euclidean lattice calculations without using the Lüscher formalism. The derivations contained in this work were performed for scalar particles and a one-boson exchange kernel. They can be generalized to the fermion case and more involved interactions.
Improved algorithm for calculating the Chandrasekhar function
Jablonski, A.
2013-02-01
Theoretical models of electron transport in condensed matter require an effective source of the Chandrasekhar H(x,omega) function. A code providing the H(x,omega) function has to be both accurate and very fast. The current revision of the code published earlier [A. Jablonski, Comput. Phys. Commun. 183 (2012) 1773] decreased the running time, averaged over different pairs of arguments x and omega, by a factor of more than 20. The decrease of the running time in the range of small values of the argument x, less than 0.05, is even more pronounced, reaching a factor of 30. The accuracy of the current code is not affected, and is typically better than 12 decimal places.
New version program summary:
Program title: CHANDRAS_v2
Catalogue identifier: AEMC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 976
No. of bytes in distributed program, including test data, etc.: 11416
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Any computer with a Fortran 90 compiler
Operating system: Windows 7, Windows XP, Unix/Linux
RAM: 0.7 MB
Classification: 2.4, 7.2
Catalogue identifier of previous version: AEMC_v1_0
Journal reference of previous version: Comput. Phys. Commun. 183 (2012) 1773
Does the new version supersede the old program: Yes
Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter.
Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is edited by mixing these two
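For orientation, the H-function for isotropic scattering can be computed by fixed-point iteration on its stable reciprocal integral form; the sketch below is a simple reference implementation of that classical approach, not the optimized Fortran algorithm the record describes:

```python
import numpy as np

# Chandrasekhar H-function for isotropic scattering with single-scattering
# albedo omega < 1, via fixed-point iteration on the stable reciprocal form
#   1/H(mu) = sqrt(1 - omega) + (omega/2) * int_0^1 mu' H(mu')/(mu + mu') dmu'.
# A simple reference implementation, far slower than the published code.
def chandrasekhar_h(x, omega, n=64, tol=1e-12, max_iter=1000):
    nodes, weights = np.polynomial.legendre.leggauss(n)
    mu = 0.5 * (nodes + 1.0)          # Gauss-Legendre nodes mapped to (0, 1)
    w = 0.5 * weights
    s = np.sqrt(1.0 - omega)
    h = np.ones(n)
    for _ in range(max_iter):
        integral = ((w * mu * h)[None, :]
                    / (mu[:, None] + mu[None, :])).sum(axis=1)
        h_new = 1.0 / (s + 0.5 * omega * integral)
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    # evaluate at the requested argument x using the converged h on the nodes
    integral_x = float(np.sum(w * mu * h / (x + mu)))
    return 1.0 / (s + 0.5 * omega * integral_x)
```

The reciprocal form converges reliably for omega < 1; achieving 10+ correct decimal places at competitive speed is exactly what motivates the two mixed algorithms of the published code.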
Calculating polaron mobility in halide perovskites
Frost, Jarvist Moore
2017-11-01
Lead halide perovskite semiconductors are soft, polar materials. The strong driving force for polaron formation (the dielectric electron-phonon coupling) is balanced by the light band effective masses, leading to a strongly-interacting large polaron. A first-principles prediction of mobility would help understand the fundamental mobility limits. Theories of mobility need to consider the polaron (rather than free-carrier) state due to the strong interactions. In this material we expect that at room temperature polar-optical phonon mode scattering will dominate and so limit mobility. We calculate the temperature-dependent polaron mobility of hybrid halide perovskites by variationally solving the Feynman polaron model with the finite-temperature free energies of Ōsaka. This model considers a simplified effective-mass band structure interacting with a continuum dielectric of characteristic response frequency. We parametrize the model fully from electronic-structure calculations. In methylammonium lead iodide at 300 K we predict electron and hole mobilities of 133 and 94 cm2V-1s-1 , respectively. These are in acceptable agreement with single-crystal measurements, suggesting that the intrinsic limit of the polaron charge carrier state has been reached. Repercussions for hot-electron photoexcited states are discussed. As well as mobility, the model also exposes the dynamic structure of the polaron. This can be used to interpret impedance measurements of the charge-carrier state. We provide the phonon-drag mass renormalization and scattering time constants. These could be used as parameters for larger-scale device models and band-structure dependent mobility simulations.