The new pooled cohort equations risk calculator
DEFF Research Database (Denmark)
Preiss, David; Kristensen, Søren L
2015-01-01
total cardiovascular risk score. During development of joint guidelines released in 2013 by the American College of Cardiology (ACC) and American Heart Association (AHA), the decision was taken to develop a new risk score. This resulted in the ACC/AHA Pooled Cohort Equations Risk Calculator. This risk...... disease and any measure of social deprivation. An early criticism of the Pooled Cohort Equations Risk Calculator has been its alleged overestimation of ASCVD risk which, if confirmed in the general population, is likely to result in statin therapy being prescribed to many individuals at lower risk than...
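Risk equations of this family follow the usual Cox-model form: a linear predictor built from (log-transformed) risk factors, converted to 10-year risk via a baseline survival. A minimal sketch with hypothetical coefficients and means, NOT the published ACC/AHA Pooled Cohort values:

```python
import math

def cox_ten_year_risk(terms, coefs, mean_lp, s0):
    """Generic Cox-style 10-year risk: 1 - S0 ** exp(lp - mean_lp), where
    lp is the linear predictor (sum of coefficient * covariate terms) and
    S0 is the baseline 10-year survival."""
    lp = sum(c * t for c, t in zip(coefs, terms))
    return 1.0 - s0 ** math.exp(lp - mean_lp)

# Hypothetical weights for ln(age), ln(SBP), ln(HDL) -- illustrative only.
coefs = [12.0, 0.11, -2.6]
terms = [math.log(55), math.log(130), math.log(50)]
risk = cox_ten_year_risk(terms, coefs, mean_lp=37.0, s0=0.9665)
print(f"illustrative 10-year risk: {risk:.1%}")
```

Because the covariates enter through the exponent, small coefficient or calibration errors scale risk multiplicatively, which is one way systematic overestimation of the kind critics allege can arise.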
Recommendations for Insulin Dose Calculator Risk Management
2014-01-01
Several studies have shown the usefulness of an automated insulin dose bolus advisor (BA) in achieving improved glycemic control for insulin-using diabetes patients. Although regulatory agencies have approved several BAs over the past decades, these devices are not standardized in their approach to dosage calculation and include many features that may introduce risk to patients. Moreover, there is no single standard of care for diabetes worldwide and no guidance documents for BAs, specifically. Given the emerging and more stringent regulations on software used in medical devices, the approval process is becoming more difficult for manufacturers to navigate, with some manufacturers opting to remove BAs from their products altogether. A comprehensive literature search was performed, including publications discussing: diabetes BA use and benefit, infusion pump safety and regulation, regulatory submissions, novel BAs, and recommendations for regulation and risk management of BAs. Also included were country-specific and international guidance documents for medical device, infusion pump, medical software, and mobile medical application risk management and regulation. No definitive worldwide guidance exists regarding risk management requirements for BAs, specifically. However, local and international guidance documents for medical devices, infusion pumps, and medical device software offer guidance that can be applied to this technology. In addition, risk management exercises that are algorithm-specific can help prepare manufacturers for regulatory submissions. This article discusses key issues relevant to BA use and safety, and recommends risk management activities incorporating current research and guidance. PMID:24876550
Risk Management – Managing Risks, not Calculating Them
Kostov, Phillip; Lingard, John
2004-01-01
The expected utility approach to decision making advocates a probability vision of the world and labels any deviation from it 'irrational'. This paper reconsiders the rationality argument and argues that calculating risks is not a viable strategy in an uncertain world. Alternative strategies not only save considerable cognitive and computational resources, but are also more 'rational' even under the restricted definition of rationality applied by expected utility theorists. The alternative d...
CALCULATING ECONOMIC RISK AFTER HANFORD CLEANUP
Energy Technology Data Exchange (ETDEWEB)
Scott, M.J.
2003-02-27
Since late 1997, researchers at the Hanford Site have been engaged in the Groundwater Protection Project (formerly, the Groundwater/Vadose Zone Project), developing a suite of integrated physical and environmental models and supporting data to trace the complex path of Hanford legacy contaminants through the environment for the next thousand years, and to estimate corresponding environmental, human health, economic, and cultural risks. The linked set of models and data is called the System Assessment Capability (SAC). The risk mechanism for economics consists of "impact triggers" (sequences of physical and human behavior changes in response to, or resulting from, human health or ecological risks), and processes by which particular trigger mechanisms induce impacts. Economic impacts stimulated by the trigger mechanisms may take a variety of forms, including changes in either costs or revenues for economic sectors associated with the affected resource or activity. An existing local economic impact model was adapted to calculate the resulting impacts on output, employment, and labor income in the local economy (the Tri-Cities Economic Risk Model, or TCERM). The SAC researchers ran a test suite of 25 realization scenarios for future contamination of the Columbia River after site closure for a small subset of the radionuclides and hazardous chemicals known to be present in the environment at the Hanford Site. These scenarios of potential future river contamination were analyzed in TCERM. Although the TCERM model is sensitive to river contamination under a reasonable set of assumptions concerning reactions of the authorities and the public, the scenarios show low enough future contamination that the impacts on the local economy are small.
Calculation of Expected Shortfall for Measuring Risk and Its Applications
Institute of Scientific and Technical Information of China (English)
阎春宁; 余鹏; 黄养新
2005-01-01
Expected shortfall (ES) is a new method to measure market risk. In this paper, an example was presented to illustrate that the ES is coherent but value-at-risk (VaR) is not coherent. Three formulas for calculating the ES based on the historical simulation method, normal method and GARCH method were derived. Further, a numerical experiment on optimizing a portfolio using ES was provided.
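For the historical-simulation case, both measures fall out of the sorted loss sample directly: VaR is the empirical quantile and ES is the mean of the tail beyond it. A minimal sketch (function and variable names are ours, not the paper's):

```python
def var_es_historical(returns, alpha=0.95):
    """Historical-simulation VaR and ES at confidence level alpha.
    Losses are negated returns; VaR is the empirical alpha-quantile of
    losses, and ES is the average loss at or beyond VaR, so ES >= VaR."""
    losses = sorted(-r for r in returns)
    k = min(int(alpha * len(losses)), len(losses) - 1)
    tail = losses[k:]
    return losses[k], sum(tail) / len(tail)

returns = [0.01, -0.02, 0.005, -0.05, 0.02, -0.01, 0.03, -0.03, 0.0, 0.015]
var, es = var_es_historical(returns, alpha=0.8)
```

Averaging over the whole tail rather than reading off a single quantile is what makes ES subadditive, and hence coherent, where VaR is not.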
Risk calculations in the manufacturing technology selection process
DEFF Research Database (Denmark)
Farooq, S.; O'Brien, C.
2010-01-01
and supports an industrial manager in achieving objective and comprehensive decisions regarding selection of a manufacturing technology. Originality/value - The paper explains the process of risk calculation in manufacturing technology selection by dividing the decision-making environment into manufacturing...... and supply chain environment. The evaluation of a manufacturing technology considering supply chain opportunities and threats provides a broader perspective to the technology evaluation process. The inclusion of supply chain dimension in technology selection process facilitates an organisation to select...... a manufacturing technology not only according to its own requirements, but also according to the interest of its constituent supply chain....
How Suitable Are Registry Data for Recurrence Risk Calculations?
DEFF Research Database (Denmark)
Ellesøe, Sabrina Gade; Jensen, Anders Boeck; Ängquist, Lars Henrik
2016-01-01
identifier in the Danish registries, thus enabling connection of information from several registries. Utilizing the CPR number, we identified Danish patients with familial CHD and reviewed each patient's file. We compared diagnoses from the registries with those manually assigned, which enabled calculation......BACKGROUND: Congenital heart disease (CHD) occurs in approximately 1% of all live births, and 3% to 8% of these have until now been considered familial cases, defined as the occurrence of two or more affected individuals in a family. The validity of CHD diagnoses in Danish administrative registry...... data has only been studied previously in highly selected patient populations. These studies identified high positive predictive values (PPVs) and recurrence risk ratios (RRRs-ratio between probabilities of CHD given family history of CHD and no family history). However, the RRR can be distorted...
Johnson, Cassandra; Campwala, Insiyah; Gupta, Subhas
2017-03-01
The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) created the Surgical Risk Calculator to allow physicians to offer patients a risk-adjusted 30-day surgical outcome prediction. This tool has not yet been validated in plastic surgery. A retrospective analysis of all plastic surgery-specific complications from a quality assurance database from September 2013 through July 2015 was performed. Patient preoperative risk factors were entered into the ACS Surgical Risk Calculator, and predicted outcomes were compared with actual morbidities. The difference in the average predicted complication rate versus the actual rate of complication within this population was examined. Within the study population of patients with complications (n=104), the calculator accurately predicted an above-average risk for 20.90% of serious complications. For surgical site infections, the average predicted risk for the study population was 3.30%; this prediction was proven only 24.39% accurate. The actual incidence of any complication within the 4924 patients treated in our plastic surgery practice from September 2013 through June 2015 was 1.89%. The most common plastic surgery complications include seroma, hematoma, dehiscence and flap-related complications. The ACS Risk Calculator does not present rates for these risks. While most frequent outcomes fall into general risk calculator categories, the difference in predicted versus actual complication rates indicates that this tool does not accurately predict outcomes in plastic surgery. The ACS Surgical Risk Calculator is not a valid tool for the field of plastic surgery without further research to develop accurate risk stratification tools.
[Risk factor calculator for medical underwriting of life insurers based on the PROCAM study].
Geritse, A; Müller, G; Trompetter, T; Schulte, H; Assmann, G
2008-06-01
For its electronic manual GEM, used to perform medical risk assessment in life insurance, SCOR Global Life Germany has developed an innovative and evidence-based calculator of the mortality risk depending on cardiovascular risk factors. The calculator contains several new findings regarding medical underwriting, which were gained from the analysis of the PROCAM (Prospective Cardiovascular Münster) study. For instance, in the overall consideration of all risk factors of a medically examined applicant, BMI is not an independent risk factor. Further, given sufficient information, the total extra mortality of a person no longer results from adding up the ratings for the single risk factors. In fact, this new approach of risk assessment considers the interdependencies between the different risk factors. The new calculator is expected to improve risk selection and standard acceptances will probably increase.
Basel II Approaches for the Calculation of the Regulatory Capital for Operational Risk
Directory of Open Access Journals (Sweden)
Ivana Valová
2011-01-01
Full Text Available The final version of the New Capital Accord, which includes operational risk, was released by the Basel Committee on Banking Supervision in June 2004. The article "Basel II approaches for the calculation of the regulatory capital for operational risk" is devoted to the issue of operational risk in credit financial institutions. The paper describes the methods of operational risk calculation and the advantages and disadvantages of the particular methods.
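Of the Basel II options, the Basic Indicator Approach is the simplest to state: the charge is a fixed fraction (alpha = 15%) of average positive annual gross income over the previous three years, with non-positive years excluded. A minimal sketch (function name is ours):

```python
def basic_indicator_capital(gross_income, alpha=0.15):
    """Basel II Basic Indicator Approach: capital charge equals alpha (15%)
    times the average of the last three years' positive annual gross income;
    years with zero or negative income are excluded from both numerator
    and denominator."""
    positive = [gi for gi in gross_income[-3:] if gi > 0]
    return alpha * sum(positive) / len(positive) if positive else 0.0
```

The Standardised Approach refines this by applying business-line-specific betas, while the Advanced Measurement Approaches let banks use internal loss models, which is where the trade-offs the article discusses arise.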
Directory of Open Access Journals (Sweden)
Josep Lupón
Full Text Available BACKGROUND: A combination of clinical and routine laboratory data with biomarkers reflecting different pathophysiological pathways may help to refine risk stratification in heart failure (HF). A novel calculator (BCN Bio-HF calculator) was developed, incorporating N-terminal pro B-type natriuretic peptide (NT-proBNP), a marker of myocardial stretch; high-sensitivity cardiac troponin T (hs-cTnT), a marker of myocyte injury; and high-sensitivity soluble ST2 (ST2), reflective of myocardial fibrosis and remodeling. METHODS: Model performance was evaluated using discrimination, calibration, and reclassification tools for 1-, 2-, and 3-year mortality. Ten-fold cross-validation with 1000 bootstrapping was used. RESULTS: The BCN Bio-HF calculator was derived from 864 consecutive outpatients (72% men) with mean age 68.2 ± 12 years (73%/27% New York Heart Association (NYHA) class I-II/III-IV, LVEF 36%, ischemic etiology 52.2%) and followed for a median of 3.4 years (305 deaths). After an initial evaluation of 23 variables, eight independent models were developed. The variables included in these models were age, sex, NYHA functional class, left ventricular ejection fraction, serum sodium, estimated glomerular filtration rate, hemoglobin, loop diuretic dose, β-blocker, angiotensin-converting enzyme inhibitor/angiotensin-2 receptor blocker and statin treatments, and hs-cTnT, ST2, and NT-proBNP levels. The calculator may run with the availability of none, one, two, or all three biomarkers. The calculated risk of death was significantly changed by additive biomarker data. The average C-statistic in cross-validation analysis was 0.79. CONCLUSIONS: A new HF risk calculator that incorporates available biomarkers reflecting different pathophysiological pathways allowed better individual prediction of death at 1, 2, and 3 years.
Directory of Open Access Journals (Sweden)
Chang Wook Jeong
Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors of PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, AUC was significantly higher for the SNUPC-RC (0.811) than for ERSPC-RC (0.768, p<0.001) and PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than Western risk calculators. Furthermore, it is easy
Yoon, Sungroh; Park, Man Sik; Choi, Hoon; Bae, Jae Hyun; Moon, Du Geon; Hong, Sung Kyu; Lee, Sang Eun; Park, Chanwang
2017-01-01
Purpose We developed the Korean Prostate Cancer Risk Calculator for High-Grade Prostate Cancer (KPCRC-HG) that predicts the probability of prostate cancer (PC) of Gleason score 7 or higher at the initial prostate biopsy in a Korean cohort (http://acl.snu.ac.kr/PCRC/RISC/). In addition, KPCRC-HG was validated and compared with internet-based Western risk calculators in a validation cohort. Materials and Methods Using a logistic regression model, KPCRC-HG was developed based on the data from 602 previously unscreened Korean men who underwent initial prostate biopsies. Using 2,313 cases in a validation cohort, KPCRC-HG was compared with the European Randomized Study of Screening for PC Risk Calculator for high-grade cancer (ERSPCRC-HG) and the Prostate Cancer Prevention Trial Risk Calculator 2.0 for high-grade cancer (PCPTRC-HG). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC) and calibration plots. Results PC was detected in 172 (28.6%) men, 120 (19.9%) of whom had PC of Gleason score 7 or higher. Independent predictors included prostate-specific antigen levels, digital rectal examination findings, transrectal ultrasound findings, and prostate volume. The AUC of the KPCRC-HG (0.84) was higher than that of the PCPTRC-HG (0.79, p<0.001). It is the first high-grade prostate cancer prediction model in Korea. It had higher predictive accuracy than PCPTRC-HG and showed similar performance with ERSPCRC-HG in a Korean population. This prediction model could help avoid unnecessary biopsy and reduce overdiagnosis and overtreatment in clinical settings. PMID:28046017
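Calculators of this kind apply a fitted logistic model at the point of care: the linear predictor is pushed through the sigmoid to give a biopsy-positive probability. A generic sketch with hypothetical weights, NOT the published KPCRC-HG coefficients:

```python
import math

def biopsy_risk(intercept, coefs, covariates):
    """Predicted probability from a fitted logistic regression model:
    sigmoid of the linear predictor b0 + sum(b_i * x_i)."""
    z = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights for ln(PSA), abnormal DRE (0/1), ln(prostate volume).
p = biopsy_risk(-2.0, [1.2, 0.8, -0.9], [math.log(6.5), 1, math.log(40)])
```

The AUC comparisons in the abstract then measure how well such predicted probabilities rank biopsy-positive above biopsy-negative men, while calibration plots check the probabilities' absolute accuracy.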
MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM
Directory of Open Access Journals (Sweden)
A. G. Korobeynikov
2015-05-01
Full Text Available Subject of research. The paper deals with a mathematical model for the assessment and calculation of information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here, information risks mean the danger of losses or damage resulting from the company's use of information technologies. Method. The solution is based on the ideology of the transport problem in stochastic statement, drawing on methods of mathematical modeling theory, graph theory, probability theory, and Markov chains. The mathematical model is created in several stages. At the initial stage, the capacity of different sites as a function of time is calculated on the basis of information received from the information and logistics system, the weight matrix is formed, and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities that resources are located at a concrete vertex as a function of time. At the third stage, the general probability of passing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. Information risk, as a function of time, is defined by multiplying the greatest possible damage by the general probability of passing the whole route. In this case information risk is measured in units of damage, corresponding to the monetary unit in which the information and logistics system operates. Main results. The operability of the presented mathematical model is shown on a concrete example of transportation of material resources where places of shipment and delivery, routes and their capacity, the greatest possible damage and admissible risk are specified. The calculations presented on a diagram showed
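The first-stage routing step is a standard shortest-path search, and the third stage is a plain product of per-leg probabilities scaled by the maximum damage. A compact sketch of both (the graph, probabilities, and damage figure are illustrative, not the paper's):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in a weighted digraph given
    as {node: [(neighbor, weight), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

graph = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 2.0)]}
dist = dijkstra(graph, "A")  # C is cheaper via B than directly

# Third-stage step: information risk = greatest possible damage times the
# probability of passing the whole route (product of per-leg probabilities).
route_prob = 1.0
for leg_prob in [0.95, 0.90, 0.98]:   # illustrative per-leg probabilities
    route_prob *= leg_prob
information_risk = 100_000 * route_prob  # damage in the system's monetary unit
```

In the paper's fuller treatment, the per-leg probabilities would come from solving the Kolmogorov equations along the chosen route rather than being fixed constants.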
RESRAD for Radiological Risk Assessment. Comparison with EPA CERCLA Tools - PRG and DCC Calculators
Energy Technology Data Exchange (ETDEWEB)
Yu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, J. -J. [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, S. [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-07-01
The purpose of this report is two-fold. First, the risk assessment methodology for both RESRAD and the EPA's tools is reviewed. This includes a review of the EPA's justification for using a dose-to-risk conversion factor to reduce the dose-based protective ARAR from 15 to 12 mrem/yr. Second, the models and parameters used in RESRAD and the EPA PRG and DCC Calculators are compared in detail, and the results are summarized and discussed. Although there are suites of software tools in the RESRAD family of codes and the EPA Calculators, the scope of this report is limited to the RESRAD (onsite) code for soil contamination and the EPA's PRG and DCC Calculators, also for soil contamination.
The Influence of Liquidity Risk on Value-at-Risk Calculations
Directory of Open Access Journals (Sweden)
Bor Bricelj
2013-01-01
Full Text Available In this article we implement liquidity in the standard value-at-risk framework. We incorporate the bid-ask spread into basic VaR models. We then test these models on three foreign markets and on a domestic one. We conclude that liquidity VaR models adequately measure market risk. On one hand, the liquidity VaR methodology represents advancement in market risk analysis, but on the other hand, those models are not yet robust enough to pass all backtests. Comparing the results between markets, we conclude that the results for the domestic market are comparable to those of foreign ones despite their size difference.
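The usual way to fold the bid-ask spread into VaR is an additive exogenous liquidity cost in the style of Bangia et al.; the abstract does not state which variant the authors use, so this is a sketch of the common form:

```python
import statistics

def liquidity_adjusted_var(var, spreads, z=1.96):
    """Add an exogenous liquidity cost to a plain VaR figure:
    LVaR = VaR + 0.5 * (mean spread + z * std of spread), with spreads
    expressed as relative bid-ask spreads of the position's value."""
    mu = statistics.fmean(spreads)
    sigma = statistics.pstdev(spreads)
    return var + 0.5 * (mu + z * sigma)

lvar = liquidity_adjusted_var(0.025, [0.010, 0.012, 0.008, 0.010])
```

The half-spread term reflects that a forced liquidation pays roughly half the quoted spread, and the z-scaled volatility term penalizes markets where the spread itself is unstable, which is exactly where thin domestic markets would be expected to differ from deep foreign ones.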
Manuel, Douglas G; Abdulaziz, Kasim E; Perez, Richard; Beach, Sarah; Bennett, Carol
2017-01-09
In the clinical setting, previous studies have shown personalized risk assessment and communication improves risk perception and motivation. We evaluated an online health calculator that estimated and presented six different measures of life expectancy/mortality based on a person's sociodemographic and health behavior profile. Immediately after receiving calculator results, participants were invited to complete an online survey that asked how informative and motivating they found each risk measure, whether they would share their results and whether the calculator provided information they need to make lifestyle changes. Over 80% of the 317 survey respondents found at least one of six healthy living measures highly informative and motivating, but there was moderate heterogeneity regarding which measures respondents found most informative and motivating. Overall, health age was most informative and life expectancy most motivating. Approximately 40% of respondents would share the results with their clinician (44%) or social networks (38%), although the information they would share was often different from what they found informative or motivational. Online personalized risk assessment allows for a more personalized communication compared to historic paper-based risk assessment to maximize knowledge and motivation, and people should be provided a range of risk communication measures that reflect different risk perspectives.
DSTiPE Algorithm for Fuzzy Spatio-Temporal Risk Calculation in Wireless Environments
Energy Technology Data Exchange (ETDEWEB)
Kurt Derr; Milos Manic
2008-09-01
Time and location data play a very significant role in a variety of factory automation scenarios, ranging from automated vehicles and robots, and their navigation, tracking, and monitoring, to optimization and security services. In addition, pervasive wireless capabilities combined with time and location information are enabling new applications in areas such as transportation systems, health care, elder care, military, emergency response, critical infrastructure, and law enforcement. A person or object in proximity to certain areas for specific durations of time may pose a risk hazard to themselves, to others, or to the environment. This paper presents DSTiPE, a novel fuzzy-based method for calculating the spatio-temporal risk that an object with wireless communications presents to the environment. The presented Matlab-based application for fuzzy spatio-temporal risk cluster extraction is verified on a diagonal vehicle movement example.
Risk Analysis of Reservoir Flood Routing Calculation Based on Inflow Forecast Uncertainty
Directory of Open Access Journals (Sweden)
Binquan Li
2016-10-01
Full Text Available Possible risks in reservoir flood control and regulation cannot be objectively assessed by deterministic flood forecasts, resulting in the probability of reservoir failure. We demonstrated a risk analysis of reservoir flood routing calculation accounting for inflow forecast uncertainty in a sub-basin of Huaihe River, China. The Xinanjiang model was used to provide deterministic flood forecasts, and was combined with the Hydrologic Uncertainty Processor (HUP) to quantify reservoir inflow uncertainty in probability density function (PDF) form. Furthermore, the PDFs of reservoir water level (RWL) and the risk rate of RWL exceeding a defined safety control level could be obtained. Results suggested that the median forecast (50th percentile) of HUP showed better agreement with observed inflows than the Xinanjiang model did in terms of the performance measures of flood process, peak, and volume. In addition, most observations (77.2%) were bracketed by the uncertainty band of the 90% confidence interval, with some small exceptions at high flows. Results proved that this framework of risk analysis could provide not only the deterministic forecasts of inflow and RWL, but also the fundamental uncertainty information (e.g., the 90% confidence band) for the reservoir flood routing calculation.
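Once the PDF of reservoir water level is represented by an ensemble of routed realizations, the risk rate of exceeding the safety control level is simply the exceedance frequency; a minimal sketch (the levels below are illustrative, not the paper's data):

```python
def exceedance_rate(levels, control_level):
    """Risk rate: fraction of simulated reservoir water levels
    that exceed the defined safety control level."""
    return sum(1 for x in levels if x > control_level) / len(levels)

# Illustrative routed water levels (m) from an uncertainty ensemble.
rate = exceedance_rate([27.1, 27.8, 28.4, 28.9, 29.3], 28.0)
```

With enough realizations this empirical fraction converges to the exceedance probability implied by the HUP-derived PDF.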
A selection method for the calculation of preliminary risk-based remediation goals
Energy Technology Data Exchange (ETDEWEB)
Mahoney, L.A.; Batey, J.C.; Pintenich, J.L. [Eckenfelder Inc., Nashville, TN (United States)
1995-12-31
In the process of deriving acceptable concentrations of chemical constituents (or preliminary risk-based remediation goals, PRGs) for hazardous and other waste sites based on the site risk assessment results, it may be necessary or desirable to select a subset of constituents to focus the remainder of the site activities including the feasibility study and possibly, remedial design and verification sampling. Use of a focused set of action or clean-up goals offers the benefits of targeting those site areas where efforts should be concentrated, and reducing the cost and complexity of clean-up and verification sampling. Although the federal Superfund risk assessment guidance provides methods by which to calculate PRGs, no information is given on how to select which chemicals PRGs should be generated for. A method for this selection is presented which establishes: the media of interest; the populations for which PRGs should be generated; the relevant exposure route(s) for a given population to be used in calculating PRGs; and the individual constituents for which PRGs should be estimated. To illustrate this selection process, remedial investigation (RI) data and a baseline risk assessment for a hazardous waste site in Mississippi were used. The media of interest were identified as surface water and sediment from a creek that is adjacent to the site, on-site surface water, and groundwater from the uppermost aquifer. Of the 45 constituents detected in site-related waters, this selection process resulted in 16 for which PRGs were calculated, which served to focus the subsequent feasibility study efforts.
O’Brien, Denzil
2016-01-01
Simple Summary This paper examines a number of methods for calculating injury risk for riders in the equestrian sport of eventing, and suggests that the primary locus of risk is the action of the horse jumping, and the jump itself. The paper argues that risk calculation should therefore focus first on this locus. Abstract All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps. PMID:26891334
Comparison of Value at Risk Calculation Models in Terms of Banks’ Capital Adequacy Ratio
Directory of Open Access Journals (Sweden)
Ahmet Bostancı
2014-07-01
Full Text Available Banks using advanced VaR models are expected to hold a lower amount subject to market risk (ASMR) than banks using simple VaR models, because they measure their risk relatively more accurately. The purpose of this study is to test the hypothesis that advanced VaR models, which measure risk better, result in a lower ASMR. In this study historical volatility, historical simulation, EWMA, GARCH(1,1), GARCH(1,1)-Bootstrap and GARCH(1,1)-GED models were used for VaR calculations. By backtesting the VaR measures, the model security factor h was identified and the ASMR was simulated. After the results were discussed for the real data sets, the same process was repeated with six randomly generated data sets to test the consistency of the results. According to the findings, the hypothesis that advanced VaR models like GARCH(1,1)-Bootstrap and GARCH(1,1)-GED provide a lower ASMR was rejected.
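The regulatory link between backtesting and the capital multiplier is the Basel traffic-light scheme over a 250-day window; a sketch of the standard zone boundaries (the paper's own security factor h may be defined differently):

```python
def basel_multiplier(exceptions):
    """Basel traffic-light multiplier for a 250-day VaR backtest:
    green zone (<= 4 exceptions) keeps the base factor of 3, the red
    zone (>= 10) raises it to 4, and the yellow zone adds a step-wise
    plus factor in between."""
    plus = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}
    if exceptions <= 4:
        return 3.0
    if exceptions >= 10:
        return 4.0
    return 3.0 + plus[exceptions]
```

This is why a "better" VaR model need not lower the capital charge: a model that produces fewer exceptions earns a smaller multiplier, but a model whose VaR figures are themselves larger can still end up with a higher ASMR, which is consistent with the study's rejected hypothesis.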
Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments
DEFF Research Database (Denmark)
Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen;
2015-01-01
Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how vulnerable it is to flood risk and how much can be gained by implementing e.g., climate change...... adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD based on a few extreme events by assuming a log-linear relationship between cost...... in the damage costs as a function of the return period. The shift occurs approximately at the 10 year return period and can perhaps be related to the design criteria for sewer systems. Further, it was tested if the EAD estimation could be simplified by assuming a single unit cost per flooded area. The results...
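Whatever cost model is chosen, the EAD is numerically the integral of damage over exceedance probability, approximated from a handful of simulated return periods; a minimal trapezoidal sketch (the return periods and damages below are illustrative):

```python
def expected_annual_damage(return_periods, damages):
    """EAD as the integral of damage over annual exceedance probability,
    approximated by the trapezoidal rule. return_periods must be in
    ascending order, with damages matched element-wise."""
    probs = [1.0 / t for t in return_periods]   # annual exceedance probabilities
    ead = 0.0
    for i in range(len(probs) - 1):
        ead += 0.5 * (damages[i] + damages[i + 1]) * (probs[i] - probs[i + 1])
    return ead
```

The shift in unit costs around the 10-year return period noted in the abstract matters here because the integral weights frequent, cheap events and rare, expensive events very differently.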
Namkung, Jessica M.; Fuchs, Lynn S.
2015-01-01
The purpose of this study was to examine the cognitive predictors of calculations and number line estimation with whole numbers and fractions. At-risk 4th-grade students (N = 139) were assessed on 7 domain-general abilities (i.e., working memory, processing speed, concept formation, language, attentive behavior, and nonverbal reasoning) and incoming calculation skill at the start of 4th grade. Then, they were assessed on whole-number and fraction calculation and number line estimation measure...
From Risk Models to Loan Contracts: Austerity as the Continuation of Calculation by Other Means
Directory of Open Access Journals (Sweden)
Pierre Pénet
2014-06-01
Full Text Available This article analyses how financial actors sought to minimise financial uncertainties during the European sovereign debt crisis by employing simulations as legal instruments of market regulation. We first contrast two roles that simulations can play in sovereign debt markets: ‘simulation-hypotheses’, which work as bundles of constantly updated hypotheses with the goal of better predicting financial risks; and ‘simulation-fictions’, which provide fixed narratives about the present with the purpose of postponing the revision of market risks. Using ratings reports published by Moody’s on Greece and European Central Bank (ECB regulations, we show that Moody’s stuck to a simulation-fiction and displayed rating inertia on Greece’s trustworthiness to prevent the destabilising effects that further downgrades would have on Greek borrowing costs. We also show that the multi-notch downgrade issued by Moody’s in June 2010 followed the ECB’s decision to remove ratings from its collateral eligibility requirements. Then, as regulators moved from ‘regulation through model’ to ‘regulation through contract’, ratings stopped functioning as simulation-fictions. Indeed, the conditions of the Greek bailout implemented in May 2010 replaced the CRAs’ models as the main simulation-fiction, which market actors employed to postpone the prospect of a Greek default. We conclude by presenting austerity measures as instruments of calculative governance rather than ideological compacts.
H. van Rhee (Henk); R. Suurmond (Robert)
2015-01-01
textabstractThis paper describes a method to convert meta-analytic results in (log) Odds Ratio to either Risk Ratio or Risk Difference. It has been argued that odds ratios are mathematically superior for meta-analysis, but risk ratios and risk differences are shown to be easier to interpret. Therefo
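The conversion the paper describes rests on a standard identity linking the two effect measures through the baseline (control-group) risk. A minimal sketch in Python, assuming a known baseline risk p0; the function names are ours, not the authors':

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Convert an odds ratio to a risk ratio given the control-group risk p0,
    via the identity RR = OR / (1 - p0 + p0 * OR)."""
    return odds_ratio / (1.0 - baseline_risk + baseline_risk * odds_ratio)

def or_to_rd(odds_ratio: float, baseline_risk: float) -> float:
    """Convert an odds ratio to a risk difference p1 - p0, where p1 is the
    treatment-group risk implied by the odds ratio."""
    p0 = baseline_risk
    p1 = (odds_ratio * p0) / (1.0 - p0 + odds_ratio * p0)
    return p1 - p0

# With a baseline risk of 10%, an odds ratio of 2 corresponds to a risk
# ratio of about 1.82, not 2 -- the divergence grows as p0 rises.
rr = or_to_rr(2.0, 0.1)
```

The dependence on p0 is exactly why the two measures cannot be interchanged mechanically in a meta-analysis: the same pooled odds ratio implies different risk ratios in populations with different baseline risks.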
Directory of Open Access Journals (Sweden)
Denzil O’Brien
2016-02-01
Full Text Available All horse-riding is risky. In competitive horse sports, eventing is considered the riskiest, and is often characterised as very dangerous. But based on what data? There has been considerable research on the risks and unwanted outcomes of horse-riding in general, and on particular subsets of horse-riding such as eventing. However, there can be problems in accessing accurate, comprehensive and comparable data on such outcomes, and in using different calculation methods which cannot compare like with like. This paper critically examines a number of risk calculation methods used in estimating risk for riders in eventing, including one method which calculates risk based on hours spent in the activity and in one case concludes that eventing is more dangerous than motorcycle racing. This paper argues that the primary locus of risk for both riders and horses is the jump itself, and the action of the horse jumping. The paper proposes that risk calculation in eventing should therefore concentrate primarily on this locus, and suggests that eventing is unlikely to be more dangerous than motorcycle racing. The paper proposes avenues for further research to reduce the likelihood and consequences of rider and horse falls at jumps.
Galarza-Delgado, Dionicio A; Azpiri-Lopez, Jose R; Colunga-Pedraza, Iris J; Cardenas-de la Garza, Jesus A; Vera-Pineda, Raymundo; Serna-Peña, Griselda; Arvizu-Rivera, Rosa I; Martinez-Moreno, Adrian; Wah-Suarez, Martin; Garza Elizondo, Mario A
2017-02-01
Variability of the 10-year cardiovascular (CV) risk predicted by the Framingham Risk Score (FRS) using lipids, FRS using body mass index (BMI), Reynolds Risk Score (RRS), QRISK2, Extended Risk Score-Rheumatoid Arthritis (ERS-RA), and the algorithm developed by the American College of Cardiology and the American Heart Association in 2013 (ACC/AHA 2013), applied according to the European League Against Rheumatism (EULAR) 2015/2016 update of its evidence-based recommendations for cardiovascular risk management in patients with rheumatoid arthritis (RA), has not been evaluated in Mexican mestizo patients. CV risk was predicted using six different risk calculators in 116 patients, aged 40-75, who fulfilled the ACR/EULAR 2010 classification criteria. Results were multiplied by 1.5 according to the EULAR 2015/2016 update. Global comparison of the risk predicted by all scales was done using the Friedman test, considering a P value of ≤0.05 as statistically significant. Individual comparison between the algorithms was made using the Wilcoxon signed-rank test, and a P value of ≤0.003 was considered statistically significant. The Friedman test showed significant differences among all the calculators (p ≤ 0.001). Median values of predicted 10-year CV risk were 11.02% (6.18-17.55) for FRS BMI; 8.47% (4.6-13.16) for FRS lipids; 5.55% (2.5-11.85) for QRISK2; 5% (3.1-8.65) for ERS-RA; 3.6% (1.5-9.3) for ACC/AHA 2013; and 1.5% (1.5-4.5) for RRS. ERS-RA showed no difference when compared against QRISK2 (p = 0.269). The CV risk calculators showed substantial variability among themselves and cannot be used interchangeably in RA patients.
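The global comparison step described above ranks each patient's predicted risks across the k calculators and tests whether the rank sums differ. A plain-Python sketch of the generic Friedman chi-square statistic (not the authors' code; the example scores are hypothetical):

```python
def friedman_statistic(scores):
    """Friedman chi-square statistic for n subjects x k related measurements.

    `scores` is a list of rows; each row holds one subject's predicted risk
    from each of the k calculators. Ranks within a row average over ties.
    """
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the run of tied values
            avg_rank = (i + j) / 2 + 1  # average 1-based rank for the tie run
            for m in range(i, j + 1):
                ranks[order[m]] = avg_rank
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # Friedman's chi-square approximation (no tie correction)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical predicted risks (%) for 3 patients from 3 calculators:
scores = [[11.0, 8.5, 5.6], [12.1, 9.0, 5.0], [10.5, 7.9, 6.1]]
q = friedman_statistic(scores)  # consistent ordering -> maximal Q for n=3, k=3
```

A large Q (referred to a chi-square distribution with k-1 degrees of freedom) indicates the calculators systematically order patients' risks differently in magnitude, as the study found.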
Fuchs, Lynn S.; Schumacher, Robin F.; Long, Jessica; Namkung, Jessica; Malone, Amelia S.; Wang, Amber; Hamlett, Carol L.; Jordan, Nancy C.; Siegler, Robert S.; Changas, Paul
2016-01-01
The purposes of this study were to (a) investigate the efficacy of a core fraction intervention program on understanding and calculation skill and (b) isolate the effects of different forms of fraction word-problem (WP) intervention delivered as part of the larger program. At-risk 4th graders (n = 213) were randomly assigned at the individual…
Peng, Peng; Namkung, Jessica M; Fuchs, Douglas; Fuchs, Lynn S; Patton, Samuel; Yen, Loulee; Compton, Donald L; Zhang, Wenjuan; Miller, Amanda; Hamlett, Carol
2016-12-01
The purpose of this study was to explore domain-general cognitive skills, domain-specific academic skills, and demographic characteristics that are associated with calculation development from first grade to third grade among young children with learning difficulties. Participants were 176 children identified with reading and mathematics difficulties at the beginning of first grade. Data were collected on working memory, language, nonverbal reasoning, processing speed, decoding, numerical competence, incoming calculations, socioeconomic status, and gender at the beginning of first grade and on calculation performance at four time points: the beginning of first grade, the end of first grade, the end of second grade, and the end of third grade. Latent growth modeling analysis showed that numerical competence, incoming calculation, processing speed, and decoding skills significantly explained the variance in calculation performance at the beginning of first grade. Numerical competence and processing speed significantly explained the variance in calculation performance at the end of third grade. However, numerical competence was the only significant predictor of calculation development from the beginning of first grade to the end of third grade. Implications of these findings for early calculation instructions among young at-risk children are discussed.
Directory of Open Access Journals (Sweden)
Pavlos A. Kassomenos
2009-02-01
Full Text Available The objective of the current study was the development of a reliable modeling platform to calculate in real time the personal exposure and the associated health risk for filling station employees, evaluating current environmental parameters (traffic, meteorological, and amount of fuel traded) determined by the appropriate sensor network. A set of Artificial Neural Networks (ANNs) was developed to predict the benzene exposure pattern for the filling station employees. Furthermore, a Physiology Based Pharmaco-Kinetic (PBPK) risk assessment model was developed in order to calculate the lifetime probability distribution of leukemia for the employees, fed by data obtained from the ANN model. A Bayesian algorithm was employed at crucial points of both model subcompartments. The application was evaluated in two filling stations (one urban and one rural). Among several algorithms available for the development of the ANN exposure model, Bayesian regularization provided the best results and seemed to be a promising technique for prediction of the exposure pattern of that occupational population group. In assessing the estimated leukemia risk, with the aim of providing a distribution curve based on the exposure levels and the different susceptibility of the population, the Bayesian algorithm was a prerequisite of the Monte Carlo approach, which is integrated in the PBPK-based risk model. In conclusion, the modeling system described herein is capable of exploiting the information collected by the environmental sensors in order to estimate in real time the personal exposure and the resulting health risk for employees of gasoline filling stations.
12 CFR 702.106 - Standard calculation of risk-based net worth requirement.
2010-01-01
... AFFECTING CREDIT UNIONS PROMPT CORRECTIVE ACTION Net Worth Classification § 702.106 Standard calculation of...) Allowance. Negative one hundred percent (−100%) of the balance of the Allowance for Loan and Lease...
Ellman, R.; Sibonga, J. D.; Bouxsein, M. L.
2010-01-01
The factor of risk (Phi), defined as the ratio of applied load to bone strength, is a biomechanical approach to hip fracture risk assessment that may be used to identify subjects who are at increased risk for fracture. The purpose of this project was to calculate the factor of risk in long-duration astronauts after return from a mission on the International Space Station (ISS), which is typically 6 months in duration. The load applied to the hip was calculated for a sideways fall from standing height based on the individual height and weight of the astronauts. The soft tissue thickness overlying the greater trochanter was measured from the DXA whole-body scans and used to estimate the attenuation of the impact force provided by soft tissues overlying the hip. Femoral strength was estimated from femoral areal bone mineral density (aBMD) measurements by dual-energy x-ray absorptiometry (DXA), which were performed within 5-32 days of landing. All long-duration NASA astronauts from Expedition 1 to 18 were included in this study, with repeat flyers treated as separate subjects. Male astronauts (n=20) had a significantly higher factor of risk for hip fracture (Phi) than females (n=5), with preflight values of 0.83+/-0.11 and 0.36+/-0.07, respectively, but there was no significant difference between preflight and postflight Phi (Figure 1). Femoral aBMD measurements were not found to be significantly different between men and women. Three men and no women exceeded the theoretical fracture threshold of Phi=1 immediately postflight, indicating that they would likely suffer a hip fracture if they were to experience a sideways fall with impact to the greater trochanter. These data suggest that male astronauts may be at greater risk for hip fracture than women following spaceflight, primarily due to relatively less soft tissue thickness and subsequently greater impact force.
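The factor-of-risk ratio itself is as defined in the abstract; the mass-spring fall-load model and the 71 kN/m effective stiffness in this Python sketch are common simplifications from the hip fracture biomechanics literature, assumed here for illustration and not taken from this study:

```python
import math

# Illustrative constant; a commonly cited effective body stiffness for a
# sideways fall (assumed here, not from this study).
EFFECTIVE_STIFFNESS_N_PER_M = 71_000.0

def sideways_fall_load(mass_kg: float, height_m: float) -> float:
    """Peak impact force on the hip for a sideways fall from standing height.

    Energy-based mass-spring sketch: the falling energy E = m * g * h, with
    the hip's drop height h taken as half of standing height, is absorbed by
    an effective spring of stiffness k, giving F_peak = sqrt(2 * k * E).
    """
    g = 9.81  # m/s^2
    energy_j = mass_kg * g * (height_m / 2.0)
    return math.sqrt(2.0 * EFFECTIVE_STIFFNESS_N_PER_M * energy_j)

def factor_of_risk(applied_load_n: float, bone_strength_n: float) -> float:
    """Phi = applied load / bone strength; Phi >= 1 flags a likely fracture."""
    return applied_load_n / bone_strength_n
```

For a 70 kg, 1.75 m subject this gives a peak load on the order of 9 kN before any soft-tissue attenuation, which is why trochanteric soft tissue thickness matters so much in the study's male-female comparison.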
On the problems regarding the risk calculation used in IEC 62305
Gellén, T. B.; Szedenik, N.; Kiss, I.; Németh, B.
2011-06-01
The 2nd part of the international standard on lightning protection (IEC 62305) deals with risk management. The explanations of the mathematical principles and the basic terms of this part facilitate the proper application of the standard. This paper gives additional information for better understanding of the standard and highlights some issues that might occur in its practical application.
Calculated Risk Taking in the Treatment of Suicidal Patients: Ethical and Legal Problems.
Maltsberger, John T.
1994-01-01
Discusses discharge of suicidal patients from inpatient care from both economic and ethical perspectives. Suggests that clinicians must exercise prudence in discharging patients unlikely to recover, considering duty to preserve life. Encourages discharge when benefits outweigh risks, with careful preparation of patient and family and meticulous…
Urbaniok, F; Rinne, T; Held, L; Rossegger, A; Endrass, J
2008-08-01
Risk assessment instruments have been the subject of a number of validation studies, which have mainly examined the psychometric properties known from psychological test development (objectivity, reliability and validity). Hardly any attention has been paid to the fact that the validation of forensic risk assessment instruments is confronted with a whole series of methodological challenges. Risk assessments include a quantitative and a qualitative component, in that they state the probability (quantitative) that a particular offense (qualitative) will occur. Disregarding the probabilistic nature of risk calculations leads to methodically faulty assumptions about the predictive validity of an instrument and about what constitutes a suitable statistical method to test it. For example, ROC analyses are considered to be state of the art in the validation of risk assessment instruments. This method, however, does not take into account the probabilistic nature of prognoses, and its results can be interpreted only to a limited degree. ROC analyses, for example, disregard certain aspects of an instrument's calibration, which can yield high ROC values in a validation study even when the instrument demonstrates only low validity. Further shortcomings of validation studies are that they ignore changes in risk dispositions and that they do not differentiate between offense-specific risks (e.g., any recidivism vs. violent or sexual recidivism). The paper discusses and reviews different quality criteria of risk assessment instruments in view of methodological as well as practical issues. Many of these criteria have been ignored so far in the scientific discourse even though they are essential to the evaluation of the validity and the scope of indication of an instrument.
Comparison of the historic recycling risk for BSE in three European countries by calculating R0.
Schwermer, H.; Koeijer, de A.A.; Brülisauer, F.; Heim, D.
2007-01-01
A deterministic model of BSE transmission is used to calculate the R0 values for specific years of the BSE epidemics in the United Kingdom (UK), the Netherlands (NL), and Switzerland (CH). In all three countries, the R0 values decreased below 1 after the introduction of a ban on feeding meat and bone
Risk Management for Complex Calculations: EuSpRIG Best Practices in Hybrid Applications
Cernauskas, Deborah; VanVliet, Ben
2008-01-01
As the need for advanced, interactive mathematical models has increased, user/programmers are increasingly choosing the MatLab scripting language over spreadsheets. However, applications developed in these tools have high error risk, and no best practices exist. We recommend that advanced, highly mathematical applications incorporate these tools with spreadsheets into hybrid applications, where developers can apply EuSpRIG best practices. Development of hybrid applications can reduce the potential for errors, shorten development time, and enable higher level operations. We believe that hybrid applications are the future and over the course of this paper, we apply and extend spreadsheet best practices to reduce or prevent risks in hybrid Excel/MatLab applications.
Calculating LOAEL/NOAEL uncertainty factors for wildlife species in ecological risk assessments
Energy Technology Data Exchange (ETDEWEB)
Suedel, B.C.; Clifford, P.A.; Ludwig, D.F. [EA Engineering, Science, and Technology, Inc., Hunt Valley, MD (United States)
1995-12-31
Terrestrial ecological risk assessments frequently require derivation of NOAELs or toxicity reference values (TRVs) against which to compare exposure estimates. However, much of the available information from the literature consists of LOAELs, not NOAELs. Lacking specific guidance, arbitrary factors of ten are sometimes employed for extrapolating NOAELs from LOAELs. In this study, the scientific literature was searched to obtain chronic and subchronic studies reporting NOAEL and LOAEL data for wildlife and laboratory species. Results to date indicate a mean conversion factor of 4.0 (± 2.61 S.D.), with a minimum of 1.6 and a maximum of 10 for 106 studies across several classes of compounds (i.e., metals, pesticides, volatiles, etc.). These data suggest that an arbitrary conversion factor of 10 is unnecessarily restrictive for extrapolating NOAELs from LOAELs and that a factor of 4-5 would be more realistic for deriving toxicity reference values for wildlife species. Applying less arbitrary and more realistic conversion factors in ecological risk assessments will allow for a more accurate estimate of NOAEL values for assessing risk to wildlife populations.
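In code, the proposed extrapolation is a single division; the default factor of 4.0 below reflects the study's empirical mean, while passing 10.0 reproduces the customary arbitrary factor. A sketch, not the authors' implementation, and the example LOAEL is hypothetical:

```python
def noael_from_loael(loael_mg_per_kg_day: float,
                     conversion_factor: float = 4.0) -> float:
    """Estimate a NOAEL by dividing a literature LOAEL by a conversion factor.

    The study's empirical mean LOAEL/NOAEL ratio was 4.0 (+/- 2.61 S.D.),
    suggesting a factor of 4-5 rather than the customary 10.
    """
    return loael_mg_per_kg_day / conversion_factor

# Hypothetical LOAEL of 20 mg/kg-day: factor 4 vs. the conventional factor 10
trv_empirical = noael_from_loael(20.0)        # 5.0 mg/kg-day
trv_default10 = noael_from_loael(20.0, 10.0)  # 2.0 mg/kg-day
```

The two-and-a-half-fold gap between the resulting TRVs is the practical stake of the paper's argument: the stricter factor can overstate risk to wildlife populations.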
Directory of Open Access Journals (Sweden)
Wong Jenna
2011-10-01
Full Text Available Abstract Background Surgeries and other procedures can influence the risk of death in hospital. All published scales that predict post-operative death risk require clinical data and cannot be measured using administrative data alone. This study derived and internally validated an index that can be calculated using administrative data to quantify the independent risk of hospital death after a procedure. Methods For all patients admitted to a single academic centre between 2004 and 2009, we estimated the risk of all-cause death using the Kaiser Permanente Inpatient Risk Adjustment Methodology (KP-IRAM). We determined whether each patient underwent one of 503 commonly performed therapeutic procedures using Canadian Classification of Interventions codes and whether each procedure was emergent or elective. Multivariate logistic regression modeling was used to measure the association of each procedure-urgency combination with death in hospital independent of the KP-IRAM risk of death. The final model was modified into a scoring system to quantify the independent influence each procedure had on the risk of death in hospital. Results 275,460 hospitalizations were included (137,730 derivation, 137,730 validation). In the derivation group, the median expected risk of death was 0.1% (IQR 0.01%-1.4%), with 4013 (2.9%) dying during the hospitalization. 56 distinct procedure-urgency combinations entered our final model, resulting in a Procedural Index for Mortality Rating (PIMR) score with values ranging from -7 to +11. In the validation group, the PIMR score significantly predicted the risk of death by itself (c-statistic 67.3%, 95% CI 66.6-68.0%) and when added to the KP-IRAM model (the c-index improved significantly from 0.929 to 0.938). Conclusions We derived and internally validated an index that uses administrative data to quantify the independent association of a broad range of therapeutic procedures with risk of death in hospital. This scale will improve risk
COMPUTER MODEL USED TO CALCULATE PROFITABILITY AND ECONOMIC RISK ON FARMS
Directory of Open Access Journals (Sweden)
Rozi BEREVOIANU
2013-12-01
Full Text Available Economic information is an essential element of progress, being present in all fields. With the development of the market economy, economic information must also develop, in order to reflect as accurately as possible the patrimonial situation and the results of the financial and economic activity of enterprises. The main source of economic information is accounting, which is the main instrument of knowledge, management and control of the assets and results of any enterprise. In this paper we present a computer model for analyzing economic information on profitability and economic risk, applicable both to vegetable farms and to the livestock sector.
Application of Risk within Net Present Value Calculations for Government Projects
Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson
2007-01-01
In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
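The NPV method referred to above discounts each year's cash flow, and risk can be folded in by weighting alternative cost scenarios by their probability. A minimal Python sketch with hypothetical numbers; the study's actual cost data, discount rate, and risk set are not given in the abstract:

```python
def npv(rate: float, cash_flows) -> float:
    """Net present value of a series of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def expected_npv(rate: float, scenarios) -> float:
    """Risk-weighted NPV: `scenarios` is a list of (probability, cash_flows)
    pairs whose probabilities sum to 1; each scenario's NPV is weighted by
    its probability of occurring."""
    return sum(p * npv(rate, cfs) for p, cfs in scenarios)

# Hypothetical engine trade-study cash flows in $M (costs negative):
base = [-50.0, -20.0, 15.0, 15.0, 15.0, 15.0]
overrun = [-50.0, -35.0, 10.0, 15.0, 15.0, 15.0]
risk_adjusted = expected_npv(0.07, [(0.7, base), (0.3, overrun)])
```

Comparing risk-adjusted NPVs rather than point estimates is what lets a trade study favor an architecture that is slightly costlier on paper but more robust to overruns.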
Energy Technology Data Exchange (ETDEWEB)
Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)
1993-02-01
This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.
Energy Technology Data Exchange (ETDEWEB)
Dana L. Kelly; Nathan O. Siu
2010-06-01
As the U.S. Nuclear Regulatory Commission (NRC) continues its efforts to increase its use of risk information in decision making, the detailed, quantitative results of probabilistic risk assessment (PRA) calculations are coming under increased scrutiny. Where once analysts and users were not overly concerned with figure of merit variations that were less than an order of magnitude, now factors of two or even less can spark heated debate regarding modeling approaches and assumptions. The philosophical and policy-related aspects of this situation are well-recognized by the PRA community. On the other hand, the technical implications for PRA methods and modeling have not been as widely discussed. This paper illustrates the potential numerical effects of choices as to the details of models and methods for parameter estimation with three examples: 1) the selection of the time period data for parameter estimation, and issues related to component boundary and failure mode definitions; 2) the selection of alternative diffuse prior distributions, including the constrained noninformative prior distribution, in Bayesian parameter estimation; and 3) the impact of uncertainty in calculations for recovery of offsite power.
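The second example above, the sensitivity of Bayesian parameter estimates to the choice of diffuse prior, can be illustrated with the conjugate gamma-Poisson model commonly used for failure rates in PRA. The specific priors and data below are assumptions for illustration, not the paper's:

```python
def posterior_failure_rate_mean(n_failures: int, exposure_time_h: float,
                                prior_shape: float, prior_rate: float) -> float:
    """Posterior mean of a Poisson failure rate under a conjugate gamma prior.

    Prior Gamma(shape, rate); observing n failures in T hours of exposure
    gives the posterior Gamma(shape + n, rate + T), with mean
    (shape + n) / (rate + T).
    """
    return (prior_shape + n_failures) / (prior_rate + exposure_time_h)

# Hypothetical data: 2 failures observed in 1000 h of component operation.
jeffreys = posterior_failure_rate_mean(2, 1000.0, 0.5, 0.0)  # Jeffreys prior
flat = posterior_failure_rate_mean(2, 1000.0, 1.0, 0.0)      # flat prior on the rate
# The two diffuse priors already shift the point estimate by 20%
# (0.0025 vs 0.0030 per hour) -- exactly the sub-order-of-magnitude
# variation that now sparks debate.
```

With sparse data, even "noninformative" prior choices move figures of merit by the factors of two or less the paper describes; only with abundant data do the posteriors converge.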
Stephens, Kelly I.; Rubinsztain, Leon; Payan, John; Rentsch, Chris; Rimland, David; Tangpricha, Vin
2017-01-01
Objective We evaluated the utility of the World Health Organization Fracture Risk Assessment Tool (FRAX) in assessing fracture risk in patients with human immunodeficiency virus (HIV) and vitamin D deficiency. Methods This was a retrospective study of HIV-infected patients with co-existing vitamin D deficiency at the Atlanta Veterans Affairs Medical Center. Bone mineral density (BMD) was assessed by dual-energy X-ray absorptiometry (DEXA), and the 10-year fracture risk was calculated by the WHO FRAX algorithm. Two independent radiologists reviewed lateral chest radiographs for the presence of subclinical vertebral fractures. Results We identified 232 patients with HIV and vitamin D deficiency. Overall, 15.5% of patients met diagnostic criteria for osteoporosis on DEXA, and 58% had low BMD (T-score between −1 and −2.5). The median risk of any major osteoporotic and hip fracture by FRAX score was 1.45 and 0.10%, respectively. Subclinical vertebral fractures were detected in 46.6% of patients. Compared to those without fractures, those with fractures had similar prevalence of osteoporosis (15.3% versus 15.7%; P>.999), low BMD (53.2% versus 59.3%; P = .419), and similar FRAX hip scores (0.10% versus 0.10%; P = .412). While the FRAX major score was lower in the nonfracture group versus fracture group (1.30% versus 1.60%; P = .025), this was not clinically significant. Conclusion We found a high prevalence of subclinical vertebral fractures among vitamin D–deficient HIV patients; however, DEXA and FRAX failed to predict those with fractures. Our results suggest that traditional screening tools for fragility fractures may not be applicable to this high-risk patient population. PMID:26684149
Energy Technology Data Exchange (ETDEWEB)
Whitfield, R. G.; Buehring, W. A.; Bassett, G. W. (Decision and Information Sciences)
2011-04-08
Get a GRiP (Gravitational Risk Procedure) on risk by using an approach inspired by the physics of gravitational forces between body masses! In April 2010, U.S. Department of Homeland Security Special Events staff (Protective Security Advisors [PSAs]) expressed concern about how to calculate risk given measures of consequence, vulnerability, and threat. The PSAs believed that it is not 'right' to assign zero risk, as a multiplicative formula would imply, to cases in which the threat is reported to be extremely small, and perhaps could even be assigned a value of zero, but for which consequences and vulnerability are potentially high. They needed a different way to aggregate the components into an overall measure of risk. To address these concerns, GRiP was proposed and developed. The inspiration for GRiP is Sir Isaac Newton's Universal Law of Gravitation: the attractive force between two bodies is directly proportional to the product of their masses and inversely proportional to the squares of the distance between them. The total force on one body is the sum of the forces from 'other bodies' that influence that body. In the case of risk, the 'other bodies' are the components of risk (R): consequence, vulnerability, and threat (which we denote as C, V, and T, respectively). GRiP treats risk as if it were a body within a cube. Each vertex (corner) of the cube represents one of the eight combinations of minimum and maximum 'values' for consequence, vulnerability, and threat. The risk at each of the vertices is a variable that can be set. Naturally, maximum risk occurs when consequence, vulnerability, and threat are at their maximum values; minimum risk occurs when they are at their minimum values. Analogous to gravitational forces among body masses, the GRiP formula for risk states that the risk at any interior point of the box depends on the squares of the distances from that point to each of the eight vertices. The risk
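The gravitational analogy can be sketched as inverse-square-distance interpolation over the eight corners of a unit consequence-vulnerability-threat cube. This is our reading of the abstract's description, not the exact GRiP formula, and the corner risk assignments are hypothetical:

```python
import itertools

def grip_risk(c: float, v: float, t: float, vertex_risk: dict,
              eps: float = 1e-9) -> float:
    """Inverse-square-distance interpolation of risk inside the unit C-V-T cube.

    c, v, t are consequence, vulnerability and threat, each scaled to [0, 1].
    vertex_risk maps each of the eight 0/1 corner tuples to its assigned risk.
    By analogy with gravitation, each corner pulls the interior point with
    weight proportional to 1 / distance^2; eps keeps the weight finite when
    the point sits exactly on a corner.
    """
    weighted_sum = total_weight = 0.0
    for corner in itertools.product((0.0, 1.0), repeat=3):
        d2 = (c - corner[0]) ** 2 + (v - corner[1]) ** 2 + (t - corner[2]) ** 2
        w = 1.0 / (d2 + eps)
        weighted_sum += w * vertex_risk[corner]
        total_weight += w
    return weighted_sum / total_weight

# Hypothetical corner risks: the average of the three components, so a target
# with maximal consequence and vulnerability keeps nonzero risk at zero threat.
corners = {c: sum(c) / 3.0 for c in itertools.product((0.0, 1.0), repeat=3)}
```

Unlike a multiplicative C x V x T formula, this interpolation returns a nonzero risk at the (1, 1, 0) corner, which is precisely the PSAs' concern about high-consequence, high-vulnerability targets with near-zero reported threat.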
Franklin, M. R.; Veiga, L. H.; Py, D. A., Jr.; Fernandes, H. M.
2010-12-01
The uranium mining and milling facility of Caetité (URA) is the only active uranium production center in Brazil. Operations take place in a very sensitive semi-arid region of the country where water resources are very scarce. Therefore, any contamination of the existing water bodies may trigger critical consequences for local communities, because their sustainability is closely related to the availability of groundwater resources. Due to the existence of several uranium anomalies in the region, groundwater can present radionuclide concentrations above the world average. The radiological risk associated with the ingestion of these waters has been questioned by members of the local communities, NGOs and even regulatory bodies, which suspected that the observed levels of radionuclide concentrations (especially Unat) could be related to the uranium mining and milling operations. Regardless of the origin of these concentrations, the fear that undesired health effects are taking place (e.g. an increase in cancer incidence) remains, despite the fact that no evidence based on epidemiological studies is available. This paper presents the connections between the local hydrogeology and the radiological characterization of groundwater in the areas neighboring the uranium production center, in order to understand the implications for human health risk due to the ingestion of groundwater. The risk assessment was performed taking into account both the radiological and the toxicological risks. Samples from 12 wells were collected, and determinations of Unat, Thnat, 226Ra, 228Ra and 210Pb were performed. The radiation-related risks were estimated for adults and children by calculating the annual effective doses. The potential non-carcinogenic effects due to the ingestion of uranium were evaluated by estimating the hazard index (HI). Monte Carlo simulations were used to calculate the uncertainty associated with these estimates, i.e. the 95% confidence interval
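The two deterministic building blocks of such an assessment, the annual effective dose from ingestion and the non-carcinogenic hazard index, can be sketched as below. The dose coefficient and reference dose in the usage example are typical literature values (an ICRP-style ingestion coefficient of about 4.5e-8 Sv/Bq for natural uranium; an EPA-style RfD of 0.003 mg/kg-day), assumed for illustration rather than taken from this study:

```python
def annual_effective_dose_sv(conc_bq_per_l: float, intake_l_per_day: float,
                             dcf_sv_per_bq: float) -> float:
    """Annual committed effective dose from drinking-water ingestion:
    activity concentration x yearly water intake x ingestion dose coefficient."""
    return conc_bq_per_l * intake_l_per_day * 365.0 * dcf_sv_per_bq

def hazard_index(intake_mg_per_kg_day: float, rfd_mg_per_kg_day: float) -> float:
    """Non-carcinogenic hazard index: chronic daily intake over the reference
    dose; HI > 1 flags a potential for adverse chemical effects."""
    return intake_mg_per_kg_day / rfd_mg_per_kg_day

# Hypothetical usage: 1 Bq/L of natural uranium, 2 L/day intake,
# assumed 4.5e-8 Sv/Bq ingestion coefficient -> ~0.033 mSv/year
dose = annual_effective_dose_sv(1.0, 2.0, 4.5e-8)
# 0.006 mg/kg-day uranium intake against an assumed RfD of 0.003 mg/kg-day
hi = hazard_index(0.006, 0.003)
```

In the study these point calculations are wrapped in Monte Carlo sampling of the input distributions, which is what yields the 95% confidence intervals on dose and HI.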
Directory of Open Access Journals (Sweden)
Eunmi Kim
2014-09-01
Full Text Available Recently, flood damage caused by frequent localized downpours in cities is increasing on account of abnormal climate phenomena and the growth of impermeable areas due to urbanization. This study suggests a method to estimate real-time flood risk on roads for drivers based on accumulated rainfall. Because rainfall is not measured directly on roads, the rainfall on a road link is calculated using a revised version of the method used in meteorology for estimating missing rainfall. To process the data in real time, we use the inverse distance weighting (IDW) method, which is well suited to the computing system and is commonly used for precipitation due to its simplicity. The real-time accumulated rainfall is then combined with the flooding history, the rainfall ranges that caused flooding in the past, and the frequency probability of precipitation to determine the flood risk on roads. The result of a simulation using the suggested algorithms shows a high concordance rate between actual flooded areas in the past and the flooded areas derived from the simulation for the research region in Busan, Korea.
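The IDW step can be sketched in a few lines of Python; the coordinates and gauge values below are hypothetical, and the Euclidean distance metric and power of 2 are conventional choices rather than the paper's exact configuration:

```python
def idw_rainfall(target, gauges, power=2.0, eps=1e-12):
    """Inverse distance weighting estimate of rainfall at an ungauged point.

    target: (x, y) location of the road link; gauges: list of ((x, y), mm)
    observations. eps keeps the weight finite if a gauge coincides with the
    target, in which case that gauge dominates the estimate.
    """
    num = den = 0.0
    for (gx, gy), rain_mm in gauges:
        dist = ((target[0] - gx) ** 2 + (target[1] - gy) ** 2) ** 0.5
        weight = 1.0 / (dist ** power + eps)
        num += weight * rain_mm
        den += weight
    return num / den

# Hypothetical road link midway between two gauges reporting 10 and 20 mm:
link_rain = idw_rainfall((1.0, 0.0), [((0.0, 0.0), 10.0), ((2.0, 0.0), 20.0)])
```

Because each link's estimate depends only on nearby gauge readings and fixed distances, the weights can be precomputed per link, which is what makes the method cheap enough for real-time use.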
Piacentini, Rubén D.; Cede, Alexander; Luccini, Eduardo; Stengel, Fernando
2004-01-01
The connection between ultraviolet (UV) radiation and various skin diseases is well known. In this work, we present the computer program "UVARG", developed in order to prevent the risk of getting sunburn for persons exposed to solar UV radiation in Argentina, a country that extends from low (tropical) to high southern hemisphere latitudes. The software calculates the so-called "erythemal irradiance", i.e., the spectral irradiance weighted by the McKinlay and Diffey action spectrum for erythema and integrated over wavelength. The erythemal irradiance depends mainly on the following geophysical parameters: solar elevation, total ozone column, surface altitude, surface albedo, total aerosol optical depth and Sun-Earth distance. Minor corrections are due to the variability in the vertical ozone, aerosol, pressure, humidity and temperature profiles and the extraterrestrial spectral solar UV irradiance. A key parameter in the software is a total ozone column climatology incorporating monthly averages, standard deviations and tendencies for the particular geographical situation of Argentina, obtained from TOMS/NASA satellite data from 1978 to 2000. Different skin types are considered in order to determine the sunburn risk at any time of the day and any day of the year, with and without sunscreen protection. We present examples of the software for three different regions: the high-altitude tropical Puna of Atacama desert in the North-West, Tierra del Fuego in the South when the ozone hole passes over, and low summertime ozone conditions over Buenos Aires, the largest populated city in the country. In particular, we analyzed the maximum exposure time for persons having different skin types during representative days of the year (southern hemisphere equinoxes and solstices). This work was made possible by the collaboration between the Argentine Skin Cancer Foundation, the Institute of Physics Rosario (CONICET-National University of Rosario, Argentina) and the Institute of
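The weighting-and-integration step the abstract describes can be sketched with the CIE-standardized (McKinlay-Diffey) erythema action spectrum; the flat test spectrum below is hypothetical, and a real calculation would use measured or modeled spectral irradiance.

```python
def erythema_weight(wl_nm):
    """CIE (McKinlay-Diffey) erythema action spectrum, piecewise in wavelength."""
    if wl_nm <= 298.0:
        return 1.0
    if wl_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wl_nm))
    if wl_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - wl_nm))
    return 0.0

def erythemal_irradiance(spectrum):
    """Trapezoidal integration of spectral irradiance (W m^-2 nm^-1)
    weighted by the action spectrum; returns erythemal irradiance in W m^-2.
    spectrum: list of (wavelength_nm, irradiance) pairs sorted by wavelength."""
    total = 0.0
    for (w1, e1), (w2, e2) in zip(spectrum, spectrum[1:]):
        total += 0.5 * (e1 * erythema_weight(w1)
                        + e2 * erythema_weight(w2)) * (w2 - w1)
    return total

# The familiar UV index is the erythemal irradiance in W/m^2 multiplied by 40.
flat_uvb = [(290.0, 1.0), (298.0, 1.0)]   # hypothetical flat spectrum
```

Note the action spectrum is continuous at the 298 nm and 328 nm breakpoints, which is a quick sanity check on any implementation.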
DEFF Research Database (Denmark)
Sørensen, Steen; Momsen, Günther; Sundberg, Karin
2011-01-01
Reliable individual risk calculation for trisomy (T) 13, 18, and 21 in first-trimester screening depends on good estimates of the medians for fetal nuchal translucency thickness (NT), free β-subunit of human chorionic gonadotropin (hCGβ), and pregnancy-associated plasma protein-A (PAPP-A) in mate...
Calculation of Expected Shortfall for Measuring Risk and Its Application
Institute of Scientific and Technical Information of China (English)
YAN Chun-ning; YU Peng; HUANG Yang-xin
2005-01-01
Expected shortfall (ES) is a new method to measure market risk. In this paper, an example was presented to illustrate that ES is a coherent risk measure but value-at-risk (VaR) is not. Three formulas for calculating ES, based on the historical simulation method, the normal method and the GARCH method, were derived. Further, a numerical experiment on optimizing a portfolio using ES was provided.
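The historical-simulation variant of the two risk measures is simple enough to sketch; the loss series below is toy data, and real implementations interpolate the quantile more carefully than this index lookup.

```python
def var_es_historical(losses, alpha=0.95):
    """Historical-simulation VaR and expected shortfall (ES) at level alpha.
    losses: historical portfolio losses, with positive values meaning a loss."""
    ordered = sorted(losses)
    k = round(alpha * len(ordered))     # index of the alpha-quantile
    var = ordered[k]                    # VaR: the alpha-quantile loss
    tail = ordered[k:]                  # losses at or beyond VaR
    es = sum(tail) / len(tail)          # ES: mean loss in the tail
    return var, es

losses = list(range(1, 101))            # toy loss history: 1, 2, ..., 100
var, es = var_es_historical(losses)     # var = 96, es = 98.0
```

ES averages over the whole tail rather than reading off a single quantile, which is what makes it coherent (in particular, sub-additive) where VaR is not.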
Institute of Scientific and Technical Information of China (English)
Yao Zhu; Ding-Wei Ye; Jin-You Wang; Yi-Jun Shen; Bo Dai; Chun-Guang Ma; Wen-Jun Xiao; Guo-Wen Lin; Xu-Dong Yao; Shi-Lin Zhang
2012-01-01
Several prediction models have been developed to estimate the outcomes of prostate biopsies. Most of these tools were designed for use with Western populations and have not been validated across different ethnic groups. Therefore, we evaluated the predictive value of the Prostate Cancer Prevention Trial (PCPT) and the European Randomized Study of Screening for Prostate Cancer (ERSPC) risk calculators in a Chinese cohort. Clinicopathological information was obtained from 495 Chinese men who had undergone extended prostate biopsies between January 2009 and March 2011. The estimated probabilities of prostate cancer and high-grade disease (Gleason >6) were calculated using the PCPT and ERSPC risk calculators. Overall measures, discrimination, calibration and clinical usefulness were assessed for the model evaluation. Of these patients, 28.7% were diagnosed with prostate cancer and 19.4% had high-grade disease. Compared to the PCPT model and the prostate-specific antigen (PSA) threshold of 4 ng ml-1, the ERSPC risk calculator exhibited better discriminative ability for predicting positive biopsies and high-grade disease (the area under the curve was 0.831 and 0.852, respectively; P<0.01 for both). Decision curve analysis also suggested the favourable clinical utility of the ERSPC calculator in the validation dataset. Both prediction models demonstrated miscalibration: the risk of prostate cancer and high-grade disease was overestimated by approximately 20% for a wide range of predicted probabilities. In conclusion, the ERSPC risk calculator outperformed both the PCPT model and the PSA threshold of 4 ng ml-1 in predicting prostate cancer and high-grade disease in Chinese patients. However, the prediction tools derived from Western men significantly overestimated the probability of prostate cancer and high-grade disease compared to the outcomes of biopsies in a Chinese cohort.
Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, Rene P. M.; Legrand, Sara-Ann; Verstraete, Alain G.; Hels, Tove; Bernhoft, Inger Marie; Simonsen, Kirsten Wiese; Lillsunde, Pirjo; Favretto, Donata; Ferrara, Santo D.; Caplinskiene, Marija; Movig, Kris L. L.; Brookhuis, Karel A.
2013-01-01
Between 2006 and 2010, six population based case-control studies were conducted as part of the European research-project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious inj
Houwing, S.; Hagenzieker, M.P.; Mathijssen, M.P.M.; Legrand, S.-A.; Verstraete, A.G.; Hels, T.; Bernhoft, I.M.; Wiese Simonsen, K.; Lillsunde, P.; Favretto, D.; Ferrara, S.D.; Caplinskiene, M.; Movig, K.L.L. & Brookhuis, K.A.
2013-01-01
Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances. Abstract: Between 2006 and 2010, six population-based case–control studies were conducted as part of the European research project DRUID (DRiving Under the Infl
Institute of Scientific and Technical Information of China (English)
殷园; 汪进; 陈珊琦; 王芳; 王家群
2014-01-01
Using the complete PSA model of the Qinshan Phase III nuclear power plant, calculations were performed with RiskAT, the calculation engine of the probabilistic safety analysis program RiskA independently developed by the FDS Team of the Chinese Academy of Sciences, and with RSAT, the calculation engine of RiskSpectrum, a widely used program developed by Scandpower of Sweden. The comparison showed that the qualitative and quantitative results of the two engines are identical, and that in terms of computational performance RiskA is faster than RiskSpectrum.
Penailillo B.R.; Morales, Y.; Meijers, E.
2008-01-01
On 31 July the company Chimac-Agriphar from Ougrée discharged 64 kg of chlorpyrifos and 12 kg of cypermethrin into the River Meuse, posing risks to recreation (swimming and fishing), ecology (about 20 to 25 tonnes of fish were killed) and drinking water production. In this study a retrospective risk analy
Institute of Scientific and Technical Information of China (English)
马振英; 张姬; 刘昕松
2016-01-01
The consequences of an environmental risk accident involving leakage from a liquid chlorine storage tank at a chemical company were calculated and predicted. By calculating the source strength of the liquid chlorine tank leak and the consequences of the accident risk, the environmental impact of a tank leak was predicted, and the half-lethal concentration range and the emergency evacuation radius were determined.
Directory of Open Access Journals (Sweden)
Anders Chen
BACKGROUND: Oral pre-exposure prophylaxis (PrEP) can be clinically effective and cost-effective for HIV prevention in high-risk men who have sex with men (MSM). However, individual patients have different risk profiles, real-world populations vary, and no practical tools exist to guide clinical decisions or public health strategies. We introduce a practical model of HIV acquisition, including both a personalized risk calculator for clinical management and a cost-effectiveness calculator for population-level decisions. METHODS: We developed a decision-analytic model of PrEP for MSM. The primary clinical effectiveness and cost-effectiveness outcomes were the number needed to treat (NNT) to prevent one HIV infection, and the cost per quality-adjusted life-year (QALY) gained. We characterized patients according to risk factors including PrEP adherence, condom use, sexual frequency, background HIV prevalence and antiretroviral therapy use. RESULTS: With standard PrEP adherence and national epidemiologic parameters, the estimated NNT was 64 (95% uncertainty range: 26, 176) at a cost of $160,000 (cost saving, $740,000) per QALY, comparable to other published models. With high (35%) HIV prevalence, the NNT was 35 (21, 57) and the cost per QALY was $27,000 (cost saving, $160,000); with high PrEP adherence, the NNT was 30 (14, 69) and the cost per QALY was $3,000 (cost saving, $200,000). In contrast, for monogamous, serodiscordant relationships with partner antiretroviral therapy use, the NNT was 90 (39, 157) and the cost per QALY was $280,000 ($14,000, $670,000). CONCLUSIONS: PrEP results vary widely across individuals and populations. Risk calculators may aid in patient education, clinical decision-making, and cost-effectiveness evaluation.
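The NNT and cost-per-QALY arithmetic behind such results is straightforward; the baseline risk, efficacy, cost and QALY figures below are hypothetical stand-ins, not the model's parameters.

```python
def number_needed_to_treat(baseline_risk, relative_risk_reduction):
    """NNT = 1 / absolute risk reduction over the same time horizon."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1.0 / absolute_risk_reduction

def cost_per_qaly(incremental_cost, qalys_gained):
    """Incremental cost-effectiveness ratio (ICER) in $ per QALY gained."""
    return incremental_cost / qalys_gained

# Hypothetical inputs: 7% HIV risk over the horizon, 44% risk reduction on PrEP
nnt = number_needed_to_treat(0.07, 0.44)    # ~32.5 patients treated per infection averted
icer = cost_per_qaly(500_000.0, 5.0)        # hypothetical program cost and QALYs
```

Note how the NNT falls as either the baseline risk or the effectiveness rises, which is exactly the pattern the abstract reports across its high-prevalence and high-adherence scenarios.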
Neslo, R E J; Oei, W; Janssen, M P
2017-02-23
The increasing identification of transmissions of emerging infectious diseases (EIDs) by blood transfusion has raised the question of which of these EIDs poses the highest risk to blood safety. For a number of the EIDs that are perceived to be a threat to blood safety, evidence on actual disease or transmission characteristics is lacking, which might render measures against such EIDs disputable. On the other hand, the fact that we call them "emerging" implies almost by definition that we are uncertain about at least some of their characteristics. So what is the relative importance of the various disease and transmission characteristics, and how are these influenced by the degree of uncertainty associated with their actual values? We identified the likelihood of transmission by blood transfusion, the presence of an asymptomatic phase of infection, the prevalence of infection, and the disease impact as the main characteristics of the perceived risk of disease transmission by blood transfusion. A group of experts in the field of infectious diseases and blood transfusion ranked sets of (hypothetical) diseases with varying degrees of uncertainty associated with their disease characteristics, and probabilistic inversion was used to obtain probability distributions for the weight of each of these risk characteristics. These distribution weights can be used to rank both existing and newly emerging infectious diseases with (partially) known characteristics. Analyses show that when data concerning disease characteristics are lacking, it is the uncertainty concerning the asymptomatic phase and the disease impact that are the most important drivers of the perceived risk. On the other hand, if disease characteristics are well established, it is the prevalence of infection and the transmissibility of the disease by blood transfusion that drive the perceived risk. The risk prioritization model derived provides an easy to obtain and rational expert assessment of the relative importance of
Schwermer, Heinzpeter; de Koeijer, Aline; Brülisauer, Franz; Heim, Dagmar
2007-10-01
A deterministic model of BSE transmission is used to calculate R(0) values for specific years of the BSE epidemics in the United Kingdom (UK), the Netherlands (NL), and Switzerland (CH). In all three countries, the R(0) values decreased below 1 after the introduction of a ban on feeding meat and bone meal (MBM) to ruminants around 1990. A variety of additional measures against BSE led to a further decrease of R(0) to about 0.06 in the years around 1998. The calculated R(0) values were consistent with the observations made on the surveillance results for the UK, but were partially in conflict with the surveillance results for NL and CH. There was evidence of a dependence of the BSE epidemics in NL and CH on an infection source not considered in the deterministic transmission model. Imports of MBM and feed components can explain this discrepancy, and the importance of imports for these observations is discussed.
Teodori, Francesco; Sumini, Marco
2008-12-01
GENII-LIN is an open source radiation protection environmental software system running on the Linux operating system. It has capabilities for calculating radiation dose and risk to individuals or populations from radionuclides released to the environment and from pre-existing environmental contamination. It can handle exposure pathways that include ingestion, inhalation and direct exposure to air, water and soil. The package is available for free and is completely open source, i.e., transparent to the users, who have full access to the source code of the software.
Directory of Open Access Journals (Sweden)
A Chaparian
2014-01-01
The objectives of this paper were the calculation and comparison of the effective doses, the risks of exposure-induced cancer, and dose reduction in the gonads for male and female patients in different projections of some X-ray examinations. Radiographies of the lumbar spine [in the eight projections of anteroposterior (AP), posteroanterior (PA), right lateral (RLAT), left lateral (LLAT), right anterior-posterior oblique (RAO), left anterior-posterior oblique (LAO), right posterior-anterior oblique (RPO), and left posterior-anterior oblique (LPO)], the abdomen (in the two projections of AP and PA), and the pelvis (in the two projections of AP and PA) were investigated. A solid-state dosimeter was used for measuring the entrance skin exposure. A Monte Carlo program was used for calculation of the effective doses, the risks of radiation-induced cancer, and the doses to the gonads for the different projections. Results of this study showed that the PA projection of abdomen, lumbar spine, and pelvis radiographies caused 50%-57% lower effective doses than the AP projection and a 50%-60% reduction in radiation risks. Use of the LAO projection of the lumbar spine X-ray examination caused a 53% lower effective dose than the RPO projection and a 56% and 63% reduction in radiation risk for males and females, respectively, and the RAO projection caused a 28% lower effective dose than the LPO projection and a 52% and 39% reduction in radiation risk for males and females, respectively. Regarding dose reduction in the gonads, using the PA position rather than AP in radiographies of the abdomen, lumbar spine, and pelvis can reduce the ovarian doses in women by 38%, 31%, and 25%, respectively, and the testicular doses in males by 76%, 86%, and 94%, respectively. For the oblique projections of the lumbar spine X-ray examination, employment of LAO rather than RPO and RAO rather than LPO demonstrated 22% and 13% reductions in the ovarian doses and 66% and 54% reductions in the
DEFF Research Database (Denmark)
Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René;
2013-01-01
… injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case–control studies. Relevant information was gathered from the DRUID-reports for eleven indicators for errors. The results showed that differences between the odds ratios in the DRUID case–control studies may indeed be (partially) explained by random and systematic errors. Selection bias and errors due to small sample sizes and cell counts were the most frequently observed errors in the six DRUID case–control studies. Therefore, it is recommended that epidemiological studies that assess the risk of psychoactive substances in traffic pay specific attention to avoid these potential sources of random and systematic errors…
Barshi, Immanuel
2016-01-01
Speaking up, i.e., expressing one's concerns, is a critical piece of effective communication. Yet we see many situations in which crew members have concerns and still remain silent. Why would that be the case? How can we assess the risks of speaking up vs. the risks of keeping silent? And once we do make up our minds to speak up, how should we go about it? Our workshop aims to answer these questions, and to provide us all with practical tools for effective risk assessment and effective speaking-up strategies.
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
A Monte Carlo optimized calculation of credit risk VaR for a loan portfolio
Institute of Scientific and Technical Information of China (English)
邓云胜; 任若恩
2003-01-01
This paper presents the principle of Monte Carlo optimized calculation of credit risk VaR for a loan portfolio using the importance sampling technique. Based on the Matlab language, simulation experiments are carried out, and the results show that this approach can effectively reduce the number of simulation runs and improve the precision of parameter estimation.
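The paper works in Matlab; as an independent illustration of the importance-sampling idea with made-up loan parameters, the sketch below simulates defaults with an inflated probability and reweights each path by its likelihood ratio, so rare large-loss events are seen often enough to estimate their probability.

```python
import math
import random

def tail_prob_is(n_loans, p, threshold, q, trials=20000, seed=7):
    """Importance-sampling estimate of P(number of defaults >= threshold)
    when each loan defaults independently with probability p. Defaults are
    drawn with inflated probability q and reweighted by the likelihood ratio."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        defaults, weight = 0, 1.0
        for _ in range(n_loans):
            if rng.random() < q:
                defaults += 1
                weight *= p / q              # likelihood ratio for a default
            else:
                weight *= (1 - p) / (1 - q)  # ...and for a survival
        if defaults >= threshold:
            acc += weight
    return acc / trials

# 100 loans with 2% default probability: P(>= 10 defaults) is far in the tail,
# so plain Monte Carlo would almost never observe it. Tilting to q = 0.10
# centres the sampler on the rare event.
est = tail_prob_is(100, 0.02, 10, 0.10)
# Exact binomial tail for comparison (feasible here because losses are 0/1):
exact = sum(math.comb(100, k) * 0.02**k * 0.98**(100 - k) for k in range(10, 101))
```

The same reweighting trick extends to correlated-default models, where no closed-form tail is available and the variance reduction matters most.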
SRD 166 MEMS Calculator (Web, free access) This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
Energy Technology Data Exchange (ETDEWEB)
Manning, Karessa L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dolislager, Fredrick G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bellamy, Michael B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-11-01
The Preliminary Remediation Goal (PRG) and Dose Compliance Concentration (DCC) calculators are screening-level tools that set forth the Environmental Protection Agency's (EPA) recommended approaches, based upon currently available information with respect to risk assessment, for response actions at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites, commonly known as Superfund. The screening levels derived by the PRG and DCC calculators are used to identify isotopes contributing the highest risk and dose as well as to establish preliminary remediation goals. Each calculator has a residential gardening scenario and subsistence farmer exposure scenarios that require modeling of the transfer of contaminants from soil and water into various types of biota (crops and animal products). New publications of human intake rates of biota; farm animal intakes of water, soil, and fodder; and soil-to-plant interactions require that updates be implemented into the PRG and DCC exposure scenarios. Recent improvements have been made in the biota modeling for these calculators, including newly derived biota intake rates, more comprehensive soil mass loading factors (MLFs), and more comprehensive soil-to-tissue transfer factors (TFs) for animals and soil-to-plant transfer factors (BVs). New biota have been added in both the produce and animal product categories that greatly improve the accuracy and utility of the PRG and DCC calculators and encompass greater geographic diversity on a national and international scale.
Energy Technology Data Exchange (ETDEWEB)
1975-10-01
Information is presented concerning the radioactive releases from the containment following accidents; radioactive inventory of the reactor core; atmospheric dispersion; reactor sites and meteorological data; radioactive decay and deposition from plumes; finite distance of plume travel; dosimetric models; health effects; demographic data; mitigation of radiation exposure; economic model; and calculated results with consequence model.
Study on Flood Risk Calculation Model of a Structural Flood Control System
Institute of Scientific and Technical Information of China (English)
陈艳; 陈进
2013-01-01
A simple linear engineering flood control system was analyzed using the established flood risk analysis model for structural flood control systems. The analysis verified the effect of the relevant water projects on flood risk during flood routing and revealed how changes in the reliability of different hydraulic structures affect regional flood risk. The results show that: a) although storing and detaining more floodwater in a reservoir and reducing its discharge increases the flood risk of the dam, a reasonably designed storage and detention scheme can effectively reduce the flood risk of the entire flood control system; b) in regional flood prevention, the overall risk can be reduced by deliberately lowering the reliability of upstream dike sections where economic losses would be relatively small and by siting flood diversion and storage areas at rational locations; c) appropriately improving the flood prevention security of key dike sections can effectively reduce the risk of the entire flood control system.
Bos, Marian E H; Nielen, Mirjam; Koch, Guus; Bouma, Annemarie; De Jong, Mart C M; Stegeman, Arjan
2009-04-01
To optimize control of an avian influenza outbreak, knowledge of within-flock transmission is needed. This study used field data to estimate the transmission rate parameter (beta) and the influence of risk factors on within-flock transmission of highly pathogenic avian influenza (HPAI) H7N7 virus in the 2003 epidemic in The Netherlands. The estimation is based on back-calculation of daily mortality data to fit a susceptible-infectious-dead format, and these data were analysed with a generalized linear model. This back-calculation method took into account the uncertainty in the length of the latent period, the survival of an infection by some birds and the influence of farm characteristics. After analysing the fit of the different databases created by back-calculation, it could be concluded that assuming no latent period provided the best fit. The transmission rate parameter (beta) from these field data was estimated at 4.50 per infectious chicken per day (95% CI: 2.68-7.57), which is lower than what was reported from experimental data. In contrast to general belief, none of the studied risk factors (housing system, flock size, species, age of the birds in weeks and date of depopulation) had a significant influence on the estimated beta.
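In its simplest deterministic form, the back-calculation idea equates expected daily new cases with beta*S*I/N. The toy sketch below is not the paper's GLM, and the flock counts are made up; it only shows how a moment estimate of beta falls out of reconstructed S and I series.

```python
def estimate_beta(new_cases, susceptible, infectious, flock_size):
    """Moment estimate of the transmission rate parameter: with expected new
    cases on day t equal to beta * S_t * I_t / N, beta_hat is total cases
    divided by the summed S*I/N 'exposure'."""
    exposure = sum(s * i / flock_size
                   for s, i in zip(susceptible, infectious))
    return sum(new_cases) / exposure

# Synthetic two-day observation of a 1000-bird flock (illustrative numbers),
# with case counts generated from a true beta of 4.5 per infectious bird per day.
s, i = [1000.0, 900.0], [10.0, 50.0]
cases = [4.5 * st * it / 1000.0 for st, it in zip(s, i)]
beta_hat = estimate_beta(cases, s, i, 1000.0)   # recovers 4.5 exactly here
```

A GLM formulation like the paper's additionally handles stochastic counts and covariates (the risk factors) in one fitting step.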
DEFF Research Database (Denmark)
Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.;
2013-01-01
… injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case-control studies. … The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence to minimize sources of errors already at the start of the study. © 2013 Published by Elsevier Ltd.
McCarty, George
1982-01-01
How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator.† Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.5^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits of … † A quick example is 1.1^10, 1.01^100, 1.001^1000, … Another example is t = 0.1, 0.01, in the functio...
The Calculation Method in Lightning Risk Assessment Based on IEC 62305
Institute of Scientific and Technical Information of China (English)
问楠臻; 高文俊
2008-01-01
IEC 62305 is the lightning protection technical standard newly issued by the International Electrotechnical Commission, of which IEC 62305-2 (Protection against Lightning - Part 2: Risk Management) is the standard specifically for lightning disaster risk assessment. Based on IEC 62305, a lightning risk assessment calculation spreadsheet was compiled by combining Excel with AutoCAD, and the accuracy of the Excel calculation spreadsheet was verified through an engineering example, placing lightning risk assessment on a scientific, accurate and efficient basis.
The Measuring Standards and Calculation of Financial Risks
Institute of Scientific and Technical Information of China (English)
胡锋
2012-01-01
A deficit is a simple, measurable indicator of fiscal risk. However, countries in economic transition often transfer expenditure pressure into implicit (hidden) debt, which should be counted as part of the real deficit. Explained in terms of deficit financing, there is a strict logical relation among the fiscal deficit rate, the implicit-debt deficit rate and the government revenue deficit rate, with the government revenue deficit rate covering the other two. Since 1980, China's fiscal deficit rate has been controlled within 3%, whereas the government revenue deficit rate has fluctuated around 10%. Controlling implicit debt is therefore the fundamental measure for preventing and resolving fiscal risks.
Energy Technology Data Exchange (ETDEWEB)
Nair, M; Li, C; White, M; Davis, J [Joe Arrington Cancer Center, Lubbock, TX (United States)
2014-06-15
Purpose: We analyzed the dose volume histograms of 140 CT-based HDR brachytherapy plans and evaluated the dose received by the organs at risk (OAR): rectum, bladder and sigmoid colon, based on recommendations from the ICRU and the Image Guided Brachytherapy Working Group for cervical cancer. Methods: Our treatment protocol consists of XRT to the whole pelvis with 45 Gy at 1.8 Gy/fraction, followed by 30 Gy at 6 Gy per fraction by HDR brachytherapy in 2 weeks. CT-compatible tandem and ovoid applicators were used and stabilized with radio-opaque packing material. The patient was stabilized using a special re-locatable implant table and stirrups for reproducibility of the geometry during treatment. CT scan images were taken at 3 mm slice thickness and exported to the treatment planning computer. The OAR structures (bladder, rectum and sigmoid colon) were outlined on the images along with the applicators. The prescription dose was targeted to A left and A right as defined in the Manchester system and optimized on geometry. Dosimetry was compared on all plans using the parameter Ci.sec.cGy-1. Using the dose volume histograms (DVH) obtained from the plans, the doses to the rectum, sigmoid colon and bladder for the ICRU-defined points and the 2 cc volume were analyzed and reported. The following criteria were used for limiting the tolerance dose by volume (D2cc): the rectum and sigmoid colon doses were limited to <75 Gy, and the bladder dose was limited to <90 Gy from both XRT and HDR brachytherapy. Results: The average total (XRT + HDR BT) BED value to the prescription volume was 120 Gy. D2cc to the rectum was 70 ± 17 Gy; D2cc to the bladder was 82 ± 32 Gy. The average Ci.sec.cGy-1 calculated for the HDR plans was 6.99 ± 0.5. Conclusion: Image-based treatment planning enabled evaluation of volume-based doses to critical structures for clinical interpretation.
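BED figures like those in the results follow from the linear-quadratic model. The sketch below uses the protocol's fractionation with an assumed alpha/beta of 10 Gy; the abstract does not state which alpha/beta (or equivalent-dose convention) its 120 Gy average uses, so the numbers here are illustrative only.

```python
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy):
    """Biologically effective dose under the linear-quadratic model:
    BED = n*d * (1 + d / (alpha/beta))."""
    total = n_fractions * dose_per_fraction_gy
    return total * (1.0 + dose_per_fraction_gy / alpha_beta_gy)

# Fractionation from the abstract: 45 Gy in 1.8 Gy fractions (XRT)
# plus 30 Gy in 6 Gy fractions (HDR brachytherapy); alpha/beta = 10 Gy assumed.
xrt_bed = bed(25, 1.8, 10.0)   # 45 * 1.18 = 53.1 Gy
hdr_bed = bed(5, 6.0, 10.0)    # 30 * 1.60 = 48.0 Gy
total_bed = xrt_bed + hdr_bed  # 101.1 Gy under these assumptions
```

The same formula with a late-responding-tissue alpha/beta (commonly 3 Gy) is what makes the large HDR fraction size weigh so heavily in the OAR limits quoted above.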
Energy Technology Data Exchange (ETDEWEB)
Gittus, J.H.
1986-03-01
The article deals with the calculation of risks as applied to living near (a) a nuclear reactor or (b) an industrial complex. The application of risk assessment techniques to the pressurised water reactor (PWR) is discussed with respect to: containment, frequencies of degraded core accidents, release of radioisotopes, consequences and risk to society, and uncertainties. The risk assessment for an industrial complex concerns the work of the Safety and Reliability Directorate for the chemical complex on Canvey Island (UK).
Interest Rate Risk and the Calculation of Option-Adjusted Duration
Institute of Scientific and Technical Information of China (English)
唐恩林
2014-01-01
This paper first expounds the meaning of the traditional Macaulay duration. Once the assumption that the term structure of interest rates is independent of cash flows is relaxed, the traditional Macaulay duration can no longer effectively and accurately measure the present value of financial instruments. Then, based on the Redington model's assumption that the term structure of interest rates is independent of cash flows, the paper gives a method for calculating duration under embedded-option conditions.
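The traditional Macaulay duration the paper starts from can be computed directly as the present-value-weighted mean time of the cash flows; the bond below is a made-up example, and option-adjusted duration would additionally require a model of how the embedded option changes the cash flows with rates.

```python
def macaulay_duration(cashflows, y):
    """Macaulay duration: PV-weighted mean time of the cash flows.
    cashflows: list of (time_in_years, amount); y: flat annual yield."""
    pv = sum(cf / (1.0 + y) ** t for t, cf in cashflows)
    weighted = sum(t * cf / (1.0 + y) ** t for t, cf in cashflows)
    return weighted / pv

# Hypothetical 3-year 5% annual-coupon bond, face value 100, at a 5% yield
cfs = [(1, 5.0), (2, 5.0), (3, 105.0)]
dur = macaulay_duration(cfs, 0.05)   # ~2.86 years
```

Because the bond is priced at par here, the PV in the denominator comes out to exactly 100, which makes the weighting easy to check by hand.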
Neyrinck, Marleen M; Vrielink, Hans
2015-02-01
It is important to work smoothly with your apheresis equipment when you are an apheresis nurse. Attention should be paid to your donor/patient and the product you are collecting. It adds value to your work when you are able to calculate the efficiency of your procedures. You must be capable of obtaining an optimal product without putting your donor/patient at risk. Not only does the total blood volume (TBV) of the donor/patient play an important role; specific blood values also influence the apheresis procedure. Therefore, not all donors/patients should be addressed in the same way. Calculation of TBV, extracorporeal volume, and total plasma volume is needed. Many issues determine your procedure time. By knowing the collection efficiency (CE) of your apheresis machine, you can calculate the number of blood volumes to be processed to obtain specific results, and whether you need one procedure or more. It is not always necessary to process 3× the TBV; in this way, the donor/patient can be spared being connected to the apheresis device needlessly long. By calculating the CE of each device, you can also compare the various devices for quality-control purposes, as well as the nurses/operators.
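Two of the calculations mentioned (TBV, and how many blood volumes to process for a target yield) can be sketched. Nadler's formula is a standard TBV estimate, but the donor measurements, pre-count, target yield and CE below are hypothetical illustration values.

```python
def tbv_nadler_litres(height_m, weight_kg, male=True):
    """Total blood volume by Nadler's formula (litres)."""
    if male:
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833

def blood_volumes_to_process(target_yield, pre_count_per_l, tbv_l, ce):
    """Number of TBVs to run through the separator to reach a target cell
    yield, given the device's collection efficiency ce (0-1)."""
    litres_needed = target_yield / (ce * pre_count_per_l)
    return litres_needed / tbv_l

# Hypothetical male donor, 1.80 m / 80 kg, platelet pre-count 250e9/L,
# target yield 3e11 platelets, device CE of 50%
tbv = tbv_nadler_litres(1.80, 80.0)                   # ~5.32 L
n_tbv = blood_volumes_to_process(3e11, 250e9, tbv, 0.5)
```

For this donor the target is reached well before one full TBV is processed, which is exactly the point the text makes about not always needing to process 3× the TBV.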
2016-06-10
Under the Medicare Shared Savings Program (Shared Savings Program), providers of services and suppliers that participate in an Accountable Care Organization (ACO) continue to receive traditional Medicare fee-for-service (FFS) payments under Parts A and B, but the ACO may be eligible to receive a shared savings payment if it meets specified quality and savings requirements. This final rule addresses changes to the Shared Savings Program, including: Modifications to the program's benchmarking methodology, when resetting (rebasing) the ACO's benchmark for a second or subsequent agreement period, to encourage ACOs' continued investment in care coordination and quality improvement; an alternative participation option to encourage ACOs to enter performance-based risk arrangements earlier in their participation under the program; and policies for reopening of payment determinations to make corrections after financial calculations have been performed and ACO shared savings and shared losses for a performance year have been determined.
Institute of Scientific and Technical Information of China (English)
宁云才; 张丽华; 李祥仪
2001-01-01
This paper shows how simple programs written in VBA, the language built into Excel, can be used to solve an optimization model for the return and risk of a portfolio of asset investments. It also presents programming methods and techniques for automating repeated solver runs and saving the optimized results.
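The quantities such a model optimizes, portfolio expected return and variance, can be sketched as follows. Python is used here purely for illustration of the arithmetic (the paper itself uses Excel/VBA), and the names and numbers are hypothetical:

```python
def portfolio_stats(weights, mean_returns, cov):
    """Expected return and variance of a portfolio with the given
    weights, asset mean returns and covariance matrix."""
    exp_ret = sum(w * r for w, r in zip(weights, mean_returns))
    var = sum(
        wi * wj * cov[i][j]
        for i, wi in enumerate(weights)
        for j, wj in enumerate(weights)
    )
    return exp_ret, var

w = [0.6, 0.4]
mu = [0.08, 0.12]
cov = [[0.04, 0.01], [0.01, 0.09]]
r, v = portfolio_stats(w, mu, cov)  # r ≈ 0.096, v ≈ 0.0336
```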
Institute of Scientific and Technical Information of China (English)
问楠臻; 魏映华
2013-01-01
This paper explains simple methods and approaches for calculating, in a lightning disaster risk assessment carried out in accordance with GB/T 21714.2-2008/IEC 62305-2:2006 (Protection against Lightning, Part 2: Risk Management), the mesh width of grid-like spatial shields or meshed LPS down-conductor systems and the resistance per unit length of cable shields, and illustrates them with concrete examples.
Directory of Open Access Journals (Sweden)
Haraldo Claus-Hermberg
2009-10-01
nature of the proposed endpoint, a new calculator has been proposed: the Fracture Risk Assessment Tool (FRAX™), which follows the same objectives as previous models but integrates and combines several of those factors according to their relative weight. It can estimate the absolute risk of hip fracture (or of a combination of osteoporotic fractures) over the following 10 years. The calculator can be adapted for use in any country by incorporating that country's hip fracture incidence and age- and sex-adjusted life expectancy. This instrument has been presented as a new paradigm to assist clinical and therapeutic decision-making. The present review discusses some of its characteristics, such as its purported applicability to different populations, the convenience of using 10-year absolute fracture risk across the whole age range under consideration, and whether pharmacological treatment for the prevention of bone fractures in osteoporotic patients can be expected to be equally effective among patients selected for treatment on the basis of this model. Finally, we call attention to the fact that risk thresholds for intervention are not yet clearly defined; those thresholds can obviously be expected to have a profound impact on the number of patients amenable to treatment.
Energy Technology Data Exchange (ETDEWEB)
Kmetyk, L.N.; Brown, T.D. [Sandia National Labs., Albuquerque, NM (United States)
1995-03-01
To gain a better understanding of the risk significance of low power and shutdown modes of operation, the Office of Nuclear Regulatory Research at the NRC established programs to investigate the likelihood and severity of postulated accidents that could occur during low power and shutdown (LP&S) modes of operation at commercial nuclear power plants. To investigate the likelihood of severe core damage accidents during off power conditions, probabilistic risk assessments (PRAs) were performed for two nuclear plants: Unit 1 of the Grand Gulf Nuclear Station, which is a BWR-6 Mark III boiling water reactor (BWR), and Unit 1 of the Surry Power Station, which is a three-loop, subatmospheric, pressurized water reactor (PWR). The analysis of the BWR was conducted at Sandia National Laboratories while the analysis of the PWR was performed at Brookhaven National Laboratory. This multi-volume report presents and discusses the results of the BWR analysis. The subject of this part presents the deterministic code calculations, performed with the MELCOR code, that were used to support the development and quantification of the PRA models. The background for the work documented in this report is summarized, including how deterministic codes are used in PRAS, why the MELCOR code is used, what the capabilities and features of MELCOR are, and how the code has been used by others in the past. Brief descriptions of the Grand Gulf plant and its configuration during LP&S operation and of the MELCOR input model developed for the Grand Gulf plant in its LP&S configuration are given.
Institute of Scientific and Technical Information of China (English)
保强; 王峰; 金旭; 杨欧; 赵艳坤
2015-01-01
The feedstock booster pumps in a residue hydrogenation unit handle a medium that is hot, pressurized, and flammable, so a leak leads directly to a fire. In this paper, hazard and operability analysis (HAZOP) was first applied to qualitatively assess failures of the feed booster pump and their impact on the whole unit, concluding that leakage accidents occur frequently and have serious consequences. To quantify the leakage risk of the feed booster pump, a multi-parameter coupling quantitative risk calculation method is proposed: the coupling relationships among the internal factors of the process node containing the booster pump are identified, and a Bayesian network is built to quantitatively calculate the probabilities of events caused by multiple factors, such as mechanical seal failure, packing seal failure, and casing rupture. Based on statistics of the local annual mean wind speed and solar radiation levels, and considering both common and extreme atmospheric stability and wind-speed conditions, the process hazard analysis software tool (PHAST) was used to quantitatively simulate the severity of the jet fire caused by a leak of hot feedstock for different leak hole diameters. From the calculated probabilities and consequence severities, rating criteria for likelihood and severity were defined, a risk matrix was constructed, and the leakage scenarios arising from different causes were risk-ranked. The resulting risk levels of the causal events can be used to select risk prevention measures and establish emergency plans.
Institute of Scientific and Technical Information of China (English)
吴越; 刘东升; 李明军
2011-01-01
In the processes of a landslide mass sliding and impacting an element at risk, the internal collapse of the landslide mass dissipates part of the kinetic energy, but in practice this part of the energy is usually not taken into account. The discrete element method (DEM) is adopted to obtain impact force-time curves, and the impact energy conversion equation is deduced from the impulse law and the law of energy conservation. By analyzing the energy dissipation in the sliding and impact processes of a real rock slope, a comparison is made between a calculation method that accounts for both internal and external energy dissipation and one that accounts only for external energy dissipation. The results show a significant difference between the two methods: the internal energy dissipation cannot be ignored. The factors influencing the impact energy and the vulnerability of the element at risk are also analyzed. Impact energy is most sensitive to the internal friction angle of the landslide debris; next most sensitive to the distance between the element at risk and the landslide mass, the gap length between joint segments, the density of the landslide mass, and the width of the impact surface; and least sensitive to the cohesion of the landslide mass. In addition, the impact direction of the landslide debris affects both the impact energy and the anti-impact energy.
National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...
Geochemical Calculations Using Spreadsheets.
Dutch, Steven Ian
1991-01-01
Spreadsheets are well suited to many geochemical calculations, especially those that are highly repetitive. Some of the kinds of problems that can be conveniently solved with spreadsheets include elemental abundance calculations, equilibrium abundances in nuclear decay chains, and isochron calculations. (Author/PR)
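One of the repetitive calculations mentioned, equilibrium abundances in a decay chain, can be sketched for a two-member chain with the Bateman equation. This is a hypothetical illustration of the kind of formula a spreadsheet would iterate, not code from the article:

```python
import math

def daughter_atoms(n1_0, lam1, lam2, t):
    """Daughter atoms at time t in a two-member decay chain,
    starting from pure parent (Bateman equation)."""
    return (n1_0 * lam1 / (lam2 - lam1)
            * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))

# long-lived parent, short-lived daughter: approaches secular
# equilibrium, where N2 ≈ N1 * lam1 / lam2
n2 = daughter_atoms(1e20, 1e-6, 1e-2, 1000.0)
```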
Autistic Savant Calendar Calculators.
Patti, Paul J.
This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these…
How Do Calculators Calculate Trigonometric Functions?
Underwood, Jeremy M.; Edwards, Bruce H.
How does your calculator quickly produce values of trigonometric functions? You might be surprised to learn that it does not use series or polynomial approximations, but rather the so-called CORDIC method. This paper will focus on the geometry of the CORDIC method, as originally developed by Volder in 1959. This algorithm is a wonderful…
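A rotation-mode CORDIC sketch, assuming the standard shift-and-add formulation; this illustrates the general method, not the paper's specific geometric derivation:

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC: computes (cos, sin) of theta (|theta| < pi/2)
    using only additions, binary shifts and a small arctangent table."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # CORDIC gain: product of cos(atan(2^-i)) over all iterations
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0  # rotate toward the target angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k

c, s = cordic_sin_cos(0.5)  # ≈ cos(0.5), sin(0.5)
```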
The Dental Trauma Internet Calculator
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; Lauridsen, Eva Fejerskov; Christensen, Søren Steno Ahrensburg
2012-01-01
Background/Aim Prediction tools are increasingly used to inform patients about future dental health outcomes. Advanced statistical methods are required to arrive at unbiased predictions based on follow-up studies. Material and Methods The Internet risk calculator at the Dental Trauma Guide provides prognoses for teeth with traumatic injuries based on the Copenhagen trauma database: http://www.dentaltraumaguide.org The database includes 2191 traumatized permanent teeth from 1282 patients that were treated at the dental trauma unit at the University Hospital in Copenhagen (Denmark).
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment
1998-03-01
In material testing reactors like the JMTR (Japan Materials Testing Reactor), a 50 MW reactor at the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions, so the flux and spectra of an irradiation field must be assessed by nuclear calculation of the core for every operation cycle. To advance the core calculation for the JMTR, the application of MCNP to the assessment of core reactivity and of neutron flux and spectra has been investigated. In this study, to reduce calculation time and variance, the results obtained with the K code and a fixed source were compared with those obtained using the Weight Window technique. Regarding the calculation method, the modeling of the whole JMTR core, the calculation conditions, and the adopted variance reduction technique are explained, and the calculation results are presented. No significant difference was observed in the calculated neutron fluxes arising from the different modeling of the fuel region in the K-code and fixed-source calculations. The method of assessing the neutron flux calculation results is also described. (K.I.)
Assessment of cardiovascular risk.
LENUS (Irish Health Repository)
Cooney, Marie Therese
2010-10-01
Atherosclerotic cardiovascular disease (CVD) is the most common cause of death worldwide. Usually atherosclerosis is caused by the combined effects of multiple risk factors. For this reason, most guidelines on the prevention of CVD stress the assessment of total CVD risk. The most intensive risk factor modification can then be directed towards the individuals who will derive the greatest benefit. To assist the clinician in calculating the effects of these multiple interacting risk factors, a number of risk estimation systems have been developed. This review addresses several issues regarding total CVD risk assessment: Why should total CVD risk be assessed? What risk estimation systems are available? How well do these systems estimate risk? What are the advantages and disadvantages of the current systems? What are the current limitations of risk estimation systems and how can they be resolved? What new developments have occurred in CVD risk estimation?
Directory of Open Access Journals (Sweden)
MEDAR LUCIAN-ION
2011-12-01
Full Text Available The management of credit institutions must be concerned with identifying the internal and external risks of banking operations, estimating their size and importance, assessing their likelihood, and imposing measures for their management. On the one hand, identifying, analyzing, and mitigating banking risks can reduce inconvenient and uneconomical costs and generate income that acts as a shock absorber against falling profits; on the other hand, ignoring these risks can lead to losses reflected in reduced profit, thus affecting the bank's performance.
Electrical installation calculations advanced
Kitcher, Christopher
2013-01-01
All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job; an essential aid to the City & Guilds certificates at Levels 2 and 3; for apprentices and electrical installation engineers.
Electrical installation calculations basic
Kitcher, Christopher
2013-01-01
All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job; an essential aid to the City & Guilds certificates at Levels 2 and 3.
DEFF Research Database (Denmark)
Bahr, Patrick; Hutton, Graham
2015-01-01
In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms...
Radar Signature Calculation Facility
Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...
Electronics Environmental Benefits Calculator
U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...
A New Tool for Measuring Risk: PaV and Its Calculation Framework
Institute of Scientific and Technical Information of China (English)
杜本峰; 郭兴义
2003-01-01
The paper proposes a new tool for measuring risk in financial markets: PaV, the probability that a loss of a given magnitude occurs, and uses Copula functions to obtain a computing algorithm for it. Two cases illustrate promising applications of PaV.
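The quantity PaV measures can be sketched empirically as follows; the Copula-based algorithm the paper derives is not reproduced here, and the function name and sample data are hypothetical:

```python
def pav(losses, magnitude):
    """Empirical PaV: the fraction of observed losses that meet or
    exceed the given magnitude (a simple empirical sketch; the paper
    derives this probability via Copula functions instead)."""
    exceed = sum(1 for x in losses if x >= magnitude)
    return exceed / len(losses)

sample = [0.5, 1.2, 2.0, 3.5, 0.1, 4.2, 1.8, 2.7, 0.9, 3.1]
p = pav(sample, 2.0)  # → 0.5 (5 of 10 losses are >= 2.0)
```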
Calculators and Polynomial Evaluation.
Weaver, J. F.
The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
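Horner's rule is the classic keystroke-efficient evaluation scheme for a calculator with limited data storage, since only one running value must be kept. A sketch (offered as an illustration; the paper's exact procedure may differ):

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's rule; coeffs are given from
    the highest-degree term down to the constant term."""
    result = 0.0
    for c in coeffs:
        result = result * x + c  # fold in one coefficient per step
    return result

# 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
v = horner([2, -6, 2, -1], 3)  # → 5.0
```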
PROSPECTS OF MANAGEMENT ACCOUNTING AND COST CALCULATION
Directory of Open Access Journals (Sweden)
Marian ŢAICU
2014-11-01
Full Text Available Progress in improving production technology requires appropriate measures to achieve an efficient management of costs. This raises the need for continuous improvement of management accounting and cost calculation. Accounting information in general, and management accounting information in particular, have gained importance in the current economic conditions, which are characterized by risk and uncertainty. The future development of management accounting and cost calculation is essential to meet the information needs of management.
Institute of Scientific and Technical Information of China (English)
李园
2014-01-01
As the economy develops, the financial industry keeps growing and its range of services keeps expanding, and with this development a series of problems has appeared. As commercial banks have evolved, the old approaches to bank risk management can no longer meet the need to control the various risk factors of the new era. It is therefore necessary to strengthen the construction of the risk management system and the anti-risk capability of commercial banks, so that, as intermediaries for the exchange of funds, they can provide issuers and investors with more accurate business information and reduce their credit risk. This paper analyzes the application of the KMV model, implemented with Matlab's computing functions, to credit risk management in commercial banks.
Methods for Melting Temperature Calculation
Hong, Qi-Jun
Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
Interval arithmetic in calculations
Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima
2016-10-01
Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given mathematical model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. On the whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we consider the definition of interval mathematics, investigate its properties, prove a theorem, and show the efficiency of the new interval arithmetic. Besides, we briefly review the works devoted to interval analysis and observe the basic tendencies in the development of interval analysis and interval calculations.
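The idea of intervals as basic data objects can be sketched with a minimal interval class. This is a hypothetical illustration; outward rounding, which a rigorous implementation needs to guarantee enclosure under floating point, is omitted:

```python
class Interval:
    """Minimal interval arithmetic: each value is a [lo, hi] range and
    operations return an interval containing every possible result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # the product range is bounded by the four endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

a = Interval(1.0, 2.0)
b = Interval(-1.0, 3.0)
c = a * b  # c represents [-2.0, 6.0]
```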
Unit Cost Compendium Calculations
U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...
DEFF Research Database (Denmark)
Frederiksen, Morten
2014-01-01
Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust. Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude, and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...
EFFECTIVE DISCHARGE CALCULATION GUIDE
Institute of Scientific and Technical Information of China (English)
D.S.BIEDENHARN; C.R.THORNE; P.J.SOAR; R.D.HEY; C.C.WATSON
2001-01-01
This paper presents a procedure for calculating the effective discharge for rivers with alluvial channels. An alluvial river adjusts the bankfull shape and dimensions of its channel to the wide range of flows that mobilize the boundary sediments. It has been shown that time-averaged river morphology is adjusted to the flow that, over a prolonged period, transports most sediment. This is termed the effective discharge. The effective discharge may be calculated provided that the necessary data are available or can be synthesized. The procedure for effective discharge calculation presented here is designed to have general applicability, have the capability to be applied consistently, and represent the effects of the physical processes responsible for determining the channel dimensions. An example of the necessary calculations and applications of the effective discharge concept are presented.
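The core idea, combining a flow-frequency distribution with a sediment-transport rating to find the discharge class that transports the most sediment, can be sketched as follows. All numbers and the rating-curve coefficients are hypothetical, and the paper's full procedure involves more steps:

```python
def effective_discharge(bin_centers, frequencies, a=0.01, b=2.0):
    """Pick the discharge class transporting the most sediment,
    assuming a power-law sediment rating Qs = a * Q**b."""
    transport = [f * a * q ** b for q, f in zip(bin_centers, frequencies)]
    return bin_centers[transport.index(max(transport))]

flows = [10, 50, 100, 200, 400]   # m^3/s, discharge-class midpoints
freqs = [200, 100, 40, 8, 1]      # days per period in each class
qeff = effective_discharge(flows, freqs)  # → 100
```

Note how the effective discharge falls at an intermediate flow: frequent small flows carry little sediment each, while the largest flows are too rare to dominate the long-term total.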
Magnetic Field Grid Calculator
National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator will compute the estimated values of Earth's magnetic field(declination, inclination, vertical component, northerly...
Current interruption transients calculation
Peelo, David F
2014-01-01
Provides an original, detailed and practical description of current interruption transients, origins, and the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,
Source and replica calculations
Energy Technology Data Exchange (ETDEWEB)
Whalen, P.P.
1994-02-01
The starting point of the Hiroshima-Nagasaki Dose Reevaluation Program is the energy and directional distributions of the prompt neutron and gamma-ray radiation emitted from the exploding bombs. A brief introduction to the neutron source calculations is presented. The development of our current understanding of the source problem is outlined. It is recommended that adjoint calculations be used to modify source spectra to resolve the neutron discrepancy problem.
Scientific calculating peripheral
Energy Technology Data Exchange (ETDEWEB)
Ethridge, C.D.; Nickell, J.D. Jr.; Hanna, W.H.
1979-09-01
A scientific calculating peripheral for small intelligent data acquisition and instrumentation systems and for distributed-task processing systems is established with a number-oriented microprocessor controlled by a single-component universal peripheral interface microcontroller. A MOS/LSI number-oriented microprocessor provides the scientific calculating capability with Reverse Polish Notation data format. Master processor task definition storage, input data sequencing, computation processing, result reporting, and interface protocol are managed by a single-component universal peripheral interface microcontroller.
Bus, James S; Banton, Marcy I; Faber, Willem D; Kirman, Christopher R; McGregor, Douglas B; Pourreau, Daniel B
2015-02-01
A screening level risk assessment has been performed for tertiary-butyl acetate (TBAC) examining its primary uses as a solvent in industrial and consumer products. Hazard quotients (HQ) were developed by merging TBAC animal toxicity and dose-response data with population-level, occupational and consumer exposure scenarios. TBAC has a low order of toxicity following subchronic inhalation exposure, and neurobehavioral changes (hyperactivity) in mice observed immediately after termination of exposure were used as conservative endpoints for derivation of acute and chronic reference concentration (RfC) values. TBAC is not genotoxic but has not been tested for carcinogenicity. However, TBAC is unlikely to be a human carcinogen in that its non-genotoxic metabolic surrogates tertiary-butanol (TBA) and methyl tertiary butyl ether (MTBE) produce only male rat α-2u-globulin-mediated kidney cancer and high-dose specific mouse thyroid tumors, both of which have little qualitative or quantitative relevance to humans. Benchmark dose (BMD)-modeling of the neurobehavioral responses yielded acute and chronic RfC values of 1.5 ppm and 0.3 ppm, respectively. After conservative modeling of general population and near-source occupational and consumer product exposure scenarios, almost all HQs were substantially less than 1. HQs exceeding 1 were limited to consumer use of automotive products and paints in a poorly ventilated garage-sized room (HQ = 313) and occupational exposures in small and large brake shops using no personal protective equipment or ventilation controls (HQs = 3.4-126.6). The screening level risk assessments confirm low human health concerns with most uses of TBAC and indicate that further data-informed refinements can address problematic health/exposure scenarios. The assessments also illustrate how tier-based risk assessments using read-across toxicity information to metabolic surrogates reduce the need for comprehensive animal testing.
Institute of Scientific and Technical Information of China (English)
戴海波
2015-01-01
This article presents the investment risk assessment and benefit calculation for the energy performance contracting (EMC) project for waste-heat utilization of blast furnace slag-flushing water at Baosteel. It establishes the project's feasibility and provides the necessary basis for its implementation.
INVAP's Nuclear Calculation System
Directory of Open Access Journals (Sweden)
Ignacio Mochi
2011-01-01
Full Text Available Since its origins in 1976, INVAP has continuously developed the calculation system used for the design and optimization of nuclear reactors. The calculation codes have been polished and enhanced with new capabilities as these became needed or useful for the new challenges the market imposed. The current state of the code packages enables INVAP to design nuclear installations with complex geometries using a set of easy-to-use input files that minimize user errors due to confusion or misinterpretation. A set of intuitive graphic postprocessors has also been developed, providing a fast and complete visualization tool for the parameters obtained in the calculations. The capabilities and general characteristics of this deterministic software package are presented throughout the paper, including several examples of its recent application.
Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim
2003-01-01
We calculate the probability (``quenching weight'') that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peign\\'e-Schiff--Zakharov ``BDMPS-Z''-formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus colli...
OFTIFEL PERSONALIZED NUTRITIONAL CALCULATOR
Directory of Open Access Journals (Sweden)
Malte BETHKE
2016-11-01
Full Text Available A food calculator for elderly people was developed by Centiv GmbH, an active partner in the European FP7 OPTIFEL project, based on the functional requirement specifications and the existing recommendations for daily allowances across Europe; these data were synthesized and used to set targets in amounts per portion. The OPTIFEL Personalised Nutritional Calculator is the only available online tool that determines, on a personalised level, the required nutrients for elderly people (65+). It has been developed mainly to support nursing homes in providing the best possible (personalised) nutrient-enriched food to their patients. The European FP7 OPTIFEL project "Optimised Food Products for Elderly Populations" aims to develop innovative products based on vegetables and fruits for elderly populations to increase the length of independence. The OPTIFEL Personalised Nutritional Calculator is recommended for use by nursing homes.
Spin Resonance Strength Calculations
Courant, E. D.
2009-08-01
In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it.
Curvature calculations with GEOCALC
Energy Technology Data Exchange (ETDEWEB)
Moussiaux, A.; Tombal, P.
1987-04-01
A new method for calculating the curvature tensor has recently been proposed by D. Hestenes. This method is a particular application of geometric calculus, which has been implemented in an algebraic programming language in the form of a package called GEOCALC. The authors show how to apply this package to the Schwarzschild case and discuss the different results.
Haida Numbers and Calculation.
Cogo, Robert
Experienced traders in furs, blankets, and other goods, the Haidas of the 1700's had a well-developed decimal system for counting and calculating. Their units of linear measure included the foot, yard, and fathom, or six feet. This booklet lists the numbers from 1 to 20 in English and Haida; explains the Haida use of ten, hundred, and thousand…
Daylight calculations in practice
DEFF Research Database (Denmark)
Iversen, Anne; Roy, Nicolas; Hvass, Mette;
programs can give different results. This can be due to restrictions in the program itself and/or be due to the skills of the persons setting up the models. This is crucial as daylight calculations are used to document that the demands and recommendations to daylight levels outlined by building authorities...
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2011-01-01
Compared with the elliptical cavity, the spoke cavity has many advantages, especially at low and medium beam energies, and it will be widely used in future superconducting accelerators. Based on the spoke cavity, we design and calculate an accelerator
Radioprotection calculations for MEGAPIE.
Zanini, L
2005-01-01
The MEGAwatt PIlot Experiment (MEGAPIE) liquid lead-bismuth spallation neutron source will commence operation in 2006 at the SINQ facility of the Paul Scherrer Institut. Such an innovative system presents radioprotection concerns peculiar to a liquid spallation target. Several radioprotection issues have been addressed and studied by means of the Monte Carlo transport code, FLUKA. The dose rates in the room above the target, where personnel access may be needed at times, from the activated lead-bismuth and from the volatile species produced were calculated. Results indicate that the dose rate level is of the order of 40 mSv h(-1) 2 h after shutdown, but it can be reduced below the mSv h(-1) level with slight modifications to the shielding. Neutron spectra and dose rates from neutron transport, of interest for possible damage to radiation sensitive components, have also been calculated.
PIC: Protein Interactions Calculator.
Tina, K G; Bhadra, R; Srinivasan, N
2007-07-01
Interactions within a protein structure and interactions between proteins in an assembly are essential considerations in understanding molecular basis of stability and functions of proteins and their complexes. There are several weak and strong interactions that render stability to a protein structure or an assembly. Protein Interactions Calculator (PIC) is a server which, given the coordinate set of 3D structure of a protein or an assembly, computes various interactions such as disulphide bonds, interactions between hydrophobic residues, ionic interactions, hydrogen bonds, aromatic-aromatic interactions, aromatic-sulphur interactions and cation-pi interactions within a protein or between proteins in a complex. Interactions are calculated on the basis of standard, published criteria. The identified interactions between residues can be visualized using a RasMol and Jmol interface. The advantage with PIC server is the easy availability of inter-residue interaction calculations in a single site. It also determines the accessible surface area and residue-depth, which is the distance of a residue from the surface of the protein. User can also recognize specific kind of interactions, such as apolar-apolar residue interactions or ionic interactions, that are formed between buried or exposed residues or near the surface or deep inside.
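As a rough illustration of the kind of geometric criterion such a server applies, the sketch below flags residue pairs whose relevant atoms lie within a distance cutoff. The cutoff values and residue names here are hypothetical placeholders, not the actual published criteria used by PIC.

```python
import math

# Hypothetical distance cutoffs (angstroms), loosely modeled on the idea of
# "standard, published criteria"; PIC's real thresholds may differ.
CUTOFFS = {
    "disulphide": 2.2,   # S-S distance between cysteine SG atoms
    "ionic": 6.0,        # between charged-group atoms
    "hydrophobic": 5.0,  # between apolar side-chain atoms
}

def distance(a, b):
    """Euclidean distance between two 3D coordinates."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_interactions(atoms, kind):
    """Return residue pairs whose relevant atoms fall within the cutoff.

    `atoms` is a list of (residue_id, (x, y, z)) tuples, already filtered
    to the atom type relevant for the interaction `kind`.
    """
    cutoff = CUTOFFS[kind]
    pairs = []
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            if distance(atoms[i][1], atoms[j][1]) <= cutoff:
                pairs.append((atoms[i][0], atoms[j][0]))
    return pairs

# Two cysteine sulfurs 2.05 A apart would be reported as a disulphide bond.
cys_sg = [("CYS12", (0.0, 0.0, 0.0)), ("CYS45", (2.05, 0.0, 0.0))]
print(find_interactions(cys_sg, "disulphide"))  # [('CYS12', 'CYS45')]
```

A production tool would of course parse PDB coordinates and apply angle criteria as well as distances; this only shows the core pairwise test.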
Calculations in furnace technology
Davies, Clive; Hopkins, DW; Owen, WS
2013-01-01
Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi
Angarita, Fernando A.; University Health Network; Acuña, Sergio A.; Mount Sinai Hospital; Jimenez, Carolina; University of Toronto; Garay, Javier; Pontificia Universidad Javeriana; Gómez, David; University of Toronto; Domínguez, Luis Carlos; Pontificia Universidad Javeriana
2010-01-01
Acute calculous cholecystitis is the most important cause of cholecystectomies worldwide. We review the pathophysiology of the inflammatory process in this organ secondary to biliary tract obstruction, as well as its clinical manifestations, workup, and the treatment it requires.
Zero Temperature Hope Calculations
Energy Technology Data Exchange (ETDEWEB)
Rozsnyai, B F
2002-07-26
The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since experimental data in the high-temperature region are scarce, comparisons of predictions with the ample zero-temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an ''average atom'' (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the ''aufbau'' principle works, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an ''ion-sphere'' model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the ''detailed configuration accounting'' (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature of the HOPE code which should be noted: any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, where electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations in both the initial and final states connected by photoexcitation, an enormous computational task.
Linewidth calculations and simulations
Strandberg, Ingrid
2016-01-01
We are currently developing a new technique to further enhance the sensitivity of collinear laser spectroscopy in order to study the most exotic nuclides available at radioactive ion beam facilities, such as ISOLDE at CERN. The overall goal is to evaluate the feasibility of the new method. This report will focus on the determination of the expected linewidth (hence resolution) of this approach. Different effects which could lead to a broadening of the linewidth, e.g. the ions' energy spread and their trajectories inside the trap, are studied with theoretical calculations as well as simulations.
Lopez, Cesar
2015-01-01
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. This book is designed for use as a scientific/business calculator so that you can get numerical solutions to problems involving a wide array of mathematics using MATLAB. Just look up the function y
Multilayer optical calculations
Byrnes, Steven J
2016-01-01
When light hits a multilayer planar stack, it is reflected, refracted, and absorbed in a way that can be derived from the Fresnel equations. The analysis is treated in many textbooks, and implemented in many software programs, but certain aspects of it are difficult to find explicitly and consistently worked out in the literature. Here, we derive the formulas underlying the transfer-matrix method of calculating the optical properties of these stacks, including oblique-angle incidence, absorption-vs-position profiles, and ellipsometry parameters. We discuss and explain some strange consequences of the formulas in the situation where the incident and/or final (semi-infinite) medium are absorptive, such as calculating $T>1$ in the absence of gain. We also discuss some implementation details like complex-plane branch cuts. Finally, we derive modified formulas for including one or more "incoherent" layers, i.e. very thick layers in which interference can be neglected. This document was written in conjunction with ...
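The core of the transfer-matrix method described above can be sketched compactly for normal incidence using the characteristic-matrix form. This is a minimal illustration, not the authors' code: it omits oblique incidence, polarization, and the absorptive-medium subtleties the abstract discusses.

```python
import cmath

def stack_reflectance(n_list, d_list, lam):
    """Reflectance of a planar multilayer stack at normal incidence.

    n_list: refractive indices [incident medium, layer1, ..., final medium]
            (may be complex for absorbing layers)
    d_list: thicknesses of the interior layers (same units as lam)
    lam:    vacuum wavelength
    """
    M = [[1, 0], [0, 1]]  # characteristic matrix, starts as identity
    for n, d in zip(n_list[1:-1], d_list):
        delta = 2 * cmath.pi * n * d / lam  # phase thickness of the layer
        c, s = cmath.cos(delta), cmath.sin(delta)
        L = [[c, 1j * s / n], [1j * n * s, c]]
        M = [[M[0][0]*L[0][0] + M[0][1]*L[1][0], M[0][0]*L[0][1] + M[0][1]*L[1][1]],
             [M[1][0]*L[0][0] + M[1][1]*L[1][0], M[1][0]*L[0][1] + M[1][1]*L[1][1]]]
    n0, ns = n_list[0], n_list[-1]
    num = (M[0][0] + M[0][1] * ns) * n0 - (M[1][0] + M[1][1] * ns)
    den = (M[0][0] + M[0][1] * ns) * n0 + (M[1][0] + M[1][1] * ns)
    return abs(num / den) ** 2

# Sanity check: bare glass (n = 1.5) reflects ((1-1.5)/(1+1.5))^2 = 4%.
print(round(stack_reflectance([1.0, 1.5], [], 500.0), 3))  # 0.04
```

A quarter-wave antireflection layer (e.g. n = 1.38 on n = 1.5 glass) reduces this value, which is a quick way to validate any implementation of the method.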
Bhatnagar, Shalabh
2017-01-01
Sound is an emerging source of renewable energy, but it has some limitations. The main limitation is that the amount of energy that can be extracted from sound is very small, and that is because of the velocity of the sound. The velocity of sound changes with the medium. If we could increase the velocity of sound in a medium, we would probably be able to extract more energy from sound and transfer it at a higher rate. To increase the velocity of sound we should know the speed of sound. By the theory of classical mechanics, speed is the distance travelled by a particle divided by time, whereas velocity is the displacement of the particle divided by time. The speed of sound in dry air at 20 °C (68 °F) is considered to be 343.2 meters per second, and it would not be wrong to say that 343.2 meters per second is the velocity of sound rather than the speed, since it reflects the displacement of the sound, not the total distance the sound wave covered. Sound travels in the form of a mechanical wave, so when calculating the speed of sound the whole path of the wave should be considered, not just the distance traveled. In this paper I focus on calculating the actual speed of the sound wave, which can help us extract more energy and make sound travel with faster velocity.
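For context, the 343.2 m/s figure quoted above follows from a standard ideal-gas approximation in which the speed of sound in dry air scales with the square root of absolute temperature. The short sketch below uses that textbook relation; it is background arithmetic, not part of the paper's own method.

```python
import math

def speed_of_sound_air(celsius):
    """Approximate speed of sound in dry air (m/s), using the ideal-gas
    scaling v = 331.3 * sqrt(T / 273.15) with T in kelvin.
    331.3 m/s is the commonly quoted value at 0 degrees Celsius."""
    kelvin = celsius + 273.15
    return 331.3 * math.sqrt(kelvin / 273.15)

print(round(speed_of_sound_air(20.0), 1))  # 343.2, the figure cited above
```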
Molecular Dynamics Calculations
1996-01-01
The development of thermodynamics and statistical mechanics is very important in the history of physics, and it underlines the difficulty in dealing with systems involving many bodies, even if those bodies are identical. Macroscopic systems of atoms typically contain so many particles that it would be virtually impossible to follow the behavior of all of the particles involved. Therefore, the behavior of a complete system can only be described or predicted in statistical ways. Under a grant to the NASA Lewis Research Center, scientists at the Case Western Reserve University have been examining the use of modern computing techniques that may be able to investigate and find the behavior of complete systems that have a large number of particles by tracking each particle individually. This is the study of molecular dynamics. In contrast to Monte Carlo techniques, which incorporate uncertainty from the outset, molecular dynamics calculations are fully deterministic. Although it is still impossible to track, even on high-speed computers, each particle in a system of a trillion trillion particles, it has been found that such systems can be well simulated by calculating the trajectories of a few thousand particles. Modern computers and efficient computing strategies have been used to calculate the behavior of a few physical systems and are now being employed to study important problems such as supersonic flows in the laboratory and in space. In particular, an animated video (available in mpeg format--4.4 MB) was produced by Dr. M.J. Woo, now a National Research Council fellow at Lewis, and the G-VIS laboratory at Lewis. This video shows the behavior of supersonic shocks produced by pistons in enclosed cylinders by following exactly the behavior of thousands of particles. The major assumptions made were that the particles involved were hard spheres and that all collisions with the walls and with other particles were fully elastic. The animated video was voted one of two
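The hard-sphere, fully elastic collisions assumed in the simulation above reduce, for a head-on encounter, to the classic conservation-of-momentum-and-energy update. The snippet below is a minimal 1D sketch of that collision rule, not the actual NASA/CWRU code.

```python
def collide_elastic(m1, v1, m2, v2):
    """Post-collision velocities for a head-on elastic collision of two
    hard spheres (1D), derived from conservation of momentum and of
    kinetic energy -- the same assumptions as in the simulation above."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Equal masses simply exchange velocities -- the textbook result.
print(collide_elastic(1.0, +1.0, 1.0, 0.0))  # (0.0, 1.0)
```

An MD driver deterministically advances each particle in free flight between such collision events (plus elastic reflections at the walls), which is what makes the method fully deterministic in contrast to Monte Carlo.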
Ahrens, Thomas J.; O'Keefe, J. D.; Smither, C.; Takata, T.
1991-01-01
In the course of carrying out finite difference calculations, it was discovered that for large craters a previously unrecognized type of crater (diameter) growth occurred, which was called lip wave propagation. This type of growth is illustrated for an impact of a 1000 km (2a) silicate bolide at 12 km/sec (U) onto a silicate half-space at earth gravity (1 g). The von Mises crustal strength is 2.4 kbar. The motion at the crater lip associated with this wave-type phenomenon is up, outward, and then down, similar to the particle motion of a surface wave. It is shown that the crater diameter has grown from d/a of approximately 2.5 to d/a of approximately 4 via lip propagation from Ut/a = 5.56 to 17.0 during the time when rebound occurs. A new code is being used to study partitioning of energy and momentum and cratering efficiency with self-gravity for finite-sized objects rather than the previously discussed planetary half-space problems. These are important and fundamental subjects which can be addressed with smoothed particle hydrodynamics (SPH) codes. The SPH method was used to model various problems in astrophysics and planetary physics. The initial work demonstrates that the energy budgets for normal and oblique impacts are distinctly different from earlier calculations for a silicate projectile impact on a silicate half-space. Motivated by the first striking radar images of Venus obtained by Magellan, the effect of the atmosphere on impact cratering was studied. In order to further quantify the processes of meteor break-up and trajectory scattering upon break-up, the reentry physics of meteors striking Venus' atmosphere versus that of the Earth were studied.
Directory of Open Access Journals (Sweden)
Montserrat Hernández Solís
2013-12-01
Full Text Available Modification of instantaneous mortality rates when applying the net premium principle, in order to cope with unfavorable deviations in claims, is a common practice carried out by insurance companies. This paper provides a mathematical answer to this matter by applying Wang's power distortion function. Both the net premium principle and Wang's distortion function are coherent risk measures, the latter being applied here for the first time to the field of life insurance. Using the Gompertz and Makeham laws, we first calculate the premium at a general level; in a second part, the premium calculation principle based on Wang's power distortion function is applied to calculate the loading on the adjusted risk premium. The single risk premium has been applied to a form of survival insurance cover, the life annuity. The main conclusion that can be drawn is that, by using the distortion function, the new instantaneous mortality rate is directly proportional to a multiple, namely the exponent of this function, which makes the longevity risk greater. This is why the adjusted risk premium is higher than the net premium.
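The effect of a power distortion on an annuity premium can be illustrated numerically: applying g(p) = p^(1/rho) with rho > 1 inflates each survival probability, so the risk-adjusted premium exceeds the net premium. The survival values and rates below are illustrative toy numbers, not the Gompertz/Makeham fits from the paper.

```python
def annuity_premium(survival, v, rho=1.0):
    """Net single premium of a life annuity-due paying 1 per period.

    `survival` holds the t-year survival probabilities tp_x (t = 0, 1, ...);
    the Wang power distortion raises each to 1/rho, which loads the
    premium for longevity risk. rho = 1 recovers the ordinary net premium.
    """
    return sum((p ** (1.0 / rho)) * v ** t for t, p in enumerate(survival))

surv = [1.0, 0.95, 0.88, 0.78, 0.65]  # toy tp_x values, not from the paper
v = 1 / 1.03                           # discount factor at a 3% rate
net = annuity_premium(surv, v)             # net premium (rho = 1)
loaded = annuity_premium(surv, v, rho=1.5) # Wang-distorted premium
print(loaded > net)  # True: the adjusted risk premium exceeds the net premium
```

This mirrors the paper's qualitative conclusion: the distortion exponent acts as a multiple on the instantaneous mortality rate, pushing the adjusted premium above the net premium.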
Mathematics, Pricing, Market Risk Management and Trading Strategies for Financial Derivatives (2/3)
CERN. Geneva; Coffey, Brian
2009-01-01
Market Trading and Risk Management of Vanilla FX Options - Measures of Market Risk - Implied Volatility - FX Risk Reversals, FX Strangles - Valuation and Risk Calculations - Risk Management - Market Trading Strategies
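The "valuation" item in the list above is conventionally handled with the Garman-Kohlhagen model, the standard Black-Scholes variant for vanilla FX options. The sketch below shows that textbook formula with illustrative inputs; it is not taken from the lecture itself.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def garman_kohlhagen_call(S, K, T, sigma, r_dom, r_for):
    """Garman-Kohlhagen value of a vanilla FX call, quoted in domestic
    currency per unit of foreign currency. Textbook formula; the inputs
    below are illustrative, not market data."""
    d1 = (math.log(S / K) + (r_dom - r_for + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp(-r_for * T) * norm_cdf(d1) - K * math.exp(-r_dom * T) * norm_cdf(d2)

# At-the-money 6-month call at 10% implied volatility:
price = garman_kohlhagen_call(S=1.10, K=1.10, T=0.5, sigma=0.10, r_dom=0.02, r_for=0.01)
print(round(price, 4))  # a small positive premium for this example
```

Implied volatility, risk reversals, and strangles are then quoted in terms of this same pricing function evaluated at different strikes and deltas.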
Giantomassi, Matteo; Huhs, Georg; Waroquiers, David; Gonze, Xavier
2014-03-01
Many-Body Perturbation Theory (MBPT) defines a rigorous framework for the description of excited-state properties based on the Green's function formalism. Within MBPT, one can calculate charged excitations using e.g. Hedin's GW approximation for the electron self-energy. In the same framework, neutral excitations are also well described through the solution of the Bethe-Salpeter equation (BSE). In this talk, we report on the recent developments concerning the parallelization of the MBPT algorithms available in the ABINIT code (www.abinit.org). In particular, we discuss how to improve the parallel efficiency thanks to a hybrid version that employs MPI for the coarse-grained parallelization and OpenMP (a de facto standard for parallel programming on shared-memory architectures) for the fine-grained parallelization of the most CPU-intensive parts. Benchmark results obtained with the new implementation are discussed. Finally, we present results for the GW corrections of amorphous SiO2 in the presence of defects and the BSE absorption spectrum. This work has been supported by the PRACE project (Partnership for Advanced Computing in Europe, http://www.prace-ri.eu).
Energy Technology Data Exchange (ETDEWEB)
Uruena Llinares, A.; Santos Rubio, A.; Luis Simon, F. J.; Sanchez Carmona, G.; Herrador Cordoba, M.
2006-07-01
The objective of this paper is to compare, in thirty lung cancer treatments, the absorbed doses to organs at risk and target volumes obtained with the two calculation algorithms of our treatment planning system, Oncentra MasterPlan: Pencil Beam vs. Collapsed Cone. For this we use a set of measured indicators (D1 and D99 of the tumor volume, V20 of the lung, homogeneity index defined as (D5-D95)/D prescribed, and others). Analysing the data with descriptive statistics and the non-parametric Wilcoxon signed-rank test, we find that the Pencil Beam algorithm underestimates the dose in the PTV region when it includes low-density tissue, as well as the maximum dose values in the spinal cord. We conclude that in those treatments in which the spinal cord dose is near the maximum permissible limit, or in which the PTV includes pulmonary tissue, the Collapsed Cone algorithm should be used systematically; in any case, the trade-off between calculation time and precision must be weighed for both algorithms. (Authors)
Risk perception for paragliding practitioners.
Paixão, Jairo Antônio da; Tucher, Guilherme
2012-01-01
As an adventure sport, paragliding exposes participants to different levels of life risk. However, the boundary between calculated risk and real risk is a subtle one, depending on the practitioner's perception. Thus, this study aimed to analyze the risk perception of 73 paragliding practitioners. The descriptive-exploratory study method was used. Data were collected via a questionnaire validated according to the Delphi technique. Variables were evaluated on a bipolar Likert-type scale, ranging ...
DEFF Research Database (Denmark)
Rothmann, M J; Ammentorp, J; Bech, M
2015-01-01
factors associated with this and to compare self-perceived risk with absolute fracture risk estimated by FRAX® in women aged 65-80 years. METHODS: Data from 20,905 questionnaires from the ROSE study were analyzed. The questionnaire included 25 items on osteoporosis, risk factors for fractures, and self......-perceived risk of fractures and enabled calculation of absolute fracture risk by FRAX®. Data were analyzed using bivariate tests and regression models. RESULTS: Women generally underestimated their fracture risk compared to the absolute risk estimated by FRAX®. Women with risk factors for fracture estimated......-rated health, conditions related to secondary osteoporosis, and inability to do housework. CONCLUSIONS: These women aged 65-81 years underestimated their risk of fracture. However, they did seem to have an understanding of the importance of some risk factors such as previous fractures and parental hip fracture...
Carbon fiber dispersion models used for risk analysis calculations
1979-01-01
For evaluating the downwind, ground level exposure contours from carbon fiber dispersion, two fiber release scenarios were chosen. The first is the fire and explosion release in which all of the fibers are released instantaneously. This model applies to accident scenarios where an explosion follows a short-duration fire in the aftermath of the accident. The second is the plume release scenario in which the total mass of fibers is released into the fire plume. This model applies to aircraft accidents where only a fire results. These models are described in detail.
The rating reliability calculator
Directory of Open Access Journals (Sweden)
Solomon David J
2004-04-01
Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, which the program will upload to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
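The Spearman-Brown prophecy formula mentioned in the Results is a one-liner: given the reliability of a single rating, it predicts the reliability of the average of k ratings. A minimal sketch (the numbers are illustrative, not from the paper):

```python
def spearman_brown(reliability, k):
    """Predicted reliability of the average of k parallel ratings, given
    the single-rating reliability -- the prophecy formula the utility uses."""
    return k * reliability / (1.0 + (k - 1.0) * reliability)

# A single rating with reliability 0.5, averaged over 3 judges:
print(round(spearman_brown(0.5, 3), 2))  # 0.75
```

Averaging more judges always increases the predicted reliability, with diminishing returns as k grows.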
Cosmological Calculations on the GPU
Bard, Deborah; Allen, Mark T; Yepremyan, Hasmik; Kratochvil, Jan M
2012-01-01
Cosmological measurements require the calculation of nontrivial quantities over large datasets. The next generation of survey telescopes (such as DES, Pan-STARRS, and LSST) will yield measurements of billions of galaxies. The scale of these datasets, and the nature of the calculations involved, make cosmological calculations ideal models for implementation on graphics processing units (GPUs). We consider two cosmological calculations, the two-point angular correlation function and the aperture mass statistic, and aim to improve the calculation time by constructing code for calculating them on the GPU. Using CUDA, we implement the two algorithms on the GPU and compare the calculation speeds to comparable code run on the CPU. We obtain a speed-up of between 10x and 180x compared to performing the same calculation on the CPU. The code has been made publicly available.
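The computational shape of the two-point angular correlation function is easy to see in a CPU sketch: the dominant cost is an O(N^2) pair count over sky positions, and since every pair is independent, the loop maps naturally onto GPU threads. This is an illustrative sketch of the pair-counting kernel only, not the authors' CUDA code; a full estimator would also count pairs in a matching random catalogue.

```python
import math
import random

def pair_count(points, theta_max):
    """Count pairs of unit vectors separated by less than theta_max radians.
    This O(N^2) double loop is the kernel that parallelizes trivially on a
    GPU: each (i, j) pair can be handled by an independent thread."""
    cos_min = math.cos(theta_max)
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(a * b for a, b in zip(points[i], points[j]))
            if dot > cos_min:  # angular separation below theta_max
                count += 1
    return count

def random_unit_vector(rng):
    """Uniform random direction on the unit sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

rng = random.Random(42)
data = [random_unit_vector(rng) for _ in range(200)]
dd = pair_count(data, 0.1)  # "DD" counts; "RR" from a random catalogue
# would complete e.g. the Peebles-Hauser estimator w(theta) ~ DD/RR - 1.
print(dd >= 0)  # True
```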
New Arsenic Cross Section Calculations
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-03-04
This report presents calculations for the new arsenic cross section. Cross sections for 73,74,75-As above the resonance range were calculated with a newly developed Hauser-Feshbach code, CoH3.
Genesis methodology quantitative risk assessment of innovative technologies in hydraulic engineering
Bekker Aleksandr T.; Zolotov Boris A.; Ljubimov Valeriy S.; Nosovsky Valeriy S.
2015-01-01
The historical development of studies to determine the risk of innovative technologies in hydraulic engineering is reviewed. The proposed methodology for quantitative risk calculation can be used in hydraulic engineering and can serve as a basis for calculating the risk of industrial techniques.
Lessing, P.; Messina, C.P.; Fonner, R.F.
1983-01-01
Landslide risk can be assessed by evaluating geological conditions associated with past events. A sample of 2,416 slides from urban areas in West Virginia, each with 12 associated geological factors, has been analyzed using SAS computer methods. In addition, selected data have been normalized to account for the areal distribution of rock formations, soil series, and slope percents. Final calculations yield landslide risk assessments of 1.50 = high risk. The simplicity of the method provides for a rapid, initial assessment prior to financial investment. However, it does not replace on-site investigations, nor excuse poor construction. © 1983 Springer-Verlag New York Inc.
DEFF Research Database (Denmark)
Blicher-Mathiesen, Gitte; Andersen, Hans Estrup; Carstensen, Jacob
2014-01-01
will be more effective if they are implemented in N loss hot spots or risk areas. Additionally, the highly variable N reduction in groundwater and surface waters needs to be taken into account as this strongly influences the resulting effect of mitigation measures. The objectives of this study were to develop...... risk mapping part of the tool, we combined a modelled root zone N leaching with a catchment-specific N reduction factor which in combination determines the N load to the marine recipient. N leaching was calculated using detailed information of agricultural management from national databases as well...... and apply an N risk tool to the entire agricultural land area in Denmark. The purpose of the tool is to identify high risk areas, i.e. areas which contribute disproportionately much to diffuse N losses to the marine recipient, and to suggest cost-effective measures to reduce losses from risk areas. In the N...
Energy Technology Data Exchange (ETDEWEB)
Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V.E., E-mail: wssantos@ipen.br, E-mail: lpneves@ipen.br, E-mail: aperini@ipen.br, E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Maia, Ana F., E-mail: afmaia@ufs.br [Universidade Federal de Sergipe (UFS), Sao Cristovao, SE (Brazil). Dept. de Fisica
2014-07-01
Cardiac procedures are among the most common procedures in interventional radiology (IR) and can lead to high medical and occupational exposures, since in most cases these procedures are complex and long-lasting. In this work, conversion coefficients (CC) for the risk of cancer, normalized to the kerma-area product (KAP), were calculated for the patient, the cardiologist and the nurse using Monte Carlo simulation. The patient and the cardiologist were represented by MESH anthropomorphic phantoms, and the nurse by the FASH anthropomorphic phantom. The phantoms were incorporated into the Monte Carlo code MCNPX. Two scenarios were created: in the first (1), the lead curtain and suspended protective equipment were not included, and in the second (2) these devices were inserted. The radiographic parameters employed in the Monte Carlo simulations were: tube voltages of 60 kVp and 120 kVp, beam filtration of 3.5 mmAl, and a beam area of 10 x 10 cm{sup 2}. The average values of the CCs over eight projections (in 10{sup -4}/Gy.cm{sup 2}) were 1.2 for the patient, 2.6E-03 (scenario 1) and 4.9E-04 (scenario 2) for the cardiologist, and 5.2E-04 (scenario 1) and 4.0E-04 (scenario 2) for the nurse. The results show a significant reduction in the CCs for the professionals when the lead curtain and suspended protective equipment are employed. The evaluation method used in this work can provide important information on the cancer risk to patients and professionals, and thus improve the protection of workers in cardiac IR procedures.
Exploration Health Risks: Probabilistic Risk Assessment
Rhatigan, Jennifer; Charles, John; Hayes, Judith; Wren, Kiley
2006-01-01
Maintenance of human health on long-duration exploration missions is a primary challenge to mission designers. Indeed, human health risks are currently the largest contributors to the risks of evacuation or loss of the crew on long-duration International Space Station missions. We describe a quantitative assessment of the relative probabilities of occurrence of the individual risks to human safety and efficiency during space flight, to augment the qualitative assessments used in this field to date. Quantitative probabilistic risk assessments will allow program managers to focus resources on those human health risks most likely to occur with undesirable consequences. Truly quantitative assessments are common, even expected, in the engineering and actuarial spheres, but that capability is just emerging in some arenas of life sciences research, such as identifying and minimizing the hazards to astronauts during future space exploration missions. Our expectation is that these results can be used to inform NASA mission design trade studies in the near future, with the objective of preventing the most significant of the human health risks. We identify and discuss statistical techniques to provide this risk quantification based on relevant sets of astronaut biomedical data from short- and long-duration space flights, as well as relevant analog populations. We outline critical assumptions made in the calculations and discuss the rationale for these. Our efforts to date have focused on quantifying the probabilities of medical risks that are qualitatively perceived as relatively high: radiation sickness, cardiac dysrhythmias, medically significant renal stone formation due to increased calcium mobilization, decompression sickness as a result of EVA (extravehicular activity), and bone fracture due to loss of bone mineral density. We present these quantitative probabilities in order-of-magnitude comparison format so that relative risk can be gauged. We address the effects of
Global nuclear-structure calculations
Energy Technology Data Exchange (ETDEWEB)
Moeller, P.; Nix, J.R.
1990-04-20
The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980s was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the {epsilon}{sub 2} and {epsilon}{sub 4} used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and {beta}-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential.
Equilibrium calculations of firework mixtures
Energy Technology Data Exchange (ETDEWEB)
Hobbs, M.L. [Sandia National Labs., Albuquerque, NM (United States); Tanaka, Katsumi; Iida, Mitsuaki; Matsunaga, Takehiro [National Inst. of Materials and Chemical Research, Tsukuba, Ibaraki (Japan)
1994-12-31
Thermochemical equilibrium calculations have been used to calculate detonation conditions for typical firework components including three report charges, two display charges, and black powder which is used as a fuse or launch charge. Calculations were performed with a modified version of the TIGER code which allows calculations with 900 gaseous and 600 condensed product species at high pressure. The detonation calculations presented in this paper are thought to be the first report on the theoretical study of firework detonation. Measured velocities for two report charges are available and compare favorably to predicted detonation velocities. However, the measured velocities may not be true detonation velocities. Fast deflagration rather than an ideal detonation occurs when reactants contain significant amounts of slow reacting constituents such as aluminum or titanium. Despite such uncertainties in reacting pyrotechnics, the detonation calculations do show the complex nature of condensed phase formation at elevated pressures and give an upper bound for measured velocities.
CALCULATION OF LASER CUTTING COSTS
Directory of Open Access Journals (Sweden)
Bogdan Nedic
2016-09-01
Full Text Available The paper presents a description of methods of metal cutting and a calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables resolving the problem of calculating the cost of laser cutting, comparison of the costs incurred with other unconventional methods, and provides documentation that consists of reports on estimated costs.
Modern biogeochemistry environmental risk assessment
Bashkin, Vladimir N
2006-01-01
Most books deal mainly with various technical aspects of ERA description and calculations. This book aims at generalizing the modern ideas of both biogeochemical and environmental risk assessment developed in recent years, and at supplementing the existing books by providing a modern understanding of the mechanisms that are responsible for ecological risk to human beings and ecosystems.
Procedures for Calculating Residential Dehumidification Loads
Energy Technology Data Exchange (ETDEWEB)
Winkler, Jon [National Renewable Energy Lab. (NREL), Golden, CO (United States); Booten, Chuck [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-06-01
Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% DP design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
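The ventilation-driven moisture load at the heart of such a calculation can be illustrated with the standard-air latent relation commonly used in Manual J-style sizing. A minimal sketch, not taken from the report; the 0.68 constant is the conventional standard-air latent factor, and all numeric inputs are assumed for illustration:

```python
def ventilation_latent_load_btuh(cfm: float,
                                 w_out_gr_lb: float,
                                 w_in_gr_lb: float) -> float:
    """Latent load (Btu/h) of ventilation air at standard-air conditions.

    0.68 = conventional standard-air latent constant; humidity ratios
    are expressed in grains of moisture per lb of dry air.
    """
    return 0.68 * cfm * (w_out_gr_lb - w_in_gr_lb)

# Hypothetical part-load condition: 100 cfm of ventilation air,
# outdoor air at 110 gr/lb, indoor setpoint at 64 gr/lb
q_latent = ventilation_latent_load_btuh(100.0, 110.0, 64.0)
```

At part load the sensible load shrinks while this term persists, which is why the abstract's concern about humidity control arises.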
Calculator. Owning a Small Business.
Parma City School District, OH.
Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…
Calculation of Spectra of Solids:
DEFF Research Database (Denmark)
Lindgård, Per-Anker
1975-01-01
The Gilat-Raubenheimer method simplified to tetrahedron division is used to calculate the real and imaginary part of the dynamical response function for electrons. A frequency expansion for the real part is discussed. The Lindhard function is calculated as a test for numerical accuracy. The condu...
Kleijn van Willigen, G.K.; Meerveld, H. van
2016-01-01
The reliability and availability of the Dutch storm surge barriers are calculated by probabilistic risk assessment and various underlying risk analysis methods. These calculations, however, focus on the numerical probability of the storm surge barrier functioning adequately, and the implementation o
Closure and Sealing Design Calculation
Energy Technology Data Exchange (ETDEWEB)
T. Lahnalampi; J. Case
2005-08-26
The purpose of the ''Closure and Sealing Design Calculation'' is to illustrate closure and sealing methods for shafts and ramps, and to identify boreholes that require sealing in order to limit the potential for water infiltration. In addition, this calculation provides a description of the magma bulkhead that can reduce the consequences of an igneous event intersecting the repository. This calculation also includes a listing of the project requirements related to closure and sealing. The scope of this calculation is to: summarize applicable project requirements and codes relating to backfilling nonemplacement openings, removal of uncommitted materials from the subsurface, installation of drip shields, and erecting monuments; compile an inventory of boreholes that are found in the area of the subsurface repository; describe the magma bulkhead feature and location; and include figures for the proposed shaft and ramp seals. The objective of this calculation is to: categorize the boreholes for sealing by depth and proximity to the subsurface repository; develop drawing figures which show the location and geometry of the magma bulkhead; include the shaft seal figures and a proposed construction sequence; and include the ramp seal figure and a proposed construction sequence. The intent of this closure and sealing calculation is to support the License Application by providing a description of the closure and sealing methods for the Safety Analysis Report. The closure and sealing calculation will also provide input for Post Closure Activities by describing the location of the magma bulkhead. This calculation is limited to describing the final configuration of the sealing and backfill systems for the underground area. The methods and procedures used to place the backfill and remove uncommitted materials (such as concrete) from the repository, and the detailed design of the magma bulkhead, will be the subject of separate analyses or calculations. Post
Energy Technology Data Exchange (ETDEWEB)
Liljenzin, J.O.; Rydberg, J. [Radiochemistry Consultant Group, Vaestra Froelunda (Sweden)
1996-11-01
The first part of this review discusses the importance of risk. If there is any relation between the emotional and rational risk perceptions (for example, it is believed that increased knowledge will decrease emotions), it will be a desirable goal for society, and the nuclear industry in particular, to improve laymen's understanding of the rational risks from nuclear energy. This review surveys various paths to a more common comprehension - perhaps a consensus - of the nuclear waste risks. The second part discusses radioactivity as a risk factor and concludes that it has no relation in itself to risk, but must be connected to exposure leading to a dose risk, i.e. a health detriment, which is commonly expressed in terms of cancer induction rate. Dose-effect relations are discussed in light of the recent scientific debate. The third part of the report describes a number of hazard indexes for nuclear waste found in the literature and distinguishes between absolute and relative risk scales. The absolute risks as well as the relative risks have changed over time due to changes in radiological and metabolic data and changes in the mode of calculation. To judge from the literature, the risk discussion is huge, even when limited to nuclear waste. It would be very difficult to make a comprehensive review and extract the essentials from it. Therefore, we have chosen to select some publications, out of over 100, which we summarize rather comprehensively; in some cases we also include our remarks. 110 refs, 22 figs.
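The review's point that radioactivity becomes a risk only through exposure is usually made quantitative by multiplying an effective dose by a nominal risk coefficient. A minimal sketch of that conversion, assuming an ICRP-style population-averaged nominal coefficient of about 5.5e-2 per Sv (this value is an assumption here, not taken from the review):

```python
# Assumed ICRP-style nominal cancer risk coefficient (per sievert),
# a population-averaged value used only for illustration.
NOMINAL_RISK_PER_SV = 5.5e-2

def dose_risk(effective_dose_sv: float) -> float:
    """Health detriment (cancer induction probability) from an effective dose."""
    return effective_dose_sv * NOMINAL_RISK_PER_SV

# Example: a 1 mSv exposure
detriment = dose_risk(0.001)
```

The review's hazard indexes differ mainly in which dose estimate and which coefficient enter a calculation of this shape, which is why they drift as radiological and metabolic data are revised.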
D & D screening risk evaluation guidance
Energy Technology Data Exchange (ETDEWEB)
Robers, S.K.; Golden, K.M.; Wollert, D.A.
1995-09-01
The Screening Risk Evaluation (SRE) guidance document is a set of guidelines provided for the uniform implementation of SREs performed on decontamination and decommissioning (D&D) facilities. Although this method has been developed for D&D facilities, it can be used for transition (EM-60) facilities as well. The SRE guidance produces screening risk scores reflecting levels of risk through the use of risk ranking indices. Five types of possible risk are calculated from the SRE: current releases, worker exposures, future releases, physical hazards, and criticality. The Current Release Index (CRI) calculates the current risk to human health and the environment, exterior to the building, from ongoing or probable releases within a one-year time period. The Worker Exposure Index (WEI) calculates the current risk to workers, occupants and visitors inside contaminated D&D facilities due to contaminant exposure. The Future Release Index (FRI) calculates the hypothetical risk of future releases of contaminants, after one year, to human health and the environment. The Physical Hazards Index (PHI) calculates the risks to human health due to factors other than contaminants. Criticality is approached as a modifying factor to the entire SRE, due to the fact that criticality issues are strictly regulated under DOE. Screening risk results are tabulated in matrix form, and a Total Risk is calculated (as a weighted equation) to produce a score on which to base early action recommendations. Other recommendations from the screening risk scores are made based either on individual index scores or on reweighted Total Risk calculations. All recommendations based on the SRE are made based on a combination of screening risk scores, decision drivers, and other considerations, as determined on a project-by-project basis.
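The weighted Total Risk combination described above can be sketched in a few lines. The index values and weights below are hypothetical; the guidance document defines its own weighting scheme:

```python
def total_risk(index_scores: dict, weights: dict) -> float:
    """Weighted combination of screening risk indices into a Total Risk score."""
    return sum(weights[name] * score for name, score in index_scores.items())

# Hypothetical screening scores for one facility (CRI, WEI, FRI, PHI),
# combined with equal weights purely for illustration.
scores = {"CRI": 3.0, "WEI": 2.0, "FRI": 4.0, "PHI": 1.0}
weights = {"CRI": 0.25, "WEI": 0.25, "FRI": 0.25, "PHI": 0.25}
facility_total = total_risk(scores, weights)
```

Reweighting, as the guidance mentions, just means re-running the same combination with a different `weights` mapping and comparing the resulting scores.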
Practical astronomy with your calculator
Duffett-Smith, Peter
1989-01-01
Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr
Risk Analysis in Road Tunnels – Most Important Risk Indicators
DEFF Research Database (Denmark)
Berchtold, Florian; Knaust, Christian; Thöns, Sebastian
2016-01-01
the effects and highlights the most important risk indicators with the aim to support further developments in risk analysis. Therefore, a system model of a road tunnel was developed to determine the risk measures. The system model can be divided into three parts: the fire part connected to the fire model Fire...... Dynamics Simulator (FDS); the evacuation part connected to the evacuation model FDS+Evac; and the frequency part connected to a model to calculate the frequency of fires. This study shows that the parts of the system model (and their most important risk indicators) affect the risk measures in the following......, further research can focus on these most important risk indicators with the aim to optimise risk analysis....
Prenatal radiation exposure. Dose calculation; Praenatale Strahlenexposition. Dosisermittlung
Energy Technology Data Exchange (ETDEWEB)
Scharwaechter, C.; Schwartz, C.A.; Haage, P. [University Hospital Witten/Herdecke, Wuppertal (Germany). Dept. of Diagnostic and Interventional Radiology; Roeser, A. [University Hospital Witten/Herdecke, Wuppertal (Germany). Dept. of Radiotherapy and Radio-Oncology
2015-05-15
The unborn child requires special protection. In this context, the indication for an X-ray examination must be checked critically. If radiation of the lower abdomen including the uterus cannot then be avoided, the examination should be postponed until the end of pregnancy, or alternative examination techniques should be considered. Under certain circumstances, either accidentally or in unavoidable cases after a thorough risk assessment, radiation exposure of the unborn may take place. In some of these cases an expert radiation hygiene consultation may be required. This consultation should cover the expected risks for the unborn, while not unduly alarming the mother or the involved medical staff. For the risk assessment in case of an in-utero X-ray exposure, deterministic damages with a defined threshold dose are distinguished from stochastic damages without a definable threshold dose. The occurrence of deterministic damages depends on the dose and the developmental stage of the unborn at the time of radiation. To calculate the risks of an in-utero radiation exposure, a three-stage concept is commonly applied. Depending on the amount of radiation, the radiation dose is either estimated, roughly calculated using standard tables or, in critical cases, accurately calculated based on the individual event. The complexity of the calculation thereby increases from stage to stage. An estimation based on stage one is easily feasible, whereas calculations based on stages two and especially three are more complex and often need to be carried out by specialists. This article demonstrates in detail the risks for the unborn child pertaining to its developmental phase and explains the three-stage concept as an evaluation scheme. It should be noted that all risk estimations are subject to considerable uncertainties.
Directory of Open Access Journals (Sweden)
Nicolas Eckert, Éric Parent, Mohamed Naaim et Didier Richard
2010-09-01
Full Text Available While the main zones in which avalanches occur are fairly well known, it is more difficult to predict the precise characteristics of these extreme events. After a survey of the traditional methods used in avalanche engineering to predetermine extreme events and compute risks, the authors present new methods combining statistics and dynamical flow models, which allow a more rigorous use of the notions of return period and risk. For natural hazard management in mountainous regions, design values and return periods are often used in a questionable manner. This article aims at reviewing the existing methods for predetermination and risk computations in the case of snow avalanches. After recalling the classical engineering methods, the recently developed statistical-dynamical approaches are introduced. They allow rigorously defining the return period as a one-to-one mapping of the runout distance and evaluating all reference scenarios corresponding to the chosen design values. Finally, as soon as hazard consequences are quantified as a function of its magnitude, a decisional approach can be used for hazard zoning and the design of defence structures. The proposed framework is illustrated with a case study from the French avalanche database.
Transfer Area Mechanical Handling Calculation
Energy Technology Data Exchange (ETDEWEB)
B. Dianda
2004-06-23
This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only, to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use
DEFF Research Database (Denmark)
Pedersen, Liselotte; Rasmussen, Kirsten; Elsass, Peter
2010-01-01
International research suggests that using formalized risk assessment methods may improve the predictive validity of professionals' predictions of risk of future violence. This study presents data on forensic psychiatric patients discharged from a forensic unit in Denmark in 2001-2002 (n=107). All patients were assessed for risk of future violence utilizing a structured professional judgment model: the Historical-Clinical-Risk Management-20 (HCR-20) violence risk assessment scheme. After a follow-up period of 5.6 years, recidivism outcomes were obtained from the Danish National Crime...... predictive of violent recidivism compared to static items. In sum, the findings support the use of structured professional judgment models of risk assessment and in particular the HCR-20 violence risk assessment scheme. Findings regarding the importance of the (clinical) structured final risk judgment......
Directory of Open Access Journals (Sweden)
Mogens Steffensen
2013-05-01
Full Text Available Research in insurance and finance has always intersected, although the two were originally, and are generally, viewed as separate disciplines. Insurance is about transferring risks between parties such that the burdens of risks are borne by those who can bear them. This makes insurance transactions a beneficial activity for society. It calls for the detection, modelling, valuation, and control of risks. One of the main sources of control is diversification of risks, and in that respect it becomes an issue in itself to clarify the diversifiability of risks. However, many diversifiable risks are not, by nature or by contract design, separable from non-diversifiable risks, which are, on the other hand, sometimes traded in financial markets and sometimes not. A key observation is that the economic risk came before the insurance contract: Mother Earth destroys and kills incidentally and mercilessly, but the uncertainty of economic consequences can be more or less cleverly distributed by the introduction of an insurance market.
AN ANALYTICAL SOLUTION FOR CALCULATING THE INITIATION OF SEDIMENT MOTION
Institute of Scientific and Technical Information of China (English)
Thomas LUCKNER; Ulrich ZANKE
2007-01-01
This paper presents an analytical solution for calculating the initiation of sediment motion and the risk of river bed movement. It thus deals with a fundamental problem in sediment transport, for which no complete analytical solution has yet been found. The analytical solution presented here is based on the forces acting on a single grain at the state of initiation of sediment motion. The previous procedures for calculating the initiation of sediment motion are complemented by an innovative combination of optical surface measurement technology for determining geometrical parameters and their statistical derivation, as well as a novel approach for determining the turbulence effects of velocity fluctuations. These two aspects, and the comparison of the solution functions presented here with the well-known data and functions of different authors, are what mainly distinguish the presented solution model from previous approaches. The defined values of the required geometrical parameters are based on hydraulic laboratory tests with spheres. Within these limitations, the derived solution functions permit the calculation of the effective critical transport parameters of a single grain, the calculation of averaged critical parameters for describing the state of initiation of sediment motion on the river bed, the calculation of the probability density of the effective critical velocity, and the calculation of the risk of river bed movement. The main advantage of the presented model is the closed analytical solution from the equilibrium of forces on a single grain to the solution functions describing the initiation of sediment motion.
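The final step described above, going from a probability density of the effective critical velocity to a risk of bed movement, amounts to an exceedance probability. A minimal sketch assuming, purely for illustration, Gaussian near-bed velocity fluctuations (the paper derives its own distributions; all numbers here are hypothetical):

```python
import math

def risk_of_movement(u_mean: float, u_crit: float, sigma: float) -> float:
    """P(instantaneous velocity > critical velocity) for Gaussian fluctuations."""
    z = (u_crit - u_mean) / sigma
    # Survival function of the standard normal via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Example: mean near-bed velocity 0.28 m/s, critical velocity 0.30 m/s,
# fluctuation standard deviation 0.05 m/s (all values hypothetical)
p_move = risk_of_movement(0.28, 0.30, 0.05)
```

When the mean velocity equals the critical velocity, the sketch gives a movement risk of exactly one half, which is the intuitive boundary case.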
Collision Risk and Damage after Collision
DEFF Research Database (Denmark)
Pedersen, Preben Terndrup; Hansen, Peter Friis; Nielsen, Lars Peter
1996-01-01
of the damage risk is calculated by a numerical procedure. These directly calculated distributions for hull damages are subsequently approximated by analytical expressions suited for probabilistic damage stability calculations, similar to the procedure described in IMO regulation A.265. Numerical results......
COMPARISON OF VALUE AT RISK APPROACHES ON A STOCK PORTFOLIO
Directory of Open Access Journals (Sweden)
Šime Čorkalo
2011-02-01
Full Text Available Value at risk is a risk management tool for measuring and controlling market risks. Through this paper the reader will get to know what value at risk is, how it can be calculated, and what the main characteristics, advantages and disadvantages of value at risk are. The author compares the main approaches to calculating VaR and implements the variance-covariance, historical and bootstrapping approaches on a stock portfolio. Finally, the results of the empirical part are compared and presented using a histogram.
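The two closed-form approaches the author compares can be sketched in a few lines. A minimal illustration, not the paper's implementation: the returns series is synthetic, the confidence level is 95%, and losses are reported as positive numbers.

```python
import statistics

Z_95 = 1.6449  # one-sided 95% quantile of the standard normal distribution

def var_parametric(returns: list, z: float = Z_95) -> float:
    """Variance-covariance VaR: assumes normally distributed returns."""
    mu = statistics.mean(returns)
    sd = statistics.pstdev(returns)
    return -(mu - z * sd)

def var_historical(returns: list, alpha: float = 0.95) -> float:
    """Historical VaR: empirical (1 - alpha) quantile of past returns."""
    ordered = sorted(returns)
    return -ordered[int((1.0 - alpha) * len(ordered))]

# Hypothetical daily returns between -5.0% and +4.9%
returns = [i / 1000.0 for i in range(-50, 50)]
hist_var = var_historical(returns)    # loss not exceeded on 95% of days
param_var = var_parametric(returns)
```

The bootstrapping approach mentioned in the abstract resamples the same historical returns with replacement and averages the resulting historical-VaR estimates, trading the normality assumption for sampling variability.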
Chambers, David W
2010-01-01
Every plan contains risk. To proceed without planning some means of managing that risk is to court failure. The basic logic of risk is explained. It consists in identifying a threshold where some corrective action is necessary, the probability of exceeding that threshold, and the attendant cost should the undesired outcome occur. This is the probable cost of failure. Various risk categories in dentistry are identified, including lack of liquidity; poor quality; equipment or procedure failures; employee slips; competitive environments; new regulations; unreliable suppliers, partners, and patients; and threats to one's reputation. It is prudent to make investments in risk management to the extent that the cost of managing the risk is less than the probable loss due to risk failure and when risk management strategies can be matched to type of risk. Four risk management strategies are discussed: insurance, reducing the probability of failure, reducing the costs of failure, and learning. A risk management accounting of the financial meltdown of October 2008 is provided.
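The "probable cost of failure" logic described above, and the rule that risk-management investment is prudent only while it costs less than the probable loss, can be sketched as follows (all dollar figures and probabilities are hypothetical):

```python
def probable_cost_of_failure(p_exceed_threshold: float, failure_cost: float) -> float:
    """Expected loss: probability of crossing the threshold times the attendant cost."""
    return p_exceed_threshold * failure_cost

def worth_managing(management_cost: float,
                   p_exceed_threshold: float,
                   failure_cost: float) -> bool:
    """Invest only while managing the risk costs less than the probable loss."""
    return management_cost < probable_cost_of_failure(p_exceed_threshold, failure_cost)

# Hypothetical practice: a 10% chance of a $50,000 liability in a given year
expected_loss = probable_cost_of_failure(0.10, 50_000.0)
insure = worth_managing(2_000.0, 0.10, 50_000.0)  # a $2,000 premium clears the bar
```

Each of the four strategies discussed (insurance, reducing the probability, reducing the cost, learning) acts on one of the two inputs to this expected-loss product.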
MFTF-B performance calculations
Energy Technology Data Exchange (ETDEWEB)
Thomassen, K.I.; Jong, R.A.
1982-12-06
In this report we document the operating scenario models and calculations as they exist and comment on those aspects of the models where performance is sensitive to the assumptions that are made. We also focus on areas where improvements need to be made in the mathematical descriptions of phenomena, work which is in progress. To illustrate the process of calculating performance, and to be very specific in our documentation, part 2 of this report contains the complete equations and sequence of calculations used to determine parameters for the MARS mode of operation in MFTF-B. Values for all variables for a particular set of input parameters are also given there. The point design so described is typical, but should be viewed as a snapshot in time of our ongoing estimations and predictions of performance.
Insertion device calculations with mathematica
Energy Technology Data Exchange (ETDEWEB)
Carr, R. [Stanford Synchrotron Radiation Lab., CA (United States); Lidia, S. [Univ. of California, Davis, CA (United States)
1995-02-01
The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL{reg_sign}, MatLab{reg_sign}, and Mathematica{reg_sign} in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators and general radiation calculations for undulators.
The Collective Practice of Calculation
DEFF Research Database (Denmark)
Schrøder, Ida
The calculation of costs plays an increasingly large role in the decision-making processes of public sector human service organizations. This has brought scholars of management accounting to investigate the relationship between caring professions and demands to make economic entities of the service...... on the idea that professions are hybrids by introducing the notion of qualculation as an entry point to investigate decision-making in child protection work as an extreme case of calculating on the basis of other elements than quantitative numbers. The analysis reveals that it takes both calculation...... and judgement to reach decisions to invest in social services. The line is not drawn between the two, but between the material arrangements that make decisions possible. This implies that the insisting on qualitatively based decisions gives the professionals agency to collectively engage in practical......
Institute of Scientific and Technical Information of China (English)
孙可可; 陈进; 金菊良; 郦建强; 许继军; 费振宇
2014-01-01
Based on the relation curve of seasonal drought frequency and drought loss proposed in a previous study under four different irrigation levels, the quantitative relationship between seasonal drought frequency and drought loss was calculated to construct the drought loss risk curve, considering the changes of actual drought resistance ability with drought frequency. First, the ratio of water supply to water demand in periods of drought was defined as the drought resistance index. The relation curve between the drought resistance index and inflow frequency, and the relation curve between drought frequency and the guaranteed rate of drought intensity, were established respectively; then, using the guaranteed rate of drought intensity to express inflow frequency, the one-to-one relationship between the drought resistance index and the drought frequency of each drought process was obtained. Drought frequency can then be calculated using a Copula function; the crop yield was simulated and the yield loss rate under drought was computed using the Environmental Policy Integrated Climate model, and relations were set up among drought frequency, irrigation levels and drought loss rate. Finally, since the drought resistance index can express irrigation levels, the relation curve between drought frequency and drought loss rate under actual drought resistance ability conditions can be estimated, and this can be regarded as the agricultural drought loss risk curve. Applying the above calculation method in Zhuzhou City of Hunan Province, the results show that the relation curves of drought frequency and drought loss rate of early rice during summer (May to July) under actual drought resistance ability conditions basically follow a semi-logarithmic function. In comparison with historical crop yield losses from drought, when 2-year, 5-year and 10-year return droughts take place, the relative errors of the actual survey results and theoretical
Friction and wear calculation methods
Kragelsky, I V; Kombalov, V S
1981-01-01
Friction and Wear: Calculation Methods provides an introduction to the main theories of a new branch of mechanics known as "contact interaction of solids in relative motion." This branch is closely bound up with other sciences, especially physics and chemistry. The book analyzes the nature of friction and wear, and some theoretical relationships that link the characteristics of the processes and the properties of the contacting bodies essential for practical application of the theories in calculating friction forces and wear values. The effect of the environment on friction and wear is a
Multifragmentation calculated with relativistic forces
Feldmeier, H; Papp, G
1995-01-01
A saturating Hamiltonian is presented in a relativistically covariant formalism. The interaction is described by scalar and vector mesons, with coupling strengths adjusted to nuclear matter. No explicit density dependence is assumed. The Hamiltonian is applied in a QMD calculation to determine the fragment distribution in O + Br collisions at different energies (50-200 MeV/u) to test the applicability of the model at low energies. The results are compared with experiment and with previous non-relativistic calculations. PACS: 25.70Mn, 25.75.+r
Molecular calculations with B functions
Steinborn, E O; Ema, I; López, R; Ramírez, G
1998-01-01
A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals, and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules.
Use of risk aversion in risk acceptance criteria
Energy Technology Data Exchange (ETDEWEB)
Griesmeyer, J. M.; Simpson, M.; Okrent, D.
1980-06-01
Quantitative risk acceptance criteria for technological systems must be both justifiable, based upon societal values and objectives, and workable in the sense that compliance is possible and can be demonstrated in a straightforward manner. Societal values have frequently been assessed using recorded accident statistics on a wide range of human activities, assuming that the statistics in some way reflect societal preferences, or by psychometric surveys concerning perceptions and evaluations of risk. Both methods indicate a societal aversion to risk: e.g., many small accidents killing a total of 100 people are preferred over one large accident in which 100 lives are lost. Some of the implications of incorporating risk aversion in acceptance criteria are discussed. Calculated risks of various technological systems are converted to expected social costs using various risk aversion factors. The uncertainties in these assessments are also discussed.
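The risk-aversion weighting described in this abstract can be illustrated with a toy calculation. The power-law form and the exponent value below are illustrative assumptions, not values taken from the paper:

```python
def perceived_social_cost(fatalities, frequency, alpha=1.2):
    """Expected social cost of an accident class with risk aversion.

    A risk-neutral measure weights an accident class by frequency * N.
    A risk-averse measure raises the consequence N to a power alpha > 1,
    so one large accident "costs" more than many small accidents with
    the same expected number of fatalities.  alpha = 1.2 is an
    illustrative assumption.
    """
    return frequency * fatalities ** alpha

# 100 accidents of 1 fatality per year vs. one 100-fatality accident per year:
many_small = perceived_social_cost(fatalities=1, frequency=100)   # 100.0
one_large = perceived_social_cost(fatalities=100, frequency=1)    # ~251.2
```

With alpha = 1 the two cases would be weighted equally; any alpha above 1 penalizes the single large accident, which is the societal preference the abstract describes.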
Ab Initio Calculations of Oxosulfatovanadates
DEFF Research Database (Denmark)
Frøberg, Torben; Johansen, Helge
1996-01-01
Restricted Hartree-Fock and multi-configurational self-consistent-field calculations together with secondorder perturbation theory have been used to study the geometry, the electron density, and the electronicspectrum of (VO2SO4)-. A bidentate sulphate attachment to vanadium was found to be stable...
Dead reckoning calculating without instruments
Doerfler, Ronald W
1993-01-01
No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner
ITER Port Interspace Pressure Calculations
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan J [ORNL; Van Hove, Walter A [ORNL
2016-01-01
The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.
Calculations for cosmic axion detection
Krauss, L.; Moody, J.; Wilczek, F.; Morris, D. E.
1985-01-01
Calculations are presented, using properly normalized couplings and masses for Dine-Fischler-Srednicki axions, of power rates and signal temperatures for axion-photon conversion in microwave cavities. The importance of the galactic-halo axion line shape is emphasized. Spin-coupled detection as an alternative to magnetic-field-coupled detection is mentioned.
Theoretical Calculation of MMF's Bandwidth
Institute of Scientific and Technical Information of China (English)
LI Xiao-fu; JIANG De-sheng; YU Hai-hu
2004-01-01
The difference between over-filled launch bandwidth (OFL BW) and restricted mode launch bandwidth (RML BW) is described. A theoretical model is established to calculate the OFL BW of graded-index multimode fiber (GI-MMF), and the result is useful in guiding the modification of the manufacturing method.
Data Acquisition and Flux Calculations
DEFF Research Database (Denmark)
Rebmann, C.; Kolle, O.; Heinesch, B.
2012-01-01
In this chapter, the basic theory and the procedures used to obtain turbulent fluxes of energy, mass, and momentum with the eddy covariance technique will be detailed. This includes a description of data acquisition, pretreatment of high-frequency data and flux calculation....
Cognitive Reflection Versus Calculation in Decision Making
Directory of Open Access Journals (Sweden)
Aleksandr eSinayev
2015-05-01
Scores on the three-item Cognitive Reflection Test (CRT have been linked with dual-system theory and normative decision making (Frederick, 2005. In particular, the CRT is thought to measure monitoring of System 1 intuitions such that, if cognitive reflection is high enough, intuitive errors will be detected and the problem will be solved. However, CRT items also require numeric ability to be answered correctly and it is unclear how much numeric ability vs. cognitive reflection contributes to better decision making. In two studies, CRT responses were used to calculate Cognitive Reflection and numeric ability; a numeracy scale was also administered. Numeric ability, measured on the CRT or the numeracy scale, accounted for the CRT’s ability to predict more normative decisions (a subscale of decision-making competence, incentivized measures of impatient and risk-averse choice, and self-reported financial outcomes; Cognitive Reflection contributed no independent predictive power. Results were similar whether the two abilities were modeled (Study 1 or calculated using proportions (Studies 1 and 2. These findings demonstrate numeric ability as a robust predictor of superior decision making across multiple tasks and outcomes. They also indicate that correlations of decision performance with the CRT are insufficient evidence to implicate overriding intuitions in the decision-making biases and outcomes we examined. Numeric ability appears to be the key mechanism instead.
Energy Technology Data Exchange (ETDEWEB)
Park, Jong Min; Park, So Yeon; Kim, Jung In; Kim, Jin Ho [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Wu, Hong Gyun [Dept. of Radiation Oncology, Seoul National University College of Medicine, Seoul (Korea, Republic of)
2015-10-15
Since those organs are small in volume, dose calculation for those organs seems to be more susceptible to the calculation grid size in the treatment planning system (TPS). Moreover, since they are highly radio-sensitive organs, especially the eye lens, they should be considered carefully in radiotherapy. On the other hand, in the treatment of head and neck (H and N) cancer or brain tumor, which generally involves radiation exposure of the eye lens and optic apparatus, intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT) techniques are frequently used because of the proximity of various radio-sensitive normal organs to the target volumes. Since IMRT and VMAT can deliver the prescription dose to target volumes while minimizing dose to nearby organs at risk (OARs) by generating steep dose gradients near the target volumes, a high dose gradient sometimes occurs near or at the eye lenses and optic apparatus. In this case, the effect of dose calculation resolution on the accuracy of the calculated dose to the eye lens and optic apparatus might be significant. Therefore, the effect of dose calculation grid size on the accuracy of calculated doses for each eye lens and optic apparatus was investigated in this study. If an inappropriate calculation resolution is applied for dose calculation of the eye lens and optic apparatus, considerable errors can occur due to the volume averaging effect in high dose gradient regions.
Institute of Scientific and Technical Information of China (English)
Xiaoyun MO; Jieming ZHOU; Hui OU; Xiangqun YANG
2013-01-01
Given a new double-Markov risk model DM = (μ, Q, v, H; Y, Z) and a double-Markov risk process U = {U(t), t ≥ 0}, the ruin or survival problem is addressed. Equations satisfied by the survival probability and formulas for calculating it are obtained, together with recursion formulas for the survival probability and analytic expressions for the recursion terms. The conclusions are expressed through the Q matrix of one Markov chain and the transition probabilities of the other Markov chain.
CONTRIBUTION FOR MINING ATMOSPHERE CALCULATION
Directory of Open Access Journals (Sweden)
Franica Trojanović
1989-12-01
Humid air is an unavoidable feature of the mining atmosphere, which plays a significant role in defining the climate conditions as well as the permitted circumstances for normal mining work. Saturated humid air prevents heat conduction from the human body by means of evaporation. Consequently, it is of primary interest in mining practice to establish the relative air humidity, either by direct or indirect methods. The percentage of water in the surrounding air may be determined by various procedures, including tables, diagrams or particular calculations, where each technique has its specific advantages and disadvantages. The classical calculation is done according to Sprung's formula, in which case the partial steam pressure should also be taken from the steam table. A new method without the use of diagrams or tables, established on the functional relation of pressure and temperature on the saturation line, is presented here for the first time (the paper is published in Croatian).
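As a self-contained illustration of the classical route mentioned above, Sprung's psychrometric formula can be combined with a Magnus-type approximation of the saturation vapour pressure. The paper itself takes partial steam pressures from steam tables; the Magnus constants and the psychrometer coefficient 6.6e-4 1/K below are standard textbook values used here as assumptions:

```python
import math

def saturation_vapour_pressure(t_c):
    """Saturation vapour pressure in hPa, Magnus approximation
    (standard textbook constants, not taken from the paper)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity_sprung(t_dry, t_wet, pressure_hpa=1013.25):
    """Relative humidity (%) from dry- and wet-bulb temperatures.

    Sprung's formula: the actual vapour pressure is the saturation
    pressure at the wet-bulb temperature minus a psychrometric
    correction proportional to the wet-bulb depression.
    """
    e = saturation_vapour_pressure(t_wet) - 6.6e-4 * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e / saturation_vapour_pressure(t_dry)

# Dry bulb 25 C, wet bulb 20 C at standard pressure: roughly 60-65 % RH
rh = relative_humidity_sprung(25.0, 20.0)
```

When dry- and wet-bulb temperatures coincide, the correction vanishes and the formula returns 100 %, as it should for saturated air.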
RISK LOAN PORTFOLIO OPTIMIZATION MODEL BASED ON CVAR RISK MEASURE
Directory of Open Access Journals (Sweden)
Ming-Chang LEE
2015-07-01
In order to achieve commercial banks' objectives of liquidity, safety and profitability, loan portfolio optimization decisions based on risk analysis support the rational allocation of assets. Risk analysis and asset allocation are key technologies of banking and risk management. The aim of this paper is to build a loan portfolio optimization model based on risk analysis. By constraining the loan portfolio rate of return with Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), the optimization decision model reflects the bank's risk tolerance and gives the bank direct control of its potential loss. The paper analyzes a general risk management model applied to portfolio problems with VaR and CVaR risk measures using the Lagrangian algorithm, and solves the resulting problem by matrix operations. This formulation makes it easy to see that the portfolio problem with VaR and CVaR risk measures is a hyperbola in mean-standard deviation space, and the proposed method is straightforward to compute.
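The two risk measures named in this abstract can be sketched with a minimal historical-simulation estimator. This is a generic illustration of VaR and CVaR on a sample of losses, not the paper's Lagrangian or matrix solution method:

```python
def var_cvar(losses, alpha=0.95):
    """Historical-simulation VaR and CVaR at confidence level alpha.

    VaR is the empirical alpha-quantile of the losses; CVaR is the
    average loss at or beyond VaR, i.e. the expected loss in the tail.
    """
    ordered = sorted(losses)
    idx = min(int(alpha * len(ordered)), len(ordered) - 1)
    var = ordered[idx]
    tail = [x for x in ordered if x >= var]
    cvar = sum(tail) / len(tail)
    return var, cvar

# For losses 1..100 at alpha = 0.75: VaR = 76, CVaR = 88.0
var, cvar = var_cvar(list(range(1, 101)), alpha=0.75)
```

CVaR is always at least as large as VaR, which is why it gives the tighter control of potential losses mentioned above.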
Archimedes' calculations of square roots
Davies, E B
2011-01-01
We reconsider Archimedes' evaluations of several square roots in 'Measurement of a Circle'. We show that several methods proposed over the last century or so for his evaluations fail one or more criteria of plausibility. We also provide internal evidence that he probably used an interpolation technique. The conclusions are relevant to the precise calculations by which he obtained upper and lower bounds on pi.
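An interpolation technique of the kind discussed can be sketched with exact rational arithmetic. The scheme below (a chord interpolation on x² combined with bisection) is one plausible modern illustration, not a claim about Archimedes' actual procedure:

```python
from fractions import Fraction

def sqrt_bounds(n, lo, hi, steps=12):
    """Tighten rational bounds lo < sqrt(n) < hi.

    Each pass first moves one bound to where the chord of x**2 between
    the current bounds crosses n (linear interpolation), then halves
    the interval by bisection.  Exact Fraction arithmetic keeps the
    bounds rigorous at every step.
    """
    lo, hi = Fraction(lo), Fraction(hi)
    assert lo * lo < n < hi * hi
    for _ in range(steps):
        # interpolation step
        mid = lo + (n - lo * lo) * (hi - lo) / (hi * hi - lo * lo)
        if mid * mid < n:
            lo = mid
        else:
            hi = mid
        # bisection step guarantees the interval at least halves
        m = (lo + hi) / 2
        if m * m < n:
            lo = m
        else:
            hi = m
    return lo, hi

lo, hi = sqrt_bounds(3, 1, 2)  # rational bounds on sqrt(3)
```

For comparison, Archimedes' own bounds on √3 in 'Measurement of a Circle' were 265/153 and 1351/780.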
Parallel plasma fluid turbulence calculations
Energy Technology Data Exchange (ETDEWEB)
Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.
1994-12-31
The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center`s CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.
Directory of Open Access Journals (Sweden)
Ross Cagan
2015-08-01
I entered the science field because I imagined that scientists were society's “professional risk takers”, that they like surfing out on the edge. I understood that a lot of science – perhaps even most science – has to be a solid exploration of partly understood phenomena. But any science that confronts a difficult problem has to start with risk. Most people are at least a bit suspicious of risk, and scientists such as myself are no exception. Recently, risk-taking has been under attack financially, but this Editorial is not about that. I am writing about the long view and the messages we send to our trainees. I am Senior Associate Dean of the graduate school at Mount Sinai and have had the privilege to discuss these issues with the next generation of scientists, for whom I care very deeply. Are we preparing you to embrace risk?
Calculation and Updating of Reliability Parameters in Probabilistic Safety Assessment
Zubair, Muhammad; Zhang, Zhijian; Khan, Salah Ud Din
2011-02-01
The internal events of a nuclear power plant are complex and include equipment maintenance, equipment damage, etc. These events affect the probability of the current risk level of the system as well as the reliability parameter values of the equipment, so they serve as an important basis for systematic analysis and calculation. This paper presents a method for calculating reliability parameters and updating them. The method is based on the binomial likelihood function and its conjugate beta distribution; Bayes' theorem is used to update the parameters. To implement the proposed method, a computer program was designed that helps estimate the reliability parameters.
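The conjugate beta-binomial update described above can be written down directly. The Jeffreys prior in the example is an illustrative choice, not one prescribed by the paper:

```python
def update_beta(prior_a, prior_b, failures, demands):
    """Bayesian update of a failure probability with a Beta prior.

    With a binomial likelihood (k failures in n demands), the
    Beta(a, b) prior is conjugate, so the posterior is simply
    Beta(a + k, b + n - k).  Returns the posterior parameters and
    the posterior mean failure probability.
    """
    post_a = prior_a + failures
    post_b = prior_b + demands - failures
    mean = post_a / (post_a + post_b)
    return post_a, post_b, mean

# Jeffreys prior Beta(0.5, 0.5) updated with 2 failures in 100 demands:
a, b, p = update_beta(0.5, 0.5, failures=2, demands=100)
```

As evidence accumulates, the posterior mean moves from the prior toward the observed failure fraction, which is exactly the updating behaviour needed when maintenance and damage events change the plant's risk picture.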
Calculating Outcrossing Rates used in Decision Support Systems for Ships
DEFF Research Database (Denmark)
Nielsen, Ulrik Dam
2008-01-01
Onboard decision support systems (DSS) are used to increase the operational safety of ships. Ideally, DSS can estimate - in the statistical sense - future ship responses on a time scale of the order of 1-3 hours taking into account speed and course changes. The calculations depend on both...... operational and environmental parameters that are known only in the statistical sense. The present paper suggests a procedure to incorporate random variables and associated uncertainties in calculations of outcrossing rates, which are the basis for risk-based DSS. The procedure is based on parallel system...
AGING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2004-09-10
The purpose of this design calculation is to revise and update the previous criticality calculation for the Aging Facility (documented in BSC 2004a). This design calculation will also demonstrate and ensure that the storage and aging operations to be performed in the Aging Facility meet the criticality safety design criteria in the "Project Design Criteria Document" (Doraswamy 2004, Section 4.9.2.2), and the functional nuclear criticality safety requirement described in the "SNF Aging System Description Document" (BSC [Bechtel SAIC Company] 2004f, p. 3-12). The scope of this design calculation covers the systems and processes for aging commercial spent nuclear fuel (SNF) and staging Department of Energy (DOE) SNF/High-Level Waste (HLW) prior to its placement in the final waste package (WP) (BSC 2004f, p. 1-1). Aging commercial SNF is a thermal management strategy, while staging DOE SNF/HLW will make loading of WPs more efficient (note that aging DOE SNF/HLW is not needed since these wastes are not expected to exceed the thermal limits for emplacement) (BSC 2004f, p. 1-2). The description of the changes in this revised document is as follows: (1) Include DOE SNF/HLW in addition to commercial SNF per the current "SNF Aging System Description Document" (BSC 2004f). (2) Update the evaluation of Category 1 and 2 event sequences for the Aging Facility as identified in the "Categorization of Event Sequences for License Application" (BSC 2004c, Section 7). (3) Further evaluate the design and criticality controls required for a storage/aging cask, referred to as MGR Site-specific Cask (MSC), to accommodate commercial fuel outside the content specification in the Certificate of Compliance for the existing NRC-certified storage casks. In addition, evaluate the design required for the MSC that will accommodate DOE SNF/HLW. This design calculation will achieve the objective of providing the
Calculation of gas turbine characteristic
Mamaev, B. I.; Murashko, V. L.
2016-04-01
The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated in a wide range of mode parameters using the method in which analytical dependences provide high accuracy for the calculated flow output angle and different types of gas dynamic losses are determined with account of the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement of results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. The account of the growing flow output angle due to the positive angle of incidence for decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop at the stages, and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.
Rate calculation with colored noise
Bartsch, Thomas; Benito, R M; Borondo, F
2016-01-01
The usual identification of reactive trajectories for the calculation of reaction rates requires very time-consuming simulations, particularly if the environment presents memory effects. In this paper, we develop a new method that permits the identification of reactive trajectories in a system under the action of a stochastic colored driving. This method is based on the perturbative computation of the invariant structures that act as separatrices for reactivity. Furthermore, using this perturbative scheme, we have obtained a formally exact expression for the reaction rate in multidimensional systems coupled to colored noisy environments.
Electronics reliability calculation and design
Dummer, Geoffrey W A; Hiller, N
1966-01-01
Electronics Reliability-Calculation and Design provides an introduction to the fundamental concepts of reliability. The increasing complexity of electronic equipment has made problems in designing and manufacturing a reliable product more and more difficult. Specific techniques have been developed that enable designers to integrate reliability into their products, and reliability has become a science in its own right. The book begins with a discussion of basic mathematical and statistical concepts, including arithmetic mean, frequency distribution, median and mode, scatter or dispersion of mea
Band calculation of lonsdaleite Ge
Chen, Pin-Shiang; Fan, Sheng-Ting; Lan, Huang-Siang; Liu, Chee Wee
2017-01-01
The band structure of Ge in the lonsdaleite phase is calculated using first principles. Lonsdaleite Ge has a direct band gap at the Γ point. For the conduction band, the Γ valley is anisotropic with the low transverse effective mass on the hexagonal plane and the large longitudinal effective mass along the c axis. For the valence band, both heavy-hole and light-hole effective masses are anisotropic at the Γ point. The in-plane electron effective mass also becomes anisotropic under uniaxial tensile strain. The strain response of the heavy-hole mass is opposite to the light hole.
Semiclassical calculation of decay rates
Bessa, A; Fraga, E S
2008-01-01
Several relevant aspects of quantum-field processes can be well described by semiclassical methods. In particular, the knowledge of non-trivial classical solutions of the field equations, and the thermal and quantum fluctuations around them, provides non-perturbative information about the theory. In this work, we discuss the calculation of the one-loop effective action from the semiclassical viewpoint. We intend to use this formalism to obtain an accurate expression for the decay rate of non-static metastable states.
Digital calculations of engine cycles
Starkman, E S; Taylor, C Fayette
1964-01-01
Digital Calculations of Engine Cycles is a collection of seven papers which were presented before technical meetings of the Society of Automotive Engineers during 1962 and 1963. The papers cover the spectrum of the subject of engine cycle events, ranging from an examination of composition and properties of the working fluid to simulation of the pressure-time events in the combustion chamber. The volume has been organized to present the material in a logical sequence. The first two chapters are concerned with the equilibrium states of the working fluid. These include the concentrations of var
Calculational Tool for Skin Contamination Dose Assessment
Hill, R L
2002-01-01
A spreadsheet calculational tool was developed to automate the calculations performed for dose assessment of skin contamination. This document reports on the design and testing of the spreadsheet calculational tool.
Calculation of sound propagation in fibrous materials
DEFF Research Database (Denmark)
Tarnow, Viggo
1996-01-01
Calculations of attenuation and velocity of audible sound waves in glass wools are presented. The calculations use only the diameters of fibres and the mass density of glass wools as parameters. The calculations are compared with measurements.
The Management Object in Risk Management Approaches
DEFF Research Database (Denmark)
Christiansen, Ulrik
Using a systematic review of the last 55 years of research within risk management, this paper explores how risk management as a management technology (methodologies, tools and frameworks to mitigate or manage risks) singles out risks as an object for management in order to make action possible. The paper synthesises by developing a framework of how different views on risk management enable and constrain the knowledge about risk and thus frame the possibilities to measure, analyse and calculate uncertainty and risk. Inspired by social studies of finance and accounting, the paper finally develops three propositions that illustrate how the framing of risk establishes a boundary for how managers might understand value creation and the possible future and how this impacts the possible responses to risk.
Recurrence risk for germinal mosaics revisited
van der Meulen, M A; te Meerman, G J
1995-01-01
A formula to calculate the recurrence risk for germline mosaicism, published by Hartl in 1971, has been updated to include marker information. For practical genetic counselling, new, more elaborate tables are given.
Simplified seismic risk analysis
Energy Technology Data Exchange (ETDEWEB)
Pellissetti, Manuel; Klapp, Ulrich [AREVA NP GmbH, Erlangen (Germany)
2011-07-01
Within the context of probabilistic safety analysis (PSA) for nuclear power plants (NPPs), seismic risk assessment has the purpose to demonstrate that the contribution of seismic events to overall risk is not excessive. The most suitable vehicle for seismic risk assessment is a full-scope seismic PSA (SPSA), in which the frequency of core damage due to seismic events is estimated. An alternative method is represented by seismic margin assessment (SMA), which aims at showing sufficient margin between the site-specific safe shutdown earthquake (SSE) and the actual capacity of the plant. Both methods are based on system analysis (fault trees and event trees) and hence require fragility estimates for safety-relevant systems, structures and components (SSCs). If the seismic conditions at a specific site of a plant are not very demanding, then it is reasonable to expect that the risk due to seismic events is low. In such cases, the cost-benefit ratio for performing a full-scale, site-specific SPSA or SMA will be excessive, considering the ultimate objective of seismic risk analysis. Rather, it will be more rational to rely on a less comprehensive analysis, used as a basis for demonstrating that the risk due to seismic events is not excessive. The present paper addresses such a simplified approach to seismic risk assessment, which is used in AREVA to: estimate seismic risk in early design stages; identify needs to extend the design basis; and define a reasonable level of seismic risk analysis. Starting from a conservative estimate of the overall plant capacity, in terms of the HCLPF (High Confidence of Low Probability of Failure), and utilizing a generic value for the variability, the seismic risk is estimated by convolution of the hazard and the fragility curve. Critical importance is attached to the selection of the plant capacity in terms of the HCLPF, without performing extensive fragility calculations of seismically relevant SSCs. A suitable basis
Flow Field Calculations for Afterburner
Institute of Scientific and Technical Information of China (English)
Zhao Jianxing; Liu Quanzhong; et al.
1995-01-01
In this paper a calculation procedure for simulating the combustion flow in the afterburner with the heat shield, flame stabilizer and the contracting nozzle is described and evaluated by comparison with experimental data. The modified two-equation κ-ε model is employed to consider turbulence effects, and the κ-ε-g turbulent combustion model is used to determine the reaction rate. To take into account the influence of heat radiation on the gas temperature distribution, a heat flux model is applied to predict heat flux distributions. The solution domain spanned the entire region between the centerline and the afterburner wall, with the heat shield represented as a blockage to the mesh. The enthalpy equation and the wall boundary of the heat shield require special handling for the two passages in the afterburner. In order to make the computer program suitable for engineering applications, a subregional scheme is developed for calculating flow fields of complex geometries. The computational grids employed are 100×100 and 333×100 (non-uniformly distributed). The numerical results are compared with experimental data. Agreement between predictions and measurements shows that the numerical method and the computational program used in the study are fairly reasonable and appropriate for primary design of the afterburner.
Dietary burden calculations relating to fish metabolism studies.
Schlechtriem, Christian; Pucher, Johannes; Michalski, Britta
2016-03-30
Fish farming is increasingly dependent on plant commodities as a source of feed, leading to an increased risk of pesticide residues in aquaculture diets and consequently their transfer into aquaculture food products. The European pesticide regulation requires fish metabolism and fish feeding studies where residues in fish feed exceed 0.1 mg kg⁻¹ of the total diet (dry weight basis) to enable the setting of appropriate maximum residue levels in fish commodities. Fish dietary burden calculation is therefore an important prerequisite to decide on further experimental testing as part of the consumer risk assessment. In this review, the different aquaculture production systems are compared with regard to their specific feeding practices, and the principles of dietary burden calculation are described.
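A dietary burden calculation of the kind described can be sketched as a weighted sum over feed components. The commodities and residue levels in the example are hypothetical; the 0.1 mg/kg comparison value is the trigger cited in the abstract:

```python
def dietary_burden(feed_items, trigger=0.1):
    """Total pesticide residue of a fish diet, mg/kg dry weight.

    feed_items: list of (residue_mg_per_kg, fraction_of_diet) tuples,
    both on a dry-weight basis.  Returns the total burden and whether
    it exceeds the regulatory trigger for further fish studies.
    """
    burden = sum(residue * fraction for residue, fraction in feed_items)
    return burden, burden > trigger

# Hypothetical diet: 40 % soybean meal at 0.2 mg/kg, 60 % other feed at 0.05 mg/kg
burden, needs_studies = dietary_burden([(0.2, 0.4), (0.05, 0.6)])
```

Here the burden works out to 0.11 mg/kg, just above the 0.1 mg/kg trigger, so fish metabolism and feeding studies would be required under the regulation described.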
47 CFR 1.1623 - Probability calculation.
2010-10-01
... Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be... determine their new intermediate probabilities. (g) Multiply each applicant's probability pursuant...
Critical evaluation of German regulatory specifications for calculating radiological exposure
Energy Technology Data Exchange (ETDEWEB)
Koenig, Claudia; Walther, Clemens [Hannover Univ. (Germany). Inst. of Radioecology; Smeddinck, Ulrich [Technische Univ. Braunschweig (Germany). Inst. of Law
2015-07-01
The assessment of radiological exposure of the public is an issue at the interface between scientific findings, juridical standard setting and political decision-making. The present work revisits the German regulatory specifications for calculating radiological exposure, such as the existing General Administrative Provision (AVV) calculation model for planning and monitoring nuclear facilities. We address the calculation models for the recent risk assessment regarding the final disposal of radioactive waste in Germany. To do so, a two-pronged approach is pursued. One part deals with radiological examinations of the groundwater-soil transfer path of radionuclides into the biosphere. Processes at the so-called geosphere-biosphere interface are examined, especially the migration of I-129 in the unsaturated zone. This is necessary, since the German General Administrative Provision does not yet consider radionuclide transport via groundwater from an underground disposal facility. Data on processes in the vadose zone are especially scarce. Therefore, using I-125 as a tracer, immobilization and mobilization of iodine is investigated in two reference soils from the German Federal Environment Agency. The second part of this study examines how scientific findings, but also measures and activities of stakeholders and concerned parties, influence juridical standard setting, which is necessary for risk management. Risk assessment, which is a scientific task, includes identification and investigation of relevant sources of radiation, possible pathways to humans, and maximum extent and duration of exposure based on dose-response functions. Risk characterization identifies the probability and severity of health effects. These findings have to be communicated to authorities, who have to deal with the risk management. Risk management includes, for instance, taking into account acceptability of the risk, actions to reduce, mitigate, substitute or monitor the hazard, the setting of
Dupuytren diathesis and genetic risk
Dolmans, Guido H; de Bock, Geertruida H; Werker, Paul M
2012-01-01
PURPOSE: Dupuytren disease (DD) is a benign fibrosing disorder of the hand and fingers. Recently, we identified 9 single nucleotide polymorphisms (SNPs) associated with DD in a genome-wide association study. These SNPs can be used to calculate a genetic risk score for DD. The aim of this study was t
DEFF Research Database (Denmark)
Bernatsky, Sasha; Ramsey-Goldman, Rosalind; Labrecque, Jeremy
2013-01-01
OBJECTIVE: To update estimates of cancer risk in SLE relative to the general population. METHODS: A multisite international SLE cohort was linked with regional tumor registries. Standardized incidence ratios (SIRs) were calculated as the ratio of observed to expected cancers. RESULTS: Across 30 c...
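The SIR computation described, observed cancers divided by the number expected from general-population rates, is straightforward to reproduce. A minimal sketch with illustrative counts, using Byar's approximation (a standard choice, not necessarily the one used in this study) for the Poisson confidence limits:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio (observed/expected) with an
    approximate 95% confidence interval via Byar's method applied
    to the observed count."""
    sir = observed / expected
    o = observed
    lower = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / expected
    ou = o + 1  # upper limit uses observed + 1
    upper = ou * (1 - 1 / (9 * ou) + z / (3 * math.sqrt(ou))) ** 3 / expected
    return sir, lower, upper

# Illustrative numbers: 30 observed cancers vs. 20.0 expected
sir, lo95, hi95 = sir_with_ci(observed=30, expected=20.0)
```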
Risk Control of Offshore Installations. A Framework for the Establishment of Risk Indicators
Energy Technology Data Exchange (ETDEWEB)
Oeien, Knut
2001-07-01
Currently, quantitative risk assessments are carried out to analyze the risk level of offshore installations and to evaluate whether or not the risk level is acceptable. By way of the quantitative risk analysis the risk status of a given installation is obtained. However, the risk status is obtained so infrequently that it is inadequate for risk control. It can be compared to economic control with the economic status presented only about every fifth year, which is obviously inadequate. It is important to know the risk status because this may provide an early warning about the need for remedial actions. Without frequent information about the risk status, control of risk cannot be claimed. The main objective of this thesis has been the development of a framework for the establishment of risk indicators. These risk indicators provide a status of the risk level through measuring changes in technical, operational and organizational factors important to risk, and are thus a means to control risk during operation of offshore petroleum installations. The framework consists of a technical methodology using the quantitative risk assessment as a basis, an organizational model, and an organizational quantification methodology. Technical risk indicators are established from the technical methodology, covering the risk factors explicitly included in the quantitative risk assessment. Organizational risk indicators measure changes in the organizational risk factors included in the organizational model but not in the quantitative risk assessment. The organizational model is an extension of the risk model in the quantitative risk assessment. The organizational quantification methodology calculates the effect of the changes measured by the organizational risk indicators. The organizational model may also be applied as a qualitative tool for root cause analysis of incidents (process leaks). Other results are an intermediate-level expert judgment procedure applicable for
Painless causality in defect calculations
Cheung, C; Cheung, Charlotte; Magueijo, Joao
1997-01-01
Topological defects must respect causality, a statement leading to restrictive constraints on the power spectrum of the total cosmological perturbations they induce. Causality constraints have long been known to require the presence of an under-density in the surrounding matter compensating the defect network on large scales. This so-called compensation can never be neglected and significantly complicates calculations in defect scenarios, e.g. computing cosmic microwave background fluctuations. A quick and dirty way to implement the compensation is through the so-called compensation fudge factors. Here we derive the complete photon-baryon-CDM backreaction effects in defect scenarios. The fudge factor comes out as an algebraic identity and so we drop the negative qualifier "fudge". The compensation scale is computed and physically interpreted. Secondary backreaction effects exist, and neglecting them constitutes the well-defined approximation scheme within which one should consider compensation factor calculatio...
Dyscalculia and the Calculating Brain.
Rapin, Isabelle
2016-08-01
Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder like dyslexia, attention-deficit disorder, anxiety disorder, visual and spatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more items. This nonverbal approximate number system extends mostly to single-digit sets, as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers, and school-agers to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate number system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and the overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visual-spatial and visual-verbal networks. It is less strongly left-lateralized than language, with approximate number system activation somewhat more right-sided and exact number and arithmetic activation more left-sided. Maturation and increasing number skill decrease the associated widespread non-numerical brain activations that persist in some individuals with dyscalculia; dyscalculia has no single, universal neurological cause or underlying mechanism in all affected individuals.
Visualization tools for insurance risk processes
Krzysztof Burnecki; Rafal Weron
2006-01-01
This chapter develops on risk processes which, perhaps, are most suitable for computer visualization of all insurance objects. At the same time, risk processes are basic instruments for any non-life actuary – they are vital for calculating the amount of loss that an insurance company may incur.
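A risk (surplus) process of the kind such visualizations are built on can be simulated directly. A minimal sketch of the classical Crámer-Lundberg model, with premiums at a constant rate, Poisson claim arrivals and exponential claim sizes; the function name and all parameter values are illustrative choices, not the chapter's:

```python
import random

def simulate_surplus(u, c, lam, mean_claim, horizon, seed=42):
    """One path of the surplus process U(t) = u + c*t - S(t):
    initial capital u, premium rate c, Poisson(lam) claim arrivals,
    exponential claim sizes with the given mean. Returns the surplus
    recorded at each claim epoch and whether ruin (U < 0) occurred."""
    rng = random.Random(seed)
    t, surplus = 0.0, float(u)
    path = [(t, surplus)]
    while True:
        dt = rng.expovariate(lam)          # waiting time to next claim
        if t + dt > horizon:
            return path, False             # horizon reached, no ruin
        t += dt
        surplus += c * dt - rng.expovariate(1.0 / mean_claim)
        path.append((t, surplus))
        if surplus < 0:
            return path, True              # ruin before the horizon

path, ruined = simulate_surplus(u=10.0, c=5.0, lam=1.0, mean_claim=3.0, horizon=50.0)
```

Plotting `path` as a piecewise-linear line (rising between claims, dropping at each claim) reproduces the familiar surplus-process picture used in non-life actuarial work.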
Factors affecting calculation of L
Ciotola, Mark P.
2001-08-01
A detectable extraterrestrial civilization can be modeled as a series of successive regimes over time, each of which is detectable for a certain proportion of its lifecycle. This methodology can be utilized to produce an estimate for L. Potential components of L include quantity of fossil fuel reserves, solar energy potential, quantity of regimes over time, lifecycle patterns of regimes, proportion of the lifecycle a regime is actually detectable, and downtime between regimes. Relationships between these components provide a means of calculating the lifetime of communicative species in a detectable state, L. An example of how these factors interact is provided, utilizing values that are reasonable given known astronomical data for components such as solar energy potential, while existing knowledge about the terrestrial case is used as a baseline for other components including fossil fuel reserves, quantity of regimes over time, lifecycle patterns of regimes, proportion of the lifecycle a regime is actually detectable, and gaps of time between regimes due to recovery from catastrophic war or resource exhaustion. A range of values is calculated for L when parameters are established for each component, so as to determine the lowest and highest values of L.
roadmap for SETI research at the SETI Institute for the next few decades. Three different approaches were identified. 1) Continue the radio search: build an affordable array incorporating consumer market technologies, expand the search frequency, and increase the target list to 100,000 stars. This array will also serve as a technology demonstration and enable the international radio astronomy community to realize an array that is a hundred times larger and capable (among other things) of searching a million stars. 2) Begin searches for very fast optical pulses from a million stars. 3) As Moore's Law delivers increased computational capacity, build an omni-directional sky survey array capable of detecting strong, transient
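The additive structure the abstract describes, successive regimes each detectable for part of their lifecycle, with downtime contributing nothing, admits a very simple sketch. The regime durations and detectable fractions below are made-up placeholders, not values from the paper:

```python
def estimate_L(regimes):
    """Detectable lifetime L of a civilization modeled as successive
    regimes: each (duration_years, detectable_fraction) pair contributes
    the portion of its lifecycle actually spent in a detectable state.
    Downtime between regimes is simply not counted."""
    return sum(duration * fraction for duration, fraction in regimes)

# Hypothetical low and high scenarios for the same planet
L_low = estimate_L([(200, 0.3), (150, 0.2)])              # pessimistic
L_high = estimate_L([(200, 0.8), (150, 0.6), (400, 0.5)])  # optimistic
```

Varying the per-regime parameters between plausible extremes, as the abstract describes, brackets L between a lowest and a highest value.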
Contradictions Between Risk Management and Sustainable Development
Energy Technology Data Exchange (ETDEWEB)
Olsen, Odd Einar; Langhelle, Oluf; Engen, Ole A. [Univ. of Stavanger (Norway). Dept. of Media, Culture and Social Science
2006-09-15
The aim of this paper is to discuss how risk management, as a methodology and a mindset, influences priorities and decisions concerning sustainable development. Management of risks and hazards often relies on partial analyses with a limited time frame. This may lead to a paradoxical situation where risk management and extended use of risk analysis could hamper long-term sustainable development. The question is: does the use of risk and vulnerability analysis (RaV-analysis) hamper or contribute to sustainable development? Because risk management and assessment have a narrower scope and a limited time perspective based on well-established methodologies, the tangible impacts of risk-reducing measures in a project are easier to calculate than long-term and intangible impacts on global development. Empirical evidence is still scarce, but our preliminary conclusion is that mainstream risk management and assessment is counterproductive to sustainable development.
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Weimin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Katipamula, Srinivas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-07-01
Over the past two years, the Department of Energy's Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has previously also funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
Selfconsistent calculations for hyperdeformed nuclei
Energy Technology Data Exchange (ETDEWEB)
Molique, H.; Dobaczewski, J.; Dudek, J.; Luo, W.D. [Universite Louis Pasteur, Strasbourg (France)
1996-12-31
Properties of the hyperdeformed nuclei in the A ~ 170 mass range are re-examined using the self-consistent Hartree-Fock method with the SOP parametrization. A comparison with previous predictions that were based on a non-selfconsistent approach is made. The existence of "hyperdeformed shell closures" at the proton and neutron numbers Z=70 and N=100 and their very weak dependence on the rotational frequency is suggested; the corresponding single-particle energy gaps are predicted to play a role similar to that of the Z=66 and N=86 gaps in the superdeformed nuclei of the A ~ 150 mass range. Selfconsistent calculations also suggest that the A ~ 170 hyperdeformed structures have negligible mass asymmetry in their shapes. Very importantly for experimental studies, both the fission barriers and the "inner" barriers (that separate the hyperdeformed structures from those with smaller deformations) are predicted to be relatively high, up to a factor of ~2 higher than the corresponding ones in the 152Dy superdeformed nucleus used as a reference.
RTU Comparison Calculator Enhancement Plan
Energy Technology Data Exchange (ETDEWEB)
Miller, James D.; Wang, Weimin; Katipamula, Srinivas
2014-03-31
Over the past two years, the Department of Energy's Building Technologies Office (BTO) has been investigating ways to increase the operating efficiency of packaged rooftop units (RTUs) in the field: first, by issuing a challenge to RTU manufacturers to increase the integrated energy efficiency ratio (IEER) by 60% over the existing ASHRAE 90.1-2010 standard; second, by evaluating the performance of an advanced RTU controller that reduces energy consumption by over 40%. BTO has previously also funded development of an RTU comparison calculator (RTUCC). RTUCC is a web-based tool that provides the user a way to compare energy and cost savings for two units with different efficiencies. However, the RTUCC currently cannot compare savings associated with either the RTU Challenge unit or the advanced RTU controls retrofit. Therefore, BTO has asked PNNL to enhance the tool so building owners can compare energy and cost savings associated with this new class of products. This document provides the details of the enhancements required to support estimating energy savings from the use of RTU Challenge units or advanced controls on existing RTUs.
Body Mass Index Genetic Risk Score and Endometrial Cancer Risk.
Directory of Open Access Journals (Sweden)
Jennifer Prescott
Genome-wide association studies (GWAS) have identified common variants that predispose individuals to a higher body mass index (BMI), an independent risk factor for endometrial cancer. Composite genotype risk scores (GRS) based on the joint effect of published BMI risk loci were used to explore whether endometrial cancer shares a genetic background with obesity. Genotype and risk factor data were available on 3,376 endometrial cancer cases and 3,867 control participants of European ancestry from the Epidemiology of Endometrial Cancer Consortium GWAS. A BMI GRS was calculated by summing the number of BMI risk alleles at 97 independent loci. For exploratory analyses, additional GRSs were based on subsets of risk loci within putative etiologic BMI pathways. The BMI GRS was statistically significantly associated with endometrial cancer risk (P = 0.002). For every 10 BMI risk alleles a woman had a 13% increased endometrial cancer risk (95% CI: 4%, 22%). However, after adjusting for BMI, the BMI GRS was no longer associated with risk (per 10 BMI risk alleles OR = 0.99, 95% CI: 0.91, 1.07; P = 0.78). Heterogeneity by BMI did not reach statistical significance (P = 0.06), and no effect modification was noted by age, GWAS stage, study design or between studies (P≥0.58). In exploratory analyses, the GRS defined by variants at loci containing monogenic obesity syndrome genes was associated with reduced endometrial cancer risk independent of BMI (per BMI risk allele OR = 0.92, 95% CI: 0.88, 0.96; P = 2.1 x 10(-5)). Possessing a large number of BMI risk alleles does not increase endometrial cancer risk above that conferred by excess body weight among women of European descent. Thus, the GRS based on all currently established BMI loci does not provide added value independent of BMI. Future studies are required to validate the unexpected observed relation between monogenic obesity syndrome genetic variants and endometrial cancer risk.
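The GRS construction used in such studies, summing risk-allele counts across loci (here 97), optionally weighting each by its published per-allele effect, can be sketched as follows; the genotypes and weights below are invented for illustration:

```python
def genetic_risk_score(allele_counts, weights=None):
    """Genetic risk score from per-locus risk-allele counts (0, 1 or 2).
    Unweighted: the plain sum across loci, as in the BMI GRS described
    above. Weighted: each count multiplied by its per-allele effect."""
    if weights is None:
        return float(sum(allele_counts))
    return sum(g * w for g, w in zip(allele_counts, weights))

# Hypothetical individual typed at five loci (the study used 97)
grs = genetic_risk_score([0, 1, 2, 2, 1])

# The reported unadjusted effect (~13% higher risk per 10 risk alleles)
# implies an odds ratio of roughly 1.13 ** (delta_alleles / 10) between
# two women whose scores differ by delta_alleles.
```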
DEFF Research Database (Denmark)
Wirtanen, Gun Linnea; Salo, Satu
2016-01-01
This chapter on biofilm risks deals with biofilm formation of pathogenic microbes, sampling and detection methods, biofilm removal, and prevention of biofilm formation. Several common pathogens produce sticky and/or slimy structures in which the cells are embedded, that is, biofilms, on various s...
Peterson, Laurel M; Helweg-Larsen, Marie; Volpp, Kevin G; Kimmel, Stephen E
2012-01-01
Risk biases such as comparative optimism (thinking one is better off than similar others) and risk inaccuracy (misestimating one's risk compared to one's calculated risk) for health outcomes are common. Little research has investigated racial or socioeconomic differences in these risk biases. Results from a survey of individuals with poorly controlled hypertension (N=813) indicated that participants showed (1) comparative optimism for heart attack risk by underestimating their heart attack risk compared to similar others, and (2) risk inaccuracy by overestimating their heart attack risk compared to their calculated heart attack risk. More highly educated participants were more comparatively optimistic because they rated their personal risk as lower; education was not related to risk inaccuracy. Neither race nor the federal poverty level was related to risk biases. Worry partially mediated the relationship between education and personal risk. Results are discussed as they relate to the existing literature on risk perception.
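The two biases measured here are simple differences between risk estimates; a sketch using the sign conventions implied by the abstract, with invented percentage values:

```python
def risk_biases(personal_pct, similar_others_pct, calculated_pct):
    """Comparative optimism: perceived personal risk minus perceived risk
    of similar others (negative = 'better off than others').
    Risk inaccuracy: perceived personal risk minus model-calculated risk
    (positive = overestimation of one's own risk)."""
    return (personal_pct - similar_others_pct,
            personal_pct - calculated_pct)

# Hypothetical participant: sees own heart attack risk as 20%, others'
# as 30%, while the risk model calculates 12%.
optimism, inaccuracy = risk_biases(20.0, 30.0, 12.0)
comparatively_optimistic = optimism < 0   # underestimates vs. others
overestimates_own_risk = inaccuracy > 0   # overestimates vs. model
```

This participant would show both patterns reported in the study: comparative optimism relative to similar others, yet overestimation relative to the calculated risk.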
Algorithms for the Computation of Debris Risks
Matney, Mark
2017-01-01
Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry, including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.
SIMULATION OF COLLECTIVE RISK MODEL
Directory of Open Access Journals (Sweden)
Viera Pacáková
2007-12-01
The article focuses on providing brief theoretical definitions of the basic terms and methods of modeling and simulation of insurance risks in non-life insurance, by means of mathematical and statistical methods using statistical software. While risk assessment of an insurance company in connection with its solvency is a rather complex and comprehensive problem, its solution starts with statistical modeling of the number and amount of individual claims. Successful solution of these fundamental problems enables solving crucial problems of insurance such as modeling and simulation of collective risk, premium and reinsurance premium calculation, estimation of the probability of ruin, etc. The article also presents some essential ideas underlying Monte Carlo methods and their applications to modeling of insurance risk. The problem to be solved is to find the probability distribution of the collective risk in a non-life insurance portfolio. Simulation of the compound distribution function of the aggregate claim amount can be carried out if the distribution functions of the claim number process and the claim size are assumed given. Monte Carlo simulation is a suitable method to confirm the results of other methods and for the treatment of catastrophic claims, when small collectives are studied. Analysis of insurance risks using risk theory is an important part of the Solvency II project. Risk theory is the analysis of stochastic features of the non-life insurance process. The field of application of risk theory has grown rapidly, and there is a need to develop the theory into a form suitable for practical purposes and to demonstrate its application. Modern computer simulation techniques open up a wide field of practical applications for risk theory concepts, without requiring restrictive assumptions and sophisticated mathematics. This article presents some comparisons of the traditional actuarial methods and simulation methods for the collective risk model.
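The compound-distribution simulation described, drawing a claim count from the claim number process and summing that many draws from the claim size distribution, can be sketched as follows. The Poisson/lognormal pairing and all parameter values are illustrative assumptions, not the article's specific model:

```python
import math
import random

def poisson_sample(rng, lam):
    """Poisson draw via Knuth's multiplication method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_aggregate_claims(n_sims, freq_mean, sev_mu, sev_sigma, seed=1):
    """Monte Carlo sample of the compound sum S = X_1 + ... + X_N with
    N ~ Poisson(freq_mean) and lognormal(sev_mu, sev_sigma) claim sizes.
    The empirical distribution of the returned totals approximates the
    compound distribution of the aggregate claim amount."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        n_claims = poisson_sample(rng, freq_mean)
        totals.append(sum(rng.lognormvariate(sev_mu, sev_sigma)
                          for _ in range(n_claims)))
    return totals

totals = simulate_aggregate_claims(2000, freq_mean=5.0, sev_mu=0.0, sev_sigma=0.5)
```

Quantiles of `totals` then give rough premium or ruin-probability estimates; in practice one would also compare against analytic recursions (e.g. Panjer's) as the article suggests confirming simulation against other methods.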
Rücker, Viktoria; Keil, Ulrich; Fitzgerald, Anthony P; Malzahn, Uwe; Prugger, Christof; Ertl, Georg; Heuschmann, Peter U; Neuhauser, Hannelore
2016-01-01
Estimation of absolute risk of cardiovascular disease (CVD), preferably with population-specific risk charts, has become a cornerstone of CVD primary prevention. Regular recalibration of risk charts may be necessary due to decreasing CVD rates and CVD risk factor levels. The SCORE risk charts for fatal CVD risk assessment were first calibrated for Germany with 1998 risk factor level data and 1999 mortality statistics. We present an update of these risk charts based on the SCORE methodology, including estimates of relative risks from SCORE, risk factor levels from the German Health Interview and Examination Survey for Adults 2008–11 (DEGS1) and official mortality statistics from 2012. Competing risks methods were applied and estimates were independently validated. Updated risk charts were calculated based on cholesterol, smoking, systolic blood pressure risk factor levels, sex and 5-year age-groups. The absolute 10-year risk estimates of fatal CVD were lower according to the updated risk charts compared to the first calibration for Germany. In a nationwide sample of 3062 adults aged 40–65 years free of major CVD from DEGS1, the mean 10-year risk of fatal CVD estimated by the updated charts was lower by 29%, and the estimated proportion of high-risk people (10-year risk ≥ 5%) by 50%, compared to the older risk charts. This recalibration shows a need for regular updates of risk charts according to changes in mortality and risk factor levels in order to sustain the identification of people with a high CVD risk. PMID:27612145
[Medical insurance estimation of risks].
Dunér, H
1975-11-01
The purpose of insurance medicine is to make a prognostic estimate of medical risk factors in persons who apply for life, health, or accident insurance. Established risk groups with a calculated average mortality and morbidity form the basis for premium rates and insurance terms. In most cases the applicant is accepted for insurance after a self-assessment of his health. Only around one per cent of applications are refused, but there are cases in which the premium is raised, temporarily or permanently. It is often a matter of rough estimation, since knowledge of the long-term prognosis for many diseases is incomplete. The insurance companies' rules for risk estimation are revised at intervals of three or four years. The estimation of risk as regards life insurance has been gradually liberalised, while the medical conditions for health insurance have become stricter owing to an increase in the claims rate.
2011-12-21
... ratio approach to calculate the specific risk add-on. Under Basel III: A global regulatory framework for more resilient banks and banking systems (Basel III), published by the BCBS in December 2010, and... Basel Committee on Banking Supervision (BCBS) for calculating the standard specific risk...
Mirjana M. Ilic; Veselin Avdalovic; Milica D. Obadovic
2011-01-01
The literature on risk management in insurance generally treats insurance risk and insurance company risk separately; however, such a separate treatment of risk excludes a third type of risk, which is defined here: the risk of incorrectly calculating insurance risk, which causes uncertainties and disruptions in the operations of an insurance company and in its concept of risk management arising from its activities. This paper presents a Model developed for insurance risk manage...
Explosion Calculations of SN1987A
Wooden, Diane H.; Morrison, David (Technical Monitor)
1994-01-01
Explosion calculations of SN1987A generate pictures of Rayleigh-Taylor fingers of radioactive Ni-56 which are boosted to velocities of several thousand km/s. From the KAO observations of the mid-IR iron lines, a picture of the iron in the ejecta emerges which is consistent with the "frothy iron fingers" having expanded to fill about 50% of the metal-rich volume of the ejecta. The ratio of the nickel line intensities yields a high ionization fraction of greater than or equal to 0.9 in the volume associated with the iron-group elements at day 415, before dust condenses in the ejecta. From the KAO observations of the dust's thermal emission, it is deduced that when the grains condense their infrared radiation is trapped, their apparent opacity is gray, and they have a surface area filling factor of about 50%. The dust emission from SN1987A is featureless: no 9.7 micrometer silicate feature, nor PAH features, nor dust emission features of any kind are seen at any time. The total dust opacity increases with time even though the surface area filling factor and the dust/gas ratio remain constant. This suggests that the dust forms along coherent structures which can maintain their radial line-of-sight opacities, i.e., along fat fingers. The coincidence of the filling factor of the dust and the filling factor of the iron strongly suggests that the dust condenses within the iron, and therefore the dust is iron-rich. It only takes approximately 4 x 10(exp -4) solar mass of dust for the ejecta to be optically thick out to approximately 100 micrometers; a lower limit of 4 x 10(exp -4) solar mass of condensed grains exists in the metal-rich volume, but much more dust could be present. The episode of dust formation started at about 530 days and proceeded rapidly, so that by 600 days 45% of the bolometric luminosity was being emitted in the IR; by 775 days, 86% of the bolometric luminosity was being reradiated by the dust. Measurements of the bolometric luminosity of SN1987A from
Lyman, Gary H; Dale, David C; Legg, Jason C; Abella, Esteban; Morrow, Phuong Khanh; Whittaker, Sadie; Crawford, Jeffrey
2015-08-01
This study evaluated the correlation between the risk of febrile neutropenia (FN) estimated by physicians and the risk of severe neutropenia or FN predicted by a validated multivariate model in patients with nonmyeloid malignancies receiving chemotherapy. Before patient enrollment, physician and site characteristics were recorded, and physicians self-reported the FN risk at which they would typically consider granulocyte colony-stimulating factor (G-CSF) primary prophylaxis (FN risk intervention threshold). For each patient, physicians electronically recorded their estimated FN risk, orders for G-CSF primary prophylaxis (yes/no), and patient characteristics for model predictions. Correlations between physician-assessed FN risk and model-predicted risk (primary endpoints) and between physician-assessed FN risk and G-CSF orders were calculated. Overall, 124 community-based oncologists registered; 944 patients initiating chemotherapy with intermediate FN risk enrolled. Median physician-assessed FN risk over all chemotherapy cycles was 20.0%, and median model-predicted risk was 17.9%; the correlation was 0.249 (95% CI, 0.179-0.316). The correlation between physician-assessed FN risk and subsequent orders for G-CSF primary prophylaxis (n = 634) was 0.313 (95% CI, 0.135-0.472). Among patients with a physician-assessed FN risk ≥ 20%, 14% did not receive G-CSF orders. G-CSF was not ordered for 16% of patients at or above their physician's self-reported FN risk intervention threshold (median, 20.0%) and was ordered for 21% below the threshold. Physician-assessed FN risk and model-predicted risk correlated weakly; however, there was moderate correlation between physician-assessed FN risk and orders for G-CSF primary prophylaxis. Further research and education on FN risk factors and appropriate G-CSF use are needed.
Breast cancer risk prediction using a clinical risk model and polygenic risk score.
Shieh, Yiwey; Hu, Donglei; Ma, Lin; Huntsman, Scott; Gard, Charlotte C; Leung, Jessica W T; Tice, Jeffrey A; Vachon, Celine M; Cummings, Steven R; Kerlikowske, Karla; Ziv, Elad
2016-10-01
Breast cancer risk assessment can inform the use of screening and prevention modalities. We investigated the performance of the Breast Cancer Surveillance Consortium (BCSC) risk model in combination with a polygenic risk score (PRS) comprising 83 single nucleotide polymorphisms identified from genome-wide association studies. We conducted a nested case-control study of 486 cases and 495 matched controls within a screening cohort. The PRS was calculated using a Bayesian approach. The contributions of the PRS and variables in the BCSC model to breast cancer risk were tested using conditional logistic regression. Discriminatory accuracy of the models was compared using the area under the receiver operating characteristic curve (AUROC). Increasing quartiles of the PRS were positively associated with breast cancer risk, with OR 2.54 (95 % CI 1.69-3.82) for breast cancer in the highest versus lowest quartile. In a multivariable model, the PRS, family history, and breast density remained strong risk factors. The AUROC of the PRS was 0.60 (95 % CI 0.57-0.64), and an Asian-specific PRS had AUROC 0.64 (95 % CI 0.53-0.74). A combined model including the BCSC risk factors and PRS had better discrimination than the BCSC model (AUROC 0.65 versus 0.62, p = 0.01). The BCSC-PRS model classified 18 % of cases as high-risk (5-year risk ≥3 %), compared with 7 % using the BCSC model. The PRS improved discrimination of the BCSC risk model and classified more cases as high-risk. Further consideration of the PRS's role in decision-making around screening and prevention strategies is merited.
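The AUROC figures quoted above are equivalent to the Mann-Whitney probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch of that computation, on toy scores rather than study data:

```python
def auroc(case_scores, control_scores):
    """AUROC via the Mann-Whitney formulation: the probability that a
    randomly chosen case has a higher risk score than a randomly chosen
    control, counting ties as one half."""
    pairs = len(case_scores) * len(control_scores)
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / pairs

# Toy PRS values for three cases and three controls
auc = auroc([2.1, 3.0, 2.8], [1.0, 2.1, 1.5])
```

For model comparison as in the study (combined vs. BCSC-only), the two AUROCs would additionally need a paired significance test such as DeLong's, which this sketch does not implement.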
A toolbox for rockfall Quantitative Risk Assessment
Agliardi, F.; Mavrouli, O.; Schubert, M.; Corominas, J.; Crosta, G. B.; Faber, M. H.; Frattini, P.; Narasimhan, H.
2012-04-01
Rockfall Quantitative Risk Analysis for mitigation design and implementation requires evaluating the probability of rockfall events, the probability and intensity of impacts on structures (elements at risk and countermeasures), their vulnerability, and the related expected costs for different scenarios. A sound theoretical framework has been developed in recent years for both spatially-distributed and local (i.e. single element at risk) analyses. Nevertheless, the practical application of existing methodologies remains challenging, due to difficulties in collecting the required data and to the lack of simple, dedicated analysis tools. In order to fill this gap, specific tools have been developed in the form of Excel spreadsheets within the framework of the SafeLand EU project. These tools can be used by stakeholders, practitioners and other interested parties for the quantitative calculation of rockfall risk through its key components (probabilities, vulnerability, loss), using combinations of deterministic and probabilistic approaches. Three tools have been developed, namely: QuRAR (by UNIMIB), VulBlock (by UPC), and RiskNow-Falling Rocks (by ETH Zurich). QuRAR implements a spatially distributed, quantitative assessment methodology of rockfall risk for individual buildings or structures in a multi-building context (urban area). Risk is calculated in terms of expected annual cost, through the evaluation of rockfall event probability, propagation and impact probability (by 3D numerical modelling of rockfall trajectories), and empirical vulnerability for different risk protection scenarios. VulBlock allows a detailed, analytical calculation of the vulnerability of reinforced concrete frame buildings to rockfalls and of the related fragility curves, as functions of both block velocity and block size. The calculated vulnerability can be integrated in other methodologies/procedures based on the risk equation, by incorporating the uncertainty of the impact location of the rock
Selection of Dispersivity in Groundwater Risk Assessment
Institute of Scientific and Technical Information of China (English)
武晓峰; 唐杰
2004-01-01
The Domenico model is used in combination with ASTM E 1739 in a Tier 2 risk assessment of groundwater sites contaminated with chlorinated organic solvents to predict potential contaminant concentrations at the point of exposure (POE) down-gradient from the source. Knowledge of the dispersivity parameters is necessary for carrying out this calculation. A constant longitudinal dispersivity of 10 m is often used in analytical and numerical calculations. However, because of the scale effect of dispersion, two other main approaches are also in common use. From the viewpoint of the conservative principle in risk assessment, it is necessary to determine which dispersivity data will give a higher predicted concentration, corresponding to a more conservative risk calculation. Generally, it is considered that a smaller dispersivity leads to a higher predicted concentration. This assumption is correct when dispersion is the only natural attenuation factor. However, degradation of commonly encountered chlorinated organic solvents under natural environmental conditions has been widely reported. Calculations given in this paper for several representative cases show that this general assumption about the influence of dispersivity on the predicted concentration is not always correct when a degradation term is included in the calculation. To give a conservative risk calculation, the scale effect of dispersion should be considered. Calculations also show that the dispersivity parameters need to be determined by considering the POE distance from the source, the groundwater velocity, and the degradation rate of the contaminant.
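The interplay of dispersivity and degradation can be illustrated with the steady-state centerline form of a Domenico-type solution with first-order decay (transverse spreading neglected for clarity; the parameter values below are illustrative, not from the paper):

```python
import math

def domenico_centerline(C0, x, v, alpha_x, lam):
    """Steady-state centerline concentration at distance x from a source
    of concentration C0, for seepage velocity v, longitudinal dispersivity
    alpha_x, and first-order decay rate lam (consistent units)."""
    arg = (x / (2.0 * alpha_x)) * (1.0 - math.sqrt(1.0 + 4.0 * lam * alpha_x / v))
    return C0 * math.exp(arg)

# With decay present, a LARGER dispersivity can predict a HIGHER
# concentration, contrary to the usual dispersion-only intuition:
c_small = domenico_centerline(1.0, 100.0, 0.1, 1.0, 1e-3)   # alpha_x = 1 m
c_large = domenico_centerline(1.0, 100.0, 0.1, 10.0, 1e-3)  # alpha_x = 10 m
```

Here `c_large > c_small`, which is the paper's point: the conservative choice of dispersivity depends on the degradation rate, velocity, and POE distance.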
A New Approach for Calculating Vacuum Susceptibility
Institute of Scientific and Technical Information of China (English)
宗红石; 平加伦; 顾建中
2004-01-01
Based on the Dyson-Schwinger approach, we propose a new method for calculating vacuum susceptibilities. As an example, the vector vacuum susceptibility is calculated. A comparison with the results of the previous approaches is presented.
Dynamics Calculation of Travel Wave Tube
Institute of Scientific and Technical Information of China (English)
(none)
2011-01-01
During the dynamics calculation of the traveling wave tube, we must obtain the field map in the tube. The field map is affected not only by the beam loading but also by the attenuation coefficient. The calculation of the attenuation coefficient
Reliability of Calculated Low-Density Lipoprotein Cholesterol.
Meeusen, Jeffrey W; Snozek, Christine L; Baumann, Nikola A; Jaffe, Allan S; Saenger, Amy K
2015-08-15
Aggressive low-density lipoprotein cholesterol (LDL-C)-lowering strategies are recommended for prevention of cardiovascular events in high-risk populations. Guidelines recommend a 30% to 50% reduction in at-risk patients even when LDL-C concentrations are between 70 and 130 mg/dl (1.8 to 3.4 mmol/L). However, calculation of LDL-C by the Friedewald equation is the primary laboratory method for routine LDL-C measurement. We compared the accuracy and reproducibility of calculated LDL-C <130 mg/dl (3.4 mmol/L) to LDL-C measured by β quantification (considered the gold standard method) in 15,917 patients with fasting triglyceride concentrations <400 mg/dl (4.5 mmol/L). Both variation and bias of calculated LDL-C increased at lower values of measured LDL-C. The 95% confidence intervals for a calculated LDL-C of 70 mg/dl (1.8 mmol/L) and 30 mg/dl (0.8 mmol/L) were 60 to 86 mg/dl (1.6 to 2.2 mmol/L) and 24 to 60 mg/dl (0.6 to 1.6 mmol/L), respectively. Previous recommendations have emphasized the requirement for a fasting sample with triglycerides <400 mg/dl (4.5 mmol/L) to calculate LDL-C by the Friedewald equation. However, no recommendations have addressed the appropriate lower reportable limit for calculated LDL-C. In conclusion, calculated LDL-C <30 mg/dl (0.8 mmol/L) should not be reported because of significant deviation from the gold standard measured LDL-C results, and caution is advised when using calculated LDL-C values <70 mg/dl (1.8 mmol/L) to make treatment decisions.
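The Friedewald equation referenced above is simple arithmetic in mg/dl units: LDL-C = total cholesterol − HDL-C − triglycerides/5. A minimal sketch, including the validity guard the abstract emphasizes:

```python
def friedewald_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate of LDL-C, all values in mg/dl.
    Valid only for fasting samples with triglycerides < 400 mg/dl;
    per the study above, results < 70 mg/dl warrant caution and
    results < 30 mg/dl should not be reported."""
    if triglycerides >= 400:
        raise ValueError("Friedewald equation invalid for TG >= 400 mg/dl")
    return total_chol - hdl - triglycerides / 5.0

# e.g. TC 200, HDL 50, TG 150 -> LDL-C estimate of 120 mg/dl
ldl = friedewald_ldl(200, 50, 150)
```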
Pressure Vessel Calculations for VVER-440 Reactors
Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.
2003-06-01
Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by the MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements and in most cases fairly good agreement was found.
A Risk Radar driven by Internet of intelligences serving for emergency management in community.
Huang, Chongfu; Wu, Tong; Renn, Ortwin
2016-07-01
Today, most commercial risk radars can only display risks, much like a set of risk matrices. In this paper, we develop the Internet of intelligences (IOI) to drive a risk radar that monitors dynamic risks for emergency management in a community. An IOI scans risks in a community in four stages: collecting information and experience about risks; evaluating risk incidents; verifying; and showing risks. Employing the information diffusion method, we optimize the processing of the collected information for calculating risk values. A specific case demonstrates the reliability and practicability of the risk radar.
2012-03-23
... risk, or the calculation of payments and charges, or that are used for validation or audit of such data... Affordable Care Act; Standards Related to Reinsurance, Risk Corridors and Risk Adjustment; Final Rule; Standards Related to Reinsurance, Risk Corridors and Risk Adjustment AGENCY: Department of Health and...
A general formalism for phase space calculations
Norbury, John W.; Deutchman, Philip A.; Townsend, Lawrence W.; Cucinotta, Francis A.
1988-01-01
General formulas for calculating the interactions of galactic cosmic rays with target nuclei are presented, along with methods for calculating the appropriate normalization volume elements and phase space factors. Particular emphasis is placed on obtaining correct phase space factors for 2- and 3-body final states. Calculations for both Lorentz-invariant and noninvariant phase space are presented.
Status Report of NNLO QCD Calculations
Klasen, M
2005-01-01
We review recent progress in next-to-next-to-leading order (NNLO) perturbative QCD calculations with special emphasis on results ready for phenomenological applications. Important examples are new results on structure functions and jet or Higgs boson production. In addition, we describe new calculational techniques based on twistors and their potential for efficient calculations of multiparticle amplitudes.
Mathematical Creative Activity and the Graphic Calculator
Duda, Janina
2011-01-01
Teaching mathematics using graphic calculators has been an issue of didactic discussions for years. Finding ways in which graphic calculators can enrich the development process of creative activity in mathematically gifted students between the ages of 16-17 is the focus of this article. Research was conducted using graphic calculators with…
Decimals, Denominators, Demons, Calculators, and Connections
Sparrow, Len; Swan, Paul
2005-01-01
The authors provide activities for overcoming some fraction misconceptions using calculators specially designed for learners in primary years. The writers advocate use of the calculator as a way to engage children in thinking about mathematics. By engaging with a calculator as part of mathematics learning, children are learning about and using the…
METHODOLOGICAL PROBLEMS OF PRACTICAL RADIOGENIC RISK ESTIMATIONS
Directory of Open Access Journals (Sweden)
A. Т. Gubin
2014-01-01
Full Text Available Mathematical ratios were established according to the description of the calculation procedure for the values of the nominal risk coefficient given in the ICRP 2007 Recommendations. It is shown that the lifetime radiogenic risk is a linear functional of the distribution of dose in time, with a multiplier that decreases with age. As a consequence, application of the nominal risk coefficient in risk calculations is justified when prolonged exposure is practically evenly distributed in time, and it gives a significant deviation for a single exposure. When using the additive model of radiogenic risk for solid cancers proposed in the UNSCEAR 2006 Report, this factor decreases almost linearly with age, which is convenient for its practical application.
Risk Assessment Study for Storage Explosive
Directory of Open Access Journals (Sweden)
S. S. Azhar
2006-01-01
Full Text Available In Malaysia, the amount of explosives in use has increased rapidly owing to the wide expansion of the quarrying and mining industries. Explosives are usually kept in storage magazines, where safety precautions are given high attention. As the storage of a large quantity of explosives can be hazardous to workers and nearby residents in the event of accidental detonation, a risk assessment study for an explosives storage magazine was carried out at Kimanis Quarry Sdn. Bhd., located in Sabah. The study comprised identification of hazards and failure scenarios and estimation of the frequency of failure occurrence. Possible consequences of failure and the effects of blast waves due to an explosion were evaluated. The risk was estimated in terms of fatalities and eardrum rupture for workers and the public. The average individual voluntary risk of fatality for workers at the quarry is calculated to be 5.75 x 10-6 per person per year, much lower than the acceptable level. The voluntary eardrum rupture risk is calculated to be 3.15 x 10-6 per person per year. No involuntary fatality risk was found, but the involuntary eardrum rupture risk was calculated to be 6.98 x 10-8 per person per year, as given by the Asian Development Bank.
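Per-person-per-year figures like those above come from multiplying an annual event frequency by the conditional probability of the consequence for an exposed individual. A minimal sketch (the numbers are illustrative, not the quarry's data):

```python
def individual_risk_per_year(event_frequency_per_year, p_consequence_given_event):
    """Individual risk (per person per year): annual frequency of the
    hazardous event times the probability of the consequence (e.g.
    fatality or eardrum rupture) for an exposed individual."""
    return event_frequency_per_year * p_consequence_given_event

# e.g. an event expected once per 1,000 years with a 1% chance of
# fatality for a given worker yields 1e-5 per person per year:
risk = individual_risk_per_year(1e-3, 0.01)
```

The computed value is then compared against an acceptability criterion, such as the Asian Development Bank thresholds cited in the study.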
Bank Liquidity Risk: Analysis and Estimates
Directory of Open Access Journals (Sweden)
Meilė Jasienė
2012-12-01
Full Text Available In today’s banking business, liquidity risk and its management are among the most critical elements underlying the stability and security of a bank’s operations, its profit-making and clients’ confidence, as well as many of the decisions the bank makes. Managing liquidity risk in a commercial bank is not new, yet scientific literature has not focused enough on different approaches to liquidity risk management and assessment. Furthermore, models, methodologies and policies for managing liquidity risk in a commercial bank have not been examined in detail either. The goal of this article is to analyse the liquidity risk of commercial banks, as well as the possibilities of managing it, and to build a liquidity risk management model for a commercial bank. The development, assessment and application of the commercial bank liquidity risk management model were based on an analysis of scientific resources, a comparative analysis and mathematical calculations.
The Application of Asymmetric Liquidity Risk Measure in Modelling the Risk of Investment
Directory of Open Access Journals (Sweden)
Garsztka Przemysław
2015-06-01
Full Text Available The article analyses the relationship between investment risk (as measured by the variance or standard deviation of returns) and liquidity risk. The paper presents a method for calculating a new measure of liquidity risk based on the characteristic line. In addition, the impact of liquidity risk on the volatility of daily returns is examined. Dynamic econometric models were used to describe this relationship. It was found that there is an econometric relationship between the proposed liquidity risk measure and the variance of returns.
Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores
DEFF Research Database (Denmark)
Vilhjálmsson, Bjarni J; Yang, Jian; Finucane, Hilary K;
2015-01-01
Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold...
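The standard pruning-and-thresholding approach mentioned above reduces to a weighted sum over the SNPs that survive a p-value cut. A minimal sketch (function and data are illustrative; LD pruning is assumed to have been done upstream):

```python
def polygenic_risk_score(genotypes, weights, pvalues, p_threshold=5e-8):
    """Standard PRS: risk-allele counts (0/1/2) weighted by GWAS effect
    sizes, restricted to variants passing the p-value threshold."""
    return sum(g * w
               for g, w, p in zip(genotypes, weights, pvalues)
               if p <= p_threshold)

# Three variants; only the first and third pass the genome-wide threshold:
score = polygenic_risk_score([0, 1, 2], [0.1, 0.2, 0.3], [1e-9, 0.5, 1e-10])
```

The paper's contribution (LDpred) replaces the hard pruning/threshold step with posterior mean effect sizes that model LD explicitly; the sketch above is only the baseline being improved upon.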
Non-nuclear industries in the Netherlands and radiological risks
Leenhouts HP; Stoop P; Tuinen ST van; LSO
1996-01-01
Concerns a review of the risks imposed on the population by natural radioactive substances from non-nuclear industries (NNI) and an update and improvement of a previous review. The risk calculations with respect to modelling and risk analysis are based on guidelines of the Ministry of Hou
Risk Factors of Entry in Out-of-Home Care
DEFF Research Database (Denmark)
Ejrnæs, Mette; Ejrnæs, Niels Morten; Frederiksen, Signe
2011-01-01
. The mother’s characteristics are more important risk factors than the corresponding risk factors of the father. The results, the applied method and the epidemiological inspired analysis make an opportunity to discuss the central concepts and methods of calculation of statistical association, risk, prediction...... and causal inference in applied sociology and social work....
Verbal risk in communicating risk
Energy Technology Data Exchange (ETDEWEB)
Walters, J.C. [Northern Arizona Univ., Flagstaff, AZ (United States). School of Communication; Reno, H.W. [EG and G Idaho, Inc., Idaho Falls, ID (United States). Idaho National Engineering Lab.
1993-03-01
When persons in the waste management industry have a conversation concerning matters of the industry, the thoughts being communicated are understood among those in the industry. However, when persons in waste management communicate with those outside the industry, communication may suffer simply because of poor practices such as the use of jargon, euphemisms, acronyms, abbreviations, careless language usage, not knowing the audience, and ignoring public perception. This paper deals with ways the waste management industry can communicate risk to the public without obfuscating issues. The waste management industry should feel obligated to communicate certain meanings within specific contexts and, then, if the context changes, should not put forth a new, more convenient meaning for the language already used. Communication of the waste management industry does not have to be provisional. The authors suggest that verbal risks in communicating risk can be reduced significantly or eliminated by following a few basic communication principles. The authors make suggestions and give examples of ways to improve communication with the general public by avoiding or reducing jargon, euphemisms, and acronyms; knowing the audience; avoiding presumptive knowledge held by the audience; and understanding public perception of waste management issues.
Directory of Open Access Journals (Sweden)
Mogens Steffensen
2014-02-01
Full Text Available “What is complicated is not necessarily insightful and what is insightful is not necessarily complicated: Risks welcomes simple manuscripts that contribute with insight, outlook, understanding and overview”—a quote from the first editorial of this journal [1]. Good articles are not characterized by their level of complication but by their level of imagination, innovation, and power of penetration. Creativity sessions and innovative tasks are most elegant and powerful when they are delicately simple. This is why the articles you most remember are not the complicated ones that you struggled to digest, but the simpler ones you enjoyed swallowing.
DEFF Research Database (Denmark)
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... of insurance companies facing losses due to natural disasters, banks seeking protection against huge losses, failures in expensive and sophisticated systems or loss of valuable information in electronic systems. The main difficulty when dealing with this kind of problems is the unavailability of a closed...
Lymphedema Risk Reduction Practices
Cardiovascular risk score in Rheumatoid Arthritis
Wagan, Abrar Ahmed; Mahmud, Tafazzul E Haque; Rasheed, Aflak; Zafar, Zafar Ali; Rehman, Ata ur; Ali, Amjad
2016-01-01
Objective: To determine the 10-year cardiovascular risk score with the QRISK-2 and Framingham risk calculators in Rheumatoid Arthritis (RA) and non-RA subjects and to assess the usefulness of the QRISK-2 and Framingham calculators in both groups. Methods: During the study, 106 RA and 106 non-RA age- and sex-matched participants were enrolled from the outpatient department. Demographic data and questions regarding other study parameters were noted. After 14 hours of fasting, 5 ml of venous blood was drawn for cholesterol and HDL levels; laboratory tests were performed on a COBAS c III (ROCHE). The QRISK-2 and Framingham risk calculators were used to obtain individual 10-year CVD risk scores. Results: The mean age was 45.1±9.5 years in the RA group and 43.7±8.2 years in the non-RA group, with female gender more common. The mean predicted 10-year score with the QRISK-2 calculator was 14.2±17.1% in the RA group and 13.2±19.0% in the non-RA group (p-value 0.122). The 10-year Framingham risk score was 12.9±10.4% in the RA group and 8.9±8.7% in the non-RA group (p-value 0.001). In the RA group, QRISK-2 placed 24.5% and FRS 31.1% of cases in the higher risk category. Agreement between the two calculators was substantial in both groups (Kappa = 0.618 RA group; Kappa = 0.671 non-RA group). Conclusion: The QRISK-2 calculator is more appropriate as it takes RA, ethnicity, CKD, and atrial fibrillation into account in the risk assessment score. PMID:27375684
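The kappa agreement statistics quoted above can be computed from paired risk-category assignments with Cohen's kappa. A minimal sketch (the toy labels in the test are illustrative, not the study's data):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    (here, two risk calculators assigning the same patients to
    risk categories)."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)
```

Values of 0.618 and 0.671 fall in the range conventionally read as "substantial" agreement.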
Morring, Frank, Jr.
2004-01-01
A National Academies panel says the Hubble Space Telescope is too valuable for gambling on a long-shot robotic mission to extend its service life, and urges a shuttle servicing mission instead. Directly contradicting Administrator Sean O'Keefe, who killed a planned fifth shuttle servicing mission to the telescope on grounds it was too dangerous for a human crew in the post-Columbia environment, the expert committee found that upgrades to shuttle safety actually should make it less hazardous to fly to the telescope than it was before Columbia was lost. Risks of a telescope-servicing mission are only marginally greater than those of the planned missions to the International Space Station (ISS) that O'Keefe has authorized, the panel found. After comparing those risks to the dangers inherent in trying to develop a complex space robot in the 39 months remaining in the Hubble's estimated service life, the panel opted for the human mission to save "one of the major achievements of the American space program," in the words of Louis J. Lanzerotti, its chairman.
Fuzzy-probabilistic calculations of water-balance uncertainty
Energy Technology Data Exchange (ETDEWEB)
Faybishenko, B.
2009-10-01
Hydrogeological systems are often characterized by imprecise, vague, inconsistent, incomplete, or subjective information, which may limit the application of conventional stochastic methods in predicting hydrogeologic conditions and associated uncertainty. Instead, predictions and uncertainty analysis can be made using uncertain input parameters expressed as probability boxes, intervals, and fuzzy numbers. The objective of this paper is to present the theory for, and a case study as an application of, the fuzzy-probabilistic approach, combining probability and possibility theory for simulating soil water balance and assessing associated uncertainty in the components of a simple water-balance equation. The application of this approach is demonstrated using calculations with the RAMAS Risk Calc code, to assess the propagation of uncertainty in calculating potential evapotranspiration, actual evapotranspiration, and infiltration in a case study at the Hanford site, Washington, USA. Propagation of uncertainty into the results of water-balance calculations was evaluated by changing the types of models of uncertainty incorporated into various input parameters. The results of these fuzzy-probabilistic calculations are compared to the conventional Monte Carlo simulation approach and estimates from field observations at the Hanford site.
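The simplest building block of such interval/p-box calculations is interval arithmetic on the water-balance terms. A minimal sketch of interval subtraction for infiltration = precipitation − evapotranspiration (the annual values are illustrative, not Hanford data):

```python
def interval_sub(a, b):
    """Subtract interval b from interval a, each given as (lo, hi):
    the result's lower bound pairs a's worst case with b's best case."""
    return (a[0] - b[1], a[1] - b[0])

# Illustrative annual values in mm:
precip = (160.0, 220.0)        # uncertain precipitation
actual_et = (120.0, 200.0)     # uncertain actual evapotranspiration
infiltration = interval_sub(precip, actual_et)   # bounds on infiltration
```

Fuzzy numbers and probability boxes generalize this by attaching membership grades or bounding CDFs to the interval endpoints.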
Institute of Scientific and Technical Information of China (English)
李国明; 任哲; 杨帆
2016-01-01
Errors in calculating the total jacking force, back-wall tilting, and working-face collapse frequently cause failure when manual pipe jacking is used to cross roads on sand foundations. Drawing on engineering examples, this paper discusses a calculation formula for the total jacking force suitable for manual pipe jacking through sand-foundation roads, and proposes specific engineering measures for construction difficulties such as back-wall tilting and working-face collapse, which can serve as a reference for engineering practice. Theoretical calculation and engineering examples lead to the following conclusions: the total jacking force formula adopted by the national standard is suitable for manual pipe jacking; the crossing length should not exceed 26 m when manual pipe jacking crosses a sand-foundation road; and the design of the back wall and the tool pipe is crucial to the success of such construction.
Microscopic Calculations of 240Pu Fission
Energy Technology Data Exchange (ETDEWEB)
Younes, W; Gogny, D
2007-09-11
Hartree-Fock-Bogoliubov calculations have been performed with the Gogny finite-range effective interaction for {sup 240}Pu out to scission, using a new code developed at LLNL. A first set of calculations was performed with constrained quadrupole moment along the path of most probable fission, assuming axial symmetry but allowing for the spontaneous breaking of reflection symmetry of the nucleus. At a quadrupole moment of 345 b, the nucleus was found to spontaneously scission into two fragments. A second set of calculations, with all nuclear moments up to hexadecapole constrained, was performed to approach the scission configuration in a controlled manner. Calculated energies, moments, and representative plots of the total nuclear density are shown. The present calculations serve as a proof-of-principle, a blueprint, and starting-point solutions for a planned series of more comprehensive calculations to map out a large set of scission configurations, and the associated fission-fragment properties.
DEFF Research Database (Denmark)
Bidstrup, Signe Brøker; Kaerlev, Linda; Thulstrup, Ane Marie;
2015-01-01
INTRODUCTION: Our aim was to study the association between pregnant women's referral status for occupational risk assessment, and their risk of preterm delivery (.... Logistic regression was used to calculate odds ratios (OR) with 95% confidence intervals (CI). Calculations were adjusted for the mother's age at delivery, parity, ethnicity, socioeconomic status, smoking, and in supplementary analyses for year of birth. RESULTS: Referred women gave birth to children....../or that the occupational risk assessment and counselling of pregnant women are preventing these selected adverse pregnancy outcomes. FUNDING: The Research Unit at Department of Occupational and Environmental Medicine at Bispebjerg Hospital supported the study financially. TRIAL REGISTRATION: not relevant. The study...
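Odds ratios with 95% confidence intervals, as used in the study above, can be computed for an unadjusted 2x2 exposure-outcome table with Woolf's log method (the adjusted ORs in the study come from logistic regression; the counts below are illustrative, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi
```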
SEISMIC RISK ASSESSMENT OF LEVEES
Directory of Open Access Journals (Sweden)
Dario Rosidi
2007-01-01
Full Text Available A seismic risk assessment procedure for earth embankments and levees is presented. The procedure consists of three major elements: (1) the probability of ground motion at the site, (2) the probability of levee failure given that a level of ground motion has occurred, and (3) the expected loss resulting from the failure. This paper discusses the first two elements of the risk assessment. The third element, which includes economic losses and human casualties, is not presented herein. The ground motions for risk assessment are developed using a probabilistic seismic hazard analysis. A two-dimensional finite element analysis is performed to estimate the dynamic responses of the levee, and the probability of levee failure is calculated using the levee fragility curve. The overall objective of the assessment is to develop an analytical tool for assessing the failure risk and the effectiveness of various levee strengthening alternatives for risk reduction. An example of the procedure, as it applies to a levee built along the perimeter of an island for flood protection and water storage, is presented. Variations in earthquake ground motion and in soil and water conditions at the site are incorporated in the risk assessment. The effects of liquefaction in the foundation soils are also considered.
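Combining elements (1) and (2) is an application of the total probability theorem: the annual failure probability is the hazard-weighted sum of the fragility over ground-motion levels. A minimal discretized sketch (the PGA bins, annual probabilities, and fragility values are illustrative):

```python
def annual_failure_probability(hazard, fragility):
    """Total probability theorem over discrete ground-motion levels:
    sum of P(motion level) * P(failure | motion level)."""
    return sum(p_im * fragility[im] for im, p_im in hazard.items())

# Illustrative PGA bins (g) with annual occurrence probabilities
# and corresponding fragility-curve ordinates:
hazard = {0.1: 1e-2, 0.3: 1e-3, 0.5: 1e-4}
fragility = {0.1: 0.01, 0.3: 0.2, 0.5: 0.7}
p_fail = annual_failure_probability(hazard, fragility)
```

In practice the hazard comes from a probabilistic seismic hazard analysis and the fragility from the finite element response analyses described above.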
Lunar Landing Operational Risk Model
Mattenberger, Chris; Putney, Blake; Rust, Randy; Derkowski, Brian
2010-01-01
Characterizing the risk of spacecraft goes beyond simply modeling equipment reliability. Some portions of the mission require complex interactions between system elements that can lead to failure without an actual hardware fault. Landing risk is currently the least characterized aspect of the Altair lunar lander and appears to result from complex temporal interactions between pilot, sensors, surface characteristics and vehicle capabilities rather than hardware failures. The Lunar Landing Operational Risk Model (LLORM) seeks to provide rapid and flexible quantitative insight into the risks driving the landing event and to gauge sensitivities of the vehicle to changes in system configuration and mission operations. The LLORM takes a Monte Carlo based approach to estimate the operational risk of the Lunar Landing Event and calculates estimates of the risk of Loss of Mission (LOM) - Abort Required and is Successful, Loss of Crew (LOC) - Vehicle Crashes or Cannot Reach Orbit, and Success. The LLORM is meant to be used during the conceptual design phase to inform decision makers transparently of the reliability impacts of design decisions, to identify areas of the design which may require additional robustness, and to aid in the development and flow-down of requirements.
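A Monte Carlo structure like the LLORM's can be sketched as repeated draws through a small event tree ending in Success, LOM, or LOC. This toy model and its probabilities are illustrative assumptions, not Altair data or the actual LLORM logic:

```python
import random

def simulate_landing(n_trials, p_fault, p_abort_ok, seed=0):
    """Toy Monte Carlo of a landing event: a fault forces an abort;
    a successful abort is Loss of Mission (LOM), a failed abort is
    Loss of Crew (LOC); otherwise the landing succeeds."""
    rng = random.Random(seed)                  # seeded for reproducibility
    counts = {"success": 0, "LOM": 0, "LOC": 0}
    for _ in range(n_trials):
        if rng.random() < p_fault:             # a fault occurs during descent
            if rng.random() < p_abort_ok:
                counts["LOM"] += 1             # abort succeeds, mission lost
            else:
                counts["LOC"] += 1             # abort fails, crew lost
        else:
            counts["success"] += 1
    return counts
```

Sensitivity to design changes is gauged by re-running with different branch probabilities, which is the kind of rapid trade-study insight the LLORM is meant to provide.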
Calculation of the Moments of Polygons.
1987-06-01
[The abstract is an OCR-garbled fragment of the report's Fortran listing. The recoverable content appears to be a loop over polygon vertices accumulating the area (AREA), the centroid components (ACENT(1), ACENT(2)), and the second moments (SECMON(1), SECMON(2)) from cross-products of successive vertex coordinates.]
Surface Tension Calculation of Undercooled Alloys
Institute of Scientific and Technical Information of China (English)
(none)
2001-01-01
Based on the Butler equation and thermodynamic data of undercooled alloys extrapolated from those of stable liquid alloys, a method for surface tension calculation of undercooled alloys is proposed. The surface tensions of stable liquid and undercooled Ni-Cu (xNi=0.42) and Ni-Fe (xNi=0.3 and 0.7) alloys are calculated using the STCBE (Surface Tension Calculation based on Butler Equation) program. The agreement between calculated values and experimental data is good, and the temperature dependence of the surface tension remains reasonable down to 150-200 K below the liquidus temperature of the alloys.
The conundrum of calculating carbon footprints
DEFF Research Database (Denmark)
Strobel, Bjarne W.; Erichsen, Anders Christian; Gausset, Quentin
2016-01-01
A pre-condition for reducing global warming is to minimise the emission of greenhouse gasses (GHGs). A common approach to informing people about the link between behaviour and climate change rests on developing GHG calculators that quantify the ‘carbon footprint’ of a product, a sector or an actor....... There is, however, an abundance of GHG calculators that rely on very different premises and give very different estimates of carbon footprints. In this chapter, we compare and analyse the main principles of calculating carbon footprints, and discuss how calculators can inform (or misinform) people who wish...
MATNORM: Calculating NORM using composition matrices
Pruseth, Kamal L.
2009-09-01
This paper discusses the implementation of an entirely new set of formulas to calculate the CIPW norm. MATNORM does not require any sophisticated programming skill and has been developed using Microsoft Excel spreadsheet formulas. These formulas are easy to understand, and a mere knowledge of the if-then-else construct in MS-Excel is sufficient to implement the whole calculation scheme outlined below. The sequence of calculation used here differs from that of the standard CIPW norm calculation, but the results are very similar. The use of MS-Excel macro programming and other high-level programming languages has been deliberately avoided for simplicity.
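The if-then-else allocation style the abstract describes can be sketched as follows. This is an illustrative single step of a CIPW-style norm calculation (the apatite allocation), not MATNORM's actual formulas:

```python
# Illustrative CIPW-style if-then-else allocation step (NOT MATNORM's
# actual scheme): apatite takes all P2O5 plus 10/3 mole of CaO per mole
# of P2O5; the remaining CaO carries forward to later allocation steps.
def allocate_apatite(moles):
    p2o5 = moles.get("P2O5", 0.0)
    cao = moles.get("CaO", 0.0)
    apatite = p2o5                             # provisional apatite = moles of P2O5
    cao_used = min(cao, (10.0 / 3.0) * p2o5)   # CaO consumed by apatite
    return {"apatite": apatite, "CaO": cao - cao_used, "P2O5": 0.0}

residual = allocate_apatite({"CaO": 10.0, "P2O5": 0.3})
```

In a spreadsheet the same step is a chain of `IF(...)` cells; each subsequent normative mineral works on the residual oxide moles.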
Pile Load Capacity – Calculation Methods
Directory of Open Access Journals (Sweden)
Wrana Bogumił
2015-12-01
Full Text Available The article is a review of current problems in foundation pile capacity calculations. It considers the main principles of pile capacity calculation presented in Eurocode 7 and other methods, with adequate explanations. Two main methods are presented: the α-method, used to calculate the short-term load capacity of piles in cohesive soils, and the β-method, used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on CPTu cone penetration test results are presented, as well as the problem of determining pile capacity from static load tests.
Project Risk Management Phases
Claudiu-George BOCEAN
2008-01-01
Risk management is the human activity which integrates recognition of risk, risk assessment, developing strategies to manage it, and mitigation of risk using managerial resources. Notwithstanding the domain of activities where they are conducted, projects often entail risks, and risk management has been widely recognized as a success factor in project management. Following a concept clarification on project risk management, this paper presents a generic list of steps in the risk management proce...
Mlodinow, Alexei S.; Khavanin, Nima; Hume, Keith M.; Simmons, Christopher J.; Weiss, Michael J.; Murphy, Robert X.; Gutowski, Karol A.
2015-01-01
Background: Risk discussion is a central tenet of the dialogue between surgeon and patient. Risk calculators have recently offered a new way to integrate evidence-based practice into the discussion of individualized patient risk and expectation management. Focusing on the comprehensive Tracking Operations and Outcomes for Plastic Surgeons (TOPS) database, we endeavored to add plastic surgical outcomes to the previously developed Breast Reconstruction Risk Assessment (BRA) score. Methods: The TOPS database from 2008 to 2011 was queried for patients undergoing breast reconstruction. Regression models were constructed for the following complications: seroma, dehiscence, surgical site infection (SSI), explantation, flap failure, reoperation, and overall complications. Results: Of 11,992 cases, 4439 met inclusion criteria. Overall complication rate was 15.9%, with rates of 3.4% for seroma, 4.0% for SSI, 6.1% for dehiscence, 3.7% for explantation, 7.0% for flap loss, and 6.4% for reoperation. Individualized risk models were developed with acceptable goodness of fit, accuracy, and internal validity. Distribution of overall complication risk was broad and asymmetric, meaning that the average risk was often a poor estimate of the risk for any given patient. These models were added to the previously developed open-access version of the risk calculator, available at http://www.BRAscore.org. Conclusions: Population-based measures of risk may not accurately reflect risk for many individual patients. In this era of increasing emphasis on evidence-based medicine, we have developed a breast reconstruction risk assessment calculator from the robust TOPS database. The BRA Score tool can aid in individualizing—and quantifying—risk to better inform surgical decision making and better manage patient expectations. PMID:26090295
Financing and risk management of investments in mining sector
Hashemi, Seyedmajid
2013-01-01
ABSTRACT: This study aims to investigate the process of mining investments and to calculate the level of risk to which mining companies are exposed. As a mining firm gets involved in a project, there are many risks to be assessed, including environmental, social and reputational risks. Therefore, the presence of a sustainable development framework in the mining sector helps to consider all dimensions of mining projects in order to mitigate the risk exposure. As undeveloped mineral resour...
Risk assessment and risk transfer from an insurer's point of view
Ebner, G.
2009-04-01
Risk, a word that causes many associations in human minds. Many of us do not like risks. For hundreds of years, insurance has been the most common way to get rid of the financial consequences when risks turn into damages. This article deals with commercial risks and the possibilities of risk transfer, an important task within the field of risk management. For commercial entities it is very important to transfer risks that threaten the competitiveness, or even worse, the existence of a company. In its beginnings, insurance was more or less a bet between merchants and wealthy individuals. Later, mutual societies emerged. Today we see a complex insurance industry with insurers, reinsurers, self-insuring possibilities via captives and much more. This complex system, with all the different ways to deal with risk transfer, requires a professional risk assessment. Risk assessment is based on knowledge about the threatened assets, the likelihood that they will be damaged, the threats, and the possibilities to protect these assets. Assets may be tangible or intangible. Assessing risks is not a precise calculation that delivers a result beyond any doubt, but insurers and insured need a basis on which to fix a premium that both can agree on. This contribution presents a system to assess risks and to find the right risk-transfer premiums.
Dysphonia risk screening protocol
Directory of Open Access Journals (Sweden)
Katia Nemr
2016-03-01
Full Text Available OBJECTIVE: To propose and test the applicability of a dysphonia risk screening protocol with score calculation in individuals with and without dysphonia. METHOD: This descriptive cross-sectional study included 365 individuals (41 children, 142 adult women, 91 adult men and 91 seniors) divided into a dysphonic group and a non-dysphonic group. The protocol consisted of 18 questions and a score was calculated using a 10-cm visual analog scale. The measured value on the visual analog scale was added to the overall score, along with other partial scores. Speech samples allowed for analysis/assessment of the overall degree of vocal deviation and initial definition of the respective groups and, after six months, the separation of the groups was confirmed using an acoustic analysis. RESULTS: The mean total scores were different between the groups in all samples. Values ranged between 37.0 and 57.85 in the dysphonic group and between 12.95 and 19.28 in the non-dysphonic group, with overall means of 46.09 and 15.55, respectively. High sensitivity and specificity were demonstrated when discriminating between the groups with the following cut-off points: 22.50 (children), 29.25 (adult women), 22.75 (adult men), and 27.10 (seniors). CONCLUSION: The protocol demonstrated high sensitivity and specificity in differentiating groups of individuals with and without dysphonia in different sample groups and is thus an effective instrument for use in voice clinics.
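The scoring step described in the abstract can be sketched as follows. The cut-off values are the ones reported above; the partial-score inputs and the rule "at or above the cut-off means at risk" are illustrative assumptions:

```python
# Sketch of the protocol's decision step: questionnaire partial scores
# plus the 10-cm VAS reading give a total, compared with the group-specific
# cut-off from the abstract. Inputs and the >= rule are assumptions.
CUTOFFS = {"children": 22.50, "adult_women": 29.25,
           "adult_men": 22.75, "seniors": 27.10}

def dysphonia_risk(partial_scores, vas_cm, group):
    total = sum(partial_scores) + vas_cm        # VAS value is added to the score
    return total, total >= CUTOFFS[group]       # at/above cut-off -> flagged at risk

total, at_risk = dysphonia_risk([12.0, 8.5, 4.0], 6.2, "adult_women")
```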
Atomic Structure Calculations for Neutral Oxygen
Norah Alonizan; Rabia Qindeel; Nabil Ben Nessib
2016-01-01
Energy levels and oscillator strengths for neutral oxygen have been calculated using the Cowan (CW), SUPERSTRUCTURE (SS), and AUTOSTRUCTURE (AS) atomic structure codes. The results obtained with these atomic codes have been compared with MCHF calculations and experimental values from the National Institute of Standards and Technology (NIST) database.
10 CFR 766.102 - Calculation methodology.
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology....
Calculation of cohesive energy of actinide metals
Institute of Scientific and Technical Information of China (English)
钱存富; 陈秀芳; 余瑞璜; 耿平; 段占强
1997-01-01
According to the empirical electron theory of solids and molecules (EET), an equation for calculating the cohesive energy of actinide metals is given. The cohesive energy of 9 actinide metals with known crystal structure is calculated and agrees with the experimental values on the whole, and the cohesive energy of 6 actinide metals with unknown crystal structure is predicted.
Calculation reliability in vehicle accident reconstruction.
Wach, Wojciech
2016-06-01
The reconstruction of vehicle accidents is subject to assessment in terms of the reliability of a specific system of engineering and technical operations. In the article [26], a formalized concept of the reliability of vehicle accident reconstruction, defined using Bayesian networks, was proposed. The current article focuses on calculation reliability, since that is the most objective section of this model. It is shown that calculation reliability in accident reconstruction is not another form of calculation uncertainty. The calculation reliability is made dependent on modeling reliability, adequacy of the model and relative uncertainty of calculation. All the terms are defined. An example is presented concerning the analytical determination of the collision location of two vehicles on the road in the absence of evidential traces. It has been proved that the reliability of this kind of calculation generally does not exceed 0.65, despite the fact that the calculation uncertainty itself can reach only 0.05. In this example, special attention is paid to the analysis of modeling reliability and calculation uncertainty using sensitivity coefficients and weighted relative uncertainty.
Calculating "g" from Acoustic Doppler Data
Torres, Sebastian; Gonzalez-Espada, Wilson J.
2006-01-01
Traditionally, the Doppler effect for sound is introduced in high school and college physics courses. Students calculate the perceived frequency for several scenarios relating a stationary or moving observer and a stationary or moving sound source. These calculations assume a constant velocity of the observer and/or source. Although seldom…
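One classroom scenario of the kind described above can be sketched as follows. The setup (a source falling from rest below a stationary listener, propagation delay ignored) and all numbers are illustrative assumptions, not taken from the article:

```python
# Sketch: a source falls from rest, receding from a listener above it.
# Received frequency f(t) = f0 * c / (c + g*t); inverting one sample
# recovers g. Propagation delay is ignored for simplicity.
C_SOUND = 343.0  # m/s, speed of sound in air at ~20 C

def received_freq(f0, g, t):
    return f0 * C_SOUND / (C_SOUND + g * t)

def g_from_sample(f0, f, t):
    return C_SOUND * (f0 / f - 1.0) / t

f = received_freq(1000.0, 9.81, 2.0)     # frequency heard 2 s into the fall
g_est = g_from_sample(1000.0, f, 2.0)    # recovers g from that sample
```

With real acoustic Doppler data, fitting several (t, f) samples instead of inverting one would average out measurement noise.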
Efficient Calculation of Earth Penetrating Projectile Trajectories
2006-09-01
Master's thesis: Efficient Calculation of Earth Penetrating Projectile Trajectories, by Daniel F. Youch, Lieutenant Commander, United States Navy (B.S., Temple). Naval Postgraduate School, Monterey, CA 93943-5000, September 2006. Thesis advisor: Joshua Gordis.
Direct calculation of wind turbine tip loss
DEFF Research Database (Denmark)
Wood, D.H.; Okulov, Valery; Bhattacharjee, D.
2016-01-01
…We develop three methods for the direct calculation of the tip loss. The first is the computationally expensive calculation of the velocities induced by the helicoidal wake, which requires the evaluation of infinite sums of products of Bessel functions. The second uses the asymptotic evaluation…
Calculating Electromagnetic Fields Of A Loop Antenna
Schieffer, Mitchell B.
1987-01-01
Approximate field values computed rapidly. The MODEL computer program was developed to calculate the electromagnetic field values of a large loop antenna at all distances to the observation point. The antenna is assumed to be in the x-y plane with its center at the origin of the coordinate system. The program calculates field values in both rectangular and spherical components and also solves for the wave impedance. Written in Microsoft FORTRAN 77.
New tool for standardized collector performance calculations
DEFF Research Database (Denmark)
Perers, Bengt; Kovacs, Peter; Olsson, Marcus;
2011-01-01
A new tool for standardized calculation of solar collector performance has been developed in cooperation between SP Technical Research Institute Sweden, DTU Denmark and SERC Dalarna University. The tool is designed to calculate the annual performance for a number of representative cities in Europe...
Calculation of Temperature Rise in Calorimetry.
Canagaratna, Sebastian G.; Witt, Jerry
1988-01-01
Gives a simple but fuller account of the basis for accurately calculating temperature rise in calorimetry. Points out some misconceptions regarding these calculations. Describes two basic methods, the extrapolation to zero time and the equal area method. Discusses the theoretical basis of each and their underlying assumptions. (CW)
Sniderman, A.D.; Tremblay, A.J.; Graaf, J. de; Couture, P.
2014-01-01
OBJECTIVES: This study tests the validity of the Hattori formula to calculate LDL apoB based on plasma lipids and total apoB. METHODS: In 2178 patients in a tertiary care lipid clinic, LDL apoB calculated as suggested by Hattori et al. was compared to directly measured LDL apoB isolated by ultracent
Investment Return Calculations and Senior School Mathematics
Fitzherbert, Richard M.; Pitt, David G. W.
2010-01-01
The methods for calculating returns on investments are taught to undergraduate level business students. In this paper, the authors demonstrate how such calculations are within the scope of senior school students of mathematics. In providing this demonstration the authors hope to give teachers and students alike an illustration of the power and the…
40 CFR 1065.850 - Calculations.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the...
Teaching Discrete Mathematics with Graphing Calculators.
Masat, Francis E.
Graphing calculator use is often thought of in terms of pre-calculus or continuous topics in mathematics. This paper contains examples and activities that demonstrate useful, interesting, and easy ways to use a graphing calculator with discrete topics. Examples are given for each of the following topics: functions, mathematical induction and…
Using Calculators in Mathematics 12. Student Text.
Rising, Gerald R.; And Others
This student textbook is designed to incorporate programable calculators in grade 12 mathematics. The seven chapters contained in this document are: (1) Using Calculators in Mathematics; (2) Sequences, Series, and Limits; (3) Iteration, Mathematical Induction, and the Binomial Theorem; (4) Applications of the Fundamental Counting Principle; (5)…
46 CFR 154.520 - Piping calculations.
2010-10-01
... 46 Shipping 5 2010-10-01 2010-10-01 false Piping calculations. 154.520 Section 154.520 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) CERTAIN BULK DANGEROUS CARGOES SAFETY STANDARDS... Process Piping Systems § 154.520 Piping calculations. A piping system must be designed to meet...
Data base to compare calculations and observations
Energy Technology Data Exchange (ETDEWEB)
Tichler, J.L.
1985-01-01
Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed. (PSB)
76 FR 71431 - Civil Penalty Calculation Methodology
2011-11-17
... Uniform Fine Assessment (UFA) algorithm, which FMCSA currently uses for calculation of civil penalties. UFA takes into account the statutory penalty factors under 49 U.S.C. 521(b)(2)(D). The evaluation will... will impose a minimum civil penalty that is calculated by UFA. In many cases involving small...
PRIME VALUE METHOD TO PRIORITIZE RISK HANDLING STRATEGIES
Energy Technology Data Exchange (ETDEWEB)
Noller, D
2007-10-31
Funding for implementing risk handling strategies typically is allocated according to either the risk-averse approach (the worst risk first) or the cost-effective approach (the greatest risk reduction per implementation dollar first). This paper introduces a prime value approach in which risk handling strategies are prioritized according to how nearly they meet the goals of the organization that disburses funds for risk handling. The prime value approach factors in the importance of the project in which the risk has been identified, elements of both risk-averse and cost-effective approaches, and the time period in which the risk could happen. This paper also presents a prioritizer spreadsheet, which employs weighted criteria to calculate a relative rank for the handling strategy of each risk evaluated.
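A weighted-criteria prioritizer of the kind the abstract describes can be sketched as follows. The criteria, weights, and ratings are invented for illustration; the paper's actual spreadsheet criteria are not reproduced here:

```python
# Sketch of a weighted-criteria prioritizer in the spirit of the prime
# value approach: project importance, risk reduction per dollar, and
# time proximity each contribute a weighted share of the rank score.
WEIGHTS = {"project_importance": 0.40,
           "risk_reduction_per_dollar": 0.35,
           "time_proximity": 0.25}

def prime_value(scores):
    # scores: criterion -> 0..10 rating for one handling strategy
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

strategies = {
    "fix_now":   {"project_importance": 9, "risk_reduction_per_dollar": 4, "time_proximity": 8},
    "cheap_fix": {"project_importance": 5, "risk_reduction_per_dollar": 9, "time_proximity": 3},
}
ranked = sorted(strategies, key=lambda s: prime_value(strategies[s]), reverse=True)
```

Funding is then disbursed down the ranked list until the budget is exhausted.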
An immunity based network security risk estimation
Institute of Scientific and Technical Information of China (English)
LI Tao
2005-01-01
According to the relationship between the antibody concentration and the pathogen intrusion intensity, here we present an immunity-based model for network security risk estimation (Insre). In Insre, the concepts and formal definitions of self, nonself, antibody, antigen and lymphocyte in the network security domain are given. Then the mathematical models of self-tolerance, clonal selection, the lifecycle of the mature lymphocyte, immune memory and immune surveillance are established. Building upon the above models, a quantitative computation model for network security risk estimation, which is based on the calculation of antibody concentration, is then presented. By using Insre, the types and intensity of network attacks, as well as the risk level of network security, can be calculated quantitatively and in real-time. Our theoretical analysis and experimental results show that Insre is a good solution for real-time risk evaluation of network security.
Heat Calculation of Borehole Heat Exchangers
Directory of Open Access Journals (Sweden)
S. Filatov
2013-01-01
Full Text Available The paper considers a heat calculation method for borehole heat exchangers (BHE) which can be used for designing and optimizing their design values, and which can be included in a comprehensive mathematical model of a heat supply system with a heat pump based on the utilization of low-grade heat from the ground. The developed calculation method is based on reducing the general solution of the heat transfer problem in a BHE, with due account of heat transfer between the downward and upward flows of the heat carrier, to a solution for a boundary condition of one kind on the borehole wall. The method of electrothermal analogy has been used to calculate the thermal resistance, and the shape factors required for calculating the thermal resistance of the borehole filler have been obtained numerically. The paper presents results of heat calculations of various BHE designs in accordance with the proposed method.
Mathematical modelling of risk reduction in reinsurance
Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.
2017-01-01
The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of yield values falling below a certain level. The uncertainty in the return values is conditioned by the use of expert evaluations and preliminary calculations, which result in expected return values and the corresponding risk levels. The proposed method allows for the implementation of computationally simple schemes and algorithms for numerical calculation of the structure of efficient portfolios of reinsurance contracts of a given insurance company.
Perera, Jeevan S.
2013-01-01
A phased approach to the implementation of risk management is necessary. The risk management system will be simple and accessible and will promote communication of information to all relevant stakeholders for optimal resource allocation and risk mitigation. Risk management should be used by all team members to manage risks, not just by risk office personnel. Each group/department is assigned Risk Integrators, who are facilitators for effective risk management. Risks will be managed at the lowest level feasible; only those risks that require coordination or management from above are elevated. Risk-informed decision making should be introduced at all levels of management, providing the necessary checks and balances to ensure that risks are identified and dealt with in a timely manner. Many supporting tools, processes and training must be deployed for effective risk management implementation. Process improvement must be included in the risk processes.
The Calculator of Anti-Alzheimer’s Diet. Macronutrients
Studnicki, Marcin; Woźniak, Grażyna; Stępkowski, Dariusz
2016-01-01
The opinions about optimal proportions of macronutrients in a healthy diet have changed significantly over the last century. At the same time nutritional sciences failed to provide strong evidence backing up any of the variety of views on macronutrient proportions. Herein we present an idea how these proportions can be calculated to find an optimal balance of macronutrients with respect to prevention of Alzheimer’s Disease (AD) and dementia. These calculations are based on our published observation that per capita personal income (PCPI) in the USA correlates with age-adjusted death rates for AD (AADR). We have previously reported that PCPI through the period 1925–2005 correlated with AADR in 2005 in a remarkable, statistically significant oscillatory manner, as shown by changes in the correlation coefficient R (R_original). A question thus arises: what caused the oscillatory behavior of R_original? What historical events in the lives of the 2005 AD victims had shaped their future with AD? Looking for the answers we found that, considering changes in the per capita availability of macronutrients in the USA in the period 1929–2005, we can mathematically explain the variability of R_original for each quarter of a human life. On the basis of multiple regression of R_original with regard to the availability of three macronutrients: carbohydrates, total fat, and protein, with or without alcohol, we propose seven equations (referred to as “the calculator” throughout the text) which allow calculating optimal changes in the proportions of macronutrients to reduce the risk of AD for each age group: youth, early middle age, late middle age and late age. The results obtained with the use of “the calculator” are grouped in a table (Table 4) of macronutrient proportions optimal for reducing the risk of AD in each age group through minimizing R_predicted, i.e., minimizing the strength of correlation between PCPI and future AADR. PMID:27992612
Decreasing relative risk premium
DEFF Research Database (Denmark)
Hansen, Frank
2007-01-01
…such that the corresponding relative risk premium is a decreasing function of present wealth, and we determine the set of associated utility functions. We find a new characterization of risk vulnerability and determine a large set of utility functions, closed under summation and composition, which are both risk vulnerable… and have decreasing relative risk premium. We finally introduce the notion of partial risk neutral preferences on binary lotteries and show that partial risk neutrality is equivalent to preferences with decreasing relative risk premium…
Total Pesticide Exposure Calculation among Vegetable Farmers in Benguet, Philippines
Directory of Open Access Journals (Sweden)
Jinky Leilanie Lu
2009-01-01
Full Text Available This was a cross-sectional study that investigated pesticide exposure and its risk factors, targeting vegetable farmers selected through cluster sampling. The sample size, calculated with α = .05, was 211 vegetable farmers and 37 farms. The mean usage of pesticide was 21.35 liters. Risk factors included a damaged backpack sprayer (34.7%), spills on hands (31.8%), and spraying against the wind (58%). The top 3 pesticides used were pyrethroids (46.4%), organophosphates (24.2%), and carbamates (21.3%). Those who were exposed to fungicides and insecticides also had higher total pesticide exposure. Furthermore, a farmer who was a pesticide applicator, mixer, or loader, and who had not been given instructions through training, was at risk of higher pesticide exposure. The most prevalent symptoms were headache (64.1%), muscle pain (61.1%), cough (45.5%), weakness (42.4%), eye pain (39.9%), chest pain (37.4%), and eye redness (33.8%). The data can be used for the formulation of an integrated program on safety and health in the vegetable industry.
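A sample-size calculation of the kind the abstract alludes to (a proportion-based design with α = .05) is commonly done with Cochran's formula. The sketch below uses illustrative inputs (p, e, N); it is not a reconstruction of the study's actual calculation:

```python
# Cochran sample-size sketch for estimating a proportion (illustrative
# parameters, not the study's): z for the confidence level, p the assumed
# proportion, e the margin of error, N an optional finite population size.
import math

def cochran_n(z, p, e, N=None):
    n0 = z**2 * p * (1 - p) / e**2               # infinite-population size
    if N is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite population correction

n = cochran_n(z=1.96, p=0.5, e=0.05)   # maximally conservative p = 0.5
```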
[Model calculation to explain the BSE-incidence in Germany].
Oberthür, Radulf C
2004-01-01
The future development of BSE incidence in Germany is investigated using a simple epidemiological model calculation. The starting point is the development of the incidence of confirmed suspect BSE cases in Great Britain since 1988, the hitherto known mechanisms of transmission, the measures taken to decrease the risk of transmission, and the development of BSE incidence in Germany obtained from active post mortem laboratory testing of all cattle older than 24 months. The risk of transmission is characterized by the reproduction ratio of the disease. There is a shift in time between the risk of BSE transmission and the BSE incidence caused by the incubation time of more than 4 years. The observed decrease of the incidence in Germany from 2001 to 2003 is not a consequence of the measures taken at the end of 2000 to contain the disease. It can rather be explained by an import of BSE-contaminated products from countries with a high BSE incidence in the years 1995/96 being used in calf feeding in Germany. From the future course of the BSE incidence in Germany after 2003, a quantification of the recycling rate of BSE-infected material within Germany before the end of 2000 will be possible by use of the proposed model, if active surveillance is continued.
Comparison of analytical methods for calculation of wind loads
Minderman, Donald J.; Schultz, Larry L.
1989-01-01
The following analysis is a comparison of analytical methods for the calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, the factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.
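The methods compared above all start from a velocity-pressure relation of the ANSI A58.1 form. The sketch below shows that basic relation only, with the exposure and importance coefficients left as plain inputs; it is not a full implementation of any of the four codes:

```python
# Velocity-pressure sketch in the ANSI A58.1 style: q in psf, V in mph.
# kz (exposure/height coefficient) and the importance factor are left as
# illustrative inputs; real code values depend on exposure category, etc.
def velocity_pressure_psf(v_mph, kz=1.0, importance=1.0):
    return 0.00256 * kz * (importance * v_mph) ** 2

# Pressures over the hurricane speed range considered in the analysis:
pressures = {v: velocity_pressure_psf(v) for v in (100, 110, 115, 125)}
```

Even before code-specific coefficients are applied, the V² dependence means the 100-to-125 mph range spans roughly a 56% increase in base pressure.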
A Novel TRM Calculation Method by Probabilistic Concept
Audomvongseree, Kulyos; Yokoyama, Akihiko; Verma, Suresh Chand; Nakachi, Yoshiki
In a new competitive environment, it becomes possible for third parties to access a transmission facility. From this structure, to efficiently manage the utilization of the transmission network, a new definition of Available Transfer Capability (ATC) has been proposed. According to the North American Electric Reliability Council (NERC)'s definition, ATC depends on several parameters, i.e. Total Transfer Capability (TTC), Transmission Reliability Margin (TRM), and Capacity Benefit Margin (CBM). This paper is focused on the calculation of TRM, which is one of the security margins reserved for uncertainty in system conditions. A TRM calculation by a probabilistic method is proposed in this paper. Based on the modeling of load forecast error and error in transmission line limitation, various cases of transmission transfer capability and its related probabilistic nature can be calculated. By consideration of the proposed concept of risk analysis, the appropriate required amount of TRM can be obtained. The objective of this research is to provide realistic information on the actual ability of the network, which may be an alternative choice for system operators to make an appropriate decision in the competitive market. The advantages of the proposed method are illustrated by application to the IEEJ-WEST10 model system.
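The probabilistic idea can be sketched with a small Monte-Carlo experiment. The distributions, their parameters, and the 95% confidence criterion below are illustrative assumptions, not the paper's actual models:

```python
# Monte-Carlo sketch of a probabilistic TRM: sample the load-forecast
# error and the line-limit error, and take TRM as the margin that covers
# the transfer-capability shortfall with the chosen confidence level.
import random

random.seed(1)
TTC_NOMINAL = 1000.0  # MW, deterministic total transfer capability (assumed)

def sampled_ttc():
    load_err = random.gauss(0.0, 30.0)    # MW, load-forecast error (assumed)
    limit_err = random.gauss(0.0, 20.0)   # MW, line-limit uncertainty (assumed)
    return TTC_NOMINAL - load_err - limit_err

samples = sorted(sampled_ttc() for _ in range(10000))
ttc_95 = samples[int(0.05 * len(samples))]   # value exceeded 95% of the time
trm = TTC_NOMINAL - ttc_95                   # margin covering that shortfall
```

ATC then follows as TTC minus TRM (and CBM), so a tighter confidence requirement directly shrinks the capability offered to the market.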
CALCULATION OF PER PARCEL PROBABILITY FOR DUD BOMBS IN GERMANY
Directory of Open Access Journals (Sweden)
S. M. Tavakkoli Sabour
2014-10-01
Full Text Available Unexploded aerial bombs, also known as duds or unfused bombs, from the bombardments of past wars remain explosive for decades under the earth's surface, threatening civil activities, especially if dredging works are involved. Interpretation of the aerial photos taken shortly after bombardments has proven to be useful for finding the duds. Unfortunately, the reliability of this method is limited by some factors. The chance of finding a dud on an aerial photo depends strongly on the photography system, the size of the bomb and the land cover. On the other hand, exploded bombs are considerably better detectable on aerial photos and confidently represent the extent and density of a bombardment. Considering an empirical quota of unfused bombs, the expected number of duds can be calculated from the number of exploded bombs. This can help to better calculate the cost-risk ratio and to classify areas for clearance. This article is about a method for calculating a per-parcel probability of dud bombs according to the distribution and density of exploded bombs. No similar work has been reported in this field by other authors.
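The per-parcel reasoning can be sketched as follows. The dud quota, the crater count, and the use of a Poisson model are illustrative assumptions, not the article's actual parameters:

```python
# Sketch: exploded craters counted on the photo give the bombardment
# density on a parcel; an assumed empirical dud quota q converts that to
# an expected dud count, and a Poisson model gives the parcel probability.
import math

def expected_duds(n_exploded, dud_quota):
    # If a fraction q of all dropped bombs failed, exploded = (1-q) * total,
    # so duds = exploded * q / (1-q).
    return n_exploded * dud_quota / (1.0 - dud_quota)

def prob_at_least_one_dud(n_exploded, dud_quota):
    lam = expected_duds(n_exploded, dud_quota)   # Poisson mean for the parcel
    return 1.0 - math.exp(-lam)

p = prob_at_least_one_dud(n_exploded=8, dud_quota=0.10)  # assumed inputs
```

Ranking parcels by this probability gives the classification for clearance that the abstract mentions.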
Providing access to risk prediction tools via the HL7 XML-formatted risk web service.
Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil
2013-07-01
Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future
Spreadsheet Based Scaling Calculations and Membrane Performance
Energy Technology Data Exchange (ETDEWEB)
Wolfe, T D; Bourcier, W L; Speth, T F
2000-12-28
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO4·2H2O), BaSO4, SrSO4, SiO2, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product "Q". The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI
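The core scaling test described above, comparing an effective ion product Q against a temperature-adjusted solubility product Ksp, can be sketched as follows (illustrative values only; the real program derives Q from an iterative speciation calculation):

```python
import math

def saturation_index(ion_activity_product, ksp):
    """Saturation index SI = log10(Q / Ksp).

    SI > 0: supersaturated (scale formation likely); SI < 0: undersaturated;
    SI = 0: equilibrium. Q is the effective ion activity product and Ksp the
    temperature-adjusted solubility product.
    """
    return math.log10(ion_activity_product / ksp)

# Hypothetical numbers for gypsum (CaSO4·2H2O) in a concentrate stream:
q = 5.0e-5    # effective ion product from a speciation calculation
ksp = 2.5e-5  # temperature-adjusted solubility product
si = saturation_index(q, ksp)
```

A positive index flags a supersaturated, scale-forming condition for that mineral species.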
Ti-84 Plus graphing calculator for dummies
McCalla
2013-01-01
Get up to speed on the functionality of your TI-84 Plus calculator Completely revised to cover the latest updates to the TI-84 Plus calculators, this bestselling guide will help you become the most savvy TI-84 Plus user in the classroom! Exploring the standard device, the updated device with USB plug and upgraded memory (the TI-84 Plus Silver Edition), and the upcoming color screen device, this book provides you with clear, understandable coverage of the TI-84's updated operating system. Details the new apps that are available for download to the calculator via the USB cabl
Energy of plate tectonics calculation and projection
Directory of Open Access Journals (Sweden)
N. H. Swedan
2013-02-01
Full Text Available Mathematics and observations suggest that the energy of the geological activities resulting from plate tectonics is equal to the latent heat of melting, calculated at the mantle's pressure, of the new ocean crust created at mid-ocean ridges following sea floor spreading. This energy varies with the temperature of the ocean floor, which is correlated with surface temperature. The objective of this manuscript is to calculate the force that drives plate tectonics, estimate the energy released, verify the calculations based on experiments and observations, and project the increase of geological activities with surface temperature rise caused by climate change.
Assessment of seismic margin calculation methods
Energy Technology Data Exchange (ETDEWEB)
Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.
1989-03-01
Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs.
Program Calculates Current Densities Of Electronic Designs
Cox, Brian
1996-01-01
PDENSITY computer program calculates current densities for use in calculating power densities of electronic designs. Reads parts-list file for given design, file containing current required for each part, and file containing size of each part. For each part in design, program calculates current density in units of milliamperes per square inch. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19588). PC version of program (NPO-19171).
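The per-part arithmetic the program performs is straightforward; a sketch in Python (the original is an AWK script, and the part names, currents, and footprints here are hypothetical):

```python
def current_density_mA_per_sq_inch(current_mA, width_in, height_in):
    """Current density in milliamperes per square inch for one part."""
    return current_mA / (width_in * height_in)

# Hypothetical parts list: (name, current in mA, footprint in inches)
parts = [("U1", 120.0, (0.5, 0.5)), ("R3", 2.0, (0.1, 0.05))]
densities = {name: current_density_mA_per_sq_inch(i, w, h)
             for name, i, (w, h) in parts}
```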
Hamming generalized corrector for reactivity calculation
Energy Technology Data Exchange (ETDEWEB)
Suescun-Diaz, Daniel; Ibarguen-Gonzalez, Maria C.; Figueroa-Jimenez, Jorge H. [Pontificia Universidad Javeriana Cali, Cali (Colombia). Dept. de Ciencias Naturales y Matematicas
2014-06-15
This work presents the Hamming method generalized corrector for numerically resolving the differential equation of delayed neutron precursor concentration from the point kinetics equations for reactivity calculation, without using the nuclear power history or the Laplace transform. A study was carried out of several correctors with their respective modifiers with different time step calculations, to offer stability and greater precision. Better results are obtained for some correctors than with other existing methods. Reactivity can be calculated with precision of the order h^5, where h is the time step. (orig.)
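For orientation, the classical Hamming predictor-corrector (Milne predictor plus Hamming corrector, without the modifiers the paper studies) can be sketched for a generic ODE y' = f(t, y); this is the textbook scheme, not the authors' generalized corrector:

```python
import math

def hamming_pc(f, t0, y0, h, steps):
    """Classical Hamming predictor-corrector (no modifiers) for y' = f(t, y).

    The first three steps are bootstrapped with 4th-order Runge-Kutta.
    """
    def rk4(t, y):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    ts = [t0 + i * h for i in range(4)]
    ys = [y0]
    for i in range(3):
        ys.append(rk4(ts[i], ys[i]))
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for n in range(3, steps):
        t_next = ts[n] + h
        # Milne predictor
        yp = ys[n - 3] + 4 * h / 3 * (2 * fs[n] - fs[n - 1] + 2 * fs[n - 2])
        # Hamming corrector, evaluated with the predicted slope
        yc = (9 * ys[n] - ys[n - 2]) / 8 + 3 * h / 8 * (
            f(t_next, yp) + 2 * fs[n] - fs[n - 1])
        ts.append(t_next)
        ys.append(yc)
        fs.append(f(t_next, yc))
    return ts, ys
```

Applied to y' = -y over [0, 1], the scheme tracks exp(-t) to roughly fourth-order accuracy in h.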
Pressure vessel calculations for VVER-440 reactors.
Hordósy, G; Hegyi, Gy; Keresztúri, A; Maráczy, Cs; Temesvári, E; Vértes, P; Zsolnay, E
2005-01-01
For the determination of the fast neutron load of the reactor pressure vessel a mixed calculational procedure was developed. The procedure was applied to the Unit II of Paks NPP, Hungary. The neutron source on the outer surfaces of the reactor was determined by a core design code, and the neutron transport calculations outside the core were performed by the Monte Carlo code MCNP. The reaction rate in the activation detectors at surveillance positions and at the cavity were calculated and compared with measurements. In most cases, fairly good agreement was found.
The WFIRST Galaxy Survey Exposure Time Calculator
Hirata, Christopher M.; Gehrels, Neil; Kneib, Jean-Paul; Kruk, Jeffrey; Rhodes, Jason; Wang, Yun; Zoubian, Julien
2013-01-01
This document describes the exposure time calculator for the Wide-Field Infrared Survey Telescope (WFIRST) high-latitude survey. The calculator works in both imaging and spectroscopic modes. In addition to the standard ETC functions (e.g. background and SN determination), the calculator integrates over the galaxy population and forecasts the density and redshift distribution of galaxy shapes usable for weak lensing (in imaging mode) and the detected emission lines (in spectroscopic mode). The source code is made available for public use.
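Behind any ETC's "SN determination" sits some variant of the standard CCD signal-to-noise equation; a generic sketch follows (this is not the WFIRST code, and all parameter values would be instrument-specific):

```python
import math

def point_source_snr(source_rate, sky_rate, dark_rate, read_noise,
                     n_pix, exposure_s):
    """Signal-to-noise for a point source under the standard CCD equation.

    Rates are in electrons/s (source: total in aperture; sky and dark:
    per pixel), read_noise in electrons RMS per pixel, n_pix the number
    of pixels in the photometric aperture.
    """
    signal = source_rate * exposure_s
    variance = signal + n_pix * (
        (sky_rate + dark_rate) * exposure_s + read_noise ** 2)
    return signal / math.sqrt(variance)
```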
Risks of advanced technology - Nuclear: risk comparison
Energy Technology Data Exchange (ETDEWEB)
Latarjet, R. (Institut du Radium, Orsay (France))
The author presents a general definition of the concept of risk and makes a distinction between the various types of risk - the absolute and the relative; the risk for oneself and for others. The quantitative comparison of risks presupposes their "interchangeability". In the case of major risks in the long term - or genotoxic risks - there is a certain degree of interchangeability which makes this quantitative comparison possible. It is expressed by the concept of rad-equivalence which the author defines and explains giving as a concrete example the work conducted on ethylene and ethylene oxide.
Temperature calculation in fire safety engineering
Wickström, Ulf
2016-01-01
This book provides a consistent scientific background to engineering calculation methods applicable to analyses of materials reaction-to-fire, as well as fire resistance of structures. Several new and unique formulas and diagrams which facilitate calculations are presented. It focuses on problems involving high temperature conditions and, in particular, defines boundary conditions in a suitable way for calculations. A large portion of the book is devoted to boundary conditions and measurements of thermal exposure by radiation and convection. The concepts and theories of adiabatic surface temperature and measurements of temperature with plate thermometers are thoroughly explained. Also presented is a renewed method for modeling compartment fires, with the resulting simple and accurate prediction tools for both pre- and post-flashover fires. The final chapters deal with temperature calculations in steel, concrete and timber structures exposed to standard time-temperature fire curves. Useful temperature calculat...
Measured and Calculated Volumes of Wetland Depressions
U.S. Environmental Protection Agency — Measured and calculated volumes of wetland depressions This dataset is associated with the following publication: Wu, Q., and C. Lane. Delineation and quantification...
Spectra: Time series power spectrum calculator
Gallardo, Tabaré
2017-01-01
Spectra calculates the power spectrum of a time series equally spaced or not based on the Spectral Correlation Coefficient (Ferraz-Mello 1981, Astron. Journal 86 (4), 619). It is very efficient for detection of low frequencies.
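For comparison, the classical periodogram of an equally spaced series can be computed with a direct DFT; note this is the textbook estimator, not the Spectral Correlation Coefficient method that Spectra implements (which also handles uneven spacing):

```python
import cmath
import math

def power_spectrum(samples, dt):
    """Classical periodogram of an equally spaced series via a direct DFT.

    Returns (frequencies, powers) for the positive-frequency bins.
    """
    n = len(samples)
    freqs, powers = [], []
    for k in range(1, n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * m / n)
                    for m, x in enumerate(samples))
        freqs.append(k / (n * dt))
        powers.append(abs(coeff) ** 2 / n)
    return freqs, powers
```

A pure sinusoid at an exact bin frequency produces a single dominant peak at that bin.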
Large Numbers and Calculators: A Classroom Activity.
Arcavi, Abraham; Hadas, Nurit
1989-01-01
Described is an activity demonstrating how a scientific calculator can be used in a mathematics classroom to introduce new content while studying a conventional topic. Examples of reading and writing large numbers, and reading hidden results are provided. (YP)
Fair and Reasonable Rate Calculation Data -
Department of Transportation — This dataset provides guidelines for calculating the fair and reasonable rates for U.S. flag vessels carrying preference cargoes subject to regulations contained at...
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
Multigrid Methods in Electronic Structure Calculations
Briggs, E L; Bernholc, J
1996-01-01
We describe a set of techniques for performing large scale ab initio calculations using multigrid accelerations and a real-space grid as a basis. The multigrid methods provide effective convergence acceleration and preconditioning on all length scales, thereby permitting efficient calculations for ill-conditioned systems with long length scales or high energy cut-offs. We discuss specific implementations of multigrid and real-space algorithms for electronic structure calculations, including an efficient multigrid-accelerated solver for Kohn-Sham equations, compact yet accurate discretization schemes for the Kohn-Sham and Poisson equations, optimized pseudopotentials for real-space calculations, efficacious computation of ionic forces, and a complex-wavefunction implementation for arbitrary sampling of the Brillouin zone. A particular strength of a real-space multigrid approach is its ready adaptability to massively parallel computer architectures, and we present an implementation for the Cray-T3D with essen...
46 CFR 170.090 - Calculations.
2010-10-01
... necessary to compute and plot any of the following curves as part of the calculations required in this subchapter, these plots must also be submitted: (1) Righting arm or moment curves. (2) Heeling arm or...
Representation and calculation of economic uncertainties
DEFF Research Database (Denmark)
Schjær-Jacobsen, Hans
2002-01-01
Management and decision making when certain information is available may be a matter of rationally choosing the optimal alternative by calculation of the utility function. When only uncertain information is available (which is most often the case) decision-making calls for more complex methods...... of representation and calculation and the basis for choosing the optimal alternative may become obscured by uncertainties of the utility function. In practice, several sources of uncertainties of the required information impede optimal decision making in the classical sense. In order to be able to better handle...... to uncertain economic numbers are discussed. When solving economic models for decision-making purposes calculation of uncertain functions will have to be carried out in addition to the basic arithmetical operations. This is a challenging numerical problem since improper methods of calculation may introduce...
Note about socio-economic calculations
DEFF Research Database (Denmark)
Landex, Alex; Andersen, Jonas Lohmann Elkjær; Salling, Kim Bang
2006-01-01
these effects must be described qualitatively. This note describes the socio-economic evaluation based on market prices and not factor prices which has been the tradition in Denmark till now. This is due to the recommendation from the Ministry of Transport to start using calculations based on market prices......This note gives a short introduction of how to make socio-economic evaluations in connection with the teaching at the Centre for Traffic and Transport (CTT). It is not a manual for making socio-economic calculations in transport infrastructure projects – in this context we refer to the guidelines...... for socio-economic calculations within the transportation area (Ministry of Traffic, 2003). The note also explains the theory of socio-economic calculations – reference is here made to ”Road Infrastructure Planning – a Decision-oriented approach” (Leleur, 2000). Socio-economic evaluations of infrastructure...
Obliged to Calculate: "My School", Markets, and Equipping Parents for Calculativeness
Gobby, Brad
2016-01-01
This paper argues neoliberal programs of government in education are equipping parents for calculativeness. Regimes of testing and the publication of these results and other organizational data are contributing to a public economy of numbers that increasingly oblige citizens to calculate. Using the notions of calculative and market devices, this…
A revised calculational model for fission
Energy Technology Data Exchange (ETDEWEB)
Atchison, F.
1998-09-01
A semi-empirical parametrization has been developed to calculate the fission contribution to evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation-energy and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: Nuclei from Ta to Cf, interactions involving nucleons up to medium energy and light ions. (author)
A Java Interface for Roche Lobe Calculations
Leahy, D. A.; Leahy, J. C.
2015-09-01
A JAVA interface for calculating various properties of the Roche lobe has been created. The geometry of the Roche lobe is important for studying interacting binary stars, particularly those with compact objects which have a companion which fills the Roche lobe. There is no known analytic solution to the Roche lobe problem. Here the geometry of the Roche lobe is calculated numerically to high accuracy and made available to the user for arbitrary input mass ratio, q.
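Although the lobe geometry itself has no analytic solution, the volume-equivalent Roche lobe radius is commonly approximated by Eggleton's (1983) fitting formula, which is handy for sanity-checking a numerical solver such as this one:

```python
import math

def roche_lobe_radius_eggleton(q):
    """Eggleton (1983) fitting formula for the volume-equivalent Roche lobe
    radius, in units of the orbital separation a.

    q is the mass ratio M1/M2 for the star whose lobe is computed; the
    formula is accurate to about 1% over all q.
    """
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))
```

For an equal-mass binary (q = 1) the lobe radius is roughly 0.38 times the separation, and it grows monotonically with q.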
Realistic level density calculation for heavy nuclei
Energy Technology Data Exchange (ETDEWEB)
Cerf, N. [Institut de Physique Nucleaire, Orsay (France); Pichon, B. [Observatoire de Paris, Meudon (France); Rayet, M.; Arnould, M. [Institut d`Astronomie et d`Astrophysique, Bruxelles (Belgium)
1994-12-31
A microscopic calculation of the level density is performed, based on a combinatorial evaluation using a realistic single-particle level scheme. This calculation relies on a fast Monte Carlo algorithm, allowing consideration of heavy nuclei (i.e., large shell model spaces) which could not be treated previously in combinatorial approaches. An exhaustive comparison of the predicted neutron s-wave resonance spacings with experimental data for a wide range of nuclei is presented.
Flow calculation of a bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Goede, E.; Pestalozzi, J.
1987-01-01
In recent years remarkable progress has been made in the field of theoretical flow calculation. Studying the relevant literature one might receive the impression that most problems have been solved. But probing more deeply into details one becomes aware that by no means all questions are answered. The report tries to point out what may be expected of the quasi-three-dimensional flow calculation method employed and - much more important - what it must not be expected to accomplish. (orig.)
Green's function calculations of light nuclei
Sun, ZhongHao; Wu, Qiang; Xu, FuRong
2016-09-01
The influence of short-range correlations in nuclei was investigated with a realistic nuclear force. The nucleon-nucleon interaction was renormalized with the V_low-k technique and applied to the Green's function calculations. The Dyson equation was reformulated with algebraic diagrammatic constructions. We also analyzed the binding energy of 4He, calculated with the chiral and CD-Bonn potentials. The properties of Green's functions with realistic nuclear forces are also discussed.
Calculation Methods for Wallenius’ Noncentral Hypergeometric Distribution
DEFF Research Database (Denmark)
Fog, Agner
2008-01-01
distribution are derived. Range of applicability, numerical problems, and efficiency are discussed for each method. Approximations to the mean and variance are also discussed. This distribution has important applications in models of biased sampling and in models of evolutionary systems....... is the conditional distribution of independent binomial variates given their sum. No reliable calculation method for Wallenius' noncentral hypergeometric distribution has hitherto been described in the literature. Several new methods for calculating probabilities from Wallenius' noncentral hypergeometric...
Users enlist consultants to calculate costs, savings
Energy Technology Data Exchange (ETDEWEB)
1982-05-24
Consultants who calculate payback provide expertise and a second opinion to back up energy managers' proposals. They can lower the costs of an energy-management investment by making complex comparisons of systems and recommending the best system for a specific application. Examples of payback calculations include simple payback for a school system, a university, and a Disneyland hotel, as well as internal rate of return for a corporate office building and a chain of clothing stores. (DCK)
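The two payback measures mentioned, simple payback and internal rate of return, can be sketched as follows (all figures are hypothetical):

```python
def simple_payback_years(installed_cost, annual_savings):
    """Simple payback: years for cumulative savings to repay the cost."""
    return installed_cost / annual_savings

def internal_rate_of_return(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR via bisection on the net present value.

    cash_flows[0] is the (negative) initial investment; assumes the NPV
    changes sign exactly once on [lo, hi].
    """
    def npv(rate):
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

A $50,000 retrofit saving $12,500 per year has a simple payback of 4 years; IRR additionally accounts for the time value of money across the full cash-flow stream.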
DOWNSCALE APPLICATION OF BOILER THERMAL CALCULATION APPROACH
Zelený, Zbynĕk; Hrdlička, Jan
2016-01-01
Commonly used thermal calculation methods are intended primarily for large-scale boilers. Small-scale hot water boilers, which are commonly used for home heating, have many specifics that distinguish them from large-scale boilers, especially steam boilers. This paper is focused on the application of a thermal calculation procedure designed for large-scale boilers to a small-scale biomass combustion boiler of load capacity 25 kW. A special issue solved here is the influence of formation of dep...
Reciprocity Theorems for Ab Initio Force Calculations
Wei, C; Mele, E J; Rappe, A M; Lewis, Steven P.; Rappe, Andrew M.
1996-01-01
We present a method for calculating ab initio interatomic forces which scales quadratically with the size of the system and provides a physically transparent representation of the force in terms of the spatial variation of the electronic charge density. The method is based on a reciprocity theorem for evaluating an effective potential acting on a charged ion in the core of each atom. We illustrate the method with calculations for diatomic molecules.
R-matrix calculation for photoionization
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
We have employed the R-matrix method to calculate differential cross sections for photoionization of helium leaving the helium ion in an excited state, for incident photon energies between the N=2 and N=3 thresholds (69-73 eV) of the He+ ion. Differential cross sections for photoionization into the N=2 level at an emission angle of 0° are provided. Our results are in good agreement with available experimental data and theoretical calculations.
Efficient Finite Element Calculation of Nγ
DEFF Research Database (Denmark)
Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.
2007-01-01
This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.
Computerized calculation of material balances in carbonization
Energy Technology Data Exchange (ETDEWEB)
Chistyakov, A.M.
1980-09-01
Charge formulations and carbonisation schedules are described by empirical formulae used to calculate the yield of coking products. An algorithm is proposed for calculating the material balance, and associated computer program. The program can be written in conventional languages, e.g. Fortran, Algol etc. The information obtained can be used for on-line assessment of the effects of charge composition and properties on the coke and by-products yields, as well as the effects of the carbonisation conditions.
Calculating Cumulative Binomial-Distribution Probabilities
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
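The underlying computation is the cumulative binomial sum; a minimal sketch of the k-out-of-n reliability case (CUMBIN itself is written in C and is designed to handle arbitrary inputs robustly):

```python
import math

def binomial_cdf_upper(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the k-out-of-n system reliability
    when each of n components succeeds independently with probability p."""
    return sum(math.comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))
```

For example, a 2-out-of-3 system of components with 0.9 reliability each has system reliability 0.972.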
Linear Response Calculations of Spin Fluctuations
Savrasov, S. Y.
1998-09-01
A variational formulation of the time-dependent linear response based on the Sternheimer method is developed in order to make practical ab initio calculations of dynamical spin susceptibilities of solids. Using gradient density functional and a muffin-tin-orbital representation, the efficiency of the approach is demonstrated by applications to selected magnetic and strongly paramagnetic metals. The results are found to be consistent with experiment and are compared with previous theoretical calculations.
Cucinotta, Francis A.; Kim, Myung-Hee Y.; Ren, Lei
2005-01-01
This document addresses calculations of probability distribution functions (PDFs) representing uncertainties in projecting fatal cancer risk from galactic cosmic rays (GCR) and solar particle events (SPEs). PDFs are used to test the effectiveness of potential radiation shielding approaches. Monte-Carlo techniques are used to propagate uncertainties in risk coefficients determined from epidemiology data, dose and dose-rate reduction factors, quality factors, and physics models of radiation environments. Competing mortality risks and functional correlations in radiation quality factor uncertainties are treated in the calculations. The cancer risk uncertainty is about four-fold for lunar and Mars mission risk projections. For short-stay lunar missions (shielding. For long-duration (>180 d) lunar or Mars missions, GCR risks may exceed radiation risk limits. While shielding materials are marginally effective in reducing GCR cancer risks because of the penetrating nature of GCR and secondary radiation produced in tissue by relativistic particles, polyethylene or carbon composite shielding cannot be shown to significantly reduce risk compared to aluminum shielding. Therefore, improving our knowledge of space radiobiology to narrow uncertainties that lead to wide PDFs is the best approach to ensure radiation protection goals are met for space exploration.
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
How can ab initio simulations address risks in nanotech?
Barnard, Amanda S
2009-06-01
Discussions of the potential risks and hazards associated with nanomaterials and nanoparticles tend to focus on the need for further experiments. However, theoretical and computational nanoscientists could also contribute by making their calculations more relevant to research into this area.
The Architecture of Financial Risk Management Systems
Directory of Open Access Journals (Sweden)
Iosif ZIMAN
2013-01-01
Full Text Available The architecture of systems dedicated to risk management is probably one of the more complex tasks to tackle in the world of finance. Financial risk has been at the center of attention since the explosive growth of financial markets and even more so after the 2008 financial crisis. At multiple levels, financial companies, financial regulatory bodies, governments and cross-national regulatory bodies have all put the subject of financial risk, and the way it is calculated, managed, reported and monitored, under intense scrutiny. As a result, the technology underpinnings which support the implementation of financial risk systems have evolved considerably and have become one of the most complex areas involving systems and technology in the financial industry. We present the main paradigms, requirements and design considerations when undertaking the implementation of a risk system and give examples of user requirements, sample product coverage and performance parameters.
Real Time Radiation Exposure And Health Risks
Hu, Shaowen; Barzilla, Janet E.; Semones, Edward J.
2015-01-01
Radiation from solar particle events (SPEs) poses a serious threat to future manned missions outside of low Earth orbit (LEO). Accurate characterization of the radiation environment in the inner heliosphere and timely monitoring of the health risks to the crew are essential steps to ensure the safety of future Mars missions. In this project we plan to develop an approach that can use the particle data from multiple satellites and perform near real-time simulations of radiation exposure and health risks for various exposure scenarios. Time-course profiles of dose rates will be calculated with HZETRN and PDOSE from the energy spectra and compositions of the particles archived from satellites, and will be validated against recent radiation exposure measurements in space. Real-time estimation of radiation risks will be investigated using ARRBOD. This cross-discipline integrated approach can improve risk mitigation by providing critical information for risk assessment and medical guidance to the crew during SPEs.
Evaluation of allowed outage times (AOTs) from a risk and reliability standpoint
Energy Technology Data Exchange (ETDEWEB)
Vesely, W.E. (Science Applications International Corp., Columbus, OH (USA))
1989-08-01
This report describes the basic risks which are associated with allowed outage times (AOTs), defines strategies for selecting the risks to be quantified, and describes how the risks can be quantified. The report furthermore describes criteria considerations in determining the acceptability of calculated AOT risks, and discusses the merits of relative risk criteria versus absolute risk criteria. The detailed evaluations which are involved in calculating AOT risks, including uncertainty considerations, are also discussed. The report also describes the proper ways that risks from multiple AOTs should be considered so that risks are properly accumulated from proposed multiple AOT changes, but are not double-counted. Generally, average AOT risks, which include the frequency of occurrence of the AOT, need to be accumulated, but single-downtime risks do not, since they apply to individual AOTs. 8 refs., 22 tabs.
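The accumulation rule described above, where average yearly AOT risk includes the frequency of the outage while a single-downtime risk does not, can be illustrated with a toy model (function names and figures are hypothetical, not the report's methodology):

```python
def single_downtime_risk(conditional_cdf_per_year, baseline_cdf_per_year,
                         downtime_hours):
    """Risk contribution of one outage: the core damage frequency increase
    while the equipment is down, integrated over the downtime."""
    return (conditional_cdf_per_year - baseline_cdf_per_year) * (
        downtime_hours / 8760.0)

def yearly_aot_risk(outages_per_year, conditional_cdf_per_year,
                    baseline_cdf_per_year, mean_downtime_hours):
    """Average yearly AOT risk: frequency of outages times the
    single-downtime risk; the two must not be double-counted."""
    return outages_per_year * single_downtime_risk(
        conditional_cdf_per_year, baseline_cdf_per_year, mean_downtime_hours)
```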
The Application of VaR Method to Risk Evaluation of Bank Loans
Institute of Scientific and Technical Information of China (English)
邹新月
2005-01-01
The Value-at-Risk (VaR) model, developed in recent years, is a mathematical model for measuring and monitoring market risk. This article discusses the calculation procedures and methods for applying VaR to the evaluation of bank loan risk, clarifies the differences between the Bank for International Settlements approach to setting credit risk reserves and the VaR approach to calculating bank loan risk, and argues that VaR has practical value and broad prospects for application in bank loan risk evaluation.
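As a concrete illustration of a VaR calculation (a generic variance-covariance sketch under a normality assumption, not the article's specific procedure):

```python
import math

def parametric_var(portfolio_value, daily_volatility, confidence_z,
                   horizon_days):
    """Variance-covariance (parametric) VaR for a normally distributed
    portfolio return: V * z * sigma * sqrt(horizon)."""
    return (portfolio_value * confidence_z * daily_volatility
            * math.sqrt(horizon_days))
```

For example, a 1,000,000-unit loan portfolio with 1% daily volatility has a one-day 95% VaR (z = 1.645) of about 16,450 units.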
Information security risk analysis
Peltier, Thomas R
2001-01-01
Effective Risk Analysis; Qualitative Risk Analysis; Value Analysis; Other Qualitative Methods; Facilitated Risk Analysis Process (FRAP); Other Uses of Qualitative Risk Analysis; Case Study; Appendix A: Questionnaire; Appendix B: Facilitated Risk Analysis Process Forms; Appendix C: Business Impact Analysis Forms; Appendix D: Sample of Report; Appendix E: Threat Definitions; Appendix F: Other Risk Analysis Opinions; Index
Risk, Resources and Structures
DEFF Research Database (Denmark)
Lyng Jensen, Jesper; Ponsaing, Claus Due; Thrane, Sof
2012-01-01
to large risk events is to mitigate the consequences of the risk event through negotiating with the environment. If such negotiations fail, the subject will have no alternative but to let other activities and projects under direct control of the risk owner suffer. We end the article with conjectures...... and implications for ERM, suggesting the addition of a risk resource forecast and discussing implications for four types of risk mitigation strategies: capital requirements, risk diversification, network relations and insurance....
Medicare's risk-adjusted capitation method.
Grimaldi, Paul L
2002-01-01
Since 1997, the method to establish capitation rates for Medicare beneficiaries who are members of risk-bearing managed care plans has undergone several important developments. This includes the factoring of beneficiary health status into the rate-setting calculations. These changes were expected to increase the number of participating health plans, accelerate Medicare enrollment growth, and slice Medicare spending.
Aven, Terje
2012-01-01
Foundations of Risk Analysis presents the issues core to risk analysis - understanding what risk means, expressing risk, building risk models, addressing uncertainty, and applying probability models to real problems. The author provides the readers with the knowledge and basic thinking they require to successfully manage risk and uncertainty to support decision making. This updated edition reflects recent developments on risk and uncertainty concepts, representations and treatment. New material in Foundations of Risk Analysis includes:An up to date presentation of how to understand, define and
RISK MANAGEMENT IN PHARMACEUTICALS
Directory of Open Access Journals (Sweden)
V. SIVA RAMA KRISHNA
2014-04-01
Full Text Available Objective: To review risk in the pharmaceutical industry and the risk management process and tools. There is always risk in anything we do. All industries perform actions that involve risk; risk is only dangerous when there is no anticipation to manage it. Risks that are assessed and controlled properly will protect industries against failure and make them stronger. Risk should not be regarded as inherently bad, but as an opportunity to build resilience through proper management. Risk management can protect industries from disasters, whether natural or human. The impact of a risk should be assessed in order to plan alternatives and minimize its effect. Risk in the pharmaceutical industry is very high because it involves research, money and health; the impact is severe, and risks materialize frequently. Risk management plans and control measures help companies perform better in times of uncertainty and create opportunities to turn risks into benefits that maximize quality. Materials and Methods: The information was collected and compiled from scientific literature in different databases, articles and books. Results: The risk management process and tools help to minimize risk and its effects. Conclusion: Risk management is at the core of any organization and should be part of its culture. Risk management is a wise investment if properly executed.
DEFF Research Database (Denmark)
Rubin, Katrine Hass; Abrahamsen, Bo; Hermann, Anne Pernille
2011-01-01
Purpose: To evaluate the performance of the Swedish version of the Fracture Risk Assessment Tool (FRAX) without bone mass density (BMD) in a Danish population to examine the possibility of applying this version to Danish women. METHODS: From the Danish National Register of social security numbers, we...... randomly selected 5000 women living in the region of Southern Denmark aged 40-90 years to receive a mailed questionnaire concerning risk factors for osteoporosis based on FRAX. The predicted 10-year probability of hip fractures was calculated for each woman returning a complete questionnaire using...... the Swedish version of FRAX. The observed 10-year hip fracture risk was also calculated for each woman using age-specific hip fracture rates from the National Hospital Discharge Register and National survival tables. RESULTS: A total of 4194 (84%) women responded to the questionnaire and 3636 (73%) gave......
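The observed 10-year fracture risk described above combines age-specific fracture rates with survival probabilities. A minimal sketch of such a calculation; the rates and survival values below are entirely hypothetical stand-ins for the Danish register data the study used:

```python
import math

def ten_year_fracture_risk(age, fracture_rates, survival):
    """Observed 10-year hip-fracture probability for a woman of `age`.

    `fracture_rates` maps age -> annual fracture hazard and `survival`
    maps age -> annual survival probability; both are hypothetical here.
    """
    risk = 0.0
    alive_unfractured = 1.0
    for year in range(10):
        a = age + year
        hazard = fracture_rates.get(a, 0.0)
        # probability of reaching this year alive and fracturing during it
        risk += alive_unfractured * (1.0 - math.exp(-hazard))
        alive_unfractured *= survival.get(a, 1.0) * math.exp(-hazard)
    return risk

# illustrative (made-up) inputs: hazard 0.005/year, survival 0.98/year
rates = {a: 0.005 for a in range(70, 80)}
surv = {a: 0.98 for a in range(70, 80)}
risk = ten_year_fracture_risk(70, rates, surv)
```

With these made-up inputs the 10-year probability comes out just under the naive 10 x 0.5% because deaths compete with fracture, which is exactly why the study needed the national survival tables.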
Good Practices in Free-energy Calculations
Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher
2013-01-01
As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices are followed. For the most part, the theory upon which these good practices rely has been known for many years, but often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations will be reviewed demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, could markedly improve efficiency and accuracy of free energy calculations without incurring any additional computational expense.
Paramedics’ Ability to Perform Drug Calculations
Directory of Open Access Journals (Sweden)
Eastwood, Kathryn J
2009-11-01
Full Text Available Background: The ability to perform drug calculations accurately is imperative to patient safety. Research into paramedics' drug calculation abilities was first published in 2000, and for nurses' abilities the research dates back to the late 1930s. Yet there have been no studies investigating undergraduate paramedic students' ability to perform drug or basic mathematical calculations. The objective of this study was to review the literature and determine the ability of undergraduate and qualified paramedics to perform drug calculations. Methods: A search of the prehospital-related electronic databases was undertaken using the Ovid and EMBASE systems available through the Monash University Library. Databases searched included the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, CINAHL, JSTOR, EMBASE and Google Scholar, from their beginning until the end of August 2009. We reviewed references from articles retrieved. Results: The electronic database search located 1,154 articles for review. Six additional articles were identified from reference lists of retrieved articles. Of these, 59 were considered relevant. After reviewing the 59 articles, only three met the inclusion criteria. All articles noted some level of mathematical deficiency amongst their subjects. Conclusions: This study identified only three articles. Results from these limited studies indicate a significant lack of mathematical proficiency amongst the paramedics sampled. A need exists to identify whether undergraduate paramedic students are capable of performing the required drug calculations in a non-clinical setting. [WestJEM. 2009;10:240-243.]
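The drug calculations such studies test typically reduce to the standard volume and infusion-rate formulas; a minimal sketch, with illustrative numbers that are not clinical guidance:

```python
def volume_to_draw_ml(desired_dose_mg, stock_dose_mg, stock_volume_ml):
    """Classic formula: volume = (desired dose / stock dose) * stock volume."""
    return desired_dose_mg / stock_dose_mg * stock_volume_ml

def infusion_rate_ml_per_h(dose_mg_per_h, concentration_mg_per_ml):
    """Infusion rate needed to deliver a prescribed hourly dose."""
    return dose_mg_per_h / concentration_mg_per_ml

# e.g. drawing 2.5 mg from a 10 mg / 2 mL ampoule
volume = volume_to_draw_ml(2.5, 10.0, 2.0)
```

The arithmetic itself is elementary; the studies reviewed found that errors arise under time pressure and with unit conversions rather than from the formulas.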
Comparison of Polar Cap (PC) index calculations.
Stauning, P.
2012-04-01
The Polar Cap (PC) index introduced by Troshichev and Andrezen (1985) is derived from polar magnetic variations and is mainly a measure of the intensity of the transpolar ionospheric currents. These currents relate to the polar cap antisunward ionospheric plasma convection driven by the dawn-dusk electric field, which in turn is generated by the interaction of the solar wind with the Earth's magnetosphere. Coefficients to calculate PCN and PCS index values from polar magnetic variations recorded at Thule and Vostok, respectively, have been derived by several different procedures in the past. The first published set of coefficients for Thule was derived by Vennerstrøm (1991) and is still in use for calculations of PCN index values by DTU Space. Errors in the program used to calculate index values were corrected in 1999 and again in 2001. In 2005 DMI adopted a unified procedure proposed by Troshichev for calculations of the PCN index. Thus there exist four different series of PCN index values. Similarly, at AARI three different sets of coefficients have been used to calculate PCS indices in the past. The presentation discusses the principal differences between the various PC index procedures and provides comparisons between index values derived from the same magnetic data sets using the different procedures. Examples from published papers are examined to illustrate the differences.
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
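The thermodynamic integration mentioned above reduces, once the ensemble averages of dU/dlambda along the path are in hand, to a one-dimensional quadrature; a minimal sketch with made-up averages:

```python
def thermodynamic_integration(dudl_means, lambdas):
    """Trapezoidal estimate of dF = integral over lambda of <dU/dlambda>."""
    df = 0.0
    for i in range(len(lambdas) - 1):
        df += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return df

# illustrative averages: if <dU/dlambda> = 3*lambda^2, the exact dF is 1.0
lams = [0.0, 0.25, 0.5, 0.75, 1.0]
means = [3.0 * l ** 2 for l in lams]
df = thermodynamic_integration(means, lams)
```

The quadrature error (here the trapezoid overshoots 1.0 slightly) is one reason smooth, short paths such as the optimized ones in this article matter: rougher dU/dlambda profiles need many more lambda windows.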
2016 WSES guidelines on acute calculous cholecystitis.
LENUS (Irish Health Repository)
Ansaloni, L
2016-01-01
Acute calculus cholecystitis is a very common disease with several area of uncertainty. The World Society of Emergency Surgery developed extensive guidelines in order to cover grey areas. The diagnostic criteria, the antimicrobial therapy, the evaluation of associated common bile duct stones, the identification of "high risk" patients, the surgical timing, the type of surgery, and the alternatives to surgery are discussed. Moreover the algorithm is proposed: as soon as diagnosis is made and after the evaluation of choledocholitiasis risk, laparoscopic cholecystectomy should be offered to all patients exception of those with high risk of morbidity or mortality. These Guidelines must be considered as an adjunctive tool for decision but they are not substitute of the clinical judgement for the individual patient.
Perturbation calculation of thermodynamic density of states.
Brown, G; Schulthess, T C; Nicholson, D M; Eisenbach, M; Stocks, G M
2011-12-01
The density of states g(ε) is frequently used to calculate the temperature-dependent properties of a thermodynamic system. Here a derivation is given for calculating the warped density of states g*(ε) resulting from the addition of a perturbation. The method is validated for a classical Heisenberg model of bcc Fe, and the errors in the free energy are shown to be second order in the perturbation. Taking the perturbation to be the difference between a first-principles quantum-mechanical energy and a corresponding classical energy, this method can significantly reduce the computational effort required to calculate g(ε) for quantum systems using the Wang-Landau approach.
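Once g(ε), or the warped g*(ε), is in hand, temperature-dependent properties follow from the partition sum; a toy sketch for a binned density of states (the two-level system below is illustrative, not the Heisenberg model of the paper):

```python
import math

def partition_function(dos, energies, beta):
    """Z(beta) = sum over energy bins of g(eps) * exp(-beta * eps)."""
    return sum(g * math.exp(-beta * e) for g, e in zip(dos, energies))

def free_energy(dos, energies, beta):
    """F = -(1/beta) * ln Z, computed directly from the density of states."""
    return -math.log(partition_function(dos, energies, beta)) / beta

# toy two-level system: degeneracies [1, 1] at energies [0, 1]
f = free_energy([1.0, 1.0], [0.0, 1.0], 1.0)
```

This is why a one-time determination of g(ε), e.g. by Wang-Landau sampling, yields properties at all temperatures at once.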
Using Inverted Indices for Accelerating LINGO Calculations
DEFF Research Database (Denmark)
Kristensen, Thomas Greve; Nielsen, Jesper; Pedersen, Christian Nørgaard Storm
2011-01-01
The ever growing size of chemical databases calls for the development of novel methods for representing and comparing molecules. One such method called LINGO is based on fragmenting the SMILES string representation of molecules. Comparison of molecules can then be performed by calculating...... the Tanimoto coefficient which is called the LINGOsim when used on LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets which makes it possible to transform them into sparse fingerprints such that fingerprint data structures and algorithms can be used to accelerate...... queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialised hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices it is possible to calculate LINGOsim......
Using inverted indices for accelerating LINGO calculations.
Kristensen, Thomas G; Nielsen, Jesper; Pedersen, Christian N S
2011-03-28
The ever growing size of chemical databases calls for the development of novel methods for representing and comparing molecules. One such method called LINGO is based on fragmenting the SMILES string representation of molecules. Comparison of molecules can then be performed by calculating the Tanimoto coefficient, which is called LINGOsim when used on LINGO multisets. This paper introduces a verbose representation for storing LINGO multisets, which makes it possible to transform them into sparse fingerprints such that fingerprint data structures and algorithms can be used to accelerate queries. The previous best method for rapidly calculating the LINGOsim similarity matrix required specialized hardware to yield a significant speedup over existing methods. By representing LINGO multisets in the verbose representation and using inverted indices, it is possible to calculate LINGOsim similarity matrices roughly 2.6 times faster than existing methods without relying on specialized hardware.
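The LINGOsim idea can be sketched directly: fragment the SMILES string into overlapping q-character substrings and take the multiset Tanimoto coefficient. The paper's actual contribution, the inverted-index acceleration, is omitted here, and the ring-closure digit normalisation of the original LINGO method is also skipped for brevity:

```python
from collections import Counter

def lingos(smiles, q=4):
    """Fragment a SMILES string into its multiset of length-q substrings.
    (The original LINGO method also normalises ring-closure digits,
    omitted here for brevity.)"""
    return Counter(smiles[i:i + q] for i in range(len(smiles) - q + 1))

def lingosim(a, b):
    """Tanimoto coefficient on LINGO multisets: |min| / |max|."""
    intersection = sum((a & b).values())  # Counter & takes per-key minima
    union = sum((a | b).values())         # Counter | takes per-key maxima
    return intersection / union if union else 1.0

# acetic acid vs acetamide share 3 of 5 distinct 4-character LINGOs
similarity = lingosim(lingos("CC(=O)O"), lingos("CC(=O)N"))
```

`collections.Counter` already implements multiset intersection and union, which is what makes this a four-line similarity function.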
Automated one-loop calculations with GOSAM
Energy Technology Data Exchange (ETDEWEB)
Cullen, Gavin [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Greiner, Nicolas [Illinois Univ., Urbana-Champaign, IL (United States). Dept. of Physics; Max-Planck-Institut fuer Physik, Muenchen (Germany); Heinrich, Gudrun; Reiter, Thomas [Max-Planck-Institut fuer Physik, Muenchen (Germany); Luisoni, Gionata [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology; Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padua Univ. (Italy). Dipt. di Fisica; Ossola, Giovanni [New York City Univ., NY (United States). New York City College of Technology; New York City Univ., NY (United States). The Graduate School and University Center; Tramontano, Francesco [European Organization for Nuclear Research (CERN), Geneva (Switzerland)
2011-11-15
We present the program package GoSam which is designed for the automated calculation of one-loop amplitudes for multi-particle processes in renormalisable quantum field theories. The amplitudes, which are generated in terms of Feynman diagrams, can be reduced using either D-dimensional integrand-level decomposition or tensor reduction. GoSam can be used to calculate one-loop QCD and/or electroweak corrections to Standard Model processes and offers the flexibility to link model files for theories Beyond the Standard Model. A standard interface to programs calculating real radiation is also implemented. We demonstrate the flexibility of the program by presenting examples of processes with up to six external legs attached to the loop. (orig.)
Benchmarking calculations of excitonic couplings between bacteriochlorophylls
Kenny, Elise P
2015-01-01
Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. We also caution ...
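The point-dipole approximation the abstract calls simple but inaccurate is nonetheless the natural baseline; a sketch in arbitrary units (dipole moments and positions are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def point_dipole_coupling(mu1, mu2, r1, r2):
    """V = [mu1.mu2 - 3(mu1.rhat)(mu2.rhat)] / R^3, arbitrary units."""
    r = [b - a for a, b in zip(r1, r2)]
    dist = math.sqrt(dot(r, r))
    rhat = [x / dist for x in r]
    return (dot(mu1, mu2) - 3.0 * dot(mu1, rhat) * dot(mu2, rhat)) / dist ** 3

# parallel transition dipoles perpendicular to the separation vector
v_perp = point_dipole_coupling([0, 0, 1], [0, 0, 1], [0, 0, 0], [2, 0, 0])
# head-to-tail arrangement along the separation vector
v_line = point_dipole_coupling([1, 0, 0], [1, 0, 0], [0, 0, 0], [2, 0, 0])
```

The strong orientation dependence (the sign even flips between the two geometries above) is why the ~20% coordinate uncertainty discussed in the abstract propagates so directly into the couplings.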
Detailed Burnup Calculations for Research Reactors
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, F. [Centro Atomico Bariloche (CNEA), 8400 S. C. de Bariloche (Argentina)
2011-07-01
A general method (RRMCQ) has been developed by introducing a microscopic burnup scheme which uses the Monte Carlo calculated spatial power distribution of a research reactor core and a depletion code for burnup calculations as a basis for solving the nuclide material balance equations for each spatial region into which the system is divided. Continuous-energy cross-section libraries and the full 3D geometry of the system are input to the calculations. The resulting predictions for the system at successive burnup time steps are thus based on a calculation route where both geometry and cross-sections are accurately represented, without geometry simplifications and with continuous energy data. The main advantage of this method over the classical deterministic methods currently used is that RRMCQ is a direct 3D method without the limitations and errors introduced by the homogenization of geometry and condensation of energy in deterministic methods. The Monte Carlo and depletion codes adopted so far are the widely used MCNP5 and ORIGEN2 codes, but other codes can also be used. Using this method requires a well-known set of nuclear data for the isotopes involved in burnup chains, including burnable poisons, fission products and actinides. To fix the data to be included in this set, a study of the present status of nuclear data was performed as part of the development of the RRMCQ method. This study begins with a review of the available cross-section data for isotopes involved in burnup chains for research nuclear reactors. The main data needs for burnup calculations are neutron cross-sections, decay constants, branching ratios, fission energy and yields. The present work includes results of selected experimental benchmarks and conclusions about the sensitivity of different sets of cross-section data for burnup calculations, using some of the main available evaluated nuclear data files. Basically, the RRMCQ detailed burnup method includes four
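The nuclide material balance equations solved at each burnup step reduce, for a simple two-member decay chain without neutron flux, to the classic Bateman solution; a toy sketch (full depletion codes such as ORIGEN2 handle hundreds of coupled nuclides and add flux-dependent transmutation terms):

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for a chain A -> B -> (decay only).

    n1_0 is the initial amount of A; lam1, lam2 are the decay constants.
    Assumes lam1 != lam2.
    """
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

n1, n2 = bateman_two(1.0, 1.0, 2.0, 1.0)
```

In a burnup code these analytic pieces (or a matrix-exponential equivalent) are applied per region and per time step, with cross-section-weighted flux feeding the production and loss terms.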
Dose calculations for intakes of ore dust
Energy Technology Data Exchange (ETDEWEB)
O'Brien, R.S.
1998-08-01
This report describes a methodology for calculating the committed effective dose for mixtures of radionuclides, such as those which occur in natural radioactive ores and dusts. The formulae are derived from first principles, with the use of reasonable assumptions concerning the nature and behaviour of the radionuclide mixtures. The calculations are complicated because these 'ores' contain a range of particle sizes, have different degrees of solubility in blood and other body fluids, and also have different biokinetic clearance characteristics from the organs and tissues in the body. The naturally occurring radionuclides also tend to occur in series, i.e. one is produced by the radioactive decay of another 'parent' radionuclide. The formulae derived here can be used, in conjunction with a model such as LUDEP, for calculating the total dose resulting from inhalation and/or ingestion of a mixture of radionuclides, and also for deriving annual limits on intake and derived air concentrations for these mixtures. 15 refs., 14 tabs., 3 figs.
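At its outermost level, the committed effective dose for a mixture is a sum of intakes weighted by per-nuclide dose coefficients; all of the report's complexity (particle size, solubility, biokinetics, decay series) is buried inside those coefficients. A sketch with purely hypothetical coefficient values:

```python
def committed_dose_sv(intakes_bq, dose_coeff_sv_per_bq):
    """E = sum over nuclides of intake (Bq) * dose coefficient (Sv/Bq).

    Real coefficients depend on particle size, solubility class and
    intake route; the values used below are purely hypothetical.
    """
    return sum(intakes_bq[n] * dose_coeff_sv_per_bq[n] for n in intakes_bq)

dose = committed_dose_sv({"U-238": 100.0, "Th-230": 50.0},
                         {"U-238": 8.0e-6, "Th-230": 1.4e-5})
```

Annual limits on intake then follow by inverting the same sum against a dose limit.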
Numerical inductance calculations based on first principles.
Shatz, Lisa F; Christensen, Craig W
2014-01-01
A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration.
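The first-principles approach described can be illustrated with the Neumann formula for the mutual inductance of two coaxial circular loops, discretising the double line integral directly; plain Python stands in here for the symbolic/numeric environment the authors used, and the geometry is an assumed example:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def mutual_inductance(a, b, d, n=400):
    """Neumann formula M = (mu0/4pi) * oint oint dl1.dl2 / |r1 - r2|
    for two coaxial circular loops of radii a, b separated by d (metres)."""
    dphi = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        p1 = (i + 0.5) * dphi
        x1, y1 = a * math.cos(p1), a * math.sin(p1)
        dl1 = (-a * math.sin(p1) * dphi, a * math.cos(p1) * dphi)
        for j in range(n):
            p2 = (j + 0.5) * dphi
            x2, y2 = b * math.cos(p2), b * math.sin(p2)
            dl2 = (-b * math.sin(p2) * dphi, b * math.cos(p2) * dphi)
            dist = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + d * d)
            total += (dl1[0] * dl2[0] + dl1[1] * dl2[1]) / dist
    return MU0 / (4.0 * math.pi) * total

m = mutual_inductance(1.0, 1.0, 1.0)   # henries; ~0.49 uH for this geometry
```

For this separated geometry the periodic integrand is smooth, so the simple rectangle rule converges rapidly; self-inductance needs a finite wire radius to regularise the singular case.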
Challenges in Large Scale Quantum Mechanical Calculations
Ratcliff, Laura E; Huhs, Georg; Deutsch, Thierry; Masella, Michel; Genovese, Luigi
2016-01-01
During the past decades, quantum mechanical methods have undergone an amazing transition from pioneering investigations of experts into a wide range of practical applications, made by a vast community of researchers. First principles calculations of systems containing up to a few hundred atoms have become a standard in many branches of science. The sizes of the systems which can be simulated have increased even further during recent years, and quantum-mechanical calculations of systems up to many thousands of atoms are nowadays possible. This opens up new appealing possibilities, in particular for interdisciplinary work, bridging together communities of different needs and sensibilities. In this review we will present the current status of this topic, and will also give an outlook on the vast multitude of applications, challenges and opportunities stimulated by electronic structure calculations, making this field an important working tool and bringing together researchers of many different domains.
Cosmology calculations almost without general relativity
Jordan, T F
2003-01-01
The Friedmann equation can be derived for a Newtonian universe. Changing mass density to energy density gives exactly the Friedmann equation of general relativity. Accounting for work done by pressure then yields the two Einstein equations that govern the expansion of the universe. Descriptions and explanations of radiation pressure and vacuum pressure are added to complete a basic kit of cosmology tools. It provides a basis for teaching cosmology to undergraduates in a way that quickly equips them to do basic calculations. This is demonstrated with calculations involving: characteristics of the expansion for densities dominated by radiation, matter, or vacuum; the closeness of the density to the critical density; how much vacuum energy compared to matter energy is needed to make the expansion accelerate; and how little is needed to make it stop. Travel time and luminosity distance are calculated in terms of the redshift and the densities of matter and vacuum energy, using a scaled Friedmann equation with the...
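The kind of basic calculation the article has in mind can be sketched with the scaled Friedmann equation for a flat universe; the density parameters below are a common illustrative choice, not values from the article:

```python
import math

def age_h0_units(omega_m, omega_l, z=0.0, n=100000):
    """Cosmic time t * H0 up to redshift z for a flat universe:
    t = integral from 0 to a(z) of da / (a * sqrt(omega_m/a^3 + omega_l))."""
    a_end = 1.0 / (1.0 + z)
    total = 0.0
    for i in range(1, n + 1):
        a = a_end * (i - 0.5) / n          # midpoint rule
        total += 1.0 / (a * math.sqrt(omega_m / a ** 3 + omega_l))
    return total * a_end / n

# matter-only universe reproduces the textbook t0*H0 = 2/3;
# omega_m = 0.3, omega_l = 0.7 gives roughly 0.96
t_mixed = age_h0_units(0.3, 0.7)
```

Dividing by H0 (about 70 km/s/Mpc) converts these dimensionless ages into years, which is exactly the kind of quick estimate the teaching kit aims at.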
Parallel scalability of Hartree–Fock calculations
Energy Technology Data Exchange (ETDEWEB)
Chow, Edmond, E-mail: echow@cc.gatech.edu; Liu, Xing [School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0765 (United States); Smelyanskiy, Mikhail; Hammond, Jeff R. [Parallel Computing Lab, Intel Corporation, Santa Clara, California 95054-1549 (United States)
2015-03-14
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
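The density matrix purification mentioned above as an alternative to eigendecomposition can be illustrated with the McWeeny iteration P <- 3P^2 - 2P^3; this toy dense-matrix sketch omits the sparsity and parallel matrix multiplication that production codes rely on:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mcweeny_purify(P, iters=30):
    """Iterate P <- 3P^2 - 2P^3; eigenvalues above 1/2 flow to 1 and
    those below flow to 0, yielding an idempotent density matrix."""
    n = len(P)
    for _ in range(iters):
        P2 = mat_mul(P, P)
        P3 = mat_mul(P2, P)
        P = [[3.0 * P2[i][j] - 2.0 * P3[i][j] for j in range(n)]
             for i in range(n)]
    return P

# toy 2x2 starting guess with eigenvalues ~0.72 and ~0.28, trace 1
P = mcweeny_purify([[0.6, 0.2], [0.2, 0.4]])
```

Because the iteration is nothing but repeated matrix multiplication, its parallel performance is governed by network bandwidth, which is precisely the regime the paper analyses.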
Lagrange interpolation for the radiation shielding calculation
Isozumi, Y; Miyatake, H; Kato, T; Tosaki, M
2002-01-01
Based on some formulas of Lagrange interpolation derived in this paper, a computer program for table calculations has been prepared. Main features of the program are as follows: 1) the maximum degree of the polynomial in Lagrange interpolation is 10; 2) tables with both one variable and two variables can be applied; 3) logarithmic transformations of function and/or variable values can be included; and 4) tables with discontinuities and cusps can be applied. The program has been carefully tested using the data tables in the manual of shielding calculation for radiation facilities. For all available tables in the manual, calculations with the program have performed reasonably under conditions of 1) logarithmic transformation of both function and variable values and 2) degree 4 or 5 of the polynomial.
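The interpolation core of such a program is compact; a sketch including the log-log transformation option described above (the data points are illustrative, not from the shielding manual):

```python
import math

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def lagrange_loglog(xs, ys, x):
    """Interpolate with logarithmic transformation of both variable and
    function values, as is common for attenuation-type tables."""
    return math.exp(lagrange([math.log(v) for v in xs],
                             [math.log(v) for v in ys],
                             math.log(x)))
```

The log-log variant is what makes degree 4 or 5 adequate for shielding tables whose entries span many orders of magnitude.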
eQuilibrator--the biochemical thermodynamics calculator.
Flamholz, Avi; Noor, Elad; Bar-Even, Arren; Milo, Ron
2012-01-01
The laws of thermodynamics constrain the action of biochemical systems. However, thermodynamic data on biochemical compounds can be difficult to find and is cumbersome to perform calculations with manually. Even simple thermodynamic questions like 'how much Gibbs energy is released by ATP hydrolysis at pH 5?' are complicated excessively by the search for accurate data. To address this problem, eQuilibrator couples a comprehensive and accurate database of thermodynamic properties of biochemical compounds and reactions with a simple and powerful online search and calculation interface. The web interface to eQuilibrator (http://equilibrator.weizmann.ac.il) enables easy calculation of Gibbs energies of compounds and reactions given arbitrary pH, ionic strength and metabolite concentrations. The eQuilibrator code is open-source and all thermodynamic source data are freely downloadable in standard formats. Here we describe the database characteristics and implementation and demonstrate its use.
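The transformation at the core of such a calculator is the standard concentration adjustment dG = dG0' + RT ln Q; a sketch in which the dG0' value and concentrations are illustrative round numbers, not eQuilibrator output, and the pH and ionic-strength corrections eQuilibrator also performs are omitted:

```python
import math

R_KJ = 8.314462618e-3  # gas constant, kJ/(mol*K)

def delta_g(delta_g0_kj, product_conc_m, reactant_conc_m, temp_k=298.15):
    """dG = dG0' + RT * ln(Q), with Q built from concentrations in M."""
    q = 1.0
    for c in product_conc_m:
        q *= c
    for c in reactant_conc_m:
        q /= c
    return delta_g0_kj + R_KJ * temp_k * math.log(q)

# ATP -> ADP + Pi with illustrative numbers: an assumed dG0' of -31 kJ/mol
# and cellular-like concentrations [ADP] = 1 mM, [Pi] = 10 mM, [ATP] = 5 mM
dg = delta_g(-31.0, [0.001, 0.01], [0.005])
```

The concentration term contributes roughly -15 kJ/mol here, which is why physiological ATP hydrolysis releases substantially more energy than the standard-state value suggests.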
Daylight calculations using constant luminance curves
Energy Technology Data Exchange (ETDEWEB)
Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda
2005-02-01
This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed in the Atmospheric Sciences Research Center (ASRC) of the State University of New York by Richard Perez et al. Work with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis to establish conclusions concerning topics related to energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with the method that uses the daylight factor. (author)
Energy Technology Data Exchange (ETDEWEB)
Kariyawasam, S. [TransCanada PipeLines Ltd., Calgary, AB (Canada); Weir, D. [Enbridge Pipelines Inc., Calgary, AB (Canada)] (comps.)
2009-07-01
Risk assessments and risk analysis are system-wide activities that include site-specific risk and reliability-based decision-making, implementation, and monitoring. This working group discussed the risk management process in the pipeline industry, including reliability-based integrity management and risk control processes. Attendees discussed reliability-based decision support and performance measurements designed to support corporate risk management policies. New developments and technologies designed to optimize risk management procedures were also presented. The group was divided into 3 sessions: (1) current practice, strengths and limitations of system-wide risk assessments for facility assets; (2) accounting for uncertainties to assure safety; and (3) reliability-based excavation repair criteria and removing potentially unsafe corrosion defects. Presentations of risk assessment procedures used at various companies were given. The role of regulators, best practices, and effective networking environments in ensuring the success of risk assessment policies was discussed. Risk assessment models were also reviewed.
Energy Technology Data Exchange (ETDEWEB)
Laurier, D.; Monchaux, G.; Tirmarche, M. [Institute for Radiological Protection and Nuclear Safety, 92 - Fontenay aux Roses (France); Darby, S. [Cancer Research UK, Oxford (United Kingdom); Cardis, E. [International Agency for Research on Cancer, 69 - Lyon (France); Binks, K. [Westlakes Scientific Consulting Ltd, Moor Row (United Kingdom); Hofmann, W. [Salzburg Univ. (Austria); Muirhead, C. [Health Protection Agency, Chilton (United Kingdom)
2006-07-01
The Alpha-Risk research project is being conducted within the Sixth European Framework Programme (EC-FP6, 2005-2008). It aims to improve the quantification of risks associated with multiple exposures, taking into account the contribution of different radionuclides and external exposure using specific organ dose calculations. The Alpha-Risk Consortium involves 18 partners from 9 countries and is coordinated by the IRSN. Its composition allows a multidisciplinary collaboration between researchers in epidemiology, dosimetry, statistics, modelling and risk assessment. Alpha-Risk brings together major epidemiological studies in Europe, which are able to evaluate long-term health effects of internal exposure from radionuclides. It includes large cohort and case-control studies, with accurate registration of individual annual exposures: uranium miner studies, studies on lung cancer and indoor radon exposure, and studies of lung cancer and leukaemia among nuclear workers exposed to transuranic nuclides (mainly uranium and plutonium), for whom organ doses will be reconstructed individually. The contribution of experts in dosimetry will allow the calculation of organ doses in the presence of multiple exposures (radon decay products, uranium dust and external gamma exposure). Expression of the risk per unit organ dose will make it possible to compare results with those from other populations exposed to external radiation. The multidisciplinary approach of Alpha-Risk promotes the development of coherent and improved methodological approaches regarding risk modelling. A specific work-package is dedicated to the integration of results and their use for risk assessment, especially for radon. Alpha-Risk will contribute to a better understanding of long-term health risks following chronic low doses from internal exposures. The project also has great potential to help resolve major public health concerns about the effects of low and/or protracted exposures, especially
Risk cross sections and their application to risk estimation in the galactic cosmic-ray environment
Curtis, S. B.; Nealy, J. E.; Wilson, J. W.; Chatterjee, A. (Principal Investigator)
1995-01-01
Radiation risk cross sections (i.e. risks per unit particle fluence) are discussed in the context of estimating the risk of radiation-induced cancer on long-term space flights from the galactic cosmic radiation outside the confines of the earth's magnetic field. Such quantities are useful for handling effects not seen after low-LET radiation. Since appropriate cross-section functions for cancer induction for each particle species are not yet available, the conventional quality factor is used as an approximation to obtain numerical results for risks of excess cancer mortality. Risks are obtained for seven of the most radiosensitive organs as determined by the ICRP [stomach, colon, lung, bone marrow (BFO), bladder, esophagus and breast], beneath 10 g/cm² aluminum shielding at solar minimum. Spectra are obtained for excess relative risk for each cancer per LET interval by calculating the average fluence-LET spectrum for the organ and converting to risk by multiplying by a factor proportional to R_γ·L·Q(L) before integrating over L, the unrestricted LET. Here R_γ is the risk coefficient for low-LET radiation (excess relative mortality per Sv) for the particular organ in question. The total risks of excess cancer mortality obtained are 1.3 and 1.1% for female and male crew, respectively, for a 1-year exposure at solar minimum. Uncertainties in these values are estimated to range between factors of 4 and 15 and are dominated by the biological uncertainties in the risk coefficients for low-LET radiation and in the LET (or energy) dependence of the risk cross sections (as approximated by the quality factor). The direct substitution of appropriate risk cross sections will eventually circumvent entirely the need to calculate, measure or use absorbed dose, equivalent dose and quality factor for such a high-energy charged-particle environment.
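The conversion described, from a fluence-LET spectrum to excess relative risk via a factor proportional to R_gamma, L and Q(L), can be sketched as follows. The ICRP-60 quality factor is used, the fluence bin and risk coefficient below are made-up illustrations, and the constant 1.602e-9 converts (keV/um) x (cm^-2) to Gy in unit-density tissue:

```python
import math

def icrp60_quality_factor(let_kev_um):
    """ICRP-60 Q(L) as a function of unrestricted LET in keV/um."""
    if let_kev_um < 10.0:
        return 1.0
    if let_kev_um <= 100.0:
        return 0.32 * let_kev_um - 2.2
    return 300.0 / math.sqrt(let_kev_um)

def excess_relative_risk(fluence_spectrum, r_gamma):
    """Risk = R_gamma * sum over LET bins of phi * L * Q(L) * k, where
    k = 1.602e-9 converts keV/um * cm^-2 to Gy for unit-density tissue."""
    dose_equiv_sv = sum(phi * let * icrp60_quality_factor(let) * 1.602e-9
                        for let, phi in fluence_spectrum)
    return r_gamma * dose_equiv_sv

# one made-up bin: LET 100 keV/um, fluence 1e6 /cm^2, R_gamma 0.05/Sv
risk = excess_relative_risk([(100.0, 1.0e6)], 0.05)
```

Replacing `let * icrp60_quality_factor(let) * r_gamma` with a measured per-species risk cross section is exactly the substitution the abstract says would remove dose and quality factor from the calculation entirely.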
Evaluating Shielding Effectiveness for Reducing Space Radiation Cancer Risks
Cucinotta, Francis A.; Kim, Myung-Hee Y.; Ren, Lei
2007-01-01
We discuss calculations of probability distribution functions (PDFs) representing uncertainties in projecting fatal cancer risk from galactic cosmic rays (GCR) and solar particle events (SPE). The PDFs are used in significance tests of the effectiveness of potential radiation shielding approaches. Uncertainties in risk coefficients determined from epidemiology data, dose and dose-rate reduction factors, quality factors, and physics models of radiation environments are considered in models of cancer risk PDFs. Competing mortality risks and functional correlations in radiation quality factor uncertainties are treated in the calculations. We show that the cancer risk uncertainty, defined as the ratio of the 95% confidence level (CL) to the point estimate, is about 4-fold for lunar and Mars mission risk projections. For short-stay lunar missions, risks may be reduced by shielding, especially for carbon composite structures with high hydrogen content. In contrast, for long-duration lunar (>180 d) or Mars missions, GCR risks may exceed radiation risk limits, with 95% CLs exceeding 10% fatal risk for males and females on a Mars mission. For reducing GCR cancer risks, shielding materials are marginally effective because of the penetrating nature of GCR and secondary radiation produced in tissue by relativistic particles. At the present time, polyethylene or carbon composite shielding cannot be shown to significantly reduce risk compared to aluminum shielding, based on a significance test that accounts for radiobiology uncertainties in GCR risk projection.
Calculation of Radiation Damage in SLAC Targets
Energy Technology Data Exchange (ETDEWEB)
Wirth, B D; Monasterio, P; Stein, W
2008-04-03
Ti-6Al-4V alloys are being considered as a positron-producing target in the Next Linear Collider, with an incident photon beam and operating temperatures between room temperature and 300 C. Calculations of displacement damage in Ti-6Al-4V alloys have been performed by combining high-energy particle FLUKA simulations with SPECTER calculations of the displacement cross section from the resulting energy-dependent neutron flux, plus the displacements calculated from the Lindhard model from the resulting energy-dependent ion flux. The radiation damage calculations have investigated two cases: the damage produced in a Ti-6Al-4V SLAC positron target, where the irradiation source is a photon beam with energies between 5 and 11 MeV, and the radiation damage dose in displacements per atom (dpa) for a mono-energetic 196 MeV proton irradiation experiment performed at Brookhaven National Laboratory (BLIP experiment). The calculated damage rate is 0.8 dpa/year for the Ti-6Al-4V SLAC photon irradiation target, and the total damage exposure is 0.06 dpa in the BLIP irradiation experiment. In both cases, the displacements are predominantly (~80%) produced by recoiling ions (atomic nuclei) from photo-nuclear or proton-nuclear collisions, respectively. Approximately 25% of the displacement damage results from neutrons in both cases. Irradiation effects studies in titanium alloys have shown substantial increases in the yield and ultimate strength of up to 500 MPa and a corresponding decrease in uniform ductility for neutron and high-energy proton irradiation at temperatures between 40 and 300 C. Although the data are limited, there is an indication that the strength increases will saturate at doses on the order of a few dpa. Microstructural investigations indicate that the dominant features responsible for the strength increases were dense precipitation of a β (body-centered cubic) phase precipitate along with a high number density
Multiple Sclerosis Increases Fracture Risk: A Meta-Analysis
Directory of Open Access Journals (Sweden)
Guixian Dong
2015-01-01
Purpose. The association between multiple sclerosis (MS) and fracture risk has been reported, but results of previous studies remain controversial and ambiguous. To assess the association between MS and fracture risk, a meta-analysis was performed. Method. Based on comprehensive searches of PubMed, Embase, and Web of Science, we identified outcome data from all articles estimating the association between MS and fracture risk. The pooled risk ratios (RRs) with 95% confidence intervals (CIs) were calculated. Results. A significant association between MS and fracture risk was found. This result remained statistically significant when the adjusted RRs were combined. Subgroup analysis stratified by the site of fracture suggested significant associations between MS and tibia, femur, hip, pelvis, vertebrae, and humerus fracture risk. In the subgroup analysis by gender, female MS patients had increased fracture risk. When stratified by history of drug use, use of antidepressants, hypnotics/anxiolytics, anticonvulsants, and glucocorticoids increased the risk of fracture in MS patients. Conclusions. This meta-analysis demonstrated that MS was significantly associated with fracture risk.
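The pooling of risk ratios described in this abstract follows standard fixed-effect inverse-variance meta-analysis. A minimal sketch, with made-up study estimates (the RRs and standard errors below are illustrative, not data from the paper):

```python
import math

def pooled_rr(rrs, ses):
    """Fixed-effect inverse-variance pooling of risk ratios.

    rrs: per-study risk ratios; ses: standard errors of log(RR).
    Returns (pooled RR, 95% CI lower bound, 95% CI upper bound).
    """
    logs = [math.log(r) for r in rrs]
    weights = [1.0 / se ** 2 for se in ses]          # precision weights
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    lower = math.exp(pooled_log - 1.96 * pooled_se)
    upper = math.exp(pooled_log + 1.96 * pooled_se)
    return math.exp(pooled_log), lower, upper

# Hypothetical per-study estimates, not taken from the paper:
rr, lower, upper = pooled_rr([1.8, 2.1, 1.5], [0.20, 0.25, 0.30])
```

A random-effects model (e.g. DerSimonian-Laird) would widen the interval when between-study heterogeneity is present, which the abstract's "controversial and ambiguous" prior results suggest.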
Reichhardt, Tony
1984-04-01
Three researchers from the Energy and Environmental Policy Center at Harvard University have come up with a new method of calculating the risk from contaminants in drinking water, one that they believe takes into account some of the uncertainties in pronouncing water safe or dangerous to drink. The new method concentrates on the risk of cancer, which authors Edmund Crouch, Richard Wilson, and Lauren Zeise believe has not been properly considered in establishing drinking water standards.Writing in the December 1983 issue of Water Resources Research, the authors state that “current [drinking water] standards for a given chemical or class of chemicals do not account for the presence of other pollutants” that could combine to create dangerous substances. According to Wilson, “Over a hundred industrial pollutants and chlorination byproducts have been found in various samples of drinking water, some of which are known carcinogens, others suspected carcinogens.” The same chlorine that solves one major health problem—the threat of bacterial disease—can thus contribute to another, according to the authors, by increasing the long-term risk of cancer. The largest risks are due to halomethanes such as chloroform and bromoform, produced as chlorine reacts with organic matter in drinking water.
Risk of atrial fibrillation in diabetes mellitus
DEFF Research Database (Denmark)
Pallisgaard, Jannik L.; Schjerning, Anne-Marie; Lindhardt, Tommi B.
2016-01-01
AIM: Diabetes has been associated with atrial fibrillation but the current evidence is conflicting. In particular, knowledge regarding young diabetes patients and the risk of developing atrial fibrillation is sparse. The aim of our study was to investigate the risk of atrial fibrillation in patients...... with diabetes compared to the background population in Denmark. METHODS AND RESULTS: Through Danish nationwide registries we included persons above 18 years of age and without prior atrial fibrillation and/or diabetes from 1996 to 2012. The study cohort was divided into a background population without diabetes...... and a diabetes group. The absolute risk of developing atrial fibrillation was calculated and Poisson regression models adjusted for sex, age and comorbidities were used to calculate incidence rate ratios of atrial fibrillation. The total study cohort included 5,081,087 persons, 4,827,713 (95%) in the background......
Precise calculations of the deuteron quadrupole moment
Energy Technology Data Exchange (ETDEWEB)
Gross, Franz L. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)
2016-06-01
Recently, two calculations of the deuteron quadrupole moment have given predictions that agree with the measured value to within 1%, resolving a long-standing discrepancy. One of these uses the covariant spectator theory (CST) and the other chiral effective field theory (cEFT). In this talk I will first briefly review the foundations and history of the CST, and then compare these two calculations with emphasis on how the same physical processes are being described using very different language. The comparison of the two methods gives new insights into the dynamics of the low-energy NN interaction.
Local orbitals in electron scattering calculations*
Winstead, Carl L.; McKoy, Vincent
2016-05-01
We examine the use of local orbitals to improve the scaling of calculations that incorporate target polarization in a description of low-energy electron-molecule scattering. After discussing the improved scaling that results, we consider the results of a test calculation that treats scattering from a two-molecule system using both local and delocalized orbitals. Initial results are promising. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.
Numerical calculation of impurity charge state distributions
Energy Technology Data Exchange (ETDEWEB)
Crume, E. C.; Arnurius, D. E.
1977-09-01
The numerical calculation of impurity charge state distributions using the computer program IMPDYN is discussed. The time-dependent corona atomic physics model used in the calculations is reviewed, and general and specific treatments of electron impact ionization and recombination are referenced. The complete program and two examples relating to tokamak plasmas are given on a microfiche so that a user may verify that his version of the program is working properly. In the discussion of the examples, the corona steady-state approximation is shown to have significant defects when the plasma environment, particularly the electron temperature, is changing rapidly.
Idiot savant calendrical calculators: maths or memory?
O'Connor, N; Hermelin, B
1984-11-01
Eight idiot savant calendrical calculators were tested on dates in the years 1963, 1973, 1983, 1986 and 1993. The study was carried out in 1983. Speeds of correct response were minimal in 1983 and increased markedly into the past and the future. The response time increase was matched by an increase in errors. Speeds of response were uncorrelated with measured IQ, but the numbers were insufficient to justify any inference in terms of IQ-independence. Results are interpreted as showing that memory alone is inadequate to explain the calendrical calculating performance of the idiot savant subjects.
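The skill probed here, naming the weekday of an arbitrary date, can be reproduced algorithmically with Zeller's congruence. A sketch for comparison purposes only; nothing in the study suggests the subjects used this method:

```python
from datetime import date

def day_of_week(y, m, d):
    """Zeller's congruence: returns 0=Saturday, 1=Sunday, ..., 6=Friday."""
    if m < 3:               # January/February count as months 13/14 of the prior year
        m += 12
        y -= 1
    K, J = y % 100, y // 100
    return (d + (13 * (m + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

names = ["Saturday", "Sunday", "Monday", "Tuesday",
         "Wednesday", "Thursday", "Friday"]
```

Cross-checking against `datetime` (where Monday is 0) is a convenient sanity test, e.g. `names[day_of_week(1983, 11, 30)]` for one of the study years.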
Calculated Electron Fluxes at Airplane Altitudes
Schaefer, R K; Stanev, T
1993-01-01
A precision measurement of atmospheric electron fluxes has been performed on a Japanese commercial airliner (Enomoto et al., 1991). We have performed a Monte Carlo calculation of the cosmic-ray secondary electron fluxes expected in this experiment. The Monte Carlo uses the hadronic portion of our neutrino flux cascade program combined with the electromagnetic cascade portion of the CERN library program GEANT. Our results give good agreement with the data, provided we boost the overall normalization of the primary cosmic-ray flux by 12% over the normalization used in the neutrino flux calculation.
Program Calculates Power Demands Of Electronic Designs
Cox, Brian
1995-01-01
CURRENT computer program calculates power requirements of electronic designs. For given design, CURRENT reads in applicable parts-list file and file containing current required for each part. Program also calculates power required for circuit at supply potentials of 5.5, 5.0, and 4.5 volts. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19590). PC version of program (NPO-19111).
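CURRENT itself was written in AWK and is not reproduced here; the calculation it performs (summing per-part supply currents and evaluating power at 5.5, 5.0, and 4.5 V) can be sketched as follows, with a made-up parts list:

```python
# Hypothetical parts list: part designator -> worst-case current draw in amperes.
# (Illustrative values; CURRENT reads these from a parts-list file.)
parts = {"U1_cpu": 0.150, "U2_sram": 0.040, "U3_driver": 0.085}

def power_at(supply_volts, parts):
    """Total power (watts) = supply voltage x sum of part currents."""
    return supply_volts * sum(parts.values())

for v in (5.5, 5.0, 4.5):   # the three supply potentials CURRENT reports
    print(f"{v:.1f} V: {power_at(v, parts):.3f} W")
```

Evaluating at the tolerance extremes (5.5 V and 4.5 V) as well as nominal 5.0 V brackets the worst-case supply power.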
Calculated optical absorption of different perovskite phases
DEFF Research Database (Denmark)
Castelli, Ivano Eligio; Thygesen, Kristian Sommer; Jacobsen, Karsten Wedel
2015-01-01
We present calculations of the optical properties of a set of around 80 oxides, oxynitrides, and organometal halide cubic and layered perovskites (Ruddlesden-Popper and Dion-Jacobson phases) with a bandgap in the visible part of the solar spectrum. The calculations show that for different classes...... of perovskites the solar light absorption efficiency varies greatly depending not only on bandgap size and character (direct/indirect) but also on the dipole matrix elements. The oxides exhibit generally a fairly weak absorption efficiency due to indirect bandgaps while the most efficient absorbers are found...... in the classes of oxynitride and organometal halide perovskites with strong direct transitions....
Relaxation Method For Calculating Quantum Entanglement
Tucci, R R
2001-01-01
In a previous paper, we showed how entanglement of formation can be defined as a minimum of the quantum conditional mutual information (a.k.a. quantum conditional information transmission). In classical information theory, the Arimoto-Blahut method is one of the preferred methods for calculating extrema of mutual information. We present a new method akin to the Arimoto-Blahut method for calculating entanglement of formation. We also present several examples computed with a computer program called Causa Comun that implements the ideas of this paper.
DFT calculations with the exact functional
Burke, Kieron
2014-03-01
I will discuss several works in which we calculate the exact exchange-correlation functional of density functional theory, mostly using the density-matrix renormalization group method invented by Steve White, our collaborator. We demonstrate that a Mott-Hubbard insulator is a band metal. We also perform Kohn-Sham DFT calculations with the exact functional and prove that a simple algorithm always converges. But we find convergence becomes harder as correlations get stronger. An example from transport through molecular wires may also be discussed. Work supported by DOE grant DE-SC008696.
Calculating reliability measures for ordinal data.
Gamsu, C V
1986-11-01
Establishing the reliability of measures taken by judges is important in both clinical and research work. Calculating the statistic of choice, the kappa coefficient, unfortunately is not a particularly quick and simple procedure. Two much-needed practical tools have been developed to overcome these difficulties: a comprehensive and easily understood guide to the manual calculation of the most complex form of the kappa coefficient, weighted kappa for ordinal data, has been written; and a computer program to run under CP/M, PC-DOS and MS-DOS has been developed. With simple modification the program will also run on a Sinclair Spectrum home computer.
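The weighted kappa computation that the guide and the CP/M-era program address can be sketched in a few lines today. A minimal version using disagreement weights, linear by default (the confusion matrix below is illustrative):

```python
def weighted_kappa(conf, weight="linear"):
    """Weighted kappa from a k x k confusion matrix of two raters' counts.

    Disagreement weights: w[i][j] = |i - j| (linear) or (i - j)**2 (quadratic).
    kappa_w = 1 - (observed weighted disagreement) / (chance-expected weighted disagreement)
    """
    k = len(conf)
    n = sum(sum(row) for row in conf)
    rows = [sum(conf[i][j] for j in range(k)) for i in range(k)]  # rater 1 marginals
    cols = [sum(conf[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals

    def w(i, j):
        return abs(i - j) if weight == "linear" else (i - j) ** 2

    observed = sum(w(i, j) * conf[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * rows[i] * cols[j] / n
                   for i in range(k) for j in range(k))
    return 1.0 - observed / expected

# Perfect agreement (all counts on the diagonal) yields kappa = 1.0:
perfect = [[10, 0, 0], [0, 8, 0], [0, 0, 6]]
```

Linear weights penalize a one-category disagreement half as much as a two-category one; quadratic weights penalize distant disagreements more heavily, which is often preferred for ordinal scales.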
Improving on calculation of martensitic phenomenological theory
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
Taking as an example the martensitic transformation from DO3 to 18R in a Cu-14.2Al-4.3Ni alloy, and following the principle that an invariant habit plane can be obtained by self-accommodation between variants with twin relationships, the displacement vector is used to calculate the volume fractions of the two twin-related variants in the martensitic transformation, the habit-plane indices, and the orientation relationships between martensite and austenite after the phase transformation. Because no additional rotation matrices need to be considered and mirror-symmetry operations are used, the calculation process is simple and the results are accurate.
Transmission pipeline calculations and simulations manual
Menon, E Shashi
2014-01-01
Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f
Pumping slots: Coupling impedance calculations and estimates
Energy Technology Data Exchange (ETDEWEB)
Kurennoy, S.
1993-08-01
Coupling impedances of small pumping holes in vacuum-chamber walls have been calculated at low frequencies, i.e., for wavelengths large compared to a typical hole size, in terms of the electric and magnetic polarizabilities of the hole. The polarizabilities can be found by solving an electro- or magnetostatic problem and are known analytically for the case of an elliptic hole in a thin wall. The present paper studies the case of pumping slots. Using results of numerical calculations and analytical approximations of polarizabilities, we give formulae for practically important estimates of the slot contribution to low-frequency coupling impedances.
Necessity of Exact Calculation for Transition Probability
Institute of Scientific and Technical Information of China (English)
LIU Fu-Sui; CHEN Wan-Fang
2003-01-01
This paper shows that exact calculation of the transition probability can make some systems deviate seriously from the Fermi golden rule. It also shows that the corresponding exact calculation of the phonon-induced hopping rate for deuterons in the Pd-D system with many-body electron screening, proposed by Ichimaru, can explain the experimental facts observed in the Pd-D system, and predicts that perfection and low dimensionality of the Pd lattice are very important for the phonon-induced hopping rate enhancement in the Pd-D system.
Risk analysis of analytical validations by probabilistic modification of FMEA
DEFF Research Database (Denmark)
Barends, D.M.; Oldenhof, M.T.; Vredenbregt, M.J.
2012-01-01
Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling not only detecting technical risks, but also risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection...... and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequency and maintaining the categorical scoring...... of undetected failure mode(s) can be estimated quantitatively, for each individual failure mode, for a set of failure modes, and the full analytical procedure....
Stress Analysis in Managing the Region’s Budget Risks
Directory of Open Access Journals (Sweden)
Natalya Pavlovna Pazdnikova
2014-09-01
The article addresses the implementation of budget risk management methods in the practices of governmental authorities. Drawing on the example of a particular region, the article aims to demonstrate possible methods of budget risk management. The authors refine the existing approaches to the notion of risk in its relation to the budget system by introducing the notion of "budget risk." Here the focus is the risk that budget spending will not be executed in full, which causes underfunding of territories and a decrease in quality of life in the region. The authors have particularized the classification of budget risks and grouped together the criteria and factors which significantly influence the assessment and choice of method to manage budget risks. They hypothesize that budget risk is a financial risk; therefore, the methods of financial risk management can be applied to budget risk management. The authors suggest a methodological approach to risk assessment based on correlation and regression analysis of program financing. The application of Kendall's rank correlation coefficient allowed the authors to assess the efficiency of budget spending on the implementation of state programs in Perm Krai. Two clusters, "Nature management and infrastructure" and "Public security," turned out to be in the zone of high budget risk. The method of stress analysis, which consists in calculating Value at Risk (VaR), was applied to budget risks that in terms of probability are classified as critical. In order to assess risk as a probability rate, the amount of the Perm Krai budget deficit was calculated as a variable induced from budget revenues and spending. The results demonstrate that contemporary management of public resources in the regions calls for the implementation of new, higher-quality management tools, and budget risk management is one of them.
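The VaR stress-analysis step described above can be illustrated with a historical-simulation sketch; the deficit figures and the quantile convention below are assumptions for illustration, not values or methods taken from the study:

```python
def value_at_risk(outcomes, confidence=0.95):
    """Historical-simulation VaR: the loss level not exceeded with the given
    confidence, estimated as an empirical quantile of observed losses."""
    losses = sorted(outcomes)
    idx = int(confidence * len(losses)) - 1   # order statistic at the quantile
    return losses[max(idx, 0)]

# Hypothetical annual budget deficits (billion rubles), not data from the study:
deficits = [1.2, 0.8, 2.5, 1.9, 3.1, 0.5, 2.2, 1.4, 2.8, 1.1,
            0.9, 1.7, 2.0, 1.3, 2.6, 0.7, 1.8, 2.3, 1.5, 3.0]
var95 = value_at_risk(deficits, 0.95)
```

A parametric (variance-covariance) VaR would instead fit a distribution to the deficit series; the empirical version above makes no distributional assumption but needs enough observations in the tail.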
Calculation of U-value for Concrete Element
DEFF Research Database (Denmark)
Rose, Jørgen
1997-01-01
This report is a U-value calculation of a typical concrete element used in industrial buildings. The calculations are performed using a 2-dimensional finite difference calculation programme.
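The report's calculation is 2-dimensional; as a point of comparison, the standard 1-dimensional series-resistance U-value, which ignores the thermal bridges a 2D model captures, can be sketched as follows (the layer build-up below is hypothetical, not the element from the report):

```python
def u_value(layers, r_si=0.13, r_se=0.04):
    """1D U-value (W/m2K): reciprocal of the total thermal resistance.

    layers: list of (thickness_m, conductivity_W_per_mK) tuples.
    r_si, r_se: internal/external surface resistances (m2K/W), here the
    common wall values 0.13 and 0.04.
    """
    r_total = r_si + r_se + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

# Hypothetical sandwich element: concrete / mineral wool / concrete
layers = [(0.070, 1.8), (0.150, 0.037), (0.070, 1.8)]
u = u_value(layers)
```

A 2D finite-difference model of the same element would return a somewhat higher U-value wherever ribs or ties bridge the insulation layer.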
Loss of partner and suicide risks among oldest old
DEFF Research Database (Denmark)
Erlangsen, Annette; Jeune, Bernard; Bille-Brahe, Unni
2004-01-01
the impact that loss of a partner has on the suicide risks of the oldest old (80+) compared to younger age groups. SUBJECTS: the entire Danish population aged 50 during 1994-1998 (n = 1,978,527). METHODS: we applied survival analysis to calculate the changes in relative risk of suicide after a loss by using...
Influence of Perinatal Risk Factors on Premature Labor Outcome
Directory of Open Access Journals (Sweden)
Agamurad A. Orazmuradov
2016-09-01
In this article, for the first time, the problem of premature labor (PL) is considered from the standpoint of the concept of perinatal obstetric risk. The obtained results show that the optimal choice of the mode of delivery must be based on gestational age and perinatal risk (PR) factors, with calculation of their intrapartum gain (IG).
Energy Technology Data Exchange (ETDEWEB)
Jannik, Tim [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Stagich, Brooke [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-08-28
The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their updated “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides PRGs for radionuclides that are used as a screening tool at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) sites. These risk-based PRGs establish concentration limits under specific exposure scenarios. The purpose of this verification study is to determine that the calculator has no inherent numerical problems with obtaining solutions, as well as to ensure that the equations are programmed correctly. There are 167 equations used in the calculator. To verify the calculator, all equations for each of seven receptor types (resident, construction worker, outdoor and indoor worker, recreator, farmer, and composite worker) were hand calculated using the default parameters. The same four radionuclides (Am-241, Co-60, H-3, and Pu-238) were used for each calculation for consistency throughout.
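The hand-calculation verification described above amounts to re-evaluating each programmed equation with default inputs and comparing against the calculator's output. A sketch using a generic, simplified risk-based screening form (this is not one of the calculator's 167 actual equations, and all inputs below are hypothetical):

```python
def prg(target_risk, slope_factor, intake):
    """Simplified risk-based screening concentration:
    PRG = target risk / (slope factor x lifetime intake).

    Generic illustrative form only; the EPA calculator's actual equations
    carry many more receptor-specific exposure parameters.
    """
    return target_risk / (slope_factor * intake)

# Hypothetical inputs: target risk 1e-6, slope factor 2e-8 risk/pCi,
# lifetime intake 100 pCi/yr over 26 yr:
value = prg(1e-6, 2e-8, 100 * 26)
```

Verification then reduces to asserting that the independently hand-computed value matches the calculator's programmed result for the same inputs.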
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
An improved form of the calculation formula for the activities of the components in binary liquid and solid alloys has been derived, based on the free-volume theory (taking excess entropy into account) and Miedema's model for calculating the formation heat of binary alloys. A calculation method for the excess thermodynamic functions of binary alloys, together with formulas for the integral and partial molar excess properties of solid ordered or disordered binary alloys, has been developed. The calculated results are in good agreement with the experimental values.
Identifying and Managing Risk.
Abraham, Janice M.
1999-01-01
The role of the college or university chief financial officer in institutional risk management is (1) to identify risk (physical, casualty, fiscal, business, reputational, workplace safety, legal liability, employment practices, general liability), (2) to develop a campus plan to reduce and control risk, (3) to transfer risk, and (4) to track and…
DEFF Research Database (Denmark)
Zichella, Giulio; Reichstein, Toke
to choose risk vis-à-vis certainty. Drawing on prospect theory, we formulate hypotheses about the greater likelihood that entrepreneurs (compared to others) will choose risk immediately after a positive gain, but will shy away from risk compared to others as the degree of risk increases. The hypotheses...
Engineering calculations in radiative heat transfer
Gray, W A; Hopkins, D W
1974-01-01
Engineering Calculations in Radiative Heat Transfer is a six-chapter book that first explains the basic principles of thermal radiation and direct radiative transfer. Total exchange of radiation within an enclosure containing an absorbing or non-absorbing medium is then described. Subsequent chapters detail the radiative heat transfer applications and measurement of radiation and temperature.
Net analyte signal calculation for multivariate calibration
Ferre, J.; Faber, N.M.
2003-01-01
A unifying framework for calibration and prediction in multivariate calibration is shown based on the concept of the net analyte signal (NAS). From this perspective, the calibration step can be regarded as the calculation of a net sensitivity vector, whose length is the amount of net signal when the
Towards the exact calculation of medium nuclei
Energy Technology Data Exchange (ETDEWEB)
Gandolfi, Stefano [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carlson, Joseph Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lonardoni, Diego [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wang, Xiaobao [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-12-19
The prediction of the structure of light and medium nuclei is crucial to test our knowledge of nuclear interactions. The calculation of nuclei from two- and three-nucleon interactions obtained from first principles is, however, one of the most challenging problems in many-body nuclear physics.
Complex Kohn calculations on an overset grid
Greenman, Loren; Lucchese, Robert; McCurdy, C. William
2016-05-01
An implementation of the overset grid method for complex Kohn scattering calculations is presented, along with static exchange calculations of electron-molecule scattering for small molecules including methane. The overset grid method uses multiple numerical grids, for instance Finite Element Method - Discrete Variable Representation (FEM-DVR) grids, expanded radially around multiple centers (corresponding to the individual atoms in each molecule as well as the center of mass of the molecule). The use of this flexible grid allows the complex angular dependence of the wavefunctions near the atomic centers to be well described, but also allows scattering wavefunctions that oscillate rapidly at large distances to be accurately represented. Additionally, due to the use of multiple grids (and also grid shells), the method is easily parallelizable. The method has been implemented in ePolyscat, a multipurpose suite of programs for general molecular scattering calculations. It is interfaced with a number of quantum chemistry programs (including MolPro, Gaussian, GAMESS, and Columbus), from which it can read molecular orbitals and wavefunctions obtained using standard computational chemistry methods. The preliminary static exchange calculations serve as a test of the method's applicability.
Calculation of Nucleon Electromagnetic Form Factors
Renner, D B; Dolgov, D S; Eicker, N; Lippert, T; Negele, J W; Pochinsky, A V; Schilling, K; Lippert, Th.
2002-01-01
The formalism is developed to express nucleon matrix elements of the electromagnetic current in terms of form factors consistent with the translational, rotational, and parity symmetries of a cubic lattice. We calculate the number of these form factors and show how appropriate linear combinations approach the continuum limit.
Calculating Free Energies Using Average Force
Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)
2001-01-01
A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
Calculating Traffic based on Road Sensor Data
Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Sikora, Monika
2014-01-01
Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for la
Computational chemistry: Making a bad calculation
Winter, Arthur
2015-06-01
Computations of the energetics and mechanism of the Morita-Baylis-Hillman reaction are ``not even wrong'' when compared with experiments. While computational abstinence may be the purest way to calculate challenging reaction mechanisms, taking prophylactic measures to avoid regrettable outcomes may be more realistic.
Ammonia synthesis from first principles calculations
DEFF Research Database (Denmark)
Honkala, Johanna Karoliina; Hellman, Anders; Remediakis, Ioannis
2005-01-01
The rate of ammonia synthesis over a nanoparticle ruthenium catalyst can be calculated directly on the basis of a quantum chemical treatment of the problem using density functional theory. We compared the results to measured rates over a ruthenium catalyst supported on magnesium aluminum spinel.
Calculation of tubular joints as compound shells
Golovanov, A. I.
A scheme for joining isoparametric finite shell elements with a bend in the middle surface is described. A solution is presented for the problem of the stress-strain state of a T-joint loaded by internal pressure. A refined scheme is proposed for calculating structures of this kind with allowance for the stiffness of the welded joint.
IOL Power Calculation after Corneal Refractive Surgery
Directory of Open Access Journals (Sweden)
Maddalena De Bernardo
2014-01-01
Full Text Available Purpose. To describe the different formulas that try to overcome the problem of calculating the intraocular lens (IOL) power in patients who underwent corneal refractive surgery (CRS). Methods. A PubMed literature search review of all published articles on keywords associated with IOL power calculation and corneal refractive surgery, as well as the reference lists of retrieved articles, was performed. Results. A total of 33 peer-reviewed articles dealing with methods that try to overcome the problem of calculating the IOL power in patients who underwent CRS were found. According to the information needed to overcome this problem, the methods were divided into two main categories: 18 methods based on knowledge of the patient's clinical history and 15 methods that do not require such knowledge. The first group was further divided into five subgroups based on the parameters needed to make such a calculation. Conclusion. In light of our findings, to avoid unpleasant postoperative surprises, we suggest using only those methods that have shown good results in a large number of patients, possibly by averaging the results obtained with these methods.
Gaseous Nitrogen Orifice Mass Flow Calculator
Ritrivi, Charles
2013-01-01
The Gaseous Nitrogen (GN2) Orifice Mass Flow Calculator was used to determine Space Shuttle Orbiter Water Spray Boiler (WSB) GN2 high-pressure tank source depletion rates for various leak scenarios, and the ability of the GN2 consumables to support cooling of Auxiliary Power Unit (APU) lubrication during entry. The data was used to support flight rationale concerning loss of an orbiter APU/hydraulic system and mission work-arounds. The GN2 mass flow-rate calculator standardizes a method for rapid assessment of GN2 mass flow through various orifice sizes for various discharge coefficients, delta pressures, and temperatures. The calculator utilizes a 0.9-lb (0.4 kg) GN2 source regulated to 40 psia (276 kPa). These parameters correspond to the Space Shuttle WSB GN2 Source and Water Tank Bellows, but can be changed in the spreadsheet to accommodate any system parameters. The calculator can be used to analyze a leak source, leak rate, gas consumables depletion time, and puncture diameter that simulates the measured GN2 system pressure drop.
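The spreadsheet itself is not reproduced in the abstract, but the underlying physics of gas escaping through a small orifice at high pressure ratio is the standard ideal-gas choked-flow relation. A minimal sketch, assuming nitrogen properties and choked conditions (function name and interface are illustrative, not the NASA tool):

```python
import math

def choked_mass_flow(cd, diameter_m, p0_pa, t0_k,
                     gamma=1.4, r_specific=296.8):
    """Ideal-gas choked mass flow (kg/s) through an orifice.
    cd: discharge coefficient; p0_pa, t0_k: upstream stagnation
    pressure (Pa) and temperature (K); gamma and r_specific (J/kg/K)
    default to values for nitrogen."""
    area = math.pi * (diameter_m / 2) ** 2
    # Critical-flow factor (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
    term = (2 / (gamma + 1)) ** ((gamma + 1) / (2 * (gamma - 1)))
    return cd * area * p0_pa * math.sqrt(gamma / (r_specific * t0_k)) * term
```

Under choked conditions the mass flow scales linearly with upstream pressure, which is what makes a simple depletion-rate spreadsheet tractable.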
Block Tridiagonal Matrices in Electronic Structure Calculations
DEFF Research Database (Denmark)
Petersen, Dan Erik
This thesis focuses on some of the numerical aspects of the treatment of the electronic structure problem, in particular that of determining the ground state electronic density for the non–equilibrium Green’s function formulation of two–probe systems and the calculation of transmission in the Lan...
Vibrational Spectra and Quantum Calculations of Ethylbenzene
Institute of Scientific and Technical Information of China (English)
Jian Wang; Xue-jun Qiu; Yan-mei Wang; Song Zhang; Bing Zhang
2012-01-01
Normal vibrations of ethylbenzene in the first excited state have been studied using resonant two-photon ionization spectroscopy. The band origin of the S1←S0 transition of ethylbenzene appeared at 37586 cm-1. A vibrational spectrum up to 2000 cm-1 above the band origin in the first excited state has been obtained. Several chain torsions and normal vibrations are observed in the spectrum. The energies of the first excited state are calculated by the time-dependent density functional theory and configuration interaction singles (CIS) methods with various basis sets. The optimized structures and vibrational frequencies of the S0 and S1 states are calculated using Hartree-Fock and CIS methods with the 6-311++G(2d,2p) basis set. The calculated geometric structures in the S0 and S1 states are gauche conformations in which the symmetric plane of the ethyl group is perpendicular to the ring plane. All the observed spectral bands have been successfully assigned with the help of our calculations.
Calculation of Thermochemical Constants of Propellants
Directory of Open Access Journals (Sweden)
K. P. Rao
1979-01-01
Full Text Available A method for calculation of thermochemical constants and products of explosion of propellants from the knowledge of the molecular formulae and heats of formation of the ingredients is given. A computer programme in AUTOMATH-400 has been developed for the method. The results of applying the method to a number of propellants are given.
Calculations of dietary exposure to acrylamide
Boon, P.E.; Mul, de A.; Voet, van der H.; Donkersgoed, van G.; Brette, M.; Klaveren, van J.D.
2005-01-01
In this paper we calculated the usual and acute exposure to acrylamide (AA) in the Dutch population and in young children (1-6 years). For this, AA levels of different food groups were used as collected by the Institute for Reference Materials and Measurements (IRMM) of the European Commission's Directo
Precipitates/Salts Model Sensitivity Calculation
Energy Technology Data Exchange (ETDEWEB)
P. Mariner
2001-12-20
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, ''Calculations'', in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift.
Heat pipe thermosyphon heat performance calculation
Novomestský, Marcel; Kapjor, Andrej; Papučík, Štefan; Siažik, Ján
2016-06-01
In this article the heat performance of a heat pipe thermosiphon is obtained by means of a numerical model. The heat performance is calculated from a few simplified equations which depend on the working fluid and geometry. The thermal conductivity is also worth mentioning, because the differences between heat pipes and fully solid surfaces are remarkably large.
Conductance calculations with a wavelet basis set
DEFF Research Database (Denmark)
Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel
2003-01-01
. The linear-response conductance is calculated from the Green's function which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...
40 CFR 1065.650 - Emission calculations.
2010-07-01
... into the system boundary, this work flow rate signal becomes negative; in this case, include these negative work rate values in the integration to calculate total work from that work path. Some work paths... interval. When power flows into the system boundary, the power/work flow rate signal becomes negative;...
7 CFR 760.307 - Payment calculation.
2010-01-01
...) The monthly feed cost calculated by using the normal carrying capacity of the eligible grazing land of...) By 56. (j) The monthly feed cost using the normal carrying capacity of the eligible grazing land... pastureland by (ii) The normal carrying capacity of the specific type of eligible grazing land or...
Tubular stabilizer bars – calculations and construction
Directory of Open Access Journals (Sweden)
Adam-Markus WITTEK
2011-01-01
Full Text Available The article outlines the calculation methods for tubular stabilizer bars. Modern technological and structural solutions in contemporary cars are reflected also in the construction, selection and manufacturing of tubular stabilizer bars. A proper construction and the selection of parameters influence the strength properties, the weight, durability and reliability as well as the selection of an appropriate production method.
Stabilizer bars: Part 1. Calculations and construction
Directory of Open Access Journals (Sweden)
Adam-Markus WITTEK
2010-01-01
Full Text Available The article outlines the calculation methods for stabilizer bars. Modern technological and structural solutions in contemporary cars are reflected also in the construction and manufacturing of stabilizer bars. A proper construction and the selection of parameters influence the strength properties, the weight, durability and reliability as well as the selection of an appropriate production method.
7 CFR 1416.704 - Payment calculation.
2010-01-01
... for: (1) Seedlings or cuttings, for trees, bushes or vine replanting; (2) Site preparation and debris...) Replacement, rehabilitation, and pruning; and (6) Labor used to transplant existing seedlings established..., the county committee shall calculate payment based on the number of qualifying trees, bushes or...
On the calculation of Mossbauer isomer shift
Filatov, Michael
2007-01-01
A quantum chemical computational scheme for the calculation of isomer shift in Mossbauer spectroscopy is suggested. Within the described scheme, the isomer shift is treated as a derivative of the total electronic energy with respect to the radius of a finite nucleus. The explicit use of a finite nuc
Normalisation of database expressions involving calculations
Denneheuvel, S. van; Renardel de Lavalette, G.R.
2008-01-01
In this paper we introduce a relational algebra extended with a calculate operator and derive, for expressions in the corresponding language PCSJL, a normalisation procedure. PCSJL plays a role in the implementation of the Rule Language RL; the normalisation is to be used for query optimisation.
Using Angle calculations to demonstrate vowel shifts
DEFF Research Database (Denmark)
Fabricius, Anne
2008-01-01
This paper gives an overview of the long-term trends of diachronic changes evident within the short vowel system of RP during the 20th century. More specifically, it focuses on the changing juxtapositions of the TRAP, STRUT and LOT, FOOT vowel centroid positions. The paper uses geometric calculation...
Directory of Open Access Journals (Sweden)
Viorica IOAN
2012-11-01
Full Text Available The bank is exposed to credit risk, the risk of being unable to recover claims on debtors arising from its lending activity. Credit risk may also arise from investments in other local and foreign credit institutions. Credit risk may be minimized through the careful evaluation of credit applicants, through their monitoring over the duration of the loan, and through the establishment of risk exposure limits, significant risk margins, and an acceptable balance between risk and profit.
Systematic Risk in Agriculture: A Case of Slovakia
Directory of Open Access Journals (Sweden)
M. Tóth
2014-12-01
Full Text Available The paper uses an alternative Markowitz portfolio theory approach, replacing the stock return with return on equity (ROE), to estimate the systematic risk of unquoted agricultural farms. Systematic risk is standardly measured by the mean-variance model and the standard deviation of stock returns. In the case of unquoted firms the information regarding the market rate of return is missing; to assess risk and return, the use of individual financial statements is necessary. The systematic risk in Slovak agriculture over the period 2009-2012 was 3% of equity or capital invested, with an average return of 0.048%. We calculated the systematic risk separately for the two prevailing legal forms in Slovak agriculture: cooperatives and companies (JSC, Ltd.). Cooperatives represent farms with lower individual risk and lower ROE, but higher systematic risk. Companies represent farms established after 1989. These farms generate higher profit for the owner with lower systematic risk.
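The mean-variance measure this abstract applies to ROE series reduces to computing the mean return and its standard deviation. A minimal sketch with hypothetical ROE figures (the function and data are illustrative, not the study's dataset):

```python
import math

def mean_and_std(roe_series):
    """Mean return and (population) standard deviation of an ROE
    series - the risk measure used when stock returns are unavailable
    and ROE from financial statements is substituted."""
    n = len(roe_series)
    mean = sum(roe_series) / n
    var = sum((r - mean) ** 2 for r in roe_series) / n
    return mean, math.sqrt(var)
```

For a farm with yearly ROE of 4%, 6%, 2% and 8%, this gives a mean return of 5% with a standard deviation (risk) of about 2.2%.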
Radionuclide release calculations for SAR-08
Energy Technology Data Exchange (ETDEWEB)
Thomson, Gavin; Miller, Alex; Smith, Graham; Jackson, Duncan (Enviros Consulting Ltd, Wolverhampton (United Kingdom))
2008-04-15
Following a review by the Swedish regulatory authorities of the post-closure safety assessment of the SFR 1 disposal facility for low and intermediate waste (L/ILW), SAFE, the SKB has prepared an updated assessment called SAR-08. This report describes the radionuclide release calculations that have been undertaken as part of SAR-08. The information, assumptions and data used in the calculations are reported and the results are presented. The calculations address issues raised in the regulatory review, but also take account of new information including revised inventory data. The scenarios considered include the main case of expected behaviour of the system, with variants; low probability releases, and so-called residual scenarios. Apart from these scenario uncertainties, data uncertainties have been examined using a probabilistic approach. Calculations have been made using the AMBER software. This allows all the component features of the assessment model to be included in one place. AMBER has been previously used to reproduce the results of the corresponding calculations in the SAFE assessment. It is also used in demonstration of the IAEA's near surface disposal assessment methodology ISAM, has been subject to very substantial verification tests, and has been used in verifying other assessment codes. Results are presented as a function of time for the release of radionuclides from the near field, and then from the far field into the biosphere. Radiological impacts of the releases are reported elsewhere. Consideration is given to each radionuclide and to each component part of the repository. The releases from the entire repository are also presented. The peak release rates are, for most scenarios, due to organic C-14. Other radionuclides which contribute to peak release rates include inorganic C-14, Ni-59 and Ni-63. (author)
Comparison of Country Risk, Sustainability and Economic Safety Indices
Jelena Stankeviciene; Tatjana Sviderskė; Algita Miečinskienė
2014-01-01
Country risk, sustainability and economic safety are becoming more important in the contemporary economic world. The aim of this paper is to present the importance of comparison formalisation of country risk, sustainability, and economic safety indices for strategic alignment. The work provides an analysis of the relationship between country risk, sustainability and economic safety in EU countries, based on statistical data. Investigations and calculations of rankings provided by Euromoney Coun...
Multilevel Fuzzy Approach to the Risk and Disaster Management
Directory of Open Access Journals (Sweden)
Márta Takács
2010-11-01
Full Text Available In this paper a short general review of the main characteristics of risk management applications is given, where a hierarchical, multilevel risk management method can be applied in a fuzzy decision making environment. The given case study is a travel risk-level calculation based on the presented model. In the last section an extended model and a preliminary mathematical description is presented, where the pairwise comparison matrix of the grouped risk factors expands the previous principles.
Chen, Yu; Song, Guobao; Yang, Fenglin; Zhang, Shushen; Zhang, Yun; Liu, Zhenyu
2012-12-03
According to risk systems theory and the characteristics of the chemical industry, an index system was established for risk assessment of enterprises in chemical industrial parks (CIPs) based on the inherent risk of the source, effectiveness of the prevention and control mechanism, and vulnerability of the receptor. A comprehensive risk assessment method based on catastrophe theory was then proposed and used to analyze the risk levels of ten major chemical enterprises in the Songmu Island CIP, China. According to the principle of equal distribution function, the chemical enterprise risk level was divided into the following five levels: 1.0 (very safe), 0.8 (safe), 0.6 (generally recognized as safe, GRAS), 0.4 (unsafe), 0.2 (very unsafe). The results revealed five enterprises (50%) with an unsafe risk level, and another five enterprises (50%) at the generally recognized as safe risk level. This method solves the multi-objective evaluation and decision-making problem. Additionally, this method involves simple calculations and provides an effective technique for risk assessment and hierarchical risk management of enterprises in CIPs.
Agarwal, Riya
2010-01-01
Small and medium enterprises are the backbone of the development of the economy. They provide employment, contribute to GDP and are an important source of revenue for the country, especially India, employing approximately 30 million people and generating 40% of the export surplus. However, SMEs have to face a lot of operational risks: credit risk, liquidity risk, foreign exchange risk, interest rate risk, and competition from MNCs and foreign buyers. Despite the failure of several SMEs, those pul...
Crop production structure optimization with considering risk
Directory of Open Access Journals (Sweden)
Lajos Nagy
2012-12-01
Full Text Available The effects of global climate change are occurring more and more sharply, and because of this, alongside the indisputable genetic and technological development, yield fluctuation in crop production has increased in recent years. This sector is, moreover, one of the riskiest, so it is natural to consider risk during planning, in the phase of decision preparation. Risk programming models are usually applied in agriculture, which take the decision-maker's attitude to risk into consideration, i.e. these are utility maximization models. First of all, in the case of risk programming models, the character of the risk measure must be decided. For determining the degree of risk, dispersion indicators, among others, are suitable. When financial portfolios are optimized, risk is most frequently given by the variance of the portfolio. Variance is also applied in the expected value-variance (E-V) models. If variance is minimized, the model has a quadratic objective function. An alternative to variance in a linear programming model is the mean absolute deviation (MAD). The purpose of this article is to present the application of a portfolio model, of the kind generally used in financial investment calculations, for optimizing crop production structure and minimizing risk.
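The MAD risk measure mentioned above is straightforward to evaluate for a candidate crop structure; the sketch below is an illustrative helper (crop names and returns are hypothetical, and the full model would minimize this quantity with a linear program):

```python
def portfolio_mad(returns_by_year, weights):
    """Mean absolute deviation of a crop portfolio's return.
    returns_by_year: list of per-year dicts {crop: return};
    weights: {crop: share of sown area}, summing to 1."""
    yearly = [sum(weights[c] * year[c] for c in weights)
              for year in returns_by_year]
    mean = sum(yearly) / len(yearly)
    return sum(abs(y - mean) for y in yearly) / len(yearly)
```

With two crops whose good and bad years alternate, an even split gives a lower MAD than planting either crop alone, which is the diversification effect the portfolio model exploits.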
Calculating Contained Firing Facility (CFF) explosive
Energy Technology Data Exchange (ETDEWEB)
Lyle, J W.
1998-10-20
The University of California awarded LLNL contract No. B345381 for the design of the facility to Parsons Infrastructure Technology, Inc., of Pasadena, California. The Laboratory specified that the firing chamber be able to withstand repeated firings of 60 Kg of explosive located in the center of the chamber, 4 feet above the floor, and repeated firings of 35 Kg of explosive at the same height and located anywhere within 2 feet of the edge of a region on the floor called the anvil. Other requirements were that the chamber be able to accommodate the penetrations of the existing bullnose of the Bunker 801 flash X-ray machine and the roof of the underground camera room. These requirements and provisions for blast-resistant doors formed the essential basis for the design. The design efforts resulted in a steel-reinforced concrete structure measuring (on the inside) 55 x 51 feet by 30 feet high. The walls and ceiling are to be approximately 6 feet thick. Because the 60-Kg charge is not located in the geometric center of the volume and a 35-Kg charge could be located anywhere in a prescribed area, there will be different dynamic pressures and impulses on the various walls, floor, and ceiling, depending upon the weights and locations of the charges. The detailed calculations and specifications to achieve the design criteria were performed by Parsons and are included in Reference 1. Reference 2, Structures to Resist the Effects of Accidental Explosions (TM5-1300), is the primary design manual for structures of this type. It includes an analysis technique for the calculation of blast loadings within a cubicle or containment-type structure. Parsons used the TM5-1300 methods to calculate the loadings on the various firing chamber surfaces for the design criteria explosive weights and locations. At LLNL the same methods were then used to determine the firing zones for other weights and elevations that would give the same or lesser loadings. Although very laborious, a hand
The development of a 3D risk analysis method.
I, Yet-Pole; Cheng, Te-Lung
2008-05-01
Much attention has been paid to quantitative risk analysis (QRA) research in recent years due to the increasingly severe disasters that have happened in the process industries. Owing to the complexity of the calculations, very few software packages, such as SAFETI, can really make the risk presentation meet practical requirements. However, the traditional risk presentation method, like the individual risk contour in SAFETI, is mainly based on the consequence analysis results of dispersion modeling, which usually assumes that the vapor cloud disperses over a constant ground roughness on a flat terrain with no obstructions and no concentration fluctuations, which is quite different from the real situation of a chemical process plant. All these models usually over-predict the hazardous regions in order to remain conservative, which also increases the uncertainty of the simulation results. On the other hand, a more rigorous model such as a computational fluid dynamics (CFD) model can resolve the previous limitations; however, it cannot resolve the complexity of the risk calculations. In this research, a conceptual three-dimensional (3D) risk calculation method is proposed, combining the results of a series of CFD simulations with post-processing procedures to obtain 3D individual risk iso-surfaces. It is believed that such a technique will not only be applicable to risk analysis at ground level, but may also be extended to aerial, submarine, or space risk analyses in the near future.
CANISTER HANDLING FACILITY CRITICALITY SAFETY CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
C.E. Sanders
2005-04-07
This design calculation revises and updates the previous criticality evaluation for the canister handling, transfer and staging operations to be performed in the Canister Handling Facility (CHF) documented in BSC [Bechtel SAIC Company] 2004 [DIRS 167614]. The purpose of the calculation is to demonstrate that the handling operations of canisters performed in the CHF meet the nuclear criticality safety design criteria specified in the ''Project Design Criteria (PDC) Document'' (BSC 2004 [DIRS 171599], Section 4.9.2.2), the nuclear facility safety requirement in ''Project Requirements Document'' (Canori and Leitner 2003 [DIRS 166275], p. 4-206), the functional/operational nuclear safety requirement in the ''Project Functional and Operational Requirements'' document (Curry 2004 [DIRS 170557], p. 75), and the functional nuclear criticality safety requirements described in the ''Canister Handling Facility Description Document'' (BSC 2004 [DIRS 168992], Sections 3.1.1.3.4.13 and 3.2.3). Specific scope of work contained in this activity consists of updating the Category 1 and 2 event sequence evaluations as identified in the ''Categorization of Event Sequences for License Application'' (BSC 2004 [DIRS 167268], Section 7). The CHF is limited in throughput capacity to handling sealed U.S. Department of Energy (DOE) spent nuclear fuel (SNF) and high-level radioactive waste (HLW) canisters, defense high-level radioactive waste (DHLW), naval canisters, multicanister overpacks (MCOs), vertical dual-purpose canisters (DPCs), and multipurpose canisters (MPCs) (if and when they become available) (BSC 2004 [DIRS 168992], p. 1-1). It should be noted that the design and safety analyses of the naval canisters are the responsibility of the U.S. Department of the Navy (Naval Nuclear Propulsion Program) and will not be included in this document. In addition, this calculation is valid for
Introducing Risk Analysis and Calculation of Profitability under Uncertainty in Engineering Design
Kosmopoulou, Georgia; Freeman, Margaret; Papavassiliou, Dimitrios V.
2011-01-01
A major challenge that chemical engineering graduates face at the modern workplace is the management and operation of plants under conditions of uncertainty. Developments in the fields of industrial organization and microeconomics offer tools to address this challenge with rather well developed concepts, such as decision theory and financial risk…
Denov, Myriam; Bryan, Catherine
2012-01-01
Similar to refugees in general, independent child migrants are frequently constructed in academic and popular discourse as passive and powerless or as untrustworthy and potentially threatening. Such portrayals fail to capture how these youth actively navigate the complex experiences of forced migration. Drawing on interviews with independent child…
Interbirth interval is associated with childhood type 1 diabetes risk
DEFF Research Database (Denmark)
Cardwell, Chris R; Svensson, Jannet; Waldhoer, Thomas
2012-01-01
of childhood type 1 diabetes has not been investigated. A secondary analysis of 14 published observational studies of perinatal risk factors for type 1 diabetes was conducted. Risk estimates of diabetes by category of interbirth interval were calculated for each study. Random effects models were used...... to calculate pooled odds ratios (ORs) and investigate heterogeneity between studies. Overall, 2,787 children with type 1 diabetes were included. There was a reduction in the risk of childhood type 1 diabetes in children born to mothers after interbirth intervals...
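The random-effects pooling of study-level odds ratios mentioned in this abstract is commonly done with the DerSimonian-Laird estimator of between-study variance. A minimal sketch of that standard method (function name and the toy inputs are illustrative; this is not the study's actual analysis code):

```python
import math

def pooled_or_dersimonian_laird(odds_ratios, variances):
    """Pool study-level odds ratios with a DerSimonian-Laird
    random-effects model. variances are variances of the log ORs."""
    y = [math.log(o) for o in odds_ratios]           # log odds ratios
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # heterogeneity Q
    k = len(y)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return math.exp(pooled)
```

When the studies agree exactly, the estimated between-study variance is zero and the method reduces to the inverse-variance fixed-effect pooled OR.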
First-principles calculations of novel materials
Sun, Jifeng
Computational material simulation is becoming more and more important as a branch of materials science. Depending on the scale of the systems, there are many simulation methods, i.e. first-principles calculation (or ab initio), molecular dynamics, mesoscale methods and continuum methods. Among them, first-principles calculation, which involves density functional theory (DFT) and is based on quantum mechanics, has become a reliable tool in condensed matter physics. DFT is a single-electron approximation for solving many-body problems. Strictly speaking, both DFT and ab initio Hartree-Fock (HF) belong to first-principles calculation, since both are aimed at solving the Schrödinger equation of the many-body system using the self-consistent field (SCF) method and calculating ground state properties. The difference is that DFT introduces parameters, either from experiments or from other molecular dynamics (MD) calculations, to approximate the expressions of the exchange-correlation terms, whereas in HF the exchange term is calculated exactly but the correlation term is neglected. In this dissertation, DFT-based first-principles calculations were performed for all the novel and interesting materials introduced. Specifically, DFT and the rationale behind the related properties (e.g. electronic, optical, defect, thermoelectric, magnetic) are introduced in Chapter 2. From Chapter 3 to Chapter 5, several representative materials are studied. In particular, a new semiconducting oxytelluride, Ba2TeO, is studied in Chapter 3. Our calculations indicate a direct semiconducting character with a band gap value of 2.43 eV, which agrees well with the optical experiment (~2.93 eV). Moreover, the optical and defect properties of Ba2TeO are also systematically investigated with a view to understanding its potential as an optoelectronic or transparent conducting material. We find
On the Origins of Calculation Abilities
Directory of Open Access Journals (Sweden)
A. Ardila
1993-01-01
Full Text Available A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000-6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimination disturbances, semantic aphasia, and acalculia are proposed to comprise a single neuropsychological syndrome associated with left angular gyrus damage. A classification of calculation disturbances resulting from brain damage is presented. It is emphasized that, using historical/anthropological analysis, it becomes evident that acalculia, finger agnosia, and disorders in right–left discrimination (as, in general, in the use of spatial concepts) must constitute a single clinical syndrome, resulting from the disruption of some common brain activity and the impairment of common cognitive mechanisms.
High-Power Wind Turbine: Performance Calculation
Directory of Open Access Journals (Sweden)
Goldaev Sergey V.
2015-01-01
Full Text Available The paper is devoted to high-power wind turbine performance calculation using Pearson's chi-squared test of the statistical hypothesis that the population of air velocities follows the Weibull-Gnedenko distribution. The distribution parameters are found by numerical solution of a transcendental equation, with the gamma function defined by an interpolation formula. Values of the operating characteristic of the incomplete gamma function are obtained by numerical integration using Weddle's rule. Comparison of the results calculated using the proposed methodology with those obtained by other authors revealed significant differences in the values of the sample variance and the empirical Pearson statistic. The influence of the initial and maximum wind speeds on the performance of the high-power wind turbine is also analyzed.
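The goodness-of-fit step described here, comparing binned wind-speed counts against a fitted Weibull-Gnedenko distribution, can be sketched as follows (function names, bin edges, and parameter values are illustrative assumptions, not the paper's data):

```python
import math

def weibull_cdf(v, k, c):
    """Weibull-Gnedenko CDF with shape k and scale c (m/s)."""
    return 1.0 - math.exp(-((v / c) ** k))

def pearson_chi2(observed, bin_edges, k, c, n_total):
    """Pearson chi-squared statistic comparing observed bin counts of
    wind speed with expected counts under a fitted Weibull(k, c)."""
    chi2 = 0.0
    for count, (lo, hi) in zip(observed, zip(bin_edges, bin_edges[1:])):
        expected = n_total * (weibull_cdf(hi, k, c) - weibull_cdf(lo, k, c))
        chi2 += (count - expected) ** 2 / expected
    return chi2
```

The statistic is then compared against the chi-squared critical value for (number of bins - number of fitted parameters - 1) degrees of freedom; a small value means the Weibull hypothesis is not rejected.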
Isogeometric analysis in electronic structure calculations
Cimrman, Robert; Kolman, Radek; Tůma, Miroslav; Vackář, Jiří
2016-01-01
In electronic structure calculations, various material properties can be obtained by computing the total energy of a system as well as derivatives of the total energy w.r.t. atomic positions. The derivatives, also known as Hellmann-Feynman forces, require, for practical computational reasons, the discretized charge density and wave functions to have continuous second derivatives in the whole solution domain. We describe an application of isogeometric analysis (IGA), a spline modification of the finite element method (FEM), to achieve the required continuity. The novelty of our approach is in employing the technique of Bézier extraction to add the IGA capabilities to our FEM-based code for ab-initio calculations of electronic states of non-periodic systems within the density-functional framework, built upon the open source finite element package SfePy. We compare FEM and IGA in benchmark problems and several numerical results are presented.
Equation of State from Lattice QCD Calculations
Energy Technology Data Exchange (ETDEWEB)
Gupta, Rajan [Los Alamos National Laboratory
2011-01-01
We provide a status report on the calculation of the Equation of State (EoS) of QCD at finite temperature using lattice QCD. Most of the discussion will focus on comparison of recent results obtained by the HotQCD and Wuppertal-Budapest collaborations. We will show that very significant progress has been made towards obtaining high precision results over the temperature range of T = 150-700 MeV. The various sources of systematic uncertainties will be discussed and the differences between the two calculations highlighted. Our final conclusion is that these lattice results of EoS are precise enough to be used in the phenomenological analysis of heavy ion experiments at RHIC and LHC.
Labview virtual instruments for calcium buffer calculations.
Reitz, Frederick B; Pollack, Gerald H
2003-01-01
Labview VIs based upon the calculator programs of Fabiato and Fabiato (J. Physiol. Paris 75 (1979) 463) are presented. The VIs comprise the necessary computations for the accurate preparation of multiple-metal buffers, for the back-calculation of buffer composition given known free metal concentrations and stability constants used, for the determination of free concentrations from a given buffer composition, and for the determination of apparent stability constants from absolute constants. As implemented, the VIs can concurrently account for up to three divalent metals, two monovalent metals and four ligands thereof, and the modular design of the VIs facilitates further extension of their capacity. As Labview VIs are inherently graphical, these VIs may serve as useful templates for those wishing to adapt this software to other platforms.
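The core of such buffer computations is a mass-action equilibrium solve. A minimal single-metal, single-ligand sketch in Python (the Fabiato programs and the VIs above handle several metals and ligands simultaneously, which requires iteration rather than this closed form):

```python
import math

def free_metal(total_metal, total_ligand, k_assoc):
    """Free metal concentration [M] for one metal + one ligand.
    Mass action K = [ML]/([M][L]) with conservation of metal and ligand
    reduces to a quadratic in [M]; the positive root is returned.
    (Sketch only -- multi-metal/multi-ligand buffers need iteration.)"""
    a = k_assoc
    b = k_assoc * (total_ligand - total_metal) + 1.0
    c = -total_metal
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

For example, 1 mM total metal, 2 mM total chelator and K = 10^6 M^-1 (illustrative numbers, not a calibrated buffer recipe) give a free concentration that satisfies both conservation laws and the stability constant.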
Tearing mode stability calculations with pressure flattening
Ham, C J; Cowley, S C; Hastie, R J; Hender, T C; Liu, Y Q
2013-01-01
Calculations of tearing mode stability in tokamaks split conveniently into an external region, where marginally stable ideal MHD is applicable, and a resonant layer around the rational surface where sophisticated kinetic physics is needed. These two regions are coupled by the stability parameter Δ′. Pressure and current perturbations localized around the rational surface alter the stability of tearing modes. Equations governing the changes in the external solution and Δ′ are derived for arbitrary perturbations in axisymmetric toroidal geometry. The relationship of Δ′ with and without pressure flattening is obtained analytically for four pressure flattening functions. Resistive MHD codes do not contain the appropriate layer physics and therefore cannot predict stability directly. They can, however, be used to calculate Δ′. Existing methods (Ham et al. 2012 Plasma Phys. Control. Fusion 54 025009) for extracting Δ′ from resistive codes are unsatisfactory when there is a finite pressure gradient at the rational surface ...
Normal mode calculations of trigonal selenium
DEFF Research Database (Denmark)
Hansen, Flemming Yssing; McMurry, H. L.
1980-01-01
The phonon dispersion relations for trigonal selenium have been calculated on the basis of a short range potential field model. Electrostatic long range forces have not been included. The force field is defined in terms of symmetrized coordinates which reflect partly the symmetry of the space group. The intrachain force field is projected from a valence type field including a bond stretch, angle bend, and dihedral torsion. With these coordinates we obtain the strong dispersion of the upper optic modes as observed by neutron scattering, where other models have failed and give flat bands. In this way we have eliminated the ambiguity in the choice of valence coordinates, which has been a problem in previous models which used valence type interactions. Calculated sound velocities and elastic moduli are also given.
A Methodology for Calculating Radiation Signatures
Energy Technology Data Exchange (ETDEWEB)
Klasky, Marc Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wilcox, Trevor [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bathke, Charles G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); James, Michael R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-05-01
A rigorous formalism is presented for calculating radiation signatures from both Special Nuclear Material (SNM) and radiological sources. The use of MCNP6 in conjunction with CINDER/ORIGEN is described to allow for the determination of both neutron and photon leakages from objects of interest. In addition, a description of the use of MCNP6 to properly model the background neutron and photon sources is also presented. The physics issues encountered in the modeling are examined so as to provide guidance to the user in discerning the relevant physics to incorporate into general radiation signature calculations. Furthermore, examples are provided to assist in delineating the pertinent physics that must be accounted for. Finally, examples of detector modeling utilizing MCNP are provided, along with a discussion of the generation of Receiver Operating Curves, which are the suggested means by which to determine the detectability of radiation signatures emanating from objects.
Numerical calculations of magnetic properties of nanostructures
Kapitan, Vitalii; Nefedev, Konstantin
2015-01-01
Magnetic force microscopy and scanning tunneling microscopy data can be used to test computer numerical models of magnetism. The elaborated numerical model of face-centered lattice Ising spins is based on the pixel distribution in images of magnetic nanostructures obtained with a scanning microscope. Monte Carlo simulation of the magnetic structure model allowed defining the temperature dependence of magnetization, and calculating magnetic hysteresis curves and the distribution of magnetization on the surface of submonolayer and monolayer nanofilms of cobalt, depending on the experimental conditions. Our package of parallel supercomputer software is designed for numerical simulation of magnetic-force experiments and allows one to obtain the distribution of magnetization in one-dimensional arrays of nanodots. An interpretation of magnetic-force microscopy images of magnetic nanodot states has been determined. The results of supercomputer simulations and numerical calculations are in...
A priori calculations for the rotational stabilisation
Directory of Open Access Journals (Sweden)
Iwata Yoritaka
2013-12-01
Full Text Available The synthesis of chemical elements is mostly realised by low-energy heavy-ion reactions. Synthesis of exotic and heavy nuclei, as well as that of superheavy nuclei, is essential not only to find out the origin and the limit of the chemical elements but also to clarify the historical/chemical evolution of our universe. Although the lifetimes of exotic nuclei are not so long, their indispensable roles in chemical evolution have been pointed out. Here we are interested in examining the rotational stabilisation. In this paper an a priori calculation (before microscopic density functional calculations) is carried out for the rotational stabilisation effect, in which the balance between the nuclear force, the Coulomb force and the centrifugal force is taken into account.
Energy Technology Data Exchange (ETDEWEB)
Gamiz, E.; /CAFPE, Granada /Granada U., Theor. Phys. Astrophys. /Fermilab; DeTar, C.; /Utah U.; El-Khadra, A.X.; /Illinois U., Urbana; Kronfeld, A.S.; /Fermilab; Mackenzie, P.B.; /Fermilab; Simone, J.; /Fermilab
2011-11-01
We report on the status of the Fermilab-MILC calculation of the form factor f₊^{Kπ}(q² = 0), needed to extract the CKM matrix element |V_us| from experimental data on K semileptonic decays. The HISQ formulation is used in the simulations for the valence quarks, while the sea quarks are simulated with the asqtad action (MILC N_f = 2 + 1 configurations). We discuss the general methodology of the calculation, including the use of twisted boundary conditions to get values of the momentum transfer close to zero and the different techniques applied for the correlator fits. We present initial results for lattice spacings a ≈ 0.12 fm and a ≈ 0.09 fm, and several choices of the light quark masses.
Pressure Correction in Density Functional Theory Calculations
Lee, S H
2008-01-01
First-principles calculations based on density functional theory have been widely used in studies of the structural, thermoelastic, rheological, and electronic properties of earth-forming materials. The exchange-correlation term, however, is implemented based on various approximations, and this is believed to be the main reason for discrepancies between experiments and theoretical predictions. In this work, using periclase (MgO) as a prototype system, we examine the discrepancies in pressure and Kohn-Sham energy that are due to the choice of the exchange-correlation functional, comparing the local density approximation with the generalized gradient approximation. We perform extensive first-principles calculations at various temperatures and volumes and find that the exchange-correlation-based discrepancies in Kohn-Sham energy and pressure should be independent of temperature. This implies that the physical quantities, such as the equation of state, heat capacity, and the Grüneisen parameter, estimat...
The Gravity-Powered Calculator, a Galilean Exhibit
Cerreta, Pietro
2014-04-01
The Gravity-Powered Calculator is an exhibit at the Exploratorium in San Francisco. It is presented by its American creators as an amazing device that extracts the square roots of numbers using only the force of gravity. But if one analyzes its conceptual construction, one cannot help but recall Galileo's research on falling bodies, the inclined plane and projectile motion; exactly what the American creators did not put into prominence with their exhibit. Considering the equipment only for what it does is, in my opinion, very reductive compared to the historical roots of the Galilean mathematical physics contained therein. Moreover, as the famous study by S. Drake of the Galilean drawings (in particular Folio 167v) accurately deduces, the parabolic paths of the ball leaping from its launch pad after descending a slope really actualize Galileo's experiments. The exhibit therefore may be best known as a 'Galilean calculator'.
Occupation and prostate cancer risk in Sweden.
Sharma-Wagner, S; Chokkalingam, A P; Malker, H S; Stone, B J; McLaughlin, J K; Hsing, A W
2000-05-01
To provide new leads regarding occupational prostate cancer risk factors, we linked 36,269 prostate cancer cases reported to the Swedish National Cancer Registry during 1961 to 1979 with employment information from the 1960 National Census. Standardized incidence ratios for prostate cancer, within major (1-digit), general (2-digit), and specific (3-digit) industries and occupations, were calculated. Significant excess risks were seen for agriculture-related industries, soap and perfume manufacture, and leather processing industries. Significantly elevated standardized incidence ratios were also seen for the following occupations: farmers, leather workers, and white-collar occupations. Our results suggest that farmers; certain occupations and industries with exposures to cadmium, herbicides, and fertilizers; and men with low occupational physical activity levels have elevated prostate cancer risks. Further research is needed to confirm these findings and identify specific exposures related to excess risk in these occupations and industries.
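A standardized incidence ratio, as calculated above, is simply observed over expected cases; a minimal Python/SciPy sketch with an exact Poisson confidence interval (illustrative numbers, not the study's data):

```python
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio O/E with an exact Poisson CI for the
    observed count, via the chi-squared link to the Poisson distribution."""
    sir = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return sir, lower, upper
```

For example, 30 observed versus 20 expected cases gives SIR = 1.5; the interval excludes 1 only when the excess is statistically significant.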
Analyzing BSE transmission to quantify regional risk.
de Koeijer, Aline A
2007-10-01
As a result of consumer fears and political concerns related to BSE as a risk to human health, a need has arisen recently for more sensitive methods to detect BSE and more accurate methods to determine BSE incidence. As a part of the development of such methods, it is important to be able to identify groups of animals with above-average BSE risk. One of the well-known risk factors for BSE is age, as very young animals do not develop the disease, and very old animals are less likely to develop the disease. Here, we analyze which factors have a strong influence on the age distribution of BSE in a population. Building on that, we develop a simple set of calculation rules for classifying the BSE risk in a given cattle population. Required inputs are data on imports and on the BSE control measures in place over the last 10 or 20 years.
Portfolio Allocation Subject to Credit Risk
Directory of Open Access Journals (Sweden)
Rogerio de Deus Oliveira
2003-12-01
Full Text Available Credit risk is an important dimension to be considered in the risk management procedures of financial institutions. It is particularly useful in emerging markets, where default rates on bank loan products are usually high. Credit risk is usually calculated through highly costly Monte Carlo simulations which consider different stochastic factors driving the uncertainty associated with the borrowers' liabilities. In this paper, under some restrictions, we derive closed-form formulas for the probability distributions of default rates of bank loan products involving a large number of clients. This allows us to quickly obtain the credit risk of such products. Moreover, using these probability distributions, we solve the problem of optimal portfolio allocation under default risk.
Metrics of Risk Associated with Defects Rediscovery
Miranskyy, Andriy V; Reesor, Mark
2011-01-01
Software defects rediscovered by a large number of customers affect various stakeholders and may: 1) hint at gaps in a software manufacturer's Quality Assurance (QA) processes, 2) lead to an overload of a software manufacturer's support and maintenance teams, and 3) consume customers' resources, leading to a loss of reputation and a decrease in sales. Quantifying risk associated with the rediscovery of defects can help all of these stakeholders. In this chapter we present a set of metrics needed to quantify the risks. The metrics are designed to help: 1) the QA team to assess their processes; 2) the support and maintenance teams to allocate their resources; and 3) the customers to assess the risk associated with using the software product. The paper includes a validation case study which applies the risk metrics to industrial data. To calculate the metrics we use mathematical instruments like the heavy-tailed Kappa distribution and the G/M/k queuing model.
Diversity in Risk Communication
Directory of Open Access Journals (Sweden)
Agung Nur Probohudono
2013-03-01
Full Text Available This study analyses the communication of the five major categories of risk disclosure (business, strategy, operating, market and credit risk) over the volatile 2007-2009 Global Financial Crisis (GFC) time period in manufacturing companies listed in key South East Asian countries. This study is important as it contributes to the literature by providing insights into voluntary risk disclosure practices using sample countries with different economic scenarios. Key findings are that business risk is the most disclosed category and strategy risk is the least disclosed. Business and credit risk disclosure consistently increase over the three year period, while operating, market and strategy risk disclosure increase in 2008, but then decrease slightly in 2009. Statistical analysis reveals that country of incorporation and size help predict risk disclosure levels. The overall low disclosure levels (26-29%) highlight the potential for far higher communication of key risk factors.
Decreasing Relative Risk Premium
DEFF Research Database (Denmark)
Hansen, Frank
We consider the risk premium demanded by a decision maker with wealth x in order to be indifferent between obtaining a new level of wealth y1 with certainty, or participating in a lottery which either results in unchanged present wealth or a level of wealth y2 > y1. We define the relative risk premium as the quotient between the risk premium and the increase in wealth y1 - x which the decision maker puts on the line by choosing the lottery in place of receiving y1 with certainty. We study preferences such that the relative risk premium is a decreasing function of present wealth, and we determine that decreasing relative risk premium in the small implies decreasing relative risk premium in the large, and that decreasing relative risk premium everywhere implies risk aversion. We finally show that preferences with decreasing relative risk premium may be equivalently expressed in terms of certain preferences on risky...
DEFF Research Database (Denmark)
Harrison, Glenn W.; Lau, Morten; Rutström, E. Elisabet;
2013-01-01
We elicit individual preferences over social risk. We identify the extent to which these preferences are correlated with preferences over individual risk and the well-being of others. We examine these preferences in the context of laboratory experiments over small, anonymous groups, although the methodological issues extend to larger groups that form endogenously (e.g., families, committees, communities). Preferences over social risk can be closely approximated by individual risk attitudes when subjects have no information about the risk preferences of other group members. We find no evidence that subjects systematically reveal different risk attitudes in a social setting with no prior knowledge about the risk preferences of others compared to when they solely bear the consequences of the decision. However, we also find that subjects are significantly more risk averse when they know the risk...
On the Origins of Calculation Abilities
Ardila, A.
1993-01-01
A historical review of calculation abilities is presented. Counting, starting with finger sequencing, has been observed in different ancient and contemporary cultures, whereas number representation and arithmetic abilities are found only during the last 5000–6000 years. The rationale for selecting a base of ten in most numerical systems and the clinical association between acalculia and finger agnosia are analyzed. Finger agnosia (as a restricted form of autotopagnosia), right–left discrimina...
Scaling Calculations for a Relativistic Gyrotron.
2014-09-26
Scaling calculations are presented for a relativistic gyrotron. The results of the calculations are given in Section 3. The nonlinear, slow-time-scale equations of motion used for these ... correspond to a cylindrical resonator and a thin annular electron beam, with the beam radius chosen to coincide with a maximum of the resonator ... entering the cavity. A tractable set of nonlinear equations, based on a slow-time-scale formulation developed previously, was used.
A Paleolatitude Calculator for Paleoclimate Studies.
van Hinsbergen, Douwe J J; de Groot, Lennart V; van Schaik, Sebastiaan J; Spakman, Wim; Bijl, Peter K; Sluijs, Appy; Langereis, Cor G; Brinkhuis, Henk
2015-01-01
Realistic appraisal of paleoclimatic information obtained from a particular location requires accurate knowledge of its paleolatitude defined relative to the Earth's spin-axis. This is crucial to, among others, correctly assess the amount of solar energy received at a location at the moment of sediment deposition. The paleolatitude of an arbitrary location can in principle be reconstructed from tectonic plate reconstructions that (1) restore the relative motions between plates based on (marine) magnetic anomalies, and (2) reconstruct all plates relative to the spin axis using a paleomagnetic reference frame based on a global apparent polar wander path. Whereas many studies do employ high-quality relative plate reconstructions, the necessity of using a paleomagnetic reference frame for climate studies rather than a mantle reference frame appears under-appreciated. In this paper, we briefly summarize the theory of plate tectonic reconstructions and their reference frames tailored towards applications of paleoclimate reconstruction, and show that using a mantle reference frame, which defines plate positions relative to the mantle, instead of a paleomagnetic reference frame may introduce errors in paleolatitude of more than 15° (>1500 km). This is because mantle reference frames cannot constrain, or are specifically corrected for the effects of true polar wander. We used the latest, state-of-the-art plate reconstructions to build a global plate circuit, and developed an online, user-friendly paleolatitude calculator for the last 200 million years by placing this plate circuit in three widely used global apparent polar wander paths. As a novelty, this calculator adds error bars to paleolatitude estimates that can be incorporated in climate modeling. The calculator is available at www.paleolatitude.org. We illustrate the use of the paleolatitude calculator by showing how an apparent wide spread in Eocene sea surface temperatures of southern high latitudes may be in part
Prediction and calculation for new energy development
Institute of Scientific and Technical Information of China (English)
Fu Yuhua; Fu Anjie
2008-01-01
Some important questions for new energy development are discussed, such as the prediction and calculation of sea surface temperature, ocean waves, offshore platform price, typhoon track, fire status, vibration due to earthquake, energy price, stock market trend and so on, using fractal methods (including constant dimension fractal, variable dimension fractal, complex number dimension fractal and fractal series) and the improved rescaled range analysis (R/S analysis).
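As a sketch of the classical computation behind rescaled range (R/S) analysis (Python/NumPy; the authors' improved variant and the fractal-series methods are not reproduced here), the Hurst exponent is the log-log slope of the average rescaled range against window size:

```python
import numpy as np

def hurst_rs(x, window_sizes=(32, 64, 128, 256, 512)):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent.
    For each window size n: R = range of the mean-adjusted cumulative sum,
    S = standard deviation; H is the slope of log(mean R/S) vs log(n)."""
    log_rs, log_n = [], []
    for n in window_sizes:
        ratios = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())
            s = w.std()
            if s > 0:
                ratios.append((z.max() - z.min()) / s)
        log_rs.append(np.log(np.mean(ratios)))
        log_n.append(np.log(n))
    return np.polyfit(log_n, log_rs, 1)[0]
```

White noise should give H near 0.5 (the small-sample bias of classical R/S pulls the estimate somewhat above 0.5), while persistent or trending series give larger H.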
Calculation and application of liquidus projection
Institute of Scientific and Technical Information of China (English)
CHEN Shuanglin; CAO Weisheng; YANG Ying; ZHANG Fan; WU Kaisheng; DU Yong; Y.Austin Chang
2006-01-01
Liquidus projection usually refers to a two-dimensional projection of ternary liquidus univariant lines at constant pressure. The algorithms used in Pandat for the calculation of liquidus projection with isothermal lines and invariant reaction equations in a ternary system are presented. These algorithms have been extended to multicomponent liquidus projections and have also been implemented in Pandat. Some examples on ternary and quaternary liquidus projections are presented.
Flow calculation in a bulb turbine
Energy Technology Data Exchange (ETDEWEB)
Goede, E.; Pestalozzi, J.
1987-02-01
In recent years remarkable progress has been made in the field of computational fluid dynamics. When reading the relevant literature, one may sometimes get the impression that most of the problems in this field have already been solved. Upon studying the matter more deeply, however, it is apparent that some questions still remain unanswered. The use of the quasi-3D (Q3D) computational method for calculating the flow in a bulb hydraulic turbine is described.
Calculation of reactor antineutrino spectra in TEXONO
Chen Dong Liang; Mao Ze Pu; Wong, T H
2002-01-01
In low energy reactor antineutrino physics experiments, whether for studies of antineutrino oscillation and antineutrino reactions, or for the measurement of the anomalous magnetic moment of the antineutrino, the flux and the spectra of reactor antineutrinos must be described accurately. The method of calculation of reactor antineutrino spectra is discussed in detail. Furthermore, based on the actual circumstances of the NP2 reactors and the arrangement of the detectors, the flux and the spectra of reactor antineutrinos in TEXONO are worked out.
Perturbative calculation of quasi-normal modes
Siopsis, G
2005-01-01
I discuss a systematic method of analytically calculating the asymptotic form of quasi-normal frequencies. In the case of a four-dimensional Schwarzschild black hole, I expand around the zeroth-order approximation to the wave equation proposed by Motl and Neitzke. In the case of a five-dimensional AdS black hole, I discuss a perturbative solution of the Heun equation. The analytical results are in agreement with the results from numerical analysis.
Theoretical Calculations of Atomic Data for Spectroscopy
Bautista, Manuel A.
2000-01-01
Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics better understand the qualities and limitations of various data sources, and assess how reliable spectral models based on those data are.
Calculation of Loudspeaker Cabinet Diffraction and Correction
Institute of Scientific and Technical Information of China (English)
LE Yi; SHEN Yong; XIA Jie
2011-01-01
A method of calculating the cabinet edge diffractions for a loudspeaker driver mounted in an enclosure is proposed, based on the extended Biot-Tolstoy-Medwin model. Up to the third order, cabinet diffractions are discussed in detail and the diffractive effects on the radiated sound field of the loudspeaker system are quantitatively described, with a correction function built to compensate for the diffractive interference. The method is applied to a practical loudspeaker enclosure that has rectangular facets. The diffractive effects of the cabinet on the forward sound radiation are investigated and predictions of the calculations show quite good agreement with experimental measurements. Most loudspeaker systems employ box-like cabinets. The response of a loudspeaker mounted in a box is much rougher than that of the same driver mounted on a large baffle. Although resonances in the box are partly responsible for the lack of smoothness, a major contribution is the diffraction at the cabinet edges, which degrades the final response performance. Consequently, an analysis of the cabinet diffraction problem is required.
Configuration mixing calculations in soluble models
Cambiaggio, M. C.; Plastino, A.; Szybisz, L.; Miller, H. G.
1983-07-01
Configuration mixing calculations have been performed in two quasi-spin models using basis states which are solutions of a particular set of Hartree-Fock equations. Each of these solutions, even those which do not correspond to the global minimum, is found to contain interesting physical information. Relatively good agreement with the exact lowest-lying states has been obtained. In particular, one obtains a better approximation to the ground state than that provided by Hartree-Fock.
Index calculation by means of harmonic expansion
Imamura, Yosuke
2015-01-01
We review the derivation of superconformal indices by means of supersymmetric localization and spherical harmonic expansion for 3d N=2, 4d N=1, and 6d N=(1,0) supersymmetric gauge theories. We demonstrate the calculation of indices for vector multiplets in each dimension by analysing energy eigenmodes on S^p x R. For the 6d index we consider the perturbative contribution only. We focus on technical details of the harmonic expansion rather than physical applications.
Bias in Dynamic Monte Carlo Alpha Calculations
Energy Technology Data Exchange (ETDEWEB)
Sweezy, Jeremy Ed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nolen, Steven Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adams, Terry R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-02-06
A 1/N bias in the estimate of the neutron time-constant (commonly denoted as α) has been seen in dynamic neutronic calculations performed with MCATK. In this paper we show that the bias is most likely caused by taking the logarithm of a stochastic quantity. We also investigate the known bias due to the particle population control method used in MCATK. We conclude that this bias due to the particle population control method is negligible compared to other sources of bias.
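The log-of-a-stochastic-quantity bias identified here is an instance of Jensen's inequality: since log is concave, E[ln X̄] < ln E[X̄], with a second-order bias of roughly -σ²/(2Nμ²) that shrinks as 1/N. A small Monte Carlo illustration (Python/NumPy, generic positive data rather than MCATK tallies):

```python
import numpy as np

rng = np.random.default_rng(1)
MU = 10.0  # true mean of the gamma(shape=5, scale=2) samples below

def log_mean_bias(n, reps=50_000):
    """Average of ln(sample mean of n draws) minus ln(true mean):
    negative by Jensen's inequality, shrinking roughly like 1/n."""
    samples = rng.gamma(shape=5.0, scale=2.0, size=(reps, n))
    return np.log(samples.mean(axis=1)).mean() - np.log(MU)

bias_small, bias_large = log_mean_bias(10), log_mean_bias(100)
```

Both biases come out negative, with the N = 100 bias roughly ten times smaller in magnitude, matching the 1/N scaling reported for the α estimate.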
Preconditioned iterations to calculate extreme eigenvalues
Energy Technology Data Exchange (ETDEWEB)
Brand, C.W.; Petrova, S. [Institut fuer Angewandte Mathematik, Leoben (Austria)
1994-12-31
Common iterative algorithms to calculate a few extreme eigenvalues of a large, sparse matrix are Lanczos methods or power iterations. They converge at a rate proportional to the separation of the extreme eigenvalues from the rest of the spectrum. Appropriate preconditioning improves the separation of the eigenvalues. Davidson's method and its generalizations exploit this fact. The authors examine a preconditioned iteration that resembles a truncated version of Davidson's method with a different preconditioning strategy.
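As a baseline for the preconditioned schemes discussed, plain power iteration (a Python/NumPy sketch, not the authors' method) makes the separation-dependent convergence explicit: the error contracts by |λ2/λ1| per step, which is why improving the separation via preconditioning pays off.

```python
import numpy as np

def power_iteration(a, iters=500, seed=0):
    """Dominant eigenvalue of a by power iteration; the convergence rate
    is governed by the gap between the extreme eigenvalue and the rest."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(a.shape[0])
    for _ in range(iters):
        v = a @ v
        v /= np.linalg.norm(v)
    return v @ a @ v  # Rayleigh quotient at the converged unit vector
```

For a symmetric matrix with eigenvalues {1, 2, 10} the iteration converges quickly because the ratio 2/10 is small; Davidson-type methods aim to create such gaps when the spectrum is clustered.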
CALCULATION OF KAON ELECTROMAGNETIC FORM FACTOR
Institute of Scientific and Technical Information of China (English)
WANG ZHI-GANG; WAN SHAO-LONG; WANG KE-LIN
2001-01-01
The kaon electromagnetic form factor is calculated in the framework of the coupled Schwinger-Dyson and Bethe-Salpeter formulation, in a simplified impulse approximation (dressed vertex) with a modified flat-bottom potential, which is a combination of flat-bottom potentials taking into consideration the infrared and ultraviolet asymptotic behaviours of the effective quark-gluon coupling. All the numerical results give a good fit to the experimental values.
TINTE. Nuclear calculation theory description report
Energy Technology Data Exchange (ETDEWEB)
Gerwin, H.; Scherer, W.; Lauer, A. [Forschungszentrum Juelich GmbH (DE). Institut fuer Energieforschung (IEF), Sicherheitsforschung und Reaktortechnik (IEF-6); Clifford, I. [Pebble Bed Modular Reactor (Pty) Ltd. (South Africa)
2010-01-15
The Time Dependent Neutronics and Temperatures (TINTE) code system deals with the nuclear and the thermal transient behaviour of the primary circuit of the High-temperature Gas-cooled Reactor (HTGR), taking into consideration the mutual feedback effects in two-dimensional axisymmetric geometry. This document contains a complete description of the theoretical basis of the TINTE nuclear calculation, including the equations solved, solution methods and the nuclear data used in the solution. (orig.)
Warhead Performance Calculations for Threat Hazard Assessment
1996-08-01
A correlation can be drawn between an explosive's heat of combustion, heat of detonation, and its EWF. The method of Baroody and Peters [41] was used to calculate... from air-blast tests can be rationalized to a combination of an explosive's heat of combustion and heat of detonation ratioed to the heat of... Center, China Lake, California, NWC TM 3754, February 1979. 41. Baroody, E. and Peters, S., Heats of Explosion, Heat of Detonation, and Reaction
Toward a nitrogen footprint calculator for Tanzania
Hutton, Mary Olivia; Leach, Allison M.; Leip, Adrian; Galloway, James N.; Bekunda, Mateete; Sullivan, Clare; Lesschen, Jan Peter
2017-03-01
We present the first nitrogen footprint model for a developing country: Tanzania. Nitrogen (N) is a crucial element for agriculture and human nutrition, but in excess it can cause serious environmental damage. The Sub-Saharan African nation of Tanzania faces a two-sided nitrogen problem: while there is not enough soil nitrogen to produce adequate food, excess nitrogen that escapes into the environment causes a cascade of ecological and human health problems. To identify, quantify, and contribute to solving these problems, this paper presents a nitrogen footprint tool for Tanzania. This nitrogen footprint tool is a concept originally designed for the United States of America (USA) and other developed countries. It uses personal resource consumption data to calculate a per-capita nitrogen footprint. The Tanzania N footprint tool is a version adapted to reflect the low-input, integrated agricultural system of Tanzania. This is reflected by calculating two sets of virtual N factors to describe N losses during food production: one for fertilized farms and one for unfertilized farms. Soil mining factors are also calculated for the first time to address the amount of N removed from the soil to produce food. The average per-capita nitrogen footprint of Tanzania is 10 kg N yr⁻¹. 88% of this footprint is due to food consumption and production, while only 12% of the footprint is due to energy use. Although 91% of farms in Tanzania are unfertilized, the large contribution of fertilized farms to N losses causes unfertilized farms to make up just 83% of the food production N footprint. In a developing country like Tanzania, the main audiences for the N footprint tool are community leaders, planners, and developers who can impact decision-making and use the calculator to plan positive changes for nitrogen sustainability in the developing world.
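The virtual-N-factor bookkeeping described above can be illustrated with a small sketch. All numerical values in the example are hypothetical placeholders, not figures from the study:

```python
# Illustrative sketch of per-capita food N footprint accounting with
# separate virtual N factors for fertilized and unfertilized farms.
# All numbers used with it are hypothetical, not values from the paper.

def food_n_footprint(n_consumed, vnf_fert, vnf_unfert, share_fert, soil_mining):
    """Per-capita food N footprint (kg N/yr) for one food category.

    n_consumed  : N in the food actually eaten (kg N/yr)
    vnf_fert    : virtual N factor (kg N lost per kg N consumed),
                  fertilized production
    vnf_unfert  : same, unfertilized production
    share_fert  : fraction of production from fertilized farms
    soil_mining : soil N removed per kg N consumed on unfertilized farms
    """
    # Production losses use a consumption-weighted average virtual N factor
    vnf = share_fert * vnf_fert + (1.0 - share_fert) * vnf_unfert
    production_losses = n_consumed * vnf
    # Soil mining applies only to the unfertilized share of production
    mined = n_consumed * (1.0 - share_fert) * soil_mining
    return n_consumed + production_losses + mined
```

With a higher fertilized share and a larger fertilized-farm virtual N factor, production losses dominate the footprint, mirroring the paper's observation that fertilized farms contribute disproportionately to N losses.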
Automation of 2-loop Amplitude Calculations
Jones, S P
2016-01-01
Some of the tools and techniques that have recently been used to compute Higgs boson pair production at NLO in QCD are discussed. The calculation relies on the use of integral reduction, to reduce the number of integrals which must be computed, and expressing the amplitude in terms of a quasi-finite basis, which simplifies their numeric evaluation. Emphasis is placed on sector decomposition and Quasi-Monte Carlo (QMC) integration which are used to numerically compute the master integrals.
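As an illustration of the QMC idea mentioned above, a rank-1 lattice rule estimates an integral from deterministic, well-spread sample points. The lattice below is a generic two-dimensional Fibonacci choice, unrelated to the master integrals of the actual calculation:

```python
def qmc_lattice(f, n=610, z=(1, 377)):
    """Estimate the integral of f over the unit square [0,1]^2 with a
    rank-1 lattice rule. (n, z) = (610, (1, 377)) is a Fibonacci lattice,
    a standard generic choice in two dimensions."""
    total = 0.0
    for i in range(n):
        # i-th lattice point: fractional parts of i*z/n
        x = (i * z[0] / n) % 1.0
        y = (i * z[1] / n) % 1.0
        total += f(x, y)
    return total / n
```

For smooth integrands such rules converge much faster than plain Monte Carlo, which is why QMC is attractive for evaluating sector-decomposed master integrals numerically.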
Uncertainty calculation in (operational) modal analysis
Pintelon, R.; Guillaume, P.; Schoukens, J.
2007-08-01
In (operational) modal analysis the modal parameters of a structure are identified from the response of that structure to (unmeasurable operational) perturbations. A key issue that remains to be solved is the calculation of uncertainty bounds on the estimated modal parameters. The present paper fills this gap. The theory is illustrated by means of a simulation and a real measurement example (operational modal analysis of a bridge).
Eigenvalue translation method for mode calculations.
Gerck, E; Cruz, C H
1979-05-01
A new method is described for calculating the first few modes of an interferometer; it has several advantages over the Allmat subroutine, the Prony method, and the Fox and Li method. The illustrative results show that, in the cases examined, the eigenvalue translation method is typically 100 times faster than the usual Fox and Li method and ten times faster than Allmat.
Inductance Calculations of Variable Pitch Helical Inductors
2015-08-01
current. Using the classical skin depth definition, we can adjust the effective diameters used to calculate the inductances. The classical skin depth can... are not. The definition of classical skin depth is an approximation that assumes that all the current is flowing evenly within the region encompassed... inductance can be applied to other more complex forms of geometry, including tapered coils, by simply using the more general forms of the self- and
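The classical skin depth adjustment described in the excerpt can be sketched as follows. The one-skin-depth shrink of the diameter is an illustrative assumption, not necessarily the report's exact procedure:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (H/m)

def skin_depth(resistivity, freq, mu_r=1.0):
    """Classical skin depth: delta = sqrt(2 * rho / (omega * mu))."""
    omega = 2.0 * math.pi * freq
    return math.sqrt(2.0 * resistivity / (omega * MU0 * mu_r))

def effective_diameter(d_outer, resistivity, freq):
    """Shrink the conductor diameter by one skin depth before feeding it
    into an inductance formula. The one-delta shrink is an illustrative
    assumption standing in for the report's adjustment."""
    return max(d_outer - skin_depth(resistivity, freq), 0.0)
```

At 1 MHz the skin depth of copper (rho about 1.68e-8 ohm-m) is roughly 65 micrometres, so the correction matters only for fine wire or high frequency.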
Practical Rhumb Line Calculations on the Spheroid
Bennett, G. G.
About ten years ago this author wrote the software for a suite of navigation programmes which was resident in a small hand-held computer. In the course of this work it became apparent that the standard text books of navigation were perpetuating a flawed method of calculating rhumb lines on the Earth considered as an oblate spheroid. On further investigation it became apparent that these incorrect methods were being used in programming a number of calculator/computers and satellite navigation receivers. Although the discrepancies were not large, it was disquieting to compare the results of the same rhumb line calculations from a number of such devices and find variations of some miles when the output was given, and therefore purported to be accurate, to a tenth of a mile in distance and/or a tenth of a minute of arc in position. The problem has been highlighted in the past, and the references at the end of this paper show that a number of methods have been proposed for the amelioration of this problem. This paper summarizes formulae that the author recommends should be used for accurate solutions. Most of these may be found in standard geodetic text books, such as, but new formulae and schemes of solution are also provided which are suitable for use with computers or tables. The latter also take into account situations when a near-indeterminate solution may arise. Some examples are provided in an appendix which demonstrate the methods. The data for these problems do not refer to actual terrestrial situations but have been selected for illustrative purposes only. Practising ships' navigators will find the methods described in detail in this paper to be directly applicable to their work, and the methods should find ready acceptance because they are similar to current practice. None of the references cited at the end of this paper addresses the practical task of calculation, using either a computer or tabular techniques.
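A spheroidal rhumb-line computation of the kind the paper recommends can be sketched using the isometric latitude and a numerically integrated meridian arc. This is a simplified illustration on assumed WGS-84 parameters, not the author's exact formulae:

```python
import math

# WGS-84 ellipsoid parameters (an assumed datum for illustration)
A_AXIS = 6378137.0                     # semi-major axis (m)
FLAT = 1.0 / 298.257223563             # flattening
ECC = math.sqrt(FLAT * (2.0 - FLAT))   # first eccentricity

def isometric_lat(phi):
    """Isometric latitude on the spheroid (phi in radians)."""
    return math.asinh(math.tan(phi)) - ECC * math.atanh(ECC * math.sin(phi))

def meridian_arc(phi):
    """Meridian arc length from the equator (m), by midpoint quadrature."""
    n = 10000
    h = phi / n
    total = 0.0
    for i in range(n):
        p = (i + 0.5) * h
        w = 1.0 - (ECC * math.sin(p)) ** 2
        total += A_AXIS * (1.0 - ECC ** 2) / w ** 1.5 * h
    return total

def rhumb(lat1, lon1, lat2, lon2):
    """Rhumb-line course (degrees) and distance (m) between two points
    given in degrees. The near east-west case is handled separately to
    avoid the near-indeterminate form the paper warns about."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    dpsi = isometric_lat(p2) - isometric_lat(p1)
    course = math.atan2(dlon, dpsi)
    if abs(dpsi) > 1e-12:
        dist = (meridian_arc(p2) - meridian_arc(p1)) / math.cos(course)
    else:
        # along a parallel: arc of the circle of radius nu * cos(phi)
        nu = A_AXIS / math.sqrt(1.0 - (ECC * math.sin(p1)) ** 2)
        dist = abs(dlon) * nu * math.cos(p1)
    return math.degrees(course) % 360.0, dist
```

Using the isometric latitude of the spheroid, rather than its spherical counterpart, is precisely what separates the correct formulae from the flawed textbook method the paper criticizes.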
Risk assessment and risk management of mycotoxins.
2012-01-01
Risk assessment is the process of quantifying the magnitude and exposure, or probability, of a harmful effect to individuals or populations from certain agents or activities. Here, we summarize the four steps of risk assessment: hazard identification, dose-response assessment, exposure assessment, and risk characterization. Risk assessments using these principles have been conducted on the major mycotoxins (aflatoxins, fumonisins, ochratoxin A, deoxynivalenol, and zearalenone) by various regulatory agencies for the purpose of setting food safety guidelines. We critically evaluate the impact of these risk assessment parameters on the estimated global burden of the associated diseases as well as the impact of regulatory measures on food supply and international trade. Apart from the well-established risk posed by aflatoxins, many uncertainties still exist about risk assessments for the other major mycotoxins, often reflecting a lack of epidemiological data. Differences exist in the risk management strategies and in the ways different governments impose regulations and technologies to reduce levels of mycotoxins in the food-chain. Regulatory measures have very little impact on remote rural and subsistence farming communities in developing countries, in contrast to developed countries, where regulations are strictly enforced to reduce and/or remove mycotoxin contamination. However, in the absence of the relevant technologies or the necessary infrastructure, we highlight simple intervention practices to reduce mycotoxin contamination in the field and/or prevent mycotoxin formation during storage.
Probabilistic risk analysis and terrorism risk.
Ezell, Barry Charles; Bennett, Steven P; von Winterfeldt, Detlof; Sokolowski, John; Collins, Andrew J
2010-04-01
Since the terrorist attacks of September 11, 2001, and the subsequent establishment of the U.S. Department of Homeland Security (DHS), considerable efforts have been made to estimate the risks of terrorism and the cost effectiveness of security policies to reduce these risks. DHS, industry, and the academic risk analysis communities have all invested heavily in the development of tools and approaches that can assist decisionmakers in effectively allocating limited resources across the vast array of potential investments that could mitigate risks from terrorism and other threats to the homeland. Decisionmakers demand models, analyses, and decision support that are useful for this task and based on the state of the art. Since terrorism risk analysis is new, no single method is likely to meet this challenge. In this article we explore a number of existing and potential approaches for terrorism risk analysis, focusing particularly on recent discussions regarding the applicability of probabilistic and decision analytic approaches to bioterrorism risks and the Bioterrorism Risk Assessment methodology used by the DHS and criticized by the National Academies and others.
Risk assessment terminology: risk communication part 1
Directory of Open Access Journals (Sweden)
Gaetano Liuzzo
2016-03-01
The paper describes the terminology of risk communication in the view of food safety: the theory of stakeholders, the citizens' involvement, and the community interest and consultation are reported. Different aspects of risk communication (public communication, scientific uncertainty, trust, care, consensus and crisis communication) are discussed.
TEA: A Code Calculating Thermochemical Equilibrium Abundances
Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver
2016-07-01
We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. We tested the code against the method of Burrows & Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows & Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
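The Gibbs-minimization idea behind TEA can be illustrated on a one-dimensional toy problem: a single dissociation reaction with made-up standard potentials. Real codes such as TEA use Lagrangian optimization over many species with element-balance constraints, which this sketch does not attempt:

```python
import math

# Toy Gibbs minimization: the dissociation A2 <-> 2A at fixed T and P,
# parametrized by the extent of reaction x. The standard chemical
# potentials (in units of RT) are hypothetical, not from any database.
G_A2 = -10.0   # mu0/RT for the dimer (made up)
G_A = -4.0     # mu0/RT for the atom (made up)
P = 1.0        # total pressure (bar)

def gibbs(x):
    """Dimensionless Gibbs energy for extent of dissociation x in (0, 1)."""
    n = {"A2": 1.0 - x, "A": 2.0 * x}
    mu0 = {"A2": G_A2, "A": G_A}
    ntot = sum(n.values())
    g = 0.0
    for sp, ni in n.items():
        if ni > 0.0:  # ni*ln(ni) -> 0 as ni -> 0
            g += ni * (mu0[sp] + math.log(ni / ntot * P))
    return g

def minimize_scalar(f, lo=1e-6, hi=1.0 - 1e-6, iters=200):
    """Golden-section search for the minimum of a unimodal function."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0

x_eq = minimize_scalar(gibbs)  # equilibrium extent of dissociation
```

The minimum reproduces the law of mass action: with Delta_G0/RT = 2 for this reaction, the equilibrium constant K = exp(-2) gives x close to 0.18, which the direct minimization recovers.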
Coupled-cluster calculations of nucleonic matter
Hagen, G; Ekström, A; Wendt, K A; Baardsen, G; Gandolfi, S; Hjorth-Jensen, M; Horowitz, C J
2014-01-01
Background: The equation of state (EoS) of nucleonic matter is central for the understanding of bulk nuclear properties, the physics of neutron star crusts, and the energy release in supernova explosions. Purpose: This work presents coupled-cluster calculations of infinite nucleonic matter using modern interactions from chiral effective field theory (EFT). It assesses the role of correlations beyond particle-particle and hole-hole ladders, and the role of three-nucleon-forces (3NFs) in nuclear matter calculations with chiral interactions. Methods: This work employs the optimized nucleon-nucleon NN potential NNLOopt at next-to-next-to leading-order, and presents coupled-cluster computations of the EoS for symmetric nuclear matter and neutron matter. The coupled-cluster method employs up to selected triples clusters and the single-particle space consists of a momentum-space lattice. We compare our results with benchmark calculations and control finite-size effects and shell oscillations via twist-averaged bound...