A Modified APACHE II Score for Predicting Mortality of Variceal ...
African Journals Online (AJOL)
Conclusion: The modified APACHE II score is effective in predicting the outcome of patients with variceal bleeding. A score of ≥ 15 points and a long ICU stay are associated with high mortality. Keywords: liver cirrhosis, periportal fibrosis, portal hypertension, schistosomiasis. Sudan Journal of Medical Sciences Vol. 2 (2) 2007: pp. 105- ...
Khwannimit, Bodin
2008-01-01
The Logistic Organ Dysfunction (LOD) score is an organ dysfunction score that can predict hospital mortality. The aim of this study was to validate the performance of the LOD score against the Acute Physiology and Chronic Health Evaluation II (APACHE II) score in a mixed intensive care unit (ICU) at a tertiary referral university hospital in Thailand. Data were collected prospectively on consecutive ICU admissions over a 24-month period from July 1, 2004 until June 30, 2006. Discrimination was evaluated by the area under the receiver operating characteristic curve (AUROC), calibration by the Hosmer-Lemeshow goodness-of-fit H statistic, and overall fit by the Brier score. Overall, 1,429 patients were enrolled during the study period. Mortality was 20.9% in the ICU and 27.9% in hospital. The median ICU and hospital lengths of stay were 3 and 18 days, respectively. Both models showed excellent discrimination: the AUROC was 0.860 [95% confidence interval (CI) = 0.838-0.882] for the LOD score and 0.898 (95% CI = 0.879-0.917) for APACHE II. The LOD score was well calibrated (Hosmer-Lemeshow H chi-square = 10, p = 0.44), whereas APACHE II calibrated poorly (H chi-square = 75.69, p < 0.001). The Brier score was 0.123 (95% CI = 0.107-0.141) for the LOD score and 0.114 (95% CI = 0.098-0.132) for APACHE II. Thus, the LOD score was found to be accurate for predicting hospital mortality in general critically ill patients in Thailand.
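The three evaluation metrics used in this study — AUROC for discrimination, the Hosmer-Lemeshow H statistic for calibration, and the Brier score for overall fit — can be sketched in plain Python. The toy labels and predicted probabilities below are illustrative, not study data.

```python
import math

def auroc(y, p):
    """AUROC via the rank-sum (Mann-Whitney) identity, with midranks for ties."""
    order = sorted(range(len(p)), key=lambda i: p[i])
    rank = [0.0] * len(p)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and p[order[j + 1]] == p[order[i]]:
            j += 1
        for k in range(i, j + 1):
            rank[order[k]] = (i + j) / 2 + 1  # average rank for tied scores
        i = j + 1
    n1 = sum(y)
    n0 = len(y) - n1
    rank_sum = sum(r for r, yi in zip(rank, y) if yi == 1)
    return (rank_sum - n1 * (n1 + 1) / 2) / (n1 * n0)

def brier(y, p):
    """Mean squared difference between predicted probability and outcome."""
    return sum((pi - yi) ** 2 for yi, pi in zip(y, p)) / len(y)

def hosmer_lemeshow(y, p, groups=4):
    """H statistic: observed vs expected events in risk-ordered groups."""
    pairs = sorted(zip(p, y))
    h = 0.0
    for g in range(groups):
        chunk = pairs[g * len(pairs) // groups:(g + 1) * len(pairs) // groups]
        n = len(chunk)
        if n == 0:
            continue
        o = sum(yi for _, yi in chunk)          # observed events
        e = sum(pi for pi, _ in chunk)          # expected events
        h += (o - e) ** 2 / (e * (1 - e / n))   # HL group contribution
    return h

y = [0, 0, 1, 1]
p = [0.1, 0.2, 0.8, 0.9]
print(auroc(y, p))   # perfectly ranked -> 1.0
print(brier(y, p))
```

A low H statistic (high p-value) indicates good calibration, which is how the LOD score outperformed APACHE II here despite a slightly lower AUROC.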
Directory of Open Access Journals (Sweden)
Amina Godinjak
2016-11-01
Objective. The aim was to determine SAPS II and APACHE II scores in medical intensive care unit (MICU) patients, to compare their ability to predict patient outcome, and to compare them with actual hospital mortality rates for different subgroups of patients. Methods. One hundred and seventy-four patients were included in this analysis over a one-year period in the MICU, Clinical Center, University of Sarajevo. The following patient data were obtained: demographics, admission diagnosis, SAPS II and APACHE II scores, and final outcome. Results. Out of 174 patients, 70 (40.2%) died. Mean SAPS II and APACHE II scores in all patients were 48.4±17.0 and 21.6±10.3, respectively, and both differed significantly between survivors and non-survivors. SAPS II >50.5 and APACHE II >27.5 can predict the risk of mortality in these patients. There was no statistically significant difference in the clinical values of SAPS II vs APACHE II (p=0.501). A statistically significant positive correlation was established between the values of SAPS II and APACHE II (r=0.708; p=0.001). Patients with an admission diagnosis of sepsis/septic shock had the highest values of both SAPS II and APACHE II scores, and also the highest hospital mortality rate, 55.1%. Conclusion. Both APACHE II and SAPS II had an excellent ability to discriminate between survivors and non-survivors. There was no significant difference in their clinical values, and a positive correlation was established between them. Sepsis/septic shock patients had the highest predicted and observed hospital mortality rates.
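The positive correlation reported between the two scores (r = 0.708) is an ordinary Pearson coefficient, which can be computed as follows; the paired scores below are invented for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

saps = [30, 45, 50, 62, 71]    # hypothetical SAPS II scores
apache = [14, 20, 23, 29, 33]  # hypothetical APACHE II scores
print(pearson_r(saps, apache))
```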
D-dimer as marker for microcirculatory failure: correlation with LOD and APACHE II scores.
Angstwurm, Matthias W A; Reininger, Armin J; Spannagl, Michael
2004-01-01
The relevance of plasma D-dimer levels as a marker for morbidity and organ dysfunction in severely ill patients is largely unknown. In a prospective study we determined D-dimer plasma levels of 800 unselected patients at admission to our intensive care unit. D-dimer levels were elevated in 91% of the patients' samples, in some patients up to several hundredfold compared with normal values. The highest mean D-dimer values were present in the patient group with thromboembolic diseases, and particularly in non-survivors of pulmonary embolism. In patients with circulatory impairment (r=0.794) and in patients with infections (r=0.487), a statistically significant correlation was present between D-dimer levels and the APACHE II score (P<0.001). The Logistic Organ Dysfunction score (LOD, P<0.001) correlated with D-dimer levels only in patients with circulatory impairment (r=0.474). By contrast, patients without circulatory impairment showed no correlation of D-dimer levels with the APACHE II or LOD score. Taking all patients together, no correlations of D-dimer levels with single organ failure or with indicators of infection could be detected. In conclusion, D-dimer plasma levels correlated strongly with disease severity and organ dysfunction in patients with circulatory impairment or infections, suggesting that elevated D-dimer levels may reflect the extent of microcirculatory failure. Thus, a therapeutic strategy to improve the microcirculation in such patients may be monitored using D-dimer plasma levels.
Rathnakar, Surag Kajoor; Vishnu, Vikram Hubbanageri; Muniyappa, Shridhar; Prasath, Arun
2017-02-01
Acute Pancreatitis (AP) is one of the common conditions encountered in the emergency room. The course of the disease ranges from mild to severe acute forms. Most episodes are mild and subside spontaneously within 3 to 5 days. In contrast, Severe Acute Pancreatitis (SAP) occurs in around 15-20% of all cases, and its mortality can range from 10% to 85% across various centres and countries. In such a situation, an indicator is needed that can classify an attack as severe or mild as early as possible, and it should be sensitive and specific enough to be trusted. PANC-3 is such a scoring system for predicting the outcome of an attack of AP. The aim was to assess the accuracy and predictive power of the PANC-3 scoring system compared with APACHE II in predicting the severity of an attack of AP. This prospective study was conducted on 82 patients admitted with a diagnosis of pancreatitis. Investigations to evaluate PANC-3 and APACHE II were done on all patients and both scores were calculated. PANC-3 had a sensitivity of 82.6% and a specificity of 77.9%, with a Positive Predictive Value (PPV) of 0.59 and a Negative Predictive Value (NPV) of 0.92. The sensitivity of APACHE II in predicting SAP was 91.3% and its specificity 96.6%, with a PPV of 0.91 and an NPV of 0.96. Our study shows that PANC-3 can predict the severity of pancreatitis as efficiently as APACHE II. Interpretation of PANC-3 does not require expertise and the score can be applied at the time of admission, which is an advantage over classical scoring systems.
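Sensitivity, specificity, PPV and NPV all come from a 2x2 confusion matrix. The sketch below uses illustrative counts chosen to roughly match the PANC-3 figures quoted above (82 patients); they are an assumption, not the paper's raw data.

```python
def diagnostics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test statistics."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    ppv = tp / (tp + fp)          # precision among test positives
    npv = tn / (tn + fn)          # reliability of a negative test
    return sensitivity, specificity, ppv, npv

# hypothetical counts: 23 severe and 59 mild cases out of 82 patients
sens, spec, ppv, npv = diagnostics(tp=19, fp=13, fn=4, tn=46)
print(round(sens, 3), round(spec, 3), round(ppv, 2), round(npv, 2))
```

Note how a test can have modest PPV (0.59) yet high NPV (0.92) when severe cases are the minority, which is why PANC-3 is most useful for ruling out severe disease at admission.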
Directory of Open Access Journals (Sweden)
А. V. Sotnikov
2014-01-01
The appropriate treatment policy for patients with acute disease should take short-term prognosis into account, based on an assessment of disease severity. Adequate assessment of disease severity and prognosis allows the indications for transferring patients to the resuscitation and intensive care department to be defined more precisely. Disease severity of patients who underwent polychemotherapy was assessed using the APACHE II scoring system.
Scoring Rules for Subjective Probability Distributions
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
The theoretical literature has a rich characterization of scoring rules for eliciting the subjective beliefs that an individual has for continuous events, but under the restrictive assumption of risk neutrality. It is well known that risk aversion can dramatically affect the incentives to correctly report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk averse agents have to distort reports. We characterize the comparable implications of the general case of a risk averse agent when facing a popular scoring rule over continuous events, and find that these concerns do not apply with anything like the same force. For empirically plausible levels of risk aversion, one can ...
Directory of Open Access Journals (Sweden)
Jae Woo Choi
2017-08-01
Background: The Acute Physiology and Chronic Health Evaluation II (APACHE II) model has been widely used in Korea. However, there have been few studies on the APACHE IV model in Korean intensive care units (ICUs). The aim of this study was to compare the ability of APACHE IV and APACHE II to predict hospital mortality, and to investigate the ability of APACHE IV as a critical care triage criterion. Methods: The study was designed as a prospective cohort study. Discrimination and calibration were measured using the area under the receiver operating characteristic curve (AUROC) and the Hosmer-Lemeshow goodness-of-fit test, respectively. We also calculated the standardized mortality ratio (SMR). Results: The APACHE IV score, the Charlson Comorbidity Index (CCI) score, acute respiratory distress syndrome, and unplanned ICU admissions were independently associated with hospital mortality. The calibration, discrimination, and SMR of APACHE IV were good (H = 7.67, P = 0.465; C = 3.42, P = 0.905; AUROC = 0.759; SMR = 1.00). However, the explanatory power of an APACHE IV score >93 alone for hospital mortality was low, at 44.1%. The explanatory power increased to 53.8% when hospital mortality was predicted using a model combining an APACHE IV score >93, medical admission, and CCI >3. However, the discriminative ability of this prediction model was unsatisfactory (C index <0.70). Conclusions: APACHE IV presented good discrimination, calibration, and SMR for hospital mortality.
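The standardized mortality ratio reported here is simply observed deaths divided by the deaths the model expects, i.e. the sum of its predicted death probabilities. A minimal sketch with invented probabilities:

```python
def standardized_mortality_ratio(observed_deaths, predicted_probs):
    """SMR = observed deaths / expected deaths; 1.0 means the model's
    predictions match reality on average, >1 means excess mortality."""
    expected = sum(predicted_probs)
    return observed_deaths / expected

# hypothetical cohort: the model predicts 0.5 death risk for each of 20 patients
print(standardized_mortality_ratio(10, [0.5] * 20))  # -> 1.0
```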
Energy Technology Data Exchange (ETDEWEB)
Tzeng, Wen Sheng; Wu, Reng Hong; Lin, Ching Yih; Chen, Jyh Jou; Sheu, Ming Juen; Koay, Lok Beng; Lee, Chuan [Chi-Mei Foundation Medical Center, Tainan (China)
2009-10-15
This study was designed to determine if existing methods of grading liver function that have been developed in non-Asian patients with cirrhosis can be used to predict mortality in Asian patients treated for refractory variceal hemorrhage by the use of the transjugular intrahepatic portosystemic shunt (TIPS) procedure. Data for 107 consecutive patients who underwent an emergency TIPS procedure were retrospectively analyzed. Acute Physiology and Chronic Health Evaluation (APACHE II), Child-Pugh and Model for End-stage Liver Disease (MELD) scores were calculated. Survival analyses were performed to evaluate the ability of the various models to predict 30-day, 60-day and 360-day mortality. The ability of stratified APACHE II, Child-Pugh, and MELD scores to predict survival was assessed by the use of Kaplan-Meier analysis with the log-rank test. No patient died during the TIPS procedure, but 82 patients died during the follow-up period. Thirty patients died within 30 days after the TIPS procedure; 37 patients died within 60 days and 53 patients died within 360 days. Univariate analysis indicated that hepatorenal syndrome, use of inotropic agents and mechanical ventilation were associated with elevated 30-day mortality (p < 0.05). Multivariate analysis showed that a Child-Pugh score > 11 or a MELD score > 20 predicted increased risk of death at 30, 60 and 360 days (p < 0.05). APACHE II scores could only predict mortality at 360 days (p < 0.05). A Child-Pugh score > 11 or a MELD score > 20 is predictive of mortality in Asian patients with refractory variceal hemorrhage treated with the TIPS procedure. An APACHE II score is not predictive of early mortality in this patient population.
Directory of Open Access Journals (Sweden)
Sundaramoorthy VijayGanapathy
2017-11-01
Purpose: Urosepsis implies clinically evident severe infection of the urinary tract with features of systemic inflammatory response syndrome (SIRS). We validate the role of a single Acute Physiology and Chronic Health Evaluation II (APACHE II) score at 24 hours after admission in predicting mortality in urosepsis. Materials and Methods: A prospective observational study was done in 178 patients admitted with urosepsis to the Department of Urology in a tertiary care institute from January 2015 to August 2016. Patients >18 years diagnosed with urosepsis using SIRS criteria, with positive urine or blood culture for bacteria, were included. At 24 hours after admission to the intensive care unit, the APACHE II score was calculated using 12 physiological variables, age and chronic health. Results: Mean±standard deviation (SD) APACHE II score was 26.03±7.03. It was 24.31±6.48 in survivors and 32.39±5.09 in those who expired (p<0.001). Among patients undergoing surgery, the mean±SD score was higher in those who expired (30.74±4.85) than in survivors (24.30±6.54) (p<0.001). Receiver operating characteristic (ROC) analysis revealed an area under the curve (AUC) of 0.825, with a cutoff of 25.5 being 94.7% sensitive and 56.4% specific to predict mortality. The mean±SD score in those undergoing surgery was 25.22±6.70, lower than in those who did not undergo surgery (28.44±7.49) (p=0.007). ROC analysis revealed an AUC of 0.760, with a cutoff of 25.5 being 94.7% sensitive and 45.6% specific to predict mortality even after surgery. Conclusions: A single APACHE II score assessed at 24 hours after admission was able to predict morbidity, mortality, need for surgical intervention, length of hospitalization, treatment success and outcome in urosepsis patients.
Liu, Xiao-Wei; Ma, Tao; Li, Lu-Lu; Qu, Bo; Liu, Zhi
2017-07-01
The present study investigated the predictive values of urine paraquat (PQ) concentration, dose of poison, arterial blood lactate and Acute Physiology and Chronic Health Evaluation (APACHE) II score for the prognosis of patients with acute PQ poisoning. A total of 194 patients with acute PQ poisoning, hospitalized between April 2012 and January 2014 at the First Affiliated Hospital of China Medical University (Shenyang, China), were selected and divided into survival and mortality groups. Logistic regression analysis, receiver operating characteristic (ROC) curve analysis and Kaplan-Meier curves were applied to evaluate the value of urine PQ concentration, dose of poison, arterial blood lactate and APACHE II score for predicting prognosis. Initial urine PQ concentration (C0), dose of poison, arterial blood lactate and APACHE II score were significantly higher in the mortality group than in the survival group (all P<0.05). C0, dose of poison and arterial blood lactate correlated with the mortality risk of acute PQ poisoning (all P<0.05). The areas under the curve (AUC) of C0, dose of poison, arterial blood lactate and APACHE II score for predicting mortality within 28 days were 0.921, 0.887, 0.808 and 0.648, respectively. The AUC values of C0 for predicting early and delayed mortality were 0.890 and 0.764, respectively. The AUC values of the urine PQ concentration on the day after poisoning (Csec) and of the rebound rate of urine PQ concentration for predicting mortality within 28 days were 0.919 and 0.805, respectively. The 28-day survival rate of patients with C0 ≤32.2 µg/ml (42/71; 59.2%) was significantly higher than that of patients with C0 >32.2 µg/ml (38/123; 30.9%). These results suggest that the initial urine PQ concentration may be the optimal index for predicting the prognosis of patients with acute PQ poisoning. Additionally, dose of poison, arterial blood lactate, Csec and rebound rate also have referential significance.
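Cut-off values such as C0 = 32.2 µg/ml are typically read off the ROC curve by maximizing Youden's J (sensitivity + specificity - 1). A small sketch of that selection on made-up concentrations and outcomes:

```python
def best_cutoff(values, died):
    """Scan candidate cut-offs; return the one maximizing Youden's J."""
    best = (None, -1.0)
    for c in sorted(set(values)):
        tp = sum(1 for v, d in zip(values, died) if v > c and d)
        fn = sum(1 for v, d in zip(values, died) if v <= c and d)
        tn = sum(1 for v, d in zip(values, died) if v <= c and not d)
        fp = sum(1 for v, d in zip(values, died) if v > c and not d)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j)
    return best

# hypothetical urine concentrations (µg/ml) and 28-day outcomes
print(best_cutoff([10, 25, 40, 80], [False, False, True, True]))
```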
Posterior probability of linkage and maximal lod score.
Génin, E; Martinez, M; Clerget-Darpoux, F
1995-01-01
To detect linkage between a trait and a marker, Morton (1955) proposed to calculate the lod score z(theta 1) at a given value theta 1 of the recombination fraction. If z(theta 1) reaches +3 then linkage is concluded. However, in practice, lod scores are calculated for different values of the recombination fraction between 0 and 0.5 and the test is based on the maximum value of the lod score Zmax. The impact of this deviation of the test on the probability that in fact linkage does not exist, when linkage was concluded, is documented here. This posterior probability of no linkage can be derived by using Bayes' theorem. It is less than 5% when the lod score at a predetermined theta 1 is used for the test. But, for a Zmax of +3, we showed that it can reach 16.4%. Thus, considering a composite alternative hypothesis instead of a single one decreases the reliability of the test. The reliability decreases rapidly when Zmax is less than +3. Given a Zmax of +2.5, there is a 33% chance that linkage does not exist. Moreover, the posterior probability depends not only on the value of Zmax but also jointly on the family structures and on the genetic model. For a given Zmax, the chance that linkage exists may then vary.
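The Bayes computation behind these figures is short: with a likelihood ratio of 10^Z in favor of linkage at a predetermined theta1, and a prior probability of linkage of about 0.02 (the classical roughly 1:50 prior, an assumption here), a lod of +3 leaves just under a 5% posterior probability of no linkage, consistent with the abstract. The maximized Zmax case requires the adjusted distribution the authors derive and is not reproduced by this formula.

```python
def posterior_no_linkage(lod, prior_linkage=0.02):
    """P(no linkage | data) for a lod score at a predetermined theta,
    using posterior odds = prior odds * 10**lod."""
    lr = 10.0 ** lod              # likelihood ratio in favor of linkage
    num = 1.0 - prior_linkage     # prior weight on "no linkage"
    return num / (prior_linkage * lr + num)

print(posterior_no_linkage(3.0))  # just under 0.05
```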
Optimization of continuous ranked probability score using PSO
Directory of Open Access Journals (Sweden)
Seyedeh Atefeh Mohammadi
2015-07-01
Weather forecasting is a major concern in various industries such as agriculture, aviation, maritime, tourism and transportation. A good weather prediction may reduce the impact of natural disasters and unexpected events. This paper presents an empirical investigation of predicting weather temperature using the continuous ranked probability score (CRPS). The mean and standard deviation of the normal density function are linear combinations of the components of the ensemble system. The resulting optimization model is solved using particle swarm optimization (PSO) and the results are compared with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. The preliminary results indicate that the proposed PSO provides better results in terms of the root-mean-square deviation criterion than the alternative BFGS method.
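For a Gaussian predictive distribution, the CRPS being minimized has a closed form (the Gneiting-Raftery expression), which makes it a convenient objective for PSO or BFGS. A sketch, where mu and sigma are the forecast mean and spread and x the verifying observation:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def crps_gaussian(mu, sigma, x):
    """Closed-form CRPS of a N(mu, sigma^2) forecast against observation x;
    lower is better."""
    z = (x - mu) / sigma
    return sigma * (z * (2 * norm_cdf(z) - 1) + 2 * norm_pdf(z) - 1 / math.sqrt(math.pi))

# a PSO or BFGS routine would minimize the mean CRPS over the weights that
# map ensemble members to (mu, sigma); here we just evaluate the score
print(crps_gaussian(0.0, 1.0, 0.0))
```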
National Aeronautics and Space Administration — Probability Calibration by the Minimum and Maximum Probability Scores in One-Class Bayes Learning for Anomaly Detection. Guichong Li, Nathalie Japkowicz, Ian Hoffman, ...
Bharathan, Raghuram
2015-01-01
If you are a Java developer or a manager who has experience with Apache Maven and want to extend your knowledge, then this is the ideal book for you. Apache Maven Cookbook is for those who want to learn how Apache Maven can be used for build automation. It is also meant for those familiar with Apache Maven, but want to understand the finer nuances of Maven and solve specific problems.
Li, Junhui; Li, Yingchuan; Sheng, Xiaohua; Wang, Feng; Cheng, Dongsheng; Jian, Guihua; Li, Yongguang; Feng, Liang; Wang, Niansong
2018-03-29
Both the Acute Physiology and Chronic Health Evaluation II (APACHE II) score and the mean platelet volume/platelet count ratio (MPR) can independently predict adverse outcomes in critically ill patients. This study investigated whether their combination could perform better in predicting the prognosis of patients with acute kidney injury (AKI) who received continuous renal replacement therapy (CRRT). Two hundred twenty-three patients with AKI who underwent CRRT between January 2009 and December 2014 in a Chinese university hospital were enrolled. They were divided into survivor and non-survivor groups based on status at discharge. Receiver operating characteristic (ROC) curves were used for MPR and the APACHE II score, and to determine the optimal cut-off value of MPR for in-hospital mortality. Factors associated with mortality were identified by univariate and multivariate logistic regression analysis. The mean age of the patients was 61.4 years, and the overall in-hospital mortality was 48.4%. Acute cardiorenal syndrome (ACRS) was the most common cause of AKI. The optimal cut-off value of MPR for mortality was 0.099, with an area under the ROC curve (AUC) of 0.636. The AUC increased to 0.851 with the addition of the APACHE II score. The mortality of patients with MPR > 0.099 was 56.4%, significantly higher than that of the control group with MPR ≤ 0.099 (39.6%, P = 0.012). Logistic regression analysis showed that average number of organ failures (OR = 2.372), APACHE II score (OR = 1.187), age (OR = 1.028) and vasopressor administration (OR = 38.130) were significantly associated with poor prognosis. Severity of illness was significantly associated with prognosis of patients with AKI. The combination of MPR and APACHE II score may be helpful in predicting the short-term outcome of AKI. © 2018 The Author(s). Published by S. Karger AG, Basel.
Directory of Open Access Journals (Sweden)
Travaglino Francesco
2012-08-01
Abstract Background: The aim of our study was to evaluate the prognostic value of MR-proADM and PCT levels in febrile patients in the ED in comparison with a disease severity index score, the APACHE II score. We also evaluated the ability of MR-proADM and PCT to predict hospitalization. Methods: This was an observational, multicentric study. We enrolled 128 patients referred to the ED with high fever and a suspicion of severe infection such as sepsis, lower respiratory tract infections, urinary tract infections, gastrointestinal infections, soft tissue infections, central nervous system infections, or osteomyelitis. The APACHE II score was calculated for each patient. Results: The median MR-proADM value was 0.5 nmol/l in controls, compared with 0.85 nmol/l in patients. MR-proADM and PCT levels increased significantly across APACHE II quartiles. In the respiratory infections, urinary infections, and sepsis-septic shock groups we found correlations between APACHE II and MR-proADM, and between MR-proADM and PCT. We evaluated the ability of MR-proADM and PCT to predict hospitalization in patients admitted to our emergency departments with fever. MR-proADM alone had an AUC of 0.694, while PCT alone had an AUC of 0.763. The combined use of PCT and MR-proADM showed an AUC of 0.79. Conclusions: The present study highlights the way in which MR-proADM and PCT may be helpful in the febrile patient's care in the ED. Our data support the prognostic role of MR-proADM and PCT in that setting, as demonstrated by the correlation with the APACHE II score. The combined use of the two biomarkers can predict subsequent hospitalization of febrile patients. The rational use of these two molecules could lead to several advantages, such as faster diagnosis, more accurate risk stratification, and optimization of treatment, with consequent benefit to the patient and ...
Garg, Nishant
2015-01-01
This book is for readers who want to know more about Apache Kafka at a hands-on level; the key audience is those with software development experience but no prior exposure to Apache Kafka or similar technologies. It is also useful for enterprise application developers and big data enthusiasts who have worked with other publisher-subscriber-based systems and want to explore Apache Kafka as a futuristic solution.
Quantification of type I error probabilities for heterogeneity LOD scores.
Abreu, Paula C; Hodge, Susan E; Greenberg, David A
2002-02-01
Locus heterogeneity is a major confounding factor in linkage analysis. When no prior knowledge of linkage exists, and one aims to detect linkage and heterogeneity simultaneously, classical distribution theory of log-likelihood ratios does not hold. Despite some theoretical work on this problem, no generally accepted practical guidelines exist. Nor has anyone rigorously examined the combined effect of testing for linkage and heterogeneity and simultaneously maximizing over two genetic models (dominant, recessive). The effect of linkage phase represents another uninvestigated issue. Using computer simulation, we investigated the type I error (P value) of the "admixture" heterogeneity LOD (HLOD) score, i.e., the LOD score maximized over both the recombination fraction theta and the admixture parameter alpha, and we compared this with the P values when one maximizes only with respect to theta (i.e., the standard LOD score). We generated datasets of phase-known and phase-unknown nuclear families of sizes k = 2, 4, and 6 children, under fully penetrant autosomal dominant inheritance. We analyzed these datasets (1) assuming a single genetic model and maximizing the HLOD over theta and alpha; and (2) maximizing the HLOD additionally over two dominance models (dominant vs. recessive), then subtracting a 0.3 correction. For both (1) and (2), P values increased with family size k; rose less for phase-unknown families than for phase-known ones, with the former approaching the latter as k increased; and did not exceed the one-sided mixture distribution ξ = (1/2)χ²(1) + (1/2)χ²(2). Thus, maximizing the HLOD over theta and alpha appears to add considerably less than an additional degree of freedom to the associated χ²(1) distribution. We conclude with practical guidelines for linkage investigators. Copyright 2002 Wiley-Liss, Inc.
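The admixture HLOD being simulated can be written directly from its definition: each family contributes alpha * 10^lod_i(theta) + (1 - alpha), and the score is maximized over both parameters. A grid-search sketch; the per-family lod inputs are invented:

```python
import math

def hlod_max(lods_by_theta):
    """Maximize the admixture HLOD over theta and alpha on a grid.
    lods_by_theta maps each candidate theta to per-family lod scores."""
    best = (-math.inf, None, None)
    for theta, lods in lods_by_theta.items():
        for i in range(101):
            alpha = i / 100  # proportion of linked families
            score = sum(math.log10(alpha * 10 ** l + (1 - alpha)) for l in lods)
            if score > best[0]:
                best = (score, theta, alpha)
    return best

# hypothetical per-family lods at two recombination fractions
score, theta, alpha = hlod_max({0.05: [1.2, -0.4, 0.9], 0.20: [0.8, 0.1, 0.5]})
print(round(score, 3), theta, alpha)
```

Because alpha = 0 forces every family's contribution to log10(1) = 0, the HLOD is bounded below by zero, which is one reason its null distribution differs from that of the standard LOD.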
Laurie, Ben
2003-01-01
Apache is far and away the most widely used web server platform in the world. This versatile server runs more than half of the world's existing web sites. Apache is both free and rock-solid, running more than 21 million web sites ranging from huge e-commerce operations to corporate intranets and smaller hobby sites. With this new third edition of Apache: The Definitive Guide, web administrators new to Apache will come up to speed quickly, experienced administrators will find the logically organized, concise reference sections indispensable, and system programmers interested in customizing ...
Zabolotskikh, I B; Musaeva, T S; Denisova, E A
2012-01-01
To estimate the efficiency of the APACHE II, APACHE III, SAPS II, SAPS III and SOFA scales for obstetric patients with severe sepsis, a retrospective analysis of 186 medical records was performed, covering pregnant women with pulmonary sepsis, 40 women with urosepsis and 66 puerperas with abdominal sepsis. The mean age of the women was 26.7 (22.4-34.5) years. In puerperas with abdominal sepsis, the APACHE II, APACHE III, SAPS II, SAPS III and SOFA scales showed good calibration; however, high discrimination was observed only for APACHE III, SAPS III and SOFA (AUROC 0.95, 0.93 and 0.92, respectively). APACHE III and SOFA provided a qualitative prognosis in pregnant women with urosepsis; the discrimination of these scales considerably exceeded that of APACHE II, SAPS II and SAPS III (AUROC 0.73, 0.74 and 0.79, respectively). In pregnant women with pulmonary sepsis, the APACHE II scale was inapplicable because of a lack of calibration (chi-square = 13.1; p < 0.01), and the other scales (APACHE III, SAPS II, SAPS III, SOFA) showed insufficient discrimination (AUROC < 0.9). Assessment of the prognostic capabilities of the scoring scales showed that APACHE III, SAPS III and SOFA can be used for mortality prognosis in puerperas with abdominal sepsis; in pregnant women with urosepsis, only APACHE III and SOFA; and in pulmonary sepsis, SAPS III and APACHE III only with additional clinical information.
Edstrom, Johan; Kesler, Heath
2013-01-01
The book is a fast-paced guide full of step-by-step instructions covering all aspects of application development using Apache Karaf.Learning Apache Karaf will benefit all Java developers and system administrators who need to develop for and/or operate Karaf's OSGi-based runtime. Basic knowledge of Java is assumed.
Directory of Open Access Journals (Sweden)
Giacobbe P.
2013-04-01
First, we summarize the four-year-long efforts undertaken to build the final setup of the APACHE Project, a photometric transit search for small-size planets orbiting bright, low-mass M dwarfs. Next, we describe the present status of the APACHE survey, officially started in July 2012 at the site of the Astronomical Observatory of the Autonomous Region of the Aosta Valley, in the western Italian Alps. Finally, we briefly discuss the potentially far-reaching consequences of a multi-technique characterization program of the (potentially planet-bearing) APACHE targets.
Longo, João Sávio Ceregatti
2013-01-01
Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. This Starter style guide takes the reader through the basic workflow of Apache Wicket in a practical and friendly style.Instant Apache Wicket 6 is for people who want to learn the basics of Apache Wicket 6 and who already have some experience with Java and object-oriented programming. Basic knowledge of web concepts like HTTP and Ajax will be an added advantage.
Withanawasam, Jayani
2015-01-01
If you are a Java developer or data scientist, haven't worked with Apache Mahout before, and want to get up to speed on implementing machine learning on big data, then this is the perfect guide for you.
Gazzarini, Andrea
2015-01-01
If you are a competent developer with experience of working with technologies similar to Apache Solr and want to develop efficient search applications, then this book is for you. Familiarity with the Java programming language is required.
Giacomelli, Piero
2013-01-01
Apache Mahout Cookbook uses over 35 recipes packed with illustrations and real-world examples to help beginners as well as advanced programmers get acquainted with the features of Mahout. Apache Mahout Cookbook is great for developers who want a fresh and fast introduction to Mahout coding. No previous knowledge of Mahout is required, and even skilled developers or system administrators will benefit from the various recipes presented.
Turatti, Maurizio
2013-01-01
Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. The book follows a starter approach, using Maven to create and build a new Java application or web project from scratch. Instant Apache Maven Starter is great for Java developers new to Apache Maven, but also for experts looking for immediate information. Moreover, only 20% of the necessary information about Maven is used in 80% of the activities; this book focuses on that most important information, the pragmatic parts you actually use.
Khare, Tanuj
2012-01-01
This book is a step-by-step tutorial for anyone wanting to learn Apache Tomcat 7 from scratch. There are plenty of illustrations and examples to take you from novice to expert with minimal strain. If you are a J2EE administrator, migration administrator, technical architect, or project manager for a web hosting domain and are interested in Apache Tomcat 7, then this book is for you. If you are responsible for the installation, configuration, and management of Tomcat 7, this book will also help you.
International Nuclear Information System (INIS)
Daniels, S.
1994-01-01
Trying to drum up business for what would be the first private temporary storage facility for spent nuclear fuel rods, the Mescalero Apaches are inviting officials of 30 utilities to convene March 10 at the tribe's New Mexico reservation. The state public utilities commission will also attend the meeting, which grew from an agreement the tribe signed last month with Minneapolis-based Northern States Power Co.
Neeraj, Nishant
2013-01-01
Mastering Apache Cassandra is a practical, hands-on guide with step-by-step instructions. The smooth and easy tutorial approach focuses on showing people how to utilize Cassandra to its full potential. This book is aimed at intermediate Cassandra users. It is best suited for startups where developers have to wear multiple hats: programming, DevOps, release management, convincing clients, and handling failures. No prior knowledge of Cassandra is required.
Apache 2 Pocket Reference For Apache Programmers & Administrators
Ford, Andrew
2008-01-01
Even if you know the Apache web server inside and out, you still need an occasional on-the-job reminder -- especially if you're moving to the newer Apache 2.x. Apache 2 Pocket Reference gives you exactly what you need to get the job done without forcing you to plow through a cumbersome, doorstop-sized reference. This book provides essential information to help you configure and maintain the server quickly, with brief explanations that get directly to the point. It covers Apache 2.x, giving webmasters, web administrators, and programmers a quick and easy reference solution. This pocket reference…
Lim, Grace; Horowitz, Jeanne M; Berggruen, Senta; Ernst, Linda M; Linn, Rebecca L; Hewlett, Bradley; Kim, Jennifer; Chalifoux, Laurie A; McCarthy, Robert J
2016-11-01
To evaluate the hypothesis that assigning grades to magnetic resonance imaging (MRI) findings of suspected placenta accreta correlates with hemorrhagic outcomes, we chose a single-center, retrospective, observational design. Nulliparous or multiparous women who had antenatal placental MRI performed at a tertiary-level academic hospital were included; cases with antenatal placental MRI were compared with cases without MRI. Two radiologists assigned a probability score for accreta to each study. Estimated blood loss and transfusion requirements were compared among groups by the Kruskal-Wallis H test. Thirty-five cases had placental MRI performed. MRI performance was associated with higher blood loss compared with the non-MRI group (2600 [1400-4500] mL vs 900 [600-1500] mL, P < …). For accreta, probability scores for antenatal placental MRI may not be associated with increasing degrees of hemorrhage. Continued research is warranted to determine the effectiveness of assigning probability scores for antenatal accreta imaging studies, combined with clinical indices of suspicion, in assisting with antenatal multidisciplinary team planning for operative management of this morbid condition. Copyright © 2016 Elsevier Inc. All rights reserved.
A probability score for preoperative prediction of type 2 diabetes remission following RYGB surgery
Still, Christopher D.; Wood, G. Craig; Benotti, Peter; Petrick, Anthony T.; Gabrielsen, Jon; Strodel, William E.; Ibele, Anna; Seiler, Jamie; Irving, Brian A.; Celaya, Melisa P.; Blackstone, Robin; Gerhard, Glenn S.; Argyropoulos, George
2014-01-01
BACKGROUND Type 2 diabetes (T2D) is a metabolic disease with significant medical complications. Roux-en-Y gastric bypass (RYGB) surgery is one of the few interventions that remit T2D in ~60% of patients. However, there is no accurate method for predicting preoperatively the probability for T2D remission. METHODS A retrospective cohort of 2,300 RYGB patients at Geisinger Clinic was used to identify 690 patients with T2D and complete electronic data. Two additional T2D cohorts (N=276, and N=113) were used for replication at 14 months following RYGB. Kaplan-Meier analysis was used in the primary cohort to create survival curves until remission. A Cox proportional hazards model was used to estimate the hazard ratios on T2D remission. FINDINGS Using 259 preoperative clinical variables, four (use of insulin, age, HbA1c, and type of antidiabetic medication) were sufficient to develop an algorithm that produces a type 2 diabetes remission (DiaRem) score over five years. The DiaRem score spans from 0 to 22 and was divided into five groups corresponding to five probability-ranges for T2D remission: 0–2 (88%–99%), 3–7 (64%–88%), 8–12 (23%–49%), 13–17 (11%–33%), 18–22 (2%–16%). The DiaRem scores in the replication cohorts, as well as under various definitions of diabetes remission, conformed to the DiaRem score of the primary cohort. INTERPRETATION The DiaRem score is a novel preoperative method for predicting the probability (from 2% to 99%) for T2D remission following RYGB surgery. FUNDING This research was supported by the Geisinger Health System and the National Institutes of Health. PMID:24579062
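The five DiaRem bands above map a preoperative score directly to a five-year remission-probability range; that lookup can be sketched as a small helper. The function name `diarem_band` is our own invention; the band boundaries and probability ranges are those reported in the abstract.

```python
# Hedged sketch: map a DiaRem score (0-22) to its published score group
# and five-year T2D remission probability range. Helper name is ours.

def diarem_band(score):
    """Return (score group, remission probability range) for a DiaRem score."""
    if not 0 <= score <= 22:
        raise ValueError("DiaRem score must be in 0..22")
    bands = [
        (2,  "0-2",   "88-99%"),
        (7,  "3-7",   "64-88%"),
        (12, "8-12",  "23-49%"),
        (17, "13-17", "11-33%"),
        (22, "18-22", "2-16%"),
    ]
    for upper, group, prob in bands:
        if score <= upper:
            return group, prob

print(diarem_band(5))  # ('3-7', '64-88%')
```

Per the abstract, insulin use, age, HbA1c, and type of antidiabetic medication contribute the points, so a patient scoring high on those factors lands in the low-remission bands.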
Wargo, John M
2013-01-01
Written for experienced mobile developers, Apache Cordova 3 Programming is a complete introduction to Apache Cordova 3 and Adobe PhoneGap 3. It describes what makes Cordova important and shows how to install and use the tools, the new Cordova CLI, the native SDKs, and more. If you’re brand new to Cordova, this book will be just what you need to get started. If you’re familiar with an older version of Cordova, this book will show you in detail how to use all of the new stuff that’s in Cordova 3 plus stuff that has been around for a while (like the Cordova core APIs). After walking you through the process of downloading and setting up the framework, mobile expert John M. Wargo shows you how to install and use the command line tools to manage the Cordova application lifecycle and how to set up and use development environments for several of the more popular Cordova supported mobile device platforms. Of special interest to new developers are the chapters on the anatomy of a Cordova application, as well ...
Shiryaev, A N
1996-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, ergodic theory, weak convergence of probability measures, stationary stochastic processes, and the Kalman-Bucy filter. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for self-study. This new edition contains substantial revisions and updated references. The reader will find a deeper study of topics such as the distance between probability measures, metrization of weak convergence, and contiguity of probability measures. Proofs of a number of important results which were merely stated in the first edition have been added. The author has included new material on the probability of large deviations and on the central limit theorem for sums of dependent random variables.
Bourdaud, Nathalie; Devys, Jean-Michel; Bientz, Jocelyne; Lejus, Corinne; Hebrard, Anne; Tirel, Olivier; Lecoutre, Damien; Sabourdin, Nada; Nivoche, Yves; Baujard, Catherine; Nikasinovic, Lydia; Orliaguet, Gilles A
2014-09-01
Few data are available in the literature on risk factors for postoperative vomiting (POV) in children. The aim of the study was to establish independent risk factors for POV and to construct a pediatric-specific risk score to predict POV in children. Characteristics of 2392 children operated on under general anesthesia were recorded. The dataset was randomly split into an evaluation set (n = 1761), analyzed with a multivariate analysis including logistic regression and a backward stepwise procedure, and a validation set (n = 450), used to confirm the accuracy of prediction using the area under the receiver operating characteristic curve (ROCAUC) and to optimize sensitivity and specificity. The overall incidence of POV was 24.1%. Five independent risk factors were identified: stratified age (>3 and <6 years: adjusted OR 2.46 [95% CI 1.75-3.45]; ≥6 and ≤13 years: aOR 3.09 [95% CI 2.23-4.29]), duration of anesthesia (aOR 1.44 [95% CI 1.06-1.96]), surgery at risk (aOR 2.13 [95% CI 1.49-3.06]), predisposition to POV (aOR 1.81 [95% CI 1.43-2.31]), and multiple opioid doses (aOR 2.76 [95% CI 2.06-3.70]; P < …). The risk score ranged from 0 to 6. The model yielded a ROCAUC of 0.73 [95% CI 0.67-0.78] when applied to the validation dataset. Independent risk factors for POV were identified and used to create a new score to predict which children are at high risk of POV. © 2014 John Wiley & Sons Ltd.
Haloi, Saurav
2015-01-01
Whether you are a novice to ZooKeeper or already have some experience, you will be able to master the concepts of ZooKeeper and its usage with ease. This book assumes some prior knowledge of distributed systems and high-level programming knowledge of C, Java, or Python, but no experience with Apache ZooKeeper is required.
Instant Apache Camel message routing
Ibryam, Bilgin
2013-01-01
Filled with practical, step-by-step instructions and clear explanations for the most important and useful tasks. This short, instruction-based guide shows you how to perform application integration using the industry-standard Enterprise Integration Patterns. This book is intended for Java developers who are new to Apache Camel and message-oriented applications.
The Jicarilla Apaches. A Study in Survival.
Gunnerson, Dolores A.
Focusing on the ultimate fate of the Cuartelejo and/or Paloma Apaches, known in archaeological terms as the Dismal River people of the Central Plains, this book is divided into 2 parts. The early Apache (1525-1700) and the Jicarilla Apache (1700-1800) tribes are studied in terms of their persistent cultural survival, social/political adaptability,…
Directory of Open Access Journals (Sweden)
Marie Juanchich
2013-05-01
In most previous studies of verbal probabilities, participants are asked to translate expressions such as possible and not certain into numeric probability values. This probabilistic translation approach can be contrasted with a novel which-outcome (WO) approach that focuses on the outcomes that people naturally associate with probability terms. The WO approach has revealed that, when given bell-shaped distributions of quantitative outcomes, people tend to associate certainty with minimum (unlikely) outcome magnitudes and possibility with (unlikely) maximal ones. The purpose of the present paper is to test the factors that foster these effects and the conditions in which they apply. Experiment 1 showed that the association of probability term and outcome was related to the association of scalar modifiers (i.e., it is certain that the battery will last at least..., it is possible that the battery will last up to...). Further, we tested whether this pattern depended on the frequency distribution (e.g., increasing vs. decreasing) or the nature of the outcomes presented (i.e., categorical vs. continuous). Results showed that, despite being slightly affected by the shape of the distribution, participants continued to prefer to associate possible with maximum outcomes and certain with minimum outcomes. The final experiment provided a boundary condition for the effect, showing that it applies to verbal but not numerical probabilities.
Dai, Huanping; Micheyl, Christophe
2015-05-01
Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
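For the simplest case the article's general formula covers, an equal-prior yes/no task with discrete sensory distributions, the maximum Pc reduces to summing the pointwise maximum of the two probability mass functions. The sketch below is our own illustration under that assumption, not the article's MATLAB code.

```python
import math

# Maximum-likelihood observer for an equal-prior yes/no task:
# respond "signal" wherever p_signal(x) >= p_noise(x), giving
#   Pc_max = 0.5 * sum_x max(p_signal(x), p_noise(x)).

def max_pc(p_noise, p_signal):
    return 0.5 * sum(max(a, b) for a, b in zip(p_noise, p_signal))

# Discretized unit-variance Gaussians separated by d' = 1; the exact
# answer is Phi(d'/2) ~= 0.6915, approached as the grid gets finer.
x = [-6 + 0.005 * i for i in range(2601)]  # grid from -6 to 7

def pmf(mu):
    w = [math.exp(-0.5 * (v - mu) ** 2) for v in x]
    s = sum(w)
    return [v / s for v in w]

print(round(max_pc(pmf(0.0), pmf(1.0)), 3))  # close to 0.691
```

This mirrors the article's point that discrete approximations remain accurate provided the continuous densities are sampled finely enough.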
Kim, Kwang Hyeon; Lee, Suk; Shim, Jang Bo; Yang, Dae Sik; Yoon, Won Sup; Park, Young Je; Kim, Chul Yong; Cao, Yuan Jie; Chang, Kyung Hwan
2018-01-01
The aim of this study was to derive a new plan-scoring index using normal tissue complication probabilities to verify different plans in the selection of personalized treatment. Plans for 12 patients treated with tomotherapy were used to compare scoring for ranking. Dosimetric and biological indexes were analyzed for the plans for a clearly distinguishable group (n = 7) and a similar group (n = 12), using treatment plan verification software that we developed. The quality factor (QF) of our support software for treatment decisions was consistent with the final treatment plan for the clearly distinguishable group (average QF = 1.202, 100% match rate, n = 7) and the similar group (average QF = 1.058, 33% match rate, n = 12). Therefore, we propose a normal tissue complication probability (NTCP) based plan-scoring index for verification of different plans for personalized treatment-plan selection. Scoring using the new QF showed a 100% match rate (average NTCP QF = 1.0420). The NTCP-based new QF scoring method was adequate for obtaining biological verification quality and organ risk saving using the treatment-planning decision-support software we developed for prostate cancer.
Learning Apache Solr high performance
Mohan, Surendra
2014-01-01
This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to deployment of ZooKeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with the utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.
Instant Apache Camel messaging system
Sharapov, Evgeniy
2013-01-01
Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A beginner's guide to Apache Camel that walks you through basic operations like installation and setup right through to developing simple applications. This book is a good starting point for Java developers who have to work on an application dealing with various systems and interfaces but who haven't yet started using Enterprise Service Buses or Java Business Integration frameworks.
Sasidharan, Lekshmi; Donnell, Eric T
2014-10-01
Accurate estimation of the expected number of crashes at different severity levels for entities with and without countermeasures plays a vital role in selecting countermeasures in the framework of the safety management process. The current practice is to use the American Association of State Highway and Transportation Officials' Highway Safety Manual crash prediction algorithms, which combine safety performance functions and crash modification factors, to estimate the effects of safety countermeasures on different highway and street facility types. Many of these crash prediction algorithms are based solely on crash frequency, or assume that severity outcomes are unchanged when planning for, or implementing, safety countermeasures. Failing to account for the uncertainty associated with crash severity outcomes, and assuming crash severity distributions remain unchanged in safety performance evaluations, limits the utility of the Highway Safety Manual crash prediction algorithms in assessing the effect of safety countermeasures on crash severity. This study demonstrates the application of a propensity scores-potential outcomes framework to estimate the probability distribution for the occurrence of different crash severity levels by accounting for the uncertainties associated with them. The probability of fatal and severe injury crash occurrence at lighted and unlighted intersections is estimated in this paper using data from Minnesota. The results show that the expected probability of occurrence of fatal and severe injury crashes at a lighted intersection was 1 in 35 crashes, and the estimated risk ratio indicates that the corresponding probability at unlighted intersections was 1.14 times that at lighted intersections. The results from the potential outcomes-propensity scores framework are compared to results obtained from traditional binary logit models, without application of propensity score matching. Traditional binary logit analysis suggests that…
Random Decision Forests on Apache Spark
CERN. Geneva
2016-01-01
About the speaker: Tom White has been an Apache Hadoop committer since February 2007 and is a member of the Apache Software Foundation. He works for Cloudera, a company set up to offer Hadoop support and training. Previously he worked as an independent Hadoop consultant…
Apache Flume distributed log collection for Hadoop
D'Souza, Subas
2013-01-01
A starter guide that covers Apache Flume in detail. Apache Flume: Distributed Log Collection for Hadoop is intended for people who are responsible for moving datasets into Hadoop in a timely and reliable manner, such as software engineers, database administrators, and data warehouse administrators.
Mistura, Lorenza; D'Addezio, Laura; Sette, Stefania; Piccinelli, Raffaela; Turrini, Aida
2016-01-01
The diet quality of yogurt consumers and non-consumers was evaluated by applying the probability of adequate nutrient intake (PANDiet) index to a sample of adults and elderly from the Italian food consumption survey INRAN SCAI 2005-06. Overall, yogurt consumers had a significantly higher mean intake of energy, calcium, and percentage of energy from total sugars, whereas the mean percentages of energy from total fat, saturated fatty acids, and total carbohydrate were significantly (p < …) lower. The PANDiet index was higher in yogurt consumers than in non-consumers (60.58 ± 0.33 vs. 58.58 ± 0.19, p < …). The items of calcium, potassium, and riboflavin showed the largest percentage variation between consumers and non-consumers. Yogurt consumers were more likely to have adequate intakes of vitamins and minerals, and a higher quality score of the diet.
The Apache OODT Project: An Introduction
Mattmann, C. A.; Crichton, D. J.; Hughes, J. S.; Ramirez, P.; Goodale, C. E.; Hart, A. F.
2012-12-01
Apache OODT is a science data system framework, developed over the past decade with hundreds of FTEs of investment, tens of sponsoring agencies (NASA, NIH/NCI, DoD, NSF, universities, etc.), and hundreds of projects and science missions that it powers every day to their success. At its core, Apache OODT carries with it two fundamental classes of software services and components. The first deals with information integration from existing science data repositories and archives, which themselves have already-in-use business processes and models for populating those archives; information integration allows search, retrieval, and dissemination across these heterogeneous systems, and ultimately rapid, interactive data access and retrieval. The other suite of services and components within Apache OODT handles population and processing of those data repositories and archives: workflows, resource management, crawling, remote data retrieval, curation and ingestion, along with science data algorithm integration, are all part of these Apache OODT software elements. In this talk, I will provide an overview of the use of Apache OODT to unlock and populate information from science data repositories and archives. We'll cover the basics, along with some advanced use cases and success stories.
APACHE II as an indicator of ventilator-associated pneumonia (VAP).
Directory of Open Access Journals (Sweden)
Kelser de Souza Kock
2015-01-01
Background and objectives: strategies for risk stratification in severe pathologies are extremely important. The aim of this study was to analyze the accuracy of the APACHE II score as an indicator of ventilator-associated pneumonia (VAP) in ICU patients at Hospital Nossa Senhora da Conceição (HNSC), Tubarão-SC. Methods: a prospective cohort study was conducted with 120 patients admitted between March and August 2013, with APACHE II scored in the first 24 hours of mechanical ventilation (MV). Patients were followed until one of the following outcomes: discharge or death. The cause of ICU admission, age, gender, days of mechanical ventilation, length of ICU stay, and outcome were also analyzed. Results: the incidence of VAP was 31.8% (38/120). Two variables showed a relative risk for the development of VAP: APACHE II above average (RR = 1.62; 95% CI 1.03-2.55) and male gender (RR = 1.56; 95% CI 1.18-2.08). Duration of mechanical ventilation above average (18.4 ± 14.9 days; p = 0.001) and ICU stay above average (20.4 ± 15.3 days; p = 0.003) were associated with the development of VAP. In predicting VAP at a score >23, APACHE II showed a sensitivity of 84% and a specificity of 33%. In relation to death, two variables showed a relative risk: age above average (RR = 2.08; 95% CI 1.34-3.23) and ICU stay above average (RR = 2.05; 95% CI 1.28-3.28). Conclusion: an APACHE II score of 23 or above may indicate risk of VAP. Keywords: Pneumonia, Ventilator-Associated; Intensive Care Units; APACHE; Prognosis.
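The reported sensitivity of 84% and specificity of 33% at the APACHE II >23 cutoff follow from an ordinary 2x2 confusion table. The sketch below is ours; the true-negative and false-positive counts are invented for illustration, chosen only to be consistent with the reported 38 VAP and 82 non-VAP patients.

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
# Counts below are illustrative, not the study's raw table:
# of 38 VAP patients, 32 score > 23; of 82 non-VAP, 27 score <= 23.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=32, fn=6, tn=27, fp=55)
print(round(sens, 2), round(spec, 2))  # 0.84 0.33
```

The low specificity illustrates the trade-off at this cutoff: most VAP cases are flagged, but many patients without VAP also score above 23.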
Mechanical characterization of densely welded Apache Leap tuff
International Nuclear Information System (INIS)
Fuenkajorn, K.; Daemen, J.J.K.
1991-06-01
An empirical criterion is formulated to describe the compressive strength of the densely welded Apache Leap tuff. The criterion incorporates the effects of size, L/D ratio, loading rate and density variations. The criterion improves the correlation between the test results and the failure envelope. Uniaxial and triaxial compressive strengths, Brazilian tensile strength and elastic properties of the densely welded brown unit of the Apache Leap tuff have been determined using the ASTM standard test methods. All tuff samples are tested dry at room temperature (22 ± 2 degrees C), and have the core axis normal to the flow layers. The uniaxial compressive strength is 73.2 ± 16.5 MPa. The Brazilian tensile strength is 5.12 ± 1.2 MPa. The Young's modulus and Poisson's ratio are 22.6 ± 5.7 GPa and 0.20 ± 0.03. Smoothness and perpendicularity do not fully meet the ASTM requirements for all samples, due to the presence of voids and inclusions on the sample surfaces and the sample preparation methods. The investigations of loading rate, L/D ratio and cyclic loading effects on the compressive strength and of the size effect on the tensile strength are not conclusive. The Coulomb strength criterion adequately represents the failure envelope of the tuff under confining pressures from 0 to 62 MPa. Cohesion and internal friction angle are 16 MPa and 43 degrees. The brown unit of the Apache Leap tuff is highly heterogeneous as suggested by large variations of the test results. The high intrinsic variability of the tuff is probably caused by the presence of flow layers and by nonuniform distributions of inclusions, voids and degree of welding. Similar variability of the properties has been found in publications on the Topopah Spring tuff at Yucca Mountain. 57 refs., 32 figs., 29 tabs
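As a quick consistency check of the reported Coulomb parameters (this back-of-the-envelope sketch is ours, not the authors'): a cohesion c = 16 MPa and friction angle phi = 43 degrees imply an unconfined compressive strength sigma_c = 2c cos(phi) / (1 - sin(phi)), which lands close to the measured 73.2 ± 16.5 MPa.

```python
import math

# Coulomb criterion tau = c + sigma_n * tan(phi); at zero confinement it
# predicts an unconfined compressive strength of
#   sigma_c = 2 * c * cos(phi) / (1 - sin(phi)).

c = 16.0                  # cohesion, MPa (reported)
phi = math.radians(43.0)  # internal friction angle (reported)

sigma_c = 2 * c * math.cos(phi) / (1 - math.sin(phi))
print(f"predicted sigma_c = {sigma_c:.1f} MPa")  # ~73.6, vs 73.2 measured
```

The agreement within half a megapascal suggests the fitted envelope and the measured uniaxial strength are mutually consistent.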
Conservation priorities in the Apache Highlands ecoregion
Dale Turner; Rob Marshall; Carolyn A. F. Enquist; Anne Gondor; David F. Gori; Eduardo Lopez; Gonzalo Luna; Rafaela Paredes Aguilar; Chris Watts; Sabra Schwartz
2005-01-01
The Apache Highlands ecoregion incorporates the entire Madrean Archipelago/Sky Island region. We analyzed the current distribution of 223 target species and 26 terrestrial ecological systems there, and compared them with constraints on ecosystem integrity (e.g., road density) to determine the most efficient set of areas needed to maintain current biodiversity. The...
Apache Flume distributed log collection for Hadoop
Hoffman, Steve
2015-01-01
If you are a Hadoop programmer who wants to learn about Flume to be able to move datasets into Hadoop in a timely and replicable manner, then this book is ideal for you. No prior knowledge about Apache Flume is necessary, but a basic knowledge of Hadoop and the Hadoop File System (HDFS) is assumed.
Use of APACHE II and SAPS II to predict mortality for hemorrhagic and ischemic stroke patients.
Moon, Byeong Hoo; Park, Sang Kyu; Jang, Dong Kyu; Jang, Kyoung Sool; Kim, Jong Tae; Han, Yong Min
2015-01-01
We studied the applicability of the Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in patients admitted to the intensive care unit (ICU) with acute stroke and compared the results with the Glasgow Coma Scale (GCS) and National Institutes of Health Stroke Scale (NIHSS). We also conducted a comparative study of accuracy for predicting hemorrhagic and ischemic stroke mortality. Between January 2011 and December 2012, ischemic or hemorrhagic stroke patients admitted to the ICU were included in the study. APACHE II and SAPS II-predicted mortalities were compared using a calibration curve, the Hosmer-Lemeshow goodness-of-fit test, and the receiver operating characteristic (ROC) curve, and the results were compared with the GCS and NIHSS. Overall 498 patients were included in this study. The observed mortality was 26.3%, whereas APACHE II and SAPS II-predicted mortalities were 35.12% and 35.34%, respectively. The mean GCS and NIHSS scores were 9.43 and 21.63, respectively. The calibration curve was close to the line of perfect prediction. The ROC curve showed a slightly better prediction of mortality for APACHE II in hemorrhagic stroke patients and SAPS II in ischemic stroke patients. The GCS and NIHSS were inferior in predicting mortality in both patient groups. Although both the APACHE II and SAPS II systems can be used to measure performance in the neurosurgical ICU setting, the accuracy of APACHE II in hemorrhagic stroke patients and SAPS II in ischemic stroke patients was superior. Copyright © 2014 Elsevier Ltd. All rights reserved.
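The AUROC discrimination statistic used throughout these score comparisons is equivalent to the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor (the Mann-Whitney formulation). A minimal sketch with toy scores, ours for illustration only:

```python
# Rank-based AUROC: fraction of (positive, negative) pairs in which the
# positive case scores higher, counting ties as half a win.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: APACHE-II-like scores with label 1 = died, 0 = survived.
print(round(auroc([30, 16, 25, 20, 10], [1, 1, 0, 0, 0]), 3))  # 0.667
```

An AUROC of 0.5 is chance-level discrimination and 1.0 is perfect separation, which is why the reported values around 0.86 to 0.90 are described as excellent.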
Optimizing CMS build infrastructure via Apache Mesos
Abduracmanov, David; Degano, Alessandro; Elmer, Peter; Eulisse, Giulio; Mendez, David; Muzaffar, Shahzad
2015-12-23
The Offline Software of the CMS Experiment at the Large Hadron Collider (LHC) at CERN consists of 6M lines of in-house code, developed over a decade by nearly 1000 physicists, as well as a comparable amount of general-use open-source code. A critical ingredient in the success of the construction and early operation of the WLCG was the convergence, around the year 2000, on the use of a homogeneous environment of commodity x86-64 processors and Linux. Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications, or frameworks. It can run Hadoop, Jenkins, Spark, Aurora, and other applications on a dynamically shared pool of nodes. We present how we migrated our continuous integration system to schedule jobs on a relatively small Apache Mesos-enabled cluster, and how this resulted in better resource usage, higher peak performance, and lower latency thanks to the dynamic scheduling capabilities of Mesos.
Growth and survival of Apache Trout under static and fluctuating temperature regimes
Recsetar, Matthew S.; Bonar, Scott A.; Feuerbacher, Olin
2014-01-01
Increasing stream temperatures have important implications for arid-region fishes. Little is known about effects of high water temperatures that fluctuate over extended periods on Apache Trout Oncorhynchus gilae apache, a federally threatened species of southwestern USA streams. We compared survival and growth of juvenile Apache Trout held for 30 d in static temperatures (16, 19, 22, 25, and 28°C) and fluctuating diel temperatures (±3°C from 16, 19, 22 and 25°C midpoints and ±6°C from 19°C and 22°C midpoints). Lethal temperature for 50% (LT50) of the Apache Trout under static temperatures (mean [SD] = 22.8 [0.6]°C) was similar to that of ±3°C diel temperature fluctuations (23.1 [0.1]°C). Mean LT50 for the midpoint of the ±6°C fluctuations could not be calculated because survival in the two treatments (19 ± 6°C and 22 ± 6°C) was not below 50%; however, it probably was also between 22°C and 25°C because the upper limb of a ±6°C fluctuation on a 25°C midpoint is above critical thermal maximum for Apache Trout (28.5–30.4°C). Growth decreased as temperatures approached the LT50. Apache Trout can survive short-term exposure to water temperatures with daily maxima that remain below 25°C and midpoint diel temperatures below 22°C. However, median summer stream temperatures must remain below 19°C for best growth and even lower if daily fluctuations are high (≥12°C).
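An LT50 like the ones reported above can be estimated by interpolating survival against the tested static temperatures. A minimal sketch, with illustrative survival fractions rather than the study's data:

```python
# Minimal LT50 sketch: find the temperature at which survival crosses 50%
# by linear interpolation between tested temperatures. Survival fractions
# below are illustrative, not the study's data. Assumes survival starts
# above 50% at the lowest tested temperature.
import numpy as np

def lt50(temps, survival):
    t, s = np.asarray(temps, float), np.asarray(survival, float)
    below = np.where(s < 0.5)[0]
    if len(below) == 0:
        return None  # survival never fell below 50% (as with the +/-6 C midpoints)
    j = below[0]     # first temperature with <50% survival
    i = j - 1        # last temperature with >=50% survival
    # Linear interpolation between the two bracketing temperatures.
    return t[i] + (0.5 - s[i]) * (t[j] - t[i]) / (s[j] - s[i])

temps = [16, 19, 22, 25, 28]
surv  = [0.95, 0.90, 0.70, 0.10, 0.0]
print(round(lt50(temps, surv), 1))
```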
Better prognostic marker in ICU - APACHE II, SOFA or SAP II!
Naqvi, Iftikhar Haider; Mahmood, Khalid; Ziaullaha, Syed; Kashif, Syed Mohammad; Sharif, Asim
2016-01-01
This study was designed to determine the comparative efficacy of different scoring systems in assessing the prognosis of critically ill patients. This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU), Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. The predicted mortality of APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model. A total of 96 patients with equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97 ± 8.53) was higher than in survivors (15.82 ± 8.79), a statistically significant difference; APACHE II also showed better discrimination power than SAP II and SOFA.
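Discrimination comparisons like the one above reduce to a rank-based identity: the AUROC equals the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor, i.e. the rescaled Mann-Whitney U statistic. A toy sketch with illustrative scores, loosely echoing the group means reported above:

```python
# Toy sketch of score discrimination via the Mann-Whitney identity.
# Scores are illustrative, loosely echoing the group means reported above.
from scipy.stats import mannwhitneyu

survivors     = [10, 14, 16, 18, 20]   # lower APACHE II scores
non_survivors = [22, 25, 28, 30, 35]   # higher APACHE II scores

# U counts the (non-survivor, survivor) pairs where the non-survivor
# scored higher; dividing by the number of pairs gives the AUROC.
u = mannwhitneyu(non_survivors, survivors, alternative="two-sided").statistic
auc = u / (len(non_survivors) * len(survivors))
print(auc)
```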
Evaluation of APACHE II system among intensive care patients at a teaching hospital
Directory of Open Access Journals (Sweden)
Paulo Antonio Chiavone
Full Text Available CONTEXT: The high-complexity features of intensive care unit services and the clinical situation of patients themselves render correct prognosis fundamentally important not only for patients, their families and physicians, but also for hospital administrators, fund-providers and controllers. Prognostic indices have been developed for estimating hospital mortality rates for hospitalized patients, based on demographic, physiological and clinical data. OBJECTIVE: The APACHE II system was applied within an intensive care unit to evaluate its ability to predict patient outcome; to compare illness severity with outcomes for clinical and surgical patients; and to compare the recorded result with the predicted death rate. DESIGN: Diagnostic test. SETTING: Clinical and surgical intensive care unit in a tertiary-care teaching hospital. PARTICIPANTS: The study involved 521 consecutive patients admitted to the intensive care unit from July 1998 to June 1999. MAIN MEASUREMENTS: APACHE II score, in-hospital mortality, receiver operating characteristic curve, decision matrices and linear regression analysis. RESULTS: The patients' mean age was 50 ± 19 years and the APACHE II score was 16.7 ± 7.3. There were 166 clinical patients (32%), 173 post-elective surgery patients (33%), and 182 post-emergency surgery patients (35%), thus producing statistically similar proportions. The APACHE II scores for clinical patients (18.5 ± 7.8) were similar to those for non-elective surgery patients (18.6 ± 6.5), and both were greater than for elective surgery patients (13.0 ± 6.3) (p < 0.05). The higher this score was, the higher the mortality rate was (p < 0.05). The predicted death rate was 25.6% and the recorded death rate was 35.5%. Through the use of receiver operating curve analysis, good discrimination was found (area under the curve = 0.80). From the 2 x 2 decision matrix, 72.2% of patients were correctly classified (sensitivity = 35.1%; specificity = 92.6%). Linear
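The 2 x 2 decision-matrix summary above can be sketched as follows. The counts are back-calculated approximations from the reported percentages (521 patients, 35.5% observed mortality), not the study's raw table:

```python
# 2 x 2 decision-matrix summary. Counts are back-calculated approximations
# from the reported percentages, not the study's raw table; the classifier
# marks a patient "predicted to die" when model risk exceeds 50%.
def decision_matrix(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)       # deaths correctly predicted
    specificity = tn / (tn + fp)       # survivors correctly predicted
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = decision_matrix(tp=65, fn=120, fp=25, tn=311)
print(round(sens, 3), round(spec, 3), round(acc, 3))
```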
Network Intrusion Detection System using Apache Storm
Directory of Open Access Journals (Sweden)
Muhammad Asif Manzoor
2017-06-01
Full Text Available Network security implements various strategies for the identification and prevention of security breaches. Network intrusion detection is a critical component of network management for security, quality of service and other purposes. These systems allow early detection of network intrusion and malicious activities, so that the network security infrastructure can react to mitigate these threats. Various systems have been proposed to enhance network security. In this work, we propose to use an anomaly-based network intrusion detection system, which can identify new network threats. We also propose to use the real-time big data stream processing framework Apache Storm for the implementation of the network intrusion detection system. Apache Storm can help manage network traffic, which is generated at enormous and constantly increasing speed and volume. We have used a Support Vector Machine in this work, and we use the Knowledge Discovery and Data Mining 1999 (KDD'99) dataset to test and evaluate our proposed solution.
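The classification step described above, an SVM deciding normal versus attack for each connection record, can be sketched as follows. The features here are random stand-ins for KDD'99 attributes, not the real dataset:

```python
# Minimal sketch of the SVM classifier described above. Features are random
# stand-ins for KDD'99 connection attributes, not the real dataset; in the
# full system, Apache Storm would feed records to the trained model.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 200 "normal" records and 200 "attack" records with shifted feature means.
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print(model.score(X, y))  # high training accuracy on this separable toy data
```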
Demarco, Daniela Cassar; Papachristidis, Alexandros; Roper, Damian; Tsironis, Ioannis; Byrne, Jonathan; Monaghan, Mark
2015-01-01
Objectives To compare how patients with chest pain would be investigated, based on the two guidelines available to UK cardiologists on the management of patients with stable chest pain: the UK National Institute of Clinical Excellence (NICE) guideline, published in 2010, and the European Society of Cardiology (ESC) guideline, published in 2013. Both guidelines utilise pre-test probability risk scores to guide the choice of investigation. Design We undertook a large retrospective study to investigate the outcomes of stress echocardiography. Setting A large tertiary centre in the UK in contemporary clinical practice. Participants Two thirds of the patients in the cohort were referred from our rapid access chest pain clinics. Results We found that the NICE risk score overestimates risk by 20% compared to the ESC risk score. We also found that, based on the NICE guidelines, 44% of the patients presenting with chest pain in this cohort would have been investigated invasively, with diagnostic coronary angiography. Using the ESC guidelines, only 0.3% of the patients would be investigated invasively. Conclusion The large discrepancy between the two guidelines could easily be reduced if NICE adopted the ESC risk score. PMID:26673458
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
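First-level customization as described above keeps the original score's ranking but refits, by logistic regression, the link between the original model's logit and locally observed mortality. A sketch on simulated data, where the true local slope is set to 0.5 to mimic miscalibration:

```python
# Sketch of first-level customization: refit the relationship between the
# original model's logit and locally observed mortality. Data are simulated;
# the true local slope is 0.5, mimicking a miscalibrated original model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
p_orig = rng.uniform(0.05, 0.95, 1000)            # original predicted risks
logit = np.log(p_orig / (1 - p_orig)).reshape(-1, 1)
p_true = 1 / (1 + np.exp(-0.5 * logit.ravel()))   # local population's true risk
y = rng.binomial(1, p_true)                       # observed hospital outcomes

custom = LogisticRegression().fit(logit, y)       # first-level customization
p_custom = custom.predict_proba(logit)[:, 1]
print(round(float(custom.coef_[0][0]), 2))        # recovered slope, near 0.5
```

Discrimination is unchanged by this transform (it is monotone in the original logit), which is exactly the behaviour reported above: customization fixes calibration, not the ROC area.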
Bastien, Olivier; Ortet, Philippe; Roy, Sylvaine; Maréchal, Eric
2005-03-10
Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology, provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
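The Monte Carlo Z-score at the heart of the CSHP construction compares a pair-wise score with the distribution of scores obtained after shuffling one sequence. A sketch in which a simple identity count stands in for a real alignment score, used only to illustrate the procedure:

```python
# Monte Carlo Z-score sketch: compare an observed pair-wise score with the
# score distribution over shuffled sequences. The identity-count scorer is
# a stand-in for a real alignment score.
import random

def score(a, b):
    return sum(x == y for x, y in zip(a, b))

def z_score(a, b, n_shuffles=500, seed=0):
    rng = random.Random(seed)
    observed = score(a, b)
    shuffled_scores = []
    for _ in range(n_shuffles):
        t = list(b)
        rng.shuffle(t)                 # Monte Carlo: shuffle one sequence
        shuffled_scores.append(score(a, t))
    mean = sum(shuffled_scores) / n_shuffles
    var = sum((s - mean) ** 2 for s in shuffled_scores) / n_shuffles
    return (observed - mean) / (var ** 0.5 or 1.0)

a = "ACDEFGHIKLMNPQRSTVWY" * 3
print(z_score(a, a))   # identical sequences score far above the random cloud
```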
Directory of Open Access Journals (Sweden)
Maréchal Eric
2005-03-01
Full Text Available Abstract Background Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic reconstruction. Results We have built up a spatial representation of protein sequences using concepts from particle physics (configuration space) and respecting a frame of constraints deduced from pair-wise alignment score properties in information theory. The obtained configuration space of homologous proteins (CSHP) allows the representation of real and shuffled sequences, and thereupon an expression of the TULIP theorem for Z-score probabilities. Based on the CSHP, we propose a phylogeny reconstruction using Z-scores. Deduced trees, called TULIP trees, are consistent with multiple-alignment based trees. Furthermore, the TULIP tree reconstruction method provides a solution for some previously reported incongruent results, such as the apicomplexan enolase phylogeny. Conclusion The CSHP is a unified model that conserves mutual information between proteins in the way physical models conserve energy. Applications include the reconstruction of evolutionary consistent and robust trees, the topology of which is based on a spatial representation that is not reordered after addition or removal of sequences. The CSHP and its assigned phylogenetic topology provide a powerful and easily updated representation for massive pair-wise genome comparisons based on Z-score computations.
LHCbDIRAC as Apache Mesos microservices
Couturier, Ben
2016-01-01
The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VM) or bare metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager which aims at abstracting heterogeneous physical resources on which various tasks can be distributed thanks to so-called "frameworks". The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks like the DIRAC agents. A combination of the service discovery tool Consul together with HAProxy makes it possible to expose the running containers to the outside world while hiding their dynamic placements. Such an arc...
San Carlos Apache Tribe - Energy Organizational Analysis
Energy Technology Data Exchange (ETDEWEB)
Rapp, James; Albert, Steve
2012-04-01
The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: The analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"). Start-up staffing and other costs associated with the Phase 1 SCAT energy organization. An intern program. Staff training. Tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.
Turney, Benjamin; Robertson, William; Wiseman, Oliver; Amaro, Carmen Regina P R; Leitão, Victor A; Silva, Isabela Leme da; Amaro, João Luiz
2014-01-01
The aim was to confirm that PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Data were collected on 26 Brazilian stone formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was collected for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from a high-risk (PSF > 0.5) to a low-risk (PSF < 0.5) profile; the remaining patients stayed at high risk (PSF > 0.5) during both assessments. The PSF score reduced following medical treatment in the majority of patients in this cohort.
Directory of Open Access Journals (Sweden)
Benjamin Turney
2014-08-01
Full Text Available Introduction: The aim was to confirm that PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Materials and Methods: Data were collected on 26 Brazilian stone formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was collected for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. Results: At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from a high-risk (PSF > 0.5) to a low-risk (PSF < 0.5) profile; the remaining patients stayed at high risk (PSF > 0.5) during both assessments. Conclusions: The PSF score reduced following medical treatment in the majority of patients in this cohort.
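The before/after comparison above amounts to classifying each patient's PSF against the 0.5 risk threshold and counting who crossed it. A sketch with illustrative scores (the PSF equation itself is not reproduced in the abstract):

```python
# Sketch of the threshold comparison described above. The PSF equation is
# not given in the abstract; scores below are illustrative, not study data.
def risk_transitions(before, after, threshold=0.5):
    improved = sum(b > threshold >= a for b, a in zip(before, after))
    still_high = sum(b > threshold and a > threshold for b, a in zip(before, after))
    return improved, still_high

before = [0.9, 0.8, 0.7, 0.6, 0.3]
after  = [0.4, 0.3, 0.6, 0.2, 0.2]
print(risk_transitions(before, after))  # (3, 1): three improved, one stayed high
```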
Various scoring systems for predicting mortality in Intensive Care Unit
African Journals Online (AJOL)
Age, gender, body weight, initial diagnosis, clinic of referral, intubation, comorbidities, APACHE II, APACHE IV, Glasgow coma scale, SAPS III scores, length of hospitalization before referral to ICU, length of stay in ICU, mechanical ventilation were recorded. Results: Most of the patients (54.6%) were consulted from ...
Lewin, Joel W; O'Rourke, Nicholas A; Chiow, Adrian K H; Bryant, Richard; Martin, Ian; Nathanson, Leslie K; Cavallucci, David J
2016-02-01
This study compares long-term outcomes between intention-to-treat laparoscopic and open approaches to colorectal liver metastases (CLM), using inverse probability of treatment weighting (IPTW) based on propensity scores to control for selection bias. Patients undergoing liver resection for CLM by 5 surgeons at 3 institutions from 2000 to early 2014 were analysed. IPTW based on propensity scores were generated and used to assess the marginal treatment effect of the laparoscopic approach via a weighted Cox proportional hazards model. A total of 298 operations were performed in 256 patients. 7 patients with planned two-stage resections were excluded leaving 284 operations in 249 patients for analysis. After IPTW, the population was well balanced. With a median follow up of 36 months, 5-year overall survival (OS) and recurrence-free survival (RFS) for the cohort were 59% and 38%. 146 laparoscopic procedures were performed in 140 patients, with weighted 5-year OS and RFS of 54% and 36% respectively. In the open group, 138 procedures were performed in 122 patients, with a weighted 5-year OS and RFS of 63% and 38% respectively. There was no significant difference between the two groups in terms of OS or RFS. In the Brisbane experience, after accounting for bias in treatment assignment, long term survival after LLR for CLM is equivalent to outcomes in open surgery. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
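IPTW as used above fits a propensity model for receiving the laparoscopic approach and weights each patient by the inverse probability of the treatment actually received; after weighting, the two arms should be balanced on baseline covariates. A sketch on simulated data (covariates and treatment assignment are made up for illustration; the study's weighted Cox model is not reproduced):

```python
# IPTW sketch on simulated data: propensity model, inverse-probability
# weights, and a covariate-balance check. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                            # baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # selection on X[:, 0]

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))   # IPTW weights

unweighted_gap = abs(X[treated == 1, 0].mean() - X[treated == 0, 0].mean())
m1 = np.average(X[treated == 1, 0], weights=weights[treated == 1])
m0 = np.average(X[treated == 0, 0], weights=weights[treated == 0])
weighted_gap = abs(m1 - m0)
print(round(unweighted_gap, 2), round(weighted_gap, 2))  # weighting shrinks the gap
```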
LHCbDIRAC as Apache Mesos microservices
Haen, Christophe; Couturier, Benjamin
2017-10-01
The LHCb experiment relies on LHCbDIRAC, an extension of DIRAC, to drive its offline computing. This middleware provides a development framework and a complete set of components for building distributed computing systems. These components are currently installed and run on virtual machines (VM) or bare metal hardware. Due to the increased workload, high availability is becoming more and more important for the LHCbDIRAC services, and the current installation model is showing its limitations. Apache Mesos is a cluster manager which aims at abstracting heterogeneous physical resources on which various tasks can be distributed thanks to so-called “frameworks”. The Marathon framework is suitable for long-running tasks such as the DIRAC services, while the Chronos framework meets the needs of cron-like tasks like the DIRAC agents. A combination of the service discovery tool Consul together with HAProxy makes it possible to expose the running containers to the outside world while hiding their dynamic placements. Such an architecture brings a greater flexibility in the deployment of LHCbDIRAC services, allowing for easier deployment, maintenance and scaling of services on demand (e.g. LHCbDIRAC relies on 138 services and 116 agents). Higher reliability is also easier to achieve, as clustering is part of the toolset, which allows constraints on the location of the services. This paper describes the investigations carried out to package the LHCbDIRAC and DIRAC components into Docker containers and orchestrate them using the previously described set of tools.
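A long-running service deployed this way is described to Marathon as a JSON application definition posted to its /v2/apps endpoint. A hypothetical sketch, with the id, image and resource figures made up for illustration (not LHCbDIRAC's actual configuration):

```python
# Hypothetical Marathon application definition for a containerised
# DIRAC-style service. The id, image and resource figures are made up;
# the dict mirrors the JSON payload one would POST to Marathon's /v2/apps.
import json

app = {
    "id": "/lhcbdirac/example-service",
    "cpus": 0.5,
    "mem": 512,
    "instances": 2,                # Marathon keeps this many copies running
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/lhcbdirac-service:latest"},
    },
    "healthChecks": [{"protocol": "TCP", "portIndex": 0}],
}
print(json.dumps(app, indent=2))
```

Consul plus HAProxy would then route traffic to wherever Marathon happens to place the instances.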
Schalk, Enrico; Hanus, Lynn; Färber, Jacqueline; Fischer, Thomas; Heidel, Florian H
2015-09-01
The aim of this study was to predict the probability of central venous catheter-related bloodstream infections (CRBSIs) in patients with haematologic malignancies using a modified version of the Infection Probability Score (mIPS). In a prospective, single-centre surveillance of complications of short-term central venous catheters (CVCs) in clinical routine, covering consecutive patients receiving chemotherapy from March 2013 to September 2014, the mIPS was calculated at CVC insertion and removal (mIPSin and mIPSex, respectively). We used the 2012 Infectious Diseases Working Party of the German Society of Haematology and Medical Oncology (AGIHO/DGHO) criteria to define CRBSI. In total, 143 patients (mean age 59.5 years, 61.4% male) with 267 triple-lumen CVCs (4044 CVC days; mean 15.1 days, range 1-60 days) were analysed. CVCs were inserted for therapy of acute leukaemia (53.2%), multiple myeloma (24.3%) or lymphoma (11.2%), and 93.6% were inserted in the jugular vein. A total of 66 CRBSI cases (24.7%) were documented (12 definite/13 probable/41 possible). The incidence was 16.3/1000 CVC days (2.9/3.1/10.1 per 1000 CVC days for definite/probable/possible CRBSI, respectively). In CRBSI cases, the mIPSex was higher than in cases without CRBSI (13.1 vs. 7.1; p < 0.001). The best mIPSex cutoff for CRBSI prediction was 8 points (area under the curve (AUC) = 0.77; sensitivity = 84.9%, specificity = 60.7%, negative predictive value = 92.4%). For patients with an mIPSex ≥8, the risk of a CRBSI was high (odds ratio [OR] = 5.9; p < 0.001) and increased further if, additionally, the CVC had been in use for about 10 days (OR = 9.8; p < 0.001). In cases where other causes of infection are excluded, an mIPSex ≥8 and a duration of CVC use of about 10 days predict a very high risk of CRBSI. Patients with an mIPSex <8 have a low CRBSI risk of 8%.
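An odds ratio like the OR = 5.9 reported for mIPSex ≥8 comes from a 2 x 2 table of cutoff versus outcome. A sketch with illustrative counts, chosen only to reproduce an OR near 5.9 with 66 CRBSI cases; they are not the study's raw data:

```python
# Odds-ratio sketch. Counts are illustrative (chosen to give an OR near the
# reported 5.9 with 66 CRBSI cases), not the study's raw 2 x 2 table.
from scipy.stats import fisher_exact

#        CRBSI  no CRBSI
table = [[56,   79],       # mIPSex >= 8
         [10,   83]]       # mIPSex <  8
odds_ratio, p_value = fisher_exact(table)
print(round(odds_ratio, 1))  # 5.9
```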
Mescalero Apache Tribe Monitored Retrievable Storage (MRS)
Energy Technology Data Exchange (ETDEWEB)
Peso, F.
1992-03-13
The Nuclear Waste Policy Act of 1982, as amended, authorizes the siting, construction and operation of a Monitored Retrievable Storage (MRS) facility. The MRS is intended to be used for the temporary storage of spent nuclear fuel from the nation's nuclear power plants beginning as early as 1998. Pursuant to the Nuclear Waste Policy Act, the Office of the Nuclear Waste Negotiator was created. On October 7, 1991, the Nuclear Waste Negotiator invited the governors of states and the Presidents of Indian tribes to apply for government grants in order to conduct a study to assess under what conditions, if any, they might consider hosting an MRS facility. Pursuant to this invitation, on October 11, 1991 the Mescalero Apache Indian Tribe of Mescalero, NM applied for a grant to conduct a phased, preliminary study of the safety, technical, political, environmental, social and economic feasibility of hosting an MRS. The preliminary study included: (1) An investigative education process to facilitate the Tribe's comprehensive understanding of the safety, environmental, technical, social, political, and economic aspects of hosting an MRS, and; (2) The development of an extensive program that is enabling the Tribe, in collaboration with the Negotiator, to reach an informed and carefully researched decision regarding the conditions, (if any), under which further pursuit of the MRS would be considered. The Phase 1 grant application enabled the Tribe to begin the initial activities necessary to determine whether further consideration is warranted for hosting the MRS facility. The Tribe intends to pursue continued study of the MRS in order to meet the following objectives: (1) Continuing the education process towards a comprehensive understanding of the safety, environmental, technical, social and economic aspects of the MRS; (2) Conducting an effective public participation and information program; (3) Participating in MRS meetings.
Mescalero Apache Tribe Monitored Retrievable Storage (MRS)
International Nuclear Information System (INIS)
Peso, F.
1992-01-01
The Nuclear Waste Policy Act of 1982, as amended, authorizes the siting, construction and operation of a Monitored Retrievable Storage (MRS) facility. The MRS is intended to be used for the temporary storage of spent nuclear fuel from the nation's nuclear power plants beginning as early as 1998. Pursuant to the Nuclear Waste Policy Act, the Office of the Nuclear Waste Negotiator was created. On October 7, 1991, the Nuclear Waste Negotiator invited the governors of states and the Presidents of Indian tribes to apply for government grants in order to conduct a study to assess under what conditions, if any, they might consider hosting an MRS facility. Pursuant to this invitation, on October 11, 1991 the Mescalero Apache Indian Tribe of Mescalero, NM applied for a grant to conduct a phased, preliminary study of the safety, technical, political, environmental, social and economic feasibility of hosting an MRS. The preliminary study included: (1) An investigative education process to facilitate the Tribe's comprehensive understanding of the safety, environmental, technical, social, political, and economic aspects of hosting an MRS, and; (2) The development of an extensive program that is enabling the Tribe, in collaboration with the Negotiator, to reach an informed and carefully researched decision regarding the conditions, (if any), under which further pursuit of the MRS would be considered. The Phase 1 grant application enabled the Tribe to begin the initial activities necessary to determine whether further consideration is warranted for hosting the MRS facility. The Tribe intends to pursue continued study of the MRS in order to meet the following objectives: (1) Continuing the education process towards a comprehensive understanding of the safety, environmental, technical, social and economic aspects of the MRS; (2) Conducting an effective public participation and information program; (3) Participating in MRS meetings
Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita
2010-01-01
Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfields regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 – 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44).The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
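An agreement coefficient (weighted kappa) between two three-level classifiers like MRA and GPS can be computed directly. A sketch with illustrative labels (0 = within normal limits, 1 = borderline, 2 = outside normal limits), not the study's data:

```python
# Weighted-kappa sketch for agreement between two three-level classifiers.
# Labels are illustrative, not the study's data.
from sklearn.metrics import cohen_kappa_score

mra = [0, 0, 1, 2, 2, 0, 1, 0, 2, 1]
gps = [0, 1, 2, 2, 1, 0, 0, 0, 2, 2]
# Linear weights penalise a normal-vs-outside disagreement more than a
# normal-vs-borderline one, matching ordered categories.
kappa = cohen_kappa_score(mra, gps, weights="linear")
print(round(kappa, 3))
```

A kappa near 0.2, like the 0.216 reported above, is conventionally read as poor-to-fair agreement.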
Directory of Open Access Journals (Sweden)
Jindal Shveta
2010-01-01
Full Text Available Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfields regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 - 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs.
Apache, Santa Fe energy units awarded two Myanmar blocks
International Nuclear Information System (INIS)
Anon.
1992-01-01
This paper reports that Myanmar's state oil company has awarded production sharing contracts (PSCs) on two blocks to units of Apache Corp. and Santa Fe Energy Resources Inc., both of Houston. That comes on the heels of a report by County NatWest Woodmac noting that Myanmar's oil production, currently meeting less than half the country's demand, is set to fall further this year. 150 line km of new seismic data could be acquired and one well drilled. During the initial 2-year exploration period on Block EP-3, Apache will conduct geological studies and acquire at least 200 line km of seismic data.
77 FR 51475 - Safety Zone; Apache Pier Labor Day Fireworks; Myrtle Beach, SC
2012-08-24
...-AA00 Safety Zone; Apache Pier Labor Day Fireworks; Myrtle Beach, SC. AGENCY: Coast Guard, DHS. ACTION: ... Atlantic Ocean in the vicinity of Apache Pier in Myrtle Beach, SC, during the Labor Day fireworks...
Fallugia paradoxa (D. Don) Endl. ex Torr.: Apache-plume
Susan E. Meyer
2008-01-01
The genus Fallugia contains a single species - Apache-plume, F. paradoxa (D. Don) Endl. ex Torr. - found throughout the southwestern United States and northern Mexico. It occurs mostly on coarse soils on benches and especially along washes and canyons in both warm and cool desert shrub communities and up into the pinyon-juniper vegetation type. It is a sprawling, much-...
The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
DEFF Research Database (Denmark)
Majewski, Steven R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.
2017-01-01
The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the...
Ergonomic and anthropometric issues of the forward Apache crew station
Oudenhuijzen, A.J.K.
1999-01-01
This paper describes the anthropometric accommodation in the Apache crew systems. These activities are part of a comprehensive project carried out as a cooperative effort between the Armstrong Laboratory at Wright-Patterson Air Force Base (Dayton, Ohio, USA) and the TNO Human Factors Research Institute (TNO HFRI) in
Turney, Benjamin; Robertson, William; Wiseman, Oliver; Amaro, Carmen Regina P. R.; Leitão, Victor A.; Silva, Isabela Leme da; Amaro, João Luiz
2014-01-01
Introduction: The aim was to confirm that PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Materials and Methods: Data were collected on 26 Brazilian stone-formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate...
Uncomfortable Experience: Lessons Lost in the Apache War
2015-03-01
the Apache War gripped the focus of American and Mexican citizens throughout Arizona, New Mexico, Chihuahua, and Sonora for a period greater than... Arizona and portions of New Mexico, and Northern Sonora and Chihuahua.5 Although confusion exists as to their true subdivisions, the Chokonen led by... contributed directly to the Victorio War, the Loco and Geronimo campaigns, and the Nana and Chatto-Chihuahua raids that followed.38 Once again, failure to
Integration of event streaming and microservices with Apache Kafka
Kljun, Matija
2017-01-01
Over the last decade, the microservice architecture has become a standard for big and successful internet companies such as Netflix, Amazon and LinkedIn. The importance of stream processing, aggregation and exchange of data is growing, as it allows companies to compete better and move faster. In this diploma thesis, we analyze the interactions between microservices and describe the streaming platform and ordinary message queues. We describe the Apache Kafka platform and how...
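The log abstraction this thesis builds on can be sketched in a few lines. The following is a deliberately minimal in-memory model, not Kafka's actual API: an append-only record log (like a single Kafka partition) with per-consumer-group offsets, so independent microservices each read the full event stream at their own pace.

```python
# Minimal sketch (assumption: not Kafka's real API) of an append-only event log
# with consumer-group offsets, the core idea behind Kafka-based microservices.
class EventLog:
    def __init__(self):
        self._records = []   # append-only, like a single Kafka partition
        self._offsets = {}   # consumer-group name -> next offset to read

    def produce(self, event):
        self._records.append(event)

    def consume(self, group, max_records=10):
        start = self._offsets.get(group, 0)
        batch = self._records[start:start + max_records]
        self._offsets[group] = start + len(batch)   # commit the new offset
        return batch

log = EventLog()
log.produce({"order_id": 1, "status": "created"})
log.produce({"order_id": 1, "status": "paid"})

# Two independent consumer groups each see the full stream.
billing = log.consume("billing")
shipping = log.consume("shipping")
```

Because each group tracks its own offset, adding a new consumer service never disturbs existing ones, which is the decoupling property the abstract attributes to streaming platforms.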
Satellite Imagery Production and Processing Using Apache Hadoop
Hill, D. V.; Werpy, J.
2011-12-01
The United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center Land Science Research and Development (LSRD) project has devised a method to fulfill its processing needs for Essential Climate Variable (ECV) production from the Landsat archive using Apache Hadoop. Apache Hadoop is the distributed processing technology at the heart of many large-scale processing solutions implemented at well-known companies such as Yahoo, Amazon, and Facebook. It is a proven framework and can be used to process petabytes of data on thousands of processors concurrently. It is a natural fit for producing satellite imagery and requires only a few simple modifications to serve the needs of science data processing. This presentation provides an invaluable learning opportunity and should be heard by anyone doing large-scale image processing today. The session will cover a description of the problem space, evaluation of alternatives, feature set overview, configuration of Hadoop for satellite image processing, real-world performance results, tuning recommendations and finally challenges and ongoing activities. It will also present how the LSRD project built a 102 core processing cluster with no financial hardware investment and achieved ten times the initial daily throughput requirements with a full time staff of only one engineer. Satellite Imagery Production and Processing Using Apache Hadoop is presented by David V. Hill, Principal Software Architect for USGS LSRD.
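The MapReduce pattern at the heart of Hadoop, which the LSRD project adapts to imagery, can be illustrated with a toy example. The scene records and the per-path/row counting task below are hypothetical, not the USGS pipeline:

```python
from collections import defaultdict

# Toy illustration of the MapReduce pattern Hadoop provides: map each input
# record to key/value pairs, shuffle (group) by key, then reduce each group.
# Here we count hypothetical Landsat scenes per WRS-2 path/row.
scenes = [
    {"path": 33, "row": 32, "id": "LT50330321995120"},
    {"path": 33, "row": 32, "id": "LT50330321995136"},
    {"path": 34, "row": 33, "id": "LT50340331995121"},
]

def mapper(scene):
    yield (scene["path"], scene["row"]), 1

def reducer(key, values):
    return key, sum(values)

# Shuffle phase: group mapper output by key (Hadoop does this across nodes).
groups = defaultdict(list)
for scene in scenes:
    for key, value in mapper(scene):
        groups[key].append(value)

counts = dict(reducer(k, v) for k, v in groups.items())
# counts == {(33, 32): 2, (34, 33): 1}
```

In the real system the map and reduce functions run in parallel over HDFS blocks; the sketch only shows the data flow the framework manages.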
Solar Feasibility Study May 2013 - San Carlos Apache Tribe
Energy Technology Data Exchange (ETDEWEB)
Rapp, Jim [Parametrix; Duncan, Ken [San Carlos Apache Tribe; Albert, Steve [Parametrix
2013-05-01
The San Carlos Apache Tribe (Tribe) in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe’s 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.
Maréchal Eric; Ortet Philippe; Roy Sylvaine; Bastien Olivier
2005-01-01
Background: Popular methods to reconstruct molecular phylogenies are based on multiple sequence alignments, in which addition or removal of data may change the resulting tree topology. We have sought a representation of homologous proteins that would conserve the information of pair-wise sequence alignments, respect probabilistic properties of Z-scores (Monte Carlo methods applied to pair-wise comparisons) and be the basis for a novel method of consistent and stable phylogenetic recon...
Murphy, Dominic; Ross, Jana; Ashwick, Rachel; Armour, Cherie; Busuttil, Walter
2017-01-01
ABSTRACT Background: Previous research exploring the psychometric properties of the scores of measures of posttraumatic stress disorder (PTSD) suggests there is variation in their functioning depending on the target population. To date, there has been little study of these properties within UK veteran populations. Objective: This study aimed to determine optimally efficient cut-off values for the Impact of Event Scale-Revised (IES-R) and the PTSD Checklist for DSM-5 (PCL-5) that can be used to assess for differential diagnosis of presumptive PTSD. Methods: Data from a sample of 242 UK veterans assessed for mental health difficulties were analysed. The criterion-related validity of the PCL-5 and IES-R were evaluated against the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5). Kappa statistics were used to assess the level of agreement between the DSM-IV and DSM-5 classification systems. Results: The optimal cut-off scores observed within this sample were 34 or above on the PCL-5 and 46 or above on the IES-R. The PCL-5 cut-off is similar to the previously reported values, but the IES-R cut-off identified in this study is higher than has previously been recommended. Overall, a moderate level of agreement was found between participants screened positive using the DSM-IV and DSM-5 classification systems of PTSD. Conclusions: Our findings suggest that the PCL-5 and IES-R can be used as brief measures within veteran populations presenting at secondary care to assess for PTSD. The use of a higher cut-off for the IES-R may be helpful for differentiating between veterans who present with PTSD and those who may have some symptoms of PTSD but are sub-threshold for meeting a diagnosis. Further, the use of more accurate optimal cut-offs may aid clinicians to better monitor changes in PTSD symptoms during and after treatment. PMID:29435200
Directory of Open Access Journals (Sweden)
Jothy Basu K
2009-01-01
Full Text Available Aim: The main objective of this study was to analyze the radiobiological effect of different treatment strategies on high-risk prostate adenocarcinoma. Materials and Methods: Ten cases of high-risk prostate adenocarcinoma were selected for this dosimetric study. Four different treatment strategies used for treating prostate cancer were compared. Conventional four-field box technique covering prostate and nodal volumes followed by three-field conformal boost (3D + 3DCRT), four-field box technique followed by intensity-modulated radiotherapy (IMRT) boost (3D + IMRT), IMRT followed by IMRT boost (IMRT + IMRT), and simultaneous integrated boost IMRT (SIBIMRT) were compared in terms of tumor control probability (TCP) and normal tissue complication probability (NTCP). The dose prescription except for SIBIMRT was 45 Gy in 25 fractions for the prostate and nodal volumes in the initial phase and 27 Gy in 15 fractions for the prostate in the boost phase. For SIBIMRT, equivalent doses were calculated using biologically equivalent dose assuming an α/β ratio of 1.5 Gy, with a dose prescription of 60.75 Gy for the gross tumor volume (GTV) and 45 Gy for the clinical target volume in 25 fractions. IMRT plans were made with 15-MV equispaced seven coplanar fields. NTCP was calculated using the Lyman-Kutcher-Burman (LKB) model. Results: An NTCP of 10.7 ± 0.99%, 8.36 ± 0.66%, 6.72 ± 0.85%, and 1.45 ± 0.11% for the bladder and 14.9 ± 0.99%, 14.04 ± 0.66%, 11.38 ± 0.85%, and 5.12 ± 0.11% for the rectum was seen with 3D + 3DCRT, 3D + IMRT, IMRT + IMRT, and SIBIMRT, respectively. Conclusions: SIBIMRT had the least NTCP over all other strategies with a reduced treatment time (3 weeks less). It should be the technique of choice for dose escalation in prostate carcinoma.
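The Lyman-Kutcher-Burman NTCP calculation cited above follows a standard two-step form: reduce the dose-volume histogram to a generalized equivalent uniform dose (gEUD), then map it through a probit curve. The sketch below uses illustrative parameter values (TD50, m, n) and a made-up DVH, not the values fitted in this study:

```python
import math

# Sketch of the Lyman-Kutcher-Burman (LKB) NTCP model. Parameters and the
# dose-volume histogram below are illustrative only, not this study's data.
def geud(doses_volumes, n):
    """Generalized EUD for (dose_Gy, fractional_volume) bins; a = 1/n."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in doses_volumes) ** (1.0 / a)

def ntcp_lkb(doses_volumes, td50, m, n):
    """NTCP = Phi((gEUD - TD50) / (m * TD50)), Phi the standard normal CDF."""
    t = (geud(doses_volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical rectal DVH: 40% of the organ at 60 Gy, 60% at 30 Gy.
dvh = [(60.0, 0.4), (30.0, 0.6)]
p = ntcp_lkb(dvh, td50=76.9, m=0.13, n=0.09)
```

A small volume parameter n makes the organ behave serially (gEUD tracks the hottest region), which is why partial sparing lowers the complication probability, the effect the comparison above quantifies.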
Large-Scale Graph Processing Using Apache Giraph
Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair
2017-01-07
This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
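The "think like a vertex" abstraction the book describes can be sketched without Giraph itself: in each superstep, every vertex processes its incoming messages, updates its value, and sends messages to its neighbors; computation halts when no value changes. The toy graph and the connected-components-by-minimum-label computation below are illustrative only, not Giraph code:

```python
# Sketch of the vertex-centric (Pregel/Giraph) computation model: supersteps of
# message passing until convergence. Example: connected components by
# propagating the minimum vertex id over a toy undirected graph.
edges = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
value = {v: v for v in edges}            # each vertex starts as its own label

changed = True
while changed:                           # one loop iteration == one superstep
    changed = False
    # Messages are built from the values at the start of the superstep.
    messages = {v: [value[u] for u in nbrs] for v, nbrs in edges.items()}
    for v, inbox in messages.items():
        smallest = min(inbox + [value[v]])
        if smallest < value[v]:
            value[v] = smallest
            changed = True

# Vertices 1-3 converge to label 1, vertices 4-5 to label 4.
```

Giraph distributes exactly this loop across workers, with vertices voting to halt instead of the `changed` flag.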
Beginning PHP, Apache, MySQL web development
Glass, Michael K; Naramore, Elizabeth; Mailer, Gary; Stolz, Jeremy; Gerner, Jason
2004-01-01
An ideal introduction to the entire process of setting up a Web site using PHP (a scripting language), MySQL (a database management system), and Apache (a Web server). Programmers will be up and running in no time, whether they're using Linux or Windows servers. Shows readers step by step how to create several Web sites that share common themes, enabling readers to use these examples in real-world projects. Invaluable reading for even the experienced programmer whose current site has outgrown the traditional static structure and who is looking for a way to upgrade to a more efficient, user-friendly...
Perl and Apache Your visual blueprint for developing dynamic Web content
McDaniel, Adam
2010-01-01
Visually explore the range of built-in and third-party libraries of Perl and Apache. Perl and Apache have been providing Common Gateway Interface (CGI) access to Web sites for 20 years and are constantly evolving to support the ever-changing demands of Internet users. With this book, you will heighten your knowledge and see how to use Perl and Apache to develop dynamic Web sites. Beginning with a clear, step-by-step explanation of how to install Perl and Apache on both Windows and Linux servers, you then move on to configuring each to securely provide CGI services. CGI developer and author Adam
Rubinsky, Anna D; Dawson, Deborah A; Williams, Emily C; Kivlahan, Daniel R; Bradley, Katharine A
2013-08-01
Brief alcohol screening questionnaires are increasingly used to identify alcohol misuse in routine care, but clinicians also need to assess the level of consumption and the severity of misuse so that appropriate intervention can be offered. Information provided by a patient's alcohol screening score might provide a practical tool for assessing the level of consumption and severity of misuse. This post hoc analysis of data from the 2001 to 2002 National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) included 26,546 U.S. adults who reported drinking in the past year and answered additional questions about their consumption, including Alcohol Use Disorders Identification Test-Consumption questionnaire (AUDIT-C) alcohol screening. Linear or logistic regression models and postestimation methods were used to estimate mean daily drinking, the number of endorsed alcohol use disorder (AUD) criteria ("AUD severity"), and the probability of alcohol dependence associated with each individual AUDIT-C score (1 to 12), after testing for effect modification by gender and age. Among eligible past-year drinkers, mean daily drinking, AUD severity, and the probability of alcohol dependence increased exponentially across increasing AUDIT-C scores. Mean daily drinking ranged from … and the probability of alcohol dependence ranged from … . These estimates can be used to estimate patient-specific consumption and severity based on age, gender, and alcohol screening score. This information could be integrated into electronic decision support systems to help providers estimate and provide feedback about patient-specific risks and identify those patients most likely to benefit from further diagnostic assessment. Copyright © 2013 by the Research Society on Alcoholism.
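The kind of model the study fits, a single screening score mapped to a predicted probability, can be sketched with a logistic curve. The coefficients below are invented for illustration; they are not the NESARC estimates reported in the paper:

```python
import math

# Illustrative logistic model: probability of alcohol dependence as a function
# of AUDIT-C score. Intercept and slope are made-up values, NOT study estimates.
def p_dependence(audit_c, intercept=-6.0, slope=0.55):
    logit = intercept + slope * audit_c
    return 1.0 / (1.0 + math.exp(-logit))

probs = {score: p_dependence(score) for score in range(1, 13)}
# The predicted probability rises monotonically (roughly exponentially at the
# low end) across increasing AUDIT-C scores, as the abstract describes.
```

In practice the fitted curve, stratified by age and gender, is what an electronic decision-support system would evaluate to give patient-specific feedback.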
Evaluation of Apache Hadoop for parallel data analysis with ROOT
International Nuclear Information System (INIS)
Lehrack, S; Duckeck, G; Ebke, J
2014-01-01
The Apache Hadoop software is a Java based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives in distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as execution back-end. The focus in the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data fast and reliably with MapReduce. In the evaluation of HDFS, read/write data rates from the local Hadoop cluster have been measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses have been used and event rates have been compared to PROOF.
Managing Variant Calling Files the Big Data Way: Using HDFS and Apache Parquet
Boufea, Aikaterini; Finkers, H.J.; Kaauwen, van M.P.W.; Kramer, M.R.; Athanasiadis, I.N.
2017-01-01
Big Data has been seen as a remedy for the efficient management of the ever-increasing genomic data. In this paper, we investigate the use of Apache Spark to store and process Variant Calling Files (VCF) on a Hadoop cluster. We demonstrate Tomatula, a software tool for converting VCF files to Apache Parquet.
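The benefit of a columnar format such as Apache Parquet for VCF data can be shown with a toy row-to-column conversion. This is a sketch of the storage idea only, not the Tomatula tool, and the records are invented:

```python
# Toy illustration of row-oriented VCF-like records converted to a columnar
# layout (the idea behind Apache Parquet): one list per field, so a query
# reads only the columns it touches instead of whole records.
vcf_rows = [
    {"CHROM": "1", "POS": 1005, "REF": "A", "ALT": "T"},
    {"CHROM": "1", "POS": 2087, "REF": "G", "ALT": "C"},
    {"CHROM": "2", "POS": 311,  "REF": "T", "ALT": "A"},
]

def to_columnar(rows):
    columns = {field: [] for field in rows[0]}
    for row in rows:
        for field, val in row.items():
            columns[field].append(val)
    return columns

table = to_columnar(vcf_rows)
# A filter on chromosome touching positions reads just two columns.
pos_on_chr1 = [p for c, p in zip(table["CHROM"], table["POS"]) if c == "1"]
```

Parquet adds per-column compression and predicate pushdown on top of this layout, which is what makes whole-genome scans of a few fields cheap.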
75 FR 68607 - BP Canada Energy Marketing Corp. Apache Corporation; Notice for Temporary Waivers
2010-11-08
... Energy Marketing Corp. Apache Corporation; Notice for Temporary Waivers November 1, 2010. Take notice that on October 29, 2010, BP Canada Energy Marketing Corp. and Apache Corporation filed with the... assistance with any FERC Online service, please e-mail [email protected] , or call (866) 208-3676...
Spinal Pain and Occupational Disability: A Cohort Study of British Apache AH Mk1 Pilots
2013-09-01
British RW community. References: Apache AH Mk1. 2012. AgustaWestland. http://www.agustawestland.com/product/apache-ah-mk1-0. Ang, B., and ...
Biology and distribution of Lutzomyia apache as it relates to VSV
Phlebotomine sand flies are vectors of bacteria, parasites, and viruses. Lutzomyia apache was incriminated as a vector of vesicular stomatitis viruses (VSV) due to overlapping ranges of the sand fly and outbreaks of VSV. I report on newly discovered populations of L. apache in Wyoming from Albany and ...
Scoring Rules for Subjective Probability Distributions
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
2017-01-01
… significantly due to risk aversion. We characterize an approach for eliciting the entire subjective belief distribution that is minimally biased due to risk aversion. We offer simulated examples to demonstrate the intuition of our approach. We also provide theory to formally characterize our framework, and we provide experimental evidence which corroborates our theoretical results. We conclude that for empirically plausible levels of risk aversion, one can reliably elicit most important features of the latent subjective belief distribution without undertaking calibration for risk attitudes, providing one …
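A concrete instance of a proper scoring rule for a reported distribution is the quadratic (Brier) rule. Under risk neutrality, truthful reporting maximizes the expected score; that truth-telling property is exactly what risk aversion distorts in the paper's setting. A minimal worked example with made-up beliefs:

```python
# Quadratic (Brier) scoring rule for a reported probability distribution over
# K outcomes: S(report, k) = 2*report[k] - sum_j report[j]^2. It is "proper":
# a risk-neutral agent maximizes expected score by reporting true beliefs.
def quadratic_score(report, outcome):
    return 2 * report[outcome] - sum(p * p for p in report)

def expected_score(report, belief):
    return sum(belief[k] * quadratic_score(report, k) for k in range(len(belief)))

belief = [0.7, 0.2, 0.1]                           # made-up true beliefs
honest = expected_score(belief, belief)            # report truthfully
shaded = expected_score([0.5, 0.3, 0.2], belief)   # a distorted report
# Honest reporting earns a strictly higher expected score.
```

For an honest report the expected score reduces to the sum of squared beliefs (here 0.54), and any distortion lowers it, which is what makes the rule incentive-compatible absent risk aversion.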
International Nuclear Information System (INIS)
Tang Wei; Zhang Xiaoming; Xiao Bo; Zeng Nanlin; Pan Huashan; Feng Zhisong; Xu Xiaoxue
2011-01-01
Objective: To study the correlation between established magnetic resonance (MR) imaging criteria of disease severity in acute pancreatitis and the Acute Physiology and Chronic Health Evaluation II (APACHE II) score, and to assess the utility of each prognostic indicator in acute pancreatitis. Materials and methods: In this study there were 94 patients with acute pancreatitis (AP), all of whom had abdominal MR imaging. MR findings were categorized into edematous and necrotizing AP and graded according to the MR severity index (MRSI). The APACHE II score was calculated within 24 h of admission, and local complications, death, duration of hospitalization and ICU stay were recorded. Statistical analysis was performed to determine their correlation. Results: In patients with pancreatitis, no significant correlation was found between the APACHE II score and the MRSI score (P = 0.196). The MRSI score correlated well with morbidity (P = 0.006) but not with mortality (P = 0.137). The APACHE II score correlated well with mortality (P = 0.002) but not with morbidity (P = 0.112). The MRSI score was superior to the APACHE II score as a predictor of the length of hospitalization (r = 0.52 vs. r = 0.35). A high MRSI and APACHE II score correlated with the need for intensive care unit (ICU) admission (P = 0.000 and P = 0.000, respectively). Conclusion: In patients with pancreatitis, the MRSI is superior to the APACHE II in assessing local complications from pancreatitis but has a limited role in determining systemic complications, in which the APACHE II score excels.
ASSESSMENT OF SEVERITY OF PERFORATED PERITONITIS USING MODIFIED APACHE II SCORE
Directory of Open Access Journals (Sweden)
L. Rajeswar Reddy
2016-06-01
Full Text Available Acute generalised peritonitis from gastrointestinal hollow viscus perforation is a potentially life-threatening condition. It is a common surgical emergency in many general surgical units in developing countries and is often associated with high morbidity and mortality. Grading the severity of acute peritonitis has assisted in no small way in decision making and has improved therapy in the management of severely ill patients. Empirically based risk assessment for important clinical events has been extremely useful in evaluating new therapies, in monitoring resources for effective use and in improving quality of care. MATERIAL AND METHODS: A prospective survey of patients with acute generalised peritonitis due to gastrointestinal perforation was carried out in the general surgical wards of KIMS Hospital, Amalapuram, during the period from July 2013 to November 2016. The study population consisted of 50 consecutive patients who had laparotomy during the study period for acute peritonitis due to gastrointestinal perforation, after diagnostic confirmation. RESULT AND DISCUSSION: The most common cause of peritonitis in our study was perforated duodenal ulcer (31 cases), followed by appendicular perforation (7 cases) and stomach perforation (7 cases). Despite delay in seeking treatment, the overall mortality rate (14%) was favourably comparable with other published series.
CMS Analysis and Data Reduction with Apache Spark
Energy Technology Data Exchange (ETDEWEB)
Gutsche, Oliver [Fermilab; Canali, Luca [CERN; Cremer, Illia [Magnetic Corp., Waltham; Cremonesi, Matteo [Fermilab; Elmer, Peter [Princeton U.; Fisk, Ian [Flatiron Inst., New York; Girone, Maria [CERN; Jayatilaka, Bo [Fermilab; Kowalkowski, Jim [Fermilab; Khristenko, Viktor [CERN; Motesnitsalis, Evangelos [CERN; Pivarski, Jim [Princeton U.; Sehrish, Saba [Fermilab; Surdy, Kacper [CERN; Svyatkovskiy, Alexey [Princeton U.
2017-10-31
Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open source projects to support the analysis of Petabyte and Exabyte datasets in industry. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at analysis of very large datasets that could potentially reduce the time-to-physics with increased interactivity. Moreover, these new tools are typically actively developed by large communities, often profiting from industry resources, and under open source licensing. These factors result in a boost for adoption and maturity of the tools and for the communities supporting them, at the same time helping to reduce the cost of ownership for the end users. In this talk, we present studies of using Apache Spark for end-user data analysis. We study the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets and the end analysis up to the publication plot. For the first thrust, CMS is working together with CERN openlab and Intel on the CMS Big Data Reduction Facility. The goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We present the progress of this 2-year project with first results of scaling up Spark-based HEP analysis. For the second thrust, we present studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability and performance to the ROOT-based analysis.
DEFF Research Database (Denmark)
Asmussen, Søren; Albrecher, Hansjörg
The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Levy processes, Gerber-Shiu functions and dependence.
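One classical result treated by this literature admits a short worked example: in the compound Poisson model with exponential claims of mean mu and premium loading theta, the infinite-horizon ruin probability has the closed form psi(u) = (1/(1+theta)) * exp(-theta*u / ((1+theta)*mu)). The parameter values below are arbitrary:

```python
import math

# Closed-form ruin probability for the Cramér-Lundberg model with
# exponentially distributed claims (mean mu) and safety loading theta:
#   psi(u) = (1 / (1 + theta)) * exp(-theta * u / ((1 + theta) * mu))
# Parameter values here are arbitrary, chosen only to illustrate the decay.
def ruin_probability(u, mu=1.0, theta=0.2):
    return (1.0 / (1.0 + theta)) * math.exp(-theta * u / ((1.0 + theta) * mu))

psi0 = ruin_probability(0.0)   # = 1/(1+theta), independent of mu
psi5 = ruin_probability(5.0)
# The ruin probability decays exponentially in the initial reserve u.
```

The exponential decay rate theta/((1+theta)*mu) is the adjustment coefficient for this special case, the quantity Lundberg's inequality bounds in general.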
Generalized Probability-Probability Plots
Mushkudiani, N.A.; Einmahl, J.H.J.
2004-01-01
We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem for real-valued data. These plots, constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P plots.
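The classical one-sample P-P plot that these authors generalize is easy to construct: plot the model CDF evaluated at each order statistic against the empirical proportion i/n; points near the diagonal indicate a good fit. A minimal sketch against a standard normal model, with made-up data:

```python
import math

# Classical one-sample P-P plot: points (F(x_(i)), i/n) for sorted data x_(i)
# and hypothesized model CDF F. Sample values below are made up.
def normal_cdf(x, mean=0.0, sd=1.0):
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

sample = sorted([-1.3, -0.4, 0.1, 0.2, 0.9, 1.5])
n = len(sample)
pp_points = [(normal_cdf(x), (i + 1) / n) for i, x in enumerate(sample)]

# Maximum vertical distance from the diagonal, a Kolmogorov-Smirnov-type
# summary of how far the plot strays from perfect fit.
max_dev = max(abs(u - v) for u, v in pp_points)
```

The generalized plots of the paper replace the single evaluation point per observation with indexing over closed intervals, but the diagonal-comparison idea is the same.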
Chugh, Saryu; Arivu Selvan, K.; Nadesh, RK
2017-11-01
Numerous harmful factors influence the functioning of the human body, such as hypertension, smoking, obesity and inappropriate medication, and these contribute to many different diseases such as diabetes, thyroid disorders, strokes and coronary disease. Environmental conditions are a further contributor to coronary disease. Apache Spark suits applications that require gathering and analysing data: it is fast because it uses in-memory processing, it runs on a distributed environment, and it processes data in batches, giving a high throughput rate. The use of data-mining techniques in the diagnosis of coronary disease has been thoroughly examined, showing acceptable levels of accuracy. Decision trees, neural networks and gradient boosting algorithms are among the techniques available in Apache Spark that help in analysing this information.
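One of the techniques the abstract lists, the decision tree, can be illustrated in miniature as a single-split "stump" on made-up risk-factor data; this stands in for what a distributed tree learner such as Spark MLlib's would do at scale:

```python
# Toy decision "stump" (a one-split decision tree) on hypothetical data:
# predict coronary disease from systolic blood pressure. The records are
# invented for illustration; this is not Spark MLlib, just the core idea.
patients = [  # (systolic_bp, has_disease)
    (118, 0), (122, 0), (131, 0), (145, 1), (152, 1), (160, 1),
]

def best_threshold(rows):
    """Choose the split threshold that misclassifies the fewest rows."""
    candidates = sorted(set(bp for bp, _ in rows))
    def errors(t):
        return sum((bp >= t) != bool(label) for bp, label in rows)
    return min(candidates, key=errors)

t = best_threshold(patients)
predict = lambda bp: int(bp >= t)
accuracy = sum(predict(bp) == label for bp, label in patients) / len(patients)
```

A full decision tree repeats this greedy split search recursively over many features; gradient boosting then combines many such small trees.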
Shiryaev, Albert N
2016-01-01
This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.
Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...
Analyzing large data sets from XGC1 magnetic fusion simulations using apache spark
Energy Technology Data Exchange (ETDEWEB)
Churchill, R. Michael [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)
2016-11-21
Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
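The k-means clustering applied to the XGC1 data can be sketched with a plain Lloyd's-algorithm loop. The 2-D points and the naive deterministic initialization below are illustrative only, not the Spark implementation or the simulation data:

```python
# Minimal Lloyd's-algorithm k-means on small 2-D points (illustrative data,
# naive "first k points" initialization; Spark MLlib parallelizes the same
# assignment/update steps over a cluster).
def kmeans(points, k, iters=10):
    centers = points[:k]                      # naive init, fine for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):     # update step: mean of each cluster
            if cl:
                centers[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centers

points = [(0.1, 0.0), (5.0, 5.1), (0.0, 0.2), (5.2, 4.9), (0.2, 0.1), (4.9, 5.0)]
centers = sorted(kmeans(points, k=2))
# Centers converge near the two obvious groups, roughly (0.1, 0.1) and (5.0, 5.0).
```

On real distribution-function data each "point" is a high-dimensional velocity-space sample, but the two alternating steps are identical.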
Modified poisoning severity score for early prognostic evaluation in acute paraquat poisoning
Directory of Open Access Journals (Sweden)
Feng-lin SONG
2018-04-01
Full Text Available Objective: To study the value of a modified poisoning severity score (PSS) for early prognostic evaluation in acute paraquat poisoning. Methods: Thirty-seven patients with acute paraquat poisoning from June 2013 to June 2016 were enrolled. The PSS, the modified PSS, and the Acute Physiology and Chronic Health Evaluation II (APACHE II) score were calculated for each patient. The relationship between the modified PSS and the APACHE II was analyzed, and the factors affecting outcome were analyzed by logistic regression. Receiver operating characteristic (ROC) curves for the PSS, the modified PSS and the APACHE II were drawn and compared. Results: The risk of death was positively correlated with time to admission, ingested dose, urine paraquat concentration, and white blood cell count (P<0.05). There was a significant correlation between the modified PSS and the APACHE II (P<0.0001). The immediate PSS, the modified PSS, and the APACHE II score were all significant predictors of prognosis in patients with acute paraquat poisoning; the areas under the curve (AUC) were 0.774, 0.788 and 0.799, respectively. The best cut-off for the modified PSS was 6.5 (a score greater than 6.5 indicates a higher risk of death). Comparison of the areas under the three ROC curves showed no significant difference between the three scores in predicting death [P=0.7633 (PSS vs. DPSS), P=0.7791 (PSS vs. APACHE II), P=0.8918 (DPSS vs. APACHE II)]. Conclusion: The modified PSS is helpful for early prediction of prognosis in acute paraquat poisoning. DOI: 10.11855/j.issn.0577-7402.2018.04.13
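The ROC/AUC comparison in studies like this rests on the Mann-Whitney identity: the AUC equals the probability that a randomly chosen death case scores higher than a randomly chosen survivor, with ties counted half. A sketch with hypothetical severity scores, not the study's data:

```python
# AUC via the Mann-Whitney identity: fraction of (case, control) pairs where
# the case's severity score exceeds the control's (ties count 0.5).
# The scores below are hypothetical PSS-like values, not this study's data.
def auc(case_scores, control_scores):
    wins = sum((c > s) + 0.5 * (c == s)
               for c in case_scores for s in control_scores)
    return wins / (len(case_scores) * len(control_scores))

deaths    = [9, 8, 7, 7]      # higher severity scores among deaths
survivors = [3, 5, 6, 7, 2]

a = auc(deaths, survivors)    # 0.5 means useless, 1.0 means perfect ranking
```

Comparing two scoring systems then reduces to comparing their AUCs (with a paired test such as DeLong's, since both scores are computed on the same patients).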
Quantum Probabilities as Behavioral Probabilities
Directory of Open Access Journals (Sweden)
Vyacheslav I. Yukalov
2017-03-01
Full Text Available We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are quantum objects; it shows only that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to evaluating the utilities of the considered prospects, real decision makers also weigh their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.
Aplikasi Search Engine Perpustakaan Petra Berbasis Android dengan Apache SOLR
Directory of Open Access Journals (Sweden)
Andreas Handojo
2016-07-01
Full Text Available Abstract: Education is an essential need for people to improve their skills and standard of living. Besides formal education, knowledge can also be gained through printed media and books, and the library is an important facility supporting this. Despite its usefulness, a library can be difficult to use because its collection (books, journals, magazines, and so on) is so large that finding a particular book is hard. A library must therefore not only keep expanding its collection but also keep pace with technology to make its services easier to use. The library of Petra Christian University currently holds roughly 230,000 physical and digital items (based on 2014 data), and its catalogue of physical items and digital documents can be accessed through the library website. The sheer size of this collection makes searching difficult for users. To extend the services offered, this research builds a library search engine application using the Apache SOLR platform and a PostgreSQL database. To further improve ease of access, the application is built for Android-based mobile devices. In addition to testing the application, a questionnaire was distributed to 50 prospective users; the results indicate that the implemented features match user needs (78%). Keywords: SOLR, Search Engine, Library, PostgreSQL
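At its core, a SOLR-style search engine answers queries from an inverted index. A toy Python sketch of that idea (this is not SOLR's actual API, and the document texts are invented):

```python
from collections import defaultdict

class TinyIndex:
    """Toy inverted index illustrating the core idea behind a SOLR-style
    search engine: map each term to the set of documents containing it."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.docs = {}

    def add(self, doc_id, text):
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, query):
        # AND-query: return documents containing every query term
        sets = [self.postings.get(t, set()) for t in query.lower().split()]
        hits = set.intersection(*sets) if sets else set()
        return sorted(hits)

idx = TinyIndex()
idx.add(1, "digital library catalogue search")
idx.add(2, "library collection of physical books")
idx.add(3, "android client for library search")
print(idx.search("library search"))  # → [1, 3]
```

Real SOLR adds analyzers, relevance ranking, and faceting on top of this basic structure.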
The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
Majewski, Steven R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.; Allende Prieto, Carlos; Barkhouser, Robert; Bizyaev, Dmitry; Blank, Basil; Brunner, Sophia; Burton, Adam; Carrera, Ricardo; Chojnowski, S. Drew; Cunha, Kátia; Epstein, Courtney; Fitzgerald, Greg; García Pérez, Ana E.; Hearty, Fred R.; Henderson, Chuck; Holtzman, Jon A.; Johnson, Jennifer A.; Lam, Charles R.; Lawler, James E.; Maseman, Paul; Mészáros, Szabolcs; Nelson, Matthew; Nguyen, Duy Coung; Nidever, David L.; Pinsonneault, Marc; Shetrone, Matthew; Smee, Stephen; Smith, Verne V.; Stolberg, Todd; Skrutskie, Michael F.; Walker, Eric; Wilson, John C.; Zasowski, Gail; Anders, Friedrich; Basu, Sarbani; Beland, Stephane; Blanton, Michael R.; Bovy, Jo; Brownstein, Joel R.; Carlberg, Joleen; Chaplin, William; Chiappini, Cristina; Eisenstein, Daniel J.; Elsworth, Yvonne; Feuillet, Diane; Fleming, Scott W.; Galbraith-Frew, Jessica; García, Rafael A.; García-Hernández, D. Aníbal; Gillespie, Bruce A.; Girardi, Léo; Gunn, James E.; Hasselquist, Sten; Hayden, Michael R.; Hekker, Saskia; Ivans, Inese; Kinemuchi, Karen; Klaene, Mark; Mahadevan, Suvrath; Mathur, Savita; Mosser, Benoît; Muna, Demitri; Munn, Jeffrey A.; Nichol, Robert C.; O'Connell, Robert W.; Parejko, John K.; Robin, A. C.; Rocha-Pinto, Helio; Schultheis, Matthias; Serenelli, Aldo M.; Shane, Neville; Silva Aguirre, Victor; Sobeck, Jennifer S.; Thompson, Benjamin; Troup, Nicholas W.; Weinberg, David H.; Zamora, Olga
2017-09-01
The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the Sloan 2.5 m Telescope, APOGEE has collected a half million high-resolution (R ∼ 22,500), high signal-to-noise ratio (>100), infrared (1.51-1.70 μm) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design—hardware, field placement, target selection, operations—and gives an overview of these aspects as well as the data reduction, analysis, and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity, and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12 and later releases, all of the APOGEE data products are publicly available.
The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
International Nuclear Information System (INIS)
Majewski, Steven R.; Brunner, Sophia; Burton, Adam; Chojnowski, S. Drew; Pérez, Ana E. García; Hearty, Fred R.; Lam, Charles R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.; Prieto, Carlos Allende; Carrera, Ricardo; Barkhouser, Robert; Bizyaev, Dmitry; Blank, Basil; Henderson, Chuck; Cunha, Kátia; Epstein, Courtney; Johnson, Jennifer A.; Fitzgerald, Greg; Holtzman, Jon A.
2017-01-01
The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
Energy Technology Data Exchange (ETDEWEB)
Majewski, Steven R.; Brunner, Sophia; Burton, Adam; Chojnowski, S. Drew; Pérez, Ana E. García; Hearty, Fred R.; Lam, Charles R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Schiavon, Ricardo P. [Gemini Observatory, 670 N. A’Ohoku Place, Hilo, HI 96720 (United States); Frinchaboy, Peter M. [Department of Physics and Astronomy, Texas Christian University, Fort Worth, TX 76129 (United States); Prieto, Carlos Allende; Carrera, Ricardo [Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Barkhouser, Robert [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Bizyaev, Dmitry [Apache Point Observatory and New Mexico State University, P.O. Box 59, Sunspot, NM, 88349-0059 (United States); Blank, Basil; Henderson, Chuck [Pulse Ray Machining and Design, 4583 State Route 414, Beaver Dams, NY 14812 (United States); Cunha, Kátia [Observatório Nacional, Rio de Janeiro, RJ 20921-400 (Brazil); Epstein, Courtney; Johnson, Jennifer A. [The Ohio State University, Columbus, OH 43210 (United States); Fitzgerald, Greg [New England Optical Systems, 237 Cedar Hill Street, Marlborough, MA 01752 (United States); Holtzman, Jon A. [New Mexico State University, Las Cruces, NM 88003 (United States); and others
2017-09-01
DEFF Research Database (Denmark)
Rojas-Nandayapa, Leonardo
Tail probabilities of sums of heavy-tailed random variables are of a major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think...... analytic expression for the distribution function of a sum of random variables. The presence of heavy-tailed random variables complicates the problem even more. The objective of this dissertation is to provide better approximations by means of sharp asymptotic expressions and Monte Carlo estimators...
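The Monte Carlo estimation mentioned in the record above can be illustrated with a crude estimator for the tail probability of a sum of Pareto variables; the distribution, tail index and threshold below are illustrative only:

```python
import numpy as np

# Crude Monte Carlo estimate of the tail probability P(S > t) for a sum
# of heavy-tailed (Pareto) random variables.
rng = np.random.default_rng(42)
n, t = 2, 20.0
alpha = 1.5  # Pareto tail index

samples = 1 + rng.pareto(alpha, size=(200_000, n))  # Pareto(alpha) on [1, inf)
s = samples.sum(axis=1)
p_hat = (s > t).mean()
se = np.sqrt(p_hat * (1 - p_hat) / len(s))
print(f"P(S > {t}) ~ {p_hat:.4f} +/- {1.96 * se:.4f}")
```

For rare events this crude estimator becomes inefficient, which is exactly why the sharp asymptotics and specialized estimators the dissertation studies are needed.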
Hashmi, M; Asghar, A; Shamim, F; Khan, F H
2016-01-01
To assess the predictive performance of Acute Physiology and Chronic Health Evaluation II (APACHE II) software available on the hospital intranet and analyze interrater reliability of calculating the APACHE II score by the gold standard manual method or automatically using the software. An expert scorer not involved in the data collection had calculated the APACHE II scores of 213 patients admitted to the surgical Intensive Care Unit using the gold standard manual method for a previous study performed in the department. The same data were entered into the computer software available on the hospital intranet (http://intranet/apacheii) to recalculate the APACHE II score automatically along with the predicted mortality. The receiver operating characteristic (ROC) curve, Hosmer-Lemeshow goodness-of-fit statistic and Pearson's correlation coefficient were computed. The 213 patients had an average APACHE II score of 17.20 ± 8.24, the overall mortality rate was 32.8% and the standardized mortality ratio was 1.00. The area under the ROC curve was 0.827, significantly greater than 0.5, and the Hosmer-Lemeshow test showed good calibration (H = 5.46, P = 0.71). Interrater reliability using Pearson's product moment correlation demonstrated a strong positive relationship between the computer and the manual expert scorer (r = 0.98, P = 0.0005). The APACHE II software available on the hospital's intranet has satisfactory calibration and discrimination, and interrater reliability is good when compared with the gold standard manual method.
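The interrater comparison in this record rests on Pearson's product-moment correlation; a small sketch with hypothetical manual and software scores (not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two raters' scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# hypothetical APACHE II scores: manual expert vs. software, 8 patients
manual   = [14, 22, 9, 31, 17, 25, 12, 20]
software = [14, 23, 9, 30, 18, 25, 11, 20]
print(round(pearson_r(manual, software), 3))
```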
Grinstead, Charles M; Snell, J Laurie
2011-01-01
This book explores four real-world topics through the lens of probability theory. It can be used to supplement a standard text in probability or statistics. Most elementary textbooks present the basic theory and then illustrate the ideas with some neatly packaged examples. Here the authors assume that the reader has seen, or is learning, the basic theory from another book and concentrate in some depth on the following topics: streaks, the stock market, lotteries, and fingerprints. This extended format allows the authors to present multiple approaches to problems and to pursue promising side discussions in ways that would not be possible in a book constrained to cover a fixed set of topics. To keep the main narrative accessible, the authors have placed the more technical mathematical details in appendices. The appendices can be understood by someone who has taken one or two semesters of calculus.
Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V
1997-01-01
This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.
An External Independent Validation of APACHE IV in a Malaysian Intensive Care Unit.
Wong, Rowena S Y; Ismail, Noor Azina; Tan, Cheng Cheng
2015-04-01
Intensive care unit (ICU) prognostic models are predominantly used in more developed regions such as the United States, Europe and Australia, and are less popular in Southeast Asian countries due to cost and technology considerations. The purpose of this study is to evaluate the suitability of the Acute Physiology and Chronic Health Evaluation (APACHE) IV model in a single-centre Malaysian ICU. A prospective study was conducted at the single-centre ICU in Hospital Sultanah Aminah (HSA), Malaysia. External validation of APACHE IV involved a cohort of 916 patients who were admitted in 2009. Model performance was assessed through its calibration and discrimination abilities. A first-level customisation using a logistic regression approach was also applied to improve model calibration. APACHE IV exhibited good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.78. However, the model's overall fit was poor, as indicated by the Hosmer-Lemeshow goodness-of-fit test (Ĉ = 113, P < 0.001). After first-level customisation, calibration improved while discrimination was not affected. APACHE IV is not suitable for application in the HSA ICU without further customisation. The model's lack of fit in the Malaysian study is attributed to differences in baseline characteristics between the HSA ICU and APACHE IV datasets; other possible factors include differences in clinical practice and in the quality and services of the health care systems of Malaysia and the United States.
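First-level customisation of the kind described above is commonly performed by refitting a logistic model on the logit of the original model's predictions; a NumPy sketch on simulated data (the cohort and the degree of miscalibration are invented):

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def first_level_customization(p_orig, died, lr=0.1, epochs=2000):
    """Refit intercept and slope on the logit of the original model's
    predictions (a common form of first-level customization)."""
    x = logit(np.clip(p_orig, 1e-6, 1 - 1e-6))
    b0, b1 = 0.0, 1.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(b0 + b1 * x)))
        # gradient descent on the mean logistic log-loss
        g0, g1 = (p - died).mean(), ((p - died) * x).mean()
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# toy cohort where the original model systematically overestimates mortality
rng = np.random.default_rng(0)
p_orig = rng.uniform(0.05, 0.9, 500)
died = (rng.uniform(size=500) < 0.5 * p_orig).astype(float)  # true risk is half
b0, b1 = first_level_customization(p_orig, died)
p_new = 1 / (1 + np.exp(-(b0 + b1 * logit(p_orig))))
print(f"mean predicted {p_new.mean():.3f} vs observed {died.mean():.3f}")
```

Because only the intercept and slope change, discrimination (the ranking of patients) is untouched while calibration improves, which matches the behaviour reported in the abstract.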
International Nuclear Information System (INIS)
Rabago, K.R.
2008-01-01
The purpose of this Strategic Plan Report is to provide an introduction and in-depth analysis of the issues and opportunities, resources, and technologies of energy efficiency and renewable energy that have potential beneficial application for the people of the Jicarilla Apache Nation and surrounding communities. The Report seeks to draw on the best available information that existed at the time of writing, and where necessary, draws on new research to assess this potential. This study provides a strategic assessment of opportunities for maximizing the potential for electrical energy efficiency and renewable energy development by the Jicarilla Apache Nation. The report analyzes electricity use on the Jicarilla Apache Reservation in buildings. The report also assesses particular resources and technologies in detail, including energy efficiency, solar, wind, geothermal, biomass, and small hydropower. The closing sections set out the elements of a multi-year, multi-phase strategy for development of resources to the maximum benefit of the Nation.
Jicarilla Apache Utility Authority Renewable Energy and Energy Efficiency Strategic Planning
Energy Technology Data Exchange (ETDEWEB)
Rabago, K.R.
2008-06-28
Directory of Open Access Journals (Sweden)
Rahmad Dawood
2014-04-01
Full Text Available Raspberry Pi is a small-sized computer that can nevertheless function like an ordinary PC, which makes it possible to run a web server application on it. This paper reports results from testing the feasibility and performance of running a web server on the Raspberry Pi. The test covered the three currently most popular web servers: Apache, Nginx, and Lighttpd. The parameters used to evaluate feasibility and performance were the maximum number of requests served and the reply time. The results show that all three web servers are feasible to run on the Raspberry Pi, with Nginx giving the best performance, followed by Lighttpd and Apache. Keywords: Raspberry Pi, web server, Apache, Lighttpd, Nginx, web server performance
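Reply time of the kind measured in this study can be sampled with a few lines of Python against a throwaway local server; this stands in for dedicated benchmarking tools such as ApacheBench, and the numbers will vary by machine:

```python
import threading
import time
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

times = []
for _ in range(50):
    t0 = time.perf_counter()
    with urllib.request.urlopen(url) as r:
        r.read()
    times.append(time.perf_counter() - t0)

print(f"median reply time: {sorted(times)[len(times) // 2] * 1000:.2f} ms")
server.shutdown()
```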
Hammond, Vanessa Lea; Watson, P. J.; O'Leary, Brian J.; Cothran, D. Lisa
2009-01-01
Hopelessness is central to prominent mental health problems within American Indian (AI) communities. Apaches living on a reservation in Arizona responded to diverse expressions of hope along with Hopelessness, Personal Self-Esteem, and Collective Self-Esteem scales. An Apache Hopefulness Scale expressed five themes of hope and correlated…
2012-03-29
... DEPARTMENT OF AGRICULTURE Forest Service Rim Lakes Forest Restoration Project; Apache-Sitgreaves National Forest, Black Mesa Ranger District, Coconino County, AZ AGENCY: Forest Service, USDA. ACTION: Notice of intent to prepare an environmental impact statement. SUMMARY: The U.S. Forest Service (FS) will...
Lutzomyia (Helcocyrtomyia) Apache Young and Perkins (Diptera: Psychodidae) feeds on reptiles
Phlebotomine sand flies are vectors of bacteria, parasites, and viruses. In the western USA a sand fly, Lutzomyia apache Young and Perkins, was initially associated with epizootics of vesicular stomatitis virus (VSV), because sand flies were trapped at sites of an outbreak. Additional studies indica...
MR imaging of acute pancreatitis: Correlation of abdominal wall edema with severity scores
Energy Technology Data Exchange (ETDEWEB)
Yang, Ru, E-mail: yangru0904@163.com [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Jing, Zong Lin, E-mail: jzl325@163.com [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Zhang, Xiao Ming, E-mail: zhangxm@nsmc.edu.cn [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Tang, Wei, E-mail: tw-n-g-up@163.com [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Xiao, Bo, E-mail: xiaoboimaging@163.com [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Huang, Xiao Hua, E-mail: nc_hxh1966@yahoo.com.cn [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Yang, Lin, E-mail: llinyangmd@163.com [Sichuan Key laboratory of Medical Imaging, Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China); Feng, Zhi Song, E-mail: fengzhisong@medmail.com.cn [Department of Gastroenterology, Affiliated Hospital of North Sichuan Medical College, Nanchong 637000 (China)
2012-11-15
Objective: To study MRI findings of abdominal wall edema (AWE) in acute pancreatitis as well as correlations between AWE and the severity of acute pancreatitis according to the MR severity index (MRSI) and the Acute Physiology and Chronic Health Evaluation III (APACHE III) scoring system. Materials and methods: A total of 160 patients with AP admitted to our institution between December 2009 and March 2011 were included in this study. MRI was performed within 48 h after admission. MRI findings of acute pancreatitis were noted, including AWE on the MRI. The abdominal wall area was divided into quarters, and each area involved was recorded as 1 point to score the severity of AWE. The severity of acute pancreatitis was studied using both the MRSI and the APACHE III scoring system. Spearman correlation of AWE with the MRSI and the APACHE III scoring system was analyzed. Results: In 160 patients with acute pancreatitis, 53.8% had AWE on MRI. The average AWE score was 1.2 ± 1.4 points. The prevalence of AWE was 30.5%, 64.5% and 100% in mild, moderate and severe AP, respectively, according to MRSI. AWE on MRI was correlated with MRSI scores (r = 0.441, p < 0.001). According to APACHE III scores, the averages were 2.0 ± 1.1 and 2.6 ± 1.1 points in mild AP and severe AP, respectively (P = 0.016). AWE was slightly correlated with the APACHE III scores (r = 0.222, p = 0.005). Conclusion: AWE on MRI in acute pancreatitis is common, which may be a supplementary indicator in determining the severity of AP.
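The Spearman correlation used above to relate AWE scores to MRSI is simply the Pearson correlation of ranks; a sketch with invented, tie-free scores (tied observations would need average ranks):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v), float)
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    rxc, ryc = rx - rx.mean(), ry - ry.mean()
    return (rxc @ ryc) / np.sqrt((rxc @ rxc) * (ryc @ ryc))

# hypothetical severity scores for 8 patients (tie-free on purpose)
awe  = [0, 1, 2, 4, 3, 5, 6, 7]
mrsi = [2, 3, 4, 5, 6, 7, 8, 9]
print(round(spearman_rho(awe, mrsi), 3))  # → 0.976
```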
Prognostic factors and scoring system for survival in colonic perforation.
Komatsu, Shuhei; Shimomatsuya, Takumi; Nakajima, Masayuki; Amaya, Hirokazu; Kobuchi, Taketsune; Shiraishi, Susumu; Konishi, Sayuri; Ono, Susumu; Maruhashi, Kazuhiro
2005-01-01
No ideal and generally accepted prognostic factors and scoring systems exist to determine the prognosis of peritonitis associated with colonic perforation. This study was designed to investigate prognostic factors and evaluate the various scoring systems to allow identification of high-risk patients. Between 1996 and 2003, excluding iatrogenic and trauma cases, 26 consecutive patients underwent emergency operations for colorectal perforation and were selected for this retrospective study. Several clinical factors were analyzed as possible predictive factors, and APACHE II, SOFA, MPI, and MOF scores were calculated. The overall mortality was 26.9%. Compared with the survivors, non-survivors were found more frequently in Hinchey's stage III-IV, a low preoperative marker of pH, base excess (BE), and a low postoperative marker of white blood cell count, PaO2/FiO2 ratio, and renal output (24h). According to the logistic regression model, BE was a significant independent variable. Concerning the prognostic scoring systems, an APACHE II score of 19, a SOFA score of 8, an MPI score of 30, and an MOF score of 7 or more were significantly related to poor prognosis. Preoperative BE and postoperative white blood cell count were reliable prognostic factors and early classification using prognostic scoring systems at specific points in the disease process are useful to improve our understanding of the problems involved.
Directory of Open Access Journals (Sweden)
Qing-Bian Ma
2017-01-01
Conclusions: The SAPS 3 scoring system exhibited satisfactory discrimination, even superior to that of APACHE II. In predicting hospital mortality, however, SAPS 3 did not exhibit good calibration and overestimated hospital mortality, demonstrating that SAPS 3 needs improvement in the future.
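Calibration statements like the ones in these records typically rest on the Hosmer-Lemeshow H statistic; a sketch of the computation over risk-sorted groups, run on synthetic, well-calibrated predictions (group count and data are illustrative):

```python
import numpy as np

def hosmer_lemeshow(p, y, g=10):
    """Hosmer-Lemeshow H statistic: a chi-square over g risk-sorted groups
    comparing observed and expected deaths (and survivors)."""
    order = np.argsort(p)
    p, y = np.asarray(p)[order], np.asarray(y)[order]
    h = 0.0
    for grp_p, grp_y in zip(np.array_split(p, g), np.array_split(y, g)):
        n = len(grp_p)
        exp_d = grp_p.sum()  # expected deaths in the group
        obs_d = grp_y.sum()  # observed deaths in the group
        h += (obs_d - exp_d) ** 2 / exp_d
        h += ((n - obs_d) - (n - exp_d)) ** 2 / (n - exp_d)
    return h

# well-calibrated toy predictions: outcomes drawn from the stated risks,
# so H should stay small relative to a chi-square with g - 2 df
rng = np.random.default_rng(3)
p = rng.uniform(0.05, 0.95, 1000)
y = (rng.uniform(size=1000) < p).astype(float)
print(round(hosmer_lemeshow(p, y), 2))
```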
Lot 4 AH-64E Apache Attack Helicopter Follow-on Operational Test and Evaluation Report
2014-12-01
engine is tested to determine its Engine Torque Factor (ETF) rating. To meet contract specifications, a new engine must have an ETF of 1.0. The... published AH-64E operator's manual estimates performance based on engines with an ETF of 1.0, and pilots normally plan missions anticipating the 717... pound shortfall in hover performance at KPP conditions. The Apache Program Manager reports that new engines are delivered with an average ETF of
Developer Initiation and Social Interactions in OSS: A Case Study of the Apache Software Foundation
2014-08-01
pp. 201–215, 2003. 2. K. Crowston, K. Wei, J. Howison, and A. Wiggins, "Free/Libre open-source software development: What we know and what we do not... Understanding the process of participating in open source communities," in International Workshop on Emerging Trends in Free/Libre/Open Source Software... Developer Initiation and Social Interactions in OSS: A Case Study of the Apache Software
2010-03-25
... Ranger, Lakeside Ranger District, Apache-Sitgreaves National Forests, c/o TEC Inc., 514 Via de la Valle... to other papers serving areas affected by this proposal: Tucson Citizen, Sierra Vista Herald, Nogales...
Probability Aggregates in Probability Answer Set Programming
Saad, Emad
2013-01-01
Probability answer set programming is a declarative programming language that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...
Uchida, Mai; Faraone, Stephen V; Martelon, MaryKate; Kenworthy, Tara; Woodworth, K Yvonne; Spencer, Thomas; Wozniak, Janet; Biederman, Joseph
2014-01-01
Background Previous work shows that children with high scores (2 SD, combined score ≥ 210) on the Attention Problems, Aggressive Behavior, and Anxious-Depressed (A-A-A) subscales of the Child Behavior Checklist (CBCL) are more likely than other children to meet criteria for bipolar (BP)-I disorder. However, the utility of this profile as a screening tool has remained unclear. Methods We compared 140 patients with pediatric BP-I disorder, 83 with attention deficit hyperactivity disorder (ADHD), and 114 control subjects. We defined the CBCL-Severe Dysregulation profile as an aggregate cutoff score of ≥ 210 on the A-A-A scales. Patients were assessed with structured diagnostic interviews and functional measures. Results Patients with BP-I disorder were significantly more likely than both control subjects (Odds Ratio [OR]: 173.2; 95% Confidence Interval [CI], 21.2 to 1413.8; P < 0.001) and those with ADHD (OR: 14.6; 95% CI, 6.2 to 34.3; P < 0.001) to have a positive CBCL-Severe Dysregulation profile. Receiver Operating Characteristics analyses showed that the area under the curve for this profile comparing children with BP-I disorder against control subjects and those with ADHD was 99% and 85%, respectively. The corresponding positive predictive values for this profile were 99% and 92% with false positive rates of < 0.2% and 8% for the comparisons with control subjects and patients with ADHD, respectively. Limitations Non-clinician raters administered structured diagnostic interviews, and the sample was referred and largely Caucasian. Conclusions The CBCL-Severe Dysregulation profile can be useful as a screen for BP-I disorder in children in clinical practice. PMID:24882182
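The odds ratios and Wald confidence intervals reported in this record come from 2x2 tables; a minimal sketch of the computation (the counts below are invented, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: profile-positive among BP-I vs. comparison patients
print(odds_ratio_ci(90, 50, 15, 68))
```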
Scaling Qualitative Probability
Burgin, Mark
2017-01-01
There are different approaches to qualitative probability, which includes subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...
Briggs, William M.
2012-01-01
The probability leakage of model M with respect to evidence E is defined. Probability leakage is a kind of model error. It occurs when M implies that events $y$, which are impossible given E, have positive probability. Leakage does not imply model falsification. Models with probability leakage cannot be calibrated empirically. Regression models, which are ubiquitous in statistical practice, often evince probability leakage.
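One natural way to quantify leakage in the sense defined above is the probability mass the model places on the impossible region; for a normal predictive model of a strictly positive quantity (the parameters below are illustrative, not from the paper):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Standard closed-form normal CDF via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# A normal regression model for a strictly positive quantity "leaks"
# probability onto the impossible region y < 0.
mu, sigma = 2.0, 1.5  # hypothetical fitted predictive mean and sd
leak = normal_cdf(0.0, mu, sigma)
print(f"probability assigned to impossible y < 0: {leak:.3f}")
```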
Koo, Reginald; Jones, Martin L.
2011-01-01
Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
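The classic example of an event with probability near 1/e is the matching (hat-check) problem; a quick simulation confirming that a random permutation has no fixed point with probability close to 1/e:

```python
import math
import random

# Probability that a random permutation of n items has no fixed point;
# this tends to 1/e as n grows (already very close at n = 10).
random.seed(7)
n, trials = 10, 200_000
no_match = 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    if all(perm[i] != i for i in range(n)):
        no_match += 1

estimate = no_match / trials
print(f"P(no match) ~ {estimate:.4f}, 1/e = {math.exp(-1):.4f}")
```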
Goldberg, Samuel
1960-01-01
Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.
Seguridad en la configuración del servidor web Apache
Directory of Open Access Journals (Sweden)
Carlos Eduardo Gómez Montoya
2013-07-01
Apache is the web server with the largest presence in the world market. Although its configuration is relatively simple, hardening its security involves understanding and applying a set of known, accepted, and available general rules. Moreover, despite being an apparently solved issue, security in HTTP servers is a growing problem, and not all companies take it seriously. This article identifies and verifies a set of information-security best practices applied to the configuration of Apache. To achieve the objectives, and to guarantee a sound process, a methodology based on the Deming quality cycle was chosen, comprising four phases: plan, do, check, and act; its application guided the development of the project. This article consists of five sections: Introduction, Frame of reference, Methodology, Results and discussion, and Conclusions.
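By way of illustration (the article does not reproduce its exact rule set here), a few hardening directives of the kind such best-practice guides typically recommend:

```apache
# Hide version details in response headers and error pages
ServerTokens Prod
ServerSignature Off

# Disable TRACE to mitigate cross-site tracing
TraceEnable Off

# No directory listings, no .htaccess overrides by default
<Directory "/var/www/html">
    Options -Indexes -Includes
    AllowOverride None
    Require all granted
</Directory>
```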
The customization of APACHE II for patients receiving orthotopic liver transplants
Moreno, Rui
2002-01-01
General outcome prediction models developed for use with large, multicenter databases of critically ill patients may not correctly estimate mortality if applied to a particular group of patients that was under-represented in the original database. The development of new diagnostic weights has been proposed as a method of adapting the general model – the Acute Physiology and Chronic Health Evaluation (APACHE) II in this case – to a new group of patients. Such customization must be empirically tested, because the original model may not contain an appropriate set of predictive variables for the particular group. In this issue of Critical Care, Arabi and co-workers present the results of the validation of a modified model of the APACHE II system for patients receiving orthotopic liver transplants. The use of a highly heterogeneous database in which not all important variables were taken into account, and of a sample too small to use the Hosmer–Lemeshow goodness-of-fit test appropriately, makes their conclusions uncertain. PMID:12133174
SIDELOADING – INGESTION OF LARGE POINT CLOUDS INTO THE APACHE SPARK BIG DATA ENGINE
Directory of Open Access Journals (Sweden)
J. Boehm
2016-06-01
In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
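The map-based ingestion pattern described above can be sketched in miniature without Spark: distribute file handles across workers, run the single-CPU reader inside each task, and flatten the per-file results. Everything here (file names, the toy reader) is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def read_point_cloud(path_and_points):
    """Hypothetical stand-in for a single-CPU point-cloud reader (e.g. a
    LAS-format library): parses one 'file' and returns its tagged points."""
    path, points = path_and_points
    return [(path, p) for p in points]

# Toy "files": (name, list of (x, y, z) points)
files = [("a.las", [(0, 0, 0), (1, 1, 1)]),
         ("b.las", [(2, 2, 2)])]

# The Spark-style pattern: map the reader over the file list in parallel
# workers, then flatten the resulting chunks into one point collection.
with ThreadPoolExecutor(max_workers=2) as pool:
    ingested = [pt for chunk in pool.map(read_point_cloud, files)
                for pt in chunk]
```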
Directory of Open Access Journals (Sweden)
David C. Tomblin
2016-07-01
Among American Indian nations, the White Mountain Apache Tribe has been at the forefront of a struggle to control natural resource management within reservation boundaries. In 1952, they developed the first comprehensive tribal natural resource management program, the White Mountain Recreational Enterprise (WMRE), which became a cornerstone for fighting legal battles over the tribe's right to manage cultural and natural resources on the reservation for the benefit of the tribal community rather than outside interests. This article examines how White Mountain Apaches used the WMRE, while embracing both Euro-American and Apache traditions, as an institutional foundation for resistance and exchange with Euro-American society so as to reassert control over tribal eco-cultural resources in east-central Arizona.
Quantum probability measures and tomographic probability densities
Amosov, GG; Man'ko
2004-01-01
Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the
Toward a generalized probability theory: conditional probabilities
International Nuclear Information System (INIS)
Cassinelli, G.
1979-01-01
The main mathematical object of interest in the quantum logic approach to the foundations of quantum mechanics is the orthomodular lattice and a set of probability measures, or states, defined by the lattice. This mathematical structure is studied per se, independently from the intuitive or physical motivation of its definition, as a generalized probability theory. It is thought that the building-up of such a probability theory could eventually throw light on the mathematical structure of Hilbert-space quantum mechanics as a particular concrete model of the generalized theory. (Auth.)
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
Eliciting Subjective Probabilities with Binary Lotteries
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
objective probabilities. Drawing a sample from the same subject population, we find evidence that the binary lottery procedure induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation...
Constructing Flexible, Configurable, ETL Pipelines for the Analysis of "Big Data" with Apache OODT
Hart, A. F.; Mattmann, C. A.; Ramirez, P.; Verma, R.; Zimdars, P. A.; Park, S.; Estrada, A.; Sumarlidason, A.; Gil, Y.; Ratnakar, V.; Krum, D.; Phan, T.; Meena, A.
2013-12-01
A plethora of open source technologies for manipulating, transforming, querying, and visualizing 'big data' have blossomed and matured in the last few years, driven in large part by recognition of the tremendous value that can be derived by leveraging data mining and visualization techniques on large data sets. One facet of many of these tools is that input data must often be prepared into a particular format (e.g.: JSON, CSV), or loaded into a particular storage technology (e.g.: HDFS) before analysis can take place. This process, commonly known as Extract-Transform-Load, or ETL, often involves multiple well-defined steps that must be executed in a particular order, and the approach taken for a particular data set is generally sensitive to the quantity and quality of the input data, as well as the structure and complexity of the desired output. When working with very large, heterogeneous, unstructured or semi-structured data sets, automating the ETL process and monitoring its progress becomes increasingly important. Apache Object Oriented Data Technology (OODT) provides a suite of complementary data management components called the Process Control System (PCS) that can be connected together to form flexible ETL pipelines as well as browser-based user interfaces for monitoring and control of ongoing operations. The lightweight, metadata driven middleware layer can be wrapped around custom ETL workflow steps, which themselves can be implemented in any language. Once configured, it facilitates communication between workflow steps and supports execution of ETL pipelines across a distributed cluster of compute resources. As participants in a DARPA-funded effort to develop open source tools for large-scale data analysis, we utilized Apache OODT to rapidly construct custom ETL pipelines for a variety of very large data sets to prepare them for analysis and visualization applications. We feel that OODT, which is free and open source software available through the Apache
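The general shape of such a pipeline, well-defined steps executed in a fixed order over a shared metadata context, can be sketched in a few lines. This illustrates the pattern only; Apache OODT's actual PCS components are Java services, and all names below are invented:

```python
def extract(ctx):
    """Pull raw records into the pipeline context."""
    ctx["raw"] = ["1", "2", "bad", "3"]
    return ctx

def transform(ctx):
    """Clean and convert, dropping records that fail validation."""
    ctx["rows"] = [int(v) for v in ctx["raw"] if v.isdigit()]
    return ctx

def load(ctx):
    """Write summary output to the (toy) store."""
    ctx["store"] = {"count": len(ctx["rows"]), "sum": sum(ctx["rows"])}
    return ctx

def run_pipeline(steps, ctx=None):
    """Execute well-defined steps in a particular order, threading the
    shared context (metadata) between them."""
    ctx = {} if ctx is None else ctx
    for step in steps:
        ctx = step(ctx)
    return ctx

result = run_pipeline([extract, transform, load])
```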
Severity scores in trauma patients admitted to ICU. Physiological and anatomic models.
Serviá, L; Badia, M; Montserrat, N; Trujillano, J
2018-02-02
The goals of this project were to compare anatomic and physiological severity scores in trauma patients admitted to the intensive care unit (ICU), and to elaborate mixed statistical models to improve the precision of the scores. A prospective cohort study in the combined medical/surgical ICU of a secondary university hospital. Seven hundred and eighty trauma patients older than 16 years of age admitted to the ICU. Anatomic models (ISS and NISS) were compared with and combined with physiological models (T-RTS, APACHE II [APII], and MPM II). The probability of death was calculated following the TRISS method. Discrimination was assessed using ROC curves (AUC [95% CI]), and calibration using the Hosmer-Lemeshow H test. The mixed models were elaborated with a classification-tree method (Chi-Square Automatic Interaction Detection). An overall mortality of 14% was recorded. The physiological models presented the best discrimination values (APII: 0.87 [0.84-0.90]). All models were affected by poor calibration (P<.01). The best mixed model resulted from the combination of APII and ISS (0.88 [0.83-0.90]). From a pool of patients with APII values ranging from 10 to 17 and an ISS threshold of 22, this model was able to differentiate between a 7.5% mortality in elderly patients with pathological antecedents and a 25% mortality in patients presenting with traumatic brain injury. The physiological models perform better than the anatomic models in trauma patients admitted to the ICU. Patients with low scores in the physiological models require an anatomic analysis of the injuries to determine their severity. Copyright © 2017 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
Outcrop Analysis of the Cretaceous Mesaverde Group: Jicarilla Apache Reservation, New Mexico
Energy Technology Data Exchange (ETDEWEB)
Ridgley, Jennie; Dunbar, Robin Wright
2001-04-24
Field work for this project was conducted during July and April 1998, at which time fourteen measured sections were described and correlated on or adjacent to Jicarilla Apache Reservation lands. A fifteenth section, described east of the main field area, is included in this report, although its distant location precluded use in the correlations and cross sections presented herein. Ground-based photo mosaics were shot for much of the exposed Mesaverde outcrop belt and were used to assist in correlation. Outcrop gamma-ray surveys were conducted at six of the fifteen measured sections using a GAD-6 scintillometer. The raw gamma-ray data are included in this report; however, analysis of those data is part of the ongoing Phase Two of this project.
Extraction of UMLS® Concepts Using Apache cTAKES™ for German Language.
Becker, Matthias; Böckmann, Britta
2016-01-01
Automatic extraction of medical concepts from medical reports, and their classification with semantic standards, is useful for standardization and for clinical research. This paper presents an approach to UMLS concept extraction with a customized natural language processing pipeline for German clinical notes using Apache cTAKES. The objective is to test whether the natural language processing tool is suitable for German-language text, i.e., whether it can identify UMLS concepts and map them to SNOMED-CT. The German UMLS database and German OpenNLP models extended the natural language processing pipeline, so the pipeline can normalize to domain ontologies such as SNOMED-CT using the German concepts. For testing, the ShARe/CLEF eHealth 2013 training dataset translated into German was used. The implemented algorithms were tested with a set of 199 German reports, obtaining an average F1 measure of 0.36 without German stemming or pre- and post-processing of the reports.
Chełkowski, Tadeusz; Gloor, Peter; Jemielniak, Dariusz
2016-01-01
While researchers are becoming increasingly interested in studying the OSS phenomenon, there is still a small number of studies analyzing larger samples of projects to investigate the structure of activities among OSS developers. The significant amount of information gathered in publicly available open-source software repositories and mailing-list archives offers an opportunity to analyze project structures and participant involvement. In this article, using commit data from 263 Apache project repositories (nearly all of them), we show that although OSS development is often described as collaborative, it in fact predominantly relies on radically solitary input and individual, non-collaborative contributions. We also show, in the first published study of this magnitude, that the engagement of contributors follows a power-law distribution.
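A power-law engagement distribution of the kind reported here is commonly checked with the continuous maximum-likelihood (Hill) estimator for the exponent. A sketch on invented commit counts (not the article's data):

```python
import math

def powerlaw_alpha(samples, xmin):
    """Continuous MLE (Hill estimator) for a power-law exponent:
    alpha = 1 + n / sum(ln(x / xmin)) over samples >= xmin."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Hypothetical commits per contributor: many tiny contributors, a few
# prolific ones -- the heavy-tailed shape the article describes.
commits = [1, 1, 1, 2, 2, 3, 5, 8, 20, 120]
alpha = powerlaw_alpha(commits, xmin=1)
```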
Andersen, Douglas C.
1994-01-01
Apache cicada (Homoptera: Cicadidae: Diceroprocta apache Davis) densities were estimated to be 10 individuals/m2 within a closed-canopy stand of Fremont cottonwood (Populus fremontii) and Goodding willow (Salix gooddingii) in a revegetated site adjacent to the Colorado River near Parker, Arizona. Coupled with data drawn from the literature, I estimate that up to 1.3 cm (13 l/m2) of water may be added to the upper soil layers annually through the feeding activities of cicada nymphs. This is equivalent to 12% of the annual precipitation received in the study area. Apache cicadas may have significant effects on ecosystem functioning via effects on water transport and thus act as a critical-link species in this southwest desert riverine ecosystem. Cicadas emerged later within the cottonwood-willow stand than in relatively open saltcedar-mesquite stands; this difference in temporal dynamics would affect their availability to several insectivorous bird species and may help explain the birds' recent declines. Resource managers in this region should be sensitive to the multiple and strong effects that Apache cicadas may have on ecosystem structure and functioning.
Overview of the SDSS-IV MaNGA Survey: Mapping nearby Galaxies at Apache Point Observatory
Bundy, Kevin; Bershady, Matthew A.; Law, David R.; Yan, Renbin; Drory, Niv; MacDonald, Nicholas; Wake, David A.; Cherinka, Brian; Sánchez-Gallego, José R.; Weijmans, Anne-Marie; Thomas, Daniel; Tremonti, Christy; Masters, Karen; Coccato, Lodovico; Diamond-Stanic, Aleksandar M.; Aragón-Salamanca, Alfonso; Avila-Reese, Vladimir; Badenes, Carles; Falcón-Barroso, Jésus; Belfiore, Francesco; Bizyaev, Dmitry; Blanc, Guillermo A.; Bland-Hawthorn, Joss; Blanton, Michael R.; Brownstein, Joel R.; Byler, Nell; Cappellari, Michele; Conroy, Charlie; Dutton, Aaron A.; Emsellem, Eric; Etherington, James; Frinchaboy, Peter M.; Fu, Hai; Gunn, James E.; Harding, Paul; Johnston, Evelyn J.; Kauffmann, Guinevere; Kinemuchi, Karen; Klaene, Mark A.; Knapen, Johan H.; Leauthaud, Alexie; Li, Cheng; Lin, Lihwai; Maiolino, Roberto; Malanushenko, Viktor; Malanushenko, Elena; Mao, Shude; Maraston, Claudia; McDermid, Richard M.; Merrifield, Michael R.; Nichol, Robert C.; Oravetz, Daniel; Pan, Kaike; Parejko, John K.; Sanchez, Sebastian F.; Schlegel, David; Simmons, Audrey; Steele, Oliver; Steinmetz, Matthias; Thanjavur, Karun; Thompson, Benjamin A.; Tinker, Jeremy L.; van den Bosch, Remco C. E.; Westfall, Kyle B.; Wilkinson, David; Wright, Shelley; Xiao, Ting; Zhang, Kai
We present an overview of a new integral field spectroscopic survey called MaNGA (Mapping Nearby Galaxies at Apache Point Observatory), one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV) that began on 2014 July 1. MaNGA will investigate the internal kinematic
Philosophical theories of probability
Gillies, Donald
2000-01-01
The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.
Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia
We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned
Interpretations of probability
Khrennikov, Andrei
2009-01-01
This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.
Gregoric, Pavle; Sijacki, Ana; Stankovic, Sanja; Radenkovic, Dejan; Ivancevic, Nenad; Karamarkovic, Aleksandar; Popovic, Nada; Karadzic, Borivoje; Stijak, Lazar; Stefanovic, Branislav; Milosevic, Zoran; Bajec, Djordje
2010-01-01
Early recognition of the severe form of acute pancreatitis is important because these patients need a more aggressive diagnostic and therapeutic approach and can develop systemic complications such as sepsis, coagulopathy, Acute Lung Injury (ALI), Acute Respiratory Distress Syndrome (ARDS), Multiple Organ Dysfunction Syndrome (MODS), and Multiple Organ Failure (MOF). To determine the role of the combination of the Systemic Inflammatory Response Syndrome (SIRS) score and serum Interleukin-6 (IL-6) level on admission as a predictor of illness severity and outcome in Severe Acute Pancreatitis (SAP), we evaluated 234 patients with a first onset of SAP appearing in the last twenty-four hours. A total of 77 (33%) patients died. The SIRS score and serum IL-6 concentration were measured in the first hour after admission. In 105 patients with a SIRS score of 3 or higher, initially measured IL-6 levels were significantly higher than in the group of the remaining 129 patients (72 +/- 67 pg/mL vs 18 +/- 15 pg/mL). All non-survivors were in the first group, with SIRS scores of 3 and 4 and an initial IL-6 concentration of 113 +/- 27 pg/mL. The values of C-reactive Protein (CRP) measured after 48 h, the Acute Physiology and Chronic Health Evaluation (APACHE II) score on admission, and the Ranson score showed a similar correlation, but the serum amylase level did not correlate significantly with the Ranson score, IL-6 concentration, or APACHE II score. The combination of the SIRS score on admission and the IL-6 serum concentration can be an early predictor of illness severity and outcome in SAP.
International Nuclear Information System (INIS)
Fraassen, B.C. van
1979-01-01
The interpretation of probabilities in physical theories is considered, whether quantum or classical. The following points are discussed: 1) the functions P(μ, Q), in terms of which states and propositions can be represented, are classical (Kolmogorov) probabilities, formally speaking; 2) these probabilities are generally interpreted as themselves conditional, and the conditions are mutually incompatible where the observables are maximal; and 3) testing of the theory typically takes the form of confronting the expectation values of an observable Q calculated with probability measures P(μ, Q) for states μ; hence, of comparing the probabilities P(μ, Q)(E) with the frequencies of occurrence of the corresponding events. It seems that even the interpretation of quantum mechanics, in so far as it concerns what the theory says about the empirical (i.e. actual, observable) phenomena, deals with the confrontation of classical probability measures with observable frequencies. This confrontation is studied. (Auth./C.F.)
The quantum probability calculus
International Nuclear Information System (INIS)
Jauch, J.M.
1976-01-01
The Wigner anomaly (1932) for the joint distribution of noncompatible observables is an indication that the classical probability calculus is not applicable for quantum probabilities. It should, therefore, be replaced by another, more general calculus, which is specifically adapted to quantal systems. In this article this calculus is exhibited and its mathematical axioms and the definitions of the basic concepts such as probability field, random variable, and expectation values are given. (B.R.H)
Choice Probability Generating Functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
Probability of satellite collision
Mccarter, J. W.
1972-01-01
A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
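A common simplification for this kind of calculation, not necessarily the method of the paper, treats encounters as a Poisson process, so the collision probability over a mission follows from the expected number of hits. All numbers below are hypothetical:

```python
import math

def collision_probability(flux, cross_section, years):
    """Poisson-model collision probability: with expected hits
    lambda = flux * cross_section * time, P(at least one) = 1 - exp(-lambda)."""
    expected_hits = flux * cross_section * years
    return 1.0 - math.exp(-expected_hits)

# Hypothetical values: debris flux 1e-5 impacts/m^2/yr, 1000 m^2
# station cross-section, 10-year mission.
p = collision_probability(1e-5, 1000.0, 10.0)
```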
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2013-01-01
This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...
Florescu, Ionut
2013-01-01
THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introductio
Ash, Robert B; Lukacs, E
1972-01-01
Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var
Assessing the clinical probability of pulmonary embolism
International Nuclear Information System (INIS)
Miniati, M.; Pistolesi, M.
2001-01-01
Clinical assessment is a cornerstone of the recently validated diagnostic strategies for pulmonary embolism (PE). Although the diagnostic yield of individual symptoms, signs, and common laboratory tests is limited, the combination of these variables, either by empirical assessment or by a prediction rule, can be used to express a clinical probability of PE. The latter may serve as pretest probability to predict the probability of PE after further objective testing (posterior or post-test probability). Over the last few years, attempts have been made to develop structured prediction models for PE. In a Canadian multicenter prospective study, the clinical probability of PE was rated as low, intermediate, or high according to a model which included assessment of presenting symptoms and signs, risk factors, and presence or absence of an alternative diagnosis at least as likely as PE. Recently, a simple clinical score was developed to stratify outpatients with suspected PE into groups with low, intermediate, or high clinical probability. Logistic regression was used to predict parameters associated with PE. A score ≤ 4 identified patients with low probability of whom 10% had PE. The prevalence of PE in patients with intermediate (score 5-8) and high probability (score ≥ 9) was 38 and 81%, respectively. As opposed to the Canadian model, this clinical score is standardized. The predictor variables identified in the model, however, were derived from a database of emergency ward patients. This model may, therefore, not be valid in assessing the clinical probability of PE in inpatients. In the PISA-PED study, a clinical diagnostic algorithm was developed which rests on the identification of three relevant clinical symptoms and on their association with electrocardiographic and/or radiographic abnormalities specific for PE. Among patients who, according to the model, had been rated as having a high clinical probability, the prevalence of proven PE was 97%, while it was 3
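The standardized score's three-band stratification described above is simple to encode; the prevalences in the comments are the ones reported in the abstract:

```python
def pe_probability_class(score):
    """Stratify the clinical score described above:
    score <= 4 low, 5-8 intermediate, >= 9 high."""
    if score <= 4:
        return "low"           # ~10% prevalence of PE in that study
    if score <= 8:
        return "intermediate"  # ~38%
    return "high"              # ~81%

classes = [pe_probability_class(s) for s in (3, 6, 11)]
```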
Hammergren, Mark; Brucker, Melissa J.; Nault, Kristie A.; Gyuk, Geza; Solontoi, Michael R.
2015-11-01
Near-Earth Objects (NEOs) are interesting to scientists and the general public for diverse reasons: their impacts pose a threat to life and property; they present important albeit biased records of the formation and evolution of the Solar System; and their materials may provide in situ resources for future space exploration and habitation. In January 2015 we began a program of NEO astrometric follow-up and physical characterization using a 17% share of time on the Astrophysical Research Consortium (ARC) 3.5-meter telescope at Apache Point Observatory (APO). Our 500 hours of annual observing time are split into frequent, short astrometric runs (see poster by K. A. Nault et al.), and half-night runs devoted to physical characterization (see poster by M. J. Brucker et al. for preliminary rotational lightcurve results). NEO surface compositions are investigated with 0.36-1.0 μm reflectance spectroscopy using the Dual Imaging Spectrograph (DIS) instrument. As of August 25, 2015, including testing runs during fourth quarter 2014, we have obtained reflectance spectra of 68 unique NEOs, ranging in diameter from approximately 5 m to 8 km. In addition to investigating the compositions of individual NEOs to inform impact hazard and space resource evaluations, we may examine the distribution of taxonomic types and potential trends with other physical and orbital properties. For example, the Yarkovsky effect, which is dependent on asteroid shape, mass, rotation, and thermal characteristics, is believed to dominate other dynamical effects in driving the delivery of small NEOs from the main asteroid belt. Studies of the taxonomic distribution of a large sample of NEOs over a wide range of sizes will test this hypothesis. We present a preliminary analysis of the reflectance spectra obtained in our survey to date, including taxonomic classifications and potential trends with size. Acknowledgements: Based on observations obtained with the Apache Point Observatory 3.5-meter telescope, which
Update on Astrometric Follow-Up at Apache Point Observatory by Adler Planetarium
Nault, Kristie A.; Brucker, Melissa; Hammergren, Mark
2016-10-01
We began our NEO astrometric follow-up and characterization program in 2014 Q4, using about 500 hours of observing time per year with the Astrophysical Research Consortium (ARC) 3.5m telescope at Apache Point Observatory (APO). Our observing is split into 2-hour blocks approximately every other night for astrometry (this poster) and several half-nights per month for spectroscopy (see poster by M. Hammergren et al.) and light curve studies. For astrometry, we use the ARC Telescope Imaging Camera (ARCTIC) with an SDSS r filter, in 2-hour observing blocks centered around midnight. ARCTIC has a magnitude limit of V~23 in 60 s, and we target 20 NEOs per session. ARCTIC has a FOV 1.57 times larger and a readout time half as long as the previous imager, SPIcam, which we used from 2014 Q4 through 2015 Q3. Targets are selected primarily from the Minor Planet Center's (MPC) NEO Confirmation Page (NEOCP) and NEA Observation Planning Aid; we also refer to JPL's What's Observable page, the Spaceguard Priority List and Faint NEOs List, and requests from other observers. To quickly adapt to changing weather and seeing conditions, we create faint, midrange, and bright target lists. Detected NEOs are measured with Astrometrica and internal software, and the astrometry is reported to the MPC. As of June 19, 2016, we have targeted 2264 NEOs, 1955 with provisional designations, 1582 of which were detected. We began observing NEOCP asteroids on January 30, 2016, and have targeted 309, 207 of which were detected. In addition, we serendipitously observed 281 moving objects, 201 of which were identified as previously known objects. This work is based on observations obtained with the Apache Point Observatory 3.5m telescope, which is owned and operated by the Astrophysical Research Consortium. We gratefully acknowledge support from NASA NEOO award NNX14AL17G and thank the University of Chicago Department of Astronomy and Astrophysics for observing time in 2014.
Freund, John E
1993-01-01
Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.
Probability, Nondeterminism and Concurrency
DEFF Research Database (Denmark)
Varacca, Daniele
Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particula...
Directory of Open Access Journals (Sweden)
Lattanzi M.G.
2013-04-01
Full Text Available Small-size ground-based telescopes can effectively be used to look for transiting rocky planets around nearby low-mass M stars using the photometric transit method, as recently demonstrated for example by the MEarth project. Since 2008, at the Astronomical Observatory of the Autonomous Region of Aosta Valley (OAVdA), we have been preparing for the long-term photometric survey APACHE, aimed at finding transiting small-size planets around thousands of nearby early and mid-M dwarfs. APACHE (A PAthway toward the Characterization of Habitable Earths) is designed to use an array of five dedicated and identical 40-cm Ritchey-Chretien telescopes; its observations started at the beginning of summer 2012. The main characteristics of the survey's final setup and the preliminary results from the first weeks of observations will be discussed.
Rocchi, Paolo
2014-01-01
The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.
Khwannimit, Bodin
2008-09-01
To perform a serial assessment and compare the ability to predict intensive care unit (ICU) mortality of the multiple organ dysfunction score (MODS), sequential organ failure assessment (SOFA), and logistic organ dysfunction (LOD) score. The data were collected prospectively on consecutive ICU admissions over a 24-month period at a tertiary referral university hospital. The MODS, SOFA, and LOD scores were calculated on admission and repeated every 24 hrs. Two thousand fifty-four patients were enrolled in the present study. The maximum and delta scores of all the organ dysfunction scores correlated with ICU mortality. The maximum score of all models had better ability for predicting ICU mortality than the initial or delta score. The areas under the receiver operating characteristic curve (AUC) for maximum scores were 0.892 for the MODS, 0.907 for the SOFA, and 0.920 for the LOD. No statistical difference existed between all maximum scores and the Acute Physiology and Chronic Health Evaluation II (APACHE II) score. Serial assessment of organ dysfunction during the ICU stay correlates reliably with ICU mortality. The maximum score provides the best discrimination, comparable with the APACHE II score, in predicting ICU mortality.
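Discrimination statistics like the AUCs reported above can be computed directly from paired severity scores and outcomes. A minimal pure-Python sketch using the Mann-Whitney rank interpretation of the AUROC (illustrative only; the study used standard statistical software, and all names here are invented):

```python
def auroc(scores, outcomes):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen non-survivor (outcome 1) scores
    higher than a randomly chosen survivor (outcome 0); ties count half."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]  # died
    neg = [s for s, y in zip(scores, outcomes) if y == 0]  # survived
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One survivor outranks one non-survivor, so 3 of 4 pairs are ordered correctly.
print(auroc([0.1, 0.8, 0.3, 0.9], [0, 0, 1, 1]))  # 0.75
```

A score that ranks every death above every survivor yields an AUROC of 1.0; 0.5 corresponds to no discrimination.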
THE DATA REDUCTION PIPELINE FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT
International Nuclear Information System (INIS)
Nidever, David L.; Holtzman, Jon A.; Prieto, Carlos Allende; Mészáros, Szabolcs; Beland, Stephane; Bender, Chad; Desphande, Rohit; Bizyaev, Dmitry; Burton, Adam; García Pérez, Ana E.; Hearty, Fred R.; Majewski, Steven R.; Skrutskie, Michael F.; Sobeck, Jennifer S.; Wilson, John C.; Fleming, Scott W.; Muna, Demitri; Nguyen, Duy; Schiavon, Ricardo P.; Shetrone, Matthew
2015-01-01
The Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, explores the stellar populations of the Milky Way using the Sloan 2.5-m telescope linked to a high resolution (R ∼ 22,500), near-infrared (1.51–1.70 μm) spectrograph with 300 optical fibers. For over 150,000 predominantly red giant branch stars that APOGEE targeted across the Galactic bulge, disks and halo, the collected high signal-to-noise ratio (>100 per half-resolution element) spectra provide accurate (∼0.1 km s −1 ) RVs, stellar atmospheric parameters, and precise (≲0.1 dex) chemical abundances for about 15 chemical species. Here we describe the basic APOGEE data reduction software that reduces multiple 3D raw data cubes into calibrated, well-sampled, combined 1D spectra, as implemented for the SDSS-III/APOGEE data releases (DR10, DR11 and DR12). The processing of the near-IR spectral data of APOGEE presents some challenges for reduction, including automated sky subtraction and telluric correction over a 3°-diameter field and the combination of spectrally dithered spectra. We also discuss areas for future improvement.
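The combination of spectrally dithered spectra mentioned above can be pictured as interleaving two exposures of the same spectrum offset by half a pixel, doubling the effective sampling. A toy sketch (this is not the actual APOGEE pipeline algorithm, which also weights, flux-matches, and resamples the exposures; all names are invented):

```python
def combine_dither_pair(exp_a, exp_b):
    """Interleave two equal-length exposures taken with a half-pixel
    spectral dither, so the combined spectrum is sampled twice as finely."""
    assert len(exp_a) == len(exp_b)
    combined = []
    for a, b in zip(exp_a, exp_b):  # exp_b samples fall half a pixel redward
        combined.extend([a, b])
    return combined

print(combine_dither_pair([1.0, 3.0], [2.0, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```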
McElwain, Michael W.; Grady, Carol A.; Bally, John; Brinkmann, Jonathan V.; Bubeck, James; Gong, Qian; Hilton, George M.; Ketzeback, William F.; Lindler, Don; Llop Sayson, Jorge; Malatesta, Michael A.; Norton, Timothy; Rauscher, Bernard J.; Rothe, Johannes; Straka, Lorrie; Wilkins, Ashlee N.; Wisniewski, John P.; Woodgate, Bruce E.; York, Donald G.
2015-01-01
We present the current status and progress towards photon counting with the Goddard Integral Field Spectrograph (GIFS), a new instrument at the Apache Point Observatory's ARC 3.5m telescope. GIFS is a visible light imager and integral field spectrograph operating from 400-1000 nm over a 2.8' x 2.8' and 14' x 14' field of view, respectively. As an IFS, GIFS obtains over 1000 spectra simultaneously and its data reduction pipeline reconstructs them into an image cube that has 32 x 32 spatial elements and more than 200 spectral channels. The IFS mode can be applied to a wide variety of science programs including exoplanet transit spectroscopy, protostellar jets, the galactic interstellar medium probed by background quasars, Lyman-alpha emission line objects, and spectral imaging of galactic winds. An electron-multiplying CCD (EMCCD) detector enables photon counting in the high spectral resolution mode to be demonstrated at the ARC 3.5m in early 2015. The EMCCD work builds upon successful operational and characterization tests that have been conducted in the IFS laboratory at NASA Goddard. GIFS sets out to demonstrate an IFS photon-counting capability on-sky in preparation for future exoplanet direct imaging missions such as the AFTA-Coronagraph, Exo-C, and ATLAST mission concepts. This work is supported by the NASA APRA program under RTOP 10-APRA10-0103.
High performance Spark best practices for scaling and optimizing Apache Spark
Karau, Holden
2017-01-01
Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources. Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing. With this book, you’ll explore: How Spark SQL’s new interfaces improve performance over SQL’s RDD data structure The choice between data joins in Core Spark and Spark SQL Techniques for getting the most out of standard RDD transformations How to work around performance issues i...
Norman, Laura M.; Middleton, Barry R.; Wilson, Natalie R.
2018-01-01
Mapping of vegetation types is of great importance to the San Carlos Apache Tribe and their management of forestry and fire fuels. Various remote sensing techniques were applied to classify multitemporal Landsat 8 satellite data, vegetation index, and digital elevation model data. A multitiered unsupervised classification generated over 900 classes that were then recoded to one of 16 generalized vegetation/land cover classes using the Southwest Regional Gap Analysis Project (SWReGAP) map as a guide. A supervised classification was also run using field data collected in the SWReGAP project and our field campaign. Field data were gathered and accuracy assessments were generated to compare outputs. Our hypothesis was that the resulting map would update and potentially improve upon the vegetation/land cover class distributions of the older SWReGAP map over the 24,000 km2 study area. The estimated overall accuracies ranged between 43% and 75%, depending on which method and field dataset were used. The findings demonstrate the complexity of vegetation mapping, the importance of recent, high-quality field data, and the potential for misleading results when insufficient field data are collected.
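The recoding step described above, collapsing over 900 unsupervised clusters to 16 generalized classes with an existing map as a guide, can be automated by assigning each cluster the reference class it most often overlaps. A hedged sketch (the authors' actual recode was analyst-guided; this majority-vote rule is an illustrative stand-in):

```python
from collections import Counter

def build_recode_table(cluster_pixels, reference_pixels):
    """Map each unsupervised cluster ID to the reference-map class that
    covers the majority of its pixels (pixels given as parallel sequences)."""
    votes = {}
    for cl, ref in zip(cluster_pixels, reference_pixels):
        votes.setdefault(cl, Counter())[ref] += 1
    return {cl: counts.most_common(1)[0][0] for cl, counts in votes.items()}

# Cluster 0 mostly overlaps class 5; cluster 1 mostly overlaps class 7.
print(build_recode_table([0, 0, 1, 1, 1], [5, 5, 7, 7, 5]))  # {0: 5, 1: 7}
```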
Detection of attack-targeted scans from the Apache HTTP Server access logs
Directory of Open Access Journals (Sweden)
Merve Baş Seyyar
2018-01-01
Full Text Available A web application can be visited for different purposes. It is possible for a web site to be visited by a regular user as a normal (natural) visit, to be viewed by crawlers, bots, spiders, etc. for indexing purposes, or to be exploratory scanned by malicious users prior to an attack. An attack-targeted web scan can be viewed as a phase of a potential attack and can enable earlier attack detection compared to traditional detection methods. In this work, we propose a method to detect attack-oriented scans and to distinguish them from other types of visits. In this context, we use access log files of Apache (or IIS) web servers and try to determine attack situations through examination of the past data. In addition to web scan detection, we add a rule set to detect SQL injection and XSS attacks. Our approach has been applied on sample data sets and the results have been analyzed in terms of performance measures to compare our method with other commonly used detection techniques. Furthermore, various tests have been made on log samples from real systems. Lastly, several suggestions for further development are also discussed.
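A rule set of the kind described, flagging SQL injection and XSS payloads in access-log request strings, can be sketched with a few regular expressions. The patterns below are illustrative examples only, far from a complete or hardened rule set:

```python
import re

# Toy signatures; production rule sets contain many more patterns.
RULES = {
    "sqli": re.compile(r"(union\s+select|or\s+1=1|['\"]\s*--)", re.I),
    "xss":  re.compile(r"(<script|onerror\s*=|javascript:)", re.I),
}

def classify_request(log_line):
    """Return the names of attack rules matched by one access-log line."""
    return [name for name, pat in RULES.items() if pat.search(log_line)]

line = '10.0.0.1 - - [01/Jan/2018] "GET /item?id=1 UNION SELECT pass FROM users HTTP/1.1" 200 512'
print(classify_request(line))  # ['sqli']
```

A scan detector would aggregate such per-line matches per client IP over time, rather than judging single requests in isolation.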
Price-Jones, Natalie; Bovy, Jo
2018-03-01
Chemical tagging of stars based on their similar compositions can offer new insights about the star formation and dynamical history of the Milky Way. We investigate the feasibility of identifying groups of stars in chemical space by forgoing the use of model-derived abundances in favour of direct analysis of spectra. This facilitates the propagation of measurement uncertainties and does not presuppose knowledge of which elements are important for distinguishing stars in chemical space. We use ~16,000 red giant and red clump H-band spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and perform polynomial fits to remove trends not due to abundance-ratio variations. Using expectation maximized principal component analysis, we find principal components with high signal in the wavelength regions most important for distinguishing between stars. Different subsamples of red giant and red clump stars are all consistent with needing about 10 principal components to accurately model the spectra above the level of the measurement uncertainties. The dimensionality of stellar chemical space that can be investigated in the H band is therefore ≲10. For APOGEE observations with typical signal-to-noise ratios of 100, the number of chemical space cells within which stars cannot be distinguished is approximately 10^(10±2) × (5 ± 2)^(n−10), with n the number of principal components. This high dimensionality and the fine-grained sampling of chemical space are a promising first step towards chemical tagging based on spectra alone.
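Taking the central values of that scaling relation (and dropping the quoted uncertainties), the number of distinguishable chemical-space cells can be evaluated for a given number of principal components. A hedged reading of the abstract's fitted relation:

```python
def chemical_cells(n, base=10**10, growth=5):
    """Central-value estimate of distinguishable chemical-space cells:
    ~10^10 * 5^(n - 10) for n principal components; the quoted
    uncertainties (10^(±2) and ±2 on the growth factor) are ignored."""
    return base * growth ** (n - 10)

print(chemical_cells(10))  # 10000000000  (10^10 at the fiducial n = 10)
```

Each extra retained component multiplies the cell count by roughly 5, so even a couple of additional informative components sharply increase the resolving power of chemical tagging.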
FEASIBILITY STUDY FOR A PETROLEUM REFINERY FOR THE JICARILLA APACHE TRIBE
International Nuclear Information System (INIS)
Jones, John D.
2004-01-01
A feasibility study for a proposed petroleum refinery for the Jicarilla Apache Indian Reservation was performed. The available crude oil production was identified and characterized. There are 6,000 barrels per day of crude oil production available for processing in the proposed refinery. The proposed refinery will utilize a lower temperature, smaller crude fractionation unit. It will have a naphtha hydrodesulfurizer and reformer to produce high-octane gasoline. The surplus hydrogen from the reformer will be used in a specialized hydrocracker to convert the heavier crude oil fractions to ultra-low-sulfur gasoline and diesel fuel products. The proposed refinery will produce gasoline, jet fuel, diesel fuel, and a minimal amount of lube oil. The refinery will require about $86,700,000 to construct. It will have a net annual pre-tax profit of about $17,000,000. The estimated return on investment is 20%. The feasibility is positive, subject to confirmation of a long-term crude supply. The study also identified procedures for evaluating processing options as a means for American Indian Tribes and Native American Corporations to maximize the value of their crude oil production.
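The quoted figures are mutually consistent: a $17M annual pre-tax profit on an $86.7M construction cost gives roughly the stated 20% return. A quick arithmetic check (simple ROI and payback only; the study's own analysis presumably includes financing and tax detail not shown here):

```python
construction_cost = 86_700_000  # USD, from the study
annual_profit = 17_000_000      # USD pre-tax, from the study

roi = annual_profit / construction_cost            # ~0.196, i.e. ~20%
payback_years = construction_cost / annual_profit  # simple payback, ~5.1 yr

print(f"ROI ~ {roi:.1%}, simple payback ~ {payback_years:.1f} years")
```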
THE DATA REDUCTION PIPELINE FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT
Energy Technology Data Exchange (ETDEWEB)
Nidever, David L. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Holtzman, Jon A. [New Mexico State University, Las Cruces, NM 88003 (United States); Prieto, Carlos Allende; Mészáros, Szabolcs [Instituto de Astrofísica de Canarias, Via Láctea s/n, E-38205 La Laguna, Tenerife (Spain); Beland, Stephane [Laboratory for Atmospheric and Space Sciences, University of Colorado at Boulder, Boulder, CO (United States); Bender, Chad; Desphande, Rohit [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Bizyaev, Dmitry [Apache Point Observatory and New Mexico State University, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Burton, Adam; García Pérez, Ana E.; Hearty, Fred R.; Majewski, Steven R.; Skrutskie, Michael F.; Sobeck, Jennifer S.; Wilson, John C. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Fleming, Scott W. [Computer Sciences Corporation, 3700 San Martin Dr, Baltimore, MD 21218 (United States); Muna, Demitri [Department of Astronomy and the Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Nguyen, Duy [Department of Astronomy and Astrophysics, University of Toronto, Toronto, Ontario, M5S 3H4 (Canada); Schiavon, Ricardo P. [Gemini Observatory, 670 N. A’Ohoku Place, Hilo, HI 96720 (United States); Shetrone, Matthew, E-mail: dnidever@umich.edu [University of Texas at Austin, McDonald Observatory, Fort Davis, TX 79734 (United States)
2015-12-15
The Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, explores the stellar populations of the Milky Way using the Sloan 2.5-m telescope linked to a high resolution (R ∼ 22,500), near-infrared (1.51–1.70 μm) spectrograph with 300 optical fibers. For over 150,000 predominantly red giant branch stars that APOGEE targeted across the Galactic bulge, disks and halo, the collected high signal-to-noise ratio (>100 per half-resolution element) spectra provide accurate (∼0.1 km s{sup −1}) RVs, stellar atmospheric parameters, and precise (≲0.1 dex) chemical abundances for about 15 chemical species. Here we describe the basic APOGEE data reduction software that reduces multiple 3D raw data cubes into calibrated, well-sampled, combined 1D spectra, as implemented for the SDSS-III/APOGEE data releases (DR10, DR11 and DR12). The processing of the near-IR spectral data of APOGEE presents some challenges for reduction, including automated sky subtraction and telluric correction over a 3°-diameter field and the combination of spectrally dithered spectra. We also discuss areas for future improvement.
Ellingson, A.R.; Andersen, D.C.
2002-01-01
1. The hypothesis that the habitat-scale spatial distribution of the Apache cicada Diceroprocta apache Davis is unaffected by the presence of the invasive exotic saltcedar Tamarix ramosissima was tested using data from 205 1-m2 quadrats placed within the flood-plain of the Bill Williams River, Arizona, U.S.A. Spatial dependencies within and between cicada density and habitat variables were estimated using Moran's I and its bivariate analogue to discern patterns and associations at spatial scales from 1 to 30 m.
2. Apache cicadas were spatially aggregated in high-density clusters averaging 3 m in diameter. A positive association between cicada density, estimated by exuvial density, and the per cent canopy cover of a native tree, Goodding's willow Salix gooddingii, was detected in a non-spatial correlation analysis. No non-spatial association between cicada density and saltcedar canopy cover was detected.
3. Tests for spatial cross-correlation using the bivariate IYZ indicated the presence of a broad-scale negative association between cicada density and saltcedar canopy cover. This result suggests that large continuous stands of saltcedar are associated with reduced cicada density. In contrast, positive associations detected at spatial scales larger than individual quadrats suggested a spill-over of high cicada density from areas featuring Goodding's willow canopy into surrounding saltcedar monoculture.
4. Taken together and considered in light of the Apache cicada's polyphagous habits, the observed spatial patterns suggest that broad-scale factors such as canopy heterogeneity affect cicada habitat use more than host plant selection. This has implications for management of lower Colorado River riparian woodlands to promote cicada presence and density through maintenance or creation of stands of native trees as well as manipulation of the characteristically dense and homogeneous saltcedar canopies.
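Moran's I, the spatial autocorrelation statistic used in the study, can be computed from a value vector and a binary neighbour-weight matrix. A small self-contained sketch (the bivariate IYZ used in the paper is the analogous cross-variable statistic, not shown here):

```python
def morans_i(values, weights):
    """Moran's I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where m is the mean of x and W is the sum of all weights w_ij."""
    n = len(values)
    m = sum(values) / n
    d = [x - m for x in values]
    w_total = sum(sum(row) for row in weights)
    cross = sum(weights[i][j] * d[i] * d[j] for i in range(n) for j in range(n))
    return (n / w_total) * cross / sum(di * di for di in d)

# Six quadrats in a row with rook adjacency; clustered values give positive I.
W = [[1 if abs(i - j) == 1 else 0 for j in range(6)] for i in range(6)]
print(morans_i([1, 1, 1, 5, 5, 5], W))  # 0.6
```

Positive I indicates clustering of similar values (as found for cicada exuviae), values near zero indicate spatial randomness, and negative I indicates alternation between neighbours.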
Grigel, Rudolf
2015-01-01
The main objective of this thesis was to enhance the organization and maintenance of big data with Apache Solr in IBM WebSphere Commerce deployments. This objective can be split into several subtasks: reorganization of data, fast and optimised exporting and importing, efficient update and cleanup operations. E-Commerce is a fast growing and frequently changing environment. There is a constant flow of data that is rapidly growing larger and larger every day which is becoming an ...
International Nuclear Information System (INIS)
Chenoweth, W.L.
1980-03-01
This report is a brief review of the uranium and/or vanadium mining in the eastern Carrizo Mountains, San Juan County, New Mexico and Apache County, Arizona. It was prepared at the request of the Navajo Tribe, the New Mexico Energy and Minerals Department, and the Arizona Bureau of Geology and Mineral Technology. This report deals only with historical production data. The locations of the mines and the production are presented in figures and tables
Zhou, Lianjie; Chen, Nengcheng; Chen, Zeqiang
2017-04-10
The efficient data access of streaming vehicle data is the foundation of analyzing, using, and mining vehicle data in smart cities, which is an approach to understanding traffic environments. However, the number of vehicles in urban cities has grown rapidly, reaching hundreds of thousands. Accessing the mass streaming data of vehicles is hard and takes a long time due to limited computation capability and backward modes. We propose an efficient streaming spatio-temporal data access scheme based on Apache Storm (ESDAS) to achieve real-time streaming data access and data cleaning. As a popular streaming data processing tool, Apache Storm can be applied to streaming mass data access and real-time data cleaning. By designing the spout/bolt workflow of the topology in ESDAS and by developing the speeding bolt and other bolts, Apache Storm can achieve this aim. In our experiments, Taiyuan BeiDou bus location data is selected as the mass spatio-temporal data source. The data access results with different bolts are shown in map form, and the filtered buses' aggregation forms differ. In terms of performance evaluation, the consumption time in ESDAS for ten thousand records per second with the speeding bolt is approximately 300 milliseconds, while that for MongoDB is approximately 1300 milliseconds. The efficiency of ESDAS is thus approximately three times higher than that of MongoDB.
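The spout/bolt topology described above can be mimicked in plain Python to show the role of a "speeding bolt" as a per-tuple streaming filter. This sketch does not use the actual Apache Storm API; the field names and the 60 km/h threshold are invented for illustration:

```python
def bus_spout(records):
    """Stand-in for a Storm spout: emits raw bus-location tuples."""
    yield from records

def speeding_bolt(stream, limit_kmh=60):
    """Stand-in for a Storm bolt: passes through only records above the
    speed limit, the kind of per-tuple filtering ESDAS chains in its topology."""
    for rec in stream:
        if rec["speed_kmh"] > limit_kmh:
            yield rec

records = [
    {"bus": "A1", "speed_kmh": 45},
    {"bus": "B2", "speed_kmh": 72},
    {"bus": "C3", "speed_kmh": 63},
]
flagged = list(speeding_bolt(bus_spout(records)))
print([r["bus"] for r in flagged])  # ['B2', 'C3']
```

In a real Storm topology each bolt runs in parallel across worker tasks, which is what lets ESDAS keep the per-batch latency low at high record rates.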
Directory of Open Access Journals (Sweden)
Lianjie Zhou
2017-04-01
Full Text Available The efficient data access of streaming vehicle data is the foundation of analyzing, using, and mining vehicle data in smart cities, which is an approach to understanding traffic environments. However, the number of vehicles in urban cities has grown rapidly, reaching hundreds of thousands. Accessing the mass streaming data of vehicles is hard and takes a long time due to limited computation capability and backward modes. We propose an efficient streaming spatio-temporal data access scheme based on Apache Storm (ESDAS) to achieve real-time streaming data access and data cleaning. As a popular streaming data processing tool, Apache Storm can be applied to streaming mass data access and real-time data cleaning. By designing the spout/bolt workflow of the topology in ESDAS and by developing the speeding bolt and other bolts, Apache Storm can achieve this aim. In our experiments, Taiyuan BeiDou bus location data is selected as the mass spatio-temporal data source. The data access results with different bolts are shown in map form, and the filtered buses' aggregation forms differ. In terms of performance evaluation, the consumption time in ESDAS for ten thousand records per second with the speeding bolt is approximately 300 milliseconds, while that for MongoDB is approximately 1300 milliseconds. The efficiency of ESDAS is thus approximately three times higher than that of MongoDB.
Billingsley, Patrick
2012-01-01
Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this
International Nuclear Information System (INIS)
Bitsakis, E.I.; Nicolaides, C.A.
1989-01-01
The concept of probability is now, and always has been, central to the debate on the interpretation of quantum mechanics. Furthermore, probability permeates all of science, as well as our everyday life. The papers included in this volume, written by leading proponents of the ideas expressed, embrace a broad spectrum of thought and results: mathematical, physical, epistemological, and experimental, both specific and general. The contributions are arranged in parts under the following headings: Following Schroedinger's thoughts; Probability and quantum mechanics; Aspects of the arguments on nonlocality; Bell's theorem and EPR correlations; Real or Gedanken experiments and their interpretation; Questions about irreversibility and stochasticity; and Epistemology, interpretation and culture. (author). refs.; figs.; tabs
Shorack, Galen R
2017-01-01
This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...
Concepts of probability theory
Pfeiffer, Paul E
1979-01-01
Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or mathematics. Includes problems with answers and six appendixes. 1965 edition.
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are published especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure theoretical considerations, contributions to theoretical statistics an...
Probability and Statistical Inference
Prosper, Harrison B.
2006-01-01
These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.
Hartmann, Stephan
2011-01-01
Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...
Grimmett, Geoffrey
2014-01-01
Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...
Hemmo, Meir
2012-01-01
What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.
Probability in quantum mechanics
Directory of Open Access Journals (Sweden)
J. G. Gilson
1982-01-01
Full Text Available By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.
Quantum computing and probability.
Ferry, David K
2009-11-25
Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.
Quantum computing and probability
International Nuclear Information System (INIS)
Ferry, David K
2009-01-01
Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction. (viewpoint)
Mescalero Apache Tribe Monitored Retrievable Storage (MRS). Phase 1 feasibility study report
Energy Technology Data Exchange (ETDEWEB)
Peso, F.
1992-03-13
The Nuclear Waste Policy Act of 1982, as amended, authorizes the siting, construction and operation of a Monitored Retrievable Storage (MRS) facility. The MRS is intended to be used for the temporary storage of spent nuclear fuel from the nation's nuclear power plants beginning as early as 1998. Pursuant to the Nuclear Waste Policy Act, the Office of the Nuclear Waste Negotiator was created. On October 7, 1991, the Nuclear Waste Negotiator invited the governors of states and the Presidents of Indian tribes to apply for government grants in order to conduct a study to assess under what conditions, if any, they might consider hosting an MRS facility. Pursuant to this invitation, on October 11, 1991 the Mescalero Apache Indian Tribe of Mescalero, NM applied for a grant to conduct a phased, preliminary study of the safety, technical, political, environmental, social and economic feasibility of hosting an MRS. The preliminary study included: (1) an investigative education process to facilitate the Tribe's comprehensive understanding of the safety, environmental, technical, social, political, and economic aspects of hosting an MRS; and (2) the development of an extensive program that is enabling the Tribe, in collaboration with the Negotiator, to reach an informed and carefully researched decision regarding the conditions, if any, under which further pursuit of the MRS would be considered. The Phase 1 grant application enabled the Tribe to begin the initial activities necessary to determine whether further consideration is warranted for hosting the MRS facility. The Tribe intends to pursue continued study of the MRS in order to meet the following objectives: (1) continuing the education process towards a comprehensive understanding of the safety, environmental, technical, social and economic aspects of the MRS; (2) conducting an effective public participation and information program; (3) participating in MRS meetings.
Field studies at the Apache Leap Research Site in support of alternative conceptual models
Energy Technology Data Exchange (ETDEWEB)
Woodhouse, E.G.; Davidson, G.R.; Theis, C. [eds.] [and others]
1997-08-01
This is a final technical report for a project of the U.S. Nuclear Regulatory Commission (sponsored contract NRC-04-090-51) with the University of Arizona. The contract was an optional extension that was initiated on July 21, 1994 and that expired on May 31, 1995. The project manager was Thomas J. Nicholson, Office of Nuclear Regulatory Research. The objectives of this contract were to examine hypotheses and conceptual models concerning unsaturated flow and transport through fractured rock, and to design and execute confirmatory field and laboratory experiments to test these hypotheses and conceptual models at the Apache Leap Research Site (ALRS) near Superior, Arizona. The results discussed here are products of specific tasks that address a broad spectrum of issues related to flow and transport through fractures. Each chapter in this final report summarizes research related to a specific set of objectives and can be read and interpreted as a separate entity. The tasks include detection and characterization of historical rapid fluid flow through fractured rock and its relationship to perched water systems using the environmental isotopic tracers ³H and ¹⁴C, fluid- and rock-derived ²³⁴U/²³⁸U measurements, and geophysical data. The water balance in a small watershed at the ALRS demonstrates the methods of accounting for evapotranspiration (ET) and estimating the quantity of water available for infiltration through fracture networks. Grain density measurements were made for core-sized samples using a newly designed gas pycnometer. The distribution and magnitude of air permeability measurements have been measured in a three-dimensional setting; the subsequent geostatistical analysis is presented. Electronic versions of the data presented here are available from the authors; more detailed discussions and analyses are available in technical publications referenced herein, or soon to appear in the professional literature.
The perception of probability.
Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E
2014-01-01
We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
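The abstract does not reproduce the authors' change-point model; as a minimal illustration of the underlying estimation task (tracking a hidden Bernoulli parameter outcome by outcome), a conjugate Beta-Bernoulli posterior-mean update can be sketched as follows. The prior counts and the outcome sequence are invented example values.

```python
# Illustrative only: sequential estimation of a hidden Bernoulli
# parameter with a conjugate Beta prior. This is NOT the authors'
# change-point model; it only shows the trial-by-trial estimation
# problem their subjects face.

def beta_bernoulli_estimates(outcomes, a=1.0, b=1.0):
    """Return the posterior mean P(success) after each outcome,
    starting from a Beta(a, b) prior."""
    estimates = []
    for x in outcomes:
        a += x          # a success increments the alpha pseudo-count
        b += 1 - x      # a failure increments the beta pseudo-count
        estimates.append(a / (a + b))
    return estimates

print(beta_bernoulli_estimates([1, 1, 0, 1]))
```

Note that this estimator updates after every outcome, which is exactly the behavior the paper reports subjects do *not* exhibit; their model instead re-encodes the sequence in terms of change points and steps between estimates.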
Irreversibility and conditional probability
International Nuclear Information System (INIS)
Stuart, C.I.J.M.
1989-01-01
The mathematical entropy, unlike physical entropy, is simply a measure of uniformity for probability distributions in general. So understood, conditional entropies have the same logical structure as conditional probabilities. If, as is sometimes supposed, conditional probabilities are time-reversible, then so are conditional entropies and, paradoxically, both then share this symmetry with physical equations of motion. The paradox is, of course, that probabilities yield a direction to time both in statistical mechanics and quantum mechanics, while the equations of motion do not. The supposed time-reversibility of both conditionals seems also to involve a form of retrocausality that is related to, but possibly not the same as, that described by Costa de Beauregard. The retrocausality is paradoxically at odds with the generally presumed irreversibility of the quantum mechanical measurement process. Further paradox emerges if the supposed time-reversibility of the conditionals is linked with the idea that the thermodynamic entropy is the same thing as 'missing information', since this confounds the thermodynamic and mathematical entropies. However, it is shown that irreversibility is a formal consequence of conditional entropies and, hence, of conditional probabilities also. 8 refs. (Author)
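The structural parallel the abstract appeals to can be written out explicitly (standard textbook definitions, not quoted from the paper): the conditional entropy is built from the conditional probabilities in exactly the way the joint entropy is built from the joint distribution.

```latex
H(X \mid Y) \;=\; -\sum_{x,y} p(x,y)\,\log p(x \mid y)
\;=\; H(X,Y) - H(Y),
\qquad\text{mirroring}\qquad
p(x \mid y) \;=\; \frac{p(x,y)}{p(y)}.
```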
Isaac, Richard
1995-01-01
The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science, which uses probability and its offshoots, like statistics and the theory of random processes, to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...
Experimental Probability in Elementary School
Andrew, Lane
2009-01-01
Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
Improving Ranking Using Quantum Probability
Melucci, Massimo
2011-01-01
The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields a higher probability of detection than ranking by classical probability, provided a given probability of ...
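The two ranking-quality measures named in this abstract are standard and easy to compute for any cutoff of a ranked list. The sketch below uses made-up document labels; it illustrates the measures themselves, not the quantum ranking method.

```python
# Probability of detection (recall/power) and probability of false
# alarm (fallout/size) for the top-k items of a ranked list.
# The ranking and relevance judgments are invented example data.

def detection_and_false_alarm(ranking, relevant, k):
    """Return (recall, fallout) for the first k ranked items."""
    retrieved = set(ranking[:k])
    tp = len(retrieved & relevant)            # relevant items retrieved
    fp = len(retrieved - relevant)            # irrelevant items retrieved
    recall = tp / len(relevant)
    fallout = fp / (len(ranking) - len(relevant))
    return recall, fallout

ranking = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d1", "d3", "d5"}
print(detection_and_false_alarm(ranking, relevant, k=2))
```

A ranking method is better in the abstract's sense if, at a fixed fallout, it achieves higher recall.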
Choice probability generating functions
DEFF Research Database (Denmark)
Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel
2010-01-01
This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models.
Probability and stochastic modeling
Rotar, Vladimir I
2012-01-01
Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation; Generating Functions, Branching Processes, Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...
Collision Probability Analysis
DEFF Research Database (Denmark)
Hansen, Peter Friis; Pedersen, Preben Terndrup
1998-01-01
It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out, etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds ... probability, i.e. a study of the navigator's role in resolving critical situations; a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of the energy released for crushing of structures ...
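The two-step structure described here (geometric collision candidates in a first step, a causation factor in a second) reduces, at its simplest, to a product. The sketch below is only an illustration of that structure; both numbers are invented, not taken from the report.

```python
# Sketch of the two-step collision model structure: the expected
# number of geometric collision candidates is multiplied by a
# causation factor, i.e. the probability that the navigators fail
# to resolve a critical situation. All numbers are invented.

def collision_probability(n_geometric, causation_factor):
    """Expected number of collisions = candidates x causation factor."""
    return n_geometric * causation_factor

# e.g. 40 critical encounters per year with a causation factor of 2e-4:
print(collision_probability(40, 2e-4))
```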
Estimating Subjective Probabilities
DEFF Research Database (Denmark)
Andersen, Steffen; Fountain, John; Harrison, Glenn W.
2014-01-01
... either construct elicitation mechanisms that control for risk aversion, or construct elicitation mechanisms which undertake 'calibrating adjustments' to elicited reports. We illustrate how the joint estimation of risk attitudes and subjective probabilities can provide the calibration adjustments that theory calls for. We illustrate this approach using data from a controlled experiment with real monetary consequences to the subjects. This allows the observer to make inferences about the latent subjective probability, under virtually any well-specified model of choice under subjective risk, while still ...
Introduction to imprecise probabilities
Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M
2014-01-01
In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin
Classic Problems of Probability
Gorroochurn, Prakash
2012-01-01
"A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin
Counterexamples in probability
Stoyanov, Jordan M
2013-01-01
While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.
Plotnitsky, Arkady
2010-01-01
Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrödinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general
Transition probabilities for atoms
International Nuclear Information System (INIS)
Kim, Y.K.
1980-01-01
Current status of advanced theoretical methods for transition probabilities for atoms and ions is discussed. An experiment on the f values of the resonance transitions of the Kr and Xe isoelectronic sequences is suggested as a test for the theoretical methods
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
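The maximum-entropy scoring machinery itself is not reproduced in the abstract; the basic building block it scores against, however, is standard: if the samples truly come from CDF F, then the values F(x_i) should behave like uniform order statistics, which is what "scaled quantile residuals" diagnose. A minimal sketch of that check, with synthetic data:

```python
# Illustrative only: not the paper's estimator. If X ~ F, the sorted
# values u_(i) = F(x_(i)) should be close to the expected uniform
# order-statistic means i/(n+1); large residuals flag a bad CDF.

import random

def uniformity_residuals(samples, cdf):
    """Residuals between sorted cdf(x_i) and i/(n+1)."""
    n = len(samples)
    u = sorted(cdf(x) for x in samples)
    return [u[i] - (i + 1) / (n + 1) for i in range(n)]

random.seed(0)
data = [random.random() for _ in range(1000)]
# The identity is the true CDF of U(0,1), so residuals stay small:
res = uniformity_residuals(data, cdf=lambda x: x)
print(max(abs(r) for r in res))
```

A trial CDF far from the truth produces systematically large residuals, which is the signal an iterative improvement scheme like the one described can exploit.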
D-score: a search engine independent MD-score.
Vaudel, Marc; Breiter, Daniela; Beck, Florian; Rahnenführer, Jörg; Martens, Lennart; Zahedi, René P
2013-03-01
While peptides carrying PTMs are routinely identified in gel-free MS, the localization of the PTMs onto the peptide sequences remains challenging. Search engine scores of secondary peptide matches have been used in different approaches in order to infer the quality of site inference, by penalizing the localization whenever the search engine similarly scored two candidate peptides with different site assignments. In the present work, we show how the estimation of posterior error probabilities for peptide candidates allows the estimation of a PTM score called the D-score, for multiple search engine studies. We demonstrate the applicability of this score to three popular search engines: Mascot, OMSSA, and X!Tandem, and evaluate its performance using an already published high resolution data set of synthetic phosphopeptides. For those peptides with phosphorylation site inference uncertainty, the number of spectrum matches with correctly localized phosphorylation increased by up to 25.7% when compared to using Mascot alone, although the actual increase depended on the fragmentation method used. Since this method relies only on search engine scores, it can be readily applied to the scoring of the localization of virtually any modification at no additional experimental or in silico cost. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
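The exact D-score formula is not given in the abstract; it is described as being derived from posterior error probabilities (PEPs) of candidate peptides that differ only in their site assignment. A natural score in this family, shown purely as a hedged sketch with invented PEP values, is the confidence gap between the best and second-best candidate:

```python
# Hedged sketch, NOT the published D-score formula: localization
# confidence as the gap between the best and second-best candidate
# site assignment, each summarized by a posterior error probability
# (PEP). All PEP values below are invented.

def site_confidence_gap(peps):
    """peps: PEPs of the candidate site assignments for one spectrum.
    Returns the confidence difference between the two best candidates."""
    confidences = sorted((1.0 - p for p in peps), reverse=True)
    return confidences[0] - confidences[1]

# Two candidate phosphosite localizations for one spectrum:
print(site_confidence_gap([0.01, 0.40]))
```

When the two candidates score similarly the gap shrinks toward zero, penalizing the localization, which matches the behavior the abstract describes.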
Negative probability in the framework of combined probability
Burgin, Mark
2013-01-01
Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...
Espol; Delgado Quishpe, Byron Alberto
2017-01-01
Hardening of the web server: the directives in the configuration files of the Apache service and of PHP are reviewed, and a web application firewall called mod_security is installed and configured, which makes it possible to mitigate attacks against the web server, based on an analysis of the vulnerabilities found on the server. Guayaquil. Magíster en Seguridad Informática Aplicada
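A minimal sketch of the kind of Apache hardening directives such a thesis typically reviews (illustrative only; the thesis's actual directive set is not given in the abstract, and values must match the installed Apache and mod_security versions):

```apache
# Hide server version details in response headers and error pages
ServerTokens Prod
ServerSignature Off

# Disable TRACE to limit cross-site tracing attacks
TraceEnable Off

# Enable the mod_security web application firewall if loaded
<IfModule security2_module>
    SecRuleEngine On
</IfModule>
```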
Extension of the lod score: the mod score.
Clerget-Darpoux, F
2001-01-01
In 1955 Morton proposed the lod score method both for testing linkage between loci and for estimating the recombination fraction between them. If a disease is controlled by a gene at one of these loci, the lod score computation requires the prior specification of an underlying model that assigns the probabilities of genotypes from the observed phenotypes. To address the case of linkage studies for diseases with unknown mode of inheritance, we suggested (Clerget-Darpoux et al., 1986) extending the lod score function to a so-called mod score function. In this function, the variables are both the recombination fraction and the disease model parameters. Maximizing the mod score function over all these parameters amounts to maximizing the probability of the marker data conditional on the disease status. In the absence of linkage, the mod score conforms to a chi-square distribution, with extra degrees of freedom in comparison to the lod score function (MacLean et al., 1993). The mod score is asymptotically maximum for the true disease model (Clerget-Darpoux and Bonaïti-Pellié, 1992; Hodge and Elston, 1994). Consequently, the power to detect linkage through the mod score will be highest when the space of models where the maximization is performed includes the true model. On the other hand, one must avoid overparametrization of the model space. For example, when the approach is applied to affected sibpairs, only two constrained disease model parameters should be used (Knapp et al., 1994) for the mod score maximization. It is also important to emphasize the existence of a strong correlation between the disease gene location and the disease model. Consequently, there is poor resolution of the location of the susceptibility locus when the disease model at this locus is unknown. Of course, this is true regardless of the statistics used. The mod score may also be applied in a candidate gene strategy to model the potential effect of this gene in the disease. Since, however, it
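For reference, the two score functions discussed above have the standard textbook forms below (notation mine, not quoted from the paper): θ is the recombination fraction and φ the disease-model parameters; the lod score fixes φ in advance, while the mod score maximizes over it.

```latex
\mathrm{lod}(\theta) \;=\; \log_{10}\frac{L(\theta)}{L(\theta = 1/2)},
\qquad
\mathrm{mod} \;=\; \max_{\theta,\,\varphi}\;
\log_{10}\frac{L(\theta,\varphi)}{L(\theta = 1/2,\,\varphi)}.
```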
Contributions to quantum probability
International Nuclear Information System (INIS)
Fritz, Tobias
2010-01-01
Chapter 1: On the existence of quantum representations for two dichotomic measurements. Under which conditions do outcome probabilities of measurements possess a quantum-mechanical model? This kind of problem is solved here for the case of two dichotomic von Neumann measurements which can be applied repeatedly to a quantum system with trivial dynamics. The solution uses methods from the theory of operator algebras and the theory of moment problems. The ensuing conditions reveal surprisingly simple relations between certain quantum-mechanical probabilities. It is also shown that none of these relations holds in general probabilistic models. This result might facilitate further experimental discrimination between quantum mechanics and other general probabilistic theories. Chapter 2: Possibilistic Physics. I try to outline a framework for fundamental physics where the concept of probability gets replaced by the concept of possibility. Whereas a probabilistic theory assigns a state-dependent probability value to each outcome of each measurement, a possibilistic theory merely assigns one of the state-dependent labels "possible to occur" or "impossible to occur" to each outcome of each measurement. It is argued that Spekkens' combinatorial toy theory of quantum mechanics is inconsistent in a probabilistic framework, but can be regarded as possibilistic. Then, I introduce the concept of possibilistic local hidden variable models and derive a class of possibilistic Bell inequalities which are violated by the possibilistic Popescu-Rohrlich boxes. The chapter ends with a philosophical discussion of possibilistic vs. probabilistic theories. It can be argued that, due to better falsifiability properties, a possibilistic theory has higher predictive power than a probabilistic one. Chapter 3: The quantum region for von Neumann measurements with postselection. It is determined under which conditions a probability distribution on a finite set can occur as the outcome
von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo
2014-06-01
Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.
Waste Package Misload Probability
International Nuclear Information System (INIS)
Knudsen, J.K.
2001-01-01
The objective of this calculation is to calculate the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in each event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a
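The frequency estimate described reduces to a simple rate: categorized event counts divided by the number of handling opportunities. The sketch below illustrates only that arithmetic; the counts are placeholders, not the report's actual data.

```python
# Minimal sketch of a frequency-based probability-of-occurrence
# estimate from categorized event counts. The counts below are
# invented placeholders, not data from the Framatome ANP report.

def event_probability(n_events, n_opportunities):
    """Point estimate: events per fuel-assembly movement."""
    return n_events / n_opportunities

# e.g. 5 recorded misload events over 1,000,000 FA movements:
print(event_probability(5, 1_000_000))  # 5e-06
```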
Probability theory and applications
Hsu, Elton P
1999-01-01
This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.
Paradoxes in probability theory
Eckhardt, William
2013-01-01
Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory. Some remain the focus of controversy; others have allegedly been solved, but the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies. Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.
Measurement uncertainty and probability
Willink, Robin
2013-01-01
A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.
Model uncertainty and probability
International Nuclear Information System (INIS)
Parry, G.W.
1994-01-01
This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses represent different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example.
Retrocausality and conditional probability
International Nuclear Information System (INIS)
Stuart, C.I.J.M.
1989-01-01
Costa de Beauregard has proposed that physical causality be identified with conditional probability. The proposal is shown to be vulnerable on two accounts. The first, though mathematically trivial, seems to be decisive so far as the current formulation of the proposal is concerned. The second lies in a physical inconsistency which seems to have its source in a Copenhagen-like disavowal of realism in quantum mechanics. 6 refs. (Author)
Whittle, Peter
1992-01-01
This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...
The probability and severity of decompression sickness
Hada, Ethan A.; Vann, Richard D.; Denoble, Petar J.
2017-01-01
Decompression sickness (DCS), which is caused by inert gas bubbles in tissues, is an injury of concern for scuba divers, compressed air workers, astronauts, and aviators. Case reports for 3322 air and N2-O2 dives, resulting in 190 DCS events, were retrospectively analyzed and the outcomes were scored as (1) serious neurological, (2) cardiopulmonary, (3) mild neurological, (4) pain, (5) lymphatic or skin, and (6) constitutional or nonspecific manifestations. Following standard U.S. Navy medical definitions, the data were grouped into mild (Type I, manifestations 4–6) and serious (Type II, manifestations 1–3). Additionally, we considered an alternative grouping of mild (Type A, manifestations 3–6) and serious (Type B, manifestations 1 and 2). The current U.S. Navy guidance allows for a 2% probability of mild DCS and a 0.1% probability of serious DCS. We developed a hierarchical trinomial (3-state) probabilistic DCS model that simultaneously predicts the probability of mild and serious DCS given a dive exposure. Both the Type I/II and Type A/B discriminations of mild and serious DCS resulted in a highly significant (p probability of 'mild' DCS resulted in a longer allowable bottom time for the same 2% limit. However, for the 0.1% serious DCS limit, we found a vastly decreased allowable bottom dive time for all dive depths. If the Type A/B scoring was assigned to outcome severity, the no decompression limits (NDL) for air dives were still controlled by the acceptable serious DCS risk limit rather than the acceptable mild DCS risk limit. However, in this case, longer NDL limits were allowed than with the Type I/II scoring. The trinomial model mild and serious probabilities agree reasonably well with the current air NDL only with the Type A/B scoring and when 0.2% risk of serious DCS is allowed. PMID:28296928
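A trinomial model of this kind can be sketched as a multinomial-logistic toy. The exposure covariate x (a scaled dive-exposure measure) and all coefficients below are invented for illustration; this is not the fitted Navy model:

```python
import math

def trinomial_probs(x, b_mild=(-4.0, 0.8), b_serious=(-7.0, 1.0)):
    """Probabilities of {no DCS, mild DCS, serious DCS} for exposure x."""
    # Multinomial logit with 'no DCS' as the reference outcome.
    z = [0.0, b_mild[0] + b_mild[1] * x, b_serious[0] + b_serious[1] * x]
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]  # sums to 1 by construction

p_none, p_mild, p_serious = trinomial_probs(2.0)
assert abs(p_none + p_mild + p_serious - 1.0) < 1e-12
# Risk of the serious outcome grows with exposure:
assert trinomial_probs(4.0)[2] > p_serious
```

The two risk limits (2% mild, 0.1% serious) would then act as constraints on the second and third probabilities when solving for an allowable bottom time.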
Directory of Open Access Journals (Sweden)
Zhihong Feng
2017-01-01
We aimed to investigate the efficacy of four severity-of-disease scoring systems in predicting the 28-day survival rate among patients with acute exacerbation of chronic obstructive pulmonary disease (AECOPD) requiring emergency care. Clinical data of patients with AECOPD who required emergency care were recorded over 2 years. APACHE II, SAPS II, SOFA, and MEDS scores were calculated from severity-of-disease indicators recorded at admission and compared between patients who died within 28 days of admission (death group; 46 patients) and those who did not (survival group; 336 patients). Compared to the survival group, the death group had a significantly higher GCS score, frequency of comorbidities including hypertension and heart failure, and age (P<0.05 for all). With all four systems, scores of age, gender, renal inadequacy, hypertension, coronary heart disease, heart failure, arrhythmia, anemia, fracture leading to bedridden status, tumor, and the GCS were significantly higher in the death group than the survival group. The prediction efficacy of the APACHE II and SAPS II scores was 88.4%. The survival rates did not differ significantly between APACHE II and SAPS II (P=1.519). Our results may guide triage for early identification of critically ill patients with AECOPD in the emergency department.
Probability mapping of contaminants
Energy Technology Data Exchange (ETDEWEB)
Rautman, C.A.; Kaplan, P.G. [Sandia National Labs., Albuquerque, NM (United States); McGraw, M.A. [Univ. of California, Berkeley, CA (United States); Istok, J.D. [Oregon State Univ., Corvallis, OR (United States); Sigda, J.M. [New Mexico Inst. of Mining and Technology, Socorro, NM (United States)
1994-04-01
Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds).
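The post-processing step described above reduces to a per-parcel frequency count over equally likely realizations. A minimal sketch with synthetic stand-ins (the lognormal field, grid size, and threshold below are assumed for illustration, not Fernald data):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 equally likely simulated contamination maps on a 10 x 10 grid of
# remediation-unit-sized parcels (stand-ins for geostatistical realizations).
n_sims = 200
simulations = rng.lognormal(mean=3.0, sigma=1.0, size=(n_sims, 10, 10))

# Probability map: for each parcel, the fraction of realizations that
# exceed a hypothetical clean-up threshold.
threshold = 50.0
p_exceed = (simulations > threshold).mean(axis=0)

assert p_exceed.shape == (10, 10)
assert 0.0 <= p_exceed.min() <= p_exceed.max() <= 1.0
```

With real geostatistical simulations conditioned on the measured samples, the same one-liner yields the exceedance-probability map directly.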
Probability mapping of contaminants
International Nuclear Information System (INIS)
Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.
1994-01-01
Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds)
Probability of causation approach
International Nuclear Information System (INIS)
Jose, D.E.
1988-01-01
Probability of causation (PC) is sometimes viewed as a great improvement by those persons who are not happy with the present rulings of courts in radiation cases. The author does not share that hope and expects that PC will not play a significant role in these issues for at least the next decade. If it is ever adopted in a legislative compensation scheme, it will be used in a way that is unlikely to please most scientists. Consequently, PC is a false hope for radiation scientists, and its best contribution may well lie in some of the spin-off effects, such as an influence on medical practice
Generalized Probability Functions
Directory of Open Access Journals (Sweden)
Alexandre Souto Martinez
2009-01-01
From the integration of nonsymmetrical hyperboles, a one-parameter generalization of the logarithmic function is obtained. Inverting this function, one obtains the generalized exponential function. Motivated by mathematical curiosity, we show that these generalized functions are suitable to generalize some probability density functions (pdfs). A very reliable rank distribution can be conveniently described by the generalized exponential function. Finally, we turn our attention to the generalization of one- and two-tail stretched exponential functions. We obtain, as particular cases, the generalized error function, the Zipf-Mandelbrot pdf, and the generalized Gaussian and Laplace pdfs. Their cumulative functions and moments were also obtained analytically.
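The abstract does not reproduce the formulas; a common one-parameter (Tsallis-style) form of the generalized logarithm and its exponential inverse, assumed here for illustration, is:

```python
import math

def gen_log(x, q):
    # One-parameter generalized logarithm; q -> 0 recovers math.log(x).
    if q == 0:
        return math.log(x)
    return (x**q - 1.0) / q

def gen_exp(y, q):
    # Inverse of gen_log (valid while 1 + q*y > 0); q -> 0 recovers math.exp(y).
    if q == 0:
        return math.exp(y)
    return (1.0 + q * y) ** (1.0 / q)

x, q = 2.5, 0.3
assert abs(gen_exp(gen_log(x, q), q) - x) < 1e-12  # exact inverse pair
assert abs(gen_log(x, 1e-8) - math.log(x)) < 1e-6  # ordinary log in the limit
```

Stretched-exponential and Zipf-Mandelbrot pdfs then arise by substituting gen_exp for the ordinary exponential in the usual density formulas.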
Wang, R; Sun, B; Li, X Y; He, H Y; Tang, X; Zhan, Q Y; Tong, Z H
2016-09-01
To investigate the predictive values of different critical scoring systems for mortality in patients with severe acute respiratory failure (ARF) supported by venovenous extracorporeal membrane oxygenation (VV-ECMO). Forty-two patients with severe ARF supported by VV-ECMO were enrolled from November 2009 to July 2015. There were 25 males and 17 females. The mean age was (44±18) years (range 18-69 years). Acute Physiology and Chronic Health Evaluation (APACHE) Ⅱ, Ⅲ, Ⅳ, Simplified Acute Physiology Score (SAPS) Ⅱ, Sequential Organ Failure Assessment (SOFA), ECMO net, PRedicting dEath for SEvere ARDS on VV-ECMO (PRESERVE), and Respiratory ECMO Survival Prediction (RESP) scores were collected within 6 hours before VV-ECMO support. The patients were divided into the survivors group (n=17) and the nonsurvivors group (n=25) by survival at 180 d after receiving VV-ECMO. The patient clinical characteristics and aforementioned scoring systems were compared between groups. Scoring systems for predicting prognosis were assessed using the area under the receiver-operating characteristic (ROC) curve. The Kaplan-Meier method was used to draw the survival curve, and the survival of the patients was analyzed by the log-rank test. The risk factors were assessed for prognosis by multiple logistic regression analysis. (1) Positive end-expiratory pressure (PEEP) 6 hours prior to VV-ECMO support in the survivors group [(9.7±5.0) cmH2O (1 cmH2O=0.098 kPa)] was lower than that in the nonsurvivors group [(13.2±5.4) cmH2O, t=-2.134, P=0.039]. VV-ECMO combined with continuous renal replacement therapy (CRRT) was used more often in the nonsurvivors group (32%) than in the survivors group (6%, χ²=4.100, P=0.043). Duration of VV-ECMO support in the nonsurvivors group [(15±13) d] was longer than that in the survivors group [(12±11) d, t=-2.123, P=0.041]. APACHE Ⅱ, APACHE Ⅲ, APACHE Ⅳ, ECMO net, PRESERVE, and RESP scores in the survivors group were superior to the nonsurvivors
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Walk Score measures the walkability of any address using a patented system developed by the Walk Score company. For each 2010 Census Tract centroid, Walk Score...
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Probability and rational choice
Directory of Open Access Journals (Sweden)
David Botting
2014-05-01
http://dx.doi.org/10.5007/1808-1711.2014v18n1p1 In this paper I will discuss the rationality of reasoning about the future. There are two things that we might like to know about the future: which hypotheses are true and what will happen next. To put it in philosophical language, I aim to show that there are methods by which inferring to a generalization (selecting a hypothesis) and inferring to the next instance (singular predictive inference) can be shown to be normative and the method itself shown to be rational, where this is due in part to being based on evidence (although not in the same way) and in part on a prior rational choice. I will also argue that these two inferences have been confused, being distinct not only conceptually (as nobody disputes) but also in their results (the value given to the probability of the hypothesis being not in general that given to the next instance), and that methods that are adequate for one are not by themselves adequate for the other. A number of debates over method founder on this confusion and do not show what the debaters think they show.
2016-01-01
While researchers are becoming increasingly interested in studying the OSS phenomenon, there is still a small number of studies analyzing larger samples of projects to investigate the structure of activities among OSS developers. The significant amount of information that has been gathered in the publicly available open-source software repositories and mailing-list archives offers an opportunity to analyze project structures and participant involvement. In this article, using commit data from 263 Apache project repositories (nearly all), we show that although OSS development is often described as collaborative, it in fact predominantly relies on radically solitary input and individual, non-collaborative contributions. We also show, in the first published study of this magnitude, that the engagement of contributors follows a power-law distribution. PMID:27096157
Energy Technology Data Exchange (ETDEWEB)
Efroymson, R.A.
2002-05-09
This ecological risk assessment for a testing program at Yuma Proving Ground, Arizona, is a demonstration of the Military Ecological Risk Assessment Framework (MERAF; Suter et al. 2001). The demonstration is intended to illustrate how risk assessment guidance concerning generic military training and testing activities and guidance concerning a specific type of activity (e.g., low-altitude aircraft overflights) may be implemented at a military installation. MERAF was developed with funding from the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense. Novel aspects of MERAF include: (1) the assessment of risks from physical stressors using an ecological risk assessment framework, (2) the consideration of contingent or indirect effects of stressors (e.g., population-level effects that are derived from habitat or hydrological changes), (3) the integration of risks associated with different component activities or stressors, (4) the emphasis on quantitative risk estimates and estimates of uncertainty, and (5) the modularity of design, permitting components of the framework to be used in various military risk assessments that include similar activities. The particular subject of this report is the assessment of ecological risks associated with a testing program at Cibola Range of Yuma Proving Ground, Arizona. The program involves an Apache Longbow helicopter firing Hellfire missiles at moving targets, i.e., M60-A1 tanks. Thus, the three component activities of the Apache-Hellfire test were: (1) helicopter overflight, (2) missile firing, and (3) tracked vehicle movement. The demonstration was limited to two ecological endpoint entities (i.e., potentially susceptible and valued populations or communities): woody desert wash communities and mule deer populations. The core assessment area is composed of about 126 km² between the Chocolate and Middle Mountains. The core time of the program is a three-week period, including fourteen days of
APOGEE-2: The Second Phase of the Apache Point Observatory Galactic Evolution Experiment in SDSS-IV
Sobeck, Jennifer; Majewski, S.; Hearty, F.; Schiavon, R. P.; Holtzman, J. A.; Johnson, J.; Frinchaboy, P. M.; Skrutskie, M. F.; Munoz, R.; Pinsonneault, M. H.; Nidever, D. L.; Zasowski, G.; Garcia Perez, A.; Fabbian, D.; Meza Cofre, A.; Cunha, K. M.; Smith, V. V.; Chiappini, C.; Beers, T. C.; Steinmetz, M.; Anders, F.; Bizyaev, D.; Roman, A.; Fleming, S. W.; Crane, J. D.; SDSS-IV/APOGEE-2 Collaboration
2014-01-01
The second phase of the Apache Point Observatory Galactic Evolution Experiment (APOGEE-2), a part of the Sloan Digital Sky Survey IV (SDSS-IV), will commence operations in 2014. APOGEE-2 represents a significant expansion over APOGEE-1, not only in the size of the stellar sample, but also in the coverage of the sky through observations in both the Northern and Southern Hemispheres. Observations on the 2.5m Sloan Foundation Telescope of the Apache Point Observatory (APOGEE-2N) will continue immediately after the conclusion of APOGEE-1, to be followed by observations with the 2.5m du Pont Telescope of the Las Campanas Observatory (APOGEE-2S) within three years. Over the six-year lifetime of the project, high resolution (R ≈ 22,500), high signal-to-noise (≥100) spectroscopic data in the H-band wavelength regime (1.51-1.69 μm) will be obtained for several hundred thousand stars, more than tripling the total APOGEE-1 sample. Accurate radial velocities and detailed chemical compositions will be generated for target stars in the main Galactic components (bulge, disk, and halo), open/globular clusters, and satellite dwarf galaxies. The spectroscopic follow-up program of Kepler targets with the APOGEE-2N instrument will be continued and expanded. APOGEE-2 will significantly extend and enhance the APOGEE-1 legacy of scientific contributions to understanding the origin and evolution of the elements, the assembly and formation history of galaxies like the Milky Way, and fundamental stellar astrophysics.
International Nuclear Information System (INIS)
Efroymson, R.A.
2002-01-01
This ecological risk assessment for a testing program at Yuma Proving Ground, Arizona, is a demonstration of the Military Ecological Risk Assessment Framework (MERAF; Suter et al. 2001). The demonstration is intended to illustrate how risk assessment guidance concerning generic military training and testing activities and guidance concerning a specific type of activity (e.g., low-altitude aircraft overflights) may be implemented at a military installation. MERAF was developed with funding from the Strategic Environmental Research and Development Program (SERDP) of the Department of Defense. Novel aspects of MERAF include: (1) the assessment of risks from physical stressors using an ecological risk assessment framework, (2) the consideration of contingent or indirect effects of stressors (e.g., population-level effects that are derived from habitat or hydrological changes), (3) the integration of risks associated with different component activities or stressors, (4) the emphasis on quantitative risk estimates and estimates of uncertainty, and (5) the modularity of design, permitting components of the framework to be used in various military risk assessments that include similar activities. The particular subject of this report is the assessment of ecological risks associated with a testing program at Cibola Range of Yuma Proving Ground, Arizona. The program involves an Apache Longbow helicopter firing Hellfire missiles at moving targets, i.e., M60-A1 tanks. Thus, the three component activities of the Apache-Hellfire test were: (1) helicopter overflight, (2) missile firing, and (3) tracked vehicle movement. The demonstration was limited to two ecological endpoint entities (i.e., potentially susceptible and valued populations or communities): woody desert wash communities and mule deer populations. The core assessment area is composed of about 126 km² between the Chocolate and Middle Mountains. The core time of the program is a three-week period, including fourteen days of
COVAL, Compound Probability Distribution for Function of Probability Distribution
International Nuclear Information System (INIS)
Astolfi, M.; Elbaz, J.
1979-01-01
1 - Nature of the physical problem solved: Computation of the probability distribution of a function of variables, given the probability distribution of the variables themselves. 'COVAL' has been applied to reliability analysis of a structure subject to random loads. 2 - Method of solution: Numerical transformation of probability distributions
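COVAL itself works by numerical transformation of the input distributions; the same question, the distribution of a function of random variables, can also be approximated by plain Monte Carlo. The strength/load reliability toy below is assumed for illustration, not taken from the code:

```python
import random
import statistics

random.seed(42)

# Failure margin g = S - L for random strength S and random load L
# (illustrative normal distributions; the structure fails when g < 0).
def sample_margin():
    s = random.gauss(10.0, 1.5)  # strength
    l = random.gauss(6.0, 2.0)   # load
    return s - l

margins = [sample_margin() for _ in range(100_000)]
p_fail = sum(m < 0 for m in margins) / len(margins)

# Analytically g is normal with mean 4 and sd sqrt(1.5^2 + 2^2) = 2.5,
# so the empirical mean sits near 4 and P(failure) near 0.055.
assert abs(statistics.mean(margins) - 4.0) < 0.1
assert 0.04 < p_fail < 0.07
```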
Falk, Ruma; Kendig, Keith
2013-01-01
Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.
Introduction to probability with R
Baclawski, Kenneth
2008-01-01
FOREWORD PREFACE Sets, Events, and Probability The Algebra of Sets The Bernoulli Sample Space The Algebra of Multisets The Concept of Probability Properties of Probability Measures Independent Events The Bernoulli Process The R Language Finite Processes The Basic Models Counting Rules Computing Factorials The Second Rule of Counting Computing Probabilities Discrete Random Variables The Bernoulli Process: Tossing a Coin The Bernoulli Process: Random Walk Independence and Joint Distributions Expectations The Inclusion-Exclusion Principle General Random Variable
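The Bernoulli process and random-walk models listed in the contents above are easy to sketch. The book itself uses R; the same toy in Python, with assumed parameters, is:

```python
import random

random.seed(1)

def bernoulli_walk(n, p):
    """Random walk driven by a Bernoulli(p) coin: +1 on success, -1 on failure."""
    position, path = 0, [0]
    for _ in range(n):
        position += 1 if random.random() < p else -1  # one coin toss
        path.append(position)
    return path

path = bernoulli_walk(1000, 0.5)
assert len(path) == 1001                                     # origin + 1000 steps
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))  # unit steps only
assert (path[-1] - 1000) % 2 == 0                            # parity of n steps
```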
Ross, Sheldon
2014-01-01
A First Course in Probability, Ninth Edition, features clear and intuitive explanations of the mathematics of probability theory, outstanding problem sets, and a variety of diverse examples and applications. This book is ideal for an upper-level undergraduate or graduate level introduction to probability for math, science, engineering and business students. It assumes a background in elementary calculus.
[The diagnostic scores for deep venous thrombosis].
Junod, A
2015-08-26
Seven diagnostic scores for the deep venous thrombosis (DVT) of lower limbs are analyzed and compared. Two features make this exercise difficult: the problem of distal DVT and of their proximal extension, and the status of patients, whether out- or in-patients. The most popular score is the Wells score (1997), modified in 2003. It includes one subjective element based on clinical judgment. The Primary Care score (2005), less known, has similar properties, but uses only objective data. The present trend is to associate clinical scores with D-dimer testing to rule out, with good sensitivity, the probability of DVT. For upper limb DVT, the Constans score (2008) is available, which can also be coupled with D-dimer testing (Kleinjan).
Radtke, Anne; Pfister, Roman; Kuhr, Kathrin; Kochanek, Matthias; Michels, Guido
2017-10-01
The aim of the FEELING-ON-ICU study was to compare mortality estimations of critically ill patients based on the 'gut feeling' of medical staff and on the Acute Physiology And Chronic Health Evaluation (APACHE) II, Simplified Acute Physiology Score (SAPS) II and Sequential Organ Failure Assessment (SOFA). Medical staff estimated patients' mortality risks via questionnaires. APACHE II, SAPS II and SOFA were calculated retrospectively from records. Estimations were compared with actual in-hospital mortality using receiver operating characteristic (ROC) curves and the area under the ROC curve (AUC). 66 critically ill patients (60.6% male, mean age 63±15 years (range 30-86)) were evaluated each by a nurse (n=66, male 32.4%) and a physician (n=66, male 67.6%). 15 (22.7%) patients died in the intensive care unit. AUC was largest for estimations by physicians (AUC 0.814 (95% CI 0.705-0.923)), followed by SOFA (AUC 0.749 (95% CI 0.629-0.868)), SAPS II (AUC 0.723 (95% CI 0.597-0.849)), APACHE II (AUC 0.721 (95% CI 0.595-0.847)) and nursing staff (AUC 0.669 (95% CI 0.529-0.810)) (p<0.05 for all results). The concept of physicians' 'gut feeling' was comparable to classical objective scores in mortality estimations of critically ill patients. Concerning practicability, physicians' evaluations were advantageous compared to complex score calculation.
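The AUC values compared above can be computed without any library via the Mann-Whitney identity: the AUC equals the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor. The scores and outcomes below are invented toy data, not the study's:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [22, 31, 12, 27, 9, 35, 18, 29]  # e.g. severity-score points
labels = [0, 1, 0, 1, 0, 1, 0, 0]         # 1 = died in hospital
assert abs(auroc(scores, labels) - 14 / 15) < 1e-12
```

An AUC of 0.5 means the score discriminates no better than chance; 1.0 means perfect ranking of deaths above survivors.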
A brief introduction to probability.
Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio
2018-02-01
The theory of probability has been debated for centuries: as early as the 1600s, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution applied to statistical analysis.
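The "probability distribution" concept can be illustrated in a few lines with Python's standard library; the parameters of the continuous (normal) variable below are assumed for illustration:

```python
from statistics import NormalDist

# A distribution assigns probabilities to outcomes; for a continuous
# variable these come from the cumulative distribution function (cdf).
height = NormalDist(mu=170, sigma=10)   # illustrative parameters (cm)
p = height.cdf(180) - height.cdf(160)   # P(160 <= X <= 180), i.e. +/- 1 sigma
assert abs(p - 0.6827) < 1e-3           # the familiar 68% rule
```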
Zhou, Lin; Guo, Jianming; Wang, Hang; Wang, Guomin
2015-01-01
In the zero ischemia era of nephron-sparing surgery (NSS), a new anatomic classification system (ACS) is needed to adjust to these new surgical techniques. We devised a novel and simple ACS, and compared it with the RENAL and PADUA scores to predict the risk of NSS outcomes. We retrospectively evaluated 789 patients who underwent NSS with available imaging between January 2007 and July 2014. Demographic and clinical data were assessed. The Zhongshan (ZS) score consisted of three parameters. RENAL, PADUA, and ZS scores are divided into three groups, that is, high, moderate, and low scores. For operative time (OT), significant differences were seen between any two groups of ZS score and PADUA score (all P RENAL showed no significant difference between moderate and high complexity in OT, WIT, estimated blood loss, and increase in SCr. Compared with patients with a low score of ZS, those with a high or moderate score had 8.1-fold or 3.3-fold higher risk of surgical complications, respectively (all P RENAL score, patients with a high or moderate score had 5.7-fold or 1.9-fold higher risk of surgical complications, respectively (all P RENAL and PADUA scores. ZS score could be used to reflect the surgical complexity and predict the risk of surgical complications in patients undergoing NSS. PMID:25654399
Propensity, Probability, and Quantum Theory
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
WebScore: An Effective Page Scoring Approach for Uncertain Web Social Networks
Directory of Open Access Journals (Sweden)
Shaojie Qiao
2011-10-01
To effectively score pages with uncertainty in web social networks, we first proposed a new concept called the transition probability matrix and formally defined the uncertainty in web social networks. Second, we proposed a hybrid page scoring algorithm, called WebScore, based on the PageRank algorithm and three centrality measures: degree, betweenness, and closeness. In particular, WebScore takes full account of the uncertainty of web social networks by computing the transition probability from one page to another. The basic idea of WebScore is to: (1) integrate uncertainty into PageRank in order to accurately rank pages, and (2) apply the centrality measures to calculate the importance of pages in web social networks. In order to verify the performance of WebScore, we developed a web social network analysis system which can partition web pages into distinct groups and score them in an effective fashion. Finally, we conducted extensive experiments on real data and the results show that WebScore is effective at scoring uncertain pages with less time deficiency than PageRank and centrality-based page scoring algorithms.
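The PageRank half of such a scorer can be sketched as power iteration over a row-stochastic transition probability matrix. The matrix values and damping factor below are assumed, and WebScore's centrality-blending step is not reproduced:

```python
import numpy as np

# T[i, j]: probability that a surfer on page i moves to page j
# (each row sums to 1, encoding the uncertain link structure).
T = np.array([
    [0.0, 0.5, 0.5],
    [0.3, 0.0, 0.7],
    [0.6, 0.4, 0.0],
])

def pagerank(T, damping=0.85, tol=1e-10):
    """Power iteration on the damped transition matrix."""
    n = T.shape[0]
    rank = np.full(n, 1.0 / n)
    while True:
        new = (1 - damping) / n + damping * rank @ T
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

scores = pagerank(T)
assert abs(scores.sum() - 1.0) < 1e-8  # remains a probability vector
assert (scores > 0).all()
```

A hybrid score in the spirit of the abstract would then combine this vector with degree, betweenness, and closeness centralities computed on the same graph.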
Prediction and probability in sciences
International Nuclear Information System (INIS)
Klein, E.; Sacquin, Y.
1998-01-01
This book reports the 7 presentations made at the third 'physics and fundamental questions' meeting, whose theme was probability and prediction. The concept of probability, invented to apprehend random phenomena, has become an important branch of mathematics, and its range of application spreads from radioactivity to species evolution, via cosmology and the management of very weak risks. The notion of probability is the basis of quantum mechanics and is thus bound up with the very nature of matter. The 7 topics are: - radioactivity and probability, - statistical and quantum fluctuations, - quantum mechanics as a generalized probability theory, - probability and the irrational efficiency of mathematics, - can we foresee the future of the universe?, - chance, eventuality and necessity in biology, - how to manage weak risks? (A.C.)
Applied probability and stochastic processes
Sumita, Ushio
1999-01-01
Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...
Risk-adjusted scoring systems in colorectal surgery.
Leung, Edmund; McArdle, Kirsten; Wong, Ling S
2011-01-01
Consequent to recent advances in surgical techniques and management, survival rates have increased substantially over the last 25 years, particularly in colorectal cancer patients. However, post-operative morbidity and mortality from colorectal cancer vary widely across the country. Standardised outcome measures are therefore emphasised, not only for professional accountability but also for comparison between treatment units and regions. In a heterogeneous population, the use of crude mortality as an outcome measure for patients undergoing surgery is simply misleading. Meaningful comparisons require accurate risk stratification of the patients being analysed before conclusions can be reached regarding the outcomes recorded. Sub-specialised colorectal surgical units are usually dedicated to more complex and high-risk operations. Accurate risk prediction is necessary in these units, as mortality and morbidity are often used to justify the practice of high-risk surgery. The Acute Physiology And Chronic Health Evaluation (APACHE) is a system for classifying patients in the intensive care unit; however, the APACHE score was considered too complex for general surgical use. The American Society of Anaesthesiologists (ASA) grade has been considered useful as an adjunct to informed consent and for monitoring surgical performance through time; it is simple but too subjective. The Physiological & Operative Severity Score for the enUmeration of Mortality and morbidity (POSSUM) and its variant, Portsmouth POSSUM (P-POSSUM), were devised to predict outcomes in surgical patients in general, taking into account the variables in the case-mix. POSSUM has two parts: assessment of physiological parameters and operative scores. There are 12 physiological parameters and 6 operative measures. The physiological parameters are taken at the time of surgery. Each physiological parameter or operative variable is sub-divided into three or four levels with
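The POSSUM-style predictors take a logistic form once the physiological and operative scores are summed. A hedged sketch using the commonly cited published coefficients (POSSUM morbidity, Copeland et al.; P-POSSUM mortality, Prytherch et al.); these should be verified against the original papers, and the example scores are illustrative, not clinical guidance.

```python
import math

# Hedged sketch of the POSSUM-style logistic predictors. The coefficients
# are the commonly cited published values; verify against the original
# papers before any real use. Input scores are illustrative.

def possum_morbidity(phys_score, op_score):
    # POSSUM: ln(R / (1 - R)) = -5.91 + 0.16 * physiological + 0.19 * operative
    logit = -5.91 + 0.16 * phys_score + 0.19 * op_score
    return 1.0 / (1.0 + math.exp(-logit))

def p_possum_mortality(phys_score, op_score):
    # P-POSSUM: ln(R / (1 - R)) = -9.065 + 0.1692 * physiological + 0.1550 * operative
    logit = -9.065 + 0.1692 * phys_score + 0.1550 * op_score
    return 1.0 / (1.0 + math.exp(-logit))

# Example: a moderately high-risk patient (scores are made up for illustration).
risk = p_possum_mortality(phys_score=30, op_score=15)
assert 0.0 < risk < 1.0
```

As expected of a logistic model, predicted risk increases monotonically with either component score.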
Poisson Processes in Free Probability
An, Guimei; Gao, Mingchu
2015-01-01
We prove a multidimensional Poisson limit theorem in free probability, and define joint free Poisson distributions in a non-commutative probability space. We define (compound) free Poisson processes explicitly, in analogy with the definitions of (compound) Poisson processes in classical probability. We prove that the sum of finitely many freely independent compound free Poisson processes is a compound free Poisson process. We give a step-by-step procedure for constructing a (compound) free Poisso...
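For reference, the one-dimensional free Poisson law coincides with the Marchenko-Pastur distribution. A hedged statement of its standard form, with the jump size normalized to 1 (an assumption not fixed by the abstract):

```latex
% Free Poisson (Marchenko--Pastur) law with rate \lambda > 0, jump size 1.
% For 0 < \lambda \le 1 there is an atom at 0 of mass 1 - \lambda.
\[
  \mu_\lambda = \max(1-\lambda,\,0)\,\delta_0 + \nu_\lambda, \qquad
  \frac{d\nu_\lambda}{dx}
    = \frac{\sqrt{(x-a)(b-x)}}{2\pi x}\,\mathbf{1}_{[a,b]}(x),
\]
\[
  a = \bigl(1-\sqrt{\lambda}\bigr)^2, \qquad b = \bigl(1+\sqrt{\lambda}\bigr)^2 .
\]
```

The compound case replaces the single jump size by a jump distribution, mirroring the classical compound Poisson construction mentioned in the abstract.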
PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT
We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...
An introduction to probability and statistical inference
Roussas, George G
2003-01-01
"The text is wonderfully written and has the most comprehensive range of exercise problems that I have ever seen." - Tapas K. Das, University of South Florida. "The exposition is great; a mixture between conversational tones and formal mathematics; the appropriate combination for a math text at [this] level. In my examination I could find no instance where I could improve the book." - H. Pat Goeters, Auburn University, Alabama. * Contains more than 200 illustrative examples discussed in detail, plus scores of numerical examples and applications * Chapters 1-8 can be used independently for an introductory course in probability * Provides a substantial number of proofs
Fernandes, Natália Maria da Silva; Pinto, Patrícia dos Santos; Lacet, Thiago Bento de Paiva; Rodrigues, Dominique Fonseca; Bastos, Marcus Gomes; Stella, Sérgio Reinaldo; Cendoroglo Neto, Miguel
2009-01-01
INTRODUCTION: Acute renal failure (ARF) continues to show high prevalence, morbidity, and mortality. OBJECTIVE: To compare the APACHE II prognostic score with the ATN-ISS and to determine whether APACHE II can be used for patients with ARF outside the ICU. METHODS: Prospective cohort of 205 patients with ARF. We analyzed demographic data, pre-existing conditions, organ failure, and characteristics of the ARF. The prognostic scores were calculated on the day of the nephrologist's assessment. RESULTS: The ...
Alkasem, Ameen; Liu, Hongwei; Zuo, Decheng; Algarash, Basheer
2018-01-01
The volume of data being collected, analyzed, and stored has exploded in recent years, in particular in relation to activity on the cloud. Large-scale data processing, analysis, and storage platforms such as cloud computing are increasingly widely used. Today, the major challenge is how to monitor and control these massive amounts of data and perform analysis in real time at scale. Traditional methods and systems are unable to cope with these quantities of data in real time. Here we present a new methodology for constructing a model for optimizing the performance of real-time monitoring of big datasets, which combines machine learning algorithms with Apache Spark Streaming to accomplish fine-grained fault diagnosis and repair of big datasets. As a case study, we use the failure of Virtual Machines (VMs) to start up. The proposed methodology ensures that the most sensible action is carried out during the procedure of fine-grained monitoring, and generates the most effective and cost-saving fault repair, through three control steps: (I) data collection; (II) analysis engine; and (III) decision engine. We found that running this novel methodology can save a considerable amount of time compared to the Hadoop model, without sacrificing classification accuracy or performance. The accuracy of the proposed method (92.13%) is an improvement on traditional approaches.
Murray, Peter J; Oyri, Karl
2005-01-01
Many health informatics organisations do not seem to make practical use, for the benefit of their activities and interaction with their members, of the very technologies that they often promote for use within healthcare environments. In particular, many organisations seem to be slow to take up the benefits of interactive web technologies. This paper presents an introduction to some of the many free/libre and open source (FLOSS) applications currently available and using the LAMP architecture - Linux, Apache, MySQL, PHP - as a way of cheaply deploying reliable, scalable, and secure web applications. The experience of moving to applications using the LAMP architecture, in particular that of the Open Source Nursing Informatics (OSNI) Working Group of the Special Interest Group in Nursing Informatics of the International Medical Informatics Association (IMIA-NI), in using PostNuke, a FLOSS Content Management System (CMS), illustrates many of the benefits of such applications. The experiences of the authors in installing and maintaining a large number of websites using FLOSS CMSs to develop dynamic, interactive websites that facilitate real engagement with the members of IMIA-NI OSNI, the IMIA Open Source Working Group, and the Centre for Health Informatics Research and Development (CHIRAD), as well as other organisations, are used as the basis for discussing the potential benefits that could be realised by others within the health informatics community.
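On the deployment side, a LAMP stack like the one described typically exposes the CMS through an Apache virtual host. A minimal sketch, assuming Apache 2.4; the ServerName, paths, and log names are illustrative assumptions, not taken from the paper:

```apache
# Minimal LAMP-style virtual host for a PHP CMS (hostname and paths are
# illustrative; mod_php or php-fpm is assumed to be configured separately).
<VirtualHost *:80>
    ServerName cms.example.org
    DocumentRoot /var/www/postnuke

    <Directory /var/www/postnuke>
        # Let the CMS's .htaccess rewrite rules take effect.
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/cms_error.log
    CustomLog ${APACHE_LOG_DIR}/cms_access.log combined
</VirtualHost>
```

With the database credentials configured in the CMS itself, this single file is essentially all the web-server configuration such a site needs, which is part of the low-cost appeal the authors describe.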
International Nuclear Information System (INIS)
Jones, Daniel Steven; Efroymson, Rebecca Ann; Hargrove, William Walter; Suter, Glenn; Pater, Larry
2008-01-01
A multiple stressor risk assessment was conducted at Yuma Proving Ground, Arizona, as a demonstration of the Military Ecological Risk Assessment Framework. The focus was a testing program at Cibola Range, which involved an Apache Longbow helicopter firing Hellfire missiles at moving targets, M60-A1 tanks. This paper describes the ecological risk assessment for the missile launch and detonation. The primary stressor associated with this activity was sound. Other minor stressors included the detonation impact, shrapnel, and fire. Exposure to desert mule deer (Odocoileus hemionus crooki) was quantified using the Army sound contour program BNOISE2, as well as distances from the explosion to deer. Few effects data were available from related studies. Exposure-response models for the characterization of effects consisted of human 'disturbance' and hearing damage thresholds in units of C-weighted decibels (sound exposure level) and a distance-based No Observed Adverse Effects Level for moose and cannonfire. The risk characterization used a weight-of-evidence approach and concluded that risk to mule deer behavior from the missile firing was likely for a negligible number of deer, but that no risk to mule deer abundance and reproduction was expected
Energy Technology Data Exchange (ETDEWEB)
Nidever, David L.; Zasowski, Gail; Majewski, Steven R.; Beaton, Rachael L.; Wilson, John C.; Skrutskie, Michael F.; O'Connell, Robert W. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Bird, Jonathan; Schoenrich, Ralph; Johnson, Jennifer A.; Sellgren, Kris [Department of Astronomy and the Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Robin, Annie C.; Schultheis, Mathias [Institut Utinam, CNRS UMR 6213, OSU THETA, Universite de Franche-Comte, 41bis avenue de l'Observatoire, F-25000 Besancon (France); Martinez-Valpuesta, Inma; Gerhard, Ortwin [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Shetrone, Matthew [McDonald Observatory, University of Texas at Austin, Fort Davis, TX 79734 (United States); Schiavon, Ricardo P. [Gemini Observatory, 670 North A'Ohoku Place, Hilo, HI 96720 (United States); Weiner, Benjamin [Steward Observatory, 933 North Cherry Street, University of Arizona, Tucson, AZ 85721 (United States); Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Allende Prieto, Carlos, E-mail: dln5q@virginia.edu [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain); and others
2012-08-20
Commissioning observations with the Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, have produced radial velocities (RVs) for ~4700 K/M-giant stars in the Milky Way (MW) bulge. These high-resolution (R ≈ 22,500), high-S/N (>100 per resolution element), near-infrared (NIR; 1.51-1.70 μm) spectra provide accurate RVs (ε_V ≈ 0.2 km s⁻¹) for the sample of stars in 18 Galactic bulge fields spanning -1°
Directory of Open Access Journals (Sweden)
Rabiya Abbas
2017-12-01
Software testing is the process of verifying and validating the user's requirements. Testing is an ongoing process throughout software development. Software testing is characterized into three main types. In black-box testing, the tester has no knowledge of the internal logic and design of the system. In white-box testing, the tester knows the internal logic of the code. In grey-box testing, the tester has partial knowledge of the internal structure and working of the system; it is commonly used in integration testing. Load testing helps us analyze the performance of the system under heavy load or under zero load, and is achieved with the help of a load testing tool. The intention of this research is to carry out a comparison of four load testing tools, i.e. Apache JMeter, LoadRunner, Microsoft Visual Studio (TFS), and Siege, based on certain criteria, i.e. test script generation, result reports, application support, plug-in support, and cost. The main focus is to study these load testing tools and identify which tool is better and more efficient. We assume this comparison can help in selecting the most appropriate tool and motivates the use of open source load testing tools.
International Nuclear Information System (INIS)
Woodhouse, E.G.; Bassett, R.L.; Neuman, S.P.; Chen, G.
1997-08-01
This report documents the research performed during the period May 1995-May 1996 for a project of the U.S. Nuclear Regulatory Commission (sponsored contract NRC-04-090-051) by the University of Arizona. The project manager for this research is Thomas J. Nicholson, Office of Nuclear Regulatory Research. The objectives of this research were to examine hypotheses and test alternative conceptual models concerning unsaturated flow and transport through fractured rock, and to design and execute confirmatory field and laboratory experiments to test these hypotheses and conceptual models at the Apache Leap Research Site near Superior, Arizona. Each chapter in this report summarizes research related to a specific set of objectives and can be read and interpreted as a separate entity. Topics include: crosshole pneumatic and gaseous tracer field and modeling experiments designed to help validate the applicability of continuum geostatistical and stochastic concepts, theories, models, and scaling relations relevant to unsaturated flow and transport in fractured porous tuffs; use of geochemistry and aquifer testing to evaluate fracture flow and perching mechanisms; investigations of ²³⁴U/²³⁸U fractionation to evaluate leaching selectivity; and transport and modeling of both conservative and non-conservative tracers
2015-05-19
AH-64 Apache crews operate by day, night, and in adverse weather through the use of nose-mounted, forward-looking infrared (FLIR) pilotage and targeting sensors that provide sensor video and/or symbology to each crewmember via a helmet display unit (HDU). The HDU contains a 1-inch (in.) diameter cathode ray tube (CRT). Cited: Journal of the American Association for Pediatric Ophthalmology and Strabismus, 12(4): 365-369; Sale, D. F., and Lund, G. J. 1993. AH-64 Apache program update.
Mirajkar, Nandan; Bhujbal, Sandeep; Deshmukh, Aaradhana
2013-01-01
Applications like Yahoo, Facebook, and Twitter have huge volumes of data which must be stored and retrieved on client access. Storing this data requires very large databases, leading to an increase in physical storage, and the analysis required for business growth becomes complex. The storage requirement can be reduced, and distributed processing of the data carried out, using Apache Hadoop, whose Map-Reduce algorithm combines repeated data so that the entire dataset is stored in reduced form. The paper ...
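The combining step described above can be sketched in miniature. A hedged, in-process illustration of the map -> shuffle -> reduce pattern for a word-count-style combine (Hadoop applies the same idea across a cluster; the data and function names here are illustrative):

```python
from itertools import groupby
from operator import itemgetter

# In-process sketch of the map -> shuffle -> reduce pattern that Hadoop
# runs at cluster scale: identical keys are grouped and their values
# combined, so repeated data is stored in reduced form.

def map_phase(records):
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    # "Shuffle": bring identical keys together, then combine their counts.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

data = ["to be or not to be", "to do or not to do"]
counts = dict(reduce_phase(map_phase(data)))
# e.g. counts["to"] == 4 and counts["or"] == 2
```

The twelve input word occurrences reduce to five (word, count) pairs, which is exactly the storage reduction the abstract attributes to combining repeated data.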
Probability inequalities for decomposition integrals
Czech Academy of Sciences Publication Activity Database
Agahi, H.; Mesiar, Radko
2017-01-01
Roč. 315, č. 1 (2017), s. 240-248 ISSN 0377-0427 Institutional support: RVO:67985556 Keywords : Decomposition integral * Superdecomposition integral * Probability inequalities Subject RIV: BA - General Mathematics OBOR OECD: Statistics and probability Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf
Expected utility with lower probabilities
DEFF Research Database (Denmark)
Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1994-01-01
An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...
Invariant probabilities of transition functions
Zaharopol, Radu
2014-01-01
The structure of the set of all the invariant probabilities and the structure of various types of individual invariant probabilities of a transition function are two topics of significant interest in the theory of transition functions, and are studied in this book. The results obtained are useful in ergodic theory and the theory of dynamical systems, which, in turn, can be applied in various other areas (like number theory). They are illustrated using transition functions defined by flows, semiflows, and one-parameter convolution semigroups of probability measures. In this book, all results on transition probabilities that have been published by the author between 2004 and 2008 are extended to transition functions. The proofs of the results obtained are new. For transition functions that satisfy very general conditions the book describes an ergodic decomposition that provides relevant information on the structure of the corresponding set of invariant probabilities. Ergodic decomposition means a splitting of t...
Introduction to probability with Mathematica
Hastings, Kevin J
2009-01-01
Discrete Probability: The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence. Discrete Distributions: Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation. Continuous Probability: From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation. Continuous Distributions: The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables. Asymptotic Theory: Strong and Weak Laws of Large Numbers; Central Limit Theorem. Stochastic Processes and Applications: Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics. Appendix: Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...
International Nuclear Information System (INIS)
Ridgley, Jennie; Wright Dunbar, Robyn
2000-01-01
This is the Phase One contract report to the United States Department of Energy, United States Geological Survey, and the Jicarilla Apache Indian Tribe on the project entitled 'Outcrop Analysis of the Cretaceous Mesaverde Group: Jicarilla Apache Reservation, New Mexico.' Field work for this project was conducted during July and August 1998, at which time fourteen measured sections were described and correlated on or adjacent to Jicarilla Apache Reservation lands. A fifteenth section, described east of the main field area, is included in this report, although its distant location precluded its use in the correlations and cross-sections presented herein. Ground-based photo mosaics were shot for much of the exposed Mesaverde outcrop belt and were used to assist in correlation. Outcrop gamma-ray surveys using a GAD-6 scintillometer were conducted at six of the fifteen measured sections. The raw gamma-ray data are included in this report; however, analysis of those data is part of the ongoing Phase Two of this project
Linear positivity and virtual probability
International Nuclear Information System (INIS)
Hartle, James B.
2004-01-01
We investigate the quantum theory of closed systems based on the linear positivity decoherence condition of Goldstein and Page. The objective of any quantum theory of a closed system, most generally the universe, is the prediction of probabilities for the individual members of sets of alternative coarse-grained histories of the system. Quantum interference between members of a set of alternative histories is an obstacle to assigning probabilities that are consistent with the rules of probability theory. A quantum theory of closed systems therefore requires two elements: (1) a condition specifying which sets of histories may be assigned probabilities and (2) a rule for those probabilities. The linear positivity condition of Goldstein and Page is the weakest of the general conditions proposed so far. Its general properties relating to exact probability sum rules, time neutrality, and conservation laws are explored. Its inconsistency with the usual notion of independent subsystems in quantum mechanics is reviewed. Its relation to the stronger condition of medium decoherence necessary for classicality is discussed. The linear positivity of histories in a number of simple model systems is investigated with the aim of exhibiting linearly positive sets of histories that are not decoherent. The utility of extending the notion of probability to include values outside the range of 0-1 is described. Alternatives with such virtual probabilities cannot be measured or recorded, but can be used in the intermediate steps of calculations of real probabilities. Extended probabilities give a simple and general way of formulating quantum theory. The various decoherence conditions are compared in terms of their utility for characterizing classicality and the role they might play in further generalizations of quantum mechanics
Hofstee, W.K.B.; Ten Berge, J.M.F.; Hendriks, A.A.J.
The standard practice in scoring questionnaires consists of adding item scores and standardizing these sums. We present a set of alternative procedures, consisting of (a) correcting for the acquiescence variance that disturbs the structure of the questionnaire; (b) establishing item weights through
SLACK, CHARLES W.
REINFORCEMENT AND ROLE-REVERSAL TECHNIQUES ARE USED IN THE SCORE PROJECT, A LOW-COST PROGRAM OF DELINQUENCY PREVENTION FOR HARD-CORE TEENAGE STREET CORNER BOYS. COMMITTED TO THE BELIEF THAT THE BOYS HAVE THE POTENTIAL FOR ETHICAL BEHAVIOR, THE SCORE WORKER FOLLOWS B.F. SKINNER'S THEORY OF OPERANT CONDITIONING AND REINFORCES THE DELINQUENT'S GOOD…
Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines
Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.
2011-01-01
Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
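As a minimal illustration of the idea, a toy nearest-neighbor probability machine can be sketched in a few lines. This is a simplified stand-in for, not a reproduction of, the random forest and nearest-neighbor machines the paper analyzes; the 1-D data and k are assumptions.

```python
# Toy "probability machine": the estimated class-1 probability at a query
# point is the fraction of 1-labels among its k nearest training points.
# A minimal sketch of the nearest-neighbor idea, not the paper's algorithms.

def knn_probability(train_x, train_y, query, k=3):
    by_distance = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    neighbors = by_distance[:k]
    return sum(label for _, label in neighbors) / k

# 1-D toy data: label 1 becomes more likely as x grows.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
ys = [0,   0,   0,   0,   1,   1,   1,   1]
p_low = knn_probability(xs, ys, query=0.2)   # deep in the 0-label region
p_high = knn_probability(xs, ys, query=3.2)  # deep in the 1-label region
```

Unlike a plain classifier, the output is a probability in [0, 1] rather than a hard label, which is what risk estimation from individual characteristics requires.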
Probable Inference and Quantum Mechanics
International Nuclear Information System (INIS)
Grandy, W. T. Jr.
2009-01-01
In its current very successful interpretation the quantum theory is fundamentally statistical in nature. Although commonly viewed as a probability amplitude whose (complex) square is a probability, the wavefunction or state vector continues to defy consensus as to its exact meaning, primarily because it is not a physical observable. Rather than approach this problem directly, it is suggested that it is first necessary to clarify the precise role of probability theory in quantum mechanics, either as applied to, or as an intrinsic part of the quantum theory. When all is said and done the unsurprising conclusion is that quantum mechanics does not constitute a logic and probability unto itself, but adheres to the long-established rules of classical probability theory while providing a means within itself for calculating the relevant probabilities. In addition, the wavefunction is seen to be a description of the quantum state assigned by an observer based on definite information, such that the same state must be assigned by any other observer based on the same information, in much the same way that probabilities are assigned.
Directory of Open Access Journals (Sweden)
Tonguç U. Yilmaz
2014-10-01
Full Text Available Objectives: Resistin, a hormone secreted from adipocytes and considered to be a likely cause of insulin resistance, has recently been accepted as a proinflammatory cytokine. This study aimed to determine the correlation between resistin levels in patients with intra-abdominal sepsis and mortality. Methods: Of 45 patients with intraabdominal sepsis, a total of 35 adult patients were included in the study. This study was undertaken from December 2011 to December 2012 and included patients who had no history of diabetes mellitus and who were admitted to the general surgery intensive care units of Gazi University and Bülent Ecevit University School of Medicine, Turkey. Evaluations were performed on 12 patients with sepsis, 10 patients with severe sepsis, 13 patients with septic shock and 15 healthy controls. The patients’ plasma resistin, interleukin-6 (IL-6, tumour necrosis factor alpha (TNF-α, interleukin-1 beta (IL-1β, procalcitonin, lactate and glucose levels and Acute Physiology and Chronic Health Evaluation (APACHE II scores were studied daily for the first five days after admission. A correlation analysis of serum resistin levels with cytokine levels and APACHE II scores was performed. Results: Serum resistin levels in patients with sepsis were significantly higher than in the healthy controls (P <0.001. A significant correlation was found between serum resistin levels and APACHE II scores, serum IL-6, IL-1β, TNF-α, procalcitonin, lactate and glucose levels. Furthermore, a significant correlation was found between serum resistin levels and all-cause mortality (P = 0.02. Conclusion: The levels of resistin were significantly positively correlated with the severity of disease and were a possible mediator of a prolonged inflammatory state in patients with intra-abdominal sepsis.
Failure probability under parameter uncertainty.
Gerrard, R; Tsanakas, A
2011-05-01
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
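The core observation can be illustrated with a small Monte Carlo sketch, assuming a log-normal risk factor whose parameters the decision-maker must estimate from data. The sample size, nominal level, and trial count are illustrative assumptions, not values from the article.

```python
import random
import statistics

# Sketch: a decision-maker targets a 10% failure probability by setting the
# threshold at the *estimated* 90% quantile of a log-normal risk factor
# (equivalently, working on the log scale with a normal distribution).
# Because the parameters are estimated from a small sample, the realized
# failure frequency exceeds the nominal 10% level.

random.seed(0)
NOMINAL = 0.10   # target failure probability
Z_90 = 1.2816    # standard normal 90% quantile
N_DATA = 10      # small sample -> large parameter error
TRIALS = 20000

failures = 0
for _ in range(TRIALS):
    # True log-risk: normal with mu=0, sigma=1 (unknown to the decision-maker).
    data = [random.gauss(0.0, 1.0) for _ in range(N_DATA)]
    mu_hat = statistics.mean(data)
    sigma_hat = statistics.stdev(data)
    threshold = mu_hat + sigma_hat * Z_90   # estimated 90% quantile (log scale)
    x = random.gauss(0.0, 1.0)              # next realization of the log-risk
    failures += (x > threshold)

freq = failures / TRIALS
# freq comes out noticeably above the 10% nominal level.
```

This is the pivotal structure the article exploits: for location-scale families the exceedance event depends only on a t-like statistic, so the inflated failure frequency can be computed exactly and corrected for, as in the paper's approach (1).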
Probability with applications and R
Dobrow, Robert P
2013-01-01
An introduction to probability at the undergraduate level Chance and randomness are encountered on a daily basis. Authored by a highly qualified professor in the field, Probability: With Applications and R delves into the theories and applications essential to obtaining a thorough understanding of probability. With real-life examples and thoughtful exercises from fields as diverse as biology, computer science, cryptology, ecology, public health, and sports, the book is accessible for a variety of readers. The book's emphasis on simulation through the use of the popular R software language c
A philosophical essay on probabilities
Laplace, Marquis de
1996-01-01
A classic of science, this famous essay by "the Newton of France" introduces lay readers to the concepts and uses of probability theory. It is of especial interest today as an application of mathematical techniques to problems in social and biological sciences. Generally recognized as the founder of the modern phase of probability theory, Laplace here applies the principles and general results of his theory "to the most important questions of life, which are, in effect, for the most part, problems in probability." Thus, without the use of higher mathematics, he demonstrates the application
Heart sounds analysis using probability assessment.
Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P
2017-07-31
This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
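One step of such a pipeline, an amplitude envelope followed by peak picking, can be sketched as follows. The rectified moving-average envelope and the synthetic test signal are simplifying assumptions standing in for the band-limited (15-90 Hz) envelopes and real recordings used in the paper.

```python
import math

# Sketch: rectified moving-average amplitude envelope plus simple peak
# picking, as a stand-in for the band-limited envelopes used for S1/S2
# detection. The synthetic signal and window length are assumptions.

def envelope(signal, window=21):
    half = window // 2
    rectified = [abs(s) for s in signal]
    return [
        sum(rectified[max(0, i - half):i + half + 1])
        / len(rectified[max(0, i - half):i + half + 1])
        for i in range(len(rectified))
    ]

def peaks(env, threshold):
    # Local maxima above the threshold (>= on the left tolerates plateaus).
    return [i for i in range(1, len(env) - 1)
            if env[i] > threshold and env[i] >= env[i - 1] and env[i] > env[i + 1]]

# Synthetic "recording": two short 40 Hz bursts in a quiet background.
fs = 1000  # Hz
signal = [0.0] * fs
for center in (200, 600):  # sample indices of the two bursts
    for i in range(center - 20, center + 20):
        signal[i] += math.sin(2 * math.pi * 40 * i / fs)

env = envelope(signal)
found = peaks(env, threshold=0.3)
# The detected peaks cluster around the two burst centers.
```

In the paper's setting the same detect-then-average step is repeated in five frequency bands, and the features come from the averaged S1/S2 shapes rather than the raw peaks.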
Logic, probability, and human reasoning.
Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P
2015-04-01
This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Free probability and random matrices
Mingo, James A
2017-01-01
This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.
Introduction to probability and measure
Parthasarathy, K R
2005-01-01
According to a remark attributed to Mark Kac 'Probability Theory is a measure theory with a soul'. This book with its choice of proofs, remarks, examples and exercises has been prepared taking both these aesthetic and practical aspects into account.
DEFF Research Database (Denmark)
Rudolf, Frauke; Joaquim, Luis Carlos; Vieira, Cesaltina
2013-01-01
Background: This study was carried out in Guinea-Bissau's capital Bissau among inpatients and outpatients attending for tuberculosis (TB) treatment within the study area of the Bandim Health Project, a Health and Demographic Surveillance Site. Our aim was to assess the variability between two physicians in performing the Bandim tuberculosis score (TBscore), a clinical severity score for pulmonary TB (PTB), and to compare it to the Karnofsky performance score (KPS). Method: From December 2008 to July 2009 we assessed the TBscore and the KPS of 100 PTB patients at inclusion in the TB cohort and...
Joint probabilities and quantum cognition
International Nuclear Information System (INIS)
Acacio de Barros, J.
2012-01-01
In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
Joint probabilities and quantum cognition
Energy Technology Data Exchange (ETDEWEB)
Acacio de Barros, J. [Liberal Studies, 1600 Holloway Ave., San Francisco State University, San Francisco, CA 94132 (United States)
2012-12-18
In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.
Default probabilities and default correlations
Erlenmaier, Ulrich; Gersbach, Hans
2001-01-01
Starting from the Merton framework for firm defaults, we provide the analytics and robustness of the relationship between default probabilities and default correlations. We show that loans with higher default probabilities will not only have higher variances but also higher correlations between loans. As a consequence, portfolio standard deviation can increase substantially when loan default probabilities rise. This result has two important implications. First, relative prices of loans with different default probabili...
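The qualitative claim above (higher default probabilities go with higher default correlations) can be checked in a minimal one-factor Merton-style simulation. This is a hedged sketch, not the authors' model: the asset correlation of 0.30, the sample size, and the Gaussian one-factor structure are assumptions chosen for illustration.

```python
import math
import random
from statistics import NormalDist

def default_correlation(pd_marginal, asset_corr, n=100_000, seed=7):
    # Monte-Carlo default correlation for two firms in a one-factor
    # Merton-style model: a firm defaults when its standard-normal asset
    # return falls below the threshold implied by its marginal default
    # probability.
    rng = random.Random(seed)
    k = NormalDist().inv_cdf(pd_marginal)       # default threshold
    a, b = math.sqrt(asset_corr), math.sqrt(1.0 - asset_corr)
    joint = 0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)                 # common (systematic) factor
        x1 = a * z + b * rng.gauss(0.0, 1.0)    # firm 1 asset return
        x2 = a * z + b * rng.gauss(0.0, 1.0)    # firm 2 asset return
        joint += (x1 < k) and (x2 < k)
    p_joint = joint / n
    var = pd_marginal * (1.0 - pd_marginal)     # variance of default indicator
    return (p_joint - pd_marginal ** 2) / var

low = default_correlation(0.01, 0.30)   # low-PD loans
high = default_correlation(0.10, 0.30)  # high-PD loans, same asset correlation
```

With the asset correlation held fixed, the estimated default correlation is noticeably larger for the higher-PD loans, in line with the abstract's claim.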
The Probabilities of Unique Events
2012-08-30
Washington, DC, USA. Max Lotstein and Phil Johnson-Laird, Department of Psychology, Princeton University, Princeton, NJ, USA, August 30th, 2012. ...social justice and also participated in antinuclear demonstrations. The participants ranked the probability that Linda is a feminist bank teller as... retorted that such a flagrant violation of the probability calculus was a result of a psychological experiment that obscured the rationality of the
Probability Matching, Fast and Slow
Koehler, Derek J.; James, Greta
2014-01-01
A prominent point of contention among researchers regarding the interpretation of probability-matching behavior is whether it represents a cognitively sophisticated, adaptive response to the inherent uncertainty of the tasks or settings in which it is observed, or whether instead it represents a fundamental shortcoming in the heuristics that support and guide human decision making. Put crudely, researchers disagree on whether probability matching is "smart" or "dumb." Here, we consider eviden...
OVERVIEW OF THE SDSS-IV MaNGA SURVEY: MAPPING NEARBY GALAXIES AT APACHE POINT OBSERVATORY
Energy Technology Data Exchange (ETDEWEB)
Bundy, Kevin [Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), Todai Institutes for Advanced Study, the University of Tokyo, Kashiwa 277-8583 (Japan); Bershady, Matthew A.; Wake, David A.; Tremonti, Christy; Diamond-Stanic, Aleksandar M. [Department of Astronomy, University of Wisconsin-Madison, 475 North Charter Street, Madison, WI 53706 (United States); Law, David R.; Cherinka, Brian [Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, Ontario M5S 3H4 (Canada); Yan, Renbin; Sánchez-Gallego, José R. [Department of Physics and Astronomy, University of Kentucky, 505 Rose Street, Lexington, KY 40506-0055 (United States); Drory, Niv [McDonald Observatory, Department of Astronomy, University of Texas at Austin, 1 University Station, Austin, TX 78712-0259 (United States); MacDonald, Nicholas [Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195 (United States); Weijmans, Anne-Marie [School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS (United Kingdom); Thomas, Daniel; Masters, Karen; Coccato, Lodovico [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth (United Kingdom); Aragón-Salamanca, Alfonso [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD (United Kingdom); Avila-Reese, Vladimir [Instituto de Astronomia, Universidad Nacional Autonoma de Mexico, A.P. 70-264, 04510 Mexico D.F. (Mexico); Badenes, Carles [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 OHara St, Pittsburgh, PA 15260 (United States); Falcón-Barroso, Jésus [Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Belfiore, Francesco [Cavendish Laboratory, University of Cambridge, 19 J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); and others
2015-01-01
We present an overview of a new integral field spectroscopic survey called MaNGA (Mapping Nearby Galaxies at Apache Point Observatory), one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV) that began on 2014 July 1. MaNGA will investigate the internal kinematic structure and composition of gas and stars in an unprecedented sample of 10,000 nearby galaxies. We summarize essential characteristics of the instrument and survey design in the context of MaNGA's key science goals and present prototype observations to demonstrate MaNGA's scientific potential. MaNGA employs dithered observations with 17 fiber-bundle integral field units that vary in diameter from 12'' (19 fibers) to 32'' (127 fibers). Two dual-channel spectrographs provide simultaneous wavelength coverage over 3600-10300 Å at R ∼ 2000. With a typical integration time of 3 hr, MaNGA reaches a target r-band signal-to-noise ratio of 4-8 (Å⁻¹ per 2'' fiber) at 23 AB mag arcsec⁻², which is typical for the outskirts of MaNGA galaxies. Targets are selected with M* ≳ 10⁹ M☉ using SDSS-I redshifts and i-band luminosity to achieve uniform radial coverage in terms of the effective radius, an approximately flat distribution in stellar mass, and a sample spanning a wide range of environments. Analysis of our prototype observations demonstrates MaNGA's ability to probe gas ionization, shed light on recent star formation and quenching, enable dynamical modeling, decompose constituent components, and map the composition of stellar populations. MaNGA's spatially resolved spectra will enable an unprecedented study of the astrophysics of nearby galaxies in the coming 6 yr.
OVERVIEW OF THE SDSS-IV MaNGA SURVEY: MAPPING NEARBY GALAXIES AT APACHE POINT OBSERVATORY
International Nuclear Information System (INIS)
Bundy, Kevin; Bershady, Matthew A.; Wake, David A.; Tremonti, Christy; Diamond-Stanic, Aleksandar M.; Law, David R.; Cherinka, Brian; Yan, Renbin; Sánchez-Gallego, José R.; Drory, Niv; MacDonald, Nicholas; Weijmans, Anne-Marie; Thomas, Daniel; Masters, Karen; Coccato, Lodovico; Aragón-Salamanca, Alfonso; Avila-Reese, Vladimir; Badenes, Carles; Falcón-Barroso, Jésus; Belfiore, Francesco
2015-01-01
We present an overview of a new integral field spectroscopic survey called MaNGA (Mapping Nearby Galaxies at Apache Point Observatory), one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV) that began on 2014 July 1. MaNGA will investigate the internal kinematic structure and composition of gas and stars in an unprecedented sample of 10,000 nearby galaxies. We summarize essential characteristics of the instrument and survey design in the context of MaNGA's key science goals and present prototype observations to demonstrate MaNGA's scientific potential. MaNGA employs dithered observations with 17 fiber-bundle integral field units that vary in diameter from 12'' (19 fibers) to 32'' (127 fibers). Two dual-channel spectrographs provide simultaneous wavelength coverage over 3600-10300 Å at R ∼ 2000. With a typical integration time of 3 hr, MaNGA reaches a target r-band signal-to-noise ratio of 4-8 (Å –1 per 2'' fiber) at 23 AB mag arcsec –2 , which is typical for the outskirts of MaNGA galaxies. Targets are selected with M * ≳ 10 9 M ☉ using SDSS-I redshifts and i-band luminosity to achieve uniform radial coverage in terms of the effective radius, an approximately flat distribution in stellar mass, and a sample spanning a wide range of environments. Analysis of our prototype observations demonstrates MaNGA's ability to probe gas ionization, shed light on recent star formation and quenching, enable dynamical modeling, decompose constituent components, and map the composition of stellar populations. MaNGA's spatially resolved spectra will enable an unprecedented study of the astrophysics of nearby galaxies in the coming 6 yr
Calhoun, William; Dargahi-Noubary, G. R.; Shi, Yixun
2002-01-01
The widespread interest in sports in our culture provides an excellent opportunity to catch students' attention in mathematics and statistics classes. One mathematically interesting aspect of volleyball, which can be used to motivate students, is the scoring system. (MM)
Probably not future prediction using probability and statistical inference
Dworsky, Lawrence N
2008-01-01
An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...
Personalized Risk Scoring for Critical Care Prognosis Using Mixtures of Gaussian Processes.
Alaa, Ahmed M; Yoon, Jinsung; Hu, Scott; van der Schaar, Mihaela
2018-01-01
In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients' heterogeneity. The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.
Assessment of PANC3 score in predicting severity of acute pancreatitis
Directory of Open Access Journals (Sweden)
Avreen Singh Shah
2017-01-01
Full Text Available Introduction: Acute pancreatitis is an inflammatory process of the pancreas associated with local and systemic complications. At present, there are many scores (such as Ranson's, APACHE II, and the bedside index for severity in acute pancreatitis) that help us in predicting severity at the time of admission, but these are time consuming, require complex calculation, or are costly. Material and Methods: The PANC3 scoring system is one of the better systems because the three criteria used (hematocrit, body mass index, and pleural effusion) are simple, easy to assess, readily available, and economical. In this prospective study, 100 cases were evaluated to assess the prospects of PANC3 scoring in predicting the severity of acute pancreatitis as determined by the modified Marshall score. Results: The results showed that the PANC3 score had 96.43% specificity, 75% sensitivity, 80% positive predictive value, and 95.29% negative predictive value. Conclusion: Hence, the PANC3 score is a cost-effective, promising score that helps in predicting the severity of acute pancreatitis, leading to prompt treatment and early referral to a higher center.
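The four test characteristics reported above all follow from a single 2x2 confusion matrix. As a sketch, the counts below (TP=12, FN=4, FP=3, TN=81 out of 100 patients) are hypothetical, chosen only because they reproduce the abstract's percentages; the study's actual cell counts are not given in this record.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    # Standard definitions from a 2x2 confusion matrix.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for 100 patients that reproduce the reported figures.
m = diagnostic_metrics(tp=12, fn=4, fp=3, tn=81)
print({k: round(v * 100, 2) for k, v in m.items()})
# → {'sensitivity': 75.0, 'specificity': 96.43, 'ppv': 80.0, 'npv': 95.29}
```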
Salica, Andrea; Weltert, Luca; Scaffa, Raffaele; Guerrieri Wolf, Lorenzo; Nardella, Saverio; Bellisario, Alessandro; De Paulis, Ruggero
2014-11-01
Optimal management of poststernotomy mediastinitis is controversial. Negative pressure wound treatment improves wound environment and sternal stability with low surgical invasiveness. Our protocol was based on negative pressure followed by delayed surgical closure. The aim of this study was to provide the results at early follow-up and to identify the risk factors for adverse outcome. In 5400 cardiac procedures, 44 consecutive patients with mediastinitis were enrolled in the study. Mediastinitis treatment was based on urgent debridement and negative pressure as the first-line approach. After wound sterilization, chest closure was achieved by elective pectoralis muscle advancement flap. Each patient's hospital data were collected prospectively. Variables included patient demographics and clinical and biological data. Acute Physiology and Chronic Health Evaluation (APACHE) II score was calculated at the time of diagnosis and 48 hours after debridement. Focus outcome measures were mediastinitis-related death and need for reintervention after pectoralis muscle closure. El Oakley type I and type IIIA mediastinitis were the most frequent types (63.6%). Methicillin-resistant Staphylococcus aureus was present in 25 patients (56.8%). Mean APACHE II score was 19.4±4 at the time of diagnosis, and 30 patients (68.2%) required intensive care unit transfer before surgical debridement. APACHE II score improved 48 hours after wound debridement and negative pressure application (mean value, 19.4±4 vs 7.2±2; P=.005) independently of any other variables included in the study. One patient in septic shock at the time of diagnosis died (2.2%). Negative pressure promotes a significant improvement in clinical status according to APACHE II score and allows a successful elective surgical closure. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals provide therefore an objective mean to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
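The points of a basic normal probability plot can be computed with the standard library alone. The sketch below uses Blom's plotting positions, one common convention (an assumption here, not necessarily the one used in the paper); the paper's simultaneous 1-α intervals additionally require order-statistic calculations that are not shown.

```python
from statistics import NormalDist

def normal_plot_points(sample):
    # Pairs (theoretical standard-normal quantile, ordered observation).
    # Blom's plotting positions (i - 3/8) / (n + 1/4) give the probabilities
    # mapped through the inverse normal CDF; for a normal sample the pairs
    # should fall close to a straight line.
    nd = NormalDist()
    n = len(sample)
    xs = [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    return list(zip(xs, sorted(sample)))

pts = normal_plot_points([2.1, -0.3, 0.4, 1.2, -1.5, 0.0, 0.8, -0.6])
```

Plotting `pts` (theoretical quantile on the x-axis, ordered data on the y-axis) reproduces the familiar Q-Q display that the paper's interval bands augment.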
Marino, Marcello; Crimi, Gabriele; Maiorana, Florinda; Rizzotti, Diego; Lettieri, Corrado; Bettari, Luca; Zuccari, Marco; Sganzerla, Paolo; Tresoldi, Simone; Adamo, Marianna; Ghiringhelli, Sergio; Sponzilli, Carlo; Pasquetto, Giampaolo; Pavei, Andrea; Pedon, Luigi; Bassan, Luciano; Bollati, Mario; Camisasca, Paola; Trabattoni, Daniela; Brancati, Marta; Poli, Arnaldo; Panciroli, Claudio; Lettino, Maddalena; Tarelli, Giuseppe; Tarantini, Giuseppe; De Luca, Leonardo; Varbella, Ferdinando; Musumeci, Giuseppe; De Servi, Stefano
2017-01-01
Objectives: To explore, for the first time in Italy, appropriateness of indication, adherence to guideline recommendations and mode of selection for coronary revascularisation. Design: Retrospective, pilot study. Setting: 22 percutaneous coronary intervention (PCI)-performing hospitals (20 patients per site), 13 (59%) with on-site cardiac surgery. Participants: 440 patients who received PCI for stable coronary artery disease (CAD) or non-ST elevation acute coronary syndrome were independently selected in a 4:1 ratio with half diabetics. Primary and secondary outcome measures: Proportion of patients who received appropriate PCI using validated appropriate use scores (ie, AUS≥7). Also, in patients with stable CAD, we examined adherence to the following European Society of Cardiology recommendations: (A) per cent of patients with complex coronary anatomy treated after heart team discussion; (B) per cent of fractional flow reserve-guided PCI for borderline stenoses in patients without documented ischaemia; (C) per cent of patients receiving guideline-directed medical therapy at the time of PCI as well as use of provocative test of ischaemia according to pretest probability (PTP) of CAD. Results: Of the 401 mappable PCIs (91%), 38.7% (95% CI 33.9 to 43.6) were classified as appropriate, 47.6% (95% CI 42.7 to 52.6) as uncertain and 13.7% (95% CI 10.5% to 17.5%) as inappropriate. Median PTP in patients with stable CAD without known coronary anatomy was 69% (78% intermediate PTP, 22% high PTP). Ischaemia testing use was similar (p=0.71) in patients with intermediate (n=140, 63%) and with high PTP (n=40, 66%). In patients with stable CAD (n=352) guideline adherence to the three recommendations explored was: (A) 11%; (B) 25%; (C) 23%. AUS was higher in patients evaluated by the heart team as compared with patients who were not (7 (6.8) vs 5 (4.7); p=0.001). Conclusions: Use of heart team approaches and adherence to guideline recommendations on coronary revascularisation in a real-world setting
Probability theory a foundational course
Pakshirajan, R P
2013-01-01
This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the groundwork to later claim the existence of stochastic processes with prescribed finite-dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, the Erdos-Kac invariance principle, the functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a textbook for students pursuing graduate programs in Mathematics and/or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.
VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS
Directory of Open Access Journals (Sweden)
Smirnov Vladimir Alexandrovich
2012-10-01
Full Text Available The article deals with the probability analysis for a vibration isolation system of high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. The external sources of low-frequency vibrations may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Taking the Gauss distribution into account, the author estimates the probability of the relative displacement of the isolated mass remaining below the vibration criteria. This problem is solved in the three-dimensional space evolved by the system parameters, including damping and natural frequency. According to this probability distribution, the chance of exceeding the vibration criteria for a vibration isolation system is evaluated. Optimal system parameters - damping and natural frequency - are developed, such that the probability of exceeding vibration criteria VC-E and VC-D is less than 0.04.
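Under the Gaussian assumption stated above, the exceedance probability is just the two-sided normal tail outside the criterion band. The numbers in this sketch (a criterion at roughly 2.14 standard deviations of the response) are illustrative, not taken from the paper; they merely show a configuration whose exceedance probability falls under the 0.04 figure cited in the abstract.

```python
from statistics import NormalDist

def exceedance_probability(limit, sigma, mean=0.0):
    # Probability that a Gaussian relative displacement leaves the band
    # +/- limit; `limit` plays the role of the vibration criterion.
    nd = NormalDist(mean, sigma)
    return 1.0 - (nd.cdf(limit) - nd.cdf(-limit))

# Illustrative numbers (assumed, not from the paper): criterion at ~2.14
# standard deviations keeps the exceedance chance below 0.04.
p = exceedance_probability(limit=3.0, sigma=1.4)
```

In the paper this calculation is carried out over a grid of damping and natural-frequency values to find the parameter region where the exceedance probability stays acceptably small.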
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Model uncertainty: Probabilities for models?
International Nuclear Information System (INIS)
Winkler, R.L.
1994-01-01
Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising
Knowledge typology for imprecise probabilities.
Energy Technology Data Exchange (ETDEWEB)
Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)
2002-01-01
When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.
Probability, Statistics, and Stochastic Processes
Olofsson, Peter
2011-01-01
A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d
Statistical probability tables CALENDF program
International Nuclear Information System (INIS)
Ribon, P.
1989-01-01
The purpose of the probability tables is: (1) to obtain dense data representation; (2) to calculate integrals by quadratures. They are mainly used in the USA for calculations by Monte Carlo and in the USSR and Europe for self-shielding calculations by the sub-group method. The moment probability tables, in addition to providing a more substantial mathematical basis and calculation methods, are adapted for condensation and mixture calculations, which are the crucial operations for reactor physics specialists. However, their extension is limited by the statistical hypothesis they imply. Efforts are being made to remove this obstacle, at the cost, it must be said, of greater complexity
Probability, statistics, and queueing theory
Allen, Arnold O
1990-01-01
This is a textbook on applied probability and statistics with computer science applications for students at the upper undergraduate level. It may also be used as a self study book for the practicing computer science professional. The successful first edition of this book proved extremely useful to students who need to use probability, statistics and queueing theory to solve problems in other fields, such as engineering, physics, operations research, and management science. The book has also been successfully used for courses in queueing theory for operations research students. This second edit
Probability and Statistics: 5 Questions
DEFF Research Database (Denmark)
Probability and Statistics: 5 Questions is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in probability and statistics. We hear their views on the fields, aims, scopes, the future direction of research and how their work fits in these respects. Interviews with Nick Bingham, Luc Bovens, Terrence L. Fine, Haim Gaifman, Donald Gillies, James Hawthorne, Carl Hoefer, James M. Joyce, Joseph B. Kadane, Isaac Levi, D.H. Mellor, Patrick Suppes, Jan von Plato, Carl Wagner, Sandy Zabell...
Estimating the concordance probability in a survival analysis with a discrete number of risk groups.
Heller, Glenn; Mo, Qianxing
2016-04-01
A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
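A Harrell-style c-index with the 1/2-credit tie handling for discrete risk scores can be sketched directly. This is a generic illustration of the concordance idea on toy data, not the paper's modified estimator (which additionally uses inverse-probability-of-censoring weights); the data values are invented.

```python
def c_index(times, events, risks):
    # Harrell-style concordance for right-censored data.  A pair is usable
    # when the earlier time is an observed event (event flag 1); ties in a
    # discrete risk score contribute 1/2, mirroring the setting of a small
    # number of risk groups.
    usable, score = 0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                usable += 1
                if risks[i] > risks[j]:
                    score += 1.0   # higher risk failed earlier: concordant
                elif risks[i] == risks[j]:
                    score += 0.5   # tied discrete risk groups
    return score / usable

# Toy data: three discrete risk groups, two censored observations (event=0).
times  = [2, 4, 5, 7, 9, 12]
events = [1, 1, 0, 1, 1, 0]
risks  = [3, 3, 2, 2, 1, 1]
c = c_index(times, events, risks)
print(c)  # 11/12: nearly all usable pairs give concordant credit
```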
Shinn, Maxwell
2013-01-01
Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. Instant MuseScore is written in an easy-to-follow format, packed with illustrations that will help you get started with this music composition software. This book is for musicians who would like to learn how to notate music digitally with MuseScore. Readers should already have some knowledge of musical terminology; however, no prior experience with music notation software is necessary.
Explaining soccer match outcomes with goal scoring opportunities predictive analytics
Eggels, H.; van Elk, R.; Pechenizkiy, M.
2016-01-01
In elite soccer, decisions are often based on recent results and emotions. In this paper, we propose a method to determine the expected winner of a match in elite soccer. The expected result of a soccer match is determined by estimating the probability of scoring for the individual goal scoring
Dynamic SEP event probability forecasts
Kahler, S. W.; Ling, A.
2015-10-01
The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
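The idea of reducing an event probability as time passes without an event can be sketched as follows. This is purely illustrative: the paper derives its algorithm from observed flare-to-onset delay distributions, whereas the exponential form and time constant below are invented for the example:

```python
import math

def dynamic_sep_probability(p0, hours_since_flare, tau=12.0):
    """Decay an initial SEP event probability p0 when no event has been
    observed in the hours since the X-ray peak. tau (hours) is a
    hypothetical time constant, not the paper's fitted value."""
    return p0 * math.exp(-hours_since_flare / tau)

for t in (0, 6, 12, 24):
    print(t, round(dynamic_sep_probability(0.5, t), 3))
```

The key property this captures is monotone decay: each additional event-free hour after the flare lowers the forecast, instead of holding a static 24 h probability.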
Conditional Independence in Applied Probability.
Pfeiffer, Paul E.
This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…
Stretching Probability Explorations with Geoboards
Wheeler, Ann; Champion, Joe
2016-01-01
Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…
GPS: Geometry, Probability, and Statistics
Field, Mike
2012-01-01
It might be said that for most occupations there is now less of a need for mathematics than there was say fifty years ago. But, the author argues, geometry, probability, and statistics constitute essential knowledge for everyone. Maybe not the geometry of Euclid, but certainly geometrical ways of thinking that might enable us to describe the world…
Swedish earthquakes and acceleration probabilities
International Nuclear Information System (INIS)
Slunga, R.
1979-03-01
A method to assign probabilities to ground accelerations for Swedish sites is described. As hardly any near-field instrumental data are available, we are left with the problem of interpreting macroseismic data in terms of acceleration. By theoretical wave propagation computations, the relations between the seismic strength of the earthquake, focal depth, distance and ground acceleration are calculated. The largest earthquake of the area, the 1904 earthquake 100 km south of Oslo, is an exception among Swedish earthquakes and probably had a focal depth exceeding 25 km. For the nuclear power plant sites an annual probability of 10^-5 has been proposed as of interest. This probability gives ground accelerations in the range 5-20 % g for the sites. This acceleration is for a free bedrock site; for consistency, all acceleration results in this study are given for bedrock sites. When applying our model to the 1904 earthquake and assuming the focal zone to be in the lower crust, we get the epicentral acceleration of this earthquake to be 5-15 % g. The results above are based on an analysis of macroseismic data, as relevant instrumental data are lacking. However, the macroseismic acceleration model deduced in this study gives epicentral ground accelerations of small Swedish earthquakes in agreement with existing distant instrumental data. (author)
DECOFF Probabilities of Failed Operations
DEFF Research Database (Denmark)
Gintautas, Tomas
2015-01-01
A statistical procedure for estimating Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. The influence of the safety factor is also investigated. The DECOFF statistical method is benchmarked against the standard Alpha-factor...
Risk estimation using probability machines
2014-01-01
Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
Probability and statistics: A reminder
International Nuclear Information System (INIS)
Clement, B.
2013-01-01
The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It builds on the introduction from 'data analysis in experimental sciences' given in [1]. (authors)
Nash equilibrium with lower probabilities
DEFF Research Database (Denmark)
Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1998-01-01
We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible...
On probability-possibility transformations
Klir, George J.; Parviz, Behzad
1992-01-01
Several probability-possibility transformations are compared in terms of the closeness of preserving second-order properties. The comparison is based on experimental results obtained by computer simulation. Two second-order properties are involved in this study: noninteraction of two distributions and projections of a joint distribution.
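One common member of the family of transformations being compared can be sketched in a single function. The ratio-scale form below is an assumed choice for illustration; the paper compares several such transformations, which are not reproduced here:

```python
def ratio_scale_possibility(probs):
    """A simple probability-to-possibility transformation: divide every
    probability by the largest one, so the most probable outcome gets
    possibility 1.0 (the maximum a possibility distribution requires)."""
    m = max(probs)
    return [p / m for p in probs]

print(ratio_scale_possibility([0.5, 0.3, 0.2]))  # → [1.0, 0.6, 0.4]
```

Note the output is no longer additive: possibilities need only be bounded by 1, and comparisons such as those in the study ask how well second-order properties survive this change of calculus.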
Tiwary, Chandramani
2015-01-01
If you are a Java developer and want to use Mahout and machine learning to solve Big Data Analytics use cases then this book is for you. Familiarity with shell scripts is assumed but no prior experience is required.
Apache Accumulo for developers
Halldórsson, Guðmundur Jón
2013-01-01
The book will have a tutorial-based approach that will show the readers how to start from scratch with building an Accumulo cluster and learning how to monitor the system and implement aspects such as security. This book is great for developers new to Accumulo, who are looking to get a good grounding in how to use Accumulo. It is assumed that you have an understanding of how Hadoop works, both HDFS and MapReduce. No prior knowledge of ZooKeeper is assumed.
Du, Dayong
2015-01-01
If you are a data analyst, developer, or simply someone who wants to use Hive to explore and analyze data in Hadoop, this is the book for you. Whether you are new to big data or an expert, with this book, you will be able to master both the basic and the advanced features of Hive. Since Hive is an SQL-like language, some previous experience with the SQL language and databases is useful to have a better understanding of this book.
Learning Apache Mahout classification
Gupta, Ashish
2015-01-01
If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.
Apache Maven dependency management
Lalou, Jonathan
2013-01-01
An easy-to-follow, tutorial-based guide with chapters progressing from basic to advanced dependency management. If you are working with Java or Java EE projects and you want to take advantage of Maven dependency management, then this book is ideal for you. This book is also particularly useful if you are a developer or an architect. You should be well versed with Maven and its basic functionalities if you wish to get the most out of this book.
Siriwardena, Prabath
2014-01-01
If you are working with Java or Java EE projects and you want to take full advantage of Maven in designing, executing, and maintaining your build system for optimal developer productivity, then this book is ideal for you. You should be well versed with Maven and its basic functionality if you wish to get the most out of the book.
Brown, Mat
2015-01-01
If you're an application developer familiar with SQL databases such as MySQL or Postgres, and you want to explore distributed databases such as Cassandra, this is the perfect guide for you. Even if you've never worked with a distributed database before, Cassandra's intuitive programming interface coupled with the step-by-step examples in this book will have you building highly scalable persistence layers for your applications in no time.
Kumar, Jayant
2013-01-01
This book is full of step-by-step example-oriented tutorials which will show readers how to integrate Solr in PHP applications using the available libraries, and boost the inherent search facilities that Solr offers. If you are a developer who knows PHP and is interested in integrating search into your applications, this is the book for you. No advanced knowledge of Solr is required. Very basic knowledge of system commands and the command-line interface on both Linux and Windows is required. You should also be familiar with the concept of Web servers.
Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D
2016-01-01
The recent advances in neurological imaging and sensing technologies have led to rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with greater timescale, large number of attributes, and statistically significant data size. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volume of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability-the ability to efficiently process increasing volumes of data; (b) Adaptability-the toolkit can be deployed across different computing configurations; and (c) Ease of programming-the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 Data nodes. The evaluation results demonstrate that the toolkit
Kleibergen, F.R.; Kleijn, R.; Paap, R.
2000-01-01
We propose a novel Bayesian test under a (noninformative) Jeffreys' prior specification. We check whether the fixed scalar value of the so-called Bayesian Score Statistic (BSS) under the null hypothesis is a plausible realization from its known and standardized distribution under the alternative. Unlike
African Journals Online (AJOL)
2014-11-18
... for 80% (SASS score) and 75% (NOT) of the variation in the regression model. Consequently, SASS ... further investigation: spatial analyses of macroinvertebrate assemblages; and the use of structural and functional metrics. Keywords: ... conductivity levels was assessed using multiple linear regression.
We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.
[Severity of disease scoring systems and mortality after non-cardiac surgery].
Reis, Pedro Videira; Sousa, Gabriela; Lopes, Ana Martins; Costa, Ana Vera; Santos, Alice; Abelha, Fernando José
2018-04-05
Mortality after surgery is frequent and severity of disease scoring systems are used for prediction. Our aim was to evaluate predictors of mortality after non-cardiac surgery. Adult patients admitted to our surgical intensive care unit between January 2006 and July 2013 were included. Univariate analysis was carried out using the Mann-Whitney, Chi-square or Fisher's exact test. Logistic regression was performed to assess independent factors, with calculation of odds ratios and 95% confidence intervals (95% CI). 4398 patients were included. Mortality was 1.4% in the surgical intensive care unit and 7.4% during hospital stay. Independent predictors of mortality in the surgical intensive care unit were APACHE II (OR=1.24), emergent surgery (OR=4.10), serum sodium (OR=1.06) and FiO2 at admission (OR=14.31). Serum bicarbonate at admission (OR=0.89) was considered a protective factor. Independent predictors of hospital mortality were age (OR=1.02), APACHE II (OR=1.09), emergency surgery (OR=1.82), high-risk surgery (OR=1.61), FiO2 at admission (OR=1.02), postoperative acute renal failure (OR=1.96), heart rate (OR=1.01) and serum sodium (OR=1.04). Dying patients had higher scores in severity of disease scoring systems and longer surgical intensive care unit stays. Some factors influenced both surgical intensive care unit and hospital mortality. Copyright © 2017 Sociedade Brasileira de Anestesiologia. Publicado por Elsevier Editora Ltda. All rights reserved.
Cazzaniga, R; Francescani, A; Saetti, C; Spinnler, H
2003-11-01
The aim of the present study was to provide a statistically sound way of reciprocally converting scores of the mini-mental state examination (MMSE) and the Milan overall dementia assessment (MODA). A consecutive series of 182 patients with "probable" Alzheimer's disease was examined with both tests. MODA and MMSE scores proved to be highly correlated. A formula for converting MODA and MMSE scores was generated.
Large deviations and idempotent probability
Puhalskii, Anatolii
2001-01-01
In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.
Probability matching and strategy availability.
Koehler, Derek J; James, Greta
2010-09-01
Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought to their attention, more participants subsequently engage in maximizing. Third, matchers are more likely than maximizers to base decisions in other tasks on their initial intuitions, suggesting that they are more inclined to use a choice strategy that comes to mind quickly. These results indicate that a substantial subset of probability matchers are victims of "underthinking" rather than "overthinking": They fail to engage in sufficient deliberation to generate a superior alternative to the matching strategy that comes so readily to mind.
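The gap between the two strategies is easy to quantify with a worked example (the numbers below are illustrative, not taken from the experiments): if one outcome occurs on a fraction p = 0.7 of trials, always predicting it (maximizing) is correct 70% of the time, while matching one's prediction rates to the outcome rates is correct only p² + (1 − p)² of the time:

```python
def matching_accuracy(p):
    # predict outcome A with probability p and B with 1 - p:
    # correct when prediction and outcome coincide
    return p * p + (1 - p) * (1 - p)

def maximizing_accuracy(p):
    # always predict the more frequent outcome
    return max(p, 1 - p)

p = 0.7
print(round(matching_accuracy(p), 2))   # → 0.58
print(maximizing_accuracy(p))           # → 0.7
```

The 12-point accuracy gap is what makes matching an "error", and why bringing the maximizing strategy to mind, as in the experiments, changes behavior.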
Probability as a Physical Motive
Directory of Open Access Journals (Sweden)
Peter Martin
2007-04-01
Full Text Available Recent theoretical progress in nonequilibrium thermodynamics, linking the physical principle of Maximum Entropy Production ("MEP") to the information-theoretical "MaxEnt" principle of scientific inference, together with conjectures from theoretical physics that there may be no fundamental causal laws but only probabilities for physical processes, and from evolutionary theory that biological systems expand "the adjacent possible" as rapidly as possible, all lend credence to the proposition that probability should be recognized as a fundamental physical motive. It is further proposed that spatial order and temporal order are two aspects of the same thing, and that this is the essence of the second law of thermodynamics.
Logic, Probability, and Human Reasoning
2015-01-01
accordingly suggest a way to integrate probability and deduction. The nature of deductive reasoning: To be rational is to be able to make deductions...3–6] and they underlie mathematics, science, and technology [7–10]. Plato claimed that emotions upset reasoning. However, individuals in the grip...fundamental to human rationality. So, if counterexamples to its principal predictions occur, the theory will at least explain its own refutation
Probability Measures on Groups IX
1989-01-01
The latest in this series of Oberwolfach conferences focussed on the interplay between structural probability theory and various other areas of pure and applied mathematics such as Tauberian theory, infinite-dimensional rotation groups, central limit theorems, harmonizable processes, and spherical data. Thus it was attended by mathematicians whose research interests range from number theory to quantum physics in conjunction with structural properties of probabilistic phenomena. This volume contains 5 survey articles submitted on special invitation and 25 original research papers.
Probability matching and strategy availability
J. Koehler, Derek; Koehler, Derek J.; James, Greta
2010-01-01
Findings from two experiments indicate that probability matching in sequential choice arises from an asymmetry in strategy availability: The matching strategy comes readily to mind, whereas a superior alternative strategy, maximizing, does not. First, compared with the minority who spontaneously engage in maximizing, the majority of participants endorse maximizing as superior to matching in a direct comparison when both strategies are described. Second, when the maximizing strategy is brought...
Technology Performance Level (TPL) Scoring Tool
Energy Technology Data Exchange (ETDEWEB)
Weber, Jochem [National Renewable Energy Lab. (NREL), Golden, CO (United States); Roberts, Jesse D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Costello, Ronan [Wave Venture, Penstraze (United Kingdom); Bull, Diana L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Babarit, Aurelien [Ecole Centrale de Nantes (France). Lab. of Research in Hydrodynamics, Energetics, and Atmospheric Environment (LHEEA); Neilson, Kim [Ramboll, Copenhagen (Denmark); Bittencourt, Claudio [DNV GL, London (United Kingdom); Kennedy, Ben [Wave Venture, Penstraze (United Kingdom)
2016-09-01
Three different ways of combining scores are used in the revised formulation: arithmetic mean, geometric mean, and multiplication with normalisation. Arithmetic mean is used when combining scores that measure similar attributes, e.g. for combining costs. The arithmetic mean has the property that it is similar to a logical OR: when combining costs it does not matter what the individual costs are, only what the combined cost is. Geometric mean and multiplication are used when combining scores that measure disparate attributes. Multiplication is similar to a logical AND; it is used to combine ‘must haves.’ As a result, this method is more punitive than the geometric mean; to get a good score in the combined result it is necessary to have a good score in ALL of the inputs, e.g. the different types of survivability are ‘must haves.’ On balance, the revised TPL is probably less punitive than the previous spreadsheet, since multiplication is used sparingly as a method of combining scores. This is in line with the feedback of the Wave Energy Prize judges.
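The three combination rules can be sketched in a few lines. The scores and the 0-9 scale below are assumptions for illustration; the actual TPL weightings live in the tool itself:

```python
import math

def arithmetic_mean(scores):
    # OR-like: a good input can compensate for a bad one
    return sum(scores) / len(scores)

def geometric_mean(scores):
    # punishes low inputs more than the arithmetic mean
    return math.prod(scores) ** (1 / len(scores))

def mult_normalised(scores, top=9):
    # AND-like: multiply, then rescale back to the 0..top range,
    # so any single near-zero 'must have' sinks the combined score
    return top * math.prod(scores) / (top ** len(scores))

costs = [7, 3]            # similar attributes -> arithmetic mean
print(arithmetic_mean(costs))             # → 5.0
musts = [9, 9, 1]         # 'must haves' with one bad input
print(round(geometric_mean(musts), 2))
print(mult_normalised(musts))             # → 1.0
```

With one failing ‘must have’, multiplication drives the combined score to the floor while the geometric mean still returns a middling value, which is the punitiveness ordering the text describes.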
Czech Academy of Sciences Publication Activity Database
Vojtek, Martin; Kočenda, Evžen
2006-01-01
Roč. 56, 3-4 (2006), s. 152-167 ISSN 0015-1920 R&D Projects: GA ČR GA402/05/0931 Institutional research plan: CEZ:AV0Z70850503 Keywords : banking sector * credit scoring * discrimination analysis Subject RIV: AH - Economics Impact factor: 0.190, year: 2006 http://journal.fsv.cuni.cz/storage/1050_s_152_167.pdf
Credit scoring for individuals
Directory of Open Access Journals (Sweden)
Maria DIMITRIU
2010-12-01
Full Text Available Lending money to different borrowers is profitable, but risky. The profits come from the interest rate and the fees earned on the loans. Banks do not want to make loans to borrowers who cannot repay them. Even if the banks do not intend to make bad loans, over time, some of them can become bad. For instance, as a result of the recent financial crisis, the capability of many borrowers to repay their loans was affected, many of them ending up in default. That is why it is important for the bank to monitor the loans. The purpose of this paper is to focus on the main issues of credit scoring. As a consequence, we present in this paper the scoring model of an important Romanian bank. Based on this credit scoring model and taking into account the latest lending requirements of the National Bank of Romania, we developed an assessment tool, in Excel, for retail loans, which is presented in the case study.
Directory of Open Access Journals (Sweden)
Luciana Gonzaga dos Santos Cardoso
2013-06-01
Full Text Available OBJECTIVE: to analyze the performance of the Acute Physiology and Chronic Health Evaluation (APACHE II), measured based on the data from the last 24 hours of hospitalization in the ICU, for patients transferred to the wards. METHOD: an observational, prospective and quantitative study using the data from 355 patients admitted to the ICU between January and July 2010 who were transferred to the wards. RESULTS: the discriminatory power of the AII-OUT prognostic index showed a statistically significant area beneath the ROC curve. The mortality observed in the sample was slightly greater than that predicted by the AII-OUT, with a Standardized Mortality Ratio of 1.12. In the calibration curve, the linear regression analysis showed the R2 value to be statistically significant. CONCLUSION: the AII-OUT could predict mortality after discharge from the ICU, with the observed mortality being slightly greater than that predicted, which shows good discrimination and good calibration. This system was shown to be useful for stratifying patients at greater risk of death after discharge from the ICU. This fact deserves special attention from health professionals, particularly nurses, in managing human and technological resources for this group of patients.
[Biometric bases: basic concepts of probability calculation].
Dinya, E
1998-04-26
The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model and the random variable are presented.
Jones, Michael J; Neal, Christopher P; Ngu, Wee Sing; Dennison, Ashley R; Garcea, Giuseppe
2017-08-01
The aim of this study was to compare the prognostic value of established scoring systems with early warning scores in a large cohort of patients with acute pancreatitis. In patients presenting with acute pancreatitis, age, sex, American Society of Anaesthesiologists (ASA) grade, Modified Glasgow Score, Ranson criteria, APACHE II scores and early warning score (EWS) were recorded for the first 72 h following admission. These variables were compared between survivors and non-survivors, between patients with mild/moderate and severe pancreatitis (based on the 2012 Atlanta Classification) and between patients with a favourable or adverse outcome. A total of 629 patients were identified. EWS was the best predictor of adverse outcome amongst all of the assessed variables (area under curve (AUC) values 0.81, 0.84 and 0.83 for days 1, 2 and 3, respectively) and was the most accurate predictor of mortality on both days 2 and 3 (AUC values of 0.88 and 0.89, respectively). Multivariable analysis revealed that an EWS ≥2 was independently associated with severity of pancreatitis, adverse outcome and mortality. This study confirms the usefulness of EWS in predicting the outcome of acute pancreatitis. It should become the mainstay of risk stratification in patients with acute pancreatitis.
Probability for Weather and Climate
Smith, L. A.
2013-12-01
Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design in support of
College Math Assessment: SAT Scores vs. College Math Placement Scores
Foley-Peres, Kathleen; Poirier, Dawn
2008-01-01
Many colleges and universities use SAT math scores or math placement tests to place students in the appropriate math course. This study compares the use of math placement scores and SAT scores for 188 freshman students. The students' grades and faculty observations were analyzed to determine if the SAT scores and/or college math assessment scores…
Probability, Statistics, and Stochastic Processes
Olofsson, Peter
2012-01-01
This book provides a unique and balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. The Second Edition features new coverage of analysis of variance (ANOVA), consistency and efficiency of estimators, asymptotic theory for maximum likelihood estimators, empirical distribution function and the Kolmogorov-Smirnov test, general linear models, multiple comparisons, Markov chain Monte Carlo (MCMC), Brownian motion, martingales, and
Probability, statistics, and computational science.
Beerenwinkel, Niko; Siebourg, Juliane
2012-01-01
In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.
Sensitivity analysis using probability bounding
International Nuclear Information System (INIS)
Ferson, Scott; Troy Tucker, W.
2006-01-01
Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
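A minimal sketch of the interval-probability propagation that underlies PBA, assuming independent events; the function names are hypothetical illustrations, not the actual PBA software API. A probability is represented as an interval (lo, hi), and the bounds follow from the monotonicity of p*q and 1 - (1-p)*(1-q).

```python
# Interval probability propagation sketch (hypothetical helper names).

def and_independent(p, q):
    """Bounds on P(A and B) for independent A, B with interval probabilities."""
    return (p[0] * q[0], p[1] * q[1])

def or_independent(p, q):
    """Bounds on P(A or B) for independent A, B with interval probabilities."""
    return (1 - (1 - p[0]) * (1 - q[0]), 1 - (1 - p[1]) * (1 - q[1]))

p_a = (0.1, 0.2)   # imprecise input probabilities
p_b = (0.3, 0.5)
lo, hi = and_independent(p_a, p_b)   # roughly (0.03, 0.1)
```

"Pinching" an input in the sensitivity-analysis sense amounts to collapsing one of these intervals to a point and observing how much the output interval narrows.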
COMPARATIVE ANALYSIS OF ESTIMATION METHODS OF PHARMACY ORGANIZATION BANKRUPTCY PROBABILITY
Directory of Open Access Journals (Sweden)
V. L. Adzhienko
2014-01-01
The purpose of this study was to determine the probability of bankruptcy by various methods in order to predict the financial crisis of a pharmacy organization. The probability of pharmacy organization bankruptcy was estimated using W. Beaver's method as adopted in the Russian Federation, together with an integrated assessment of financial stability based on scoring analysis. The results obtained by the different methods are comparable and show that the risk of bankruptcy of the pharmacy organization is small.
Buttrey, Samuel E.; Washburn, Alan R.; Price, Wilson L.; Operations Research
2011-01-01
The article of record as published may be located at http://dx.doi.org/10.2202/1559-0410.1334 We propose a model to estimate the rates at which NHL teams score and yield goals. In the model, goals occur as if from a Poisson process whose rate depends on the two teams playing, the home-ice advantage, and the manpower (power-play, short-handed) situation. Data on all the games from the 2008-2009 season were downloaded and processed into a form suitable for the analysis. The model...
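A hedged sketch of a Poisson goal model in this spirit: a team's scoring rate is a product of attack/defense ratings and a home factor. The multiplicative structure and all ratings below are illustrative assumptions, not the article's fitted NHL values.

```python
import math

# Poisson goal-scoring sketch with made-up ratings.

def poisson_pmf(k, lam):
    """P(exactly k goals) for a Poisson process with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def scoring_rate(attack, opp_defense, home_factor=1.05):
    # multiplicative rate: one team's expected goals against another
    return attack * opp_defense * home_factor

lam = scoring_rate(attack=3.0, opp_defense=0.9)  # hypothetical ratings
p_two_goals = poisson_pmf(2, lam)
# sanity check: the pmf over all goal counts sums to 1
assert abs(sum(poisson_pmf(k, lam) for k in range(60)) - 1.0) < 1e-9
```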
Lectures on probability and statistics
International Nuclear Information System (INIS)
Yost, G.P.
1984-09-01
These notes are based on a set of statistics lectures delivered at Imperial College to the first-year postgraduate students in High Energy Physics. They are designed for the professional experimental scientist. We begin with the fundamentals of probability theory, in which one makes statements about the set of possible outcomes of an experiment, based upon a complete a priori understanding of the experiment. For example, in a roll of a set of (fair) dice, one understands a priori that any given side of each die is equally likely to turn up. From that, we can calculate the probability of any specified outcome. We finish with the inverse problem, statistics. Here, one begins with a set of actual data (e.g., the outcomes of a number of rolls of the dice), and attempts to make inferences about the state of nature which gave those data (e.g., the likelihood of seeing any given side of any given die turn up). This is a much more difficult problem, of course, and one's solutions often turn out to be unsatisfactory in one respect or another
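The a priori dice calculation described above can be made concrete by enumerating the equally likely outcomes and counting those matching the event of interest:

```python
from itertools import product
from fractions import Fraction

# Classical probability for fair dice: count favorable outcomes over the
# total number of equally likely outcomes.

def prob_sum(total, n_dice=2, sides=6):
    outcomes = list(product(range(1, sides + 1), repeat=n_dice))
    favorable = sum(1 for o in outcomes if sum(o) == total)
    return Fraction(favorable, len(outcomes))

assert prob_sum(7) == Fraction(1, 6)    # 6 of 36 outcomes sum to 7
assert prob_sum(2) == Fraction(1, 36)   # only (1, 1)
```

The inverse, statistical problem runs the other way: from observed roll frequencies one infers whether the equal-likelihood assumption is tenable.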
The International Bleeding Risk Score
DEFF Research Database (Denmark)
Laursen, Stig Borbjerg; Laine, L.; Dalton, H.
2017-01-01
The International Bleeding Risk Score: A New Risk Score that can Accurately Predict Mortality in Patients with Upper GI-Bleeding.
Building a Scoring Model for Small and Medium Enterprises
Directory of Open Access Journals (Sweden)
Răzvan Constantin CARACOTA
2010-09-01
The purpose of the paper is to produce a scoring model for small and medium enterprises seeking financing through a bank loan. To analyze a loan application, the scoring system developed for companies is as follows: scoring of quantitative factors and scoring of qualitative factors. We estimated the probability of default using logistic regression. The regression coefficients were determined with a solver in Excel, using five ratios as input data. Analyses and simulations were conducted on a sample of 113 companies, all accepted for funding. Based on financial information obtained over two years, 2007 and 2008, we could establish and assess the probability of default.
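The logistic-regression step can be sketched as follows. This is a hedged illustration fitted by plain gradient descent (standing in for the Excel solver); the single "ratio" per firm and the labels below are synthetic, not the five ratios from the 113-company sample.

```python
import math

# Probability-of-default sketch: logistic regression on one synthetic ratio.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=5000):
    w = [0.0] * (len(X[0]) + 1)            # intercept + one weight per ratio
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi          # gradient of the log-loss
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

X = [[0.1], [0.2], [0.8], [0.9]]           # synthetic financial ratio per firm
y = [0, 0, 1, 1]                           # 1 = defaulted
w = fit_logistic(X, y)
pd_low = sigmoid(w[0] + w[1] * 0.1)        # predicted default probability
pd_high = sigmoid(w[0] + w[1] * 0.9)
```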
Díaz-Barrientos, C Z; Aquino-González, A; Heredia-Montaño, M; Navarro-Tovar, F; Pineda-Espinosa, M A; Espinosa de Santillana, I A
2018-02-06
Acute appendicitis is the leading cause of surgical emergencies. It is still a difficult diagnosis to make, especially in young persons, the elderly, and in reproductive-age women, in whom a series of inflammatory conditions can have signs and symptoms similar to those of acute appendicitis. Different scoring systems have been created to increase diagnostic accuracy; they are inexpensive, noninvasive, and easy to use and reproduce. The modified Alvarado score is probably the most widely used and accepted in emergency services worldwide. The RIPASA score, on the other hand, was formulated in 2010 and has greater sensitivity and specificity. There are very few studies conducted in Mexico that compare the different scoring systems for appendicitis. The aim of our article was to compare the modified Alvarado score and the RIPASA score in the diagnosis of patients with abdominal pain and suspected acute appendicitis. An observational, analytic, prolective study was conducted within the time frame of July 2002 to February 2014 at the Hospital Universitario de Puebla. The evaluation questionnaires were applied to the patients suspected of having appendicitis. The RIPASA score with 8.5 as the optimal cutoff value: ROC curve (area .595), sensitivity (93.3%), specificity (8.3%), PPV (91.8%), NPV (10.1%). The modified Alvarado score with 6 as the optimal cutoff value: ROC curve (area .719), sensitivity (75%), specificity (41.6%), PPV (93.7%), NPV (12.5%). The RIPASA score showed no advantages over the modified Alvarado score when applied to patients presenting with suspected acute appendicitis. Copyright © 2018 Asociación Mexicana de Gastroenterología. Published by Masson Doyma México S.A. All rights reserved.
Probability theory a comprehensive course
Klenke, Achim
2014-01-01
This second edition of the popular textbook contains a comprehensive course in modern probability theory. Overall, probabilistic concepts play an increasingly important role in mathematics, physics, biology, financial engineering and computer science. They help us in understanding magnetism, amorphous media, genetic diversity and the perils of random developments at financial markets, and they guide us in constructing more efficient algorithms. To address these concepts, the title covers a wide variety of topics, many of which are not usually found in introductory textbooks, such as: • limit theorems for sums of random variables • martingales • percolation • Markov chains and electrical networks • construction of stochastic processes • Poisson point process and infinite divisibility • large deviation principles and statistical physics • Brownian motion • stochastic integral and stochastic differential equations. The theory is developed rigorously and in a self-contained way, with the c...
Using inferred probabilities to measure the accuracy of imprecise forecasts
Directory of Open Access Journals (Sweden)
Paul Lehner
2012-11-01
Research on forecasting is effectively limited to forecasts that are expressed with clarity; which is to say that the forecasted event must be sufficiently well-defined that it can be clearly resolved whether or not the event occurred, and forecast certainties are expressed as quantitative probabilities. When forecasts are expressed with clarity, quantitative measures (scoring rules, calibration, discrimination, etc.) can be used to measure forecast accuracy, which in turn can be used to measure the comparative accuracy of different forecasting methods. Unfortunately, most real-world forecasts are not expressed clearly. This lack of clarity extends both to the description of the forecast event and to the use of vague language to express forecast certainty. It is thus difficult to assess the accuracy of most real-world forecasts, and consequently the accuracy of the methods used to generate them. This paper addresses this deficiency by presenting an approach to measuring the accuracy of imprecise real-world forecasts using the same quantitative metrics routinely used to measure the accuracy of well-defined forecasts. To demonstrate applicability, the Inferred Probability Method is applied to measure the accuracy of forecasts in fourteen documents examining complex political domains. Key words: inferred probability, imputed probability, judgment-based forecasting, forecast accuracy, imprecise forecasts, political forecasting, verbal probability, probability calibration.
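The scoring rules mentioned above can be as simple as the Brier score, one of the standard quantitative accuracy measures for probability forecasts; this is a generic illustration, not the paper's own metric definitions.

```python
# Brier score: mean squared difference between forecast probabilities and
# the realized 0/1 outcomes; lower is better.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, correct forecaster scores 0; an uninformative 0.5 forecast
# scores 0.25 whatever happens.
assert brier_score([1.0, 0.0], [1, 0]) == 0.0
assert brier_score([0.5, 0.5], [1, 0]) == 0.25
```

Such a rule only applies once forecasts are resolved to numbers, which is exactly the gap the Inferred Probability Method is meant to bridge.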
Betting on Illusory Patterns: Probability Matching in Habitual Gamblers.
Gaissmaier, Wolfgang; Wilke, Andreas; Scheibehenne, Benjamin; McCanney, Paige; Barrett, H Clark
2016-03-01
Why do people gamble? A large body of research suggests that cognitive distortions play an important role in pathological gambling. Many of these distortions are specific cases of a more general misperception of randomness, specifically of an illusory perception of patterns in random sequences. In this article, we provide further evidence for the assumption that gamblers are particularly prone to perceiving illusory patterns. In particular, we compared habitual gamblers to a matched sample of community members with regard to how much they exhibit the choice anomaly 'probability matching'. Probability matching describes the tendency to match response proportions to outcome probabilities when predicting binary outcomes. It leads to a lower expected accuracy than the maximizing strategy of predicting the most likely event on each trial. Previous research has shown that an illusory perception of patterns in random sequences fuels probability matching. So does impulsivity, which is also reported to be higher in gamblers. We therefore hypothesized that gamblers will exhibit more probability matching than non-gamblers, which was confirmed in a controlled laboratory experiment. Additionally, gamblers scored much lower than community members on the cognitive reflection task, which indicates higher impulsivity. This difference could account for the difference in probability matching between the samples. These results suggest that gamblers are more willing to bet impulsively on perceived illusory patterns.
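The accuracy cost of probability matching relative to maximizing, described above, is easy to compute for a binary outcome with probability p: matching predicts each outcome at its own rate, while maximizing always predicts the more likely outcome.

```python
# Expected prediction accuracy of the two strategies for outcome probability p.

def matching_accuracy(p):
    # predict outcome A with probability p, B with 1-p
    return p * p + (1 - p) * (1 - p)

def maximizing_accuracy(p):
    # always predict the more likely outcome
    return max(p, 1 - p)

p = 0.7
# matching yields about 0.58 expected accuracy; maximizing yields 0.7
assert abs(matching_accuracy(p) - 0.58) < 1e-9
assert maximizing_accuracy(p) == 0.7
```

Maximizing dominates matching for every p, which is why matching counts as a choice anomaly.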
Excluding joint probabilities from quantum theory
Allahverdyan, Armen E.; Danageozian, Arshag
2018-03-01
Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after the Born probability for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert spaces with dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.
Probability theory and mathematical statistics for engineers
Pugachev, V S
1984-01-01
Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector
Introduction to probability theory with contemporary applications
Helms, Lester L
2010-01-01
This introduction to probability theory transforms a highly abstract subject into a series of coherent concepts. Its extensive discussions and clear examples, written in plain language, expose students to the rules and methods of probability. Suitable for an introductory probability course, this volume requires abstract and conceptual thinking skills and a background in calculus.Topics include classical probability, set theory, axioms, probability functions, random and independent random variables, expected values, and covariance and correlations. Additional subjects include stochastic process
International Nuclear Information System (INIS)
Camargo, David O; Gomez, Clara; Martinez, Teresa
1999-01-01
Many severity indexes have been developed to assess prognosis and quality of life, especially for patients admitted to an intensive care unit (ICU); however, oncologic patients have particular features that may alter how these indexes perform. This study compares the APACHE scale and oncologic history as predictors of morbidity and mortality in the ICU. A total of 207 patients admitted to the ICU between September 1996 and December 1997 were included. Mortality was 29%, and most of this group stayed either less than 24 hours or more than 8 days. On admission, 50% of the patients had APACHE scores above 15, and at 48 hours only 30.4% remained at that level. Among patients with hematologic neoplasms, 87% scored above 15; with scores between 15 and 24 on admission, the risk of dying was 9.8 times that of patients with lower scores, and mortality was 63.3%. Hematologic patients had 5.7 times the risk of dying compared with patients with solid tumors. The respiratory system was the most frequently compromised, with the risk of dying increasing 2.8 times for each unit increment in the scale. Contrary to what is described in the literature, the oncologic diagnosis and neoplasia staging did not influence patient mortality.
K-forbidden transition probabilities
International Nuclear Information System (INIS)
Saitoh, T.R.; Sletten, G.; Bark, R.A.; Hagemann, G.B.; Herskind, B.; Saitoh-Hashimoto, N.; Tsukuba Univ., Ibaraki
2000-01-01
Reduced hindrance factors of K-forbidden transitions are compiled for nuclei with A ≈ 180 where γ-vibrational states are observed. Correlations between these reduced hindrance factors and Coriolis forces, statistical level mixing and γ-softness have been studied. It is demonstrated that the K-forbidden transition probabilities are related to γ-softness. The decay of the high-K bandheads has been studied by means of the two-state mixing, which would be induced by the γ-softness, with the use of a number of K-forbidden transitions compiled in the present work, where high-K bandheads are depopulated by both E2 and ΔI=1 transitions. The validity of the two-state mixing scheme has been examined by using the proposed identity of the B(M1)/B(E2) ratios of transitions depopulating high-K bandheads and levels of low-K bands. A break down of the identity might indicate that other levels would mediate transitions between high- and low-K states. (orig.)
Direct probability mapping of contaminants
International Nuclear Information System (INIS)
Rautman, C.A.
1993-01-01
Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. Geostatistical simulation provides powerful tools for investigating contaminant levels, and in particular, for identifying and using the spatial interrelationships among a set of isolated sample values. This additional information can be used to assess the likelihood of encountering contamination at unsampled locations and to evaluate the risk associated with decisions to remediate or not to remediate specific regions within a site. Past operation of the DOE Feed Materials Production Center has contaminated a site near Fernald, Ohio, with natural uranium. Soil geochemical data have been collected as part of the Uranium-in-Soils Integrated Demonstration Project. These data have been used to construct a number of stochastic images of potential contamination for parcels approximately the size of a selective remediation unit. Each such image accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely, statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination. Evaluation of the geostatistical simulations can yield maps representing the expected magnitude of the contamination for various regions and other information that may be important in determining a suitable remediation process or in sizing equipment to accomplish the restoration
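The post-processing step described above reduces to a simple calculation: given many equally likely simulated contamination maps, the probability of exceeding a threshold at each cell is the fraction of simulations that exceed it there. The values below are random stand-ins, not the Fernald soil data.

```python
import random

# Direct probability-of-exceedance mapping from an ensemble of simulations.

random.seed(0)
n_sims, n_cells = 200, 5
threshold = 35.0                      # hypothetical action level

# each inner list is one equally likely simulated map of cell concentrations
sims = [[random.gauss(30, 10) for _ in range(n_cells)] for _ in range(n_sims)]

prob_exceed = [
    sum(1 for sim in sims if sim[c] > threshold) / n_sims
    for c in range(n_cells)
]
# each entry is a direct probability-of-exceedance estimate for one cell
```

A map of these fractions is exactly the "probability of exceeding specified levels of contamination" the abstract describes.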
Credit scoring analysis using kernel discriminant
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. In the data we use, the normal kernel proves relevant for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
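A hedged sketch of one common reading of kernel-based discrimination: estimate each class's density with a kernel density estimate (a normal kernel, which the study found relevant) and assign the class with the higher density. The data, bandwidth, and class names below are synthetic illustrations, not the Indonesian institution's data or the paper's exact Fisher-discriminant formulation.

```python
import math

# Kernel density discriminant sketch with a normal (Gaussian) kernel.

def normal_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, sample, h):
    """Kernel density estimate at x with bandwidth h."""
    return sum(normal_kernel((x - s) / h) for s in sample) / (len(sample) * h)

good_payers = [0.20, 0.30, 0.25, 0.40]   # synthetic ratio values
defaulters = [0.70, 0.80, 0.75, 0.90]

def classify(x, h=0.1):
    return "good" if kde(x, good_payers, h) >= kde(x, defaulters, h) else "bad"
```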
Psychophysics of the probability weighting function
Takahashi, Taiki
2011-03-01
A probability weighting function w(p) for an objective probability p in decision under risk plays a pivotal role in Kahneman-Tversky prospect theory. Although recent studies in econophysics and neuroeconomics widely utilized probability weighting functions, psychophysical foundations of the probability weighting functions have been unknown. Notably, the behavioral economist Prelec (1998) [4] axiomatically derived the probability weighting function w(p) = exp(-(-ln p)^α) (0 < α < 1; w(1/e) = 1/e, w(1) = 1), which has been extensively studied in behavioral neuroeconomics. The present study utilizes psychophysical theory to derive Prelec's probability weighting function from psychophysical laws of perceived waiting time in probabilistic choices. Also, the relations between the parameters in the probability weighting function and the probability discounting function in behavioral psychology are derived. Future directions in the application of the psychophysical theory of the probability weighting function in econophysics and neuroeconomics are discussed.
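The Prelec weighting function discussed above can be checked numerically; alpha = 0.65 is an arbitrary illustrative value, not a parameter from this study.

```python
import math

# Prelec (1998): w(p) = exp(-(-ln p)**alpha) with 0 < alpha < 1; it satisfies
# w(1/e) = 1/e and w(1) = 1, overweighting small probabilities and
# underweighting large ones.

def prelec_w(p, alpha=0.65):
    return math.exp(-((-math.log(p)) ** alpha))

assert abs(prelec_w(math.exp(-1)) - math.exp(-1)) < 1e-12  # w(1/e) = 1/e
assert abs(prelec_w(1.0) - 1.0) < 1e-12                    # w(1) = 1
assert prelec_w(0.01) > 0.01 and prelec_w(0.9) < 0.9       # over/underweighting
```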
THE BLACK HOLE FORMATION PROBABILITY
Energy Technology Data Exchange (ETDEWEB)
Clausen, Drew; Piro, Anthony L.; Ott, Christian D., E-mail: dclausen@tapir.caltech.edu [TAPIR, Walter Burke Institute for Theoretical Physics, California Institute of Technology, Mailcode 350-17, Pasadena, CA 91125 (United States)
2015-02-01
A longstanding question in stellar evolution is which massive stars produce black holes (BHs) rather than neutron stars (NSs) upon death. It has been common practice to assume that a given zero-age main sequence (ZAMS) mass star (and perhaps a given metallicity) simply produces either an NS or a BH, but this fails to account for a myriad of other variables that may affect this outcome, such as spin, binarity, or even stochastic differences in the stellar structure near core collapse. We argue that instead a probabilistic description of NS versus BH formation may be better suited to account for the current uncertainties in understanding how massive stars die. We present an initial exploration of the probability that a star will make a BH as a function of its ZAMS mass, P_BH(M_ZAMS). Although we find that it is difficult to derive a unique P_BH(M_ZAMS) using current measurements of both the BH mass distribution and the degree of chemical enrichment by massive stars, we demonstrate how P_BH(M_ZAMS) changes with these various observational and theoretical uncertainties. We anticipate that future studies of Galactic BHs and theoretical studies of core collapse will refine P_BH(M_ZAMS) and argue that this framework is an important new step toward better understanding BH formation. A probabilistic description of BH formation will be useful as input for future population synthesis studies that are interested in the formation of X-ray binaries, the nature and event rate of gravitational wave sources, and answering questions about chemical enrichment.
McCluskey, Neal
2017-01-01
Since at least the enactment of No Child Left Behind in 2002, standardized test scores have served as the primary measures of public school effectiveness. Yet, such scores fail to measure the ultimate goal of education: maximizing happiness. This exploratory analysis assesses nation level associations between test scores and happiness, controlling…
Foundations of the theory of probability
Kolmogorov, AN
2018-01-01
This famous little book remains a foundational text for the understanding of probability theory, important both to students beginning a serious study of probability and to historians of modern mathematics. 1956 second edition.
The Probability Distribution for a Biased Spinner
Foster, Colin
2012-01-01
This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)
Conditional Probability Modulates Visual Search Efficiency
Directory of Open Access Journals (Sweden)
Bryan eCort
2013-10-01
We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability – the likelihood of a particular color given a particular combination of two cues – varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.
Analytic Neutrino Oscillation Probabilities in Matter: Revisited
Energy Technology Data Exchange (ETDEWEB)
Parke, Stephen J. [Fermilab; Denton, Peter B. [Copenhagen U.; Minakata, Hisakazu [Madrid, IFT
2018-01-02
We summarize our recent paper on neutrino oscillation probabilities in matter, explaining the importance, relevance and need for simple, highly accurate approximations to the neutrino oscillation probabilities in matter.
Predicting occupational personality test scores.
Furnham, A; Drakeley, R
2000-01-01
The relationship between students' actual test scores and their self-estimated scores on the Hogan Personality Inventory (HPI; R. Hogan & J. Hogan, 1992), an omnibus personality questionnaire, was examined. Despite being given descriptive statistics and explanations of each of the dimensions measured, the students tended to overestimate their scores; yet all correlations between actual and estimated scores were positive and significant. Correlations between self-estimates and actual test scores were highest for sociability, ambition, and adjustment (r = .62 to r = .67). The results are discussed in terms of employers' use and abuse of personality assessment for job recruitment.
Void probability scaling in hadron nucleus interactions
International Nuclear Information System (INIS)
Ghosh, Dipak; Deb, Argha; Bhattacharyya, Swarnapratim; Ghosh, Jayita; Bandyopadhyay, Prabhat; Das, Rupa; Mukherjee, Sima
2002-01-01
Hegyi, while investigating the rapidity gap probability (which measures the chance of finding no particle in the pseudo-rapidity interval Δη), found that the scaling behavior of the rapidity gap probability closely corresponds to the scaling of the void probability in galaxy correlation studies. The main aim of this paper is to study the scaling behavior of the rapidity gap probability.
Pre-Service Teachers' Conceptions of Probability
Odafe, Victor U.
2011-01-01
Probability knowledge and skills are needed in science and in making daily decisions that are sometimes made under uncertain conditions. Hence, there is the need to ensure that the pre-service teachers of our children are well prepared to teach probability. Pre-service teachers' conceptions of probability are identified, and ways of helping them…
Using Playing Cards to Differentiate Probability Interpretations
López Puga, Jorge
2014-01-01
The aprioristic (classical, naïve and symmetric) and frequentist interpretations of probability are commonly known. Bayesian or subjective interpretation of probability is receiving increasing attention. This paper describes an activity to help students differentiate between the three types of probability interpretations.
Gatot, D.; Mardia, A. I.
2018-03-01
Deep vein thrombosis (DVT) is venous thrombosis in the lower limbs. Diagnosis is made with venography or compression ultrasound. However, these examinations are not yet available in some health facilities, so many scoring systems have been developed for the diagnosis of DVT. Scoring methods are practical and safe to use, in addition to being efficacious and cost-effective in terms of treatment. The existing scoring systems are the Wells, Caprini, and Padua scores. There have been many studies comparing the accuracy of these scores, but none in Medan; we were therefore interested in a comparative study of the Wells, Caprini, and Padua scores in Medan. An observational, analytical, case-control study was conducted to perform diagnostic tests on the Wells, Caprini, and Padua scores to predict the risk of DVT. The study was conducted at H. Adam Malik Hospital in Medan. Of a total of 72 subjects, 39 (54.2%) were men, and the mean age was 53.14 years. The Wells, Caprini, and Padua scores had sensitivities of 80.6%, 61.1%, and 50%, respectively; specificities of 80.6%, 66.7%, and 75%, respectively; and accuracies of 87.5%, 64.3%, and 65.7%, respectively. The Wells score had better sensitivity, specificity, and accuracy than the Caprini and Padua scores in diagnosing DVT.
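The reported sensitivity, specificity, and accuracy figures come from a standard 2×2 confusion table; a sketch with made-up counts, not the study's raw data:

```python
# Diagnostic-test metrics from true/false positive and negative counts.

def diagnostics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)        # true positives among the diseased
    specificity = tn / (tn + fp)        # true negatives among the healthy
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# illustrative counts only
sens, spec, acc = diagnostics(tp=40, fn=10, fp=5, tn=45)
# sens = 0.8, spec = 0.9, acc = 0.85
```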
Dependent Human Error Probability Assessment
International Nuclear Information System (INIS)
Simic, Z.; Mikulicic, V.; Vukovic, I.
2006-01-01
This paper presents an assessment of the dependence between dynamic operator actions modeled in a Nuclear Power Plant (NPP) PRA and estimates the associated impact on core damage frequency (CDF). The assessment was done to improve the implementation of HEP dependencies within the existing PRA. All of the dynamic operator actions modeled in the NPP PRA are included in this assessment. Determining the level of HEP dependence and the associated influence on CDF are the major steps of the assessment. A decision on how to apply the results, i.e., whether permanent HEP model changes should be made, is based on the resulting relative CDF increase. A CDF increase threshold was selected based on the NPP base CDF value and the acceptance guidelines of Regulatory Guide 1.174; HEP dependences resulting in a CDF increase of > 5E-07 would be considered potential candidates for specific incorporation into the baseline model. The approach used to judge the level of dependence between operator actions is based on the dependency level categories and conditional probabilities developed in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications, NUREG/CR-1278. To simplify the process, NUREG/CR-1278 identifies five levels of dependence: ZD (zero dependence), LD (low dependence), MD (moderate dependence), HD (high dependence), and CD (complete dependence). NUREG/CR-1278 also identifies several qualitative factors that could be involved in determining the level of dependence. Based on the NUREG/CR-1278 information, Time, Function, and Spatial attributes were judged to be the most important considerations when determining the level of dependence between operator actions within an accident sequence. These attributes were used to develop qualitative criteria (rules) for judging the level of dependence (CD, HD, MD, LD, ZD) between operator actions. After the level of dependence between the various HEPs is judged, quantitative values associated with the
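The five dependence levels mentioned above map, in the THERP method of NUREG/CR-1278, to simple conditional-probability formulas applied to the basic HEP. The sketch below encodes the commonly cited equations; they should be verified against the handbook itself before any real use:

```python
def conditional_hep(p, level):
    """Conditional HEP for a subsequent action given its basic HEP `p`
    and the judged level of dependence (THERP, NUREG/CR-1278)."""
    formulas = {
        "ZD": p,                  # zero dependence: unchanged
        "LD": (1 + 19 * p) / 20,  # low dependence
        "MD": (1 + 6 * p) / 7,    # moderate dependence
        "HD": (1 + p) / 2,        # high dependence
        "CD": 1.0,                # complete dependence: certain failure
    }
    return formulas[level]
```

Note how even low dependence raises a small HEP (e.g. 0.01) by roughly a factor of six, which is why the dependence judgment dominates the quantitative result.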
Fundamentals of applied probability and random processes
Ibe, Oliver
2014-01-01
The long-awaited revision of Fundamentals of Applied Probability and Random Processes expands on the central components that made the first edition a classic. The title is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability t
Probability of Failure in Random Vibration
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Sørensen, John Dalsgaard
1988-01-01
Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out-crossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval and thus for the first-passage probability...
An Objective Theory of Probability (Routledge Revivals)
Gillies, Donald
2012-01-01
This reissue of D. A. Gillies' highly influential work, first published in 1973, is a philosophical theory of probability which seeks to develop von Mises' views on the subject. In agreement with von Mises, the author regards probability theory as a mathematical science like mechanics or electrodynamics, and probability as an objective, measurable concept like force, mass or charge. On the other hand, Dr Gillies rejects von Mises' definition of probability in terms of limiting frequency and claims that probability should be taken as a primitive or undefined term in accordance with modern axioma
Paraconsistent Probabilities: Consistency, Contradictions and Bayes’ Theorem
Directory of Open Access Journals (Sweden)
Juliana Bueno-Soler
2016-09-01
This paper represents the first steps towards constructing a paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs). We show that LFIs encode very naturally an extension of the notion of probability able to express sophisticated probabilistic reasoning under contradictions, employing appropriate notions of conditional probability and paraconsistent updating via a version of Bayes' theorem for conditionalization. We argue that the dissimilarity between the notions of inconsistency and contradiction, one of the pillars of LFIs, plays a central role in our extended notion of probability. Some critical historical and conceptual points about probability theory are also reviewed.
Dual Diagnosis and Suicide Probability in Poly-Drug Users.
Youssef, Ismail M; Fahmy, Magda T; Haggag, Wafaa L; Mohamed, Khalid A; Baalash, Amany A
2016-02-01
To determine the frequency of suicidal thoughts and suicidal probability among poly-substance abusers in a Saudi population, and to examine the relation between dual diagnosis and suicidal thoughts. Case control study. Al-Baha Psychiatric Hospital, Saudi Arabia, from May 2011 to June 2012. Participants were 239 subjects, aged 18 - 45 years. We reviewed 122 individuals who fulfilled the DSM-IV-TR criteria of substance abuse for two or more substances, and their data were compared with those collected from 117 control persons. Suicidal cases were highly prevalent among poly-substance abusers (64.75%). Amphetamine and cannabis were the most abused substances (87.7% and 70.49%, respectively). A statistically significant association with suicidality was found for longer duration of substance abuse. Suicidal cases showed significantly higher scores on the suicide probability scale and higher scores on the Beck depressive inventory. Abusing certain substances for a long duration, in addition to comorbid psychiatric disorders, especially those with a disturbed-mood element, may trigger suicidal thoughts in poly-substance abusers. Depression and suicide probability are common consequences of substance abuse.
[Propensity score matching in SPSS].
Huang, Fuqiang; DU, Chunlin; Sun, Menghui; Ning, Bing; Luo, Ying; An, Shengli
2015-11-01
To realize propensity score matching in the PS Matching module of SPSS and interpret the analysis results. The R software, a plug-in linking it with the corresponding version of SPSS, and a propensity score matching package were installed, adding a PS Matching module to the SPSS interface; its use was demonstrated with test data. Score estimation and nearest neighbor matching were achieved with the PS Matching module, and the results of qualitative and quantitative statistical description and evaluation were presented in graphical form. Propensity score matching can be accomplished conveniently using SPSS software.
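Independent of the SPSS tooling, the nearest-neighbor step at the heart of propensity score matching can be sketched in a few lines. This is a generic greedy 1:1 matcher on precomputed propensity scores, not the PS Matching module's actual algorithm, and the caliper value is an arbitrary illustrative choice:

```python
def nearest_neighbor_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbor matching without replacement.
    `treated` and `control` map subject id -> precomputed propensity score."""
    available = dict(control)
    pairs = []
    # Process treated subjects in descending score order, a common convention.
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        # Closest remaining control by absolute score difference.
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]       # matching without replacement
    return pairs

pairs = nearest_neighbor_match({"t1": 0.80, "t2": 0.30},
                               {"c1": 0.78, "c2": 0.31, "c3": 0.50})
print(pairs)
```

Real implementations add options (matching with replacement, optimal rather than greedy matching, standardized calipers); the sketch only illustrates the core idea.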
[Prognostic scores for pulmonary embolism].
Junod, Alain
2016-03-23
Nine prognostic scores for pulmonary embolism (PE), based on retrospective and prospective studies published between 2000 and 2014, are analyzed and compared. Most of them aim at identifying low-risk PE cases in order to validate their ambulatory care. Important differences exist between these scores in the outcomes considered: global mortality, PE-specific mortality, other complications, and the sizes of the low-risk groups. The most popular score appears to be the PESI and its simplified version. Few good-quality studies have tested the applicability of these scores to PE outpatient care, although this approach is already becoming widespread in medical practice.
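For illustration, the simplified PESI mentioned above assigns one point to each of six criteria, with zero points indicating low risk. A sketch of that rule; the thresholds follow the commonly published sPESI criteria and should be checked against the original derivation study before clinical use:

```python
def simplified_pesi(age, cancer, cardiopulmonary_disease,
                    heart_rate, systolic_bp, o2_saturation):
    """Simplified PESI: one point per criterion; 0 points = low risk."""
    points = sum([
        age > 80,                      # age > 80 years
        cancer,                        # history of cancer
        cardiopulmonary_disease,       # chronic cardiopulmonary disease
        heart_rate >= 110,             # pulse >= 110 / min
        systolic_bp < 100,             # systolic BP < 100 mmHg
        o2_saturation < 90,            # arterial O2 saturation < 90%
    ])
    return points, ("low risk" if points == 0 else "high risk")
```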
Velissaris, Dimitrios; Karanikolas, Menelaos; Flaris, Nikolaos; Fligou, Fotini; Marangos, Markos; Filos, Kriton S
2012-01-01
Introduction. Severe leptospirosis, also known as Weil's disease, can cause multiorgan failure with high mortality. Scoring systems for disease severity have not been validated for leptospirosis, and there is no documented method to predict mortality. Methods. This is a case series on 10 patients admitted to the ICU for multiorgan failure from severe leptospirosis. Data were collected retrospectively, with approval from the Institution Ethics Committee. Results. Ten patients with severe leptospirosis were admitted to the Patras University Hospital ICU over a four-year period. Although predicted mortality based on SOFA scores was over 80%, seven of the 10 patients survived and were discharged from the hospital in good condition. There was no association between SAPS II or SOFA scores and mortality, but survivors had significantly lower APACHE II scores than nonsurvivors. Conclusion. Commonly used severity scores do not seem to be useful in predicting mortality in severe leptospirosis. Early ICU admission and resuscitation based on a goal-directed therapy protocol are recommended and may reduce mortality. However, this study is limited by retrospective data collection and small sample size. Data from large prospective studies are needed to validate our findings.
Craig, D G; Zafar, S; Reid, T W D J; Martin, K G; Davidson, J S; Hayes, P C; Simpson, K J
2012-06-01
The sequential organ failure assessment (SOFA) score is an effective triage marker following single time point paracetamol (acetaminophen) overdose, but has not been evaluated following staggered overdoses (multiple supratherapeutic doses over >8 h, resulting in a cumulative dose of >4 g/day). To evaluate the prognostic accuracy of the SOFA score following staggered paracetamol overdose, a time-course analysis of 50 staggered paracetamol overdoses admitted to a tertiary liver centre was performed. Individual timed laboratory samples were correlated with corresponding clinical parameters, and the daily SOFA scores were calculated. A total of 39/50 (78%) patients developed hepatic encephalopathy. The area under the SOFA receiver operating characteristic curve for death/liver transplantation was 87.4 (95% CI 73.2-95.7), 94.3 (95% CI 82.5-99.1), and 98.4 (95% CI 84.3-100.0) at 0, 24 and 48 h postadmission, respectively. A low SOFA score following staggered paracetamol overdose is associated with a good prognosis. Both the SOFA and APACHE II scores could improve triage of high-risk staggered paracetamol overdose patients. © 2012 Blackwell Publishing Ltd.
Probability concepts in quality risk management.
Claycamp, H Gregg
2012-01-01
Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although risk generally describes a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management methodologies and tools focus on managing severity but are relatively silent on the meaning and uses of "probability." Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle from drug development to manufacture, and from marketing to product discontinuation. The probability concept is typically applied by risk managers as a combination of frequency-based calculation and a subjective "degree of belief" meaning of probability. Probability as a concept crucial for understanding and managing risk is discussed through examples, from the most general scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management.
Transition probability spaces in loop quantum gravity
Guo, Xiao-Kan
2018-03-01
We study the (generalized) transition probability spaces, in the sense of Mielnik and Cantoni, for spacetime quantum states in loop quantum gravity. First, we show that loop quantum gravity admits the structures of transition probability spaces. This is exemplified by first checking such structures in covariant quantum mechanics and then identifying the transition probability spaces in spin foam models via a simplified version of general boundary formulation. The transition probability space thus defined gives a simple way to reconstruct the discrete analog of the Hilbert space of the canonical theory and the relevant quantum logical structures. Second, we show that the transition probability space and in particular the spin foam model are 2-categories. Then we discuss how to realize in spin foam models two proposals by Crane about the mathematical structures of quantum gravity, namely, the quantum topos and causal sites. We conclude that transition probability spaces provide us with an alternative framework to understand various foundational questions of loop quantum gravity.
Towards a Categorical Account of Conditional Probability
Directory of Open Access Journals (Sweden)
Robert Furber
2015-11-01
This paper presents a categorical account of conditional probability, covering both the classical and the quantum case. Classical conditional probabilities are expressed as a certain "triangle-fill-in" condition, connecting marginal and joint probabilities, in the Kleisli category of the distribution monad. The conditional probabilities are induced by a map together with a predicate (the condition). The latter is a predicate in the logic of effect modules on this Kleisli category. The same approach can be transferred to the category of C*-algebras (with positive unital maps), whose predicate logic is also expressed in terms of effect modules. Conditional probabilities can again be expressed via a triangle-fill-in property. In the literature there are several proposals for what quantum conditional probability should be, and there are also extra difficulties not present in the classical case. At this stage, we only describe quantum systems with classical parametrization.
UT Biomedical Informatics Lab (BMIL) probability wheel
Huang, Sheng-Cheng; Lee, Sara; Wang, Allen; Cantor, Scott B.; Sun, Clement; Fan, Kaili; Reece, Gregory P.; Kim, Min Soon; Markey, Mia K.
A probability wheel app is intended to facilitate communication between two people, an "investigator" and a "participant", about uncertainties inherent in decision-making. Traditionally, a probability wheel is a mechanical prop with two colored slices. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile. We provide a motivating example from medical decision-making, but the tool is widely applicable for researchers in the decision sciences.
A probability space for quantum models
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
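As a toy illustration of maximum entropy assignment under a mean constraint: the entropy-maximizing distribution has exponential (Boltzmann-like) form, and the Lagrange multiplier can be found numerically. The function below is a sketch built on that standard result, not code from the paper:

```python
import math

def maxent_distribution(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution over `values` with a fixed mean.
    The solution has the form p_i proportional to exp(lam * x_i);
    lam is found by bisection (the mean is monotone increasing in lam)."""
    def mean_for(lam):
        weights = [math.exp(lam * x) for x in values]
        z = sum(weights)
        return sum(w * x for w, x in zip(weights, values)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * x) for x in values]
    z = sum(weights)
    return [w / z for w in weights]
```

With values 0, 1, 2 and target mean 1.0 the constraint is satisfied by the uniform distribution, as expected when the constraint carries no information beyond symmetry.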
Fundamentals of applied probability and random processes
Ibe, Oliver
2005-01-01
This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study.
* Good and solid introduction to probability theory and stochastic processes
* Logically organized; writing is presented in a clear manner
* Choice of topics is comprehensive within the area of probability
* Ample homework problems are organized into chapter sections
Striatal activity is modulated by target probability.
Hon, Nicholas
2017-06-14
Target probability has well-known neural effects. In the brain, target probability is known to affect frontal activity, with lower probability targets producing more prefrontal activation than those that occur with higher probability. Although the effect of target probability on cortical activity is well specified, its effect on subcortical structures such as the striatum is less well understood. Here, I examined this issue and found that the striatum was highly responsive to target probability. This is consistent with its hypothesized role in the gating of salient information into higher-order task representations. The current data are interpreted in light of that fact that different components of the striatum are sensitive to different types of task-relevant information.
Defining Probability in Sex Offender Risk Assessment.
Elwood, Richard W
2016-12-01
There is ongoing debate and confusion over using actuarial scales to predict individuals' risk of sexual recidivism. Much of the debate comes from not distinguishing Frequentist from Bayesian definitions of probability. Much of the confusion comes from applying Frequentist probability to individuals' risk. By definition, only Bayesian probability can be applied to the single case. The Bayesian concept of probability resolves most of the confusion and much of the debate in sex offender risk assessment. Although Bayesian probability is well accepted in risk assessment generally, it has not been widely used to assess the risk of sex offenders. I review the two concepts of probability and show how the Bayesian view alone provides a coherent scheme to conceptualize individuals' risk of sexual recidivism.
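A minimal sketch of the Bayesian view applied to the single case: a base rate serves as the prior, and evidence from an actuarial instrument enters as a likelihood ratio, yielding a posterior probability for the individual. The numbers in the usage line are hypothetical, not drawn from any actuarial scale:

```python
def bayesian_update(base_rate, likelihood_ratio):
    """Update a base-rate probability with a likelihood ratio via Bayes' rule,
    working in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical: a 10% base rate and a test result with likelihood ratio 3
# give a 25% posterior probability for this individual.
print(bayesian_update(0.10, 3.0))
```

The odds form makes the logic transparent: the instrument never replaces the base rate, it only scales it.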
Spatial probability aids visual stimulus discrimination
Directory of Open Access Journals (Sweden)
Michael Druker
2010-08-01
We investigated whether the statistical predictability of a target's location would influence how quickly and accurately it was classified. Recent results have suggested that spatial probability can be a cue for the allocation of attention in visual search. One explanation for probability cuing is spatial repetition priming. In our two experiments we used probability distributions that were continuous across the display rather than relying on a few arbitrary screen locations. This produced fewer spatial repeats and allowed us to dissociate the effect of a high probability location from that of short-term spatial repetition. The task required participants to quickly judge the color of a single dot presented on a computer screen. In Experiment 1, targets were more probable in an off-center hotspot of high probability that gradually declined to a background rate. Targets garnered faster responses if they were near earlier target locations (priming) and if they were near the high probability hotspot (probability cuing). In Experiment 2, target locations were chosen on three concentric circles around fixation. One circle contained 80% of targets. The value of this ring distribution is that it allowed for a spatially restricted high probability zone in which sequentially repeated trials were not likely to be physically close. Participant performance was sensitive to the high-probability circle in addition to the expected effects of eccentricity and the distance to recent targets. These two experiments suggest that inhomogeneities in spatial probability can be learned and used by participants on-line and without prompting as an aid to visual stimulus discrimination, and that spatial repetition priming is not a sufficient explanation for this effect. Future models of attention should consider explicitly incorporating the probabilities of target locations and features.
Trends in Classroom Observation Scores
Casabianca, Jodi M.; Lockwood, J. R.; McCaffrey, Daniel F.
2015-01-01
Observations and ratings of classroom teaching and interactions collected over time are susceptible to trends in both the quality of instruction and rater behavior. These trends have potential implications for inferences about teaching and for study design. We use scores on the Classroom Assessment Scoring System-Secondary (CLASS-S) protocol from…
Quadratic prediction of factor scores
Wansbeek, T
1999-01-01
Factor scores are naturally predicted by means of their conditional expectation given the indicators y. Under normality this expectation is linear in y, but in general it is an unknown function of y. It is discussed that under nonnormality factor scores can be more precisely predicted by a quadratic function of y.
The Machine Scoring of Writing
McCurry, Doug
2010-01-01
This article provides an introduction to the kind of computer software that is used to score student writing in some high stakes testing programs, and that is being promoted as a teaching and learning tool to schools. It sketches the state of play with machines for the scoring of writing, and describes how these machines work and what they do.…
Matching score based face recognition
Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
2006-01-01
Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a new method: matching score based face registration, which searches for optimal alignment by maximizing the matching score output of a classifier as a function of the different
Modelling sequentially scored item responses
Akkermans, W.
2000-01-01
The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is
Use of soft probabilities in evaluating physical-security systems
International Nuclear Information System (INIS)
Green, J.N.
1982-03-01
The complexity of evaluating how a physical security system would perform against a broad array of threat situations dictates the use by an inspector of methods which are not completely rigorous. Intuition and judgment based on experience have a large role to play. The use of soft probabilities can give meaningful results when the nature of the situation to which they are applied is sufficiently understood. Although the scoring method proposed is based on complex theory, it is feasible to apply on an intuitive basis. 6 figures
Is probability of frequency too narrow?
International Nuclear Information System (INIS)
Martz, H.F.
1993-01-01
Modern methods of statistical data analysis, such as empirical and hierarchical Bayesian methods, should find increasing use in future Probabilistic Risk Assessment (PRA) applications. In addition, there will be a more formalized use of expert judgment in future PRAs. These methods require an extension of the probabilistic framework of PRA, in particular, the popular notion of probability of frequency, to consideration of frequency of frequency, frequency of probability, and probability of probability. The genesis, interpretation, and examples of these three extended notions are discussed
Directory of Open Access Journals (Sweden)
Chitra Mehta
2016-01-01
Background: Timely decision making in the Intensive Care Unit (ICU) is essential to improve the outcome of critically sick patients. Conventional scores like the Acute Physiology and Chronic Health Evaluation (APACHE IV) are quite cumbersome to calculate and take a minimum of 24 hours. Procalcitonin has been shown to have prognostic value in the ICU/Emergency Department (ED) in disease states like pneumonia, sepsis etc. NTproBNP has demonstrated excellent diagnostic and prognostic importance in cardiac diseases. It has also been found elevated in non-cardiac diseases. We chose to study the prognostic utility of these markers on ICU admission. Settings and Design: Retrospective observational study. Materials and Methods: A retrospective analysis was done of 100 eligible patients who had undergone PCT and NTproBNP measurements on ICU admission. Their correlations with all-cause mortality, length of hospital stay, need for ventilator support, and need for vasopressors were performed. Results: Among 100 randomly selected ICU patients, 28 were non-survivors. NTproBNP values on admission significantly correlated with all-cause mortality (P = 0.036, AUC = 0.643) and morbidity (P = 0.000, AUC = 0.763), comparable to that of the APACHE-IV score. PCT values on admission did not show significant association with mortality, but correlated well with morbidity and prolonged hospital length of stay (AUC = 0.616, P = 0.045). Conclusion: The current study demonstrated a good predictive value of NTproBNP, in terms of mortality and morbidity, comparable to that of the APACHE-IV score. Procalcitonin, however, was found to have doubtful prognostic importance. These findings need to be confirmed in a prospective larger study.
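The AUC values reported in studies like this one have a rank interpretation: the AUROC equals the probability that a randomly chosen positive case (e.g. a non-survivor) has a higher marker value than a randomly chosen negative case. A small self-contained sketch of that computation, using toy data rather than the study's:

```python
def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy data: perfectly separated groups give AUROC = 1.0,
# identical score distributions give 0.5 (chance level).
print(auroc([3, 4], [1, 2]), auroc([1, 2], [1, 2]))
```

Library implementations compute this from ranks in O(n log n); the quadratic loop above is only for clarity.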
International Nuclear Information System (INIS)
Efroymson, Rebecca Ann; Peterson, Mark J.; Jones, Daniel Steven; Suter, Glenn
2008-01-01
An ecological risk assessment was conducted at Yuma Proving Ground, Arizona, as a demonstration of the Military Ecological Risk Assessment Framework (MERAF). The focus of the assessment was a testing program at Cibola Range, which involved an Apache Longbow helicopter firing Hellfire missiles at moving targets, i.e., M60-A1 tanks. The problem formulation for the assessment included conceptual models for three component activities of the test, helicopter overflight, missile firing, and tracked vehicle movement, and two ecological endpoint entities, woody desert wash communities and desert mule deer (Odocoileus hemionus crooki) populations. An activity-specific risk assessment framework was available to provide guidance for assessing risks associated with aircraft overflights. Key environmental features of the study area include barren desert pavement and tree-lined desert washes. The primary stressors associated with helicopter overflights were sound and the view of the aircraft. The primary stressor associated with Hellfire missile firing was sound. The principal stressor associated with tracked vehicle movement was soil disturbance, and a resulting, secondary stressor was hydrological change. Water loss to washes and wash vegetation was expected to result from increased ponding, infiltration and/or evaporation associated with disturbances to desert pavement. A plan for estimating integrated risks from the three military activities was included in the problem formulation
Energy Technology Data Exchange (ETDEWEB)
Ridgley, Jennie
2001-08-21
The purpose of the phase 2 Mesaverde study, part of the Department of Energy funded project "Analysis of oil-bearing Cretaceous Sandstone Hydrocarbon Reservoirs, exclusive of the Dakota Sandstone, on the Jicarilla Apache Indian Reservation, New Mexico", was to define the facies of the oil-producing units within the subsurface units of the Mesaverde Group and integrate these results with outcrop studies that defined the depositional environments of these facies within a sequence stratigraphic context. The focus of this report will center on (1) integration of subsurface correlations with outcrop correlations of components of the Mesaverde, (2) application of the sequence stratigraphic model determined in the phase 1 study to these correlations, (3) determination of the facies distribution of the Mesaverde Group and their relationship to sites of oil and gas accumulation, (4) evaluation of the thermal maturity and potential source rocks for oil and gas in the Mesaverde Group, and (5) evaluation of the structural features on the Reservation as they may control sites of oil accumulation.
A Novel Scoring System Approach to Assess Patients with Lyme Disease (Nutech Functional Score).
Shroff, Geeta; Hopf-Seidel, Petra
2018-01-01
A bacterial infection by Borrelia burgdorferi, referred to as Lyme disease (LD) or borreliosis, is transmitted mostly by a bite of the tick Ixodes scapularis in the USA and Ixodes ricinus in Europe. Various tests are used for the diagnosis of LD, but their results are often unreliable. We compiled a list of clinically visible and patient-reported symptoms that are associated with LD. Based on this list, we developed a novel scoring system, the Nutech Functional Score (NFS), which is a 43-point positional (every symptom is subgraded and each alternative gets some points according to its position) and directional (moves in direction bad to good) scoring system that assesses the patient's condition. The grades of the scoring system have been converted into numeric values for conducting probability-based studies. Each symptom is graded from 1 to 5 in the direction BAD → GOOD. NFS is a unique tool that can be used universally to assess the condition of patients with LD.
A Novel Scoring System Approach to Assess Patients with Lyme Disease (Nutech Functional Score
Directory of Open Access Journals (Sweden)
Geeta Shroff
2018-01-01
Introduction: A bacterial infection by Borrelia burgdorferi, referred to as Lyme disease (LD) or borreliosis, is transmitted mostly by a bite of the tick Ixodes scapularis in the USA and Ixodes ricinus in Europe. Various tests are used for the diagnosis of LD, but their results are often unreliable. We compiled a list of clinically visible and patient-reported symptoms that are associated with LD. Based on this list, we developed a novel scoring system. Methodology: The Nutech Functional Score (NFS) is a 43-point positional (every symptom is subgraded and each alternative gets some points according to its position) and directional (moves in direction bad to good) scoring system that assesses the patient's condition. Results: The grades of the scoring system have been converted into numeric values for conducting probability-based studies. Each symptom is graded from 1 to 5 in the direction BAD → GOOD. Conclusion: NFS is a unique tool that can be used universally to assess the condition of patients with LD.
Probability of Grounding and Collision Events
DEFF Research Database (Denmark)
Pedersen, Preben Terndrup
1996-01-01
To quantify the risks involved in ship traffic, rational criteria for collision and grounding accidents are developed. This implies that probabilities as well as inherent consequences can be analysed and assessed. The present paper outlines a method for evaluation of the probability of ship...
Probability of Grounding and Collision Events
DEFF Research Database (Denmark)
Pedersen, Preben Terndrup
1996-01-01
To quantify the risks involved in ship traffic, rational criteria for collision and grounding accidents have to be developed. This implies that probabilities as well as inherent consequences have to be analyzed and assessed. The present notes outline a method for evaluation of the probability...
Introducing Disjoint and Independent Events in Probability.
Kelly, I. W.; Zwiers, F. W.
Two central concepts in probability theory are those of independence and mutually exclusive events. This document is intended to provide suggestions to teachers that can be used to equip students with an intuitive, comprehensive understanding of these basic concepts in probability. The first section of the paper delineates mutually exclusive and…
Selected papers on probability and statistics
2009-01-01
This volume contains translations of papers that originally appeared in the Japanese journal Sūgaku. The papers range over a variety of topics in probability theory, statistics, and applications. This volume is suitable for graduate students and research mathematicians interested in probability and statistics.
Collective probabilities algorithm for surface hopping calculations
International Nuclear Information System (INIS)
Bastida, Adolfo; Cruz, Carlos; Zuniga, Jose; Requena, Alberto
2003-01-01
General equations that the transition probabilities of the hopping algorithms in surface hopping calculations must obey to ensure equality between the average quantum and classical populations are derived. These equations are solved for two particular cases. In the first, it is assumed that the probabilities are the same for all trajectories and that the number of hops is kept to a minimum. These assumptions specify the collective probabilities (CP) algorithm, for which the transition probabilities depend on the average populations over all trajectories. In the second case, the probabilities for each trajectory are supposed to be completely independent of the results from the other trajectories. There is then a unique solution of the general equations ensuring that the transition probabilities are equal to the quantum population of the target state, which is referred to as the independent probabilities (IP) algorithm. The fewest switches (FS) algorithm developed by Tully is accordingly understood as an approximate hopping algorithm that takes elements from the accurate CP and IP solutions. A numerical test of all these hopping algorithms is carried out for a one-dimensional two-state problem with two avoided crossings, which shows the accuracy and computational efficiency of the proposed collective probabilities algorithm, the limitations of the FS algorithm, and the similarity between the results offered by the IP algorithm and those obtained with the Ehrenfest method.
Examples of Neutrosophic Probability in Physics
Directory of Open Access Journals (Sweden)
Fu Yuhua
2015-01-01
Full Text Available This paper re-discusses the problems of the so-called "law of nonconservation of parity" and "accelerating expansion of the universe", and presents examples of determining the Neutrosophic Probability of the experiment of Chien-Shiung Wu et al. in 1957, and of determining the Neutrosophic Probability of accelerating expansion of the partial universe.
Probability Issues in without Replacement Sampling
Joarder, A. H.; Al-Sabah, W. S.
2007-01-01
Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…
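The without-replacement counting argument the abstract refers to can be sketched with the hypergeometric formula. This is our own illustrative sketch (function name is ours), not a method from the paper:

```python
from math import comb

def hypergeom_pmf(k, K, n, N):
    """P(exactly k successes in n draws without replacement
    from a population of N items of which K are successes)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Drawing 2 cards from a 52-card deck: P(both are aces).
# The sequential conditional-probability route gives 4/52 * 3/51,
# and the hypergeometric count agrees.
p_both_aces = hypergeom_pmf(2, 4, 2, 52)  # = (4/52) * (3/51)
```

The agreement between the conditional route and the counting route is exactly the kind of alternative representation such texts compare.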
Some open problems in noncommutative probability
International Nuclear Information System (INIS)
Kruszynski, P.
1981-01-01
A generalization of probability measures to non-Boolean structures is discussed. The starting point of the theory is the Gleason theorem about the form of measures on closed subspaces of a Hilbert space. The problems are formulated in terms of probability on lattices of projections in arbitrary von Neumann algebras. (Auth.)
Probability: A Matter of Life and Death
Hassani, Mehdi; Kippen, Rebecca; Mills, Terence
2016-01-01
Life tables are mathematical tables that document probabilities of dying and life expectancies at different ages in a society. Thus, the life table contains some essential features of the health of a population. Probability is often regarded as a difficult branch of mathematics. Life tables provide an interesting approach to introducing concepts…
Teaching Probability: A Socio-Constructivist Perspective
Sharma, Sashi
2015-01-01
There is a considerable and rich literature on students' misconceptions in probability. However, less attention has been paid to the development of students' probabilistic thinking in the classroom. This paper offers a sequence, grounded in socio-constructivist perspective for teaching probability.
Stimulus Probability Effects in Absolute Identification
Kent, Christopher; Lamberts, Koen
2016-01-01
This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…
47 CFR 1.1623 - Probability calculation.
2010-10-01
47 CFR 1.1623 (Telecommunication, 2010-10-01): FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures, § 1.1623 Probability calculation. (a) All calculations shall be...
Simulations of Probabilities for Quantum Computing
Zak, M.
1996-01-01
It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed.
Against All Odds: When Logic Meets Probability
van Benthem, J.; Katoen, J.-P.; Langerak, R.; Rensink, A.
2017-01-01
This paper is a light walk along interfaces between logic and probability, triggered by a chance encounter with Ed Brinksma. It is not a research paper, or a literature survey, but a pointer to issues. I discuss both direct combinations of logic and probability and structured ways in which logic can
An introduction to probability and stochastic processes
Melsa, James L
2013-01-01
Geared toward college seniors and first-year graduate students, this text is designed for a one-semester course in probability and stochastic processes. Topics covered in detail include probability theory, random variables and their functions, stochastic processes, linear system response to stochastic processes, Gaussian and Markov processes, and stochastic differential equations. 1973 edition.
The probability of the false vacuum decay
International Nuclear Information System (INIS)
Kiselev, V.; Selivanov, K.
1983-01-01
The closed expression for the probability of false vacuum decay in (1+1) dimensions is given. The probability of false vacuum decay is expressed as the product of an exponential quasiclassical factor and a functional determinant of the given form. The method for calculation of this determinant is developed and a complete answer for (1+1) dimensions is given.
Probability elements of the mathematical theory
Heathcote, C R
2000-01-01
Designed for students studying mathematical statistics and probability after completing a course in calculus and real variables, this text deals with basic notions of probability spaces, random variables, distribution functions and generating functions, as well as joint distributions and the convergence properties of sequences of random variables. Includes worked examples and over 250 exercises with solutions.
The transition probabilities of the reciprocity model
Snijders, T.A.B.
1999-01-01
The reciprocity model is a continuous-time Markov chain model used for modeling longitudinal network data. A new explicit expression is derived for its transition probability matrix. This expression can be checked relatively easily. Some properties of the transition probabilities are given, as well
Probability numeracy and health insurance purchase
Dillingh, Rik; Kooreman, Peter; Potters, Jan
2016-01-01
This paper provides new field evidence on the role of probability numeracy in health insurance purchase. Our regression results, based on rich survey panel data, indicate that the expenditure on two out of three measures of health insurance first rises with probability numeracy and then falls again.
The enigma of probability and physics
International Nuclear Information System (INIS)
Mayants, L.
1984-01-01
This volume contains a coherent exposition of the elements of two unique sciences: probabilistics (science of probability) and probabilistic physics (application of probabilistics to physics). Proceeding from a key methodological principle, it starts with the disclosure of the true content of probability and the interrelation between probability theory and experimental statistics. This makes it possible to introduce a proper order in all the sciences dealing with probability and, by conceiving the real content of statistical mechanics and quantum mechanics in particular, to construct both as two interconnected domains of probabilistic physics. Consistent theories of kinetics of physical transformations, decay processes, and intramolecular rearrangements are also outlined. The interrelation between the electromagnetic field, photons, and the theoretically discovered subatomic particle 'emon' is considered. Numerous internal imperfections of conventional probability theory, statistical physics, and quantum physics are exposed and removed - quantum physics no longer needs special interpretation. EPR, Bohm, and Bell paradoxes are easily resolved, among others. (Auth.)
Optimizing Probability of Detection Point Estimate Demonstration
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws are to be detected using these NDE methods, and a reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
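The binomial point-estimate logic behind such demonstrations can be sketched as follows. This is our own illustration of the standard 29-flaw criterion, not the handbook's software:

```python
from math import comb

def prob_pass_demo(true_pod, n=29, max_misses=0):
    """Probability of passing a binomial POD demonstration that
    requires detecting at least n - max_misses of n same-size flaws,
    given the true per-flaw probability of detection."""
    return sum(comb(n, n - m) * true_pod ** (n - m) * (1 - true_pod) ** m
               for m in range(max_misses + 1))

# Classic 29-of-29 criterion: if the true POD were only 0.90, the
# chance of passing is about 0.047, which is why passing is read as
# demonstrating POD >= 0.90 at roughly 95% confidence.
ppd = prob_pass_demo(0.90)
```

Varying `n` and `max_misses` in such a function is one way to explore the PPD trade-off the abstract describes.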
Alternative probability theories for cognitive psychology.
Narens, Louis
2014-01-01
Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily Boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.
ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS
Directory of Open Access Journals (Sweden)
Pablo Rogers
2015-01-01
Full Text Available The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. Therefore, a sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. By adopting the methodology of logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.
International Nuclear Information System (INIS)
Shimada, Yoshio
2000-01-01
It is anticipated that changing the frequency of surveillance tests, preventive maintenance or parts replacement of safety-related components may change component failure probability and, as a result, core damage probability. It is also anticipated that the change differs depending on the initiating event frequency or the component type. This study assessed the change in core damage probability using a simplified PSA model capable of calculating core damage probability in a short time period, developed by the US NRC to process accident sequence precursors, when various components' failure probabilities are changed between 0 and 1, or when Japanese or American initiating event frequency data are used. As a result of the analysis: (1) It was clarified that the frequency of surveillance tests, preventive maintenance or parts replacement of motor-driven pumps (high pressure injection pumps, residual heat removal pumps, auxiliary feedwater pumps) should be changed carefully, since the change in core damage probability is large when the base failure probability increases. (2) Core damage probability is insensitive to changes in surveillance test frequency for motor-operated valves and the turbine-driven auxiliary feedwater pump, since the change in core damage probability is small when their failure probabilities change by about one order of magnitude. (3) The change in core damage probability is small when Japanese failure probability data are applied to the emergency diesel generator, even if its failure probability changes by one order of magnitude from the base value. On the other hand, when American failure probability data are applied, the increase in core damage probability is large, even for changes in the increasing direction. Therefore, when Japanese failure probability data are applied, core damage probability is insensitive to changes in surveillance test frequency, etc. (author)
From Rasch scores to regression
DEFF Research Database (Denmark)
Christensen, Karl Bang
2006-01-01
Rasch models provide a framework for measurement and modelling of latent variables. Having measured a latent variable in a population, a comparison of groups will often be of interest. For this purpose the use of observed raw scores will often be inadequate because these lack interval scale properties. This paper compares two approaches to group comparison: linear regression models using estimated person locations as outcome variables, and latent regression models based on the distribution of the score.
Upgrading Probability via Fractions of Events
Directory of Open Access Journals (Sweden)
Frič Roman
2016-08-01
Full Text Available The influence of "Grundbegriffe" by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory "calling for" an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures) and observables (dual maps to random variables) have very different "mathematical nature". Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events, elementary category theory, and covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing "fractions" of classical random events, and we upgrade the notions of probability measure and random variable.
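The Łukasiewicz operations the abstract invokes can be illustrated on [0, 1]-valued events. A minimal sketch (function names are ours), showing how crisp Boolean events embed as a special case:

```python
def luk_or(a, b):
    """Łukasiewicz truncated sum: a ⊕ b = min(1, a + b)."""
    return min(1.0, a + b)

def luk_and(a, b):
    """Łukasiewicz product: a ⊙ b = max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def luk_not(a):
    """Łukasiewicz negation: ¬a = 1 - a."""
    return 1.0 - a

# On crisp {0, 1} events these reduce to Boolean OR, AND and NOT,
# while "fractional" events such as 0.4 are handled uniformly.
```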
Failure probability analysis of optical grid
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. With optical-network-based distributed computing systems extensively applied to data processing, the application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based analysis method for the application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the performance of different backup strategies in reducing the application failure probability can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when an application based on a DAG (directed acyclic graph) is executed under different backup strategies, the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the failure probability requirement and improve network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in an optical grid.
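The task-based failure-probability idea can be sketched under an independence assumption. This is an illustrative toy model of ours; the paper's MDSA scheduling algorithm and DAG precedence handling are not reproduced here:

```python
from math import prod

def app_failure_prob(task_fail_probs, backup_fail_probs=None):
    """Failure probability of an application made of independent tasks:
    the application fails if any task fails. With a backup resource, a
    task fails only if both its primary and its backup fail.
    backup_fail_probs maps task index -> backup failure probability."""
    backups = backup_fail_probs or {}
    effective = (p * backups.get(i, 1.0)  # 1.0 means no backup
                 for i, p in enumerate(task_fail_probs))
    return 1.0 - prod(1.0 - p for p in effective)

# Three tasks, each failing with probability 0.01; giving task 1 a
# backup that itself fails 5% of the time lowers the overall risk.
p_plain = app_failure_prob([0.01, 0.01, 0.01])
p_backed = app_failure_prob([0.01, 0.01, 0.01], {1: 0.05})
```

Comparing `p_plain` and `p_backed` for different backup assignments is the kind of trade-off a backup strategy comparison would quantify.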
Uncertainty about probability: a decision analysis perspective
International Nuclear Information System (INIS)
Howard, R.A.
1988-01-01
The issue of how to think about uncertainty about probability is framed and analyzed from the viewpoint of a decision analyst. The failure of nuclear power plants is used as an example. The key idea is to think of probability as describing a state of information on an uncertain event, and to pose the issue of uncertainty in this quantity as uncertainty about a number that would be definitive: it has the property that you would assign it as the probability if you knew it. Logical consistency requires that the probability to assign to a single occurrence in the absence of further information be the mean of the distribution of this definitive number, not the median as is sometimes suggested. Any decision that must be made without the benefit of further information must also be made using the mean of the definitive number's distribution. With this formulation, it is further found that the probability of r occurrences in n exchangeable trials will depend on the first n moments of the definitive number's distribution. In making decisions, the expected value of clairvoyance on the occurrence of the event must be at least as great as that on the definitive number. If one of the events in question occurs, then the increase in probability of another such event is readily computed. This means, in terms of coin tossing, that unless one is absolutely sure of the fairness of a coin, seeing a head must increase the probability of heads, contrary to common intuition. A numerical example for nuclear power shows that the failure of one plant in a group with a low probability of failure can significantly increase the probability that must be assigned to failure of a second plant in the group.
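The mean-and-moments reasoning can be made concrete with an assumed Beta distribution over the "definitive number" (our own illustration with made-up parameters, not the paper's numerical example):

```python
def beta_moment(a, b, k):
    """E[p^k] for p ~ Beta(a, b), using
    E[p^k] = prod_{i=0}^{k-1} (a + i) / (a + b + i)."""
    m = 1.0
    for i in range(k):
        m *= (a + i) / (a + b + i)
    return m

a, b = 1.0, 9.0                   # assumed prior: mean E[p] = 0.1
p_first = beta_moment(a, b, 1)    # probability of a first occurrence
# After one occurrence, the probability of another is E[p^2] / E[p],
# which always exceeds E[p] unless p is known exactly: here 2/11 > 1/10.
p_second_given_first = beta_moment(a, b, 2) / p_first
```

This is the "seeing a head increases the probability of heads" point: uncertainty about the definitive number makes successive occurrences positively correlated.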
Probability an introduction with statistical applications
Kinney, John J
2014-01-01
Praise for the First Edition: ""This is a well-written and impressively presented introduction to probability and statistics. The text throughout is highly readable, and the author makes liberal use of graphs and diagrams to clarify the theory."" - The Statistician. Thoroughly updated, Probability: An Introduction with Statistical Applications, Second Edition features a comprehensive exploration of statistical data analysis as an application of probability. The new edition provides an introduction to statistics with accessible coverage of reliability, acceptance sampling, confidence intervals, h
Dependency models and probability of joint events
International Nuclear Information System (INIS)
Oerjasaeter, O.
1982-08-01
Probabilistic dependencies between components/systems are discussed with reference to a broad classification of potential failure mechanisms. Further, a generalized time-dependency model, based on conditional probabilities for estimation of the probability of joint events and event sequences is described. The applicability of this model is clarified/demonstrated by various examples. It is concluded that the described model of dependency is a useful tool for solving a variety of practical problems concerning the probability of joint events and event sequences where common cause and time-dependent failure mechanisms are involved. (Auth.)
Handbook of probability theory and applications
Rudas, Tamas
2008-01-01
""This is a valuable reference guide for readers interested in gaining a basic understanding of probability theory or its applications in problem solving in the other disciplines."" - CHOICE. Providing cutting-edge perspectives and real-world insights into the greater utility of probability and its applications, the Handbook of Probability offers an equal balance of theory and direct applications in a non-technical, yet comprehensive, format. Editor Tamás Rudas and the internationally-known contributors present the material in a manner so that researchers of vari
Probabilities on Streams and Reflexive Games
Directory of Open Access Journals (Sweden)
Andrew Schumann
2014-01-01
Full Text Available Probability measures on streams (e.g. on hypernumbers and p-adic numbers) have been defined. It was shown that these probabilities can be used for simulations of reflexive games. In particular, it can be proved that Aumann's agreement theorem does not hold for these probabilities. Instead of this theorem, there is a statement that is called the reflexion disagreement theorem. Based on this theorem, probabilistic and knowledge conditions can be defined for reflexive games at various reflexion levels up to the infinite level. (original abstract)
Concept of probability in statistical physics
Guttmann, Y M
1999-01-01
Foundational issues in statistical mechanics and the more general question of how probability is to be understood in the context of physical theories are both areas that have been neglected by philosophers of physics. This book fills an important gap in the literature by providing a most systematic study of how to interpret probabilistic assertions in the context of statistical mechanics. The book explores both subjectivist and objectivist accounts of probability, and takes full measure of work in the foundations of probability theory, in statistical mechanics, and in mathematical theory. It will be of particular interest to philosophers of science, physicists and mathematicians interested in foundational issues, and also to historians of science.
Computation of the Complex Probability Function
Energy Technology Data Exchange (ETDEWEB)
Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-22
The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the n^{th} degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of Gauss-Hermite quadrature for the complex probability function.
Pre-aggregation for Probability Distributions
DEFF Research Database (Denmark)
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions, and the techniques to process these queries. The paper also presents methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate computations, as required for the kind of multidimensional data analysis considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions).
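One way to picture aggregation over probability-distribution values is convolution for a SUM roll-up. This is a toy sketch under an independence assumption; the paper's approximate pre-aggregation techniques are more involved:

```python
from collections import defaultdict
from itertools import product

def convolve(dist_a, dist_b):
    """SUM-aggregate two independent discrete distributions,
    each given as a dict mapping value -> probability."""
    out = defaultdict(float)
    for (va, pa), (vb, pb) in product(dist_a.items(), dist_b.items()):
        out[va + vb] += pa * pb
    return dict(out)

# Pre-aggregation idea: the convolved distribution of two cells can be
# stored once and reused whenever a coarser roll-up needs their sum.
cell_1 = {0: 0.5, 1: 0.5}
cell_2 = {0: 0.5, 1: 0.5}
pre_agg = convolve(cell_1, cell_2)  # {0: 0.25, 1: 0.5, 2: 0.25}
```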
Comparing linear probability model coefficients across groups
DEFF Research Database (Denmark)
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem of comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters, and the distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
Modeling experiments using quantum and Kolmogorov probability
International Nuclear Information System (INIS)
Hess, Karl
2008-01-01
Criteria are presented that permit a straightforward partition of experiments into sets that can be modeled using both quantum probability and the classical probability framework of Kolmogorov. These new criteria concentrate on the operational aspects of the experiments and lead beyond the commonly appreciated partition by relating experiments to commuting and non-commuting quantum operators as well as non-entangled and entangled wavefunctions. In other words the space of experiments that can be understood using classical probability is larger than usually assumed. This knowledge provides advantages for areas such as nanoscience and engineering or quantum computation.
DEFF Research Database (Denmark)
Gasselseder, Hans-Peter
2014-01-01
This study explores immersive presence as well as emotional valence and arousal in the context of dynamic and non-dynamic music scores in the 3rd person action-adventure video game genre, while also considering relevant personality traits of the player. 60 subjects answered self-report questionnaires ... -temporal alignment in the resulting emotional congruency of nondiegetic music. Whereas imaginary aspects of immersive presence are systemically affected by the presentation of dynamic music, sensory spatial aspects show higher sensitivity towards the arousal potential of the music score. It is argued...
Keren, G.; Teigen, K.H.
2001-01-01
This article presents a framework for lay people's internal representations of probabilities, which supposedly reflect the strength of underlying dispositions, or propensities, associated with the predicted event. From this framework, we derive the probability-outcome correspondence principle, which
Dual Diagnosis and Suicide Probability in Poly-Drug Users
International Nuclear Information System (INIS)
Youssef, I. M.; Fahmy, M. T.; Haggag, W. L.; Mohamed, K. A.; Baalash, A. A.
2016-01-01
Objective: To determine the frequency of suicidal thoughts and suicide probability among poly-substance abusers in a Saudi population, and to examine the relation between dual diagnosis and suicidal thoughts. Study Design: Case control study. Place and Duration of Study: Al-Baha Psychiatric Hospital, Saudi Arabia, from May 2011 to June 2012. Methodology: Participants were 239 subjects, aged 18 - 45 years. We reviewed 122 individuals who fulfilled the DSM-IV-TR criteria of substance abuse for two or more substances, and their data were compared with that collected from 117 control persons. Results: Suicidal cases were highly prevalent among poly-substance abusers (64.75%). Amphetamine and cannabis were the most abused substances (87.7% and 70.49%, respectively). A statistically significant association with suicidality was found with longer duration of substance abuse (p < 0.001), using alcohol (p = 0.001), amphetamine (p = 0.007), volatile substances (p = 0.034), and the presence of comorbid psychiatric disorders (dual diagnosis) such as substance-induced mood disorder (p = 0.001), schizo-affective disorder (p = 0.017), major depressive disorders (p = 0.001), and antisocial (p = 0.016) and borderline (p = 0.005) personality disorder. Suicidal cases showed significantly higher scores (p < 0.001) on the suicide probability scale and higher scores on the Beck depression inventory (p < 0.001). Conclusion: Abusing certain substances for a long duration, in addition to comorbid psychiatric disorders especially with a disturbed-mood element, may trigger suicidal thoughts in poly-substance abusers. Depression and suicide probability are common consequences of substance abuse. (author)
International Nuclear Information System (INIS)
Nicholson, T.J.; Guzman-Guzman, A.; Hills, R.; Rasmussen, T.C.
1997-01-01
The Working Group 1 final report summarizes two test case studies, the Las Cruces Trench (LCT) and Apache Leap Tuff Site (ALTS) experiments. The objectives of these two field studies were to evaluate models for water flow and contaminant transport in unsaturated, heterogeneous soils and fractured tuff. The LCT experiments were specifically designed to test various deterministic and stochastic models of water flow and solute transport in heterogeneous, unsaturated soils. Experimental data from the first two LCT experiments, and detailed field characterisation studies, provided information for developing and calibrating the models. Experimental results from the third experiment were held confidential from the modellers, and were used for model comparison. Comparative analyses included: point comparisons of water content; predicted mean behavior for water flow; point comparisons of solute concentrations; and predicted mean behavior for tritium transport. These analyses indicated that no model, whether uniform or heterogeneous, proved superior. Since the INTRAVAL study, however, a new method has been developed for conditioning the hydraulic properties used for flow and transport modelling, based on the initial field-measured water content distributions and a set of scale-mean hydraulic parameters. Very good matches between the observed and simulated flow and transport behavior were obtained using the conditioning procedure, without model calibration. The ALTS experiments were designed to evaluate characterisation methods and their associated conceptual models for coupled matrix-fracture continua over a range of scales (i.e., 2.5 centimeter rock samples; 10 centimeter cores; 1 meter block; and 30 meter boreholes). Within these spatial scales, laboratory and field tests were conducted for estimating pneumatic, thermal, hydraulic, and transport property values for different conceptual models. The analyses included testing of current conceptual, mathematical and physical
International Nuclear Information System (INIS)
Guzman, A.G.; Geddis, A.M.; Henrich, M.J.; Lohrstorfer, C.F.; Neuman, S.P.
1996-03-01
This document summarizes air permeability estimates obtained from single hole pneumatic injection tests in unsaturated fractured tuffs at the Covered Borehole Site (CBS) within the larger Apache Leap Research Site (ALRS). Only permeability estimates obtained from a steady state interpretation of relatively stable pressure and flow rate data are included. Tests were conducted in five boreholes inclined at 45 degrees to the horizontal, and one vertical borehole. Over 180 borehole segments were tested by setting the packers 1 m apart. Additional tests were conducted in segments of lengths 0.5, 2.0, and 3.0 m in one borehole, and 2.0 m in another borehole, bringing the total number of tests to over 270. Tests were conducted by maintaining a constant injection rate until air pressure became relatively stable and remained so for some time. The injection rate was then incremented by a constant value and the procedure repeated. The air injection rate, pressure, temperature, and relative humidity were recorded. For each relatively stable period of injection rate and pressure, air permeability was estimated by treating the rock around each test interval as a uniform, isotropic porous medium within which air flows as a single phase under steady state, in a pressure field exhibiting prolate spheroidal symmetry. For each permeability estimate the authors list the corresponding injection rate, pressure, temperature and relative humidity. They also present selected graphs which show how the latter quantities vary with time; logarithmic plots of pressure versus time which demonstrate the importance of borehole storage effects during the early transient portion of each incremental test period; and semilogarithmic plots of pressure versus recovery time at the end of each test sequence.
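The steady-state interpretation described above has a closed form. As a hedged sketch only (the report does not reproduce the authors' exact geometry factor, so this particular formula and every parameter value below are assumptions): one commonly quoted expression for compressible steady flow from a packed-off interval of length L with prolate spheroidal symmetry, valid for L much greater than the borehole radius r_w, is k = Q_sc P_sc mu ln(L/r_w) / (pi L (P1^2 - P2^2)).

```python
import math

def air_permeability(q_sc, p_sc, mu, L, r_w, p1, p2):
    """Steady-state single-hole air-injection permeability estimate [m^2].

    Assumed form (not necessarily the authors' exact formula):
      q_sc : injection rate at standard conditions [m^3/s]
      p_sc : standard pressure [Pa]
      mu   : air viscosity [Pa s]
      L    : packed-off interval length [m], r_w : borehole radius [m]
      p1, p2 : injection and ambient absolute pressures [Pa]
    """
    return (q_sc * p_sc * mu * math.log(L / r_w)) / (math.pi * L * (p1 ** 2 - p2 ** 2))

# Hypothetical test values for a 1 m interval in fractured tuff
k = air_permeability(q_sc=1e-4, p_sc=101325.0, mu=1.8e-5, L=1.0,
                     r_w=0.05, p1=1.4e5, p2=1.0e5)
print(f"{k:.2e} m^2")   # on the order of 1e-14 m^2, typical for fractured tuff
```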
International Nuclear Information System (INIS)
Peterson, Mark J; Efroymson, Rebecca Ann; Hargrove, William Walter
2008-01-01
A multiple stressor risk assessment was conducted at Yuma Proving Ground, Arizona, as a demonstration of the Military Ecological Risk Assessment Framework. The focus was a testing program at Cibola Range, which involved an Apache Longbow helicopter firing Hellfire missiles at moving targets, M60-A1 tanks. This paper describes the ecological risk assessment for the tracked vehicle movement component of the testing program. The principal stressor associated with tracked vehicle movement was soil disturbance, and a resulting, secondary stressor was hydrological change. Water loss to washes and wash vegetation was expected to result from increased infiltration and/or evaporation associated with disturbances to desert pavement. The simulated exposure of wash vegetation to water loss was quantified using estimates of exposed land area from a digital ortho quarter quad aerial photo and field observations, a 30 × 30 m digital elevation model, the flow accumulation feature of ESRI ArcInfo, and a two-step process in which runoff was estimated from direct precipitation to a land area and from water that flowed from upgradient to a land area. In all simulated scenarios, absolute water loss decreased with distance from the disturbance, downgradient in the washes; however, percentage water loss was greatest in land areas immediately downgradient of a disturbance. Potential effects on growth and survival of wash trees were quantified by using an empirical relationship derived from a local unpublished study of water infiltration rates. The risk characterization concluded that neither risk to wash vegetation growth or survival nor risk to mule deer abundance and reproduction was expected. The risk characterization was negative for both the incremental risk of the test program and the combination of the test and pretest disturbances
Albareti, Franco D.; Allende Prieto, Carlos; Almeida, Andres; Anders, Friedrich; Anderson, Scott; Andrews, Brett H.; Aragón-Salamanca, Alfonso; Argudo-Fernández, Maria; Armengaud, Eric; Aubourg, Eric; Avila-Reese, Vladimir; Badenes, Carles; Bailey, Stephen; Barbuy, Beatriz; Barger, Kat; Barrera-Ballesteros, Jorge; Bartosz, Curtis; Basu, Sarbani; Bates, Dominic; Battaglia, Giuseppina; Baumgarten, Falk; Baur, Julien; Bautista, Julian; Beers, Timothy C.; Belfiore, Francesco; Bershady, Matthew; Bertran de Lis, Sara; Bird, Jonathan C.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blanton, Michael; Blomqvist, Michael; Bolton, Adam S.; Borissova, J.; Bovy, Jo; Nielsen Brandt, William; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Burtin, Etienne; Busca, Nicolás G.; Orlando Camacho Chavez, Hugo; Cano Díaz, M.; Cappellari, Michele; Carrera, Ricardo; Chen, Yanping; Cherinka, Brian; Cheung, Edmond; Chiappini, Cristina; Chojnowski, Drew; Chuang, Chia-Hsun; Chung, Haeun; Cirolini, Rafael Fernando; Clerc, Nicolas; Cohen, Roger E.; Comerford, Julia M.; Comparat, Johan; Correa do Nascimento, Janaina; Cousinou, Marie-Claude; Covey, Kevin; Crane, Jeffrey D.; Croft, Rupert; Cunha, Katia; Darling, Jeremy; Davidson, James W., Jr.; Dawson, Kyle; Da Costa, Luiz; Da Silva Ilha, Gabriele; Deconto Machado, Alice; Delubac, Timothée; De Lee, Nathan; De la Macorra, Axel; De la Torre, Sylvain; Diamond-Stanic, Aleksandar M.; Donor, John; Downes, Juan Jose; Drory, Niv; Du, Cheng; Du Mas des Bourboux, Hélion; Dwelly, Tom; Ebelke, Garrett; Eigenbrot, Arthur; Eisenstein, Daniel J.; Elsworth, Yvonne P.; Emsellem, Eric; Eracleous, Michael; Escoffier, Stephanie; Evans, Michael L.; Falcón-Barroso, Jesús; Fan, Xiaohui; Favole, Ginevra; Fernandez-Alvar, Emma; Fernandez-Trincado, J. G.; Feuillet, Diane; Fleming, Scott W.; Font-Ribera, Andreu; Freischlad, Gordon; Frinchaboy, Peter; Fu, Hai; Gao, Yang; Garcia, Rafael A.; Garcia-Dias, R.; Garcia-Hernández, D. 
A.; Garcia Pérez, Ana E.; Gaulme, Patrick; Ge, Junqiang; Geisler, Douglas; Gillespie, Bruce; Gil Marin, Hector; Girardi, Léo; Goddard, Daniel; Gomez Maqueo Chew, Yilen; Gonzalez-Perez, Violeta; Grabowski, Kathleen; Green, Paul; Grier, Catherine J.; Grier, Thomas; Guo, Hong; Guy, Julien; Hagen, Alex; Hall, Matt; Harding, Paul; Harley, R. E.; Hasselquist, Sten; Hawley, Suzanne; Hayes, Christian R.; Hearty, Fred; Hekker, Saskia; Hernandez Toledo, Hector; Ho, Shirley; Hogg, David W.; Holley-Bockelmann, Kelly; Holtzman, Jon A.; Holzer, Parker H.; Hu, Jian; Huber, Daniel; Hutchinson, Timothy Alan; Hwang, Ho Seong; Ibarra-Medel, Héctor J.; Ivans, Inese I.; Ivory, KeShawn; Jaehnig, Kurt; Jensen, Trey W.; Johnson, Jennifer A.; Jones, Amy; Jullo, Eric; Kallinger, T.; Kinemuchi, Karen; Kirkby, David; Klaene, Mark; Kneib, Jean-Paul; Kollmeier, Juna A.; Lacerna, Ivan; Lane, Richard R.; Lang, Dustin; Laurent, Pierre; Law, David R.; Leauthaud, Alexie; Le Goff, Jean-Marc; Li, Chen; Li, Cheng; Li, Niu; Li, Ran; Liang, Fu-Heng; Liang, Yu; Lima, Marcos; Lin, Lihwai; Lin, Lin; Lin, Yen-Ting; Liu, Chao; Long, Dan; Lucatello, Sara; MacDonald, Nicholas; MacLeod, Chelsea L.; Mackereth, J. 
Ted; Mahadevan, Suvrath; Geimba Maia, Marcio Antonio; Maiolino, Roberto; Majewski, Steven R.; Malanushenko, Olena; Malanushenko, Viktor; Dullius Mallmann, Nícolas; Manchado, Arturo; Maraston, Claudia; Marques-Chaves, Rui; Martinez Valpuesta, Inma; Masters, Karen L.; Mathur, Savita; McGreer, Ian D.; Merloni, Andrea; Merrifield, Michael R.; Meszáros, Szabolcs; Meza, Andres; Miglio, Andrea; Minchev, Ivan; Molaverdikhani, Karan; Montero-Dorta, Antonio D.; Mosser, Benoit; Muna, Demitri; Myers, Adam; Nair, Preethi; Nandra, Kirpal; Ness, Melissa; Newman, Jeffrey A.; Nichol, Robert C.; Nidever, David L.; Nitschelm, Christian; O’Connell, Julia; Oravetz, Audrey; Oravetz, Daniel J.; Pace, Zachary; Padilla, Nelson; Palanque-Delabrouille, Nathalie; Pan, Kaike; Parejko, John; Paris, Isabelle; Park, Changbom; Peacock, John A.; Peirani, Sebastien; Pellejero-Ibanez, Marcos; Penny, Samantha; Percival, Will J.; Percival, Jeffrey W.; Perez-Fournon, Ismael; Petitjean, Patrick; Pieri, Matthew; Pinsonneault, Marc H.; Pisani, Alice; Prada, Francisco; Prakash, Abhishek; Price-Jones, Natalie; Raddick, M. Jordan; Rahman, Mubdi; Raichoor, Anand; Barboza Rembold, Sandro; Reyna, A. M.; Rich, James; Richstein, Hannah; Ridl, Jethro; Riffel, Rogemar A.; Riffel, Rogério; Rix, Hans-Walter; Robin, Annie C.; Rockosi, Constance M.; Rodríguez-Torres, Sergio; Rodrigues, Thaíse S.; Roe, Natalie; Lopes, A. Roman; Román-Zúñiga, Carlos; Ross, Ashley J.; Rossi, Graziano; Ruan, John; Ruggeri, Rossana; Runnoe, Jessie C.; Salazar-Albornoz, Salvador; Salvato, Mara; Sanchez, Sebastian F.; Sanchez, Ariel G.; Sanchez-Gallego, José R.; Santiago, Basílio Xavier; Schiavon, Ricardo; Schimoia, Jaderson S.; Schlafly, Eddie; Schlegel, David J.; Schneider, Donald P.; Schönrich, Ralph; Schultheis, Mathias; Schwope, Axel; Seo, Hee-Jong; Serenelli, Aldo; Sesar, Branimir; Shao, Zhengyi; Shetrone, Matthew; Shull, Michael; Silva Aguirre, Victor; Skrutskie, M. 
F.; Slosar, Anže; Smith, Michael; Smith, Verne V.; Sobeck, Jennifer; Somers, Garrett; Souto, Diogo; Stark, David V.; Stassun, Keivan G.; Steinmetz, Matthias; Stello, Dennis; Storchi Bergmann, Thaisa; Strauss, Michael A.; Streblyanska, Alina; Stringfellow, Guy S.; Suarez, Genaro; Sun, Jing; Taghizadeh-Popp, Manuchehr; Tang, Baitian; Tao, Charling; Tayar, Jamie; Tembe, Mita; Thomas, Daniel; Tinker, Jeremy; Tojeiro, Rita; Tremonti, Christy; Troup, Nicholas; Trump, Jonathan R.; Unda-Sanzana, Eduardo; Valenzuela, O.; Van den Bosch, Remco; Vargas-Magaña, Mariana; Vazquez, Jose Alberto; Villanova, Sandro; Vivek, M.; Vogt, Nicole; Wake, David; Walterbos, Rene; Wang, Yuting; Wang, Enci; Weaver, Benjamin Alan; Weijmans, Anne-Marie; Weinberg, David H.; Westfall, Kyle B.; Whelan, David G.; Wilcots, Eric; Wild, Vivienne; Williams, Rob A.; Wilson, John; Wood-Vasey, W. M.; Wylezalek, Dominika; Xiao, Ting; Yan, Renbin; Yang, Meng; Ybarra, Jason E.; Yeche, Christophe; Yuan, Fang-Ting; Zakamska, Nadia; Zamora, Olga; Zasowski, Gail; Zhang, Kai; Zhao, Cheng; Zhao, Gong-Bo; Zheng, Zheng; Zheng, Zheng; Zhou, Zhi-Min; Zhu, Guangtun; Zinn, Joel C.; Zou, Hu
2017-12-01
The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) began observations in 2014 July. It pursues three core programs: the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2), Mapping Nearby Galaxies at APO (MaNGA), and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS). As well as its core program, eBOSS contains two major subprograms: the Time Domain Spectroscopic Survey (TDSS) and the SPectroscopic IDentification of ERosita Sources (SPIDERS). This paper describes the first data release from SDSS-IV, Data Release 13 (DR13). DR13 makes publicly available the first 1390 spatially resolved integral field unit observations of nearby galaxies from MaNGA. It includes new observations from eBOSS, completing the Sloan Extended QUasar, Emission-line galaxy, Luminous red galaxy Survey (SEQUELS), which also targeted variability-selected objects and X-ray-selected objects. DR13 includes new reductions of the SDSS-III BOSS data, improving the spectrophotometric calibration and redshift classification, and new reductions of the SDSS-III APOGEE-1 data, improving stellar parameters for dwarf stars and cooler stars. DR13 provides more robust and precise photometric calibrations. Value-added target catalogs relevant for eBOSS, TDSS, and SPIDERS and an updated red-clump catalog for APOGEE are also available. This paper describes the location and format of the data and provides references to important technical papers. The SDSS web site, http://www.sdss.org, provides links to the data, tutorials, examples of data access, and extensive documentation of the reduction and analysis procedures. DR13 is the first of a scheduled set that will contain new data and analyses from the planned ∼6 yr operations of SDSS-IV.
Modelling the probability of building fires
Directory of Open Access Journals (Sweden)
Vojtěch Barták
2014-12-01
Systematic spatial risk analysis plays a crucial role in preventing emergencies. In the Czech Republic, risk mapping is currently based on the risk accumulation principle, area vulnerability, and preparedness levels of Integrated Rescue System components. Expert estimates are used to determine risk levels for individual hazard types, while statistical modelling based on data from actual incidents and their possible causes is not used. Our model study, conducted in cooperation with the Fire Rescue Service of the Czech Republic within the Liberec and Hradec Králové regions, presents an analytical procedure leading to the creation of building fire probability maps based on recent incidents in the studied areas and on building parameters. In order to estimate the probability of building fires, a prediction model based on logistic regression was used. Probability of fire calculated by means of model parameters and attributes of specific buildings can subsequently be visualized in probability maps.
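A minimal sketch of the modelling step, assuming synthetic building attributes rather than the Czech incident data: a logistic regression, fitted here by plain gradient descent, maps building parameters to a fire probability.

```python
import math
import random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))      # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.02, epochs=300):
    """Fit P(fire | x) by per-sample gradient descent on the logistic loss."""
    w = [0.0] * (len(X[0]) + 1)        # intercept followed by one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = sigmoid(z) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, x):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))

# Hypothetical building attributes: (floor area / 100 m^2, age in decades)
rng = random.Random(0)
X = [(rng.uniform(0, 5), rng.uniform(0, 8)) for _ in range(400)]
# Assumed generating process: larger and older buildings burn more often
y = [1 if rng.random() < sigmoid(-3 + 0.5 * a + 0.3 * g) else 0 for a, g in X]

w = fit_logistic(X, y)
p_small_new = predict(w, (0.5, 1.0))
p_large_old = predict(w, (4.5, 7.0))
print(p_small_new < p_large_old)    # larger, older building gets the higher fire probability
```

The fitted probabilities for individual buildings are exactly what would be rasterized into the probability maps the abstract describes.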
Encounter Probability of Individual Wave Height
DEFF Research Database (Denmark)
Liu, Z.; Burcharth, H. F.
1998-01-01
wave height corresponding to a certain exceedence probability within a structure lifetime (encounter probability), based on the statistical analysis of long-term extreme significant wave height. Then the design individual wave height is calculated as the expected maximum individual wave height associated with the design significant wave height, with the assumption that the individual wave heights follow the Rayleigh distribution. However, the exceedence probability of such a design individual wave height within the structure lifetime is unknown. The paper presents a method for the determination of the design individual wave height corresponding to an exceedence probability within the structure lifetime, given the long-term extreme significant wave height. The method can also be applied for estimation of the number of relatively large waves for fatigue analysis of constructions.
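The two quantities discussed above have simple closed forms. A sketch, assuming the usual return-period formulation of encounter probability and the standard Rayleigh result for the most probable maximum of n individual waves (the parameter values are illustrative, not from the paper):

```python
import math

def encounter_probability(return_period, lifetime):
    """P that the T-year significant wave height is exceeded at least once in L years."""
    return 1.0 - (1.0 - 1.0 / return_period) ** lifetime

def most_probable_max_wave(hs, n_waves):
    """Most probable maximum of n individual Rayleigh-distributed wave heights."""
    return hs * math.sqrt(math.log(n_waves) / 2.0)

p = encounter_probability(100, 50)        # 100-year event, 50-year structure lifetime
h = most_probable_max_wave(4.0, 3000)     # Hs = 4 m, ~3000 waves in a design storm
print(round(p, 3), round(h, 2))           # → 0.395 8.0
```

The paper's contribution is precisely that the 0.395 encounter probability applies to the significant wave height, while the exceedence probability of the 8 m individual wave within the lifetime is initially unknown.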
Predicting binary choices from probability phrase meanings.
Wallsten, Thomas S; Jang, Yoonhee
2008-08-01
The issues of how individuals decide which of two events is more likely and of how they understand probability phrases both involve judging relative likelihoods. In this study, we investigated whether derived scales representing probability phrase meanings could be used within a choice model to predict independently observed binary choices. If they can, this simultaneously provides support for our model and suggests that the phrase meanings are measured meaningfully. The model assumes that, when deciding which of two events is more likely, judges take a single sample from memory regarding each event and respond accordingly. The model predicts choice probabilities by using the scaled meanings of individually selected probability phrases as proxies for confidence distributions associated with sampling from memory. Predictions are sustained for 34 of 41 participants but, nevertheless, are biased slightly low. Sequential sampling models improve the fit. The results have both theoretical and applied implications.
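The single-sample choice rule can be simulated directly. A sketch, assuming normal confidence distributions as stand-ins for the scaled phrase meanings (the means and spreads below are hypothetical, not the study's derived scales):

```python
import random

def p_choose_a(mu_a, sd_a, mu_b, sd_b, trials=20000, seed=7):
    """Monte Carlo estimate of P(choose A) under the single-sample rule:
    draw one value per event from its confidence distribution, pick the larger."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(trials)
               if rng.gauss(mu_a, sd_a) > rng.gauss(mu_b, sd_b))
    return wins / trials

# Hypothetical scaled phrase meanings: "likely" ~ N(0.7, 0.1), "toss-up" ~ N(0.5, 0.1)
p = p_choose_a(0.7, 0.1, 0.5, 0.1)
print(round(p, 2))
```

With these assumed distributions the analytic value is Phi(0.2 / (0.1 * sqrt(2))) ≈ 0.92, which the simulation approximates.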
Certainties and probabilities of the IPCC
International Nuclear Information System (INIS)
2004-01-01
Based on an analysis of information about climate evolution, simulations of global warming, and the snow cover monitoring of Meteo-France, the IPCC presented its certainties and probabilities concerning the greenhouse effect. (A.L.B.)
The probability factor in establishing causation
International Nuclear Information System (INIS)
Hebert, J.
1988-01-01
This paper discusses the possibilities and limitations of methods using the probability factor in establishing the causal link between bodily injury, whether immediate or delayed, and the nuclear incident presumed to have caused it. (NEA)
Bayesian optimization for computationally extensive probability distributions.
Tamura, Ryo; Hukushima, Koji
2018-01-01
An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in the effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distributions is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distributions in comparison to those by the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently by combining the steepest descent method and thus it is a powerful tool to search for a better maximizer of computationally extensive probability distributions.
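A compact stand-in for the procedure: a pure-Python Gaussian process surrogate with an upper-confidence-bound acquisition (the paper's acquisition details may differ), used to maximize a cheap stand-in for a computationally extensive log-posterior. The target function, kernel length scale, and UCB constant are all assumptions for illustration.

```python
import math
import random

def rbf(a, b, ell=0.3):
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting; fine for the small systems here."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, jitter=1e-4):
    """GP posterior mean and standard deviation at xq given observations (xs, ys)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (jitter if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    kq = [rbf(x, xq) for x in xs]
    mean = sum(a * k for a, k in zip(solve(K, ys), kq))
    var = rbf(xq, xq) - sum(k * v for k, v in zip(kq, solve(K, kq)))
    return mean, math.sqrt(max(var, 1e-12))

def log_density(x):
    # Stand-in for an expensive-to-evaluate posterior, peaked at x = 0.7 (hypothetical)
    return -10.0 * (x - 0.7) ** 2

rng = random.Random(1)
xs = [rng.random() for _ in range(3)]          # small initial design
ys = [log_density(x) for x in xs]
grid = [i / 200.0 for i in range(201)]
for _ in range(12):                            # BO loop with a UCB acquisition
    def ucb(g):
        m, s = gp_posterior(xs, ys, g)
        return m + 2.0 * s
    x_next = max(grid, key=ucb)
    xs.append(x_next)
    ys.append(log_density(x_next))
best = xs[ys.index(max(ys))]
print(round(best, 2))   # close to the true maximizer 0.7 after few evaluations
```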
Characteristic length of the knotting probability revisited
International Nuclear Information System (INIS)
Uehara, Erica; Deguchi, Tetsuo
2015-01-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(−N/N_K), where the estimates of parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA. (paper)
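Estimating the characteristic length N_K amounts to fitting the exponentially decaying part of the knotting probability. A sketch on synthetic data with an assumed N_K = 250 (not the paper's simulation values), using a linear least-squares fit on the log scale:

```python
import math

# Synthetic knotting probabilities P_K(N) = c * exp(-N / N_K), with assumed N_K = 250
true_NK = 250.0
Ns = [100, 200, 300, 400, 500, 600]
Ps = [0.8 * math.exp(-n / true_NK) for n in Ns]

# Linear least squares on log P = log c - N / N_K; the slope is -1 / N_K
xs, ys = Ns, [math.log(p) for p in Ps]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
NK_hat = -1.0 / slope
print(round(NK_hat))   # → 250
```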
Probability of Survival Decision Aid (PSDA)
National Research Council Canada - National Science Library
Xu, Xiaojiang; Amin, Mitesh; Santee, William R
2008-01-01
A Probability of Survival Decision Aid (PSDA) is developed to predict survival time for hypothermia and dehydration during prolonged exposure at sea in both air and water for a wide range of environmental conditions...
Probability and statistics with integrated software routines
Deep, Ronald
2005-01-01
Probability & Statistics with Integrated Software Routines is a calculus-based treatment of probability concurrent with and integrated with statistics through interactive, tailored software applications designed to illustrate the phenomena of probability and statistics. The software programs make the book unique. The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc., all wrapped up in the scientific method. See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698). Incorporates more than 1,000 engaging problems with answers; includes more than 300 solved examples; uses varied problem solving methods.
Determining probabilities of geologic events and processes
International Nuclear Information System (INIS)
Hunter, R.L.; Mann, C.J.; Cranwell, R.M.
1985-01-01
The Environmental Protection Agency has recently published a probabilistic standard for releases of high-level radioactive waste from a mined geologic repository. The standard sets limits for contaminant releases with more than one chance in 100 of occurring within 10,000 years, and less strict limits for releases of lower probability. The standard offers no methods for determining probabilities of geologic events and processes, and no consensus exists in the waste-management community on how to do this. Sandia National Laboratories is developing a general method for determining probabilities of a given set of geologic events and processes. In addition, we will develop a repeatable method for dealing with events and processes whose probability cannot be determined. 22 refs., 4 figs
Pre-Aggregation with Probability Distributions
DEFF Research Database (Denmark)
Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach
2006-01-01
Motivated by the increasing need to analyze complex, uncertain multidimensional data, this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how the distributions can be subsequently used in pre-aggregation. Since the probability distributions can become large, we show how to achieve good time and space efficiency by approximating the distributions. We present the results of several experiments that demonstrate the effectiveness of our methods. The work is motivated with a real-world case study, based on our collaboration with a leading Danish vendor of location-based services. This paper is the first to consider the approximate processing of probabilistic OLAP queries over probability distributions.
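One way to picture the approach (an illustration of the idea, not the paper's algorithm): represent each uncertain measure as a coarse histogram, then pre-aggregate two independent measures by convolving the histograms.

```python
import random
from collections import Counter

def histogram(values, bin_width):
    """Approximate a value distribution by a coarse histogram (probability per bin)."""
    c = Counter(round(v / bin_width) * bin_width for v in values)
    n = len(values)
    return {b: k / n for b, k in c.items()}

def convolve(d1, d2):
    """Distribution of the sum of two independent uncertain measures."""
    out = {}
    for v1, p1 in d1.items():
        for v2, p2 in d2.items():
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

random.seed(3)
a = histogram([random.gauss(10, 1) for _ in range(5000)], 0.5)   # uncertain measure A
b = histogram([random.gauss(20, 2) for _ in range(5000)], 0.5)   # uncertain measure B
s = convolve(a, b)                       # pre-aggregated distribution of A + B
mean = sum(v * p for v, p in s.items())
print(round(mean))   # → 30
```

The bin width controls the time/space versus accuracy trade-off the abstract alludes to: coarser bins give smaller distributions and cheaper convolutions.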
Probability of spent fuel transportation accidents
International Nuclear Information System (INIS)
McClure, J.D.
1981-07-01
The transported volume of spent fuel, incident/accident experience and accident environment probabilities were reviewed in order to provide an estimate of spent fuel accident probabilities. In particular, the accident review assessed the accident experience for large casks of the type that could transport spent (irradiated) nuclear fuel. This review determined that since 1971, the beginning of official US Department of Transportation record keeping for accidents/incidents, there has been one spent fuel transportation accident. This information, coupled with estimated annual shipping volumes for spent fuel, indicated an estimated spent fuel transport accident rate of 5 × 10^-7 accidents per mile. This is consistent with ordinary truck accident rates. A comparison of accident environments and regulatory test environments suggests that the probability of truck accidents exceeding the regulatory test for impact is approximately 10^-9 per mile.
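The quoted per-mile rate converts to an annual accident probability once an annual mileage is assumed (the mileage below is hypothetical, not from the report); for rare events a Poisson model gives the chance of at least one accident:

```python
import math

rate_per_mile = 5e-7      # estimated accident rate from the review
annual_miles = 1.0e5      # hypothetical annual spent fuel shipping mileage

lam = rate_per_mile * annual_miles        # expected accidents per year
p_any = 1.0 - math.exp(-lam)              # Poisson probability of at least one accident
print(round(lam, 6), round(p_any, 4))     # → 0.05 0.0488
```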
Sampling, Probability Models and Statistical Reasoning: Statistical Inference
Indian Academy of Sciences (India)
Sampling, Probability Models and Statistical Reasoning: Statistical Inference. Mohan Delampady, V R Padmawar. General Article, Resonance – Journal of Science Education, Volume 1, Issue 5, May 1996, pp. 49-58.
Imprecise Probability Methods for Weapons UQ
Energy Technology Data Exchange (ETDEWEB)
Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-13
Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.
Escape and transmission probabilities in cylindrical geometry
International Nuclear Information System (INIS)
Bjerke, M.A.
1980-01-01
An improved technique for the generation of escape and transmission probabilities in cylindrical geometry was applied to the existing resonance cross section processing code ROLAIDS. The algorithm of Hwang and Toppel [ANL-FRA-TM-118] (with modifications) was employed. The probabilities generated were found to be as accurate as those given by the method previously applied in ROLAIDS, while requiring much less computer core storage and CPU time.
Probability and statistics for computer science
Johnson, James L
2011-01-01
Comprehensive and thorough development of both probability and statistics for serious computer scientists; goal-oriented: "to present the mathematical analysis underlying probability results". Special emphases on simulation and discrete decision theory; a mathematically rich but self-contained text at a gentle pace; review of calculus and linear algebra in an appendix; mathematical interludes (in each chapter) which examine mathematical techniques in the context of probabilistic or statistical importance; numerous section exercises, summaries, historical notes, and Further Readings for reinforcement.
Collision Probabilities for Finite Cylinders and Cuboids
Energy Technology Data Exchange (ETDEWEB)
Carlvik, I
1967-05-15
Analytical formulae have been derived for the collision probabilities of homogeneous finite cylinders and cuboids. The formula for the finite cylinder contains double integrals, and the formula for the cuboid only single integrals. Collision probabilities have been calculated by means of the formulae and compared with values obtained by other authors. It was found that the calculations using the analytical formulae are much quicker and give higher accuracy than Monte Carlo calculations.
Interactive design of probability density functions for shape grammars
Dang, Minh
2015-11-02
A shape grammar defines a procedural shape space containing a variety of models of the same class, e.g. buildings, trees, furniture, airplanes, bikes, etc. We present a framework that enables a user to interactively design a probability density function (pdf) over such a shape space and to sample models according to the designed pdf. First, we propose a user interface that enables a user to quickly provide preference scores for selected shapes and suggest sampling strategies to decide which models to present to the user to evaluate. Second, we propose a novel kernel function to encode the similarity between two procedural models. Third, we propose a framework to interpolate user preference scores by combining multiple techniques: function factorization, Gaussian process regression, automatic relevance determination, and l1 regularization. Fourth, we modify the original grammars to generate models with a pdf proportional to the user preference scores. Finally, we provide evaluations of our user interface and framework parameters and a comparison to other exploratory modeling techniques using modeling tasks in five example shape spaces: furniture, low-rise buildings, skyscrapers, airplanes, and vegetation.
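A much-simplified stand-in for the interpolation and sampling steps: interpolate sparse user preference scores with a kernel smoother (instead of the paper's GP/relevance-determination/l1 machinery) and sample candidate models with probability proportional to the interpolated score. The 1-D shape parameter and the scores below are hypothetical.

```python
import math
import random

def similarity(a, b, ell=1.0):
    """Stand-in kernel for procedural-model similarity (RBF on a 1-D shape parameter)."""
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def interpolate_score(rated, x):
    """Kernel-weighted average of user preference scores (Nadaraya-Watson smoother)."""
    wsum = sum(similarity(xr, x) for xr, _ in rated)
    return sum(similarity(xr, x) * s for xr, s in rated) / wsum

def sample_by_preference(rated, candidates, rng):
    """Draw a candidate model with probability proportional to its interpolated score."""
    scores = [max(1e-9, interpolate_score(rated, c)) for c in candidates]
    total = sum(scores)
    r = rng.random() * total
    acc = 0.0
    for c, s in zip(candidates, scores):
        acc += s
        if acc >= r:
            return c
    return candidates[-1]

rated = [(0.0, 0.1), (5.0, 0.9), (10.0, 0.2)]   # (shape parameter, user score), hypothetical
cands = [i * 0.5 for i in range(21)]            # candidate models at parameters 0.0 .. 10.0
rng = random.Random(42)
draws = [sample_by_preference(rated, cands, rng) for _ in range(2000)]
near_peak = sum(1 for d in draws if 3.0 <= d <= 7.0) / len(draws)
print(round(near_peak, 2))   # sampling mass concentrates around the preferred region near 5
```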
Skin scoring in systemic sclerosis
DEFF Research Database (Denmark)
Zachariae, Hugh; Bjerring, Peter; Halkier-Sørensen, Lars
1994-01-01
Forty-one patients with systemic sclerosis were investigated with a new and simple skin score method measuring the degree of thickening and pliability in seven regions together with area involvement in each region. The highest values were, as expected, found in diffuse cutaneous systemic sclerosis (type III SS) and the lowest in limited cutaneous systemic sclerosis (type I SS) with no lesions extending above wrists and ankles. A positive correlation was found to the aminoterminal propeptide of type III procollagen, a serological marker for synthesis of type III collagen. The skin score
Confidence Intervals for True Scores Using the Skew-Normal Distribution
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
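The Score method referred to above is commonly identified with the Wilson interval, which inverts the normal approximation to the binomial. A sketch for a raw score of 30 correct out of 40 items (illustrative numbers, not the article's data):

```python
import math

def score_interval(k, n, z=1.96):
    """Wilson 'Score' CI for a binomial proportion (normal approx to the binomial)."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = score_interval(30, 40)    # 30 correct out of 40 items, 95% confidence
print(round(lo, 3), round(hi, 3))  # → 0.598 0.858
```

Unlike the naive Wald interval, the Wilson interval never extends outside [0, 1], one reason its coverage probabilities stay close to the nominal level.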
The persistence of depression score
Spijker, J.; de Graaf, R.; Ormel, J.; Nolen, W. A.; Grobbee, D. E.; Burger, H.
2006-01-01
Objective: To construct a score that allows prediction of major depressive episode (MDE) persistence in individuals with MDE using determinants of persistence identified in previous research. Method: Data were derived from 250 subjects from the general population with new MDE according to DSM-III-R.
Score distributions in information retrieval
Arampatzis, A.; Robertson, S.; Kamps, J.
2009-01-01
We review the history of modeling score distributions, focusing on the mixture of normal-exponential by investigating the theoretical as well as the empirical evidence supporting its use. We discuss previously suggested conditions which valid binary mixture models should satisfy, such as the
Developing Scoring Algorithms (Earlier Methods)
We developed scoring procedures to convert screener responses to estimates of individual dietary intake for fruits and vegetables, dairy, added sugars, whole grains, fiber, and calcium using the What We Eat in America 24-hour dietary recall data from the 2003-2006 NHANES.
Causal inference, probability theory, and graphical insights.
Baker, Stuart G
2013-11-10
Causal inference from observational studies is a fundamental topic in biostatistics. The causal graph literature typically views probability theory as insufficient to express causal concepts in observational studies. In contrast, the view here is that probability theory is a desirable and sufficient basis for many topics in causal inference for the following two reasons. First, probability theory is generally more flexible than causal graphs: Besides explaining such causal graph topics as M-bias (adjusting for a collider) and bias amplification and attenuation (when adjusting for an instrumental variable), probability theory is also the foundation of the paired availability design for historical controls, which does not fit into a causal graph framework. Second, probability theory is the basis for insightful graphical displays including the BK-Plot for understanding Simpson's paradox with a binary confounder, the BK2-Plot for understanding bias amplification and attenuation in the presence of an unobserved binary confounder, and the PAD-Plot for understanding the principal stratification component of the paired availability design. Published 2013. This article is a US Government work and is in the public domain in the USA.
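Simpson's paradox with a binary confounder, the situation the BK-Plot is designed to display, can be reproduced with a few counts (the classic hypothetical numbers below, not data from the article):

```python
# Counts (recovered, total) for treated/control within two strata of a binary confounder Z
strata = {
    "Z=0": {"treated": (81, 87), "control": (234, 270)},
    "Z=1": {"treated": (192, 263), "control": (55, 80)},
}

def rate(rec_total):
    rec, total = rec_total
    return rec / total

# Within every stratum the treatment looks better...
within = all(rate(g["treated"]) > rate(g["control"]) for g in strata.values())

# ...but aggregated over strata the ordering reverses
t_rec = sum(g["treated"][0] for g in strata.values())
t_tot = sum(g["treated"][1] for g in strata.values())
c_rec = sum(g["control"][0] for g in strata.values())
c_tot = sum(g["control"][1] for g in strata.values())
aggregated_reversed = (t_rec / t_tot) < (c_rec / c_tot)
print(within, aggregated_reversed)   # → True True
```

The reversal occurs because the confounder Z is unevenly distributed across treatment groups, exactly the dependence a BK-Plot makes visible.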
2014-01-01
Background The aim of the study was to construct a new scoring system for more accurate diagnostics of acute appendicitis. Applying the new score in clinical practice could reduce the need for potentially harmful diagnostic imaging. Methods This prospective study enrolled 829 adults presenting with clinical suspicion of appendicitis, including 392 (47%) patients with appendicitis. The collected data included clinical findings and symptoms together with laboratory tests (white cell count, neutrophil count and C-reactive protein), and the timing of the onset of symptoms. The score was constructed by logistic regression analysis using multiple imputations for missing values. Performance of the constructed score in patients with complete data (n = 725) was compared with the Alvarado score and the Appendicitis inflammatory response score. Results 343 (47%) of patients with complete data had appendicitis. 199 (58%) patients with appendicitis had a score value of at least 16 and were classified as the high probability group with 93% specificity. Patients with a score below 11 were classified as having a low probability of appendicitis. Only 4% of patients with appendicitis had a score below 11, and none of them had complicated appendicitis. In contrast, 207 (54%) of non-appendicitis patients had a score below 11. There were no cases of complicated appendicitis in the low probability group. The area under the ROC curve was significantly larger with the new score, 0.882 (95% CI 0.858 – 0.906), compared with the AUC of the Alvarado score, 0.790 (0.758 – 0.823), and the Appendicitis inflammatory response score, 0.810 (0.779 – 0.840). Conclusions The new diagnostic score is fast and accurate in categorizing patients with suspected appendicitis, and roughly halves the need for diagnostic imaging. PMID:24970111
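The AUROC figures reported above can be computed without plotting any curve, via the Mann-Whitney statistic; the score values below are hypothetical, not the study data:

```python
def auc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic:
    the probability that a random positive case scores above a random negative one."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical diagnostic score values for a handful of patients
appendicitis = [18, 16, 21, 14, 19, 12, 17]
no_appendicitis = [8, 11, 13, 6, 15, 10, 9]
print(round(auc(appendicitis, no_appendicitis), 3))   # → 0.939
```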
On the Possibility of Assigning Probabilities to Singular Cases, or: Probability Is Subjective Too!
Directory of Open Access Journals (Sweden)
Mark R. Crovelli
2009-06-01
Full Text Available Both Ludwig von Mises and Richard von Mises claimed that numerical probability could not be legitimately applied to singular cases. This paper challenges this aspect of the von Mises brothers’ theory of probability. It is argued that their denial that numerical probability could be applied to singular cases was based solely upon Richard von Mises’ exceptionally restrictive definition of probability. This paper challenges Richard von Mises’ definition of probability by arguing that the definition of probability necessarily depends upon whether the world is governed by time-invariant causal laws. It is argued that if the world is governed by time-invariant causal laws, a subjective definition of probability must be adopted. It is further argued that both the nature of human action and the relative frequency method for calculating numerical probabilities presuppose that the world is indeed governed by time-invariant causal laws. It is finally argued that the subjective definition of probability undercuts the von Mises claim that numerical probability cannot legitimately be applied to singular, non-replicable cases.
DEFF Research Database (Denmark)
Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo
2017-01-01
In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness–death model. The method is based on binomial regression, where the response is the indicator of the occupancy for the given state along time. Randomly weighted score...
Uncertainty relation and probability. Numerical illustration
International Nuclear Information System (INIS)
Fujikawa, Kazuo; Umetsu, Koichiro
2011-01-01
The uncertainty relation and the probability interpretation of quantum mechanics are intrinsically connected, as is evidenced by the evaluation of standard deviations. It is thus natural to ask if one can associate a very small uncertainty product of suitably sampled events with a very small probability. We have shown elsewhere that some examples of the evasion of the uncertainty relation noted in the past are in fact understood in this way. We here numerically illustrate that a very small uncertainty product is realized if one performs a suitable sampling of measured data that occur with a very small probability. We introduce a notion of cyclic measurements. It is also shown that our analysis is consistent with the Landau-Pollak-type uncertainty relation. It is suggested that the present analysis may help reconcile the contradicting views about the 'standard quantum limit' in the detection of gravitational waves. (author)
Comparing coefficients of nested nonlinear probability models
DEFF Research Database (Denmark)
Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders
2011-01-01
In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements it. The KHB method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability model.
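A minimal sketch of the KHB idea on simulated data, using a hand-rolled Newton logit rather than the khb program itself: the reduced model replaces the control Z with its residual from a linear regression of Z on X, so the coefficients on X in the two models share the same scale and their difference estimates the mediated part. All data and coefficient values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)          # control correlated with x
eta = 1.0 * x + 1.0 * z                   # latent index
y = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

def logit_fit(X, y, iters=50):
    """Newton-Raphson logistic regression; X includes an intercept column."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])   # Hessian of the log-likelihood
        b += np.linalg.solve(H, X.T @ (y - p))   # Newton step
    return b

ones = np.ones(n)
# Full model: y ~ x + z.
b_full = logit_fit(np.column_stack([ones, x, z]), y)
# KHB reduced model: replace z by its residual given x, which keeps the
# residual variance (and hence the logit scale) identical to the full model.
r = z - np.polyval(np.polyfit(x, z, 1), x)
b_red = logit_fit(np.column_stack([ones, x, r]), y)

print("total effect of x:   ", round(b_red[1], 3))
print("direct effect of x:  ", round(b_full[1], 3))
print("mediated through z:  ", round(b_red[1] - b_full[1], 3))
```

Because both models have the same residual on the latent scale, the difference is not contaminated by the rescaling bias that a naive comparison of the two logit fits would suffer.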
A basic course in probability theory
Bhattacharya, Rabi
2016-01-01
This text develops the necessary background in probability theory underlying diverse treatments of stochastic processes and their wide-ranging applications. In this second edition, the text has been reorganized for didactic purposes, new exercises have been added and basic theory has been expanded. General Markov dependent sequences and their convergence to equilibrium is the subject of an entirely new chapter. The introduction of conditional expectation and conditional probability very early in the text maintains the pedagogic innovation of the first edition; conditional expectation is illustrated in detail in the context of an expanded treatment of martingales, the Markov property, and the strong Markov property. Weak convergence of probabilities on metric spaces and Brownian motion are two topics to highlight. A selection of large deviation and/or concentration inequalities ranging from those of Chebyshev, Cramer–Chernoff, Bahadur–Rao, to Hoeffding have been added, with illustrative comparisons of thei...
Ignition probabilities for Compact Ignition Tokamak designs
International Nuclear Information System (INIS)
Stotler, D.P.; Goldston, R.J.
1989-09-01
A global power balance code employing Monte Carlo techniques has been developed to study the 'probability of ignition' and has been applied to several different configurations of the Compact Ignition Tokamak (CIT). Probability distributions for the critical physics parameters in the code were estimated using existing experimental data. This included a statistical evaluation of the uncertainty in extrapolating the energy confinement time. A substantial probability of ignition is predicted for CIT if peaked density profiles can be achieved or if one of the two higher plasma current configurations is employed. In other cases, values of the energy multiplication factor Q of order 10 are generally obtained. The Ignitor-U and ARIES designs are also examined briefly. Comparisons of our empirically based confinement assumptions with two theory-based transport models yield conflicting results. 41 refs., 11 figs
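The Monte Carlo approach can be illustrated with a toy power balance: draw the uncertain physics parameters from assumed distributions and report the fraction of draws that satisfy an ignition criterion. Everything below (the `ignites` criterion, the threshold 30.0, the distributions) is invented for illustration; the actual study estimated its parameter distributions from experimental databases.

```python
import random

random.seed(42)

def ignites(confinement_time, density_peaking, current_ma):
    """Stand-in ignition criterion replacing the real power balance code."""
    return confinement_time * density_peaking * current_ma > 30.0

trials = 100_000
hits = 0
for _ in range(trials):
    tau = random.lognormvariate(0.0, 0.3)   # energy confinement time (s)
    peaking = random.uniform(1.5, 3.0)      # density profile peaking factor
    current = random.gauss(11.0, 1.0)       # plasma current (MA)
    hits += ignites(tau, peaking, current)

print(f"estimated probability of ignition: {hits / trials:.3f}")
```

Sensitivity to a design choice (e.g. a higher plasma current configuration) is then just a rerun with a shifted input distribution.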
Independent events in elementary probability theory
Csenki, Attila
2011-07-01
In Probability and Statistics taught to mathematicians as a first introduction or to a non-mathematical audience, joint independence of events is introduced by requiring that the multiplication rule is satisfied. The following statement is usually tacitly assumed to hold (and, at best, intuitively motivated): if the n events E1, E2, …, En are jointly independent, then any two events A and B built in finitely many steps from two disjoint subsets of E1, E2, …, En are also independent; the operations 'union', 'intersection' and 'complementation' are permitted only when forming the events A and B. Here we examine this statement from the point of view of elementary probability theory. The approach described here is accessible also to users of probability theory and is believed to be novel.
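The statement is easy to check on a small product space. The sketch below builds three jointly independent events with exact rational probabilities, forms A = E1 ∪ E2 and B = E3ᶜ from disjoint subsets of the events, and verifies the multiplication rule P(A ∩ B) = P(A)·P(B); the chosen probabilities are arbitrary.

```python
from itertools import product
from fractions import Fraction

# Product space {0,1}^3 with independent coordinates, P(E_i) = p[i].
p = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]
outcomes = set(product([0, 1], repeat=3))

def prob(event):
    """Sum the product-measure weights of the outcomes in `event` (exactly)."""
    total = Fraction(0)
    for w in event:
        weight = Fraction(1)
        for i, bit in enumerate(w):
            weight *= p[i] if bit else 1 - p[i]
        total += weight
    return total

E1 = {w for w in outcomes if w[0] == 1}
E2 = {w for w in outcomes if w[1] == 1}
E3 = {w for w in outcomes if w[2] == 1}

A = E1 | E2              # built from the subset {E1, E2} by union
B = outcomes - E3        # built from the disjoint subset {E3} by complementation
print("P(A & B) =", prob(A & B), " P(A)P(B) =", prob(A) * prob(B))
```

Using `Fraction` keeps the check exact, so equality holds literally rather than up to floating-point error.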
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation term controls the amount of reinforcement that compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
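A loose sketch in the spirit of PPRs, not the paper's general formulation, for the simplest possible inference (a Gaussian mean): each observation gets its own reinforcement that can absorb part of its residual, an L1-style penalty (threshold `lam`) keeps most reinforcements at zero, and points whose reinforcement ends up nonzero are flagged with an abnormality degree. With this likelihood and penalty the scheme reduces to a Huber-type M-estimator.

```python
import random

random.seed(1)
# Clean data around 5.0 plus a few gross outliers (the AFDs).
data = [random.gauss(5.0, 1.0) for _ in range(200)] + [50.0, 60.0, -40.0]

def robust_mean(xs, lam=3.0, iters=100):
    """Alternate two closed-form updates:
    (1) soft-threshold each residual to get the reinforcement r_i, so only
        residuals larger than lam are absorbed (L1 penalty keeps most r_i = 0);
    (2) refit the mean on the reinforcement-corrected data x_i - r_i."""
    m = sum(xs) / len(xs)
    r = [0.0] * len(xs)
    for _ in range(iters):
        for i, x in enumerate(xs):
            e = x - m
            r[i] = (abs(e) - lam) * (1 if e > 0 else -1) if abs(e) > lam else 0.0
        m = sum(x - ri for x, ri in zip(xs, r)) / len(xs)
    return m, r

m, r = robust_mean(data)
outliers = [i for i, ri in enumerate(r) if ri != 0.0]
print("robust mean:", round(m, 2))   # close to 5.0 despite the outliers
print("flagged points:", outliers)
```

The nonzero reinforcements double as the abnormality degrees mentioned in the abstract: inspecting `r` shows how far each flagged point had to be "corrected" to fit the model.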