Sensitivity analysis on flexible road pavement life cycle cost model
African Journals Online (AJOL)
of sensitivity analysis on a developed flexible pavement life cycle cost model using varying discount rate. The study .... organizations and specific projects needs based. Life-cycle ... developed and completed urban road infrastructure corridor ...
Sequence length variation, indel costs, and congruence in sensitivity analysis
DEFF Research Database (Denmark)
Aagesen, Lone; Petersen, Gitte; Seberg, Ole
2005-01-01
The behavior of two topological and four character-based congruence measures was explored using different indel treatments in three empirical data sets, each with different alignment difficulties. The analyses were done using direct optimization within a sensitivity analysis framework in which the cost of indels was varied. Indels were treated either as a fifth character state, or strings of contiguous gaps were considered single events by using linear affine gap cost. Congruence consistently improved when indels were treated as single events, but no congruence measure appeared as the obviously preferable one. However, when combining enough data, all congruence measures clearly tended to select the same alignment cost set as the optimal one. Disagreement among congruence measures was mostly caused by a dominant fragment or a data partition that included all or most of the length variation...
Economic impact analysis for global warming: Sensitivity analysis for cost and benefit estimates
International Nuclear Information System (INIS)
Ierland, E.C. van; Derksen, L.
1994-01-01
Proper policies for the prevention or mitigation of the effects of global warming require profound analysis of the costs and benefits of alternative policy strategies. Given the uncertainty about the scientific aspects of the process of global warming, in this paper a sensitivity analysis for the impact of various estimates of costs and benefits of greenhouse gas reduction strategies is carried out to analyze the potential social and economic impacts of climate change
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost
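The audit criterion described above, whether a sensitivity-analysis result crosses a cost-utility threshold that the base case fell below, can be sketched as a small check; the function and the dollar figures are illustrative assumptions, not taken from the article.

```python
# Illustrative sketch (not the article's audit procedure): flag sensitivity
# analyses whose cost-utility ratio crosses a willingness-to-pay threshold
# above the base-case result, i.e. results that could change the conclusion.

def crosses_threshold(base_case, sensitivity_results, threshold):
    """Return True if the base case is at or below the threshold but at least
    one sensitivity-analysis result exceeds it (a conclusion-changing result)."""
    if base_case > threshold:
        return False  # base case already above threshold; no upward crossing
    return any(r > threshold for r in sensitivity_results)

# Base case $42,000/QALY against a $50,000/QALY threshold.
print(crosses_threshold(42_000, [48_000, 61_000], 50_000))  # True
print(crosses_threshold(42_000, [44_000, 49_500], 50_000))  # False
```

The same check could be run against each of the thresholds mentioned in the article ($US20,000, $US50,000, $US100,000 per QALY).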
Integrated thermal and nonthermal treatment technology and subsystem cost sensitivity analysis
International Nuclear Information System (INIS)
Harvego, L.A.; Schafer, J.J.
1997-02-01
The U.S. Department of Energy's (DOE) Environmental Management Office of Science and Technology (EM-50) authorized studies on alternative systems for treating contact-handled DOE mixed low-level radioactive waste (MLLW). The on-going Integrated Thermal Treatment Systems' (ITTS) and the Integrated Nonthermal Treatment Systems' (INTS) studies satisfy this request. EM-50 further authorized supporting studies including this technology and subsystem cost sensitivity analysis. This analysis identifies areas where technology development could have the greatest impact on total life cycle system costs. These areas are determined by evaluating the sensitivity of system life cycle costs relative to changes in life cycle component or phase costs, subsystem costs, contingency allowance, facility capacity, operating life, and disposal costs. For all treatment systems, the most cost sensitive life cycle phase is the operations and maintenance phase and the most cost sensitive subsystem is the receiving and inspection/preparation subsystem. These conclusions were unchanged when the sensitivity analysis was repeated on a present value basis. Opportunity exists for technology development to reduce waste receiving and inspection/preparation costs by effectively minimizing labor costs, the major cost driver, within the maintenance and operations phase of the life cycle
Levelized cost of energy and sensitivity analysis for the hydrogen-bromine flow battery
Singh, Nirala; McFarland, Eric W.
2015-08-01
The technoeconomics of the hydrogen-bromine flow battery are investigated. Using existing performance data, the operating conditions were optimized to minimize the levelized cost of electricity using individual component costs for the flow battery stack and other system units. Several different configurations were evaluated including use of a bromine complexing agent to reduce membrane requirements. Sensitivity analysis of cost is used to identify the system elements most strongly influencing the economics. The stack lifetime and round-trip efficiency of the cell are identified as major factors on the levelized cost of electricity, along with capital components related to hydrogen storage, the bipolar plate, and the membrane. Assuming that an electrocatalyst and membrane with a lifetime of 2000 cycles can be identified, the lowest-cost market-entry system capital is $220 kWh⁻¹ for a 4 h discharge system, and for a charging energy cost of $0.04 kWh⁻¹ the levelized cost of the electricity delivered is $0.40 kWh⁻¹. With systems manufactured at large scales these costs are expected to be lower.
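The levelized-cost logic summarized above can be illustrated with a minimal sketch; the `lcoe_storage` function and every parameter value below are assumptions for illustration, not the paper's model or inputs.

```python
# Hypothetical levelized-cost sketch for a storage system: annualized capital
# spread over annual energy throughput, plus charging energy marked up by
# round-trip losses. All numbers are illustrative assumptions.

def capital_recovery_factor(rate, years):
    """Annualize a capital cost over `years` at discount `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe_storage(capital_per_kwh, rate, years, cycles_per_year,
                 charge_cost_per_kwh, round_trip_eff):
    """$ per kWh delivered."""
    crf = capital_recovery_factor(rate, years)
    capital_term = capital_per_kwh * crf / cycles_per_year
    energy_term = charge_cost_per_kwh / round_trip_eff
    return capital_term + energy_term

cost = lcoe_storage(capital_per_kwh=220, rate=0.08, years=10,
                    cycles_per_year=200, charge_cost_per_kwh=0.04,
                    round_trip_eff=0.75)
print(f"{cost:.2f} $/kWh")  # 0.22 $/kWh
```

A one-at-a-time sensitivity analysis in this framing amounts to re-evaluating `lcoe_storage` while perturbing one argument (e.g. `round_trip_eff` or cycle life) at a time.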
Strategies for cost-effective carbon reductions: A sensitivity analysis of alternative scenarios
International Nuclear Information System (INIS)
Gumerman, Etan; Koomey, Jonathan G.; Brown, Marilyn
2001-01-01
Analyses of alternative futures often present results for a limited set of scenarios, with little if any sensitivity analysis to identify the factors affecting the scenario results. This approach creates an artificial impression of certainty associated with the scenarios considered, and inhibits understanding of the underlying forces. This paper summarizes the economic and carbon savings sensitivity analysis completed for the Scenarios for a Clean Energy Future study (IWG, 2000). Its 19 sensitivity cases provide insight into the costs and carbon-reduction impacts of a carbon permit trading system, demand-side efficiency programs, and supply-side policies. Impacts under different natural gas and oil price trajectories are also examined. The results provide compelling evidence that policy opportunities exist to reduce carbon emissions and save society money
Sensitivity analysis (MedlinePlus Medical Encyclopedia: //medlineplus.gov/ency/article/003741.htm). Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...
A Sensitivity Analysis of Timing and Costs of Greenhouse Gas Emission Reductions
International Nuclear Information System (INIS)
Gerlagh, R.; Van der Zwaan, B.
2004-01-01
This paper analyses the optimal timing and macro-economic costs of carbon emission reductions that mitigate the global average atmospheric temperature increase. We use a macro-economic model in which there are two competing energy sources, fossil-fuelled and non-fossil-fuelled. Technological change is represented endogenously through learning curves, and niche markets exist implying positive demand for the relatively expensive non-fossil-fuelled energy source. Under these conditions, with a temperature increase constraint of 2°C, early abatement is found to be optimal, and, compared to the results of many existing top-down models, the costs of this strategy prove to be low. We perform an extensive sensitivity analysis of our results regarding the uncertainties that dominate various economic and technological modeling parameters. Uncertainties in the learning rate and the elasticity of substitution between the two different energy sources most significantly affect the robustness of our findings
International Nuclear Information System (INIS)
Kim, S. K.; Ko, W. I.; You, S. R.; Gao, R. X.
2015-01-01
This paper examines the difference in the value of the nuclear fuel cycle cost calculated by the deterministic and probabilistic methods on the basis of an equilibrium model. Calculated using the deterministic method, the direct disposal cost and the Pyro-SFR (sodium-cooled fast reactor) nuclear fuel cycle cost, including the reactor cost, were found to be 66.41 mills/kWh and 77.82 mills/kWh, respectively (1 mill = one thousandth of a dollar, i.e., 10⁻³ $). This is because the SFR is considerably expensive. Calculated again using the probabilistic method, however, the direct disposal cost and the Pyro-SFR nuclear fuel cycle cost, excluding the reactor cost, were found to be 7.47 mills/kWh and 6.40 mills/kWh, respectively, on the basis of the most likely value. This is because the nuclear fuel cycle cost is significantly affected by the standard deviation and the mean of the unit costs, which include uncertainty. Thus, it is judged that not only the deterministic method but also the probabilistic method is necessary to evaluate the nuclear fuel cycle cost. By analyzing the sensitivity of the unit cost in each phase of the nuclear fuel cycle, it was found that the uranium unit price is the most influential factor in determining nuclear fuel cycle costs.
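The deterministic-versus-probabilistic contrast above can be sketched in miniature; the phase costs and their standard deviations below are invented for illustration and do not reproduce the paper's equilibrium model.

```python
# Toy contrast between a deterministic point estimate and a probabilistic
# (Monte Carlo) estimate of a cost built from uncertain unit costs.
# All (mean, sd) pairs are hypothetical, not the paper's data.
import random
import statistics

def deterministic_cost(unit_costs):
    """Sum of point-estimate unit costs (mills/kWh)."""
    return sum(mean for mean, _sd in unit_costs)

def probabilistic_cost(unit_costs, n=20_000, seed=1):
    """Monte Carlo: sample each phase cost from normal(mean, sd) and sum."""
    rng = random.Random(seed)
    totals = [sum(rng.gauss(mean, sd) for mean, sd in unit_costs)
              for _ in range(n)]
    return statistics.mean(totals), statistics.stdev(totals)

# (mean, standard deviation) for three hypothetical fuel-cycle phases
phases = [(3.0, 0.9), (2.5, 0.6), (1.9, 0.4)]
point = deterministic_cost(phases)
mc_mean, mc_sd = probabilistic_cost(phases)
print(round(point, 2), round(mc_mean, 2), round(mc_sd, 2))
```

The spread `mc_sd` is what the deterministic point estimate hides, which is the paper's argument for reporting both.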
Economics of climate change : sensitivity analysis of social cost of carbon
Torniainen, Sami
2016-01-01
Social cost of carbon (SCC) is the key concept in the economics of climate change. It measures the economic cost of climate impacts. SCC influences how beneficial it is to prevent climate change: if the value of SCC increases, investments in low-carbon technology become more attractive and profitable. This paper examines the sensitivity of SCC to two important assumptions: the choice of a discount rate and the time horizon. Using the integrated assessment model, ...
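The dependence of SCC on the discount rate and time horizon can be shown with a toy present-value calculation; the constant damage stream and the rates below are assumptions, not outputs of the integrated assessment model used in the thesis.

```python
# Minimal illustration of discount-rate sensitivity: the SCC modeled as the
# present value of an assumed constant marginal damage stream per tonne of CO2.

def scc(annual_damage, discount_rate, horizon_years):
    """Present value of `annual_damage` ($/tCO2/yr) over `horizon_years`."""
    return sum(annual_damage / (1 + discount_rate) ** t
               for t in range(1, horizon_years + 1))

# Same damages, different discount rates: the SCC falls sharply as the rate rises.
print(round(scc(2.0, 0.01, 100)))  # 126
print(round(scc(2.0, 0.05, 100)))  # 40
```

Shortening the horizon has the same qualitative effect as raising the rate: later damages simply stop counting.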
Cost and sensitivity analysis for uranium in situ leach mining. Open file report Oct 79-Mar 81
International Nuclear Information System (INIS)
Toth, G.W.; Annett, J.R.
1981-03-01
This report presents the results of an assessment of uranium in situ leach mining costs through the application of process engineering and discounted cash flow analysis procedures. A computerized costing technique was developed to facilitate rapid cost analyses. Applications of the cost model will generate mine-life capital and operating costs as well as solve for the economic production cost per pound of U3O8. Conversely, the rate of return may be determined subject to a known selling price. The data bases of the cost model were designed to reflect variations between Texas and Wyoming site applications. The results of applying the model under numerous ore deposit, operating, well field, and extraction plant conditions for Texas and Wyoming are summarized in the report. Sensitivity analyses of changes in key project parameters have also been performed and are included
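The discounted-cash-flow logic described above, solving for the production cost per pound at a required rate of return, can be sketched as follows; all cash-flow figures and the `breakeven_price` helper are hypothetical, not the report's cost model.

```python
# Hedged DCF sketch (assumed cash flows): find the selling price per pound of
# U3O8 at which project NPV is zero for a required rate of return.

def npv(rate, cash_flows):
    """Net present value of year-indexed cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def breakeven_price(rate, capital, annual_opex, annual_lbs, years):
    """Price ($/lb) making NPV zero, found by bisection on the selling price."""
    def project_npv(price):
        flows = [-capital] + [price * annual_lbs - annual_opex] * years
        return npv(rate, flows)
    lo, hi = 0.0, 1000.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if project_npv(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

price = breakeven_price(rate=0.12, capital=20e6, annual_opex=4e6,
                        annual_lbs=250_000, years=10)
print(round(price, 2))  # 30.16
```

Run the other way round, fixing the price and solving for `rate`, the same structure yields the rate of return mentioned in the abstract.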
Steinfort, Daniel P; Liew, Danny; Conron, Matthew; Hutchinson, Anastasia F; Irving, Louis B
2010-10-01
Accurate staging of non-small cell lung cancer (NSCLC) is critical for optimal management. Minimally invasive pathologic assessment of mediastinal lymphadenopathy is increasingly being performed. The cost-benefit (minimization of health care costs) of such approaches, in comparison with traditional surgical methods, is yet to be established. Decision-tree analysis was applied to compare downstream costs of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), conventional TBNA, and surgical mediastinoscopy. Calculations were based on real costs derived from actual patient data at a major teaching hospital in Melbourne, Australia. One- and two-way sensitivity analyses were undertaken to account for potential variation in input parameter values. For the base-case analysis, initial evaluation with EBUS-TBNA (with negative results being surgically confirmed) was the most cost-beneficial approach (AU$2961) in comparison with EBUS-TBNA (negative results not surgically confirmed) ($3344), conventional TBNA ($3754), and mediastinoscopy ($8859). The sensitivity of EBUS-TBNA for detecting disease had the largest impact on cost, whereas the prevalence of mediastinal lymph node metastases determined whether surgical confirmation of negative EBUS-TBNA results remained cost-beneficial. Our study confirms that minimally invasive staging of NSCLC is cost-beneficial in comparison with traditional surgical techniques. EBUS-TBNA was the most cost-beneficial approach for mediastinal staging of patients with NSCLC across all studied parameters.
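The decision-tree comparison described above reduces to an expected-cost roll-up over branches; the probabilities and costs below are invented for illustration and are not the study's Australian-dollar inputs.

```python
# Illustrative decision-tree cost roll-up: expected downstream cost of a
# staging strategy = sum over branches of probability x cost.

def expected_cost(branches):
    """branches: list of (probability, cost) pairs; probabilities sum to 1."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * c for p, c in branches)

# Strategy A: needle aspiration first; negatives go on to surgical confirmation.
strategy_a = [(0.60, 2_000), (0.40, 6_000)]
# Strategy B: surgical staging for everyone.
strategy_b = [(1.00, 8_000)]
print(round(expected_cost(strategy_a), 2))  # 3600.0
print(round(expected_cost(strategy_b), 2))  # 8000.0
```

One-way sensitivity analysis then re-runs the roll-up while sweeping a single input, such as the test sensitivity that drives the branch probabilities.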
Leurent, Baptiste; Gomes, Manuel; Faria, Rita; Morris, Stephen; Grieve, Richard; Carpenter, James R
2018-04-20
Cost-effectiveness analyses (CEA) of randomised controlled trials are a key source of information for health care decision makers. Missing data are, however, a common issue that can seriously undermine their validity. A major concern is that the chance of data being missing may be directly linked to the unobserved value itself [missing not at random (MNAR)]. For example, patients with poorer health may be less likely to complete quality-of-life questionnaires. However, the extent to which this occurs cannot be ascertained from the data at hand. Guidelines recommend conducting sensitivity analyses to assess the robustness of conclusions to plausible MNAR assumptions, but this is rarely done in practice, possibly because of a lack of practical guidance. This tutorial aims to address this by presenting an accessible framework and practical guidance for conducting sensitivity analysis for MNAR data in trial-based CEA. We review some of the methods for conducting sensitivity analysis, but focus on one particularly accessible approach, where the data are multiply-imputed and then modified to reflect plausible MNAR scenarios. We illustrate the implementation of this approach on a weight-loss trial, providing the software code. We then explore further issues around its use in practice.
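The impute-then-modify approach described in the tutorial can be sketched in miniature; the simple mean imputation, the utility scores, and the delta values below are all illustrative assumptions, not the tutorial's method or data.

```python
# Toy MNAR sensitivity sketch: impute missing quality-of-life scores, then
# shift ("delta-adjust") the imputed values downward to encode the assumption
# that non-responders were in poorer health. All data are invented.
import statistics

def delta_adjusted_mean(observed, n_missing, delta):
    """Mean utility after imputing missing values at the observed mean
    shifted by `delta` (delta < 0 encodes an MNAR 'worse health' scenario)."""
    imputed = statistics.mean(observed) + delta
    values = list(observed) + [imputed] * n_missing
    return statistics.mean(values)

scores = [0.80, 0.75, 0.90, 0.85]
for delta in (0.0, -0.05, -0.10):  # MAR base case, then two MNAR scenarios
    print(round(delta_adjusted_mean(scores, n_missing=2, delta=delta), 3))
```

If the cost-effectiveness conclusion survives the range of plausible deltas, it is robust to that MNAR assumption; in practice the adjustment is applied within each of the multiple imputations rather than to a single mean.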
Soft-copy sonography: cost reduction sensitivity analysis in a pediatric hospital.
Don, S; Albertina, M J; Ammann, D
1998-03-01
Our objective was to determine whether interpreting sonograms of pediatric patients using soft-copy (computer workstation) instead of laser-printed film could reduce costs for a pediatric radiology department. We used theoretic models of growth to analyze costs. The costs of a sonographic picture archiving and communication system (three interface devices, two workstations, a network server, maintenance expenses, and storage media costs) were compared with the potential savings of eliminating film and increasing technologist efficiency or reducing the number of technologists. The model was based on historic trends and future capitation estimates that will reduce fee-for-service reimbursement. The effects of varying the study volume and reducing technologists' work hours were analyzed. By converting to soft-copy interpretation, we saved 6 min 32 sec per examination by eliminating film processing waiting time, thus reducing examination time from 30 min to 24 min. During an average day of 27 examinations, 176 min were saved. However, 33 min a day were spent retrieving prior studies from long-term storage; thus, 143 extra minutes a day were available for scanning. This improved efficiency could result in five more sonograms a day obtained by converting to soft-copy interpretation, using existing staff and equipment. Alternatively, five examinations a day would equate to one half of a full-time equivalent technologist's position. Our analysis of costs, considering the hospital's anticipated growth of sonography and the depreciation of equipment during 5 years, showed a savings of more than $606,000. Increasing the examinations by just 200 sonograms in the first year, with no further growth, resulted in a savings of more than $96,000. If the number of sonograms stayed constant, elimination of film printing alone resulted in a loss of approximately $157,000; reduction of one half of a full-time equivalent technologist's position would recuperate approximately $134
International Nuclear Information System (INIS)
Ogata, Noboru
1986-01-01
The system model for a conceptual design and cost estimation was studied for a multi-layered fluidizing bed with a pump which used hydrous titanium oxide (HTO) and amidoxime resin (AOR) as adsorbents. The cost effects of some parameters, namely the characteristics of the adsorbent, operating conditions, price of materials and some others, were estimated, and finally a direction of improvement and a possibility of cost reduction were shown. The conceptual design and operating condition were obtained from the balance point of expansion ratio, recovery and characteristics of the adsorbent. A suitable plan was obtained from the minimum cost condition at some level of the expansion ratio and some parameters. HTO was heavy in density and cheap in price. The main results of the study indicated that the thickness of the bed was 1 m, the linear velocity of seawater was 52 m/hr, the number of bed layers was 4, the construction cost of a 100 t/y plant was 10 billion yen, and the uranium cost was 160 $/lb. AOR had a large adsorption capacity. As the main results, the thickness of the bed was 0.08 m, the linear velocity of seawater was 11.6 m/hr, the number of bed layers was 27, the construction cost of a 100 t/y plant was 15 billion yen, and the uranium cost was 280 $/lb. The size of the 100 t/y plant was about 800 m length x 80 m depth x 30 m height at 80% recovery. An increase of adsorption capacity in HTO, and an increase of density and particle size in AOR, had the greatest merit for cost reduction. Other effective parameters were the adsorption velocity, the recovery, the temperature, the price of adsorbent, the manufacturing cost of instruments, and the rate of interest. The cost of uranium by this process had a possibility of reduction to 67 $/lb for HTO and 79 $/lb for AOR. (author)
Directory of Open Access Journals (Sweden)
Mahmoudi Hoda
2014-09-01
A reverse supply chain is configured by a sequence of elements forming a continuous process to treat return-products until they are properly recovered or disposed. The activities in a reverse supply chain include collection, cleaning, disassembly, test and sorting, storage, transport, and recovery operations. This paper presents a mathematical programming model with the objective of minimizing the total costs of a reverse supply chain, including transportation, fixed opening, operation, maintenance and remanufacturing costs of centers. The proposed model considers the design of a multi-layer, multi-product reverse supply chain that consists of returning, disassembly, processing, recycling, remanufacturing, materials and distribution centers. This integer linear programming model is solved by using Lingo 9 software and the results are reported. Finally, a sensitivity analysis of the proposed model is also presented.
Shahinfar, Saleh; Guenther, Jerry N; Page, C David; Kalantari, Afshin S; Cabrera, Victor E; Fricke, Paul M; Weigel, Kent A
2015-06-01
The common practice on most commercial dairy farms is to inseminate all cows that are eligible for breeding, while ignoring (or absorbing) the costs associated with semen and labor directed toward low-fertility cows that are unlikely to conceive. Modern analytical methods, such as machine learning algorithms, can be applied to cow-specific explanatory variables for the purpose of computing probabilities of success or failure associated with upcoming insemination events. Lift chart analysis can identify subsets of high fertility cows that are likely to conceive and are therefore appropriate targets for insemination (e.g., with conventional artificial insemination semen or expensive sex-enhanced semen), as well as subsets of low-fertility cows that are unlikely to conceive and should therefore be passed over at that point in time. Although such a strategy might be economically viable, the management, environmental, and financial conditions on one farm might differ widely from conditions on the next, and hence the reproductive management recommendations derived from such a tool may be suboptimal for specific farms. When coupled with cost-sensitive evaluation of misclassified and correctly classified insemination events, the strategy can be a potentially powerful tool for optimizing the reproductive management of individual farms. In the present study, lift chart analysis and cost-sensitive evaluation were applied to a data set consisting of 54,806 insemination events of primiparous Holstein cows on 26 Wisconsin farms, as well as a data set with 17,197 insemination events of primiparous Holstein cows on 3 Wisconsin farms, where the latter had more detailed information regarding health events of individual cows. In the first data set, the gains in profit associated with limiting inseminations to subsets of 79 to 97% of the most fertile eligible cows ranged from $0.44 to $2.18 per eligible cow in a monthly breeding period, depending on days in milk at breeding and milk
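The lift-chart idea above, choosing how deep into the fertility ranking to keep inseminating, can be sketched as a profit-maximizing cutoff search; the probabilities, service cost, and conception value below are assumptions, not the study's estimates.

```python
# Hypothetical cost-sensitive cutoff: rank cows by predicted conception
# probability and inseminate only the top-k that maximizes expected profit.

def best_cutoff(probs, cost_per_service, value_per_conception):
    """Try inseminating the top-k ranked cows for every k; return (k, profit)."""
    ranked = sorted(probs, reverse=True)
    best_k, best_profit = 0, 0.0
    profit = 0.0
    for k, p in enumerate(ranked, start=1):
        profit += p * value_per_conception - cost_per_service
        if profit > best_profit:
            best_k, best_profit = k, profit
    return best_k, round(best_profit, 2)

probs = [0.45, 0.10, 0.35, 0.25, 0.05]
print(best_cutoff(probs, cost_per_service=20, value_per_conception=100))
```

With these numbers the optimum is to breed the top three cows: the fourth and fifth have expected returns below the cost of the service, which is the "pass over low-fertility cows" logic in the abstract.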
A sensitivity analysis of timing and costs of greenhouse gas emission reductions
Gerlagh, R.; van der Zwaan, B.C.C.
2004-01-01
This paper analyses the optimal timing and macro-economic costs of carbon emission reductions that mitigate the global average atmospheric temperature increase. We use a macro-economic model in which there are two competing energy sources, fossil-fuelled and non-fossil-fuelled. Technological change
International Nuclear Information System (INIS)
Kavvadias, K.C.; Khamis, I.
2014-01-01
The reliable supply of water and energy is an important prerequisite for sustainable development. Desalination is a feasible option that can solve the problem of water scarcity in some areas, but it is a very energy-intensive technology. Moreover, the rising cost of fossil fuel, its uncertain availability and associated environmental concerns have led to a need for future desalination plants to use other energy sources, such as renewables and nuclear. Nuclear desalination thus has the potential to be an important option for the safe, economic and reliable supply of large amounts of fresh water to meet the ever-increasing worldwide water demand. Different approaches to using nuclear power for seawater desalination have been considered, including utilisation of the waste heat from nuclear reactors to further reduce the cost of nuclear desalination. Various options to implement nuclear desalination rely mainly on policy making based on the socio-economic and environmental impacts of available technologies. This paper examines nuclear desalination costs and proposes a methodology for exploring interactions between critical parameters. - Highlights: • The paper demonstrated desalination costs under uncertainty conditions. • Uncertainty for nuclear power prevails only during the construction period. • Nuclear desalination proved to be cheaper and with less uncertainty
International Nuclear Information System (INIS)
Song, Hua; Ozkan, Umit S.
2010-01-01
In this study, the hydrogen selling price from ethanol steam reforming has been estimated for two different production scenarios in the United States, i.e. central production (150,000 kg H2/day) and distributed (forecourt) production (1500 kg H2/day), based on a process flowchart generated by Aspen Plus® including downstream purification steps and an economic analysis model template published by the U.S. Department of Energy (DOE). The effect of several processing parameters as well as catalyst properties on the hydrogen selling price has been evaluated. The selling price is estimated at $2.69/kg for a central production process of 150,000 kg H2/day and $4.27/kg for a distributed hydrogen production process at a scale of 1500 kg H2/day. Among the parameters investigated through sensitivity analyses, ethanol feedstock cost, catalyst cost, and catalytic performance are found to play a significant role in determining the final hydrogen selling price. (author)
Directory of Open Access Journals (Sweden)
Marta Riu
To calculate the incremental cost of nosocomial bacteremia caused by the most common organisms, classified by their antimicrobial susceptibility. We selected patients who developed nosocomial bacteremia caused by Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae, or Pseudomonas aeruginosa. These microorganisms were analyzed because of their high prevalence and because they frequently present multidrug resistance. A control group consisted of patients classified within the same all-patient refined-diagnosis related group without bacteremia. Our hospital has an established cost accounting system (full-costing) that uses activity-based criteria to analyze cost distribution. A logistic regression model was fitted to estimate the probability of developing bacteremia for each admission (propensity score) and was used for propensity score matching adjustment. Subsequently, the propensity score was included in an econometric model to adjust the incremental cost of patients who developed bacteremia, as well as differences in this cost, depending on whether the microorganism was multidrug-resistant or multidrug-sensitive. A total of 571 admissions with bacteremia matched the inclusion criteria and 82,022 were included in the control group. The mean cost was €25,891 for admissions with bacteremia and €6,750 for those without bacteremia. The mean incremental cost was estimated at €15,151 (CI, €11,570 to €18,733). Multidrug-resistant P. aeruginosa bacteremia had the highest mean incremental cost, €44,709 (CI, €34,559 to €54,859). Antimicrobial-susceptible E. coli nosocomial bacteremia had the lowest mean incremental cost, €10,481 (CI, €8,752 to €12,210). Despite their lower cost, episodes of antimicrobial-susceptible E. coli nosocomial bacteremia had a major impact due to their high frequency. Adjustment of hospital cost according to the organism causing bacteremia and antibiotic sensitivity could improve prevention strategies.
Dong, Hengjin; Buxton, Martin
2006-01-01
The objective of this study is to apply a Markov model to compare the cost-effectiveness of total knee replacement (TKR) using computer-assisted surgery (CAS) with that of TKR using a conventional manual method in the absence of formal clinical trial evidence. A structured search was carried out to identify evidence relating to the clinical outcome, cost, and effectiveness of TKR. Nine Markov states were identified based on the progress of the disease after TKR. Effectiveness was expressed in quality-adjusted life years (QALYs). The simulation was carried out initially for 120 cycles of a month each, starting with 1,000 TKRs. A discount rate of 3.5 percent was used for both cost and effectiveness in the incremental cost-effectiveness analysis. Then, a probabilistic sensitivity analysis was carried out using a Monte Carlo approach with 10,000 iterations. Computer-assisted TKR was a long-term cost-effective technology, but the QALYs gained were small. After the first 2 years, computer-assisted TKR was dominant, because it was both cheaper and yielded more QALYs. The incremental cost-effectiveness ratio (ICER) was sensitive to the "effect of CAS," to the CAS extra cost, and to the utility of the state "Normal health after primary TKR," but it was not sensitive to the utilities of the other Markov states. Both probabilistic and deterministic analyses produced similar cumulative serious or minor complication rates and complex or simple revision rates. They also produced similar ICERs. Compared with conventional TKR, computer-assisted TKR is a cost-saving technology in the long term and may offer small additional QALYs. The "effect of CAS" is to reduce revision rates and complications through more accurate and precise alignment, and although the conclusions from the model, even when allowing for a full probabilistic analysis of uncertainty, are clear, the "effect of CAS" on the rate of revisions awaits long-term clinical evidence.
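The Markov cohort mechanics described above (monthly cycles, discounting, cost and QALY accumulation) can be sketched as follows; the three-state structure, transition probabilities, costs, and utilities are invented for illustration and are far simpler than the nine-state model in the study.

```python
# Minimal Markov cohort sketch: monthly cycles with discounted cost and QALY
# accumulation. All transition probabilities, costs, and utilities are
# hypothetical, not the study's inputs.

def run_markov(trans, costs, utilities, start, cycles, monthly_discount):
    """trans[i][j]: probability of moving from state i to state j per cycle."""
    dist = list(start)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1 + monthly_discount) ** t
        total_cost += d * sum(p * c for p, c in zip(dist, costs))
        total_qaly += d * sum(p * u / 12 for p, u in zip(dist, utilities))
        dist = [sum(dist[i] * trans[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return round(total_cost, 2), round(total_qaly, 3)

# States: 0 = well after TKR, 1 = revision, 2 = dead (absorbing)
trans = [[0.995, 0.004, 0.001],
         [0.900, 0.099, 0.001],
         [0.000, 0.000, 1.000]]
cost, qaly = run_markov(trans, costs=[10, 800, 0],
                        utilities=[0.85, 0.60, 0.0],
                        start=[1.0, 0.0, 0.0], cycles=120,
                        monthly_discount=0.035 / 12)
print(cost, qaly)
```

A probabilistic sensitivity analysis wraps this cohort run in a Monte Carlo loop, redrawing the transition probabilities, costs, and utilities from their distributions on each iteration.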
International Nuclear Information System (INIS)
Kosuda, Shigeru; Watanabe, Masumi; Kobayashi, Hideo; Kusano, Shoichi; Ichihara, Kiyoshi
1998-01-01
Decision tree analysis was used to assess the cost-effectiveness of chest FDG-PET in patients with a pulmonary tumor (non-small cell carcinoma, ≤Stage IIIB), based on the data of the current decision tree. Decision tree models were constructed for two competing strategies (CT alone and CT plus chest FDG-PET) in a population of 1,000 patients with 71.4% prevalence. Baseline values for FDG-PET sensitivity and specificity in the detection of lung cancer and lymph node metastasis, and for mortality and life expectancy, were available from references. The chest CT plus chest FDG-PET strategy increased the total cost by 10.5% when a chest FDG-PET study costs 0.1 million yen, since it increased the number of mediastinoscopies and curative thoracotomies despite halving the number of bronchofiberscopies. However, the strategy resulted in a remarkable increase of 115 patients with curable thoracotomy and a decrease of 51 patients with non-curable thoracotomy. In addition, average life expectancy increased by 0.607 year/patient, which means the increase in medical cost is approximately 218,080 yen/year/patient when a chest FDG-PET study costs 0.1 million yen. In conclusion, the chest CT plus chest FDG-PET strategy might not be cost-effective in Japan, but we are convinced that the strategy is useful in cost-benefit analysis. (author)
Sensitivity and uncertainty analysis
Cacuci, Dan G; Navon, Ionel Michael
2005-01-01
As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c
Cost benefit analysis cost effectiveness analysis
International Nuclear Information System (INIS)
Lombard, J.
1986-09-01
The comparison of various protection options, in order to determine the best compromise between the cost of protection and residual risk, is the purpose of the ALARA procedure. Decision-aiding techniques are valuable as an aid to such selection procedures. The purpose of this study is to introduce two rather simple and well-known decision-aiding techniques: cost-effectiveness analysis and cost-benefit analysis. These two techniques are relevant to the large share of ALARA decisions that require a quantitative technique. The study is based on a hypothetical case of 10 protection options. Four methods are applied to the data
Mehrpooya, Mehdi; Ansarinasab, Hojat; Moftakhari Sharifzadeh, Mohammad Mehdi; Rosen, Marc A.
2017-10-01
An integrated power plant with a net electrical power output of 3.71 × 10⁵ kW is developed and investigated. The electrical efficiency of the process is found to be 60.1%. The process includes three main sub-systems: a molten carbonate fuel cell system, a heat recovery section and a cryogenic carbon dioxide capture process. Conventional and advanced exergoeconomic methods are used for analyzing the process. Advanced exergoeconomic analysis is a comprehensive evaluation tool which combines an exergetic approach with economic analysis procedures. With this method, the investment and exergy destruction costs of the process components are divided into endogenous/exogenous and avoidable/unavoidable parts. Results of the conventional exergoeconomic analysis demonstrate that the combustion chamber has the largest exergy destruction rate (182 MW) and cost rate (13,100 $/h). Also, the total process cost rate can be decreased by reducing the cost rate of the fuel cell and improving the efficiency of the combustion chamber and heat recovery steam generator. Based on the total avoidable endogenous cost rate, the priority for modification is the heat recovery steam generator, a compressor and a turbine of the power plant, in rank order. A sensitivity analysis is performed to investigate how the exergoeconomic factors respond to variations in the effective parameters.
Cost-sensitive classification problem (Poster)
Calders, T.G.K.; Pechenizkiy, M.
2012-01-01
In practical situations almost all classification problems are cost-sensitive or utility-based in one way or another. This exercise mimics a real situation in which students first have to translate a description into a data-mining workflow, learn a prediction model, apply it to new data, and set up a
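The core idea of cost-sensitive classification can be sketched in a few lines: instead of thresholding a predicted probability at 0.5, predict whichever class minimizes the expected misclassification cost. The function name and cost values below are illustrative, not taken from the poster.

```python
def cost_sensitive_predict(p_pos, cost_fp, cost_fn):
    """Predict the class that minimizes expected misclassification cost.

    p_pos: estimated probability of the positive class.
    cost_fp: cost of a false positive; cost_fn: cost of a false negative.
    """
    expected_cost_pos = (1 - p_pos) * cost_fp  # cost incurred if we predict positive but truth is negative
    expected_cost_neg = p_pos * cost_fn        # cost incurred if we predict negative but truth is positive
    return 1 if expected_cost_pos < expected_cost_neg else 0

# With a false negative 10x as costly as a false positive, the effective decision
# threshold drops from 0.5 to cost_fp / (cost_fp + cost_fn) ≈ 0.091.
print(cost_sensitive_predict(0.2, cost_fp=1.0, cost_fn=10.0))  # → 1
```

The same weighting idea is what class-weighted learners (e.g. weighted SVMs) apply during training rather than at prediction time.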
Lei, Jiali; Rodriguez, Suset; Jayachandran, Maanasa; Solis, Elizabeth; Gonzalez, Stephanie; Perez-Clavijo, Francesco; Wigley, Stephen; Godavarty, Anuradha
2016-03-01
Lower extremity ulcers are devastating complications that often go unrecognized. To date, clinicians rely on visual inspection of the wound site during its standard 4-week healing process, monitoring surface granulation. A novel ultra-portable near-infrared optical scanner (NIROS) has been developed at the Optical Imaging Laboratory that can perform non-contact 2D area imaging of the wound site. Preliminary studies showed that non-healing wounds had a greater absorption contrast with respect to the normal site, unlike healing wounds. In the current work, non-contact near-infrared (NIR) imaging studies were carried out on 22 lower extremity wounds at two podiatric clinics, and the sensitivity and specificity of the scanner were evaluated. A quantitative optical biometric was developed that differentiates healing from non-healing wounds, based on threshold values obtained from ROC analysis. In addition, optical images of the wound from weekly imaging studies are assessed to determine the ability of the device to predict wound healing consistently on a periodic basis. This can potentially enable early intervention in the treatment of lower extremity ulcers once an objective and quantitative wound-healing approach is developed. Lastly, a MATLAB graphical user interface (GUI) automates image acquisition, processing and analysis, realizing the potential of NIROS to perform non-contact, real-time imaging of lower extremity wounds.
Directory of Open Access Journals (Sweden)
Iulian N. BUJOREANU
2011-01-01
Sensitivity analysis is such a well-known and deeply analyzed subject that anyone entering the field feels unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision-making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial motivation for implementing sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of whether the future can be generated and analyzed before it unfolds so that, when it happens, it brings less uncertainty.
Low Cost, Low Power, High Sensitivity Magnetometer
2008-12-01
…which are used to measure the small magnetic signals from the brain. Other types of vector magnetometers are fluxgate, coil-based, and magnetoresistance sensors… concentrator with the magnetometer currently used in Army multimodal sensor systems, the Brown fluxgate. One sees the MEMS fluxgate magnetometer is… (A.S. Edelstein, James E. Burnette, Greg A. Fischer; Guedes, A.; et al., 2008)
Sensitivity Analysis Without Assumptions.
Ding, Peng; VanderWeele, Tyler J
2016-05-01
Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder.
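The bounding factor described here has a simple closed form, BF = RR_EU · RR_UD / (RR_EU + RR_UD − 1), where RR_EU and RR_UD are the two sensitivity parameters (the maximum relative risks relating exposure to confounder and confounder to outcome); dividing an observed risk ratio by BF bounds what could remain after full adjustment for the unmeasured confounder. A minimal sketch, with illustrative parameter values:

```python
def bounding_factor(rr_eu, rr_ud):
    """Ding & VanderWeele (2016) bounding factor.

    rr_eu: maximum relative risk of the exposure on the unmeasured confounder.
    rr_ud: maximum relative risk of the confounder on the outcome.
    An unmeasured confounder with these strengths can shift an observed
    risk ratio by at most this factor.
    """
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

observed_rr = 3.9  # hypothetical observed risk ratio
bf = bounding_factor(2.0, 2.0)
# The most the true risk ratio could be reduced to by such a confounder:
adjusted_bound = observed_rr / bf
print(round(bf, 3), round(adjusted_bound, 3))
```

Only the two relative-risk parameters enter the computation, which is what makes the approach "without assumptions" on the confounder's structure.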
Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant
El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.
Fuel cell power plants (FCPP) as a combined source of heat, power and hydrogen (CHP&H) can be considered as a potential option to supply both thermal and electrical loads. Hydrogen produced from the FCPP can be stored for future use of the FCPP or can be sold for profit. In such a system, tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and Hill-Climbing based approach to evaluate the impact of change of the above mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize overall operating cost.
Energy Technology Data Exchange (ETDEWEB)
Cumberland, Riley M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Kent Alan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jarrell, Joshua J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Joseph, III, Robert Anthony [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-12-01
This report evaluates how the economic environment (i.e., discount rate, inflation rate, escalation rate) can impact previously estimated differences in lifecycle costs between an integrated waste management system with an interim storage facility (ISF) and a similar system without an ISF.
International Nuclear Information System (INIS)
Strait, R.S.
1996-01-01
The first phase of the Depleted Uranium Hexafluoride Management Program (Program)--management strategy selection--consists of several program elements: Technology Assessment, Engineering Analysis, Cost Analysis, and preparation of an Environmental Impact Statement (EIS). Cost Analysis will estimate the life-cycle costs associated with each of the long-term management strategy alternatives for depleted uranium hexafluoride (UF6). The scope of Cost Analysis will include all major expenditures, from the planning and design stages through decontamination and decommissioning. The costs will be estimated at a scoping or preconceptual design level and are intended to assist decision makers in comparing alternatives for further consideration. They will not be absolute costs or bid-document costs. The purpose of the Cost Analysis Guidelines is to establish a consistent approach to analyzing cost alternatives for managing the Department of Energy's (DOE's) stocks of depleted uranium hexafluoride (DUF6). The component modules that make up the DUF6 management program differ substantially in process options, requirements for R&D, equipment, facilities, regulatory compliance, operations and maintenance (O&M), and operational risk. To facilitate a consistent and equitable comparison of costs, the guidelines offer common definitions, assumptions or bases, and limitations integrated with a standard approach to the analysis. Further, the goal is to evaluate total net life-cycle costs and display them in a way that gives DOE the capability to evaluate a variety of overall DUF6 management strategies, including commercial potential. The cost estimates reflect the preconceptual level of the designs. They will be appropriate for distinguishing among management strategies
Continuous integration congestion cost allocation based on sensitivity
International Nuclear Information System (INIS)
Wu, Z.Q.; Wang, Y.N.
2004-01-01
Congestion cost allocation is a very important topic in congestion management. Allocation methods based on the Aumann-Shapley value use discrete numerical integration, which requires solving the incremental OPF many times and is therefore not suitable for practical application to large-scale systems. The optimal solution and how its sensitivity changes during congestion removal using a DC optimal power flow (OPF) are analysed. A simple continuous integration method based on the sensitivity is proposed for congestion cost allocation. The proposed sensitivity analysis method needs less computation time than the approach based on the quadratic method and interior-point iteration. The proposed congestion cost allocation method uses continuous rather than discrete numerical integration. The method does not need to solve incremental OPF solutions, which allows its use in large-scale systems. The method can also be used for AC OPF congestion management. (author)
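As a sketch of the underlying idea (not the authors' power-system formulation), the Aumann-Shapley rule charges each user its demand times the average marginal cost along the ray from zero demand to the full demand vector; the discrete numerical integration the paper criticizes is the quadrature loop below. The quadratic cost function is made up for illustration.

```python
def aumann_shapley(cost_grad, demands, steps=1000):
    """Aumann-Shapley allocation by discrete numerical integration.

    cost_grad: function returning the gradient of total cost at a demand vector.
    Charges each user: demand_i * average of dC/d(demand_i) along the ray t*demands, t in [0,1].
    """
    n = len(demands)
    alloc = [0.0] * n
    for k in range(steps):
        t = (k + 0.5) / steps  # midpoint rule on [0, 1]
        grad = cost_grad([t * d for d in demands])
        for i in range(n):
            alloc[i] += demands[i] * grad[i] / steps
    return alloc

# Hypothetical congestion cost C(x) = (x1 + x2)^2, gradient 2*(x1 + x2) for each user.
grad = lambda x: [2.0 * (x[0] + x[1])] * 2
print(aumann_shapley(grad, [3.0, 1.0]))  # allocates the total cost of 16 as ≈ [12, 4]
```

The allocation is exactly cost-recovering here (12 + 4 = C(3, 1) = 16); the paper's contribution is replacing this repeated-evaluation loop with a continuous integral driven by sensitivities.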
Interference and Sensitivity Analysis.
VanderWeele, Tyler J; Tchetgen Tchetgen, Eric J; Halloran, M Elizabeth
2014-11-01
Causal inference with interference is a rapidly growing area. The literature has begun to relax the "no-interference" assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted.
International Nuclear Information System (INIS)
Anand, A.B.
1992-01-01
Drilling assumes greater importance in present-day uranium exploration, which emphasizes exploring more areas on the basis of conceptual models rather than merely on surface anomalies. But drilling is as costly as it is important, consuming a major share (50% to 60%) of the exploration budget. As such, the cost of drilling has a great bearing on the exploration strategy as well as on the overall cost of the project. Therefore, understanding cost analysis is very important when planning or intensifying an exploration programme. This not only helps in controlling current operations but also in planning budgetary provisions for future operations. Also, if the work is entrusted to a private party, knowledge of in-house cost analysis helps in fixing drilling rates for the different formations and areas to be drilled. Under this topic, the various factors that contribute to the cost of drilling per metre, as well as ways to minimize drilling cost for better economic evaluation of mineral deposits, are discussed. (author)
DEFF Research Database (Denmark)
Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad
2018-01-01
…of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it to the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel…
Chemical kinetic functional sensitivity analysis: Elementary sensitivities
International Nuclear Information System (INIS)
Demiralp, M.; Rabitz, H.
1981-01-01
Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research
Army 86 Cost Sensitivity Analysis.
1980-05-01
[OCR fragment of an equipment cost table; recoverable entries: night vision sight AN/PVS-4 w/image intensifier (Q15414), radar set AN/MPQ-4A (Q16110), radar set AN/PPS-5 (Q34308), radio set AN/GRC-160 (Q38299), radio set AN/PRC-25, radio set AN/VRC-12 (Q45779), with per-item quantities.]
Directory of Open Access Journals (Sweden)
Glauber dos Santos
2017-01-01
…miscellaneous expenses. The total production cost of corn silage in the 2015/16 crop year was R$ 317.30 per ton of dry matter, or R$ 104.71 per ton of green matter. Inputs (seeds, fertilizers and pesticides) were the most representative items in the cost, followed by harvesting and ensiling, planting and cultivation, and soil preparation. The sensitivity analysis shows that the cost can vary by a factor of 1.92 across different productivity values and loss percentages, the lowest cost being R$ 81.57 and the highest R$ 157.32.
MOVES regional level sensitivity analysis
2012-01-01
The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...
Energy Technology Data Exchange (ETDEWEB)
Ostafew, C. [Azure Dynamics Corp., Toronto, ON (Canada)
2010-07-01
This presentation included a sensitivity analysis of the effect of electric vehicle components on overall efficiency. It provided an overview of drive cycles and discussed the major contributors to range: rolling resistance, aerodynamic drag, motor efficiency, and vehicle mass. Drive cycles presented included the New York City Cycle (NYCC), the urban dynamometer drive cycle, and US06. A summary of the findings was presented for each of the major contributors. Rolling resistance was found to have a balanced effect on each drive cycle, proportional to range. Aerodynamic drag had a large effect on US06 range, while motor efficiency and vehicle mass had large effects on NYCC range. figs.
Applying cost-sensitive classification for financial fraud detection under high class-imbalance
CSIR Research Space (South Africa)
Moepya, SO
2014-12-01
…sensitivity, specificity, recall and precision using PCA and Factor Analysis. Weighted Support Vector Machines (SVMs) were shown to be superior to the cost-sensitive Naive Bayes (NB) and K-Nearest Neighbours classifiers.
Sensitivity analysis of floating offshore wind farms
International Nuclear Information System (INIS)
Castro-Santos, Laura; Diaz-Casas, Vicente
2015-01-01
Highlights: • Develops a sensitivity analysis of a floating offshore wind farm. • Influence on the life-cycle costs involved in a floating offshore wind farm. • Influence on IRR, NPV, pay-back period, LCOE and cost of power. • Important variables: distance, wind resource, electric tariff, etc. • Helps investors to make decisions in the future. - Abstract: The future of offshore wind energy lies in deep waters. In this context, the main objective of the present paper is to develop a sensitivity analysis of a floating offshore wind farm, showing how much the output variables vary when the input variables change. For this purpose two different scenarios are taken into account: the life-cycle costs involved in a floating offshore wind farm (cost of conception and definition, cost of design and development, cost of manufacturing, cost of installation, cost of exploitation and cost of dismantling) and the most important economic indexes for the economic feasibility of a floating offshore wind farm (internal rate of return, net present value, discounted pay-back period, levelized cost of energy and cost of power). Results indicate that the most important variables in economic terms are the number of wind turbines and the distance from farm to shore for the costs scenario, and the wind scale parameter and the electric tariff for the economic indexes. This study will help investors take these variables into account in the development of future floating offshore wind farms.
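The levelized cost of energy used as an economic index above is, in its standard form, the discounted lifetime cost divided by the discounted lifetime energy, and a one-at-a-time sensitivity sweep is then only a few lines. All input values below are hypothetical, not taken from the paper.

```python
def lcoe(capex, annual_opex, annual_energy_mwh, discount_rate, years):
    """Levelized cost of energy: discounted lifetime cost per discounted MWh."""
    df = [(1 + discount_rate) ** -t for t in range(1, years + 1)]  # discount factors
    disc_cost = capex + annual_opex * sum(df)
    disc_energy = annual_energy_mwh * sum(df)
    return disc_cost / disc_energy

# One-at-a-time sensitivity: sweep the discount rate, holding the other inputs fixed.
for r in (0.06, 0.08, 0.10):
    print(f"discount rate {r:.0%}: LCOE = {lcoe(4.0e9, 1.0e8, 3.5e6, r, 20):.1f} $/MWh")
```

The same sweep applied to each input in turn (turbine count, distance-driven cable cost, wind scale parameter) is the structure of the sensitivity analysis the paper describes.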
Energy Technology Data Exchange (ETDEWEB)
Myrent, Noah J. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Barrett, Natalie C. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Adams, Douglas E. [Vanderbilt Univ., Nashville, TN (United States). Lab. for Systems Integrity and Reliability; Griffith, Daniel Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Wind Energy Technology Dept.
2014-07-01
Operations and maintenance costs for offshore wind plants are significantly higher than the current costs for land-based (onshore) wind plants. One way to reduce these costs would be to implement a structural health and prognostic management (SHPM) system as part of a condition-based maintenance paradigm with smart load management, and to use a state-based cost model to assess the economics of the SHPM system. To facilitate the development of such a system, a multi-scale modeling and simulation approach developed in prior work is used to identify how the underlying physics of the system are affected by the presence of damage and faults, and how these changes manifest themselves in the operational response of a full turbine. In the present report this methodology was used to investigate two case studies on a 5-MW offshore wind turbine: (1) the effects of rotor imbalance due to pitch error (aerodynamic imbalance) and mass imbalance, and (2) disbond of the shear web. Sensitivity analyses were carried out for the detection strategies for rotor imbalance and shear web disbond developed in prior work by evaluating the robustness of key measurement parameters in the presence of varying wind speeds, horizontal shear, and turbulence. Detection strategies were refined for these fault mechanisms and probabilities of detection were calculated. For all three fault mechanisms, the probability of detection was 96% or higher for the optimized wind speed ranges of the laminar, 30% horizontal shear, and 60% horizontal shear wind profiles. The revised cost model provided insight into the estimated savings in operations and maintenance costs as they relate to the characteristics of the SHPM system. Integrating the health monitoring information with O&M cost versus damage/fault severity information provides the initial steps toward processes that reduce operations and maintenance costs for an offshore wind farm while increasing turbine availability.
Richard, Christopher L.
At the core of the geothermal industry is a need to identify how policy incentives can be better applied for optimal return. Literature from Bloomquist (1999), Doris et al. (2009), and McIlveen (2011) suggests that a more tailored approach to crafting geothermal policy is warranted. In this research the guiding theory is based on those suggestions and is structured as a policy analysis using analytical methods, producing both qualitative and quantitative results. For the qualitative sections, an extensive review of contemporary literature is used to identify the frequency of use of specific barriers, followed by an industry survey to determine existing gaps. The results support certain barriers and justify expanding those found within the literature. This line of inquiry is the starting point for structuring modeling tools to further quantify the research results as part of the theoretical framework. The analytical modeling utilizes the levelized cost of energy as a foundation for comparative assessment of policy incentives. Model parameters use assumptions drawn from the literature and survey results to reflect the unique attributes of geothermal power technologies. Further testing by policy option provides an opportunity to assess the sensitivity of each variable with respect to the applied policy. Master limited partnerships, feed-in tariffs, RD&D, and categorical exclusions all emerge as viable options for mitigating specific barriers to developing geothermal power. The results show reductions in levelized cost based upon the model's parameters. These results are also compared to contemporary policy options, highlighting the need for tailored policy, as discussed by Bloomquist (1999), Doris et al. (2009), and McIlveen (2011). It is the intent of this research to provide the reader with a descriptive understanding of the role of
Cost analysis methodology of spent fuel storage
International Nuclear Information System (INIS)
1994-01-01
The report deals with the cost analysis of interim spent fuel storage; however, it is not intended either to give a detailed cost analysis or to compare the costs of the different options. This report provides a methodology for calculating the costs of different options for interim storage of the spent fuel produced in the reactor cores. Different technical features and storage options (dry and wet, away from reactor and at reactor) are considered and the factors affecting all options defined. The major cost categories are analysed. Then the net present value of each option is calculated and the levelized cost determined. Finally, a sensitivity analysis is conducted taking into account the uncertainty in the different cost estimates. Examples of current storage practices in some countries are included in the Appendices, with description of the most relevant technical and economic aspects. 16 figs, 14 tabs
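The net-present-value step in this methodology can be sketched in a few lines; the option cash flows and rates below are invented for illustration, and show how the discount rate alone can flip the ranking of a capital-intensive option against an O&M-intensive one, which is exactly what the closing sensitivity analysis probes.

```python
def npv(cash_flows, discount_rate):
    """Net present value of a yearly cost stream (year 0 first)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical storage options (M$): dry = high capital, low O&M; wet = the reverse.
dry = [200.0] + [5.0] * 40
wet = [80.0] + [12.0] * 40

# Sensitivity analysis over the discount rate: the cheaper option changes.
for r in (0.02, 0.05, 0.08):
    print(f"r={r:.0%}: dry={npv(dry, r):6.1f}  wet={npv(wet, r):6.1f}")
```

At low discount rates the recurring O&M of the wet option dominates; at high rates the up-front capital of the dry option does, which is why the report treats the discount rate as a key uncertainty.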
A hybrid approach for global sensitivity analysis
International Nuclear Information System (INIS)
Chakraborty, Souvik; Chowdhury, Rajib
2017-01-01
Distribution-based sensitivity analysis (DSA) computes the sensitivity of the input random variables with respect to the change in distribution of the output response. Although DSA is widely appreciated as the best tool for sensitivity analysis, its computational cost prohibits its use for complex structures involving costly finite element analysis. To address this issue, this paper presents a method that couples polynomial correlated function expansion (PCFE) with DSA. PCFE is a fully equivalent operational model which integrates the concepts of analysis-of-variance decomposition, extended bases and the homotopy algorithm. By integrating PCFE into DSA, the computational burden can be considerably alleviated. Three examples are presented to demonstrate the performance of the proposed approach for sensitivity analysis. For all the problems, the proposed approach yields excellent results with significantly reduced computational effort. The results indicate, to some extent, that the proposed approach can be utilized for sensitivity analysis of large-scale structures. - Highlights: • A hybrid approach for global sensitivity analysis is proposed. • The approach integrates PCFE within distribution-based sensitivity analysis. • The approach is highly efficient.
Applications of advances in nonlinear sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Werbos, P J
1982-01-01
The following paper summarizes the major properties and applications of a collection of algorithms involving differentiation and optimization at minimum cost. The areas of application include the sensitivity analysis of models, new work in statistical or econometric estimation, optimization, artificial intelligence and neuron modelling.
Cost benefit analysis vs. referenda
Martin J. Osborne; Matthew A. Turner
2007-01-01
We consider a planner who chooses between two possible public policies and ask whether a referendum or a cost-benefit analysis leads to higher welfare. We find that a referendum leads to higher welfare than a cost-benefit analysis in "common value" environments. Cost-benefit analysis is better in "private value" environments.
Maternal sensitivity: a concept analysis.
Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae
2008-11-01
The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.
Risk-Sensitive Control with Near Monotone Cost
International Nuclear Information System (INIS)
Biswas, Anup; Borkar, V. S.; Suresh Kumar, K.
2010-01-01
The infinite horizon risk-sensitive control problem for non-degenerate controlled diffusions is analyzed under a 'near monotonicity' condition on the running cost that penalizes large excursions of the process.
[Operating cost analysis of anaesthesia: activity based costing (ABC analysis)].
Majstorović, Branislava M; Kastratović, Dragana A; Vučović, Dragan S; Milaković, Branko D; Miličić, Biljana R
2011-01-01
Costs of anaesthesia represent defined measures for determining a precise profile of the expenditure of surgical treatment, which is important for planning healthcare activities, prices and budgets. In order to determine the actual value of anaesthesiological services, we performed an activity-based costing (ABC) analysis. Retrospectively, for 2005 and 2006, we estimated the direct costs of anaesthesiological services (salaries, drugs, supply materials and other costs: analyses and equipment) at the Institute of Anaesthesia and Resuscitation of the Clinical Centre of Serbia. The group included all anaesthetized patients of both sexes and all ages. We compared direct costs with the direct expenditure for "each cost object (service or unit)" of the Republican Healthcare Insurance, using the summary data of the Departments of Anaesthesia documented in the database of the Clinical Centre of Serbia. Numerical data were estimated and analyzed with Microsoft Office Excel 2003 and SPSS for Windows, comparing, with a linear model, the direct costs and the unit costs of anaesthesiological services from the Cost List of the Republican Healthcare Insurance. Of the direct costs, 40% were spent on salaries, 32% on drugs and supplies, and 28% on other costs, such as analyses and equipment. The direct costs of anaesthesiological services showed a linear correlation with the unit costs of the Republican Healthcare Insurance. During surgery, the costs of anaesthesia increased the surgical treatment cost of patients by 10%. Regarding the actual costs of drugs and supplies, we do not see any possibility of cost reduction. The fixed elements of direct costs provide the possibility of rationalizing resources in anaesthesia.
Global optimization and sensitivity analysis
International Nuclear Information System (INIS)
Cacuci, D.G.
1990-01-01
A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints
1986-06-01
[Scanned-document OCR residue; legible fragments list Burroughs Cost Curve Programs (AFIT/LSQ, Prof. Jeff Daneman) and Z-100 Cost Curve Programs (ASD/ACCR, Capt Arthur Mills). No connected prose is recoverable.]
Harris, Catherine R; Osterberg, E Charles; Sanford, Thomas; Alwaal, Amjad; Gaither, Thomas W; McAninch, Jack W; McCulloch, Charles E; Breyer, Benjamin N
2016-08-01
To determine which factors are associated with higher costs of the urethroplasty procedure and whether these factors have been increasing over time. Identification of determinants of extreme costs may help reduce cost while maintaining quality. We conducted a retrospective analysis using the 2001-2010 Healthcare Cost and Utilization Project-Nationwide Inpatient Sample (HCUP-NIS). The HCUP-NIS captures hospital charges, which we converted to cost using the HCUP cost-to-charge ratio. Log-cost linear regression with sensitivity analysis was used to determine variables associated with increased costs. Extreme cost was defined as the top 20th percentile of expenditure, analyzed with logistic regression, and expressed as odds ratios (OR). A total of 2298 urethroplasties were recorded in the NIS over the study period. The median (interquartile range) calculated cost was $7321 ($5677-$10,000). Patients with multiple comorbid conditions had higher odds of extreme costs [OR 1.56, 95% confidence interval (CI) 1.19-2.04, P = .02] compared with patients with no comorbid disease. Inpatient complications raised the odds of extreme costs (OR 3.2, CI 2.14-4.75), as did graft use (OR 1.78, 95% CI 1.2-2.64, P = .005). Variations in patient age, race, hospital region, bed size, teaching status, payor type, and volume of urethroplasty cases were not associated with extremes of cost. Cost variation for perioperative inpatient urethroplasty procedures depends on preoperative patient comorbidities, postoperative complications, and surgical complexity related to graft usage. Procedural cost and cost variation are critical for understanding which aspects of care have the greatest impact on cost. Copyright © 2016 Elsevier Inc. All rights reserved.
Cost analysis of youth violence prevention.
Sharp, Adam L; Prosser, Lisa A; Walton, Maureen; Blow, Frederic C; Chermack, Stephen T; Zimmerman, Marc A; Cunningham, Rebecca
2014-03-01
Effective violence interventions are not widely implemented, and there is little information about the cost of violence interventions. Our goal is to report the cost of a brief intervention delivered in the emergency department that reduces violence among 14- to 18-year-olds. Primary outcomes were total costs of implementation and the cost per violent event or violence consequence averted. We used primary and secondary data sources to derive the costs to implement a brief motivational interviewing intervention and to identify the number of self-reported violent events (eg, severe peer aggression, peer victimization) or violence consequences averted. One-way and multi-way sensitivity analyses were performed. Total fixed and variable annual costs were estimated at $71,784. If implemented, 4208 violent events or consequences could be prevented, costing $17.06 per event or consequence averted. Multi-way sensitivity analysis accounting for variable intervention efficacy and different cost estimates resulted in a range of $3.63 to $54.96 per event or consequence averted. Our estimates show that the cost to prevent an episode of youth violence or its consequences is less than the cost of placing an intravenous line and should not present a significant barrier to implementation.
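The headline cost-effectiveness figure follows directly from the reported totals; a quick check of the arithmetic, with all numbers taken from the abstract:

```python
# Cost per violent event or consequence averted, from the abstract's totals.
total_annual_cost = 71_784.0   # fixed + variable implementation costs (USD)
events_averted = 4_208         # self-reported violent events/consequences

cost_per_event = total_annual_cost / events_averted  # ~ $17.06
```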
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
An analysis of decommissioning costs
International Nuclear Information System (INIS)
Teunckens, L.; Loeschhorn, U.; Yanagihara, S.; Wren, G.; Menon, S.
1992-01-01
Within the OECD/NEA Cooperative Programme on Decommissioning a Task Group was set up early in 1989 to identify the reasons for the large variations in decommissioning cost estimates. The Task Group gathered cost data from 12 of the 14 projects in the Programme to form the basis of their analysis. These included reactors being decommissioned to various stages as well as fuel cycle facilities. The projects were divided into groups of projects with similar characteristics ('models') to facilitate the analysis of the cost distribution in each group, and the cost data were progressively refined through a dialogue between the Task Group and the project managers. A comparative analysis was then performed and project-specific discrepancies were identified. The Task Group's report summarizes the results of the comparative analysis as well as the lessons learnt in the acquisition and analysis of cost data from international decommissioning projects. (author) 5 tabs
International Nuclear Information System (INIS)
Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.
1990-01-01
A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs
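The claimed advantage of the adjoint route can be sanity-checked against the runtime figures quoted in the abstract; a back-of-envelope sketch, assuming runtimes scale as stated (the function names are illustrative, not part of GRESS):

```python
# Runtime comparison, in units of one forward run of the EQ3 model,
# using the figures reported in the abstract.
n_inputs = 31_000                 # input parameters with sensitivities
adjoint_factor = 27.0             # reported runtime multiple, 1st response
extra_per_response = 0.025        # +2.5% per additional response

def adjoint_cost(n_responses):
    """Runtime multiple for the adjoint (ADGEN) route."""
    return adjoint_factor * (1 + extra_per_response * (n_responses - 1))

def perturbation_cost():
    """Direct perturbation: one forward run per input parameter."""
    return float(n_inputs)

speedup = perturbation_cost() / adjoint_cost(1)  # roughly three orders
```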
High order depletion sensitivity analysis
International Nuclear Information System (INIS)
Naguib, K.; Adib, M.; Morcos, H.N.
2002-01-01
A high-order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which perturbations of all orders were calculated. The irradiated EK-10 and MTR-20 fuels, at their maximum burn-ups of 25% and 65% respectively, were considered for sensitivity analysis. The results of the calculation show that, in the case of the EK-10 fuel (low burn-up), the first-order sensitivity was sufficient to achieve an accuracy of 1%, while in the case of the MTR-20 fuel (high burn-up) the fifth order was needed to provide 3% accuracy. A computer code, SENS, was developed to perform the required calculations
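The effect of truncating a perturbation expansion at increasing orders can be illustrated on a toy function; a hedged sketch, not the SENS code, using f(p) = exp(p), where every derivative is known, to mirror the idea that small perturbations need few orders and large ones more:

```python
import math

def taylor_estimate(derivs, dp):
    """Truncated Taylor series: sum_n derivs[n] * dp**n / n!,
    where derivs[n] is the n-th derivative at the expansion point."""
    return sum(d * dp**n / math.factorial(n) for n, d in enumerate(derivs))

# For exp about p0 = 0, every derivative equals 1.
first_order = taylor_estimate([1.0] * 2, 0.25)  # orders 0..1
fifth_order = taylor_estimate([1.0] * 6, 0.25)  # orders 0..5
exact = math.exp(0.25)
```

The first-order estimate is off by a few percent while the fifth-order one is accurate to well under 0.01%, qualitatively matching the low- vs. high-burn-up contrast reported above.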
The cost of nurse-sensitive adverse events.
Pappas, Sharon Holcombe
2008-05-01
The aim of this study was to describe the methodology for nursing leaders to determine the cost of adverse events and effective levels of nurse staffing. The growing transparency of quality and cost outcomes motivates healthcare leaders to optimize the effectiveness of nurse staffing. Most hospitals have robust cost accounting systems that provide actual patient-level direct costs. These systems allow an analysis of the cost consumed by patients during a hospital stay. By knowing the cost of complications, leaders have the ability to justify the cost of improved staffing when quality evidence shows that higher nurse staffing improves quality. An analysis was performed on financial and clinical data from hospital databases of 3,200 inpatients. The purpose was to establish a methodology to determine actual cost per case. Three diagnosis-related groups were the focus of the analysis. Five adverse events were analyzed along with the costs. A regression analysis reported that the actual direct cost of an adverse event was $1,029 per case in the congestive heart failure cases and $903 in the surgical cases. There was a significant increase in the cost per case in medical patients with urinary tract infection and pressure ulcers and in surgical patients with urinary tract infection and pneumonia. The odds of pneumonia occurring in surgical patients decreased with additional registered nurse hours per patient day. Hospital cost accounting systems are useful in determining the cost of adverse events and can aid in decision making about nurse staffing. Adverse events add costs to patient care and should be measured at the unit level to adjust staffing to reduce adverse events and avoid costs.
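For a single 0/1 adverse-event indicator, the regression coefficient reduces to a difference in group mean costs; a minimal sketch with hypothetical per-case costs (the $1,029 and $903 figures above come from the study's own data, not from these numbers):

```python
# Incremental cost of an adverse event as the difference in mean per-case
# cost between patients with and without the event; for a single binary
# regressor this equals the least-squares slope. Costs below are made up.
def incremental_cost(costs_with_event, costs_without_event):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(costs_with_event) - mean(costs_without_event)

delta = incremental_cost([9200.0, 8800.0, 9600.0],   # hypothetical, with event
                         [8100.0, 8300.0, 8000.0])   # hypothetical, without
```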
Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2009-01-01
This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial
Sensitivity analysis using probability bounding
International Nuclear Information System (INIS)
Ferson, Scott; Troy Tucker, W.
2006-01-01
Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
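The "pinching" operation described above can be illustrated with plain interval arithmetic; a toy sketch, assuming a monotone model where endpoint arithmetic gives exact bounds:

```python
# Pinching in probability bounds analysis: replace an interval-valued
# input with a precise value and measure how much the output bounds
# tighten. The model here (a sum) is monotone, so endpoint arithmetic
# yields exact bounds.
def interval_sum(a, b):
    """Bounds on a + b for (lo, hi) intervals a and b."""
    return (a[0] + b[0], a[1] + b[1])

def width(iv):
    return iv[1] - iv[0]

x = (2.0, 5.0)
y = (1.0, 4.0)
base = interval_sum(x, y)              # full uncertainty in both inputs
pinched = interval_sum(x, (2.5, 2.5))  # pinch y to a precise value

# Fraction of output uncertainty attributable to y.
reduction = 1 - width(pinched) / width(base)
```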
Life cycle cost analysis rehabilitation costs.
2015-07-01
This study evaluates data from CDOT's Cost Data books and Pavement Management Program. Cost indices were used to normalize project data to year 2014. Data analyzed in the study was obtained from CDOT's Cost Data books and the Pavement Man...
A Support for Decision Making: Cost-Sensitive Learning System
Czech Academy of Sciences Publication Activity Database
Brůha, I.; Kočková, Sylva
No. 6 (1994), pp. 67-82 ISSN 0933-3657 R&D Projects: GA ČR GA201/93/0781 Keywords: learning algorithm * noisy environment * inductive algorithm * decision rule * cost-sensitive inference * evaluating function Impact factor: 0.672, year: 1994
Sensitivity analysis in remote sensing
Ustinov, Eugene A
2015-01-01
This book contains a detailed presentation of general principles of sensitivity analysis as well as their applications to sample cases of remote sensing experiments. An emphasis is made on applications of adjoint problems, because they are more efficient in many practical cases, although their formulation may seem counterintuitive to a beginner. Special attention is paid to forward problems based on higher-order partial differential equations, where a novel matrix operator approach to formulation of corresponding adjoint problems is presented. Sensitivity analysis (SA) serves for quantitative models of physical objects the same purpose, as differential calculus does for functions. SA provides derivatives of model output parameters (observables) with respect to input parameters. In remote sensing SA provides computer-efficient means to compute the jacobians, matrices of partial derivatives of observables with respect to the geophysical parameters of interest. The jacobians are used to solve corresponding inver...
Sensitivity Analysis of Viscoelastic Structures
Directory of Open Access Journals (Sweden)
A.M.G. de Lima
2006-01-01
Full Text Available In the context of control of sound and vibration of mechanical systems, the use of viscoelastic materials has been regarded as a convenient strategy in many types of industrial applications. Numerical models based on finite element discretization have been frequently used in the analysis and design of complex structural systems incorporating viscoelastic materials. Such models must account for the typical dependence of the viscoelastic characteristics on operational and environmental parameters, such as frequency and temperature. In many applications, including optimal design and model updating, sensitivity analysis based on numerical models is a very useful tool. In this paper, the formulation of first-order sensitivity analysis of complex frequency response functions is developed for plates treated with passive constraining damping layers, considering geometrical characteristics, such as the thicknesses of the multi-layer components, as design variables. Also, the sensitivity of the frequency response functions with respect to temperature is introduced. As an example, response derivatives are calculated for a three-layer sandwich plate and the results obtained are compared with first-order finite-difference approximations.
Summit Station Skiway Cost Analysis
2016-07-01
[OCR-damaged excerpt of report ERDC/CRREL TR-16-9, Engineering for Polar Operations, Logistics, and Research (EPOLAR), Summit Station Skiway Cost Analysis. Legible fragments note fuel delivered to Summit via LC-130 at a price of $32/gal (Lever et al. 2016), the cost of constructing and maintaining the skiway for the 2014 season, and Twin Otter costs comprising a day rate, an hourly mission rate, a per-passenger rate, airport fees, and fuel.]
International Nuclear Information System (INIS)
Kosuda, Shigeru; Kobayashi, Hideo; Kusano, Shoichi; Ichihara, Kiyoshi; Watanabe, Masazumi
2002-01-01
Whole-body 2-fluoro-2-D-[18F]deoxyglucose (FDG) positron emission tomography (WB-PET) may be more cost-effective than chest PET because WB-PET does not require conventional imaging (CI) for extrathoracic staging. The cost-effectiveness of WB-PET for the management of Japanese patients with non-small-cell lung carcinoma (NSCLC) was assessed. A decision-tree sensitivity analysis was designed, based on the two competing strategies of WB-PET vs. CI. WB-PET was assumed to have a sensitivity and specificity for detecting metastases of 90% to 100%, and CI of 80% to 90%. The prevalences of M1 disease were 34% and 20%. One thousand patients suspected of having NSCLC were simulated in each strategy. We surveyed the relevant literature for the choice of variables. Expected cost saving (CS) and expected life expectancy (LE) for NSCLC patients were calculated. The WB-PET strategy yielded an expected CS of $951 US to $1,493 US per patient and an expected LE of minus 0.0246 years to minus 0.0136 years per patient for the 71.4% NSCLC and 34% M1 disease prevalence at our hospital. PET avoided unnecessary bronchoscopies and thoracotomies for incurable and benign disease. Overall, the CS for each patient was $833 US to $2,010 US at NSCLC prevalences ranging from 10% to 90%. The LE of the WB-PET strategy was similar to that of the CI strategy. The CS and LE varied minimally in the two situations of 34% and 20% M1 disease prevalence. The introduction of a WB-PET strategy in place of CI for managing NSCLC patients is potentially cost-effective in Japan. (author)
UMTS Common Channel Sensitivity Analysis
DEFF Research Database (Denmark)
Pratas, Nuno; Rodrigues, António; Santos, Frederico
2006-01-01
and as such it is necessary that both channels be available across the cell radius. This requirement makes the choice of the transmission parameters a fundamental one. This paper presents a sensitivity analysis regarding the transmission parameters of two UMTS common channels: RACH and FACH. Optimization of these channels...... is performed and values for the key transmission parameters in both common channels are obtained. On RACH these parameters are the message to preamble offset, the initial SIR target and the preamble power step while on FACH it is the transmission power offset....
TEMAC, Top Event Sensitivity Analysis
International Nuclear Information System (INIS)
Iman, R.L.; Shortencarier, M.J.
1988-01-01
1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
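The quantification step performed on a SETS-produced top event can be sketched with the standard rare-event approximation over minimal cut sets; the cut sets and probabilities below are hypothetical, and TEMAC's actual matrix approach is more general than this sketch:

```python
# Rare-event approximation for a fault-tree top event: sum over minimal
# cut sets of the product of basic-event probabilities in each cut set.
# Events and probabilities are hypothetical.
def top_event_probability(cut_sets, p):
    total = 0.0
    for cut_set in cut_sets:
        prod = 1.0
        for event in cut_set:
            prod *= p[event]       # basic events assumed independent
        total += prod
    return total

p = {"A": 1e-3, "B": 2e-3, "C": 5e-4}
cut_sets = [("A", "B"), ("C",)]    # top = (A AND B) OR C
risk = top_event_probability(cut_sets, p)
```

Sensitivity of the risk to a base-event probability then follows by perturbing one entry of `p` and re-evaluating, which is the kind of repeated evaluation the program automates.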
Extensive analysis of hydrogen costs
Energy Technology Data Exchange (ETDEWEB)
Guinea, D M; Martin, D; Garcia-Alegre, M C; Guinea, D [Consejo Superior de Investigaciones Cientificas, Arganda, Madrid (Spain). Inst. de Automatica Industrial; Agila, W E [Acciona Infraestructuras, Alcobendas, Madrid (Spain). Dept. I+D+i
2010-07-01
Cost is a key issue in the spreading of any technology. In this work, the cost of hydrogen obtained by electrolysis is analyzed and determined. Different contributing partial costs, such as energy costs, electrolyzer costs, and taxes, are taken into account to calculate the final cost of hydrogen. Energy cost data are taken from official URLs, while electrolyzer costs are obtained from commercial companies. The analysis is carried out under different hypotheses and for different countries: Germany, France, Austria, Switzerland, Spain and the Canadian region of Ontario. Finally, the obtained costs are compared to those of the most used fossil fuels, both in the automotive industry (gasoline and diesel) and in the residential sector (butane, coal, town gas and wood), and the possibilities of hydrogen competing against these fuels are discussed. According to this work, in the automotive industry, even neglecting subsidies, hydrogen can compete with fossil fuels. Hydrogen can also compete with gaseous domestic fuels. Electrolyzer prices were found to have the highest influence on hydrogen prices. (orig.)
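The cost build-up described here can be sketched as energy plus amortized equipment; all parameter values below are hypothetical placeholders, not figures from the paper:

```python
# Toy electrolytic-hydrogen cost build-up: electricity cost per kg plus an
# amortized electrolyzer contribution. Every number is a hypothetical
# placeholder for illustration only.
def h2_cost_per_kg(usd_per_kwh, kwh_per_kg, capex_usd, lifetime_kg):
    energy = usd_per_kwh * kwh_per_kg     # dominant electricity term
    equipment = capex_usd / lifetime_kg   # electrolyzer capex, amortized
    return energy + equipment

cost = h2_cost_per_kg(usd_per_kwh=0.10, kwh_per_kg=55.0,
                      capex_usd=500_000.0, lifetime_kg=200_000.0)
```

Varying `capex_usd` while holding the other inputs fixed shows directly why electrolyzer prices dominate the result, as the abstract concludes.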
Global sensitivity analysis using polynomial chaos expansions
International Nuclear Information System (INIS)
Sudret, Bruno
2008-01-01
Global sensitivity analysis (SA) aims at quantifying the respective effects of input random variables (or combinations thereof) onto the variance of the response of a physical or mathematical model. Among the abundant literature on sensitivity measures, the Sobol' indices have received much attention since they provide accurate information for most models. The paper introduces generalized polynomial chaos expansions (PCE) to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients. Thus the computational cost of the sensitivity indices practically reduces to that of estimating the PCE coefficients. An original non intrusive regression-based approach is proposed, together with an experimental design of minimal size. Various application examples illustrate the approach, both from the field of global SA (i.e. well-known benchmark problems) and from the field of stochastic mechanics. The proposed method gives accurate results for various examples that involve up to eight input random variables, at a computational cost which is 2-3 orders of magnitude smaller than the traditional Monte Carlo-based evaluation of the Sobol' indices
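For a linear model with independent inputs the Sobol' indices are available in closed form, which makes the variance-decomposition idea easy to see; a minimal sketch (the paper's contribution, computing the indices from PCE coefficients, is not reproduced here):

```python
# First-order Sobol' indices for Y = a1*X1 + a2*X2 + ... with independent
# inputs: S_i = a_i^2 * Var(X_i) / Var(Y). For this additive model the
# first-order indices sum to one (no interaction effects).
def sobol_indices_linear(coeffs, variances):
    partial = [a * a * v for a, v in zip(coeffs, variances)]
    total = sum(partial)
    return [p / total for p in partial]

# Two unit-variance inputs; the second coefficient is 3x the first,
# so X2 explains 9x the variance of X1.
S = sobol_indices_linear([1.0, 3.0], [1.0, 1.0])
```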
Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period
Energy Technology Data Exchange (ETDEWEB)
McMahon, James E.; Liu, Xiaomin; Turiel, Ike; Hakim, Sajid; Fisher, Diane
2000-06-01
The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of life cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs and the assessed probability associated with each value. The results of uncertainty and sensitivity analyses will help analysts reduce the effort spent on data collection and carry out analyses more efficiently. These methods also enable policy makers to gain an insightful understanding of which efficient technology alternatives benefit or cost what fraction of consumers, given the explicit assumptions of the analysis.
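Replacing a point estimate with a distribution amounts to sampling the uncertain inputs and propagating each draw through the life-cycle-cost formula; a hedged sketch with hypothetical ballast parameters and uniform input distributions:

```python
import random

# Life-cycle cost = purchase price + lifetime energy cost. All parameter
# ranges below are hypothetical placeholders, not the study's data.
def lcc(price, watts, hours_per_year, years, usd_per_kwh):
    energy_kwh = watts / 1000.0 * hours_per_year * years
    return price + energy_kwh * usd_per_kwh

random.seed(1)  # reproducible draws
samples = [
    lcc(price=random.uniform(15, 25),      # uncertain ballast price (USD)
        watts=random.uniform(28, 35),      # uncertain input power (W)
        hours_per_year=3000, years=10, usd_per_kwh=0.10)
    for _ in range(10_000)
]
mean_lcc = sum(samples) / len(samples)
```

The full set of `samples` is the uncertainty distribution the abstract describes; sorting it gives percentiles, and pinning one input to a constant while resampling the others gives a simple sensitivity measure.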
Sensitivity Analysis of the Critical Speed in Railway Vehicle Dynamics
DEFF Research Database (Denmark)
Bigoni, Daniele; True, Hans; Engsig-Karup, Allan Peter
2014-01-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, High-Dimensional Model Representation and Total Sensitivity Indices. The approach has a general applic...
Sensitivity of LWR fuel cycle costs to uncertainties in detailed thermal cross sections
International Nuclear Information System (INIS)
Ryskamp, J.M.; Becker, M.; Harris, D.R.
1979-01-01
Cross sections averaged over the thermal energy (< 1 or 2 eV) group have been shown to have an important economic role for light-water reactors. Cost implications of thermal cross section uncertainties at the few-group level were reported earlier. When it has been determined that costs are sensitive to a specific thermal-group cross section, it becomes desirable to determine how specific energy-dependent cross sections influence fuel cycle costs. Multigroup cross-section sensitivity coefficients vary with fuel exposure. By changing the shape of a cross section displayed on a view-tube through an interactive graphics system, one can compute the change in few-group cross section using the exposure dependent sensitivity coefficients. With the changed exposure dependent few-group cross section, a new fuel cycle cost is computed by a sequence of batch depletion, core analysis, and fuel batch cost code modules. Fuel cycle costs are generally most sensitive to cross section uncertainties near the peak of the hardened Maxwellian flux
Data fusion qualitative sensitivity analysis
International Nuclear Information System (INIS)
Clayton, E.A.; Lewis, R.E.
1995-09-01
Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables
Sensitivity of nuclear fuel-cycle cost to uncertainties in nuclear data. Final report
International Nuclear Information System (INIS)
Becker, M.; Harris, D.R.
1980-11-01
An improved capability for assessing the economic implications of uncertainties in nuclear data and methods on the power reactor fuel cycle was developed. This capability is applied to the sensitivity analysis of fuel-cycle cost with respect to changes in nuclear data and related computational methods. Broad group sensitivities for both a typical BWR and a PWR are determined under the assumption of a throwaway fuel cycle as well as for a scenario under which reprocessing is allowed. Particularly large dollar implications are found for the thermal and resonance cross sections of fissile and fertile materials. Sensitivities for the throwaway case are found to be significantly larger than for the recycle case. Constrained sensitivities obtained for cases in which information from critical experiments or other benchmarks is used in the design calculation to adjust a parameter such as ν̄ are compared with unconstrained sensitivities. Sensitivities of various alternate fuel cycles were examined. These included the extended-burnup (18-month) LWR cycle, the mixed-oxide (plutonium) cycle, uranium-thorium and denatured uranium-thorium cycles, as well as CANDU-type reactor cycles. The importance of the thermal capture and fission cross sections of 239Pu is shown to be very large in all cases. Detailed, energy dependent sensitivity profiles are provided for the thermal range (below 1.855 eV). Finally, sensitivity coefficients are combined with data uncertainties to determine the impact of such uncertainties on fuel-cycle cost parameters
Advanced Fuel Cycle Economic Sensitivity Analysis
Energy Technology Data Exchange (ETDEWEB)
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Army 86 Cost Sensitivity Analysis Verification.
1980-09-01
[OCR residue from scanned equipment line-item tables; legible fragments list nomenclature such as NIGHT VIS SI AN/TAS-4 and AN/TAS-6, P-A DET SYS AN/USQ-70, and RADAR SET AN/PPS- SALP, with associated quantities and line item numbers. No connected prose is recoverable.]
Break-Even Cost for Residential Photovoltaics in the United States: Key Drivers and Sensitivities
Energy Technology Data Exchange (ETDEWEB)
Denholm, P.; Margolis, R. M.; Ong, S.; Roberts, B.
2009-12-01
Grid parity--or break-even cost--for photovoltaic (PV) technology is defined as the point where the cost of PV-generated electricity equals the cost of electricity purchased from the grid. Break-even cost is expressed in $/W of an installed system. Achieving break-even cost is a function of many variables. Consequently, break-even costs vary by location and time for a country, such as the United States, with a diverse set of resources, electricity prices, and other variables. In this report, we analyze PV break-even costs for U.S. residential customers. We evaluate some key drivers of grid parity both regionally and over time. We also examine the impact of moving from flat to time-of-use (TOU) rates, and we evaluate individual components of the break-even cost, including the effect of rate structure and various incentives. Finally, we examine how PV markets might evolve on a regional basis considering the sensitivity of the break-even cost to four major drivers: technical performance, financing parameters, electricity prices and rates, and policies. We find that local incentives rather than 'technical' parameters are in general the key drivers of the break-even cost of PV. Additionally, this analysis provides insight about the potential viability of PV markets.
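The break-even relationship described above can be sketched numerically. The following is a minimal illustration assuming a simple capital-recovery-factor annualization with invented inputs; it is not the model or data used in the report:

```python
def crf(rate, years):
    """Capital recovery factor: converts an upfront cost into an equal annual payment."""
    f = (1 + rate) ** years
    return rate * f / (f - 1)

def breakeven_cost_per_watt(retail_price, capacity_factor, rate, years):
    """Installed PV cost ($/W) at which PV electricity costs the same as grid power.

    Sets the annualized cost of one installed watt (c * CRF) equal to the retail
    value of the energy that watt produces in a year.
    """
    annual_kwh_per_watt = capacity_factor * 8760 / 1000  # kWh per installed W per year
    return retail_price * annual_kwh_per_watt / crf(rate, years)

# illustrative inputs: $0.12/kWh retail rate, 18% capacity factor, 5% over 25 years
c = breakeven_cost_per_watt(retail_price=0.12, capacity_factor=0.18, rate=0.05, years=25)
```

With these invented inputs the break-even cost comes out near $2.7/W; raising the retail rate or the capacity factor raises it, while shortening the financing horizon lowers it, mirroring the drivers the report analyzes.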
Probabilistic sensitivity analysis of biochemical reaction systems.
Zhang, Hong-Xuan; Dempsey, William P; Goutsias, John
2009-09-07
Sensitivity analysis is an indispensable tool for studying the robustness and fragility properties of biochemical reaction systems as well as for designing optimal approaches for selective perturbation and intervention. Deterministic sensitivity analysis techniques, using derivatives of the system response, have been extensively used in the literature. However, these techniques suffer from several drawbacks, which must be carefully considered before using them in problems of systems biology. We develop here a probabilistic approach to sensitivity analysis of biochemical reaction systems. The proposed technique employs a biophysically derived model for parameter fluctuations and, by using a recently suggested variance-based approach to sensitivity analysis [Saltelli et al., Chem. Rev. (Washington, D.C.) 105, 2811 (2005)], it leads to a powerful sensitivity analysis methodology for biochemical reaction systems. The approach presented in this paper addresses many problems associated with derivative-based sensitivity analysis techniques. Most importantly, it produces thermodynamically consistent sensitivity analysis results, can easily accommodate appreciable parameter variations, and allows for systematic investigation of high-order interaction effects. By employing a computational model of the mitogen-activated protein kinase signaling cascade, we demonstrate that our approach is well suited for sensitivity analysis of biochemical reaction systems and can produce a wealth of information about the sensitivity properties of such systems. The price to be paid, however, is a substantial increase in computational complexity over derivative-based techniques, which must be effectively addressed in order to make the proposed approach to sensitivity analysis more practical.
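The variance-based index at the core of such approaches, the first-order index S_i = Var(E[Y|X_i])/Var(Y), can be illustrated with a brute-force Monte Carlo estimate on a toy model. This is only a sketch of the general technique, not the authors' biochemical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    # toy additive response; analytically S1 = 9/10 and S2 = 1/10
    return 3.0 * x1 + 1.0 * x2

N, M = 2000, 200
x1 = rng.random(N)
# estimate E[Y | X1 = v] by averaging over fresh draws of X2 at each fixed v
cond_mean = np.array([model(v, rng.random(M)).mean() for v in x1])
y = model(rng.random(N), rng.random(N))
S1 = cond_mean.var() / y.var()  # first-order sensitivity index of x1
```

The estimate should land near the analytic value 0.9; practical implementations use the far more efficient Saltelli sampling schemes cited in the abstract.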
Sensitivity of nuclear fuel cycle cost to uncertainties in nuclear data
International Nuclear Information System (INIS)
Harris, D.R.; Becker, M.; Parvez, A.; Ryskamp, J.M.
1979-01-01
A sensitivity analysis system is developed for assessing the economic implications of uncertainties in nuclear data and related computational methods for light water power reactors. Results of the sensitivity analysis indicate directions for worthwhile improvements in data and methods. Benefits from improvements in data and methods are related to the reduction of margins provided by designers to ensure meeting reactor and fuel objectives. Sensitivity analyses are carried out using the batch depletion code FASTCELL, the core analysis code FASTCORE, and the reactor cost code COSTR. FASTCELL depletes a cell using methods comparable to industry cell codes except for a few-group treatment of the cell flux distribution. FASTCORE is used with the Haling strategy of fixed power sharing among batches in the core. COSTR computes costs using components and techniques as in industry costing codes, except that COSTR uses fixed payment schedules. Sensitivity analyses are carried out for large commercial boiling and pressurized water reactors. Each few-group nuclear parameter is changed, and the initial enrichment is also changed so as to keep the end-of-cycle core multiplication factor unchanged, i.e., to preserve cycle time at the demand power. Sensitivities of the equilibrium fuel cycle cost are determined with respect to approx. 300 few-group nuclear parameters, both for a normal fuel cycle and for a throwaway fuel cycle. Particularly large dollar implications are found for thermal and resonance range cross sections in fissile and fertile materials. Sensitivities constrained by adjustment of the fission neutron yield so as to preserve agreement with zero-exposure integral data are also computed.
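The distinction between unconstrained and constrained sensitivities here (re-adjusting enrichment so the end-of-cycle multiplication factor is preserved) amounts to a chain-rule correction. A toy finite-difference sketch with invented linear surrogates for cost C(σ, e) and multiplication factor k(σ, e), not the FASTCELL/FASTCORE/COSTR system:

```python
# invented toy surrogates: cost C and end-of-cycle multiplication k as functions
# of a cross section sigma and the initial enrichment e
def C(sigma, e):
    return 100.0 * e + 5.0 * sigma

def k(sigma, e):
    return 1.0 + 0.8 * e - 0.3 * sigma

h = 1e-6
def deriv(f, sigma, e, arg):
    """Central finite difference of f with respect to argument number `arg`."""
    if arg == 0:
        return (f(sigma + h, e) - f(sigma - h, e)) / (2 * h)
    return (f(sigma, e + h) - f(sigma, e - h)) / (2 * h)

s0, e0 = 1.0, 0.04
dC_unc = deriv(C, s0, e0, 0)                             # enrichment held fixed
de_dsigma = -deriv(k, s0, e0, 0) / deriv(k, s0, e0, 1)   # enrichment shift keeping k fixed
dC_con = dC_unc + deriv(C, s0, e0, 1) * de_dsigma        # constrained sensitivity
```

In this toy the constrained sensitivity (42.5) far exceeds the unconstrained one (5.0), because holding the cycle length fixed forces an enrichment change that carries its own cost.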
Cost-Sensitive Learning for Emotion Robust Speaker Recognition
Directory of Open Access Journals (Sweden)
Li, Dongdong; Yang, Yingchun; Dai, Weihui
2014-01-01
Full Text Available In the field of information security, voice is one of the most important elements of biometrics. In particular, with the growth of voice communication over the Internet and telephone systems, huge voice data resources have become accessible. In speaker recognition, the voiceprint can serve as a unique password with which users prove their identity. However, speech carrying various emotions can cause an unacceptably high error rate and degrade the performance of a speaker recognition system. This paper addresses the problem by introducing a cost-sensitive learning technique that reweights the probability of test affective utterances at the pitch envelope level, which effectively enhances the robustness of emotion-dependent speaker recognition. Based on this technique, a new architecture of the recognition system and its components is proposed. An experiment conducted on the Mandarin Affective Speech Corpus shows an improvement of 8% in identification rate over traditional speaker recognition.
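The underlying cost-sensitive decision rule can be sketched in a few lines: instead of picking the speaker with the highest posterior, pick the one minimizing expected misclassification cost. The posteriors and cost matrix below are invented for illustration, not taken from the paper:

```python
import numpy as np

# invented posterior scores p(speaker | utterance) from some base recognizer
posteriors = np.array([0.40, 0.35, 0.25])

# cost[i, j]: cost of deciding speaker i when the true speaker is j;
# here confusing speaker 2 (say, under strong emotion) is penalized more heavily
cost = np.array([[0.0, 1.0, 3.0],
                 [1.0, 0.0, 3.0],
                 [1.0, 1.0, 0.0]])

baseline = int(np.argmax(posteriors))     # plain maximum-posterior decision
expected_cost = cost @ posteriors         # expected cost of each possible decision
decision = int(np.argmin(expected_cost))  # cost-sensitive decision
```

With these numbers the baseline rule picks speaker 0, while the cost-sensitive rule picks speaker 2: reweighting shifts decisions toward the classes whose errors are most expensive.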
Extended forward sensitivity analysis of one-dimensional isothermal flow
International Nuclear Information System (INIS)
Johnson, M.; Zhao, H.
2013-01-01
Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity of the time step relative to other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
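Forward sensitivity analysis augments the state equations with equations for the derivatives of the state with respect to each parameter. A minimal sketch for the scalar decay equation y' = -ky (not the 1-D flow equations of the paper): the sensitivity s = ∂y/∂k obeys s' = -y - ks and is integrated alongside y.

```python
import math

# integrate y' = -k*y together with its parameter sensitivity s = dy/dk,
# which satisfies s' = -y - k*s (explicit Euler with a fixed step)
k, y, s = 0.5, 1.0, 0.0
dt, t_end = 1e-4, 1.0
for _ in range(int(round(t_end / dt))):
    dy = -k * y
    ds = -y - k * s
    y += dt * dy
    s += dt * ds

# exact solution for comparison: y = exp(-k*t), s = -t*exp(-k*t)
y_exact = math.exp(-k * t_end)
s_exact = -t_end * math.exp(-k * t_end)
```

Halving `dt` roughly halves the discretization error, which is the kind of time-step error information the paper's method exposes without a separate convergence study.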
Instructional Cost Analysis: History and Present Inadequacies.
Humphrey, David A.
The cost analysis of instruction is conducted according to principles of teaching and learning that have often become historically dated. Using today's costing systems prevents determination of whether cost effectiveness actually exists. The patterns of instruction in higher education and the systems employed for instructional cost analysis are…
Harris, Catherine R.; Osterberg, E. Charles; Sanford, Thomas; Alwaal, Amjad; Gaither, Thomas W.; McAninch, Jack W.; McCulloch, Charles E.; Breyer, Benjamin N.
2016-01-01
To determine which factors are associated with higher costs of urethroplasty procedure and whether these factors have been increasing over time. Identification of determinants of extreme costs may help reduce cost while maintaining quality.We conducted a retrospective analysis using the 2001-2010 Healthcare Cost and Utilization Project-Nationwide Inpatient Sample (HCUP-NIS). The HCUP-NIS captures hospital charges which we converted to cost using the HCUP cost-to-charge ratio. Log cost linear ...
Sensitivity Analysis of a Physiochemical Interaction Model ...
African Journals Online (AJOL)
In this analysis, we study the sensitivity of the solution trajectory to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
Risk Characterization uncertainties associated description, sensitivity analysis
International Nuclear Information System (INIS)
Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.
2013-01-01
The PowerPoint presentation covers risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, the formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations.
Object-sensitive Type Analysis of PHP
Van der Hoek, Henk Erik; Hage, J
2015-01-01
In this paper we develop an object-sensitive type analysis for PHP, based on an extension of the notion of monotone frameworks to deal with the dynamic aspects of PHP, and following the framework of Smaragdakis et al. for object-sensitive analysis. We consider a number of instantiations of the
Cost effectiveness analysis in radiopharmacy
International Nuclear Information System (INIS)
Carpentier, N.; Verbeke, S.; Ducloux, T.
1999-01-01
Objective: to evaluate the cost effectiveness of radiopharmaceuticals and their quality control. Materials and methods: this retrospective study was made in the Nuclear Medicine Department of the University Hospital of Limoges. Radiopharmaceutical costs were obtained by adding the prices of the radiotracer, the materials, the equipment, the labour, the running expenses and the radioisotope. The costs of quality control were obtained by adding the prices of labour, materials, equipment, running expenses and the cost of the quality control of the 99mTc eluate. Results: during 1998, 2106 radiopharmaceuticals were prepared in the Nuclear Medicine Department. The mean cost of a radiopharmaceutical was 1430 francs (range: 846 to 4260). The mean cost of quality control was 163 francs (range: 84 to 343). Quality control thus increased the cost of a radiopharmaceutical by 11%. Conclusion: the technical methodology of quality control must be mastered to optimize the cost of this operation. (author)
Cost analysis of reliability investigations
International Nuclear Information System (INIS)
Schmidt, F.
1981-01-01
Taking Epstein's testing theory as a basis, premises are formulated for the selection of cost-optimized reliability inspection plans. Using an example, the expected testing costs and inspection time periods of various inspection plan types, standardized on the basis of the exponential distribution, are compared. It can be shown that sequential reliability tests usually involve lower costs than failure- or time-fixed tests. The most 'costly' test is to be expected with the inspection plan type NOt. (orig.) [de
Sensitivity analysis of physiochemical interaction model: which pair ...
African Journals Online (AJOL)
... of two model parameters at a time on the solution trajectory of physiochemical interaction over a time interval. Our aim is to use this powerful mathematical technique to select the important pair of parameters of this physical process which is cost-effective. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 ...
Design tradeoff studies and sensitivity analysis. Appendix B
Energy Technology Data Exchange (ETDEWEB)
1979-05-25
The results of the design trade-off studies and the sensitivity analysis of Phase I of the Near Term Hybrid Vehicle (NTHV) Program are presented. The effects of variations in the design of the vehicle body, propulsion systems, and other components on vehicle power, weight, cost, and fuel economy and an optimized hybrid vehicle design are discussed. (LCL)
Cost Analysis of Treating Neonatal Hypoglycemia with Dextrose Gel.
Glasgow, Matthew J; Harding, Jane E; Edlin, Richard
2018-04-03
To evaluate the costs of using dextrose gel as a primary treatment for neonatal hypoglycemia in the first 48 hours after birth compared with standard care. We used a decision tree to model overall costs, including those specific to hypoglycemia monitoring and treatment and those related to the infant's length of stay in the postnatal ward or neonatal intensive care unit, comparing the use of dextrose gel for treatment of neonatal hypoglycemia with placebo, using data from the Sugar Babies randomized trial. Sensitivity analyses assessed the impact of dextrose gel cost, neonatal intensive care cost, cesarean delivery rate, and costs of glucose monitoring. In the primary analysis, treating neonatal hypoglycemia using dextrose gel had an overall cost of NZ$6863.81 and standard care (placebo) cost NZ$8178.25: a saving of NZ$1314.44 per infant treated. Sensitivity analyses showed that dextrose gel remained cost saving with wide variations in dextrose gel costs, neonatal intensive care unit costs, cesarean delivery rates, and costs of monitoring. Use of buccal dextrose gel reduces hospital costs for management of neonatal hypoglycemia. Because it is also noninvasive, well tolerated, safe, and associated with improved breastfeeding, buccal dextrose gel should be routinely used for initial treatment of neonatal hypoglycemia. Australian New Zealand Clinical Trials Registry: ACTRN12608000623392. Copyright © 2018 Elsevier Inc. All rights reserved.
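A decision-tree comparison of this kind reduces to an expected-cost calculation per arm. A sketch with invented probabilities and costs, not the Sugar Babies trial inputs:

```python
def expected_cost(p_nicu, cost_ward, cost_nicu, treatment_cost):
    """Expected per-infant cost: treatment plus a probability-weighted stay cost."""
    return treatment_cost + p_nicu * cost_nicu + (1 - p_nicu) * cost_ward

# invented illustrative inputs: gel lowers the probability of a NICU admission
gel = expected_cost(p_nicu=0.15, cost_ward=4000, cost_nicu=18000, treatment_cost=60)
placebo = expected_cost(p_nicu=0.25, cost_ward=4000, cost_nicu=18000, treatment_cost=0)
saving = placebo - gel
```

A sensitivity analysis like the paper's then re-evaluates `saving` while sweeping each input (gel cost, NICU cost, admission probabilities) over plausible ranges to check the sign of the result.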
Life-Cycle Cost-Benefit Analysis
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
2010-01-01
The future use of Life-Cycle Cost-Benefit (LCCB) analysis is discussed in this paper. A more complete analysis, including not only the traditional factors and user costs but also factors which are difficult to include in the analysis, is needed in the future.
Sensitivity analysis of a PWR pressurizer
International Nuclear Information System (INIS)
Bruel, Renata Nunes
1997-01-01
A sensitivity analysis relative to the parameters and to the modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modelling options, which generated a comprehensive matrix of the influences of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)
Incorporating psychological influences in probabilistic cost analysis
Energy Technology Data Exchange (ETDEWEB)
Kujawski, Edouard; Alvaro, Mariana; Edwards, William
2004-01-08
Today's typical probabilistic cost analysis assumes an 'ideal' project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. In the real world 'Money Allocated Is Money Spent' (MAIMS principle); cost underruns are rarely available to protect against cost overruns, while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy, including budget allocation. Psychological influences such as overconfidence in assessing uncertainties, and dependencies among cost elements and risks, are other important considerations that are generally not addressed. It should then be no surprise that actual project costs often exceed the initial estimates and projects are delivered late and/or with a reduced scope. This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. The proposed approach provides a framework for developing a viable cost management strategy for
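The MAIMS principle and Weibull cost elements combine naturally in a Monte Carlo sketch: each element's spent cost is the maximum of its allocated budget and its sampled cost, so underruns are absorbed while overruns pass through. All numbers below are invented; the expert-fractile fitting and correlation structure of the paper are omitted:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# (location, scale, shape) of a 3-parameter Weibull per cost element, in $k
elements = [(50, 20, 1.5), (30, 15, 2.0), (80, 40, 1.2)]
budgets = [65, 40, 110]  # allocated budget per element, in $k

n = 20_000
totals = np.zeros(n)
for (loc, scale, shape), budget in zip(elements, budgets):
    draw = loc + scale * rng.weibull(shape, n)
    # MAIMS: money allocated is money spent -- underruns are not recovered,
    # while overruns are passed on to the project total
    totals += np.maximum(draw, budget)

# the 'ideal-project' mean that ignores the MAIMS effect entirely
naive_mean = sum(loc + scale * math.gamma(1 + 1 / shape)
                 for loc, scale, shape in elements)
```

The mean of `totals` exceeds `naive_mean`, and no outcome falls below the total allocated budget: ignoring budget behavior systematically underestimates project cost, which is the paper's central point.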
Department of the Army Cost Analysis Manual
National Research Council Canada - National Science Library
1997-01-01
.... The specific goal of this manual is to help the cost analyst serve the customer. This is done by providing reference material on cost analysis processes, methods, techniques, structures, and definitions...
Cost benefit analysis in diagnostic radiology: glossary and definitions
International Nuclear Information System (INIS)
Golder, W.
1999-01-01
Cost efficiency analyses in clinical radiology require the application of methods and techniques that are not yet part of the academic training of the specialists. The procedures used are borrowed from economics, decision theory, the applied social sciences, epidemiology and statistics. Many terms come from the Anglo-American literature and do not yet have unequivocal German equivalents. This survey presents the main terms of cost efficiency analysis in English together with a German translation, gives clear definitions and, where necessary, explanatory notes, and illustrates their application by means of concrete radiological examples. The selection of terms is based on the hierarchical models of health technology assessment and clinical outcome research by Fryback and Thornbury and by Maisey and Hutton. In concrete terms, the differences between benefit, outcomes and utility, between effectiveness, efficacy and efficiency, and between direct, indirect, intangible and marginal costs are explained. True cost efficiency analysis is compared with cost effectiveness analysis, cost identification analysis, cost minimization analysis, and cost utility analysis. The applied social sciences are represented by the Medical Outcomes Study Short Form-36 and the QALY concept. From decision theory, the analysis of hypothetical alternatives and the Markov model are taken. Finally, sensitivity analysis and the procedures for the combined statistical evaluation of comparable results (meta-analysis) are covered. (orig.) [de
RECTIFIED ETHANOL PRODUCTION COST ANALYSIS
Directory of Open Access Journals (Sweden)
Nikola J Budimir
2011-01-01
Full Text Available This paper deals with the impact of the most important factors on total production costs in bioethanol production. The most influential factors are: total investment costs, the price of raw materials (biomass, enzymes, yeast), and energy costs. Taking these factors into account, a procedure for estimating total production costs was established. In order to give insight into the relationship between the production and selling price of bioethanol, prices of bioethanol for some countries of the European Union and the United States are given.
The identification of model effective dimensions using global sensitivity analysis
International Nuclear Information System (INIS)
Kucherenko, Sergei; Feil, Balazs; Shah, Nilay; Mauntz, Wolfgang
2011-01-01
It is shown that the effective dimensions can be estimated at reasonable computational costs using variance based global sensitivity analysis. Namely, the effective dimension in the truncation sense can be found by using the Sobol' sensitivity indices for subsets of variables. The effective dimension in the superposition sense can be estimated by using the first order effects and the total Sobol' sensitivity indices. The classification of some important classes of integrable functions based on their effective dimension is proposed. It is shown that it can be used for the prediction of the QMC efficiency. Results of numerical tests verify the prediction of the developed techniques.
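The superposition-sense criterion can be sketched directly: for a purely additive function the first-order indices sum to one, so the effective dimension in the superposition sense is 1. A brute-force Monte Carlo check on an invented additive test function (Sobol'-sequence sampling and the efficient Saltelli-style estimators are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    # invented additive test function: no interaction terms at all
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

d, N, M = 3, 2000, 200
y_var = f(rng.random((20_000, d))).var()

S = []  # first-order Sobol' indices
for i in range(d):
    cond = np.empty(N)
    for n, v in enumerate(rng.random(N)):
        x = rng.random((M, d))
        x[:, i] = v                 # fix coordinate i, average out the rest
        cond[n] = f(x).mean()       # estimate of E[Y | X_i = v]
    S.append(cond.var() / y_var)

S_sum = sum(S)  # close to 1 here, so the superposition-sense dimension is 1
```

When `S_sum` falls well below 1, the gap is carried by interaction terms and the effective dimension in the superposition sense exceeds 1; the truncation-sense dimension is probed analogously with indices of leading subsets of variables.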
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed
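Tools like GRESS and EXAP instrument FORTRAN code so that it propagates derivatives automatically. The mechanism they rely on can be sketched (in Python rather than FORTRAN) with a minimal forward-mode dual number that carries a value and its derivative through arithmetic; this is a generic illustration of automatic differentiation, not the ORNL compilers' implementation:

```python
class Dual:
    """Minimal forward-mode AD value: tracks f(x) and df/dx together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)  # sum rule
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._lift(other)  # product rule
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)       # seed dx/dx = 1
y = x * x + 2 * x + 1    # y = (x + 1)^2, so dy/dx = 2(x + 1) = 8 at x = 3
```

Every arithmetic operation updates the derivative alongside the value, so a sensitivity emerges from a single evaluation of the instrumented code at roughly the cost of the original calculation.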
Sensitivity analysis for large-scale problems
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
Sensitivity analysis in life cycle assessment
Groen, E.A.; Heijungs, R.; Bokkers, E.A.M.; Boer, de I.J.M.
2014-01-01
Life cycle assessments require many input parameters and many of these parameters are uncertain; therefore, a sensitivity analysis is an essential part of the final interpretation. The aim of this study is to compare seven sensitivity methods applied to three types of case studies. Two
Supercritical extraction of oleaginous: parametric sensitivity analysis
Directory of Open Access Journals (Sweden)
Santos M.M.
2000-01-01
Full Text Available The economy has become global and competitive, so the vegetable oil extraction industries must move towards minimising production costs while generating products that meet more rigorous quality standards, including solutions that do not damage the environment. Conventional oilseed processing uses hexane as the solvent. However, this solvent is toxic and highly flammable, so the search for substitutes for hexane in the oleaginous extraction process has intensified in recent years. Supercritical carbon dioxide is a potential substitute for hexane, but more detailed studies are needed to understand the phenomena taking place in such a process. Thus, in this work a diffusive model for a semi-continuous (batch for the solids and continuous for the solvent), isothermal and isobaric extraction process using supercritical carbon dioxide is presented and submitted to a parametric sensitivity analysis by means of a two-level factorial design. The model parameters were perturbed and their main effects analysed, so that it is possible to propose strategies for high-performance operation.
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.
Ethics and Cost-Benefit Analysis
DEFF Research Database (Denmark)
Arler, Finn
The purpose of this research report is threefold. Firstly, the author traces the origins and justification of cost-benefit analysis in moral and political philosophy. Secondly, he explains some of the basic features of cost-benefit analysis as a planning tool in a step-by-step presentation. Thirdly, he presents and discusses some of the main ethical difficulties related to the use of cost-benefit analysis as a planning tool.
LBLOCA sensitivity analysis using meta models
International Nuclear Information System (INIS)
Villamizar, M.; Sanchez-Saez, F.; Villanueva, J.F.; Carlos, S.; Sanchez, A.I.; Martorell, S.
2014-01-01
This paper presents an approach to performing sensitivity analysis of the results of thermal hydraulic code simulations within a BEPU approach. The sensitivity analysis is based on the computation of Sobol' indices making use of a meta-model. The paper also presents an application to a Large-Break Loss of Coolant Accident (LBLOCA) in the cold leg of a pressurized water reactor (PWR), addressing the results of the BEMUSE program and using the thermal-hydraulic code TRACE. (authors)
Sensitivity analysis in optimization and reliability problems
International Nuclear Information System (INIS)
Castillo, Enrique; Minguez, Roberto; Castillo, Carmen
2008-01-01
The paper starts by giving the main results that allow a sensitivity analysis to be performed in a general optimization problem, including sensitivities of the objective function and of the primal and dual variables with respect to the data. In particular, general results are given for non-linear programming, and closed formulas are supplied for linear programming problems. Next, the methods are applied to a collection of civil engineering reliability problems, including a bridge crane, a retaining wall and a composite breakwater. Finally, the sensitivity analysis formulas are extended to calculus of variations problems, and a slope stability problem is used to illustrate the methods.
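For linear programming, the closed-form sensitivity of the optimal objective to a right-hand side is the corresponding dual variable (shadow price). A rough sketch on a textbook-style LP (all numbers invented), checking one shadow price by finite differences with a tiny brute-force vertex solver:

```python
import itertools
import numpy as np

c = np.array([3.0, 5.0])                       # maximize 3x + 5y
A = np.array([[1.0, 0.0],                      # x <= 4
              [0.0, 2.0],                      # 2y <= 12
              [3.0, 2.0],                      # 3x + 2y <= 18
              [-1.0, 0.0],                     # x >= 0
              [0.0, -1.0]])                    # y >= 0

def solve(b):
    best = -np.inf
    # A 2-D LP optimum lies at a vertex: intersect constraint pairs
    for i, j in itertools.combinations(range(len(A)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9):          # keep only feasible vertices
            best = max(best, c @ x)
    return best

b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])
z = solve(b)                                   # optimal value: 36.0
eps = 1e-4
db = b.copy()
db[1] += eps                                   # relax the 2nd constraint slightly
shadow2 = (solve(db) - z) / eps                # ~1.5, the dual of 2y <= 12
```

The finite-difference slope 1.5 matches the dual variable of the binding constraint, which is exactly the closed-form result the abstract refers to.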
Techniques for sensitivity analysis of SYVAC results
International Nuclear Information System (INIS)
Prust, J.O.
1985-05-01
Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter values, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
Chapter 17. Engineering cost analysis
Energy Technology Data Exchange (ETDEWEB)
Higbee, Charles V.
1998-01-01
In the early 1970s, life cycle costing (LCC) was adopted by the federal government. LCC is a method of evaluating all the costs associated with acquisition, construction and operation of a project. LCC was designed to minimize the costs of major projects, not only in consideration of acquisition and construction, but especially to emphasize the reduction of operation and maintenance costs during the project life. Authors of engineering economics texts have been very reluctant and painfully slow to explain and deal with LCC; many devote less than one page to the subject. The reason is that LCC has several major drawbacks. The first is that costs over the life of the project must be estimated from some forecast, and forecasts have proven to be highly variable and frequently inaccurate. The second is that some life span must be selected over which to evaluate the project, and many projects, especially renewable energy projects, are expected to have an unlimited life. The longer the life cycle, the more inaccurate annual costs become because of the inability to forecast accurately.
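The life-span drawback the chapter highlights is easy to see in a minimal discounted LCC calculation (all figures arbitrary): the same project looks very different depending on the horizon chosen.

```python
# Hedged sketch: life-cycle cost as acquisition plus discounted
# operation-and-maintenance, showing how the chosen life span drives the answer.
def lcc(acquisition, annual_om, rate, years):
    pv_om = sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
    return acquisition + pv_om

short = lcc(1_000_000, 50_000, 0.05, 20)   # ~1.62M over a 20-year life
long_ = lcc(1_000_000, 50_000, 0.05, 60)   # ~1.95M over a 60-year life
```

The gap between the two totals comes entirely from the horizon, and the later O&M terms are precisely the ones whose forecasts are least reliable.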
Cost-effectiveness Analysis for Technology Acquisition.
Chakravarty, A; Naware, S S
2008-01-01
In a developing country with limited resources, it is important to utilize the total cost visibility approach over the entire life-cycle of the technology and then analyse alternative options for acquiring technology. The present study analysed cost-effectiveness of an "In-house" magnetic resonance imaging (MRI) scan facility of a large service hospital against outsourcing possibilities. Cost per unit scan was calculated by operating costing method and break-even volume was calculated. Then life-cycle cost analysis was performed to enable total cost visibility of the MRI scan in both "In-house" and "outsourcing of facility" configuration. Finally, cost-effectiveness analysis was performed to identify the more acceptable decision option. Total cost for performing unit MRI scan was found to be Rs 3,875 for scans without contrast and Rs 4,129 with contrast. On life-cycle cost analysis, net present value (NPV) of the "In-house" configuration was found to be Rs-(4,09,06,265) while that of "outsourcing of facility" configuration was Rs-(5,70,23,315). Subsequently, cost-effectiveness analysis across eight Figures of Merit showed the "In-house" facility to be the more acceptable option for the system. Every decision for acquiring high-end technology must be subjected to life-cycle cost analysis.
Model reduction by weighted Component Cost Analysis
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data, which is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
Multiple predictor smoothing methods for sensitivity analysis
International Nuclear Information System (INIS)
Helton, Jon Craig; Storlie, Curtis B.
2006-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
International Nuclear Information System (INIS)
Schieber, C.; Lombard, J.; Lefaure, C.
1990-06-01
The objective of this report is to present the principles of decision-making software for radiation protection options, applying the ALARA principle. The choice of optimum options is performed by applying cost-effectiveness and cost-benefit models. Radiation protection options are described by two indicators: a simple economic indicator (the cost of radiation protection) and a dosimetric indicator (the collective dose related to protection). For both analyses the software enables sensitivity analysis. It would be possible to extend the software by integrating a module taking into account combinations of two options, since they are not independent.
A cost analysis: processing maple syrup products
Neil K. Huyler; Lawrence D. Garrett
1979-01-01
A cost analysis of processing maple sap to syrup for three fuel types, oil-, wood-, and LP gas-fired evaporators, indicates that: (1) fuel, capital, and labor are the major cost components of processing sap to syrup; (2) wood-fired evaporators show a slight cost advantage over oil- and LP gas-fired evaporators; however, as the cost of wood approaches $50 per cord, wood...
QUANTIFYING BENEFITS FOR COST-BENEFIT ANALYSIS
Attila GYORGY; Nicoleta VINTILA; Florian GAMAN
2014-01-01
Cost-Benefit Analysis is one of the most widely used financial tools for selecting future investment projects in the public and private sectors. This method is based on comparing costs and benefits in terms of constant prices. While costs are easier to predict and monetize, the benefits should be identified not only in direct relation to the investment, but also by widening the sphere of analysis to indirect benefits experienced by the community in the neighbourhood or by the whole society. During finan...
Cost analysis of in vitro fertilization.
Stern, Z; Laufer, N; Levy, R; Ben-Shushan, D; Mor-Yosef, S
1995-08-01
In vitro fertilization (IVF) has become a routine tool in the arsenal of infertility treatments. Assisted reproductive techniques are expensive, and the current "take home baby" rate of about 15% per cycle implies the need for repeated attempts until success is achieved. Israel is today facing a major change in its health care system, including the necessity to define a national package of health care benefits. Whether infertility treatment should be part of this "health basket" is in dispute; an exact cost analysis of IVF is therefore important. Since the cost of an IVF cycle varies dramatically between countries, we sought an exact breakdown of the different components of the costs involved in an IVF cycle and in achieving an IVF child in Israel. The key question is not how much we spend on IVF cycles but what is the cost of a successful outcome, i.e., a healthy child. This study intends to answer this question and to give policy makers, at various levels of the health care system, a crucial tool for their decision-making process. The cost analysis includes direct and indirect costs. The direct costs are divided into fixed costs (labor, equipment, maintenance, depreciation, and overhead) and variable costs (laboratory tests, chemicals, disposable supplies, medications, and loss of working days by the couples). The indirect costs are the costs of premature IVF babies, hospitalization of IVF pregnant women in a high-risk unit, and the cost of complications of the procedure. According to our economic analysis, an IVF cycle in Israel costs $2,560, of which fixed costs are about 50%. The cost of a "take home baby" is $19,267, including direct and indirect costs.
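The abstract's headline figures are consistent with a simple expected-value calculation, sketched here as a back-of-envelope check:

```python
# Check of the abstract's figures: $2,560 per cycle and a ~15% success rate
# imply the expected direct cost per "take home baby".
cost_per_cycle = 2560.0
success_rate = 0.15
expected_cycles = 1.0 / success_rate                      # ~6.7 attempts per birth
direct_cost_per_baby = cost_per_cycle * expected_cycles   # ~ $17,067
# The reported $19,267 therefore attributes roughly $2,200 to indirect costs
indirect_share = 19267.0 - direct_cost_per_baby
```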
Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.
1979-09-01
This last volume, includes five technical appendices which document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...
Dynamic Resonance Sensitivity Analysis in Wind Farms
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2017-01-01
(PFs) are calculated by critical eigenvalue sensitivity analysis versus the entries of the MIMO matrix. The PF analysis locates the most exciting bus of the resonances, which can be the best location to install passive or active filters to reduce the harmonic resonance problems. Time...
Management of End-Stage Ankle Arthritis: Cost-Utility Analysis Using Direct and Indirect Costs.
Nwachukwu, Benedict U; McLawhorn, Alexander S; Simon, Matthew S; Hamid, Kamran S; Demetracopoulos, Constantine A; Deland, Jonathan T; Ellis, Scott J
2015-07-15
Total ankle replacement and ankle fusion are costly but clinically effective treatments for ankle arthritis. Prior cost-effectiveness analyses for the management of ankle arthritis have been limited by a lack of consideration of indirect costs and nonoperative management. The purpose of this study was to compare the cost-effectiveness of operative and nonoperative treatments for ankle arthritis with inclusion of direct and indirect costs in the analysis. Markov model analysis was conducted from a health-systems perspective with use of direct costs and from a societal perspective with use of direct and indirect costs. Costs were derived from the 2012 Nationwide Inpatient Sample (NIS) and expressed in 2013 U.S. dollars; effectiveness was expressed in quality-adjusted life years (QALYs). Model transition probabilities were derived from the available literature. The principal outcome measure was the incremental cost-effectiveness ratio (ICER). In the direct-cost analysis for the base case, total ankle replacement was associated with an ICER of $14,500/QALY compared with nonoperative management. When indirect costs were included, total ankle replacement was both more effective and resulted in $5900 and $800 in lifetime cost savings compared with the lifetime costs following nonoperative management and ankle fusion, respectively. At a $100,000/QALY threshold, surgical management of ankle arthritis was preferred for patients younger than ninety-six years and total ankle replacement was increasingly more cost-effective in younger patients. Total ankle replacement, ankle fusion, and nonoperative management were the preferred strategy in 83%, 12%, and 5% of the analyses, respectively; however, our model was sensitive to patient age, the direct costs of total ankle replacement, the failure rate of total ankle replacement, and the probability of arthritis after ankle fusion. Compared with nonoperative treatment for the management of end-stage ankle arthritis, total ankle
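The principal outcome measure in this study, the ICER, is simply incremental cost over incremental effectiveness. A minimal sketch follows; the cost and QALY inputs below are invented placeholders (only the $14,500/QALY ratio and the $100,000/QALY threshold come from the abstract):

```python
# Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: $29,000 more spent for 2.0 additional QALYs
ratio = icer(79_000.0, 10.0, 50_000.0, 8.0)   # 14500.0 dollars per QALY
threshold = 100_000.0
preferred = ratio < threshold                 # cost-effective at the threshold
```

When the new strategy is both cheaper and more effective, as in the study's indirect-cost analysis, it dominates and no ratio needs to be computed.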
The 2nu-SVM: A Cost-Sensitive Extension of the nu-SVM
National Research Council Canada - National Science Library
Davenport, Mark A
2005-01-01
.... In this report we review cost-sensitive extensions of standard support vector machines (SVMs). In particular, we describe cost-sensitive extensions of the C-SVM and the nu-SVM, which we denote the 2C-SVM and 2nu-SVM respectively...
International Nuclear Information System (INIS)
Greenspan, E.
1982-01-01
This chapter presents the mathematical basis for sensitivity functions, discusses their physical meaning and information they contain, and clarifies a number of issues concerning their application, including the definition of group sensitivities, the selection of sensitivity functions to be included in the analysis, and limitations of sensitivity theory. Examines the theoretical foundation; criticality reset sensitivities; group sensitivities and uncertainties; selection of sensitivities included in the analysis; and other uses and limitations of sensitivity functions. Gives the theoretical formulation of sensitivity functions pertaining to ''as-built'' designs for performance parameters of the form of ratios of linear flux functionals (such as reaction-rate ratios), linear adjoint functionals, bilinear functions (such as reactivity worth ratios), and for reactor reactivity. Offers a consistent procedure for reducing energy-dependent or fine-group sensitivities and uncertainties to broad group sensitivities and uncertainties. Provides illustrations of sensitivity functions as well as references to available compilations of such functions and of total sensitivities. Indicates limitations of sensitivity theory originating from the fact that this theory is based on a first-order perturbation theory
A managerial accounting analysis of hospital costs.
Frank, W G
1976-01-01
Variance analysis, an accounting technique, is applied to an eight-component model of hospital costs to determine the contribution each component makes to cost increases. The method is illustrated by application to data on total costs from 1950 to 1973 for all U.S. nongovernmental not-for-profit short-term general hospitals. The costs of a single hospital are analyzed and compared to the group costs. The potential uses and limitations of the method as a planning and research tool are discussed.
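The variance-analysis idea applied above can be sketched in its simplest two-factor form (numbers invented): a change in total cost splits into price, volume, and joint components that sum exactly.

```python
# Classic managerial-accounting decomposition of a cost increase.
p0, q0 = 100.0, 50.0        # base-period unit cost and volume
p1, q1 = 120.0, 60.0        # current-period unit cost and volume

total_var = p1 * q1 - p0 * q0       # 2200.0 total increase
price_var = (p1 - p0) * q0          # 1000.0: price change at old volume
volume_var = (q1 - q0) * p0         # 1000.0: volume change at old price
joint_var = (p1 - p0) * (q1 - q0)   # 200.0: interaction of both changes
```

The hospital model in the paper uses eight components rather than two, but the additivity that makes the technique useful is the same.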
Exponential Sensitivity and its Cost in Quantum Physics.
Gilyén, András; Kiss, Tamás; Jex, Igor
2016-02-10
State selective protocols, like entanglement purification, lead to an essentially non-linear quantum evolution, unusual in naturally occurring quantum processes. Sensitivity to initial states in quantum systems, stemming from such non-linear dynamics, is a promising perspective for applications. Here we demonstrate that chaotic behaviour is a rather generic feature in state selective protocols: exponential sensitivity can exist for all initial states in an experimentally realisable optical scheme. Moreover, any complex rational polynomial map, including the example of the Mandelbrot set, can be directly realised. In state selective protocols, one needs an ensemble of initial states, the size of which decreases with each iteration. We prove that exponential sensitivity to initial states in any quantum system has to be related to downsizing the initial ensemble also exponentially. Our results show that magnifying initial differences of quantum states (a Schrödinger microscope) is possible; however, there is a strict bound on the number of copies needed.
Automated differentiation of computer models for sensitivity analysis
International Nuclear Information System (INIS)
Worley, B.A.
1990-01-01
Sensitivity analysis of reactor physics computer models is an established discipline after more than twenty years of active development of generalized perturbation theory based on direct and adjoint methods. Many reactor physics models have been enhanced to solve for sensitivities of model results to model data. The calculated sensitivities are usually normalized first derivatives, although some codes are capable of solving for higher-order sensitivities. The purpose of this paper is to report on the development and application of the GRESS system for automating the implementation of the direct and adjoint techniques into existing FORTRAN computer codes. The GRESS system was developed at ORNL to eliminate the costly, manpower-intensive effort required to implement the direct and adjoint techniques into already-existing FORTRAN codes. GRESS has been successfully tested for a number of codes over a wide range of applications and presently operates on VAX machines under both VMS and UNIX operating systems.
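GRESS instruments FORTRAN codes; as a language-neutral illustration of the underlying idea, here is a toy forward-mode differentiation that carries a derivative alongside each value (the `response` function is a made-up stand-in, not a GRESS-processed code):

```python
# Minimal forward-mode automatic differentiation via dual numbers.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def response(k):
    # Stand-in for a code's output as a function of one input parameter
    return 2 * k * k + 3 * k

x = Dual(2.0, 1.0)     # seed d/dk = 1 at k = 2
y = response(x)        # y.val = 14.0, y.der = dy/dk = 4*2 + 3 = 11.0
```

The automation GRESS provides is essentially this: every arithmetic operation in the existing code is augmented so that exact derivatives propagate along with the original calculation.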
CONSTRUCTION OF A DIFFERENTIAL ISOTHERMAL CALORIMETER OF HIGH SENSITIVITY AND LOW COST.
Trinca, RB; Perles, CE; Volpe, PLO
2009-01-01
The high cost of high-sensitivity commercial calorimeters may represent an obstacle for many calorimetric research groups. This work describes the construction and calibration of a batch differential heat-conduction calorimeter with sample cell volumes of about 400 μL. The calorimeter was built using two small high-sensitivity square Peltier thermoelectric sensors, and the total cost was estimated to be about...
Probabilistic sensitivity analysis in health economics.
Baio, Gianluca; Dawid, A Philip
2015-12-01
Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. © The Author(s) 2011.
TOLERANCE SENSITIVITY ANALYSIS: THIRTY YEARS LATER
Directory of Open Access Journals (Sweden)
Richard E. Wendell
2010-12-01
Tolerance sensitivity analysis was conceived in 1980 as a pragmatic approach to effectively characterize a parametric region over which objective function coefficients and right-hand-side terms in linear programming could vary simultaneously and independently while maintaining the same optimal basis. As originally proposed, the tolerance region corresponds to the maximum percentage by which coefficients or terms could vary from their estimated values. Over the last thirty years the original results have been extended in a number of ways and applied in a variety of applications. This paper is a critical review of tolerance sensitivity analysis, including extensions and applications.
FORECAST: Regulatory effects cost analysis software annual
International Nuclear Information System (INIS)
Lopez, B.; Sciacca, F.W.
1991-11-01
Over the past several years the NRC has developed a generic cost methodology for the quantification of cost/economic impacts associated with a wide range of new or revised regulatory requirements. This methodology has been developed to aid the NRC in preparing Regulatory Impact Analyses (RIAs). These generic costing methods can be useful in quantifying impacts both to industry and to the NRC. The FORECAST program was developed to facilitate the use of the generic costing methodology. This PC program integrates the major cost considerations that may be required because of a regulatory change. FORECAST automates much of the calculation typically needed in an RIA and thus reduces the time and labor required to perform these analyses. More importantly, its integrated and consistent treatment of the different cost elements should help assure comprehensiveness, uniformity, and accuracy in the preparation of the needed cost estimates.
Neuraxial blockade for external cephalic version: Cost analysis.
Yamasato, Kelly; Kaneshiro, Bliss; Salcedo, Jennifer
2015-07-01
Neuraxial blockade (epidural or spinal anesthesia/analgesia) with external cephalic version increases the external cephalic version success rate. Hospitals and insurers may affect access to neuraxial blockade for external cephalic version, but the costs to these institutions remain largely unstudied. The objective of this study was to perform a cost analysis of neuraxial blockade use during external cephalic version from hospital and insurance payer perspectives. Secondarily, we estimated the effect of neuraxial blockade on cesarean delivery rates. A decision-analysis model was developed using costs and probabilities occurring prenatally through the delivery hospital admission. Model inputs were derived from the literature, national databases, and local supply costs. Univariate and bivariate sensitivity analyses and Monte Carlo simulations were performed to assess model robustness. Neuraxial blockade was cost saving to both hospitals ($30 per delivery) and insurers ($539 per delivery) using baseline estimates. From both perspectives, however, the model was sensitive to multiple variables. Monte Carlo simulation indicated neuraxial blockade to be more costly in approximately 50% of scenarios. The model demonstrated that routine use of neuraxial blockade during external cephalic version, compared to no neuraxial blockade, prevented 17 cesarean deliveries for every 100 external cephalic versions attempted. Neuraxial blockade is associated with minimal hospital and insurer cost changes in the setting of external cephalic version, while reducing the cesarean delivery rate. © 2015 The Authors. Journal of Obstetrics and Gynaecology Research © 2015 Japan Society of Obstetrics and Gynecology.
Cost Analysis for Large Civil Transport Rotorcraft
Coy, John J.
2006-01-01
This paper presents a cost analysis of purchase price and DOC+I (direct operating cost plus interest) that supports NASA's study of three advanced rotorcraft concepts that could enter commercial transport service within 10 to 15 years. The components of DOC+I are maintenance, flight crew, fuel, depreciation, insurance, and finance. The cost analysis aims at VTOL (vertical takeoff and landing) and CTOL (conventional takeoff and landing) aircraft suitable for regional transport service. The resulting spreadsheet-implemented cost models are semi-empirical and based on Department of Transportation and Army data from actual operations of such aircraft. This paper describes a rationale for selecting cost tech factors, without which VTOL is more costly than CTOL by a factor of 10 for maintenance cost and a factor of two for purchase price. The three VTOL designs selected for cost comparisons meet the mission requirement to fly 1,200 nautical miles at 350 knots and 30,000 ft carrying 120 passengers. The lowest cost VTOL design is a large civil tilt rotor (LCTR) aircraft. With cost tech factors applied, the LCTR is reasonably competitive with the Boeing 737-700 when operated in economy regional service following the business model of the selected baseline operation, that of Southwest Airlines.
MEMS cost analysis from laboratory to industry
Freng, Ron Lawes
2016-01-01
Chapter 1: The World of MEMS; Chapter 2: Basic Fabrication Processes; Chapter 3: Surface Microengineering; Chapter 4: High Aspect Ratio Microengineering; Chapter 5: MEMS Testing; Chapter 6: MEMS Packaging; Chapter 7: Clean Rooms, Buildings and Plant; Chapter 8: The MEMSCOST Spreadsheet; Chapter 9: Product Costs - Accelerometers; Chapter 10: Product Costs - Microphones; Chapter 11: MEMS Foundries; Chapter 12: Financial Reporting and Analysis; Chapter 13: Conclusions.
Sensitivity Analysis of Centralized Dynamic Cell Selection
DEFF Research Database (Denmark)
Lopez, Victor Fernandez; Alvarez, Beatriz Soret; Pedersen, Klaus I.
2016-01-01
and a suboptimal optimization algorithm that nearly achieves the performance of the optimal Hungarian assignment. Moreover, an exhaustive sensitivity analysis with different network and traffic configurations is carried out in order to understand what conditions are more appropriate for the use of the proposed...
Sensitivity analysis in a structural reliability context
International Nuclear Information System (INIS)
Lemaitre, Paul
2014-01-01
The subject of this thesis is sensitivity analysis in a structural reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, the quantification of the impact of the uncertainty of each input parameter on the output may be of interest. This step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability scope. The aim of this thesis is to test existing sensitivity analysis methods and to propose more efficient original methods. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other, is first presented; it raises the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first makes use of binary classifiers (random forests). The second measures the departure, at each step of a subset method, between each input's original density and its density given the subset reached. A more general and original methodology reflecting the impact of the input density modification on the failure probability is then explored. The proposed methods are finally applied to the CWNR case, which motivates this thesis. (author)
Sensitivity Analysis of a Physiochemical ...
African Journals Online (AJOL)
Michael Horsfall
The numerical method of sensitivity or the principle of parsimony ... analysis is a widely applied numerical method often being used in the ...
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
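The bootstrap-based probabilistic sensitivity analysis described above can be sketched on synthetic patient-level data (all figures invented, not the H. pylori model): resample with replacement, recompute the cost-effectiveness ratio each time, and summarize the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
cost_diff = rng.normal(300.0, 100.0, n)   # incremental cost per patient
eff_diff = rng.normal(0.05, 0.02, n)      # incremental QALYs per patient

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)           # resample patients with replacement
    boot.append(cost_diff[idx].mean() / eff_diff[idx].mean())
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5])  # bootstrap 95% interval for the ICER
```

As the authors note, resampling the observed data avoids having to posit theoretical distributions for the model parameters; the interval comes directly from the empirical variability.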
Cost benefit analysis of recycling nuclear fuel cycle in Korea
International Nuclear Information System (INIS)
Lee, Jewhan; Chang, Soonheung
2012-01-01
Nuclear power has become an essential part of electricity generation to meet the continuous growth of electricity demand. The importance of nuclear waste management has been a main issue since the beginning of nuclear history. The recycling nuclear fuel cycle includes the fast reactor, which can burn nuclear wastes, and the pyro-processing technology, which can reprocess spent nuclear fuel. In this study, a methodology using Linear Programming (LP) is employed to evaluate the costs and benefits of introducing the recycling strategy and thus to assess the competitiveness of the recycling fuel cycle. The LP optimization involves tradeoffs between the fast reactor capital cost with pyro-processing cost premiums and the total system uranium price with spent nuclear fuel management cost premiums. With the help of LP and sensitivity analysis, the effect of important parameters is presented, as well as the target values for the cost and price of each key factor
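As a toy illustration of the tradeoff the LP optimization explores (not the paper's model, and with made-up cost figures), one can sweep the fast-reactor capital cost premium to find the break-even point against a once-through cycle:

```python
def recycling_cost_advantage(fr_capital_premium, uranium_price,
                             pyro_cost=200.0, snf_management_cost=500.0):
    """Very simplified cost comparison (illustrative numbers, not the paper's
    LP model): a positive value means recycling is cheaper. Once-through pays
    for uranium and spent-fuel management; recycling pays the fast-reactor
    capital premium and pyro-processing instead."""
    once_through = uranium_price + snf_management_cost
    recycling = fr_capital_premium + pyro_cost
    return once_through - recycling

# One-way sensitivity: at what capital premium does recycling break even?
break_even = None
for premium in range(0, 2001, 50):
    if recycling_cost_advantage(premium, uranium_price=300.0) <= 0:
        break_even = premium
        break
```

Sweeping a single parameter while holding the others fixed is the simplest form of the sensitivity analysis the abstract refers to; the paper's target values come from a full LP over all parameters at once.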
Combined multi-criteria and cost-benefit analysis
DEFF Research Database (Denmark)
Moshøj, Claus Rehfeld
1996-01-01
The paper is an introduction to both the theory and application of combined Cost-Benefit and Multi-Criteria Analysis. The first section is devoted to basic utility theory and its practical application in Cost-Benefit Analysis. Based on some of the problems encountered, arguments in favour of the application of utility-based Multi-Criteria Analysis methods as an extension and refinement of traditional Cost-Benefit Analysis are provided. The theory presented in this paper is closely related to the methods used in the WARP software (Leleur & Jensen, 1989); the presentation is however wider in scope. The second section introduces the stated preference methodology used in WARP to create weight profiles for project pool sensitivity analysis. This section includes a simple example. The third section discusses how decision makers can get a priori aid in making their pair-wise comparisons based on project pool...
Correa Bahnsen, Alejandro
2015-01-01
Several real-world binary classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples and not only within classes. However, standard binary classification methods do not take these costs into account, and assume a constant cost of misclassification errors. This approach is not realistic in many real-world applications. For example in credit card fraud detection, failing to detect a fraudulent transaction may have an ec...
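The idea of example-dependent costs can be sketched as below; the transaction data and cost assignments are hypothetical:

```python
def example_dependent_cost(y_true, y_pred, fp_cost, fn_cost):
    """Total misclassification cost when each example carries its own costs,
    e.g. in fraud detection a missed fraud costs the transaction amount,
    while a false alarm costs a fixed administrative fee."""
    total = 0.0
    for truth, pred, c_fp, c_fn in zip(y_true, y_pred, fp_cost, fn_cost):
        if truth == 1 and pred == 0:
            total += c_fn          # missed fraud: lose the transaction amount
        elif truth == 0 and pred == 1:
            total += c_fp          # false alarm: fixed handling cost
    return total

# Hypothetical transactions: the cost of a miss equals the transaction amount.
y_true = [1, 0, 1, 0, 0]
amounts = [500.0, 20.0, 80.0, 300.0, 45.0]
fn_cost = amounts
fp_cost = [5.0] * len(y_true)
cost_all_negative = example_dependent_cost(y_true, [0, 0, 0, 0, 0], fp_cost, fn_cost)
cost_model = example_dependent_cost(y_true, [1, 0, 0, 0, 0], fp_cost, fn_cost)
```

A standard accuracy metric would treat the two missed frauds identically; the per-example cost matrix above shows why catching the large transaction matters far more.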
Green Infrastructure Siting and Cost Effectiveness Analysis
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Parcel scale green infrastructure siting and cost effectiveness analysis. You can find more details at the project's website.
The Effect of Conditional Conservatism and Agency Cost on Investment-Cashflow Sensitivity
Directory of Open Access Journals (Sweden)
Bima Abdi Wibawa
2018-01-01
This research aims to provide empirical evidence of the effect of conditional conservatism on companies' investment-cashflow sensitivity, and of whether the impact is stronger in high agency cost firms than in low agency cost firms. This research uses the dividend payout ratio to measure agency cost because the study uses Indonesia as its research context, where most companies have concentrated ownership and debt funding, so that the dominant agency conflicts are of types two and three. The sample comprises manufacturing companies listed on the Indonesia Stock Exchange during the period 2008-2012. The total observation in this research is 474 firm-years, of which 152 are classified as high agency cost firms and 322 as low agency cost firms. The results show that as the recognition of economic losses becomes more timely, the sensitivity of firm investment to cashflow decreases. Conditional conservatism decreases investment-cashflow sensitivity in low agency cost firms but increases it in high agency cost firms. In fact, before the implementation of conditional conservatism, high agency cost firms had smaller investment-cashflow sensitivity than low agency cost firms.
Incremental ALARA cost/benefit computer analysis
International Nuclear Information System (INIS)
Hamby, P.
1987-01-01
Commonwealth Edison Company has developed and is testing an enhanced Fortran computer program to be used for cost/benefit analysis of radiation reduction projects at its six nuclear power facilities and corporate technical support groups. This paper describes a macro-driven IBM mainframe program comprising two types of analyses: an Abbreviated Program with fixed costs and base values, and an extended Engineering Version for a detailed, more thorough and time-consuming approach. The extended engineering version breaks radiation exposure costs down into two components: Health-Related Costs and Replacement Labor Costs. According to user input, the program automatically adjusts these two cost components and applies the derivation to company economic analyses such as replacement power costs, carrying charges, debt interest, and capital investment cost. The results from one or more program runs using different parameters may be compared in order to determine the most appropriate ALARA dose reduction technique. Benefits of this particular cost/benefit analysis technique include flexibility to accommodate a wide range of user data and pre-job preparation, as well as the use of proven and standardized company economic equations
Sensitivity Analysis in Two-Stage DEA
Directory of Open Access Journals (Sweden)
Athena Forghani
2015-07-01
Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
Environmentally based Cost-Benefit Analysis
International Nuclear Information System (INIS)
Magnell, M.
1993-11-01
The fundamentals of the basic elements of a new comprehensive economic assessment, MILA, developed in Sweden with inspiration from the Total Cost Assessment model, are presented. The core of the MILA approach is an expanded cost and benefit inventory. But MILA also includes a complementary internal waste stream analysis, a tool for the evaluation of environmental conflicts in monetary terms, an extended time horizon, and direct allocation of costs and revenues to products and processes. However, MILA does not ensure profitability for environmentally sound projects. Essentially, MILA is an approach for refining investment and profitability analysis of a project, investment or product. 109 refs., 38 figs
Cost-benefit analysis: reality or illusion
International Nuclear Information System (INIS)
Tait, G.W.C.
1980-01-01
The problems encountered in the application of cost-benefit analysis to the setting of acceptable radiation exposure levels are discussed, in particular the difficulty of assigning a monetary value to human life or disability, and the fact that the customary optimization of cost-benefit is not consistent with the ICRP dose limitation system, especially the ALARA principle. It is concluded that the present ICRP recommendations should remain the basis of exposure control while a carefully limited use of cost-benefit analysis may be helpful in some cases. (U.K.)
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Global sensitivity analysis by polynomial dimensional decomposition
Energy Technology Data Exchange (ETDEWEB)
Rahman, Sharif, E-mail: rahman@engineering.uiowa.ed [College of Engineering, The University of Iowa, Iowa City, IA 52242 (United States)
2011-07-15
This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random input following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Due to identical dimensional structures of PDD and analysis-of-variance decomposition, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results of the global sensitivity indices computed for smooth systems reveal significantly higher convergence rates of the PDD approximation than those from existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. However, for non-smooth functions, the convergence properties of the PDD solution deteriorate to a great extent, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
Cost analysis and estimating tools and techniques
Nussbaum, Daniel
1990-01-01
Changes in production processes reflect the technological advances permeating our products and services. U.S. industry is modernizing and automating. In parallel, direct labor is fading as the primary cost driver while engineering and technology related cost elements loom ever larger. Traditional, labor-based approaches to estimating costs are losing their relevance. Old methods require augmentation with new estimating tools and techniques that capture the emerging environment. This volume represents one of many responses to this challenge by the cost analysis profession. The Institute of Cost Analysis (ICA) is dedicated to improving the effectiveness of cost and price analysis and enhancing the professional competence of its members. We encourage and promote exchange of research findings and applications between the academic community and cost professionals in industry and government. The 1990 National Meeting in Los Angeles, jointly sponsored by ICA and the National Estimating Society (NES),...
A highly sensitive, low-cost, wearable pressure sensor based on conductive hydrogel spheres
Tai, Yanlong; Mulle, Matthieu; Ventura, Isaac Aguilar; Lubineau, Gilles
2015-01-01
Wearable pressure sensing solutions have promising future for practical applications in health monitoring and human/machine interfaces. Here, a highly sensitive, low-cost, wearable pressure sensor based on conductive single-walled carbon nanotube
Cost-effectiveness analysis of treatments for vertebral compression fractures.
Edidin, Avram A; Ong, Kevin L; Lau, Edmund; Schmier, Jordana K; Kemner, Jason E; Kurtz, Steven M
2012-07-01
Vertebral compression fractures (VCFs) can be treated by nonsurgical management or by minimally invasive surgical treatment including vertebroplasty and balloon kyphoplasty. The purpose of the present study was to characterize the cost to Medicare for treating VCF-diagnosed patients by nonsurgical management, vertebroplasty, or kyphoplasty. We hypothesized that surgical treatments for VCFs using vertebroplasty or kyphoplasty would be a cost-effective alternative to nonsurgical management for the Medicare patient population. Cost per life-year gained for VCF patients in the US Medicare population was compared between operated (kyphoplasty and vertebroplasty) and non-operated patients and between kyphoplasty and vertebroplasty patients, all as a function of patient age and gender. Life expectancy was estimated using a parametric Weibull survival model (adjusted for comorbidities) for 858 978 VCF patients in the 100% Medicare dataset (2005-2008). Median payer costs were identified for each treatment group for up to 3 years following VCF diagnosis, based on 67 018 VCF patients in the 5% Medicare dataset (2005-2008). A discount rate of 3% was used for the base case in the cost-effectiveness analysis, with 0% and 5% discount rates used in sensitivity analyses. After accounting for the differences in median costs and using a discount rate of 3%, the cost per life-year gained for kyphoplasty and vertebroplasty patients ranged from $US1863 to $US6687 and from $US2452 to $US13 543, respectively, compared with non-operated patients. The cost per life-year gained for kyphoplasty compared with vertebroplasty ranged from -$US4878 (cost saving) to $US2763. Among patients for whom surgical treatment was indicated, kyphoplasty was found to be cost effective, and perhaps even cost saving, compared with vertebroplasty. Even for the oldest patients (85 years of age and older), both interventions would be considered cost effective in terms of cost per life-year gained.
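The discounting step used in the cost-effectiveness calculation can be illustrated as follows; the cost and life-year figures are placeholders, not the study's Medicare estimates:

```python
def cost_per_life_year(total_cost, life_years_gained, discount_rate):
    """Cost per discounted life-year gained. Life-years accruing in future
    years are discounted back to present value, as in the base case (3%)
    and the sensitivity analyses (0%, 5%) described in the abstract."""
    # Simplification: one whole life-year accrues at the end of each year 1..N.
    discounted_ly = sum((1 + discount_rate) ** -t
                        for t in range(1, int(life_years_gained) + 1))
    return total_cost / discounted_ly

# Illustrative (hypothetical) numbers:
base = cost_per_life_year(10_000.0, 4, 0.03)
low = cost_per_life_year(10_000.0, 4, 0.00)
high = cost_per_life_year(10_000.0, 4, 0.05)
```

Because discounting shrinks future life-years, a higher discount rate raises the cost per life-year gained, which is why the study reports a range rather than a single figure.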
Demonstration sensitivity analysis for RADTRAN III
International Nuclear Information System (INIS)
Neuhauser, K.S.; Reardon, P.C.
1986-10-01
A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves
Systemization of burnup sensitivity analysis code. 2
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2005-02-01
Towards the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improvements in plant efficiency with rationally high performance cores and of improvements in reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments of JUPITER and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss, breeding ratio and so on. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from the actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of the big burden on users due to the complexity of the theory of burnup sensitivity and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reasons: the computational sequence may be changed for each item being analyzed, or for purposes such as interpretation of physical meaning. Therefore, it is necessary to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or constructed on occasion. For
Cost-aware Pre-training for Multiclass Cost-sensitive Deep Learning
Chung, Yu-An; Lin, Hsuan-Tien; Yang, Shao-Wen
2015-01-01
Deep learning has been one of the most prominent machine learning techniques nowadays, being the state-of-the-art on a broad range of applications where automatic feature extraction is needed. Many such applications also demand varying costs for different types of mis-classification errors, but it is not clear whether or how such cost information can be incorporated into deep learning to improve performance. In this work, we propose a novel cost-aware algorithm that takes into account the cos...
Simulation-Based Approach to Operating Costs Analysis of Freight Trucking
Directory of Open Access Journals (Sweden)
Ozernova Natalja
2015-12-01
The article is devoted to the problem of cost uncertainty in road freight transportation services. The article introduces a statistical approach, based on Monte Carlo simulation on spreadsheets, to the analysis of operating costs. The developed model gives an opportunity to estimate operating freight trucking costs under different configurations of cost factors. Important conclusions can be made after running simulations regarding sensitivity to different factors, optimal decisions and the variability of operating costs.
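A minimal spreadsheet-style Monte Carlo of trip operating costs might look like the following; all cost-factor distributions and figures are illustrative assumptions, not the article's data:

```python
import random

def simulate_trip_cost(rng, distance_km):
    """One Monte Carlo draw of the operating cost of a single freight trip.
    Distributions below are illustrative assumptions only."""
    fuel_price = rng.uniform(1.2, 1.6)        # EUR per litre
    consumption = rng.gauss(30.0, 2.0)        # litres per 100 km
    driver_rate = rng.uniform(15.0, 20.0)     # EUR per hour
    speed = rng.uniform(60.0, 80.0)           # average km/h
    fuel_cost = fuel_price * consumption / 100.0 * distance_km
    labour_cost = driver_rate * distance_km / speed
    return fuel_cost + labour_cost

rng = random.Random(42)
samples = [simulate_trip_cost(rng, distance_km=500.0) for _ in range(10_000)]
mean_cost = sum(samples) / len(samples)
spread = max(samples) - min(samples)
```

The spread of the simulated distribution is exactly the cost variability the article argues a single deterministic estimate hides.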
Analysis of capital and operating costs associated with high level waste solidification processes
International Nuclear Information System (INIS)
Heckman, R.A.; Kniazewycz, B.G.
1978-03-01
An analysis was performed to evaluate the sensitivity of annual operating costs and capital costs of waste solidification processes to various parameters defined by the requirements of a proposed Federal waste repository. Five process methods and waste forms examined were: salt cake, spray calcine, fluidized bed calcine, borosilicate glass, and supercalcine multibarrier. Differential cost estimates of the annual operating and maintenance costs and the capital costs for the five HLW solidification alternates were developed
Cost Analysis of Poor Quality Using a Software Simulation
Directory of Open Access Journals (Sweden)
Jana Fabianová
2017-02-01
The issues of quality, the cost of poor quality and the factors affecting quality are crucial to maintaining competitiveness in business activities. The use of software applications and computer simulation enables more effective quality management. Simulation tools allow the variability of several variables to be incorporated in experiments and their joint impact on the final output to be evaluated. The article presents a case study focused on the possibility of using Monte Carlo computer simulation in the field of quality management. Two approaches for determining the cost of poor quality are introduced. One is retrospective, where the cost of poor quality and the production process are calculated based on historical data. The second approach uses the probabilistic characteristics of the input variables by means of simulation, and provides a prospective view of the costs of poor quality. Simulation output in the form of tornado and sensitivity charts complements the risk analysis.
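The one-way sensitivity behind a tornado chart can be sketched as below, with a made-up cost-of-poor-quality model rather than the case study's:

```python
def poor_quality_cost(units, defect_rate, rework_cost, scrap_rate, scrap_cost):
    """Simple illustrative cost-of-poor-quality model (not the article's):
    defective units are reworked, and a fraction of them is scrapped instead."""
    defects = units * defect_rate
    return (defects * (1 - scrap_rate) * rework_cost
            + defects * scrap_rate * scrap_cost)

base_args = dict(units=10_000, defect_rate=0.04, rework_cost=12.0,
                 scrap_rate=0.25, scrap_cost=40.0)
base = poor_quality_cost(**base_args)

# One-way sensitivity: the output swing when each input moves +/-20% on its
# own is the data plotted as a tornado chart.
tornado = {}
for name in base_args:
    lo_args = dict(base_args, **{name: base_args[name] * 0.8})
    hi_args = dict(base_args, **{name: base_args[name] * 1.2})
    tornado[name] = abs(poor_quality_cost(**hi_args) - poor_quality_cost(**lo_args))
widest_bar = max(tornado, key=tornado.get)
```

Sorting the swings from widest to narrowest gives the tornado shape; the widest bars identify the inputs whose uncertainty matters most.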
Cost-sensitive case-based reasoning using a genetic algorithm: application to medical diagnosis.
Park, Yoon-Joo; Chun, Se-Hak; Kim, Byung-Chun
2011-02-01
The paper studies a new learning technique called cost-sensitive case-based reasoning (CSCBR), incorporating unequal misclassification costs into the CBR model. Conventional CBR is now considered a suitable technique for diagnosis, prognosis and prescription in medicine. However, it lacks the ability to reflect asymmetric misclassification and often assumes that the cost of misclassifying a positive diagnosis (an illness) as a negative one (no illness) is the same as that of the opposite error. Thus, the objective of this research is to overcome this limitation of conventional CBR and encourage applying CBR to many real-world medical cases associated with asymmetric misclassification costs. The main idea involves adjusting the optimal cut-off classification point for classifying the absence or presence of disease, and the cut-off distance point for selecting optimal neighbors within search spaces based on similarity distribution. These steps are dynamically adapted to new target cases using a genetic algorithm. We apply the proposed method to five real medical datasets and compare the results with two other cost-sensitive learning methods, C5.0 and CART. Our findings show that the total misclassification cost of CSCBR is lower than that of the other cost-sensitive methods in many cases. Even though the genetic algorithm has limitations in terms of unstable results and over-fitting of training data, CSCBR results with the GA are better overall than those of the other methods. Also, paired t-test results indicate that the total misclassification cost of CSCBR is significantly less than that of C5.0 and CART for several datasets. We have proposed a new CBR method called cost-sensitive case-based reasoning (CSCBR) that can incorporate unequal misclassification costs into CBR and optimize the number of neighbors dynamically using a genetic algorithm. It is meaningful not only for introducing the concept of cost-sensitive learning to CBR, but also for encouraging the use of CBR in the medical area
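The cut-off adjustment idea can be illustrated with the textbook cost-sensitive threshold, a much simpler device than the paper's GA-tuned CBR; the probabilities and costs below are invented:

```python
def optimal_cutoff(fp_cost, fn_cost):
    """Bayes-optimal probability cutoff under asymmetric misclassification
    costs: predict 'ill' when the estimated probability of illness exceeds
    C_FP / (C_FP + C_FN). Shown only to illustrate the cut-off idea."""
    return fp_cost / (fp_cost + fn_cost)

def total_cost(probs, labels, cutoff, fp_cost, fn_cost):
    # Sum the cost of every false positive and false negative at this cutoff.
    cost = 0.0
    for p, y in zip(probs, labels):
        pred = 1 if p >= cutoff else 0
        if pred == 1 and y == 0:
            cost += fp_cost
        elif pred == 0 and y == 1:
            cost += fn_cost
    return cost

# Missing an illness costs 9x a false alarm, so the cutoff drops to 0.1.
cutoff = optimal_cutoff(fp_cost=1.0, fn_cost=9.0)
probs = [0.05, 0.15, 0.40, 0.80, 0.08]
labels = [0, 1, 0, 1, 1]
asym = total_cost(probs, labels, cutoff, 1.0, 9.0)
naive = total_cost(probs, labels, 0.5, 1.0, 9.0)
```

Lowering the cutoff trades a cheap false alarm for an expensive miss, which is the asymmetry the conventional 0.5 threshold ignores.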
Complete daVinci versus laparoscopic pyeloplasty: cost analysis.
Bhayani, Sam B; Link, Richard E; Varkarakis, John M; Kavoussi, Louis R
2005-04-01
Computer-assisted pyeloplasty with the daVinci system is an emerging technique to treat ureteropelvic junction (UPJ) obstruction. A relative cost analysis was performed assessing this technology in comparison with purely laparoscopic pyeloplasty. Eight patients underwent computer-assisted (daVinci) dismembered pyeloplasty (CP) via a transperitoneal four-port approach. They were compared with 13 patients who underwent purely laparoscopic pyeloplasty (LP). All patients had a primary UPJ obstruction and were matched for age, sex, and body mass index. The cost of equipment and capital depreciation for both procedures, as well as assessment of room set-up time, takedown time, and personnel were analyzed. Surgeons and nursing staff for both groups were experienced in both laparoscopy and daVinci procedures. One- and two-way financial analysis was performed to assess relative costs. The mean set-up and takedown time was 71 minutes for CP and 49 minutes for LP. The mean length of stay was 2.3 days for CP and 2.5 days for LP. The mean operating room (OR) times for CP and LP were 176 and 210 minutes, respectively. There were no complications in either group. One-way cost analysis with an economic model showed that LP is more cost effective than CP at our hospital if LP OR time is cost effective as LP. Two-way sensitivity analysis shows that in-room time must still be 500 to obtain cost equivalence for CP. Perioperative parameters for CP are encouraging. However, the costs are a clear disadvantage. In our hospital, it is more cost effective to teach and perform LP than to perform CP.
Development of cost-benefit analysis system
International Nuclear Information System (INIS)
Shiba, Tsuyoshi; Mishima, Tetsuya; Yuyama, Tomonori; Suzuki, Atsushi
2001-01-01
In order to promote FBR development, it is necessary to view the various benefits brought by the introduction of FBR from multiple perspectives, and to grasp quantitatively both those benefits and an adequate R and D investment scale corresponding to them. In this study, the prototype structured in the previous study was improved to be able to perform cost-benefit analysis. One example of the improvements made in the system is the addition of a subroutine used for comparison between new energy sources and the benefits brought by the introduction of FBR, with special emphasis on the addition of logic for analyzing externalities of the new energy sources. Other improvements are the change from the Conventional Year Expense Ratio method for power generation cost to the Average Durable Year Cost method, the addition of a database function and the turning of input data into a database, and a review of the treatment of cost by type of waste material and of the price of uranium. The cost-benefit analysis system was also restructured using Microsoft ACCESS so that it has a database function. As a result of the improvements mentioned above, we expect the improved cost-benefit analysis system to have higher generality than before; a great deal of benefit from future application of the system is therefore expected. (author)
Systemization of burnup sensitivity analysis code
International Nuclear Information System (INIS)
Tatsumi, Masahiro; Hyoudou, Hideaki
2004-02-01
Toward the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoint of improvements in plant efficiency with rationally high performance cores and of improvements in reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments of JUPITER and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, such as reaction rate distribution and control rod worth, but also burnup characteristics, such as burnup reactivity loss, breeding ratio and so on. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as the experimental fast reactor core 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from the actual cores based on the cross-section adjustment method. So far, an analysis code for burnup sensitivity, SAGEP-BURN, has been developed and its effectiveness confirmed. However, there is a problem that the analysis sequence becomes inefficient because of the big burden on users due to the complexity of the theory of burnup sensitivity and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, for several reasons: the computational sequence may be changed for each item being analyzed, or for purposes such as interpretation of physical meaning. Therefore it is necessary to systemize the current code for burnup sensitivity analysis with component blocks of functionality that can be divided or constructed on occasion. For this
Importance measures in global sensitivity analysis of nonlinear models
International Nuclear Information System (INIS)
Homma, Toshimitsu; Saltelli, Andrea
1996-01-01
The present paper deals with a new method of global sensitivity analysis of nonlinear models. This is based on a measure of importance to calculate the fractional contribution of the input parameters to the variance of the model prediction. Measures of importance in sensitivity analysis have been suggested by several authors, whose work is reviewed in this article. More emphasis is given to the developments of sensitivity indices by the Russian mathematician I.M. Sobol'. Given that Sobol's treatment of the measure of importance is the most general, his formalism is employed throughout this paper, where conceptual and computational improvements of the method are presented. The computational novelty of this study is the introduction of the 'total effect' parameter index. This index provides a measure of the total effect of a given parameter, including all the possible synergetic terms between that parameter and all the others. Rank transformation of the data is also introduced in order to increase the reproducibility of the method. These methods are tested on a few analytical and computer models. The main conclusion of this work is the identification of a sensitivity analysis methodology which is flexible, accurate and informative, and which can be achieved at reasonable computational cost
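Sobol' first-order and total-effect indices of the kind discussed above can be estimated by Monte Carlo; the sketch below uses the Saltelli/Jansen estimators on a simple additive test model with known variance shares (16:4:1, so the first-order index of the first input is 16/21):

```python
import random

def sobol_indices(f, d, n=20_000, seed=7):
    """Monte Carlo estimates of Sobol' first-order (S_i) and total-effect
    (ST_i) indices using two independent input matrices A and B and the
    column-swapped matrices AB_i (Saltelli sampling scheme)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA + fB) / (2 * n)
    var = sum((y - mean) ** 2 for y in fA + fB) / (2 * n)
    S, ST = [], []
    for i in range(d):
        # AB_i: rows of A with column i replaced by the corresponding column of B.
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(yb * (yab - ya) for yb, yab, ya in zip(fB, fABi, fA)) / n / var
        st_i = sum((ya - yab) ** 2 for ya, yab in zip(fA, fABi)) / (2 * n) / var
        S.append(s_i)
        ST.append(st_i)
    return S, ST

def model(x):
    # Additive test model over U(0,1) inputs; no interactions, so S_i == ST_i.
    return 4 * x[0] + 2 * x[1] + x[2]

S, ST = sobol_indices(model, d=3)
```

For this purely additive model the total-effect indices coincide with the first-order ones; a gap between ST_i and S_i would signal the synergetic (interaction) terms the abstract describes.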
International Nuclear Information System (INIS)
Barber, A. D.; Busch, R.
2009-01-01
The goal of this work is to obtain sensitivities from direct uncertainty analysis calculations and to correlate those calculated values with the sensitivities produced by TSUNAMI-3D (Tools for Sensitivity and Uncertainty Analysis Methodology Implementation in Three Dimensions). A full sensitivity analysis is performed on a critical experiment to determine its overall uncertainty. Small perturbation calculations are performed for all known uncertainties to obtain the total uncertainty of the experiment. The results of a critical experiment are known only as well as its geometric and material properties. The goal of establishing this relationship is to simplify the uncertainty quantification process in assessing a critical experiment, while still considering all of the important parameters. (authors)
Sensitivity analysis of the Two Geometry Method
International Nuclear Information System (INIS)
Wichers, V.A.
1993-09-01
The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, small pipe diameters (less than 40 mm) and low pressures (less than 150 Pa). This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst-case measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty which are experimentally inaccessible. (orig.)
An Analysis of Rocket Propulsion Testing Costs
Ramirez, Carmen; Rahman, Shamim
2010-01-01
The primary mission at NASA Stennis Space Center (SSC) is rocket propulsion testing. Such testing is commonly characterized as one of two types: production testing for certification and acceptance of engine hardware, and developmental testing for prototype evaluation or research and development (R&D) purposes. For programmatic reasons there is a continuing need to assess and evaluate the test costs for the various types of test campaigns that involve liquid rocket propellant test articles. Presently, in fact, there is a critical need to provide guidance on what represents best value for testing and to provide some key economic insights for decision-makers within NASA and for test customers outside the Agency. Hence, selected rocket propulsion test databases and references have been evaluated and analyzed with the intent of discovering correlations between technical information and test costs that could help produce more reliable and accurate cost projections in the future. The process of searching, collecting, and validating propulsion test cost information presented some unique obstacles, which led to a set of recommendations for improvement in order to facilitate future cost information gathering and analysis. In summary, this historical account and evaluation of rocket propulsion test cost information will enhance understanding of the various kinds of project cost information and identify certain trends of interest to the aerospace testing community.
Permeable treatment wall design and cost analysis
International Nuclear Information System (INIS)
Manz, C.; Quinn, K.
1997-01-01
A permeable treatment wall utilizing the funnel and gate technology has been chosen as the final remedial solution for one industrial site, and is being considered at other contaminated sites, such as a closed municipal landfill. Reactive iron gates will be utilized for treatment of chlorinated VOCs identified in the groundwater. Alternatives for the final remedial solution at each site were evaluated to achieve site closure in the most cost-effective manner. This paper presents the remedial alternatives and cost analyses for each site. Several options are available at most sites for the design of a permeable treatment wall. Our analysis demonstrates that the major cost factors for this technology are the design concept, length, thickness, location and construction methods for the reactive wall. Minimizing the amount of iron by placement in the most effective area and construction by the lowest-cost method is critical to achieving a low-cost alternative. These costs dictate the design of a permeable treatment wall, including selection among a variety of alternatives (e.g., a continuous wall versus a funnel and gate system, fully penetrating gates versus partially penetrating gates, etc.). Selection of the appropriate construction methods and materials for the site can reduce the overall cost of the wall.
Making choices in health: WHO guide to cost effectiveness analysis
National Research Council Canada - National Science Library
Tan Torres Edejer, Tessa
2003-01-01
[Table of contents excerpt] Part One: Methods for Generalized Cost-Effectiveness Analysis. 1. What is Generalized Cost-Effectiveness Analysis? 2. Undertaking ...
[Cost analysis for navigation in knee endoprosthetics].
Cerha, O; Kirschner, S; Günther, K-P; Lützner, J
2009-12-01
Total knee arthroplasty (TKA) is one of the most frequent procedures in orthopaedic surgery. The outcome depends on a range of factors including alignment of the leg and the positioning of the implant in addition to patient-associated factors. Computer-assisted navigation systems can improve the restoration of a neutral leg alignment. This procedure has become established especially in Europe and North America. The additional expenses are not reimbursed in the German DRG system (Diagnosis Related Groups). In the present study a cost analysis of computer-assisted TKA compared to the conventional technique was performed. The acquisition expenses of various navigation systems (5 and 10 year depreciation), annual costs for maintenance and software updates as well as the accompanying costs per operation (consumables, additional operating time) were considered. The additional operating time was determined on the basis of a meta-analysis of the current literature. Situations with 25, 50, 100, 200 and 500 computer-assisted TKAs per year were simulated. The amount of the incremental costs of computer-assisted TKA depends mainly on the annual volume and the additional operating time. A relevant decrease of the incremental costs was detected between 50 and 100 procedures per year. In a model with 100 computer-assisted TKAs per year, an additional operating time of 14 min and a 10-year depreciation of the investment costs, the incremental expenses amount to 300-395 euros depending on the navigation system. Computer-assisted TKA is associated with additional costs. From an economic point of view a volume of more than 50 procedures per year appears favourable. The cost-effectiveness could be estimated if long-term results show a reduction of revisions or a better clinical outcome.
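The cost structure described above, annual fixed costs (depreciation, maintenance) spread over procedure volume plus per-operation costs (consumables, extra operating time), can be sketched as follows. All figures below are hypothetical placeholders, not the inputs used in the study:

```python
def incremental_cost_per_tka(acquisition, depreciation_years, annual_maintenance,
                             consumables_per_op, extra_minutes, or_cost_per_minute,
                             annual_volume):
    """Incremental cost of a computer-assisted TKA over the conventional
    technique: annual fixed costs divided by case volume, plus per-case costs."""
    annual_fixed = acquisition / depreciation_years + annual_maintenance
    return annual_fixed / annual_volume + consumables_per_op + extra_minutes * or_cost_per_minute

# Hypothetical figures (euros); the study's own inputs are not reproduced here.
cost = incremental_cost_per_tka(acquisition=120000, depreciation_years=10,
                                annual_maintenance=5000, consumables_per_op=100,
                                extra_minutes=14, or_cost_per_minute=8,
                                annual_volume=100)
print(round(cost, 2))  # 382.0 with these placeholder inputs
```

The fixed-cost term explains the abstract's observation that incremental cost drops sharply with annual volume: it is inversely proportional to the number of cases.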
[Cost analysis of intraoperative neurophysiological monitoring (IOM)].
Kombos, T; Suess, O; Brock, M
2002-01-01
A number of studies demonstrate that a significant reduction of postoperative neurological deficits can be achieved by applying intraoperative neurophysiological monitoring (IOM) methods. A cost analysis of IOM is imperative considering the strained financial situation in the public health services. The calculation model presented here comprises two cost components: material and personnel. The material costs comprise consumer goods and depreciation of capital goods. The computation base was 200 IOM cases per year. Consumer goods were calculated for each IOM procedure respectively. The following constellation served as a basis for calculating personnel costs: (a) a medical technician (salary level BAT Vc) for one hour per case; (b) a resident (BAT IIa) for the entire duration of the measurement, and (c) a senior resident (BAT Ia) only for supervision. An IOM device consisting of an 8-channel preamplifier, an electrical and acoustic stimulator and special software costs 66,467 euros on average. With an annual depreciation of 20%, the costs are 13,293 euros per year. This amounts to 66.46 euros per case for the capital goods. For reusable materials a sum of 0.75 euros per case was calculated. Disposable materials were calculated for each procedure respectively. Total costs of 228.02 euros per case were calculated for surgery on the peripheral nervous system. They amount to 196.40 euros per case for spinal interventions and to 347.63 euros per case for more complex spinal operations. Operations in the cerebellopontine angle and brain stem cost 376.63 euros and 397.33 euros per case respectively. IOM costs amount to 328.03 euros per case for surgical management of an intracranial aneurysm and to 537.15 euros per case for functional interventions. Expenses run up to 833.63 euros per case for operations near the
Sensitivity analysis of reactive ecological dynamics.
Verdy, Ariane; Caswell, Hal
2008-08-01
Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
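The central quantity above, reactivity, is the maximum instantaneous growth rate of perturbation magnitude, and for a linear(ized) system dx/dt = Ax it equals the largest eigenvalue of the symmetric part of A. A minimal numerical sketch; the matrix is an illustrative non-normal Jacobian, not one taken from the paper's predator-prey or food-web models:

```python
import numpy as np

def reactivity(A):
    """Largest eigenvalue of the symmetric part of A: the maximum
    instantaneous growth rate of perturbation magnitude in dx/dt = A x."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

# Illustrative non-normal Jacobian: asymptotically stable yet reactive
A = np.array([[-1.0, 5.0],
              [ 0.0, -1.0]])
print(np.linalg.eigvals(A).real)  # both negative: stable equilibrium
print(reactivity(A))              # positive: perturbations first amplify
```

The example shows the defining feature of reactivity: eigenvalues of A certify eventual decay, while a positive reactivity certifies transient growth before that decay.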
An Analysis of Rocket Propulsion Testing Costs
Ramirez-Pagan, Carmen P.; Rahman, Shamim A.
2009-01-01
The primary mission at NASA Stennis Space Center (SSC) is rocket propulsion testing. Such testing is generally performed within two arenas: (1) production testing for certification and acceptance, and (2) developmental testing for prototype or experimental purposes. The customer base consists of NASA programs, DOD programs, and commercial programs. Resources in place to perform on-site testing include both civil servants and contractor personnel, hardware and software including data acquisition and control, and 6 test stands with a total of 14 test positions/cells. For several business reasons there is the need to augment understanding of the test costs for all the various types of test campaigns. Historical propulsion test data was evaluated and analyzed in many different ways with the intent to find any correlation or statistics that could help produce more reliable and accurate cost estimates and projections. The analytical efforts included timeline trends, statistical curve fitting, average cost per test, cost per test second, test cost timeline, and test cost envelopes. Further, the analytical effort included examining the test cost from the perspective of thrust level and test article characteristics. Some of the analytical approaches did not produce evidence strong enough for further analysis. Other analytical approaches yielded promising results and are candidates for further development and focused study. Information was organized into three elements: a Project Profile, a Test Cost Timeline, and a Cost Envelope. The Project Profile is a snapshot of the project life cycle in timeline fashion, which includes various statistical analyses. The Test Cost Timeline shows the cumulative average test cost, for each project, at each month where there was test activity. The Test Cost Envelope shows a range of cost for a given number of test(s). The supporting information upon which this study was performed came from diverse sources and thus it was necessary to
Contributions to sensitivity analysis and generalized discriminant analysis
International Nuclear Information System (INIS)
Jacques, J.
2005-12-01
Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the model's output variables react to variations of its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Because classical sensitivity indices lose their interpretability in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, by using information contained in a training sample, when these two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
Cost analysis of paroxetine versus imipramine in major depression.
Bentkover, J D; Feighner, J P
1995-09-01
A simulation decision analytical model was used to compare the annual direct medical costs of treating patients with major depression using the selective serotonin reuptake inhibitor (SSRI) paroxetine or the tricyclic antidepressant (TCA) imipramine. Medical treatment patterns were determined from focus groups of general and family practitioners and psychiatrists in Boston, Dallas and Chicago, US. Direct medical costs included the wholesale drug acquisition costs (based on a 6-month course of drug therapy), psychiatrist and/or general practitioner visits, hospital outpatient visits, hospitalisation and electroconvulsive therapy. Acute phase treatment failure rates were derived from an intention-to-treat analysis of a previously published trial of paroxetine, imipramine and placebo in patients with major depression. Maintenance phase relapse rates were obtained from a 12-month trial of paroxetine, supplemented from the medical literature. The relapse rates for the final 6 months of the year were obtained from medical literature and expert opinion. Direct medical costs were estimated from a health insurance claims database. The estimated total direct medical cost per patient was slightly lower using paroxetine ($US2348) than generic imipramine ($US2448) as first-line therapy. This result was sensitive to short term dropout rates but robust to changes in other major parameters, including hospitalisation costs and relapse rates. The financial benefit of paroxetine, despite its 15-fold higher acquisition cost compared with imipramine, is attributable to a higher rate of completion of the initial course of therapy and consequent reduced hospitalisation rates.
Cost-effectiveness analysis and innovation.
Jena, Anupam B; Philipson, Tomas J
2008-09-01
While cost-effectiveness (CE) analysis has provided a guide to allocating often scarce resources spent on medical technologies, less emphasis has been placed on the effect of such criteria on the behavior of innovators who make health care technologies available in the first place. A better understanding of the link between innovation and cost-effectiveness analysis is particularly important given the large role of technological change in the growth in health care spending and the growing interest in the explicit use of CE thresholds in guiding technology adoption in several Westernized countries. We analyze CE analysis in a standard market context, and stress that a technology's cost-effectiveness is closely related to the consumer surplus it generates. Improved CE therefore often clashes with interventions to stimulate producer surplus, such as patents. We derive the inconsistency between technology adoption based on CE analysis and economic efficiency. Indeed, static efficiency, dynamic efficiency, and improved patient health may all be induced by the cost-effectiveness of the technology being at its worst level. As producer appropriation of the social surplus of an innovation is central to the dynamic efficiency that should guide CE adoption criteria, we exemplify how appropriation can be inferred from existing CE estimates. For an illustrative sample of technologies considered, we find that the median technology has an appropriation of about 15%. To the extent that such incentives are deemed either too low or too high compared to dynamically efficient levels, CE thresholds may be appropriately raised or lowered to improve dynamic efficiency.
A Cost-effectiveness Analysis of Early vs Late Tracheostomy.
Liu, C Carrie; Rudmik, Luke
2016-10-01
The timing of tracheostomy in critically ill patients requiring mechanical ventilation is controversial. An important consideration that is currently missing in the literature is an evaluation of the economic impact of an early tracheostomy strategy vs a late tracheostomy strategy. To evaluate the cost-effectiveness of the early tracheostomy strategy vs the late tracheostomy strategy. This economic analysis was performed using a decision tree model with a 90-day time horizon. The economic perspective was that of the US health care third-party payer. The primary outcome was the incremental cost per tracheostomy avoided. Probabilities were obtained from meta-analyses of randomized clinical trials. Costs were obtained from the published literature and the Healthcare Cost and Utilization Project database. A multivariate probabilistic sensitivity analysis was performed to account for uncertainty surrounding mean values used in the reference case. The reference case demonstrated that the cost of the late tracheostomy strategy was $45 943.81 for 0.36 of effectiveness. The cost of the early tracheostomy strategy was $31 979.12 for 0.19 of effectiveness. The incremental cost-effectiveness ratio for the late tracheostomy strategy compared with the early tracheostomy strategy was $82 145.24 per tracheostomy avoided. With a willingness-to-pay threshold of $50 000, the early tracheostomy strategy is cost-effective with 56% certainty. The adoption of an early vs a late tracheostomy strategy depends on the priorities of the decision-maker. Up to a willingness-to-pay threshold of $80 000 per tracheostomy avoided, the early tracheostomy strategy has a higher probability of being the more cost-effective intervention.
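The incremental cost-effectiveness ratio reported above follows directly from the reference-case figures: the cost difference between the two strategies divided by their effectiveness difference.

```python
def icer(cost_new, eff_new, cost_ref, eff_ref):
    """Incremental cost-effectiveness ratio of one strategy vs a reference."""
    return (cost_new - cost_ref) / (eff_new - eff_ref)

# Reference-case figures reported in the abstract
late_cost, late_eff = 45943.81, 0.36
early_cost, early_eff = 31979.12, 0.19
print(round(icer(late_cost, late_eff, early_cost, early_eff), 2))  # 82145.24
```

The result reproduces the abstract's $82 145.24 per tracheostomy avoided, confirming the arithmetic of the reference case.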
Cost-effectiveness Analysis with Influence Diagrams.
Arias, M; Díez, F J
2015-01-01
Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. To develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay - separated by cost-effectiveness thresholds - and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
Simple Sensitivity Analysis for Orion GNC
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
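One sensitivity measure of the kind described, estimating success probability conditional on where a dispersed input falls, can be sketched as follows. The two inputs and the miss-distance model are invented stand-ins for illustration, not Orion data or the CFT's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for a Monte Carlo campaign: two dispersed inputs and a
# pass/fail requirement on touchdown miss distance.
n = 20000
mass = rng.normal(1.0, 0.1, n)                         # dispersed, irrelevant here
wind = rng.normal(0.0, 1.0, n)                         # dispersed, drives the miss
miss = 50 * np.abs(wind) + 5 * np.abs(rng.normal(0, 1, n))
success = miss < 100                                   # requirement threshold

def success_prob_by_bin(x, success, bins=5):
    """P(success | x in quantile bin): an input whose conditional success
    probability varies strongly across its range is a critical factor."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    return np.array([success[idx == b].mean() for b in range(bins)])

print(success_prob_by_bin(wind, success))  # dips in the outer bins
print(success_prob_by_bin(mass, success))  # roughly flat
```

A large spread in the conditional success probabilities flags `wind` as a driving factor, while the flat profile for `mass` marks it as insignificant for this requirement.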
Cost benefit analysis of reactor safety systems
International Nuclear Information System (INIS)
Maurer, H.A.
1984-01-01
Cost/benefit analysis of reactor safety systems is one appropriate way to address reactor safety. The Commission of the European Communities supported a study on the cost-benefit or cost-effectiveness of safety systems installed in modern PWR nuclear power plants. The following systems, and their cooperation in emergency cases, were investigated in this study: the containment system (double containment), the leakage exhaust and control system, the annulus release exhaust system and the containment spray system. The benefit of a safety system is defined according to its contribution to the reduction of the radiological consequences for the environment after a LOCA. The analysis is performed in two steps: first, the emergency core cooling system is considered to function properly; second, failure of the emergency core cooling system is assumed (with the possible consequence of core melt-down). The results may demonstrate that striving for cost-effectiveness can produce a safer end result than the philosophy of safety at any cost. (orig.)
Costs Analysis of Iron Casts Manufacturing
Directory of Open Access Journals (Sweden)
S. Kukla
2012-04-01
The article presents the issues of costs analysis of iron casts manufacturing using automated foundry lines. Particular attention was paid to departmental costs, conversion costs and costs of in-plant transport. After the Pareto analysis had been carried out, it was possible to set the model area of the process and focus on improving activities related to finishing of a chosen group of casts. In order to eliminate losses, the activities realised in this domain were divided into activities with added value, activities with partially added value and activities without added value. To streamline the production flow, it was proposed to change the location of workstations related to grinding, control and machining of casts. Within the process of constant improvement of manufacturing processes, the aspect of work ergonomics at a workstation was taken into account. As a result of the undertaken actions, some activities without added value were eliminated, efficiency was increased and prime costs of manufacturing casts with regard to finishing treatment were lowered.
Nuclear fuel cycle cost analysis using a probabilistic simulation technique
International Nuclear Information System (INIS)
Ko, Won Il; Choi, Jong Won; Kang, Chul Hyung; Lee, Jae Sol; Lee, Kun Jai
1998-01-01
A simple approach was described to incorporate the Monte Carlo simulation technique into a fuel cycle cost estimate. As a case study, the once-through and recycle fuel cycle options were tested with some alternatives (i.e. the change of distribution type for input parameters), and the simulation results were compared with the values calculated by a deterministic method. A three-estimate approach was used for converting cost inputs into the statistical parameters of assumed probabilistic distributions. It was indicated that the Monte Carlo simulation by a Latin Hypercube Sampling technique and subsequent sensitivity analyses were useful for examining uncertainty propagation of fuel cycle costs, and could more efficiently provide information to decision makers than a deterministic method. It was shown from the change of distribution types of input parameters that the values calculated by the deterministic method were set around the 40th-50th percentile of the output distribution function calculated by probabilistic simulation. Assuming lognormal distribution of inputs, however, the values calculated by the deterministic method were set around the 85th percentile of the output distribution function calculated by probabilistic simulation. It was also indicated from the results of the sensitivity analysis that the front-end components were generally more sensitive than the back-end components, of which the uranium purchase cost was the most important factor of all. It also showed that the discount rate contributed substantially to the fuel cycle cost, ranking third to fifth among all components. The results of this study could be useful in applications to other options, such as the DUPIC (Direct Use of PWR spent fuel In CANDU reactors) cycle with high cost uncertainty.
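The comparison described, three-estimate (low/mode/high) cost inputs converted to distributions, sampled by Latin Hypercube, with the deterministic point estimate then located as a percentile of the simulated output, can be sketched as follows. The component names and cost figures are illustrative placeholders only, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n, d, rng):
    """n stratified samples in [0,1)^d: each dimension gets exactly one
    point per 1/n stratum, with strata independently permuted."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

def triangular_ppf(u, low, mode, high):
    """Inverse CDF of a triangular distribution built from three estimates."""
    fc = (mode - low) / (high - low)
    left = low + np.sqrt(u * (high - low) * (mode - low))
    right = high - np.sqrt((1 - u) * (high - low) * (high - mode))
    return np.where(u < fc, left, right)

# Hypothetical low/mode/high cost estimates per component ($/kgU, illustrative)
components = {"uranium purchase": (50, 80, 130), "conversion": (4, 6, 9),
              "enrichment": (80, 100, 140), "fabrication": (200, 250, 320)}

n = 10000
u = latin_hypercube(n, len(components), rng)
total = sum(triangular_ppf(u[:, j], *lmh)
            for j, lmh in enumerate(components.values()))

deterministic = sum(mode for (_, mode, _) in components.values())
percentile = 100 * np.mean(total < deterministic)
print(round(percentile, 1))  # the point estimate sits below the median
```

With right-skewed cost distributions the sum of the modes falls below the 50th percentile of the simulated total, the same qualitative effect the abstract reports for the deterministic estimate.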
Directory of Open Access Journals (Sweden)
Jing Bian
2016-01-01
In the era of big data, feature selection is an essential process in machine learning. Although the class imbalance problem has recently attracted a great deal of attention, little effort has been undertaken to develop feature selection techniques for it. In addition, most applications involving feature selection focus on classification accuracy but not cost, although costs are important. To cope with imbalance problems, we developed a cost-sensitive feature selection algorithm that adds a cost-based evaluation function to a filter feature selection using a chaos genetic algorithm, referred to as CSFSG. The evaluation function considers both feature-acquiring costs (test costs) and misclassification costs in the field of network security, thereby weakening the influence of many instances from the majority classes in large-scale datasets. The CSFSG algorithm reduces the total cost of feature selection and trades off both factors. The behavior of the CSFSG algorithm is tested on a large-scale dataset of network security, using two kinds of classifiers: C4.5 and k-nearest neighbor (KNN). The results of the experimental research show that the approach is efficient and able to effectively improve classification accuracy and to decrease classification time. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms.
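A minimal sketch of a cost-sensitive filter score in the spirit described, relevance penalized by feature-acquisition cost. The synthetic data, the test costs, the correlation-based relevance measure, and the tradeoff weight `lam` are all invented for illustration; this is not the CSFSG algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic imbalanced data: feature 0 is informative but expensive to acquire,
# feature 1 is mildly informative and cheap, feature 2 is pure noise.
n = 5000
y = (rng.random(n) < 0.1).astype(int)            # 10% minority class
X = np.column_stack([
    y + rng.normal(0, 0.5, n),                    # strong signal
    0.4 * y + rng.normal(0, 0.5, n),              # weak signal
    rng.normal(0, 1.0, n),                        # noise
])
test_cost = np.array([10.0, 1.0, 1.0])            # hypothetical acquisition costs

def cost_sensitive_score(X, y, test_cost, lam=0.05):
    """Filter score: class-separation relevance penalized by acquisition cost.
    Relevance is the absolute point-biserial correlation with the label."""
    rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return rel - lam * test_cost

scores = cost_sensitive_score(X, y, test_cost)
print(np.argsort(scores)[::-1])  # ranking: the cheap informative feature first
```

The penalty term makes the cheap, weakly informative feature outrank the expensive, strongly informative one, which is the essential tradeoff between test cost and predictive value that a cost-based evaluation function encodes.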
Regional and parametric sensitivity analysis of Sobol' indices
International Nuclear Information System (INIS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2015-01-01
Nowadays, utilizing the Monte Carlo estimators for variance-based sensitivity analysis has gained sufficient popularity in many research fields. These estimators are usually based on n+2 sample matrices well designed for computing both the main and total effect indices, where n is the input dimension. The aim of this paper is to use such n+2 sample matrices to investigate how the main and total effect indices change when the uncertainty of the model inputs is reduced. For this purpose, the regional main and total effect functions are defined for measuring the changes on the main and total effect indices when the distribution range of one input is reduced, and the parametric main and total effect functions are introduced to quantify the residual main and total effect indices due to the reduced variance of one input. Monte Carlo estimators are derived for all the developed sensitivity concepts based on the n+2 sample matrices originally used for computing the main and total effect indices, thus no extra computational cost is introduced. The Ishigami function, a nonlinear model and a planar ten-bar structure are utilized for illustrating the developed sensitivity concepts, and for demonstrating the efficiency and accuracy of the derived Monte Carlo estimators. - Highlights: • The regional main and total effect functions are developed. • The parametric main and total effect functions are introduced. • The proposed sensitivity functions are all generalizations of Sobol' indices. • The Monte Carlo estimators are derived for the four sensitivity functions. • The computational cost of the estimators is the same as that of Sobol' indices
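The n+2-matrix scheme referred to above uses two base sample matrices A and B plus one column-swapped matrix per input. It can be sketched on the Ishigami function, which the paper also uses; the estimator forms below are the common Saltelli/Jansen ones, chosen here as plausible stand-ins rather than the paper's exact expressions:

```python
import numpy as np

rng = np.random.default_rng(7)

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n, d = 100000, 3
# Two independent sample matrices A and B, plus d matrices AB_i in which
# column i of A is replaced by column i of B: d+2 matrices in total.
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

main, total = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    fABi = ishigami(ABi)
    main.append(np.mean(fB * (fABi - fA)) / var)       # Saltelli main-effect estimator
    total.append(0.5 * np.mean((fA - fABi)**2) / var)  # Jansen total-effect estimator

print([round(s, 3) for s in main], [round(s, 3) for s in total])
```

For the Ishigami function with a=7 and b=0.1 the analytic indices are S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0, and ST3 ≈ 0.244, so the Monte Carlo estimates should land near those values.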
PPICA, Power Plant Investment Cost Analysis
International Nuclear Information System (INIS)
Lefevre, J.C.
2002-01-01
1 - Description of program or function: This software package contains two modules: - CAPITAL1 calculates investment costs from overnight costs, based on the capital structure of the utility (debt/equity ratio), return and interest rates according to the type of securities involved, and a standard-shaped curve of capital outlays during construction of a power plant. - FCRATE1 calculates the year-by-year revenue requirements to cover the capital-related charges incurred by the new investment and their economic equivalent: the levelled fixed-charge rate and capital contribution to the levelled unit power generation cost per kWh. They are proposed as an alternative to the corresponding modules CAPITAL and FCRATE, included in the LPGC (Levelled Power Generation Cost) suite of codes developed by ORNL and US-DOE. They perform the same type of analysis and provide the same results. 2 - Methods: Results output from CAPITAL1, in terms of the initial investment at startup and the fraction thereof that is allowable for tax depreciation, can be transferred automatically as data input to FCRATE1. Other user-defined data are: the project life, the time horizon of the economic analysis (which does not necessarily coincide with the project life), the plant load factor (lifetime average), the tax rate applicable to utility's income, the tax depreciation scheme and the tax charge accounting method (normalised or flow- through). The results of CAPITAL1 and FCRATE1 are expressed both in current money and in constant money of a reference year. Inflation rate and escalation rate of construction expenditures during construction period, and of fixed charges during service life are defined by the user. The discount rate is set automatically by the programme, equal to the weighted average tax-adjusted cost of money. 3 - Restrictions on the complexity of the problem: CAPITAL1 and FCRATE1 are 'alternatives', not 'substitutes', to the corresponding programs CAPITAL and FCRATE of the LPGC
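The kind of calculation FCRATE1 performs, converting an initial investment into a levelled annual capital charge and then into a per-kWh contribution to the generation cost, can be sketched in its simplest textbook form. This is a sketch with a single discount rate, no taxes or inflation, and hypothetical plant figures, not the PPICA code's actual algorithm:

```python
def capital_recovery_factor(rate, years):
    """Annuity factor turning a present investment into equal annual charges."""
    return rate * (1 + rate)**years / ((1 + rate)**years - 1)

def levelized_capital_cost(investment, rate, years, capacity_kw, load_factor):
    """Capital contribution to the levelled generation cost, per kWh."""
    annual_charge = investment * capital_recovery_factor(rate, years)
    annual_kwh = capacity_kw * 8760 * load_factor
    return annual_charge / annual_kwh

# Hypothetical 1000 MW plant: $3e9 investment, 8% cost of money, 30-year life
c = levelized_capital_cost(3e9, 0.08, 30, 1_000_000, 0.8)
print(round(c, 4))  # levelled capital contribution in $/kWh
```

The levelled fixed-charge rate of the full model plays the role of the capital recovery factor here, augmented by taxes, depreciation schemes, and inflation.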
Design windows and cost analysis on helical reactors
International Nuclear Information System (INIS)
Kozaki, Y.; Imagawa, S.; Sagara, A.
2007-01-01
The LHD type helical reactors are characterized by a large major radius but slender helical coils, which allow different approaches to power plants from tokamak reactors. For searching design windows of helical reactors and discussing their potential as power plants, we have developed a mass-cost estimating model linked with a system design code (HeliCos), through studying the relationships between major plasma parameters, reactor parameters, and the weight of major components. In regard to cost data we have much experience through preparing ITER construction. To compare the weight and cost of magnet systems between tokamak and helical reactors, we broke down magnet systems and cost factors, such as weights of superconducting strands, conduits, support structures, and winding unit costs, using the ITER cost data basis. Based on the FFHR2m1 design we considered a typical 3 GWth helical plant (LHD type) with the same magnet size, coil major radius Rc 14 m, magnetic energy 120 GJ, but increasing plasma densities. We evaluated the weight and cost of the magnet systems of the 3 GWth helical plant as a total magnet weight of 16,000 tons and costs of 210 BYen, which are similar to the values for tokamak reactors (10,200 tons, 110 BYen in the ITER 2002 report, and 21,900 tons, 275 BYen in ITER FDR 1999). The costs of strands and winding occupy 70% of total magnet costs, and influence the economics of the entire power plant. The design windows analysis and comparative economics studies to optimize the main reactor parameters have been carried out. The economics studies show that it is a misunderstanding to consider helical coils too large and too expensive for power plants. But we should notice that the helical reactor design windows and economics are very sensitive to the allowable blanket space (which depends on ergodic layer conditions) and to the divertor configuration for decreasing heat loads. (orig.)
Cost analysis of Navy acquisition alternatives for the NAVSTAR Global Positioning System
Darcy, T. F.; Smith, G. P.
1982-12-01
This research analyzes the life cycle cost (LCC) of the Navy's current and two hypothetical procurement alternatives for NAVSTAR Global Positioning System (GPS) user equipment. Costs are derived by the ARINC Research Corporation ACBEN cost estimating system. Data presentation is in a comparative format describing individual alternative LCC and differential costs between alternatives. Sensitivity analysis explores the impact receiver-processor unit (RPU) first unit production cost has on individual alternative LCC, as well as cost differentials between each alternative. Several benefits are discussed that might provide sufficient cost savings and/or system effectiveness improvements to warrant a procurement strategy other than the existing proposal.
Jozaghi, Ehsan; Reid, Andrew A; Andresen, Martin A; Juneau, Alexandre
2014-08-04
Supervised injection facilities (SIFs) are venues where people who inject drugs (PWID) have access to a clean and medically supervised environment in which they can safely inject their own illicit drugs. There is currently only one legal SIF in North America: Insite in Vancouver, British Columbia, Canada. The responses and feedback generated by the evaluations of Insite in Vancouver have been overwhelmingly positive. This study assesses whether the above-mentioned facility in the Downtown Eastside of Vancouver should be expanded to other locations, specifically Canada's capital city, Ottawa. The current study aims to contribute to the existing literature on health policy by conducting cost-benefit and cost-effectiveness analyses for the opening of SIFs in Ottawa, Ontario. In particular, the costs of operating numerous SIFs in Ottawa were compared to the savings incurred, after accounting for the prevention of new HIV and hepatitis C (HCV) infections. To ensure accuracy, two distinct mathematical models and a sensitivity analysis were employed. The sensitivity analyses conducted with the models reveal the potential for SIFs in Ottawa to be a fiscally responsible harm reduction strategy for the prevention of HCV cases, even when considered independently. With a baseline sharing rate of 19%, the cumulative annual cost model supported the establishment of two SIFs and the marginal annual cost model supported the establishment of a single SIF. More often, the prevention of HIV or HCV alone was not sufficient to justify establishment on cost-effectiveness grounds; rather, only when both HIV and HCV are considered does sufficient economic support become apparent. Funded supervised injection facilities in Ottawa appear to be an efficient and effective use of financial resources in the public health domain.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
Directory of Open Access Journals (Sweden)
Gafurov Andrey
2018-01-01
The specific nature of high-rise investment projects, entailing long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the “Project analysis scenario” flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
Improvement of the cost-benefit analysis algorithm for high-rise construction projects
Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir
2018-03-01
Global sensitivity analysis using a Gaussian Radial Basis Function metamodel
International Nuclear Information System (INIS)
Wu, Zeping; Wang, Donghui; Okolo N, Patrick; Hu, Fan; Zhang, Weihua
2016-01-01
Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on response variables. Amongst the wide range of documented studies on sensitivity measures and analysis, Sobol' indices have received a greater share of attention because they can provide accurate information for most models. In this paper, a novel analytical expression for computing the Sobol' indices is derived by introducing a method which uses the Gaussian Radial Basis Function to build metamodels of computationally expensive computer codes. Performance of the proposed method is validated against various analytical functions and also a structural simulation scenario. Results demonstrate that the proposed method is an efficient approach, requiring a computational cost one to two orders of magnitude lower than the traditional Quasi Monte Carlo-based evaluation of Sobol' indices. - Highlights: • RBF based sensitivity analysis method is proposed. • Sobol' decomposition of Gaussian RBF metamodel is obtained. • Sobol' indices of Gaussian RBF metamodel are derived based on the decomposition. • The efficiency of proposed method is validated by some numerical examples.
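The surrogate-plus-Sobol' workflow the abstract describes can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: it fits a Gaussian RBF interpolant to a cheap test function (the kernel width, ridge term, and sample sizes are arbitrary choices), then estimates first-order Sobol' indices on the surrogate with a standard pick-freeze Monte Carlo estimator rather than the paper's analytical expression.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # stand-in "expensive" model
    return x[:, 0] + 2.0 * x[:, 1]          # analytic S1 = 0.2, S2 = 0.8

# Build the Gaussian RBF metamodel from a small training design.
Xt = rng.random((100, 2))
yt = f(Xt)
eps = 0.2                                   # kernel width (assumed)

def phi(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * eps ** 2))

w = np.linalg.solve(phi(Xt, Xt) + 1e-6 * np.eye(len(Xt)), yt)
surrogate = lambda X: phi(X, Xt) @ w

# Pick-freeze Monte Carlo estimate of first-order Sobol' indices on the
# surrogate (the paper derives them analytically instead).
N = 20000
A, B = rng.random((N, 2)), rng.random((N, 2))
yA, yB = surrogate(A), surrogate(B)
S = []
for i in range(2):
    C = B.copy()
    C[:, i] = A[:, i]                       # freeze coordinate i
    S.append(float((np.mean(yA * surrogate(C)) - yA.mean() * yB.mean())
                   / yA.var()))

print([round(s, 2) for s in S])
```

For the additive test function the analytic indices are S1 = 0.2 and S2 = 0.8, so the printed estimates give a quick sanity check on the surrogate.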
Least cost analysis of renewable energy projects
International Nuclear Information System (INIS)
Cosgrove-Davies, M.; Cabraal, A.
1994-01-01
This paper describes a methodology for evaluating dispersed and centralized rural energy options on a least-cost basis. In defining the load to be served, each supply alternative must provide equivalent levels of service. The village to be served is defined by the number of loads, load density, distance from the nearest power distribution line, and load growth. Appropriate rural energy alternatives are identified and sized to satisfy the defined load. Lastly, a net present value analysis (including capital, installation, O and M, fuel, and replacement costs, etc.) is performed to identify the least-cost option. A spreadsheet-based analytical tool developed by the World Bank's Asia Alternative Energy Unit (ASTAE) incorporates this approach and has been applied to compare photovoltaic solar home systems with other rural energy supply options in Indonesia. Load size and load density are found to be the critical factors in choosing between grid and off-grid solutions.
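The least-cost screening described above reduces to comparing the net present cost of each supply alternative. A minimal sketch, with all prices, quantities, and lifetimes invented for illustration (they are not the World Bank tool's data):

```python
# Hypothetical comparison: grid extension versus solar home systems (SHS).

def npv_cost(capex, annual_om, replacements, years=20, rate=0.10):
    """Net present value of a cost stream: capex now, O&M each year,
    plus (year, cost) replacement outlays."""
    pv = capex + sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
    pv += sum(c / (1 + rate) ** t for t, c in replacements)
    return pv

households, km_to_grid = 150, 12
grid = npv_cost(capex=9000 * km_to_grid + 120 * households,   # line + drops
                annual_om=2500, replacements=[])
shs = npv_cost(capex=600 * households,                        # one SHS per home
               annual_om=15 * households,
               replacements=[(7, 80 * households),            # battery swaps
                             (14, 80 * households)])

best = "grid" if grid < shs else "solar home systems"
print(f"grid ~ {grid:,.0f}, SHS ~ {shs:,.0f} -> least cost: {best}")
```

Shrinking `km_to_grid` or raising `households` (i.e., load density) tips the answer back toward the grid, which mirrors the paper's finding that load size and density are the decisive factors.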
Analysis of nuclear-power construction costs
International Nuclear Information System (INIS)
Jansma, G.L.; Borcherding, J.D.
1988-01-01
This paper discusses the use of regression analysis for estimating construction costs. The estimate is based on an historical data base and quantification of key factors considered external to project management. This technique is not intended as a replacement for detailed cost estimates but can provide information useful to the cost-estimating process and to top management interested in evaluating project management. The focus of this paper is the nuclear-power construction industry but the technique is applicable beyond this example. The approach and critical assumptions are also useful in a public-policy situation where utility commissions are evaluating construction in prudence reviews and making comparisons to other nuclear projects. 13 references, 2 figures
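The regression technique described here can be illustrated with ordinary least squares on synthetic "historical" data; the factors, units, and coefficients below are invented for the demo, not taken from the paper's data base.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
size_mw   = rng.uniform(600, 1300, n)       # plant size
duration  = rng.uniform(6, 14, n)           # construction years
reg_index = rng.uniform(0, 1, n)            # regulatory-climate proxy

# "True" relationship used to generate the synthetic historical data base.
cost = (400 + 1.2 * size_mw + 180 * duration + 900 * reg_index
        + rng.normal(0, 50, n))             # noise term

# Fit cost against factors treated as external to project management.
X = np.column_stack([np.ones(n), size_mw, duration, reg_index])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print("estimated coefficients:", np.round(beta, 1))
```

As the paper notes, such a fit is not a substitute for a detailed estimate; its value is in benchmarking a given project against the historical pattern, e.g. in prudence reviews.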
Making choices in health: WHO guide to cost effectiveness analysis
National Research Council Canada - National Science Library
Tan Torres Edejer, Tessa
2003-01-01
Table of contents (excerpt): ... 6. Uncertainty in cost-effectiveness analysis (p. 73) ... Policy uses of Generalized CEA ...
Sensitivity analysis of a modified energy model
International Nuclear Information System (INIS)
Suganthi, L.; Jagadeesan, T.R.
1997-01-01
Sensitivity analysis is carried out to validate model formulation. A modified model has been developed to predict the future energy requirement of coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model as compared to a conventional time series model indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. However, in the case of the modified model the percentage variation in error is greatly reduced - for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6% and for electricity from -10% to +10%. The upper and lower limit consumption levels at 95% confidence are determined. The consumption at varying percentage changes in price and population is analysed. The gap between the modified model predictions at varying percentage changes in price and population over the years from 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and also because the confidence level decreases as the projection is made further into the future. (author)
Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I
National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...
Evaluation of pavement life cycle cost analysis: Review and analysis
Directory of Open Access Journals (Sweden)
Peyman Babashamsi
2016-07-01
The cost of road construction consists of design expenses, material extraction, construction equipment, maintenance and rehabilitation strategies, and operations over the entire service life. An economic analysis process known as Life-Cycle Cost Analysis (LCCA) is used to evaluate the cost-efficiency of alternatives based on the Net Present Value (NPV) concept. It is essential to evaluate the above-mentioned cost aspects in order to obtain optimum pavement life-cycle costs. However, pavement managers are often unable to consider every important element that may be required for performing future maintenance tasks. Over the last few decades, several approaches to pavement LCCA have been developed by agencies and institutions. While the transportation community has increasingly been utilising LCCA as an essential practice, several organisations have even designed computer programs for their LCCA approaches in order to assist with the analysis. Current LCCA methods are analysed and LCCA software is introduced in this article. Subsequently, a list of economic indicators is provided along with their substantial components. Reviewing the previous literature helps highlight the weakest aspects and thereby mitigate the shortcomings of existing LCCA methods and processes. LCCA research will become more robust if such improvements are made, enabling private industries and government agencies to accomplish their economic aims. Keywords: Life-Cycle Cost Analysis (LCCA), Pavement management, LCCA software, Net Present Value (NPV)
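At the core of every LCCA method surveyed here is an NPV calculation over each alternative's maintenance and rehabilitation schedule. A minimal sketch with hypothetical costs and timings, scanning a few discount rates, to which LCCA results are known to be sensitive:

```python
# Illustrative only: costs, timings, and rates are invented; a real LCCA
# would also include user and salvage costs over a common analysis period.

def npv(initial, activities, rate):
    """Present value of initial construction plus (year, cost) M&R outlays."""
    return initial + sum(c / (1 + rate) ** t for t, c in activities)

flexible = dict(initial=1000, activities=[(10, 250), (20, 250)])   # overlays
rigid    = dict(initial=1400, activities=[(25, 150)])              # joint repair

for rate in (0.03, 0.05, 0.07):          # sensitivity to the discount rate
    f = npv(flexible["initial"], flexible["activities"], rate)
    r = npv(rigid["initial"], rigid["activities"], rate)
    print(f"rate {rate:.0%}: flexible {f:7.1f}  rigid {r:7.1f}")
```

Higher discount rates shrink the present value of deferred rehabilitation, which is why rankings between low-capital and high-capital alternatives can flip with the rate assumed.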
Final Report: Hydrogen Storage System Cost Analysis
Energy Technology Data Exchange (ETDEWEB)
James, Brian David [Strategic Analysis Inc., Arlington, VA (United States); Houchins, Cassidy [Strategic Analysis Inc., Arlington, VA (United States); Huya-Kouadio, Jennie Moton [Strategic Analysis Inc., Arlington, VA (United States); DeSantis, Daniel A. [Strategic Analysis Inc., Arlington, VA (United States)
2016-09-30
The Fuel Cell Technologies Office (FCTO) has identified hydrogen storage as a key enabling technology for advancing hydrogen and fuel cell power technologies in transportation, stationary, and portable applications. Consequently, FCTO has established targets to chart the progress of developing and demonstrating viable hydrogen storage technologies for transportation and stationary applications. This cost assessment project supports the overall FCTO goals by identifying the current technology system components, performance levels, and manufacturing/assembly techniques most likely to lead to the lowest system storage cost. Furthermore, the project forecasts the cost of these systems at a variety of annual manufacturing rates to allow comparison to the overall 2017 and “Ultimate” DOE cost targets. The cost breakdown of the system components and manufacturing steps can then be used to guide future research and development (R&D) decisions. The project was led by Strategic Analysis Inc. (SA) and aided by Rajesh Ahluwalia and Thanh Hua from Argonne National Laboratory (ANL) and Lin Simpson at the National Renewable Energy Laboratory (NREL). Since SA coordinated the project activities of all three organizations, this report includes a technical description of all project activity. This report represents a summary of contract activities and findings under SA’s five year contract to the US Department of Energy (Award No. DE-EE0005253) and constitutes the “Final Scientific Report” deliverable. Project publications and presentations are listed in the Appendix.
A Nuclear Waste Management Cost Model for Policy Analysis
Barron, R. W.; Hill, M. C.
2017-12-01
Although integrated assessments of climate change policy have frequently identified nuclear energy as a promising alternative to fossil fuels, these studies have often treated nuclear waste disposal very simply. Simple assumptions about nuclear waste are problematic because they may not be adequate to capture relevant costs and uncertainties, which could result in suboptimal policy choices. Modeling nuclear waste management costs is a cross-disciplinary, multi-scale problem that involves economic, geologic and environmental processes that operate at vastly different temporal scales. Similarly, the climate-related costs and benefits of nuclear energy are dependent on environmental sensitivity to CO2 emissions and radiation, nuclear energy's ability to offset carbon emissions, and the risk of nuclear accidents, factors which are all deeply uncertain. Alternative value systems further complicate the problem by suggesting different approaches to valuing intergenerational impacts. Effective policy assessment of nuclear energy requires an integrated approach to modeling nuclear waste management that (1) bridges disciplinary and temporal gaps, (2) supports an iterative, adaptive process that responds to evolving understandings of uncertainties, and (3) supports a broad range of value systems. This work develops the Nuclear Waste Management Cost Model (NWMCM). NWMCM provides a flexible framework for evaluating the cost of nuclear waste management across a range of technology pathways and value systems. We illustrate how NWMCM can support policy analysis by estimating how different nuclear waste disposal scenarios developed using the NWMCM framework affect the results of a recent integrated assessment study of alternative energy futures and their effects on the cost of achieving carbon abatement targets. Results suggest that the optimism reflected in previous works is fragile: Plausible nuclear waste management costs and discount rates appropriate for intergenerational cost
Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period
Mcmahon, James E.
2000-01-01
The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of life-cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs and the assessed probability associated with each value. The results of the uncertainty and sensitivity analyses will help analysts reduce effort in data collection and carry on a...
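The idea of replacing point estimates with distributions can be sketched with a simple Monte Carlo: sample the uncertain inputs, propagate them through the life-cycle model, and read off percentiles instead of a single number. All figures below are invented for illustration, not the paper's ballast data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
price_premium = rng.normal(6.0, 1.0, n)          # $ extra for efficient ballast
watts_saved   = rng.normal(8.0, 1.5, n)          # W saved versus standard
hours_per_yr  = rng.triangular(2000, 3000, 4000, n)
elec_price    = rng.normal(0.10, 0.02, n)        # $/kWh

annual_savings = watts_saved / 1000 * hours_per_yr * elec_price
payback = price_premium / annual_savings          # simple payback, years

lo, med, hi = np.percentile(payback, [5, 50, 95])
print(f"payback years: 5% {lo:.1f}, median {med:.1f}, 95% {hi:.1f}")
```

The spread between the 5th and 95th percentiles is exactly the kind of information a point estimate hides, and a sensitivity ranking of the four inputs tells the analyst where tighter data collection pays off.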
ADGEN: a system for automated sensitivity analysis of predictive models
International Nuclear Information System (INIS)
Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.
1987-01-01
A system that can automatically enhance computer codes with a sensitivity calculation capability is presented. With this new system, named ADGEN, rapid and cost-effective calculation of sensitivities can be performed in any FORTRAN code for all input data or parameters. The resulting sensitivities can be used in performance assessment studies related to licensing or interactions with the public to systematically and quantitatively prove the relative importance of each of the system parameters in calculating the final performance results. A general procedure calling for the systematic use of sensitivities in assessment studies is presented. The procedure can be used in modeling and model validation studies to avoid over-modeling, in site characterization planning to avoid over-collection of data, and in performance assessments to determine the uncertainties on the final calculated results. The added capability to formally perform the inverse problem, i.e., to determine the input data or parameters on which to focus additional research or analysis effort in order to improve the uncertainty of the final results, is also discussed. 7 references, 2 figures
ADGEN: a system for automated sensitivity analysis of predictive models
International Nuclear Information System (INIS)
Pin, F.G.; Horwedel, J.E.; Oblow, E.M.; Lucius, J.L.
1986-09-01
International Nuclear Information System (INIS)
Basta, C.; Olive, W.J.; Antunes, J.S.
1990-01-01
An analysis of the cost of each component of small hydroelectric power plants, taking into account the real costs of these projects, is presented. A global equation allowing a preliminary cost estimate for each construction is also presented. (author)
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights into how robust the biological responses are with respect to changes in biological parameters, and into which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.
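Local sensitivity analysis as described above is often reported as normalized sensitivity coefficients estimated by finite differences. A minimal sketch on a Michaelis-Menten rate law (the parameter values are arbitrary, and the helper is a generic illustration rather than any particular tool's API):

```python
# Normalized local sensitivity of v = Vmax*S/(Km + S) via central differences.

def v(Vmax, Km, S=2.0):
    return Vmax * S / (Km + S)

def norm_sens(f, p, name, h=1e-6, **kw):
    """(dy/dp) * (p/y), estimated by a central finite difference."""
    up, dn = dict(kw), dict(kw)
    up[name] = p * (1 + h)
    dn[name] = p * (1 - h)
    dydp = (f(**up) - f(**dn)) / (2 * p * h)
    return dydp * p / f(**{**kw, name: p})

print(norm_sens(v, 10.0, "Vmax", Km=0.5))   # analytically exactly 1
print(norm_sens(v, 0.5, "Km", Vmax=10.0))   # analytically -Km/(Km+S) = -0.2
```

Because these coefficients are evaluated at one operating point, they capture only small-perturbation behaviour; the global methods the review discusses are needed when parameters vary widely.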
A new importance measure for sensitivity analysis
International Nuclear Information System (INIS)
Liu, Qiao; Homma, Toshimitsu
2010-01-01
Uncertainty is an integral part of risk assessment of complex engineering systems, such as nuclear power plants and spacecraft. The aim of sensitivity analysis is to identify the contribution of the uncertainty in model inputs to the uncertainty in the model output. In this study, a new importance measure that characterizes the influence of the entire input distribution on the entire output distribution is proposed. It represents the expected deviation of the cumulative distribution function (CDF) of the model output that would be obtained if one input parameter of interest were known. The applicability of this importance measure was tested with two models, a nonlinear nonmonotonic mathematical model and a risk model. In addition, a comparison of this new importance measure with several other importance measures was carried out and the differences between these measures were explained. (author)
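The proposed quantity, the expected deviation of the output CDF when one input is fixed, can be approximated by brute-force Monte Carlo. The sketch below is a crude illustration of the idea, not the authors' estimator; the toy model and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):                       # toy model: x2 dominates the output
    return x[:, 0] + 3.0 * x[:, 1]

def ecdf(sample, grid):
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

N, M = 20000, 30
X = rng.random((N, 2))
y = model(X)
grid = np.linspace(y.min(), y.max(), 200)
F = ecdf(y, grid)                   # unconditional output CDF

def importance(i):
    """Mean absolute CDF deviation, averaged over fixed values of X_i."""
    devs = []
    for xi in rng.random(M):
        Xc = rng.random((N, 2))
        Xc[:, i] = xi               # pretend X_i is known
        devs.append(np.abs(ecdf(model(Xc), grid) - F).mean())
    return float(np.mean(devs))

imp = [importance(0), importance(1)]
print([round(val, 3) for val in imp])
```

Since learning the dominant input shifts the output distribution more, the measure for the second input should come out larger, the qualitative behaviour the paper exploits for ranking.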
DEA Sensitivity Analysis for Parallel Production Systems
Directory of Open Access Journals (Sweden)
J. Gerami
2011-06-01
In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel. Meanwhile, each subunit works independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and create the production possibility set (PPS) produced by these DMUs, in which the frontier points are considered efficient DMUs. Then we introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.
Sensitivity of SBLOCA analysis to model nodalization
International Nuclear Information System (INIS)
Lee, C.; Ito, T.; Abramson, P.B.
1983-01-01
The recent Semiscale test S-UT-8 indicates the possibility for primary liquid to hang up in the steam generators during a SBLOCA, permitting core uncovery prior to loop-seal clearance. In analysis of Small Break Loss of Coolant Accidents with RELAP5, it is found that resultant transient behavior is quite sensitive to the selection of nodalization for the steam generators. Although global parameters such as integrated mass loss, primary inventory and primary pressure are relatively insensitive to the nodalization, it is found that the predicted distribution of inventory around the primary is significantly affected by nodalization. More detailed nodalization predicts that more of the inventory tends to remain in the steam generators, resulting in less inventory in the reactor vessel and therefore causing earlier and more severe core uncovery
Subset simulation for structural reliability sensitivity analysis
International Nuclear Information System (INIS)
Song Shufang; Lu Zhenzhou; Qiao Hongwei
2009-01-01
Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. On the basis of the reliability analysis of Subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of the basic variable is transformed into a set of RS of conditional failure probabilities with respect to the distribution parameter of the basic variable. By use of the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae of the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and structural systems with single and multiple failure modes
Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors
Directory of Open Access Journals (Sweden)
Hong Zhao
2013-01-01
Feature selection is an essential process in data mining applications since it reduces a model's complexity. However, feature selection with various types of costs is still a new research topic. In this paper, we study the cost-sensitive feature selection problem for numeric data with measurement errors. The major contributions of this paper are fourfold. First, a new data model is built to address test costs and misclassification costs as well as error boundaries. It is distinguished from existing models mainly by the error boundaries. Second, a covering-based rough set model with normal-distribution measurement errors is constructed. With this model, coverings are constructed from data rather than assigned by users. Third, a new cost-sensitive feature selection problem is defined on this model. It is more realistic than the existing feature selection problems. Fourth, both backtracking and heuristic algorithms are proposed to deal with the new problem. Experimental results show the efficiency of the pruning techniques for the backtracking algorithm and the effectiveness of the heuristic algorithm. This study is a step toward realistic applications of cost-sensitive learning.
Integrated analysis considered mitigation cost, damage cost and adaptation cost in Northeast Asia
Park, J. H.; Lee, D. K.; Kim, H. G.; Sung, S.; Jung, T. Y.
2015-12-01
Various studies show that climate change raises temperatures and intensifies storms, cold snaps, heavy rainfall and drought, and that these disasters have inflicted damage on mankind. The World Risk Report (2012, The Nature Conservancy) and UNU-EHS (the United Nations University Institute for Environment and Human Security) reported that more and more people are exposed to abnormal weather such as floods, drought, earthquakes, typhoons and hurricanes around the world. In particular, in the case of Korea, we are influenced by various pollutants originating in the Northeast Asian countries China and Japan, owing to geographical and meteorological characteristics. These contaminants, together with the pollutants generated in Korea, have had a significant impact on air quality. Recently, countries around the world have continued their efforts to reduce greenhouse gases and improve air quality in line with national or regional development priorities. China is also making various efforts, in accordance with international trends, to cope with climate change and air pollution. In the future, the effects of climate change and air quality in Korea and Northeast Asia will change greatly according to China's growth and mitigation policies. The purpose of this study is to minimize the damage caused by climate change on the Korean peninsula through an integrated approach taking into account both mitigation and adaptation plans. This study suggests a climate change strategy at the national level by means of a comprehensive economic analysis of the impacts and mitigation of climate change. In order to quantify the impact and damage cost caused by climate change scenarios at a regional scale, priority variables should be selected in accordance with climate change impact assessment. The sectoral impact assessment was carried out on the basis of the selected variables and, through this, a methodology for estimating damage cost and adaptation cost was derived. The methodology was then applied in Korea.
PLACE OF PRODUCTION COSTS SYSTEM ANALYSIS IN SYSTEM ANALYSIS
Directory of Open Access Journals (Sweden)
Mariia CHEREDNYCHENKO
2016-12-01
Current economic conditions require the development and implementation of an adequate production cost management system, one that would ensure steady growth of profit and production volumes under intense competition and constantly increasing input prices and tariffs. This management system must be based on an integrated production costs system analysis (PCSA), which would provide all operating cost management subsystems with the information necessary to design and make better management decisions. A systems approach gives the analysis greater cognitive capability, creating the conditions for an integral mechanism of knowledge about the object, consisting of elements that exhibit intersystem connections, each with its own defined and limited objectives and its own relationship with the environment.
Food Irradiation Update and Cost Analysis
1991-11-01
Natick). Significant contributions were made by Dr. Irwin Taub and Mr. Christopher Rees of the Technology Acquisition Division, Food Engineering...stability. Food Irradiation Update and Cost Analysis. I. Introduction. In the book The Physiology of Taste (1825), one of the pioneers of gastronomy ...review of the utility that radiation-preserved foods might offer the military food service system. To date, this technology has seen limited use in the
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated with various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
Michaels-Igbokwe, Christine; Abramsky, Tanya; Devries, Karen; Michau, Lori; Musuya, Tina; Watts, Charlotte
2016-02-29
Intimate partner violence (IPV) poses a major public health concern. To date there are few rigorous economic evaluations of interventions aimed at preventing IPV in low-income settings. This study provides a cost and cost effectiveness analysis of SASA!, a community mobilisation intervention to change social norms and prevent IPV. An economic evaluation alongside a cluster randomised controlled trial. Both financial and economic costs were collected retrospectively from the provider's perspective to generate total and unit cost estimates over four years of intervention programming. Univariate sensitivity analysis is conducted to estimate the impact of uncertainty in cost and outcome measures on results. The total cost of developing the SASA! Activist Kit is estimated as US$138,598. Total intervention costs over four years are estimated as US$553,252. The annual cost of supporting 351 activists to conduct SASA! activities was approximately US$389 per activist and the average cost per person reached in intervention communities was US$21 over the full course of the intervention, or US$5 annually. The primary trial outcome was past year experience of physical IPV with an estimated 1201 cases averted (90% CI: 97-2307 cases averted). The estimated cost per case of past year IPV averted was US$460. This study provides the first economic evaluation of a community mobilisation intervention aimed at preventing IPV. SASA! unit costs compare favourably with gender transformative interventions and support services for survivors of IPV. ClinicalTrials.gov # NCT00790959.
Diagnostic staging laparoscopy in gastric cancer treatment: A cost-effectiveness analysis.
Li, Kevin; Cannon, John G D; Jiang, Sam Y; Sambare, Tanmaya D; Owens, Douglas K; Bendavid, Eran; Poultsides, George A
2018-05-01
Accurate preoperative staging helps avert morbidity, mortality, and cost associated with non-therapeutic laparotomy in gastric cancer (GC) patients. Diagnostic staging laparoscopy (DSL) can detect metastases with high sensitivity, but its cost-effectiveness has not been previously studied. We developed a decision analysis model to assess the cost-effectiveness of preoperative DSL in GC workup. Analysis was based on a hypothetical cohort of GC patients in the U.S. for whom initial imaging shows no metastases. The cost-effectiveness of DSL was measured as cost per quality-adjusted life-year (QALY) gained. Drivers of cost-effectiveness were assessed in sensitivity analysis. Preoperative DSL required an investment of $107 012 per QALY. In sensitivity analysis, DSL became cost-effective at a threshold of $100 000/QALY when the probability of occult metastases exceeded 31.5% or when test sensitivity for metastases exceeded 86.3%. The likelihood of cost-effectiveness increased from 46% to 93% when both parameters were set at maximum reported values. The cost-effectiveness of DSL for GC patients is highly dependent on patient and test characteristics, and is more likely when DSL is used selectively where procedure yield is high, such as for locally advanced disease or in detecting peritoneal and superficial versus deep liver lesions. © 2017 Wiley Periodicals, Inc.
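The headline cost-per-QALY figure is an incremental cost-effectiveness ratio (ICER): extra cost divided by extra QALYs relative to the comparator, judged against a willingness-to-pay threshold such as $100 000/QALY. A minimal sketch with made-up inputs (not the study's cohort model):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Made-up inputs: the new strategy adds $2,500 in cost and 0.025 QALYs
# per patient relative to the comparator.
ratio = icer(52_500, 50_000, 10.025, 10.000)
print(f"${ratio:,.0f} per QALY")    # at or below a $100,000/QALY threshold
```

Sensitivity analysis of the kind described then amounts to re-evaluating this ratio while sweeping inputs (e.g., probability of occult metastases, test sensitivity) and noting where it crosses the threshold.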
A cost analysis of Colorado's 1991-92 oxygenated fuels program
International Nuclear Information System (INIS)
Manderino, L.A.; Bowles, S.L.
1993-01-01
This paper discusses the methodology used to conduct a cost analysis of Colorado's 1991-92 Oxygenated Fuels Program. This program requires the use of oxygenated fuels during the winter season in Denver and surrounding areas. The cost analysis was conducted as part of an overall cost-effectiveness study of the 1991-92 program conducted by PRC Environmental Management, Inc. (PRC). The paper, however, focuses on cost analysis and does not consider potential benefits of the program. The study analyzed costs incurred by different segments of society, including government, industry, and consumers. Because the analysis focused on a specific program year, neither past nor future costs were studied. The discussion of government costs includes the agencies interviewed and the types of costs associated with government administration and enforcement of the program. The methodology used to calculate costs to private industry is also presented. The study examined the costs to fuel refineries, pipelines, and blenders, as well as fuel retailers and automobile fleet operators. Finally, the paper discusses the potential costs incurred by consumers purchasing oxygenated fuels. Costs associated with issues such as vehicle driveability, automobile parts durability and performance, and fuel economy are also examined. A summary of all costs by category is presented along with an analysis of the major cost components. These include costs which are sensitive to specific circumstances and which may vary among programs.
Calibration, validation, and sensitivity analysis: What's what
International Nuclear Information System (INIS)
Trucano, T.G.; Swiler, L.P.; Igusa, T.; Oberkampf, W.L.; Pilch, M.
2006-01-01
One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a 'model discrepancy' term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty
Global sensitivity analysis in wind energy assessment
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify the application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present
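The propagation step described here, pushing uncertain Weibull wind parameters through to lifetime energy production, can be sketched with plain Monte Carlo. The turbine power curve and all parameter values below are illustrative assumptions, not the Masdar City inputs:

```python
import numpy as np

def lifetime_energy(k, c, rated_kw=3000.0, cut_in=3.0, rated_v=12.0,
                    cut_out=25.0, hours=20 * 8760, n=200_000, seed=1):
    """Monte Carlo estimate of lifetime energy (kWh) for a turbine with a
    simple cubic power curve, given wind speed ~ Weibull(shape k, scale c)."""
    rng = np.random.default_rng(seed)
    v = c * rng.weibull(k, n)                      # wind-speed samples (m/s)
    p = np.where((v >= cut_in) & (v < rated_v),
                 rated_kw * (v / rated_v) ** 3, 0.0)
    p = np.where((v >= rated_v) & (v < cut_out), rated_kw, p)
    return p.mean() * hours

# One-at-a-time variation of the Weibull scale parameter:
for c in (7.0, 8.0, 9.0):
    print(f"scale c={c} m/s: {lifetime_energy(k=2.0, c=c):.3e} kWh")
```

A variance-based SA would wrap this function in a sampling scheme (Sobol' sequences, LHS, or pseudo-random, as compared in the study) over all uncertain inputs rather than varying one at a time.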
Frontier Assignment for Sensitivity Analysis of Data Envelopment Analysis
Naito, Akio; Aoki, Shingo; Tsuji, Hiroshi
To extend the sensitivity analysis capability of DEA (Data Envelopment Analysis), this paper proposes frontier assignment based DEA (FA-DEA). The basic idea of FA-DEA is to allow a decision maker to decide the frontier intentionally, while the traditional DEA and Super-DEA decide the frontier computationally. The features of FA-DEA are as follows: (1) it provides chances to exclude an extra-influential DMU (Decision Making Unit) and to find an extra-ordinal DMU, and (2) it includes the functions of the traditional DEA and Super-DEA, so that it is able to deal with sensitivity analysis more flexibly. A simple numerical study has shown the effectiveness of the proposed FA-DEA and its difference from the traditional DEA.
Stochastic cost estimating in repository life-cycle cost analysis
International Nuclear Information System (INIS)
Tzemos, S.; Dippold, D.
1986-01-01
The conceptual development, the design, and the final construction and operation of a nuclear repository span many decades. Given this lengthy time frame, it is quite challenging to obtain a good approximation of the repository life-cycle cost. One can deal with this challenge by using an analytic method, the method of moments, to explicitly assess the uncertainty of the estimate. A series expansion is used to approximate the uncertainty distribution of the cost estimate. In this paper, the moment methodology is derived and illustrated through a numerical example, and the range of validity of the approximation is discussed. The method of moments is compared to the traditional stochastic cost estimating methods and found to provide more and better information on cost uncertainty. The two methods converge to identical results as the number of convolved variables increases and approaches the range where the central limit theorem is valid.
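For the special case of a total cost that is a sum of independent components, the method of moments is exact for the first two moments: means add and variances add, which a Monte Carlo convolution confirms. A minimal sketch with illustrative numbers (not the repository cost data, and not the paper's series expansion for general functions):

```python
import numpy as np

# Independent cost components as (mean, std) in $M -- illustrative numbers.
components = [(120.0, 15.0), (450.0, 60.0), (80.0, 10.0)]

# Method of moments for a sum: means add, variances add.
mean_mm = sum(m for m, _ in components)
std_mm = sum(s**2 for _, s in components) ** 0.5

# Monte Carlo convolution of the components as a cross-check.
rng = np.random.default_rng(0)
total = sum(rng.normal(m, s, 500_000) for m, s in components)
print(f"moments:     mean={mean_mm:.1f}, std={std_mm:.2f}")
print(f"monte carlo: mean={total.mean():.1f}, std={total.std():.2f}")
```

As the number of convolved components grows, the total's distribution also approaches normality (the central limit theorem behavior the abstract mentions), so the two moments increasingly characterize the whole distribution.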
Analysis of nuclear power plant construction costs
International Nuclear Information System (INIS)
1986-01-01
The objective of this report is to present the results of a statistical analysis of nuclear power plant construction costs and lead-times (where lead-time is defined as the duration of the construction period), using a sample of units that entered construction during the 1966-1977 period. For more than a decade, analysts have been attempting to understand the reasons for the divergence between predicted and actual construction costs and lead-times. More importantly, it is rapidly being recognized that the future of the nuclear power industry rests precariously on an improvement in the cost and lead-time situation. Thus, it is important to study the historical information on completed plants, not only to understand what has occurred but also to improve the ability to evaluate the economics of future plants. This requires an examination of the factors that have affected both the realized costs and lead-times and the expectations about these factors that have been formed during the construction process. 5 figs., 22 tabs
Analysis of nuclear power plant construction costs
Energy Technology Data Exchange (ETDEWEB)
1986-01-01
The objective of this report is to present the results of a statistical analysis of nuclear power plant construction costs and lead-times (where lead-time is defined as the duration of the construction period), using a sample of units that entered construction during the 1966-1977 period. For more than a decade, analysts have been attempting to understand the reasons for the divergence between predicted and actual construction costs and lead-times. More importantly, it is rapidly being recognized that the future of the nuclear power industry rests precariously on an improvement in the cost and lead-time situation. Thus, it is important to study the historical information on completed plants, not only to understand what has occurred but also to improve the ability to evaluate the economics of future plants. This requires an examination of the factors that have affected both the realized costs and lead-times and the expectations about these factors that have been formed during the construction process. 5 figs., 22 tabs.
On the role of cost-sensitive learning in multi-class brain-computer interfaces.
Devlaminck, Dieter; Waegeman, Willem; Wyns, Bart; Otte, Georges; Santens, Patrick
2010-06-01
Brain-computer interfaces (BCIs) present an alternative way of communication for people with severe disabilities. One of the shortcomings in current BCI systems, recently put forward in the fourth BCI competition, is the asynchronous detection of motor imagery versus resting state. We investigated this extension to the three-class case, in which the resting state is considered virtually lying between two motor classes, resulting in a large penalty when one motor task is misclassified into the other motor class. We particularly focus on the behavior of different machine-learning techniques and on the role of multi-class cost-sensitive learning in such a context. To this end, four different kernel methods are empirically compared, namely pairwise multi-class support vector machines (SVMs), two cost-sensitive multi-class SVMs and kernel-based ordinal regression. The experimental results illustrate that ordinal regression performs better than the other three approaches when a cost-sensitive performance measure such as the mean-squared error is considered. By contrast, multi-class cost-sensitive learning enables us to control the number of large errors made between two motor tasks.
Sensitivity analysis of Smith's AMRV model
International Nuclear Information System (INIS)
Ho, Chih-Hsiang
1995-01-01
Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge, and the model using the prior Beta(8, 2) yields the highest hazard (2.97 x 10-2). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10-3, 10-2, and 10-1). The estimate of the hazard is 1.39 x 10-3, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years
Cost benefit analysis for occupational radiation exposure
International Nuclear Information System (INIS)
Caruthers, G.F.; Rodgers, R.C.; Donohue, J.P.; Swartz, H.M.
1978-01-01
In the course of system design, many decisions must be made concerning different aspects of that particular system. The design of systems and components in a nuclear power plant has the added factor of the occupational exposure that will be experienced as a result of that design. This paper will deal with the different methods available to factor occupational exposure into design decisions. The ultimate goal is to have exposures related to the design 'As Low As Reasonably Achievable' or ALARA. To do this an analysis should be performed to show that the cost of reducing exposures any further cannot be justified in a cost-benefit analysis. In this paper examples will be given showing that it is possible to change to a design which would somewhat increase occupational exposure but whose benefit would outweigh the cost of the extra exposure received. It will also be shown that some changes in design or additional equipment can be justified by a reduction in exposure, while other changes cannot be justified on exposure reduction alone but are justified by time savings, such as during a refueling outage. (author)
Cost analysis and provider satisfaction with pediatrician in triage.
Kezirian, Janice; Muhammad, Warees T; Wan, Jim Y; Godambe, Sandip A; Pershad, Jay
2012-10-01
The goals of this study were to (1) conduct a cost-benefit analysis, from a hospital's perspective, of using a pediatrician in triage (PIT) in the emergency department (ED) and (2) assess the impact of a physician in triage on provider satisfaction. This was a prospective, controlled trial of PIT (intervention) versus conventional registered nurse-driven triage (control) at an urban, academic, tertiary-level pediatric ED; the cost-benefit analysis examined the effect that PIT has on length of stay (LOS) and thus on ED revenue. Provider satisfaction was assessed through surveys. During the 8-week study period, a total of 6579 patients were triaged: 3242 in the PIT group and 3337 in the control group. The 2 groups were similar in age, sex, admission rate, left-without-being-seen rate, and level of acuity. The mean LOS in the PIT group was 24.3 minutes shorter than in the control group. The added costs of PIT were not offset by savings: the net margin (total revenue minus costs) was $42,883 per year lower in the PIT group than in the control group. Sensitivity analysis showed that if the LOS were reduced by more than 98.4 minutes, the cost savings would favor PIT. Most of the physicians and nurses (67%) reported that PIT facilitated their job. Placement of a PIT during periods of peak census resulted in a shorter stay and notable provider satisfaction, but at an incremental cost of $42,883 per year.
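The abstract's two anchor points, a net margin of -$42,883/yr at the observed 24.3-minute LOS reduction and breakeven at a 98.4-minute reduction, determine a linear one-way sensitivity model. A sketch of that back-calculation (the linearity assumption and the derived per-minute rate are ours, not figures from the paper):

```python
# One-way sensitivity on LOS reduction, assuming net benefit is linear in it.
# Anchor points from the abstract: net margin of -$42,883/yr at the observed
# 24.3-minute reduction, and breakeven at a 98.4-minute reduction.
los_observed, net_observed = 24.3, -42_883.0
los_breakeven = 98.4

value_per_min = -net_observed / (los_breakeven - los_observed)  # $/yr per min
fixed_cost = value_per_min * los_breakeven                      # $/yr

def net_margin(los_reduction_min):
    """Net margin ($/yr) implied by the linear model at a given reduction."""
    return value_per_min * los_reduction_min - fixed_cost

print(f"implied value of each minute of LOS saved: ${value_per_min:,.0f}/yr")
print(f"net margin at a 60-minute reduction: ${net_margin(60):,.0f}/yr")
```

By construction the model reproduces both anchor points, and evaluating it between them shows how far short of breakeven any intermediate LOS improvement falls.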
M. Westwood (Marie); T. van Asselt (Thea); B. Ramaekers (Bram); P. Whiting (Penny); P. Tokala (Praveen); M.A. Joore (Manuela); N. Armstrong (Nigel); J. Ross (Janine); J.L. Severens (Hans); J. Kleijnen (Jos)
2015-01-01
textabstractBackground: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of
Westwood, Marie; van Asselt, Thea; Ramaekers, Bram; Whiting, Penny; Thokala, Praveen; Joore, Manuela; Armstrong, Nigel; Ross, Janine; Severens, Johan; Kleijnen, Jos
BACKGROUND: Early diagnosis of acute myocardial infarction (AMI) can ensure quick and effective treatment but only 20% of adults with emergency admissions for chest pain have an AMI. High-sensitivity cardiac troponin (hs-cTn) assays may allow rapid rule-out of AMI and avoidance of unnecessary
Some Observations on Cost-Effectiveness Analysis in Education.
Geske, Terry G.
1979-01-01
The general nature of cost-effectiveness analysis is discussed, analytical frameworks for conducting cost-effectiveness studies are described, and some of the problems inherent in measuring educational costs and in assessing program effectiveness are addressed. (Author/IRT)
Wear-Out Sensitivity Analysis Project Abstract
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, I took historical data on operational times and failure times of these ORUs and used them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. Finally, I varied the wear-out characteristic from its intrinsic value to extremely high wear-out values and determined how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
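The simulation loop described, Weibull lifetimes, Monte Carlo over a population, and a sweep of the wear-out characteristic (the Weibull shape parameter), can be sketched as follows. Unit counts, characteristic life, and mission length are illustrative stand-ins, not the ISS data:

```python
import numpy as np

def prob_of_sufficiency(shape, scale_h=6000.0, n_units=5, n_spares=4,
                        mission_h=8760.0, n_sims=100_000, seed=0):
    """Monte Carlo probability that n_spares cover every failure among
    n_units over a mission, with Weibull(shape, scale) lifetimes.  The
    characteristic life is set shorter than the mission here, so a larger
    shape (stronger wear-out) produces more failures within the mission."""
    rng = np.random.default_rng(seed)
    lifetimes = scale_h * rng.weibull(shape, (n_sims, n_units))
    failures = (lifetimes < mission_h).sum(axis=1)
    return (failures <= n_spares).mean()

# Sweep the wear-out characteristic (the Weibull shape parameter):
for beta in (1.0, 2.0, 4.0):
    print(f"shape={beta}: P(sufficiency) = {prob_of_sufficiency(beta):.3f}")
```

A worst-case spares estimate then amounts to finding the smallest n_spares for which this probability stays above a required level at the highest plausible wear-out value.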
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
Cost-benefit analysis of FBR program
Energy Technology Data Exchange (ETDEWEB)
Suzuki, S [Japan Energy Economic Research Inst., Tokyo
1975-07-01
In several countries of the world, both financial and human resources are being invested in the development of fast breeder reactors. Quantitative determination of the benefit which can be expected as the reward for these research and development efforts is the purpose of the present study: it is a cost-benefit analysis. Instances of this analysis are given, namely the work of the Institute of Energy Economics in Japan and that of the U.S. AEC. The effect of the development of fast breeder reactors is evaluated in this way, and problems in the analysis method are indicated. These two works in Japan and the U.S. were performed before the so-called oil crisis.
Cost-benefit analysis of FBR program
International Nuclear Information System (INIS)
Suzuki, Shinji
1975-01-01
In several countries of the world, both financial and human resources are being invested in the development of fast breeder reactors. Quantitative determination of the benefit which can be expected as the reward for these research and development efforts is the purpose of the present study: it is a cost-benefit analysis. Instances of this analysis are given, namely the work of the Institute of Energy Economics in Japan and that of the U.S. AEC. The effect of the development of fast breeder reactors is evaluated in this way, and problems in the analysis method are indicated. These two works in Japan and the U.S. were performed before the so-called oil crisis. (Mori, K.)
Uranium solution mining cost estimating technique: means for rapid comparative analysis of deposits
International Nuclear Information System (INIS)
Anon.
1978-01-01
Twelve graphs provide a technique for determining relative cost ranges for uranium solution mining projects. The use of the technique can provide a consistent framework for rapid comparative analysis of various properties of mining situations. The technique is also useful to determine the sensitivities of cost figures to incremental changes in mining factors or deposit characteristics
SENSIT: a cross-section and design sensitivity and uncertainty analysis code
International Nuclear Information System (INIS)
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE
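The variance computation described follows the standard first-order "sandwich rule" of generalized perturbation theory: the relative variance of a response R is SᵀCS for a sensitivity profile S and a relative covariance matrix C. A toy three-group sketch (all numbers illustrative, not SENSIT data):

```python
import numpy as np

# First-order "sandwich rule": relative variance of a response R is S^T C S,
# for sensitivity profile S and relative covariance matrix C.
S = np.array([0.8, -0.3, 0.1])           # sensitivity coefficients
C = np.array([[0.010, 0.002, 0.000],     # relative covariance of the
              [0.002, 0.020, 0.001],     # group cross sections
              [0.000, 0.001, 0.005]])
rel_var = S @ C @ S
print(f"relative standard deviation of the response: {rel_var**0.5:.3%}")  # 8.503%
```

The same quadratic form extends to the code's other uncertainty inputs (SED uncertainty parameters, response-function covariances) with the appropriate sensitivity vectors.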
Multitarget global sensitivity analysis of n-butanol combustion.
Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T
2013-05-02
A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis.
Sensitivity analysis in multi-parameter probabilistic systems
International Nuclear Information System (INIS)
Walker, J.R.
1987-01-01
Probabilistic methods involving the use of multi-parameter Monte Carlo analysis can be applied to a wide range of engineering systems. The output from the Monte Carlo analysis is a probabilistic estimate of the system consequence, which can vary spatially and temporally. Sensitivity analysis aims to examine how the output consequence is influenced by the input parameter values. Sensitivity analysis provides the necessary information so that the engineering properties of the system can be optimized. This report details a package of sensitivity analysis techniques that together form an integrated methodology for the sensitivity analysis of probabilistic systems. The techniques have known confidence limits and can be applied to a wide range of engineering problems. The sensitivity analysis methodology is illustrated by performing the sensitivity analysis of the MCROC rock microcracking model
76 FR 56413 - Building Energy Codes Cost Analysis
2011-09-13
... intends to calculate three metrics: life-cycle cost, simple payback period, and cash flow. Life-cycle cost... exceed costs) will be considered cost effective. The payback period and cash flow analyses provide... of LCC analysis is the summing of costs and benefits over multiple years, it requires that cash flows...
Nuclear Power Plant Module, NPP-1: Nuclear Power Cost Analysis.
Whitelaw, Robert L.
The purpose of the Nuclear Power Plant Modules, NPP-1, is to determine the total cost of electricity from a nuclear power plant in terms of all the components contributing to cost. The plan of analysis is in five parts: (1) general formulation of the cost equation; (2) capital cost and fixed charges thereon; (3) operational cost for labor,…
An ESDIRK Method with Sensitivity Analysis Capabilities
DEFF Research Database (Denmark)
Kristensen, Morten Rode; Jørgensen, John Bagterp; Thomsen, Per Grove
2004-01-01
of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches...
Sensitivity Analysis of Fire Dynamics Simulation
DEFF Research Database (Denmark)
Brohus, Henrik; Nielsen, Peter V.; Petersen, Arnkell J.
2007-01-01
(Morris method). The parameters considered are selected among physical parameters and program specific parameters. The influence on the calculation result as well as the CPU time is considered. It is found that the result is highly sensitive to many parameters even though the sensitivity varies...
Cost-benefit analysis of wetland restoration
DEFF Research Database (Denmark)
Dubgaard, Alex
2004-01-01
The purpose of cost-benefit analysis (CBA) is to identify value-for-money solutions to government policies or projects. Environmental policy appraisal is typically complicated by the fact that there are a number of feasible solutions to a decision problem - each yielding a different mix of environ...... is to illustrate the application of CBA within the field of river restoration. The Skjern River restoration project in Denmark is used as an empirical example of how these methods can be applied in the wetland restoration context.
A low-cost EDXRF analysis system
International Nuclear Information System (INIS)
Kahdeman, J.E.; Watson, W.
1984-01-01
The article deals with an EDXRF (Energy Dispersive X-ray Fluorescence) system, the Spectrace™ 4020 (Tracor X-ray). The Spectra analysis software is both powerful and flexible enough to handle a wide variety of applications. The instrument was designed to be economical by integrating the major system components into a single unit; this practical approach to the hardware has cut the cost per unit. The software structure of the Spectrace 4020 is presented in a flow chart. The article also contains a diagram of the hardware configuration of the instrument.
Cost-benefit considerations in regulatory analysis
Energy Technology Data Exchange (ETDEWEB)
Mubayi, V.; Sailor, V.; Anandalingam, G.
1995-10-01
Justification for safety enhancements at nuclear facilities, e.g., a compulsory backfit to nuclear power plants, requires a value-impact analysis of the increase in overall public protection versus the cost of implementation. It has been customary to assess the benefits in terms of radiation dose to the public averted by the introduction of the safety enhancement. Comparison of such benefits with the costs of the enhancement then requires an estimate of the monetary value of averted dose (dollars/person rem). This report reviews available information on a variety of factors that affect this valuation and assesses the continuing validity of the figure of $1000/person-rem averted, which has been widely used as a guideline in performing value-impact analyses. Factors that bear on this valuation include the health risks of radiation doses, especially the higher risk estimates of the BEIR V committee, recent calculations of doses and offsite costs by consequence codes for hypothesized severe accidents at U.S. nuclear power plants under the NUREG-1150 program, and recent information on the economic consequences of the Chernobyl accident in the Soviet Union and estimates of risk avoidance based on the willingness-to-pay criterion. The report analyzes these factors and presents results on the dollars/person-rem ratio arising from different assumptions on the values of these factors.
29 CFR 95.45 - Cost and price analysis.
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Cost and price analysis. 95.45 Section 95.45 Labor Office of... Procurement Standards § 95.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be...
43 CFR 12.945 - Cost and price analysis.
2010-10-01
... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Cost and price analysis. 12.945 Section 12... Requirements § 12.945 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be...
24 CFR 84.45 - Cost and price analysis.
2010-04-01
... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Cost and price analysis. 84.45....45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various...
41 CFR 105-72.505 - Cost and price analysis.
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Cost and price analysis... § 105-72.505 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be...
22 CFR 145.45 - Cost and price analysis.
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Cost and price analysis. 145.45 Section 145.45....45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various...
Peter, Winter
2005-01-01
This paper aims at providing an insight into Japanese cost accounting. Firstly, the development of cost accounting in Japan is delineated. Subsequently, the cost accounting systems codified in the Japanese cost accounting standard are analysed based on the classification according to Hoitsch/Schmitz. Lastly, a critical appraisal of the cost accounting systems of the Japanese cost accounting standard as well as a comparison to German and American cost accounting systems are conducted.
Probability density adjoint for sensitivity analysis of the Mean of Chaos
Energy Technology Data Exchange (ETDEWEB)
Blonigan, Patrick J., E-mail: blonigan@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
2014-08-01
Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
A cost sensitive inpatient bed reservation approach to reduce emergency department boarding times.
Qiu, Shanshan; Chinnam, Ratna Babu; Murat, Alper; Batarse, Bassam; Neemuchwala, Hakimuddin; Jordan, Will
2015-03-01
Emergency departments (ED) in hospitals are experiencing severe crowding and prolonged patient waiting times. A significant contributing factor is boarding delays where admitted patients are held in ED (occupying critical resources) until an inpatient bed is identified and readied in the admit wards. Recent research has suggested that if the hospital admissions of ED patients can be predicted during triage or soon after, then bed requests and preparations can be triggered early on to reduce patient boarding time. We propose a cost sensitive bed reservation policy that recommends optimal bed reservation times for patients. The policy relies on a classifier that estimates the probability that the ED patient will be admitted using the patient information collected and readily available at triage or right after. The policy is cost sensitive in that it accounts for costs associated with patient admission prediction misclassification as well as costs associated with incorrectly selecting the reservation time. Results from testing the proposed bed reservation policy using data from a VA Medical Center are very promising and suggest significant cost saving opportunities and reduced patient boarding times.
A cost-benefit analysis of The National Map
Halsing, David L.; Theissen, Kevin; Bernknopf, Richard
2003-01-01
, over its 30-year projected lifespan, The National Map will bring a net present value (NPV) of benefits of $2.05 billion in 2001 dollars. The average time until the initial investments (the break-even period) are recovered is 14 years. Table ES-1 shows a running total of NPV in each year of the simulation model. In year 14, The National Map first shows a positive NPV, and so the table is highlighted in gray after that point. Figure ES-1 is a graph of the total benefit and total cost curves of a single model run over time. The curves cross in year 14, when the project breaks even. A sensitivity analysis of the input variables illustrated that these results of the NPV of The National Map are quite robust. Figure ES-2 plots the mean NPV results from 60 different scenarios, each consisting of fifty 30-year runs. The error bars represent a two-standard-deviation range around each mean. The analysis that follows contains the details of the cost-benefit analysis, the framework for evaluating economic benefits, a computational simulation tool, and a sensitivity analysis of model variables and values.
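The break-even calculation above can be sketched as finding the first year in which cumulative discounted benefits exceed cumulative discounted costs. The cash-flow stream below is illustrative, not the study's actual inputs:

```python
# Break-even NPV over a multi-year horizon: return the first year in which
# the cumulative discounted net cash flow turns positive.
# Cash-flow values and the 5% discount rate are invented for illustration.

def breakeven_year(benefits, costs, rate):
    npv = 0.0
    for year, (b, c) in enumerate(zip(benefits, costs), start=1):
        npv += (b - c) / (1 + rate) ** year
        if npv > 0:
            return year
    return None  # never breaks even within the horizon

years = 30
costs = [100.0] + [10.0] * (years - 1)   # heavy up-front investment
benefits = [0.0] + [25.0] * (years - 1)  # benefits begin in year 2
print(breakeven_year(benefits, costs, 0.05))  # → 10
```

With the study's own inputs, the same computation yields the 14-year break-even period reported above.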
Procedures for uncertainty and sensitivity analysis in repository performance assessment
International Nuclear Information System (INIS)
Poern, K.; Aakerlund, O.
1985-10-01
The objective of the project was mainly a literature study of available methods for the treatment of parameter uncertainty propagation and sensitivity aspects in complex models such as those concerning geologic disposal of radioactive waste. The study, which has run parallel with the development of a code package (PROPER) for computer-assisted analysis of functions, also aims at the choice of accurate, cost-effective methods for uncertainty and sensitivity analysis. Such a choice depends on several factors, such as the number of input parameters, the capacity of the model, and the computer resources required to use the model. Two basic approaches are addressed in the report. In one of these, the model of interest is directly simulated by an efficient sampling technique to generate an output distribution. In the other basic method, the model is replaced by an approximating analytical response surface, which is then used in the sampling phase or in moment matching to generate the output distribution. Both approaches are illustrated by simple examples in the report. (author)
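The two approaches contrasted above can be sketched side by side: propagate parameter uncertainty by sampling the model directly, or fit a cheap response surface on a small design and sample that instead. The toy model below is an assumption for illustration, not PROPER's:

```python
# Two routes to an output distribution: (1) direct Monte Carlo sampling of the
# model; (2) a fitted linear response surface sampled in the model's place.
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    return 3.0 * x1 + 0.5 * x2 ** 2   # stand-in for an expensive simulation

# Route 1: direct Monte Carlo propagation of the input uncertainty.
x1 = rng.normal(1.0, 0.1, 10_000)
x2 = rng.normal(2.0, 0.2, 10_000)
y_direct = model(x1, x2)

# Route 2: fit a linear response surface on a small 50-run design, then
# evaluate the cheap surrogate on the full sample.
X = np.column_stack([np.ones(50), x1[:50], x2[:50]])
coef, *_ = np.linalg.lstsq(X, y_direct[:50], rcond=None)
y_surface = coef[0] + coef[1] * x1 + coef[2] * x2

print(y_direct.mean(), y_surface.mean())  # the two estimates should be close
```

The surrogate needs only 50 model runs; its accuracy degrades as the model becomes more nonlinear over the input ranges, which is the trade-off the report weighs.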
Superconducting Accelerating Cavity Pressure Sensitivity Analysis
International Nuclear Information System (INIS)
Rodnizki, J.; Horvits, Z.; Ben Aliz, Y.; Grin, A.; Weissman, L.
2014-01-01
The sensitivity of the cavity was evaluated and is fully consistent with the measured values. It was found that the tuning system (the fog structure) makes a significant contribution to the cavity sensitivity. By using ribs, or by modifying the rigidity of the fog, the HWR sensitivity may be reduced. During cool-down and warm-up the stresses on the HWR have to be analyzed to avoid plastic deformation of the HWR, since the yield strength of niobium is an order of magnitude lower at room temperature.
Modelling User-Costs in Life Cycle Cost-Benefit (LCCB) analysis
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
2008-01-01
The importance of including user costs in Life-Cycle Cost-Benefit analysis of structures is discussed in this paper. This is especially important for bridges. Repair and/or failure of a bridge will usually result in user costs greater than the repair or replacement costs of the bridge...
Cost and performance analysis of physical security systems
International Nuclear Information System (INIS)
Hicks, M.J.; Yates, D.; Jago, W.H.; Phillips, A.W.
1998-04-01
Analysis of cost and performance of physical security systems can be a complex, multi-dimensional problem. There are a number of point tools that address various aspects of cost and performance analysis. Increased interest in cost tradeoffs of physical security alternatives has motivated development of an architecture called Cost and Performance Analysis (CPA), which takes a top-down approach to aligning cost and performance metrics. CPA incorporates results generated by existing physical security system performance analysis tools, and utilizes an existing cost analysis tool. The objective of this architecture is to offer comprehensive visualization of complex data to security analysts and decision-makers
Derivative based sensitivity analysis of gamma index
Directory of Open Access Journals (Sweden)
Biplab Sarkar
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods, to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if it is <1, the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would obviously be poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD', δD") between these two curves were derived and used as the
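The minimum-gamma search described above can be sketched for 1D profiles: for each reference point, take the minimum over evaluated points of the combined DD/DTA metric. The profiles below are toy data, not the error-function profiles of the study:

```python
# Minimal 1D global gamma computation: per reference point, the minimum over
# evaluated points of sqrt((distance/DTA)^2 + (dose diff/DD)^2).
# The tanh-shaped penumbra profiles are illustrative stand-ins.
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=1.0, dta=1.0):
    """Return per-point gamma; dd in % of max reference dose, dta in mm."""
    dose_crit = dd / 100.0 * ref_dose.max()
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dist = (eval_pos - rp) / dta
        diff = (eval_dose - rd) / dose_crit
        gammas.append(np.sqrt(dist ** 2 + diff ** 2).min())
    return np.array(gammas)

pos = np.linspace(-10.0, 10.0, 201)                  # mm, 0.1 mm spacing
ref = 50.0 * (1.0 - np.tanh(pos / 3.0))              # penumbra-like profile
shifted = 50.0 * (1.0 - np.tanh((pos - 0.5) / 3.0))  # 0.5 mm distance error

g = gamma_1d(pos, ref, pos, shifted)
print((g < 1.0).mean())  # pass rate; a 0.5 mm shift passes a 1%/1 mm criterion
```

Note the blind spot the study targets: the shifted profile passes everywhere regardless of how rough its gradient is, because only the minimum gamma is inspected.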
Richards, Robert J; Hammitt, James K
2002-09-01
Although surgery is recommended after two or more attacks of uncomplicated diverticulitis, the optimal timing for surgery in terms of cost-effectiveness is unknown. A Markov model was used to compare the costs and outcomes of performing surgery after one, two, or three uncomplicated attacks in 60-year-old hypothetical cohorts. Transition state probabilities were assigned values using published data and expert opinion. Costs were estimated from Medicare reimbursement rates. Surgery after the third attack is cost saving, yielding more years of life and quality adjusted life years at a lower cost than the other two strategies. The results were not sensitive to many of the variables tested in the model or to changes made in the discount rate (0-5%). In conclusion, performing prophylactic resection after the third attack of diverticulitis is cost saving in comparison to resection performed after the first or second attacks and remains cost-effective during sensitivity analysis.
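A Markov cohort model of the kind used above tracks a cohort across health states year by year, accumulating discounted costs and quality-adjusted life years. The states, transition probabilities, costs, and utilities below are all invented for illustration, not taken from the article:

```python
# Sketch of a discounted Markov cohort model: annual transitions between
# health states with per-state costs and utilities. All inputs are invented.
import numpy as np

states = ["well", "attack", "post_surgery", "dead"]
P = np.array([                # annual transition probabilities (rows sum to 1)
    [0.85, 0.10, 0.00, 0.05],
    [0.00, 0.00, 0.90, 0.10],
    [0.90, 0.00, 0.05, 0.05],
    [0.00, 0.00, 0.00, 1.00],
])
cost = np.array([500.0, 12_000.0, 3_000.0, 0.0])  # annual cost per state
qaly = np.array([0.95, 0.60, 0.85, 0.0])          # annual utility per state

dist = np.array([1.0, 0.0, 0.0, 0.0])  # cohort starts in "well"
disc = 0.03                            # 3% annual discount rate
total_cost = total_qaly = 0.0
for year in range(30):
    df = 1.0 / (1.0 + disc) ** year
    total_cost += df * dist @ cost
    total_qaly += df * dist @ qaly
    dist = dist @ P                    # advance the cohort one year

print(total_cost, total_qaly)
```

Comparing total cost and total QALYs across strategies (surgery after one, two, or three attacks) is what identifies the cost-saving strategy; re-running with perturbed inputs is the sensitivity analysis the abstract mentions.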
MOVES2010a regional level sensitivity analysis
2012-12-10
This document discusses the sensitivity of emission rates to various input parameters using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...
Cabot, Jennifer C; Lee, Cho Rok; Brunaud, Laurent; Kleiman, David A; Chung, Woong Youn; Fahey, Thomas J; Zarnegar, Rasa
2012-12-01
This study presents a cost analysis of the standard cervical, gasless transaxillary endoscopic, and gasless transaxillary robotic thyroidectomy approaches based on medical costs in the United States. A retrospective review of 140 patients who underwent standard cervical, transaxillary endoscopic, or transaxillary robotic thyroidectomy at 2 tertiary centers was conducted. The cost model included operating room charges, anesthesia fee, consumables cost, equipment depreciation, and maintenance cost. Sensitivity analyses assessed individual cost variables. The mean operative times for the standard cervical, transaxillary endoscopic, and transaxillary robotic approaches were 121 ± 18.9, 185 ± 26.0, and 166 ± 29.4 minutes, respectively. The total costs for the standard cervical, transaxillary endoscopic, and transaxillary robotic approaches were $9,028 ± $891, $12,505 ± $1,222, and $13,670 ± $1,384, respectively. Both transaxillary approaches were significantly more expensive than the standard cervical technique, reaching cost parity only when transaxillary endoscopic operative time decreased to 111 minutes and transaxillary robotic operative time decreased to 68 minutes. Increasing the case load did not resolve the cost difference. Transaxillary endoscopic and transaxillary robotic thyroidectomies are significantly more expensive than the standard cervical approach. Decreasing operative times reduces this cost difference. The greater expense may be prohibitive in countries with a flat reimbursement schedule. Copyright © 2012 Mosby, Inc. All rights reserved.
Development of computer software for pavement life cycle cost analysis.
1988-01-01
The life cycle cost analysis program (LCCA) is designed to automate and standardize life cycle costing in Virginia. It allows the user to input information necessary for the analysis, and it then completes the calculations and produces a printed copy...
Flat plate vs. concentrator solar photovoltaic cells - A manufacturing cost analysis
Granon, L. A.; Coleman, M. G.
1980-01-01
The choice of which photovoltaic system (flat plate or concentrator) to use for utilizing solar cells to generate electricity depends mainly on the cost. A detailed, comparative manufacturing cost analysis of the two types of systems is presented. Several common assumptions, i.e., cell thickness, interest rate, power rate, factory production life, polysilicon cost, and direct labor rate are utilized in this analysis. Process sequences, cost variables, and sensitivity analyses have been studied, and results of the latter show that the most important parameters which determine manufacturing costs are concentration ratio, manufacturing volume, and cell efficiency. The total cost per watt of the flat plate solar cell is $1.45, and that of the concentrator solar cell is $1.85, the higher cost being due to the increased process complexity and material costs.
Long- vs. short-term energy storage: sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (,Advanced Energy Analysis, Piedmont, CA)
2007-07-01
This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology--asymmetric lead-carbon capacitors--has also been added. Energy storage technologies are examined for three application categories--bulk energy storage, distributed generation, and power quality--with significant variations in discharge time and storage capacity. Sensitivity analyses include cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is hours of storage available for discharge.
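The annual cost figure described above combines a capital carrying charge (a capital recovery factor applied over the system life) with fixed O&M and charging-energy costs. A sketch in that spirit, with all input values invented rather than taken from the report:

```python
# Annual storage system cost in $/kW-yr: capital carrying charge plus fixed
# O&M plus charging-energy cost. All input values below are illustrative.

def capital_recovery_factor(rate, years):
    """Annualization factor converting a capital cost to a yearly charge."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def annual_cost_per_kw(capital_per_kw, rate, life_yr, fom_per_kw_yr,
                       elec_cost_per_kwh, hours_storage, cycles_per_yr, eff):
    carrying = capital_per_kw * capital_recovery_factor(rate, life_yr)
    energy = elec_cost_per_kwh * hours_storage * cycles_per_yr / eff
    return carrying + fom_per_kw_yr + energy

# A hypothetical 4-hour battery: $1500/kW capital, 10-year life, 10% rate,
# $20/kW-yr fixed O&M, $0.05/kWh charging energy, 250 cycles/yr, 80% efficient.
print(annual_cost_per_kw(1500, 0.10, 10, 20, 0.05, 4, 250, 0.8))
```

The report's sensitivity findings fall out of this structure: a shorter system life raises the capital recovery factor (hence replacement and carrying charges), and electricity or gas prices scale the energy term.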
Directory of Open Access Journals (Sweden)
Xiao-meng Song
2013-01-01
Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long run times and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected for quantification of sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of a complex large-scale distributed hydrological model.
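The Morris screening stage described above perturbs one factor at a time along random trajectories and ranks parameters by the mean absolute elementary effect (μ*). A bare-bones sketch on an invented test function:

```python
# Minimal Morris screening: random one-at-a-time trajectories, elementary
# effects EE_i = (f(x + delta*e_i) - f(x)) / delta, ranked by mu* = mean |EE|.
# The linear test function is illustrative, not a hydrological model.
import numpy as np

def morris_mu_star(f, n_params, n_traj=50, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, n_params)  # random trajectory start
        for i in rng.permutation(n_params):      # perturb one factor at a time
            x_step = x.copy()
            x_step[i] += delta
            effects[i].append((f(x_step) - f(x)) / delta)
            x = x_step                           # walk along the trajectory
    return np.array([np.mean(np.abs(e)) for e in effects])

# Toy model: x0 dominates, x1 matters a little, x2 is inert.
f = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu = morris_mu_star(f, 3)
print(mu)  # mu* ranking identifies x0 as the dominant parameter
```

In the paper's framework, factors with negligible μ* are frozen, and only the survivors go forward to the (far more expensive) variance-based Sobol stage.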
Cost-effectiveness analysis of radon remediation in schools
International Nuclear Information System (INIS)
Kennedy, C.A.; Gray, A.M.
2000-01-01
sensitivity analysis shows that the ratio is particularly sensitive to assumptions about two parameters: the average capital cost of remediation and the discount rate chosen for the life years. The overall model presented in this study can be applied to any other area, and alternative regional parameter estimates can be substituted if these are available. As the sensitivity analysis shows, however, remediation is likely to prove cost-effective even if these parameter estimates are substantially different. These results should help to inform further discussion of policy setting for radon remediation in various settings. It provides an empirical example of the type of economic analysis encouraged by both the UK NRPB (1986) and the ICRP (1983). General information on the average costs of remediation and potential savings to the health care system will be helpful as increasing numbers of local authorities start planning remediation programmes for the schools under their care. This study also highlights the need for the evaluation of other school remediation-based radon-induced lung cancer prevention programmes in other countries using similar methodological techniques. (author)
7 CFR 550.47 - Cost and price analysis.
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Cost and price analysis. 550.47 Section 550.47... OF AGRICULTURE GENERAL ADMINISTRATIVE POLICY FOR NON-ASSISTANCE COOPERATIVE AGREEMENTS Management of Agreements Procurement Standards § 550.47 Cost and price analysis. Some form of cost or price analysis shall...
24 CFR 965.402 - Benefit/cost analysis.
2010-04-01
... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Benefit/cost analysis. 965.402...-Owned Projects § 965.402 Benefit/cost analysis. (a) A benefit/cost analysis shall be made to determine... (Continued) OFFICE OF ASSISTANT SECRETARY FOR PUBLIC AND INDIAN HOUSING, DEPARTMENT OF HOUSING AND URBAN...
15 CFR 14.45 - Cost and price analysis.
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Cost and price analysis. 14.45 Section... COMMERCIAL ORGANIZATIONS Post-Award Requirements Procurement Standards § 14.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with...
14 CFR 1274.506 - Cost and price analysis.
2010-01-01
... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Cost and price analysis. 1274.506 Section... WITH COMMERCIAL FIRMS Procurement Standards § 1274.506 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every...
45 CFR 74.45 - Cost and price analysis.
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Cost and price analysis. 74.45 Section 74.45... ORGANIZATIONS, AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Procurement Standards § 74.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in...
45 CFR 2543.45 - Cost and price analysis.
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Cost and price analysis. 2543.45 Section 2543.45... ORGANIZATIONS Post-Award Requirements Property Standards § 2543.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every...
49 CFR 19.45 - Cost and price analysis.
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false Cost and price analysis. 19.45 Section 19.45... Requirements Procurement Standards § 19.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price...
28 CFR 70.45 - Cost and price analysis.
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Cost and price analysis. 70.45 Section 70... NON-PROFIT ORGANIZATIONS Post-Award Requirements Procurement Standards § 70.45 Cost and price analysis. Some form of cost or price analysis must be made and documented in the procurement files in connection...
40 CFR 35.6585 - Cost and price analysis.
2010-07-01
... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Cost and price analysis. 35.6585... Response Actions Procurement Requirements Under A Cooperative Agreement § 35.6585 Cost and price analysis. (a) General. The recipient must conduct and document a cost or price analysis in connection with...
Harrington, Rachel; Lee, Edward; Yang, Hongbo; Wei, Jin; Messali, Andrew; Azie, Nkechi; Wu, Eric Q; Spalding, James
2017-01-01
Invasive aspergillosis (IA) is associated with a significant clinical and economic burden. The phase III SECURE trial demonstrated non-inferiority in clinical efficacy between isavuconazole and voriconazole. No studies have evaluated the cost-effectiveness of isavuconazole compared to voriconazole. The objective of this study was to evaluate the costs and cost-effectiveness of isavuconazole vs. voriconazole for the first-line treatment of IA from the US hospital perspective. An economic model was developed to assess the costs and cost-effectiveness of isavuconazole vs. voriconazole in hospitalized patients with IA. The time horizon was the duration of hospitalization. Length of stay for the initial admission, incidence of readmission, clinical response, overall survival rates, and experience of adverse events (AEs) came from the SECURE trial. Unit costs were from the literature. Total costs per patient were estimated, composed of drug costs, costs of AEs, and costs of hospitalizations. Incremental costs per death avoided and per additional clinical responders were reported. Deterministic and probabilistic sensitivity analyses (DSA and PSA) were conducted. Base case analysis showed that isavuconazole was associated with a $7418 lower total cost per patient than voriconazole. In both incremental costs per death avoided and incremental costs per additional clinical responder, isavuconazole dominated voriconazole. Results were robust in sensitivity analysis. Isavuconazole was cost saving and dominant vs. voriconazole in most DSA. In PSA, isavuconazole was cost saving in 80.2% of the simulations and cost-effective in 82.0% of the simulations at the $50,000 willingness to pay threshold per additional outcome. Isavuconazole is a cost-effective option for the treatment of IA among hospitalized patients. Astellas Pharma Global Development, Inc.
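The dominance result reported above illustrates a general rule of incremental analysis: a strategy that costs less and yields more effect needs no ICER at all. A sketch with invented placeholder numbers, not the SECURE trial results:

```python
# Incremental cost-effectiveness check: return an ICER, or flag dominance
# when the new strategy is both cheaper and more effective.
# The per-patient costs and response rates below are hypothetical.

def icer(cost_new, eff_new, cost_old, eff_old):
    """Incremental cost per additional unit of effect, or 'dominant'."""
    d_cost = cost_new - cost_old
    d_eff = eff_new - eff_old
    if d_cost <= 0 and d_eff > 0:
        return "dominant"
    return d_cost / d_eff

# Hypothetical per-patient totals and response rates for two antifungals:
print(icer(42_000, 0.35, 49_418, 0.33))  # cheaper and better → 'dominant'
```

When the new strategy instead costs more, the ICER is compared against a willingness-to-pay threshold (e.g., the $50,000 per additional outcome used in the probabilistic analysis above).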
Analysis and modeling of rail maintenance costs
Directory of Open Access Journals (Sweden)
Amir Ali Bakhshi
2012-01-01
Railroad maintenance engineering plays an important role in the availability of roads and in reducing the cost of railroad incidents. Rail is one of the most important components of the railroad industry and needs regular maintenance, since it accounts for a significant share of total maintenance cost. Any attempt to optimize the total cost of maintenance could substantially reduce the cost of the railroad system and the total cost of the industry. The paper presents a new method to estimate the cost of rail failure using different cost components, such as the cost of inspection and the cost of risk associated with possible accidents. The proposed model is applied to a real-world case study of railroad transportation in the Tehran region, and the results have been analyzed.
An analysis of energy conservation measure costs
International Nuclear Information System (INIS)
Jones, R.; Ellis, R.; Gellineau, D.
1990-01-01
This paper reports on a Denver Support Office project to evaluate cost estimation in the Institutional Conservation Program. Unit cost characteristics and cost prediction accuracy were evaluated from 1,721 Energy Conservation Measures (ECMs) and 390 Technical Assistance (TA) reports funded in the last six years. This information is especially useful to state and DOE review engineers in determining the reasonableness of future cost estimates. The estimated cost provisions for TA report grants were generally adequate to cover the actual costs. Individually, there was a tendency for TA reports to cost less than estimated by about 10%. TA report unit costs averaged $.09 to $.11 per square foot, and decreased as the building size increased. Individually, there was a tendency for ECMs to cost more than estimated by about 17%. Overall, the estimated costs of the 1,721 measures were $20.4 million, while the actual costs were $21.4 million. This 4.6% difference indicates that, overall, ECM cost estimates have provided a reasonable basis for grant awards. There was high variation in ECM unit costs. The data did not support speculation that there is a tendency to manipulate cost estimates to fit ECMs within the simple payback eligibility criteria of 2 to 10 years
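The 2-to-10-year simple-payback eligibility screen mentioned above is just cost divided by annual savings, checked against the program's bounds. A sketch with made-up measure costs and savings:

```python
# Simple-payback eligibility screen for conservation measures.
# Costs, savings, and the hypothetical measures below are illustrative.

def simple_payback(cost, annual_savings):
    """Years to recover the measure cost from annual energy savings."""
    return cost / annual_savings

def eligible(cost, annual_savings, low=2.0, high=10.0):
    """Apply the 2-to-10-year simple payback eligibility criterion."""
    return low <= simple_payback(cost, annual_savings) <= high

print(eligible(12_000, 2_000))  # 6-year payback → True
print(eligible(12_000, 8_000))  # 1.5-year payback → too quick, False
```

The paper's concern is whether estimates get nudged to land inside this window; its finding is that the data did not support that speculation.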
BIM cost analysis of transport infrastructure projects
Volkov, Andrey; Chelyshkov, Pavel; Grossman, Y.; Khromenkova, A.
2017-10-01
The article describes a method for analyzing the energy costs of transport infrastructure objects using BIM software. The paper considers several options for the orientation of a building using the SketchUp and IES VE software programs; these options allow the best direction of the building facades to be chosen. Particular attention is given to the distribution of the temperature field in a cross-section of the wall, according to a calculation made in the ELCUT software. Issues related to the calculation of solar radiation penetration into a building and the selection of translucent structures are also considered. The article presents data on building codes relating to the transport sector, on the basis of which the calculations were made. The author emphasizes that BIM programs should be implemented and used in order to optimize the thermal behavior of a building and increase its energy efficiency using climatic data.
An improved set of standards for finding cost for cost-effectiveness analysis.
Barnett, Paul G
2009-07-01
Guidelines have helped standardize methods of cost-effectiveness analysis, allowing different interventions to be compared and enhancing the generalizability of study findings. There is agreement that all relevant services be valued from the societal perspective using a long-term time horizon and that more exact methods be used to cost services most affected by the study intervention. Guidelines are not specific enough with respect to costing methods, however. The literature was reviewed to identify the problems associated with the 4 principal methods of cost determination. Microcosting requires direct measurement and is ordinarily reserved to cost novel interventions. Analysts should include nonwage labor cost, person-level and institutional overhead, and the cost of development, set-up activities, supplies, space, and screening. Activity-based cost systems have promise of finding accurate costs of all services provided, but are not widely adopted. Quality must be evaluated and the generalizability of cost estimates to other settings must be considered. Administrative cost estimates, chiefly cost-adjusted charges, are widely used, but the analyst must consider items excluded from the available system. Gross costing methods determine quantity of services used and employ a unit cost. If the intervention will affect the characteristics of a service, the method should not assume that the service is homogeneous. Questions are posed for future reviews of the quality of costing methods. The analyst must avoid inappropriate assumptions, especially those that bias the analysis by exclusion of costs that are affected by the intervention under study.
NPV Sensitivity Analysis: A Dynamic Excel Approach
Mangiero, George A.; Kraten, Michael
2017-01-01
Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
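The dynamic recalculation the paper builds in Excel amounts to a one-way sensitivity table: recompute NPV as each input (here the discount rate) is swept. Cash flows below are illustrative:

```python
# One-way sensitivity of NPV to the discount rate.
# The project cash flows are invented for illustration.

def npv(rate, cash_flows):
    """End-of-year cash flows; cash_flows[0] occurs at year 1."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [-1000.0, 300.0, 400.0, 500.0, 600.0]
for r in (0.00, 0.05, 0.10, 0.15):
    print(f"rate {r:.0%}: NPV = {npv(r, flows):.2f}")
```

Sweeping two inputs at once (rate and a cash-flow growth assumption, say) gives the two-variable data table that spreadsheets render as the visually graphic presentation the paper advocates.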
Sensitivity Analysis for Multidisciplinary Systems (SAMS)
2016-12-01
Approved for public release; distribution is unlimited.
Simplified procedures for fast reactor fuel cycle and sensitivity analysis
International Nuclear Information System (INIS)
Badruzzaman, A.
1979-01-01
The Continuous Slowing Down-Integral Transport Theory has been extended to perform criticality calculations in a fast reactor core-blanket system, achieving excellent prediction of the spectrum and the eigenvalue. The integral transport parameters did not need recalculation with source iteration and were found to be relatively constant with exposure. Fuel cycle parameters were accurately predicted when these were not varied, thus reducing a principal potential penalty of the integral transport approach, in which considerable effort may be required to calculate transport parameters in more complicated geometries. The small variation of the spectrum in the central core region, and its weak dependence on exposure in this region, at the core-blanket interface, and in the blanket, led to the development of inexpensive simplified procedures to complement exact methods. These procedures gave accurate predictions of key fuel cycle parameters such as cost, and of their sensitivity to variations in spectrum-averaged and multigroup cross sections. They also predicted the implications of design variations on these parameters very well. The accuracy of these procedures, and their use in analyzing a wide variety of sensitivities, demonstrate the potential utility of survey calculations in fast reactor analysis and fuel management.
Cost-benefit analysis of avian influenza control in Nepal.
Karki, S; Lupiani, B; Budke, C M; Karki, N P S; Rushton, J; Ivanek, R
2015-12-01
Numerous outbreaks of highly pathogenic avian influenza A strain H5N1 have occurred in Nepal since 2009 despite implementation of a national programme to control the disease through surveillance and culling of infected poultry flocks. The objective of the study was to use cost-benefit analysis to compare the current control programme (CCP) with the possible alternatives of: i) no intervention (i.e., absence of control measures [ACM]) and ii) vaccinating 60% of the national poultry flock twice a year. In terms of the benefit-cost ratio, findings indicate a return of US $1.94 for every dollar spent in the CCP compared with ACM. The net present value of the CCP versus ACM, i.e., the amount of money saved by implementing the CCP rather than ACM, is US $861,507 (the benefits of the CCP [prevented losses that would have occurred under ACM] minus the cost of the CCP). The vaccination programme yields a return of US $2.32 for every dollar spent when compared with the CCP. The net present value of vaccination versus the CCP is approximately US $12 million. Sensitivity analysis indicated that the findings were robust to different rates of discounting, whereas results were sensitive to the assumed market loss and the number of birds affected in the outbreaks under the ACM and vaccination options. Overall, the findings of the study indicate that the CCP is economically superior to ACM, but that vaccination could give greater economic returns and may be a better control strategy. Future research should be directed towards evaluating the financial feasibility and social acceptability of the CCP and of vaccination, with an emphasis on evaluating market reaction to the presence of H5N1 infection in the country.
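The benefit-cost ratio and net present value compared in the study reduce to discounted sums of benefit and cost streams; a hedged Python sketch with invented yearly figures (not the study's Nepalese data):

```python
# Hedged sketch of a benefit-cost comparison; the discount rate and yearly
# figures are illustrative assumptions, not the study's data.
def present_value(amounts, rate):
    """Discounted sum; amounts[0] occurs at t = 0."""
    return sum(a / (1 + rate) ** t for t, a in enumerate(amounts))

def bcr_and_npv(benefits, costs, rate=0.05):
    """Return (benefit-cost ratio, net present value) of a programme."""
    pv_b = present_value(benefits, rate)
    pv_c = present_value(costs, rate)
    return pv_b / pv_c, pv_b - pv_c

# Illustrative: three years of prevented losses vs programme costs.
ratio, net = bcr_and_npv(benefits=[500_000, 520_000, 540_000],
                         costs=[260_000, 265_000, 270_000])
```

The sensitivity analysis described above amounts to re-running such a calculation while varying the discount rate and the assumed losses.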
A Cost-Effectiveness Analysis of the Swedish Universal Parenting Program All Children in Focus
Ulfsdotter, Malin
2015-01-01
Objective: There are few health economic evaluations of parenting programs with quality-adjusted life-years (QALYs) as the outcome measure. The objective of this study was, therefore, to conduct a cost-effectiveness analysis of the universal parenting program All Children in Focus (ABC). The goals were to estimate the costs of program implementation, investigate the health effects of the program, and examine its cost-effectiveness. Methods: A cost-effectiveness analysis was conducted. Costs included setup costs and operating costs. A parent proxy Visual Analog Scale was used to measure QALYs in children, whereas the General Health Questionnaire-12 was used for parents. A societal perspective was adopted, and the incremental cost-effectiveness ratio was calculated. To account for uncertainty in the estimate, the probability of cost-effectiveness was investigated, and sensitivity analyses were used to account for the uncertainty in cost data. Results: The cost was €326.3 per parent, of which €53.7 represented setup costs under the assumption that group leaders on average run 10 groups, and €272.6 was the operating costs. For health effects, the QALY gain was 0.0042 per child and 0.0027 per parent. These gains resulted in an incremental cost-effectiveness ratio for the base case of €47 290 per gained QALY. The sensitivity analyses resulted in ratios from €41 739 to €55 072. With the common Swedish threshold value of €55 000 per QALY, the probability of the ABC program being cost-effective was 50.8 percent. Conclusion: Our analysis of the ABC program demonstrates cost-effectiveness ratios below or just above the QALY threshold in Sweden. However, due to great uncertainty about the data, the health economic rationale for implementation should be further studied considering a longer time perspective, effects on siblings, and validated measuring techniques, before full-scale implementation. PMID:26681349
A Cost-Effectiveness Analysis of the Swedish Universal Parenting Program All Children in Focus.
Ulfsdotter, Malin; Lindberg, Lene; Månsdotter, Anna
2015-01-01
There are few health economic evaluations of parenting programs with quality-adjusted life-years (QALYs) as the outcome measure. The objective of this study was, therefore, to conduct a cost-effectiveness analysis of the universal parenting program All Children in Focus (ABC). The goals were to estimate the costs of program implementation, investigate the health effects of the program, and examine its cost-effectiveness. A cost-effectiveness analysis was conducted. Costs included setup costs and operating costs. A parent proxy Visual Analog Scale was used to measure QALYs in children, whereas the General Health Questionnaire-12 was used for parents. A societal perspective was adopted, and the incremental cost-effectiveness ratio was calculated. To account for uncertainty in the estimate, the probability of cost-effectiveness was investigated, and sensitivity analyses were used to account for the uncertainty in cost data. The cost was € 326.3 per parent, of which € 53.7 represented setup costs under the assumption that group leaders on average run 10 groups, and € 272.6 was the operating costs. For health effects, the QALY gain was 0.0042 per child and 0.0027 per parent. These gains resulted in an incremental cost-effectiveness ratio for the base case of € 47 290 per gained QALY. The sensitivity analyses resulted in ratios from € 41 739 to € 55 072. With the common Swedish threshold value of € 55 000 per QALY, the probability of the ABC program being cost-effective was 50.8 percent. Our analysis of the ABC program demonstrates cost-effectiveness ratios below or just above the QALY threshold in Sweden. However, due to great uncertainty about the data, the health economic rationale for implementation should be further studied considering a longer time perspective, effects on siblings, and validated measuring techniques, before full scale implementation.
Comparative Life-Cycle Cost Analysis Of Solar Photovoltaic Power ...
African Journals Online (AJOL)
Comparative Life-Cycle Cost Analysis Of Solar Photovoltaic Power System And Diesel Generator System For Remote Residential Application In Nigeria. ... like capital cost, and diesel fuel costs are varied. The results show the photovoltaic system to be more cost-effective at low-power ranges of electrical energy supply.
Moche CAPE Formula: Cost Analysis of Public Education.
Moche, Joanne Spiers
The Moche Cost Analysis of Public Education (CAPE) formula was developed to identify total and per pupil costs of regular elementary education, regular secondary education, elementary special education, and secondary special education. Costs are analyzed across five components: (1) comprehensive costs (including transportation and supplemental…
Social costs of road crashes : an international analysis.
Wijnen, W. & Stipdonk, H.L.
2016-01-01
This paper provides an international overview of the most recent estimates of the social costs of road crashes: total costs, value per casualty, and breakdown into cost components. The analysis is based on publications about the national costs of road crashes in 17 countries, ten of which are high-income countries.
The JPL Cost Risk Analysis Approach that Incorporates Engineering Realism
Harmon, Corey C.; Warfield, Keith R.; Rosenberg, Leigh S.
2006-01-01
This paper discusses the JPL Cost Engineering Group (CEG) cost risk analysis approach that accounts for all three types of cost risk. It will also describe the evaluation of historical cost data upon which this method is based. This investigation is essential in developing a method that is rooted in engineering realism and produces credible, dependable results to aid decision makers.
Robust and sensitive analysis of mouse knockout phenotypes.
Directory of Open Access Journals (Sweden)
Natasha A Karp
Full Text Available A significant challenge of in-vivo studies is the identification of phenotypes with a method that is robust and reliable. The challenge arises from practical issues that lead to experimental designs which are not ideal. Breeding issues, particularly in the presence of fertility or fecundity problems, frequently lead to data being collected in multiple batches. This problem is acute in high-throughput phenotyping programs. In addition, in a high-throughput environment, operational issues lead to controls not being measured on the same day as knockouts. We highlight how application of traditional methods, such as a Student's t-test or a 2-way ANOVA, in these situations gives flawed results and should be avoided. We explore the use of mixed models with worked examples from the Sanger Mouse Genome Project, focusing on Dual-Energy X-ray Absorptiometry data for the analysis of mouse knockout data, and compare to a reference-range approach. We show that mixed model analysis is more sensitive and less prone to artefacts, allowing the discovery of subtle quantitative phenotypes essential for correlating a gene's function to human disease. We demonstrate how a mixed model approach has the additional advantage of being able to include covariates, such as body weight, to separate the effect of genotype from these covariates. This is a particular issue in knockout studies, where body weight is itself a common phenotype; including it enhances the precision of assigning phenotypes and the subsequent selection of lines for secondary phenotyping. The use of mixed models with in-vivo studies has value not only in improving the quality and sensitivity of the data analysis but also ethically, as a method suitable for small batches which reduces the breeding burden of a colony. This will reduce the use of animals, increase throughput, and decrease cost whilst improving the quality and depth of knowledge gained.
Lamb Production Costs: Analyses of Composition and Elasticities
Directory of Open Access Journals (Sweden)
C. Raineri
2015-08-01
Full Text Available Since lamb is a commodity, producers cannot control the price of the product they sell; managing production costs is therefore a necessity. We explored the study of elasticities as a tool to support decision-making in sheep production, and aimed to investigate the composition and elasticities of lamb production costs and their influence on the performance of the activity. A representative sheep production farm, designed in a panel meeting, was the basis for calculating lamb production cost. We then performed studies of (i) cost composition and (ii) cost elasticities with respect to input prices and zootechnical indicators. Variable costs represented 64.15% of total cost, while 21.66% was represented by operational fixed costs and 14.19% by the income of the factors. As for elasticities to input prices, the opportunity cost of land was the item to which production cost was most sensitive: a 1% increase in its price would cause a 0.2666% increase in lamb cost. Meanwhile, the impact of increasing any technical indicator was significantly higher than the impact of rising input prices. A 1% increase in weight at slaughter, for example, would reduce total cost by 0.91%. The greatest obstacle to the economic viability of sheep production under the observed conditions is low technical efficiency: increased production costs are related more to deficient zootechnical indexes than to high input expenses.
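The cost elasticities reported above (e.g. a 1% land-price rise raising lamb cost 0.2666%) can be sketched generically as a small-perturbation calculation; the cost function and input shares below are toy assumptions, not the authors' farm model:

```python
# Generic point-elasticity sketch: % change in total cost per 1% change in
# one input price. The linear cost function and its shares are toy
# assumptions, not the authors' representative-farm model.
def elasticity(cost_fn, params, key, bump=0.01):
    base = cost_fn(params)
    shocked = dict(params, **{key: params[key] * (1 + bump)})
    return ((cost_fn(shocked) - base) / base) / bump

def total_cost(p):  # linear toy cost with fixed input shares
    return 0.27 * p["land"] + 0.40 * p["feed"] + 0.33 * p["labor"]

prices = {"land": 100.0, "feed": 100.0, "labor": 100.0}
e_land = elasticity(total_cost, prices, "land")  # equals land's cost share
```

For a linear cost function the elasticity to an input price is simply that input's cost share, which is why share decompositions and elasticities are reported together in studies like this one.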
Rodriguez-Martinez, Carlos E; Sossa-Briceño, Monica P; Castro-Rodriguez, Jose A
2018-05-01
Asthma educational interventions have been shown to improve several clinically and economically important outcomes. However, these interventions are costly in themselves and could lead to even higher disease costs. A cost-effectiveness threshold analysis would be helpful in determining the threshold value of the cost of educational interventions, leading to these interventions being cost-effective. The aim of the present study was to perform a cost-effectiveness threshold analysis to determine the level at which the cost of a pediatric asthma educational intervention would be cost-effective and cost-saving. A Markov-type model was developed in order to estimate costs and health outcomes of a simulated cohort of pediatric patients with persistent asthma treated over a 12-month period. Effectiveness parameters were obtained from a single uncontrolled before-and-after study performed with Colombian asthmatic children. Cost data were obtained from official databases provided by the Colombian Ministry of Health. The main outcome was the variable "quality-adjusted life-years" (QALYs). A deterministic threshold sensitivity analysis showed that the asthma educational intervention will be cost-saving to the health system if its cost is under US$513.20. Additionally, the analysis showed that the cost of the intervention would have to be below US$967.40 in order to be cost-effective. This study identified the level at which the cost of a pediatric asthma educational intervention will be cost-effective and cost-saving for the health system in Colombia. Our findings could be a useful aid for decision makers in efficiently allocating limited resources when planning asthma educational interventions for pediatric patients.
The role of sensitivity analysis in probabilistic safety assessment
International Nuclear Information System (INIS)
Hirschberg, S.; Knochenhauer, M.
1987-01-01
The paper describes several items suitable for close examination by means of sensitivity analysis when performing a level 1 PSA. Sensitivity analyses are performed with respect to: (1) boundary conditions, (2) operator actions, and (3) treatment of common cause failures (CCFs). The items of main interest are identified continuously in the course of performing a PSA, as well as by scrutinising the final results. The practical aspects of sensitivity analysis are illustrated by several applications from a recent PSA study (ASEA-ATOM BWR 75). It is concluded that sensitivity analysis leads to insights important for analysts, reviewers and decision makers. (orig./HP)
Cost Accounting and Analysis for University Libraries.
Leimkuhler, Ferdinand F.; Cooper, Michael D.
The approach to library planning studied in this report is the use of accounting models to measure library costs and implement program budgets. A cost-flow model for a university library is developed and listed with historical data from the Berkeley General Library. Various comparisons of an exploratory nature are made of the unit costs for…
Comparison of global sensitivity analysis methods – Application to fuel behavior modeling
Energy Technology Data Exchange (ETDEWEB)
Ikonen, Timo, E-mail: timo.ikonen@vtt.fi
2016-02-15
Highlights:
• Several global sensitivity analysis methods are compared.
• The methods' applicability to nuclear fuel performance simulations is assessed.
• The implications of large input uncertainties and complex models are discussed.
• Alternative strategies to perform sensitivity analyses are proposed.
Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and the complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show up as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs, these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes; more sophisticated methods are typically needed. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady-state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs; in some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.
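One family of methods compared in studies like this is variance-based (Sobol-style) indices, which attribute output variance to inputs even in non-additive models. A bare-bones brute-force sketch of a first-order index on a toy model standing in for a fuel performance code (illustrative only, not the paper's method):

```python
import random

# Bare-bones variance-based (Sobol-style) first-order index by brute-force
# double-loop Monte Carlo; the toy non-additive model f(x1, x2) = x1 + x1*x2
# stands in for a fuel performance code. Illustrative, not production code.
def first_order_index(f, which, n_outer=1000, n_inner=500, seed=1):
    rng = random.Random(seed)
    def sample():
        return [rng.uniform(0, 1), rng.uniform(0, 1)]
    # Total output variance from a plain Monte Carlo sample.
    ys = [f(*sample()) for _ in range(20_000)]
    mu = sum(ys) / len(ys)
    var_y = sum((y - mu) ** 2 for y in ys) / len(ys)
    # Variance over x_i of E[Y | x_i], each conditional mean by Monte Carlo.
    cond = []
    for _ in range(n_outer):
        xi = rng.uniform(0, 1)
        total = 0.0
        for _ in range(n_inner):
            x = sample()
            x[which] = xi
            total += f(*x)
        cond.append(total / n_inner)
    mc = sum(cond) / n_outer
    var_cond = sum((c - mc) ** 2 for c in cond) / n_outer
    return var_cond / var_y  # S_i = Var(E[Y|X_i]) / Var(Y)

S1 = first_order_index(lambda x1, x2: x1 + x1 * x2, which=0)
```

A gap between the first-order indices and 1 signals the input interactions (cross terms) that the abstract notes can dominate in fuel performance codes.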
Automated sensitivity analysis using the GRESS language
International Nuclear Information System (INIS)
Pin, F.G.; Oblow, E.M.; Wright, R.Q.
1986-04-01
An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies
Pinjari, Rahul V; Delcey, Mickaël G; Guo, Meiyuan; Odelius, Michael; Lundberg, Marcus
2016-02-15
The restricted active-space (RAS) approach can accurately simulate metal L-edge X-ray absorption spectra of first-row transition metal complexes without the use of any fitting parameters. These characteristics provide a unique capability to identify unknown chemical species and to analyze their electronic structure. To find the best balance between cost and accuracy, the sensitivity of the simulated spectra with respect to the method variables has been tested for two models, [FeCl6](3-) and [Fe(CN)6](3-). For these systems, the reference calculations give deviations, when compared with experiment, of ≤1 eV in peak positions, ≤30% for the relative intensity of major peaks, and ≤50% for minor peaks. When compared with these deviations, the simulated spectra are sensitive to the number of final states, the inclusion of dynamical correlation, and the ionization potential electron affinity shift, in addition to the selection of the active space. The spectra are less sensitive to the quality of the basis set and even a double-ζ basis gives reasonable results. The inclusion of dynamical correlation through second-order perturbation theory can be done efficiently using the state-specific formalism without correlating the core orbitals. Although these observations are not directly transferable to other systems, they can, together with a cost analysis, aid in the design of RAS models and help to extend the use of this powerful approach to a wider range of transition metal systems. © 2015 Wiley Periodicals, Inc.
A cost-effectiveness analysis of two different antimicrobial stewardship programs.
Okumura, Lucas Miyake; Riveros, Bruno Salgado; Gomes-da-Silva, Monica Maria; Veroneze, Izelandia
2016-01-01
There is a lack of formal economic analysis to assess the efficiency of antimicrobial stewardship programs. Herein, we conducted a cost-effectiveness study to assess two different strategies of antimicrobial stewardship programs. A 30-day Markov model was developed to analyze the cost-effectiveness of a Bundled Antimicrobial Stewardship Program implemented in a university hospital in Brazil. Clinical data were derived from a historical cohort that compared two different antimicrobial stewardship strategies and had 30-day mortality as the main outcome. Selected costs included workload, cost of defined daily doses, length of stay, and laboratory and imaging resources used to diagnose infections. Data were analyzed by deterministic and probabilistic sensitivity analysis to assess the model's robustness, a tornado diagram, and a cost-effectiveness acceptability curve. The Bundled Strategy was more expensive (cost difference US$ 2119.70) but more efficient (US$ 27,549.15 vs US$ 29,011.46). Deterministic and probabilistic sensitivity analyses suggested that critical variables did not alter the final incremental cost-effectiveness ratio. The Bundled Strategy had a higher probability of being cost-effective, which was endorsed by the cost-effectiveness acceptability curve. As health systems call for efficient technologies, this study concludes that the Bundled Antimicrobial Stewardship Program was more cost-effective, meaning that stewardship strategies with such characteristics would be of special interest from societal and clinical perspectives. Copyright © 2016 Elsevier Editora Ltda. All rights reserved.
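A 30-day Markov cohort comparison of the kind described reduces to repeated transition-and-cost-accrual steps plus an incremental cost-effectiveness ratio; a generic sketch with placeholder transition probabilities and daily costs (not the study's Brazilian data):

```python
# Generic two-state (alive/dead), 30-cycle Markov cohort sketch; the
# transition probabilities and daily costs are made-up placeholders,
# not the study's cost data.
def run_cohort(p_die_daily, daily_cost, days=30, cohort=1000):
    """Return (survivors, total cost) after `days` one-day cycles."""
    alive, total_cost = float(cohort), 0.0
    for _ in range(days):
        total_cost += alive * daily_cost  # cost accrues while alive
        alive *= 1 - p_die_daily
    return alive, total_cost

def icer(strategy_a, strategy_b):
    """Incremental cost per extra survivor, strategy A vs strategy B."""
    surv_a, cost_a = run_cohort(*strategy_a)
    surv_b, cost_b = run_cohort(*strategy_b)
    return (cost_a - cost_b) / (surv_a - surv_b)

# A bundled-style strategy: pricier per day, lower daily mortality.
value = icer((0.005, 950.0), (0.007, 900.0))
```

Deterministic sensitivity analysis then re-evaluates `icer` while varying one parameter at a time; probabilistic analysis samples the parameters from distributions.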
An examination of sources of sensitivity of consumer surplus estimates in travel cost models.
Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E
2015-03-15
We examine the sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative, but not statistically significantly different, estimate of consumer surplus in both Poisson and negative binomial models; it also generates broader confidence intervals. Application of truncation, the Englin correction, and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sensitivity Analysis of a Simplified Fire Dynamic Model
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt; Nielsen, Anker
2015-01-01
This paper discusses a method for performing a sensitivity analysis of parameters used in a simplified fire model for temperature estimates in the upper smoke layer during a fire. The results from the sensitivity analysis can be used when individual parameters affecting fire safety are assessed...
Francis, Tittu; Washington, Travis; Srivastava, Karan; Moutzouros, Vasilios; Makhni, Eric C; Hakeos, William
2017-11-01
Tension band wiring (TBW) and locked plating are common treatment options for Mayo IIA olecranon fractures. Clinical trials have shown excellent functional outcomes with both techniques. Although TBW implants are significantly less expensive than a locked olecranon plate, TBW often requires an additional operation for implant removal. To choose the most cost-effective treatment strategy, surgeons must understand how implant costs and return to the operating room influence the most cost-effective strategy. This cost-effectiveness analysis explored the optimal treatment strategy using decision analysis tools. An expected-value decision tree was constructed to estimate costs based on the 2 implant choices. Values for critical variables, such as the implant removal rate, were obtained from the literature. A Monte Carlo simulation consisting of 100,000 trials was used to incorporate variability in medical costs and implant removal rates. Sensitivity analysis and strategy tables were used to show how different variables influence the most cost-effective strategy. TBW was the most cost-effective strategy, with a cost savings of approximately $1300. TBW was also the dominant strategy, being the most cost-effective solution in 63% of the Monte Carlo trials. Sensitivity analysis identified implant costs for plate fixation and surgical costs for implant removal as the parameters that most influence the cost-effective strategy. Strategy tables showed the most cost-effective solution as 2 parameters vary simultaneously. TBW is the most cost-effective strategy in treating Mayo IIA olecranon fractures despite a higher rate of return to the operating room. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
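An expected-value comparison with Monte Carlo over uncertain inputs, as described above, can be sketched as follows; all cost ranges and removal rates are illustrative assumptions, not the study's actual data:

```python
import random

# Sketch of an expected-value comparison with Monte Carlo over uncertain
# inputs (implant cost, removal rate, removal-surgery cost); all ranges
# are illustrative assumptions, not the study's cost data.
def cost_tbw(rng):
    implant = rng.uniform(50, 200)            # inexpensive TBW implant
    removal_rate = rng.uniform(0.4, 0.7)      # frequent return to the OR
    removal_surgery = rng.uniform(2000, 4000)
    return implant + removal_rate * removal_surgery

def cost_plate(rng):
    implant = rng.uniform(1200, 2000)         # locked olecranon plate
    removal_rate = rng.uniform(0.1, 0.3)      # less frequent removal
    removal_surgery = rng.uniform(2000, 4000)
    return implant + removal_rate * removal_surgery

rng = random.Random(42)
trials = [(cost_tbw(rng), cost_plate(rng)) for _ in range(100_000)]
tbw_wins = sum(t < p for t, p in trials) / len(trials)
mean_saving = sum(p - t for t, p in trials) / len(trials)
```

The fraction of trials in which one arm is cheaper plays the role of the study's "dominant in 63% of Monte Carlo trials" statistic; a tornado-style sensitivity analysis would then vary one input range at a time.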
A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.
Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P
2018-04-01
Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and for execution of the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-stage approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
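The first-stage Morris screen works by averaging absolute one-at-a-time "elementary effects" per input; a bare-bones sketch in the spirit of the Morris mu* statistic (a simplification of the full trajectory design, with a toy model standing in for a GastroPlus run):

```python
import random

# Bare-bones one-at-a-time elementary-effects screen in the spirit of the
# Morris mu* statistic (a simplification of the full trajectory design);
# the toy model stands in for an expensive GastroPlus simulation.
def morris_mu_star(f, k, r=50, delta=0.1, seed=0):
    rng = random.Random(seed)
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # Random base point in [0, 1-delta]^k so every bump stays in range.
        x = [rng.uniform(0, 1 - delta) for _ in range(k)]
        y0 = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs((f(xp) - y0) / delta))
    return [sum(e) / r for e in effects]  # mu* per input

# Toy model: x0 strong, x1 weak and nonlinear, x2 inert.
mu = morris_mu_star(lambda x: 5 * x[0] + 0.5 * x[1] ** 2, k=3)
```

Inputs with negligible mu* are dropped before the far more expensive second-stage Sobol analysis, which is the cost saving the abstract describes.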
Cost-effectiveness analysis of infant feeding strategies to prevent ...
African Journals Online (AJOL)
Changing feeding practices is beneficial, depending on context. Breastfeeding is dominant (less costly, more effective) in rural settings, whilst formula feeding is a dominant strategy in urban settings. Cost-effectiveness was most sensitive to proportion of women on lifelong antiretroviral therapy (ART) and infant mortality rate ...
Microsoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation
Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...
40 CFR 30.45 - Cost and price analysis.
2010-07-01
40 CFR Protection of Environment: § 30.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
38 CFR 49.45 - Cost and price analysis.
2010-07-01
38 CFR Pensions, Bonuses, and Veterans' Relief: § 49.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
14 CFR 1260.145 - Cost and price analysis.
2010-01-01
14 CFR Aeronautics and Space: § 1260.145 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
32 CFR 32.45 - Cost and price analysis.
2010-07-01
32 CFR National Defense: § 32.45 Cost and price analysis. Some form of cost or price analysis shall be made and documented in the procurement files in connection with every procurement action. Price analysis may be accomplished in various ways...
Cost Analysis of NEDU’s Helium Reclaimer.
1981-09-01
[Report-documentation-page and table residue; the abstract itself is unrecoverable. Recoverable details: a survey-type report presenting a present-worth cost analysis of NEDU's helium reclaimer, comparing the costs of reclaimed and new helium, including periodic maintenance and electric power; cites Charles T. Horngren, Introduction to Management Accounting, 4th ed.]
Global sensitivity analysis using low-rank tensor approximations
International Nuclear Information System (INIS)
Konakli, Katerina; Sudret, Bruno
2016-01-01
In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
International Nuclear Information System (INIS)
Shay, M.R.
1990-04-01
The System Engineering Cost Analysis (SECA) capability has been developed by the System Integration Branch of the US Department of Energy's Office of Civilian Radioactive Waste Management for use in assessing the cost performance of alternative waste management system configurations. The SECA capability is designed to provide rapid cost estimates of the waste management system for a given operational scenario and to permit aggregate or detailed cost comparisons for alternative waste system configurations. This capability may be used as an integral part of the System Integration Modeling System (SIMS) or, with appropriate input defining a scenario, as a separate cost analysis model
Cost analysis of energy storage systems for electric utility applications
Energy Technology Data Exchange (ETDEWEB)
Akhil, A. [Sandia National Lab., Albuquerque, NM (United States); Swaminathan, S.; Sen, R.K. [R.K. Sen & Associates, Inc., Bethesda, MD (United States)
1997-02-01
Under the sponsorship of the Department of Energy, Office of Utility Technologies, the Energy Storage System Analysis and Development Department at Sandia National Laboratories (SNL) conducted a cost analysis of energy storage systems for electric utility applications. The scope of the study included the analysis of costs for existing and planned battery, SMES, and flywheel energy storage systems. The analysis also identified the potential for cost reduction of key components.
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surface-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results show not only that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES uses less running time.
Rapid, sensitive and cost effective method for isolation of viral DNA from faecal samples of dogs
Directory of Open Access Journals (Sweden)
Savi.
2010-06-01
Full Text Available A simple method for viral DNA extraction using chelex resin was developed. The method is eco-friendly and cost effective compared to other methods, such as the phenol-chloroform method, which use health-hazardous organic reagents. Further, a polymerase chain reaction (PCR)-based detection of canine parvovirus (CPV) using primers from a conserved region of the VP2 gene was developed. To increase the sensitivity and specificity of the reaction, a nested PCR was designed. The PCR reaction was optimized to amplify a 747 bp product of the VP2 gene. The assay can be completed in a few hours and does not need hazardous chemicals. Thus, sample preparation using chelating resin along with nested PCR seems to be a sensitive, specific and practical method for the detection of CPV in diarrhoeal faecal samples. [Vet. World 2010; 3(3): 105-106]
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1986-01-01
An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies
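The computer-calculus idea behind GRESS, augmenting arithmetic so that derivatives propagate alongside values, can be illustrated in miniature with forward-mode dual numbers. This is a toy Python analogue, not the FORTRAN source transformation the report describes:

```python
class Dual:
    """Dual number v + d*eps (with eps**2 = 0): arithmetic on Duals
    carries the exact derivative d alongside the value v."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sensitivity(f, x):
    """df/dx at x: evaluate f on a seeded Dual and read off the derivative."""
    return f(Dual(x, 1.0)).der

d = sensitivity(lambda x: 3 * x * x + 2 * x + 1, 2.0)  # d/dx = 6x + 2 -> 14.0
```

GRESS achieves the same effect at the compiler level, so existing model code yields the derivatives needed for direct and adjoint sensitivity equations without manual rework.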
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
McGuffin, M; Merino, T; Keller, B; Pignol, J-P
2017-03-01
Standard treatment for early breast cancer includes whole breast irradiation (WBI) after breast-conserving surgery. Recently, accelerated partial breast irradiation (APBI) has been proposed for well-selected patients. A cost and cost-effectiveness analysis was carried out comparing WBI with two APBI techniques. An activity-based costing method was used to determine the treatment cost from a societal perspective of WBI, high dose rate brachytherapy (HDR) and permanent breast seed implants (PBSI). A Markov model comparing the three techniques was developed with downstream costs, utilities and probabilities adapted from the literature. Sensitivity analyses were carried out for a wide range of variables, including treatment costs, patient costs, utilities and probability of developing recurrences. Overall, HDR was the most expensive ($14 400), followed by PBSI ($8700), with WBI proving the least expensive ($6200). The least costly method to the health care system was WBI, whereas PBSI and HDR were less costly for the patient. Under cost-effectiveness analyses, downstream costs added about $10 000 to the total societal cost of the treatment. As the outcomes are very similar between techniques, WBI dominated under cost-effectiveness analyses. WBI was found to be the most cost-effective radiotherapy technique for early breast cancer. However, both APBI techniques were less costly to the patient. Although innovation may increase costs for the health care system it can provide cost savings for the patient in addition to convenience. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)
2015-08-15
Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach overcomes the results of a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach of Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs) to approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. We propose a novel Sensitivity Analysis (SA) method for keeping the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables for identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an Advanced Pressurized reactor AP1000.
An Analysis of the IOM Cost Study
Schwartz, Michael A.
1976-01-01
During the 1972-73 academic year, the National Institute of Medicine (IOM) undertook a study of the cost of education of those health professionals supported through federal capitation grants. The methodology of the study is described and the patterns of costs of pharmacy education are compared with those in another profession. (LBH)
Department of the Army Cost Analysis Manual
2001-05-01
[Table-of-contents and fragment residue; the abstract itself is unrecoverable. Recoverable details: the manual's Section I covers the Automated Cost Estimating Integrated Tools (ACEIT); the Assistant Secretary of the Army (Financial Management & Comptroller) endorsed the ACEIT model, which is widely used to prepare POEs and CCAs; CRB IPT results (in ACEIT) form the basis for information contained in the CAB, with remaining unresolved issues from the IPT process raised subsequently.]
Cost Risk Analysis Based on Perception of the Engineering Process
Dean, Edwin B.; Wood, Darrell A.; Moore, Arlene A.; Bogart, Edward H.
1986-01-01
In most cost estimating applications at the NASA Langley Research Center (LaRC), it is desirable to present predicted cost as a range of possible costs rather than a single predicted cost. A cost risk analysis generates a range of cost for a project and assigns a probability level to each cost value in the range. Constructing a cost risk curve requires a good estimate of the expected cost of a project. It must also include a good estimate of the expected variance of the cost. Many cost risk analyses are based upon an expert's knowledge of the cost of similar projects in the past. In a common scenario, a manager or engineer, asked to estimate the cost of a project in his area of expertise, will gather historical cost data from a similar completed project. The cost of the completed project is adjusted using the perceived technical and economic differences between the two projects. This admits errors from at least three sources. The historical cost data may be in error by some unknown amount. The manager's evaluation of the new project and its similarity to the old project may be in error. The factors used to adjust the cost of the old project may not correctly reflect the differences. Some risk analyses are based on untested hypotheses about the form of the statistical distribution that underlies the distribution of possible cost. The usual problem is not just to come up with an estimate of the cost of a project, but to predict the range of values into which the cost may fall and with what level of confidence the prediction is made. Risk analysis techniques that assume the shape of the underlying cost distribution and derive the risk curve from a single estimate plus and minus some amount usually fail to take into account the actual magnitude of the uncertainty in cost due to technical factors in the project itself. This paper addresses a cost risk method that is based on parametric estimates of the technical factors involved in the project being costed. The engineering
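A parametric cost risk curve of the kind discussed above can be sketched by sampling hypothetical technical cost drivers and reading total cost off the resulting distribution at chosen confidence levels. All WBS elements and dollar figures below are invented:

```python
import random

def cost_risk_curve(n=50_000, seed=7):
    """Monte Carlo cost risk curve: sample hypothetical technical cost
    drivers (triangular min/max/mode estimates, in $M), sum to total
    cost, and report the cost at given confidence levels rather than a
    single point estimate."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        structure = rng.triangular(8, 15, 10)    # illustrative WBS elements
        avionics = rng.triangular(5, 12, 7)
        integration = rng.triangular(2, 6, 3)
        totals.append(structure + avionics + integration)
    totals.sort()
    return {p: totals[int(p / 100 * n) - 1] for p in (50, 70, 90)}

curve = cost_risk_curve()
# The 90%-confidence cost exceeds the median: the risk curve widens a
# point estimate into a range with probability levels attached.
```

Unlike a fixed plus/minus band around a single estimate, the spread here follows directly from the stated uncertainty in each technical factor.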
Sensitivity Analysis Based on Markovian Integration by Parts Formula
Directory of Open Access Journals (Sweden)
Yongsheng Hang
2017-10-01
Full Text Available Sensitivity analysis is widely applied in financial risk management and engineering; it describes the variation in model output brought about by changes in parameters. Since the integration-by-parts technique for Markov chains has been well developed in recent years, in this paper we apply it to the computation of sensitivity and show closed-form expressions for two commonly-used time-continuous Markovian models. By comparison, we conclude that our approach outperforms the existing technique of computing sensitivity on Markovian models.
Sensitivity analysis on uncertainty variables affecting the NPP's LUEC with probabilistic approach
International Nuclear Information System (INIS)
Nuryanti; Akhmad Hidayatno; Erlinda Muslim
2013-01-01
One thing that is quite crucial to review prior to any investment decision on a nuclear power plant (NPP) project is the calculation of project economics, including the Levelized Unit Electricity Cost (LUEC). Infrastructure projects such as NPPs are vulnerable to a number of uncertainty variables. Information on the uncertainty variables to which LUEC is most sensitive is necessary so that cost overruns can be avoided. Therefore, this study aimed to perform a sensitivity analysis on the variables that affect LUEC with a probabilistic approach. The analysis was done using a Monte Carlo technique that simulates the relationship between the uncertainty variables and their impact on LUEC. The sensitivity analysis results show significant changes in the LUEC value of the AP1000 and OPR due to the sensitivity of investment cost and capacity factors, while LUEC changes due to the sensitivity of the U₃O₈ price appear less significant. (author)
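A one-at-a-time "swing" comparison illustrates why LUEC reacts strongly to investment cost and capacity factor but only weakly to fuel price. The simplified single-year LUEC formula and every number below are illustrative assumptions, not the study's inputs:

```python
def luec(capex_per_kw, fcr, annual_om, fuel_per_mwh, mw, cf):
    """Simplified single-year LUEC in $/MWh: annualized investment plus
    fixed O&M, spread over annual generation, plus per-MWh fuel cost."""
    mwh = mw * cf * 8760.0
    return (capex_per_kw * mw * 1000.0 * fcr + annual_om) / mwh + fuel_per_mwh

def swing(var, lo, hi, base):
    """LUEC swing when one uncertain input moves across its range while
    the others stay at base values (one-at-a-time, tornado-style)."""
    return abs(luec(**{**base, var: hi}) - luec(**{**base, var: lo}))

# Hypothetical large-PWR base case; every figure is illustrative only.
base = dict(capex_per_kw=4500, fcr=0.08, annual_om=9e7,
            fuel_per_mwh=7, mw=1100, cf=0.9)
swings = {v: swing(v, lo, hi, base)
          for v, lo, hi in [("capex_per_kw", 3500, 6000),
                            ("cf", 0.75, 0.95),
                            ("fuel_per_mwh", 5, 9)]}
# Investment cost and capacity factor dominate; fuel price barely moves LUEC,
# matching the qualitative pattern the abstract reports.
```

A probabilistic version replaces the one-at-a-time sweeps with joint Monte Carlo sampling over all three inputs.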
International Nuclear Information System (INIS)
Mehta, Minesh; Noyes, William; Craig, Bruce; Lamond, John; Auchter, Richard; French, Molly; Johnson, Mark; Levin, Allan; Badie, Behnam; Robbins, Ian; Kinsella, Timothy
1997-01-01
determined. To calculate the societal or national impact of these practices, the proportion of patients potentially eligible for aggressive management was estimated and the financial impact was determined using various utilization ratios for radiosurgery and surgery. Results: Both resection and radiosurgery yielded superior survival and functional independence, compared to whole brain radiotherapy alone, with minor differences in outcome between the two modalities; resection resulted in a 1.8-fold increase in cost, compared to radiosurgery. The latter modality yielded superior cost outcomes on all measures, even when a sensitivity analysis of up to 50% was performed. A reversal estimate indicated that in order for surgery to yield equal cost effectiveness, its cost would have to decrease by 48% or median survival would have to improve by 108%. The average cost per week of survival was $310 for radiotherapy, $524 for resection plus radiation, and $270 for radiosurgery plus radiation. Conclusions: For selected patients, aggressive strategies such as resection or radiosurgery are warranted, as they result in improved median survival and functional independence. Radiosurgery appears to be the more cost-effective procedure
The role of sensitivity analysis in assessing uncertainty
International Nuclear Information System (INIS)
Crick, M.J.; Hill, M.D.
1987-01-01
Outside the specialist world of those carrying out performance assessments, considerable confusion has arisen about the meanings of sensitivity analysis and uncertainty analysis. In this paper we attempt to reduce this confusion. We then go on to review approaches to sensitivity analysis within the context of assessing uncertainty, and to outline the types of test available to identify sensitive parameters, together with their advantages and disadvantages. The views expressed in this paper are those of the authors; they have not been formally endorsed by the National Radiological Protection Board and should not be interpreted as Board advice
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Analysis of Sensitivity Experiments - An Expanded Primer
2017-03-08
conducted with this purpose in mind. Due diligence must be paid to the structure of the dosage levels and to the number of trials. The chosen data...analysis. System reliability is of paramount importance for protecting both the investment of funding and human life. Failing to accurately estimate
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
Automating sensitivity analysis of computer models using computer calculus
International Nuclear Information System (INIS)
Oblow, E.M.; Pin, F.G.
1985-01-01
An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
Cost Analysis of Noninvasive Helmet Ventilation Compared with Use of Noninvasive Face Mask in ARDS
Directory of Open Access Journals (Sweden)
Kwadwo Kyeremanteng
2018-01-01
Full Text Available Intensive care unit (ICU) costs have doubled since 2000, totalling 108 billion dollars per year. Acute respiratory distress syndrome (ARDS) has a prevalence of 10.4% and a 28-day mortality of 34.8%. Noninvasive ventilation (NIV) is used in up to 30% of cases. A recent randomized controlled trial by Patel et al. (2016) showed lower intubation rates and 90-day mortality when comparing helmet to face mask NIV in ARDS. The population in the Patel et al. trial was used for cost analysis in this study. Projections of cost savings showed a decrease in ICU costs by $2527 and hospital costs by $3103 per patient, along with a 43.3% absolute reduction in intubation rates. Sensitivity analysis showed consistent cost reductions. Projected annual cost savings, assuming the current prevalence of ARDS, were $237,538 in ICU costs and $291,682 in hospital costs. At a national level, using the yearly incidence of ARDS cases in American ICUs, this represents $449 million in savings. Helmet NIV, compared to face mask NIV, in nonintubated patients with ARDS, reduces ICU and hospital direct-variable costs along with intubation rates, LOS, and mortality. A large-scale cost-effectiveness analysis is needed to validate the findings.
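The projected annual totals quoted above can be cross-checked against the per-patient savings; both are consistent with an implied caseload of about 94 NIV-treated ARDS patients per year (the caseload is inferred here, not stated in the abstract):

```python
# Per-patient savings reported in the projections above.
icu_saving_per_patient = 2527
hospital_saving_per_patient = 3103

# Caseload implied by the quoted annual ICU total of $237,538:
implied_cases = 237538 / icu_saving_per_patient          # 94.0 patients/year

# The same 94 patients reproduce the quoted hospital total of $291,682.
annual_icu_savings = icu_saving_per_patient * 94
annual_hospital_savings = hospital_saving_per_patient * 94
```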
Gandjour, Afschin; Müller, Dirk
2014-10-01
One of the major ethical concerns regarding cost-effectiveness analysis in health care has been the inclusion of life-extension costs ("it is cheaper to let people die"). For this reason, many analysts have opted to rule out life-extension costs from the analysis. However, surprisingly little has been written in the health economics literature regarding this ethical concern and the resulting practice. The purpose of this work was to present a framework and potential solution for ethical objections against life-extension costs. This work found three levels of ethical concern: (i) with respect to all life-extension costs (disease-related and -unrelated); (ii) with respect to disease-unrelated costs only; and (iii) regarding disease-unrelated costs plus disease-related costs not influenced by the intervention. Excluding all life-extension costs for ethical reasons would require-for reasons of consistency-a simultaneous exclusion of savings from reducing morbidity. At the other extreme, excluding only disease-unrelated life-extension costs for ethical reasons would require-again for reasons of consistency-the exclusion of health gains due to treatment of unrelated diseases. Therefore, addressing ethical concerns regarding the inclusion of life-extension costs necessitates fundamental changes in the calculation of cost effectiveness.
Parker, David; Belaud-Rotureau, Marc-Antoine
2014-01-01
Break-apart fluorescence in situ hybridization (FISH) is the gold standard test for anaplastic lymphoma kinase (ALK) gene rearrangement. However, this methodology often is assumed to be expensive and potentially cost-prohibitive given the low prevalence of ALK-positive non-small cell lung cancer (NSCLC) cases. To more accurately estimate the cost of ALK testing by FISH, we developed a micro-cost model that accounts for all cost elements of the assay, including laboratory reagents, supplies, capital equipment, technical and pathologist labor, and the acquisition cost of the commercial test and associated reagent kits and controls. By applying a set of real-world base-case parameter values, we determined that the cost of a single ALK break-apart FISH test result is $278.01. Sensitivity analysis on the parameters of batch size, testing efficiency, and the cost of the commercial diagnostic testing products revealed that the cost per result is highly sensitive to batch size, but much less so to efficiency or product cost. This implies that ALK testing by FISH will be most cost effective when performed in high-volume centers. Our results indicate that testing cost may not be the primary determinant of crizotinib (Xalkori®) treatment cost effectiveness, and suggest that testing cost is an insufficient reason to limit the use of FISH testing for ALK rearrangement. PMID:25520569
Ding, Yao; Thompson, John D; Kobrynski, Lisa; Ojodu, Jelili; Zarbalian, Guisou; Grosse, Scott D
2016-05-01
To evaluate the expected cost-effectiveness and net benefit of the recent implementation of newborn screening (NBS) for severe combined immunodeficiency (SCID) in Washington State. We constructed a decision analysis model to estimate the costs and benefits of NBS in an annual birth cohort of 86 600 infants based on projections of avoided infant deaths. Point estimates and ranges for input variables, including the birth prevalence of SCID, proportion detected asymptomatically without screening through family history, screening test characteristics, survival rates, and costs of screening, diagnosis, and treatment were derived from published estimates, expert opinion, and the Washington NBS program. We estimated treatment costs stratified by age of identification and SCID type (with or without adenosine deaminase deficiency). Economic benefit was estimated using values of $4.2 and $9.0 million per death averted. We performed sensitivity analyses to evaluate the influence of key variables on the incremental cost-effectiveness ratio (ICER) of net direct cost per life-year saved. Our model predicts an additional 1.19 newborn infants with SCID detected preclinically through screening, in addition to those who would have been detected early through family history, and 0.40 deaths averted annually. Our base-case model suggests an ICER of $35 311 per life-year saved, and a benefit-cost ratio of either 5.31 or 2.71. Sensitivity analyses found ICER values <$100 000 and positive net benefit for plausible assumptions on all variables. Our model suggests that NBS for SCID in Washington is likely to be cost-effective and to show positive net economic benefit. Published by Elsevier Inc.
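The two summary measures this model reports can be written generically as follows; the input values below are hypothetical placeholders, not the Washington State figures:

```python
# Generic forms of the summary measures used in the SCID screening model:
# the ICER (net direct cost per life-year saved) and the benefit-cost
# ratio. All numbers here are hypothetical.
def icer(net_direct_cost, life_years_saved):
    """Incremental cost-effectiveness ratio, $ per life-year saved."""
    return net_direct_cost / life_years_saved

def benefit_cost_ratio(deaths_averted, value_per_death_averted, net_cost):
    """Monetized benefit of averted deaths divided by net program cost."""
    return deaths_averted * value_per_death_averted / net_cost

example_icer = icer(net_direct_cost=1_000_000, life_years_saved=25.0)  # 40,000.0
example_bcr = benefit_cost_ratio(0.40, 9_000_000, 1_200_000)           # 3.0
```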
Global and Local Sensitivity Analysis Methods for a Physical System
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how variations in the inputs of a mathematical model influence the variability of its output. In this paper, we review the principles of global and local sensitivity analyses of a complex black-box system. A simulated application case is given at the end of the paper to compare both approaches.
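The local/global distinction the paper reviews can be shown on a toy model; the function, point, and sample sizes below are ours, for illustration only:

```python
import random

# Toy contrast between local and global sensitivity analysis on
# f(x1, x2) = x1 + x2**2, inputs uniform on [0, 1].
def f(x1, x2):
    return x1 + x2 ** 2

def local_sensitivity(x1, x2, h=1e-6):
    """Local view: central-difference partial derivatives at one point."""
    d1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    d2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return d1, d2

def sobol_first_order(n=100_000, seed=1):
    """Global view: first-order Sobol indices via the pick-freeze scheme."""
    rng = random.Random(seed)
    a = [(rng.random(), rng.random()) for _ in range(n)]
    b = [(rng.random(), rng.random()) for _ in range(n)]
    ya = [f(*p) for p in a]
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    indices = []
    for i in (0, 1):
        # keep coordinate i from sample A, take the other from sample B
        yi = [f(pa[0] if i == 0 else pb[0], pa[1] if i == 1 else pb[1])
              for pa, pb in zip(a, b)]
        cov = sum((u - mean) * (v - mean) for u, v in zip(ya, yi)) / n
        indices.append(cov / var)
    return indices

d1, d2 = local_sensitivity(0.5, 0.5)  # both 1.0 at this nominal point
s1, s2 = sobol_first_order()          # globally, x2 explains more variance
```

The contrast is the point of the example: at the nominal point (0.5, 0.5) the local derivatives are equal, yet the global variance shares differ (analytically about 0.48 vs 0.52), so the two approaches can rank inputs differently.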
Adjoint sensitivity analysis of high frequency structures with Matlab
Bakr, Mohamed; Demir, Veysel
2017-01-01
This book covers the theory of adjoint sensitivity analysis and uses the popular FDTD (finite-difference time-domain) method to show how wideband sensitivities can be efficiently estimated for different types of materials and structures. It includes a variety of MATLAB® examples to help readers absorb the content more easily.
Analysis of an inventory model for both linearly decreasing demand and holding cost
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study analyzes an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The model focuses on commodities with linearly decreasing demand and no shortages. The holding cost does not remain uniform over time because of variation in the time value of money; here we consider a holding cost that decreases with time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is included.
Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model
Directory of Open Access Journals (Sweden)
Hong Xue
2018-01-01
High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and a questionnaire survey was then employed to collect professionals' views on their effects. After data collection, exploratory factor analysis was adopted to identify the latent factors. Seven latent factors were identified: "Management Index", "Construction Dissipation Index", "Productivity Index", "Design Efficiency Index", "Transport Dissipation Index", "Material increment Index" and "Depreciation amortization Index". With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication, and that material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardized design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication, and innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost, and targeted strategies can be adopted for different prefabrication projects. The findings optimize capital cost and improve cost performance by providing an evaluation and optimization model for managers.
Cost Analysis in Shoulder Arthroplasty Surgery
Directory of Open Access Journals (Sweden)
Matthew J. Teusink
2012-01-01
Cost in shoulder surgery has taken on a new focus with passage of the Patient Protection and Affordable Care Act. As part of this law, there is a provision for Accountable Care Organizations (ACOs) and the bundled payment initiative. In this model, one entity would receive a single payment for an episode of care and distribute funds to all other parties involved. Given its reproducible nature, shoulder arthroplasty is ideally situated to become a model for an episode of care. Currently, there is little research into cost in shoulder arthroplasty surgery. The current analyses do not provide surgeons with a method for determining the cost and outcomes of their interventions, which is necessary for the success of bundled payment. Surgeons are ideally positioned to become leaders in ACOs, but for them to do so, a methodology must be developed by which accurate costs and outcomes can be determined for the episode of care.
Brain Network Analysis: Separating Cost from Topology Using Cost-Integration
Ginestet, Cedric E.; Nichols, Thomas E.; Bullmore, Ed T.; Simmons, Andrew
2011-01-01
A statistically principled way of conducting brain network analysis is still lacking. Comparison of different populations of brain networks is hard because topology is inherently dependent on wiring cost, where cost is defined as the number of edges in an unweighted graph. In this paper, we evaluate the benefits and limitations associated with using cost-integrated topological metrics. Our focus is on comparing populations of weighted undirected graphs that differ in mean association weight, using global efficiency. Our key result shows that integrating over cost is equivalent to controlling for any monotonic transformation of the weight set of a weighted graph. That is, when integrating over cost, we eliminate the differences in topology that may be due to a monotonic transformation of the weight set. Our result holds for any unweighted topological measure, and for any choice of distribution over cost levels. Cost-integration is therefore helpful in disentangling differences in cost from differences in topology. By contrast, we show that the use of the weighted version of a topological metric is generally not a valid approach to this problem. Indeed, we prove that, under weak conditions, the use of the weighted version of global efficiency is equivalent to simply comparing weighted costs. Thus, we recommend the reporting of (i) differences in weighted costs and (ii) differences in cost-integrated topological measures with respect to different distributions over the cost domain. We demonstrate the application of these techniques in a re-analysis of an fMRI working memory task. We also provide a Monte Carlo method for approximating cost-integrated topological measures. Finally, we discuss the limitations of integrating topology over cost, which may pose problems when some weights are zero, when multiplicities exist in the ranks of the weights, and when one expects subtle cost-dependent topological differences, which could be masked by cost-integration. PMID:21829437
Cost-Effectiveness Analysis of Second-Line Chemotherapy Agents for Advanced Gastric Cancer.
Lam, Simon W; Wai, Maya; Lau, Jessica E; McNamara, Michael; Earl, Marc; Udeh, Belinda
2017-01-01
Gastric cancer is the fifth most common malignancy and second leading cause of cancer-related mortality. Chemotherapy options for patients who fail first-line treatment are limited. Thus, the objective of this study was to assess the cost-effectiveness of second-line treatment options for patients with advanced or metastatic gastric cancer. Cost-effectiveness analysis using a Markov model to compare the cost-effectiveness of six possible second-line treatment options for patients with advanced gastric cancer who have failed previous chemotherapy: irinotecan, docetaxel, paclitaxel, ramucirumab, paclitaxel plus ramucirumab, and palliative care. The model was performed from a third-party payer's perspective to compare lifetime costs and health benefits associated with studied second-line therapies. Costs included only relevant direct medical costs. The model assumed chemotherapy cycle lengths of 30 days and a maximum number of 24 cycles. Systematic review of literature was performed to identify clinical data sources and utility and cost data. Quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs) were calculated. The primary outcome measure for this analysis was the ICER between different therapies, where the incremental cost was divided by the number of QALYs saved. The ICER was compared with a willingness-to-pay (WTP) threshold that was set at $50,000/QALY gained, and an exploratory analysis using $160,000/QALY gained was also used. The model's robustness was tested by using 1-way sensitivity analyses and a 10,000-iteration Monte Carlo probabilistic sensitivity analysis (PSA). Irinotecan had the lowest lifetime cost and was associated with a QALY gain of 0.35 year. Docetaxel, ramucirumab alone, and palliative care were dominated strategies. Paclitaxel and the combination of paclitaxel plus ramucirumab led to higher QALYs gained, at an incremental cost of $86,815 and $1,056,125 per QALY gained, respectively.
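The "dominated strategies" finding above rests on a standard screen that precedes ICER calculation; a sketch with hypothetical strategies and numbers (not the study's data):

```python
# A strategy is strongly dominated when another strategy is no more
# costly and no less effective, and strictly better on at least one of
# the two. All strategies and figures below are hypothetical.
def dominated(strategies):
    """strategies: dict name -> (cost, qalys); returns dominated names."""
    out = set()
    for a, (cost_a, q_a) in strategies.items():
        for b, (cost_b, q_b) in strategies.items():
            if (a != b and cost_b <= cost_a and q_b >= q_a
                    and (cost_b < cost_a or q_b > q_a)):
                out.add(a)
    return out

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost per QALY gained versus a reference strategy."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

example = {"A": (100_000, 1.00), "B": (90_000, 1.20), "C": (150_000, 1.10)}
ruled_out = dominated(example)  # B is cheaper and more effective than A and C
```

Only non-dominated strategies are then compared pairwise via `icer` against the willingness-to-pay threshold.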
Jiang, Jiewei; Liu, Xiyang; Zhang, Kai; Long, Erping; Wang, Liming; Li, Wangting; Liu, Lin; Wang, Shuai; Zhu, Mingmin; Cui, Jiangtao; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Wang, Jinghui; Lin, Haotian
2017-11-21
Ocular images play an essential role in ophthalmological diagnoses. Having an imbalanced dataset is an inevitable issue in automated ocular diseases diagnosis; the scarcity of positive samples always tends to result in the misdiagnosis of severe patients during the classification task. Exploring an effective computer-aided diagnostic method to deal with imbalanced ophthalmological dataset is crucial. In this paper, we develop an effective cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases using retro-illumination images. First, the regions of interest (crystalline lens) are automatically identified via twice-applied Canny detection and Hough transformation. Then, the localized zones are fed into the CS-ResCNN to extract high-level features for subsequent use in automatic diagnosis. Second, the impacts of cost factors on the CS-ResCNN are further analyzed using a grid-search procedure to verify that our proposed system is robust and efficient. Qualitative analyses and quantitative experimental results demonstrate that our proposed method outperforms other conventional approaches and offers exceptional mean accuracy (92.24%), specificity (93.19%), sensitivity (89.66%) and AUC (97.11%) results. Moreover, the sensitivity of the CS-ResCNN is enhanced by over 13.6% compared to the native CNN method. Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed and deployed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical application.
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
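The volume variance the article recommends reporting is simple arithmetic; a sketch with hypothetical figures:

```python
# Illustrative fixed-overhead volume variance of the kind recommended
# for routine internal reports; all figures are hypothetical.
def volume_variance(budgeted_volume, actual_volume, fixed_overhead_rate):
    """Negative (unfavorable) when actual volume falls short of budget,
    leaving fixed overhead under-absorbed."""
    return (actual_volume - budgeted_volume) * fixed_overhead_rate

variance = volume_variance(budgeted_volume=10_000,
                           actual_volume=9_200,
                           fixed_overhead_rate=12.50)
# 800 units short at $12.50/unit -> $10,000 of unrecovered overhead
```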
A highly sensitive, low-cost, wearable pressure sensor based on conductive hydrogel spheres
Tai, Yanlong
2015-01-01
Wearable pressure sensing solutions have a promising future for practical applications in health monitoring and human/machine interfaces. Here, a highly sensitive, low-cost, wearable pressure sensor based on conductive single-walled carbon nanotube (SWCNT)/alginate hydrogel spheres is reported. Conductive and piezoresistive spheres are embedded between conductive electrodes (indium tin oxide-coated polyethylene terephthalate films) and subjected to environmental pressure. The detection mechanism is based on the piezoresistivity of the SWCNT/alginate conductive spheres and on the sphere-electrode contact. Step by step, we optimized the design parameters to maximize the sensitivity of the sensor. The optimized hydrogel sensor exhibited a satisfactory sensitivity (0.176 ΔR/R0 per kPa) and a low detection limit (10 Pa). Moreover, a brief response time (a few milliseconds) and successful repeatability were also demonstrated. Finally, the efficiency of this strategy was verified through a series of practical tests such as monitoring human wrist pulse, detecting throat muscle motion or identifying the location and distribution of an external pressure using an array sensor (4 × 4). © 2015 The Royal Society of Chemistry.
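A sensitivity quoted as ΔR/R0 per kPa is obtained from resistance readings at zero and applied pressure; the readings below are hypothetical, chosen to reproduce the paper's 0.176 kPa⁻¹ figure:

```python
# Piezoresistive sensitivity as the fractional resistance change per
# unit pressure. The example readings are hypothetical.
def pressure_sensitivity(r_unloaded, r_loaded, pressure_kpa):
    """(delta-R / R0) per kPa of applied pressure."""
    return ((r_loaded - r_unloaded) / r_unloaded) / pressure_kpa

s = pressure_sensitivity(r_unloaded=1000.0, r_loaded=1176.0, pressure_kpa=1.0)
```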
Sensitivity study of CFD turbulent models for natural convection analysis
International Nuclear Information System (INIS)
Yu sun, Park
2007-01-01
The buoyancy-driven convective flow fields are steady circulatory flows set up between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications; exploiting natural convection can reduce costs and effort remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. The commercial CFD code FLUENT was used, and various turbulence models were applied to the turbulent flow. Results from the CFD models are compared with each other from the viewpoints of grid resolution and flow characteristics. It has been shown that: -) obtaining the general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results from grid resolutions finer than a grid with a given y+ value, where y+ is defined as y+ = ρ*u*y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ respectively the fluid density and viscosity; -) the K-ε models show flow characteristics different from the K-ω models or the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for the selection of the appropriate turbulence model to apply within the simulation.
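The y+ definition quoted in the abstract, written as a function; the air-like property values in the example are ours, for illustration:

```python
# Dimensionless wall distance used to judge near-wall grid resolution:
# y+ = rho * u_tau * y / mu. Example property values are assumed
# (roughly air at room conditions), not taken from the paper.
def y_plus(rho, u_tau, y, mu):
    """rho: density [kg/m^3], u_tau: wall friction velocity [m/s],
    y: distance from cell center to wall [m], mu: viscosity [Pa s]."""
    return rho * u_tau * y / mu

yp = y_plus(rho=1.2, u_tau=0.05, y=1e-4, mu=1.8e-5)  # ~0.33, well inside
                                                     # the viscous sublayer
```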
Dispersion sensitivity analysis & consistency improvement of APFSDS
Directory of Open Access Journals (Sweden)
Sangeeta Sharma Panda
2017-08-01
In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw. A correlation between first maximum yaw and residual spin is observed. Results of the data analysis are used in design modification of the existing ammunition. A number of designs were evaluated numerically before freezing five designs for further soundings. These designs were critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. Results were validated by free-flight trials of the finalised design.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
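The practical payoff claimed above is a simulation count: central finite differences need two extra solves per parameter, while the adjoint method needs at most one extra solve in total. A trivial sketch of that bookkeeping:

```python
# Extra full FDTD solves needed to estimate sensitivities for n_params
# design parameters, per the abstract's claim; the 10-parameter example
# is ours.
def central_difference_extra_solves(n_params):
    return 2 * n_params  # one forward and one backward perturbation each

def adjoint_extra_solves(n_params):
    return 1  # a single adjoint simulation, independent of n_params

ratio = central_difference_extra_solves(10) / adjoint_extra_solves(10)  # 20.0
```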
Sensitivity analysis of the RESRAD, a dose assessment code
International Nuclear Information System (INIS)
Yu, C.; Cheng, J.J.; Zielen, A.J.
1991-01-01
The RESRAD code is a pathway analysis code designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters, such as soil properties and food ingestion rates, in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) a built-in graphics package that shows parameter sensitivities while the RESRAD code is operational
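The second method listed, direct perturbation of input parameters, amounts to nudging one input and normalizing the response; a sketch with a toy dose model of our own (not RESRAD's):

```python
# Direct-perturbation relative sensitivity: (% change in output) /
# (% change in input). The linear toy dose model is an assumption for
# illustration, not a RESRAD pathway model.
def relative_sensitivity(model, param, delta_frac=0.01):
    base = model(param)
    perturbed = model(param * (1.0 + delta_frac))
    return ((perturbed - base) / base) / delta_frac

dose_model = lambda ingestion_rate: 2.0 * ingestion_rate  # dose linear in intake
rs = relative_sensitivity(dose_model, param=100.0)
```

For any linear model the relative sensitivity is 1.0; values above or below 1 flag inputs with amplified or damped influence on the computed dose.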
A sensitivity analysis approach to optical parameters of scintillation detectors
International Nuclear Information System (INIS)
Ghal-Eh, N.; Koohi-Fayegh, R.
2008-01-01
In this study, an extended version of the Monte Carlo light transport code, PHOTRACK, has been used for a sensitivity analysis to estimate the importance of different wavelength-dependent parameters in the modelling of light collection process in scintillators
Sobol’ sensitivity analysis for stressor impacts on honeybee colonies
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
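The workflow described (sample uncertain inputs, simulate daily trajectories, attribute output variation to inputs) can be sketched on a deliberately crude colony model; the model, parameter names, and ranges below are ours, not VarroaPop's:

```python
import random

# Toy analogue of the VarroaPop exercise: sample uncertain inputs,
# simulate a daily colony trajectory, and rank inputs by their
# correlation with the final population. All dynamics are assumed.
def simulate_colony(queen_strength, mite_load, days=90):
    pop = 10_000.0
    for _ in range(days):
        births = queen_strength                   # new adults per day
        deaths = pop * (0.01 + 0.02 * mite_load)  # mites raise mortality
        pop = max(pop + births - deaths, 0.0)
    return pop

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
draws = [(rng.uniform(100, 300), rng.uniform(0.0, 1.0)) for _ in range(500)]
finals = [simulate_colony(q, m) for q, m in draws]
r_queen = pearson([q for q, _ in draws], finals)  # positive correlation
r_mite = pearson([m for _, m in draws], finals)   # negative correlation
```

Variance-based (Sobol') indices refine this correlation ranking by decomposing output variance exactly, including interaction effects that simple correlations miss.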
Experimental Design for Sensitivity Analysis of Simulation Models
Kleijnen, J.P.C.
2001-01-01
This introductory tutorial gives a survey of the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation implied by the simulation model; the resulting regression model is also known as a metamodel.
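A minimal version of the regression-metamodel idea: run the simulation at a handful of design points, fit y = b0 + b1·x by ordinary least squares, and read the slope as the sensitivity estimate. The stand-in "simulation" below is ours:

```python
# Regression metamodel sketch: the toy simulate() stands in for an
# expensive simulation model; the design points are ours.
def simulate(x):
    return 3.0 + 2.0 * x  # placeholder for an expensive simulation run

def ols_fit(xs, ys):
    """Ordinary least squares for y = b0 + b1*x; returns (b0, b1)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

design = [0.0, 0.25, 0.5, 0.75, 1.0]  # the what-if design points
b0, b1 = ols_fit(design, [simulate(x) for x in design])  # recovers 3.0, 2.0
```

In practice the design points come from a statistical design (e.g. factorial or Latin hypercube), which is what the tutorial surveys.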
Sensitivity analysis of a greedy heuristic for knapsack problems
Ghosh, D; Chakravarti, N; Sierksma, G
2006-01-01
In this paper, we carry out a parametric analysis as well as a tolerance-limit-based sensitivity analysis of a greedy heuristic for two knapsack problems: the 0-1 knapsack problem and the subset sum problem. The parametric analysis is carried out with respect to all problem parameters.
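The heuristic under analysis, in its standard form: take items in nonincreasing value/weight order while they fit (for subset sum, value equals weight). The instance data below are ours:

```python
# Standard greedy heuristic for the 0-1 knapsack problem; the example
# instance is assumed for illustration.
def greedy_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns (total_value, chosen)."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    total_value, remaining, chosen = 0, capacity, []
    for i in order:
        value, weight = items[i]
        if weight <= remaining:
            chosen.append(i)
            total_value += value
            remaining -= weight
    return total_value, chosen

value, picked = greedy_knapsack([(60, 10), (100, 20), (120, 30)], capacity=50)
```

On this instance the greedy solution (value 160) falls short of the optimum (220, taking the last two items), which is exactly the kind of solution behavior that sensitivity and tolerance analysis of the heuristic studies under parameter perturbations.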
John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W
2016-01-01
When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments against surgical trays with reduced instruments ("reduced trays"). We performed a cost analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources, including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21,806 and the cost of reduced trays was $8,803, for a 1-year cost saving of $13,003. In sensitivity analyses, cost savings ranged from $3,262 to $21,395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3,253 for low-volume to a maximum of $52,012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.
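The headline saving is the difference between the two reported annual tray costs, and the high-volume maximum is consistent with that saving scaling linearly in surgical volume; the linear scaling is our simplifying reading of the sensitivity analysis:

```python
# Arithmetic behind the reported one-year saving; volume_multiplier
# models a proportional change in surgical volume (our assumption).
COST_REDUNDANT_TRAYS = 21_806  # $/year, trays with redundant instruments
COST_REDUCED_TRAYS = 8_803     # $/year, reduced trays

def annual_saving(volume_multiplier=1.0):
    return (COST_REDUNDANT_TRAYS - COST_REDUCED_TRAYS) * volume_multiplier

base_saving = annual_saving()     # $13,003 at the study institution
high_volume = annual_saving(4.0)  # $52,012, the reported high-volume maximum
```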
Sensitivity analysis of numerical solutions for environmental fluid problems
International Nuclear Information System (INIS)
Tanaka, Nobuatsu; Motoyama, Yasunori
2003-01-01
In this study, we present a new numerical method to quantitatively analyze the error of numerical solutions using sensitivity analysis. Once a reference case with typical parameters has been calculated with this method, no additional calculation is required to estimate the results for other numerical parameters, such as more detailed solutions. Furthermore, we can estimate the strict solution from the sensitivity analysis results and can quantitatively evaluate the reliability of the numerical solution by calculating the numerical error. (author)
Cost-effective analysis of PET application in NSCLC
International Nuclear Information System (INIS)
Gu Aichun; Liu Jianjun; Sun Xiaoguang; Shi Yiping; Huang Gang
2006-01-01
Objective: To evaluate the cost-effectiveness of PET and CT application for the diagnosis of non-small cell lung cancer (NSCLC) in China. Methods: Using decision analysis methods, the diagnostic efficiency of PET and CT for the diagnosis of NSCLC in China was analysed, and the cost for accurate diagnosis (CAD), cost for accurate staging (CAS) and cost for effective therapy (CAT) were calculated. Results: (1) For accurate diagnosis, CT was much more cost-effective than PET. (2) For accurate staging, CT was still more cost-effective than PET. (3) For the overall diagnostic and therapeutic cost, PET was more cost-effective than CT. (4) PET was preferable to CT for the diagnosis of stage I NSCLC. Conclusion: For the management of NSCLC patients in China, CT is more cost-effective for screening, whereas PET is more cost-effective for clinical staging and monitoring therapeutic effect. (authors)
Governance Based on Cost Analysis (Unit Cost Analysis for Vocational Schools
Directory of Open Access Journals (Sweden)
Chandra Situmeang
2018-02-01
This study aims to calculate the unit cost of producing one middle-level vocational school graduate (in Indonesian terms, "Sekolah Menengah Kejuruan", abbreviated SMK). The calculation is required because operational grant funds (in Indonesian, Bantuan Operasional Sekolah, abbreviated BOS) have so far been distributed at the same rate across all areas of Indonesia and for all majors. This is most likely suboptimal because there are, in fact, very basic differences in characteristics, including the economic capacity of each region, the cost standard for each region, and the type of department in the school. Based on this, the researcher assumed that cost analysis should be done considering these factors as a basis for providing BOS funds tailored to specific characteristics. The data analyzed in this research are from North Sumatra province. The research is conducted in two stages; this report covers only the first stage, a survey of the North Sumatra region. The survey obtained data that were then analyzed together with related data such as community income, learning outcomes (measured through national examination scores), tuition fees, and the condition of learning facilities. The research is funded by the ministries of research, technology and higher education through competing grant schemes for fiscal years 2017 and 2018. The correlation analysis between the variables shows a strong relationship between average income and the average tuition paid by the community, and between the average tuition paid by the community and the quality level of education facilities. It also shows a moderate relationship between average tuition and learning outcomes measured through the average national exam score, and between the quality level of education facilities and the average national exam score. The relationship between average income and the average national exam score, by contrast, is not strong.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
Directory of Open Access Journals (Sweden)
R. Ariza
2014-07-01
Full Text Available Objective: To compare the cost of treating rheumatoid arthritis patients who have failed an initial treatment with methotrexate, with subcutaneous abatacept versus other first-line biologic disease-modifying antirheumatic drugs. Method: Subcutaneous abatacept was considered comparable to intravenous abatacept, adalimumab, certolizumab pegol, etanercept, golimumab, infliximab and tocilizumab, based on indirect comparison using mixed treatment analysis. A cost-minimization analysis was therefore considered appropriate. The Spanish Health System perspective and a 3-year time horizon were selected. Pharmaceutical and administration costs (€, 2013) of all available first-line biologic disease-modifying antirheumatic drugs were considered. Administration costs were obtained from a local costs database. Patients were assumed to weigh 70 kg. A 3% annual discount rate was applied. Deterministic and probabilistic sensitivity analyses were performed. Results: In the base case, subcutaneous abatacept proved less costly than all other biologic antirheumatic drugs (savings ranging from €831.42 versus infliximab to €9,741.69 versus tocilizumab). Subcutaneous abatacept was associated with a cost of €10,760.41 per patient during the first year of treatment and €10,261.29 in subsequent years. The total 3-year cost of subcutaneous abatacept was €29,953.89 per patient. Sensitivity analyses proved the model to be robust. Subcutaneous abatacept remained cost-saving in 100% of probabilistic sensitivity analysis simulations versus adalimumab, certolizumab, etanercept and golimumab, in more than 99.6% versus intravenous abatacept and tocilizumab, and in 62.3% versus infliximab. Conclusions: Treatment with subcutaneous abatacept is cost-saving versus intravenous abatacept, adalimumab, certolizumab, etanercept, golimumab, infliximab and tocilizumab in the management of rheumatoid arthritis patients initiating first-line biologic treatment after methotrexate failure.
Cost analysis of nursing home registered nurse staffing times.
Dorr, David A; Horn, Susan D; Smout, Randall J
2005-05-01
To examine potential cost savings from decreased adverse resident outcomes versus the additional wages of nurses when nursing homes have adequate staffing. A retrospective cost study using differences in adverse outcome rates of pressure ulcers (PUs), urinary tract infections (UTIs), and hospitalizations per resident per day between low-staffing and adequate-staffing nursing homes. Cost savings from reductions in these events are calculated in dollars and compared with the costs of increasing nurse staffing. Eighty-two nursing homes throughout the United States. One thousand three hundred seventy-six frail elderly long-term care residents at risk of PU development. Event rates are from the National Pressure Ulcer Long-Term Care Study. Hospital costs are estimated from Medicare statistics and from charges in the Healthcare Cost and Utilization Project. UTI costs and PU costs are from cost-identification studies. The time horizon is 1 year; the perspectives are societal and institutional. Analyses showed a net societal benefit of $3,191 per resident per year in a high-risk, long-stay nursing home unit that employs sufficient nurses to achieve 30 to 40 minutes of registered nurse direct care time per resident per day, versus nursing homes with nursing time of less than 10 minutes. Sensitivity analyses revealed a robust set of estimates, with no single or paired elements reaching the cost/benefit equality threshold. Increasing nurse staffing in nursing homes may create significant societal cost savings from the reduction in adverse outcomes. Challenges in increasing nurse staffing are discussed.
International Nuclear Information System (INIS)
Muhsen, Dhiaa Halboot; Khatib, Tamer; Haider, Haider Tarish
2017-01-01
Highlights: • Feasibility and load sensitivity analysis is conducted for PVPS. • Battery and diesel generator are considered as supporting units to the system. • The configuration of the PV array and the initial status of the tank are important. • The COU is more sensitive to the capital cost of the PV array than to other components. • Increasing the maximum capacity of the water storage tank is better than adding storage or a DG. - Abstract: In this paper, a feasibility and load sensitivity analysis is conducted for photovoltaic water pumping systems (PVPS) with a storage device (battery) or diesel generator (DG) so as to obtain an optimal configuration that achieves a reliable system. The analysis is conducted on techno-economic grounds, with the loss of load probability and the life cycle cost as the technical and economic criteria, respectively. Various PVPS scenarios with an initially full storage tank, a battery, and a hybrid DG-PV energy source are proposed to analyze the feasibility of the system. The results show that the configuration of the PV array and the initial status of the storage tank are important variables to consider. Moreover, the sensitivity of the cost of unit (COU) to the various PVPS components is studied. The COU is found to be more sensitive to the initial capital cost of the photovoltaic array than to the other components. A standalone PV-based pumping system with a PV array capacity of 2.4 kWp and a storage tank capacity of 80 m³ is proposed as an optimum system. This configuration pumps an average hourly water volume of approximately 3.297 m³ over one year at a unit cost of 0.05158 USD/m³. Moreover, according to the results, increasing the maximum capacity of the water storage tank is technically and economically better than supporting a PVPS with another energy source or an extra storage device.
Cost Analysis In A Multi-Mission Operations Environment
Newhouse, M.; Felton, L.; Bornas, N.; Botts, D.; Roth, K.; Ijames, G.; Montgomery, P.
2014-01-01
Spacecraft control centers have evolved from dedicated, single-mission or single-mission-type support to multi-mission, service-oriented support for operating a variety of mission types. At the same time, available money for projects is shrinking and competition for new missions is increasing. These factors drive the need for an accurate and flexible model to support estimating service costs for new or extended missions; the cost model in turn drives the need for an accurate and efficient approach to service cost analysis. The National Aeronautics and Space Administration (NASA) Huntsville Operations Support Center (HOSC) at Marshall Space Flight Center (MSFC) provides operations services to a variety of customers around the world. HOSC customers range from launch vehicle test flights to International Space Station (ISS) payloads to small, short-duration missions, and have included long-duration flagship missions. The HOSC recently completed a detailed analysis of service costs as part of the development of a complete service cost model. The cost analysis process required the team to address a number of issues. One of the primary issues is the difficulty of reverse engineering individual mission costs in a highly efficient multi-mission environment, along with the related issue of the value of detailed metrics or data to the cost model versus the cost of obtaining accurate data. Another concern is the difficulty of balancing costs between missions of different types and sizes and extrapolating costs to different mission types. The cost analysis also had to address issues relating to providing shared, cloud-like services in a government environment, and then assigning an uncertainty or risk factor to cost estimates that are based on current technology but will be executed using future technology. Finally, the cost analysis needed to consider how to validate the resulting cost models, taking into account the non-homogeneous nature of the available cost data and the
Oil and gas pipeline construction cost analysis and developing regression models for cost estimation
Thaduri, Ravi Kiran
In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW) and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameters, locations, pipeline volumes and years of completion. In pipeline construction, labor costs dominate the total costs, with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The compressor stations are analyzed based on capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of a compressor station, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of a compressor station for various capacities and locations.
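The regression approach described in this abstract can be sketched in miniature. The snippet below fits a power-law cost model, cost = a · length^b, by ordinary least squares in log-log space; the lengths, coefficients, and noise level are hypothetical illustrations, not the study's data, and the thesis itself uses multiple non-linear regressors (diameter, location, volume) rather than length alone.

```python
import math
import random
import statistics

def fit_power_law(lengths, costs):
    """OLS fit of cost = a * length^b, linearized as log(cost) = log(a) + b*log(length)."""
    xs = [math.log(x) for x in lengths]
    ys = [math.log(y) for y in costs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical data: pipeline lengths in miles, total construction cost in USD millions.
random.seed(7)
lengths = [10, 25, 50, 100, 200, 400]
costs = [2.0 * L ** 0.95 * random.uniform(0.9, 1.1) for L in lengths]

a, b = fit_power_law(lengths, costs)
print(f"estimated cost model: cost ~= {a:.2f} * length^{b:.2f}")
```

An exponent near 1 indicates roughly linear scaling of cost with length; the same log-log trick extends to multiple regressors.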
Risk and sensitivity analysis in relation to external events
International Nuclear Information System (INIS)
Alzbutas, R.; Urbonas, R.; Augutis, J.
2001-01-01
This paper presents a risk and sensitivity analysis of the impact of external events on safe operation in general, and on the Ignalina Nuclear Power Plant safety systems in particular. The analysis is based on deterministic and probabilistic assumptions and an assessment of the external hazards. Real statistical data are used, as well as initial external event simulation. Preliminary screening criteria are applied. The analysis of external event impacts on safe NPP operation, assessment of event occurrence, sensitivity analysis, and recommendations for safety improvements are performed for the investigated external hazards. Events such as aircraft crash, extreme rains and winds, forest fire, and flying turbine parts are analysed. Models are developed and probabilities calculated. As an example of sensitivity analysis, the model of aircraft impact is presented. The sensitivity analysis takes into account the uncertainty introduced by the external event and its model. Even where the external events analysis shows rather limited danger, the sensitivity analysis can identify the causes with the highest influence. Such possible future variations can be significant for safety levels and risk-based decisions. Calculations show that external events cannot significantly influence the safety level of Ignalina NPP operation; however, event occurrence and propagation can be quite uncertain. (author)
Cost analysis of open radical cystectomy versus robot-assisted radical cystectomy.
Bansal, Sukhchain S; Dogra, Tara; Smith, Peter W; Amran, Maisarah; Auluck, Ishna; Bhambra, Maninder; Sura, Manraj S; Rowe, Edward; Koupparis, Anthony
2018-03-01
To perform a cost analysis comparing the cost of robot-assisted radical cystectomy (RARC) with that of open RC (ORC) in a UK tertiary referral centre and to identify the key cost drivers. Data on hospital length of stay (LOS), operative time (OT), transfusion rate and volume, and complication rate were obtained from a prospectively updated institutional database for patients undergoing RARC or ORC. A cost decision tree model was created. Sensitivity analysis was performed to find the key drivers of overall cost and to find breakeven points with ORC. Monte Carlo analysis was performed to quantify the variability in the dataset. One RARC procedure costs £12 449.87, or £12 106.12 if the robot was donated via charitable funds. In comparison, one ORC procedure costs £10 474.54. RARC is 18.9% more expensive than ORC. The key cost drivers were OT, LOS, and the number of cases performed per annum. High ongoing equipment costs remain a large barrier to the cost of RARC falling. However, minimal improvements in patient quality of life would be required to offset this difference. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
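The caseload effect the authors identify, per-case cost falling as annual volume rises, can be illustrated with a toy amortization model. Every figure below is a hypothetical placeholder, not the paper's cost data; the point is only that the fixed annual equipment cost is spread over the number of cases performed, while the rest varies per case.

```python
def cost_per_rarc(annual_cases,
                  annual_equipment=450_000.0,  # hypothetical robot amortisation + service, GBP/yr
                  consumables=2_500.0,         # hypothetical per-case instrument cost
                  theatre_hours=5.0, theatre_rate=1_200.0,
                  los_days=7.0, ward_rate=400.0):
    """Per-case cost: fixed annual equipment cost spread over the caseload,
    plus per-case variable costs (consumables, theatre time, ward stay)."""
    fixed = annual_equipment / annual_cases
    variable = consumables + theatre_hours * theatre_rate + los_days * ward_rate
    return fixed + variable

for n in (20, 50, 100):
    print(f"{n} cases/yr: £{cost_per_rarc(n):,.2f} per case")
```

With these assumed inputs the per-case cost drops steeply with volume, which is why cases per annum appears among the paper's key cost drivers.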
Cost and performance analysis of physical security systems
International Nuclear Information System (INIS)
Hicks, M.J.; Yates, D.; Jago, W.H.
1997-01-01
CPA - Cost and Performance Analysis - is a prototype integration of existing PC-based cost and performance analysis tools: ACEIT (Automated Cost Estimating Integrated Tools) and ASSESS (Analytic System and Software for Evaluating Safeguards and Security). ACE is an existing DOD PC-based tool that supports cost analysis over the full life cycle of a system; that is, the cost to procure, operate, maintain and retire the system and all of its components. ASSESS is an existing DOE PC-based tool for analysis of performance of physical protection systems. Through CPA, the cost and performance data are collected into Excel workbooks, making the data readily available to analysts and decision makers in both tabular and graphical formats and at both the system and subsystem levels. The structure of the cost spreadsheets incorporates an activity-based approach to cost estimation. Activity-based costing (ABC) is an accounting philosophy used by industry to trace direct and indirect costs to the products or services of a business unit. By tracing costs through security sensors and procedures and then mapping the contributions of the various sensors and procedures to system effectiveness, the CPA architecture can provide security managers with information critical for both operational and strategic decisions. The architecture, features and applications of the CPA prototype are presented. 5 refs., 3 figs
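A minimal sketch of the activity-based costing idea described above: indirect costs are traced to activities via a cost driver (labour hours here), and each activity's full cost is then mapped to its assumed contribution to system effectiveness. All names and figures below are hypothetical, and CPA itself is an Excel/ACEIT/ASSESS integration, not Python.

```python
# Hypothetical security activities: direct cost, labour hours (the cost
# driver used to trace the indirect pool), and an assumed contribution to
# overall system effectiveness. Figures are illustrative only.
activities = {
    "perimeter sensors": {"direct": 120_000.0, "labour_h": 800.0,  "effectiveness": 0.30},
    "entry control":     {"direct": 90_000.0,  "labour_h": 1500.0, "effectiveness": 0.45},
    "patrols":           {"direct": 60_000.0,  "labour_h": 2700.0, "effectiveness": 0.25},
}
indirect_pool = 200_000.0

# Indirect cost per labour hour, then full cost per activity.
rate = indirect_pool / sum(a["labour_h"] for a in activities.values())
report = {}
for name, a in activities.items():
    full_cost = a["direct"] + rate * a["labour_h"]   # direct + traced indirect
    report[name] = (full_cost, full_cost / a["effectiveness"])

for name, (cost, cpe) in report.items():
    print(f"{name}: full cost {cost:,.0f}, cost per unit effectiveness {cpe:,.0f}")
```

The cost-per-unit-effectiveness column is the kind of figure that lets a security manager compare sensors and procedures on a common footing.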
High sensitivity analysis of atmospheric gas elements
International Nuclear Information System (INIS)
Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo
2006-01-01
We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10¹⁷ atoms/cm³ for H, 3 × 10¹⁶ atoms/cm³ for C and 2 × 10¹⁶ atoms/cm³ for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.
High sensitivity analysis of atmospheric gas elements
Energy Technology Data Exchange (ETDEWEB)
Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)
2006-07-30
We have investigated the detection limits of H, C and O in Si, GaAs and InP using a Cameca IMS-4f instrument equipped with a modified vacuum system to improve the detection limit at a lower sputtering rate. We found that the detection limits for H, O and C are improved by employing a primary ion bombardment before the analysis. Background levels of 1 × 10¹⁷ atoms/cm³ for H, 3 × 10¹⁶ atoms/cm³ for C and 2 × 10¹⁶ atoms/cm³ for O could be achieved in silicon with a sputtering rate of 2 nm/s after a primary ion bombardment for 160 h. We also found that the use of a 20 K He cryo-panel near the sample holder was effective for obtaining better detection limits in a shorter time, although the final detection limits using the panel are identical to those achieved without it.
Sensitivity Analysis of BLISK Airfoil Wear †
Directory of Open Access Journals (Sweden)
Andreas Kellersmann
2018-05-01
Full Text Available The decreasing performance of jet engines during operation is a major concern for airlines and maintenance companies. Among other effects, the erosion of high-pressure compressor (HPC) blades is a critical one and leads to changed aerodynamic behavior, and therefore to a change in performance. The maintenance of BLISKs (blade-integrated disks) is especially challenging because the blade arrangement cannot be changed and individual blades cannot be replaced. Thus, coupled deteriorated blades have a complex aerodynamic behavior which can have a stronger influence on compressor performance than in a conventional HPC. To ensure effective maintenance for BLISKs, the impact of coupled misshaped blades is the key factor. The present study addresses these effects on the aerodynamic performance of a first-stage BLISK of a high-pressure compressor. A design of experiments (DoE) is therefore carried out to identify the geometric properties that lead to a reduction in performance. It is shown that the effect of coupled variances depends on the operating point. Based on the DoE analysis, the thickness-related parameters, the stagger angle, and the maximum profile camber are identified as the most important coupled parameters for all operating points.
Hasegawa, Raiden; Small, Dylan
2017-12-01
In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.
Directory of Open Access Journals (Sweden)
Filip Meheus
2010-09-01
Full Text Available Visceral leishmaniasis (VL) is a systemic parasitic disease that is fatal unless treated. We assessed the cost and cost-effectiveness of alternative strategies for the treatment of visceral leishmaniasis in the Indian subcontinent. In particular, we examined whether combination therapies are a cost-effective alternative compared to monotherapies. We assessed the cost-effectiveness of all possible mono- and combination therapies for the treatment of visceral leishmaniasis in the Indian subcontinent (India, Nepal and Bangladesh) from a societal perspective using a decision analytical model based on a decision tree. Primary data collected in each country were combined with data from the literature and an expert poll (Delphi method). The cost per patient treated and the average and incremental cost-effectiveness ratios, expressed as cost per death averted, were calculated. Extensive sensitivity analysis was done to evaluate the robustness of our estimations and conclusions. With a cost of US$92 per death averted, the combination miltefosine-paromomycin was the most cost-effective treatment strategy. The next best alternative was a combination of liposomal amphotericin B with paromomycin, with an incremental cost-effectiveness of $652 per death averted. All other strategies were dominated, with the exception of a single dose of 10 mg/kg of liposomal amphotericin B. While strategies based on liposomal amphotericin B (AmBisome) were found to be the most effective, its current drug cost of US$20 per vial resulted in a higher average cost-effectiveness ratio. Sensitivity analysis showed the conclusion to be robust to variations in the input parameters over their plausible ranges. Combination treatments are a cost-effective alternative to current monotherapy for VL. Given their expected impact on the emergence of drug resistance, a switch to combination therapy should be considered once final results from clinical trials are available.
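The dominance reasoning in this abstract, where strategies that cost more but avert no more deaths are eliminated before incremental ratios are computed, can be sketched as follows. The strategy names echo the abstract, but every cost and effect figure is a made-up illustration, and extended dominance is omitted for brevity.

```python
# Hypothetical strategy costs (US$ per patient) and probabilities of averting
# a death; illustrative figures only, not the study's data.
strategies = {
    "monotherapy A":      (160.0, 0.940),
    "combination M-P":    (120.0, 0.952),
    "combination LAmB-P": (260.0, 0.957),
    "monotherapy B":      (300.0, 0.950),
}

def frontier_with_icers(strategies):
    """Sort by cost, drop strictly dominated options (costlier, no more
    effective), then compute incremental ratios (cost per extra death
    averted) along the frontier. Extended dominance not handled here."""
    ordered = sorted(strategies.items(), key=lambda kv: kv[1][0])
    frontier, best_effect = [], float("-inf")
    for name, (cost, effect) in ordered:
        if effect > best_effect:
            frontier.append((name, cost, effect))
            best_effect = effect
    icers = {}
    for (_, c0, e0), (n1, c1, e1) in zip(frontier, frontier[1:]):
        icers[n1] = (c1 - c0) / (e1 - e0)
    return [n for n, _, _ in frontier], icers

names, icers = frontier_with_icers(strategies)
print(names, icers)
```

With these made-up inputs only two strategies survive on the frontier, mirroring how the study reports a single incremental ratio between its best and next-best options.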
Social cost-benefit analysis and nuclear futures
International Nuclear Information System (INIS)
Pearce, D.W.
1979-01-01
The usefulness of cost-benefit analysis in making nuclear power investment decisions is considered. The essence of social cost-benefit analysis is outlined and shown to be unavoidably value-laden. As a case study, six issues relevant to the decision to build an oxide fuel reprocessing plant (THORP) are examined. The potential practical value of using cost-benefit analysis as an aid to decision-making is considered for each of these issues. It is concluded that the cost-benefit approach is of limited value in the nuclear power case because of its inapplicability to such issues as the liberty of the individual and nuclear weapons proliferation. (author)
Amorphous silicon batch process cost analysis
International Nuclear Information System (INIS)
Whisnant, R.A.; Sherring, C.
1993-08-01
This report describes the development of baseline manufacturing cost data to assist PVMaT monitoring teams in assessing current and future subcontracts, with an emphasis on commercialization and production. A process for the manufacture of a single-junction, large-area a-Si module was modeled using an existing Research Triangle Institute (RTI) computer model. The model estimates a required, or breakeven, price for the module based on its production process and the financial structure of the company operating the process. Sufficient detail on cost drivers is presented so that process features and business characteristics can be related to the estimated required price.
Malaria community health workers in Myanmar: a cost analysis.
Kyaw, Shwe Sin; Drake, Tom; Thi, Aung; Kyaw, Myat Phone; Hlaing, Thaung; Smithuis, Frank M; White, Lisa J; Lubell, Yoel
2016-01-25
Myanmar has the highest malaria incidence and attributed mortality in South East Asia, with limited healthcare infrastructure to manage this burden. Establishing malaria Community Health Worker (CHW) programmes is one possible strategy to improve access to malaria diagnosis and treatment, particularly in remote areas. Despite considerable donor support for implementing CHW programmes in Myanmar, the cost implications are not well understood. An ingredients-based micro-costing approach was used to develop a model of the annual implementation cost of malaria CHWs in Myanmar. A cost model was constructed based on activity centres comprising training, patient malaria services, monitoring and supervision, programme management, overheads and incentives. The model takes a provider perspective. Financial data on CHW programmes were obtained from the 2013 financial reports of the Three Millennium Development Goal fund implementing partners that have been working on malaria control and elimination in Myanmar. Sensitivity and scenario analyses were undertaken to outline parameter uncertainty and explore changes to programme cost under key assumptions. The range of total annual costs for the support of one CHW was US$ 966-2,486. The largest driver of CHW cost was monitoring and supervision (31-60% of annual CHW cost). Other important determinants of cost included programme management (15-28% of annual CHW cost) and patient services (6-12% of annual CHW cost). Within patient services, malaria rapid diagnostic tests are the major contributor to cost (64% of patient service costs). The annual cost of a malaria CHW in Myanmar varies considerably depending on the context and the design of the programme, in particular remoteness and the approach to monitoring and evaluation. The estimates provide information to policy makers and CHW programme planners in Myanmar as well as supporting economic evaluations of their cost-effectiveness.
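The activity-centre structure and one-way sensitivity analysis described in this abstract can be sketched as follows. The category names follow the abstract, but the dollar figures and the ±50% variation range are illustrative assumptions only.

```python
# Hypothetical base-case annual cost per CHW (US$) by activity centre; the
# category names follow the abstract, the figures are illustrative only.
base = {
    "training": 150.0,
    "patient services": 120.0,
    "monitoring and supervision": 600.0,
    "programme management": 300.0,
    "overheads": 100.0,
    "incentives": 130.0,
}

def one_way_sensitivity(base, low=0.5, high=1.5):
    """One-way sensitivity: vary each activity centre alone between `low`
    and `high` multiples of its base value, holding the others fixed."""
    total = sum(base.values())
    spans = {k: (total - v + low * v, total - v + high * v) for k, v in base.items()}
    return total, spans

total, spans = one_way_sensitivity(base)
# Widest span first: the tornado-diagram ordering of cost drivers.
for k, (lo, hi) in sorted(spans.items(), key=lambda kv: kv[1][0] - kv[1][1]):
    print(f"{k}: {lo:.0f}-{hi:.0f}  (base total {total:.0f})")
```

Under these assumptions the monitoring-and-supervision centre produces the widest span, consistent with the abstract's finding that it is the largest cost driver.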
International Nuclear Information System (INIS)
Pin, F.G.; Worley, B.A.; Oblow, E.M.; Wright, R.Q.; Harper, W.V.
1986-01-01
To support an effort to make large-scale sensitivity analyses feasible, cost-efficient and quantitatively complete, the authors have developed an automated procedure making use of computer calculus. The procedure, called GRESS (GRadient Enhanced Software System), is embodied in a precompiler that can process Fortran computer codes and add derivative-taking capabilities to the normal calculation scheme. In this paper, the automated GRESS procedure is described and applied to the code UCB-NE-10.2, which simulates the migration through a sorption medium of the radionuclide members of a decay chain. The sensitivity calculations for a sample problem are verified by comparison with analytical and perturbation analysis results. Conclusions are drawn regarding the applicability of GRESS to more general large-scale sensitivity studies, and the role of such techniques in an overall sensitivity and uncertainty analysis program is discussed.
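The "computer calculus" idea behind GRESS, augmenting a code so that derivatives are propagated alongside values, is what is now called forward-mode automatic differentiation. The sketch below implements it with dual numbers for a toy first-member decay computation; it is an illustration of the principle, not the GRESS precompiler, and the function names are invented.

```python
import math

class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    each value carries its derivative with respect to a chosen input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.der)

def exp(x):
    # chain rule: d exp(u) = exp(u) * du
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

def first_member(n0, k, t):
    """First member of a decay chain: N(t) = N0 * exp(-k t)."""
    return n0 * exp(-(k * Dual(t)))

# Seed der=1 on k, so N.der is dN/dk, computed in the same evaluation as N.
k = Dual(0.1, 1.0)
N = first_member(1000.0, k, 5.0)
print(N.val, N.der)  # analytically, dN/dk = -t * N0 * exp(-k t)
```

The appeal, as with GRESS, is that the derivative comes from the same code path as the value, so no finite-difference reruns or hand-derived adjoints are needed.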
Standardization: using comparative maintenance costs in an economic analysis
Clark, Roger Nelson
1987-01-01
Approved for public release; distribution is unlimited. This thesis investigates the use of comparative maintenance costs of functionally interchangeable equipment in similar U.S. Navy shipboard applications in an economic analysis of standardization. The economics of standardization, life-cycle costing, and the Navy 3-M System are discussed in general. An analysis of 3-M System maintenance costs for a selected equipment type, diesel engines, is conducted. The potential use of comparative ma...
Space construction system analysis. Part 2: Cost and programmatics
Vonflue, F. W.; Cooper, W.
1980-01-01
Cost and programmatic elements of the space construction systems analysis study are discussed. The programmatic aspects of the ETVP program define a comprehensive plan for the development of a space platform, the construction system, and the space shuttle operations/logistics requirements. The cost analysis identified significant items of cost in the ETVP development, ground, and flight segments, and detailed the items of space construction equipment and operations.
Application of Stochastic Sensitivity Analysis to Integrated Force Method
Directory of Open Access Journals (Sweden)
X. F. Wei
2012-01-01
Full Text Available As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering due to its accurate estimation of forces in computation. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of IFM was further investigated in this study. A set of stochastic sensitivity analysis formulations for the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to existing programs, since the models of stochastic finite elements and stochastic design sensitivity are almost identical.
The EVEREST project: sensitivity analysis of geological disposal systems
International Nuclear Information System (INIS)
Marivoet, Jan; Wemaere, Isabelle; Escalier des Orres, Pierre; Baudoin, Patrick; Certes, Catherine; Levassor, Andre; Prij, Jan; Martens, Karl-Heinz; Roehlig, Klaus
1997-01-01
The main objective of the EVEREST project is the evaluation of the sensitivity of the radiological consequences associated with the geological disposal of radioactive waste to the different elements in the performance assessment. Three types of geological host formations are considered: clay, granite and salt. The sensitivity studies that have been carried out can be partitioned into three categories according to the type of uncertainty taken into account: uncertainty in the model parameters, uncertainty in the conceptual models and uncertainty in the considered scenarios. Deterministic as well as stochastic calculational approaches have been applied for the sensitivity analyses. For the analysis of the sensitivity to parameter values, the reference technique, which has been applied in many evaluations, is stochastic and consists of a Monte Carlo simulation followed by a linear regression. For the analysis of conceptual model uncertainty, deterministic and stochastic approaches have been used. For the analysis of uncertainty in the considered scenarios, mainly deterministic approaches have been applied
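The reference technique named in this abstract, Monte Carlo simulation followed by a linear regression, can be sketched in a few lines. The "model" below is an invented toy surrogate (not a real repository model), and with independently sampled inputs the standardized regression coefficient of each input reduces to its correlation with the output, which is what the sketch computes.

```python
import math
import random
import statistics

def model(k_sorption, flow, thickness):
    """Toy dose surrogate: higher flow raises the output; stronger sorption
    and a thicker barrier lower it. Purely illustrative."""
    return flow / ((1.0 + k_sorption) * thickness)

def corr(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def mc_sensitivity(n=2000, seed=0):
    """Monte Carlo sampling of the inputs, then standardized linear
    regression of the output on each input to rank parameter importance."""
    rng = random.Random(seed)
    samples = [(rng.uniform(1.0, 9.0),    # sorption coefficient
                rng.uniform(0.5, 2.0),    # water flow
                rng.uniform(1.0, 3.0))    # barrier thickness
               for _ in range(n)]
    outputs = [model(*x) for x in samples]
    return [corr(col, outputs) for col in zip(*samples)]

src = mc_sensitivity()
print([round(s, 2) for s in src])
```

The signs of the coefficients recover the model structure: positive for flow, negative for sorption and thickness. Real assessments add rank transforms and regression diagnostics on top of this skeleton.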
Multiple predictor smoothing methods for sensitivity analysis: Description of techniques
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Multiple predictor smoothing methods for sensitivity analysis: Example results
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
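The first smoother in the stepwise procedure described in these two abstracts, LOESS, is easy to sketch: fit a weighted straight line around each query point, with tricube weights that fall off over the span of the nearest fraction of the data. This is a minimal stdlib illustration under assumed defaults (`frac=0.5`), not the authors' implementation, which combines several nonparametric regressors stepwise.

```python
def loess_point(x0, xs, ys, frac=0.5):
    """Locally weighted linear regression (LOESS) evaluated at x0, using
    tricube weights over the nearest frac*n points."""
    n = len(xs)
    k = max(2, int(frac * n))
    dists = sorted(abs(x - x0) for x in xs)
    h = dists[k - 1] or 1e-12          # bandwidth = k-th nearest distance
    w = [(1 - min(abs(x - x0) / h, 1.0) ** 3) ** 3 for x in xs]
    # weighted least-squares line a + b*x from the normal equations
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    a = (swy - b * swx) / sw
    return a + b * x0

xs = [i / 10 for i in range(21)]   # 0.0 .. 2.0
ys = [x ** 2 for x in xs]          # smooth nonlinear response
print(round(loess_point(1.0, xs, ys), 3))
```

Because the fit is local, a smoother like this tracks the nonlinear input-output relationships that defeat global linear, rank, or quadratic regression, which is precisely the advantage the abstracts claim.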
Carbon dioxide capture processes: Simulation, design and sensitivity analysis
DEFF Research Database (Denmark)
Zaman, Muhammad; Lee, Jay Hyung; Gani, Rafiqul
2012-01-01
Carbon dioxide is the main greenhouse gas, and its major source is the combustion of fossil fuels for power generation. The objective of this study is to carry out a steady-state sensitivity analysis for chemical absorption of carbon dioxide capture from flue gas using monoethanolamine solvent. Equilibrium and associated property models are used. Simulations are performed to investigate the sensitivity of the process variables to changes in the design variables, including process inputs and disturbances in the property model parameters. Results of the sensitivity analysis of the steady-state performance of the process to the L/G ratio to the absorber, the CO2 lean solvent loading, and the stripper pressure are presented in this paper. Based on the sensitivity analysis, process optimization problems have been defined and solved, and a preliminary control structure selection has been made.
Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer.
Goldstein, Daniel A; Ahmad, Bilal B; Chen, Qiushi; Ayer, Turgay; Howard, David H; Lipscomb, Joseph; El-Rayes, Bassel F; Flowers, Christopher R
2015-11-10
Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. © 2015 by American Society of Clinical Oncology.
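The incremental cost-effectiveness ratio reported above can be sketched as a simple calculation. The figures below are the abstract's rounded values, so the result comes out at roughly $1,000,000/QALY rather than the $900,000/QALY the authors obtained from unrounded model outputs; the $150,000/QALY threshold is a common illustrative benchmark, not one the study specifies.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Rounded figures from the abstract: +$40,000 for +0.04 QALYs vs placebo.
ratio = icer(cost_new=40_000.0, qaly_new=0.04, cost_old=0.0, qaly_old=0.0)
print(f"ICER: ${ratio:,.0f} per QALY")

# Compare against an illustrative willingness-to-pay threshold.
threshold = 150_000
print("cost-effective" if ratio <= threshold else "not cost-effective at this threshold")
```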
An analysis of electric utility embedded power supply costs
International Nuclear Information System (INIS)
Kahal, M.; Brown, D.
1998-01-01
There is little doubt that for the vast majority of electric utilities the embedded costs of power supply exceed market prices, giving rise to the stranded cost problem. Beyond that simple generalization, there are a number of crucial questions, which this study attempts to answer. What are the regional patterns of embedded cost differences? To what extent is the cost problem attributable to nuclear power? How does the cost of purchased power compare to the cost of utility self-generation? What is the breakdown of utility embedded generation costs between operating costs, which are potentially avoidable, and ownership costs, which by definition are "sunk" and therefore not avoidable? How will embedded generation costs and market prices compare over time? These are the crucial questions for states as they address retail-restructuring proposals. This study presents an analysis of generation costs that addresses these key questions. A computerized costing model was developed and applied using FERC Form 1 data for 1995. The model analyzed embedded power supply costs (i.e., self-generation plus purchased power) for two groups of investor-owned utilities: 49 non-nuclear and 63 nuclear. These two subsamples represent substantially the entire US investor-owned electric utility industry. For each utility, embedded cost is estimated both at the busbar and at the meter.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
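The variance-decomposition idea underlying Sobol' indices can be sketched for a plain deterministic model. This is the classical parametric-only pick-freeze (Saltelli-style) estimator; the paper's extension to stochastic reaction channels is substantially more involved. The toy model and sample size are illustrative.

```python
import random

def sobol_first_order(model, d, n=20_000, seed=1):
    """Estimate first-order Sobol' indices by the pick-freeze method.

    model: function taking a list of d inputs in [0, 1], returning a float.
    Returns a list of d estimated first-order indices.
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(row) for row in A]
    yB = [model(row) for row in B]
    mA = sum(yA) / n
    mB = sum(yB) / n
    var = sum((y - mA) ** 2 for y in yA) / n
    indices = []
    for i in range(d):
        # C_i = B with column i replaced by A's column i ("freeze" x_i).
        yC = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mA * mB
        indices.append(cov / var)
    return indices

# Additive test model y = 3*x1 + x2: analytically S1 = 0.9, S2 = 0.1.
s = sobol_first_order(lambda x: 3 * x[0] + x[1], d=2)
print([round(v, 2) for v in s])
```

For the additive model the two indices sum to one; a shortfall from one would signal interaction effects, which is the quantity the paper generalizes to interactions between parameters and reaction channels.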
Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information
Directory of Open Access Journals (Sweden)
Chuanqi Li
2014-11-01
The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second is based on mutual information, which provides a general measure of the strength of non-monotonic association between two variables. Both methods are based on Latin hypercube sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to uncertainty in its input parameters.
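A minimal sketch of PRCC for a two-parameter toy model: with a single covariate, the controlling step reduces to the textbook partial-correlation identity applied to rank-transformed data. The model and sample size are assumptions, and the mutual-information half of the comparison is omitted for brevity.

```python
import math
import random

def ranks(values):
    """Rank-transform to 1..n (ties broken by input order)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        out[i] = rank
    return out

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def prcc(x, z, y):
    """Partial rank correlation of x with y, controlling for one covariate z,
    via r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1-r_xz^2)(1-r_yz^2)) on ranks."""
    rx, rz, ry = ranks(x), ranks(z), ranks(y)
    r_xy, r_xz, r_yz = pearson(rx, ry), pearson(rx, rz), pearson(rz, ry)
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

random.seed(3)
n = 1000
x1 = [random.random() for _ in range(n)]
x2 = [random.random() for _ in range(n)]                  # deliberately uninfluential
y = [math.exp(2 * a) + random.gauss(0, 0.5) for a in x1]  # monotonic (nonlinear) in x1

print(round(prcc(x1, x2, y), 2))  # strong positive: x1 drives the output
print(round(prcc(x2, x1, y), 2))  # weak: x2 has no effect
```

Because the dependence on x1 is monotonic but nonlinear, the rank transform is what lets PRCC detect it; a non-monotonic dependence would need the mutual-information measure described in the abstract.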
76 FR 64931 - Building Energy Codes Cost Analysis
2011-10-19
...-0046] Building Energy Codes Cost Analysis. AGENCY: Office of Energy Efficiency and Renewable Energy... reopening of the time period for submitting comments on the request for information on Building Energy Codes Cost Analysis... respondents should provide docket number EERE-2011...
Cost-benefit analysis and non-utilitarian ethics
Lowry, R.J.; Peterson, M.B.
2012-01-01
Cost-benefit analysis is commonly understood to be intimately connected with utilitarianism and incompatible with other moral theories, particularly those that focus on deontological concepts such as rights. We reject this claim and argue that cost-benefit analysis can take moral rights as well as
How (not) to Lie with Benefit-Cost Analysis
Scott Farrow
2013-01-01
Benefit-cost analysis is seen by some as a controversial activity in which the analyst can significantly bias the results. This note highlights some of the ways analysts can "lie" in a benefit-cost analysis but, more importantly, provides guidance on how not to lie and how to better inform public decision makers.
22 CFR 226.45 - Cost and price analysis.
2010-04-01
22 Foreign Relations 1 (2010-04-01): Foreign Relations, AGENCY FOR INTERNATIONAL DEVELOPMENT, ADMINISTRATION OF ASSISTANCE AWARDS TO U.S. NON-GOVERNMENTAL ORGANIZATIONS, Post-award Requirements, Procurement Standards, § 226.45 Cost and price analysis. Some...
Cost analysis and cost justification of automated data processing in the clinical laboratory.
Westlake, G E
1983-03-01
Prospective cost analysis of alternative data processing systems can be facilitated by proper selection of the costs to be analyzed and realistic appraisal of the effect on staffing. When comparing projects with dissimilar cash flows, techniques such as analysis of net present value can be helpful in identifying financial benefits. Confidence and accuracy in prospective analyses will increase as more retrospective studies are published. Several accounts now in the literature describe long-term experience with turnkey laboratory information systems. Acknowledging the difficulty in longitudinal studies, they all report favorable effects on labor costs and recovery of lost charges. Enthusiasm is also expressed for the many intangible benefits of the systems. Several trends suggest that cost justification and cost effectiveness will be more easily demonstrated in the future. These are the rapidly decreasing cost of hardware (with corresponding reduction in service costs) and the entry into the market of additional systems designed for medium to small hospitals. The effect of broadening the sales base may be lower software prices. Finally, operational and executive data management and reporting are destined to become the premier extensions of the LIS for cost justification. Aptly applied, these facilities can promote understanding of costs, control of costs, and greater efficiency in providing laboratory services.
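The net-present-value comparison mentioned above can be sketched as follows. The cash flows and the 8% discount rate are hypothetical, chosen only to show how NPV puts projects with dissimilar cash flows on a common footing.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t (t = 0, 1, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical data-processing projects with dissimilar cash flows:
# A: large up-front license cost, larger annual labor savings.
# B: cheap entry, smaller annual savings.
project_a = [-120_000] + [50_000] * 5
project_b = [-40_000] + [25_000] * 5

rate = 0.08
print(f"NPV A: {npv(rate, project_a):,.0f}")
print(f"NPV B: {npv(rate, project_b):,.0f}")
```

Simply summing the undiscounted cash flows would overstate the later savings; discounting makes the two projects directly comparable despite their different spending profiles.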
Analysis of costs of transrectal prostate biopsy.
Fandella, Andrea
2011-01-01
Literature reports mortality and morbidity data on prostatic carcinoma which permit a better use of some routine diagnostic tools, such as transrectal ultrasound-guided biopsy. The aim of this work is to quantify the overall cost of transrectal ultrasound biopsy of the prostate (TRUSB) and to assess the economic impact of current procedures for diagnosing prostatic carcinoma. The total cost of TRUSB was calculated with reference to 247 procedures performed in 2008. The following cost factors were evaluated: personnel, materials, maintenance/depreciation of the equipment, energy consumption, and hospital overheads. A literature review was also carried out to check whether our extrapolated costs corresponded to those of other authors worldwide, and to consider them in the wider framework of the economic effectiveness of strategies for early diagnosis of cancer of the prostate. The overall cost of TRUSB (8 samples) was EUR 249,000, obtained by adding together the costs of personnel (EUR 160,000), materials (EUR 59,000), equipment maintenance and depreciation (EUR 12,400), energy consumption (EUR 100), and hospital overheads (EUR 17,500). With extended or saturation biopsies the cost increases, owing to the additional time needed by pathologists, and can be calculated as EUR 300,000. The literature review points to TRUSB as an invasive tool for diagnosing prostatic carcinoma that is clinically and economically controversial. Post-mortem data report the presence of cancer cells in the prostate of 50% of 70-year-old men, while extrapolations indicate morbidity from prostatic carcinoma in 9.5% of 50-year-old men. It is therefore obvious that randomized prostatic biopsies, regardless of method, have a good probability of being positive. This probability varies with the patient's age, the level of prostate-specific antigen (PSA), the density of PSA per cm3 of prostate volume (PSAD), and detection by digital exploration and/or positive transrectal ultrasound. CONCLUSIONS. Despite the severe...
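The itemized figures can be cross-checked with a few lines of arithmetic. This reads the garbled "EUR0,1" for energy consumption as EUR 100, an assumption, but one that makes the components sum exactly to the reported EUR 249,000 total.

```python
# Itemized cost components for TRUSB as reported (EUR); energy consumption
# is assumed to be EUR 100, which reconciles the components with the total.
components = {
    "personnel": 160_000,
    "materials": 59_000,
    "equipment maintenance/depreciation": 12_400,
    "hospital overheads": 17_500,
    "energy consumption": 100,
}

total = sum(components.values())
procedures = 247
print(f"total: EUR {total:,}")  # EUR 249,000, matching the reported overall cost
print(f"per procedure: EUR {total / procedures:,.0f}")
```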
Processing Cost Analysis for Biomass Feedstocks
Energy Technology Data Exchange (ETDEWEB)
Badger, P.C.
2002-11-20
The receiving, handling, storing, and processing of woody biomass feedstocks is an overlooked component of biopower systems. The purpose of this study was twofold: (1) to identify and characterize all the receiving, handling, storing, and processing steps required to make woody biomass feedstocks suitable for use in direct combustion and gasification applications, including small modular biopower (SMB) systems, and (2) to estimate the capital and operating costs at each step. Since biopower applications can be varied, a number of conversion systems and feedstocks required evaluation. In addition to limiting this study to woody biomass feedstocks, the boundaries of this study were from the power plant gate to the feedstock entry point into the conversion device. Although some power plants are sited at a source of wood waste fuel, it was assumed for this study that all wood waste would be brought to the power plant site. This study was also confined to the following three feedstocks (1) forest residues, (2) industrial mill residues, and (3) urban wood residues. Additionally, the study was confined to grate, suspension, and fluidized bed direct combustion systems; gasification systems; and SMB conversion systems. Since scale can play an important role in types of equipment, operational requirements, and capital and operational costs, this study examined these factors for the following direct combustion and gasification system size ranges: 50, 20, 5, and 1 MWe. The scope of the study also included: Specific operational issues associated with specific feedstocks (e.g., bark and problems with bridging); Opportunities for reducing handling, storage, and processing costs; How environmental restrictions can affect handling and processing costs (e.g., noise, commingling of treated wood or non-wood materials, emissions, and runoff); and Feedstock quality issues and/or requirements (e.g., moisture, particle size, presence of non-wood materials). The study found that over the
Energy Technology Data Exchange (ETDEWEB)
Vijayakumar, P.; Pandian, Muthu Senthil; Ramasamy, P., E-mail: ramasamyp@ssn.edu.in [SSN Research Centre, SSN College of Engineering, Kalavakkam-603 110, Chennai, Tamilnadu (India)
2015-06-24
Vanadium oxide nanostars were synthesized by a chemical method. The prepared vanadium oxide (VO) nanostars were introduced into a dye-sensitized solar cell (DSSC) as the counter electrode (CE) catalyst, to replace expensive platinum (Pt). The products were characterized by X-ray diffractometry (XRD), scanning electron microscopy (SEM), and the Brunauer–Emmett–Teller (BET) method. The photovoltaic performance of the DSSC with the VO counter electrode was evaluated under simulated standard global AM 1.5G sunlight (100 mW/cm²). The solar-to-electrical energy conversion efficiency (η) of the DSSC was found to be 0.38%. This work expands the range of counter electrode catalysts, which can help reduce the cost of DSSCs and thereby encourage their fundamental research and commercial application.
Directory of Open Access Journals (Sweden)
Mark Kenneth Quinn
2017-07-01
Measurements of pressure-sensitive paint (PSP) have been performed using new, non-scientific imaging technology based on machine vision tools. Machine vision camera systems are typically used for automated inspection or process monitoring. Such devices offer the benefits of lower cost and reduced size compared with typical scientific-grade cameras; however, their optical qualities and suitability have yet to be determined. This research intends to show relevant imaging characteristics and the applicability of such imaging technology for PSP. Camera performance is benchmarked against standard scientific imaging equipment, and subsequent PSP tests are conducted using a static calibration chamber. The findings demonstrate that machine vision technology can be used for PSP measurements, opening up the possibility of performing measurements on board small-scale models such as those used for wind tunnel testing, or measurements in confined spaces with limited optical access.
Quinn, Mark Kenneth; Spinosa, Emanuele; Roberts, David A
2017-07-25
Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln
2014-01-01
Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in patients with unresectable complex (Bismuth type II-IV) HCA. A decision-analytic Markov model was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment and utilities of each Markov state were retrieved from hospital charges and from unresectable HCA patients at a tertiary care hospital in Thailand, respectively. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent; the incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared with plastic stent drainage in unresectable complex HCA.
Structure and sensitivity analysis of individual-based predator–prey models
International Nuclear Information System (INIS)
Imron, Muhammad Ali; Gergs, Andre; Berger, Uta
2012-01-01
The expensive computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A screening technique with relatively cheap computational cost, the Morris method, was chosen to assess the relative effects of all parameters on the models' outputs and to gain insights into predator–prey systems. The structure and results of the sensitivity analyses of the Sumatran tiger model (the Panthera Population Persistence, PPP) and the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and in the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential: the growth rate of prey and the hunting radius of tigers in the PPP model, and the attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and of screening design methods in particular, during early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models, for both practical conservation and conceptual understanding. Highlights: the structure of the predation processes is similar in the tiger and backswimmer models; the two individual-based models (IBMs) differ in their space formulations; in both models, foraging distance is among the sensitive parameters; the Morris method is applicable to the sensitivity analysis even of complex IBMs.
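A sketch of elementary-effects screening in the spirit of the Morris method: a simplified radial one-at-a-time variant rather than the full trajectory design, applied to an illustrative toy model rather than the PPP or NFM models.

```python
import random

def morris_mu_star(model, d, r=50, delta=0.25, seed=7):
    """Morris-style screening: mean absolute elementary effect (mu*) per parameter.

    For r random base points in [0, 1-delta]^d, perturb one parameter at a
    time by delta and record |f(x + delta*e_i) - f(x)| / delta.
    """
    rng = random.Random(seed)
    mu_star = [0.0] * d
    for _ in range(r):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(d)]
        y0 = model(base)
        for i in range(d):
            stepped = list(base)
            stepped[i] += delta
            ee = (model(stepped) - y0) / delta
            mu_star[i] += abs(ee) / r
    return mu_star

# Toy model: strong linear effect of x0, weak nonlinear effect of x1, x2 inert.
def f(x):
    return 10 * x[0] + x[1] ** 2

scores = morris_mu_star(f, d=3)
print([round(s, 2) for s in scores])  # ranks x0 highest, x2 lowest
```

The screen needs only r*(d+1) model runs, which is why it suits expensive individual-based models: uninfluential parameters (like x2 here) can be fixed before any costlier variance-based analysis.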
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
Cost analysis of light water reactor power plants
International Nuclear Information System (INIS)
Mooz, W.E.
1978-06-01
A statistical analysis is presented of the capital costs of light water reactor (LWR) electrical power plants. The objective is twofold: to determine what factors are statistically related to capital costs and to produce a methodology for estimating these costs. The analysis in the study is based on the time and cost data that are available on U.S. nuclear power plants. Out of a total of about 60 operating plants, useful capital-cost data were available on only 39 plants. In addition, construction-time data were available on about 65 plants, and data on completed construction permit applications were available for about 132 plants. The cost data were first systematically adjusted to constant dollars. Then multivariate regression analyses were performed by using independent variables consisting of various physical and locational characteristics of the plants. The dependent variables analyzed were the time required to obtain a construction permit, the construction time, and the capital cost
Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.
Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun
2017-12-01
Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate allergen sensitization characteristics according to gender, we used the multiple allergen simultaneous test (MAST), which is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters, each with characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, manifesting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.
A general first-order global sensitivity analysis method
International Nuclear Information System (INIS)
Xu Chonggang; Gertner, George Zdzislaw
2008-01-01
Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
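The core FAST mechanism, assigning each parameter a driver frequency and reading its variance contribution off the output's Fourier spectrum, can be sketched for an additive toy model. The frequencies, sample size, and number of harmonics are illustrative; this is the classical first-order FAST, without the two improvements the paper synthesizes.

```python
import math

def fast_first_order(model, freqs, n=1025, harmonics=4):
    """Classical FAST first-order indices (sketch).

    Each parameter is driven along the search curve
        x_i(s) = 0.5 + (1/pi) * arcsin(sin(freqs[i] * s)),
    which samples x_i uniformly on [0, 1] as s sweeps the circle. The
    variance contribution of x_i is read off the Fourier spectrum of the
    output at freqs[i] and its first few harmonics.
    """
    s = [2.0 * math.pi * k / n for k in range(n)]
    y = []
    for sk in s:
        x = [0.5 + math.asin(math.sin(w * sk)) / math.pi for w in freqs]
        y.append(model(x))
    mean = sum(y) / n
    total_var = sum((v - mean) ** 2 for v in y) / n
    indices = []
    for w in freqs:
        vi = 0.0
        for p in range(1, harmonics + 1):
            a = sum(v * math.cos(p * w * sk) for v, sk in zip(y, s)) * 2.0 / n
            b = sum(v * math.sin(p * w * sk) for v, sk in zip(y, s)) * 2.0 / n
            vi += (a * a + b * b) / 2.0
        indices.append(vi / total_var)
    return indices

# Additive test model y = 3*x1 + x2: analytic first-order indices 0.9 and 0.1.
# Frequencies 11 and 35 keep the first four harmonics from colliding.
s1, s2 = fast_first_order(lambda x: 3 * x[0] + x[1], freqs=[11, 35])
print(round(s1, 2), round(s2, 2))
```

The aliasing limitation the abstract describes is visible in the frequency choice: if p*freqs[0] coincided with q*freqs[1] for small integers p, q, the two parameters' spectral contributions would be confounded.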
Cost-identification analysis of total laryngectomy: an itemized approach to hospital costs.
Dedhia, Raj C; Smith, Kenneth J; Weissfeld, Joel L; Saul, Melissa I; Lee, Steve C; Myers, Eugene N; Johnson, Jonas T
2011-02-01
To understand the contribution of intraoperative and postoperative hospital costs to total hospital costs, examine the costs associated with specific hospital services in the postoperative period, and recognize the impact of patient factors on hospital costs. Case series with chart review. Large tertiary care teaching hospital system. Using the Pittsburgh Head and Neck Organ-Specific Database, 119 patients were identified as having total laryngectomy with bilateral selective neck dissection and primary closure from 1999 to 2009. Cost data were obtained for 112 patients. Costs include fixed and variable costs, adjusted to 2010 US dollars using the Consumer Price Index. Mean total hospital costs were $29,563 (range, $10,915 to $120,345). Operating room costs averaged 24% of total hospital costs, whereas room charges, respiratory therapy, laboratory, pharmacy, and radiology accounted for 38%, 14%, 8%, 7%, and 3%, respectively. Median length of stay was 9 days (range, 6-43), and median Charlson comorbidity index score was 8 (2-16). Patients with ≥1 day in the intensive care unit had significantly higher hospital costs ($46,831 vs $24,601). There were no significant cost differences with stratification based on previous radiation therapy ($27,598 vs $29,915 with no prior radiation, P = .62) or hospital readmission within 30 days ($29,483 vs $29,609 without readmission, P = .97). This is one of few studies in surgery and the first in otolaryngology to analyze hospital costs for a relatively standardized procedure. Further work will include cost analysis from multiple centers with investigation of global cost drivers.
Cost benefit analysis of power plant database integration
International Nuclear Information System (INIS)
Wilber, B.E.; Cimento, A.; Stuart, R.
1988-01-01
A cost-benefit analysis of plant-wide data integration allows utility management to evaluate integration and automation benefits from an economic perspective. With this evaluation, the utility can determine both the quantitative and qualitative savings that can be expected from data integration. The cost-benefit analysis is then a planning tool which helps the utility to develop a focused long-term implementation strategy that will yield significant near-term benefits. This paper presents a flexible cost-benefit analysis methodology which is both simple to use and yields accurate, verifiable results. Included in this paper are a list of parameters to consider, a procedure for performing the cost savings analysis, and samples of this procedure when applied to a utility. A case study is presented involving a specific utility where this procedure was applied; the utility's uses of the cost-benefit analysis are also described.
Whittington, Melanie D; Atherly, Adam J; Curtis, Donna J; Lindrooth, Richard C; Bradley, Cathy J; Campbell, Jonathan D
2017-08-01
Patients in the ICU are at the greatest risk of contracting healthcare-associated infections like methicillin-resistant Staphylococcus aureus. This study calculates the cost-effectiveness of methicillin-resistant S aureus prevention strategies and recommends specific strategies based on screening test implementation. A cost-effectiveness analysis using a Markov model from the hospital perspective was conducted to determine if the implementation costs of methicillin-resistant S aureus prevention strategies are justified by associated reductions in methicillin-resistant S aureus infections and improvements in quality-adjusted life years. Univariate and probabilistic sensitivity analyses determined the influence of input variation on the cost-effectiveness. ICU. Hypothetical cohort of adults admitted to the ICU. Three prevention strategies were evaluated, including universal decolonization, targeted decolonization, and screening and isolation. Because prevention strategies have a screening component, the screening test in the model was varied to reflect commonly used screening test categories, including conventional culture, chromogenic agar, and polymerase chain reaction. Universal and targeted decolonization are less costly and more effective than screening and isolation. This is consistent for all screening tests. When compared with targeted decolonization, universal decolonization is cost-saving to cost-effective, with maximum cost savings occurring when a hospital uses more expensive screening tests like polymerase chain reaction. Results were robust to sensitivity analyses. As compared with screening and isolation, the current standard practice in ICUs, targeted decolonization, and universal decolonization are less costly and more effective. This supports updating the standard practice to a decolonization approach.
An analysis of nuclear power plant operating costs
International Nuclear Information System (INIS)
1988-01-01
This report presents the results of a statistical analysis of nonfuel operating costs for nuclear power plants. Most studies of the economic costs of nuclear power have focused on the rapid escalation in the cost of constructing a nuclear power plant. The present analysis found that there has also been substantial escalation in real (inflation-adjusted) nonfuel operating costs. It is important to determine the factors contributing to the escalation in operating costs, not only to understand what has occurred but also to gain insights about future trends in operating costs. There are two types of nonfuel operating costs. The first is routine operating and maintenance expenditures (O and M costs), and the second is large postoperational capital expenditures, typically called ''capital additions.'' O and M costs consist mainly of expenditures on labor, and according to one recently completed study, the majority of employees at a nuclear power plant perform maintenance activities. It is generally thought that capital additions costs consist of large maintenance expenditures needed to keep the plants operational, and of plant modifications (backfits) required by the Nuclear Regulatory Commission (NRC). Many discussions of nuclear power plant operating costs have not considered these capital additions costs, and a major finding of the present study is that these costs are substantial. The objective of this study was to determine why nonfuel operating costs have increased over the past decade. The statistical analysis examined a number of factors that have influenced the escalation in real nonfuel operating costs, and these are discussed in this report. 4 figs, 19 tabs
Evaluation of Cost Models and Needs & Gaps Analysis
DEFF Research Database (Denmark)
Kejser, Ulla Bøgvad
2014-01-01
This report, 'D3.1—Evaluation of Cost Models and Needs & Gaps Analysis', provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders' needs for calculating and comparing financial information. Based on this evaluation, it aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios, which in turn provides a starting point … how they break down costs. This is followed by an in-depth analysis of stakeholders' needs for financial information derived from the 4C project stakeholder consultation. The stakeholders' needs analysis indicated that models should: support accounting, but more importantly enable budgeting; and be able …
Cost benefit analysis for optimization of radiation protection
International Nuclear Information System (INIS)
Lindell, B.
1984-01-01
ICRP recommends three basic principles for radiation protection. The first is justification of the source: any use of radiation should be justified with regard to its benefit. The second is optimization of radiation protection, i.e. all radiation exposure should be kept as low as reasonably achievable. The third principle is that there should be a limit on the radiation dose that any individual receives. Cost-benefit assessment, or cost-benefit analysis, is one tool for achieving the optimization, but optimization is not identical with cost-benefit analysis. In principle, cost-benefit analysis for the optimization of radiation protection seeks the minimum of the sum of the cost of protection and the cost of detriment. (Mori, K.)
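The optimization principle can be illustrated with a toy calculation: choose the collective dose S that minimizes the sum of a protection cost X(S) and a detriment cost Y = alpha * S. Both the protection-cost curve and the monetary value alpha below are invented for illustration; real optimizations use assessed option costs, not a formula:

```python
# Toy ICRP-style optimization: minimize total cost X(S) + ALPHA*S over the
# collective dose S. The cost curve and ALPHA are invented placeholders.

ALPHA = 1000.0  # assumed monetary value per unit collective dose

def protection_cost(S):
    # Assumed: pushing collective dose lower gets progressively more expensive.
    return 5000.0 / S

def total_cost(S):
    return protection_cost(S) + ALPHA * S

# Coarse grid search for the minimizing collective dose.
grid = [0.5 + 0.01 * i for i in range(1000)]
S_opt = min(grid, key=total_cost)
```

For this curve the analytic optimum is S = sqrt(5000/1000) ≈ 2.24: below it, extra protection costs more than the detriment it avoids.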
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
International Nuclear Information System (INIS)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae
2016-01-01
Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. This paper therefore reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. For intervals between plates, the criticality increases with the interval, but beyond 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no common trend could be established for all cases, so a sensitivity analysis of criticality is always required whenever the subject to be analyzed changes
Sensitivity Analysis of Criticality for Different Nuclear Fuel Shapes
Energy Technology Data Exchange (ETDEWEB)
Kang, Hyun Sik; Jang, Misuk; Kim, Seoung Rae [NESS, Daejeon (Korea, Republic of)
2016-10-15
Rod-type nuclear fuel was mainly developed in the past, but recent studies have been extended to plate-type nuclear fuel. This paper therefore reviews the sensitivity of criticality to different nuclear fuel shapes. Criticality analysis was performed using MCNP5, a well-known, general-purpose Monte Carlo N-Particle code for criticality analysis that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems. We performed the sensitivity analysis of criticality for different fuel shapes. For simple fuel shapes, the criticality is proportional to the surface area, but for fuel assembly types it is not. For intervals between plates, the criticality increases with the interval, but beyond 8 mm the trend reverses and the criticality decreases with larger intervals. As a result, no common trend could be established for all cases, so a sensitivity analysis of criticality is always required whenever the subject to be analyzed changes.
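A quick geometric check of the "proportional to surface area" observation for simple shapes: at equal fuel volume, a thin plate exposes far more surface than a cylindrical rod. All dimensions below are invented, and this is pure geometry, not a neutronics calculation:

```python
# Equal-volume surface comparison for simple fuel shapes. Dimensions are
# invented placeholders; no transport physics is computed here.

import math

def rod_surface(radius, length):
    # Cylinder: lateral surface plus two end caps.
    return 2.0 * math.pi * radius * (radius + length)

def plate_surface(width, thickness, length):
    # Rectangular slab: sum of the three face pairs.
    return 2.0 * (width * thickness + width * length + thickness * length)

VOLUME = 100.0  # cm^3, assumed fuel volume
LENGTH = 100.0  # cm, assumed active length

rod_radius = math.sqrt(VOLUME / (math.pi * LENGTH))
rod_area = rod_surface(rod_radius, LENGTH)

plate_thickness = 0.2  # cm, assumed
plate_width = VOLUME / (plate_thickness * LENGTH)
plate_area = plate_surface(plate_width, plate_thickness, LENGTH)
```

By the abstract's proportionality for simple shapes, the plate geometry would therefore show the higher criticality at the same fuel volume.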
Dixit, Chandra K; Kadimisetty, Karteek; Otieno, Brunah A; Tang, Chi; Malla, Spundana; Krause, Colleen E; Rusling, James F
2016-01-21
Early detection and reliable diagnostics are key to effectively designing cancer therapies with better prognoses. The simultaneous detection of panels of biomarker proteins holds great promise as a general tool for reliable cancer diagnostics. A major challenge in designing such a panel is deciding upon a coherent group of biomarkers with higher specificity for a given type of cancer. The second big challenge is developing test devices that measure these biomarkers quantitatively with high sensitivity and specificity, such that there are no interferences from the complex serum or tissue matrices. Lastly, integrating all these tests into a technology that does not require specialized training to operate, and that can be used at the point of care (POC), is another potential bottleneck in future cancer diagnostics. In this article, we review electrochemistry-based tools and technologies developed and/or used in our laboratories to construct low-cost microfluidic protein arrays for the highly sensitive detection of a panel of cancer-specific biomarkers with high specificity, which at the same time have the potential to be translated into POC applications.
Cost-Effectiveness Analysis of Diagnosis of Duchenne/Becker Muscular Dystrophy in Colombia.
Atehortúa, Sara C; Lugo, Luz H; Ceballos, Mateo; Orozco, Esteban; Castro, Paula A; Arango, Juan C; Mateus, Heidi E
2018-03-09
To determine the cost-effectiveness ratio of different courses of action for the diagnosis of Duchenne or Becker muscular dystrophy in Colombia. The cost-effectiveness analysis was performed from the Colombian health system perspective. Decision trees were constructed, and different courses of action were compared considering the following tests: immunohistochemistry (IHC), Western blot (WB), multiplex polymerase chain reaction, multiplex ligation-dependent probe amplification (MLPA), and the complete sequencing of the dystrophin gene. The time horizon matched the duration of sample extraction and analysis. Transition probabilities were obtained from a systematic review. Costs were constructed with a type-case methodology using the consensus of experts and the valuation of resources from consulting laboratories and the 2001 Social Security Institute cost manual. Deterministic sensitivity and scenario analyses were performed with one or more unavailable alternatives. Costs were converted from Colombian pesos to US dollars using the 2014 exchange rate. In the base case, WB was the dominant strategy, with a cost of US $419.07 and a sensitivity of 100%. This approach remains the dominant strategy down to a 98.2% sensitivity and while costs do not exceed US $837.38. If WB was not available, IHC had the best cost-effectiveness ratio, followed by MLPA and sequencing. WB is a cost-effective alternative for the diagnosis of patients suspected of having Duchenne or Becker muscular dystrophy in the Colombian health system. The IHC test is rated as the second-best detection method. If these tests are not available, MLPA followed by sequencing would be the most cost-effective alternative. Copyright © 2018. Published by Elsevier Inc.
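A one-branch sketch of the decision-tree logic: each diagnostic strategy carries a test cost plus an expected penalty for missed cases. The prevalence, follow-up cost, and the IHC inputs below are hypothetical; only the WB cost ($419.07) and its 100% sensitivity come from the abstract:

```python
# One-step expected-cost sketch for comparing diagnostic strategies.
# PREVALENCE, FOLLOWUP_COST, and the IHC figures are invented assumptions.

PREVALENCE = 0.5       # assumed prior probability of disease in referred patients
FOLLOWUP_COST = 800.0  # assumed cost of re-working a missed (false-negative) case

def expected_cost(test_cost, sensitivity):
    p_false_negative = PREVALENCE * (1.0 - sensitivity)
    return test_cost + p_false_negative * FOLLOWUP_COST

wb = expected_cost(test_cost=419.07, sensitivity=1.00)   # Western blot (abstract)
ihc = expected_cost(test_cost=400.00, sensitivity=0.90)  # hypothetical IHC inputs
```

A perfectly sensitive test incurs no false-negative penalty, so WB stays preferred as long as its cost advantage holds, echoing the abstract's threshold analysis.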
Improving the quality of pressure ulcer care with prevention: a cost-effectiveness analysis.
Padula, William V; Mishra, Manish K; Makic, Mary Beth F; Sullivan, Patrick W
2011-04-01
In October 2008, Centers for Medicare and Medicaid Services discontinued reimbursement for hospital-acquired pressure ulcers (HAPUs), thus placing stress on hospitals to prevent incidence of this costly condition. To evaluate whether prevention methods are cost-effective compared with standard care in the management of HAPUs. A semi-Markov model simulated the admission of patients to an acute care hospital from the time of admission through 1 year using the societal perspective. The model simulated health states that could potentially lead to an HAPU through either the practice of "prevention" or "standard care." Univariate sensitivity analyses, threshold analyses, and Bayesian multivariate probabilistic sensitivity analysis using 10,000 Monte Carlo simulations were conducted. Cost per quality-adjusted life-years (QALYs) gained for the prevention of HAPUs. Prevention was cost saving and resulted in greater expected effectiveness compared with the standard care approach per hospitalization. The expected cost of prevention was $7276.35, and the expected effectiveness was 11.241 QALYs. The expected cost for standard care was $10,053.95, and the expected effectiveness was 9.342 QALYs. The multivariate probabilistic sensitivity analysis showed that prevention resulted in cost savings in 99.99% of the simulations. The threshold cost of prevention was $821.53 per day per person, whereas the cost of prevention was estimated to be $54.66 per day per person. This study suggests that it is more cost effective to pay for prevention of HAPUs compared with standard care. Continuous preventive care of HAPUs in acutely ill patients could potentially reduce incidence and prevalence, as well as lead to lower expenditures.
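The dominance result can be reproduced directly from the abstract's expected values:

```python
# Incremental comparison using the expected costs and QALYs reported in the
# abstract: prevention is cheaper AND more effective, i.e. it dominates
# standard care, so no cost-per-QALY ratio needs to be quoted.

cost_prevention, qalys_prevention = 7276.35, 11.241
cost_standard, qalys_standard = 10053.95, 9.342

incremental_cost = cost_prevention - cost_standard     # negative: money saved
incremental_qalys = qalys_prevention - qalys_standard  # positive: QALYs gained
dominant = incremental_cost < 0 and incremental_qalys > 0
```
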
Beyer, Sebastian E; Hunink, Myriam G; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H
2015-07-01
This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80,000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80,000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80,000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. © 2015 The Authors.
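The decision rule used here, net monetary benefit at a willingness-to-pay threshold, is easy to sketch. The $80,000/QALY threshold and the QALY gaps relative to CTA (0.02 and 0.04) come from the abstract; the absolute QALY and cost levels below are invented placeholders:

```python
# Net monetary benefit (NMB) rule: NMB = WTP * QALYs - cost; the strategy
# with the highest NMB is preferred. Absolute QALY/cost values are assumed.

WTP = 80000.0  # willingness to pay, $ per QALY (abstract)

def nmb(qalys, cost):
    return WTP * qalys - cost

strategies = {
    "CTA": nmb(qalys=10.50, cost=25000.0),
    "MRI": nmb(qalys=10.48, cost=27000.0),                # 0.02 QALYs fewer
    "duplex US then CTA": nmb(qalys=10.46, cost=24000.0), # 0.04 QALYs fewer
}
best = max(strategies, key=strategies.get)
```
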
Global sensitivity analysis of computer models with functional inputs
International Nuclear Information System (INIS)
Iooss, Bertrand; Ribatet, Mathieu
2009-01-01
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows estimation of the sensitivity indices of each scalar model input, while the 'dispersion model' allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
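For scalar inputs, the Sobol' indices mentioned above can be estimated with a plain Monte Carlo "pick-freeze" scheme, shown here for a toy linear model with known analytic indices (S1 = 0.8, S2 = 0.2). This generic estimator is a sketch, not the paper's joint GLM/GAM metamodeling approach:

```python
# Pick-freeze estimator of Sobol' first-order indices for Y = 4*X1 + 2*X2
# with independent uniform inputs. Analytically S1 = 16/20, S2 = 4/20.

import random

def model(x1, x2):
    return 4.0 * x1 + 2.0 * x2

def sobol_first_order(n=100_000, seed=1):
    rng = random.Random(seed)
    A = [(rng.random(), rng.random()) for _ in range(n)]
    B = [(rng.random(), rng.random()) for _ in range(n)]
    yA = [model(*x) for x in A]
    mA = sum(yA) / n
    var = sum((y - mA) ** 2 for y in yA) / n
    indices = []
    for i in (0, 1):
        # "Pick-freeze": keep input i from sample A, redraw the other from B.
        yC = [model(A[k][0], B[k][1]) if i == 0 else model(B[k][0], A[k][1])
              for k in range(n)]
        mC = sum(yC) / n
        cov = sum(a * c for a, c in zip(yA, yC)) / n - mA * mC
        indices.append(cov / var)
    return indices

s1, s2 = sobol_first_order()
```
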
Cost-effectiveness analysis of implants versus autologous perforator flaps using the BREAST-Q.
Matros, Evan; Albornoz, Claudia R; Razdan, Shantanu N; Mehrara, Babak J; Macadam, Sheina A; Ro, Teresa; McCarthy, Colleen M; Disa, Joseph J; Cordeiro, Peter G; Pusic, Andrea L
2015-04-01
Reimbursement has been recognized as a physician barrier to autologous reconstruction. Autologous reconstructions are more expensive than prosthetic reconstructions, but provide greater health-related quality of life. The authors' hypothesis is that autologous tissue reconstructions are cost-effective compared with prosthetic techniques when considering health-related quality of life and patient satisfaction. A cost-effectiveness analysis from the payer perspective, including patient input, was performed for unilateral and bilateral reconstructions with deep inferior epigastric perforator (DIEP) flaps and implants. The effectiveness measure was derived using the BREAST-Q and interpreted as the cost for obtaining 1 year of perfect breast health-related quality-adjusted life-year. Costs were obtained from the 2010 Nationwide Inpatient Sample. The incremental cost-effectiveness ratio was generated. A sensitivity analysis for age and stage at diagnosis was performed. BREAST-Q scores from 309 patients with implants and 217 DIEP flap reconstructions were included. The additional cost for obtaining 1 year of perfect breast-related health for a unilateral DIEP flap compared with implant reconstruction was $11,941. For bilateral DIEP flaps compared with implant reconstructions, the cost for an additional breast health-related quality-adjusted life-year was $28,017. The sensitivity analysis demonstrated that the cost for an additional breast health-related quality-adjusted life-year for DIEP flaps compared with implants was less for younger patients and earlier stage breast cancer. DIEP flaps are cost-effective compared with implants, especially for unilateral reconstructions. Cost-effectiveness of autologous techniques is maximized in women with longer life expectancy. Patient-reported outcomes findings can be incorporated into cost-effectiveness analyses to demonstrate the relative value of reconstructive procedures.
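The effectiveness measure feeds a standard incremental cost-effectiveness ratio. The $11,941 target is the abstract's unilateral DIEP-vs-implant result; the absolute cost and effectiveness values used to reproduce it below are hypothetical:

```python
# Incremental cost-effectiveness ratio: extra cost divided by extra
# effectiveness of the more effective option. Inputs below are invented
# numbers chosen only to reproduce the abstract's reported unilateral ICER.

def icer(cost_new, cost_old, eff_new, eff_old):
    return (cost_new - cost_old) / (eff_new - eff_old)

diep_vs_implant = icer(cost_new=30000.0, cost_old=18059.0,
                       eff_new=5.0, eff_old=4.0)  # hypothetical inputs
```
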
Nuclear power company activity based costing management analysis
International Nuclear Information System (INIS)
Xu Dan
2012-01-01
With the development of the nuclear energy industry, nuclear power companies face continual pressure to improve internal management for sustainable market operation. It is therefore imperative that a nuclear power company raise its cost management level and build a competitive cost advantage grounded in nuclear safety. Activity-based costing management (ABCM) shifts the emphasis of cost management from the 'product' to the 'activity', using methods such as value-chain analysis and cost-driver analysis. By analyzing activities and value chains in detail, a company can eliminate unnecessary activities, reduce the resource consumption of necessary activities, and manage cost at its source, thereby reducing cost, boosting efficiency, and realizing management value. From a detailed analysis of nuclear power company procedures and activities, together with a 'piece analysis' of important cost-related projects in the nuclear power company, the study concludes that the activities of a nuclear power company are clearly identifiable, so the ABC method can be applied, and that managing procedures and activities helps realize a nuclear-safety-based low-cost competitive advantage in the nuclear power company. (author)
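The mechanics of activity-based costing reduce to driver-rate allocation: overhead is assigned to cost objects through the activities they consume rather than a single plant-wide rate. The activities, drivers, and figures below are invented for illustration, not from the paper:

```python
# Activity-based costing sketch. All activities, driver volumes, and costs
# are hypothetical placeholders.

activities = {
    # activity: (total annual cost, total annual driver volume)
    "maintenance": (1_200_000.0, 4000.0),  # driver: maintenance hours
    "inspection": (300_000.0, 1500.0),     # driver: inspections performed
    "procurement": (500_000.0, 2500.0),    # driver: purchase orders
}

def driver_rate(activity):
    cost, volume = activities[activity]
    return cost / volume

def cost_of(consumption):
    """consumption: {activity: driver units used by one cost object}."""
    return sum(driver_rate(a) * units for a, units in consumption.items())

# A hypothetical work package priced by the activities it consumes:
outage_cost = cost_of({"maintenance": 120, "inspection": 10, "procurement": 15})
```
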
Hospitalisations and costs relating to ambulatory care sensitive conditions in Ireland.
LENUS (Irish Health Repository)
Sheridan, A
2012-03-08
BACKGROUND: Ambulatory care sensitive conditions (ACSCs) are conditions for which the provision of timely and effective outpatient care can reduce the risk of hospitalisation by preventing, controlling or managing a chronic disease or condition. AIMS: The aims of this study were to report on ACSCs in Ireland and to provide a baseline for future reference. METHODS: Using HIPE, via Health Atlas Ireland, inpatient discharges classified as ACSCs using definitions from the Victorian ACSC study were extracted for the years 2005-2008. Direct methods of standardisation allowed comparison of rates, using the EU standard population as a comparison for national data and the national population as a comparison for county data. Costs were estimated using diagnosis-related groups. RESULTS: The directly age-standardised discharge rate for ACSC-related discharges increased slightly, but non-significantly, from 15.40 per 1,000 population in 2005 to 15.75 per 1,000 population in 2008. The number of discharges increased (9.5%) from 63,619 in 2005 to 69,664 in 2008, with the estimated associated hospital costs increasing (31.5%) from
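Direct standardisation, as used for the ACSC rates above, weights age-specific rates by a fixed standard population so rates are comparable across years or counties. The age bands, counts, and standard weights below are invented:

```python
# Directly age-standardised rate per 1,000 population. All counts are
# hypothetical; only the method mirrors the abstract.

def directly_standardised_rate(cases, population, standard_pop):
    """cases/population/standard_pop: parallel lists by age band."""
    total_std = sum(standard_pop)
    weighted = sum((c / p) * s
                   for c, p, s in zip(cases, population, standard_pop))
    return 1000.0 * weighted / total_std

# Two hypothetical age bands (younger, older):
rate = directly_standardised_rate(
    cases=[200, 1800],
    population=[100_000, 60_000],
    standard_pop=[70_000, 30_000],
)
```
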
Cost Benefit Analysis of Boat Lifts
2014-09-01
… associated with commercial boat lifts were obtained through a market survey based on products advertised for sale to the general public. The information … from the market survey and knowledge of specific boat maintenance items susceptible to cost reduction using a boat lift were then compared to identify … transferred to the Boat Inventory Manager (BIM). Custodians are responsible for maintaining boats and small craft in good working order at all times.
A tool model for predicting atmospheric kinetics with sensitivity analysis
Institute of Scientific and Technical Information of China (English)
[No author listed]
2001-01-01
A package (a tool model) for predicting atmospheric chemical kinetics with sensitivity analysis is presented. A new direct method for calculating the first-order sensitivity coefficients, applying sparse-matrix technology to chemical kinetics, is included in the tool model: it is only necessary to triangularize the matrix related to the Jacobian matrix of the model equation. A Gear-type procedure is used to integrate the model equation and its coupled auxiliary sensitivity-coefficient equations. The FORTRAN subroutines for the model equation, the sensitivity-coefficient equations, and their analytical Jacobian expressions are generated automatically from a chemical mechanism. The kinetic representation of the model equation, its sensitivity-coefficient equations, and their Jacobian matrix is presented. Various FORTRAN subroutines in packages, such as SLODE, modified MA28, and the Gear package, with which the program runs in conjunction, are recommended. The photo-oxidation of dimethyl disulfide is used for illustration.
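The direct method couples each sensitivity coefficient s = dy/dp to the model equation through the Jacobian. For a single first-order reaction y' = -k*y this reduces to s' = -k*s - y, which a short script can integrate. The paper uses a Gear-type stiff integrator; this toy case is non-stiff, so explicit Euler suffices:

```python
# Direct-method sensitivity for the one-reaction mechanism y' = -k*y.
# The sensitivity s = dy/dk obeys s' = -k*s - y (Jacobian term times s,
# plus the partial derivative of the rate with respect to k).

def integrate(k=0.5, t_end=2.0, dt=1e-4):
    y, s = 1.0, 0.0  # initial concentration and sensitivity dy/dk
    n_steps = int(round(t_end / dt))
    for _ in range(n_steps):
        dy = -k * y      # model equation
        ds = -k * s - y  # auxiliary sensitivity-coefficient equation
        y += dt * dy
        s += dt * ds
    return y, s

y_end, s_end = integrate()
# Analytic solution for comparison: y = exp(-k*t), s = -t*exp(-k*t).
```
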
Benefit-cost analysis of OHER research
International Nuclear Information System (INIS)
Nesse, R.J.
1988-01-01
This research was undertaken to estimate the societal benefits and costs of selected past research performed for OHER. Three case studies of representative OHER and DOE research were performed. One of these, the acid rain case study, included research conducted in another office in DOE. The other two cases were the OHER marine research program and the OHER project that developed the high-purity germanium used in radiation detectors. The acid rain case study examined the research benefits and costs of furnace sorbent injection and duct injection, technologies that might reduce acid deposition precursors. Both appeared to show benefits in excess of costs. The study also examined in detail one of the marine research program's accomplishments, the increase in environmental information used by the Outer Continental Shelf leasing program to manage bidding for off-shore oil drilling. The results of an econometric model showed that the environmental information from OHER-supported marine research is unequivocally linked to government and industry leasing decisions. Finally, the germanium case study indicated that the benefits of germanium radiation detectors were significant
International Nuclear Information System (INIS)
Hirano, Emi; Kawabuchi, Koichi; Fuji, Hiroshi; Onoe, Tsuyoshi; Kumar, Vinay; Shirato, Hiroki
2014-01-01
The aim of this study is to evaluate the cost-effectiveness of proton beam therapy with cochlear dose reduction compared with conventional X-ray radiotherapy for medulloblastoma in childhood. We developed a Markov model to describe the health states of 6-year-old children with medulloblastoma after treatment with proton or X-ray radiotherapy. The risks of hearing loss were calculated from the cochlear dose for each treatment. Three types of health-related quality of life (HRQOL) measures, EQ-5D, HUI3 and SF-6D, were used for the estimation of quality-adjusted life years (QALYs). The incremental cost-effectiveness ratio (ICER) for proton beam therapy compared with X-ray radiotherapy was calculated for each HRQOL measure. Sensitivity analyses were performed to model uncertainty in these parameters. The ICERs for EQ-5D, HUI3 and SF-6D were $21,716/QALY, $11,773/QALY, and $20,150/QALY, respectively. One-way sensitivity analyses found that the results were sensitive to the discount rate, the risk of hearing loss after proton therapy, and the costs of proton irradiation. Cost-effectiveness acceptability curve analysis revealed a 99% probability of proton therapy being cost-effective at a societal willingness-to-pay value. Proton beam therapy with cochlear dose reduction improves health outcomes at a cost that is within the acceptable cost-effectiveness range from the payer's standpoint. (author)
Sensitivity analysis of the nuclear data for MYRRHA reactor modelling
International Nuclear Information System (INIS)
Stankovskiy, Alexey; Van den Eynde, Gert; Cabellos, Oscar; Diez, Carlos J.; Schillebeeckx, Peter; Heyse, Jan
2014-01-01
A global sensitivity analysis of the effective neutron multiplication factor k-eff to the change of nuclear data library revealed that the JEFF-3.2T2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than does JEFF-3.1.2. The analysis of the contributions of individual evaluations to the k-eff sensitivity allowed establishing the priority list of nuclides for which uncertainties on nuclear data must be improved. Detailed sensitivity analysis has been performed for two nuclides from this list, Fe-56 and Pu-238. The analysis was based on a detailed survey of the evaluations and experimental data. To track the origin of the differences in the evaluations and their impact on k-eff, the reaction cross-sections and multiplicities in one evaluation have been substituted by the corresponding data from other evaluations. (authors)
Life cycle cost estimation and systems analysis of Waste Management Facilities
International Nuclear Information System (INIS)
Shropshire, D.; Feizollahi, F.
1995-01-01
This paper presents general conclusions from application of a system cost analysis method developed by the United States Department of Energy (DOE), Waste Management Division (WM), Waste Management Facilities Costs Information (WMFCI) program. The WMFCI method has been used to assess the DOE complex-wide management of radioactive, hazardous, and mixed wastes. The Idaho Engineering Laboratory, along with its subcontractor Morrison Knudsen Corporation, has been responsible for developing and applying the WMFCI cost analysis method. The cost analyses are based on system planning level life-cycle costs. The costs for life-cycle waste management activities estimated by WMFCI range from bench-scale testing and developmental work needed to design and construct a facility, facility permitting and startup, operation and maintenance, to the final decontamination, decommissioning, and closure of the facility. For DOE complex-wide assessments, cost estimates have been developed at the treatment, storage, and disposal module level and rolled up for each DOE installation. Discussions include conclusions reached by studies covering complex-wide consolidation of treatment, storage, and disposal facilities, system cost modeling, system costs sensitivity, system cost optimization, and the integration of WM waste with the environmental restoration and decontamination and decommissioning secondary wastes
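A planning-level life-cycle cost roll-up of the kind described can be sketched as a discounted sum over phases, from development through operation to closure. The phases, durations, costs, and discount rate below are illustrative placeholders, not WMFCI data:

```python
# Discounted life-cycle cost roll-up. All figures are invented assumptions.

DISCOUNT_RATE = 0.03  # assumed real discount rate

def present_value(cost, year):
    return cost / (1.0 + DISCOUNT_RATE) ** year

# (phase, start year, annual cost in $, duration in years) -- all assumed
phases = [
    ("development & permitting", 0, 2_000_000.0, 2),
    ("construction & startup", 2, 10_000_000.0, 3),
    ("operation & maintenance", 5, 1_500_000.0, 20),
    ("decontamination & closure", 25, 8_000_000.0, 2),
]

lcc = sum(present_value(annual, start + t)
          for _, start, annual, dur in phases
          for t in range(dur))
```

Discounting makes the present-value total smaller than the simple sum of phase costs, which is why far-future closure costs weigh relatively little in such roll-ups.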
Janssen, Ellen M; Jerome, Gerald J; Dalcin, Arlene T; Gennusa, Joseph V; Goldsholl, Stacy; Frick, Kevin D; Wang, Nae-Yuh; Appel, Lawrence J; Daumit, Gail L
2017-06-01
In the ACHIEVE randomized controlled trial, an 18-month behavioral intervention accomplished weight loss in persons with serious mental illness who attended community psychiatric rehabilitation programs. This analysis estimates costs for delivering the intervention during the study. It also estimates expected costs to implement the intervention more widely in a range of community mental health programs. Using empirical data, costs were calculated from the perspective of a community psychiatric rehabilitation program delivering the intervention. Personnel and travel costs were calculated using time sheet data. Rent and supply costs were calculated using rent per square foot and intervention records. A univariate sensitivity analysis and an expert-informed sensitivity analysis were conducted. With 144 participants receiving the intervention and a mean weight loss of 3.4 kg, costs of $95 per participant per month and $501 per kilogram lost in the trial were calculated. In univariate sensitivity analysis, costs ranged from $402 to $725 per kilogram lost. Through expert-informed sensitivity analysis, it was estimated that rehabilitation programs could implement the intervention for $68 to $85 per client per month. Costs of implementing the ACHIEVE intervention were in the range of other intensive behavioral weight loss interventions. Wider implementation of efficacious lifestyle interventions in community mental health settings will require adequate funding mechanisms. © 2017 The Obesity Society.
Cost Benefit Analysis: Bypass of Prešov city
Directory of Open Access Journals (Sweden)
Margorínová Martina
2017-01-01
The paper describes a decision-making process based on economic evaluation, i.e. cost-benefit analysis, for the motorway bypass of the Prešov city. Three variants were evaluated by means of the Highway Development and Management Tool (HDM-4), a software system for evaluating options for investing in road transport infrastructure. Vehicle operating costs and travel time costs were monetized with the use of the software. The investment opportunities were evaluated in terms of cost-benefit analysis results, i.e. economic indicators.
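Appraisals of this kind typically reduce to discounted-cash-flow indicators such as net present value (NPV) and the benefit-cost ratio (BCR). The cash flows and discount rate below are invented, not the Prešov study's figures or HDM-4 output:

```python
# NPV and BCR of a hypothetical bypass variant. All cash flows are assumed.

RATE = 0.04  # assumed economic discount rate

def npv(flows):
    """flows: list of (year, amount); costs negative, benefits positive."""
    return sum(amount / (1.0 + RATE) ** year for year, amount in flows)

investment = [(0, -50_000_000.0)]  # assumed construction cost in year 0
# Assumed annual savings in vehicle operating costs and travel time:
benefits = [(year, 4_000_000.0) for year in range(1, 31)]

project_npv = npv(investment + benefits)
bcr = npv(benefits) / -npv(investment)
```

A variant is economically justified when NPV is positive (equivalently, BCR exceeds 1) at the chosen discount rate.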
Analysis of costs-benefits tradeoffs of complex security systems
International Nuclear Information System (INIS)
Hicks, M.J.
1996-01-01
Essential to a systems approach to design of security systems is an analysis of the cost effectiveness of alternative designs. While the concept of analysis of costs and benefits is straightforward, implementation can be at the least tedious and, for complex designs and alternatives, can become nearly intractable without the help of structured analysis tools. PACAIT--Performance and Cost Analysis Integrated Tools--is a prototype tool. The performance side of the analysis collates and reduces data from ASSESS, and existing DOE PC-based security systems performance analysis tool. The costs side of the analysis uses ACE, an existing DOD PC-based costs analysis tool. Costs are reported over the full life-cycle of the system, that is, the costs to procure, operate, maintain and retire the system and all of its components. Results are collected in Microsoft reg-sign Excel workbooks and are readily available to analysts and decision makers in both tabular and graphical formats and at both the system and path-element levels
Deterministic Local Sensitivity Analysis of Augmented Systems - I: Theory
International Nuclear Information System (INIS)
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2005-01-01
This work provides the theoretical foundation for the modular implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for large-scale simulation systems. The implementation of the ASAP commences with a selected code module and then proceeds by augmenting the size of the adjoint sensitivity system, module by module, until the entire system is completed. Notably, the adjoint sensitivity system for the augmented system can often be solved by using the same numerical methods used for solving the original, nonaugmented adjoint system, particularly when the matrix representation of the adjoint operator for the augmented system can be inverted by partitioning
Application of Sensitivity Analysis in Design of Sustainable Buildings
DEFF Research Database (Denmark)
Heiselberg, Per; Brohus, Henrik; Rasmussen, Henrik
2009-01-01
satisfies the design objectives and criteria. In the design of sustainable buildings, it is beneficial to identify the most important design parameters in order to develop alternative design solutions more efficiently, or to reach optimized design solutions. Sensitivity analyses make it possible to identify … possible to influence the most important design parameters. A methodology of sensitivity analysis is presented and an application example is given for the design of an office building in Denmark.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The method is called "partial" because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationships of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
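The PRCC calculation described in this abstract can be sketched as follows. This is a minimal NumPy illustration on synthetic inputs (not IMM data): rank-transform all inputs and the output, regress out the other ranked inputs from both sides, then correlate the residuals.

```python
import numpy as np

def _rank(v):
    # Simple rank transform (assumes no ties, as with continuous samples).
    return np.argsort(np.argsort(v)).astype(float)

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each column of X vs y."""
    R = np.column_stack([_rank(c) for c in X.T])
    r_y = _rank(y)
    n, k = R.shape
    out = np.empty(k)
    for j in range(k):
        # Design matrix of all the OTHER ranked inputs plus an intercept.
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        # Residuals after removing the linear effect of the other inputs.
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = r_y - others @ np.linalg.lstsq(others, r_y, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                                  # three toy inputs
y = 5 * X[:, 0] + np.exp(X[:, 1]) + 0.1 * rng.normal(size=500)  # nonlinear model
print(prcc(X, y))  # the unused third input should have PRCC near zero
```

Because only ranks are used, the monotone-but-nonlinear effect of the second input is still detected, which is the property that makes PRCC suitable for models like the IMM.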
Sensitivity analysis of network DEA illustrated in branch banking
N. Avkiran
2010-01-01
Users of data envelopment analysis (DEA) often presume efficiency estimates to be robust. While traditional DEA has been exposed to various sensitivity studies, network DEA (NDEA) has so far escaped similar scrutiny. Thus, there is a need to investigate the sensitivity of NDEA, further compounded by the recent attention it has been receiving in the literature. NDEA captures the underlying performance information found in a firm's interacting divisions or sub-processes that would otherwise remain ...
Sensitivity analysis of periodic errors in heterodyne interferometry
International Nuclear Information System (INIS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-01-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors
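The first stage of the study above, a local sensitivity analysis about nominal input values, can be sketched generically with central finite differences. The error function below is a hypothetical stand-in, not the paper's analytical periodic-error model; only the procedure (normalized local sensitivities about nominal values) is illustrated.

```python
import numpy as np

def local_sensitivities(model, nominal, rel_step=1e-4):
    """Normalized local sensitivities (dE/dp)*(p/E) by central differences
    about the nominal parameter values."""
    p0 = np.asarray(nominal, dtype=float)
    e0 = model(p0)
    sens = np.empty_like(p0)
    for i, p in enumerate(p0):
        h = rel_step * (abs(p) if p != 0 else 1.0)
        up, dn = p0.copy(), p0.copy()
        up[i] += h
        dn[i] -= h
        # Central difference, then normalize by p/E for a unitless sensitivity.
        sens[i] = (model(up) - model(dn)) / (2 * h) * (p if p != 0 else 1.0) / e0
    return sens

# Hypothetical stand-in for a periodic-error amplitude as a function of two
# misalignment angles (radians); the real model has more parameters.
def first_order_error(p):
    alpha, beta = p  # beam-splitter and polarizer misalignment (assumed names)
    return 1e-9 * (1 + np.sin(alpha) ** 2) * np.cos(beta)

print(local_sensitivities(first_order_error, [0.02, 0.01]))
```

A global, variance-based pass (Sobol' indices via Monte Carlo, as in the abstract) would then capture interactions that this purely local view misses.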
Jozaghi, Ehsan
2014-11-13
Smoking crack involves the risk of transmitting diseases such as HIV and hepatitis C (HCV). The current study determines whether the formerly unsanctioned supervised smoking facility (SSF), operated for the last few years by the grassroots organization Vancouver Area Network of Drug Users (VANDU), costs less than the health-care services incurred as a direct consequence of not having such a program in Vancouver, Canada. The data pertaining to attendance at the SSF were gathered in 2012-2013 by VANDU. Relying on these data, a mathematical model was employed to estimate the number of HCV infections prevented by the former facility in Vancouver's Downtown Eastside (DTES). The DTES SSF's benefit-cost ratio was conservatively estimated at 12.1:1 owing to its low operating cost. The study used 70% and 90% initial pipe-sharing rates for sensitivity analysis. At the 80% sharing rate, the marginal HCV cases prevented were determined to be 55, and the marginal cost-effectiveness ratio ranged from $1,705 to $97,203. The results from both the baseline and the sensitivity analysis demonstrated that the establishment of the SSF by VANDU had on average saved CAD$1.8 million annually in taxpayers' money. Funding SSFs in Vancouver is an efficient and effective use of financial resources in the public health domain; therefore, Vancouver Coastal Health should actively participate in their establishment in order to reduce HCV and other blood-borne infections such as HIV among non-injecting drug users.
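The benefit-cost arithmetic behind a ratio like the one reported above can be sketched in a few lines. All figures below are hypothetical placeholders for illustration, not the study's inputs, and the resulting ratio is not the study's 12.1:1.

```python
# Hypothetical figures for illustration only; the study's own inputs differ.
hcv_cases_prevented = 55          # marginal cases at the 80% sharing rate
lifetime_cost_per_hcv = 64_694    # assumed lifetime treatment cost (CAD)
annual_operating_cost = 300_000   # assumed SSF operating cost (CAD)

benefits = hcv_cases_prevented * lifetime_cost_per_hcv
bcr = benefits / annual_operating_cost        # benefit-cost ratio
net_savings = benefits - annual_operating_cost
print(f"BCR = {bcr:.1f}:1, net savings = CAD${net_savings:,}")
```

The sensitivity analysis in the abstract amounts to re-running this arithmetic while varying the sharing-rate assumption (70%, 80%, 90%) and observing how the ratio moves.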
Costs analysis of a population level rabies control programme in Tamil Nadu, India.
Directory of Open Access Journals (Sweden)
Abbas, Syed Shahid; Kakkar, Manish; Rogawski, Elizabeth Tacket
2014-02-01
The study aimed to determine the costs to the state government of implementing different interventions for controlling rabies among the entire human and animal populations of Tamil Nadu. This built upon an earlier assessment of Tamil Nadu's efforts to control rabies. Anti-rabies vaccines were made available at all health facilities. Costs were estimated for five different combinations of animal and human interventions using an activity-based costing approach from the provider perspective. Disease and population data were sourced from the state surveillance data, the human census, and the livestock census. Program costs were extrapolated from official documents. All capital costs were depreciated to estimate annualized costs. All costs were inflated to 2012 rupees. Sensitivity analysis was conducted across all major cost centres to assess their relative impact on program costs. The annual costs of providing anti-rabies vaccine alone and in combination with immunoglobulins were $0.7 million (Rs 36 million) and $2.2 million (Rs 119 million), respectively. For animal-sector interventions, the annualised costs of rolling out surgical sterilisation-immunization, injectable immunization, and oral immunization were estimated at $44 million (Rs 2,350 million), $23 million (Rs 1,230 million), and $11 million (Rs 590 million), respectively. Dog-bite incidence, health-systems coverage, and the cost of rabies biologicals were found to be important cost drivers for the human interventions. For the animal-sector interventions, the size of the dog-catching team, the dog population, and vaccine costs were found to drive the costs. Rabies control in Tamil Nadu seems a costly proposition as currently structured. Policy makers in Tamil Nadu and other similar settings should consider long-term financial sustainability before embarking upon a state- or nation-wide rabies control programme.
2012-01-01
Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of the study.
Sensitivity analysis of the reactor safety study. Final report
International Nuclear Information System (INIS)
Parkinson, W.J.; Rasmussen, N.C.; Hinkle, W.D.
1979-01-01
The Reactor Safety Study (RSS), or WASH-1400, developed a methodology for estimating the public risk from light-water nuclear reactors. To give further insight into this study, a sensitivity analysis was performed to determine the significant contributors to risk for both the PWR and the BWR. The sensitivity to variation of the point values of the failure probabilities reported in the RSS was determined for the safety systems identified therein, as well as for many of the generic classes from which individual failures contributed to system failures. Increasing as well as decreasing point values were considered. An analysis of the sensitivity to increasing uncertainty in system failure probabilities was also performed. The sensitivity parameters chosen were release category probabilities, core melt probability, and the risk parameters of early fatalities, latent cancers, and total property damage; the latter three suffice to describe all public risks identified in the RSS. The results indicate reductions in public risk of less than a factor of two even for factor-of-one-hundred reductions in system or generic failure probabilities. There also appears to be more benefit in monitoring the most sensitive systems to verify adherence to RSS failure rates than in backfitting present reactors. The sensitivity analysis results do indicate, however, possible benefits in reducing human error rates.
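The headline result above, that a hundredfold reduction in one failure probability buys less than a factor of two in total risk, follows whenever the varied input is not the dominant risk contributor. A toy risk model with invented numbers (not the RSS values) makes the mechanism concrete:

```python
import numpy as np

# Hypothetical per-sequence frequencies (/yr) and consequences (early
# fatalities per occurrence); illustrative only, not the RSS point values.
freq = np.array([3e-6, 1e-6, 5e-7])
cons = np.array([10.0, 200.0, 1500.0])

def total_risk(scale_seq0=1.0):
    """Expected consequences/yr, with sequence 0's frequency scaled."""
    f = freq.copy()
    f[0] *= scale_seq0            # vary one system's failure probability
    return float(f @ cons)

base = total_risk()
improved = total_risk(scale_seq0=1e-2)   # factor-100 reduction in one input
print(base / improved)  # total risk falls by well under a factor of two
```

Because the other sequences' contributions are untouched, they set a floor on the achievable risk reduction, which is the logic behind the RSS sensitivity finding.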
Khabbazan, Mohammad Mohammadi; Roshan, Elnaz; Held, Hermann
2017-04-01
In principle, solar radiation management (SRM) offers an option to ameliorate anthropogenic temperature rise. However, we cannot expect it to simultaneously compensate for anthropogenic changes in further climate variables in a perfect manner. Here, we ask to what extent a proponent of the 2°C temperature target would apply SRM in conjunction with mitigation in view of global or regional disparities in precipitation changes. We apply cost-risk analysis (CRA), a decision-analytic framework that trades off the expected welfare loss from climate policy costs against the climate risks from transgressing a climate target. In both global-scale and 'Giorgi'-regional-scale analyses, we evaluate the optimal mixture of SRM and mitigation under probabilistic information about climate sensitivity. To do so, we generalize CRA to include not only temperature risk but also globally aggregated and regionally disaggregated precipitation risks. Social welfare is maximized for three valuation scenarios: temperature-risk-only, precipitation-risk-only, and both risks equally weighted. For now, the Giorgi regions are treated with equal weight. We find that for regionally differentiated precipitation targets, the use of SRM is comparably more restricted. In the course of time, a cooling of up to 1.3°C can be attributed to SRM for the latter scenario and for a median climate sensitivity of 3°C (for a global target only, this number is reduced by 0.5°C). Our results indicate that although SRM would almost completely substitute for mitigation in the globally aggregated analysis, it saves only 70% to 75% of the welfare loss compared to a purely mitigation-based analysis (from economic costs and climate risks, approximately 4% in terms of BGE) when regional precipitation risks are considered in the precipitation-risk-only and both-risks scenarios. It remains to be shown how the inclusion of further risks or different regional weights would
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
Social costs of road crashes: An international analysis.
Wijnen, Wim; Stipdonk, Henk
2016-09-01
This paper provides an international overview of the most recent estimates of the social costs of road crashes: total costs, value per casualty, and breakdown into cost components. The analysis is based on publications about the national costs of road crashes in 17 countries, of which ten are high-income countries (HICs) and seven are low- and middle-income countries (LMICs). Costs are expressed as a proportion of gross domestic product (GDP). Differences between countries are described and explained. These are partly a consequence of differences in road safety levels, but there are also methodological explanations. Countries may or may not correct for underreporting of road crashes, they may or may not use the internationally recommended willingness-to-pay (WTP) method for estimating human costs, and there are methodological differences in the calculation of some other cost components. The analysis shows that the social costs of road crashes in HICs range from 0.5% to 6.0% of GDP, with an average of 2.7%. Excluding countries that do not use a WTP method for estimating human costs and countries that do not correct for underreporting results in average costs of 3.3% of GDP. For LMICs that do correct for underreporting, the share of GDP ranges from 1.1% to 2.9%; however, none of the LMICs included has performed a WTP study of the human costs. A major part of the costs is related to injuries: an average share of 50% for both HICs and LMICs. The average shares of fatalities in the costs are 23% and 30%, respectively. Prevention of injuries is thus important to bring down the socio-economic burden of road crashes. The paper shows that there are methodological differences between countries regarding the cost components taken into account and the methods used to estimate specific cost components. In order to make sound comparisons of the costs of road crashes across countries, (further) harmonization of cost studies is recommended. This can be
Terminal patients in Belgian nursing homes: a cost analysis.
Simoens, Steven; Kutten, Betty; Keirse, Emmanuel; Vanden Berghe, Paul; Beguin, Claire; Desmedt, Marianne; Deveugele, Myriam; Léonard, Christian; Paulus, Dominique; Menten, Johan
2013-06-01
Policy makers and health care payers are concerned about the costs of treating terminal patients. This study was done to measure the costs of treating terminal patients during the final month of life in a sample of Belgian nursing homes from the health care payer perspective. Also, this study compares the costs of palliative care with those of usual care. This multicenter, retrospective cohort study enrolled terminal patients from a representative sample of nursing homes. Health care costs included fixed nursing home costs, medical fees, pharmacy charges, other charges, and eventual hospitalization costs. Data sources consisted of accountancy and invoice data. The analysis calculated costs per patient during the final month of life at 2007/2008 prices. Nineteen nursing homes participated in the study, yielding a total of 181 patients. Total mean nursing home costs amounted to 3,243 € per patient during the final month of life. Total mean nursing home costs per patient of 3,822 € for patients receiving usual care were higher than costs of 2,456 € for patients receiving palliative care (p = 0.068). Higher costs of usual care were driven by higher hospitalization costs (p < 0.001). This study suggests that palliative care models in nursing homes need to be supported, because such care models appear to be less expensive than usual care and because they are likely to better reflect the needs of terminal patients.
Directory of Open Access Journals (Sweden)
Halidi Lyeme
2017-11-01
In this study, a sensitivity analysis of a multi-objective optimization model for solid waste management (SWM) in Dar es Salaam city, Tanzania, is considered. Our objectives were to identify the most sensitive parameters and the effect of other input data on the model output. Five scenarios were considered by varying their associated parameter values. The results showed a decrease in the total cost of the SWM system in all scenarios compared to the baseline solution when a single landfill was considered. Furthermore, the analysis shows that the variable-cost parameter for the processing facilities is highly sensitive: increasing the variable cost rapidly increases the total cost of the SWM system, and vice versa. Relevant suggestions to decision makers are also discussed.
The analysis of cost-effectiveness of implant and conventional fixed dental prosthesis.
Chun, June Sang; Har, Alix; Lim, Hyun-Pil; Lim, Hoi-Jeong
2016-02-01
This study conducted an analysis of the cost-effectiveness of the implant and the conventional fixed dental prosthesis (CFDP) from a single-treatment perspective. A Markov model for cost-effectiveness analysis of the implant and the CFDP was carried out over a maximum of 50 years. Probabilistic sensitivity analysis was performed with 10,000 Monte Carlo simulations, and cost-effectiveness acceptability curves (CEAC) were also presented. Results from meta-analysis studies were used to determine the survival rates and complication rates of the implant and the CFDP. Data regarding the cost of each treatment method were collected from a University Dental Hospital and from Statistics Korea for 2013. Using the results of a patient satisfaction survey study, the quality-adjusted prosthesis years (QAPY) of the implant and CFDP strategies were evaluated with an annual discount rate. When only the direct cost was considered, implants were more cost-effective when the willingness to pay (WTP) was more than 10,000 won at the 10th year after treatment, and more cost-effective regardless of the WTP from the 20th year after the prosthodontic treatment. When the indirect cost was added to the direct cost, implants were more cost-effective only when the WTP was more than 75,000 won at the 10th year after the prosthodontic treatment, and more than 35,000 won at the 20th year. The CFDP was more cost-effective unless the WTP was more than 75,000 won at the 10th year after prosthodontic treatment, but the cost-effectiveness tendency shifted from the CFDP to the implant as time passed.
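The core of a Markov cost-effectiveness comparison like the one above is discounting yearly costs and quality-adjusted prosthesis years over repeated cycles, then comparing strategies at a given willingness to pay. The sketch below uses entirely hypothetical inputs (costs, failure probabilities, quality weights), not the study's Korean data, and collapses the Markov states to "prosthesis surviving / failed" for brevity:

```python
# Minimal discounted cost-effectiveness sketch; all inputs are hypothetical.
def discounted_totals(initial_cost, annual_fail_p, retreat_cost,
                      quality_weight, years, rate=0.03):
    """Return (discounted cost, discounted QAPY) over yearly Markov cycles."""
    cost, qapy, surviving = initial_cost, 0.0, 1.0
    for t in range(1, years + 1):
        d = (1 + rate) ** -t                      # discount factor for year t
        qapy += surviving * quality_weight * d    # quality-adjusted years accrued
        cost += surviving * annual_fail_p * retreat_cost * d  # expected retreatment
        surviving *= (1 - annual_fail_p)          # transition: prosthesis fails
    return cost, qapy

implant = discounted_totals(2500, 0.01, 1500, 0.95, 20)
cfdp = discounted_totals(1200, 0.03, 900, 0.90, 20)

wtp = 50_000  # assumed willingness to pay per QAPY

def nmb(cost, qapy):
    # Net monetary benefit: the strategy with the higher NMB is preferred.
    return wtp * qapy - cost

print(nmb(*implant) > nmb(*cfdp))
```

Sweeping `wtp` and re-comparing the net monetary benefits is exactly what a CEAC summarizes across the Monte Carlo draws.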
Nurse manager succession planning: A cost-benefit analysis.
Phillips, Tracy; Evans, Jennifer L; Tooley, Stephanie; Shirey, Maria R
2018-03-01
This commentary presents a cost-benefit analysis to advocate for the use of succession planning to mitigate the problems ensuing from nurse manager turnover. An estimated 75% of nurse managers will leave the workforce by 2020. Many benefits are associated with proactively identifying and developing internal candidates. Fewer than 7% of health care organisations have implemented formal leadership succession planning programmes. A cost-benefit analysis of a formal succession-planning programme from one hospital illustrates the benefits of the programme in their organisation and can be replicated easily. Assumptions of nursing manager succession planning cost-benefit analysis are identified and discussed. The succession planning exemplar demonstrates the integration of cost-benefit analysis principles. Comparing the costs of a formal nurse manager succession planning strategy with the status quo results in a positive cost-benefit ratio. The implementation of a formal nurse manager succession planning programme effectively reduces replacement costs and time to transition into the new role. This programme provides an internal pipeline of future leaders who will be more successful than external candidates. Using an actual cost-benefit analysis equips nurse managers with valuable evidence depicting succession planning as a viable business strategy. © 2017 John Wiley & Sons Ltd.
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
Probabilistic Sensitivities for Fatigue Analysis of Turbine Engine Disks
Directory of Open Access Journals (Sweden)
Harry R. Millwater
2006-01-01
A methodology is developed and applied that determines the sensitivities of the probability of fracture in a gas turbine disk fatigue analysis with respect to the parameters of the probability distributions describing the random variables. The disk material is subject to initial anomalies, in either low- or high-frequency quantities, such that commonly used materials (titanium, nickel, powder nickel) and common damage mechanisms (inherent defects or surface damage) can be considered. The derivation is developed for Monte Carlo sampling such that the existing failure samples are reused and the sensitivities are obtained with minimal additional computational time. Variance estimates and confidence bounds for the sensitivity estimates are developed. The methodology is demonstrated and verified using a multizone probabilistic fatigue analysis of a gas turbine compressor disk, considering stress scatter, crack growth propagation scatter, and initial crack size as random variables.
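One standard way to get distribution-parameter sensitivities from existing Monte Carlo failure samples, with no extra model runs, is the score-function (likelihood-ratio) estimator: dPf/dθ = E[1{fail} · ∂ln f(x; θ)/∂θ]. Whether this is the paper's exact derivation is not confirmed by the abstract; the sketch below applies the technique to a toy normal input standing in for initial crack size:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 1.0, 0.2                 # hypothetical crack-size distribution (mm)
x = rng.normal(mu, sigma, 200_000)   # the "existing" Monte Carlo samples
fail = x > 1.5                       # stand-in limit state: crack above critical size

pf = fail.mean()                     # probability of fracture

# Score function for the mean of a normal: d(ln f)/d(mu) = (x - mu) / sigma^2.
# Reweighting the SAME samples gives dPf/dmu without re-running the model.
score = (x - mu) / sigma**2
dpf_dmu = np.mean(fail * score)

# Sampling standard error of the sensitivity estimate (confidence bound).
se = np.std(fail * score, ddof=1) / np.sqrt(x.size)
print(pf, dpf_dmu, se)
```

For this toy case the analytic values are Pf = 1 - Φ(2.5) ≈ 0.0062 and dPf/dμ = φ(2.5)/σ ≈ 0.088, so the reuse-the-samples estimate can be checked directly.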
Application of sensitivity analysis for optimized piping support design
International Nuclear Information System (INIS)
Tai, K.; Nakatogawa, T.; Hisada, T.; Noguchi, H.; Ichihashi, I.; Ogo, H.
1993-01-01
The objective of this study was to determine whether recent developments in non-linear sensitivity analysis could be applied to the design of nuclear piping systems that use non-linear supports, and to develop a practical method of designing such piping systems. In the study presented in this paper, the seismic response of a typical piping system was analyzed using a dynamic non-linear FEM, and a sensitivity analysis was carried out. Optimization of the piping system support design was then investigated, selecting the support locations and the yield load of the non-linear supports (bi-linear model) as the main design parameters. It was concluded that the optimized design was a matter of combining overall system reliability with the achievement of an efficient damping effect from the non-linear supports. The analysis also demonstrated that sensitivity factors are useful in the planning stage of support design. (author)
Sensitivity and uncertainty analysis of the PATHWAY radionuclide transport model
International Nuclear Information System (INIS)
Otis, M.D.
1983-01-01
Procedures were developed for the uncertainty and sensitivity analysis of a dynamic model of radionuclide transport through human food chains. Uncertainty in model predictions was estimated by propagation of parameter uncertainties using a Monte Carlo simulation technique. Sensitivity of model predictions to individual parameters was investigated using the partial correlation coefficient of each parameter with model output. Random values produced for the uncertainty analysis were used in the correlation analysis for sensitivity. These procedures were applied to the PATHWAY model which predicts concentrations of radionuclides in foods grown in Nevada and Utah and exposed to fallout during the period of atmospheric nuclear weapons testing in Nevada. Concentrations and time-integrated concentrations of iodine-131, cesium-136, and cesium-137 in milk and other foods were investigated. 9 figs., 13 tabs
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables, yet discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately influential or the most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented, with its convergence rate. • An application is realized for improving the reliability of environmental models.
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations of hive population trajectories are performed, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios against controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate that queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
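The Sobol' decomposition used in the study attributes output variance to individual inputs. Below is a minimal pick-freeze sketch of the first-order and total indices, using the standard Saltelli/Jansen estimators on a hypothetical additive model (VarroaPop itself is not reproduced here):

```python
import random
from statistics import pvariance

def sobol_indices(model, d, n=10000, seed=1):
    """First-order (Saltelli) and total (Jansen) Sobol' index estimators
    via the pick-freeze scheme, with independent uniform [0,1] inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    var_y = pvariance(yA + yB)
    S, ST = [], []
    for i in range(d):
        # A with column i swapped in from B
        yAB = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        S.append(sum(yb * (yab - ya)
                     for yb, yab, ya in zip(yB, yAB, yA)) / n / var_y)
        ST.append(0.5 * sum((ya - yab) ** 2
                            for ya, yab in zip(yA, yAB)) / n / var_y)
    return S, ST

# additive toy model: x0 dominates, x1 is minor, x2 is inert
S, ST = sobol_indices(lambda x: 4.0 * x[0] + x[1], d=3)
```

For this additive model the first-order and total indices coincide; a gap between S and ST would indicate the kind of interaction effects the study probes with second-order indices.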
Cost-Benefit Analysis and the Democratic Ideal
Karine Nyborg; Inger Spangen
1997-01-01
In traditional cost-benefit analyses of public projects, every citizen’s willingness to pay for a project is given an equal weight. This is sometimes taken to imply that cost-benefit analysis is a democratic method for making public decisions, as opposed to, for example, political processes involving log-rolling and lobbying from interest groups. Politicians are frequently criticized for not putting enough emphasis on the cost-benefit analyses when making decisions. In this paper we discuss t...
Cost Per Flying Hour Analysis of the C-141
1997-09-01
Government Printing Office, 1996. Horngren, Charles T. Cost Accounting: A Managerial Emphasis (Eighth Edition). New Jersey: Prentice Hall, 1994. Hough...standard accounting techniques. This analysis of AMC's current costs and their applicability to the price charged to the customer shall be the focus of...(Horngren et al., 1994:864). There are three generally recognized methods of determining a transfer price (Arnstein and Gilabert, 1980:189). Cost based
Cost-effectiveness analysis of sandhill crane habitat management
Kessler, Andrew C.; Merchant, James W.; Shultz, Steven D.; Allen, Craig R.
2013-01-01
Invasive species often threaten native wildlife populations and strain the budgets of agencies charged with wildlife management. We demonstrate the potential of cost-effectiveness analysis to improve the efficiency and value of efforts to enhance sandhill crane (Grus canadensis) roosting habitat. We focus on the central Platte River in Nebraska (USA), a region of international ecological importance for migrating avian species including sandhill cranes. Cost-effectiveness analysis is a valuation process designed to compare alternative actions based on the cost of achieving a pre-determined objective. We estimated costs for removal of invasive vegetation using geographic information system simulations and calculated benefits as the increase in area of sandhill crane roosting habitat. We generated cost effectiveness values for removing invasive vegetation on 7 land parcels and for the entire central Platte River to compare the cost-effectiveness of management at specific sites and for the central Platte River landscape. Median cost effectiveness values for the 7 land parcels evaluated suggest that costs for creating 1 additional hectare of sandhill crane roosting habitat totaled US $1,595. By contrast, we found that creating an additional hectare of sandhill crane roosting habitat could cost as much as US $12,010 for some areas in the central Platte River, indicating substantial cost savings can be achieved by using a cost effectiveness analysis to target specific land parcels for management. Cost-effectiveness analysis, used in conjunction with geographic information systems, can provide decision-makers with a new tool for identifying the most economically efficient allocation of resources to achieve habitat management goals.
Health Care Analysis for the MCRMC Insurance Cost Model
2015-06-01
incentive to reduce utilization Subsidy to leave TRICARE and use other private health insurance Increases in TRICARE premiums and co-pays This...analysis develops the estimated cost of providing health care through a premium -based insurance model consistent with an employer-sponsored benefit...State Income Plan premium data Contract cost data 22 May 2015 9 Agenda Overview Background Data Insurance Cost Estimate Methodology
Directory of Open Access Journals (Sweden)
Julia K Ostermann
Full Text Available The aim of this study was to compare the health care costs for patients using additional homeopathic treatment (homeopathy group) with the costs for those receiving usual care (control group). Cost data provided by a large German statutory health insurance company were retrospectively analysed from the societal perspective (primary outcome) and from the statutory health insurance perspective. Patients in both groups were matched using a propensity score matching procedure based on socio-demographic variables as well as costs, number of hospital stays and sick leave days in the previous 12 months. Total cumulative costs over 18 months were compared between the groups with an analysis of covariance (adjusted for baseline costs) across diagnoses and for six specific diagnoses (depression, migraine, allergic rhinitis, asthma, atopic dermatitis, and headache). Data from 44,550 patients (67.3% females) were available for analysis. From the societal perspective, total costs after 18 months were higher in the homeopathy group (adj. mean: EUR 7,207.72 [95% CI 7,001.14-7,414.29]) than in the control group (EUR 5,857.56 [5,650.98-6,064.13]; p<0.0001), with the largest differences between groups for productivity loss (homeopathy EUR 3,698.00 [3,586.48-3,809.53] vs. control EUR 3,092.84 [2,981.31-3,204.37]) and outpatient care costs (homeopathy EUR 1,088.25 [1,073.90-1,102.59] vs. control EUR 867.87 [853.52-882.21]). Group differences decreased over time. For all diagnoses, costs were higher in the homeopathy group than in the control group, although this difference was not always statistically significant. Compared with usual care, additional homeopathic treatment was associated with significantly higher costs. These analyses did not confirm previously observed cost savings resulting from the use of homeopathy in the health care system.
Ostermann, Julia K.; Reinhold, Thomas; Witt, Claudia M.
2015-01-01
Objectives The aim of this study was to compare the health care costs for patients using additional homeopathic treatment (homeopathy group) with the costs for those receiving usual care (control group). Methods Cost data provided by a large German statutory health insurance company were retrospectively analysed from the societal perspective (primary outcome) and from the statutory health insurance perspective. Patients in both groups were matched using a propensity score matching procedure based on socio-demographic variables as well as costs, number of hospital stays and sick leave days in the previous 12 months. Total cumulative costs over 18 months were compared between the groups with an analysis of covariance (adjusted for baseline costs) across diagnoses and for six specific diagnoses (depression, migraine, allergic rhinitis, asthma, atopic dermatitis, and headache). Results Data from 44,550 patients (67.3% females) were available for analysis. From the societal perspective, total costs after 18 months were higher in the homeopathy group (adj. mean: EUR 7,207.72 [95% CI 7,001.14–7,414.29]) than in the control group (EUR 5,857.56 [5,650.98–6,064.13]; p<0.0001), with the largest differences between groups for productivity loss (homeopathy EUR 3,698.00 [3,586.48–3,809.53] vs. control EUR 3,092.84 [2,981.31–3,204.37]) and outpatient care costs (homeopathy EUR 1,088.25 [1,073.90–1,102.59] vs. control EUR 867.87 [853.52–882.21]). Group differences decreased over time. For all diagnoses, costs were higher in the homeopathy group than in the control group, although this difference was not always statistically significant. Conclusion Compared with usual care, additional homeopathic treatment was associated with significantly higher costs. These analyses did not confirm previously observed cost savings resulting from the use of homeopathy in the health care system. PMID:26230412
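The propensity score matching step described in the study pairs each patient in the homeopathy group with a similar control before costs are compared. A greedy 1:1 nearest-neighbour sketch on toy propensity scores (the study's actual matching procedure and variables are not reproduced):

```python
def greedy_match(treated, controls):
    """Greedy 1:1 nearest-neighbour matching on a scalar propensity
    score; each control is used at most once."""
    pairs = []
    available = dict(enumerate(controls))
    for t_idx, t_score in enumerate(treated):
        if not available:
            break
        c_idx = min(available, key=lambda k: abs(available[k] - t_score))
        pairs.append((t_idx, c_idx))
        del available[c_idx]
    return pairs

# toy propensity scores (estimated probability of choosing homeopathy)
pairs = greedy_match(treated=[0.31, 0.72, 0.55],
                     controls=[0.30, 0.50, 0.58, 0.90])
```

After matching, outcome comparisons (here, cumulative costs) are made between the paired groups, which is what makes the adjusted ANCOVA in the abstract interpretable.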
2001-07-21
APPENDIX A. ACRONYMS ACCES Attenuating Custom Communication Earpiece System ACEIT Automated Cost Estimating Integrated Tools AFSC Air Force...documented in the ACEIT cost estimating tool developed by Tecolote, Inc. The factor used was 14 percent of PMP. 1.3 System Engineering/Program...The data source is the ASC Aeronautical Engineering Products Cost Factor Handbook, which is documented in the ACEIT cost estimating tool developed
National Research Council Canada - National Science Library
Doyle, Michael C
2005-01-01
.../A) and cost management (CM) capabilities. In particular, it supports the Deputy Assistant Secretary of the Army - Cost and Economics' mission to provide DA with cost, performance and economic analysis in the form of expertise, models, data...
Cost analysis of hospitalized Clostridium difficile-associated diarrhea (CDAD)
Directory of Open Access Journals (Sweden)
Hübner, Claudia
2015-10-01
Full Text Available Aim: Clostridium difficile-associated diarrhea (CDAD) causes a heavy financial burden on healthcare systems worldwide. As with all hospital-acquired infections, prolonged hospital stays are the main cost driver. Previous cost studies include only hospital billing data and compare length of stay with that of non-infected patients. To date, a survey of actual costs has not yet been conducted. Method: A retrospective analysis of data for patients with nosocomial CDAD was carried out over a 1-year period at the University Hospital of Greifswald. Based on identification of CDAD-related treatment processes, the costs of hygienic measures, antibiotics and laboratory tests, as well as revenue losses due to bed blockage and increased length of stay, were calculated. Results: 19 patients were included in the analysis. On average, a CDAD patient causes additional costs of €5,262.96. Revenue losses due to extended length of stay account for the largest share, with €2,555.59 per case, followed by losses in revenue due to bed blockage during isolation, with €2,413.08 per case. Overall, these opportunity costs accounted for 94.41% of total costs. In contrast, the costs of hygienic measures (€253.98), pharmaceuticals (€22.88) and laboratory tests (€17.44) are quite low. Conclusion: CDAD results in significant additional costs for the hospital. This survey of actual costs confirms previous study results.
Variance estimation for sensitivity analysis of poverty and inequality measures
Directory of Open Access Journals (Sweden)
Christian Dudel
2017-04-01
Full Text Available Estimates of poverty and inequality are often based on the application of a single equivalence scale, despite the fact that a large number of different equivalence scales can be found in the literature. This paper describes a framework for sensitivity analysis which can be used to account for the variability of equivalence scales, and which allows one to derive variance estimates for the results of sensitivity analysis. Simulations show that this method yields reliable estimates. An empirical application reveals that accounting for both the variability of equivalence scales and sampling variance leads to wide confidence intervals.
Sensitivity analysis of water consumption in an office building
Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan
2018-02-01
This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term field study, a reduction of pressure in its water connection was simulated. A sensitivity analysis of uneven water demand during working time was conducted at various provided pressures and at various time step durations. Correlations between the maximal coefficients of water demand variation during working time and the provided pressure were suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation was pointed out, both for the working hours of all days together and separately for days with identical working hours.
Probabilistic and sensitivity analysis of Botlek Bridge structures
Directory of Open Access Journals (Sweden)
Králik Juraj
2017-01-01
Full Text Available This paper deals with the probabilistic and sensitivity analysis of the largest movable lift bridge in the world. The bridge system consists of six reinforced concrete pylons and two steel decks, each weighing 4,000 tons, connected through ropes with counterweights. The paper focuses on probabilistic and sensitivity analysis as the basis of the dynamic study in the design process of the bridge. The results were of high importance for the practical application and design of the bridge. Model and resistance uncertainties were taken into account in the LHS (Latin hypercube sampling) simulation method.
Applying DEA sensitivity analysis to efficiency measurement of Vietnamese universities
Directory of Open Access Journals (Sweden)
Thi Thanh Huyen Nguyen
2015-11-01
Full Text Available The primary purpose of this study is to measure the technical efficiency of 30 doctorate-granting universities (universities or higher education institutes with PhD training programs) in Vietnam, applying the sensitivity analysis of data envelopment analysis (DEA). The study uses eight sets of input-output specifications, constructed through the replacement as well as the aggregation/disaggregation of variables. The measurement results allow us to examine the sensitivity of the efficiency of these universities to the sets of variables. The findings also show the impact of the variables on their efficiency and its “sustainability”.
Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos
Directory of Open Access Journals (Sweden)
Jianbin Guo
2014-07-01
Full Text Available Global sensitivity analysis is generally used to quantify the influence of uncertain model inputs on the output variability of static models. However, very few approaches can be applied for the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned. The reason is that the static sensitivity may not reflect the complete sensitivity over the entire life cycle. This paper presents a time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol' indices are employed as the time-dependent global sensitivity measure, since they provide accurate information on the selected uncertain inputs. In order to compute Sobol' indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Sobol' indices can then be calculated analytically as a postprocessing of the time-dependent PCE coefficients with almost no additional cost. A test case is used to show how to conduct the proposed method; the approach is then applied to an engineering case, and the time-dependent global sensitivity is obtained for the long-term degeneracy mechanism model.
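For an orthonormal PCE, Sobol' indices follow analytically from the expansion coefficients, which is why the postprocessing step mentioned in the abstract is nearly free. A minimal sketch with made-up coefficients for a hypothetical two-input expansion:

```python
# Orthonormal PCE coefficients for a hypothetical two-input model, keyed by
# multi-index (degree in x1, degree in x2); values are made up for illustration.
coeffs = {
    (0, 0): 1.2,               # mean term, excluded from the variance
    (1, 0): 0.8, (2, 0): 0.3,  # terms in x1 only
    (0, 1): 0.5,               # term in x2 only
    (1, 1): 0.2,               # x1-x2 interaction term
}

# total variance = sum of squared non-mean coefficients (orthonormal basis)
total_var = sum(c * c for a, c in coeffs.items() if any(a))

def pce_first_order(coeffs, i, total_var):
    """First-order Sobol' index of input i: the variance share of the
    basis terms that involve input i and no other input."""
    num = sum(c * c for a, c in coeffs.items()
              if a[i] > 0 and all(v == 0 for j, v in enumerate(a) if j != i))
    return num / total_var

S1 = pce_first_order(coeffs, 0, total_var)  # share of x1 alone
S2 = pce_first_order(coeffs, 1, total_var)  # share of x2 alone
```

In the time-dependent setting of the paper, the coefficients (and hence the indices) are recomputed at each time step, tracing how the variance shares evolve over the life cycle.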
A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.
2013-12-01
Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
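At each sampled point of the parameter space, DELSA combines local derivatives with prior parameter variances into a normalized first-order measure. The sketch below applies that formula to a hypothetical two-parameter model at a single base point (the full method repeats this across many points, which is what makes it "distributed"):

```python
def delsa_first_order(model, theta, sigma2, h=1e-6):
    """Local first-order DELSA measures at one parameter set theta:
    S_j = (dpsi/dtheta_j)^2 * s_j^2 / sum_k (dpsi/dtheta_k)^2 * s_k^2,
    with derivatives taken by forward finite differences and s_k^2 the
    prior variance of parameter k."""
    base = model(theta)
    grads = []
    for j in range(len(theta)):
        pert = list(theta)
        pert[j] += h
        grads.append((model(pert) - base) / h)
    contrib = [g * g * s2 for g, s2 in zip(grads, sigma2)]
    v_local = sum(contrib)
    return [c / v_local for c in contrib]

# hypothetical two-parameter model evaluated at a single base point
S = delsa_first_order(lambda t: 3.0 * t[0] + t[1] ** 2,
                      theta=[1.0, 1.0], sigma2=[1.0, 1.0])
```

Each evaluation needs only one model run per parameter, which is the source of the low computational cost the paper reports relative to the Sobol' method.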
Cost analysis of the treatment of severe acute malnutrition in West Africa.
Isanaka, Sheila; Menzies, Nicolas A; Sayyad, Jessica; Ayoola, Mudasiru; Grais, Rebecca F; Doyon, Stéphane
2017-10-01
We present an updated cost analysis to provide new estimates of the cost of providing community-based treatment for severe acute malnutrition, including expenditure shares for major cost categories. We calculated total and per-child costs from a provider perspective. We categorized costs into three main activities (outpatient treatment, inpatient treatment, and management/administration) and four cost categories within each activity (personnel; therapeutic food; medical supplies; and infrastructure and logistical support). For each category, total costs were calculated by multiplying input quantities expended in the Médecins Sans Frontières nutrition program in Niger during a 12-month study period by 2015 input prices. All children received outpatient treatment, with 43% also receiving inpatient treatment. In this large, well-established program, the average cost per child treated was €148.86, with outpatient and inpatient treatment costs of €75.50 and €134.57 per child, respectively. Therapeutic food (44%, €32.98 per child) and personnel (35%, €26.70 per child) dominated outpatient costs, while personnel (56%, €75.47 per child) dominated the cost of inpatient care. Sensitivity analyses suggested that lowering the prices of medical treatments and therapeutic food had limited effect on total costs per child, while increasing program size and decreasing use of expatriate staff support reduced total costs per child substantially. Updated estimates of severe acute malnutrition treatment cost are substantially lower than previously published values, and important cost savings may be possible with increases in coverage/program size and integration into national health programs. These updated estimates can be used to suggest approaches to improve efficiency and inform national-level resource allocation. © 2016 John Wiley & Sons Ltd.
Siamphukdee, Kanjana; Collins, Frank; Zou, Roger
2013-06-01
Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
SENSITIVITY ANALYSIS OF BUILDING STRUCTURES WITHIN THE SCOPE OF ENERGY, ENVIRONMENT AND INVESTMENT
Directory of Open Access Journals (Sweden)
František Kulhánek
2015-10-01
Full Text Available The primary objective of this paper is to demonstrate the feasibility of sensitivity analysis with the dominant-weight method for structural parts of building envelopes, including energy, ecological and financial assessments, and to select among different designs for the same structural part via multi-criteria assessment, illustrated with theoretical example designs. Multi-criteria assessment (MCA) of different structural designs, or alternatives, aims to find the best available alternative. The sensitivity analysis technique applied in this paper is based on the dominant weighting method. To choose the best thermal insulation design when more than one criterion applies simultaneously, the criteria of total thickness (T), heat transfer coefficient (U) through the cross section, global warming potential (GWP), acidification potential (AP), non-renewable primary energy content (PEI) and cost per m² (C) are investigated for all designs via sensitivity analysis. Three different designs for the external wall (over soil), all consistent with globally suggested energy features for passive house design, are investigated against these six criteria. Sensitivity analysis is carried out by creating a set of scenarios that vary the importance of each criterion. In conclusion, uncertainty in the model output is attributed to the different sources in the model input, and the best available design is thereby determined. The original outlook and the outlook after the sensitivity analysis are visualized, which makes it easy to choose the optimum design with respect to the examined components.
Seismic analysis of steam generator and parameter sensitivity studies
International Nuclear Information System (INIS)
Qian Hao; Xu Dinggen; Yang Ren'an; Liang Xingyun
2013-01-01
Background: The steam generator (SG) serves as the primary means for removing the heat generated within the reactor core and is part of the reactor coolant system (RCS) pressure boundary. Purpose: Seismic analysis is required for the SG, whose seismic category is Cat. I. Methods: The analysis model of the SG, including the moisture separator assembly and the tube bundle assembly, is created herein. The seismic analysis is performed together with the RCS piping and the reactor pressure vessel (RPV). Results: The seismic stress results of the SG are obtained. In addition, the parameter sensitivities of the seismic analysis results are studied, such as the effect of another SG, the supports, the anti-vibration bars (AVBs), and so on. Our results show that the seismic results are sensitive to the support and AVB settings. Conclusions: Guidance and comments on these parameters are summarized for equipment design and analysis, which should be a focus in the research and design of SGs for future new-type NPPs. (authors)
Special waste disposal in Austria - cost benefit analysis
International Nuclear Information System (INIS)
Kuntscher, H.
1983-01-01
The present situation of special waste disposal in Austria is summarized for radioactive and nonradioactive wastes. A cost-benefit analysis for the regular collection, transport and disposal of industrial wastes, especially chemical wastes, is given, and the cost burden for the industry is calculated. (A.N.)
A Comparative Cost Analysis of Picture Archiving and ...
African Journals Online (AJOL)
Method: An incremental cost analysis for chest radiographs, computed tomography and magnetic resonance imaging brain scans, with and without contrast, was performed. The overall incremental cost for PACS in comparison with a conventional radiology site was determined. The net present value was also determined to ...
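The net present value mentioned in the abstract discounts each year's cash flow back to the decision date. A minimal sketch with hypothetical PACS figures (an up-front purchase followed by annual savings; the study's actual data are not reproduced):

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow stream: cashflows[0] occurs now,
    cashflows[t] at the end of year t, discounted at the given rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# hypothetical figures: PACS purchase now, five years of savings over film
flows = [-500_000.0] + [150_000.0] * 5
value = npv(0.08, flows)  # positive NPV favours the PACS investment
```

The choice of discount rate drives the result, which is why life-cycle cost studies such as those elsewhere in this collection run sensitivity analyses over it.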
Infrastructures and Life-Cycle Cost-Benefit Analysis
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
2012-01-01
Design and maintenance of infrastructures using Life-Cycle Cost-Benefit analysis is discussed in this paper, with special emphasis on user costs. This is of great importance for several infrastructures, such as bridges and highways. Repair or/and failure of infrastructures will usually result...
Cost-utility analysis of the National truth campaign to prevent youth smoking.
Holtgrave, David R; Wunderink, Katherine A; Vallone, Donna M; Healton, Cheryl G
2009-05-01
In 2005, the American Journal of Public Health published an article that indicated that 22% of the overall decline in youth smoking that occurred between 1999 and 2002 was directly attributable to the truth social marketing campaign launched in 2000. A remaining key question about the truth campaign is whether the economic investment in the program can be justified by the public health outcomes; that question is examined here. Standard methods of cost and cost-utility analysis were employed in accordance with the U.S. Panel on Cost-Effectiveness in Health and Medicine; a societal perspective was employed. During 2000-2002, expenditures totaled just over $324 million to develop, deliver, evaluate, and litigate the truth campaign. The base-case cost-utility analysis result indicates that the campaign was cost saving; it is estimated that the campaign recouped its costs and that just under $1.9 billion in medical costs was averted for society. Sensitivity analysis indicated that the basic determination of cost effectiveness for this campaign is robust to substantial variation in input parameters. This study suggests that the truth campaign not only markedly improved the public's health but did so in an economically efficient manner.
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Energy Technology Data Exchange (ETDEWEB)
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
Development of hospital data warehouse for cost analysis of DPC based on medical costs.
Muranaga, F; Kumamoto, I; Uto, Y
2007-01-01
To develop a data warehouse system for cost analysis, based on the categories of the diagnosis procedure combination (DPC) system, in which medical costs were estimated by DPC category and factors influencing the balance between costs and fees. We developed a data warehouse system for cost analysis using data from the hospital central data warehouse system. The balance data of patients who were discharged from Kagoshima University Hospital from April 2003 to March 2005 were determined in terms of medical procedure, cost per day and patient admission in order to conduct a drill-down analysis. To evaluate this system, we analyzed cash flow by DPC category of patients who were categorized as having malignant tumors and whose DPC category was reevaluated in 2004. The percentages of medical expenses were highest in patients with acute leukemia, non-Hodgkin's lymphoma, and particularly in patients with malignant tumors of the liver and intrahepatic bile duct. Imaging tests degraded the percentages of medical expenses in Kagoshima University Hospital. These results suggested that cost analysis by patient is important for hospital administration in the inclusive evaluation system using a case-mix index such as DPC.
Cost-effectiveness analysis of treatments for premenstrual dysphoric disorder.
Rendas-Baum, Regina; Yang, Min; Gricar, Joseph; Wallenstein, Gene V
2010-01-01
Premenstrual syndrome (PMS) is reported to affect between 13% and 31% of women. Between 3% and 8% of women are reported to meet criteria for the more severe form of PMS, premenstrual dysphoric disorder (PMDD). Although PMDD has received increased attention in recent years, the cost effectiveness of treatments for PMDD remains unknown. To evaluate the cost effectiveness of the four medications with a US FDA-approved indication for PMDD: fluoxetine, sertraline, paroxetine and drospirenone plus ethinyl estradiol (DRSP/EE). A decision-analytic model was used to evaluate both direct costs (medication and physician visits) and clinical outcomes (treatment success, failure and discontinuation). Medication costs were based on average wholesale prices of branded products; physician visit costs were obtained from a claims database study of PMDD patients and the Agency for Healthcare Research and Quality. Clinical outcome probabilities were derived from published clinical trials in PMDD. The incremental cost-effectiveness ratio (ICER) was calculated using the difference in costs and percentage of successfully treated patients at 6 months. Deterministic and probabilistic sensitivity analyses were used to assess the impact of uncertainty in parameter estimates. Threshold values where a change in the cost-effective strategy occurred were identified using a net benefit framework. Starting therapy with DRSP/EE dominated both sertraline and paroxetine, but not fluoxetine. The estimated ICER of initiating treatment with fluoxetine relative to DRSP/EE was $US4385 per treatment success (year 2007 values). Cost-effectiveness acceptability curves revealed that for ceiling ratios ≥ $US3450 per treatment success, fluoxetine had the highest probability (≥0.37) of being the most cost-effective treatment, relative to the other options. The cost-effectiveness acceptability frontier further indicated that DRSP/EE remained the option with the highest expected net monetary benefit for
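The ICER reported above is simply the cost difference between two strategies divided by the difference in treatment successes. A minimal sketch with hypothetical per-patient figures (not the study's data):

```python
def icer(cost_new, eff_new, cost_ref, eff_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (here, per additional treatment success)."""
    return (cost_new - cost_ref) / (eff_new - eff_ref)

# hypothetical 6-month per-patient costs and success probabilities;
# a strategy "dominates" another when it is both cheaper and more
# effective, in which case no ICER is reported for that comparison
ratio = icer(cost_new=900.0, eff_new=0.62, cost_ref=680.0, eff_ref=0.57)
```

A decision maker then compares the ratio against a willingness-to-pay ceiling, which is what the acceptability curves in the abstract vary.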
Analysis of the production and transaction costs of forest carbon offset projects in the USA.
Galik, Christopher S; Cooley, David M; Baker, Justin S
2012-12-15
Forest carbon offset project implementation costs, comprising both production and transaction costs, could present an important barrier to private landowner participation in carbon offset markets. These costs likewise represent a largely undocumented component of forest carbon offset potential. Using a custom spreadsheet model and accounting tool, this study examines the implementation costs of different forest offset project types operating in different forest types under different accounting and sampling methodologies. Sensitivity results are summarized concisely through response surface regression analysis to illustrate the relative effect of project-specific variables on total implementation costs. Results suggest that transaction costs may represent a relatively small percentage of total project implementation costs, generally less than 25% of the total. Results also show that carbon accounting methods, specifically the method used to establish the project baseline, may be among the most important factors driving implementation costs on a per-ton-of-carbon-sequestered basis, dramatically increasing variability in both transaction and production costs. This suggests that accounting could be a large driver of the financial viability of forest offset projects, with transaction costs likely being of greatest concern to projects at the margin. Copyright © 2012 Elsevier Ltd. All rights reserved.
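Response surface regression, as used above to summarize sensitivity runs, fits a low-order polynomial surface to the model outputs so that each variable's relative effect on cost can be read off the coefficients. A minimal sketch with simulated runs follows; the variables, coefficients and cost model are illustrative assumptions, not the study's spreadsheet model.

```python
import numpy as np

# Hypothetical sensitivity runs: two project-specific variables (say,
# project area in ha and sampling intensity) and the resulting
# implementation cost per tCO2. Numbers are illustrative only.
rng = np.random.default_rng(0)
x1 = rng.uniform(100, 1000, 50)   # project area (ha)
x2 = rng.uniform(0.01, 0.10, 50)  # sampling intensity (fraction of plots)
cost = 30.0 - 0.01 * x1 + 120.0 * x2 + rng.normal(0, 0.5, 50)

# Second-order response surface:
# cost ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
pred = X @ coef

# The fitted first-order coefficients indicate each variable's relative
# effect on total implementation cost across the sampled ranges.
print(dict(zip(["b0", "b1", "b2", "b11", "b22", "b12"], coef.round(4))))
```

In practice the runs would come from the spreadsheet model rather than a random-number generator, but the fitting step is the same.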
Energy Technology Data Exchange (ETDEWEB)
Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.
2014-02-01
This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.
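The framework's second goal, interchangeable models of varying fidelity behind a common interface, can be sketched as follows. This is a generic illustration of the design idea, not the actual API of the software described above; all class and function names are hypothetical.

```python
from abc import ABC, abstractmethod

class RotorModel(ABC):
    """Common interface for interchangeable-fidelity rotor models
    (illustrative only; not the framework's real interface)."""
    @abstractmethod
    def annual_energy(self, rotor_diameter_m: float) -> float:
        """Annual energy production, in MWh."""

class SimpleRotor(RotorModel):
    # Low-fidelity stand-in: energy scales with swept area.
    def annual_energy(self, rotor_diameter_m: float) -> float:
        return 0.35 * 3.14159 * (rotor_diameter_m / 2) ** 2

class CorrectedRotor(RotorModel):
    # Higher-fidelity stand-in: adds a diminishing-returns correction.
    def annual_energy(self, rotor_diameter_m: float) -> float:
        area = 3.14159 * (rotor_diameter_m / 2) ** 2
        return 0.35 * area * (1 - 1e-4 * rotor_diameter_m)

def cost_of_energy(model: RotorModel, diameter_m: float,
                   annual_cost_usd: float) -> float:
    """System-level metric built from whichever fidelity is plugged in."""
    return annual_cost_usd / model.annual_energy(diameter_m)

# Models of different fidelity drop into the same system-level analysis:
for m in (SimpleRotor(), CorrectedRotor()):
    print(type(m).__name__, round(cost_of_energy(m, 126.0, 4.0e6), 2))
```

Keeping the system-level metric ignorant of which fidelity it is given is what lets a sensitivity analysis be repeated across model configurations, as the abstract describes.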
Directory of Open Access Journals (Sweden)
Janfry Sihite
2014-12-01
The ASEAN Open Sky Policy is an ASEAN policy to open the airspace between the ASEAN member countries. Aviation service companies, including low-cost airlines, will face tight competition among ASEAN airline companies. This research aims to explore the effect of price on customer loyalty through the mediating roles of promotion and trust in brand. The sample comprised 100 consumers of the Indonesian low-cost airline Citilink who had just arrived at Soekarno-Hatta International Airport; a bootstrap procedure with 500 sub-samples was applied, and the data were analyzed with partial least squares structural equation modelling. The findings support the price sensitivity of low-cost airline consumers; furthermore, price affects trust in brand more strongly than it affects promotion. The effect of price on consumer loyalty is fully mediated by trust in brand and promotion. Further research should consider price sensitivity when elaborating the decision-making process of low-cost airline consumers.
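The bootstrapped mediation test behind findings like these can be sketched in a simplified form: the indirect effect is the product of the path from the predictor to the mediator and the path from the mediator to the outcome, and resampling yields a confidence interval for that product. The data below are simulated and the single-mediator OLS setup is a simplification of PLS-SEM, used here only to show the resampling logic.

```python
import numpy as np

# Simulated stand-ins for the survey constructs (not the study's data).
rng = np.random.default_rng(1)
n = 100
price = rng.normal(size=n)                           # perceived price
trust = 0.6 * price + rng.normal(scale=0.5, size=n)  # mediator: trust in brand
loyalty = 0.7 * trust + rng.normal(scale=0.5, size=n)

def slope(x, y):
    """OLS slope of y on x (single centered predictor)."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

# Indirect effect = (price -> trust slope) * (trust -> loyalty slope),
# bootstrapped over 500 sub-samples, mirroring the study's resampling count.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(slope(price[idx], trust[idx]) * slope(trust[idx], loyalty[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{lo:.2f}, {hi:.2f}]")
```

A confidence interval that excludes zero is the usual evidence that the mediated path is statistically meaningful.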
Energy Technology Data Exchange (ETDEWEB)
Mendes, Jocelia S.; Ferreira, Andrea L.O. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Silva, Giovanilton F. [Tecnologia Bioenergetica - Tecbio, Fortaleza, CE (Brazil)
2008-07-01
The aim of this work was to simulate and optimize the enzymatic production of biodiesel and to estimate its production cost. A methodology of economic calculation and sensitivity analysis was accordingly developed for the process. Computational software built on the balance equations was used to obtain the biodiesel cost, and the economic analysis was based on the capital cost of the biofuel. The whole process was evaluated through the fixed capital cost, total manufacturing cost, raw material cost, and chemicals cost. The economic calculations showed biodiesel production to be efficient. The model is intended for assessing how changes in the type of oil used affect the estimated biodiesel production cost. (author)
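The kind of sensitivity analysis described above, varying the feedstock oil to see its effect on production cost, can be sketched minimally as a cost function summed over the abstract's components. All cost figures and oil prices below are hypothetical placeholders, not values from the study.

```python
# Minimal one-way sensitivity sketch: total biodiesel cost per litre as a
# function of feedstock oil price. Figures are illustrative assumptions.
def biodiesel_cost_per_litre(oil_price_per_kg, oil_kg_per_litre=0.88,
                             enzyme_cost=0.12, other_manufacturing=0.25,
                             capital_charge=0.10):
    """Sum of raw material, chemicals (enzyme), other manufacturing and
    annualized capital cost components, in $/L."""
    raw_material = oil_price_per_kg * oil_kg_per_litre
    return raw_material + enzyme_cost + other_manufacturing + capital_charge

# Vary the oil price across candidate feedstocks (hypothetical prices):
for oil, price in [("soybean", 0.80), ("palm", 0.60), ("waste oil", 0.30)]:
    print(f"{oil}: ${biodiesel_cost_per_litre(price):.2f}/L")
```

Because raw material dominates the sum, the ranking of feedstocks by oil price carries straight through to the ranking by total cost, which is the point such a sensitivity analysis makes.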
Rudoler, David; de Oliveira, Claire; Jacob, Binu; Hopkins, Melonie; Kurdyak, Paul
2018-01-01
The objective of this article was to conduct a cost analysis comparing the costs of a supportive housing intervention to inpatient care for clients with severe mental illness who were designated alternate level of care while inpatients at the Centre for Addiction and Mental Health in Toronto. The intervention, called the High Support Housing Initiative, was implemented in 2013 through a collaboration between 15 agencies in the Toronto area. The perspective of this cost analysis was that of the Ontario Ministry of Health and Long-Term Care. We compared the cost of inpatient mental health care to high-support housing. Cost data were derived from a variety of sources, including health administrative data, expenditures reported by housing providers, and document analysis. The High Support Housing Initiative was cost saving relative to inpatient care. The average cost savings per diem were between $140 and $160. This amounts to an annual cost savings of approximately $51,000 to $58,000. When tested through sensitivity analysis, the intervention remained cost saving in most scenarios; however, the result was highly sensitive to health system costs for clients of the High Support Housing Initiative program. This study suggests the High Support Housing Initiative is potentially cost saving relative to inpatient hospitalization at the Centre for Addiction and Mental Health.
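The per diem to annual conversion reported above is a straightforward multiplication by days per year, and varying the per diem figure across its stated range is the simplest form of the one-way sensitivity described. A quick check:

```python
# Per diem -> annual savings, swept over the reported $140-$160 range.
def annual_savings(per_diem_savings, days=365):
    """Annual savings implied by a constant per diem saving."""
    return per_diem_savings * days

for per_diem in (140, 160):
    print(f"${per_diem}/day -> ${annual_savings(per_diem):,}/year")
# 140 * 365 = 51,100 and 160 * 365 = 58,400, consistent with the
# reported "approximately $51,000 to $58,000" range.
```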