WorldWideScience

Sample records for sparging optimization decision

  1. Conceptual air sparging decision tool in support of the development of an air sparging optimization decision tool

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-09-01

    The enclosed document describes a conceptual decision tool (hereinafter, Tool) for determining applicability of and for optimizing air sparging systems. The Tool was developed by a multi-disciplinary team of internationally recognized experts in air sparging technology, led by a group of project and task managers at Parsons Engineering Science, Inc. (Parsons ES). The team included Mr. Douglas Downey and Dr. Robert Hinchee of Parsons ES, Dr. Paul Johnson of Arizona State University, Dr. Richard Johnson of Oregon Graduate Institute, and Mr. Michael Marley of Envirogen, Inc. User Community Panel Review was coordinated by Dr. Robert Siegrist of Colorado School of Mines (also of Oak Ridge National Laboratory) and Dr. Thomas Brouns of Battelle/Pacific Northwest Laboratory. The Tool is intended to provide guidance to field practitioners and environmental managers for evaluating the applicability and optimization of air sparging as a remedial action technique.

  2. Existing air sparging model and literature review for the development of an air sparging optimization decision tool

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The objectives of this Report are two-fold: (1) to provide overviews of the state-of-the-art and state-of-the-practice with respect to air sparging technology, air sparging models and related or augmentation technologies (e.g., soil vapor extraction); and (2) to provide the basis for the development of the conceptual Decision Tool. The Project Team conducted an exhaustive review of available literature. The complete listing of the documents, numbering several hundred and reviewed as a part of this task, is included in Appendix A. Even with the large amount of material written regarding the development and application of air sparging, there still are significant gaps in the technical community's understanding of the remediation technology. The results of the literature review are provided in Section 2. In Section 3, an overview of seventeen conceptual, theoretical, mathematical and empirical models is presented. Detailed descriptions of each of the models reviewed are provided in Appendix B. Included in Appendix D is a copy of the questionnaire used to compile information about the models. The remaining sections of the document reflect the analysis and synthesis of the information gleaned during the literature and model reviews. The results of these efforts provide the basis for development of the decision tree and conceptual decision tool for determining applicability and optimization of air sparging. The preliminary decision tree and accompanying information provided in Section 6 describe a three-tiered approach for determining air sparging applicability: comparison with established scenarios; calculation of conceptual design parameters; and the conducting of pilot-scale studies to confirm applicability. The final two sections of this document provide listings of the key success factors which will be used for evaluating the utility of the Decision Tool and descriptions of potential applications for Decision Tool use.

  3. Remedial Process Optimization and Green In-Situ Ozone Sparging for Treatment of Groundwater Impacted with Petroleum Hydrocarbons

    Science.gov (United States)

    Leu, J.

    2012-12-01

    A former natural gas processing station is impacted with TPH and BTEX in groundwater. Air sparging and soil vapor extraction (AS/SVE) remediation systems had previously been operated at the site. Currently, a groundwater extraction and treatment system is operated to remove the chemicals of concern (COC) and prevent the groundwater plume from migrating offsite. A remedial process optimization (RPO) was conducted to evaluate the effectiveness of historic and current remedial activities and recommend an approach to optimize the remedial activities. The RPO concluded that both the AS/SVE system and the groundwater extraction system have reached the practical limits of COC mass removal and COC concentration reduction. The RPO recommended an in-situ chemical oxidation (ISCO) study to evaluate the best ISCO oxidant and approach. An ISCO bench test was conducted to evaluate COC removal efficiency and secondary impacts and to recommend an application dosage. Ozone was selected among four oxidants based on implementability, effectiveness, safety, and media impacts. The bench test concluded that the ozone demand was 8 to 12 mg ozone/mg TPH and that secondary groundwater by-products of ISCO include hexavalent chromium and bromate. The pH also increased moderately during ozone sparging, and the TDS increased by approximately 20% after 48 hours of ozone treatment. Prior to the ISCO pilot study, a capture zone analysis (CZA) was conducted to ensure containment of the injected oxidant within the existing groundwater extraction system. The CZA was conducted through groundwater flow modeling using MODFLOW. The model indicated that 85%, 90%, and 95% of an injected oxidant could be captured when a well pair is injecting and extracting at 2, 5, and 10 gallons per minute, respectively. An ISCO pilot test using ozone was conducted to evaluate operation parameters for ozone delivery. The ozone sparging system consisted of an ozone generator capable of delivering 6 lbs/day ozone through two ozone
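
    The bench-test ozone demand of 8 to 12 mg ozone per mg TPH lends itself to a simple dosing estimate. The sketch below is a hypothetical back-of-the-envelope calculation, not part of the study: the TPH mass in the treatment zone is an assumed value, and only the demand ratio and the cited 6 lbs/day generator capacity come from the abstract.

```python
# Hypothetical ozone-dosing estimate based on the bench-test demand of
# 8-12 mg ozone per mg TPH cited in the abstract. The TPH mass in the
# treatment zone is an assumed illustrative value, not site data.

LB_TO_MG = 453_592.37  # milligrams per pound


def ozone_required_mg(tph_mass_mg: float, demand_mg_per_mg: float) -> float:
    """Total ozone mass (mg) needed to satisfy the oxidant demand."""
    return tph_mass_mg * demand_mg_per_mg


def treatment_days(ozone_mg: float, generator_lb_per_day: float) -> float:
    """Days of continuous sparging needed at a given generator output."""
    return ozone_mg / (generator_lb_per_day * LB_TO_MG)


if __name__ == "__main__":
    tph_mass_mg = 50.0 * 1e6      # assume 50 kg of TPH in the treatment zone
    generator = 6.0               # lbs/day ozone, as cited for the pilot system
    for demand in (8.0, 12.0):    # low and high ends of the bench-test range
        o3 = ozone_required_mg(tph_mass_mg, demand)
        print(f"demand {demand:4.1f} mg/mg -> {o3 / LB_TO_MG:6.0f} lb ozone, "
              f"~{treatment_days(o3, generator):4.0f} days at {generator} lb/day")
```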

  4. Optimization of air-sparged plutonium oxalate/hydroxide precipitators

    International Nuclear Information System (INIS)

    VanderHeyden, W.B.; Yarbro, S.L.; Fife, K.W.

    1997-04-01

    The high cost of waste management and experimental work makes numerical modeling an inexpensive and attractive tool for optimizing and understanding complex chemical processes. Multiphase "bubble" columns are used extensively at the Los Alamos Plutonium Facility for a variety of different applications. No moving parts and efficient mixing characteristics allow them to be used in glovebox operations. Initially, a bubble column for oxalate precipitations is being modeled to identify the effect of various design parameters such as draft tube location, air sparge rate and vessel geometry. Two-dimensional planar and axisymmetric models have been completed and successfully compared to literature data. Also, a preliminary three-dimensional model has been completed. These results are discussed in this report along with future work.

  5. Optimisation and design of nitrogen-sparged fermentative hydrogen production bioreactors

    Energy Technology Data Exchange (ETDEWEB)

    Kraemer, Jeremy T. [CH2M HILL, 255 Consumers Road, Suite 300, Toronto, Ontario, M2J 5B6 (Canada); Bagley, David M. [Department of Civil and Architectural Engineering, University of Wyoming, 1000 E. University Avenue, Department 3295, Laramie, Wyoming 82071 (United States)

    2008-11-15

    The optimisation of nitrogen sparging during fermentative hydrogen production was investigated. A N₂ sparging rate of 12 mL/min/L-liquid was observed to maximise the H₂ yield at approximately 2 mol H₂/mol glucose converted, compared to an H₂ yield of approximately 1 mol H₂/mol glucose converted without any sparging. There was no significant increase in H₂ yield at sparging rates of 12-80 mL/min/L-liquid. The optimum sparging rate was lower than N₂ sparging rates examined in the past (>20 mL/min/L-liquid). To facilitate improved scale-up, the overall volumetric mass-transfer coefficients (KLa) for H₂ and CO₂ were measured and the relationship between the dimensionless Sherwood and Froude numbers was determined. The optimal sparging rate occurred at a KLa value of 5.0 h⁻¹ for H₂, corresponding to a Sherwood number of 4800. By holding the Sherwood number constant upon scale-up, the full-scale KLa can be determined and the appropriate sparging rate can be determined from the corresponding Froude number. The benefits of operating at the optimum sparging rate, including minimising product hydrogen gas dilution and energy use, can thus be achieved in larger-scale systems. (author)
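
    As a rough illustration of the scale-up idea in this abstract (hold the Sherwood number fixed and read off the full-scale KLa), the sketch below uses common textbook group definitions, Sh = KLa·d²/D and Fr = ug/√(g·d); these definitions, the H₂ diffusivity, the vessel diameters and the example gas velocity are assumptions for illustration and may differ from the correlation forms fitted by the authors. The sparging rate itself would then follow from the authors' KLa-Froude relationship, which is not reproduced here.

```python
# Sketch of holding the Sherwood number constant on scale-up to estimate the
# full-scale KLa. The group definitions (Sh = KLa*d^2/D, Fr = ug/sqrt(g*d)),
# the H2 diffusivity and the vessel diameters are illustrative assumptions;
# only Sh = 4800 and the bench-scale optimum KLa = 5 h^-1 come from the abstract.
import math

G = 9.81          # m/s^2, gravitational acceleration
D_H2 = 4.5e-9     # m^2/s, approximate diffusivity of H2 in water near 25 C


def kla_from_sherwood(sh: float, diameter_m: float, diffusivity: float) -> float:
    """KLa (1/s) implied by a target Sherwood number at a given vessel diameter."""
    return sh * diffusivity / diameter_m ** 2


def froude(ug_m_per_s: float, diameter_m: float) -> float:
    """Froude number for a superficial gas velocity and vessel diameter."""
    return ug_m_per_s / math.sqrt(G * diameter_m)


if __name__ == "__main__":
    sh_target = 4800.0                # Sherwood number at the optimal sparging rate
    for d in (0.10, 0.50, 2.00):      # bench- to full-scale diameters (assumed)
        kla = kla_from_sherwood(sh_target, d, D_H2)
        print(f"d = {d:4.2f} m -> KLa = {kla * 3600:7.3f} 1/h")
    # The sparging rate would follow from the authors' KLa-Froude relationship
    # (not reproduced here); froude() only shows where that group enters.
    print(f"Fr = {froude(0.002, 0.10):.4f} for ug = 0.002 m/s, d = 0.10 m (assumed)")
```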

  6. Performance of air sparging systems -- A review of case studies

    International Nuclear Information System (INIS)

    Bass, D.H.; Brown, R.A.

    1995-01-01

    In situ air sparging is a commonly used remediation technology which volatilizes and enhances aerobic biodegradation of contamination in groundwater and saturated zone soil. Recently, some questions have been raised regarding the effectiveness of air sparging. To address these questions, the results of 21 sparging case studies have been compiled to shed light on how well air sparging achieves permanent reduction in groundwater contaminant concentrations. The case studies included both chlorinated solvent and petroleum hydrocarbon contamination, and covered a wide range of soil conditions and sparge system parameters. In each case study, groundwater concentrations were compared before sparging was initiated, just before sparging was terminated, and in the months following shutdown of the sparging system.

  7. Bioventing and air sparging: a field research study

    International Nuclear Information System (INIS)

    Moore, B.J.; Armstrong, J.E.; Hardisty, P.E.; Dupont, R.R.

    1997-01-01

    A study was conducted at Gulf Canada Resources' Strachan Gas Plant in Alberta, in which bioventing and air sparging were used individually and in combination to remediate a free-phase natural gas condensate plume estimated to cover approximately 65,000 m². The condensate was composed of light hydrocarbons. Benzene, toluene, ethylbenzene and total xylenes (BTEX) made up a large portion of the dissolved plume. The objectives of the bioventing program were to: (1) study the use of biodegradation respiration rates and hydrocarbon vapour concentrations as indicators of soil clean-up progress, (2) study the effectiveness of bioventing during winter operations, (3) assess the degree of soil clean-up achievable through bioventing, and (4) evaluate the economics of bioventing. It was shown that bioventing is an economical in-situ remediation technique, costing about $10/m³. Air sparging involves the injection of air below the groundwater table to remove dissolved-phase contaminants in-situ. The objectives of the air sparging program were to: (1) determine the zone of influence achievable through air sparging, (2) assess bioventing for treating hydrocarbon vapours introduced into the unsaturated zone during sparging, and (3) evaluate hydrocarbon mass removal effectiveness due to volatilization and biodegradation. It was shown that 90 per cent of the saturated zone hydrocarbon mass was removed during eight months of air sparging. 11 refs., 1 tab., 5 figs.

  8. Aerosol entrainment from a sparged non-Newtonian slurry

    International Nuclear Information System (INIS)

    Fritz, Brad G.

    2006-01-01

    Aerosol measurements were conducted above a half-scale air sparged mixing tank filled with simulated waste slurry. Three aerosol size fractions were measured at three sampling heights at three different sparging rates using a filter-based ambient air sampling technique. Aerosol concentrations in the head space above the closed tank demonstrated a wide range, varying between 97 µg m⁻³ for PM2.5 and 5650 µg m⁻³ for TSP. The variation in concentrations was a function of sampling height, size fraction and sparging rate. Measured aerosol entrainment coefficients showed good agreement with existing entrainment models. The models evaluated generally overpredicted the entrainment, but were within a factor of two of the measured entrainment. This indicates that the range of applicability of the models may be extendable to include sparged slurries with Bingham plastic rheological properties.

  9. Effect of increased groundwater viscosity on the remedial performance of surfactant-enhanced air sparging

    Science.gov (United States)

    Choi, Jae-Kyeong; Kim, Heonki; Kwon, Hobin; Annable, Michael D.

    2018-03-01

    The effect of groundwater viscosity control on the performance of surfactant-enhanced air sparging (SEAS) was investigated using 1- and 2-dimensional (1-D and 2-D) bench-scale physical models. The viscosity of groundwater was controlled by a thickener, sodium carboxymethylcellulose (SCMC), while an anionic surfactant, sodium dodecylbenzene sulfonate (SDBS), was used to control the surface tension of groundwater. When resident DI water was displaced with a SCMC solution (500 mg/L), a SDBS solution (200 mg/L), and a solution with both SCMC (500 mg/L) and SDBS (200 mg/L), the air saturation for sand-packed columns achieved by air sparging increased by 9.5%, 128%, and 154%, respectively (compared to that of the DI water-saturated column). When the resident water contained SCMC, the minimum air pressure necessary for air sparging processes increased, which is considered to be responsible for the increased air saturation. The extent of the sparging influence zone achieved during the air sparging process using the 2-D model was also affected by viscosity control. Larger sparging influence zones (de-saturated zones due to air injection) were observed for the air sparging processes using the 2-D model initially saturated with high-viscosity solutions than for those without a thickener in the aqueous solution. The enhanced air saturations using SCMC for the 1-D air sparging experiment improved the degradative performance of a gaseous oxidation agent (ozone) during air sparging, as measured by the disappearance of fluorescence (fluorescein sodium salt). Based on the experimental evidence generated in this study, the addition of a thickener to the aqueous solution prior to air sparging increased the degree of air saturation and the sparging influence zone, and enhanced the remedial potential of SEAS for contaminated aquifers.

  10. [Study on the groundwater petroleum contaminant remediation by air sparging].

    Science.gov (United States)

    Wang, Zhi-Qiang; Wu, Qiang; Zou, Zu-Guang; Chen, Hong; Yang, Xun-Chang; Zhao, Ji-Chu

    2007-04-01

    The groundwater petroleum contaminant remediation effect of air sparging was investigated in an oil field. The results show that the soil geological situation has great influence on the air distribution, and the shape of the air distribution is not symmetrical about the air sparging (AS) well axis. The influence distance to the left of the AS well is 6 m, and only 4 m to the right. The petroleum removal rate can reach 70% in the zone with higher air saturation, but only 40% in the zone with lower air saturation, and the average petroleum removal rate reaches 60% in the influence zone after 40 days of continuous air sparging. The petroleum components in groundwater were analyzed by GC/MS (gas chromatography-mass spectrometry) before and after the experiments, respectively. The results show that the petroleum removal rate is related to the components and their properties. The petroleum components with higher volatility are easily removed by volatilization, but those with lower volatility are difficult to remove, so a tailing effect of lingering residual contaminant exists when the air sparging technology is adopted to treat groundwater contaminated by petroleum products.

  11. Liquid entrainment through orifices by sparging gas

    International Nuclear Information System (INIS)

    Bonnet, J.M.; Malara, M.; Amblard, M.; Seiler, J.M.

    2001-01-01

    Corium coolability by water flooding during an MCCI (Molten Corium Concrete Interaction) is still an open problem. Several physical mechanisms have been identified which may significantly reduce and finally stop the ablation of concrete. Among these mechanisms, corium ejection by sparging gas into the overlying water may represent an important contribution. This mechanism was at the origin of a large and coolable debris bed and of volcano formation in the MACE M3B test. This mechanism has also been observed in simulant material tests performed at UCSB and at FZK. The objective of the work described in the present paper is to model this mechanism and to quantify the liquid entrainment rate by sparging gas. (author)

  12. Sustainable operation of submerged Anammox membrane bioreactor with recycling biogas sparging for alleviating membrane fouling.

    Science.gov (United States)

    Li, Ziyin; Xu, Xindi; Xu, Xiaochen; Yang, FengLin; Zhang, ShuShen

    2015-12-01

    A submerged anaerobic ammonium oxidizing (Anammox) membrane bioreactor with recycling biogas sparging for alleviating membrane fouling has been successfully operated for 100 d. Based on the batch tests, a recycling biogas sparging rate of 0.2 m³ h⁻¹ was fixed as an ultimate value for the sustainable operation. The mixed liquor volatile suspended solid (VSS) concentration of the inoculum for the long operation was around 3000 mg L⁻¹. With the recycling biogas sparging rate increasing stepwise from 0 to 0.2 m³ h⁻¹, the reactor reached an influent total nitrogen (TN) up to 1.7 g L⁻¹, a stable TN removal efficiency of 83% and a maximum specific Anammox activity (SAA) of 0.56 kg TN kg⁻¹ VSS d⁻¹. With a recycling biogas sparging rate of 0.2 m³ h⁻¹ (corresponding to an aeration intensity of 118 m³ m⁻² h⁻¹), the membrane operation cycle could be prolonged by around 20 times compared to that without gas sparging. Furthermore, a mechanism of membrane fouling was proposed, and with recycling biogas sparging the rates of increase of the VSS and EPS content in the cake layer were far lower than those without biogas sparging. The TN removal performance and sustainable membrane operation of this system showed the appealing potential of the submerged Anammox MBR with recycling biogas sparging in treating high-strength nitrogen-containing wastewaters.

  13. Supersaturation of dissolved H₂ and CO₂ during fermentative hydrogen production with N₂ sparging.

    Science.gov (United States)

    Kraemer, Jeremy T; Bagley, David M

    2006-09-01

    Dissolved H₂ and CO₂ were measured by an improved manual headspace-gas chromatographic method during fermentative H₂ production with N₂ sparging. Sparging increased the yield from 1.3 to 1.8 mol H₂/mol glucose converted, although H₂ and CO₂ were still supersaturated regardless of sparging. The common assumption that sparging increases the H₂ yield because of lower dissolved H₂ concentrations may be incorrect, because H₂ was not lowered into the range necessary to affect the relevant enzymes. More likely, N₂ sparging decreased the rate of H₂ consumption via lower substrate concentrations.

  14. Effect of biogas sparging on the performance of bio-hydrogen reactor over a long-term operation

    Science.gov (United States)

    Nualsri, Chatchawin; Kongjan, Prawit; Imai, Tsuyoshi

    2017-01-01

    This study aimed to enhance hydrogen production from sugarcane syrup by biogas sparging. A two-stage system consisting of a continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge blanket (UASB) reactor was used to produce hydrogen and methane, respectively. Biogas produced from the UASB was sparged into the CSTR. Results indicated that sparging with biogas increased the hydrogen production rate (HPR) by 35% (from 17.1 to 23.1 L/L.d), resulting from a reduction in the hydrogen partial pressure. A fluctuation of HPR was observed during long-term monitoring because CO2 in the sparging gas and the carbon source in the feedstock were consumed by Enterobacter sp. to produce succinic acid without hydrogen production. The mixed gas released from the CSTR after sparging can be considered as bio-hythane (H2+CH4). In addition, continuous sparging of biogas into the CSTR relieves the partial pressure in the headspace of the methane reactor; consequently, the methane production rate is increased. PMID:28207755

  15. Air Sparging Versus Gas Saturated Water Injection for Remediation of Volatile LNAPL in the Borden Aquifer

    Science.gov (United States)

    Barker, J.; Nelson, L.; Doughty, C.; Thomson, N.; Lambert, J.

    2009-05-01

    In the shallow, rather homogeneous, unconfined Borden sand aquifer, field trials of air sparging (Tomlinson et al., 2003) and pulsed air sparging (Lambert et al., 2009) have been conducted, the latter to remediate a residual gasoline source emplaced below the water table. As well, a supersaturated (with CO2) water injection (SWI) technology, using the inVentures inFusion system, has been trialed in two phases: 1. in the uncontaminated sand aquifer to evaluate the radius of influence, extent of lateral gas movement and gas saturation below the water table, and 2. in a sheet pile cell in the Borden aquifer to evaluate the recovery of volatile hydrocarbon components (pentane and hexane) of an LNAPL emplaced below the water table (Nelson et al., 2008). The SWI injects water supersaturated with CO2. The supersaturated injected water moves laterally away from the sparge point, releasing CO2 over a wider area than does gas sparging from a single well screen. This presentation compares these two techniques in terms of their potential for remediating volatile NAPL components occurring below the water table in a rather homogeneous sand aquifer. Air sparging created a significantly greater air saturation in the vicinity of the sparge well than did the CO2 system (60 percent versus 16 percent) in the uncontaminated Borden aquifer. However, SWI pushed water, still supersaturated with CO2, up to about 2.5 m from the injection well. This would seem to provide a considerable advantage over air sparging from a point, in that gas bubbles are generated at a much larger radius from the point of injection with SWI and so should involve additional gas pathways through a residual NAPL. Overall, air sparging created a greater area of influence, defined by measurable air saturation in the aquifer, but air sparging also injected about 12 times more gas than was injected in the SWI trials. The pulsed air sparging at Borden (Lambert et al.) removed about 20 percent (4.6 kg) of gasoline

  16. Air sparging/high vacuum extraction to remove chlorinated solvents in groundwater and soil

    International Nuclear Information System (INIS)

    Phelan, J.M.; Gilliat, M.D.

    1998-01-01

    An air sparging and high vacuum extraction system was installed as an alternative to a containment pump-and-treat system to reduce the long-term remediation schedule. The site is located at the DOE Mound facility in Miamisburg, Ohio, just south of Dayton. The air sparging system consists of 23 wells interspersed between 17 soil vapor extraction wells. The SVE system has extracted about 1,500 lbs of VOCs in five months. The air sparging system operated for about 6 weeks before shutdown due to suspected biochemical fouling. Technical data are presented on the operating characteristics of the system.

  17. In situ air sparging for bioremediation of groundwater and soils

    International Nuclear Information System (INIS)

    Lord, D.; Lei, J.; Chapdelaine, M.C.; Sansregret, J.L.; Cyr, B.

    1995-01-01

    Activities at a former petroleum products depot resulted in the hydrocarbon contamination of soil and groundwater over a 30,000-m² area. Site remediation activities consisted of three phases: site-specific characterization and treatability study, pilot-scale testing, and full-scale bioremediation. During Phase 1, a series of site/soil/waste characterizations was undertaken to ascertain the degree of site contamination and to determine soil physical/chemical and microbiological characteristics. Treatability studies were carried out to simulate an air sparging process in laboratory-scale columns. Results indicated 42% mineral oil and grease removal and 94% benzene, toluene, ethylbenzene, and xylenes (BTEX) removal over an 8-week period. The removal rate was higher in the unsaturated zone than in the saturated zone. Phase 2 involved pilot-scale testing over a 550-m² area. The radius of influence of the air sparge points was evaluated through measurements of dissolved oxygen concentrations in the groundwater and of groundwater mounding. A full-scale air sparging system (Phase 3) was installed on site and has been operational since early 1994. Physical/chemical and microbiological parameters, and contaminants were analyzed to evaluate the system performance.

  18. Do conventional monitoring practices indicate in situ air sparging performance?

    International Nuclear Information System (INIS)

    Johnson, P.C.

    1995-01-01

    Short-term pilot tests play a key role in the selection and design of in situ air sparging systems. Most pilot tests are less than 24 h in duration and consist of monitoring changes in dissolved oxygen, water levels in wells, soil gas pressures, and soil gas contaminant concentrations while air is injected into the aquifer. These parameters are assumed to be indicators of air sparging feasibility and performance, and are also used in the design of full-scale systems. In this work the authors assess the validity of this critical assumption. Data are presented from a study site where a typical pilot-scale short-term test was conducted, followed by continued operation of a full-scale system for 110 days. Conventional sampling practices were augmented with more discrete and detailed assessment methods. In addition, a tracer gas was used to better understand air distributions, vapor flow paths, and vapor recovery efficiency. The data illustrate that conclusions regarding the performance and applicability of air sparging at the study site vary significantly depending on the monitoring approach used. There was no clear correlation between short-term pilot-test data and extended system performance when using data collected only from conventional groundwater monitoring wells. Attention is focused on petroleum hydrocarbons

  19. Hydrodynamic effects of air sparging on hollow fiber membranes in a bubble column reactor.

    Science.gov (United States)

    Xia, Lijun; Law, Adrian Wing-Keung; Fane, Anthony G

    2013-07-01

    Air sparging is now a standard approach to reduce concentration polarization and fouling of membrane modules in membrane bioreactors (MBRs). The hydrodynamic shear stresses, bubble-induced turbulence and cross flows scour the membrane surfaces and help reduce the deposit of foulants onto the membrane surface. However, detailed quantitative knowledge of the effect of air sparging remains lacking in the literature due to the complex hydrodynamics generated by the gas-liquid flows. To date, there is no valid model that describes the relationship between the membrane fouling performance and the flow hydrodynamics. The present study aims to examine the impact of hydrodynamics induced by air sparging on membrane fouling mitigation in a quantitative manner. A modelled hollow fiber module was placed in a cylindrical bubble column reactor at different axial heights with the trans-membrane pressure (TMP) monitored under constant flux conditions. The configuration of the bubble column without the membrane module immersed was identical to that studied by Gan et al. (2011) using Phase Doppler Anemometry (PDA), to ensure a good quantitative understanding of turbulent flow conditions along the column height. The experimental results showed that the meandering flow regime, which exhibits high flow instability at the 0.3 m height, is more beneficial to fouling alleviation than the steady flow circulation regime at the 0.6 m height. The filtration tests also confirmed the existence of an optimal superficial air velocity beyond which a further increase is of no significant benefit for membrane fouling reduction. In addition, alternate aeration provided by two air stones mounted at opposite ends of the diameter of the bubble column was also studied to investigate the associated flow dynamics and its influence on the membrane filtration performance. It was found that with a proper switching interval and membrane module orientation, the membrane fouling can be effectively

  20. The application of in situ air sparging as an innovative soils and ground water remediation technology

    International Nuclear Information System (INIS)

    Marley, M.C.; Hazebrouck, D.J.; Walsh, M.T.

    1992-01-01

    Vapor extraction (soil venting) has been demonstrated to be a successful and cost-effective remediation technology for removing VOCs from the vadose (unsaturated) zone. However, in many cases, seasonal water table fluctuations, drawdown associated with pump-and-treat remediation techniques, and spills involving dense, non-aqueous phase liquids (DNAPLs) create contaminated soil below the water table. Vapor extraction alone is not considered to be an optimal remediation technology to address this type of contamination. An innovative approach to saturated zone remediation is the use of sparging (injection) wells to inject a hydrocarbon-free gaseous medium (typically air) into the saturated zone below the areas of contamination. The contaminants dissolved in the ground water and sorbed onto soil particles partition into the advective air phase, effectively simulating an in situ air-stripping system. The stripped contaminants are transported in the gas phase to the vadose zone, within the radius of influence of a vapor extraction and vapor treatment system. In situ air sparging is a complex multifluid phase process, which has been applied successfully in Europe since the mid-1980s. To date, site-specific pilot tests have been used to design air-sparging systems. Research is currently underway to develop better engineering design methodologies for the process. Major design parameters to be considered include contaminant type, gas injection pressures and flow rates, site geology, bubble size, injection interval (areal and vertical) and equipment specifications. Correct design and operation of this technology has been demonstrated to achieve ground water cleanup of VOC contamination to low part-per-billion levels.

  1. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is ideally minimized. Another, coverage, represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.
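
    As an illustration of the "totally optimal" notion, the sketch below brute-forces all rules that can be built from a row of a tiny hand-made decision table and checks whether some rule attains the minimum length and the maximum coverage simultaneously. The toy table and the enumeration are assumptions for illustration; the paper works with "complete" decision tables and does not rely on brute force.

```python
# Toy illustration of "totally optimal" decision rules: minimum length and
# maximum coverage attained by the same rule. The decision table below and
# the brute-force enumeration are illustrative assumptions only.
from itertools import combinations

# rows: (attribute values, decision)
TABLE = [
    ((0, 0, 1), "a"),
    ((0, 1, 1), "a"),
    ((1, 0, 0), "b"),
    ((1, 1, 0), "b"),
    ((0, 1, 0), "a"),
]


def matching_rows(conds):
    """Rows whose attribute values satisfy all (index, value) conditions."""
    return [(vals, dec) for vals, dec in TABLE
            if all(vals[i] == v for i, v in conds)]


def rules_for_row(row_index):
    """(length, coverage) of every consistent rule built from the row's values."""
    vals, dec = TABLE[row_index]
    n = len(vals)
    out = []
    for k in range(n + 1):
        for attrs in combinations(range(n), k):
            conds = [(i, vals[i]) for i in attrs]
            matched = matching_rows(conds)
            if all(d == dec for _, d in matched):      # rule is consistent
                out.append((k, len(matched)))          # (length, coverage)
    return out


if __name__ == "__main__":
    for r in range(len(TABLE)):
        rules = rules_for_row(r)
        min_len = min(length for length, _ in rules)
        max_cov = max(cov for _, cov in rules)
        exists = any(length == min_len and cov == max_cov
                     for length, cov in rules)
        print(f"row {r}: min length {min_len}, max coverage {max_cov}, "
              f"totally optimal rule exists: {exists}")
```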

  2. Totally optimal decision rules

    KAUST Repository

    Amin, Talha M.; Moshkov, Mikhail

    2017-01-01

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is ideally minimized. Another, coverage, represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.

  3. The use of sparge curtains for contaminant plume control

    International Nuclear Information System (INIS)

    Molnaa, B.; Dablow, J.

    1994-01-01

    Contamination by petroleum hydrocarbons and organic solvents represents a major impact to soil and groundwater. Following recent research and development, several technologies have evolved to treat saturated zone adsorbed- and dissolved-phase contaminants in situ. These technologies include bioremediation and air sparging. Funnel and gate approaches have been developed at the Waterloo Center for Groundwater Research to control contaminant plume migration and treat dissolved-phase contaminants before allowing migration downgradient and off site. The process consists of using low hydraulic conductivity cutoff walls to funnel groundwater flow through gates that contain in situ bioreactors. These systems can maintain hydraulic control and treat dissolved-phase contaminants at the downgradient margins of plumes, while minimizing, or in some cases eliminating, the need for groundwater pumping. Sparge curtains can be applied to treat dissolved-phase contaminants and prevent downgradient, off-site migration of contaminated groundwater

  4. Air sparging of organic compounds in groundwater

    International Nuclear Information System (INIS)

    Hicks, P.M.

    1994-01-01

    Soils and aquifers containing organic compounds have been traditionally treated by excavation and disposal of the soil and/or pumping and treating the groundwater. These remedial options are often not practical or cost-effective solutions. A more favorable alternative for removal of the adsorbed/dissolved organic compounds would be an in situ technology. Air sparging will remove volatile organic compounds from both the adsorbed and dissolved phases in the saturated zone. This technology effectively creates a crude air stripper within the aquifer, where the soil acts as the "packing". The air stream that contacts dissolved/adsorbed phase organics in the aquifer induces volatilization. A case history illustrates the effectiveness of air sparging as a remedial technology for addressing organic compounds in soil and groundwater. The site is an operating heavy equipment manufacturing facility in central Florida. The soil and groundwater below a large building at the facility were found to contain primarily diesel-type petroleum hydrocarbons during removal of underground storage tanks. The organic compounds identified in the groundwater were benzene, toluene, ethylbenzene and xylenes (BTEX), methyl tert-butyl ether (MTBE) and naphthalenes, in concentrations related to diesel fuel.

  5. In situ treatment of arsenic-contaminated groundwater by air sparging.

    Science.gov (United States)

    Brunsting, Joseph H; McBean, Edward A

    2014-04-01

    Arsenic contamination of groundwater is a major problem in some areas of the world, particularly in West Bengal (India) and Bangladesh, where it is caused by reducing conditions in the aquifer. In situ treatment, if it can be proven operationally feasible, has the potential to capture some advantages over other treatment methods by being fairly simple, not using chemicals, and not necessitating disposal of arsenic-rich wastes. In this study, the potential for in situ treatment by injection of compressed air directly into the aquifer (i.e. air sparging) is assessed. An experimental apparatus was constructed to simulate conditions of arsenic-rich groundwater under anaerobic conditions, and in situ treatment by air sparging was employed. Arsenic (up to 200 μg/L) was removed to a maximum of 79% (at a local point in the apparatus) using a solution with dissolved iron and arsenic only. A static "jar" test revealed arsenic removal by co-precipitation with iron at a molar ratio of approximately 2 (iron/arsenic). This is encouraging since groundwater with relatively high amounts of dissolved iron (as compared to arsenic) therefore has a large theoretical treatment capacity for arsenic. Iron oxidation was significantly retarded at pH values below neutral. In terms of operation, analysis of experimental results shows that periodic air sparging may be feasible.
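
    The jar-test result (co-precipitation at a molar ratio of roughly 2 mol iron per mol arsenic) supports a simple capacity estimate. The sketch below is a hypothetical calculation using that ratio; the dissolved iron concentrations are assumed example values, not data from the study.

```python
# Back-of-the-envelope arsenic removal capacity from the ~2:1 Fe:As molar
# ratio reported in the jar test. Iron concentrations below are assumed examples.

M_FE = 55.85   # g/mol, molar mass of iron
M_AS = 74.92   # g/mol, molar mass of arsenic


def arsenic_capacity_ug_per_L(fe_mg_per_L: float, fe_as_molar_ratio: float = 2.0) -> float:
    """Arsenic (ug/L) that could in theory co-precipitate with the dissolved iron."""
    fe_mol = fe_mg_per_L / 1000.0 / M_FE        # mol/L of iron
    as_mol = fe_mol / fe_as_molar_ratio         # mol/L of arsenic it can carry
    return as_mol * M_AS * 1e6                  # convert g/L to ug/L


if __name__ == "__main__":
    for fe in (1.0, 5.0, 10.0):                 # assumed dissolved Fe, mg/L
        cap = arsenic_capacity_ug_per_L(fe)
        print(f"Fe = {fe:4.1f} mg/L -> theoretical As capacity ~ {cap:6.0f} ug/L")
```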

  6. Agitation within Mk-42 insert caused by air sparge

    International Nuclear Information System (INIS)

    Ramsey, C.J.

    1991-01-01

    Dissolution of Rocky Flats Pu alloys and Pu metal using a "nested insert" configuration (One Well Insert (S-3352) inside the Mk-42 Insert) will require a Nuclear Safety Study, a major assumption of which will be that the annular dissolver is well-mixed. The "well-mixed" assumption was theoretically and experimentally supported for alloy dissolution using the Three Well Insert, but the present situation differs significantly. In the former case, the insert was directly exposed to the agitation induced by air sparging; in the case under consideration, the One Well Insert would be shielded by the Mk-42 Insert. In an effort to determine if the "nested insert" approach should be pursued, the past studies and technical literature have been surveyed and an attempt made to predict the extent of mixing and bulk circulation for a "nested insert" configuration in the presence of air sparging.

  7. Decision and Inhibitory Rule Optimization for Decision Tables with Many-valued Decisions

    KAUST Repository

    Alsolami, Fawaz

    2016-04-25

    ‘If-then’ rule sets are one of the most expressive and human-readable knowledge representations. This thesis deals with optimization and analysis of decision and inhibitory rules for decision tables with many-valued decisions. The most important areas of applications are knowledge extraction and representation. The benefit of considering inhibitory rules is connected with the fact that in some situations they can describe more knowledge than the decision ones. Decision tables with many-valued decisions arise in combinatorial optimization, computational geometry, fault diagnosis, and especially in the processing of data sets. In this thesis, various examples of real-life problems are considered which help to understand the motivation of the investigation. We extend relatively simple results obtained earlier for decision rules over decision tables with many-valued decisions to the case of inhibitory rules. The behavior of Shannon functions (which characterize complexity of rule systems) is studied for finite and infinite information systems, for global and local approaches, and for decision and inhibitory rules. The extensions of dynamic programming for the study of decision rules over decision tables with single-valued decisions are generalized to the case of decision tables with many-valued decisions. These results are also extended to the case of inhibitory rules. As a result, we have algorithms (i) for multi-stage optimization of rules relative to such criteria as length or coverage, (ii) for counting the number of optimal rules, (iii) for construction of Pareto optimal points for bi-criteria optimization problems, (iv) for construction of graphs describing relationships between two cost functions, and (v) for construction of graphs describing relationships between cost and accuracy of rules. The applications of created tools include comparison (based on information about Pareto optimal points) of greedy heuristics for bi-criteria optimization of rules

  8. Effect of gas sparging on flux enhancement and phytochemical properties of clarified pineapple juice by microfiltration

    KAUST Repository

    Laorko, Aporn

    2011-08-01

    Membrane fouling is a major obstacle in the application of microfiltration. Several techniques have been proposed to enhance the permeate flux during microfiltration. Gas sparging is a hydrodynamic method for improving the performance of the membrane process. In this study, a 0.2 μm hollow fiber microfiltration membrane was used to study the effect of cross flow velocity (CFV) and gas injection factor on the critical and limiting flux during microfiltration of pineapple juice. In addition, the phytochemical properties of the clarified juice were investigated. In the absence of gas sparging, the critical and limiting flux increased as the CFV or shear stress number increased. The use of gas sparging led to a remarkable improvement in both the critical and limiting flux, but it was more effective at the lower CFV (1.5 m s⁻¹) compared to the higher CFVs (2.0 and 2.5 m s⁻¹). When the gas injection factor was applied at 0.15, 0.25 and 0.35 with a CFV of 1.5 m s⁻¹, enhancements of 55.6%, 75.5% and 128.2% were achieved for the critical flux, while 65.8%, 69.7% and 95.2% were achieved for the limiting flux, respectively. The results also indicated that the use of gas sparging was an effective method to reduce reversible fouling and external irreversible fouling rather than internal irreversible fouling. In addition, the CFV and gas sparging did not affect pH, total soluble solids, colour, total phenolic content or the antioxidant property of the clarified juice. The l-ascorbic acid and total vitamin C were significantly decreased when the higher CFV and high gas injection factor were applied. The results also indicated that the use of gas sparging with low CFV was beneficial for flux enhancement while most of the phytochemical properties of the clarified juice were preserved.

  9. CO2 Sparging Phase 3 Full Scale Implementation and Monitoring Report

    Science.gov (United States)

    In-situ carbon dioxide (CO2) sparging was designed and implemented to treat a subsurface caustic brine pool (CBP) formed as a result of releases from historical production of industrial chemicals at the LCP Chemicals Site, Brunswick, GA (Site).

  10. Multi-stage optimization of decision and inhibitory trees for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad

    2017-06-16

    We study problems of optimization of decision and inhibitory trees for decision tables with many-valued decisions. As cost functions, we consider depth, average depth, number of nodes, and number of terminal/nonterminal nodes in trees. Decision tables with many-valued decisions (multi-label decision tables) are often more accurate models for real-life data sets than usual decision tables with single-valued decisions. Inhibitory trees can sometimes capture more information from decision tables than decision trees. In this paper, we create dynamic programming algorithms for multi-stage optimization of trees relative to a sequence of cost functions. We apply these algorithms to prove the existence of totally optimal (simultaneously optimal relative to a number of cost functions) decision and inhibitory trees for some modified decision tables from the UCI Machine Learning Repository.

  11. Multi-stage optimization of decision and inhibitory trees for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2017-01-01

    We study problems of optimization of decision and inhibitory trees for decision tables with many-valued decisions. As cost functions, we consider depth, average depth, number of nodes, and number of terminal/nonterminal nodes in trees. Decision tables with many-valued decisions (multi-label decision tables) are often more accurate models for real-life data sets than usual decision tables with single-valued decisions. Inhibitory trees can sometimes capture more information from decision tables than decision trees. In this paper, we create dynamic programming algorithms for multi-stage optimization of trees relative to a sequence of cost functions. We apply these algorithms to prove the existence of totally optimal (simultaneously optimal relative to a number of cost functions) decision and inhibitory trees for some modified decision tables from the UCI Machine Learning Repository.

  12. Aerosol entrainment from a sparged non-Newtonian slurry.

    Science.gov (United States)

    Fritz, Brad G

    2006-08-01

    Previous bench-scale experiments have provided data necessary for the development of empirical models that describe aerosol entrainment from bubble bursting. However, previous work has not been extended to non-Newtonian liquid slurries. Design of a waste treatment plant on the Hanford Site in Washington required an evaluation of the applicability of these models outside of their intended range. For this evaluation, aerosol measurements were conducted above an air-sparged mixing tank filled with simulated waste slurry possessing Bingham plastic rheological properties. Three aerosol size fractions were measured at three sampling heights and for three different sparging rates. The measured entrainment was compared with entrainment models. One model, developed based on bench-scale air-water experiments, agreed well with the measured entrainment. Another model did not agree well with the measured entrainment. It appeared that the source of discrepancy between measured and modeled entrainment stemmed from application beyond the range of data used to develop the model. A possible separation in entrainment coefficients between air-water and steam-water systems was identified. A third entrainment model was adapted to match experimental conditions and fit a posteriori to the experimental data, resulting in a modified version that produced estimated entrainment rates similar to the first model.

  13. Cultivation of Chlorella Vulgaris Using Airlift Photobioreactor Sparged with 5%CO 2 -Air as a Biofixing Process

    Directory of Open Access Journals (Sweden)

    Mahmood Khazzal Hummadi AL-Mashhadani

    2017-04-01

    The present paper addresses cultivation of Chlorella vulgaris microalgae using an airlift photobioreactor sparged with 5% CO₂/air. The experimental data were compared with those obtained from a bioreactor aerated with air and from an unsparged bioreactor. The results showed that the biomass concentration was 0.36 g L⁻¹ in the bioreactor sparged with CO₂/air, while it reached only 0.069 g L⁻¹ in the unsparged bioreactor. They also showed that the bioreactor sparged with CO₂/air gave more biomass production than the bioreactor aerated with air alone. This study demonstrated that applying a sparging system, with either a CO₂/air mixture or air, to the cultivation of Chlorella vulgaris microalgae yields a significant growth rate, since the bioreactors become more thermodynamically favorable and provide impetus for a higher level of production as a biofixing process.

  14. Air sparging for subsurface remediation: Numerical analysis using T2VOC

    Energy Technology Data Exchange (ETDEWEB)

    McCray, J.E.; Falta, R.W. [Clemson Univ. SC (United States)

    1995-03-01

    Air sparging is under active investigation as a promising remediation technology for aquifers contaminated with volatile organic dense nonaqueous phase liquids (DNAPLs). A theoretical study of the removal of DNAPLs from the subsurface using this technology is presented. T2VOC is used to conduct multiphase numerical simulations of DNAPL removal utilizing a model aquifer with a radially-symmetric geometry. Both homogeneous and macroscale heterogeneous systems are considered. These simulations suggest that DNAPLs are efficiently removed in a zone of contaminant cleanup at relatively low gas saturations within the injected air plume. The zone of effective removal may be referred to as the radius of influence (ROI). The sparging-induced pressure increase below the water table, which may be measured in the field, is recommended as the best method for determining the ROI. Multiphase numerical simulations are used to support this recommendation, to relate the injected gas ROI to the zone of NAPL cleanup, and to illustrate the transient and steady-state aquifer behavior.

  15. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider parameters characterizing both time (in the worst and average case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to make multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated (and subsequently proved) the following hypotheses: for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.
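
    A minimal sketch of what "totally optimal relative to depth and number of nodes" means is given below, using an assumed example Boolean function (majority of three variables) and a small Pareto-front dynamic program; this is an illustration only, not the authors' dynamic-programming tools.

```python
# Sketch of checking "total optimality" (simultaneously minimum depth and
# minimum number of nodes) of decision trees for a small Boolean function.
# The example function (majority of three variables) and this exhaustive
# Pareto-front dynamic program are illustrative assumptions.
from functools import lru_cache
from itertools import product

N = 3
TRUTH = {x: int(sum(x) >= 2) for x in product((0, 1), repeat=N)}  # MAJ3


def pareto(pairs):
    """Keep only non-dominated (depth, nodes) pairs."""
    return sorted({p for p in pairs
                   if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                              for q in pairs)})


@lru_cache(maxsize=None)
def solve(assignment):
    """Pareto-optimal (depth, nodes) pairs over decision trees computing the
    restriction of the function given by `assignment` ((var, value) pairs)."""
    fixed = dict(assignment)
    rows = [x for x in TRUTH
            if all(x[i] == v for i, v in fixed.items())]
    if len({TRUTH[x] for x in rows}) == 1:     # constant restriction: one leaf
        return [(0, 1)]
    candidates = []
    for var in range(N):
        if var in fixed:
            continue
        left = solve(tuple(sorted(assignment + ((var, 0),))))
        right = solve(tuple(sorted(assignment + ((var, 1),))))
        for (d0, n0), (d1, n1) in product(left, right):
            candidates.append((1 + max(d0, d1), 1 + n0 + n1))
    return pareto(candidates)


if __name__ == "__main__":
    front = solve(())
    min_depth = min(d for d, _ in front)
    min_nodes = min(n for _, n in front)
    print("Pareto front (depth, nodes):", front)
    print("totally optimal tree exists:", (min_depth, min_nodes) in front)
```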

  16. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
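
    As a generic illustration of the model class discussed here, the sketch below simulates a drift diffusion process whose decision boundary collapses over time; the drift, noise and boundary parameters are assumed values, and the exponential collapse is a stand-in rather than the optimal, task-dependent boundary derived in the paper.

```python
# Minimal drift-diffusion simulation with a collapsing decision boundary.
# All parameter values are assumed for illustration; the paper derives the
# optimal (task-dependent) boundary rather than positing an exponential one.
import math
import random


def simulate_trial(drift=0.5, noise=1.0, dt=0.001, b0=1.0, tau=1.0, t_max=5.0):
    """Return (+1/-1 choice, decision time) for one simulated trial."""
    x, t = 0.0, 0.0
    while t < t_max:
        bound = b0 * math.exp(-t / tau)          # boundary collapses over time
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t_max           # forced choice at timeout


if __name__ == "__main__":
    random.seed(0)
    trials = [simulate_trial() for _ in range(2000)]
    accuracy = sum(1 for choice, _ in trials if choice == +1) / len(trials)
    mean_rt = sum(rt for _, rt in trials) / len(trials)
    print(f"accuracy ~ {accuracy:.2f}, mean decision time ~ {mean_rt:.2f} s")
```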

  17. Optimization of decision rule complexity for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad

    2013-10-01

    We describe new heuristics to construct decision rules for decision tables with many-valued decisions that are sufficiently good from the point of view of length and coverage. We use a statistical test to find leaders among the heuristics. After that, we compare our results with the optimal results obtained by dynamic programming algorithms. The average percentage of relative difference between the length (coverage) of constructed and optimal rules is at most 6.89% (15.89%, respectively) for the leaders, which seems to be a promising result.

  18. Decision and Inhibitory Rule Optimization for Decision Tables with Many-valued Decisions

    KAUST Repository

    Alsolami, Fawaz

    2016-01-01

    ‘If-then’ rule sets are one of the most expressive and human-readable knowledge representations. This thesis deals with optimization and analysis of decision and inhibitory rules for decision tables with many-valued decisions. The most important

  19. Understanding Optimal Decision-making in Wargaming

    OpenAIRE

    Nesbitt, P; Kennedy, Q; Alt, JK; Fricker, RD; Whitaker, L; Yang, J; Appleget, JA; Huston, J; Patton, S

    2013-01-01

    Approved for public release; distribution is unlimited. This research aims to gain insight into optimal wargaming decision-making mechanisms using neurophysiological measures by investigating whether brain activation and visual scan patterns predict attention, perception, and/or decision-making errors through human-in-the-loop wargaming simulation experiments. We investigate whether brain activity and visual scan patterns can explain optimal wargaming decision making and its devel...

  20. Optimization of tactical decisions: subjective and objective conditionality

    Directory of Open Access Journals (Sweden)

    Олег Юрійович Булулуков

    2016-06-01

    The article investigates the "human" and "objective" factors that influence the optimization of tactical decisions. Attention is focused on the dependence of the information obtained about the circumstances of a crime on the investigator making correct decisions. The connection between the efficiency of an investigation and the acceptance of optimal tactical decisions is underlined. The stated problem has not been sufficiently investigated in the literature. Its separate aspects are reflected in the works of D. A. Solodov, S. Yu. Yakushin and others. Some questions related to the optimization of investigation and decision-making by an investigator are found in the works of R. S. Belkin, V. A. Juravel, V. E. Konovalova, V. L. Sinchuk, B. V. Shur, and V. Yu. Shepitko. The aim of the article is to define the term "optimization" as it applies to tactical decisions in criminalistics, and to consider the influence of human and objective factors on the acceptance of optimal decisions in the investigation of crimes. The article considers the etymology of the term "optimization" and interprets it as it applies to the acceptance of tactical decisions. The types of human and objective factors conditioning the optimization of tactical decisions are marked out; such optimization assists the efficiency of crime investigation tactics. In considering the "human factors" influencing the optimization of decisions, attention is drawn to the "psychological traps" that can occur in decision-making, among them: anchoring; status quo; irreversible expenses; the desired versus the actual; incorrect formulation; conceit; reinsurance; and constancy of memory. It is underlined that there is no unambiguity in the presented list of "objective factors" influencing the choice of a tactical decision. Different understandings of "tactical risk" as a factor influencing the acceptance of tactical decisions are discussed. The analysis of the "human" and "objective" factors influencing

  1. Helium Tracer Tests for Assessing Air Recovery and Air Distribution During In Situ Air Sparging

    National Research Council Canada - National Science Library

    Johnson, Richard

    2001-01-01

    ...) systems for capturing contaminant vapors liberated by in situ air sparging (IAS). The tracer approach is simple to conduct and provides more direct and reliable measures than the soil-gas pressure approach...

  2. CO2 Sparging Proof of Concept Test Report, Revision 1, LCP Chemicals Site, Brunswick, Georgia

    Science.gov (United States)

    April 2013 report to evaluate the feasibility of CO2 sparging to remediate a sub-surface caustic brine pool (CBP) at the LCP Chemicals Superfund Site, GA. Region ID : 04, DocID: 10940639 , DocDate: 2013-04-01

  3. A tool for study of optimal decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes a tool which allows us, for relatively small decision tables, to make consecutive optimization of decision trees relative to various complexity measures, such as number of nodes, average depth, and depth, and to find the parameters and the number of optimal decision trees.

  4. Modeling of permeate flux and mass transfer resistances in the reclamation of molasses wastewater by a novel gas-sparged nanofiltration

    International Nuclear Information System (INIS)

    Patel, Tejal Manish; Nath, Kaushik

    2014-01-01

    A semi-empirical model has been applied to predict the permeate flux and mass transfer resistances during the cross flow nanofiltration of molasses wastewater in a flat-sheet module. The model includes the laminar flow regime as well as flow in the presence of gas sparging at two different gas velocities. The membrane hydraulic resistance (R_m), osmotic pressure resistance (R_osm) and concentration polarization resistance (R_cp) were considered in series. The concentration polarization resistance was correlated to the operating conditions, namely the feed concentration, the trans-membrane pressure difference and the cross flow velocity, for a selected range of experiments. There was an appreciable reduction of the concentration polarization resistance R_cp^spar in the presence of gas sparging. Both the concentration polarization resistance R_cp^lam and the osmotic pressure resistance R_osm decreased with cross-flow velocity, but increased with feed concentration and the operating pressure. Experimental and theoretical permeate flux values as a function of cross flow velocity were also compared for both cases, in the presence and absence of gas sparging.
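
    As a rough illustration of the resistances-in-series idea, the sketch below evaluates a Darcy-type flux expression J = ΔP / [μ·(R_m + R_osm + R_cp)]; this functional form and all numerical values are assumptions for illustration, not the paper's fitted semi-empirical correlations, which relate R_cp to feed concentration, pressure and cross-flow velocity.

```python
# Resistances-in-series permeate flux sketch: J = dP / (mu * (Rm + Rosm + Rcp)).
# The expression and the numbers below are illustrative assumptions only; the
# abstract reports that gas sparging appreciably lowers the concentration
# polarization resistance, which is what the two cases mimic.
MU = 1.0e-3          # Pa.s, approximate viscosity of water


def permeate_flux(dp_pa, r_m, r_osm, r_cp):
    """Permeate flux (m3 m-2 s-1) from a resistances-in-series model."""
    return dp_pa / (MU * (r_m + r_osm + r_cp))


if __name__ == "__main__":
    dp = 6e5                         # 6 bar trans-membrane pressure (assumed)
    r_m, r_osm = 5e13, 2e13          # 1/m, assumed membrane and osmotic resistances
    for label, r_cp in (("no sparging", 4.0e13), ("with gas sparging", 1.5e13)):
        j = permeate_flux(dp, r_m, r_osm, r_cp)
        print(f"{label:18s}: J = {j * 3.6e6:.1f} L m-2 h-1")
```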

  5. Modeling of permeate flux and mass transfer resistances in the reclamation of molasses wastewater by a novel gas-sparged nanofiltration

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Tejal Manish; Nath, Kaushik [G H Patel College of Engineering and Technology, Gujarat (India)]

    2014-10-15

    A semi-empirical model has been applied to predict the permeate flux and mass transfer resistances during the cross flow nanofiltration of molasses wastewater in flat-sheet module. The model includes laminar flow regime as well as flow in presence of gas sparging at two different gas velocities. Membrane hydraulic resistance (R_m), osmotic pressure resistance (R_osm) and the concentration polarization resistance (R_cp) were considered in series. The concentration polarization resistance was correlated to the operating conditions, namely, the feed concentration, the trans-membrane pressure difference and the cross flow velocity for a selected range of experiments. There was an appreciable reduction of concentration polarization resistance R_cp^spar in presence of gas sparging. Both the concentration polarization resistance R_cp^lam and osmotic pressure resistance R_osm decreased with cross-flow velocity, but increased with feed concentration and the operating pressure. Experimental and theoretical permeate flux values as a function of cross flow velocity for both the cases, in the presence and absence of gas sparging, were also compared.
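
The resistance-in-series idea described in the two records above can be made concrete with a minimal sketch. The flux form J = ΔP / (μ (R_m + R_osm + R_cp)) is one common convention, and all numerical values below are invented placeholders rather than data from the paper; the point is only that lowering R_cp (as gas sparging is reported to do) raises the predicted flux.

```python
# Minimal resistance-in-series flux sketch of the kind the records describe.
# The functional form and all numbers are illustrative assumptions, not the paper's data.

def permeate_flux(dP, mu, Rm, Rosm, Rcp):
    """Permeate flux [m3/(m2*s)] for trans-membrane pressure dP [Pa],
    permeate viscosity mu [Pa*s] and resistances [1/m] acting in series."""
    return dP / (mu * (Rm + Rosm + Rcp))

# Gas sparging is reported to lower the concentration-polarization resistance Rcp:
J_no_sparge = permeate_flux(dP=8e5, mu=1e-3, Rm=5e13, Rosm=2e13, Rcp=3e13)
J_sparged   = permeate_flux(dP=8e5, mu=1e-3, Rm=5e13, Rosm=2e13, Rcp=1e13)
print(f"flux without sparging: {J_no_sparge:.2e} m/s, with sparging: {J_sparged:.2e} m/s")
```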

  6. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values

  7. A field-scale demonstration of air sparging to remediate tritiated fluids

    International Nuclear Information System (INIS)

    Russell, C.E.; Gillespie, D.R.; Hokett, S.L.; Donithan, J.D.

    1996-09-01

    Two pilot field-scale studies were conducted during the period of May 24 to July 22, 1996, to evaluate the potential of air sparging to remediate tritiated fluids. Previous analytical solutions to the rate of tritium removal were evaluated and compared to the experimental results. The analytical solution of Craig and Gordon that describes isotopic fractionation of an evaporating body of water appears to most accurately describe the process, versus the more limited isotopic exchange equation of Slattery and Ingraham and the mass transfer equation of Wilson and Fordham, which are accurate only at moderate to high humidities and do not describe the tritium enrichment process that would occur at low humidities. The results of the two experiments demonstrated that air sparging of tritium is a viable process in the field. Tritium removal rates of 60 percent were reported during the first experiment and 66 percent for the second experiment. Comparison to previous laboratory work revealed that rates could have been improved by starting with higher concentrations, utilizing smaller bubbles, and longer bubble path lengths. Risks associated with the pilot study were greater the closer one worked to the experiment, with a maximum increase in the Lifetime Excess Total Risk per Unit Uptake of 2.4 x 10^-5. Conduct of this experiment at locations with much higher activities of tritium would significantly increase the associated risk.

  8. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-01-01

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  9. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-07-10

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  10. Dispositional optimism, self-framing and medical decision-making.

    Science.gov (United States)

    Zhao, Xu; Huang, Chunlei; Li, Xuesong; Zhao, Xin; Peng, Jiaxi

    2015-03-01

    Self-framing is an important but underinvestigated area in risk communication and behavioural decision-making, especially in medical settings. The present study aimed to investigate the relationship among dispositional optimism, self-frame and decision-making. Participants (N = 500) responded to the Life Orientation Test-Revised and self-framing test of medical decision-making problem. The participants whose scores were higher than the middle value were regarded as highly optimistic individuals. The rest were regarded as low optimistic individuals. The results showed that compared to the high dispositional optimism group, participants from the low dispositional optimism group showed a greater tendency to use negative vocabulary to construct their self-frame, and tended to choose the radiation therapy with high treatment survival rate, but low 5-year survival rate. Based on the current findings, it can be concluded that self-framing effect still exists in medical situation and individual differences in dispositional optimism can influence the processing of information in a framed decision task, as well as risky decision-making. © 2014 International Union of Psychological Science.

  11. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters

  12. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    Full Text Available In this paper we start from the calculation of the product cost, applying the machine-hour cost method (THM) to each of the three cutting machines, namely: the plasma cutting machine, the combined cutting machine (plasma and water jet) and the water jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product needs to be determined. To determine the optimal decision version, we first calculate the optimal version on each criterion separately, and then overall, using multiattribute decision methods.
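
A minimal sketch of the final step the abstract describes: choosing among the three cutting machines with simple additive weighting, one basic multiattribute method. The machine names follow the abstract, but every cost, score and criterion weight below is a hypothetical placeholder, not a result from the paper.

```python
# Illustrative simple additive weighting (SAW). All numbers are invented placeholders.

machines = {
    "plasma cutting":          {"cost": 120.0, "precision": 0.6, "surface": 0.5},
    "combined plasma + water": {"cost": 150.0, "precision": 0.8, "surface": 0.8},
    "water jet cutting":       {"cost": 180.0, "precision": 0.9, "surface": 0.9},
}
weights = {"cost": 0.5, "precision": 0.3, "surface": 0.2}   # assumed criterion importance

def saw_score(attrs):
    # Cost is a "smaller is better" criterion, so score it against the cheapest machine.
    min_cost = min(m["cost"] for m in machines.values())
    return (weights["cost"] * (min_cost / attrs["cost"])
            + weights["precision"] * attrs["precision"]
            + weights["surface"] * attrs["surface"])

for name, attrs in machines.items():
    print(f"{name:25s} score = {saw_score(attrs):.3f}")
best = max(machines, key=lambda name: saw_score(machines[name]))
print("optimal decision version:", best)
```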

  13. Unrealistic optimism and decision making

    Directory of Open Access Journals (Sweden)

    Božović Bojana

    2009-01-01

    Full Text Available One of the leading descriptive theories of decision-making under risk, Tversky and Kahneman's prospect theory, reveals that the normative explanation of decision-making, based only on the principle of maximizing the expected utility of outcomes, is unsustainable. It also underlines the effect of alternative factors on decision-making. The framing effect refers to the influence that the verbal formulation of outcomes has on choosing between certain and risky outcomes; in a negative frame people tend to be risk seeking, whereas in a positive frame people express risk-averse tendencies. Individual decisions are not based on objective probabilities of outcomes, but on subjective probabilities that depend on outcome desirability. Unrealistically pessimistic subjects assign lower probabilities (than the group average) to the desired outcomes, while unrealistically optimistic subjects assign higher probabilities (than the group average) to the desired outcomes. An experiment was conducted to test the presumption that there is a relation between unrealistic optimism and decision-making under risk. We expected optimists to be risk seeking and pessimists to be risk averse. We also expected such cognitive tendencies, should they become manifest, to be resistant to the framing effect. An unrealistic optimism scale was applied, followed by a questionnaire composed of tasks of decision-making under risk. Results for the whole sample, and for the subsequently extracted groups of pessimists and optimists, both revealed a dominant risk-seeking tendency that is resistant to the influence of subjective probabilities as well as to the frame in which the outcome is presented.

  14. In-situ biogas sparging enhances the performance of an anaerobic membrane bioreactor (AnMBR) with mesh filter in low-strength wastewater treatment.

    Science.gov (United States)

    Li, Na; Hu, Yi; Lu, Yong-Ze; Zeng, Raymond J; Sheng, Guo-Ping

    2016-07-01

    In recent years, anaerobic membrane bioreactor (AnMBR) technology has been considered a very attractive alternative for wastewater treatment due to striking advantages such as upgraded effluent quality. However, fouling control is still a problem for the application of AnMBRs. This study investigated the performance of an AnMBR using a mesh filter as support material to treat low-strength wastewater with in-situ biogas sparging. The mesh AnMBR exhibited high and stable chemical oxygen demand (COD) removal efficiencies of 95 ± 5 % and an average methane yield of 0.24 L CH4/g COD removed. The variation of transmembrane pressure (TMP) during operation indicated that mesh fouling was mitigated by in-situ biogas sparging, and the fouling rate was comparable to that of an aerobic membrane bioreactor with a mesh filter reported in previous research. The fouling layer formed on the mesh exhibited a non-uniform structure; the porosity became larger from the bottom layer to the top layer. Biogas sparging did not change the composition of the cake layer but made it thinner, which may help reduce the membrane fouling rate. It was also found that ultrasonic cleaning of the fouled mesh was able to remove most foulants from the surface and pores. This study demonstrated that in-situ biogas sparging enhances the performance of AnMBRs with mesh filters in low-strength wastewater treatment. AnMBRs with mesh filters can thus be a promising and sustainable technology for wastewater treatment.

  15. Applying short-duration pulses as a mean to enhance volatile organic compounds removal by air sparging.

    Science.gov (United States)

    Ben Neriah, Asaf; Paster, Amir

    2017-10-01

    Application of short-duration pulses of high air pressure to an air sparging system for groundwater remediation was tested in a two-dimensional laboratory setup. It was hypothesized that this injection mode, termed boxcar, can enhance the remediation efficiency due to the larger zone of influence (ZOI) and enhanced mixing which result from the pressure pulses. To test this hypothesis, flow and transport experiments were performed. Results confirm that cyclically applying short-duration pressure pulses may enhance contaminant cleanup. Comparing the boxcar mode to conventional continuous air injection shows up to a three-fold increase in the single-well radius of influence, dependent on the intensity of the short-duration pressure pulses. The cleanup efficiency of toluene from the water was 95% higher than that achieved under continuous injection with the same average conditions. This improvement was attributed to the larger zone of influence and higher average air permeability achieved in the boxcar mode, relative to continuous sparging. Mixing enhancement resulting from recurring pressure pulses was suggested as one of the mechanisms which enhance the contaminant cleanup. The application of a boxcar mode in an existing, multiwell air sparging setup can be relatively straightforward: it requires the installation of an on-off valve in each of the injection wells and a central control system. Turning off some of the wells for a short duration then results in a stepwise increase in injection pressure in the rest of the wells. It is hoped that this work will stimulate the additional required research and ultimately a field-scale application of this new injection mode. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. An adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from the UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.

  17. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor

    2017-10-19

    This paper is devoted to the study of bi-criteria optimization problems for decision trees. We consider different cost functions such as depth, average depth, and number of nodes. We design algorithms that allow us to construct the set of Pareto optimal points (POPs) for a given decision table and the corresponding bi-criteria optimization problem. These algorithms are suitable for investigation of medium-sized decision tables. We discuss three examples of applications of the created tools: the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees, and comparison of different greedy algorithms for decision tree construction as single- and bi-criteria optimization algorithms.
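
The notion of Pareto-optimal points used above is easy to illustrate. The sketch below extracts the non-dominated (depth, number of nodes) pairs from a set of candidate decision-tree costs; the candidate pairs are invented for the example.

```python
# Extracting the set of Pareto-optimal points (POPs) when both costs are minimized,
# e.g. depth and number of nodes of candidate decision trees. Candidate pairs are made up.

def pareto_optimal(points):
    """Return the points not dominated by any other point (minimization in both coordinates)."""
    pops = []
    for p in points:
        dominated = any(
            q != p and q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for q in points
        )
        if not dominated and p not in pops:
            pops.append(p)
    return sorted(pops)

candidates = [(3, 15), (4, 9), (5, 9), (4, 11), (6, 7), (3, 17)]   # (depth, nodes)
print(pareto_optimal(candidates))    # -> [(3, 15), (4, 9), (6, 7)]
```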

  18. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-14

    In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented here have already been implemented and are now part of Dagger, a software system for construction and optimization of decision trees and decision rules. The main results of this chapter deal with two types of algorithms for computing relationships. First, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Second, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the decision tree. The results of experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  19. Relationships among various parameters for decision tree optimization

    KAUST Repository

    Hussain, Shahid

    2014-01-01

    In this chapter, we study, in detail, the relationships between various pairs of cost functions, and between uncertainty measures and cost functions, for decision tree optimization. We provide new tools (algorithms) to compute relationship functions, as well as experimental results on decision tables acquired from the UCI ML Repository. The algorithms presented here have already been implemented and are now part of Dagger, a software system for construction and optimization of decision trees and decision rules. The main results of this chapter deal with two types of algorithms for computing relationships. First, we discuss the case where we construct approximate decision trees and are interested in relationships between a certain cost function, such as depth or number of nodes of a decision tree, and an uncertainty measure, such as the misclassification error (accuracy) of the decision tree. Second, relationships between two different cost functions are discussed, for example, the number of misclassifications of a decision tree versus the number of nodes in the decision tree. The results of experiments presented in the chapter provide further insight. © 2014 Springer International Publishing Switzerland.

  20. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha

    2012-10-04

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to the length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.

  1. Dynamic programming approach for partial decision rule optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows optimization of partial decision rules relative to the length or coverage. We introduce an uncertainty measure J(T) which is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules (partial decision rules) that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The graph Δγ(T) allows us to describe the whole set of so-called irredundant γ-decision rules. We can optimize such a set of rules according to length or coverage. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository.
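
A worked illustration of the uncertainty measure J(T) defined in the two records above: J(T) is the number of rows of the decision table minus the number of rows carrying its most common decision. The tiny table is invented.

```python
# Worked example of J(T): rows of T minus rows with the most common decision.

from collections import Counter

def uncertainty_J(decisions):
    """J(T) for a decision table given as the list of its row decisions."""
    counts = Counter(decisions)
    return len(decisions) - max(counts.values())

# Decisions of the 6 rows of a hypothetical table T:
rows = ["yes", "yes", "no", "yes", "no", "maybe"]
print(uncertainty_J(rows))   # 6 rows, most common decision "yes" occurs 3 times -> J(T) = 3
```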

  2. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on the dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  3. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on the dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  4. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha

    2013-02-01

    This paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number of unordered pairs of rows with different decisions in the decision table T. For a nonnegative real number β, we consider β-decision rules that localize rows in subtables of T with uncertainty at most β. Our algorithm constructs a directed acyclic graph Δβ(T) whose nodes are subtables of the decision table T given by systems of equations of the kind "attribute = value". This algorithm finishes the partitioning of a subtable when its uncertainty is at most β. The graph Δβ(T) allows us to describe the whole set of so-called irredundant β-decision rules. We can describe all irredundant β-decision rules with minimum length, and then, among these rules, describe all rules with maximum coverage. We can also change the order of optimization. Considering irredundant rules only does not change the results of optimization. This paper also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2012 Elsevier Inc. All rights reserved.
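
The uncertainty measure R(T) defined above can likewise be computed directly: it counts the unordered pairs of rows that carry different decisions. The row decisions in the sketch are invented.

```python
# Worked example of R(T): unordered pairs of rows of T with different decisions.

from itertools import combinations

def uncertainty_R(decisions):
    """R(T) for a decision table given as the list of its row decisions."""
    return sum(1 for a, b in combinations(decisions, 2) if a != b)

rows = ["yes", "yes", "no", "no", "maybe"]
print(uncertainty_R(rows))   # differing pairs: 2*2 (yes/no) + 2 (yes/maybe) + 2 (no/maybe) = 8
```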

  5. Classification and Optimization of Decision Trees for Inconsistent Decision Tables Represented as MVD Tables

    KAUST Repository

    Azad, Mohammad

    2015-10-11

    Decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of depth, average depth and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we then compared them based on the optimization and classification results. It was found that the greedy algorithms Mult_ws_entSort and Mult_ws_entML are good for both optimization and classification.

  6. Classification and Optimization of Decision Trees for Inconsistent Decision Tables Represented as MVD Tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2015-01-01

    Decision tree is a widely used technique to discover patterns from a consistent data set. But if the data set is inconsistent, where there are groups of examples (objects) with equal values of conditional attributes but different decisions (values of the decision attribute), then discovering the essential patterns or knowledge from the data set is challenging. We consider three approaches (generalized, most common and many-valued decision) to handle such inconsistency. We created different greedy algorithms using various types of impurity and uncertainty measures to construct decision trees. We compared the three approaches based on the decision tree properties of depth, average depth and number of nodes. Based on the result of the comparison, we chose to work with the many-valued decision approach. To determine which greedy algorithms are efficient, we then compared them based on the optimization and classification results. It was found that the greedy algorithms Mult_ws_entSort and Mult_ws_entML are good for both optimization and classification.

  7. Proposal optimization in nuclear accident emergency decision based on IAHP

    International Nuclear Information System (INIS)

    Xin Jing

    2007-01-01

    After establishing a multi-layer structure for nuclear accident emergency decisions, several decision objectives are analyzed together, and an optimization model of decision proposals for nuclear accident emergencies based on the interval analytic hierarchy process (IAHP) is proposed in the paper. The model quantifies the comparison of several emergency decision proposals and selects the optimum one, which addresses the uncertain and fuzzy problem of decisions otherwise based on expert judgment in nuclear accident emergencies. A case study shows that the optimization result is much more reasonable, objective and reliable than subjective judgment, and it can serve as a decision reference for nuclear accident emergencies. (authors)
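
For orientation, the sketch below shows the ordinary (crisp) AHP step that the interval AHP of this record generalizes: deriving priority weights for competing emergency proposals from a pairwise comparison matrix using the row geometric-mean approximation. The 3x3 judgment matrix is hypothetical, and the interval arithmetic of IAHP itself is not reproduced.

```python
# Crisp AHP priority weights via the row geometric-mean approximation of the principal
# eigenvector. The pairwise judgments below are an invented illustration.

import math

def ahp_priorities(pairwise):
    """Normalized row geometric means of a reciprocal pairwise-comparison matrix."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# "How much more preferable is proposal i than proposal j?" (Saaty's 1-9 scale)
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = ahp_priorities(comparisons)
print([round(w, 3) for w in weights])   # the first proposal receives the largest weight
```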

  8. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.

  9. Simultaneous Optimization of Decisions Using a Linear Utility Function.

    Science.gov (United States)

    Vos, Hans J.

    1990-01-01

    An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)

  10. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    Science.gov (United States)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

    The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.

  11. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from

  12. A tool for study of optimal decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes a tool which, for relatively small decision tables, allows consecutive optimization of decision trees relative to various complexity measures such as number of nodes, average depth, and depth, and to find parameters

  13. International Conference on Optimization and Decision Science

    CERN Document Server

    Sterle, Claudio

    2017-01-01

    This proceedings volume highlights the state-of-the-art knowledge related to optimization, decisions science and problem solving methods, as well as their application in industrial and territorial systems. It includes contributions tackling these themes using models and methods based on continuous and discrete optimization, network optimization, simulation and system dynamics, heuristics, metaheuristics, artificial intelligence, analytics, and also multiple-criteria decision making. The number and the increasing size of the problems arising in real life require mathematical models and solution methods adequate to their complexity. There has also been increasing research interest in Big Data and related challenges. These challenges can be recognized in many fields and systems which have a significant impact on our way of living: design, management and control of industrial production of goods and services; transportation planning and traffic management in urban and regional areas; energy production and exploit...

  14. On optimal soft-decision demodulation. [in digital communication system

    Science.gov (United States)

    Lee, L.-N.

    1976-01-01

    A necessary condition is derived for optimal J-ary coherent demodulation of M-ary (M greater than 2) signals. Optimality is defined as maximality of the symmetric cutoff rate of the resulting discrete memoryless channel. Using a counterexample, it is shown that the condition derived is generally not sufficient for optimality. This condition is employed as the basis for an iterative optimization method to find the optimal demodulator decision regions from an initial 'good guess'. In general, these regions are found to be bounded by hyperplanes in likelihood space; the corresponding regions in signal space are found to have hyperplane asymptotes for the important case of additive white Gaussian noise. Some examples are presented, showing that the regions in signal space bounded by these asymptotic hyperplanes define demodulator decision regions that are virtually optimal.
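
The objective named in this record, the symmetric (uniform-input) cutoff rate of the quantized channel, can be evaluated directly once the quantizer thresholds fix the transition probabilities. The sketch below does this for binary antipodal signalling over AWGN rather than the M-ary case the paper treats; the thresholds and noise level are illustrative assumptions.

```python
# Symmetric (uniform-input) cutoff rate of the DMC induced by a J-level quantizer after
# the matched filter, for binary antipodal signalling over AWGN with unit noise variance.
# Thresholds and signal levels are illustrative choices, not values from the paper.

import math

def gauss_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def transition_probs(mean, thresholds):
    """P(output bin j | transmitted mean), bins defined by the sorted thresholds."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    return [gauss_cdf(b - mean) - gauss_cdf(a - mean) for a, b in zip(edges, edges[1:])]

def symmetric_cutoff_rate(signal_means, thresholds):
    """R0 = -log2( (1/M^2) * sum_j ( sum_i sqrt(P(j|i)) )^2 ) with uniform inputs."""
    rows = [transition_probs(m, thresholds) for m in signal_means]
    M, J = len(rows), len(rows[0])
    s = sum(sum(math.sqrt(rows[i][j]) for i in range(M)) ** 2 for j in range(J))
    return -math.log2(s / M ** 2)

means = [-1.0, +1.0]                                        # antipodal signals
print(symmetric_cutoff_rate(means, [0.0]))                  # hard decisions (J = 2)
print(symmetric_cutoff_rate(means, [-0.6, 0.0, 0.6]))       # 4-level soft decisions: higher R0
```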

  15. Air sparging as a supporting measure to redevelopment of a LCFC-contaminated industrial site; Air-Sparging als begleitende Sanierungsmassnahme an einem LCKW-kontaminierten Industriestandort

    Energy Technology Data Exchange (ETDEWEB)

    Breh, W.; Suttheimer, J. [Karlsruhe Univ. (T.H.) (Germany). Lehrstuhl fuer Angewandte Geologie]; Holub, B. [G.U.T Linz (Austria)]

    1998-12-31

    At an industrial site in Vorchdorf, Austria, an air sparging experiment was carried out from 23 to 26 July 1996 as a supporting measure to the ongoing hydraulic-pneumatic remediation of a contamination with volatile chlorinated hydrocarbons (CHCs, German: LCKW). Oil-free compressed air was injected into the contaminated aquifer through a well at a maximum excess pressure of 600 mbar (0.06 MPa). The injection of compressed air mobilised contaminants in the groundwater and soil gas, so that approximately 2.7 kg of CHCs could be removed from the subsurface through neighbouring groundwater and soil-gas extraction wells. For the observation period this corresponds to a tripling of the contaminant discharge rate. On the basis of the data obtained, routine interval operation of compressed-air injection into well 1516 is proposed. Injection of compressed air into the highly contaminated wells 1617 and 1625 cannot be carried out until upstream injection wells have been constructed, because of the danger of an uncontrollable spread of the contaminants. (orig.)

  16. Dispositional Optimism as a Correlate of Decision-Making Styles in Adolescence

    Directory of Open Access Journals (Sweden)

    Paola Magnano

    2015-06-01

    Full Text Available Despite the numerous psychological areas in which optimism has been studied, including career planning, only a small amount of research has investigated the relationship between optimism and decision-making styles. Consequently, we have investigated the role of dispositional optimism as a correlate of different decision-making styles, expecting it to correlate positively with effective styles and negatively with ineffective ones (doubtfulness, procrastination, and delegation). Data were gathered through questionnaires administered to 803 Italian adolescents in their last 2 years of high school, across different fields of study, each at the beginning stages of planning for their professional future. A paper questionnaire containing measures of dispositional optimism and career-related decision styles was completed during a vocational guidance intervention conducted at school. Data were analyzed using stepwise multiple regression. Results supported the proposed model by showing optimism to be a strong correlate of decision-making styles, thereby offering important intervention guidelines aimed at modifying unrealistically negative expectations students may hold regarding their future and helping them learn adaptive decision-making skills.

  17. Enhancement of the microbial community biomass and diversity during air sparging bioremediation of a soil highly contaminated with kerosene and BTEX

    Czech Academy of Sciences Publication Activity Database

    Kabelitz, N.; Macháčková, I.; Imfeld, G.; Brennerová, Mária; Pieper, D. H.; Heipieper, H. J.; Junca, H.

    2009-01-01

    Vol. 82 (2009), pp. 565-577, ISSN 0175-7598. Institutional research plan: CEZ:AV0Z50200510. Keywords: BTEX * air sparging * bioremediation. Subject RIV: EE - Microbiology, Virology. Impact factor: 2.896, year: 2009

  18. Investigation of effective decision criteria for multiobjective optimization in IMRT.

    Science.gov (United States)

    Holdsworth, Clay; Stewart, Robert D; Kim, Minsun; Liao, Jay; Phillips, Mark H

    2011-06-01

    To investigate how using different sets of decision criteria impacts the quality of intensity modulated radiation therapy (IMRT) plans obtained by multiobjective optimization. A multiobjective optimization evolutionary algorithm (MOEA) was used to produce sets of IMRT plans. The MOEA consisted of two interacting algorithms: (i) a deterministic inverse planning optimization of beamlet intensities that minimizes a weighted sum of quadratic penalty objectives to generate IMRT plans and (ii) an evolutionary algorithm that selects the superior IMRT plans using decision criteria and uses those plans to determine the new weights and penalty objectives of each new plan. Plans resulting from the deterministic algorithm were evaluated by the evolutionary algorithm using a set of decision criteria for both targets and organs at risk (OARs). Decision criteria used included variation in the target dose distribution, mean dose, maximum dose, generalized equivalent uniform dose (gEUD), an equivalent uniform dose (EUD(alpha, beta)) formula derived from the linear-quadratic survival model, and points on dose volume histograms (DVHs). In order to quantitatively compare results from trials using different decision criteria, a neutral set of comparison metrics was used. For each set of decision criteria investigated, IMRT plans were calculated for four different cases: two simple prostate cases, one complex prostate case, and one complex head and neck case. When smaller numbers of decision criteria, more descriptive decision criteria, or less anti-correlated decision criteria were used to characterize plan quality during multiobjective optimization, dose to OARs and target dose variation were reduced in the final population of plans. Mean OAR dose and gEUD (a = 4) decision criteria were comparable. Using maximum dose decision criteria for OARs near targets resulted in inferior populations that focused solely on low target variance at the expense of high OAR dose. Target dose range, (D
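
The gEUD criterion mentioned above has the standard closed form gEUD = (sum_i v_i d_i^a)^(1/a) over dose bins with fractional volumes v_i. The sketch evaluates it for an invented OAR dose distribution with a = 4, the OAR setting cited in the abstract.

```python
# Generalized equivalent uniform dose (gEUD) in its standard form; dose values are invented.

def geud(doses, volumes, a):
    """gEUD for dose bins `doses` [Gy] with fractional volumes `volumes` and parameter a."""
    assert abs(sum(volumes) - 1.0) < 1e-9, "fractional volumes must sum to 1"
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

doses   = [10.0, 20.0, 30.0, 40.0]        # Gy
volumes = [0.4, 0.3, 0.2, 0.1]            # fractions of the organ at risk
print(f"mean dose = {sum(d * v for d, v in zip(doses, volumes)):.1f} Gy")
print(f"gEUD(a=4) = {geud(doses, volumes, 4):.1f} Gy")   # larger than the mean: a > 1 stresses hot spots
```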

  19. Optimal soil venting design using Bayesian Decision analysis

    OpenAIRE

    Kaluarachchi, J. J.; Wijedasa, A. H.

    1994-01-01

    Remediation of hydrocarbon-contaminated sites can be costly and the design process becomes complex in the presence of parameter uncertainty. Classical decision theory related to remediation design requires the parameter uncertainties to be stipulated in terms of statistical estimates based on site observations. In the absence of detailed data on parameter uncertainty, classical decision theory provides little contribution in designing a risk-based optimal design strategy. Bayesian decision th...

  20. Improved glycerol production from cane molasses by the sulfite process with vacuum or continuous carbon dioxide sparging during fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Kalle, G.P.; Naik, S.C.; Lashkari, B.Z.

    1985-01-01

    The conventional sulfite process for glycerol production from molasses using Saccharomyces cerevisiae var. Hansen was modified to obtain product concentrations of up to 230 g/l and productivity of 15 g/l.d by fermenting under vacuum (80 mm) or with continuous sparging of CO2 (0.4 vvm). Under these conditions the requirement of sulfite for optimum production of glycerol was reduced by two thirds (20 g/l), the ethanol concentration in the medium was kept below 30 g/l and the competence of yeast cells to ferment was conserved throughout the fermentation period for up to 20 days. In addition to the above, the rate of incorporation of sulfite had a significant effect on glucose fermentation and glycerol yields. There was an optimal relationship between glycerol yields and the molar ratio of sulfite to glucose consumed, which for cane molasses was 0.67. This ratio was characteristic of the medium composition.

  1. Improved glycerol production from cane molasses by the sulfite process with vacuum or continuous carbon dioxide sparging during fermentation

    Energy Technology Data Exchange (ETDEWEB)

    Kalle, G.P.; Naik, S.C.; Lashkari, B.Z.

    1985-01-01

    The conventional sulfite process for glycerol production from molasses using Saccharomyces cerevisiae var. Hansen was modified to obtain product concentrations of up to 230 g/l and productivity of 15 g/l·d by fermenting under vacuum (80 mm) or with continuous sparging of CO2 (0.4 vvm). Under these conditions the requirement of sulfite for optimum production of glycerol was reduced by two thirds (20 g/l), the ethanol concentration in the medium was kept below 30 g/l and the competence of yeast cells to ferment was conserved throughout the fermentation period for up to 20 days. In addition to the above, the rate of incorporation of sulfite had a significant effect on glucose fermentation and glycerol yields. There was an optimal relationship between glycerol yields and the molar ratio of sulfite to glucose consumed, which for cane molasses was 0.67. This ratio was characteristic of the medium composition. 10 references, 4 figures, 3 tables.

  2. Dynamic Programming Approach for Exact Decision Rule Optimization

    KAUST Repository

    Amin, Talha

    2013-01-01

    This chapter is devoted to the study of an extension of dynamic programming approach that allows sequential optimization of exact decision rules relative to the length and coverage. It contains also results of experiments with decision tables from UCI Machine Learning Repository. © Springer-Verlag Berlin Heidelberg 2013.

  3. Optimization of β-decision rules relative to number of misclassifications

    KAUST Repository

    Zielosko, Beata

    2012-01-01

    In the paper, we present an algorithm for optimization of approximate decision rules relative to the number of misclassifications. The considered algorithm is based on extensions of dynamic programming and constructs a directed acyclic graph Δβ(T). Based on this graph we can describe the whole set of so-called irredundant β-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 Springer-Verlag.

  4. Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties

    Science.gov (United States)

    Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.

    2017-12-01

    Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. Then we propose the risk assessment model, the risk of decision-making errors and rank uncertainty degree to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.

  5. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic

  6. A case study of optimization in the decision process: Siting groundwater monitoring wells

    International Nuclear Information System (INIS)

    Cardwell, H.; Huff, D.; Douthitt, J.; Sale, M.

    1993-12-01

    Optimization is one of the tools available to assist decision makers in balancing multiple objectives and concerns. In a case study of the siting decision for groundwater monitoring wells, we look at the influence of the optimization models on the decisions made by the responsible groundwater specialist. This paper presents a multi-objective integer programming model for determining the location of monitoring wells associated with a groundwater pump-and-treat remediation. After presenting the initial optimization results, we analyze the actual decision and revise the model to incorporate elements of the problem that were later identified as important in the decision-making process. The results of a revised model are compared to the actual siting plans, the recommendations from the initial optimization runs, and the initial monitoring network proposed by the decision maker
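
A miniature stand-in for the kind of siting model the abstract formulates: pick a fixed number of monitoring wells from candidate locations so that plume coverage is maximized. Brute force replaces the paper's multi-objective integer program, and the candidate wells, coverage sets and two-well budget are all invented.

```python
# Toy monitoring-well siting: choose n wells maximizing the number of plume zones covered.
# Brute-force search over candidates; all locations and coverage sets are invented.

from itertools import combinations

# Which plume zones (1..6) each candidate well location would monitor:
coverage = {
    "W1": {1, 2},
    "W2": {2, 3, 4},
    "W3": {4, 5},
    "W4": {1, 5, 6},
}

def best_siting(n_wells):
    """Return the well subset of size n_wells covering the most plume zones."""
    return max(
        combinations(coverage, n_wells),
        key=lambda wells: len(set().union(*(coverage[w] for w in wells))),
    )

picked = best_siting(2)
covered = set().union(*(coverage[w] for w in picked))
print(picked, "covers zones", sorted(covered))   # ('W2', 'W4') covers all six zones
```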

  7. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization of decision trees and decision rules) to conduct experiments. We show that, for each monotone Boolean function with at most five variables, there exists a totally optimal decision tree which is optimal with respect to both depth and number of nodes.

  8. Demonstrating practical application of soil and groundwater clean-up and recovery technologies at natural gas processing facilities: Bioventing, air sparging and wetlands remediation

    International Nuclear Information System (INIS)

    Moore, B.

    1996-01-01

    This issue of the project newsletter described the nature of bioventing, air sparging and wetland remediation. It reviewed their effectiveness in remediating hydrocarbon-contaminated soil above the groundwater surface. Bioventing was described as an effective, low-cost treatment in which air is pumped below ground to stimulate indigenous bacteria. The bacteria then use the oxygen to consume the hydrocarbons, converting them to CO2 and water. Air sparging involves the injection of air below the groundwater surface. As the air rises, hydrocarbons are stripped from the contaminated soil and water. The advantage of air sparging is that it cleans contaminated soil and water from below the groundwater surface. Hydrocarbon contamination of wetlands was described as fairly common. Conventional remediation methods of excavation, trenching, and bellholes to remove contamination often cause extreme harm to the ecosystem. Recent experimental evidence suggests that wetlands may be capable of attenuating contaminated water through natural processes. Four hydrocarbon-contaminated wetlands in Alberta are currently under study. Results to date show that peat's high organic content promotes sorption and biodegradation and that some crude oil spills can be resolved by natural processes. It was suggested that, assuming peat is present, a good clean-up approach may be to contain the contaminant source, monitor the lateral and vertical extent of contamination, and wait for natural processes to resolve the problem. 3 figs

  9. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory]; Porter, Reid [Los Alamos National Laboratory]

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low-dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant tree sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  10. Optimization of approximate decision rules relative to number of misclassifications

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    In the paper, we study an extension of dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T) which is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.

  11. Optimization of approximate decision rules relative to number of misclassifications

    KAUST Repository

    Amin, Talha

    2012-12-01

    In the paper, we study an extension of dynamic programming approach which allows optimization of approximate decision rules relative to the number of misclassifications. We introduce an uncertainty measure J(T) which is a difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T). Based on this graph we can describe the whole set of so-called irredundant γ-decision rules. We can optimize rules from this set according to the number of misclassifications. Results of experiments with decision tables from the UCI Machine Learning Repository are presented. © 2012 The authors and IOS Press. All rights reserved.

  12. CaBr2 hydrolysis for HBr production using a direct sparging contactor

    International Nuclear Information System (INIS)

    Doctor, R.D.; Yang, J.; Panchal, Ch.B.; Lottes, St.A.; Lyczkowski, R.W.

    2010-01-01

    We investigated a novel, continuous hybrid cycle for hydrogen production employing both heat and electricity. Calcium bromide (CaBr2) hydrolysis, which is endothermic, generates hydrogen bromide (HBr), and this is electrolysed to produce hydrogen. CaBr2 hydrolysis at 1050 K is endothermic with a 181.5 kJ/mol heat of reaction, and the free energy change is positive at 99.6 kJ/mol. What makes this hydrolysis reaction attractive is both its rate and the fact that well over half the thermodynamic requirement for the water-splitting free energy of ΔG_T = 285.8 kJ/mol is supplied at this stage using heat rather than electricity. These experiments provide support for a second-order hydrolysis reaction in CaBr2 forming a complex involving CaBr2 and CaO, and the system appears to be: 3CaBr2 + H2O → (CaBr2)2·CaO + 2HBr. This reaction is highly endothermic and the complex also includes some water of hydration. COMSOL(TM) multi-physics modelling of sparging steam into a calcium bromide melt guided the design of an experiment using a mullite tube (ID 70 mm) capable of holding 0.3-0.5 kg (1.5-2.5 × 10^-3 kmol) of CaBr2, forming a melt with a maximum depth of 0.08 m. Half of the experiments employed packings. Sparging steam at a rate of 0.02-0.04 mol/mol of CaBr2 per minute into this molten bath promptly yielded HBr in stable operation that converted up to 19 mol% of the calcium bromide. The kinetic constant derived from the experimental data was 2.17 × 10^-12 kmol s^-1 m^-2 MPa^-1 for the hydrolysis reaction. (authors)

  13. People adopt optimal policies in simple decision-making, after practice and guidance.

    Science.gov (United States)

    Evans, Nathan J; Brown, Scott D

    2017-04-01

    Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.

  14. ERDOS 1.0. Emergency response decisions as problems of optimal stopping

    International Nuclear Information System (INIS)

    Pauwels, N.

    1998-11-01

    The ERDOS software is a stochastic dynamic program to support the decision problem of preventively evacuating the workers of an industrial company threatened by a nuclear accident that may take place in the near future with a particular probability. ERDOS treats this problem as one of optimal stopping: the governmental decision maker initially holds a call option enabling him to postpone the evacuation decision and observe the further evolution of the alarm situation. As such, he has to decide on the optimal point in time to exercise this option, i.e. to take the irreversible decision to evacuate the threatened workers. ERDOS makes it possible to calculate the expected costs of an optimal intervention strategy and to compare this outcome with the costs resulting from a myopic evacuation decision that ignores the prospect of more complete information at later stages of the decision process. Furthermore, ERDOS determines the free boundary, giving the critical severity, as a function of time, whose exceedance triggers immediate evacuation. Finally, the software provides useful insights into the financial implications of losing time during the initial stages of the decision process (due to the gathering of information, discussions on the intervention strategy and so on).
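
The optimal-stopping structure described above can be sketched with a toy backward induction: at each step the decision maker either evacuates at a fixed cost or waits, risking that the accident occurs with a severity-dependent probability. This is not the ERDOS model; all costs, probabilities and the random-walk severity process are invented, but the loop at the end extracts the analogue of the free boundary the abstract mentions.

```python
# Toy evacuate-or-wait optimal stopping, solved by backward induction. All numbers invented.

EVAC_COST, ACCIDENT_COST = 10.0, 100.0
P_UP, HORIZON = 0.5, 5                         # severity follows a simple +/-1 random walk

def hazard(severity):
    """Assumed probability that the accident occurs during the current step."""
    return min(1.0, max(0.0, 0.08 * severity))

def solve():
    """Backward induction over (time, severity); returns expected-cost table and policy."""
    value, policy = {}, {}
    for s in range(-HORIZON, HORIZON + 1):
        value[(HORIZON, s)] = 0.0              # the threat is over at the horizon
    for t in range(HORIZON - 1, -1, -1):
        for s in range(-HORIZON, HORIZON + 1):
            nxt = (P_UP * value[(t + 1, min(s + 1, HORIZON))]
                   + (1 - P_UP) * value[(t + 1, max(s - 1, -HORIZON))])
            wait = hazard(s) * ACCIDENT_COST + (1 - hazard(s)) * nxt
            value[(t, s)] = min(EVAC_COST, wait)
            policy[(t, s)] = "evacuate" if EVAC_COST <= wait else "wait"
    return value, policy

value, policy = solve()
for t in range(HORIZON):                       # the "free boundary": rises near the horizon
    crit = [s for s in range(-HORIZON, HORIZON + 1) if policy[(t, s)] == "evacuate"]
    print(f"t={t}: evacuate once severity >= {min(crit)}" if crit else f"t={t}: always wait")
```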

  15. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  16. Decision optimization of case-based computer-aided decision systems using genetic algorithms with application to mammography

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Habas, Piotr A; Zurada, Jacek M; Tourassi, Georgia D

    2008-01-01

    This paper presents an optimization framework for improving case-based computer-aided decision (CB-CAD) systems. The underlying hypothesis of the study is that each example in the knowledge database of a medical decision support system has different importance in the decision making process. A new decision algorithm incorporating an importance weight for each example is proposed to account for these differences. The search for the best set of importance weights is defined as an optimization problem and a genetic algorithm is employed to solve it. The optimization process is tailored to maximize the system's performance according to clinically relevant evaluation criteria. The study was performed using a CAD system developed for the classification of regions of interest (ROIs) in mammograms as depicting masses or normal tissue. The system was constructed and evaluated using a dataset of ROIs extracted from the Digital Database for Screening Mammography (DDSM). Experimental results show that, according to receiver operating characteristic (ROC) analysis, the proposed method significantly improves the overall performance of the CAD system as well as its average specificity for high breast mass detection rates.
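
    A minimal sketch of the underlying idea, per-case importance weights tuned by a genetic algorithm to maximize AUC, is given below. The synthetic two-feature data, the exponential similarity measure, the rank-based AUC, and all GA settings are placeholders of our own; this is not the DDSM-based CAD system described in the record.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic stand-in for a case knowledge base: 2-D features, label 1 = mass, 0 = normal.
        X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(1.2, 1, (60, 2))])
        y = np.r_[np.zeros(60), np.ones(60)]
        Q, qy = X[::3], y[::3]                        # queries used for scoring (toy split)

        def scores(weights):
            """Case-based decision index: importance-weighted similarity to positive vs. all cases."""
            d = np.linalg.norm(Q[:, None, :] - X[None, :, :], axis=2)
            sim = weights * np.exp(-d)
            return sim[:, y == 1].sum(axis=1) / (sim.sum(axis=1) + 1e-12)

        def auc(s):
            """Rank-based AUC: probability that a positive query outscores a negative one."""
            pos, neg = s[qy == 1], s[qy == 0]
            return (pos[:, None] > neg[None, :]).mean()

        # Plain genetic algorithm over the vector of case importance weights.
        pop = rng.uniform(0, 1, (30, len(X)))
        for gen in range(40):
            fit = np.array([auc(scores(w)) for w in pop])
            parents = pop[np.argsort(fit)[-10:]]                  # truncation selection
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(0, 10, 2)]
                mask = rng.random(len(X)) < 0.5                   # uniform crossover
                child = np.where(mask, a, b) + rng.normal(0, 0.05, len(X))
                children.append(np.clip(child, 0, 1))             # mutation + bounds
            pop = np.array(children)

        print("AUC, uniform weights:", auc(scores(np.ones(len(X)))))
        print("AUC, tuned weights  :", max(auc(scores(w)) for w in pop))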

  17. Identification of Optimal Preventive Maintenance Decisions for Composite Components

    NARCIS (Netherlands)

    Laks, P.; Verhagen, W.J.C.; Gherman, B.; Porumbel, I.

    2018-01-01

    This research proposes a decision support tool which identifies cost-optimal maintenance decisions for a given planning period. Simultaneously, the reliability state of the component is kept at or below a given reliability threshold: a failure limit policy applies. The tool is developed to support

  18. Structure of two-phase adiabatic flow in air sparging regime in vertical cylindrical channel with water

    Directory of Open Access Journals (Sweden)

    V. I. Solonin

    2014-01-01

    Full Text Available The article presents research on two-phase adiabatic flow in the air sparging regime in a vertical cylindrical channel filled with water. The purpose of the work is to obtain experimental data for further analysis of the behaviour of the moving phases. The research used the optical PIV (Particle Image Velocimetry) method because of its noninvasiveness, obtaining data without any disturbing effect on the flow. A laser sheet illuminated fluorescent particles admixed in the water along the channel length. A digital camera recorded their motion over a certain time interval, which allowed the velocity vector fields to be built. As a result, gas-phase velocity components typical for the steady region of the channel, and their relations for various volumetric air flow rates, were obtained. The motion of an air bubble and of its surrounding liquid was analysed, and the most probable direction of phase motion in the channel under the sparging regime was obtained by building statistical scalar fields. The use of image processing enabled an analysis of the initial region of the air inlet into the liquid. A characteristic curve of the bubble offset from the axis for various volumetric gas flow rates and channel diameters is defined. The values of the vertical components of the liquid velocity in the inlet part of the channel are calculated. Using the obtained gas-phase velocities, the true void fraction was calculated and compared with the void fraction calculated from the change in liquid level in the channel. The obtained velocities were compared with those of other researchers, and the small difference in their values was explained by experimental conditions. The article is one of a series of works investigating two-phase flows without disturbing them. The obtained data allow us to understand the character of two-phase flow motion in

  19. The impact of chief executive officer optimism on hospital strategic decision making.

    Science.gov (United States)

    Langabeer, James R; Yao, Emery

    2012-01-01

    Previous strategic decision making research has focused mostly on the analytical positioning approach, which broadly emphasizes an alignment between rationality and the external environment. In this study, we propose that hospital chief executive optimism (or the general tendency to expect positive future outcomes) will moderate the relationship between a comprehensively rational decision-making process and organizational performance. The purpose of this study was to explore the impact that dispositional optimism has on the well-established relationship between rational decision-making processes and organizational performance. Specifically, we hypothesized that optimism will moderate the relationship between the level of rationality and the organization's performance. We further suggest that this relationship will be more negative for those with high, as opposed to low, optimism. We surveyed 168 hospital CEOs and used moderated hierarchical regression methods to statistically test our hypothesis. On the basis of a survey study of 168 hospital CEOs, we found evidence of a complex interplay of optimism in the rationality-organizational performance relationship. More specifically, we found that the two-way interactions between optimism and rational decision making were negatively associated with performance and that where optimism was the highest, the rationality-performance relationship was the most negative. Executive optimism was positively associated with organizational performance. We also found that greater perceived environmental turbulence, when interacting with optimism, did not have a significant interaction effect on the rationality-performance relationship. These findings suggest potential for broader participation in strategic processes and the use of organizational development techniques that assess executive disposition and traits for recruitment processes, because CEO optimism influences hospital-level processes. Research implications include incorporating

  20. Bayesian emulation for optimization in multi-step portfolio decisions

    OpenAIRE

    Irie, Kaoru; West, Mike

    2016-01-01

    We discuss the Bayesian emulation approach to computational solution of multi-step portfolio studies in financial time series. "Bayesian emulation for decisions" involves mapping the technical structure of a decision analysis problem to that of Bayesian inference in a purely synthetic "emulating" statistical model. This provides access to standard posterior analytic, simulation and optimization methods that yield indirect solutions of the decision problem. We develop this in time series portf...

  1. Confronting dynamics and uncertainty in optimal decision making for conservation

    Science.gov (United States)

    Williams, Byron K.; Johnson, Fred A.

    2013-06-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a

  2. Confronting dynamics and uncertainty in optimal decision making for conservation

    Science.gov (United States)

    Williams, Byron K.; Johnson, Fred A.

    2013-01-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a

  3. Confronting dynamics and uncertainty in optimal decision making for conservation

    International Nuclear Information System (INIS)

    Williams, Byron K; Johnson, Fred A

    2013-01-01

    The effectiveness of conservation efforts ultimately depends on the recognition that decision making, and the systems that it is designed to affect, are inherently dynamic and characterized by multiple sources of uncertainty. To cope with these challenges, conservation planners are increasingly turning to the tools of decision analysis, especially dynamic optimization methods. Here we provide a general framework for optimal, dynamic conservation and then explore its capacity for coping with various sources and degrees of uncertainty. In broadest terms, the dynamic optimization problem in conservation is choosing among a set of decision options at periodic intervals so as to maximize some conservation objective over the planning horizon. Planners must account for immediate objective returns, as well as the effect of current decisions on future resource conditions and, thus, on future decisions. Undermining the effectiveness of such a planning process are uncertainties concerning extant resource conditions (partial observability), the immediate consequences of decision choices (partial controllability), the outcomes of uncontrolled, environmental drivers (environmental variation), and the processes structuring resource dynamics (structural uncertainty). Where outcomes from these sources of uncertainty can be described in terms of probability distributions, a focus on maximizing the expected objective return, while taking state-specific actions, is an effective mechanism for coping with uncertainty. When such probability distributions are unavailable or deemed unreliable, a focus on maximizing robustness is likely to be the preferred approach. Here the idea is to choose an action (or state-dependent policy) that achieves at least some minimum level of performance regardless of the (uncertain) outcomes. We provide some examples of how the dynamic optimization problem can be framed for problems involving management of habitat for an imperiled species, conservation of a
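
    The contrast drawn in these records between maximizing expected return and maximizing robustness can be made concrete with a deliberately tiny example. The actions, the two competing system models, and all payoff numbers below are invented for illustration; the real problem is dynamic and state-dependent, which this one-shot sketch ignores.

        # Two competing system models (structural uncertainty) predict different objective
        # returns for each management action; all numbers are illustrative only.
        returns = {                      # action -> (return under model A, return under model B)
            "do nothing":       (2.0, 2.0),
            "restore habitat":  (9.0, 1.0),   # great if model A is right, poor otherwise
            "protect corridor": (5.0, 4.0),   # solid under either model
        }
        p_model_A = 0.6                  # belief weight placed on model A (assumed available)

        expected = {a: p_model_A * rA + (1 - p_model_A) * rB for a, (rA, rB) in returns.items()}
        robust   = {a: min(rA, rB) for a, (rA, rB) in returns.items()}

        print("maximize expected return ->", max(expected, key=expected.get))   # restore habitat
        print("maximize robustness      ->", max(robust, key=robust.get))       # protect corridor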

  4. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction is suffering due to the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under unpractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to

  5. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)

    2015-05-15

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction is suffering due to the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under unpractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to
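
    Reliability in the sense used above, the probability that a candidate scheme remains feasible under parameter uncertainty, can be estimated by plain Monte Carlo sampling. The sketch below is only an illustration: the two abatement measures, their removal efficiencies, unit costs, budget and reduction target are invented placeholders, not values from the Lake Dianchi case.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 20000                                         # Monte Carlo samples of uncertain parameters

        # A candidate scheme: load (tons) treated by two hypothetical measures.
        treated = np.array([800.0, 400.0])                # fixed decision variables
        target_reduction = 900.0                          # required load reduction, tons
        budget = 60.0                                     # available budget, arbitrary money units

        # Uncertain removal efficiencies and unit costs (placeholder distributions).
        eff  = np.clip(rng.normal([0.75, 0.85], [0.10, 0.05], (N, 2)), 0, 1)
        cost = rng.normal([0.04, 0.05], [0.005, 0.01], (N, 2))

        reduction_ok = (eff * treated).sum(axis=1) >= target_reduction
        budget_ok    = (cost * treated).sum(axis=1) <= budget
        print("P(reduction target met) =", reduction_ok.mean())
        print("P(within budget)        =", budget_ok.mean())
        print("reliability (both)      =", (reduction_ok & budget_ok).mean())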

  6. A Branch-and-Price approach to find optimal decision trees

    NARCIS (Netherlands)

    Firat, M.; Crognier, Guillaume; Gabor, Adriana; Zhang, Y.

    2018-01-01

    In the Artificial Intelligence (AI) field, decision trees have gained importance due to their effectiveness in solving classification and regression problems. Recently, the literature has seen the problem of finding optimal decision trees formulated as Mixed Integer Linear Programming (MILP) models. This

  7. Optimal decisions principles of programming

    CERN Document Server

    Lange, Oskar

    1971-01-01

    Optimal Decisions: Principles of Programming deals with all important problems related to programming.This book provides a general interpretation of the theory of programming based on the application of the Lagrange multipliers, followed by a presentation of the marginal and linear programming as special cases of this general theory. The praxeological interpretation of the method of Lagrange multipliers is also discussed.This text covers the Koopmans' model of transportation, geometric interpretation of the programming problem, and nature of activity analysis. The solution of t

  8. Optimization of protection as a decision-making tool for radioactive waste disposal

    International Nuclear Information System (INIS)

    Bragg, K.

    1988-03-01

    This paper discusses whether optimization of radiation protection is a workable or helpful concept or tool with respect to decisions in the field of long-term radioactive waste management. Examples of three waste types (high-level, low-level and uranium mine tailings) are used to illustrate that actual decisions are made taking account of more complex factors and that optimization of protection plays a relatively minor role. It is thus concluded that it is not a useful general tool for waste management decision-making. Discussion of the nature of the differences between technical and non-technical factors is also presented along with suggestions to help facilitate future decision-making

  9. Decision-Aiding and Optimization for Vertical Navigation of Long-Haul Aircraft

    Science.gov (United States)

    Patrick, Nicholas J. M.; Sheridan, Thomas B.

    1996-01-01

    Most decisions made in the cockpit are related to safety, and have therefore been proceduralized in order to reduce risk. There are very few which are made on the basis of a value metric such as economic cost. One which can be shown to be value based, however, is the selection of a flight profile. Fuel consumption and flight time both have a substantial effect on aircraft operating cost, but they cannot be minimized simultaneously. In addition, winds, turbulence, and performance vary widely with altitude and time. These factors make it important and difficult for pilots to (a) evaluate the outcomes associated with a particular trajectory before it is flown and (b) decide among possible trajectories. The two elements of this problem considered here are: (1) determining what constitutes optimality, and (2) finding optimal trajectories. Pilots and dispatchers from major U.S. airlines were surveyed to determine which attributes of the outcome of a flight they considered the most important. Avoiding turbulence, for passenger comfort, topped the list of items which were not safety related. Pilots' decision making about the selection of flight profile on the basis of flight time, fuel burn, and exposure to turbulence was then observed. Of the several behavioral and prescriptive decision models invoked to explain the pilots' choices, utility maximization is shown to best reproduce the pilots' decisions. After considering more traditional methods for optimizing trajectories, a novel method is developed using a genetic algorithm (GA) operating on a discrete representation of the trajectory search space. The representation is a sequence of command altitudes, and was chosen to be compatible with the constraints imposed by Air Traffic Control, and with the training given to pilots. Since trajectory evaluation for the GA is performed holistically, a wide class of objective functions can be optimized easily. Also, using the GA it is possible to compare the costs associated with
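
    The representation described above, a sequence of command altitudes scored holistically, lends itself to a very small genetic-algorithm sketch. The cost model (a toy fuel term, a turbulence band to avoid, and an altitude-change penalty), the flight-level range, and the GA settings below are all invented placeholders, not the objective function or algorithm of the cited work.

        import numpy as np

        rng = np.random.default_rng(2)
        LEVELS = np.arange(28, 43)            # allowed flight levels (FL280-FL420), one gene per segment
        SEGMENTS = 10

        def cost(profile):
            """Holistic trajectory cost: toy fuel term + turbulence penalty + altitude-change effort."""
            fuel = np.sum(60.0 - profile)                      # flying higher burns less in this toy model
            turb = np.sum((profile >= 33) & (profile <= 35))   # a turbulent band to avoid
            moves = np.sum(np.abs(np.diff(profile)))           # climbs/descents and ATC workload
            return 1.0 * fuel + 4.0 * turb + 0.5 * moves

        pop = rng.choice(LEVELS, size=(40, SEGMENTS))
        for gen in range(100):
            fitness = np.array([cost(p) for p in pop])
            parents = pop[np.argsort(fitness)[:15]]            # keep the cheapest profiles
            children = []
            while len(children) < len(pop):
                a, b = parents[rng.integers(0, 15, 2)]
                cut = rng.integers(1, SEGMENTS)                # one-point crossover
                child = np.r_[a[:cut], b[cut:]]
                if rng.random() < 0.3:                         # mutate one command altitude
                    child[rng.integers(SEGMENTS)] = rng.choice(LEVELS)
                children.append(child)
            pop = np.array(children)

        best = min(pop, key=cost)
        print("best command-altitude sequence (FL):", best, " cost:", cost(best))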

  10. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model.

    Science.gov (United States)

    Rajavel, Rajkumar; Thangarathinam, Mala

    2015-01-01

    Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on the past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, using the stochastic decision tree scenario, can maximize the revenue among the participants available in the cloud service negotiation framework.

  11. Optimization of decision rules based on dynamic programming approach

    KAUST Repository

    Zielosko, Beata

    2014-01-14

    This chapter is devoted to the study of an extension of the dynamic programming approach which allows optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure that is the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs "attribute = value". The algorithm finishes the partitioning of a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository. © 2014 Springer International Publishing Switzerland.
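
    The uncertainty measure defined above (the share of rows not carrying the table's most common decision) is easy to state in code. The toy decision table and the threshold below are invented; only the measure itself and the stopping test follow the description in the record.

        from collections import Counter

        def uncertainty(rows):
            """Share of rows NOT labeled with the table's most common decision."""
            decisions = [r[-1] for r in rows]          # last column holds the decision
            top = Counter(decisions).most_common(1)[0][1]
            return (len(rows) - top) / len(rows)

        # Toy decision table: (attribute1, attribute2, decision); gamma is the stopping threshold.
        table = [(0, 1, "a"), (0, 0, "a"), (0, 1, "b"),
                 (1, 0, "b"), (1, 1, "b"), (1, 0, "b")]
        gamma = 0.2

        print(uncertainty(table))                      # 2/6 > gamma -> keep partitioning
        sub = [r for r in table if r[0] == 1]          # subtable given by the pair "attribute1 = 1"
        print(uncertainty(sub))                        # 0.0 <= gamma -> partitioning stops here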

  12. A feasibility study on the bioconversion of CO2 and H2 to biomethane by gas sparging through polymeric membranes.

    Science.gov (United States)

    Díaz, I; Pérez, C; Alfaro, N; Fdz-Polanco, F

    2015-06-01

    In this study, the potential of a pilot hollow-fiber membrane bioreactor for the conversion of H2 and CO2 to CH4 was evaluated. The system transformed 95% of H2 and CO2 fed at a maximum loading rate of 40.2 [Formula: see text] and produced 0.22m(3) of CH4 per m(3) of H2 fed at thermophilic conditions. H2 mass transfer to the liquid phase was identified as the limiting step for the conversion, and kLa values of 430h(-1) were reached in the bioreactor by sparging gas through the membrane module. A simulation showed that the bioreactor could upgrade biogas at a rate of 25m(3)/mR(3)d, increasing the CH4 concentration from 60 to 95%v. This proof-of-concept study verified that gas sparging through a membrane module can efficiently transfer H2 from gas to liquid phase and that the conversion of H2 and CO2 to biomethane is feasible on a pilot scale at noteworthy load rates. Copyright © 2015 Elsevier Ltd. All rights reserved.
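
    The reported yield can be put in perspective with the stoichiometry of hydrogenotrophic methanogenesis, 4H2 + CO2 -> CH4 + 2H2O, which caps the yield at 0.25 m(3) of CH4 per m(3) of H2. The short check below uses only that stoichiometry and the figure quoted in the abstract.

        # Stoichiometric ceiling: 4 volumes of H2 give at most 1 volume of CH4.
        stoich_yield = 1 / 4               # m3 CH4 per m3 H2
        observed = 0.22                    # m3 CH4 per m3 H2 fed (from the abstract)
        print(f"fraction of the stoichiometric yield achieved: {observed / stoich_yield:.0%}")   # ~88%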

  13. Research on the decision-making model of land-use spatial optimization

    Science.gov (United States)

    He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu

    2009-10-01

    Using the optimization results for landscape pattern and land use structure as constraints on the CA simulation, a decision-making model of land-use spatial optimization is established by coupling the landscape pattern model with cellular automata, so that quantitative and spatial optimization of land use are realized simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.

  14. Optimized approach to decision fusion of heterogeneous data for breast cancer diagnosis

    International Nuclear Information System (INIS)

    Jesneck, Jonathan L.; Nolte, Loren W.; Baker, Jay A.; Floyd, Carey E.; Lo, Joseph Y.

    2006-01-01

    As more diagnostic testing options become available to physicians, it becomes more difficult to combine various types of medical information together in order to optimize the overall diagnosis. To improve diagnostic performance, here we introduce an approach to optimize a decision-fusion technique to combine heterogeneous information, such as from different modalities, feature categories, or institutions. For classifier comparison we used two performance metrics: the receiver operating characteristic (ROC) area under the curve (AUC) and the normalized partial area under the curve (pAUC). This study used four classifiers: linear discriminant analysis (LDA), an artificial neural network (ANN), and two variants of our decision-fusion technique, AUC-optimized (DF-A) and pAUC-optimized (DF-P) decision fusion. We applied each of these classifiers with 100-fold cross-validation to two heterogeneous breast cancer data sets: one of mass lesion features and a much more challenging one of microcalcification lesion features. For the calcification data set, DF-A outperformed the other classifiers in terms of AUC (p 0.10), the DF-P did significantly improve specificity versus the LDA at both 98% and 100% sensitivity (p<0.04). In conclusion, decision fusion directly optimized clinically significant performance measures, such as AUC and pAUC, and sometimes outperformed two well-known machine-learning techniques when applied to two different breast cancer data sets.

  15. Making the Optimal Decision in Selecting Protective Clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2008-01-01

    Protective clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered. This article discusses these factors as well as surveys of plants regarding their level of usage of single-use protective clothing (SUPC) and should help individuals making decisions about protective clothing as it applies to their application. Individuals considering using SUPC should not jump to conclusions. The survey conducted clearly indicates that plants have different drivers. An evaluation should be performed to understand the facility's true drivers for selecting clothing. It is recommended that an interdisciplinary team be formed, including representatives from budgets and cost, safety, radwaste, health physics, and key user groups, to perform the analysis. The right questions need to be asked and answered by the company providing the clothing to formulate a proper perspective and conclusion. The conclusions and recommendations need to be shared with senior management so that the drivers, expected results, and associated costs are understood and endorsed. In the end, the individual making the recommendation should ask himself/herself: 'Is my decision emotional, or logical and economical?' 'Have I reached the optimal decision for my plant?'

  16. Optimization of protection as a decision-making tool, for radioactive waste disposal

    International Nuclear Information System (INIS)

    Bragg, K.

    1988-01-01

    Politically-based considerations and processes, including public perception and confidence, appear to be the basis for real decisions affecting waste management activities such as siting, construction, operation and monitoring. Optimization of radiation protection is not a useful general tool for waste disposal decision making. Optimization of radiation protection is essentially a technical tool which can, under appropriate circumstances, provide a clear preference among major management options. The level of discrimination will be case-specific but, in general, only fairly coarse differences can be discriminated. The preferences determined by optimization of protection tend not to be related to the final choices made for disposal of radioactive wastes. Tools such as multi-attribute analysis are very useful as they provide a convenient means to rationalize the real decisions and give them some air of technical respectability. They do not, however, provide the primary basis for the decisions. Technical experts must develop an awareness of the non-technical approach to decision making and attempt to adjust their methods of analysis and their presentation of information to encourage dialogue rather than confrontation. Simple expressions of technical information will be needed and the use of analogues should prove helpful

  17. Optimizing Negotiation Conflict in the Cloud Service Negotiation Framework Using Probabilistic Decision Making Model

    Directory of Open Access Journals (Sweden)

    Rajkumar Rajavel

    2015-01-01

    Full Text Available Optimization of negotiation conflict in the cloud service negotiation framework is identified as one of the major challenging issues. This negotiation conflict occurs during the bilateral negotiation process between the participants due to misperception, aggressive behavior, and uncertain preferences and goals about their opponents. Existing research work focuses on the prerequest context of negotiation conflict optimization by grouping similar negotiation pairs using distance, binary, context-dependent, and fuzzy similarity approaches. To some extent, these approaches can maximize the success rate and minimize the communication overhead among the participants. To further optimize the success rate and communication overhead, the proposed research work introduces a novel probabilistic decision making model for optimizing the negotiation conflict in the long-term negotiation context. This decision model formulates the problem of managing the different types of negotiation conflict that occur during the negotiation process as a multistage Markov decision problem. At each stage of the negotiation process, the proposed decision model generates a heuristic decision based on the past negotiation state information without causing any break-off among the participants. In addition, this heuristic decision, using the stochastic decision tree scenario, can maximize the revenue among the participants available in the cloud service negotiation framework.

  18. Mathematical optimization of incore nuclear fuel management decisions: Status and trends

    International Nuclear Information System (INIS)

    Turinsky, P.J.

    1999-01-01

    Nuclear fuel management involves making decisions about the number of fresh assemblies to purchase and their attributes (e.g. enrichment and burnable poison loading), the burnt fuel to reinsert, the location of the assemblies in the core (i.e. the loading pattern (LP)), and the insertion of control rods as a function of cycle exposure (i.e. the control rod pattern (CRP)). The out-of-core and incore nuclear fuel management problems denote an artificial separation of decisions to simplify the decision-making. The out-of-core problem involves multicycle analysis so that levelized fuel cycle cost can be evaluated, whereas the incore problem normally involves single-cycle analysis. Decision variables for the incore problem normally include all of the above noted decisions with the exception of the number of fresh assemblies, which is restricted by discharge burnup limits and therefore involves multicycle considerations. This paper reports on the progress that is being made in addressing the incore nuclear fuel management problem utilizing formal mathematical optimization methods. Advances in utilizing the simulated annealing, genetic algorithm and tabu search methods, with applications to pressurized and boiling water reactor incore optimization problems, will be reviewed. Recent work on the addition of multiobjective optimization capability to aid the decision maker, and the utilization of heuristic rules and incorporation of parallel algorithms to increase computational efficiency, will be discussed. (orig.)

  19. Treatment of waste salts by oxygen sparging and vacuum distillation

    International Nuclear Information System (INIS)

    Cho, Y.J.; Yang, H.C.; Kim, E.H.; Kin, I.T.; Eun, H.C.

    2007-01-01

    Full text of publication follows. During the electrorefining process of oxide spent fuel from LWRs, quantities of waste salt containing metal chloride species such as rare earth and actinide chlorides are generated, and the reuse of these waste salts is very important from both economic and environmental standpoints. In order to reuse the waste salts, a salt vacuum distillation method can be used. For the best separation by vacuum distillation, the metal chloride species in the waste salts must be converted into their oxide (or oxychloride) forms because of their low volatility compared to that of LiCl-KCl. In this study, an oxygen sparging process was adopted for the oxidation (or precipitation) of rare earth chlorides. The effects of oxygen flow rate and molten salt temperature on the conversion of rare earth chlorides to the precipitate phase (i.e. oxide or oxychloride) were investigated. In addition, the distillation characteristics of LiCl-KCl molten salt as a function of system pressure and temperature were studied. (authors)

  20. A cognitive decision agent architecture for optimal energy management of microgrids

    International Nuclear Information System (INIS)

    Velik, Rosemarie; Nicolay, Pascal

    2014-01-01

    Highlights: • We propose an optimization approach for energy management in microgrids. • The optimizer emulates processes involved in human decision making. • Optimization objectives are energy self-consumption and financial gain maximization. • We gain improved optimization results in significantly reduced computation time. - Abstract: Via the integration of renewable energy and storage technologies, buildings have started to change from passive (electricity) consumers to active prosumer microgrids. Along with this development come a shift from centralized to distributed production and consumption models as well as discussions about the introduction of variable demand–supply-driven grid electricity prices. Together with upcoming ICT and automation technologies, these developments open space to a wide range of novel energy management and energy trading possibilities to optimally use available energy resources. However, what is considered as an optimal energy management and trading strategy heavily depends on the individual objectives and needs of a microgrid operator. Accordingly, elaborating the most suitable strategy for each particular system configuration and operator need can become quite a complex and time-consuming task, which can massively benefit from computational support. In this article, we introduce a bio-inspired cognitive decision agent architecture for optimized, goal-specific energy management in (interconnected) microgrids, which are additionally connected to the main electricity grid. For evaluating the performance of the architecture, a number of test cases are specified targeting objectives like local photovoltaic energy consumption maximization and financial gain maximization. Obtained outcomes are compared against a modified simulating annealing optimization approach in terms of objective achievement and computational effort. Results demonstrate that the cognitive decision agent architecture yields improved optimization results in

  1. Application of Bayesian statistical decision theory to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-07-01

    Statistical decision theory could be an alternative for the optimization of preventive maintenance periodicity. In effect, this theory concerns the situation in which a decision maker has to make a choice between a set of reasonable decisions, and where the loss associated with a given decision depends on a probabilistic risk, called the state of nature. In the case of maintenance optimization, the decisions to be analyzed are the different periodicities proposed by the experts, given the observed feedback experience; the states of nature are the associated failure probabilities; and the losses are the expectations of the induced cost of maintenance and of the consequences of the failures. As failure probabilities concern rare events, at the ultimate state of RCM analysis (failure of a sub-component), and as the expected foreseeable behaviour of equipment has to be evaluated by experts, a Bayesian approach is successfully used to compute the states of nature. In Bayesian decision theory, a prior distribution for failure probabilities is modeled from expert knowledge and is combined with the limited stochastic information provided by feedback experience, giving a posterior distribution of failure probabilities. The optimized decision is the decision that minimizes the expected loss over the posterior distribution. This methodology has been applied to inspection and maintenance optimization of cylinders of diesel generator engines of 900 MW nuclear plants. In these plants, auxiliary electric power is supplied by 2 redundant diesel generators which are tested every 2 weeks for about 1 hour. Until now, during the yearly refueling of each plant, one endoscopic inspection of the diesel cylinders is performed, and every 5 operating years all cylinders are replaced. RCM has shown that cylinder failures could be critical. So Bayesian decision theory has been applied, taking into account expert opinions and the possibility of aging when the maintenance periodicity is extended. (authors). 8 refs., 5 figs., 1 tab
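
    The decision logic sketched above (an expert prior on the failure probability, updated with sparse feedback, then the periodicity with the smallest posterior expected loss) can be illustrated in a few lines. The Beta prior, the feedback counts, the cost figures and the simple aging factor below are all invented and are not taken from the study.

        import numpy as np

        rng = np.random.default_rng(3)

        # Expert prior on the yearly cylinder failure probability, encoded as a Beta law,
        # then updated with sparse feedback experience (all numbers are illustrative).
        alpha0, beta0 = 1.0, 200.0
        failures, exposure_years = 2, 400
        alpha, beta = alpha0 + failures, beta0 + exposure_years - failures    # posterior

        C_INSPECTION = 30.0        # cost of one endoscopic inspection (arbitrary units)
        C_FAILURE = 5000.0         # cost of an in-service cylinder failure
        AGING = 0.8                # assumed growth of the hazard per year without inspection
        candidates = [1, 2, 5, 8]  # candidate inspection periodicities, years

        p = rng.beta(alpha, beta, 50_000)              # posterior samples of the base failure probability
        for T in candidates:
            yearly = p[:, None] * (1 + AGING * np.arange(T))   # hazard grows between inspections
            risk = 1 - np.prod(1 - yearly, axis=1)             # >= 1 failure within the interval
            loss_per_year = (C_INSPECTION + C_FAILURE * risk.mean()) / T
            print(f"inspect every {T} yr: expected loss per year ~ {loss_per_year:6.1f}")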

  2. Optimization-based decision support systems for planning problems in processing industries

    NARCIS (Netherlands)

    Claassen, G.D.H.

    2014-01-01

    Summary

    Optimization-based decision support systems for planning problems in processing industries

    Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in

  3. Evolutionary Artificial Neural Network Weight Tuning to Optimize Decision Making for an Abstract Game

    Science.gov (United States)

    2010-03-01

    AFIT/GCS/ENG/10-06, Evolutionary Artificial Neural Network Weight Tuning to Optimize Decision Making for an Abstract Game (thesis). The record text consists only of title-page and figure-list fragments, including a figure captioned "Diagram of pLoGANN's Artificial Neural Network".

  4. Optimal decision making on the basis of evidence represented in spike trains.

    Science.gov (United States)

    Zhang, Jiaxiang; Bogacz, Rafal

    2010-05-01

    Experimental data indicate that perceptual decision making involves integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from gaussian distributions with the same variance across alternatives. In this article, we make a more realistic assumption that sensory evidence is represented in spike trains described by the Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, the neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two and multiple alternative choice tasks.
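
    For Poisson spike counts, the sequential probability ratio test mentioned above takes a particularly simple form: the log-likelihood ratio of two candidate firing rates is accumulated bin by bin until it crosses a decision bound. The rates, bin size and bound below are placeholders chosen only for illustration.

        import numpy as np

        rng = np.random.default_rng(4)

        R1, R2 = 40.0, 25.0        # candidate firing rates (spikes/s) for the two alternatives
        DT = 0.01                  # time bin, s
        BOUND = np.log(19)         # log((1 - e) / e) for a ~5% target error rate

        def sprt_trial(true_rate):
            """Accumulate the Poisson log-likelihood ratio until a bound is crossed."""
            llr, t = 0.0, 0.0
            while abs(llr) < BOUND:
                k = rng.poisson(true_rate * DT)                    # spike count in this bin
                llr += k * np.log(R1 / R2) - (R1 - R2) * DT        # evidence for R1 over R2
                t += DT
            return ("choose R1" if llr >= BOUND else "choose R2"), t

        choices, times = zip(*(sprt_trial(R1) for _ in range(2000)))
        print("accuracy when R1 is true:", np.mean([c == "choose R1" for c in choices]))
        print("mean decision time (s)  :", np.mean(times))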

  5. Antagonistic and Bargaining Games in Optimal Marketing Decisions

    Science.gov (United States)

    Lipovetsky, S.

    2007-01-01

    Game theory approaches to find optimal marketing decisions are considered. Antagonistic games with and without complete information, and non-antagonistic games techniques are applied to paired comparison, ranking, or rating data for a firm and its competitors in the market. Mix strategy, equilibrium in bi-matrix games, bargaining models with…

  6. Increase in volatilization of organic compounds using air sparging through addition in alcohol in a soil-water system.

    Science.gov (United States)

    Chao, Huan-Ping; Hsieh, Lin-Han Chiang; Tran, Hai Nguyen

    2018-02-15

    This study developed a novel method to promote the remediation efficiency of air sparging. According to the enhanced-volatilization theory presented in this study, selected alcohols added to groundwater can highly enhance the volatilization amounts of organic compounds with high Henry's law constants. In this study, the target organic compounds consisted of n-hexane, n-heptane, benzene, toluene, 1,1,2-trichloroethane, and tetrachloroethene. n-pentanol, n-hexanol, and n-heptanol were used to examine the changes in the volatilization amounts of organic compounds in the given period. Two types of soils with high and low organic matter were applied to evaluate the transport of organic compounds in the soil-water system. The volatilization amounts of the organic compounds increased with increasing alcohol concentrations. The volatilization amounts of the test organic compounds exhibited a decreasing order: n-heptanol>n-hexanol>n-pentanol. When 10mg/L n-heptanol was added to the system, the maximum volatilization enhancement rate was 18-fold higher than that in distilled water. Samples of soil with high organic matter might reduce the volatilization amounts by a factor of 5-10. In the present study, the optimal removal efficiency for aromatic compounds was approximately 98%. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    46 2010, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  8. Where should I send it? Optimizing the submission decision process.

    Directory of Open Access Journals (Sweden)

    Santiago Salinas

    Full Text Available How do scientists decide where to submit manuscripts? Many factors influence this decision, including prestige, acceptance probability, turnaround time, target audience, fit, and impact factor. Here, we present a framework for evaluating where to submit a manuscript based on the theory of Markov decision processes. We derive two models, one in which an author is trying to optimally maximize citations and another in which that goal is balanced by either minimizing the number of resubmissions or the total time in review. We parameterize the models with data on acceptance probability, submission-to-decision times, and impact factors for 61 ecology journals. We find that submission sequences beginning with Ecology Letters, Ecological Monographs, or PLOS ONE could be optimal depending on the importance given to time to acceptance or number of resubmissions. This analysis provides some guidance on where to submit a manuscript given the individual-specific values assigned to these disparate objectives.
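
    The trade-off described above can be evaluated for any fixed submission sequence by simple enumeration: the chance of reaching each journal, the expected citations if accepted there, and the expected months spent in review. The four hypothetical journals and the per-month penalty below are invented placeholders, not the 61-journal ecology data set used by the authors, and brute-force enumeration of orderings stands in for their Markov decision formulation.

        from itertools import permutations

        # Hypothetical journals: (name, acceptance probability, months in review, expected citations).
        journals = [
            ("Journal A", 0.10, 3.0, 40.0),
            ("Journal B", 0.25, 2.0, 25.0),
            ("Journal C", 0.50, 1.5, 12.0),
            ("Journal D", 0.90, 1.0,  5.0),
        ]

        def evaluate(seq, month_penalty=0.0):
            """Expected citations and expected months in review for a submission sequence."""
            reach, e_cit, e_time = 1.0, 0.0, 0.0
            for name, p, t, c in seq:
                e_time += reach * t            # review time is spent whenever the paper gets this far
                e_cit += reach * p * c         # accepted here with probability reach * p
                reach *= (1 - p)               # otherwise move on to the next journal
            return e_cit - month_penalty * e_time, e_cit, e_time

        for penalty in (0.0, 2.0):
            best = max(permutations(journals), key=lambda s: evaluate(s, penalty)[0])
            _, cit, months = evaluate(best, penalty)
            print(f"penalty {penalty}/month:", " -> ".join(j[0] for j in best),
                  f"(E[citations]={cit:.1f}, E[months in review]={months:.1f})")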

  9. Dynamic programming approach to optimization of approximate decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the study of an extension of dynamic programming approach which allows sequential optimization of approximate decision rules relative to the length and coverage. We introduce an uncertainty measure R(T) which is the number

  10. Extensions of dynamic programming as a new tool for decision tree optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of tree) and decision trees (which allow arbitrary level of accuracy). We study possibilities of sequential optimization of α-decision trees relative to different cost functions such as depth, average depth, and number of nodes. For decision trees, we analyze relationships between depth and number of misclassifications. We also discuss results of computer experiments with some datasets from UCI ML Repository. ©Springer-Verlag Berlin Heidelberg 2013.

  11. Optimization-based decision support systems for planning problems in processing industries

    OpenAIRE

    Claassen, G.D.H.

    2014-01-01

    Summary Optimization-based decision support systems for planning problems in processing industries Nowadays, efficient planning of material flows within and between supply chains is of vital importance and has become one of the most challenging problems for decision support in practice. The tremendous progress in hard- and software of the past decades was an important gateway for developing computerized systems that are able to support decision making on different levels within enterprises. T...

  12. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). There are some particular characteristics of the MDP developed in this paper which distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average cost method of MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters for stochastic optimization models in road network management motivates this study. This paper uses a data set acquired from the road authorities of the state of Victoria, Australia, to test the model and recommends steps in the computation of an MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
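
    One standard way to cast an average-cost MDP as a linear program, close in spirit to the dual linear programming solution mentioned above, is to optimize over state-action occupancy measures subject to flow-balance and normalization constraints. The three pavement states, two actions, transition matrices and costs below are invented placeholders, not the Victorian road data.

        import numpy as np
        from scipy.optimize import linprog

        # Three pavement states (0 good, 1 fair, 2 poor) and two actions (0 routine, 1 resurface).
        P = np.array([
            [[0.7, 0.3, 0.0],        # routine maintenance: pavement keeps deteriorating
             [0.0, 0.6, 0.4],
             [0.0, 0.0, 1.0]],
            [[0.9, 0.1, 0.0],        # resurfacing: mostly restores the good state
             [0.8, 0.2, 0.0],
             [0.7, 0.3, 0.0]],
        ])
        c = np.array([[1.0, 8.0],    # yearly costs: routine is cheap, resurfacing rises with damage
                      [3.0, 10.0],
                      [9.0, 14.0]])

        nS, nA = 3, 2
        idx = lambda s, a: s * nA + a                    # flatten (state, action) pairs

        A_eq = np.zeros((nS + 1, nS * nA))
        for s2 in range(nS):                             # stationary flow balance for every state
            for s in range(nS):
                for a in range(nA):
                    A_eq[s2, idx(s, a)] -= P[a, s, s2]
                    if s == s2:
                        A_eq[s2, idx(s, a)] += 1.0
        A_eq[nS, :] = 1.0                                # occupancy measures sum to one
        b_eq = np.r_[np.zeros(nS), 1.0]

        res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (nS * nA), method="highs")
        x = res.x.reshape(nS, nA)
        print("minimum average cost per year:", round(res.fun, 3))
        print("policy P[action | state] (states never visited get a zero row):")
        print(np.round(x / (x.sum(axis=1, keepdims=True) + 1e-12), 3))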

  13. Application of stochastic optimization to nuclear power plant asset management decisions

    International Nuclear Information System (INIS)

    Morton, D.; Koc, A.; Hess, S. M.

    2013-01-01

    We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self evident - to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or in the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision making process. (authors)

  14. Application of stochastic optimization to nuclear power plant asset management decisions

    Energy Technology Data Exchange (ETDEWEB)

    Morton, D. [Graduate Program in Operations Research and Industrial Engineering, University of Texas at Austin, Austin, TX, 78712 (United States); Koc, A. [IBM T.J. Watson Research Center, Business Analytics and Mathematical Sciences Dept., 1101 Kitchawan Rd., Yorktown Heights, NY, 10598 (United States); Hess, S. M. [Electric Power Research Institute, 300 Baywood Road, West Chester, PA, 19382 (United States)

    2013-07-01

    We describe the development and application of stochastic optimization models and algorithms to address an issue of critical importance in the strategic allocation of resources; namely, the selection of a portfolio of capital investment projects under the constraints of a limited and uncertain budget. This issue is significant and one that faces decision-makers across all industries. The objective of this strategic decision process is generally self evident - to maximize the value obtained from the portfolio of selected projects (with value usually measured in terms of the portfolio's net present value). However, heretofore, many organizations have developed processes to make these investment decisions using simplistic rule-based rank-ordering schemes. This approach has the significant limitation of not accounting for the (often large) uncertainties in the costs or economic benefits associated with the candidate projects or in the uncertainties in the actual funds available to be expended over the projected period of time. As a result, the simple heuristic approaches that typically are employed in industrial practice generate outcomes that are non-optimal and do not achieve the level of benefits intended. In this paper we describe the results of research performed to utilize stochastic optimization models and algorithms to address this limitation by explicitly incorporating the evaluation of uncertainties in the analysis and decision making process. (authors)
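
    A much-simplified stand-in for the portfolio problem described above is a chance-constrained selection solved by brute force: sample the uncertain costs, values and budget, and keep the subset with the highest expected value among those funded within budget often enough. The six candidate projects, their distributions and the 90% reliability level are invented placeholders, and exhaustive enumeration replaces the authors' stochastic programming models.

        from itertools import combinations
        import numpy as np

        rng = np.random.default_rng(5)

        # Candidate projects: mean NPV and mean cost (both uncertain); all figures illustrative.
        npv_mean  = np.array([8.0, 6.0, 5.0, 4.0, 3.0, 2.5])
        cost_mean = np.array([5.0, 4.0, 3.5, 2.5, 2.0, 1.5])
        N = 5000
        costs  = rng.normal(cost_mean, 0.15 * cost_mean, (N, 6))   # uncertain project costs
        values = rng.normal(npv_mean, 0.25 * npv_mean, (N, 6))     # uncertain project NPVs
        budget = rng.normal(12.0, 2.0, N)                          # uncertain available budget

        best = None
        for r in range(1, 7):
            for subset in combinations(range(6), r):
                sel = list(subset)
                funded = (costs[:, sel].sum(axis=1) <= budget).mean()   # funding reliability
                if funded < 0.90:                                       # chance constraint
                    continue
                value = values[:, sel].mean(axis=0).sum()               # expected portfolio NPV
                if best is None or value > best[0]:
                    best = (value, sel, funded)

        value, sel, funded = best
        print("selected projects:", sel, " expected NPV:", round(value, 2),
              " P(within budget):", round(funded, 2))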

  15. Road maintenance optimization through a discrete-time semi-Markov decision process

    International Nuclear Information System (INIS)

    Zhang Xueqing; Gao Hui

    2012-01-01

    Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled as a discrete-time Markov process such that an optimal maintenance policy can be obtained based on the Markov decision process, or as a renewal process such that an optimal maintenance policy can be obtained based on the renewal theory. However, the discrete-time Markov process cannot capture the real time at which the state transits while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transition has the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies to minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network having a large number of road segments and that they are convenient to incorporate various constraints on the decision process, for example, performance requirements and available budgets. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states and the randomness in selecting actions occurs only for a few states.
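
    The discrete Weibull holding-time assumption can be written down directly. In the commonly used Nakagawa-Osaki parameterization the survival function is S(t) = q**(t**beta); the short sketch below simply tabulates the resulting probability mass function and a (truncated) mean holding time for illustrative parameter values, which are not taken from the paper.

        import numpy as np

        def discrete_weibull_pmf(q, beta, t_max):
            """Nakagawa-Osaki discrete Weibull: P(T = t) = q**(t**beta) - q**((t + 1)**beta)."""
            t = np.arange(t_max + 1)
            survival = q ** (t.astype(float) ** beta)
            tail = q ** ((t_max + 1.0) ** beta)
            pmf = survival - np.append(survival[1:], tail)
            return t, pmf

        t, pmf = discrete_weibull_pmf(q=0.9, beta=1.4, t_max=30)    # years spent in a condition state
        print("P(holding time = 0..4 years):", np.round(pmf[:5], 3))
        print("mean holding time ~", round(float((t * pmf).sum()), 2), "years (truncated at 30)")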

  16. Optimization and analysis of decision trees and rules: Dynamic programming approach

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-08-01

    This paper is devoted to the consideration of the software system Dagger, created at KAUST. This system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between number of misclassifications and depth of decision trees), and between cost and uncertainty of decision trees. We describe features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.

  17. Optimization and analysis of decision trees and rules: Dynamic programming approach

    KAUST Repository

    Alkhalid, Abdulaziz; Amin, Talha M.; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    This paper is devoted to the consideration of the software system Dagger, created at KAUST. This system is based on extensions of dynamic programming. It allows sequential optimization of decision trees and rules relative to different cost functions, derivation of relationships between two cost functions (in particular, between number of misclassifications and depth of decision trees), and between cost and uncertainty of decision trees. We describe features of Dagger and consider examples of this system's work on decision tables from the UCI Machine Learning Repository. We also use Dagger to compare 16 different greedy algorithms for decision tree construction. © 2013 Taylor and Francis Group, LLC.

  18. Field Application of Modified In Situ Soil Flushing in Combination with Air Sparging at a Military Site Polluted by Diesel and Gasoline in Korea

    Science.gov (United States)

    Lee, Hwan; Lee, Yoonjin; Kim, Jaeyoung; Kim, Choltae

    2014-01-01

    In this study, the full-scale operation of soil flushing with air sparging was evaluated for improving the removal efficiency of petroleum at depths of less than 7 m at a military site in Korea. The target area was polluted by multiple gasoline and diesel fuel sources. The soil was composed of heterogeneous layers of granules, sand, silt and clay. The operation factors were systematically assessed using a column test and a pilot study before running the full-scale process at the site. The discharged TPH and BTEX (benzene, toluene, ethylbenzene, and xylenes) concentrations in the water were highest at 20 min and at a rate of 350 L/min, which was selected as the volume of air for the full-scale operation in the pilot air sparging test. The surfactant-aided condition was 1.4 times more efficient than the non-surfactant condition in the serial operations of modified soil flushing followed by air sparging. The hydraulic conductivity (3.13 × 10−3 cm/s) increased 4.7 times after the serial operation of both processes relative to the existing condition (6.61 × 10−4 cm/s). The removal efficiencies of TPH were 52.8%, 57.4%, and 61.8% for the soil layers at 6 to 7, 7 to 8, and 8 to 9 m, respectively. Therefore, the TPH removal was improved at depths of less than 7 m by using this modified remediation system. The removal efficiencies for the areas with TPH and BTEX concentrations of more than 500 and 80 mg/kg were 55.5% and 92.9%, respectively, at a pore volume of 2.9. The total TPH and BTEX masses removed during the full-scale operation were 5109 and 752 kg, respectively. PMID:25166919

  19. Second Order Optimality in Markov Decision Chains

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2017-01-01

    Roč. 53, č. 6 (2017), s. 1086-1099 ISSN 0023-5954 R&D Projects: GA ČR GA15-10331S Institutional support: RVO:67985556 Keywords: Markov decision chains * second order optimality * optimality conditions for transient, discounted and average models * policy and value iterations Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.379, year: 2016 http://library.utia.cas.cz/separaty/2017/E/sladky-0485146.pdf

  20. Particle swarm optimization of driving torque demand decision based on fuel economy for plug-in hybrid electric vehicle

    International Nuclear Information System (INIS)

    Shen, Peihong; Zhao, Zhiguo; Zhan, Xiaowen; Li, Jingwei

    2017-01-01

    In this paper, an energy management strategy based on logic threshold is proposed for a plug-in hybrid electric vehicle. The plug-in hybrid electric vehicle powertrain model is established using MATLAB/Simulink based on experimental tests of the power components, and is validated by comparison with a verified simulation model built in AVL Cruise. The influence of the driving torque demand decision on the fuel economy of the plug-in hybrid electric vehicle is studied using simulation. An optimization method for the driving torque demand decision, which refers to the relationship between accelerator pedal opening and driving torque demand, is formulated from the perspective of fuel economy. A particle swarm optimization with dynamically changing inertia weight is used to optimize the decision parameters. The simulation results show that the optimized driving torque demand decision can improve the PHEV fuel economy by 15.8% and 14.5% in the New European Driving Cycle and the Worldwide Harmonized Light Vehicles Test Cycle, respectively, using the same rule-based energy management strategy. The proposed optimization method provides a theoretical guide for calibrating the parameters of the driving torque demand decision to improve the fuel economy of the real plug-in hybrid electric vehicle. - Highlights: • The influence of the driving torque demand decision on the fuel economy is studied. • The optimization method for the driving torque demand decision is formulated. • An improved particle swarm optimization is utilized to optimize the parameters. • Fuel economy is improved by using the optimized driving torque demand decision.
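
    A minimal sketch of particle swarm optimization with a linearly decreasing inertia weight, the general mechanism the abstract builds on; the objective below is a stand-in test function, not the paper's fuel-economy model, and all parameter values are assumptions.

        import numpy as np

        def sphere(x):                      # stand-in objective (minimize)
            return float(np.sum(x ** 2))

        def pso(obj, dim=4, n_particles=20, iters=100, w_start=0.9, w_end=0.4,
                c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for t in range(iters):
                w = w_start - (w_start - w_end) * t / (iters - 1)   # linearly decreasing inertia
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([obj(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, float(pbest_val.min())

        print(pso(sphere))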

  1. Learning decision trees with flexible constraints and objectives using integer optimization

    NARCIS (Netherlands)

    Verwer, S.; Zhang, Y.

    2017-01-01

    We encode the problem of learning the optimal decision tree of a given depth as an integer optimization problem. We show experimentally that our method (DTIP) can be used to learn good trees up to depth 5 from data sets of size up to 1000. In addition to being efficient, our new formulation allows

  2. Optimal condition-based maintenance decisions for systems with dependent stochastic degradation of components

    International Nuclear Information System (INIS)

    Hong, H.P.; Zhou, W.; Zhang, S.; Ye, W.

    2014-01-01

    Components in engineered systems are subjected to stochastic deterioration due to the operating environmental conditions and the uncertainty in material properties. The components need to be inspected and possibly replaced based on preventive or failure replacement criteria to provide the intended and safe operation of the system. In the present study, we investigate the influence of dependent stochastic degradation of multiple components on the optimal maintenance decisions. We use a copula to model the dependent stochastic degradation of the components, and formulate the optimal decision problem based on the minimum expected cost rule and the stochastic dominance rules. The latter are used to account for the decision maker's risk attitude. We illustrate the developed probabilistic analysis approach and the influence of the dependency of the stochastic degradation on the preferred decisions through numerical examples
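
    A brief sketch of how a Gaussian copula can couple the degradation increments of two components, in the spirit of the modeling approach described above; the marginal distributions, correlation, and failure threshold are all assumed for illustration and are not taken from the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rho, n_paths, n_steps = 0.7, 2000, 50            # assumed correlation and horizon

        # Gaussian copula: correlated normals -> uniforms -> gamma-distributed increments.
        cov = np.array([[1.0, rho], [rho, 1.0]])
        z = rng.multivariate_normal([0.0, 0.0], cov, size=(n_paths, n_steps))
        u = stats.norm.cdf(z)                            # dependent uniforms, shape (paths, steps, 2)
        increments = stats.gamma.ppf(u, a=0.5, scale=0.2)  # assumed marginal per-step wear

        # Cumulative degradation of each component; "failure" when either exceeds 3.0.
        paths = increments.cumsum(axis=1)
        fail_prob = np.mean((paths > 3.0).any(axis=(1, 2)))
        print(f"probability either component exceeds threshold: {fail_prob:.3f}")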

  3. Dimensions of design space: a decision-theoretic approach to optimal research design.

    Science.gov (United States)

    Conti, Stefano; Claxton, Karl

    2009-01-01

    Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of the natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.

  4. Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets

    DEFF Research Database (Denmark)

    Saez Gallego, Javier

    This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy ... patterns that the load traditionally exhibited. On the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. Demand-side bidding refers ...

  5. A modeling framework for optimal long-term care insurance purchase decisions in retirement planning.

    Science.gov (United States)

    Gupta, Aparna; Li, Lepeng

    2004-05-01

    The level of need and the costs of obtaining long-term care (LTC) during retired life require that planning for it be an integral part of retirement planning. In this paper, we divide retirement planning into two phases, pre-retirement and post-retirement. On the basis of four interrelated models for health evolution, wealth evolution, LTC insurance premium and coverage, and LTC cost structure, a framework for optimal LTC insurance purchase decisions in the pre-retirement phase is developed. Optimal decisions are obtained by developing a trade-off between post-retirement LTC costs and LTC insurance premiums and coverage. Two-way branching models are used to model stochastic health events and asset returns. The resulting optimization problem is formulated as a dynamic programming problem. We compare the optimal decisions under two insurance purchase scenarios: one assumes that insurance is purchased for good, and the other assumes it may be purchased, relinquished and re-purchased. Sensitivity analysis is performed for the retirement age.

  6. Optimal Guaranteed Service Time and Service Level Decision with Time and Service Level Sensitive Demand

    Directory of Open Access Journals (Sweden)

    Sangjun Park

    2014-01-01

    Full Text Available We consider a two-stage supply chain with one supplier and one retailer. The retailer sells a product to customers and the supplier provides the product in a make-to-order mode. In this case, the supplier’s decisions on service time and service level and the retailer’s decision on retail price have effects on customer demand. We develop optimization models to determine the optimal retail price, the optimal guaranteed service time, the optimal service level, and the optimal capacity to maximize the expected profit of the whole supply chain. The results of numerical experiments show that it is more profitable to determine the optimal price, the optimal guaranteed service time, and the optimal service level simultaneously, and that the proposed model is more profitable in a service-level-sensitive market.

  7. SOLVING OPTIMAL ASSEMBLY LINE CONFIGURATION TASK BY MULTIOBJECTIVE DECISION MAKING METHODS

    Directory of Open Access Journals (Sweden)

    Ján ČABALA

    2017-06-01

    Full Text Available This paper deals with finding the optimal configuration of an automated assembly line model located at the Department of Cybernetics and Artificial Intelligence (DCAI). In order to solve this problem, a Stateflow model of each configuration was created to simulate the behaviour of that particular assembly line configuration. Outputs from these models were used as inputs into the multi-objective decision-making process. Multi-objective decision-making methods were subsequently used to find the optimal configuration of the assembly line. The paper describes the whole process of solving this task, from building the models to choosing the best configuration. Specifically, the experts’ evaluation method was used for evaluating the weights of every decision-making criterion, while the ELECTRE III, TOPSIS and AGREPREF methods were used for ordering the possible solutions from the most to the least suitable alternative. The obtained results were compared and the final solution of this multi-objective decision-making problem was chosen.
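
    Since the record above leans on TOPSIS among other methods, here is a compact, generic TOPSIS ranking sketch; the decision matrix, weights, and criterion directions are invented for illustration and do not come from the paper.

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Rank alternatives (rows) over criteria (columns).
            benefit[j] is True if larger values of criterion j are better."""
            m = np.asarray(matrix, dtype=float)
            norm = m / np.linalg.norm(m, axis=0)              # vector normalization
            v = norm * np.asarray(weights, dtype=float)       # weighted normalized matrix
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            closeness = d_neg / (d_pos + d_neg)               # 1 = best, 0 = worst
            return np.argsort(-closeness), closeness

        # Three hypothetical line configurations scored on throughput, cost, energy use.
        scores = [[120, 4.0, 55], [100, 3.2, 48], [135, 5.1, 60]]
        order, c = topsis(scores, weights=[0.5, 0.3, 0.2], benefit=[True, False, False])
        print(order, np.round(c, 3))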

  8. DECISION SUPPORT TOOL FOR RETAIL SHELF SPACE OPTIMIZATION

    OpenAIRE

    B. RAMASESHAN; N. R. ACHUTHAN; R. COLLINSON

    2008-01-01

    Efficient allocation of shelf space and product assortment can significantly improve a retailer's profitability. This paper addresses the problem from the perspective of an independent franchise retailer. A Category Management Decision Support Tool (CMDST) is proposed that efficiently generates optimal shelf space allocations and product assortments by using the existing scarce resources, resulting in increased profitability. CMDST utilizes two practical integrated category management models ...

  9. Application of goal programming to decision problem on optimal allocation of radiation workers

    International Nuclear Information System (INIS)

    Sa, Sangduk; Narita, Masakuni

    1993-01-01

    This paper is concerned with optimal planning in a multiple-objective decision-making problem of allocating radiation workers to workplaces associated with occupational exposure. The model problem is formulated with the application of goal programming, which effectively accounts for the diverse and conflicting factors influencing the optimal decision. The formulation is based on data simulating the typical situations encountered at operating facilities such as nuclear power plants, where exposure control is critical to the management. Multiple goals set by the decision-maker/manager who has the operational responsibilities for radiological protection are illustrated in terms of work requirements, exposure constraints of the places, desired allocation of specific personnel and so on. Test results of the model are considered to indicate that the model structure and its solution process can provide the manager with a good set of analyses of his problems in implementing the optimization review of radiation protection during normal operation. (author)
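
    A toy goal-programming formulation in the spirit of the allocation problem above, solved as a linear program with deviation variables; the workplaces, per-worker dose coefficients, goals, and weights are hypothetical and are not the paper's data.

        from scipy.optimize import linprog

        # Decision variables: x1, x2 = workers assigned to workplace 1 and 2;
        # d1m, d1p = under/over deviation from the staffing goal (10 workers);
        # d2m, d2p = under/over deviation from the collective dose goal (30 units).
        # Minimize weighted deviations: staffing shortfall/excess (w=1), dose overshoot (w=2).
        c = [0, 0, 1, 1, 0, 2]

        A_eq = [
            [1, 1, 1, -1, 0, 0],    # x1 + x2 + d1m - d1p = 10       (staffing goal)
            [2, 5, 0, 0, 1, -1],    # 2*x1 + 5*x2 + d2m - d2p = 30   (dose goal, per-worker doses 2 and 5)
        ]
        b_eq = [10, 30]
        bounds = [(0, 8), (0, 8)] + [(0, None)] * 4   # assumed workplace capacities of 8 each

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        x1, x2, d1m, d1p, d2m, d2p = res.x
        print(f"x1={x1:.1f}, x2={x2:.1f}, staffing deviation={d1m + d1p:.1f}, dose overshoot={d2p:.1f}")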

  10. Constructing an optimal decision tree for FAST corner point detection

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    In this paper, we consider a problem that is originated in computer vision: determining an optimal testing strategy for the corner point detection problem that is a part of FAST algorithm [11,12]. The problem can be formulated as building a decision tree with the minimum average depth for a decision table with all discrete attributes. We experimentally compare performance of an exact algorithm based on dynamic programming and several greedy algorithms that differ in the attribute selection criterion. © 2011 Springer-Verlag.

  11. Updating Optimal Decisions Using Game Theory and Exploring Risk Behavior Through Response Surface Methodology

    National Research Council Canada - National Science Library

    Jordan, Jeremy D

    2007-01-01

    .... Methodology is developed that allows a decision maker to change his perceived optimal policy based on available knowledge of the opponent's strategy, where the opponent is a rational decision maker...

  12. Implementing of the multi-objective particle swarm optimizer and fuzzy decision-maker in exergetic, exergoeconomic and environmental optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Babaie, Meisam; Farmani, Mohammad Reza

    2011-01-01

    Multi-objective optimization for the design of a benchmark cogeneration system, namely the CGAM cogeneration system, is performed. In the optimization approach, exergetic, exergoeconomic and environmental objectives are considered simultaneously. In this regard, the set of Pareto optimal solutions known as the Pareto frontier is obtained using the MOPSO (multi-objective particle swarm optimizer). The exergetic efficiency, as the exergetic objective, is maximized, while the unit cost of the system product and the cost of the environmental impact, as the exergoeconomic and environmental objectives respectively, are minimized. The economic model utilized in the exergoeconomic analysis is built based on both a simple model (used in the original research on the CGAM system) and a comprehensive model, namely the TTR (total revenue requirement) method (used in sophisticated exergoeconomic analyses). Finally, a final optimal solution is selected from the optimal set of the Pareto frontier using a fuzzy decision-making process based on the Bellman-Zadeh approach, and the results are compared with the corresponding results obtained in a traditional decision-making process. Further, the results are compared with the corresponding performance of the base-case CGAM system and the optimal designs of previous works and discussed. -- Highlights: → A multi-objective optimization approach has been implemented in the optimization of a benchmark cogeneration system. → Objective functions based on environmental impact evaluation, thermodynamic and economic analysis are obtained and optimized. → A particle swarm optimizer is implemented and its robustness is compared with NSGA-II. → A final optimal configuration is found using various decision-making approaches. → Results are compared with previous works in the field.
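
    A small sketch of a Bellman-Zadeh style selection step of the kind mentioned above: each objective on a given Pareto front is mapped to a fuzzy membership in [0, 1] and the solution maximizing the minimum membership is picked. The Pareto points below are made up; they are not the CGAM results.

        import numpy as np

        # Hypothetical Pareto-front points: columns = (exergetic efficiency [max],
        # product unit cost [min], environmental impact cost [min]).
        front = np.array([
            [0.52, 9.8, 3.1],
            [0.50, 9.1, 3.4],
            [0.47, 8.6, 3.9],
            [0.55, 10.9, 2.8],
        ])
        maximize = np.array([True, False, False])

        # Linear membership: 1 at the best observed value of each objective, 0 at the worst.
        best = np.where(maximize, front.max(axis=0), front.min(axis=0))
        worst = np.where(maximize, front.min(axis=0), front.max(axis=0))
        membership = (front - worst) / (best - worst)

        # Bellman-Zadeh max-min choice: the solution whose weakest objective is strongest.
        chosen = int(np.argmax(membership.min(axis=1)))
        print("selected design:", chosen, "memberships:", np.round(membership[chosen], 2))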

  13. Generalized concavity in fuzzy optimization and decision analysis

    CERN Document Server

    Ramík, Jaroslav

    2002-01-01

    Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including problems of either single objective or multiple objectives. Not all of these results rely necessarily on convexity and concavity; some of the results can guarantee that each local optimum is also a global optimum, giving these methods broader application to a wider class of problems. Hence, the focus of the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems. Errors occur in real-world data for...

  14. Optimal Waste Load Allocation Using Multi-Objective Optimization and Multi-Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    L. Saberi

    2016-10-01

    Full Text Available Introduction: Increasing demand for water, depletion of resources of acceptable quality, and excessive water pollution due to agricultural and industrial developments have caused intense social and environmental problems all over the world. Given the environmental importance of rivers, and the complexity and extent of pollution factors and of the physical, chemical and biological processes in these systems, optimal waste-load allocation in river systems has been given considerable attention in the literature in the past decades. The overall objective of planning and quality management of river systems is to develop and implement a coordinated set of strategies and policies to reduce or allocate the pollution entering the rivers so that the water quality meets the proposed environmental standards with an acceptable reliability. In such problems there are often several different decision-makers with different utilities, which leads to conflicts. Methods/Materials: In this research, a conflict resolution framework for optimal waste load allocation in river systems is proposed, considering the total treatment cost and the Biological Oxygen Demand (BOD) violation characteristics. There are two decision-makers, the coalition of waste-load dischargers and the environmentalists, who have conflicting objectives. This framework consists of an embedded river water quality simulator, which simulates the transport process including reaction kinetics. The trade-off curve between the objectives is obtained using the Multi-objective Particle Swarm Optimization Algorithm; these objectives are the minimization of the total cost of treatment and penalties that must be paid by dischargers, and the violation of water quality standards in terms of the BOD parameter, which is controlled by the environmentalists. Thus, the basic policy of the river's water quality management is formulated in such a way that the decision-makers are ensured their benefits will be provided as far as possible. By using MOPSO

  15. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme is based on the least squares Monte Carlo method, which...... is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...
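
    For context, a compact sketch of the least squares Monte Carlo idea in its original setting, pricing an American (Bermudan-style) put as in Longstaff and Schwartz (2001); the market parameters are illustrative, the regression uses a simple polynomial basis, and this is not the decision formulation of the paper above.

        import numpy as np

        rng = np.random.default_rng(0)
        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative market data
        n_paths, n_steps = 50_000, 50
        dt = T / n_steps
        disc = np.exp(-r * dt)

        # Simulate geometric Brownian motion paths.
        z = rng.standard_normal((n_paths, n_steps))
        S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))

        payoff = lambda s: np.maximum(K - s, 0.0)
        cash = payoff(S[:, -1])                      # value if held to maturity

        # Backward induction: regress continuation value on in-the-money paths.
        for t in range(n_steps - 2, -1, -1):
            cash *= disc
            itm = payoff(S[:, t]) > 0
            if itm.any():
                x = S[itm, t]
                coeff = np.polyfit(x, cash[itm], 3)          # least squares continuation estimate
                continuation = np.polyval(coeff, x)
                exercise = payoff(x) > continuation
                cash[itm] = np.where(exercise, payoff(x), cash[itm])

        price = disc * cash.mean()
        print(f"LSMC American put estimate: {price:.3f}")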

  16. Integrated decision making for the optimal bioethanol supply chain

    International Nuclear Information System (INIS)

    Corsano, Gabriela; Fumero, Yanina; Montagna, Jorge M.

    2014-01-01

    Highlights: • Optimal allocation, design and production planning of integrated ethanol plants is considered. • A Mixed Integer Programming model is presented for solving the integration problem. • Different tradeoffs can be assessed and analyzed. • The modeling framework represents a useful tool for guiding decision making. - Abstract: Bioethanol production poses different challenges that require an integrated approach. Usually, previous works have focused on specific perspectives of the global problem. On the contrary, bioethanol, in particular, and biofuels, in general, require an integrated decision-making framework that takes into account the needs and concerns of the different members involved in its supply chain. In this work, a Mixed Integer Linear Programming (MILP) model for the optimal allocation, design and production planning of integrated ethanol/yeast plants is considered. The proposed formulation addresses the relations between different aspects of the bioethanol supply chain and provides an efficient tool to assess the global operation of the supply chain taking into account different points of view. The model proposed in this work simultaneously determines the structure of a three-echelon supply chain (raw material sites, production facilities and customer zones), the design of each installed plant and operational considerations through production campaigns. Yeast production is considered in order to reduce the negative environmental impact caused by bioethanol residues. Several cases are presented in order to assess the capabilities of the approach and to evaluate the tradeoffs among all the decisions

  17. Development of Decision-Making Automated System for Optimal Placement of Physical Access Control System’s Elements

    Science.gov (United States)

    Danilova, Olga; Semenova, Zinaida

    2018-04-01

    The objective of this study is a detailed analysis of the development of physical protection systems for information resources. The mathematical apparatus of optimization theory and decision-making is used to correctly formulate and create an algorithm for the procedure of selecting the optimal configuration of the security system, considering the location of the secured object’s access points and zones. The result of this study is a software implementation scheme of a decision-making system for the optimal placement of the physical access control system’s elements.

  18. Reward Rate Optimization in Two-Alternative Decision Making: Empirical Tests of Theoretical Predictions

    Science.gov (United States)

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D.

    2009-01-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response…

  19. Do the right thing: the assumption of optimality in lay decision theory and causal judgment.

    Science.gov (United States)

    Johnson, Samuel G B; Rips, Lance J

    2015-03-01

    Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent's perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent's actual and counterfactual choices (Experiments 1-3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5-6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Decision Support Model for Optimal Management of Coastal Gate

    Science.gov (United States)

    Ditthakit, Pakorn; Chittaladakorn, Suwatana

    2010-05-01

    Coastal areas are intensely settled by human beings owing to the fertility of their natural resources. However, at present those areas are facing water scarcity problems: inadequate water and poor water quality as a result of saltwater intrusion and inappropriate land-use management. To solve these problems, several measures have been exploited. Coastal gate construction is a structural measure widely used in several countries, and it requires a plan for suitably operating the coastal gates. Coastal gate operation is a complicated task that usually concerns the management of multiple purposes, which generally conflict with one another. This paper delineates the methodology and the theories used for developing a decision support model for coastal gate operation scheduling. The developed model is based on coupling a simulation model with an optimization model. A weighting optimization technique based on Differential Evolution (DE) was selected herein for solving the multiple-objective problem. The hydrodynamic and water quality models were repeatedly invoked during the search for the optimal gate operations. In addition, two forecasting models, an Auto Regressive model (AR model) and a Harmonic Analysis model (HA model), were applied for forecasting water levels and tide levels, respectively. To demonstrate the applicability of the developed model, it was applied to plan the operations for a hypothetical system based on the Pak Phanang coastal gate system, located in Nakhon Si Thammarat province in the southern part of Thailand. It was found that the proposed model could satisfactorily assist decision-makers in operating coastal gates under various environmental, ecological and hydraulic conditions.
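
    A tiny illustration of the weighting technique with Differential Evolution that the abstract mentions: two made-up conflicting objectives for a gate-opening schedule are combined into one weighted objective and minimized with SciPy's DE implementation. The objective functions are purely hypothetical stand-ins for the hydrodynamic and water-quality simulators.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Decision variables: hourly gate openings in [0, 1] over a 6-hour window.
        n_hours = 6

        def salinity_penalty(g):        # hypothetical stand-in for the water-quality model
            return float(np.sum((1.0 - g) ** 2))     # closed gates let salinity build up

        def flood_penalty(g):           # hypothetical stand-in for the hydrodynamic model
            return float(np.sum(g ** 2))             # open gates raise upstream flood risk

        def weighted_objective(g, w=0.6):
            return w * salinity_penalty(g) + (1.0 - w) * flood_penalty(g)

        result = differential_evolution(weighted_objective, bounds=[(0.0, 1.0)] * n_hours, seed=1)
        print(np.round(result.x, 2), round(result.fun, 3))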

  1. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  2. Optimizing patient treatment decisions in an era of rapid technological advances: the case of hepatitis C treatment.

    Science.gov (United States)

    Liu, Shan; Brandeau, Margaret L; Goldhaber-Fiebert, Jeremy D

    2017-03-01

    How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient's quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3-4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal treatment decision is more likely to be to accept the currently available treatment, despite expectations of future treatment improvement, for patients who have a high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population.
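
    A stripped-down backward-induction sketch of the treat-now-versus-wait structure discussed above, with made-up health states, progression probabilities, QALY values, and therapy-improvement probability; it is not the authors' calibrated HCV model, only an illustration of finite-horizon optimal stopping.

        # States: 0 = mild, 1 = moderate, 2 = severe, 3 = very severe (toy labels).
        P_PROGRESS = [0.10, 0.20, 0.30, 0.0]    # assumed per-period probability of progressing one state
        QALY_CURRENT = [15.0, 11.0, 7.0, 3.0]   # assumed expected QALYs if treated now with current therapy
        QALY_IMPROVED = [18.0, 14.0, 8.5, 4.0]  # assumed expected QALYs if treated with an improved therapy
        P_IMPROVED = 0.2                        # assumed per-period chance the improved therapy arrives
        HORIZON = 5                             # decision periods before treatment is forced

        def value(state, t, improved_available):
            treat = (QALY_IMPROVED if improved_available else QALY_CURRENT)[state]
            if t == HORIZON:                    # must treat at the end of the horizon
                return treat, "treat now"
            nxt = min(state + 1, 3)

            def future(avail):
                stay, _ = value(state, t + 1, avail)
                move, _ = value(nxt, t + 1, avail)
                return (1 - P_PROGRESS[state]) * stay + P_PROGRESS[state] * move

            if improved_available:
                wait = future(True)
            else:
                wait = P_IMPROVED * future(True) + (1 - P_IMPROVED) * future(False)
            return (treat, "treat now") if treat >= wait else (wait, "wait")

        for s in range(4):
            v, action = value(s, 0, False)
            print(f"state {s}: {action:9s} expected QALYs {v:.2f}")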

  3. Humans Optimize Decision-Making by Delaying Decision Onset

    Science.gov (United States)

    Teichert, Tobias; Ferrera, Vincent P.; Grinband, Jack

    2014-01-01

    Why do humans make errors on seemingly trivial perceptual decisions? It has been shown that such errors occur in part because the decision process (evidence accumulation) is initiated before selective attention has isolated the relevant sensory information from salient distractors. Nevertheless, it is typically assumed that subjects increase accuracy by prolonging the decision process rather than delaying decision onset. To date it has not been tested whether humans can strategically delay decision onset to increase response accuracy. To address this question we measured the time course of selective attention in a motion interference task using a novel variant of the response signal paradigm. Based on these measurements we estimated the time-dependent drift rate and showed that subjects should in principle be able to trade speed for accuracy very effectively by delaying decision onset. Using the time-dependent estimate of drift rate we show that subjects indeed delay decision onset in addition to raising the response threshold when asked to stress accuracy over speed in a free reaction version of the same motion-interference task. These findings show that decision onset is a critical aspect of the decision process that can be adjusted to effectively improve decision accuracy. PMID:24599295

  4. Decision Support System for Optimized Herbicide Dose in Spring Barley

    DEFF Research Database (Denmark)

    Sønderskov, Mette; Kudsk, Per; Mathiassen, Solvejg K

    2014-01-01

    Crop Protection Online (CPO) is a decision support system, which integrates decision algorithms quantifying the requirement for weed control and a herbicide dose model. CPO was designed to be used by advisors and farmers to optimize the choice of herbicide and dose. The recommendations from CPO...... as the Treatment Frequency Index (TFI)) compared to a high level of required weed control. The observations indicated that the current level of weed control required is robust for a range of weed scenarios. Weed plant numbers 3 wk after spraying indicated that the growth of the weed species was inhibited...

  5. Optimization of decision making to avoid stochastically predicted air traffic conflicts

    Directory of Open Access Journals (Sweden)

    В.М. Васильєв

    2005-01-01

    Full Text Available A method for optimizing decision making when planning an aircraft trajectory to avoid a potential conflict, subject to a restricted minimum separation standard, is proposed. The conflict probability is evaluated and monitored using a probabilistic composite method.

  6. A Generalized Decision Framework Using Multi-objective Optimization for Water Resources Planning

    Science.gov (United States)

    Basdekas, L.; Stewart, N.; Triana, E.

    2013-12-01

    Colorado Springs Utilities (CSU) is currently engaged in an Integrated Water Resource Plan (IWRP) to address the complex planning scenarios, across multiple time scales, currently faced by CSU. The modeling framework developed for the IWRP uses a flexible data-centered Decision Support System (DSS) with a MODSIM-based modeling system to represent the operation of the current CSU raw water system coupled with a state-of-the-art multi-objective optimization algorithm. Three basic components are required for the framework, which can be implemented for planning horizons ranging from seasonal to interdecadal. First, a water resources system model is required that is capable of reasonable system simulation to resolve performance metrics at the appropriate temporal and spatial scales of interest. The system model should be an existing simulation model, or one developed during the planning process with stakeholders, so that 'buy-in' has already been achieved. Second, a hydrologic scenario tool(s) capable of generating a range of plausible inflows for the planning period of interest is required. This may include paleo informed or climate change informed sequences. Third, a multi-objective optimization model that can be wrapped around the system simulation model is required. The new generation of multi-objective optimization models do not require parameterization which greatly reduces problem complexity. Bridging the gap between research and practice will be evident as we use a case study from CSU's planning process to demonstrate this framework with specific competing water management objectives. Careful formulation of objective functions, choice of decision variables, and system constraints will be discussed. Rather than treating results as theoretically Pareto optimal in a planning process, we use the powerful multi-objective optimization models as tools to more efficiently and effectively move out of the inferior decision space. The use of this framework will help CSU

  7. Optimal Solutions of Multiproduct Batch Chemical Process Using Multiobjective Genetic Algorithm with Expert Decision System

    Science.gov (United States)

    Mokeddem, Diab; Khellaf, Abdelhafid

    2009-01-01

    Optimal design problems are widely known for their multiple performance measures that often compete with each other. In this paper, an optimal multiproduct batch chemical plant design is presented. The design is first formulated as a multiobjective optimization problem, to be solved using the well-suited non-dominated sorting genetic algorithm (NSGA-II). NSGA-II has the capability to achieve fine tuning of variables in determining a set of non-dominated solutions distributed along the Pareto front in a single run of the algorithm. The ability of NSGA-II to identify a set of optimal solutions provides the decision-maker (DM) with a complete picture of the optimal solution space from which to make better and more appropriate choices. Then outranking with PROMETHEE II helps the decision-maker to finalize the selection of the best compromise. The effectiveness of the NSGA-II method on multiobjective optimization problems is illustrated through two carefully referenced examples. PMID:19543537
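
    As a small companion to the NSGA-II discussion above, a plain-Python routine that extracts the non-dominated (Pareto) set from a list of objective vectors, all assumed to be minimized; it shows only the dominance check at the core of such algorithms, not the full genetic machinery, and the design data are invented.

        def dominates(a, b):
            """a dominates b when a is no worse in every objective and strictly better in one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(points):
            return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

        # Hypothetical (cost, makespan) pairs for candidate batch-plant designs.
        designs = [(10.0, 7.0), (12.0, 5.0), (9.0, 9.0), (11.0, 6.0), (13.0, 5.5), (10.5, 6.5)]
        print(pareto_front(designs))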

  8. Investigation of Multi-Criteria Decision Consistency: A Triplex Approach to Optimal Oilfield Portfolio Investment Decisions

    Science.gov (United States)

    Qaradaghi, Mohammed

    The complexity of capital-intensive oil and gas portfolio investments is continuously growing. It is manifested in the constant increase in the type, number and degree of risks and uncertainties, which consequently leads to more challenging decision making problems. A typical complex decision making problem in petroleum exploration and production (E&P) is the selection and prioritization of oilfields/projects in a portfolio investment. Prioritizing oilfields may be required for different purposes, including the achievement of a targeted production and allocation of limited available development resources. These resources cannot be distributed evenly nor can they be allocated based on the oilfield size or production capacity alone since various other factors need to be considered simultaneously. These factors may include subsurface complexity, size of reservoir, plateau production and needed infrastructure in addition to other issues of strategic concern, such as socio-economic, environmental and fiscal policies, particularly when the decision making involves governments or national oil companies. Therefore, it would be imperative to employ decision-aiding tools that not only address these factors, but also incorporate the decision makers' preferences clearly and accurately. However, the tools commonly used in project portfolio selection and optimization, including intuitive approaches, vary in their focus and strength in addressing the different criteria involved in such decision problems. They are also disadvantaged by a number of drawbacks, which may include lacking the capacity to address multiple and interrelated criteria, uncertainty and risk, project relationship with regard to value contribution and optimum resource utilization, non-monetary attributes, decision maker's knowledge and expertise, in addition to varying levels of ease of use and other practical and theoretical drawbacks. These drawbacks have motivated researchers to investigate other tools and

  9. OPTIMAL BUSINESS DECISION SYSTEM FOR MULTINATIONALS: A MULTIFACTOR ANALYSIS OF SELECTED MANUFACTURING FIRMS

    Directory of Open Access Journals (Sweden)

    Oforegbunam Thaddeus Ebiringa

    2011-03-01

    Full Text Available Traditional MIS has been made more effective through the integration of organization, human and technology factors into a decision matrix. The study is motivated by the need to find an optimal mix of interactive factors that will optimize the result of decisions to apply ICT to manufacturing processes. The study used a factor analysis model based on the sampled opinion of forty (40) operations/production managers and two thousand (2000) production line workers of three leading manufacturing firms: Uniliver Plc., PZ Plc, and Nigerian Breweries Plc, operating in the Aba Industrial Estate of Nigeria. The results show that a progressive mixed factor loading matrix, based on the preferred ordered importance of resource factors in the formulation, implementation, monitoring, control and evaluation of the ICT projects of the selected firms, led to an average capability improvement of 0.764 in decision efficiency. This is considered strategic for achieving balanced corporate growth and development.

  10. Robust Inventory System Optimization Based on Simulation and Multiple Criteria Decision Making

    Directory of Open Access Journals (Sweden)

    Ahmad Mortazavi

    2014-01-01

    Full Text Available Inventory management in retail is a difficult and complex decision-making process involving conflicting criteria; moreover, cyclic changes and trends in demand are inevitable in many industries. In this paper, simulation modeling is considered an efficient tool for modeling a retailer's multiproduct inventory system. For simulation model optimization, a novel multicriteria and robust surrogate model is designed based on a multiple attribute decision making (MADM) method, design of experiments (DOE), and principal component analysis (PCA). This approach, as the main contribution of this paper, provides a framework for robust multiple-criteria decision making under uncertainty.

  11. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    Science.gov (United States)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While the solution diversity in the objective space has been explored extensively in the literature, little attention has been given to the solution diversity in the decision space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, besides the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a more sparse set of high-quality solutions increases, and therefore the analyst would receive a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, which is an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.

  12. TREATMENT OF CYANIDE SOLUTIONS AND SLURRIES USING AIR-SPARGED HYDROCYCLONE (ASH) TECHNOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Jan D. Miller; Terrence Chatwin; Jan Hupka; Doug Halbe; Tao Jiang; Bartosz Dabrowski; Lukasz Hupka

    2003-03-31

    The two-year Department of Energy (DOE) project ''Treatment of Cyanide Solutions and Slurries Using Air-Sparged Hydrocyclone (ASH) Technology'' (ASH/CN) has been completed. This project was also sponsored by industrial partners, ZPM Inc., Elbow Creek Engineering, Solvay Minerals, EIMCO-Baker Process, Newmont Mining Corporation, Cherokee Chemical Co., Placer Dome Inc., Earthworks Technology, Dawson Laboratories and Kennecott Minerals. Development of a new technology using the air-sparged hydrocyclone (ASH) as a reactor for either cyanide recovery or destruction was the research objective. It was expected that the ASH could potentially replace the conventional stripping tower presently used for HCN stripping and absorption with reduced power costs. The project was carried out in two phases. The first phase included calculation of basic processing parameters for ASH technology, development of the flowsheet, and design/adaptation of the ASH mobile system for hydrogen cyanide (HCN) recovery from cyanide solutions. This was necessary because the ASH was previously used for volatile organics removal from contaminated water. The design and modification of the ASH were performed with the help from ZPM Inc. personnel. Among the modifications, the system was adapted for operation under negative pressure to assure safe operating conditions. The research staff was trained in the safe use of cyanide and in hazardous material regulations. Cyanide chemistry was reviewed resulting in identification of proper chemical dosages for cyanide destruction, after completion of each pilot plant run. The second phase of the research consisted of three field tests that were performed at the Newmont Mining Corporation gold cyanidation plant near Midas, Nevada. The first field test was run between July 26 and August 2, 2002, and the objective was to demonstrate continuous operation of the modified ASH mobile system. ASH units were applied for both stripping and absorption

  13. The Application of Time-Delay Dependent H∞ Control Model in Manufacturing Decision Optimization

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2015-01-01

    Full Text Available This paper uses a time-delay dependent H∞ control model to analyze the effect of manufacturing decisions on the process of transmission from resources to capability. We establish a theoretical framework of the manufacturing management process based on three terms: resource, manufacturing decision, and capability. Then we build a time-delay H∞ robust control model to analyze the robustness of manufacturing management. With the state feedback controller between manufacturing resources and decisions, we find that there is an optimal decision to adjust the process of transmission from resources to capability under an uncertain environment. Finally, we provide an example to demonstrate the robustness of this model.

  14. Uranium (III) precipitation in molten chloride by wet argon sparging

    Energy Technology Data Exchange (ETDEWEB)

    Vigier, Jean-François, E-mail: jean-francois.vigier@ec.europa.eu [CEA, Nuclear Energy Division, Radiochemistry & Processes Department, F-30207 Bagnols sur Cèze (France); Unité de Catalyse et de Chimie du Solide, UCCS UMR CNRS 8181, Univ. Lille Nord de France, ENSCL-USTL, B.P. 90108, 59652 Villeneuve d' Ascq Cedex (France); Laplace, Annabelle [CEA, Nuclear Energy Division, Radiochemistry & Processes Department, F-30207 Bagnols sur Cèze (France); Renard, Catherine [Unité de Catalyse et de Chimie du Solide, UCCS UMR CNRS 8181, Univ. Lille Nord de France, ENSCL-USTL, B.P. 90108, 59652 Villeneuve d' Ascq Cedex (France); Miguirditchian, Manuel [CEA, Nuclear Energy Division, Radiochemistry & Processes Department, F-30207 Bagnols sur Cèze (France); Abraham, Francis [Unité de Catalyse et de Chimie du Solide, UCCS UMR CNRS 8181, Univ. Lille Nord de France, ENSCL-USTL, B.P. 90108, 59652 Villeneuve d' Ascq Cedex (France)

    2016-06-15

    In the context of pyrochemical processes for nuclear fuel treatment, the precipitation of uranium (III) in molten salt LiCl-CaCl{sub 2} (30–70 mol%) at 705 °C is studied. First, this molten chloride is characterized with the determination of the water dissociation constant. With a value of 10{sup −4.0}, the salt has oxoacid properties. Then, the uranium (III) precipitation using wet argon sparging is studied. The salt is prepared using UCl{sub 3} precursor. At the end of the precipitation, the salt is totally free of solubilized uranium. The main part is converted into UO{sub 2} powder but some uranium is lost during the process due to the volatility of uranium chloride. The main impurity of the resulting powder is calcium. The consequences of oxidative and reductive conditions on precipitation are studied. Finally, coprecipitation of uranium (III) and neodymium (III) is studied, showing a higher sensitivity of uranium (III) than neodymium (III) to precipitation. - Highlights: • Precipitation of Uranium (III) is quantitative in molten salt LiCl-CaCl{sub 2} (30–70 mol%). • The salt is oxoacid with a water dissociation constant of 10{sup −4.0} at 705 °C. • Volatility of uranium chloride is strongly reduced in reductive conditions. • Coprecipitation of U(III) and Nd(III) leads to a consecutive precipitation of the two elements.

  15. AngelStow: A Commercial Optimization-Based Decision Support Tool for Stowage Planning

    DEFF Research Database (Denmark)

    Delgado-Ortegon, Alberto; Jensen, Rune Møller; Guilbert, Nicolas

    save port fees, optimize use of vessel capacity, and reduce bunker consumption. Stowage Coordinators (SCs) produce these plans manually with the help of graphical tools, but high-quality SPs are hard to generate with the limited support they provide. In this abstract, we introduce AngelStow which...... is a commercial optimization-based decision support tool for stowing container vessels developed in collaboration between Ange Optimization and The IT University of Copenhagen. The tool assists SCs in the process of generating SPs interactively, focusing on satisfying and optimizing constraints and objectives...... that are tedious to deal with for humans, while letting the SCs use their expertise to deal with hard combinatorial objectives and corner cases....

  16. A decision support system using analytical hierarchy process (AHP) for the optimal environmental reclamation of an open-pit mine

    Science.gov (United States)

    Bascetin, A.

    2007-04-01

    The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region of Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. Also, it is found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
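
    A short sketch of the AHP priority computation underlying the approach above: a hypothetical pairwise comparison matrix for three reclamation criteria is reduced to weights via its principal eigenvector, with the consistency ratio as a sanity check. The comparison values and criterion names are assumptions, not the paper's data.

        import numpy as np

        # Hypothetical pairwise comparisons (Saaty scale) for three criteria,
        # e.g. cost, environmental benefit, technical feasibility.
        A = np.array([
            [1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                       # normalized priority vector

        n = A.shape[0]
        lambda_max = eigvals.real[k]
        ci = (lambda_max - n) / (n - 1)                # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's tabulated random index
        cr = ci / ri                                   # consistency ratio, acceptable below ~0.1
        print(np.round(weights, 3), round(cr, 3))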

  17. Decision Making with Imperfect Decision Makers

    CERN Document Server

    Guy, Tatiana Valentine; Wolpert, David H

    2012-01-01

    Prescriptive Bayesian decision making has reached a high level of maturity and is well-supported algorithmically. However, experimental data shows that real decision makers choose such Bayes-optimal decisions surprisingly infrequently, often making decisions that are badly sub-optimal. So prevalent is such imperfect decision-making that it should be accepted as an inherent feature of real decision makers living within interacting societies. To date such societies have been investigated from an economic and gametheoretic perspective, and even to a degree from a physics perspective. However, lit

  18. A methodological model to assist in the optimization and risk management of mining investment decisions

    International Nuclear Information System (INIS)

    Botin, Jose A; Guzman, Ronald R; Smith, Martin L

    2011-01-01

    Identifying, quantifying, and minimizing technical risks associated with investment decisions is a key challenge for mineral industry decision makers and investors. However, risk analysis in most bankable mine feasibility studies is based on the stochastic modeling of the project Net Present Value (NPV) which, in most cases, fails to provide decision makers with a truly comprehensive analysis of the risks associated with technical and management uncertainty and, as a result, is of little use for risk management and project optimization. This paper presents a value-chain risk management approach where project risk is evaluated for each step of the project life cycle, from exploration to mine closure, and risk management is performed as a part of a stepwise value-added optimization process.

  19. Evaluation of the need for stochastic optimization of out-of-core nuclear fuel management decisions

    International Nuclear Information System (INIS)

    Thomas, R.L. Jr.

    1989-01-01

    Work has been completed on utilizing mathematical optimization techniques to optimize out-of-core nuclear fuel management decisions. The objective of such optimization is to minimize the levelized fuel cycle cost over some planning horizon. Typical decision variables include feed enrichments and number of assemblies, burnable poison requirements, and burned fuel to reinsert for every cycle in the planning horizon. Engineering constraints imposed consist of such items as discharge burnup limits, maximum enrichment limit, and target cycle energy productions. Earlier the authors reported on the development of the OCEON code, which employs the integer Monte Carlo Programming method as the mathematical optimization method. The discharge burnups, feed enrichment, and burnable poison requirements are evaluated, initially employing a linear reactivity core physics model and then refined using a coarse mesh nodal model. The economic evaluation is completed using a modification of the CINCAS methodology. Interest now is in assessing the need for stochastic optimization, which would account for cost component and cycle energy production uncertainties. The implication of the present studies is that stochastic optimization in regard to cost component uncertainties need not be completed since deterministic optimization will identify nearly the same family of near-optimum cycling schemes

  20. A Fuzzy Max–Min Decision Bi-Level Fuzzy Programming Model for Water Resources Optimization Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Chongfeng Ren

    2018-04-01

    Full Text Available Conflict among competing water-use sectors at different levels should be taken into consideration during the optimal allocation of water resources. Furthermore, uncertainties are inevitable in the optimal allocation of water resources. In order to deal with these problems, this study developed a fuzzy max–min decision bi-level fuzzy programming model. The developed model was then applied to a case study in Wuwei, Gansu Province, China. In this study, the net benefit and yield were regarded as the upper-level and lower-level objectives, respectively. Optimal water resource plans were obtained under different possibility levels of the fuzzy parameters, which can effectively handle the water competition conflict between the upper and lower levels. The obtained results are expected to make a significant contribution in helping local decision makers deal with the water competition conflict between the upper and lower levels and with the optimal use of water resources under uncertainty.
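
    The max–min idea can be illustrated with a much simpler, single-level sketch in the spirit of Zimmermann-style fuzzy programming: maximize the smallest membership (satisfaction) level lambda across linearized objectives. The allocation problem, coefficients and membership bounds below are hypothetical and do not reproduce the bi-level model of the paper.

```python
from scipy.optimize import linprog

# Hypothetical two-sector water allocation: x1, x2 are water volumes,
# f1 = economic benefit (5*x1 + 3*x2), f2 = yield (2*x1 + 4*x2).
# Linear memberships mu_i = f_i / f_i_max with f1_max = 500, f2_max = 400.
# Decision variables: [x1, x2, lam]; maximize lam, the max-min satisfaction level.
c = [0.0, 0.0, -1.0]                        # linprog minimizes, so minimize -lam
A_ub = [[-5.0, -3.0, 500.0],                # mu1 >= lam  <=>  5*x1 + 3*x2 >= 500*lam
        [-2.0, -4.0, 400.0],                # mu2 >= lam  <=>  2*x1 + 4*x2 >= 400*lam
        [ 1.0,  1.0,   0.0]]                # total water availability
b_ub = [0.0, 0.0, 100.0]
bounds = [(0, None), (0, None), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1, x2, lam = res.x
print(f"x1={x1:.1f}, x2={x2:.1f}, overall satisfaction lambda={lam:.3f}")
```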

  1. Optimal management of replacement heifers in a beef herd: a model for simultaneous optimization of rearing and breeding decisions.

    Science.gov (United States)

    Stygar, A H; Kristensen, A R; Makulska, J

    2014-08-01

    The aim of this study was to provide farmers an efficient tool for supporting optimal decisions in the beef heifer rearing process. The complexity of beef heifer management prompted the development of a model including decisions on the feeding level during prepuberty (age optimal rearing strategy was found by maximizing the total discounted net revenues from the predicted future productivity of the Polish Limousine heifers defined as the cumulative BW of calves born from a cow calved until the age of 5 yr, standardized on the 210th day of age. According to the modeled optimal policy, heifers fed during the whole rearing period at the ADG of 810 g/d and generally weaned after the maximum suckling period of 9 mo should already be bred at the age of 13.2 mo and BW constituting 55.6% of the average mature BW. Based on the optimal strategy, 52% of all heifers conceived from May to July and calved from February to April. This optimal rearing pattern resulted in an average net return of EUR 311.6 per pregnant heifer. It was found that the economic efficiency of beef operations can be improved by applying different herd management practices to those currently used in Poland. Breeding at 55.6% of the average mature BW, after a shorter and less expensive rearing period, resulted in an increase in the average net return per heifer by almost 18% compared to the conventional system, in which heifers were bred after attaining 65% of the mature BW. Extension of the rearing period by 2.5 mo (breeding at the age 15.7 mo), due to a prepubertal growth rate lowered by 200 g, reduced the average net return per heifer by 6.2% compared to the results obtained under the basic model assumptions. In the future, the model may also be extended to investigate additional aspects of the beef heifer development, such as the environmental impacts of various heifer management decisions.

  2. Optimal decision procedures for satisfiability in fragments of alternating-time temporal logics

    DEFF Research Database (Denmark)

    Goranko, Valentin; Vester, Steen

    2014-01-01

    We consider several natural fragments of the alternating-time temporal logics ATL* and ATL with restrictions on the nesting between temporal operators and strategic quantifiers. We develop optimal decision procedures for satisfiability in these fragments, showing that they have much lower complexi...

  3. Multi-objective thermodynamic optimization of an irreversible regenerative Brayton cycle using evolutionary algorithm and decision making

    OpenAIRE

    Rajesh Kumar; S.C. Kaushik; Raj Kumar; Ranjana Hans

    2016-01-01

    Brayton heat engine model is developed in MATLAB simulink environment and thermodynamic optimization based on finite time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using evolutionary algorithm based on NSGA-II. Pareto optimal frontier between triple and dual objectives is obtained and best optimal value is s...

  4. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad

    2014-09-13

    The paper is devoted to the analysis of greedy algorithms for the minimization of average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When we compare with the optimal result obtained from a dynamic programming algorithm, we find that some greedy algorithms produce results which are close to the optimal result for the minimization of average depth of decision trees.

  5. Minimization of Decision Tree Average Depth for Decision Tables with Many-valued Decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    The paper is devoted to the analysis of greedy algorithms for the minimization of average depth of decision trees for decision tables such that each row is labeled with a set of decisions. The goal is to find one decision from the set of decisions. When we compare with the optimal result obtained from a dynamic programming algorithm, we find that some greedy algorithms produce results which are close to the optimal result for the minimization of average depth of decision trees.

  6. Electricity Purchase Optimization Decision Based on Data Mining and Bayesian Game

    Directory of Open Access Journals (Sweden)

    Yajing Gao

    2018-04-01

    Full Text Available The openness of the electricity retail market results in power retailers facing fierce competition. This article analyzes the electricity purchase optimization decision-making of each power retailer against the background of the big data era. First, in order to guide the power retailer's electricity purchases, the paper considers users' historical electricity consumption data together with multiple other factors and uses a wavelet neural network (WNN) model based on the "meteorological similarity day" (MSD) to forecast user load demand. Second, in order to guide the power retailer's quotation, the paper clusters the sample set according to the multiple factors affecting the electricity price and establishes a genetic algorithm-back propagation (GA-BP) neural network model based on fuzzy clustering (FC) to predict the short-term market clearing price (MCP). Third, based on the sealed-bid auction (SA) in game theory, a Bayesian game model (BGM) of the power retailer's bidding strategy is constructed, and the optimal bidding strategy is obtained from the Bayesian Nash equilibrium (BNE) under different probability distributions. Finally, a practical example shows that the model and method can provide an effective reference for optimizing the power retailer's decision making.

  7. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions.

    Science.gov (United States)

    Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D

    2009-12-01

    The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
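
    A rough, assumption-laden simulation of the underlying idea: sweep the DDM threshold, estimate accuracy and mean response time by Monte Carlo, and pick the threshold that maximizes reward rate (correct responses per second) for a fixed response–stimulus interval. The drift, noise, non-decision time and interval are invented parameters, and this is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(threshold, drift=0.15, noise=1.0, dt=0.01, t0=0.3, n_trials=500):
    """Simulate unbiased drift-diffusion trials; return mean accuracy and mean RT."""
    correct = np.zeros(n_trials, dtype=bool)
    rts = np.zeros(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct[i] = x >= threshold     # upper boundary = correct response
        rts[i] = t + t0                 # add non-decision time
    return correct.mean(), rts.mean()

rsi = 1.0                               # response-stimulus interval in seconds (assumed)
best = None
for a in np.linspace(0.2, 2.0, 8):
    acc, rt = simulate_ddm(a)
    reward_rate = acc / (rt + rsi)      # correct responses per second of task time
    if best is None or reward_rate > best[1]:
        best = (a, reward_rate)
    print(f"threshold={a:.2f}  accuracy={acc:.3f}  RT={rt:.3f}s  RR={reward_rate:.3f}/s")
print(f"reward-rate-maximizing threshold ~ {best[0]:.2f}")
```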

  8. Multi-objective thermodynamic optimization of an irreversible regenerative Brayton cycle using evolutionary algorithm and decision making

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar

    2016-06-01

    Full Text Available A Brayton heat engine model is developed in the MATLAB Simulink environment, and thermodynamic optimization based on finite-time thermodynamic analysis along with multiple criteria is implemented. The proposed work investigates optimal values of various decision variables that simultaneously optimize power output, thermal efficiency and ecological function using an evolutionary algorithm based on NSGA-II. The Pareto-optimal frontier between triple and dual objectives is obtained, and the best optimal value is selected using the Fuzzy, TOPSIS, LINMAP and Shannon's entropy decision-making methods. The triple-objective evolutionary approach applied to the proposed model gives power output, thermal efficiency and ecological function of (53.89 kW, 0.1611, −142 kW), which are 29.78%, 25.86% and 21.13% lower in comparison with the reversible system. Furthermore, the present study shows the effect of various heat capacitance rates and component efficiencies on the triple objectives in graphical form. Finally, with the aim of error investigation, average and maximum errors of the obtained results are computed.
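
    Of the decision-making methods named above, TOPSIS is the easiest to sketch. The snippet below ranks a few hypothetical Pareto-front alternatives (power output, thermal efficiency, ecological function) with equal criterion weights; the numbers are illustrative and not the study's data, and the simple vector normalization used here is only one of several common variants.

```python
import numpy as np

# Hypothetical Pareto-front alternatives: columns are (power kW, efficiency, ecological fn kW).
# All three criteria are treated as benefit criteria (larger is better).
X = np.array([
    [50.0, 0.150, -150.0],
    [54.0, 0.161, -142.0],
    [58.0, 0.168, -160.0],
    [45.0, 0.140, -130.0],
])
w = np.array([1/3, 1/3, 1/3])                 # equal criterion weights (assumption)

R = X / np.linalg.norm(X, axis=0)             # vector-normalize each column
V = R * w                                     # weighted normalized decision matrix
ideal, anti = V.max(axis=0), V.min(axis=0)    # ideal / anti-ideal points for benefit criteria
d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)      # relative closeness coefficient

best = int(np.argmax(closeness))
print("closeness coefficients:", np.round(closeness, 3))
print("TOPSIS-preferred alternative index:", best)
```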

  9. A multi-criteria optimization and decision-making approach for improvement of food engineering processes

    Directory of Open Access Journals (Sweden)

    Alik Abakarov

    2013-04-01

    Full Text Available The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.

  10. Do different methods of modeling statin treatment effectiveness influence the optimal decision?

    NARCIS (Netherlands)

    B.J.H. van Kempen (Bob); B.S. Ferket (Bart); A. Hofman (Albert); S. Spronk (Sandra); E.W. Steyerberg (Ewout); M.G.M. Hunink (Myriam)

    2012-01-01

    textabstractPurpose. Modeling studies that evaluate statin treatment for the prevention of cardiovascular disease (CVD) use different methods to model the effect of statins. The aim of this study was to evaluate the impact of using different modeling methods on the optimal decision found in such

  11. Making optimal investment decisions for energy service companies under uncertainty: A case study

    International Nuclear Information System (INIS)

    Deng, Qianli; Jiang, Xianglin; Zhang, Limao; Cui, Qingbin

    2015-01-01

    Varied initial energy efficiency investments would result in different annual energy savings achievements. In order to balance the savings revenue and the potential capital loss through EPC (Energy Performance Contracting), a cost-effective investment decision is needed when selecting energy efficiency technologies. In this research, an approach is developed for the ESCO (Energy Service Company) to evaluate the potential energy savings profit and thus make optimal investment decisions. The energy savings revenue under uncertainties, which derive from energy efficiency performance variation and energy price fluctuation, is first modeled as a stochastic process. Then, the derived energy savings profit is shared by the owner and the ESCO according to the contract specification. A simulation-based model is thus built to maximize the owner's profit and, at the same time, satisfy the ESCO's expected rate of return. In order to demonstrate the applicability of the proposed approach, the University of Maryland campus case is also presented. The proposed method could not only help the ESCO determine the optimal energy efficiency investments, but also assist the owner's decision in the bidding selection. - Highlights: • An optimization model is built for determining energy efficiency investment for the ESCO. • Evolution of the energy savings revenue is modeled as a stochastic process. • Simulation is adopted to calculate the investment that balances the owner's and the ESCO's profits. • A campus case is presented to demonstrate the applicability of the proposed approach.
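
    A minimal sketch of the kind of simulation such an approach implies, assuming the energy price follows geometric Brownian motion and realized savings vary around an engineering estimate; all parameters (price drift and volatility, savings, investment, contracted sharing ratio, discount rate) are invented and not from the Maryland case.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters (illustrative only).
years, n_paths = 10, 5000
p0, mu, sigma = 0.12, 0.02, 0.10     # energy price ($/kWh), annual drift, volatility
savings_kwh = 1.5e6                  # expected annual energy savings
perf_sigma = 0.08                    # relative uncertainty in realized savings
invest = 1.2e6                       # ESCO's upfront investment
share_esco = 0.6                     # contracted share of savings going to the ESCO
rate = 0.05                          # discount rate

npv = np.zeros(n_paths)
for i in range(n_paths):
    price, pv = p0, 0.0
    for t in range(1, years + 1):
        # Geometric Brownian motion step for the energy price.
        price *= np.exp((mu - 0.5 * sigma**2) + sigma * rng.standard_normal())
        # Realized savings vary around the engineering estimate.
        kwh = savings_kwh * (1 + perf_sigma * rng.standard_normal())
        pv += share_esco * price * kwh / (1 + rate) ** t
    npv[i] = pv - invest

print(f"ESCO expected NPV: {npv.mean():,.0f}")
print(f"Probability of NPV < 0: {(npv < 0).mean():.2%}")
```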

  12. Totally Optimal Decision Trees for Monotone Boolean Functions with at Most Five Variables

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    In this paper, we present the empirical results for relationships between time (depth) and space (number of nodes) complexity of decision trees computing monotone Boolean functions, with at most five variables. We use Dagger (a tool for optimization

  13. A model of reward- and effort-based optimal decision making and motor control.

    Directory of Open Access Journals (Sweden)

    Lionel Rigoux

    Full Text Available Costs (e.g., energetic expenditure) and benefits (e.g., food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e., the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control.

  14. Optimal decision making and matching are tied through diminishing returns.

    Science.gov (United States)

    Kubanek, Jan

    2017-08-08

    How individuals make decisions has been a matter of long-standing debate among economists and researchers in the life sciences. In economics, subjects are viewed as optimal decision makers who maximize their overall reward income. This framework has been widely influential, but requires a complete knowledge of the reward contingencies associated with a given choice situation. Psychologists and ecologists have observed that individuals tend to use a simpler "matching" strategy, distributing their behavior in proportion to relative rewards associated with their options. This article demonstrates that the two dominant frameworks of choice behavior are linked through the law of diminishing returns. The relatively simple matching can in fact provide maximal reward when the rewards associated with decision makers' options saturate with the invested effort. Such saturating relationships between reward and effort are hallmarks of the law of diminishing returns. Given the prevalence of diminishing returns in nature and social settings, this finding can explain why humans and animals so commonly behave according to the matching law. The article underscores the importance of the law of diminishing returns in choice behavior.

  15. Decision and Inhibitory Trees for Decision Tables with Many-Valued Decisions

    KAUST Repository

    Azad, Mohammad

    2018-06-06

    Decision trees are one of the most commonly used tools in decision analysis, knowledge representation, machine learning, etc., for their simplicity and interpretability. We consider an extension of the dynamic programming approach to process the whole set of decision trees for a given decision table, which was previously only attainable by brute-force algorithms. We study decision tables with many-valued decisions (each row may contain multiple decisions) because they are more reasonable models of data in many cases. To address this problem in a broad sense, we consider not only decision trees but also inhibitory trees, where terminal nodes are labeled with "≠ decision". Inhibitory trees can sometimes describe more knowledge from datasets than decision trees. As for cost functions, we consider depth or average depth to minimize the time complexity of trees, and the number of nodes or the number of terminal or nonterminal nodes to minimize the space complexity of trees. We investigate the multi-stage optimization of trees relative to some cost functions, and also the possibility to describe the whole set of strictly optimal trees. Furthermore, we study the bi-criteria optimization cost vs. cost and cost vs. uncertainty for decision trees, and cost vs. cost and cost vs. completeness for inhibitory trees. The most interesting application of the developed technique is the creation of multi-pruning and restricted multi-pruning approaches which are useful for knowledge representation and prediction. The experimental results show that decision trees constructed by these approaches can often outperform the decision trees constructed by the CART algorithm. Another application includes the comparison of 12 greedy heuristics for single- and bi-criteria optimization (cost vs. cost) of trees. We also study the three approaches (decision tables with many-valued decisions, decision tables with most common decisions, and decision tables with generalized decisions) to handle

  16. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    optimality of the found solutions can be made. Identification of the leading particle traditionally requires a costly combination of ranking and niching techniques. In our approach, we use a decision rule under uncertainty to identify the currently leading particle of the swarm. In doing so, we consider the different objectives of our optimization problem as competing agents with partially conflicting interests. Analysis of the maximin fitness function allows for robust and cheap identification of the currently leading particle. The final optimization result comprises a set of possible models spread along the Pareto front. For convex Pareto fronts, solution density is expected to be maximal in the region ideally compromising all objectives, i.e. the region of highest curvature.
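
    One commonly used form of the maximin fitness function (for minimization objectives) is easy to compute and, as noted above, lets non-dominated particles be identified cheaply: values below zero indicate non-dominated particles, and the particle with the smallest value can serve as the current leader. The sketch below uses synthetic objective values and makes no claim to reproduce the authors' exact decision rule.

```python
import numpy as np

def maximin_fitness(F):
    """Maximin fitness for minimization objectives.

    F has shape (n_particles, n_objectives). For particle i,
    fitness_i = max_{j != i} min_k ( F[i, k] - F[j, k] ).
    Values < 0 indicate non-dominated particles; smaller means 'more non-dominated'.
    """
    n = F.shape[0]
    fit = np.empty(n)
    for i in range(n):
        diffs = F[i] - np.delete(F, i, axis=0)   # shape (n-1, n_objectives)
        fit[i] = diffs.min(axis=1).max()
    return fit

# Synthetic two-objective swarm evaluations (both objectives minimized).
F = np.array([[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [5.0, 5.0], [8.0, 1.0]])
fit = maximin_fitness(F)
leader = int(np.argmin(fit))
print("maximin fitness:", np.round(fit, 2))
print("non-dominated particles:", np.where(fit < 0)[0], "-> current leader:", leader)
```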

  17. Optimized inorganic carbon regime for enhanced growth and lipid accumulation in Chlorella vulgaris.

    Science.gov (United States)

    Lohman, Egan J; Gardner, Robert D; Pedersen, Todd; Peyton, Brent M; Cooksey, Keith E; Gerlach, Robin

    2015-01-01

    Large-scale algal biofuel production has been limited, among other factors, by the availability of inorganic carbon in the culture medium at concentrations higher than achievable with atmospheric CO2. Life cycle analyses have concluded that costs associated with supplying CO2 to algal cultures are significant contributors to the overall energy consumption. A two-phase optimal growth and lipid accumulation scenario is presented, which (1) enhances the growth rate and (2) the triacylglyceride (TAG) accumulation rate in the oleaginous Chlorophyte Chlorella vulgaris strain UTEX 395, by growing the organism in the presence of low concentrations of NaHCO3 (5 mM) and controlling the pH of the system with a periodic gas sparge of 5 % CO2 (v/v). Once cultures reached the desired cell densities, which can be "fine-tuned" based on initial nutrient concentrations, cultures were switched to a lipid accumulation metabolism through the addition of 50 mM NaHCO3. This two-phase approach increased the specific growth rate of C. vulgaris by 69 % compared to cultures sparged continuously with 5 % CO2 (v/v); further, biomass productivity (g L(-1) day(-1)) was increased by 27 %. Total biodiesel potential [assessed as total fatty acid methyl ester (FAME) produced] was increased from 53.3 to 61 % (FAME biomass(-1)) under the optimized conditions; biodiesel productivity (g FAME L(-1) day(-1)) was increased by 7.7 %. A bicarbonate salt screen revealed that American Chemical Society (ACS) and industrial grade NaHCO3 induced the highest TAG accumulation (% w/w), whereas Na2CO3 did not induce significant TAG accumulation. NH4HCO3 had a negative effect on cell health presumably due to ammonia toxicity. The raw, unrefined form of trona, NaHCO3∙Na2CO3 (sodium sesquicarbonate) induced TAG accumulation, albeit to a slightly lower extent than the more refined forms of sodium bicarbonate. The strategic addition of sodium bicarbonate was found to enhance growth and lipid accumulation rates in

  18. A Joint Optimal Decision on Shipment Size and Carbon Reduction under Direct Shipment and Peddling Distribution Strategies

    Directory of Open Access Journals (Sweden)

    Daiki Min

    2017-11-01

    Full Text Available Recently, much research has focused on lowering carbon emissions in logistics. This paper attempts to contribute to the literature on joint shipment size and carbon reduction decisions by developing novel models for distribution systems under direct shipment and peddling distribution strategies. Unlike the literature that has simply investigated the effects of carbon costs on operational decisions, we address how to reduce carbon emissions and logistics costs by adjusting the shipment size and making an optimal decision on carbon reduction investment. An optimal decision is made by analyzing the distribution cost, including not only logistics and carbon trading costs but also the cost of adjusting carbon emission factors. No previous research has explicitly considered the two sources of carbon emissions; we develop a model covering the differences in managing carbon emissions from transportation and storage. Structural analysis shows how to determine the optimal shipment size and emission factors in closed form. Moreover, we analytically prove the possibility of reducing the distribution cost and carbon emissions at the same time. Numerical analysis validates the results and demonstrates some interesting findings on carbon and distribution cost reduction.
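
    The flavor of such a joint decision can be conveyed with a deliberately simplified annual-cost model: the shipment size Q drives shipment and holding costs as well as transport and storage emissions, while a fractional emission reduction r can be bought at a quadratic abatement cost. All coefficients and the functional form are assumptions for illustration, not the paper's closed-form results.

```python
from scipy.optimize import minimize

# Illustrative annual-cost model (assumed, not the paper's exact formulation).
D = 50_000          # annual demand (units)
s = 400.0           # fixed cost per shipment
h = 2.0             # holding cost per unit per year
e_t, e_s = 80.0, 0.1   # baseline emissions: kg CO2 per shipment, kg CO2 per unit stored
c_co2 = 0.05        # carbon trading price per kg CO2
k_abate = 20_000.0  # cost coefficient of reducing emission factors (quadratic, assumed)

def annual_cost(x):
    Q, r = x
    shipments = D / Q
    logistics = shipments * s + 0.5 * Q * h            # shipment + average inventory cost
    emissions = (1 - r) * (shipments * e_t + 0.5 * Q * e_s)  # kg CO2 per year after abatement
    return logistics + c_co2 * emissions + k_abate * r**2

res = minimize(annual_cost, x0=[1000.0, 0.1],
               bounds=[(100.0, 10_000.0), (0.0, 0.9)])
Q_opt, r_opt = res.x
print(f"optimal shipment size ~ {Q_opt:.0f} units, emission reduction ~ {r_opt:.1%}")
print(f"minimum annual cost ~ {res.fun:,.0f}")
```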

  19. Optimization as a Reasoning Strategy for Dealing with Socioscientific Decision-Making Situations

    Science.gov (United States)

    Papadouris, Nicos

    2012-01-01

    This paper reports on an attempt to help 12-year-old students develop a specific optimization strategy for selecting among possible solutions in socioscientific decision-making situations. We have developed teaching and learning materials for elaborating this strategy, and we have implemented them in two intact classes (N = 48). Prior to and after…

  20. Weather Avoidance Using Route Optimization as a Decision Aid: An AWIN Topical Study. Phase 1

    Science.gov (United States)

    1998-01-01

    The aviation community is faced with reducing the fatal aircraft accident rate by 80 percent within 10 years. This must be achieved even with ever-increasing traffic and a changing National Airspace System. This is not just an altruistic goal, but a real necessity, if our growing level of commerce is to continue. Honeywell Technology Center's topical study, "Weather Avoidance Using Route Optimization as a Decision Aid", addresses these pressing needs. The goal of this program is to use route optimization and user interface technologies to develop a prototype decision aid for dispatchers and pilots. This decision aid will suggest possible diversions through single or multiple weather hazards and present weather information with a human-centered design. At the conclusion of the program, we will have a laptop prototype decision aid that will be used to demonstrate concepts to industry for integration into commercialized products for dispatchers and/or pilots. With weather a factor in 30% of aircraft accidents, our program will prevent accidents by strategically avoiding weather hazards in flight. By supplying more relevant weather information in a human-centered format along with the tools to generate flight plans around weather, aircraft exposure to weather hazards can be reduced. Our program directly addresses NASA's five-year investment areas of Strategic Weather Information and Weather Operations (simulation/hazard characterization and crew/dispatch/ATC hazard monitoring, display, and decision support) (NASA Aeronautics Safety Investment Strategy: Weather Investment Recommendations, April 15, 1997). This program comprises two phases; Phase I concluded December 31, 1998. This first phase defined weather data requirements, lateral routing algorithms, and conceptual displays for a user-centered design. Phase II runs from January 1999 through September 1999. The second phase integrates vertical routing into the lateral optimizer and combines the user
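
    A toy illustration of the lateral-routing idea, not Honeywell's algorithm: run Dijkstra's shortest-path search over a small grid in which cells covered by a weather hazard carry a large traversal penalty, so the suggested route diverts around the hazard. The grid, penalty value and start/goal points are invented.

```python
import heapq

# 0 = clear airspace, 1 = weather hazard (heavily penalized, not strictly forbidden).
grid = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
HAZARD_PENALTY = 50.0   # assumed cost of entering a weather cell vs. 1.0 for clear cells

def route(grid, start, goal):
    """Dijkstra shortest path with 4-connected moves and weather penalties."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = HAZARD_PENALTY if grid[nr][nc] else 1.0
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

path, cost = route(grid, (2, 0), (2, 5))
print("suggested diversion:", path, "cost:", cost)
```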

  1. Optimal decisions and comparison of VMI and CPFR under price-sensitive uncertain demand

    Directory of Open Access Journals (Sweden)

    Yasaman Kazemi

    2013-06-01

    Full Text Available Purpose: The purpose of this study is to compare the performance of two advanced supply chain coordination mechanisms, Vendor Managed Inventory (VMI) and Collaborative Planning, Forecasting and Replenishment (CPFR), under a price-sensitive uncertain demand environment, and to make the optimal decisions on retail price and order quantity for both mechanisms. Design/methodology/approach: Analytical models are first applied to formulate a profit maximization problem; furthermore, by applying simulation optimization solution procedures, the optimal decisions and performance comparisons are accomplished. Findings: The results of the case study supported the widely held view that more advanced coordination mechanisms yield greater supply chain profit than less advanced ones. Information sharing does not only increase the supply chain profit, but is also required for the coordination mechanisms to achieve improved performance. Research limitations/implications: This study considers a single vendor and a single retailer in order to simplify the supply chain structure for modeling. Practical implications: Knowledge obtained from this study about the conditions appropriate for each specific coordination mechanism and the exact functions of coordination programs is critical to managerial decisions for industry practitioners who may apply the coordination mechanisms considered. Originality/value: This study includes the production cost in Economic Order Quantity (EOQ) equations and combines it with price-sensitive demand under stochastic settings while comparing the VMI and CPFR supply chain mechanisms and maximizing the total profit. Although many studies have worked on information sharing within the supply chain, determining the performance measures when demand is price-sensitive and stochastic has not been reported in the past literature.

  2. TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.

    Science.gov (United States)

    Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald

    2018-01-01

    Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics on variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background a confident and efficient identification of suitable decision trees.
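
    TreePOD itself is a visual-analytics tool, but its first step, generating many candidate trees by sampling construction parameters and keeping the Pareto-optimal ones along an accuracy-versus-size trade-off, can be mimicked in a few lines with scikit-learn on synthetic data, as in the hedged sketch below.

```python
from itertools import product
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a real dataset such as power grid states.
X, y = make_classification(n_samples=2000, n_features=15, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Sample candidate trees over a small parameter grid.
candidates = []
for depth, leaf in product(range(2, 11), (1, 5, 10, 25, 50)):
    tree = DecisionTreeClassifier(max_depth=depth, min_samples_leaf=leaf,
                                  random_state=0).fit(X_tr, y_tr)
    candidates.append((tree.score(X_te, y_te), tree.get_n_leaves(), depth, leaf))

# Keep Pareto-optimal candidates: maximize accuracy, minimize number of leaves.
pareto = [c for c in candidates
          if not any((o[0] >= c[0] and o[1] < c[1]) or (o[0] > c[0] and o[1] <= c[1])
                     for o in candidates)]

for acc, leaves, depth, leaf in sorted(pareto, key=lambda c: c[1]):
    print(f"leaves={leaves:3d}  accuracy={acc:.3f}  (max_depth={depth}, min_samples_leaf={leaf})")
```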

  3. Decision theoretical justification of optimization criteria for near-real-time accountancy procedures

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1992-01-01

    In the beginning of nuclear material safeguards, emphasis was placed on safe detection of diversion of nuclear material. Later, the aspect of timely detection became equally important. Since there is a trade-off between these two objectives, the question of an appropriate compromise was raised. In this paper, a decision theoretical framework is presented in which the objectives of the two players, inspector and inspectee, are expressed in terms of general utility functions. Within this framework, optimal safeguards strategies are defined, and furthermore, conditions are formulated under which the optimization criteria corresponding to the objectives mentioned above can be justified

  4. Risk-Sensitive and Mean Variance Optimality in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    2013-01-01

    Roč. 7, č. 3 (2013), s. 146-161 ISSN 0572-3043 R&D Projects: GA ČR GAP402/10/0956; GA ČR GAP402/11/0150 Grant - others: AVČR and CONACyT (CZ) 171396 Institutional support: RVO:67985556 Keywords: Discrete-time Markov decision chains * exponential utility functions * certainty equivalent * mean-variance optimality * connections between risk-sensitive and risk-neutral models Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/sladky-0399099.pdf

  5. Optimal Financing Decisions of Two Cash-Constrained Supply Chains with Complementary Products

    Directory of Open Access Journals (Sweden)

    Yuting Li

    2016-04-01

    Full Text Available In recent years, financing difficulties have beset small and medium-sized enterprises (SMEs), especially emerging SMEs. Joint financing among members within a supply chain is one solution for SMEs; what about joint financing among members of different supply chains? In order to answer this question, we first employ the Stackelberg game to propose three kinds of financing decision models for two cash-constrained supply chains with complementary products. Second, we analyze these models qualitatively and find that the joint financing decision of the two supply chains is the optimal one. Lastly, we conduct numerical simulations not only to illustrate the above results but also to find that the larger the cross-price sensitivity coefficients, the higher the motivation for participants to make joint financing decisions, and the more profit they gain.

  6. Risk based economic optimization of investment decisions of regulated power distribution system operators; Risikobasierte wirtschaftliche Optimierung von Investitionsentscheidungen regulierter Stromnetzbetreiber

    Energy Technology Data Exchange (ETDEWEB)

    John, Oliver

    2012-07-01

    The author of the contribution under consideration reports on risk-based economic optimization of investment decisions of regulated power distribution system operators. The focus is the economically rational decision behavior of operators under certain regulatory requirements. Investments in power distribution systems form the items subject to decisions. Starting from a description of theoretical and practical regulatory approaches, their financial implications are quantified at first. On this basis, optimization strategies are derived with respect to the investment behavior. For this purpose, an optimization algorithm is developed and applied to exemplary companies. Finally, effects of uncertainties in regulatory systems are investigated. In this context, Monte Carlo simulations are used in conjunction with real options analysis.

  7. Optimization of warehouse location through fuzzy multi-criteria decision making methods

    Directory of Open Access Journals (Sweden)

    C. L. Karmaker

    2015-07-01

    Full Text Available The strategic warehouse location-allocation problem is a multi-stage decision-making problem with both numerical and qualitative criteria. In order to survive in the global business scenario by improving supply chain performance, companies must examine the cross-functional drivers in the optimization of logistics systems. Strategic warehouse location selection has become challenging as the number of alternatives and conflicting criteria increases. The issue becomes particularly problematic when conventional methods are applied to the imprecise nature of linguistic assessments. Qualitative selection decisions are often complicated by the imprecision they present to decision makers, and this must be addressed systematically. Fuzzy multi-criteria decision-making methods are used in this research as aids in making location-allocation decisions. The proposed method consists of two core steps. In the first step, the criteria of the problem are identified and the weights of the sectors and subsectors are determined using fuzzy AHP. In the second step, eligible alternatives are ranked comparatively using TOPSIS and fuzzy TOPSIS. A demonstration of the application of these methodologies to a real-life problem is presented.

  8. OPTIMIZING USABILITY OF AN ECONOMIC DECISION SUPPORT TOOL: PROTOTYPE OF THE EQUIPT TOOL.

    Science.gov (United States)

    Cheung, Kei Long; Hiligsmann, Mickaël; Präger, Maximilian; Jones, Teresa; Józwiak-Hagymásy, Judit; Muñoz, Celia; Lester-George, Adam; Pokhrel, Subhash; López-Nicolás, Ángel; Trapero-Bertran, Marta; Evers, Silvia M A A; de Vries, Hein

    2018-01-01

    Economic decision-support tools can provide valuable information for tobacco control stakeholders, but their usability may impact the adoption of such tools. This study aims to illustrate a mixed-method usability evaluation of an economic decision-support tool for tobacco control, using the EQUIPT ROI tool prototype as a case study. A cross-sectional mixed methods design was used, including a heuristic evaluation, a thinking aloud approach, and a questionnaire testing and exploring the usability of the Return of Investment tool. A total of sixty-six users evaluated the tool (thinking aloud) and completed the questionnaire. For the heuristic evaluation, four experts evaluated the interface. In total twenty-one percent of the respondents perceived good usability. A total of 118 usability problems were identified, from which twenty-six problems were categorized as most severe, indicating high priority to fix them before implementation. Combining user-based and expert-based evaluation methods is recommended as these were shown to identify unique usability problems. The evaluation provides input to optimize usability of a decision-support tool, and may serve as a vantage point for other developers to conduct usability evaluations to refine similar tools before wide-scale implementation. Such studies could reduce implementation gaps by optimizing usability, enhancing in turn the research impact of such interventions.

  9. A Dynamic Intelligent Decision Approach to Dependency Modeling of Project Tasks in Complex Engineering System Optimization

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2013-01-01

    Full Text Available Complex engineering system optimization usually involves multiple projects or tasks. On the one hand, dependency modeling among projects or tasks highlights structures in systems and their environments, which can help to understand the implications of connectivity on different aspects of system performance and also assist in designing, optimizing, and maintaining complex systems. On the other hand, multiple projects or tasks either happen at the same time or are scheduled into a sequence in order to use common resources. In this paper, we propose a dynamic intelligent decision approach to dependency modeling of project tasks in complex engineering system optimization. The approach treats this decision process as a two-stage decision-making problem. In the first stage, a task clustering approach based on modularization is proposed so as to find a suitable decomposition scheme for a large-scale project. In the second stage, according to the decomposition result, a discrete artificial bee colony (ABC) algorithm inspired by the intelligent foraging behavior of honeybees is developed for the resource-constrained multiproject scheduling problem. Finally, a case from the engineering design of a chemical processing system is used to illustrate the proposed approach.

  10. Extending the horizons advances in computing, optimization, and decision technologies

    CERN Document Server

    Joseph, Anito; Mehrotra, Anuj; Trick, Michael

    2007-01-01

    Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.

  11. Driver's Behavior and Decision-Making Optimization Model in Mixed Traffic Environment

    Directory of Open Access Journals (Sweden)

    Xiaoyuan Wang

    2015-02-01

    Full Text Available The driving process is a continuously ongoing information-processing procedure. Studying drivers' information-processing patterns in a mixed traffic environment is very important for traffic flow theory. In this paper, the bicycle is regarded as a kind of information source for vehicle drivers, and the "conflict point method" is put forward to analyze the influence of bicycles on driving behavior. The conflict is translated into a special kind of car-following or lane-changing process. Furthermore, the clocked scan step length of the computer simulation is reduced to 0.1 s in order to scan and analyze the dynamic (and static) information that influences driving behavior more exactly. The driver's decision-making process is described through information fusion based on duality contrast and fuzzy optimization theory. Model testing and verification show that the simulation results obtained with the "conflict point method" are basically consistent with the field data. It is feasible to imitate driving behavior and the driver's information fusion process with the proposed methods, and the optimized decision-making process can be described more accurately through the precision clocked scan strategy. The study in this paper provides a foundation for further research on the multi-source information fusion process of driving behavior.

  12. Development of a fuzzy optimization model, supporting global warming decision-making

    International Nuclear Information System (INIS)

    Leimbach, M.

    1996-01-01

    An increasing number of models have been developed to support global warming response policies. Model builders face many uncertainties which limit the evidential value of these models. The support of climate policy decision-making is only possible in a semi-quantitative way, as provided by a Fuzzy model. The model design is based on an optimization approach, integrated in a bounded-risk decision-making framework. Given some regional emission-related and impact-related restrictions, optimal emission paths can be calculated. The focus is not only on carbon dioxide but on other greenhouse gases too. In the paper, the components of the model are described. Cost coefficients, emission boundaries and impact boundaries are represented as Fuzzy parameters. The Fuzzy model is transformed into a computational one by using an approach of Rommelfanger. In the second part, some problems of applying the model in computations are discussed. This includes discussions of the data situation and of the presentation and interpretation of results of sensitivity analyses. The advantage of the Fuzzy approach is that the requirements regarding data precision are not so strong. Hence, the effort for data acquisition can be reduced and computations can be started earlier. 9 figs., 3 tabs., 17 refs., 1 appendix

  13. Privacy preserving mechanisms for optimizing cross-organizational collaborative decisions based on the Karmarkar algorithm

    NARCIS (Netherlands)

    Zhu, H.; Liu, H.W.; Ou, Carol; Davison, R.M.; Yang, Z.R.

    2017-01-01

    Cross-organizational collaborative decision-making involves a great deal of private information which companies are often reluctant to disclose, even when they need to analyze data collaboratively. The lack of effective privacy-preserving mechanisms for optimizing cross-organizational collaborative

  14. The decision optimization of product development by considering the customer demand saturation

    Directory of Open Access Journals (Sweden)

    Qing-song Xing

    2015-05-01

    Full Text Available Purpose: The purpose of this paper is to analyze the impacts of over-satisfying customer demands on the product development process, on the basis of a quantitative model of customer demands, development cost and time, and then to propose the corresponding product development optimization decision. Design/methodology/approach: First, customer demand information is obtained through investigation, and the weights of customer demands are quantified using the coefficient-of-variation method. Second, the relationship between customer demands and product development time and cost is analyzed based on quality function deployment, and a corresponding mathematical model is established. On this basis, the concept of customer demand saturation and an optimization decision method for product development are put forward and then applied to the notebook development process of a company. Finally, when customer demand is saturated, the consistency between the demands selected for strengthening and the demands with a high degree of attention, and the stability of customer demand saturation under different parameters, are verified. Findings: Development cost and time rise sharply when customer demands are over-satisfied. By considering customer demand saturation, the relationship between customer demand and development time and cost is quantified and balanced. The sequence in which customer demands are met is also basically consistent with the customer demand survey results. Originality/value: The paper proposes a model of customer demand saturation and demonstrates the correctness and effectiveness of the product development decision method.

  15. Optimization of approximate decision rules relative to number of misclassifications: Comparison of greedy and dynamic programming approaches

    KAUST Repository

    Amin, Talha

    2013-01-01

    In the paper, we present a comparison of dynamic programming and greedy approaches for the construction and optimization of approximate decision rules relative to the number of misclassifications. We use an uncertainty measure that is the difference between the number of rows in a decision table T and the number of rows with the most common decision for T. For a nonnegative real number γ, we consider γ-decision rules that localize rows in subtables of T with uncertainty at most γ. Experimental results with decision tables from the UCI Machine Learning Repository are also presented. © 2013 Springer-Verlag.
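
    A simplified sketch of one plausible greedy scheme consistent with this setup: for a chosen row, keep adding the (attribute = value) condition from that row that most reduces the uncertainty of the matching subtable until the uncertainty is at most γ. The toy decision table is invented, and this heuristic is only one of the greedy variants such papers compare.

```python
def uncertainty(rows):
    """Number of rows minus the number of rows with the most common decision."""
    if not rows:
        return 0
    decisions = [d for _, d in rows]
    return len(rows) - max(decisions.count(d) for d in set(decisions))

def greedy_gamma_rule(table, row_index, gamma):
    """Greedily build a gamma-decision rule for one row of a decision table.

    table: list of (attribute_values_tuple, decision).
    Returns a list of (attribute_index, value) conditions.
    """
    values, _ = table[row_index]
    rule, sub, used = [], list(table), set()
    while uncertainty(sub) > gamma:
        best = None
        for a in range(len(values)):
            if a in used:
                continue
            filtered = [r for r in sub if r[0][a] == values[a]]
            u = uncertainty(filtered)
            if best is None or u < best[0]:
                best = (u, a, filtered)
        _, a, sub = best                   # keep the condition that reduces uncertainty most
        used.add(a)
        rule.append((a, values[a]))
    return rule

# Toy decision table: three attributes, decisions in {0, 1}.
T = [((1, 0, 1), 1), ((1, 1, 1), 1), ((0, 0, 1), 0),
     ((1, 0, 0), 0), ((0, 1, 0), 0), ((1, 1, 0), 1)]
print(greedy_gamma_rule(T, row_index=0, gamma=0))   # exact rule for the first row
print(greedy_gamma_rule(T, row_index=0, gamma=1))   # approximate rule allowing 1 misclassification
```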

  16. A note on “An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems”

    OpenAIRE

    R. Venkata Rao

    2012-01-01

    A paper published by Maniya and Bhatt (2011) (An alternative multiple attribute decision making methodology for solving optimal facility layout design selection problems, Computers & Industrial Engineering, 61, 542-549) proposed an alternative multiple attribute decision making method named as “Preference Selection Index (PSI) method” for selection of an optimal facility layout design. The authors had claimed that the method was logical and more appropriate and the method gives directly the o...

  17. Application of Bayesian Decision Theory Based on Prior Information in the Multi-Objective Optimization Problem

    Directory of Open Access Journals (Sweden)

    Xia Lei

    2010-12-01

    Full Text Available It is hard for general multi-objective optimization methods to obtain prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, and in particular how to effectively improve decision-making reliability when reference samples are scarce. The paper demonstrates the effectiveness of the proposed method in a real application, multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results show that Bayesian decision-making based on prior information has better global searching capability when sample data are scarce.

  18. An optimal hierarchical decision model for a regional logistics network with environmental impact consideration.

    Science.gov (United States)

    Zhang, Dezhi; Li, Shuangyan; Qin, Jin

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.
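
    As a small illustration of the multinomial-logit demand split such a heuristic relies on, the sketch below divides logistics users' demand across competing operators according to a linear disutility in fare and expected waiting time (taken as half the headway implied by service frequency); the operators, coefficients and scale parameter are all invented.

```python
import numpy as np

# Competing logistics operators: (fare per shipment, departures per day).
operators = {"A": (120.0, 4), "B": (100.0, 2), "C": (140.0, 8)}

beta_fare = 0.02     # disutility per currency unit (assumed)
beta_wait = 0.10     # disutility per hour of expected waiting (assumed)
theta = 1.0          # logit scale parameter (assumed)

names = list(operators)
disutility = np.array([
    beta_fare * fare + beta_wait * (24.0 / (2 * freq))   # average wait ~ half the headway
    for fare, freq in operators.values()
])

# Multinomial logit: demand share proportional to exp(-theta * disutility).
expu = np.exp(-theta * disutility)
shares = expu / expu.sum()
for name, share in zip(names, shares):
    print(f"operator {name}: predicted demand share {share:.1%}")
```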

  19. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    Directory of Open Access Journals (Sweden)

    Dezhi Zhang

    2014-01-01

    Full Text Available This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users’ demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators’ service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level.

  20. An Optimal Hierarchical Decision Model for a Regional Logistics Network with Environmental Impact Consideration

    Science.gov (United States)

    Zhang, Dezhi; Li, Shuangyan

    2014-01-01

    This paper proposes a new model of simultaneous optimization of three-level logistics decisions, for logistics authorities, logistics operators, and logistics users, for regional logistics network with environmental impact consideration. The proposed model addresses the interaction among the three logistics players in a complete competitive logistics service market with CO2 emission charges. We also explicitly incorporate the impacts of the scale economics of the logistics park and the logistics users' demand elasticity into the model. The logistics authorities aim to maximize the total social welfare of the system, considering the demand of green logistics development by two different methods: optimal location of logistics nodes and charging a CO2 emission tax. Logistics operators are assumed to compete with logistics service fare and frequency, while logistics users minimize their own perceived logistics disutility given logistics operators' service fare and frequency. A heuristic algorithm based on the multinomial logit model is presented for the three-level decision model, and a numerical example is given to illustrate the above optimal model and its algorithm. The proposed model provides a useful tool for modeling competitive logistics services and evaluating logistics policies at the strategic level. PMID:24977209

  1. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

    Directory of Open Access Journals (Sweden)

    Lipu Zhang

    2012-01-01

    Full Text Available This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of "elite decision making." In the new algorithm, the good information captured in the current global best and second-best solutions is utilized to generate new solutions, following a probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including continuous-variable and integer-variable minimization problems from the literature. The computational results show that the proposed algorithm is competitive with state-of-the-art harmony search variants in finding solutions.
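
    A loose sketch of a harmony-search loop with an "elite" flavor, in which memory consideration draws from the best and second-best harmonies rather than from the whole memory; the probability of choosing the best vector, the benchmark function and all parameter settings are assumptions, and this is not the authors' exact update rule.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                       # benchmark objective to minimize
    return float(np.sum(x**2))

dim, hms, iters = 5, 20, 3000
hmcr, par, bw = 0.9, 0.3, 0.05       # memory-consideration, pitch-adjust, bandwidth
lo, hi = -5.0, 5.0

# Initialize harmony memory and its fitness values.
HM = rng.uniform(lo, hi, size=(hms, dim))
fit = np.array([sphere(h) for h in HM])

for _ in range(iters):
    order = np.argsort(fit)
    best, second = HM[order[0]], HM[order[1]]
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:
            # "Elite" memory consideration: draw from the best or second-best harmony.
            new[d] = best[d] if rng.random() < 0.7 else second[d]
            if rng.random() < par:               # pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:
            new[d] = rng.uniform(lo, hi)          # random re-initialization
    new = np.clip(new, lo, hi)
    f_new = sphere(new)
    worst = int(np.argmax(fit))
    if f_new < fit[worst]:                        # replace the worst harmony if improved
        HM[worst], fit[worst] = new, f_new

print("best objective value found:", fit.min())
```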

  2. Optimizing perioperative decision making: improved information for clinical workflow planning.

    Science.gov (United States)

    Doebbeling, Bradley N; Burton, Matthew M; Wiebke, Eric A; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph

    2012-01-01

    Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40-70% of hospital revenues and 30-40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have potential for improving access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders in order to examine the feasibility of applying combinatorial optimization software to solve some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our models generated feasible solutions that varied as expected with resource and policy assumptions and achieved better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction.

  3. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-10-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the aim of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When we compare with the optimal result obtained from the dynamic programming algorithm, we find that some greedy algorithms produce results which are close to the optimal result for the minimization of the depth of decision trees.

  4. Minimization of decision tree depth for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    In this paper, we consider multi-label decision tables that have a set of decisions attached to each row. Our goal is to find one decision from the set of decisions for each row by using a decision tree as our tool. With the aim of minimizing the depth of the decision tree, we devised various kinds of greedy algorithms as well as a dynamic programming algorithm. When we compare with the optimal result obtained from the dynamic programming algorithm, we find that some greedy algorithms produce results which are close to the optimal result for the minimization of the depth of decision trees.

  5. Estimation of power lithium-ion battery SOC based on fuzzy optimal decision

    Science.gov (United States)

    He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    In order to improve vehicle performance and safety, the state of charge (SOC) of the power lithium-ion battery needs to be estimated accurately. After analyzing common SOC estimation methods and drawing on the characteristics of the open-circuit-voltage and Kalman-filter approaches, a lithium battery SOC estimation method based on fuzzy optimal decision is established using a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.
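
    The abstract combines open-circuit voltage, Kalman filtering and a T-S fuzzy model; the sketch below shows only the Kalman-filter building block under illustrative assumptions (a linear OCV-SOC relation, invented cell parameters, coulomb-counting prediction), and does not attempt the fuzzy optimal decision itself.

```python
import random

# Illustrative battery parameters (assumptions, not from the paper).
CAPACITY_AS = 2.3 * 3600      # cell capacity in ampere-seconds
DT = 1.0                      # sample period [s]
OCV0, OCV_SLOPE = 3.0, 1.2    # assumed linear OCV(soc) = OCV0 + OCV_SLOPE * soc
R_INT = 0.05                  # internal resistance [ohm]

def ocv(soc):
    return OCV0 + OCV_SLOPE * soc

def kalman_soc(currents, voltages, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
    """Scalar Kalman filter: coulomb-counting prediction + OCV measurement update."""
    soc, p = soc0, p0
    estimates = []
    for i, v in zip(currents, voltages):
        # Predict: SOC decreases with discharged charge (i > 0 means discharge).
        soc = soc - i * DT / CAPACITY_AS
        p = p + q
        # Update: compare the measured terminal voltage with the model prediction.
        v_pred = ocv(soc) - i * R_INT
        k = p * OCV_SLOPE / (OCV_SLOPE * p * OCV_SLOPE + r)
        soc = soc + k * (v - v_pred)
        p = (1 - k * OCV_SLOPE) * p
        estimates.append(soc)
    return estimates

if __name__ == "__main__":
    # Simulate a constant 1 A discharge from a true SOC of 0.8 with noisy voltage readings.
    true_soc, currents, voltages = 0.8, [], []
    for _ in range(600):
        currents.append(1.0)
        true_soc -= 1.0 * DT / CAPACITY_AS
        voltages.append(ocv(true_soc) - 1.0 * R_INT + random.gauss(0, 0.005))
    est = kalman_soc(currents, voltages, soc0=0.6)   # deliberately wrong initial guess
    print("final estimate %.3f vs true %.3f" % (est[-1], true_soc))
```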

  6. Justification, optimization and decision-aiding in existing exposure situations

    International Nuclear Information System (INIS)

    Hedemann-Jensen, Per

    2004-01-01

    The existing ICRP system of radiological protection from 1990 (ICRP Publication 60) can be seen as a binary or dual-line system dealing with protection in exposure situations categorized as either practices or interventions. The distinction between practices and interventions is summarized in the paper with focus on some of the problems experienced in making such a distinction. The protection principles within the existing system of protection are presented with emphasis on the application to de facto or existing exposure situations. Decision on countermeasures to mitigate the consequences of existing exposure situations such as nuclear or radiological accidents and naturally occurring exposure situations include factors or attributes describing benefits from the countermeasure and those describing harm. Some of these attributes are discussed and the general process of justification of intervention and optimization of protection arriving at generic reference levels for implementing protective measures is presented. In addition, the role of radiological protection professionals and other stakeholders in the decision-making process is discussed. Special attention is given to the question whether radiological protection should form only one of many decision-aiding inputs to a broader societal decision-making process or whether societal aspects should be fully integrated into the radiological protection framework. The concepts of practices and interventions, however logical they are, have created some confusion when applied to protection of the public following a nuclear or radiological accident. These problems may be solved in a new set of general ICRP recommendations on radiological protection, which are anticipated to supersede Publication 60 in 2005. The evolution of the basic ICRP principles for radiological protection in existing exposure situations into a new set of ICRP recommendations is briefly discussed based upon the various material that has been presented

  7. Optimal offering and operating strategies for wind-storage systems with linear decision rules

    DEFF Research Database (Denmark)

    Ding, Huajie; Pinson, Pierre; Hu, Zechun

    2016-01-01

    The participation of wind farm-energy storage systems (WF-ESS) in electricity markets calls for an integrated view of day-ahead offering strategies and real-time operation policies. Such an integrated strategy is proposed here by co-optimizing offering at the day-ahead stage and operation policy...... to be used at the balancing stage. Linear decision rules are seen as a natural approach to model and optimize the real-time operation policy. These allow enhancing profits from balancing markets based on updated information on prices and wind power generation. Our integrated strategies for WF...

  8. Trading river services: optimizing dam decisions at the basin scale to improve socio-ecological resilience

    Science.gov (United States)

    Roy, S. G.; Gold, A.; Uchida, E.; McGreavy, B.; Smith, S. M.; Wilson, K.; Blachly, B.; Newcomb, A.; Hart, D.; Gardner, K.

    2017-12-01

    Dam removal has become a cornerstone of environmental restoration practice in the United States. One outcome of dam removal that has received positive attention is restored access to historic habitat for sea-run fisheries, providing a crucial gain in ecosystem resilience. But dams also provide stakeholders with valuable services, and uncertain socio-ecological outcomes can arise if there is not careful consideration of the basin scale trade offs caused by dam removal. In addition to fisheries, dam removals can significantly affect landscape nutrient flux, municipal water storage, recreational use of lakes and rivers, property values, hydroelectricity generation, the cultural meaning of dams, and many other river-based ecosystem services. We use a production possibility frontiers approach to explore dam decision scenarios and opportunities for trading between ecosystem services that are positively or negatively affected by dam removal in New England. Scenarios that provide efficient trade off potentials are identified using a multiobjective genetic algorithm. Our results suggest that for many river systems, there is a significant potential to increase the value of fisheries and other ecosystem services with minimal dam removals, and further increases are possible by including decisions related to dam operations and physical modifications. Run-of-river dams located near the head of tide are often found to be optimal for removal due to low hydroelectric capacity and high impact on fisheries. Conversely, dams with large impoundments near a river's headwaters can be less optimal for dam removal because their value as nitrogen sinks often outweighs the potential value for fisheries. Hydropower capacity is negatively impacted by dam removal but there are opportunities to meet or exceed lost capacity by upgrading preserved hydropower dams. Improving fish passage facilities for dams that are critical for safety or water storage can also reduce impacts on fisheries. Our

  9. A complex systems approach to planning, optimization and decision making for energy networks

    International Nuclear Information System (INIS)

    Beck, Jessica; Kempener, Ruud; Cohen, Brett; Petrie, Jim

    2008-01-01

    This paper explores a new approach to planning and optimization of energy networks, using a mix of global optimization and agent-based modeling tools. This approach takes account of techno-economic, environmental and social criteria, and engages explicitly with inherent network complexity in terms of the autonomous decision-making capability of individual agents within the network, who may choose not to act as economic rationalists. This is an important consideration from the standpoint of meeting sustainable development goals. The approach attempts to set targets for energy planning, by determining preferred network development pathways through multi-objective optimization. The viability of such plans is then explored through agent-based models. The combined approach is demonstrated for a case study of regional electricity generation in South Africa, with biomass as feedstock

  10. Optimal Decision Rules in Repeated Games Where Players Infer an Opponent’s Mind via Simplified Belief Calculation

    Directory of Open Access Journals (Sweden)

    Mitsuhiro Nakamura

    2016-07-01

    Full Text Available In strategic situations, humans infer the state of mind of others, e.g., emotions or intentions, adapting their behavior appropriately. Nonetheless, evolutionary studies of cooperation typically focus only on reaction norms, e.g., tit for tat, whereby individuals make their next decisions by only considering the observed outcome rather than focusing on their opponent’s state of mind. In this paper, we analyze repeated two-player games in which players explicitly infer their opponent’s unobservable state of mind. Using Markov decision processes, we investigate optimal decision rules and their performance in cooperation. The state-of-mind inference requires Bayesian belief calculations, which is computationally intensive. We therefore study two models in which players simplify these belief calculations. In Model 1, players adopt a heuristic to approximately infer their opponent’s state of mind, whereas in Model 2, players use information regarding their opponent’s previous state of mind, obtained from external evidence, e.g., emotional signals. We show that players in both models reach almost optimal behavior through commitment-like decision rules by which players are committed to selecting the same action regardless of their opponent’s behavior. These commitment-like decision rules can enhance or reduce cooperation depending on the opponent’s strategy.

  11. Reward optimization in the primate brain: a probabilistic model of decision making under uncertainty.

    Directory of Open Access Journals (Sweden)

    Yanping Huang

    Full Text Available A key problem in neuroscience is understanding how the brain makes decisions under uncertainty. Important insights have been gained using tasks such as the random dots motion discrimination task in which the subject makes decisions based on noisy stimuli. A descriptive model known as the drift diffusion model has previously been used to explain psychometric and reaction time data from such tasks, but to fully explain the data, one is forced to make ad-hoc assumptions such as a time-dependent collapsing decision boundary. We show that such assumptions are unnecessary when decision making is viewed within the framework of partially observable Markov decision processes (POMDPs). We propose an alternative model for decision making based on POMDPs. We show that the motion discrimination task reduces to the problems of (1) computing beliefs (posterior distributions) over the unknown direction and motion strength from noisy observations in a Bayesian manner, and (2) selecting actions based on these beliefs to maximize the expected sum of future rewards. The resulting optimal policy (belief-to-action mapping) is shown to be equivalent to a collapsing decision threshold that governs the switch from evidence accumulation to a discrimination decision. We show that the model accounts for both accuracy and reaction time as a function of stimulus strength as well as different speed-accuracy conditions in the random dots task.
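
    The belief computation in step (1) can be illustrated with a deliberately simplified two-hypothesis version: leftward vs. rightward motion, Gaussian momentary evidence, and a known motion strength. The fixed belief criterion used below stands in for the policy in step (2) and is an assumption for illustration, not the POMDP-derived collapsing threshold from the paper.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def simulate_trial(drift=0.2, sigma=1.0, criterion=0.95, max_steps=1000):
    """Accumulate noisy evidence and update the posterior P(rightward motion) each step."""
    log_odds = 0.0                      # log P(right) / P(left); flat prior
    for t in range(1, max_steps + 1):
        x = random.gauss(drift, sigma)  # momentary evidence; true direction is rightward
        # Bayesian update: add the log-likelihood ratio of the sample under +drift vs -drift.
        log_odds += math.log(normal_pdf(x, drift, sigma) / normal_pdf(x, -drift, sigma))
        belief_right = 1.0 / (1.0 + math.exp(-log_odds))
        if belief_right >= criterion:
            return "right", t
        if belief_right <= 1.0 - criterion:
            return "left", t
    return "undecided", max_steps

if __name__ == "__main__":
    trials = [simulate_trial() for _ in range(500)]
    accuracy = sum(1 for choice, _ in trials if choice == "right") / len(trials)
    mean_rt = sum(t for _, t in trials) / len(trials)
    print("accuracy %.2f, mean decision time %.1f samples" % (accuracy, mean_rt))
```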

  12. The Bayesian statistical decision theory applied to the optimization of generating set maintenance

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1994-11-01

    The difficulty in RCM methodology is the allocation of a new preventive maintenance periodicity to a piece of equipment when a critical failure has been identified: until now this new allocation has been based on the engineer's judgment, and one must wait for a full cycle of feedback experience before it can be validated. Statistical decision theory could be a more rational alternative for the optimization of preventive maintenance periodicity. This methodology has been applied to the inspection and maintenance optimization of cylinders of diesel generator engines in 900 MW nuclear plants, and has shown that the previous preventive maintenance periodicity can be extended. (authors). 8 refs., 5 figs

  13. Study on optimized decision-making model of offshore wind power projects investment

    Science.gov (United States)

    Zhao, Tian; Yang, Shangdong; Gao, Guowei; Ma, Li

    2018-02-01

    China’s offshore wind energy has great potential and plays an important role in promoting the adjustment of China’s energy structure. However, offshore wind power in China is currently much less developed than onshore wind power. Taking into account the various risks faced by offshore wind power development, an optimized offshore wind power investment decision model is established in this paper by proposing a risk-benefit assessment method. To demonstrate the practicability of this method in improving the selection of wind power projects, Python programming is used to simulate the investment analysis of a large number of projects. The paper thus aims to provide decision-making support for the sound development of the offshore wind power industry.

  14. Simulation of Optimal Decision-Making Under the Impacts of Climate Change.

    Science.gov (United States)

    Møller, Lea Ravnkilde; Drews, Martin; Larsen, Morten Andreas Dahl

    2017-07-01

    Climate change transforms the conditions of existing agricultural practices, requiring farmers to continuously evaluate their agricultural strategies, e.g., towards optimising revenue. In this light, this paper presents a framework for applying Bayesian updating to simulate decision-making, reaction patterns and updating of beliefs among farmers in a developing country, when faced with the complexity of adapting agricultural systems to climate change. We apply the approach to a case study from Ghana, where farmers seek to decide on the most profitable of three agricultural systems (dryland crops, irrigated crops and livestock) by continuously updating beliefs relative to realised trajectories of climate (change), represented by projections of temperature and precipitation. The climate data are based on output from three global/regional climate model combinations and two future scenarios (RCP4.5 and RCP8.5) representing moderate and unsubstantial greenhouse gas reduction policies, respectively. The results indicate that the climate scenario (input) has a significant influence on the development of beliefs, net revenues and thereby optimal farming practices. Further, despite uncertainties in the underlying net revenue functions, the study shows that when the beliefs of the farmer (decision-maker) oppose the development of the realised climate, the Bayesian methodology allows for simulating an adjustment of such beliefs when improved information becomes available. The framework can, therefore, help facilitate the optimal choice between agricultural systems considering the influence of climate change.

  15. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    Science.gov (United States)

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multi-layered perceptron, and the superiority of Elman networks was demonstrated. The developed methods allow a simple, yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets with controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
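
    The difference (f1) and similarity (f2) factors mentioned above are standard dissolution-profile comparison metrics; a small sketch with made-up release profiles follows (the data are not from the study).

```python
import math

def f1_difference(reference, test):
    """f1 = 100 * sum|R - T| / sum(R); lower means more similar."""
    num = sum(abs(r - t) for r, t in zip(reference, test))
    return 100.0 * num / sum(reference)

def f2_similarity(reference, test):
    """f2 = 50 * log10(100 / sqrt(1 + mean squared difference)); values of 50-100 indicate similarity."""
    n = len(reference)
    msd = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

if __name__ == "__main__":
    # Hypothetical cumulative % drug released at successive sampling times.
    observed = [12, 25, 41, 58, 73, 85]
    predicted = [14, 27, 39, 55, 70, 83]
    print("f1 = %.2f" % f1_difference(observed, predicted))
    print("f2 = %.2f" % f2_similarity(observed, predicted))
```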

  16. Review of models and actors in energy mix optimization – can leader visions and decisions align with optimum model strategies for our future energy systems?

    NARCIS (Netherlands)

    Weijermars, R.; Taylor, P.; Bahn, O.; Das, S.R.; Wei, Y.M.

    2011-01-01

    Organizational behavior and stakeholder processes continually influence energy strategy choices and decisions. Although theoretical optimizations can provide guidance for energy mix decisions from a pure physical systems engineering point of view, these solutions might not be optimal from a

  17. Multiobjective Optimization of Aircraft Maintenance in Thailand Using Goal Programming: A Decision-Support Model

    Directory of Open Access Journals (Sweden)

    Yuttapong Pleumpirom

    2012-01-01

    Full Text Available The purpose of this paper is to develop a multiobjective optimization model for evaluating suppliers for aircraft maintenance tasks, using goal programming. The authors have developed a two-step process. The model is first used as a decision-support tool for managing demand, using aircraft and flight schedules to evaluate and generate aircraft-maintenance requirements, including spare-part lists. Secondly, a multiobjective optimization model is developed that minimizes cost, minimizes lead time, and maximizes quality under various constraints. Finally, the model is implemented in an actual airline case.
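
    A toy weighted goal-programming formulation in the same spirit (but not the paper's model) is sketched below: order quantities from two hypothetical suppliers are chosen so that weighted deviations from cost, lead-time and quality goals are minimized, with the demand goal treated as hard. All targets, coefficients and weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables:
# [x1, x2, cost_over, cost_under, lead_over, lead_under, qual_over, qual_under]
# x1, x2 = units ordered from hypothetical suppliers A and B.
weights = np.array([0, 0, 1, 0, 1, 0, 0, 5])  # penalize cost overrun, lead-time overrun, quality shortfall

A_eq = np.array([
    [1,   1,    0, 0,  0, 0,  0, 0],   # demand (hard goal): x1 + x2 = 100 units
    [5,   7,   -1, 1,  0, 0,  0, 0],   # cost goal: 5 x1 + 7 x2 - over + under = 600
    [2,   1,    0, 0, -1, 1,  0, 0],   # lead-time goal: 2 x1 + 1 x2 - over + under = 180
    [0.9, 0.95, 0, 0,  0, 0, -1, 1],   # quality goal: 0.9 x1 + 0.95 x2 - over + under = 92
])
b_eq = np.array([100, 600, 180, 92])

res = linprog(weights, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8, method="highs")
x1, x2 = res.x[:2]
print("order %.1f units from A and %.1f from B, total goal penalty %.3f" % (x1, x2, res.fun))
```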

  18. Optimal Decision Making Framework of an Electric Vehicle Aggregator in Future and Pool markets

    DEFF Research Database (Denmark)

    Rashidizadeh-Kermani, Homa; Najafi, Hamid Reza; Anvari-Moghaddam, Amjad

    2018-01-01

    An electric vehicle (EV) aggregator, as an agent between power producers and EV owners, participates in the future and pool markets to supply the EVs' requirements. Because of the uncertain nature of pool prices and EV behavior, this paper proposes a two-stage scenario-based model to obtain the optimal decision making of an EV aggregator. To deal with the mentioned uncertainties, the aggregator's risk aversion is modeled using the conditional value at risk (CVaR) method. The proposed two-stage risk-constrained decision-making problem is applied to maximize the EV aggregator's expected profit in an uncertain environment. The aggregator can participate in the future and pool markets to buy the energy required by the EVs and offer optimal charge/discharge prices to the EV owners. In this model, in order to assess the effects of the EV owners' reaction to the aggregator's offered prices on the purchases from electricity markets, a sensitivity analysis over the risk factor is performed. The numerical results demonstrate that, with the application of the proposed model, the aggregator can supply the EVs with lower purchases from the markets.

  19. Impacts of government subsidies on pricing and performance level choice in Energy Performance Contracting: A two-step optimal decision model

    International Nuclear Information System (INIS)

    Lu, Zhijian; Shao, Shuai

    2016-01-01

    Highlights: • An ESCO optimal decision model considering governmental subsidies is proposed. • Optimal price and performance level are deduced via a two-stage model. • Demand, profit, and performance level increase with increasing subsidies. • ESCO’s market strategy should firstly focus on high energy consumption industries. • Governmental subsidies standard in different industries should be differentiated. - Abstract: Government subsidies generally play a crucial role in pricing and the choice of performance levels in Energy Performance Contracting (EPC). However, the existing studies pay little attention to how the Energy Service Company (ESCO) prices and chooses performance levels for EPC with government subsidies. To fill such a gap, we propose a joint optimal decision model of pricing and performance level in EPC considering government subsidies. The optimization of the model is achieved via a two-stage process. At the first stage, given a performance level, ESCOs choose the best price; and at the second stage, ESCOs choose the optimal performance level for the optimal price. Furthermore, we carry out a numerical analysis to illuminate such an optimal decision mechanism. The results show that both price sensitivity and performance level sensitivity have significant effects on the choice of performance levels with government subsidies. Government subsidies can induce higher performance levels of EPC, the demand for EPC, and the profit of ESCO. We suggest that ESCO’s market strategy should firstly focus on high energy consumption industries with government subsidies and that government subsidies standard adopted in different industries should be differentiated according to the market characteristics and energy efficiency levels of various industries.

  20. Methods for providing decision makers with optimal solutions for multiple objectives that change over time

    CSIR Research Space (South Africa)

    Greeff, M

    2010-09-01

    Full Text Available Decision making - with the goal of finding the optimal solution - is an important part of modern life. For example: In the control room of an airport, the goals or objectives are to minimise the risk of airplanes colliding, minimise the time that a...

  1. Optimal Modeling of Wireless LANs: A Decision-Making Multiobjective Approach

    Directory of Open Access Journals (Sweden)

    Tomás de Jesús Mateo Sanguino

    2018-01-01

    Full Text Available Communication infrastructure planning is a critical design task that typically requires handling complex networking concepts aimed at optimizing performance and resources, thus demanding high analytical and problem-solving skills from engineers. To reduce this gap, this paper describes an optimization algorithm—based on an evolutionary strategy—created as an aid for decision-making prior to the real deployment of wireless LANs. The developed algorithm allows automating the design process, traditionally done by hand by network technicians, in order to save time and cost by improving the WLAN arrangement. To this end, we implemented a multiobjective genetic algorithm (MOGA) with the purpose of meeting two simultaneous design objectives, namely, minimizing the number of APs while maximizing the coverage signal over the whole planning area. Such an approach provides efficient and scalable solutions closer to the best network design, so we integrated the developed algorithm into an engineering tool with the goal of modelling the behavior of WLANs in ICT infrastructures. Called WiFiSim, it allows the investigation of various complex issues concerning the design of IEEE 802.11-based WLANs, thereby facilitating the study, design, and optimal deployment of wireless LANs through complete modelling software. As a result, we comparatively evaluated three target applications considering small, medium, and large scenarios against a previously developed approach, a mono-objective genetic algorithm.

  2. Using multi-criteria decision making for selection of the optimal strategy for municipal solid waste management.

    Science.gov (United States)

    Jovanovic, Sasa; Savic, Slobodan; Jovicic, Nebojsa; Boskovic, Goran; Djordjevic, Zorica

    2016-09-01

    Multi-criteria decision making (MCDM) is a relatively new tool for decision makers who deal with numerous and often contradictory factors during their decision making process. This paper presents a procedure to choose the optimal municipal solid waste (MSW) management system for the area of the city of Kragujevac (Republic of Serbia) based on the MCDM method. Two methods of multiple attribute decision making, i.e. SAW (simple additive weighting method) and TOPSIS (technique for order preference by similarity to ideal solution), respectively, were used to compare the proposed waste management strategies (WMS). Each of the created strategies was simulated using the software package IWM2. Total values for eight chosen parameters were calculated for all the strategies. Contribution of each of the six waste treatment options was valorized. The SAW analysis was used to obtain the sum characteristics for all the waste management treatment strategies and they were ranked accordingly. The TOPSIS method was used to calculate the relative closeness factors to the ideal solution for all the alternatives. Then, the proposed strategies were ranked in form of tables and diagrams obtained based on both MCDM methods. As shown in this paper, the results were in good agreement, which additionally confirmed and facilitated the choice of the optimal MSW management strategy. © The Author(s) 2016.
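
    Both ranking methods can be written down compactly. The sketch below applies SAW and TOPSIS to a made-up decision matrix (strategies in rows, benefit-type criteria in columns) with assumed weights; it is not the IWM2 data used in the study.

```python
import numpy as np

def saw_scores(matrix, weights):
    """Simple additive weighting: normalize each column to [0, 1] and take weighted sums."""
    norm = matrix / matrix.max(axis=0)
    return norm @ weights

def topsis_scores(matrix, weights):
    """TOPSIS: relative closeness to the ideal solution (higher is better)."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))     # vector normalization
    weighted = norm * weights
    ideal, anti = weighted.max(axis=0), weighted.min(axis=0)
    d_plus = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((weighted - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)

if __name__ == "__main__":
    # Rows = hypothetical waste-management strategies, columns = benefit-type criteria scores.
    decision_matrix = np.array([
        [0.60, 0.40, 0.80],
        [0.75, 0.55, 0.50],
        [0.50, 0.90, 0.65],
    ])
    weights = np.array([0.5, 0.3, 0.2])
    print("SAW ranking   :", np.argsort(-saw_scores(decision_matrix, weights)) + 1)
    print("TOPSIS ranking:", np.argsort(-topsis_scores(decision_matrix, weights)) + 1)
```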

  3. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad

    2014-09-29

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set of decisions attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When compared with the optimal result from the dynamic programming algorithm, some greedy algorithms produce results that are close to the optimal result for the minimization of the number of nodes (at most 18.92% difference), number of nonterminal nodes (at most 20.76% difference), and number of terminal nodes (at most 18.71% difference).

  4. Minimizing size of decision trees for multi-label decision tables

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail

    2014-01-01

    We used decision trees as a model to discover knowledge from multi-label decision tables, where each row has a set of decisions attached to it and our goal is to find one arbitrary decision from the set of decisions attached to a row. The size of the decision tree can be small as well as very large. We study here different greedy as well as dynamic programming algorithms to minimize the size of the decision trees. When compared with the optimal result from the dynamic programming algorithm, some greedy algorithms produce results that are close to the optimal result for the minimization of the number of nodes (at most 18.92% difference), number of nonterminal nodes (at most 20.76% difference), and number of terminal nodes (at most 18.71% difference).

  5. Decision-making methodology of optimal shielding materials by using fuzzy linear programming

    International Nuclear Information System (INIS)

    Kanai, Y.; Miura, T.; Hirao, Y.

    2000-01-01

    The main purpose of our studies is to select materials and determine the ratio of constituent materials as the first stage of optimum shielding design, to suit the individual requirements of nuclear reactors, reprocessing facilities, casks for shipping spent fuel, etc. The parameters of the shield optimization are cost, space, weight and some shielding properties such as activation rates for individual irradiation and cooling times, and total dose rate for neutrons (including secondary gamma rays) and for primary gamma rays. Using conventional two-valued logic (i.e. crisp) approaches, huge combinatorial calculations are needed to identify suitable materials for optimum shielding design. Also, re-computation is required for minor changes, as the approach does not react sensitively to the computation result. In the present approach, which uses a fuzzy linear programming method, much of the decision-making toward a satisfying solution can take place in a fuzzy environment, and it can quickly and easily provide a guiding principle for the optimal selection of shielding materials under the above-mentioned conditions. The possibility of reducing radiation effects by optimizing the ratio of constituent materials is investigated. (author)

  6. Performance comparison of low-grade ORCs (organic Rankine cycles) using R245fa, pentane and their mixtures based on the thermoeconomic multi-objective optimization and decision makings

    International Nuclear Information System (INIS)

    Feng, Yongqiang; Hung, TzuChen; Zhang, Yaning; Li, Bingxi; Yang, Jinfu; Shi, Yang

    2015-01-01

    Based on thermoeconomic multi-objective optimization and decision making, considering both exergy efficiency and LEC (levelized energy cost), the performance of low-grade ORCs (organic Rankine cycles) using R245fa, pentane and their mixtures has been compared. The effects of the mass fraction of R245fa and four key parameters on the exergy efficiency and LEC are examined. The Pareto-optimal solutions are selected from the Pareto optimal frontier obtained by the NSGA-II algorithm using three decision-making methods: Shannon Entropy, LINMAP and TOPSIS. A deviation index is introduced to evaluate the different decision-making methods. The research demonstrates that as the mass fraction of R245fa increases, the exergy efficiency decreases first and then increases, while LEC shows the reverse trend. The optimum values from TOPSIS decision making are selected as the preferred Pareto-optimal solution because of its lowest deviation index. The Pareto-optimal solutions for pentane, R245fa, and 0.5pentane/0.5R245fa, in pairs of (exergy efficiency, LEC), are (0.5425, 0.104), (0.5502, 0.111), and (0.5212, 0.108), respectively. The mixture working fluids present lower thermodynamic performance and moderate economic performance compared with the pure working fluids under the Pareto optimization. - Highlights: • The thermoeconomic comparison between pure and mixture working fluids is investigated. • The Pareto-optimal solutions of the bi-objective function using three decision-making methods are obtained. • The optimum values from TOPSIS decision making are selected as the preferred Pareto-optimal solution. • The mixture yields lower thermodynamic performance and moderate economic performance.

  7. Research on the robust optimization of the enterprise's decision on the investment to the collaborative innovation: Under the risk constraints

    International Nuclear Information System (INIS)

    Zhou, Qing; Fang, Gang; Wang, Dong-peng; Yang, Wei

    2016-01-01

    Abstract: A robust optimization model is applied to analyze the enterprise's decision on the investment portfolio for collaborative innovation under risk constraints. Through mathematical model deduction and simulation analysis, the results show that the enterprise's investment in collaborative innovation has a relatively obvious robust effect. In collaborative innovation, the return from the investment coexists with its risk. Under the risk constraints, the robust optimization method can determine the minimum risk as well as the proportion of each investment scheme in the portfolio for different target returns on the investment. On this basis, the enterprise can balance investment return against risk and make an optimal decision on the investment scheme.
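
    The paper's robust model is not reproduced here, but the underlying portfolio trade-off can be illustrated with a closely related sketch: minimize portfolio variance subject to a target return over hypothetical collaborative-innovation investment schemes. The expected returns and covariance matrix below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns and covariance of three innovation investment schemes.
mu = np.array([0.08, 0.12, 0.15])
cov = np.array([
    [0.010, 0.002, 0.001],
    [0.002, 0.030, 0.004],
    [0.001, 0.004, 0.060],
])

def min_risk_portfolio(target_return):
    """Minimize variance subject to reaching the target return; weights sum to 1, no shorting."""
    n = len(mu)
    constraints = [
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: w @ mu - target_return},
    ]
    result = minimize(lambda w: w @ cov @ w, x0=np.full(n, 1.0 / n),
                      bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return result.x, np.sqrt(result.fun)

if __name__ == "__main__":
    for target in (0.09, 0.11, 0.13):
        weights, risk = min_risk_portfolio(target)
        print("target %.2f -> weights %s, risk %.4f" % (target, np.round(weights, 3), risk))
```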

  8. Decision-Making Approach to Selecting Optimal Platform of Service Variants

    Directory of Open Access Journals (Sweden)

    Vladimir Modrak

    2016-01-01

    Full Text Available Nowadays, it is anticipated that service sector companies will be inspired to follow the mass customization trends of the industrial sector. However, services are more abstract than products, and therefore concepts for mass customization in the manufacturing domain cannot be transferred without methodical changes. This paper focuses on the development of a methodological framework to support decisions in the selection of an optimal platform of service variants when compatibility problems between service options occur. The approach is based on the mutual relations between waste and constrained design space entropy. For this purpose, software for the quantification of constrained and waste design space is developed. The practicability of the methodology is demonstrated on a realistic case.

  9. Towards optimal decision making in personalized medicine : Potential value assessment of biomarkers in heart failure exemplars

    NARCIS (Netherlands)

    Cao, Qi

    2016-01-01

    Treatment selection based on average effects observed in entire target population masks variation among patients (heterogeneity) and may result in less than optimal decision making. Personalized medicine is a new and complex concept, which aims to improve health by offering more tailored and

  10. Decision-Related Activity in Macaque V2 for Fine Disparity Discrimination Is Not Compatible with Optimal Linear Readout.

    Science.gov (United States)

    Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje

    2017-01-18

    Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple

  11. Empirical validation of a real options theory based method for optimizing evacuation decisions within chemical plants.

    Science.gov (United States)

    Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K

    2011-02-15

    This article empirically assesses and validates a methodology for making evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed and discussed with respect to the implications and management of evacuation decisions in chemical companies. It is shown that in realistic industrial settings, suboptimal interventions may result if the prospect of obtaining additional information at later stages of the decision process is ignored. Empirical results also show that the implications of interventions, as well as the required time and workforce to complete particular shutdown activities, may be very different from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions are tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Optimal Decision-making Model of Integrated Water Resources Management - A Case of Hsinchu Water Resources Management

    Science.gov (United States)

    Wang, S. Y.; Ho, C. C.; Chang, L. C.

    2017-12-01

    The public water supply in Hsinchu is mainly provided by Baoshan Reservoir, Second Baoshan Reservoir, Yongheshan Reservoir and Longen Weir. However, the increasing water demand caused by the development of the Hsinchu Science and Industrial Park makes it more difficult to supply water stably. To stabilize the water supply in Hsinchu, the study applies long-term and short-term plans to address the water shortage. Developing an efficient methodology to define a cost-effective action portfolio is an important task. Hence, the study develops a novel decision model, the Stochastic Programming with Recourse Decision Model (SPRDM), to estimate a cost-effective action portfolio. The first stage of SPRDM determines the long-term action portfolio and the accompanying recourse information (the probability of a water shortage event). The second stage of SPRDM optimizes the cost-effective action portfolio in response to the recourse information. In order to consider the uncertainty of reservoir sedimentation and demand growth, the study sets nine scenarios comprising optimistic, most likely, and pessimistic reservoir sedimentation and demand growth. The results show that the optimal action portfolio consists of the FengTain Lake and Panlon Weir, Hsinchu Desalination Plant, and Domestic and Industrial Water long-term plans, and the Emergency Backup Well, Irrigation Water Transference, Preliminary Water Rationing, Advanced Water Rationing and Water Transport from Other Districts short-term plans. The minimum expected cost of the optimal action portfolio is NT$1.1002 billion. The results can be used as a reference for decision making because they consider the uncertainty of varied hydrology, reservoir sedimentation, and water demand growth.

  13. Optimization of urban water supply portfolios combining infrastructure capacity expansion and water use decisions

    Science.gov (United States)

    Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.

    2015-12-01

    The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies requires a strategic combination of various supply sources for added reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources merits decisions about what and when to expand, and how much to use of each available source, accounting for interest rates, economies of scale and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming and combining both short-term and long-term optimization of water use and simulation of water allocation. A case study in Bahia Do Rio Dos Sinos in Southern Brazil is presented. The framework couples a quadratic programming optimization model in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions and (b) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion. Results also highlight the potential of various water supply alternatives including conservation, groundwater, and infrastructural enhancements over time. The framework proves its usefulness for planning and its transferability to similarly urbanized systems.

  14. Optimization of decision rule complexity for decision tables with many-valued decisions

    KAUST Repository

    Azad, Mohammad; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    compare our results with optimal result obtained by dynamic programming algorithms. The average percentage of relative difference between length (coverage) of constructed and optimal rules is at most 6.89% (15.89%, respectively) for leaders which seems

  15. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  16. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo; Bajic, Vladimir B.

    2017-01-01

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  17. Optimal decisions of countries with carbon tax and carbon tariff

    Directory of Open Access Journals (Sweden)

    Yumei Hou

    2015-05-01

    Full Text Available Purpose: Reducing carbon emissions has recently become the core problem in controlling global warming and climate deterioration. This paper focuses on the optimal carbon taxation policy levied by countries and its impact on firms’ optimal production decisions. Design/methodology/approach: This paper uses a two-stage game theory model to analyze the impact of carbon tariffs and taxes. Numerical simulation is used to supplement the theoretical analysis. Findings: Results derived from the paper indicate that demand in an unstable market is significantly affected by the environmental damage level. A carbon tariff is a policy-oriented tax, while a carbon tax is a market-oriented one. A comprehensive carbon taxation policy benefits developed countries, while a basic policy is more suitable for developing countries. Research limitations/implications: In this research, we do not consider random demand or asymmetric information, which may limit how well the model reflects reality. Originality/value: This work provides a different perspective for analyzing the impact of carbon taxes and tariffs. It is the first study to consider two consuming markets and the strategic game between two countries. The different international status of the countries considered in the paper is also a unique point.

  18. Decision tables and rule engines in organ allocation systems for optimal transparency and flexibility.

    Science.gov (United States)

    Schaafsma, Murk; van der Deijl, Wilfred; Smits, Jacqueline M; Rahmel, Axel O; de Vries Robbé, Pieter F; Hoitsma, Andries J

    2011-05-01

    Organ allocation systems have become complex and difficult to comprehend. We introduced decision tables to specify the rules of allocation systems for different organs. A rule engine with decision tables as input was tested for the Kidney Allocation System (ETKAS). We compared this rule engine with the currently used ETKAS by running 11,000 historical match runs and by running the rule engine in parallel with the ETKAS on our allocation system. Decision tables were easy to implement and successful in verifying correctness, completeness, and consistency. The outcomes of the 11,000 historical matches in the rule engine and the ETKAS were exactly the same. Running the rule engine simultaneously in parallel and in real time with the ETKAS also produced no differences. Specifying organ allocation rules in decision tables is already a great step forward in enhancing the clarity of the systems. Yet, using these tables as rule engine input for matches optimizes the flexibility, simplicity and clarity of the whole process, from specification to the performed matches, and in addition this new method allows well controlled simulations. © 2011 The Authors. Transplant International © 2011 European Society for Organ Transplantation.
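
    A stripped-down illustration of driving allocation logic from a decision table is sketched below. The conditions, point values and candidate fields are invented for illustration and are not the ETKAS rules; the point is that the rules live in a data structure that can be inspected, verified and simulated independently of the engine.

```python
# Each row of the decision table maps condition outcomes to an action (here, a point bonus).
# A value of None means "any value" for that condition.
DECISION_TABLE = [
    {"conditions": {"blood_match": True,  "pediatric": None, "long_wait": None}, "points": 400},
    {"conditions": {"blood_match": None,  "pediatric": True, "long_wait": None}, "points": 300},
    {"conditions": {"blood_match": None,  "pediatric": None, "long_wait": True}, "points": 100},
]

def evaluate(candidate):
    """Apply every matching decision-table row and accumulate allocation points."""
    facts = {
        "blood_match": candidate["blood_group"] == candidate["donor_blood_group"],
        "pediatric": candidate["age"] < 16,
        "long_wait": candidate["waiting_years"] >= 5,
    }
    total = 0
    for row in DECISION_TABLE:
        if all(v is None or facts[k] == v for k, v in row["conditions"].items()):
            total += row["points"]
    return total

if __name__ == "__main__":
    candidates = [
        {"name": "A", "blood_group": "0", "donor_blood_group": "0", "age": 45, "waiting_years": 6},
        {"name": "B", "blood_group": "A", "donor_blood_group": "0", "age": 12, "waiting_years": 2},
    ]
    ranked = sorted(candidates, key=evaluate, reverse=True)
    print([(c["name"], evaluate(c)) for c in ranked])
```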

  19. Preventive maintenance: optimization of time - based discard decisions at the bruce nuclear generating station

    International Nuclear Information System (INIS)

    Doyle, E.K.; Jardine, A.K.S.

    2001-01-01

    The use of various maintenance optimization techniques at Bruce has led to cost-effective preventive maintenance applications for complex systems. As previously reported at ICONE 6 in New Orleans, 1996, several innovative practices reduced Reliability Centered Maintenance costs while maintaining the accuracy of the analysis. The optimization strategy has undergone further evolution, and at present an Integrated Maintenance Program (IMP) is in place in which an Expert Panel consisting of all players/experts proceeds through each system in a disciplined fashion and reaches agreement on all items within a rigorous time frame. It is well known that there are essentially three maintenance-based actions that can flow from a Maintenance Optimization Analysis: condition-based maintenance, time-based maintenance and time-based discard. The present effort deals with time-based discard decisions. Maintenance data from the Remote On-Power Fuel Changing System were used. (author)
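
    The classic trade-off behind a time-based discard decision can be written down directly: choose the discard age that minimizes the long-run cost rate given preventive and failure-replacement costs and a lifetime distribution. The Weibull parameters and costs below are illustrative assumptions, not Bruce station data.

```python
import math

# Illustrative parameters (not plant data): Weibull lifetime and replacement costs.
BETA, ETA = 2.5, 1000.0        # shape > 1 means wear-out; characteristic life in hours
C_PREVENTIVE, C_FAILURE = 1.0, 10.0

def reliability(t):
    return math.exp(-((t / ETA) ** BETA))

def cost_rate(T, steps=2000):
    """Long-run cost per unit time when discarding at age T (or at failure, if earlier)."""
    # Expected cycle length = integral of R(t) from 0 to T (trapezoidal rule).
    dt = T / steps
    expected_cycle = sum((reliability(i * dt) + reliability((i + 1) * dt)) * dt / 2
                         for i in range(steps))
    expected_cost = C_PREVENTIVE * reliability(T) + C_FAILURE * (1.0 - reliability(T))
    return expected_cost / expected_cycle

if __name__ == "__main__":
    candidates = range(100, 2001, 25)
    best_T = min(candidates, key=cost_rate)
    print("optimal discard age ~= %d h, cost rate %.5f per hour" % (best_T, cost_rate(best_T)))
```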

  20. Design and optimization of a ground water monitoring system using GIS and multicriteria decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, D.; Gupta, A.D.; Ramnarong, V.

    1998-12-31

    A GIS-based methodology has been developed to design a ground water monitoring system and implemented for a selected area in Mae-Klong River Basin, Thailand. A multicriteria decision-making analysis has been performed to optimize the network system based on major criteria which govern the monitoring network design such as minimization of cost of construction, reduction of kriging standard deviations, etc. The methodology developed in this study is a new approach to designing monitoring networks which can be used for any site considering site-specific aspects. It makes it possible to choose the best monitoring network from various alternatives based on the prioritization of decision factors.

  1. Optimal Financing Order Decisions of a Supply Chain under the Retailer's Delayed Payment

    Directory of Open Access Journals (Sweden)

    Honglin Yang

    2014-01-01

    Full Text Available In a real supply chain, a capital-constrained retailer has two typical payment choices: up-front payment to receive a discounted price, or delayed payment to reduce capital pressure. We compare the efficiency of the optimal decisions of the different participants, that is, the supplier, retailer, and bank, under both types of payment based on a game equilibrium analysis. The analysis shows that, under equilibrium, delayed payment leads to a greater optimal order quantity from the retailer than up-front payment and thus improves the overall benefit of the supply chain. A numerical simulation with random demand following a uniform distribution further verifies our findings. This study provides novel evidence that a dominant supplier who actively offers trade credit helps enhance the overall efficiency of the supply chain.
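
    A toy numerical illustration of the qualitative finding (invented numbers, not the paper's game-theoretic equilibrium): with up-front payment the capital-constrained retailer's order is capped by its working capital, whereas delayed payment removes the immediate cash outlay and allows the unconstrained newsvendor quantity.

```python
def newsvendor_quantity_uniform(price, unit_cost, demand_low, demand_high):
    """Unconstrained optimal order quantity for uniform demand and zero salvage value."""
    critical_ratio = (price - unit_cost) / price
    return demand_low + critical_ratio * (demand_high - demand_low)

if __name__ == "__main__":
    price, d_low, d_high = 10.0, 50.0, 150.0
    wholesale_discounted = 5.5      # up-front payment earns a price discount
    wholesale_regular = 6.0         # delayed payment at the regular price
    capital = 400.0                 # retailer's own working capital

    # Up-front payment: the order is capped by the retailer's available capital.
    q_unconstrained = newsvendor_quantity_uniform(price, wholesale_discounted, d_low, d_high)
    q_upfront = min(q_unconstrained, capital / wholesale_discounted)

    # Delayed payment: no immediate cash outlay, so the retailer orders the newsvendor quantity.
    q_delayed = newsvendor_quantity_uniform(price, wholesale_regular, d_low, d_high)

    print("up-front order %.1f units vs delayed-payment order %.1f units" % (q_upfront, q_delayed))
```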

  2. Improving medical diagnosis reliability using Boosted C5.0 decision tree empowered by Particle Swarm Optimization.

    Science.gov (United States)

    Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin

    2015-08-01

    Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization (PSO) combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it is implemented on one microarray dataset and five different medical data sets obtained from the UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a Radial Basis Function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes and Weighted K-Nearest neighbor). A repeated five-fold cross-validation method was used to assess the performance of the classifiers. Experimental results show that our proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
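
    The general wrapper idea, PSO searching over feature subsets with a decision tree scored by cross-validation, can be sketched as below. This uses a plain scikit-learn decision tree and a hand-rolled binary PSO purely for illustration; C4.5/C5.0 and Boosted C5.0 are not available in scikit-learn, and none of the parameter values come from the study.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(bits):
    """Cross-validated accuracy of a decision tree restricted to the selected features."""
    mask = bits.astype(bool)
    if not mask.any():
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, mask], y, cv=5).mean()

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def binary_pso(n_particles=8, iters=10, w=0.7, c1=1.5, c2=1.5):
    velocity = rng.normal(0.0, 1.0, (n_particles, n_features))
    position = (rng.random((n_particles, n_features)) < 0.5).astype(float)
    best_pos = position.copy()
    best_score = np.array([fitness(p) for p in position])
    g_best = best_pos[best_score.argmax()].copy()
    g_score = best_score.max()
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(n_features), rng.random(n_features)
            velocity[i] = (w * velocity[i]
                           + c1 * r1 * (best_pos[i] - position[i])
                           + c2 * r2 * (g_best - position[i]))
            # Probabilistic bit flip driven by the sigmoid of the velocity.
            position[i] = (rng.random(n_features) < sigmoid(velocity[i])).astype(float)
            score = fitness(position[i])
            if score > best_score[i]:
                best_score[i], best_pos[i] = score, position[i].copy()
                if score > g_score:
                    g_score, g_best = score, position[i].copy()
    return g_best.astype(bool), g_score

if __name__ == "__main__":
    mask, score = binary_pso()
    print("selected %d of %d features, CV accuracy %.3f" % (mask.sum(), n_features, score))
```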

  3. The Effects Of Decision Framing Influences On Decision Performance

    OpenAIRE

    David Shelby Harrison; Sanela Porca

    2011-01-01

    This study investigates the effects of two components of decision framing [commitment and verbalization] in decision optimization, and how information quality impacts framing effects on decision performance. The theory of cognitive dissonance predicts that commitment to a decision will foster insensitivity to alternative choices. We find that such bias can be beneficial in certain decision strategies, and more powerfully influential as information quality worsens. We used an interactive compu...

  4. Optimization of Aeroengine Shop Visit Decisions Based on Remaining Useful Life and Stochastic Repair Time

    Directory of Open Access Journals (Sweden)

    Jing Cai

    2016-01-01

    Full Text Available Considering the wide application of condition-based maintenance in aeroengine maintenance practice, it becomes possible for aeroengines to carry out their preventive maintenance in just-in-time (JIT manner by reasonably planning their shop visits (SVs. In this study, an approach is proposed to make aeroengine SV decisions following the concept of JIT. Firstly, a state space model (SSM for aeroengine based on exhaust gas temperature margin is developed to predict the remaining useful life (RUL of aeroengine. Secondly, the effect of SV decisions on risk and service level (SL is analyzed, and an optimization of the aeroengine SV decisions based on RUL and stochastic repair time is performed to carry out JIT manner with the requirement of safety and SL. Finally, a case study considering two CFM-56 aeroengines is presented to demonstrate the proposed approach. The results show that predictive accuracy of RUL with SSM is higher than with linear regression, and the process of SV decisions is simple and feasible for airlines to improve the inventory management level of their aeroengines.

  5. Modelling decision-making by pilots

    Science.gov (United States)

    Patrick, Nicholas J. M.

    1993-01-01

    Our scientific goal is to understand the process of human decision-making. Specifically, we seek a model of human decision-making in piloting modern commercial aircraft that prescribes optimal behavior and against which we can measure human sub-optimality. This model should help us understand such diverse aspects of piloting as strategic decision-making and the implicit decisions involved in attention allocation. Our engineering goal is to provide design specifications for (1) better computer-based decision-aids, and (2) better training programs for the human pilot (or human decision-maker, DM).

  6. Optimal cost-effective designs of Phase II proof of concept trials and associated go-no go decisions.

    Science.gov (United States)

    Chen, Cong; Beckman, Robert A

    2009-01-01

    This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.

  7. Determination of optimal pollution levels through multiple-criteria decision making: an application to the Spanish electricity sector

    International Nuclear Information System (INIS)

    Linares, P.

    1999-01-01

    An efficient pollution management requires the harmonisation of often conflicting economic and environmental aspects. A compromise has to be found, in which social welfare is maximised. The determination of this social optimum has been attempted with different tools, of which the most correct according to neo-classical economics may be the one based on the economic valuation of the externalities of pollution. However, this approach is still controversial, and few decision makers trust the results obtained enough to apply them. But a very powerful alternative exists, which avoids the problem of monetizing physical impacts. Multiple-criteria decision making provides methodologies for dealing with impacts in different units, and for incorporating the preferences of decision makers or society as a whole, thus allowing for the determination of social optima under heterogeneous criteria, which is usually the case of pollution management decisions. In this paper, a compromise programming model is presented for the determination of the optimal pollution levels for the electricity industry in Spain for carbon dioxide, sulphur dioxide, nitrous oxides, and radioactive waste. The preferences of several sectors of society are incorporated explicitly into the model, so that the solution obtained represents the optimal pollution level from a social point of view. Results show that cost minimisation is still the main objective for society, but the simultaneous consideration of the rest of the criteria achieves large pollution reductions at a low cost increment. (Author)
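
    Compromise programming, the method presented in the abstract above, ranks alternatives by their weighted Lp-distance to the ideal point. The sketch below uses invented cost and emission figures, not the Spanish electricity data.

```python
def compromise_distance(alternative, ideal, anti_ideal, weights, p=2):
    """Weighted Lp distance to the ideal point, with each criterion scaled by its range."""
    total = 0.0
    for x, best, worst, w in zip(alternative, ideal, anti_ideal, weights):
        total += (w * abs(best - x) / abs(best - worst)) ** p
    return total ** (1.0 / p)

if __name__ == "__main__":
    # Criteria: [cost, CO2, SO2], all to be minimized; values are illustrative only.
    alternatives = {
        "mix A": [100.0, 50.0, 8.0],
        "mix B": [120.0, 30.0, 5.0],
        "mix C": [90.0, 70.0, 12.0],
    }
    ideal = [min(v[i] for v in alternatives.values()) for i in range(3)]
    anti = [max(v[i] for v in alternatives.values()) for i in range(3)]
    weights = [0.5, 0.3, 0.2]
    ranking = sorted(alternatives,
                     key=lambda k: compromise_distance(alternatives[k], ideal, anti, weights))
    print("preferred order:", ranking)
```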

  8. Package of procedures for the decision of optimization tasks by the method of branches and borders

    OpenAIRE

    Nestor, Natalia

    2012-01-01

    The practical aspects of implementing the branch-and-bound method are examined. The structure of a package of procedures for performing the basic operations in solving optimization problems is described. The package is designed as a program kernel that can be used for various exhaustive-search (backtracking) problems.
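
    A compact illustration of the branch-and-bound idea on a 0/1 knapsack problem is given below; it is a generic sketch, not the package's actual interface.

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Branch and bound for the 0/1 knapsack problem with a fractional upper bound."""
    n = len(values)
    # Sort items by value density so the fractional (relaxed) bound is easy to compute.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(level, value, weight):
        """Optimistic bound: fill the remaining capacity fractionally."""
        b, room = value, capacity - weight
        for i in order[level:]:
            take = min(weights[i], room)
            b += values[i] * take / weights[i]
            room -= take
            if room == 0:
                break
        return b

    best = 0
    stack = [(0, 0, 0)]                               # (level, value, weight)
    while stack:
        level, value, weight = stack.pop()
        if level == n:
            best = max(best, value)
            continue
        if bound(level, value, weight) <= best:
            continue                                  # prune: cannot beat the incumbent
        i = order[level]
        if weight + weights[i] <= capacity:           # branch 1: take item i
            best = max(best, value + values[i])
            stack.append((level + 1, value + values[i], weight + weights[i]))
        stack.append((level + 1, value, weight))      # branch 2: skip item i
    return best

if __name__ == "__main__":
    print(knapsack_branch_and_bound(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))
    # Expected: 220 (take the items weighing 20 and 30)
```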

  9. A web-based feedback study on optimization-based training and analysis of human decision making

    Directory of Open Access Journals (Sweden)

    Michael Engelhart

    2017-05-01

    How humans can learn efficiently to make decisions in a complex, dynamic, and uncertain environment is still a very open question. We investigate what effects arise when feedback is given in a computer-simulated microworld that is controlled by participants. This has a direct impact on training simulators that are already in standard use in many professions, e.g., flight simulators for pilots, and a potential impact on a better understanding of human decision making in general. Our study is based on a benchmark microworld with an economic framing, the IWR Tailorshop. N=94 participants played four rounds of the microworld, each covering 10 months, via a web interface. We propose a new approach to quantify performance and learning, which is based on a mathematical model of the microworld and optimization. Six participant groups received different kinds of feedback in a training phase; results in a subsequent performance phase without feedback were then analyzed. As a main result, feedback of optimal solutions in training rounds improved model knowledge, early learning, and performance, especially when this information was encoded in a graphical representation (arrows).

  10. Decision analysis to define the optimal management of athletes with anomalous aortic origin of a coronary artery.

    Science.gov (United States)

    Mery, Carlos M; Lopez, Keila N; Molossi, Silvana; Sexson-Tejtel, S Kristen; Krishnamurthy, Rajesh; McKenzie, E Dean; Fraser, Charles D; Cantor, Scott B

    2016-11-01

    The goal of this study was to use decision analysis to evaluate the impact of varying uncertainties on the outcomes of patients with anomalous aortic origin of a coronary artery. Two separate decision analysis models were created: one for anomalous left coronary artery (ALCA) and one for anomalous right coronary artery (ARCA). Three strategies were compared: observation, exercise restriction, and surgery. Probabilities and health utilities were estimated on the basis of existing literature. Deterministic and probabilistic sensitivity analyses were performed. Surgery was the optimal management strategy for patients [...]. Management of anomalous aortic origin of a coronary artery depends on multiple factors, including individual patient characteristics. Decision analysis provides a tool to understand how these characteristics affect the outcomes with each management strategy and thus may aid in the decision-making process for a particular patient. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  11. Comparison of heuristic optimization techniques for the enrichment and gadolinia distribution in BWR fuel lattices and decision analysis

    International Nuclear Information System (INIS)

    Castillo, Alejandro; Martín-del-Campo, Cecilia; Montes-Tadeo, José-Luis; François, Juan-Luis; Ortiz-Servin, Juan-José; Perusquía-del-Cueto, Raúl

    2014-01-01

    Highlights: • Different metaheuristic optimization techniques were compared. • The optimal enrichment and gadolinia distribution in a BWR fuel lattice was studied. • A decision making tool based on the Position Vector of Minimum Regret was applied. • Similar results were found for the different optimization techniques. - Abstract: In the present study a comparison of the performance of five heuristic techniques for optimization of combinatorial problems is shown. The techniques are: Ant Colony System, Artificial Neural Networks, Genetic Algorithms, Greedy Search and a hybrid of Path Relinking and Scatter Search. They were applied to obtain an “optimal” enrichment and gadolinia distribution in a fuel lattice of a boiling water reactor. All techniques used the same objective function for qualifying the different distributions created during the optimization process as well as the same initial conditions and restrictions. The parameters included in the objective function are the k-infinite multiplication factor, the maximum local power peaking factor, the average enrichment and the average gadolinia concentration of the lattice. The CASMO-4 code was used to obtain the neutronic parameters. The criteria for qualifying the optimization techniques include also the evaluation of the best lattice with burnup and the number of evaluations of the objective function needed to obtain the best solution. In conclusion all techniques obtain similar results, but there are methods that found better solutions faster than others. A decision analysis tool based on the Position Vector of Minimum Regret was applied to aggregate the criteria in order to rank the solutions according to three functions: neutronic grade at 0 burnup, neutronic grade with burnup and global cost which aggregates the computing time in the decision. According to the results Greedy Search found the best lattice in terms of the neutronic grade at 0 burnup and also with burnup. However, Greedy Search is

  12. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
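
    To make the MDP construction concrete, here is a minimal value-iteration sketch for a toy sequential treatment problem; it is not the liver-transplantation model from the article, and all states, transition probabilities, and rewards are illustrative assumptions.

    ```python
    # Minimal value iteration for a toy MDP: states are health levels, actions are
    # "wait" or "treat", rewards are quality-adjusted life measures per period.
    import numpy as np

    states, gamma = 3, 0.95
    # P[a, s, s'] transition probabilities; R[a, s] expected immediate reward (assumed)
    P = np.array([[[0.8, 0.2, 0.0],     # action 0 = wait
                   [0.1, 0.7, 0.2],
                   [0.0, 0.0, 1.0]],
                  [[0.9, 0.1, 0.0],     # action 1 = treat
                   [0.4, 0.5, 0.1],
                   [0.0, 0.0, 1.0]]])   # state 2 is absorbing (death)
    R = np.array([[1.0, 0.6, 0.0],
                  [0.9, 0.7, 0.0]])

    V = np.zeros(states)
    for _ in range(1000):
        Q = R + gamma * P @ V           # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    policy = Q.argmax(axis=0)           # optimal action per state
    print("V =", V.round(3), "policy =", policy)
    ```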

  13. A Modified Bird-Mating Optimization with Hill-Climbing for Connection Decisions of Transformers

    Directory of Open Access Journals (Sweden)

    Ting-Chia Ou

    2016-08-01

    This paper applies a hybrid bird-mating optimization approach to connection decisions of distribution transformers. It is expected that, with the aid of the hybrid bird-mating approach, voltage imbalance and deviation can be mitigated, ensuring a satisfactory power supply more effectively. To evaluate the effectiveness of this method, it has been tested on practical distribution systems with comparisons to other methods. Test results help confirm the feasibility of the approach, serving as beneficial references for the improvement of electric power grid operations.

  14. Safety Lead Optimization and Candidate Identification: Integrating New Technologies into Decision-Making.

    Science.gov (United States)

    Dambach, Donna M; Misner, Dinah; Brock, Mathew; Fullerton, Aaron; Proctor, William; Maher, Jonathan; Lee, Dong; Ford, Kevin; Diaz, Dolores

    2016-04-18

    Discovery toxicology focuses on the identification of the most promising drug candidates through the development and implementation of lead optimization strategies and hypothesis-driven investigation of issues that enable rational and informed decision-making. The major goals are to [a] identify and progress the drug candidate with the best overall drug safety profile for a therapeutic area, [b] remove the most toxic drugs from the portfolio prior to entry into humans to reduce clinical attrition due to toxicity, and [c] establish a well-characterized hazard and translational risk profile to enable clinical trial designs. This is accomplished through a framework that balances the multiple considerations to identify a drug candidate with the overall best drug characteristics and provides a cogent understanding of mechanisms of toxicity. The framework components include establishing a target candidate profile for each program that defines the qualities of a successful candidate based on the intended therapeutic area, including the risk tolerance for liabilities; evaluating potential liabilities that may result from engaging the therapeutic target (pharmacology-mediated or on-target) and that are chemical structure-mediated (off-target); and characterizing identified liabilities. Lead optimization and investigation relies upon the integrated use of a variety of technologies and models (in silico, in vitro, and in vivo) that have achieved a sufficient level of qualification or validation to provide confidence in their use. We describe the strategic applications of various nonclinical models (established and new) for a holistic and integrated risk assessment that is used for rational decision-making. While this review focuses on strategies for small molecules, the overall concepts, approaches, and technologies are generally applicable to biotherapeutics.

  15. Making the optimal decision in selecting protective clothing

    International Nuclear Information System (INIS)

    Price, J. Mark

    2007-01-01

    Protective Clothing plays a major role in the decommissioning and operation of nuclear facilities. Literally thousands of employee dress-outs occur over the life of a decommissioning project and during outages at operational plants. In order to make the optimal decision on which type of protective clothing is best suited for the decommissioning or maintenance and repair work on radioactive systems, a number of interrelating factors must be considered, including - Protection; - Personnel Contamination; - Cost; - Radwaste; - Comfort; - Convenience; - Logistics/Rad Material Considerations; - Reject Rate of Laundered Clothing; - Durability; - Security; - Personnel Safety including Heat Stress; - Disposition of Gloves and Booties. In addition, over the last several years there has been a trend of nuclear power plants either running trials or switching to Single Use Protective Clothing (SUPC) from traditional protective clothing. In some cases, after trial usage of SUPC, plants have chosen not to switch. In other cases after switching to SUPC for a period of time, some plants have chosen to switch back to laundering. Based on these observations, this paper reviews the 'real' drivers, issues, and interrelating factors regarding the selection and use of protective clothing throughout the nuclear industry. (authors)

  16. Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making.

    Directory of Open Access Journals (Sweden)

    Ritwik K Niyogi

    experimentally fitted value. Our work provides insights into the simultaneous and rapid modulation of excitatory and inhibitory neuronal gains, which enables flexible, robust, and optimal decision-making.

  17. Optimal Decisions in a Single-Period Supply Chain with Price-Sensitive Random Demand under a Buy-Back Contract

    Directory of Open Access Journals (Sweden)

    Feng Wang

    2014-01-01

    This paper studies a single-period supply chain with a buy-back contract under a Stackelberg game model, in which the supplier (leader) decides on the wholesale price, and the retailer (follower) responds by determining the retail price and the order quantity. We analytically investigate the decentralized retailer’s optimal decision. Our results demonstrate that the retailer has a unique optimal simultaneous decision on the retail price and the order quantity, under a mild restriction on the demand distribution. Moreover, as it can be shown that the decentralized supply chain facing price-sensitive random demand cannot be coordinated with a buy-back contract, we propose a scheme for the system to achieve Pareto improvement. Theoretical analysis suggests that there exists a unique Pareto equilibrium for the supply chain. In particular, when the Pareto equilibrium is reached, the supply chain is coordinated. Numerical experiments confirm our results.
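
    The retailer's problem can be sketched numerically as follows. This is not the paper's analytical solution; it is a hedged Monte Carlo illustration in which the wholesale price, buy-back price, demand model, and parameter values are all assumptions chosen for the example.

    ```python
    # Given supplier-set wholesale price w and buy-back price b, the retailer picks
    # retail price p and order quantity q to maximize expected profit under
    # price-sensitive random demand (additive noise, illustrative numbers).
    import numpy as np

    rng = np.random.default_rng(0)
    w, b = 5.0, 2.0                         # wholesale and buy-back prices
    eps = rng.uniform(0, 40, 5000)          # additive demand noise samples

    def expected_profit(p, q):
        d = np.maximum(100 - 8 * p + eps, 0)    # price-sensitive random demand
        sales = np.minimum(d, q)
        returns = q - sales                      # unsold units bought back at b
        return np.mean(p * sales + b * returns - w * q)

    grid_p = np.arange(5.5, 12.01, 0.25)
    grid_q = np.arange(5.0, 125.0, 5.0)
    best = max(((p, q) for p in grid_p for q in grid_q),
               key=lambda pq: expected_profit(*pq))
    print("retail price %.2f, order quantity %.1f, expected profit %.2f"
          % (best[0], best[1], expected_profit(*best)))
    ```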

  18. Optimal management of adults with pharyngitis – a multi-criteria decision analysis

    Directory of Open Access Journals (Sweden)

    Dolan James G

    2006-03-01

    Background: Current practice guidelines offer different management recommendations for adults presenting with a sore throat. The key issue is the extent to which the clinical likelihood of a Group A streptococcal infection should affect patient management decisions. To help resolve this issue, we conducted a multi-criteria decision analysis using the Analytic Hierarchy Process. Methods: We defined optimal patient management using four criteria: (1) reduce symptom duration; (2) prevent infectious complications, local and systemic; (3) minimize antibiotic side effects, minor and anaphylaxis; and (4) achieve prudent use of antibiotics, avoiding both over-use and under-use. In our baseline analysis we assumed that all criteria and sub-criteria were equally important, except minimizing anaphylactic side effects, which was judged very strongly more important than minimizing minor side effects. Management strategies included: (a) no test, no treatment; (b) perform a rapid strep test and treat if positive; (c) perform a throat culture and treat if positive; (d) perform a rapid strep test and treat if positive, and if negative obtain a throat culture and treat if positive; and (e) treat without further tests. We defined four scenarios based on the likelihood of group A streptococcal infection using the Centor score, a well-validated clinical index. Published data were used to estimate the likelihoods of clinical outcomes and the test operating characteristics of the rapid strep test and throat culture for identifying group A streptococcal infections. Results: Using the baseline assumptions, no testing and no treatment is preferred for patients with Centor scores of 1; two strategies – culture and treat if positive, and rapid strep with culture of negative results – are equally preferable for patients with Centor scores of 2; and rapid strep with culture of negative results is the best management strategy for patients with Centor scores of 3 or 4. These results are

  19. Microseismic Monitoring Design Optimization Based on Multiple Criteria Decision Analysis

    Science.gov (United States)

    Kovaleva, Y.; Tamimi, N.; Ostadhassan, M.

    2017-12-01

    Borehole microseismic monitoring of hydraulic fracture treatments of unconventional reservoirs is a widely used method in the oil and gas industry. Sometimes, the quality of the acquired microseismic data is poor. One of the reasons for poor data quality is poor survey design. We attempt to provide a comprehensive and thorough workflow, using multiple criteria decision analysis (MCDA), to optimize the planning of microseismic monitoring. So far, microseismic monitoring has been used extensively as a powerful tool for determining fracture parameters that affect the influx of formation fluids into the wellbore. The factors that affect the quality of microseismic data and their final results include the average distance between microseismic events and receivers, complexity of the recorded wavefield, signal-to-noise ratio, data aperture, etc. These criteria often conflict with each other. In a typical microseismic monitoring, those factors should be considered to choose the best monitoring well(s), the optimum number of required geophones, and their depth. We use MCDA to address these design challenges and develop a method that offers an optimized design out of all possible combinations to produce the best data acquisition results. We believe that this will be the first research to include the above-mentioned factors in a 3D model. Such a tool would assist companies and practicing engineers in choosing the best design parameters for future microseismic projects.

  20. Visualising Pareto-optimal trade-offs helps move beyond monetary-only criteria for water management decisions

    Science.gov (United States)

    Hurford, Anthony; Harou, Julien

    2014-05-01

    Water-related ecosystem services are important to the livelihoods of the poorest sectors of society in developing countries. Degradation or loss of these services can increase the vulnerability of people, decreasing their capacity to support themselves. New approaches are needed to help guide water resources management decisions which account for the non-market value of ecosystem goods and services. In case studies from Brazil and Kenya we demonstrate the capability of many-objective Pareto-optimal trade-off analysis to help decision makers balance economic and non-market benefits from the management of existing multi-reservoir systems. A multi-criteria search algorithm is coupled to a water resources management simulator of each basin to generate a set of Pareto-approximate trade-offs representing the best-case management decisions. In both cases, volume-dependent reservoir release rules are the management decisions being optimised. In the Kenyan case we further assess the impacts of proposed irrigation investments, and how the possibility of new investments impacts the system's trade-offs. During the multi-criteria search (optimisation), the performance of different sets of management decisions (policies) is assessed against case-specific objective functions representing provision of water supply and irrigation, hydropower generation and maintenance of ecosystem services. Results are visualised as trade-off surfaces to help decision makers understand the impacts of different policies on a broad range of stakeholders and to assist in decision-making. These case studies show how the approach can reveal unexpected opportunities for win-win solutions, and quantify the trade-offs between investing to increase agricultural revenue and negative impacts on protected ecosystems which support rural livelihoods.
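
    A core step in this kind of trade-off analysis is filtering candidate policies down to the non-dominated (Pareto-approximate) set. The hedged sketch below uses random scores as a stand-in for simulator output; in the study the candidates would come from the coupled search algorithm and basin simulator, and the objective names here are only illustrative.

    ```python
    # Identify the non-dominated set among candidate reservoir release policies
    # scored on several objectives (all expressed so that larger is better).
    import numpy as np

    rng = np.random.default_rng(1)
    # columns: supply reliability, hydropower, ecosystem flow index, negated cost
    scores = rng.random((200, 4))

    def pareto_front(points):
        """Return indices of non-dominated points (maximization in every column)."""
        keep = []
        for i, p in enumerate(points):
            dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_front(scores)
    print(len(front), "non-dominated policies out of", len(scores))
    ```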

  1. Decision making based on data analysis and optimization algorithm applied for cogeneration systems integration into a grid

    Science.gov (United States)

    Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan

    2018-05-01

    Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) and lower emissions compared to conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced in two parts: the first is a strategy based on a multi-objective optimization tool with data analysis, and the second is based on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to prove the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique shows compatibility with the thermal demand for district heating.

  2. Optimal Decision-Making in Fuzzy Economic Order Quantity (EOQ) Model under Restricted Space: A Non-Linear Programming Approach

    Directory of Open Access Journals (Sweden)

    M. Pattnaik

    2013-08-01

    In this paper the concept of fuzzy non-linear programming is applied to solve an economic order quantity (EOQ) model under restricted space. Since various types of uncertainty and imprecision are inherent in real inventory problems, they are classically modeled using approaches from probability theory. However, there are uncertainties that cannot be appropriately treated by the usual probabilistic models. The questions of how to define inventory optimization tasks in such an environment, and how to interpret the optimal solutions, then arise. This paper modifies the single-item EOQ model in the presence of a fuzzy decision-making process where demand is related to the unit price and the setup cost varies with the quantity produced/purchased. It considers the modification of the objective function and the storage area in the presence of imprecisely estimated parameters. The model is developed by employing different modeling approaches over an infinite planning horizon. It incorporates the concepts of a fuzzy arithmetic approach to the quantity ordered and the demand per unit, and compares both fuzzy non-linear and other models. Investigation of the properties of an optimal solution allows an algorithm to be developed, whose validity is illustrated through an example problem; two- and three-dimensional diagrams produced with MATLAB (R2009a) illustrate the application. Sensitivity analysis of the optimal solution is also studied with respect to changes in different parameter values in order to draw managerial insights from the decision problem.
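
    For orientation, the crisp (non-fuzzy) counterpart of this model is the classical EOQ with a storage-space restriction; the paper's fuzzy non-linear programming treatment of imprecise parameters is not reproduced here, and all numbers below are illustrative assumptions.

    ```python
    # Crisp single-item EOQ with a storage-space restriction (hedged sketch).
    from math import sqrt

    D, K, h = 1200.0, 50.0, 4.0   # annual demand, setup cost per order, holding cost/unit/yr
    f, F = 2.5, 300.0             # floor space per unit and total available space

    q_unconstrained = sqrt(2 * D * K / h)      # classical EOQ
    q_opt = min(q_unconstrained, F / f)        # clip to the space restriction if it binds

    def total_cost(q):
        return D / q * K + h * q / 2.0         # ordering cost + holding cost

    print("EOQ: %.1f units, constrained optimum: %.1f units, cost: %.2f"
          % (q_unconstrained, q_opt, total_cost(q_opt)))
    ```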

  3. Review of experimental studies in social psychology of small groups when an optimal choice exists and application to operating room management decision-making.

    Science.gov (United States)

    Prahl, Andrew; Dexter, Franklin; Braun, Michael T; Van Swol, Lyn

    2013-11-01

    Because operating room (OR) management decisions with optimal choices are made with ubiquitous biases, decisions are improved with decision-support systems. We reviewed experimental social-psychology studies to explore what an OR leader can do when working with stakeholders lacking interest in learning the OR management science but expressing opinions about decisions, nonetheless. We considered shared information to include the rules-of-thumb (heuristics) that make intuitive sense and often seem "close enough" (e.g., staffing is planned based on the average workload). We considered unshared information to include the relevant mathematics (e.g., staffing calculations). Multiple studies have shown that group discussions focus more on shared than unshared information. Quality decisions are more likely when all group participants share knowledge (e.g., have taken a course in OR management science). Several biases in OR management are caused by humans' limited abilities to estimate tails of probability distributions in their heads. Groups are more susceptible to analogous biases than are educated individuals. Since optimal solutions are not demonstrable without groups sharing common language, only with education of most group members can a knowledgeable individual influence the group. The appropriate model of decision-making is autocratic, with information obtained from stakeholders. Although such decisions are good quality, the leaders often are disliked and the decisions considered unjust. In conclusion, leaders will find the most success if they do not bring OR management operational decisions to groups, but instead act autocratically while obtaining necessary information in 1:1 conversations. The only known route for the leader making such decisions to be considered likable and for the decisions to be considered fair is through colleagues and subordinates learning the management science.

  4. Life Cycle Assessment and Optimization-Based Decision Analysis of Construction Waste Recycling for a LEED-Certified University Building

    Directory of Open Access Journals (Sweden)

    Murat Kucukvar

    2016-01-01

    The current waste management literature lacks a comprehensive LCA of the recycling of construction materials that considers both process and supply chain-related impacts as a whole. Furthermore, an optimization-based decision support framework that provides a quantifiable understanding of the potential savings and implications associated with recycling of construction materials from a life cycle perspective has not yet been addressed. The aim of this research is to present a multi-criteria optimization model developed to propose economically sound and environmentally benign construction waste management strategies for a LEED-certified university building. First, an economic input-output-based hybrid life cycle assessment model is built to quantify the total environmental impacts of various waste management options: recycling, conventional landfilling and incineration. After quantifying the net environmental pressures associated with these waste treatment alternatives, a compromise programming model is utilized to determine the optimal recycling strategy considering environmental and economic impacts simultaneously. The analysis results show that recycling of ferrous and non-ferrous metals contributed significantly to reductions in the total carbon footprint of waste management. On the other hand, recycling of asphalt and concrete increased the overall carbon footprint due to high fuel consumption and emissions during the crushing process. Based on the multi-criteria optimization results, 100% recycling of ferrous and non-ferrous metals, cardboard, plastic and glass is suggested to maximize the environmental and economic savings simultaneously. We believe that the results of this research will facilitate better decision making in treating construction and debris waste for LEED-certified green buildings by combining the results of environmental LCA with multi-objective optimization modeling.

  5. Using Decision-Analytic Modeling to Isolate Interventions That Are Feasible, Efficient and Optimal: An Application from the Norwegian Cervical Cancer Screening Program.

    Science.gov (United States)

    Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø

    2015-12-01

    Decision makers often need to consider multiple criteria or outcomes simultaneously when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness ratios and harm-benefit ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and the uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Although a strategy may be efficient and feasible, both society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness before it can be considered optimal. Copyright © 2015. Published by Elsevier Inc.
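
    The efficiency-frontier screening step can be sketched generically: sort strategies by effectiveness, drop strongly and extendedly dominated options, and report incremental cost-effectiveness ratios along the frontier. The data below are invented for illustration and are not the Norwegian figures.

    ```python
    # Cost-effectiveness frontier sketch: (name, cost per 1000 women, CIN2+ detected per 1000)
    strategies = [
        ("A", 40000, 10.0), ("B", 52000, 12.5), ("C", 61000, 12.9),
        ("D", 75000, 14.0), ("E", 90000, 13.5),
    ]

    def frontier(items):
        items = sorted(items, key=lambda s: (s[2], s[1]))        # by effect, then cost
        front = []
        for s in items:
            while front and s[1] <= front[-1][1]:                # strong dominance
                front.pop()
            while len(front) >= 2:
                icer_last = (front[-1][1] - front[-2][1]) / (front[-1][2] - front[-2][2])
                icer_new = (s[1] - front[-1][1]) / (s[2] - front[-1][2])
                if icer_new < icer_last:                         # extended dominance
                    front.pop()
                else:
                    break
            front.append(s)
        return front

    front = frontier(strategies)
    for prev, cur in zip(front, front[1:]):
        icer = (cur[1] - prev[1]) / (cur[2] - prev[2])
        print("%s -> %s: %.0f per extra CIN2+ detected" % (prev[0], cur[0], icer))
    ```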

  6. Optimization as investment decision support in a Swedish medium-sized iron foundry - A move beyond traditional energy auditing

    International Nuclear Information System (INIS)

    Thollander, Patrik; Mardan, Nawzad; Karlsson, Magnus

    2009-01-01

    Due to increased globalisation, industries are facing greater competition that is pressing companies into decreasing their expenses in order to increase their profits. Swedish industry has been faced with substantial increases in energy prices in recent years. Barriers to energy efficiency, such as imperfect information, inhibit investments in energy efficiency measures; energy audits are one means of reducing barriers and overcoming imperfect information. However, an evaluation of such energy audits in Sweden reveals that it is chiefly low-cost measures that are undertaken as a result of an audit. Moreover, these audits often tend to focus on support processes such as ventilation, lighting, air compressors etc., while measures impacting production processes are often not as extensively covered, which underlines the need for further support in addition to energy audits. Decision support is practised in a variety of different disciplines such as optimization and simulation, and the aim of this paper is to explore whether investment decision support practices may be used successfully with small and medium-sized manufacturers in Sweden when complex production-related investment decisions are taken. The optimization results from the different cases, involving a foundry's investment in a new melting unit, indicate that with no electricity price fluctuations over the day, the investment seems sound as it lowers the overall energy costs. However, with fluctuating electricity prices, there are no large differences in energy costs between the option of retaining the existing five melting furnaces at the foundry and investing in a twin furnace and removing the holding furnaces - which was the initial investment plan for the foundry in the study. It would not have been possible to achieve this outcome without the use of investment decision support such as MIND. One of the main conclusions in this paper is that investment decision support, when strategic

  7. The balance space approach to multicriteria decision making—involving the decision maker

    OpenAIRE

    Ehrgott, M.

    2002-01-01

    The balance space approach (introduced by Galperin in 1990) provides a new view on multicriteria optimization. Looking at deviations from global optimality of the different objectives, balance points and balance numbers are defined when either different or equal deviations for each objective are allowed. Apportioned balance numbers allow the specification of proportions among the deviations. Through this concept the decision maker can be involved in the decision process. In this paper we prov...

  8. Interactive Genetic Algorithm - An Adaptive and Interactive Decision Support Framework for Design of Optimal Groundwater Monitoring Plans

    Science.gov (United States)

    Babbar-Sebens, M.; Minsker, B. S.

    2006-12-01

    In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective?, and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks on a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives

  9. An Optimization Model For Strategy Decision Support to Select Kind of CPO’s Ship

    Science.gov (United States)

    Suaibah Nst, Siti; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    The selection of marine transport for the distribution of crude palm oil (CPO) is one strategy that can be considered for reducing transport cost. The cost of transporting CPO from a production area to the CPO factory located at the destination port may affect the level of CPO prices and the level of demand. In order to maintain the availability of CPO, a strategy is required to minimize the cost of transport. In this study, the strategy is to select the kind of chartered ship: a barge or a chemical tanker. This study aims to determine an optimization model for strategy decision support in selecting the kind of CPO ship by minimizing transport costs. Because ship selection is subject to randomness, a two-stage stochastic programming model was used to select the kind of ship. The model can help decision makers select either a barge or a chemical tanker to distribute CPO.
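
    A tiny two-stage stochastic program of this flavor can be sketched as follows: the first-stage decision is the ship type, and the second stage charges a recourse cost for CPO volume exceeding the chartered capacity under each demand scenario. Charter costs, capacities, scenario probabilities, and the spot rate are all illustrative assumptions, not the study's data.

    ```python
    # Two-stage stochastic programming sketch for the charter decision (illustrative data).
    ships = {"barge": {"charter": 60_000, "capacity": 3_000},
             "chemical_tanker": {"charter": 95_000, "capacity": 5_500}}
    scenarios = [(0.3, 2_500), (0.5, 4_000), (0.2, 6_000)]   # (probability, CPO tonnes)
    spot_rate = 40.0                                          # recourse cost per unmet tonne

    def expected_cost(ship):
        cap, charter = ships[ship]["capacity"], ships[ship]["charter"]
        recourse = sum(p * spot_rate * max(demand - cap, 0) for p, demand in scenarios)
        return charter + recourse

    for s in ships:
        print("%-16s expected cost %10.0f" % (s, expected_cost(s)))
    print("choose:", min(ships, key=expected_cost))
    ```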

  10. Management of redundancy in flight control systems using optimal decision theory

    Science.gov (United States)

    1981-01-01

    The problem of using redundancy that exists between dissimilar systems in aircraft flight control is addressed. That is, using the redundancy that exists between a rate gyro and an accelerometer--devices that have dissimilar outputs which are related only through the dynamics of the aircraft motion. Management of this type of redundancy requires advanced logic so that the system can monitor failure status and can reconfigure itself in the event of one or more failures. An optimal decision theory was tutorially developed for the management of sensor redundancy and the theory is applied to two aircraft examples. The first example is the space shuttle and the second is a highly maneuvering high performance aircraft--the F8-C. The examples illustrate the redundancy management design process and the performance of the algorithms presented in failure detection and control law reconfiguration.

  11. Optimization-based decision support to assist in logistics planning for hospital evacuations.

    Science.gov (United States)

    Glick, Roger; Bish, Douglas R; Agca, Esra

    2013-01-01

    The evacuation of a hospital is a very complex process, and evacuation planning is an important part of a hospital's emergency management plan. There are numerous factors that affect the evacuation plan, including the nature of the threat, the availability of resources and staff, the characteristics of the evacuee population, and the risk to patients and staff. The safety and health of patients is of fundamental importance, but safely moving patients to alternative care facilities while under threat is a very challenging task. This article describes the logistical issues and complexities involved in the planning and execution of hospital evacuations. Furthermore, this article provides examples of how optimization-based decision support tools can help evacuation planners to better plan for complex evacuations by providing real-world solutions to various evacuation scenarios.

  12. Application of Bayesian statistical decision theory for a maintenance optimization problem

    International Nuclear Information System (INIS)

    Procaccia, H.; Cordier, R.; Muller, S.

    1997-01-01

    Reliability-centered maintenance (RCM) is a rational approach that can be used to identify the equipment of facilities that may turn out to be critical with respect to safety, to availability, or to maintenance costs. It is for these critical pieces of equipment alone that a corrective (one waits for a failure) or preventive (the type and frequency are specified) maintenance policy is established. But this approach has limitations: - when there is little operating feedback and it concerns rare events affecting a piece of equipment judged critical on a priori grounds (how is it possible, in this case, to decide whether or not it is critical, since there is conflict between the gravity of the potential failure and its frequency?); - when the aim is to propose an optimal maintenance frequency for a critical piece of equipment - changing the maintenance frequency hitherto applied may cause a significant drift in the observed reliability of the equipment, an aspect not generally taken into account in the RCM approach. In these two situations, expert judgments can be combined with the available operating feedback (Bayesian approach) and the combination of risk of failure and economic consequences taken into account (statistical decision theory) to achieve a true optimization of maintenance policy choices. This paper presents an application to the maintenance of a diesel generator component
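
    The general idea of combining expert priors, sparse operating feedback, and an economic loss function can be sketched as follows. This is not the paper's diesel-generator model: the power-law wear-out intensity, the conjugate Gamma prior, and every numeric value are assumptions chosen for illustration.

    ```python
    # Bayesian maintenance-interval sketch: Gamma prior on the intensity scale lam of a
    # power-law failure process, conjugate update from sparse feedback, then choose the
    # preventive interval T minimizing the posterior-expected cost per year.
    import numpy as np

    rng = np.random.default_rng(0)
    beta = 2.0                                     # wear-out exponent (assumed known)
    a0, b0 = 2.0, 50.0                             # expert Gamma prior on lam
    n_failures, t_observed = 1, 6.0                # feedback from one unmaintained run
    a, b = a0 + n_failures, b0 + t_observed**beta  # conjugate posterior
    lam = rng.gamma(a, 1.0 / b, 10_000)            # posterior samples of lam

    c_prev, c_fail = 5_000.0, 80_000.0             # preventive renewal vs. unplanned repair

    def expected_cost_rate(T):
        # per cycle: one preventive renewal plus on average lam * T**beta minimal repairs
        return np.mean((c_prev + c_fail * lam * T**beta) / T)

    grid = np.linspace(0.25, 6.0, 120)
    T_opt = min(grid, key=expected_cost_rate)
    print("E[lam] = %.4f, optimal interval %.2f years, expected cost %.0f per year"
          % (lam.mean(), T_opt, expected_cost_rate(T_opt)))
    ```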

  13. Optimal insemination and replacement decisions to minimize the cost of pathogen-specific clinical mastitis in dairy cows.

    Science.gov (United States)

    Cha, E; Kristensen, A R; Hertl, J A; Schukken, Y H; Tauer, L W; Welcome, F L; Gröhn, Y T

    2014-01-01

    Mastitis is a serious production-limiting disease, with effects on milk yield, milk quality, and conception rate, and an increase in the risk of mortality and culling. The objective of this study was 2-fold: (1) to develop an economic optimization model that incorporates all the different types of pathogens that cause clinical mastitis (CM) categorized into 8 classes of culture results, and account for whether the CM was a first, second, or third case in the current lactation and whether the cow had a previous case or cases of CM in the preceding lactation; and (2) to develop this decision model to be versatile enough to add additional pathogens, diseases, or other cow characteristics as more information becomes available without significant alterations to the basic structure of the model. The model provides economically optimal decisions depending on the individual characteristics of the cow and the specific pathogen causing CM. The net returns for the basic herd scenario (with all CM included) were $507/cow per year, where the incidence of CM (cases per 100 cow-years) was 35.6, of which 91.8% of cases were recommended for treatment under an optimal replacement policy. The cost per case of CM was $216.11. The CM cases comprised (incidences, %) Staphylococcus spp. (1.6), Staphylococcus aureus (1.8), Streptococcus spp. (6.9), Escherichia coli (8.1), Klebsiella spp. (2.2), other treated cases (e.g., Pseudomonas; 1.1), other not treated cases (e.g., Trueperella pyogenes; 1.2), and negative culture cases (12.7). The average cost per case, even under optimal decisions, was greatest for Klebsiella spp. ($477), followed by E. coli ($361), other treated cases ($297), and other not treated cases ($280). This was followed by the gram-positive pathogens; among these, the greatest cost per case was due to Staph. aureus ($266), followed by Streptococcus spp. ($174) and Staphylococcus spp. ($135); negative culture had the lowest cost ($115). The model recommended treatment for

  14. Measurements of the radiolytic oxidation of aqueous CsI using a sparging apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Ashmore, C B; Brown, D; Sims, H E [AEA Technology, Harwell (United Kingdom); Gwyther, J R [NE plc Berkeley Technology Centre, Berkeley (United Kingdom)

    1996-12-01

    Radiolytic oxidation is considered to be the main mechanism for the formation of I2 from aqueous CsI in containment of a water cooled reactor after a LOCA. Despite the amount of study over the last 60 years on the radiation chemistry of iodine there has been no consistent set of experiments spanning a wide enough range of conditions to verify models with confidence. This paper describes results from a set of experiments carried out in order to remedy this deficiency. In this work the rate of evolution of I2 from sparged irradiated CsI solution labeled with 131I was measured on-line over a range of conditions. This work involved the measurement of the effects of pH, temperature, O2 concentration, I- concentration, phosphate concentration, dose-rate and impurities on the rate of evolution of I2. The range of conditions was chosen in order to span as closely as possible conditions expected in a LOCA but also to help to elucidate some of the mechanisms especially at high pH. pH was found to be a very important factor influencing iodine volatility; over the temperature range studied the extent of oxidation reduced with temperature but this was compensated for by the decrease in partition coefficient. Oxygen concentration was more important in solutions not containing phosphate. The fractional oxidation was not particularly dependent on iodide concentration but G(I2) was very dependent on [I-]. There was no effect of added impurities, Fe, Mn, Mo or organics, although in separate work silver was found to have a very important effect. During attempts to interpret the data it was found that it was necessary to include the iodine atom as a volatile species with a partition coefficient of 1.9 taken from thermodynamic data. The modelling work is described in a separate paper. (author) 15 figs., 1 tab., 19 refs.

  15. Measurements of the radiolytic oxidation of aqueous CsI using a sparging apparatus

    International Nuclear Information System (INIS)

    Ashmore, C.B.; Brown, D.; Sims, H.E.; Gwyther, J.R.

    1996-01-01

    Radiolytic oxidation is considered to be the main mechanism for the formation of I2 from aqueous CsI in containment of a water cooled reactor after a LOCA. Despite the amount of study over the last 60 years on the radiation chemistry of iodine there has been no consistent set of experiments spanning a wide enough range of conditions to verify models with confidence. This paper describes results from a set of experiments carried out in order to remedy this deficiency. In this work the rate of evolution of I2 from sparged irradiated CsI solution labeled with 131I was measured on-line over a range of conditions. This work involved the measurement of the effects of pH, temperature, O2 concentration, I- concentration, phosphate concentration, dose-rate and impurities on the rate of evolution of I2. The range of conditions was chosen in order to span as closely as possible conditions expected in a LOCA but also to help to elucidate some of the mechanisms especially at high pH. pH was found to be a very important factor influencing iodine volatility; over the temperature range studied the extent of oxidation reduced with temperature but this was compensated for by the decrease in partition coefficient. Oxygen concentration was more important in solutions not containing phosphate. The fractional oxidation was not particularly dependent on iodide concentration but G(I2) was very dependent on [I-]. There was no effect of added impurities, Fe, Mn, Mo or organics, although in separate work silver was found to have a very important effect. During attempts to interpret the data it was found that it was necessary to include the iodine atom as a volatile species with a partition coefficient of 1.9 taken from thermodynamic data. The modelling work is described in a separate paper. (author) 15 figs., 1 tab., 19 refs.

  16. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    Science.gov (United States)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and make rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate uncertainty in contaminant location from the actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing, we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to best reduce the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill

  17. Design and Analysis of Decision Rules via Dynamic Programming

    KAUST Repository

    Amin, Talha M.

    2017-04-24

    The areas of machine learning, data mining, and knowledge representation have many different formats used to represent information. Decision rules, amongst these formats, are the most expressive and easily-understood by humans. In this thesis, we use dynamic programming to design decision rules and analyze them. The use of dynamic programming allows us to work with decision rules in ways that were previously only possible for brute force methods. Our algorithms allow us to describe the set of all rules for a given decision table. Further, we can perform multi-stage optimization by repeatedly reducing this set to only contain rules that are optimal with respect to selected criteria. One way that we apply this study is to generate small systems with short rules by simulating a greedy algorithm for the set cover problem. We also compare maximum path lengths (depth) of deterministic and non-deterministic decision trees (a non-deterministic decision tree is effectively a complete system of decision rules) with regards to Boolean functions. Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of “totally optimal” decision rules (rules that are simultaneously optimal with regards to multiple criteria). We also utilize Pareto optimal points to compare and rate greedy heuristics with regards to two criteria at once. Another application of Pareto optimal points is the study of trade-offs between cost and uncertainty which allows us to find reasonable systems of decision rules that strike a balance between length and accuracy.

  18. Intensity-modulated radiation therapy (IMRT) for locally advanced paranasal sinus tumors: incorporating clinical decisions in the optimization process

    International Nuclear Information System (INIS)

    Tsien, Christina; Eisbruch, Avraham; McShan, Daniel; Kessler, Marc; Marsh, Robin C.; Fraass, Benedick

    2003-01-01

    Purpose: Intensity-modulated radiotherapy (IMRT) plans require decisions about priorities and tradeoffs among competing goals. This study evaluates the incorporation of various clinical decisions into the optimization system, using locally advanced paranasal sinus tumors as a model. Methods and Materials: Thirteen patients with locally advanced paranasal sinus tumors were retrospectively replanned using inverse planning. Two clinical decisions were assumed: (1) Spare both optic pathways (OP), or (2) Spare only the contralateral OP. In each case, adequate tumor coverage (treated to 70 Gy in 35 fractions) was required. Two beamlet IMRT plans were thus developed for each patient using a class solution cost function. By altering one key variable at a time, different levels of risk of OP toxicity and planning target volume (PTV) compromise were compared in a systematic manner. The resulting clinical tradeoffs were analyzed using dosimetric criteria, tumor control probability (TCP), equivalent uniform dose (EUD), and normal tissue complication probability. Results: Plan comparisons representing the two clinical decisions (sparing both OP and sparing only the contralateral OP), with respect to minimum dose, TCP, V95, and EUD, demonstrated small, yet statistically significant, differences. However, when individual cases were analyzed further, significant PTV underdosage (>5%) was present in most cases for plans sparing both OP. In 6/13 cases (46%), PTV underdosage was between 5% and 15%, and in 3 cases (23%) was greater than 15%. By comparison, adequate PTV coverage was present in 8/13 cases (62%) for plans sparing only the contralateral OP. Mean target EUD comparisons between the two plans (including 9 cases where a clinical tradeoff between PTV coverage and OP sparing was required) were similar: 68.6 Gy and 69.1 Gy, respectively (p=0.02). Mean TCP values for those 9 cases were 56.5 vs. 61.7, respectively (p=0.006). Conclusions: In IMRT plans for paranasal sinus tumors

  19. Combining two strategies to optimize biometric decisions against spoofing attacks

    Science.gov (United States)

    Li, Weifeng; Poh, Norman; Zhou, Yicong

    2014-09-01

    A spoof attack, in which biometric traits are replicated, represents a real threat to an automatic biometric verification/authentication system. This is because the system, originally designed to distinguish genuine users from impostors, simply cannot distinguish a replicated biometric sample (replica) from a live sample. An effective solution is to obtain some measures that can indicate whether or not a biometric trait has been tampered with, e.g., liveness detection measures. These measures are referred to as evidence of spoofing or anti-spoofing measures. In order to make the final accept/reject decision, a straightforward solution is to define two thresholds: one for the anti-spoofing measure, and another for the verification score. We compared two variants of a method that relies on applying two thresholds - one to the verification (matching) score and another to the anti-spoofing measure. Our experiments, carried out using a signature database as well as by simulation, show that both the brute-force and its probabilistic variant turn out to be optimal under different operating conditions.
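
    The two-threshold rule itself is simple to state in code; the sketch below illustrates only the basic accept/reject logic, with illustrative threshold values, and does not reproduce the paper's probabilistic variant or its threshold-tuning procedure.

    ```python
    # Two-threshold decision sketch: accept only if the matching score and the
    # anti-spoofing (liveness) measure both clear their respective thresholds.
    from dataclasses import dataclass

    @dataclass
    class TwoThresholdDecider:
        tau_match: float      # threshold on the verification (matching) score
        tau_liveness: float   # threshold on the anti-spoofing measure

        def accept(self, match_score: float, liveness_score: float) -> bool:
            return match_score >= self.tau_match and liveness_score >= self.tau_liveness

    decider = TwoThresholdDecider(tau_match=0.62, tau_liveness=0.40)
    samples = [(0.81, 0.77), (0.86, 0.12), (0.35, 0.90)]   # (match, liveness) pairs
    for m, l in samples:
        print(m, l, "->", "accept" if decider.accept(m, l) else "reject")
    ```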

  20. Building of Reusable Reverse Logistics Model and its Optimization Considering the Decision of Backorder or Next Arrival of Goods

    Science.gov (United States)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol

    This paper deals with building a reusable reverse logistics model that considers the decision between a backorder and the next arrival of goods. An optimization method is proposed to minimize the transportation cost and to minimize the volume of backorders or next-arrival goods caused by the just-in-time delivery of the final delivery stage between the manufacturer and the processing center. Through optimization algorithms using a priority-based genetic algorithm and a hybrid genetic algorithm, sub-optimal delivery routes are determined. Based on a case study of a distilling and sales company in Busan, Korea, the new model of the reusable reverse logistics of empty bottles is built and the effectiveness of the proposed method is verified.

  1. Watershed optimal decision models for water-quality management

    Institute of Scientific and Technical Information of China (English)

    盛虎; 向男; 郭怀成; 刘永

    2013-01-01

    In light of the difficulties in effectively controlling water pollution, this study formulated a watershed optimal management decision model framework based on existing research on mechanistic modeling of watershed hydrology, hydrodynamics, water quality and aquatic ecology. The decision model framework also took into account the existing socio-economic development status in watersheds. Based on this framework, we reviewed the history and current status of watershed optimal decision support models from three different aspects: simple systematic optimization models, coupled simulation-optimization models, and complicated optimization models on different temporal and spatial scales. Meanwhile, the problems arising during the development of watershed optimization models were identified. Finally, in order to address the computational bottleneck of watershed optimization models, simplification of the structure of simulation models and adaptive management were recommended.

  2. Bayesian framework for managing preferences in decision-making

    International Nuclear Information System (INIS)

    Maes, Marc A.; Faber, Michael H.

    2006-01-01

    A rational decision-making process does not exclude the possibility of decision makers expressing different preferences and disagreeing regarding the effects of consequences and optimal course of actions. This point of view is explored in depth in this paper. A framework is developed that includes several decision makers (instead of just one) and allows for the variability of preferences among these decision makers. The information provided by the varying opinions of decision makers can be used to optimize our own decision-making. To achieve this, likelihood functions are developed for stated preferences among both discrete and continuous alternatives, and stated preference rankings of alternatives. Two applications are pursued: the optimization of the lifecycle utility of a structural system subject to consequences of failure proportional to the intensity of hazards exceeding a variable threshold, and to follow-up consequences. Also, the problem of tight decisions or close calls is investigated in order to explore the efficiency of a Bayesian approach using stated preferences and stated rankings

  3. Reliability-based optimization of engineering structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost-benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, re-assessment of existing structures. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re...

  4. Information-Theoretic Bounded Rationality and ε-Optimality

    Directory of Open Access Journals (Sweden)

    Daniel A. Braun

    2014-08-01

    Full Text Available Bounded rationality concerns the study of decision makers with limited information processing resources. Previously, the free energy difference functional has been suggested to model bounded rational decision making, as it provides a natural trade-off between an energy or utility function that is to be optimized and information processing costs that are measured by entropic search costs. The main question of this article is how the information-theoretic free energy model relates to simple ε-optimality models of bounded rational decision making, where the decision maker is satisfied with any action in an ε-neighborhood of the optimal utility. We find that the stochastic policies that optimize the free energy trade-off comply with the notion of ε-optimality. Moreover, this optimality criterion even holds when the environment is adversarial. We conclude that the study of bounded rationality based on ε-optimality criteria that abstract away from the particulars of the information processing constraints is compatible with the information-theoretic free energy model of bounded rationality.
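
    For a discrete action set, the free-energy trade-off described above has a simple closed form: the optimal stochastic policy is a softmax over utilities, with an inverse temperature that encodes how much information processing the decision maker can afford. The sketch below is an illustration of that relationship only, not the authors' code; the utility values, the uniform prior and the parameter beta are hypothetical.

```python
import numpy as np

def free_energy_policy(utilities, prior, beta):
    """Policy maximizing E[U] - (1/beta) * KL(p || prior).
    Closed form: p(a) proportional to prior(a) * exp(beta * U(a))."""
    logits = np.log(prior) + beta * np.asarray(utilities)
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Hypothetical example: three actions, uniform prior over actions.
U = np.array([1.0, 0.8, 0.2])
prior = np.ones(3) / 3
for beta in (0.5, 2.0, 10.0):              # higher beta = more processing resources
    p = free_energy_policy(U, prior, beta)
    epsilon = U.max() - p @ U               # gap to the best achievable utility
    print(f"beta={beta:5.1f}  policy={np.round(p, 3)}  epsilon={epsilon:.3f}")
```

    As beta grows, the policy concentrates on the best action and the utility gap epsilon shrinks, which is the sense in which the free-energy-optimal policies satisfy ε-optimality.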

  5. Barriers and facilitators to the dissemination of DECISION+, a continuing medical education program for optimizing decisions about antibiotics for acute respiratory infections in primary care: A study protocol

    Directory of Open Access Journals (Sweden)

    Gagnon Marie-Pierre

    2011-01-01

    decision making regarding the use of antibiotics in acute respiratory infections, to facilitate its dissemination in primary care on a large scale. Our results should help continuing medical educators develop a continuing medical education program in shared decision making for other clinically relevant topics. This will help optimize clinical decisions in primary care.

  6. Application of TOPSIS and VIKOR improved versions in a multi criteria decision analysis to develop an optimized municipal solid waste management model.

    Science.gov (United States)

    Aghajani Mir, M; Taherei Ghazvinei, P; Sulaiman, N M N; Basri, N E A; Saheri, S; Mahmood, N Z; Jahan, A; Begum, R A; Aghamohammadi, N

    2016-01-15

    Selecting a suitable Multi Criteria Decision Making (MCDM) method is a crucial stage in establishing a Solid Waste Management (SWM) system. The main objective of the current study is to demonstrate and evaluate a proposed method using MCDM methods. An improved version of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was applied to obtain the best municipal solid waste management method by comparing and ranking the scenarios; applying this method to rank treatment methods is introduced as one contribution of the study. In addition, the Viekriterijumsko Kompromisno Rangiranje (VIKOR) compromise solution method was applied for sensitivity analyses. The proposed method can assist urban decision makers in prioritizing and selecting an optimized Municipal Solid Waste (MSW) treatment system, and a logical and systematic scientific method is proposed to guide appropriate decision-making. A modified TOPSIS methodology was applied to MSW problems for the first time as a superior alternative to existing methods. Next, 11 scenarios of MSW treatment methods were defined and compared environmentally and economically based on the waste management conditions. Results show that integrating a sanitary landfill (18.1%), RDF (3.1%), composting (2%), anaerobic digestion (40.4%), and recycling (36.4%) was an optimized model of integrated waste management. The applied decision-making structure provides the opportunity for optimum decision-making. Therefore, the mix of recycling and anaerobic digestion and a sanitary landfill with Electricity Production (EP) are the preferred options for MSW management. Copyright © 2015 Elsevier Ltd. All rights reserved.
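
    For readers unfamiliar with the (unmodified) TOPSIS procedure used as the starting point above, the following sketch shows the standard steps on a made-up decision matrix; the scenario scores, criterion weights and benefit/cost designations are hypothetical and unrelated to the 11 scenarios evaluated in the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classical TOPSIS.
    matrix: alternatives x criteria; weights: sum to 1;
    benefit: True where larger values are better for that criterion."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))           # vector normalization
    v = norm * np.asarray(weights)                      # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                 # closeness: 1 = best, 0 = worst

# Hypothetical scenarios scored on cost (minimize) and two environmental criteria (maximize).
scores = [[120, 0.6, 0.7],   # landfill-heavy mix
          [150, 0.8, 0.8],   # anaerobic digestion + recycling
          [170, 0.9, 0.6]]   # RDF + composting
c = topsis(scores, weights=[0.4, 0.3, 0.3], benefit=[False, True, True])
print("closeness coefficients:", np.round(c, 3))
print("ranking (best first):", np.argsort(-c))
```

    The alternative with the largest closeness coefficient is ranked best; the paper's modified TOPSIS and the VIKOR cross-check refine this basic scheme.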

  7. Optimal Channel Selection Based on Online Decision and Offline Learning in Multichannel Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mu Qiao

    2017-01-01

    Full Text Available We propose a channel selection strategy with a hybrid architecture, which combines the centralized method and the distributed method to alleviate the overhead of the access point and at the same time provide more flexibility in network deployment. Within this architecture, we make use of game theory and reinforcement learning to achieve optimal channel selection under different communication scenarios. In particular, when the network can satisfy the requirements of energy and computational costs, the online decision algorithm based on a noncooperative game helps each individual sensor node immediately select the optimal channel. Alternatively, when the network cannot satisfy these requirements, the offline learning algorithm based on reinforcement learning helps each individual sensor node learn from its experience and iteratively adjust its behavior toward the expected target. Extensive simulation results validate the effectiveness of our proposal and show that our channel selection strategy achieves higher system throughput than conventional off-policy channel selection approaches.
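
    As a rough illustration of the offline learning component, the sketch below uses a tabular, bandit-style Q-learning update to let a single node learn channel quality from reward feedback; the per-channel success probabilities, learning rate and exploration rate are hypothetical and are not taken from the paper.

```python
import random

# Hypothetical per-channel success probabilities (unknown to the node).
SUCCESS_PROB = [0.3, 0.8, 0.5, 0.6]
ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000

q = [0.0] * len(SUCCESS_PROB)             # one value estimate per channel
for _ in range(EPISODES):
    # epsilon-greedy channel choice
    if random.random() < EPSILON:
        a = random.randrange(len(q))
    else:
        a = max(range(len(q)), key=lambda i: q[i])
    reward = 1.0 if random.random() < SUCCESS_PROB[a] else 0.0
    q[a] += ALPHA * (reward - q[a])        # incremental update toward observed reward

print("learned channel values:", [round(v, 2) for v in q])
print("preferred channel:", q.index(max(q)))
```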

  8. Optimization Models for Petroleum Field Exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Jonsbraaten, Tore Wiig

    1998-12-31

    This thesis presents and discusses various models for optimal development of a petroleum field. The objective of these optimization models is to maximize, under many uncertain parameters, the project's expected net present value. First, an overview of petroleum field optimization is given from the point of view of operations research. Reservoir equations for a simple reservoir system are derived, discretized and included in optimization models. Linear programming models for optimizing production decisions are discussed and extended to mixed integer programming models where decisions concerning platform, wells and production strategy are optimized. Then, optimal development decisions under uncertain oil prices are discussed. The uncertain oil price is estimated by a finite set of price scenarios with associated probabilities. The problem is one of stochastic mixed integer programming, and the solution approach is to use a scenario and policy aggregation technique developed by Rockafellar and Wets, although this technique was developed for continuous variables. Stochastic optimization problems with focus on problems with decision-dependent information discoveries are also discussed. A class of 'manageable' problems is identified and an implicit enumeration algorithm for finding the optimal decision policy is proposed. Problems involving uncertain reservoir properties but with a known initial probability distribution over possible reservoir realizations are discussed. Finally, a section on Nash equilibrium and bargaining in an oil reservoir management game discusses the pool problem arising when two lease owners have access to the same underlying oil reservoir. Because the oil tends to migrate, both lease owners have an incentive to drain oil from the competitor's part of the reservoir. The discussion is based on a numerical example. 107 refs., 31 figs., 14 tabs.
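
    To make the scenario-based objective concrete, here is a minimal sketch that evaluates a small set of development alternatives against discrete oil-price scenarios and picks the one with the highest expected net present value; the cash flows, probabilities and discount rate are invented for illustration and do not come from the thesis.

```python
# Hypothetical development alternatives: (name, capex, yearly production in kbbl)
ALTERNATIVES = [("small platform, 4 wells", 300.0, 40.0),
                ("large platform, 8 wells", 550.0, 70.0)]
# Hypothetical price scenarios: (price per bbl in $, probability)
SCENARIOS = [(15.0, 0.3), (20.0, 0.5), (28.0, 0.2)]
YEARS, RATE = 10, 0.10

def expected_npv(capex, production):
    """Probability-weighted NPV over the discrete price scenarios."""
    npv = 0.0
    for price, prob in SCENARIOS:
        pv = sum(production * price / (1 + RATE) ** t for t in range(1, YEARS + 1))
        npv += prob * (pv - capex)
    return npv

for name, capex, prod in ALTERNATIVES:
    print(f"{name}: expected NPV = {expected_npv(capex, prod):8.1f}")
best = max(ALTERNATIVES, key=lambda alt: expected_npv(alt[1], alt[2]))
print("chosen alternative:", best[0])
```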

  9. A hierarchical Markov decision process modeling feeding and marketing decisions of growing pigs

    DEFF Research Database (Denmark)

    Pourmoayed, Reza; Nielsen, Lars Relund; Kristensen, Anders Ringgaard

    2016-01-01

    Feeding is the most important cost in the production of growing pigs and has a direct impact on the marketing decisions, growth and the final quality of the meat. In this paper, we address the sequential decision problem of when to change the feed-mix within a finisher pig pen and when to pick pigs...... for marketing. We formulate a hierarchical Markov decision process with three levels representing the decision process. The model considers decisions related to feeding and marketing and finds the optimal decision given the current state of the pen. The state of the system is based on information from on...

  10. Heuristics in Managing Complex Clinical Decision Tasks in Experts' Decision Making.

    Science.gov (United States)

    Islam, Roosan; Weir, Charlene; Del Fiol, Guilherme

    2014-09-01

    Clinical decision support is a tool to help experts make optimal and efficient decisions. However, little is known about the high-level abstractions in experts' thinking processes. The objective of the study is to understand how clinicians manage complexity while dealing with complex clinical decision tasks. After approval from the Institutional Review Board (IRB), three clinical experts were interviewed, and the transcripts from these interviews were analyzed. We found five broad categories of strategies used by experts for managing complex clinical decision tasks: decision conflict, mental projection, decision trade-offs, managing uncertainty and generating rules of thumb. Complexity is created by decision conflicts, mental projection, limited options and treatment uncertainty. Experts cope with complexity in a variety of ways, including using efficient and fast decision strategies to simplify complex decision tasks, mentally simulating outcomes and focusing on only the most relevant information. Understanding complex decision-making processes can help in designing clinical decision support that is matched to the complexity of the task.

  11. Optimization of perfluorocarbon emulsion properties for enhancing oxygen mass transfer in a bio-artificial liver support system

    CSIR Research Space (South Africa)

    Moolman, FS

    2004-07-29

    Full Text Available: With an increase in the dispersed phase volume fraction (φp), both the oxygen-holding capacity and the viscosity increase. These issues are addressed here using simplified mass transfer models, amenable to analytical solution, for both gas-sparged and membrane...

  12. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  13. Decision models for use with criterion-referenced tests

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1980-01-01

    The problem of mastery decisions and optimizing cutoff scores on criterion-referenced tests is considered. This problem can be formalized as an (empirical) Bayes problem with decision rules of a monotone shape. Next, the derivation of optimal cutoff scores for threshold, linear, and normal ogive

  14. Transforming data into decisions to optimize the recovery of the Saih Rawl Field in Oman

    Energy Technology Data Exchange (ETDEWEB)

    Dozier, G C [Society of Petroleum Engineers, Dubai (United Arab Emirates); Schlumberger Oilfield Services, Dubai (United Arab Emirates)]; Giacon, P [Society of Petroleum Engineers, Dubai (United Arab Emirates); Petroleum Development of Oman (Oman)]

    2006-07-01

    The Saih Rawl field of Oman has been producing for more than 5 years from the Barik and Miqrat Formations. Well productivity depends greatly on the effectiveness of hydraulic fracturing and other operating practices. Productivity is further complicated by the changing mechanical and reservoir properties related to depletion and intralayer communication. In this study, a systematic approach was used by a team of operators and service companies to optimize well production within a one-year time period. The approach involved a dynamic integration of historical data, new information technologies and engineering diagnostics to identify the key parameters that influence productivity and to optimize performance according to current analyses. In particular, historical pressure trends by unit were incorporated with theoretical assumptions validated by indirect field evidence. Onsite decision-making resulted in effective placement of fracture treatments. The approach has produced some of the highest producing wells in the field's history. It was concluded that optimization and maximization of well productivity requires multidiscipline inputs that should be managed through a structured workflow that covers not only the classical simulation design inputs but the entire process from design to execution, with particular emphasis on cleanup practices and induced fluid damage. 6 refs., 2 tabs., 25 figs.

  15. Optimal contracts decision of industrial customers

    International Nuclear Information System (INIS)

    Tsay, M.-T.; Lin, W.-M.; Lee, J.-L.

    2001-01-01

    This paper develops a software package to calculate the optimal contract capacities for industrial customers. Based on the time-of-use (TOU) rates employed by the Taiwan Power Company, the objective function is formulated to minimize the electricity bill of industrial customers over the whole year. Evolutionary programming (EP) was adopted to solve this problem. Users can obtain the optimal contract capacities for the peak load, semi-peak load, and off-peak load, respectively. Practical load consumption data were used to prove the validity of this program. Results show that the software developed in this paper can be used as a useful tool by industrial customers in selecting contract capacities to curtail the electricity bill. (author)
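
    As an illustration of the kind of search EP performs in this setting, the sketch below evolves three contract capacities against a deliberately simplified TOU billing model; the demand data, tariff constants and penalty rule are hypothetical stand-ins rather than Taiwan Power Company's actual rate schedule.

```python
import random

# Hypothetical monthly peak demands (kW) for peak / semi-peak / off-peak periods.
DEMAND = [(820, 600, 400), (900, 650, 420), (760, 580, 390)] * 4   # 12 months
BASIC_RATE = (220.0, 160.0, 40.0)   # $/kW-month per contract tier (hypothetical)
PENALTY = 3.0                        # multiplier on the basic rate for demand above contract

def annual_bill(contract):
    """Yearly basic charges plus penalties for exceeding the contracted capacities."""
    bill = 0.0
    for month in DEMAND:
        for demand, cap, rate in zip(month, contract, BASIC_RATE):
            bill += cap * rate
            if demand > cap:
                bill += (demand - cap) * rate * PENALTY
    return bill

def evolve(pop_size=30, generations=200):
    """(mu + lambda) evolutionary programming with Gaussian mutation."""
    pop = [[random.uniform(300, 1000) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [[max(0.0, c + random.gauss(0, 20)) for c in ind] for ind in pop]
        pop = sorted(pop + offspring, key=annual_bill)[:pop_size]
    return pop[0]

best = evolve()
print("contract capacities (kW):", [round(c) for c in best])
print("estimated annual bill:", round(annual_bill(best)))
```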

  16. Review of tri-generation technologies: Design evaluation, optimization, decision-making, and selection approach

    International Nuclear Information System (INIS)

    Al Moussawi, Houssein; Fardoun, Farouk; Louahlia-Gualous, Hasna

    2016-01-01

    Highlights: • Trigeneration technologies classified and reviewed according to prime movers. • Relevant heat recovery equipment discussed with thermal energy storage. • Trigeneration evaluated based on energy, exergy, economy, environment criteria. • Design, optimization, and decision-making methods classified and presented. • System selection suggested according to user preferences. - Abstract: Electricity, heating, and cooling are the three main components constituting the tripod of energy consumption in residential, commercial, and public buildings all around the world. Their separate generation causes higher fuel consumption, at a time when energy demands and fuel costs are continuously rising. Combined cooling, heating, and power (CCHP), or trigeneration, could be a solution to this challenge, yielding an efficient, reliable, flexible, competitive, and less polluting alternative. A variety of trigeneration technologies are available, and their proper choice is influenced by the conditions and preferences of the energy system in which they are employed. In this paper, different types of trigeneration systems are classified according to the prime mover, size and energy sequence usage. A leveled selection procedure is subsequently laid out in the consecutive sections. The first level contains the applied prime mover technologies, which are considered to be the heart of any CCHP system. The second level comprises the heat recovery equipment (heating and cooling), whose selection should be compatible with the chosen prime mover. The third level includes the thermal energy storage system and heat transfer fluid to be employed. For each section of the paper, a survey of studies on CHP/CCHP implementation is presented. A comprehensive table of evaluation criteria for such systems based on energy, exergy, economy, and environment measures is provided, along with a survey of the methods used in their design, optimization, and decision-making. Moreover, a classification

  17. Performance improvement of 64-QAM coherent optical communication system by optimizing symbol decision boundary based on support vector machine

    Science.gov (United States)

    Chen, Wei; Zhang, Junfeng; Gao, Mingyi; Shen, Gangxiang

    2018-03-01

    High-order modulation signals are suited to high-capacity communication systems because of their high spectral efficiency, but they are more vulnerable to various impairments. For degraded signals, when symbol points overlap on the constellation diagram, the original linear decision boundary can no longer separate the symbol classes reliably. Therefore, it is advantageous to create an optimum symbol decision boundary for the degraded signals. In this work, we experimentally demonstrated a 64-quadrature-amplitude modulation (64-QAM) coherent optical communication system using a support-vector machine (SVM) decision boundary algorithm to create the optimum symbol decision boundary and improve system performance. We investigated the influence of various impairments on 64-QAM coherent optical communication systems, such as those caused by modulator nonlinearity, phase skew between the in-phase (I) and quadrature-phase (Q) arms of the modulator, fiber Kerr nonlinearity and amplified spontaneous emission (ASE) noise. We measured the bit-error-ratio (BER) performance of 75-Gb/s 64-QAM signals in back-to-back and 50-km transmission. By using the SVM to optimize the symbol decision boundary, the impairments caused by I/Q phase skew of the modulator, fiber Kerr nonlinearity and ASE noise are greatly mitigated.
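
    The idea of replacing a fixed decision grid with a learned boundary can be illustrated with a generic kernel SVM on synthetic, impairment-distorted constellation points; the 4-QAM constellation (kept small for brevity), the skew and noise levels, and the scikit-learn classifier below are illustrative choices, not the authors' experimental setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical 4-QAM constellation, kept small for clarity (the paper uses 64-QAM).
ideal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
skew = np.array([[1.0, 0.25], [0.0, 1.0]])    # crude stand-in for I/Q phase skew

def make_symbols(n, noise=0.35):
    labels = rng.integers(0, len(ideal), n)
    pts = ideal[labels] @ skew.T               # distort the transmitted points
    pts += rng.normal(0.0, noise, pts.shape)   # additive noise (ASE-like impairment)
    return pts, labels

X_train, y_train = make_symbols(4000)
X_test, y_test = make_symbols(2000)

# Baseline: hard decision against the nearest ideal (undistorted) constellation point.
nearest = ((X_test[:, None, :] - ideal[None]) ** 2).sum(-1).argmin(axis=1)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print("nearest-point decision accuracy:", round(float((nearest == y_test).mean()), 3))
print("SVM decision boundary accuracy :", round(float(svm.score(X_test, y_test)), 3))
```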

  18. Risk-Informed Decisions Optimization in Inspection and Maintenance

    International Nuclear Information System (INIS)

    Robertas Alzbutas

    2002-01-01

    The Risk-Informed Approach (RIA) used to support decisions related to inspection and maintenance programs is considered. The use of risk-informed methods can help focus adequate in-service inspections and control on the more important locations of complex dynamic systems. The focus is set on the highest risk, measured as conditional core damage frequency, which is produced by the frequencies of degradation and final failure at different locations combined with the conditional failure consequence probability. The probabilities of different degradation states per year and their consequences are estimated quantitatively. The investigation of the inspection and maintenance process is presented as a combination of deterministic and probabilistic analysis based on a general risk-informed model, which includes the features of the inspection and maintenance program. Such an RIA allows optimization of the inspection program while maintaining probabilistic and fundamental deterministic safety requirements. Failure statistics analysis is used, as well as evaluation of the reliability of inspections. The assumptions regarding the effectiveness of the inspection methods are based on a classification of the accessibility of the welds during inspection and on the different techniques used for inspection. The probability of defect detection is assumed to depend on the parameters through either a logarithmic or a logit transformation. As an example, the modeling of the pipe system inspection process is analyzed. Means of reducing the number of inspection sites and the cumulative radiation exposure of NPP inspection personnel, together with a reduction of overall risk, are presented along with the software used and developed. The developed software can perform and administer all the risk evaluations and provides the ability to compare different options and perform sensitivity analysis. Approaches to defining an acceptable level of risk are discussed. These approaches with appropriate software in

  19. Decision theory, reinforcement learning, and the brain.

    Science.gov (United States)

    Dayan, Peter; Daw, Nathaniel D

    2008-12-01

    Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration and discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.

  20. Rules of Thumb in Life-Cycle Saving Decisions

    OpenAIRE

    Winter, Joachim; Schlafmann, Kathrin; Rodepeter, Ralf

    2011-01-01

    We analyse life-cycle saving decisions when households use simple heuristics, or rules of thumb, rather than solve the underlying intertemporal optimization problem. We simulate life-cycle saving decisions using three simple rules and compute utility losses relative to the solution of the optimization problem. Our simulations suggest that utility losses induced by following simple decision rules are relatively low. Moreover, the two main saving motives reflected by the canonical life-cyc...

  1. Information integration in perceptual and value-based decisions

    OpenAIRE

    Tsetsos, K.

    2012-01-01

    Research on the psychology and neuroscience of simple, evidence-based choices has led to impressive progress in capturing the underlying mental processes as optimal mechanisms that make the fastest decision for a specified accuracy. The idea that decision-making is an optimal process stands in contrast with findings in more complex, motivation-based decisions, focussed on multiple goals with trade-offs. Here, a number of paradoxical and puzzling choice behaviours have been r...

  2. Multi-pruning of decision trees for knowledge representation and classification

    KAUST Repository

    Azad, Mohammad

    2016-06-09

    We consider two important questions related to decision trees: first, how to construct a decision tree with a reasonable number of nodes and a reasonable number of misclassifications, and second, how to improve the prediction accuracy of decision trees when they are used as classifiers. We have created a dynamic programming based approach for bi-criteria optimization of decision trees relative to the number of nodes and the number of misclassifications. This approach allows us to construct the set of all Pareto optimal points and to derive, for each such point, decision trees with parameters corresponding to that point. Experiments on datasets from the UCI ML Repository show that, very often, we can find a suitable Pareto optimal point and derive a decision tree with a small number of nodes at the expense of a small increment in the number of misclassifications. Based on the created approach we have proposed a multi-pruning procedure which constructs decision trees that, as classifiers, often outperform decision trees constructed by CART. © 2015 IEEE.
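
    The bi-criteria view (tree size versus misclassifications) can be approximated, rather than computed exactly, by sweeping the size limit of a standard greedy learner and keeping the non-dominated points; the sketch below does this with scikit-learn on a built-in dataset and only illustrates the Pareto idea, not the dynamic-programming algorithm developed in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
candidates = []
for leaves in range(2, 40):
    tree = DecisionTreeClassifier(max_leaf_nodes=leaves, random_state=0).fit(X, y)
    nodes = tree.tree_.node_count
    errors = int((tree.predict(X) != y).sum())   # misclassifications on the training data
    candidates.append((nodes, errors))

# Keep the non-dominated (Pareto optimal) (nodes, errors) pairs.
pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] <= c[1] and o != c for o in candidates)]
for nodes, errors in sorted(set(pareto)):
    print(f"nodes={nodes:3d}  misclassifications={errors}")
```

    Reading down the printed front shows directly how many extra misclassifications are traded for each reduction in tree size.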

  3. Multi-pruning of decision trees for knowledge representation and classification

    KAUST Repository

    Azad, Mohammad; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2016-01-01

    We consider two important questions related to decision trees: first, how to construct a decision tree with a reasonable number of nodes and a reasonable number of misclassifications, and second, how to improve the prediction accuracy of decision trees when they are used as classifiers. We have created a dynamic programming based approach for bi-criteria optimization of decision trees relative to the number of nodes and the number of misclassifications. This approach allows us to construct the set of all Pareto optimal points and to derive, for each such point, decision trees with parameters corresponding to that point. Experiments on datasets from the UCI ML Repository show that, very often, we can find a suitable Pareto optimal point and derive a decision tree with a small number of nodes at the expense of a small increment in the number of misclassifications. Based on the created approach we have proposed a multi-pruning procedure which constructs decision trees that, as classifiers, often outperform decision trees constructed by CART. © 2015 IEEE.

  4. Anytime decision making based on unconstrained influence diagrams

    DEFF Research Database (Denmark)

    Luque, Manuel; Nielsen, Thomas Dyhre; Jensen, Finn Verner

    2016-01-01

    Unconstrained influence diagrams extend the language of influence diagrams to cope with decision problems in which the order of the decisions is unspecified. Thus, when solving an unconstrained influence diagram we not only look for an optimal policy for each decision, but also for a so-called step-policy specifying the next decision given the observations made so far. However, due to the complexity of the problem, temporal constraints can force the decision maker to act before the solution algorithm has finished, and, in particular, before an optimal policy for the first decision has been computed... This paper addresses this problem by proposing an anytime algorithm that at any time provides a qualified recommendation for the first decisions of the problem. The algorithm performs a heuristic-based search in a decision tree representation of the problem. We provide a framework for analyzing...

  5. Robust Bayesian decision theory applied to optimal dosage.

    Science.gov (United States)

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(θ, d) in a dose prescription problem, where θ and d denote respectively the patient's state of health and the dose. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment for lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.

  6. Optimal Detection under the Restricted Bayesian Criterion

    Directory of Open Access Journals (Sweden)

    Shujun Liu

    2017-07-01

    Full Text Available This paper aims to find a suitable decision rule for a binary composite hypothesis-testing problem with a partial or coarse prior distribution. To alleviate the negative impact of the information uncertainty, a constraint is considered that the maximum conditional risk cannot be greater than a predefined value. Therefore, the objective of this paper becomes to find the optimal decision rule to minimize the Bayes risk under the constraint. By applying the Lagrange duality, the constrained optimization problem is transformed to an unconstrained optimization problem. In doing so, the restricted Bayesian decision rule is obtained as a classical Bayesian decision rule corresponding to a modified prior distribution. Based on this transformation, the optimal restricted Bayesian decision rule is analyzed and the corresponding algorithm is developed. Furthermore, the relation between the Bayes risk and the predefined value of the constraint is also discussed. The Bayes risk obtained via the restricted Bayesian decision rule is a strictly decreasing and convex function of the constraint on the maximum conditional risk. Finally, the numerical results including a detection example are presented and agree with the theoretical results.
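
    Written out, the constrained problem and its Lagrangian relaxation described above take roughly the following form; the notation is a plausible reconstruction from the abstract rather than a quotation of the paper's equations.

```latex
\begin{align*}
&\min_{\delta}\; r(\delta) = \int_{\Theta} R_{\theta}(\delta)\,\pi(\theta)\,d\theta
  \quad\text{subject to}\quad \max_{\theta\in\Theta} R_{\theta}(\delta) \le \alpha,\\[4pt]
&L(\delta,\lambda) = r(\delta) + \lambda\Bigl(\max_{\theta\in\Theta} R_{\theta}(\delta) - \alpha\Bigr),
  \qquad \lambda \ge 0,
\end{align*}
% where R_theta is the conditional risk and pi the partial/coarse prior; for the
% optimal multiplier, minimizing L reduces to a classical Bayes rule under a
% modified prior, as stated in the abstract.
```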

  7. Decision Optimization for Power Grid Operating Conditions with High- and Low-Voltage Parallel Loops

    Directory of Open Access Journals (Sweden)

    Dong Yang

    2017-05-01

    Full Text Available With the development of higher-voltage power grids, high- and low-voltage parallel loops are emerging, which lead to energy losses and even threaten the security and stability of power systems. The multi-infeed high-voltage direct current (HVDC) configurations widely appearing in AC/DC interconnected power systems make this situation even worse. Aimed at energy saving and system security, a decision optimization method for power grid operating conditions with high- and low-voltage parallel loops is proposed in this paper. Firstly, considering hub substation distribution and power grid structure, parallel loop opening schemes are generated with the GN (Girvan-Newman) algorithm. Then, candidate opening schemes are preliminarily selected from all the generated schemes based on a filtering index. Finally, taking into consideration the influence on power system security, stability and operating economy, an evaluation model for candidate opening schemes is founded on the analytic hierarchy process (AHP), and a fuzzy evaluation algorithm is used to find the optimal scheme. Simulation results on a New England 39-bus system and an actual power system validate the effectiveness and superiority of the proposed method.

  8. Capital Equipment Replacement Decisions

    OpenAIRE

    Batterham, Robert L.; Fraser, K.I.

    1995-01-01

    This paper reviews the literature on the optimal replacement of capital equipment, especially farm machinery. It also considers the influence of taxation and capital rationing on replacement decisions. It concludes that special taxation provisions such as accelerated depreciation and investment allowances are unlikely to greatly influence farmers' capital equipment replacement decisions in Australia.

  9. Understanding Optimal Decision-Making in Wargaming

    Science.gov (United States)

    2013-10-01

    beneficial outcomes from wargaming, one of which is a better understanding of the impact of decisions as part of combat processes. However, using... under instrument flight rules (IFR) (Bellenkes et al., 1997; Katoh, 1997). Of note, eye-tracking technology has also been applied to investigate...

  10. Decision on risk-averse dual-channel supply chain under demand disruption

    Science.gov (United States)

    Yan, Bo; Jin, Zijie; Liu, Yanping; Yang, Jianbo

    2018-02-01

    We studied dual-channel supply chains using centralized and decentralized decision-making models. We also conducted a comparative analysis of the decisions before and after demand disruption. The study shows that the amount of change in decision-making is a linear function of the amount of demand disruption, and it is independent of the risk-averse coefficient. The optimal sales volume decision of the disturbed supply chain is related to market share and demand disruption in the decentralized decision-making model. The optimal decision is only influenced by demand disruption in the centralized decision-making model. The stability of the sales volume of the two models is related to market share and demand disruption. The optimal system production of the two models shows robustness, but their stable intervals are different.

  11. In situ iron activated persulfate oxidative fluid sparging treatment of TCE contamination--a proof of concept study.

    Science.gov (United States)

    Liang, Chenju; Lee, I-Ling

    2008-09-10

    In situ chemical oxidation (ISCO) is considered a reliable technology for treating groundwater contaminated with high concentrations of organic contaminants. An ISCO oxidant, the persulfate anion (S₂O₈²⁻), can be activated by ferrous ion (Fe²⁺) to generate sulfate radicals (E° = 2.6 V), which are capable of destroying trichloroethylene (TCE). Their polarity prevents S₂O₈²⁻ or the sulfate radical (SO₄⁻) from effectively oxidizing separate-phase TCE, a dense non-aqueous phase liquid (DNAPL); the oxidation therefore primarily takes place in the aqueous phase where TCE is dissolved. A bench column study was conducted to demonstrate a conceptual remediation method by flushing either S₂O₈²⁻ or Fe²⁺ through a soil column, where the TCE DNAPL was present, and passing the dissolved mixture through either a Fe²⁺ or S₂O₈²⁻ fluid sparging curtain. In addition, the effect of a solubility-enhancing chemical, hydroxypropyl-β-cyclodextrin (HPCD), was tested to evaluate its ability to increase the aqueous TCE concentration. Both flushing arrangements may result in similar TCE degradation efficiencies of 35% to 42%, estimated as the ratio of TCE degraded/(TCE degraded + TCE remaining in effluent), and degradation byproduct chloride generation rates of 4.9 to 7.6 mg Cl⁻ per soil column pore volume. The addition of HPCD did greatly increase the aqueous TCE concentration. However, the TCE degradation efficiency decreased because the TCE degraded was a lower percentage of the relatively greater amount of TCE dissolved by HPCD. This conceptual treatment may serve as a reference for potential on-site application.

  12. EXPLOT - decision support system for optimization of oil exploitation; EXPLOT - sistema de apoio a decisao para a otimizacao da explotacao de petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Tupac Valdivia, Yvan Jesus; Almeida, Luciana Faletti; Pacheco, Marco Aurelio Cavalcanti; Vellasco, Marley Maria Bernardes Rebuzzi [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Eletrica. Lab. de Inteligencia Computacional], e-mail: yvantv@ele.puc-rio.br, e-mail: faletti@ele.puc-rio.br, e-mail: marco@ele.puc-rio.br, e-mail: marley@ele.puc-rio.br

    2007-06-15

    The present work offers a decision support system that integrates different techniques (genetic algorithms, cultural algorithms, co-evolution, neural networks, a neuro-fuzzy model and distributed processing) for optimizing the exploitation of oil reservoirs. The EXPLOT system identifies exploitation alternatives and determines the quantity, position, type (producer or injector) and structure (horizontal or vertical) of wells that maximize the net present value (VPL) of the alternative. The EXPLOT system is composed of three main modules: the optimizer (genetic algorithms, cultural algorithms and co-evolution), the production curve module (an NFHB neuro-fuzzy approximator of the production curve) and the net present value calculation. To estimate the VPL of each development alternative, the system uses a reservoir simulator, specifically IMEX, although other simulators may be used. In addition to these technologies, the system also employs distributed processing, based on the CORBA architecture, for distributed execution of the reservoir simulator on a computer network, which significantly reduces the total optimization time. The EXPLOT system has already been tested on different examples of oil fields. Results obtained so far are considered consistent in the opinion of specialists, who regard the system as a new decision support tool concept in the area. What distinguishes EXPLOT is not only its efficient optimization model, but also its interface, through which specialists interact with the system, introducing project recommendations (e.g., five-spot wells), commanding a localized search for the best solutions, sizing the simulation network and monitoring simulation distribution over the available networks. The EXPLOT system is the result of joint research between CENPES and the Applied Computational Intelligence Lab at PUC-Rio, carried out during the past three years. The continuation of this research project expands

  13. Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach

    Science.gov (United States)

    Charng, T.

    1982-01-01

    Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. Accounting for the joint effect of the probabilistic transition of states and the sequence of decision making, the optimal policy is sought such that a set of decisions optimizes the long-run expected average cost (or profit) per unit time. Provision of an alternative measure, the expected long-run total discounted cost, is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.
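
    A compact way to see the Markov-decision formulation is a small keep-or-replace model solved by value iteration on expected discounted cost (the discounted alternative mentioned in the abstract); the states, transition probabilities and costs below are invented for illustration and are not the report's data.

```python
# States: machine condition 0 (good), 1 (worn), 2 (failed). Actions: keep or replace.
P_KEEP = [[0.7, 0.25, 0.05],
          [0.0, 0.6, 0.4],
          [0.0, 0.0, 1.0]]
COST_KEEP = [10.0, 40.0, 200.0]     # operating cost per period in each state
COST_REPLACE = 120.0                # replacement returns the machine to state 0
GAMMA = 0.95

def action_costs(s, V):
    """Expected discounted cost of 'keep' and 'replace' from state s."""
    keep = COST_KEEP[s] + GAMMA * sum(P_KEEP[s][t] * V[t] for t in range(3))
    replace = COST_REPLACE + COST_KEEP[0] + GAMMA * sum(P_KEEP[0][t] * V[t] for t in range(3))
    return keep, replace

V = [0.0, 0.0, 0.0]
for _ in range(500):                # value iteration
    V = [min(action_costs(s, V)) for s in range(3)]

policy = ["keep" if action_costs(s, V)[0] <= action_costs(s, V)[1] else "replace"
          for s in range(3)]
print("optimal values:", [round(v, 1) for v in V])
print("optimal policy by state:", policy)
```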

  14. Radiological optimization

    International Nuclear Information System (INIS)

    Zeevaert, T.

    1998-01-01

    Radiological optimization is one of the basic principles in each radiation-protection system and it is a basic requirement in the safety standards for radiation protection in the European Communities. The objectives of the research, performed in this field at the Belgian Nuclear Research Centre SCK-CEN, are: (1) to implement the ALARA principles in activities with radiological consequences; (2) to develop methodologies for optimization techniques in decision-aiding; (3) to optimize radiological assessment models by validation and intercomparison; (4) to improve methods to assess in real time the radiological hazards in the environment in case of an accident; (5) to develop methods and programmes to assist decision-makers during a nuclear emergency; (6) to support the policy of radioactive waste management authorities in the field of radiation protection; (7) to investigate existing software programmes in the domain of multi criteria analysis. The main achievements for 1997 are given

  15. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects airlines' service frequency decisions, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making optimal slot purchase and fleet planning decisions throughout the long-term planning horizon.

  16. Decision support model for establishing the optimal energy retrofit strategy for existing multi-family housing complexes

    International Nuclear Information System (INIS)

    Hong, Taehoon; Koo, Choongwan; Kim, Hyunjoong; Seon Park, Hyo

    2014-01-01

    The number of multi-family housing complexes (MFHCs) over 15 yr old in South Korea is expected to exceed 5 million by 2015. Accordingly, the demand for energy retrofit of the deteriorating MFHCs is rapidly increasing. This study aimed to develop a decision support model for establishing the optimal energy retrofit strategy for existing MFHCs. It can provide clear criteria for establishing the carbon emissions reduction target (CERT) and allow efficient budget allocation for conducting the energy retrofit. The CERT for the "S" MFHC, one of the MFHCs located in Seoul, used as a case study, was set at 23.0% (electricity) and 27.9% (gas energy). In the economic and environmental assessment, it was determined that scenario #12 was the optimal scenario (ranked second with regard to NPV40, the net present value at year 40, and third with regard to SIR40, the saving-to-investment ratio at year 40). The proposed model could be useful for owners, construction managers, or policymakers in charge of establishing energy retrofit strategies for existing MFHCs. It could allow contractors in a competitive bidding process to rationally establish the CERT and select the optimal energy retrofit strategy. It can also be applied to any other country or sector in a global environment. - Highlights: • The proposed model was developed to establish the optimal energy retrofit strategy. • Advanced case-based reasoning was applied to establish the community-based CERT. • Energy simulation was conducted to analyze the effects of the energy retrofit strategy. • The optimal strategy can be finally selected based on the LCC and LCCO2 analysis. • It could be extended to any other country or sector in the global environment
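
    The two economic indicators used to rank the scenarios, NPV40 and SIR40, can be computed from annual savings with a few lines of arithmetic; the investment cost, annual saving and discount rate below are hypothetical and are not taken from the "S" MFHC case study.

```python
def npv_and_sir(investment, annual_saving, years=40, rate=0.04):
    """Net present value and saving-to-investment ratio of a retrofit measure."""
    pv_savings = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
    npv = pv_savings - investment
    sir = pv_savings / investment
    return npv, sir

# Hypothetical retrofit scenario: 1.2 M invested, 90 k saved per year.
npv40, sir40 = npv_and_sir(investment=1_200_000, annual_saving=90_000)
print(f"NPV40 = {npv40:,.0f}   SIR40 = {sir40:.2f}")   # SIR > 1 means savings exceed cost
```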

  17. Decision-making styles and depressive symptomatology: Development of the Decision Styles Questionnaire

    Directory of Open Access Journals (Sweden)

    Yan Leykin

    2010-12-01

    Full Text Available Difficulty making decisions is one of the symptoms of depressive illness. Previous research suggests that depressed individuals may make decisions that differ from those made by the non-depressed, and that they use sub-optimal decision-making strategies. For this study we constructed an instrument that aims to measure a variety of decision-making styles as well as the respondent's view of him- or herself as a decision-maker (decisional self-esteem). These styles and estimates of decisional self-esteem were then related to depressive symptoms. Depressive symptomatology correlated negatively with perception of self as a decision-maker. Those with higher depression severity scores characterized themselves as being more anxious about decisions and more likely to procrastinate. They also reported using fewer productive decision-making strategies, depending more on other people for help with decisions, and relying less on their own intuitions when making decisions. Further research is needed to determine the extent to which these decision-making styles are antecedents to depressive symptomatology or are instead products of, or aspects of, the phenomenology associated with depression.

  18. A Plastic Cortico-Striatal Circuit Model of Adaptation in Perceptual Decision

    Directory of Open Access Journals (Sweden)

    Pao-Yueh eHsiao

    2013-12-01

    Full Text Available The ability to optimize decisions and adapt them to changing environments is a crucial brain function that increases survivability. Although much has been learned about the neuronal activity in various brain regions that is associated with decision making, and about how nervous systems may learn to achieve optimization, the underlying neuronal mechanisms of how nervous systems optimize decision strategies with preference given to speed or accuracy, and how they adapt to changes in the environment, remain unclear. Based on extensive empirical observations, we addressed the question by extending a previously described cortico-basal ganglia circuit model of perceptual decisions with the inclusion of a dynamic dopamine (DA) system that modulates spike-timing dependent plasticity. We found that, once an optimal model setting that maximized the reward rate was selected, the same setting automatically optimized decisions across different task environments through dynamic balancing between the facilitating and depressing components of the DA dynamics. Interestingly, other model parameters were also optimal if we considered the reward rate weighted by the subject's preferences for speed or accuracy. Specifically, the circuit model favored speed if we increased the phasic DA response to the reward prediction error, whereas the model favored accuracy if we reduced the tonic DA activity or the phasic DA responses to the estimated reward probability. The proposed model provides insight into the roles of different components of DA responses in decision adaptation and optimization in a changing environment.

  19. Age Effects and Heuristics in Decision Making.

    Science.gov (United States)

    Besedeš, Tibor; Deck, Cary; Sarangi, Sudipta; Shor, Mikhael

    2012-05-01

    Using controlled experiments, we examine how individuals make choices when faced with multiple options. Choice tasks are designed to mimic the selection of health insurance, prescription drug, or retirement savings plans. In our experiment, available options can be objectively ranked allowing us to examine optimal decision making. First, the probability of a person selecting the optimal option declines as the number of options increases, with the decline being more pronounced for older subjects. Second, heuristics differ by age with older subjects relying more on suboptimal decision rules. In a heuristics validation experiment, older subjects make worse decisions than younger subjects.

  20. Decision support systems for recovery of endangered species

    International Nuclear Information System (INIS)

    Armstrong, C.E.

    1995-01-01

    The listing of a species as endangered under the Endangered Species Act invokes a suite of responses to help improve conditions for the recovery of that species, including identification of stressors contributing to population loss, decision analysis of the impacts of proposed recovery options, and implementation of optimal recovery measures. The ability of a decision support system to quantify inherent stressor uncertainties and to identify the key stressors that can be controlled or eliminated becomes key to ensuring the recovery of an endangered species. The listing of the sockeye, spring/summer chinook, and fall chinook salmon species in the Snake River as endangered provides a vivid example of the importance of sophisticated decision support systems. Operational and physical changes under consideration at eight of the hydroelectric dams along the Columbia and Lower Snake Rivers pose significant financial impacts for a variety of stakeholders involved in the salmon population recovery process and carry significant uncertainties of outcome. A decision support system is presented to assist in the identification of optimal recovery actions for this example that includes the following: creation of datamarts of information on environmental, engineering, and ecological values that influence species survival; incorporation of decision analysis tools to determine optimal decision policies; and the use of geographic information systems (GIS) to provide a context for decision analysis and to communicate the impacts of decision policies

  1. Information source exploitation/exploration and NPD decision-making

    DEFF Research Database (Denmark)

    Jespersen, Kristina Risom

    The purpose of this study is to examine how the exploration/exploitation continuum is applied by decision-makers in new product gate decision-making. Specifically, we analyze at gate decision-points how the evaluation of a new product project is affected by the information source exploitation... different Scandinavian companies. Data was analyzed using hierarchical regression models across decision criteria dimensions and NPD stages, as well as by analyzing the combination of selected information sources. Rather than forwarding one optimal search behavior for the entire NPD process, we find optimal information search behavior at either end of the exploitation/exploration continuum. Additionally, we find that overexploitation and overexploration are caused by managerial bias. This creates managerial misbehavior at gate decision-points of the NPD process.

  2. Verification and synthesis of optimal decision strategies for complex systems

    International Nuclear Information System (INIS)

    Summers, S. J.

    2013-01-01

    Complex systems make a habit of disagreeing with the mathematical models strategically designed to capture their behavior. A recursive process ensues where data is used to gain insight into the disagreement. A simple model may give way to a model with hybrid dynamics. A deterministic model may give way to a model with stochastic dynamics. In many cases, the modeling framework that sufficiently characterises the system is both hybrid and stochastic; these systems are referred to as stochastic hybrid systems. This dissertation considers the stochastic hybrid system framework for modeling complex systems and provides mathematical methods for analysing, and synthesizing decision laws for, such systems. We first propose a stochastic reach-avoid problem for discrete time stochastic hybrid systems. In particular, we present a dynamic programming based solution to a probabilistic reach-avoid problem for a controlled discrete time stochastic hybrid system. We address two distinct interpretations of the reach-avoid problem via stochastic optimal control. In the first case, a sum-multiplicative cost function is introduced along with a corresponding dynamic recursion that quantifies the probability of hitting a target set at some point during a finite time horizon, while avoiding an unsafe set at all preceding time steps. In the second case, we introduce a multiplicative cost function and a dynamic recursion that quantifies the probability of hitting a target set at the terminal time, while avoiding an unsafe set at all preceding time steps. In each case, optimal reach-avoid control policies are derived as the solution to an optimal control problem via dynamic programming. We next introduce an extension of the reach-avoid problem where we consider the verification of discrete time stochastic hybrid systems when there exists uncertainty in the reachability specifications themselves. A sum multiplicative cost function is introduced along with a corresponding dynamic recursion

  3. Verification and synthesis of optimal decision strategies for complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Summers, S. J.

    2013-07-01

    Complex systems make a habit of disagreeing with the mathematical models strategically designed to capture their behavior. A recursive process ensues where data is used to gain insight into the disagreement. A simple model may give way to a model with hybrid dynamics. A deterministic model may give way to a model with stochastic dynamics. In many cases, the modeling framework that sufficiently characterises the system is both hybrid and stochastic; these systems are referred to as stochastic hybrid systems. This dissertation considers the stochastic hybrid system framework for modeling complex systems and provides mathematical methods for analysing, and synthesizing decision laws for, such systems. We first propose a stochastic reach-avoid problem for discrete time stochastic hybrid systems. In particular, we present a dynamic programming based solution to a probabilistic reach-avoid problem for a controlled discrete time stochastic hybrid system. We address two distinct interpretations of the reach-avoid problem via stochastic optimal control. In the first case, a sum-multiplicative cost function is introduced along with a corresponding dynamic recursion that quantifies the probability of hitting a target set at some point during a finite time horizon, while avoiding an unsafe set at all preceding time steps. In the second case, we introduce a multiplicative cost function and a dynamic recursion that quantifies the probability of hitting a target set at the terminal time, while avoiding an unsafe set at all preceding time steps. In each case, optimal reach-avoid control policies are derived as the solution to an optimal control problem via dynamic programming. We next introduce an extension of the reach-avoid problem where we consider the verification of discrete time stochastic hybrid systems when there exists uncertainty in the reachability specifications themselves. A sum multiplicative cost function is introduced along with a corresponding dynamic recursion

  4. An LWR design decision Methodology

    International Nuclear Information System (INIS)

    Leahy, T.J.; Rees, D.C.; Young, J.

    1982-01-01

    While all parties involved in nuclear plant regulation endeavor to make decisions which optimize the considerations of plant safety and financial impacts, these decisions are generally made without the benefit of a systematic and rigorous approach to the questions confronting the decision makers. A Design Decision Methodology has been developed which provides such a systematic approach. By employing this methodology, which makes use of currently accepted probabilistic risk assessment techniques and cost estimation, informed decisions may be made against a background of comparisons between the relative levels of safety and costs associated with various design alternatives

  5. Development of an integrated economic decision-support tool for the remediation of contaminated sites. Overview note

    International Nuclear Information System (INIS)

    Samson, R.; Bage, G.

    2004-05-01

    This report concludes the first design phase of an innovative software tool which, when completed, will allow managers of contaminated sites to make optimal decisions with respect to site remediation. The principal objective of the project was to develop the foundations for decision-support software (SITE VII) which will allow a comprehensive and rigorous approach to the comparison of remediation scenarios for sites contaminated with petroleum hydrocarbons. During this first phase of the project, the NSERC Industrial Chair in Site Remediation and Management of the Ecole Polytechnique de Montreal has completed four stages in the design of a decision-support tool that could be applied by any site manager using a simple computer. These four stages are: refinement of a technico-economic evaluation model; development of databases for five soil remediation technologies; design of a structure for integration of the databases with the technico-economic model; and simulation of the remediation of a contaminated site using the technico-economic model and a subset of the databases. In the interim report, the emphasis was placed on the development of the technico-economic model, supported by a very simple, single-technology simulation of remediation. In the present report, the priority is placed on the integration of the different components required for the creation of decision-support software based on the technico-economic model. An entire chapter of this report is devoted to elaborating the decision structure of the software. The treatment of information within the software is shown schematically and explained step-by-step. Five remediation technologies are handled by the software: three in-situ technologies (bio-venting, bio-slurping, bio-sparging) and two ex-situ technologies (thermal desorption, Bio-pile treatment). A technology file has been created for each technology, containing a brief description of the technology, its performance, its criteria of applicability

  6. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2017-01-01

    : the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), study of systems of decision rules derived from decision trees

  7. Graph-related optimization and decision support systems

    CERN Document Server

    Krichen, Saoussen

    2014-01-01

    Constrained optimization is a challenging branch of operations research that aims to create a model which has a wide range of applications in the supply chain, telecommunications and medical fields. As the problem structure is split into two main components, the objective is to accomplish the feasible set framed by the system constraints. The aim of this book is to expose optimization problems that can be expressed as graphs, by detailing, for each studied problem, the set of nodes and the set of edges. This graph modeling is an incentive for designing a platform that integrates all optimizatio

  8. Inertia and Decision Making.

    Science.gov (United States)

    Alós-Ferrer, Carlos; Hügelschäfer, Sabine; Li, Jiahui

    2016-01-01

    Decision inertia is the tendency to repeat previous choices independently of the outcome, which can give rise to perseveration in suboptimal choices. We investigate this tendency in probability-updating tasks. Study 1 shows that, whenever decision inertia conflicts with normatively optimal behavior (Bayesian updating), error rates are larger and decisions are slower. This is consistent with a dual-process view of decision inertia as an automatic process conflicting with a more rational, controlled one. We find evidence of decision inertia in both required and autonomous decisions, but the effect of inertia is clearer in the latter. Study 2 considers more complex decision situations where further conflict arises due to reinforcement processes. We find the same effects of decision inertia when reinforcement is aligned with Bayesian updating, but if the latter two processes conflict, the effects are limited to autonomous choices. Additionally, both studies show that the tendency to rely on decision inertia is positively associated with a preference for consistency.

  9. Optimization of temporal networks under uncertainty

    CERN Document Server

    Wiesemann, Wolfram

    2012-01-01

    Many decision problems in Operations Research are defined on temporal networks, that is, workflows of time-consuming tasks whose processing order is constrained by precedence relations. For example, temporal networks are used to model projects, computer applications, digital circuits and production processes. Optimization problems arise in temporal networks when a decision maker wishes to determine a temporal arrangement of the tasks and/or a resource assignment that optimizes some network characteristic (e.g. the time required to complete all tasks). The parameters of these optimization probl

  10. Doubly Bayesian Analysis of Confidence in Perceptual Decision-Making.

    Science.gov (United States)

    Aitchison, Laurence; Bang, Dan; Bahrami, Bahador; Latham, Peter E

    2015-10-01

    Humans stand out from other animals in that they are able to explicitly report on the reliability of their internal operations. This ability, which is known as metacognition, is typically studied by asking people to report their confidence in the correctness of some decision. However, the computations underlying confidence reports remain unclear. In this paper, we present a fully Bayesian method for directly comparing models of confidence. Using a visual two-interval forced-choice task, we tested whether confidence reports reflect heuristic computations (e.g. the magnitude of sensory data) or Bayes optimal ones (i.e. how likely a decision is to be correct given the sensory data). In a standard design in which subjects were first asked to make a decision, and only then gave their confidence, subjects were mostly Bayes optimal. In contrast, in a less-commonly used design in which subjects indicated their confidence and decision simultaneously, they were roughly equally likely to use the Bayes optimal strategy or to use a heuristic but suboptimal strategy. Our results suggest that, while people's confidence reports can reflect Bayes optimal computations, even a small unusual twist or additional element of complexity can prevent optimality.
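
    For readers who want to see what a Bayes optimal confidence report looks like in the simplest setting, the sketch below computes the posterior probability of a correct choice for an equal-variance Gaussian two-interval task. This toy generative model and the parameter names (signal, sigma) are illustrative assumptions, not the paper's model-comparison machinery.

```python
import numpy as np

def bayes_confidence(x1, x2, signal=1.0, sigma=1.0):
    """Bayes-optimal choice and confidence for a two-interval forced-choice trial.

    Assumes the stimulus adds `signal` to exactly one of two noisy samples
    (x1, x2), each corrupted by Gaussian noise with standard deviation sigma,
    and a flat prior over the two intervals.  Confidence is the posterior
    probability that the chosen interval is the correct one.
    """
    # log-likelihood ratio in favour of "the signal was in interval 1"
    llr = signal * (x1 - x2) / sigma**2
    p_interval1 = 1.0 / (1.0 + np.exp(-llr))
    choice = 1 if p_interval1 >= 0.5 else 2
    confidence = max(p_interval1, 1.0 - p_interval1)
    return choice, confidence
```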

  11. Optimizing model. 1. Insemination, replacement, seasonal production and cash flow.

    NARCIS (Netherlands)

    Delorenzo, M.A.; Spreen, T.H.; Bryan, G.R.; Beede, D.K.; Arendonk, van J.A.M.

    1992-01-01

    Dynamic programming to solve the Markov decision process problem of optimal insemination and replacement decisions was adapted to address large dairy herd management decision problems in the US. Expected net present values of cow states (151,200) were used to determine the optimal policy. States

  12. Individual feature identification method for nuclear accident emergency decision-making

    International Nuclear Information System (INIS)

    Chen Yingfeng; Wang Jianlong; Lin Xiaoling; Yang Yongxin; Lu Xincheng

    2014-01-01

    According to the individual feature identification method and combining with the characteristics of nuclear accident emergency decision-making, the evaluation index system of the nuclear accident emergency decision-making was determined on the basis of investigation and analysis. The effectiveness of the nuclear accident emergency decision-making was evaluated based on the individual standards by solving the individual features of the individual standard identification decisions. The case study shows that the optimization result is reasonable, objective and reliable, and it can provide an effective analysis method and decision-making support for optimization of nuclear accident emergency protective measures. (authors)

  13. Optimal scope of supply chain network & operations design

    NARCIS (Netherlands)

    Ma, N.

    2014-01-01

    The increasingly complex supply chain networks and operations call for the development of decision support systems and optimization techniques that take a holistic view of supply chain issues and provide support for integrated decision-making. The economic impacts of optimized supply chain are

  14. Probabilistic Analysis in Management Decision Making

    DEFF Research Database (Denmark)

    Delmar, M. V.; Sørensen, John Dalsgaard

    1992-01-01

    The target group in this paper is people concerned with mathematical economic decision theory. It is shown how the numerically effective First Order Reliability Methods (FORM) can be used in rational management decision making, where some parameters in the applied decision basis are uncertainty...... quantities. The uncertainties are taken into account consistently and the decision analysis is based on the general decision theory in combination with reliability and optimization theory. Examples are shown where the described technique is used and some general conclusions are stated....

  15. Communicating Optimized Decision Input from Stochastic Turbulence Forecasts

    National Research Council Canada - National Science Library

    Szczes, Jeanne R

    2008-01-01

    .... It demonstrates the methodology and importance of incorporating ambiguity, the uncertainty in forecast uncertainty, into the decision making process using the Taijitu method to estimate ambiguity...

  16. Extensions of dynamic programming as a new tool for decision tree optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2013-01-01

    The chapter is devoted to the consideration of two types of decision trees for a given decision table: α-decision trees (the parameter α controls the accuracy of tree) and decision trees (which allow arbitrary level of accuracy). We study

  17. Integrating LCA and Risk Assessment for Decision Support

    DEFF Research Database (Denmark)

    Dong, Yan; Miraglia, Simona; Manzo, Stefano

    The study aims at developing a methodology using decision analysis theory and tools to find the optimal policy (or design) of the studied system, to ensure both sustainability and meanwhile manage risks....

  18. A decision algorithm for patch spraying

    DEFF Research Database (Denmark)

    Christensen, Svend; Heisel, Torben; Walter, Mette

    2003-01-01

    method that estimates an economically optimal herbicide dose according to site-specific weed composition and density is presented in this paper. The method was termed a ‘decision algorithm for patch spraying’ (DAPS) and was evaluated in a 5-year experiment in Denmark. DAPS consists of a competition model......, a herbicide dose–response model and an algorithm that estimates the economically optimal doses. The experiment was designed to compare herbicide treatments with DAPS recommendations and the Danish decision support system PC-Plant Protection. The results did not show any significant grain yield difference...

  19. Applying real options in investment decisions relating to environmental pollution

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Tyrone T. [Department of International Business, National Dong Hwa University, 1, Sec. 2, Da Hsueh Road, Shou-Feng, Hualien 974, Taiwan (China)]. E-mail: tjlin@mail.ndhu.edu.tw; Ko, C.-C. [Department of International Trade, Jin Wen Institute of Technology, Taiwan (China); Yeh, H.-N. [Graduate School of Management, Ming Chuan University, Taiwan (China)

    2007-04-15

    This study focuses on how to assess the optimal environmental investment decisions under economic and ecological uncertainty, and establishes the continuous time model using the real option approach to optimize environmental pollution policy. Unlike traditional cost benefit analysis, this work extends the model of [Pindyck, R.S., 2002. Optimal timing problems in environmental economics. Journal of Economic Dynamics and Control 26(9-10), 1677-1697], and attempts to identify the storage threshold of pollution stocks and the optimal timing for implementing environmental pollution decisions.

  20. Applying real options in investment decisions relating to environmental pollution

    International Nuclear Information System (INIS)

    Lin, Tyrone T.; Ko, C.-C.; Yeh, H.-N.

    2007-01-01

    This study focuses on how to assess the optimal environmental investment decisions under economic and ecological uncertainty, and establishes the continuous time model using the real option approach to optimize environmental pollution policy. Unlike traditional cost benefit analysis, this work extends the model of [Pindyck, R.S., 2002. Optimal timing problems in environmental economics. Journal of Economic Dynamics and Control 26(9-10), 1677-1697], and attempts to identify the storage threshold of pollution stocks and the optimal timing for implementing environmental pollution decisions

  1. Optimal Selection of Clustering Algorithm via Multi-Criteria Decision Analysis (MCDA) for Load Profiling Applications

    Directory of Open Access Journals (Sweden)

    Ioannis P. Panapakidis

    2018-02-01

    Full Text Available Due to high implementation rates of smart meter systems, a considerable amount of research is devoted to machine learning tools for data handling and information retrieval. A key tool in load data processing is clustering. In recent years, a number of studies have proposed different clustering algorithms in the load profiling field. The present paper provides a methodology for addressing the aforementioned problem through Multi-Criteria Decision Analysis (MCDA), namely using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A comparison of the algorithms is performed. Next, a single test case on the selection of an algorithm is examined. User-specific weights are applied, and based on these weight values the optimal algorithm is selected.
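
    As an illustration of the TOPSIS step in such a methodology, the hedged sketch below ranks candidate clustering algorithms from a score matrix and user-specific criterion weights; the matrix layout and the benefit/cost flags are assumptions for the example, not the paper's validity indices.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives (e.g. candidate clustering algorithms) with TOPSIS.

    scores  : (m x n) matrix of m alternatives evaluated on n criteria
    weights : length-n vector of user-specific criterion weights (sums to 1)
    benefit : length-n boolean vector, True where larger values are better
    Returns closeness coefficients; the alternative with the largest value
    is the recommended choice.
    """
    scores = np.asarray(scores, dtype=float)
    # vector normalisation followed by weighting
    norm = scores / np.linalg.norm(scores, axis=0)
    weighted = norm * np.asarray(weights)

    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

    d_best = np.linalg.norm(weighted - ideal, axis=1)
    d_worst = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)
```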

  2. Evaluation of selected environmental decision support software

    International Nuclear Information System (INIS)

    Sullivan, T.M.; Moskowitz, P.D.; Gitten, M.

    1997-06-01

    Decision Support Software (DSS) continues to be developed to support analysis of decisions pertaining to environmental management. Decision support systems are computer-based systems that facilitate the use of data, models, and structured decision processes in decision making. The optimal DSS should attempt to integrate, analyze, and present environmental information to remediation project managers in order to select cost-effective cleanup strategies. The optimal system should have a balance between the sophistication needed to address the wide range of complicated sites and site conditions present at DOE facilities, and ease of use (e.g., the system should not require data that is typically unknown and should have robust error checking of problem definition through input, etc.). In the first phase of this study, an extensive review of the literature, the Internet, and discussions with sponsors and developers of DSS led to identification of approximately fifty software packages that met the preceding definition

  3. Decision support for choice optimal power generation projects: Fuzzy comprehensive evaluation model based on the electricity market

    International Nuclear Information System (INIS)

    Liang Zhihong; Yang Kun; Sun Yaowei; Yuan Jiahai; Zhang Hongwei; Zhang Zhizheng

    2006-01-01

    In 2002, China initiated a restructuring of the electric power sector to improve its performance. In particular, with the rapid increase of electricity demand in China, there is a need for non-utility generation investment that cannot be met by government finance alone. However, a first prerequisite is that regulators and decision-makers (DMs) should carefully consider how to balance the need to attract private investment against the policy objectives of minimizing monopoly power and fostering competitive markets. So in the interim stage of the electricity market, a decentralized decision-making process should eventually replace centralized generation capacity expansion planning. In this paper, firstly, on the basis of the current situation, a model for evaluating generation projects through the combined use of fuzzy appraisal and the analytic hierarchy process (AHP) is developed. Secondly, a case study of generation project evaluation in China is presented to illustrate the effectiveness of the model in selecting optimal generation projects and attracting private investors. In the case study, with a view to attracting adequate private investment and promoting energy conservation in China, the five most promising policy instruments selected as evaluation factors are project duration, project costs, predicted on-grid price level, environmental protection, and enterprise credit grading and performance. Finally, a comprehensive framework that enables the DM to concentrate better and make sounder decisions, by combining the proposed model with modern computer science, is designed

  4. A method for optimizing the performance of buildings

    DEFF Research Database (Denmark)

    Pedersen, Frank

    2007-01-01

    needed for solving the optimization problem. Furthermore, the algorithm uses so-called domain constraint functions in order to ensure that the input to the simulation software is feasible. Using this technique avoids performing time-consuming simulations for unrealistic design decisions. The algorithm......This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects......, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building...

  5. Decision making analysis of walnut seedling production on a small ...

    African Journals Online (AJOL)

    The decision has to be made among these three alternatives, aiming to achieve the optimal/best economic result for the family farm. Summarizing the results obtained from the decision tree, simulation and sensitivity analysis, the optimal solution for the family farm should be to continue production of walnut seedlings with ...

  6. Total Path Length and Number of Terminal Nodes for Decision Trees

    KAUST Repository

    Hussain, Shahid

    2014-09-13

    This paper presents a new tool for the study of relationships between total path length (average depth) and number of terminal nodes for decision trees. These relationships are important from the point of view of optimization of decision trees. In this particular case of total path length and number of terminal nodes, the relationships between these two cost functions are closely related to the space-time trade-off. In addition to an algorithm to compute the relationships, the paper also presents results of experiments with datasets from the UCI ML Repository. These experiments show how the two cost functions behave for a given decision table, and the resulting plots show the Pareto frontier or Pareto set of optimal points. Furthermore, in some cases this Pareto frontier is a singleton, showing the total optimality of decision trees for the given decision table.
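
    The Pareto frontier mentioned above can be extracted from a set of (average depth, number of terminal nodes) pairs with a few lines of code; the sketch below is an illustrative filter assuming both cost functions are to be minimized, not the authors' dynamic-programming tool.

```python
def pareto_frontier(points):
    """Return the Pareto-optimal points when both coordinates are minimised.

    `points` is an iterable of (average_depth, terminal_nodes) pairs, one per
    decision tree built for the same decision table.  A point is kept if no
    other point is at least as good in both coordinates and strictly better
    in one.
    """
    pts = sorted(set(points))          # sort by depth, then by node count
    frontier, best_nodes = [], float("inf")
    for depth, nodes in pts:
        if nodes < best_nodes:         # strictly better in the second coordinate
            frontier.append((depth, nodes))
            best_nodes = nodes
    return frontier

# e.g. pareto_frontier([(2.1, 9), (2.1, 7), (2.4, 5), (3.0, 5)]) -> [(2.1, 7), (2.4, 5)]
```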

  7. Decentralized Channel Decisions of Green Supply Chain in a Fuzzy Decision Making Environment

    Directory of Open Access Journals (Sweden)

    Shengju Sang

    2017-01-01

    Full Text Available This paper considers greening policies in a decentralized channel between one manufacturer and one retailer in a fuzzy decision-making environment. We consider the manufacturing cost and the parameters of the demand function as fuzzy variables. Based on the different market structures, we develop three different fuzzy decentralized decision models. For each case, the expected value, optimistic value and pessimistic value models are formulated, and their optimal solutions are also derived through fuzzy set theory. Finally, three numerical examples are solved to examine the effectiveness of the fuzzy models. The effects of the confidence level of the supply chain members' profits and the fuzziness of parameters on the optimal prices, the level of green innovation, and the fuzzy expected profits of the actors are also analyzed.

  8. Analyzing Decision Logs to Understand Decision Making in Serious Crime Investigations.

    Science.gov (United States)

    Dando, Coral J; Ormerod, Thomas C

    2017-12-01

    Objective To study decision making by detectives when investigating serious crime through the examination of decision logs to explore hypothesis generation and evidence selection. Background Decision logs are used to record and justify decisions made during serious crime investigations. The complexity of investigative decision making is well documented, as are the errors associated with miscarriages of justice and inquests. The use of decision logs has not been the subject of an empirical investigation, yet they offer an important window into the nature of investigative decision making in dynamic, time-critical environments. Method A sample of decision logs from British police forces was analyzed qualitatively and quantitatively to explore hypothesis generation and evidence selection by police detectives. Results Analyses revealed diversity in documentation of decisions that did not correlate with case type and identified significant limitations of the decision log approach to supporting investigative decision making. Differences emerged between experienced and less experienced officers' decision log records in exploration of alternative hypotheses, generation of hypotheses, and sources of evidential inquiry opened over phase of investigation. Conclusion The practical use of decision logs is highly constrained by their format and context of use. Despite this, decision log records suggest that experienced detectives display strategic decision making to avoid confirmation and satisficing, which affect less experienced detectives. Application Potential applications of this research include both training in case documentation and the development of new decision log media that encourage detectives, irrespective of experience, to generate multiple hypotheses and optimize the timely selection of evidence to test them.

  9. Toward Optimal Decision Making among Vulnerable Patients Referred for Cardiac Surgery: A Qualitative Analysis of Patient and Provider Perspectives.

    Science.gov (United States)

    Gainer, Ryan A; Curran, Janet; Buth, Karen J; David, Jennie G; Légaré, Jean-Francois; Hirsch, Gregory M

    2017-07-01

    Comprehension of risks, benefits, and alternative treatment options has been shown to be poor among patients referred for cardiac interventions. Patients' values and preferences are rarely explicitly sought. An increasing proportion of frail and older patients are undergoing complex cardiac surgical procedures with increased risk of both mortality and prolonged institutional care. We sought input from patients and caregivers to determine the optimal approach to decision making in this vulnerable patient population. Focus groups were held with both providers and former patients. Three focus groups were convened for Coronary Artery Bypass Graft (CABG), Valve, or CABG +Valve patients ≥ 70 y old (2-y post-op, ≤ 8-wk post-op, complicated post-op course) (n = 15). Three focus groups were convened for Intermediate Medical Care Unit (IMCU) nurses, Intensive Care Unit (ICU) nurses, surgeons, anesthesiologists and cardiac intensivists (n = 20). We used a semi-structured interview format to ask questions surrounding the informed consent process. Transcribed audio data was analyzed to develop consistent and comprehensive themes. We identified 5 main themes that influence the decision making process: educational barriers, educational facilitators, patient autonomy and perceived autonomy, patient and family expectations of care, and decision making advocates. All themes were influenced by time constraints experienced in the current consent process. Patient groups expressed a desire to receive information earlier in their care to allow time to identify personal values and preferences in developing plans for treatment. Both groups strongly supported a formal approach for shared decision making with a decisional coach to provide information and facilitate communication with the care team. Identifying the barriers and facilitators to patient and caretaker engagement in decision making is a key step in the development of a structured, patient-centered SDM approach. Intervention

  10. Optimizing Decision Preparedness by Adapting Scenario Complexity and Automating Scenario Generation

    Science.gov (United States)

    Dunne, Rob; Schatz, Sae; Flore, Stephen M.; Nicholson, Denise

    2011-01-01

    Klein's recognition-primed decision (RPD) framework proposes that experts make decisions by recognizing similarities between current decision situations and previous decision experiences. Unfortunately, military personnel are often presented with situations that they have not experienced before. Scenario-based training (SBT) can help mitigate this gap. However, SBT remains a challenging and inefficient training approach. To address these limitations, the authors present an innovative formulation of scenario complexity that contributes to the larger research goal of developing an automated scenario generation system. This system will enable trainees to effectively advance through a variety of increasingly complex decision situations and experiences. By adapting scenario complexities and automating generation, trainees will be provided with a greater variety of appropriately calibrated training events, thus broadening their repositories of experience. Preliminary results from empirical testing (N=24) of the proof-of-concept formula are presented, and future avenues of scenario complexity research are also discussed.

  11. State dependent optimization of measurement policy

    Science.gov (United States)

    Konkarikoski, K.

    2010-07-01

    Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. An investment cost and maintenance costs are associated with each component of the measurement system. Clearly, there is - under a given set of scenarios - a measurement setup that is optimal in expected (discounted) utility. This paper deals with how the measurement policy optimization is affected by different system states and how this problem can be tackled.

  12. State dependent optimization of measurement policy

    International Nuclear Information System (INIS)

    Konkarikoski, K

    2010-01-01

    Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. An investment cost and maintenance costs are associated with each component of the measurement system. Clearly, there is - under a given set of scenarios - a measurement setup that is optimal in expected (discounted) utility. This paper deals with how the measurement policy optimization is affected by different system states and how this problem can be tackled.

  13. Optimization for decision making linear and quadratic models

    CERN Document Server

    Murty, Katta G

    2010-01-01

    While maintaining the rigorous linear programming instruction required, Murty's new book is unique in its focus on developing modeling skills to support valid decision-making for complex real world problems, and includes solutions to brand new algorithms.

  14. Bayesian Decision Theoretical Framework for Clustering

    Science.gov (United States)

    Chen, Mo

    2011-01-01

    In this thesis, we establish a novel probabilistic framework for the data clustering problem from the perspective of Bayesian decision theory. The Bayesian decision theory view justifies the important questions: what is a cluster and what a clustering algorithm should optimize. We prove that the spectral clustering (to be specific, the…

  15. Learning to maximize reward rate: a model based on semi-Markov decision processes.

    Science.gov (United States)

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time they should spend on each decision in order to achieve the maximum possible total outcome. Deliberating more on one decision usually leads to a better outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of the decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of the decision thresholds until it finally finds the optimal values.
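
    A minimal sketch of the idea, rather than the authors' SMDP model, is given below: conditions are treated as states, candidate thresholds as actions, and an epsilon-greedy tabular learner scores each threshold by the trial reward minus the running reward-rate estimate times the trial duration. The simulator run_trial and its return values are hypothetical placeholders.

```python
import random
from collections import defaultdict

def learn_thresholds(run_trial, conditions, thresholds,
                     n_trials=20000, alpha=0.05, epsilon=0.1):
    """Tabular average-reward learning of one decision threshold per condition.

    `run_trial(condition, threshold)` is assumed to simulate one trial of a
    sequential-sampling model and return (reward, duration).  Conditions are
    assumed to be drawn independently on every trial, so an action's value is
    summarised by reward minus the opportunity cost rho * duration, with rho
    the running estimate of the overall reward rate.
    """
    Q = defaultdict(float)                 # Q[(condition, threshold)]
    total_reward, total_time, rho = 0.0, 1e-9, 0.0

    for _ in range(n_trials):
        c = random.choice(conditions)
        if random.random() < epsilon:      # explore
            th = random.choice(thresholds)
        else:                              # exploit the current estimates
            th = max(thresholds, key=lambda t: Q[(c, t)])
        reward, duration = run_trial(c, th)

        total_reward += reward
        total_time += duration
        rho = total_reward / total_time    # overall reward-rate estimate
        Q[(c, th)] += alpha * ((reward - rho * duration) - Q[(c, th)])

    return {c: max(thresholds, key=lambda t: Q[(c, t)]) for c in conditions}, rho
```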

  16. A web-based Decision Support System for the optimal management of construction and demolition waste.

    Science.gov (United States)

    Banias, G; Achillas, Ch; Vlachokostas, Ch; Moussiopoulos, N; Papaioannou, I

    2011-12-01

    Wastes from construction activities nowadays constitute the largest fraction by quantity of solid wastes in urban areas. In addition, it is widely accepted that this particular waste stream contains hazardous materials, such as insulating materials, plastic frames of doors, windows, etc. Their uncontrolled disposal results in long-term pollution costs, resource overuse and wasted energy. Within the framework of the DEWAM project, a web-based Decision Support System (DSS) application - namely DeconRCM - has been developed, aiming towards the identification of the optimal construction and demolition waste (CDW) management strategy that minimises end-of-life costs and maximises the recovery of salvaged building materials. This paper addresses both the technical and the functional structure of the developed web-based application. The web-based DSS provides an accurate estimation of the generated CDW quantities for twenty-one different waste streams (e.g. concrete, bricks, glass, etc.) for four different types of buildings (residential, office, commercial and industrial). With the use of mathematical programming, DeconRCM also provides the user with the optimal end-of-life management alternative, taking into consideration both economic and environmental criteria. The DSS's capabilities are illustrated through a real-world case study of a typical five-floor apartment building in Thessaloniki, Greece. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Frequencies of decision making and monitoring in adaptive resource management.

    Directory of Open Access Journals (Sweden)

    Byron K Williams

    Full Text Available Adaptive management involves learning-oriented decision making in the presence of uncertainty about the responses of a resource system to management. It is implemented through an iterative sequence of decision making, monitoring and assessment of system responses, and incorporating what is learned into future decision making. Decision making at each point is informed by a value or objective function, for example total harvest anticipated over some time frame. The value function expresses the value associated with decisions, and it is influenced by system status as updated through monitoring. Often, decision making follows shortly after a monitoring event. However, it is certainly possible for the cadence of decision making to differ from that of monitoring. In this paper we consider different combinations of annual and biennial decision making, along with annual and biennial monitoring. With biennial decision making decisions are changed only every other year; with biennial monitoring field data are collected only every other year. Different cadences of decision making combine with annual and biennial monitoring to define 4 scenarios. Under each scenario we describe optimal valuations for active and passive adaptive decision making. We highlight patterns in valuation among scenarios, depending on the occurrence of monitoring and decision making events. Differences between years are tied to the fact that every other year a new decision can be made no matter what the scenario, and state information is available to inform that decision. In the subsequent year, however, in 3 of the 4 scenarios either a decision is repeated or monitoring does not occur (or both). There are substantive differences in optimal values among the scenarios, as well as the optimal policies producing those values. Especially noteworthy is the influence of monitoring cadence on valuation in some years. We highlight patterns in policy and valuation among the scenarios, and

  18. Frequencies of decision making and monitoring in adaptive resource management

    Science.gov (United States)

    Williams, Byron K.; Johnson, Fred A.

    2017-01-01

    Adaptive management involves learning-oriented decision making in the presence of uncertainty about the responses of a resource system to management. It is implemented through an iterative sequence of decision making, monitoring and assessment of system responses, and incorporating what is learned into future decision making. Decision making at each point is informed by a value or objective function, for example total harvest anticipated over some time frame. The value function expresses the value associated with decisions, and it is influenced by system status as updated through monitoring. Often, decision making follows shortly after a monitoring event. However, it is certainly possible for the cadence of decision making to differ from that of monitoring. In this paper we consider different combinations of annual and biennial decision making, along with annual and biennial monitoring. With biennial decision making decisions are changed only every other year; with biennial monitoring field data are collected only every other year. Different cadences of decision making combine with annual and biennial monitoring to define 4 scenarios. Under each scenario we describe optimal valuations for active and passive adaptive decision making. We highlight patterns in valuation among scenarios, depending on the occurrence of monitoring and decision making events. Differences between years are tied to the fact that every other year a new decision can be made no matter what the scenario, and state information is available to inform that decision. In the subsequent year, however, in 3 of the 4 scenarios either a decision is repeated or monitoring does not occur (or both). There are substantive differences in optimal values among the scenarios, as well as the optimal policies producing those values. Especially noteworthy is the influence of monitoring cadence on valuation in some years. We highlight patterns in policy and valuation among the scenarios, and discuss management

  19. A fuzzy decision making method for outsourcing activities

    Directory of Open Access Journals (Sweden)

    Zahra Afrandkhalilabad

    2012-09-01

    Full Text Available Optimization of outsourcing operations plays an important role in the development and progress of modern organizations. One important question in the optimization process is to find a tradeoff between the advantages and disadvantages of outsourcing and to make an appropriate decision whenever outsourcing is necessary. In fact, there are several cases where outsourcing is not implemented properly and organizations suffer from the consequences. The primary purpose of this paper is to investigate various aspects of outsourcing to facilitate the decision-making process in fuzzy environments. The preliminary results identify some of the necessary actions for decision-making operations. DOI: 10.5267/j.msl.2012.10.005

  20. Engaging Gatekeepers, Optimizing Decision Making, and Mitigating Bias: Design Specifications for Systemic Diversity Interventions.

    Science.gov (United States)

    Vinkenburg, Claartje J

    2017-06-01

    In this contribution to the Journal of Applied Behavioral Science Special Issue on Understanding Diversity Dynamics in Systems: Social Equality as an Organization Change Issue, I develop and describe design specifications for systemic diversity interventions in upward mobility career systems, aimed at optimizing decision making through mitigating bias by engaging gatekeepers. These interventions address the paradox of meritocracy that underlies the surprising lack of diversity at the top of the career pyramid in these systems. I ground the design specifications in the limited empirical evidence on "what works" in systemic interventions. Specifically, I describe examples from interventions in academic settings, including a bias literacy program, participatory modeling, and participant observation. The design specifications, paired with inspirational examples of successful interventions, should assist diversity officers and consultants in designing and implementing interventions to promote the advancement to and representation of nondominant group members at the top of the organizational hierarchy.

  1. Optimization of scintillator loading with the tellurium-130 isotope for long-term stability

    Science.gov (United States)

    Duhamel, Lauren; Song, Xiaoya; Goutnik, Michael; Kaptanoglu, Tanner; Klein, Joshua; SNO+ Collaboration

    2017-09-01

    Tellurium-130 was selected as the isotope for the SNO+ neutrinoless double beta decay search, as 130Te decays to 130Xe via double beta decay. Linear alkyl benzene (LAB) is the liquid scintillator for the SNO+ experiment. To load tellurium into the scintillator, it is combined with 1,2-butanediol to form an organometallic complex, commonly called tellurium butanediol (TeBD). This study focuses on maximizing the percentage of tellurium loaded into the scintillator and evaluates the complex's long-term stability. Studies on the effect of nucleation due to imperfections in the detector's surface and external particulates were carried out using filtration and induced nucleation. The impact of water on the stability of the TeBD complex was evaluated by liquid-nitrogen sparging, variability in pH and induced humidity. Alternative loading methods were evaluated, including the addition of stability-inducing organic compounds. Samples of tellurium-loaded scintillator were synthesized, treated, and consistently monitored in a controlled environment. It was found that hydronium ions cause precipitation in the loaded scintillator, demonstrating that water has a detrimental effect on long-term stability. Optimization of loaded-scintillator stability can contribute to the SNO+ double beta decay search.

  2. Goal-Directed Decision Making with Spiking Neurons.

    Science.gov (United States)

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules

  3. New Min-Max Approach to Optimal Choice of the Weights in Multi-Criteria Group Decision-Making Problems

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2015-11-01

    Full Text Available In multi-criteria group decision-making (MCGDM), one of the most important problems is to determine the weights of criteria and experts. This paper intends to present two Min-Max models to optimize the point estimates of the weights. Since each expert generally possesses a uniform viewpoint on the importance (weighted value) of each criterion when he/she needs to rank the alternatives, the objective function in the first model is to minimize the maximum variation between the actual score vector and the ideal one for all the alternatives, such that the optimal weights of criteria are consistent in ranking all the alternatives for the same expert. The second model is designed to optimize the weights of experts such that the obtained overall evaluation for each alternative can reflect the perspectives of as many experts as possible. Thus, the objective function in the second model is to minimize the maximum variation between the actual vector of evaluations and the ideal one for all the experts, such that the optimal weights can reduce the difference among the experts in evaluating the same alternative. For the constructed Min-Max models, another focus in this paper is on the development of an efficient algorithm for the optimal weights. Some applications are employed to show the significance of the models and the algorithm. From the numerical results, it is clear that the developed Min-Max models solve MCGDM problems, including those with incomplete score matrices, more effectively than the methods available in the literature. Specifically, by the proposed method, (1) the evaluation uniformity of each expert on the same criteria is guaranteed; (2) the overall evaluation for each alternative reflects the judgements of as many experts as possible; (3) the highest discrimination degree of the alternatives is obtained.

  4. A method for optimizing the performance of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, Frank

    2006-07-01

    This thesis describes a method for optimizing the performance of buildings. Design decisions made in early stages of the building design process have a significant impact on the performance of buildings, for instance, the performance with respect to the energy consumption, economical aspects, and the indoor environment. The method is intended for supporting design decisions for buildings, by combining methods for calculating the performance of buildings with numerical optimization methods. The method is able to find optimum values of decision variables representing different features of the building, such as its shape, the amount and type of windows used, and the amount of insulation used in the building envelope. The parties who influence design decisions for buildings, such as building owners, building users, architects, consulting engineers, contractors, etc., often have different and to some extent conflicting requirements to buildings. For instance, the building owner may be more concerned about the cost of constructing the building, rather than the quality of the indoor climate, which is more likely to be a concern of the building user. In order to support the different types of requirements made by decision-makers for buildings, an optimization problem is formulated, intended for representing a wide range of design decision problems for buildings. The problem formulation involves so-called performance measures, which can be calculated with simulation software for buildings. For instance, the annual amount of energy required by the building, the cost of constructing the building, and the annual number of hours where overheating occurs, can be used as performance measures. The optimization problem enables the decision-makers to specify many different requirements to the decision variables, as well as to the performance of the building. Performance measures can for instance be required to assume their minimum or maximum value, they can be subjected to upper or

  5. Rational Decision-Making in Inhibitory Control

    Science.gov (United States)

    Shenoy, Pradeep; Yu, Angela J.

    2011-01-01

    An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability. PMID:21647306

  6. Rational decision-making in inhibitory control.

    Science.gov (United States)

    Shenoy, Pradeep; Yu, Angela J

    2011-01-01

    An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.

  7. A Selection Approach for Optimized Problem-Solving Process by Grey Relational Utility Model and Multicriteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Chih-Kun Ke

    2012-01-01

    Full Text Available In business enterprises, especially the manufacturing industry, various problem situations may occur during the production process. A situation denotes an evaluation point to determine the status of a production process. A problem may occur if there is a discrepancy between the actual situation and the desired one. Thus, a problem-solving process is often initiated to achieve the desired situation. In this process, determining which action needs to be taken to resolve the situation becomes an important issue. Therefore, this work uses a selection approach for an optimized problem-solving process to assist workers in taking a reasonable action. A grey relational utility model and a multicriteria decision analysis are used to determine the optimal selection order of candidate actions. The selection order is presented to the worker as an adaptive recommended solution. The worker chooses a reasonable problem-solving action based on the selection order. This work uses a high-tech company’s knowledge base log as the analysis data. Experimental results demonstrate that the proposed selection approach is effective.
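
    A grey relational utility calculation of the kind referred to above can be sketched as follows; the normalisation, the ideal reference sequence and the distinguishing coefficient value are standard grey-relational-analysis conventions assumed for illustration, not details taken from the paper.

```python
import numpy as np

def grey_relational_grades(scores, weights=None, zeta=0.5):
    """Rank candidate problem-solving actions by grey relational grade.

    scores  : (m x n) matrix, m candidate actions scored on n criteria,
              with every criterion already oriented so larger is better.
    weights : optional length-n criterion weights (defaults to equal weights).
    zeta    : distinguishing coefficient, conventionally 0.5.
    """
    x = np.asarray(scores, dtype=float)
    # normalise each criterion to [0, 1]
    x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)
    reference = x.max(axis=0)              # ideal (reference) sequence
    delta = np.abs(reference - x)
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    if weights is None:
        weights = np.full(x.shape[1], 1.0 / x.shape[1])
    return coeff @ np.asarray(weights)     # one grade per candidate action
```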

  8. Mean-Variance Optimization in Markov Decision Processes

    OpenAIRE

    Mannor, Shie; Tsitsiklis, John N.

    2011-01-01

    We consider finite horizon Markov decision processes under performance measures that involve both the mean and the variance of the cumulative reward. We show that either randomized or history-based policies can improve performance. We prove that the complexity of computing a policy that maximizes the mean reward under a variance constraint is NP-hard for some cases, and strongly NP-hard for others. We finally offer pseudo-polynomial exact and approximation algorithms.

  9. Multistage stochastic optimization

    CERN Document Server

    Pflug, Georg Ch

    2014-01-01

    Multistage stochastic optimization problems appear in many ways in finance, insurance, energy production and trading, logistics and transportation, among other areas. They describe decision situations under uncertainty and with a longer planning horizon. This book contains a comprehensive treatment of today’s state of the art in multistage stochastic optimization.  It covers the mathematical backgrounds of approximation theory as well as numerous practical algorithms and examples for the generation and handling of scenario trees. A special emphasis is put on estimation and bounding of the modeling error using novel distance concepts, on time consistency and the role of model ambiguity in the decision process. An extensive treatment of examples from electricity production, asset liability management and inventory control concludes the book

  10. The anatomy of choice: dopamine and decision-making.

    Science.gov (United States)

    Friston, Karl; Schwartenbeck, Philipp; FitzGerald, Thomas; Moutoussis, Michael; Behrens, Timothy; Dolan, Raymond J

    2014-11-05

    This paper considers goal-directed decision-making in terms of embodied or active inference. We associate bounded rationality with approximate Bayesian inference that optimizes a free energy bound on model evidence. Several constructs such as expected utility, exploration or novelty bonuses, softmax choice rules and optimism bias emerge as natural consequences of free energy minimization. Previous accounts of active inference have focused on predictive coding. In this paper, we consider variational Bayes as a scheme that the brain might use for approximate Bayesian inference. This scheme provides formal constraints on the computational anatomy of inference and action, which appear to be remarkably consistent with neuroanatomy. Active inference contextualizes optimal decision theory within embodied inference, where goals become prior beliefs. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (associated with softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution. Crucially, this sensitivity corresponds to the precision of beliefs about behaviour. The changes in precision during variational updates are remarkably reminiscent of empirical dopaminergic responses-and they may provide a new perspective on the role of dopamine in assimilating reward prediction errors to optimize decision-making.

  11. The optimization model of the logging machinery usage in forestry practice

    Directory of Open Access Journals (Sweden)

    Jitka Janová

    2009-01-01

    Full Text Available The decision support systems commonly used in industrial and economic managerial practice for optimizing processes are based on the algorithmization of typical decision problems. In the Czech forestry business, there is a lack of developed decision support systems that could be easily used in daily practice. This stems from the fact that the application of optimization methods is less successful in forestry decision making than in industry or economics, due to the inherent complexity of forestry decision problems. There is ongoing research worldwide on optimization models applicable in forestry decision making, but the results are not globally applicable and, moreover, the cost of the resulting software tools is not negligible. Small and medium forestry companies in the Czech Republic in particular cannot afford such additional costs, although the results of optimization could positively influence not only the business itself but also the impact of the forestry business on the environment. Hence there is a need for user-friendly optimization models for forestry decision making in the Czech Republic, which could be easily solved in commonly available software and whose results would be both realistic and easily applicable in daily decision making. The aim of this paper is to develop an optimization model for machinery use planning in a Czech logging firm in such a way that the results can be obtained using MS EXCEL. The goal is to identify the integer number of particular machines that should be outsourced for the next period when total cost minimization is required. The linear programming model is designed to cover the typical restrictions on available machinery and the total volume of trees to be cut and transported. The model offers an additional result in the form of the optimal employment of particular machines. The solution procedure is described in detail and the results obtained are discussed with respect to their applicability in
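
    To make the machinery-use decision concrete, the sketch below solves a toy version of the integer problem by brute-force enumeration: choose how many units of each machine type to outsource so that the required harvest volume is covered at minimum total cost. The machine names, costs and capacities are invented for illustration; the paper's own formulation is a linear program solved in MS EXCEL.

```python
from itertools import product

def optimal_machine_mix(machines, required_volume):
    """Brute-force search for the cheapest integer mix of outsourced machines.

    `machines` maps a machine type to a dict with its per-period cost,
    per-period capacity (volume of timber cut and transported) and the maximum
    number of units that can be hired; `required_volume` is the total volume
    to be processed in the planning period.  All figures are illustrative.
    """
    names = list(machines)
    ranges = [range(machines[n]["max_units"] + 1) for n in names]
    best_cost, best_mix = float("inf"), None
    for counts in product(*ranges):
        capacity = sum(c * machines[n]["capacity"] for c, n in zip(counts, names))
        if capacity < required_volume:
            continue                      # the harvest target is not met
        cost = sum(c * machines[n]["cost"] for c, n in zip(counts, names))
        if cost < best_cost:
            best_cost, best_mix = cost, dict(zip(names, counts))
    return best_mix, best_cost

# e.g. optimal_machine_mix({"harvester": {"cost": 900, "capacity": 450, "max_units": 3},
#                           "forwarder": {"cost": 600, "capacity": 300, "max_units": 4}}, 1500)
```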

  12. (Too) optimistic about optimism: the belief that optimism improves performance.

    Science.gov (United States)

    Tenney, Elizabeth R; Logg, Jennifer M; Moore, Don A

    2015-03-01

    A series of experiments investigated why people value optimism and whether they are right to do so. In Experiments 1A and 1B, participants prescribed more optimism for someone implementing decisions than for someone deliberating, indicating that people prescribe optimism selectively, when it can affect performance. Furthermore, participants believed optimism improved outcomes when a person's actions had considerable, rather than little, influence over the outcome (Experiment 2). Experiments 3 and 4 tested the accuracy of this belief; optimism improved persistence, but it did not improve performance as much as participants expected. Experiments 5A and 5B found that participants overestimated the relationship between optimism and performance even when their focus was not on optimism exclusively. In summary, people prescribe optimism when they believe it has the opportunity to improve the chance of success-unfortunately, people may be overly optimistic about just how much optimism can do. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  13. Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach.

    Science.gov (United States)

    Bennett, Casey C; Hauser, Kris

    2013-01-01

    In the modern healthcare system, rapidly expanding costs/complexity, the growing myriad of treatment options, and exploding information streams that often do not effectively reach the front lines hinder the ability to choose optimal treatment decisions over time. The goal in this paper is to develop a general purpose (non-disease-specific) computational/artificial intelligence (AI) framework to address these challenges. This framework serves two potential functions: (1) a simulation environment for exploring various healthcare policies, payment methodologies, etc., and (2) the basis for clinical artificial intelligence - an AI that can "think like a doctor". This approach combines Markov decision processes and dynamic decision networks to learn from clinical data and develop complex plans via simulation of alternative sequential decision paths while capturing the sometimes conflicting, sometimes synergistic interactions of various components in the healthcare system. It can operate in partially observable environments (in the case of missing observations or data) by maintaining belief states about patient health status and functions as an online agent that plans and re-plans as actions are performed and new observations are obtained. This framework was evaluated using real patient data from an electronic health record. The results demonstrate the feasibility of this approach; such an AI framework easily outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service models of healthcare. The cost per unit of outcome change (CPUC) was $189 vs. $497 for AI vs. TAU (where lower is considered optimal) - while at the same time the AI approach could obtain a 30-35% increase in patient outcomes. Tweaking certain AI model parameters could further enhance this advantage, obtaining approximately 50% more improvement (outcome change) for roughly half the costs. Given careful design and problem formulation, an AI simulation framework can approximate optimal
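
    The fully observed core of such a framework is ordinary value iteration over treatment states and actions; the sketch below illustrates only that core, leaving out the dynamic decision networks and belief-state tracking described above. The reward matrix combining outcome change and cost is an assumed input.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Infinite-horizon value iteration for a finite Markov decision process.

    P[a] is an (n x n) matrix of transition probabilities under action a,
    R is an (n x num_actions) matrix of expected immediate rewards (e.g. a
    weighted combination of patient outcome change and negative cost), and
    gamma discounts future treatment steps.  Returns the optimal state values
    and a greedy treatment policy.
    """
    n, num_actions = R.shape
    V = np.zeros(n)
    while True:
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(num_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```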

  14. CREATION OF OPTIMAL PERFORMANCE OF AN INVESTMENT PROJECT

    Directory of Open Access Journals (Sweden)

    Višnja Vojvodić Rosenzweig

    2010-12-01

    Full Text Available The selection of an investment project is formulated as a multi-criteria decision-making problem. This paper presents a case in which the decision-maker uses nine criteria or rather attributes (Net Present Value, Internal Rate of Return, Payback Period, Accounting Rate of Return, Cumulative Cash Flows, Return on Investment, Net Profit Margin, Interest Coverage Ratio and Current Ratio). Individual utility functions are constructed for each attribute separately, as well as a global utility function representing a weighted sum of individual utility functions. For every attribute a finite set of ordered pairs or utility points is determined, taking into account the decision-maker’s assessment. The given points are then approximated by the utility function. Finally, according to the decision-maker’s assessment the optimization problem is solved with the purpose of achieving an optimal performance for each project. By way of negotiation the performances on offer approach the optimal performance of the project with the purpose of realising an agreement between the decision-maker and the investor.
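    The construction the abstract describes, individual utilities interpolated between assessed utility points and aggregated by weights, can be sketched as below. The attributes, utility points and weights are hypothetical, not the ones elicited in the paper; NumPy is assumed to be available.

```python
# Sketch of a weighted additive utility over a few attributes (hypothetical points and
# weights, not those elicited in the paper). Each attribute's utility is interpolated
# piecewise-linearly between assessed (attribute value, utility) points.
import numpy as np

# Assessed utility points per attribute: attribute values and their utilities.
utility_points = {
    "NPV_mEUR":   ([0.0, 2.0, 5.0, 10.0], [0.0, 0.4, 0.8, 1.0]),
    "IRR_pct":    ([5.0, 10.0, 15.0, 25.0], [0.0, 0.5, 0.8, 1.0]),
    "payback_yr": ([8.0, 5.0, 3.0, 1.0],  [0.0, 0.4, 0.8, 1.0]),  # shorter is better
}
weights = {"NPV_mEUR": 0.5, "IRR_pct": 0.3, "payback_yr": 0.2}

def global_utility(project):
    total = 0.0
    for attr, (xs, us) in utility_points.items():
        xs, us = np.asarray(xs, float), np.asarray(us, float)
        order = np.argsort(xs)                   # np.interp needs increasing x
        u = np.interp(project[attr], xs[order], us[order])
        total += weights[attr] * u
    return total

project = {"NPV_mEUR": 4.0, "IRR_pct": 12.0, "payback_yr": 4.0}
print(f"global utility = {global_utility(project):.3f}")
```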

  15. Product portfolio optimization based on substitution

    DEFF Research Database (Denmark)

    Myrodia, Anna; Moseley, A.; Hvam, Lars

    2017-01-01

    The development of production capabilities has led to a proliferation of the product variety offered to the customer. Yet this does not directly imply an increase in manufacturers' profitability, nor in customers' satisfaction. Consequently, recent research focuses on portfolio optimization through...... substitution and standardization techniques. However, when the portfolio is re-defined, the strategic market decisions are characterized by uncertainty due to several parameters. In this study, using a GAMS optimization model, we present a method for supporting strategic decisions on substitution, by quantifying the impact...

  16. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featuring low cost, high timeliness and moderate/low spatial resolution, were first used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP) (i.e., the fraction/percentage of winter wheat planting area in each pixel) as a regionalized indicator variable (RIV) for spatial sampling, selected from the initial candidate indicators. The RIV values were then analyzed spatially, and the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP were obtained and further processed into scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, building on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes, together with their optimization and optimal selection, were developed, providing a scientific basis for improving and optimizing the existing spatial sampling schemes of large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results were able to implement an adaptive reporting pattern of spatial sampling in accordance with the report-covering units, in order to satisfy the actual needs of sampling surveys.

  17. Pricing and collecting decisions in a closed-loop supply chain with symmetric and asymmetric information

    DEFF Research Database (Denmark)

    Wei, Jie; Govindan, Kannan; Li, Yongjian

    2015-01-01

    The optimal decision problem of a closed-loop supply chain with symmetric and asymmetric information structures is considered using game theory in this paper. The paper aims to explore how the manufacturer and the retailer make their own decisions about wholesale price, retail price, and collection rate under symmetric and asymmetric information conditions. Four game models are established, which allow one to examine the strategies of each firm and explore the role of the manufacturer and the retailer in four different game scenarios under symmetric and asymmetric information structures. The optimal strategies in closed form are given under the decision scenarios with symmetric information; moreover, the first-order conditions that the optimal retail price, optimal wholesale price, and optimal collection rate satisfy are given under the decision scenarios with asymmetric information...

  18. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
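    The multivariate approach described, sampling the uncertain parameters jointly, re-solving the model, and recording how often the base-case policy remains optimal, can be illustrated on a toy MDP. Everything in the sketch below (states, rewards, Beta priors) is hypothetical and only demonstrates the mechanics, not the authors' implementation.

```python
# Sketch of a probabilistic (multivariate) sensitivity analysis for a tiny MDP:
# sample uncertain transition probabilities, re-solve the MDP each time, and report
# how often the base-case optimal policy stays optimal. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
gamma, n_samples = 0.97, 1000
actions = ["conservative", "aggressive"]

def solve_policy(p_stay_cons, p_stay_aggr):
    # Two states: 0 = sick, 1 = healthy. Each action has a chance of leaving "sick".
    P = {
        "conservative": np.array([[p_stay_cons, 1 - p_stay_cons], [0.1, 0.9]]),
        "aggressive":   np.array([[p_stay_aggr, 1 - p_stay_aggr], [0.1, 0.9]]),
    }
    R = {"conservative": np.array([0.0, 1.0]), "aggressive": np.array([-0.3, 1.0])}
    V = np.zeros(2)
    for _ in range(300):                        # plain value iteration
        Q = np.array([R[a] + gamma * P[a] @ V for a in actions])
        V = Q.max(axis=0)
    return tuple(Q.argmax(axis=0))              # policy as action index per state

base_policy = solve_policy(0.7, 0.5)            # base-case parameter estimates

agree = 0
for _ in range(n_samples):
    # Joint uncertainty: Beta draws roughly centred on the base-case values.
    sample_policy = solve_policy(rng.beta(7, 3), rng.beta(5, 5))
    agree += (sample_policy == base_policy)

print("base-case optimal policy:", [actions[i] for i in base_policy])
print(f"probability the base policy remains optimal: {agree / n_samples:.2%}")
```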

  19. the influence of dwelling place and self- efficacy on career decision

    African Journals Online (AJOL)

    Ada

    Results were discussed in context relating to theories and previous findings on career decision making. The findings ... decisions. KEY WORDS: Dwelling place, Self- efficacy, Career, Career decision, Decision making. INTRODUCTION. What children will be when they grow up ..... emotions, dispositional, optimism and work.

  20. An Integrated Decision-Making Model for Categorizing Weather Products and Decision Aids

    Science.gov (United States)

    Elgin, Peter D.; Thomas, Rickey P.

    2004-01-01

    The National Airspace System's capacity will experience considerable growth in the next few decades. Weather adversely affects safe air travel. The FAA and NASA are working to develop new technologies that display weather information to support situation awareness and optimize pilot decision-making in avoiding hazardous weather. Understanding situation awareness and naturalistic decision-making is an important step in achieving this goal. Information representation and situation time stress greatly influence attentional resource allocation and working memory capacity, potentially obstructing accurate situation awareness assessments. Three naturalistic decision-making theories were integrated to provide an understanding of the levels of decision making incorporated in three operational situations and two conditions. The task characteristics associated with each phase of flight govern the level of situation awareness attained and the decision making processes utilized. Weather product attributes and situation task characteristics combine to classify weather products according to the decision-making processes best supported. In addition, a graphical interface is described that affords intuitive selection of the appropriate weather product relative to the pilot's current flight situation.

  1. Management decision of optimal recharge water in groundwater artificial recharge conditions- A case study in an artificial recharge test site

    Science.gov (United States)

    He, H. Y.; Shi, X. F.; Zhu, W.; Wang, C. Q.; Ma, H. W.; Zhang, W. J.

    2017-11-01

    The city conducted a groundwater artificial recharge test, taking a typical site as an example, with the purpose of preventing and controlling land subsidence and increasing the amount of groundwater resources. To protect groundwater environmental quality and safety, the city chose tap water as the recharge water; however, its high cost is not conducive to the optimal allocation of water resources and makes wide application impractical. To address this, the city selected two major surface waters, Rivers A and B, as the proposed recharge water and explored their feasibility. Based on a comprehensive analysis of recharge cost, water transport distance, recharge water quality and other factors, the entropy-weight fuzzy comprehensive evaluation method is used to rank tap water and the waters of Rivers A and B. The evaluation results show that the water of River B is the optimal recharge water; if it is used, the recharge cost will fall from 0.4724/m3 to 0.3696/m3. Using the entropy-weight fuzzy comprehensive evaluation method to confirm the water of River B as the optimal recharge water is scientific and reasonable. The optimal water management decision can provide technical support for the city in carrying out overall groundwater artificial recharge engineering in the deep aquifer.
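    The entropy-weight step of such an evaluation can be sketched as follows: normalize a score matrix over the candidate waters, compute the information entropy of each criterion, derive weights from it, and rank the candidates with a weighted aggregation. All scores below are invented for illustration and do not reflect the study's data.

```python
# Sketch of the entropy-weight step (hypothetical scores, not the study's data):
# criteria are scored for each candidate recharge water, entropy weights are derived
# from the score matrix, and a simple weighted aggregation ranks the candidates.
import numpy as np

candidates = ["tap water", "River A", "River B"]
criteria = ["cost", "transport distance", "water quality"]

# Benefit-oriented scores in (0, 1] (higher is better); cheap water scores high on "cost".
X = np.array([
    [0.2, 0.9, 0.95],   # tap water: expensive, close, very clean
    [0.7, 0.5, 0.60],   # River A
    [0.8, 0.6, 0.70],   # River B
])

# Entropy weights: normalise each column, compute entropy, weight = (1 - e) / sum(1 - e).
p = X / X.sum(axis=0)
k = 1.0 / np.log(X.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)      # assumes no zero entries in p
weights = (1 - entropy) / (1 - entropy).sum()

scores = X @ weights
for name, s in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{name:10s} score = {s:.3f}")
print("criterion weights:", dict(zip(criteria, np.round(weights, 3))))
```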

  2. Investment in flood protection measures under climate change uncertainty. An investment decision

    Energy Technology Data Exchange (ETDEWEB)

    Bruin, Karianne de

    2012-11-01

    Recent river flooding in Europe has triggered debates among scientists and policymakers on future projections of flood frequency and the need for adaptive investments, such as flood protection measures. Because there exists uncertainty about the impact of climate change on flood risk, such investments require a careful analysis of expected benefits and costs. The objective of this paper is to show how climate change uncertainty affects the decision to invest in flood protection measures. We develop a model that simulates optimal decision making in flood protection; it incorporates flexible timing of investment decisions and scientific uncertainty on the extent of climate change impacts. This model allows decision-makers to cope with the uncertain impacts of climate change on the frequency and damage of river flood events and minimises the risk of under- or over-investment. One of the innovative elements is that we explicitly distinguish between structural and non-structural flood protection measures. Our results show that the optimal investment decision today depends strongly on the cost structure of the adaptation measures and the discount rate, especially the ratio of fixed and weighted annual costs of the measures. A higher level of annual flood damage and later resolution of uncertainty in time increase the optimal investment. Furthermore, the optimal investment decision today is influenced by the possibility for the decision-maker to adjust his decision at a future moment in time.(auth)

  3. A compensatory approach to optimal selection with mastery scores

    NARCIS (Netherlands)

    van der Linden, Willem J.; Vos, Hendrik J.

    1994-01-01

    This paper presents some Bayesian theories of simultaneous optimization of decision rules for test-based decisions. Simultaneous decision making arises when an institution has to make a series of selection, placement, or mastery decisions with respect to subjects from a population. An obvious

  4. Many-objective thermodynamic optimization of Stirling heat engine

    International Nuclear Information System (INIS)

    Patel, Vivek; Savsani, Vimal; Mudgal, Anurag

    2017-01-01

    This paper presents a rigorous investigation of many-objective (four-objective) thermodynamic optimization of a Stirling heat engine. The many-objective optimization problem is formed by considering maximization of thermal efficiency, power output, ecological function and exergy efficiency. A multi-objective heat transfer search (MOHTS) algorithm is proposed and applied to obtain a set of Pareto-optimal points. The many-objective optimization results form a solution set in a four-dimensional hyper-objective space, which is represented for visualization in two-dimensional objective spaces. Thus, the results of the four-objective optimization are represented by six Pareto fronts in two-dimensional objective space. These six Pareto fronts are compared with their corresponding two-objective Pareto fronts. Quantitative assessment of the obtained Pareto solutions is reported in terms of the spread and spacing measures. Different decision-making approaches such as LINMAP, TOPSIS and fuzzy are used to select a final optimal solution from the Pareto-optimal set of the many-objective optimization. Finally, to reveal the level of conflict between these objectives, the distribution of each decision variable over its allowable range is also shown in two-dimensional objective spaces. - Highlights: • Many-objective (i.e. four-objective) optimization of a Stirling engine is investigated. • The MOHTS algorithm is introduced and applied to obtain a set of Pareto points. • Comparative results of many-objective and multi-objective optimization are presented. • Relationships of the design variables in many-objective optimization are obtained. • The optimum solution is selected by using decision-making approaches.
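    Of the decision-making approaches named, TOPSIS is straightforward to sketch: it ranks Pareto points by their relative closeness to an ideal point. The sketch below uses a handful of hypothetical Pareto points and equal weights, with all four objectives treated as benefit criteria for simplicity; it is not the paper's implementation.

```python
# Compact TOPSIS sketch for picking one point from a Pareto set (hypothetical values).
# The four columns stand for thermal efficiency, power output, ecological function and
# exergy efficiency; here all are treated as benefit criteria with equal weights.
import numpy as np

pareto = np.array([
    [0.38, 7.2, 4.1, 0.55],
    [0.35, 8.0, 3.8, 0.52],
    [0.40, 6.5, 4.4, 0.58],
    [0.37, 7.6, 4.0, 0.54],
])
w = np.array([0.25, 0.25, 0.25, 0.25])

norm = pareto / np.linalg.norm(pareto, axis=0)   # vector normalisation
v = norm * w
ideal, anti = v.max(axis=0), v.min(axis=0)       # all criteria are benefits here
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

best = int(np.argmax(closeness))
print("closeness coefficients:", np.round(closeness, 3))
print("selected Pareto point:", pareto[best])
```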

  5. Encyclopedia of optimization

    CERN Document Server

    Pardalos, Panos

    2001-01-01

    Optimization problems are widespread in the mathematical modeling of real world systems and their applications arise in all branches of science, applied science and engineering. The goal of the Encyclopedia of Optimization is to introduce the reader to a complete set of topics in order to show the spectrum of recent research activities and the richness of ideas in the development of theories, algorithms and the applications of optimization. It is directed to a diverse audience of students, scientists, engineers, decision makers and problem solvers in academia, business, industry, and government.

  6. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Science.gov (United States)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are less strongly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating objectives of different decision-makers avoiding water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  7. Reliability-Based Optimization in Structural Engineering

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1994-01-01

    In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: Reliability-based optimal design of structural systems with component or systems reliability constraints, reliability...

  8. Sustainability Concept in Decision-Making: Carbon Tax Consideration for Joint Product Mix Decision

    Directory of Open Access Journals (Sweden)

    Wen-Hsien Tsai

    2016-11-01

    Full Text Available Carbon emissions are receiving greater scrutiny in many countries due to international forces to reduce anthropogenic global climate change. Carbon taxation is one of the most common carbon emission regulation policies, and companies must incorporate it into their production and pricing decisions. Activity-based costing (ABC) and the theory of constraints (TOC) have been applied to solve product mix problems; however, a challenging aspect of the product mix problem involves evaluating joint manufactured products, while reducing carbon emissions and environmental pollution to fulfill social responsibility. The aim of this paper is to apply ABC and TOC to analyze green product mix decision-making for joint products using a mathematical programming model and the joint production data of pharmaceutical industry companies for the processing of active pharmaceutical ingredients (APIs) in drugs for medical use. This paper illustrates that the time-driven ABC model leads to optimal joint product mix decisions and performs sensitivity analysis to study how the optimal solution will change with the carbon tax. Our findings provide insight into ‘sustainability decisions’ and are beneficial in terms of environmental management in a competitive pharmaceutical industry.
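    A stripped-down version of the kind of mathematical programming model involved, a product-mix linear program whose objective nets a carbon-tax charge out of the contribution margins, is sketched below. Coefficients, capacities and the tax rate are hypothetical (not the paper's pharmaceutical data), and SciPy is assumed to be available.

```python
# Sketch of a product-mix LP with a carbon tax in the objective (hypothetical numbers,
# not the paper's pharmaceutical data). Decision variables: units of two joint products.
from scipy.optimize import linprog

margin = [120.0, 90.0]        # contribution margin per unit before carbon cost
emissions = [0.8, 0.4]        # tonnes CO2 per unit
carbon_tax = 25.0             # currency per tonne CO2

# Net objective coefficients (linprog minimises, so negate the net margin).
c = [-(m - carbon_tax * e) for m, e in zip(margin, emissions)]

# Capacity constraints: machine hours and total allowed emissions.
A_ub = [
    [2.0, 1.5],               # machine hours per unit
    emissions,                # emissions per unit
]
b_ub = [4000.0,               # available machine hours
        1800.0]               # emission cap (tonnes)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal mix (units):", res.x.round(1))
print("profit after carbon tax:", round(-res.fun, 2))
```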

  9. Radiological assessment and optimization

    International Nuclear Information System (INIS)

    Zeevaert, T.; Sohier, A.

    1998-01-01

    The objectives of SCK-CEN's research in the field of radiological assessment and optimization are (1) to implement ALARA principles in activities with radiological consequences; (2) to develop methodologies for radiological optimization in decision-aiding; (3) to improve methods to assess in real time the radiological hazards in the environment in case of an accident; (4) to develop methods and programmes to assist decision-makers during a nuclear emergency; (5) to support the policy of radioactive waste management authorities in the field of radiation protection; (6) to investigate computer codes in the area of multi criteria analysis; (7) to organise courses on off-site emergency response to nuclear accidents. Main achievements in these areas for 1997 are summarised

  10. Intelligent Decision Support in Proportional–Stop-Loss Reinsurance Using Multiple Attribute Decision-Making (MADM

    Directory of Open Access Journals (Sweden)

    Shirley Jie Xuan Wang

    2017-11-01

    Full Text Available This article addresses the possibility of incorporating intelligent decision support systems into reinsurance decision-making. This involves the insurance company and the reinsurance company, and is negotiated through reinsurance intermediaries. The article proposes a decision flow to model the reinsurance design and selection process. This article focuses on adopting more than one optimality criterion under a more generic combinational design of commonly used reinsurance products, i.e., proportional reinsurance and stop-loss reinsurance. In terms of methodology, the significant contribution of the study is the incorporation of the well-established decision analysis tool multiple-attribute decision-making (MADM) into the modelling of reinsurance selection. To illustrate the feasibility of incorporating intelligent decision supporting systems in the reinsurance market, the study includes a numerical case study using the simulation software @Risk in modeling insurance claims, as well as programming in MATLAB to realize MADM. A list of managerial implications could be drawn from the case study results. Most importantly, when choosing the most appropriate type of reinsurance, insurance companies should base their decisions on multiple measurements instead of single-criteria decision-making models so that their decisions may be more robust.
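    A simple NumPy analogue of the workflow (the paper itself uses @Risk and MATLAB) is sketched below: simulate aggregate claims, evaluate combined proportional/stop-loss designs on a few attributes, and rank them with a weighted-sum MADM score. Distributions, designs and weights are hypothetical.

```python
# Illustrative numpy analogue of the described workflow (hypothetical figures): simulate
# aggregate claims, evaluate combined proportional + stop-loss designs on several
# attributes, and rank them with a simple weighted-sum MADM score.
import numpy as np

rng = np.random.default_rng(1)
claims = rng.lognormal(mean=10.0, sigma=0.6, size=20000)   # simulated aggregate claims

designs = [(a, M) for a in (0.6, 0.7, 0.8) for M in (30000.0, 50000.0)]
weights = np.array([0.4, 0.4, 0.2])   # expected retained loss, tail risk, premium proxy

rows = []
for a, M in designs:
    retained = np.minimum(a * claims, M)     # keep share a, capped by stop-loss M
    ceded = claims - retained
    attrs = np.array([retained.mean(),                 # lower is better
                      np.percentile(retained, 99),     # lower is better
                      1.1 * ceded.mean()])             # premium proxy, lower is better
    rows.append(attrs)

A = np.array(rows)
# Normalise each (cost-type) attribute to [0, 1], higher = better, then weight and sum.
scores = ((A.max(axis=0) - A) / (A.max(axis=0) - A.min(axis=0)) * weights).sum(axis=1)
for (a, M), s in sorted(zip(designs, scores), key=lambda t: -t[1]):
    print(f"retention a={a:.1f}, stop-loss M={M:>8.0f}  score={s:.3f}")
```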

  11. A Swarm Optimization approach for clinical knowledge mining.

    Science.gov (United States)

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is being used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules whereby the optimal ruleset that satisfies the requirement of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely, RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with the traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviation for accuracy of PSO and WSO are 0.5921 and 0.5846 respectively. WSO provides accurate and concise rulesets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between the prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy. Copyright

  12. Binary Cockroach Swarm Optimization for Combinatorial Optimization Problem

    Directory of Open Access Journals (Sweden)

    Ibidun Christiana Obagbuwa

    2016-09-01

    Full Text Available The Cockroach Swarm Optimization (CSO) algorithm is inspired by cockroach social behavior. It is a simple and efficient meta-heuristic algorithm and has been applied to solve global optimization problems successfully. The original CSO algorithm and its variants operate mainly in continuous search space and cannot solve binary-coded optimization problems directly. Many optimization problems have their decision variables in binary. Binary Cockroach Swarm Optimization (BCSO) is proposed in this paper to tackle such problems and was evaluated on the popular Traveling Salesman Problem (TSP), which is considered to be an NP-hard Combinatorial Optimization Problem (COP). A transfer function was employed to map a continuous search space CSO to binary search space. The performance of the proposed algorithm was tested firstly on benchmark functions through simulation studies and compared with the performance of existing binary particle swarm optimization and continuous space versions of CSO. The proposed BCSO was adapted to TSP and applied to a set of benchmark instances of symmetric TSP from the TSP library. The results of the proposed Binary Cockroach Swarm Optimization (BCSO) algorithm on TSP were compared to other meta-heuristic algorithms.
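    The transfer-function step mentioned in the abstract is commonly a sigmoid that turns each real-valued position component into a probability of taking the value 1, as in binary PSO variants. The sketch below shows that generic step; the paper's exact operator may differ.

```python
# Sketch of the usual sigmoid transfer-function step used to binarise a continuous
# swarm position (as in binary PSO variants); the paper's exact operator may differ.
import numpy as np

rng = np.random.default_rng(42)

def binarise(position):
    """Map a real-valued position vector to a 0/1 vector via a sigmoid transfer function."""
    prob = 1.0 / (1.0 + np.exp(-position))          # squash each component into (0, 1)
    return (rng.random(position.shape) < prob).astype(int)

continuous_position = rng.normal(0.0, 2.0, size=10)  # e.g. one cockroach/particle
print("continuous:", np.round(continuous_position, 2))
print("binary    :", binarise(continuous_position))
```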

  13. 2D Decision-Making for Multi-Criteria Design Optimization

    National Research Council Canada - National Science Library

    Engau, A; Wiecek, M. M

    2006-01-01

    .... To facilitate those analyses and enhance decision-making and design selection, we propose to decompose the original problem by considering only pairs of criteria at a time, thereby making tradeoff...

  14. Optimization in radiological protection

    International Nuclear Information System (INIS)

    Acosta Perez, Clarice de Freitas

    1996-01-01

    The optimization concept in radiation protection is, in its essence, practical. In every aspect of dealing with human exposure it is necessary to take frequent decisions, such as which protection level should be pursued, given that the protection levels under consideration all provide doses lower than the appropriate annual limits. Optimization gives a basic framework of thinking appropriate for balancing the resources available for protection and the protection level obtained against a multitude of factors and constraints, so as to obtain the best result. In this work, the optimization, from the radiation protection point of view, of a facility project comprising two shielded hot cells, in which small UO2 plates with 50% U-235 burn-up irradiated in the IEA-R1 research swimming-pool reactor will be handled, was performed. To achieve this goal, the relevant factors and criteria were specified, the main decision-making techniques presently adopted in radiological protection were applied, and a sensitivity study of the factors and criteria used in this work was performed. In order to apply the decision-making techniques more efficiently, a microcomputer program was developed. (author)

  15. The decision tree approach to classification

    Science.gov (United States)

    Wu, C.; Landgrebe, D. A.; Swain, P. H.

    1975-01-01

    A class of multistage decision tree classifiers is proposed and studied relative to the classification of multispectral remotely sensed data. The decision tree classifiers are shown to have the potential for improving both the classification accuracy and the computation efficiency. Dimensionality in pattern recognition is discussed and two theorems on the lower bound of logic computation for multiclass classification are derived. The automatic or optimization approach is emphasized. Experimental results on real data are reported, which clearly demonstrate the usefulness of decision tree classifiers.
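    A minimal decision-tree classification example in scikit-learn (assumed available) is shown below on synthetic "spectral band" features; the data are made up and only illustrate the kind of classifier the abstract discusses, not the multistage trees it proposes.

```python
# Minimal decision-tree classification sketch on synthetic "spectral band" features
# (scikit-learn assumed available; data are made up, not the paper's multispectral data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 600
# Two synthetic classes with different mean reflectance in four "bands".
X = np.vstack([rng.normal(0.3, 0.05, (n // 2, 4)),
               rng.normal(0.5, 0.05, (n // 2, 4))])
y = np.repeat([0, 1], n // 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
print("tree depth:", clf.get_depth())
```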

  16. A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.

    Science.gov (United States)

    Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy

    2015-06-20

    The present study is focused on the thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, quantified by the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several different excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate have been recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates searching for the optimal process conditions which are necessary to achieve ideal spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellets technology. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Genetic algorithm based separation cascade optimization

    International Nuclear Information System (INIS)

    Mahendra, A.K.; Sanyal, A.; Gouthaman, G.; Bera, T.K.

    2008-01-01

    The conventional separation cascade design procedure does not give an optimum design because of squaring-off and the variation of flow rates and of the separation factor of the element with respect to stage location. Multi-component isotope separation further complicates the design procedure. Cascade design can be stated as a constrained multi-objective optimization. The cascade's expectation from the separating element is multi-objective, i.e., overall separation factor, cut, optimum feed and separative power. The decision maker may aspire to more comprehensive multi-objective goals where the optimization of the cascade is coupled with the exploration of the separating element optimization vector space. In real life there are many issues which make it important to understand the decision maker's perception of the cost-quality-speed trade-off and the consistency of preferences. The genetic algorithm (GA) is one such evolutionary technique that can be used for cascade design optimization. This paper addresses various issues involved in the GA-based multi-objective optimization of the separation cascade. A reference-point-based optimization methodology with a GA-based Pareto optimality concept for the separation cascade was found to be pragmatic and promising. This method should be explored, tested, examined and further developed for binary as well as multi-component separations. (author)
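    A bare-bones real-coded genetic algorithm loop, tournament selection, blend crossover and Gaussian mutation on a toy objective, is sketched below to illustrate the search mechanism; it is not the cascade model or the reference-point method of the paper.

```python
# Bare-bones real-coded GA sketch (toy objective, not the cascade model): tournament
# selection, blend crossover and Gaussian mutation minimising a simple function.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    return np.sum((x - 0.7) ** 2)        # stand-in for a cascade performance measure

pop_size, n_genes, n_gen = 40, 5, 100
pop = rng.random((pop_size, n_genes))

for _ in range(n_gen):
    fitness = np.array([objective(ind) for ind in pop])
    new_pop = []
    for _ in range(pop_size):
        # Tournament selection of two parents.
        i, j = rng.integers(pop_size, size=2)
        p1 = pop[i] if fitness[i] < fitness[j] else pop[j]
        i, j = rng.integers(pop_size, size=2)
        p2 = pop[i] if fitness[i] < fitness[j] else pop[j]
        alpha = rng.random(n_genes)              # blend crossover
        child = alpha * p1 + (1 - alpha) * p2
        child += rng.normal(0.0, 0.02, n_genes)  # Gaussian mutation
        new_pop.append(np.clip(child, 0.0, 1.0))
    pop = np.array(new_pop)

best = pop[np.argmin([objective(ind) for ind in pop])]
print("best solution:", np.round(best, 3), "objective:", round(objective(best), 5))
```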

  18. Radioactive Air Emissions Notice of Construction for the 105-KW Basin integrated water treatment system filter vessel sparging vent

    Energy Technology Data Exchange (ETDEWEB)

    Kamberg, L.D.

    1998-02-23

    This document serves as a notice of construction (NOC), pursuant to the requirements of Washington Administrative Code (WAC) 246-247-060, and as a request for approval to construct, pursuant to 40 Code of Federal Regulations (CFR) 61.07, for the Integrated Water Treatment System (IWTS) Filter Vessel Sparging Vent at 105-KW Basin. Additionally, the following description, and references are provided as the notices of startup, pursuant to 40 CFR 61.09(a)(1) and (2) in accordance with Title 40 Code of Federal Regulations, Part 61, National Emission Standards for Hazardous Air Pollutants. The 105-K West Reactor and its associated spent nuclear fuel (SNF) storage basin were constructed in the early 1950s and are located on the Hanford Site in the 100-K Area about 1,400 feet from the Columbia River. The 105-KW Basin contains 964 Metric Tons of SNF stored under water in approximately 3,800 closed canisters. This SNF has been stored for varying periods of time ranging from 8 to 17 years. The 105-KW Basin is constructed of concrete with an epoxy coating and contains approximately 1.3 million gallons of water with an asphaltic membrane beneath the pool. The IWTS, which has been described in the Radioactive Air Emissions NOC for Fuel Removal for 105-KW Basin (DOE/RL-97-28 and page changes per US Department of Energy, Richland Operations Office letter 97-EAP-814) will be used to remove radionuclides from the basin water during fuel removal operations. The purpose of the modification described herein is to provide operational flexibility for the IWTS at the 105-KW basin. The proposed modification is scheduled to begin in calendar year 1998.

  19. Some Links Between Game Theory and Decision Theory in Economics

    OpenAIRE

    Dominika Crnjac; Goran Martinovic

    2009-01-01

    Certain optimal strategies based upon game theory are given in this paper. A decision-making function and a risk function are explained. Decision-making criteria are applied for determining best decision-making functions with respect to a specific criterion. Special attention is given to the minimax criterion.

  20. Comparative Analysis of Investment Decision Models

    Directory of Open Access Journals (Sweden)

    Ieva Kekytė

    2017-06-01

    Full Text Available The rapid development of financial markets has resulted in new challenges for both investors and investment issues. This has increased the demand for innovative, modern investment and portfolio management decisions adequate for market conditions. The financial market receives special attention through the creation of new models that include financial risk management and investment decision support systems. Researchers recognize the need to deal with financial problems using models consistent with reality and based on sophisticated quantitative analysis techniques. Thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and which have become useful tools for investment decisions.

  1. Intelligent Decision Support and Big Data for Logistics and Supply Chain Management

    DEFF Research Database (Denmark)

    Voss, Stefan; Sebastian, Hans-Jürgen; Pahl, Julia

    2017-01-01

    “Intelligent Decision Support and Big Data for Logistics and Supply Chain Management” features theoretical developments, real-world applications and information systems related to solving decision problems in logistics and supply chain management. Methods include optimization, heuristics, metaheuristics and matheuristics, simulation, agent technologies, and descriptive methods. In a sense, we were and are representing the future of logistics over the years.

  2. Acquisition of decision making criteria: reward rate ultimately beats accuracy.

    Science.gov (United States)

    Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D

    2011-02-01

    Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single session, two-alternative forced-choice tasks. We investigated whether humans acquired optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance, but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
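    The speed-accuracy/reward-rate trade-off at the heart of the study can be reproduced qualitatively by simulating a noisy evidence accumulator to different thresholds and computing reward rate as accuracy divided by mean decision time plus an inter-trial delay. All parameters below are hypothetical, and the simulation is only a sketch of the general idea, not the authors' task or model fits.

```python
# Simulation sketch of the speed-accuracy/reward-rate trade-off: a noisy evidence
# accumulator is run to different thresholds; reward rate = accuracy / (mean decision
# time + inter-trial delay). Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(11)
drift, noise, dt, iti = 0.2, 1.0, 0.01, 2.0   # drift favours the correct response

def simulate(threshold, n_trials=1000):
    correct, rt = 0, []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        correct += (x >= threshold)
        rt.append(t)
    acc = correct / n_trials
    return acc, np.mean(rt), acc / (np.mean(rt) + iti)

for threshold in (0.5, 1.0, 1.5, 2.0):
    acc, mean_rt, rr = simulate(threshold)
    print(f"threshold={threshold:.1f}  accuracy={acc:.2f}  "
          f"mean DT={mean_rt:.2f}s  reward rate={rr:.3f}/s")
```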

  3. A markov decision process model for the optimal dispatch of military medical evacuation assets.

    Science.gov (United States)

    Keneally, Sean K; Robbins, Matthew J; Lunday, Brian J

    2016-06-01

    We develop a Markov decision process (MDP) model to examine aerial military medical evacuation (MEDEVAC) dispatch policies in a combat environment. The problem of deciding which aeromedical asset to dispatch to each service request is complicated by the threat conditions at the service locations and the priority class of each casualty event. We assume requests for MEDEVAC support arrive sequentially, with the location and the priority of each casualty known upon initiation of the request. The United States military uses a 9-line MEDEVAC request system to classify casualties as being one of three priority levels: urgent, priority, and routine. Multiple casualties can be present at a single casualty event, with the highest priority casualty determining the priority level for the casualty event. Moreover, an armed escort may be required depending on the threat level indicated by the 9-line MEDEVAC request. The proposed MDP model indicates how to optimally dispatch MEDEVAC helicopters to casualty events in order to maximize steady-state system utility. The utility gained from servicing a specific request depends on the number of casualties, the priority class for each of the casualties, and the locations of both the servicing ambulatory helicopter and casualty event. Instances of the dispatching problem are solved using a relative value iteration dynamic programming algorithm. Computational examples are used to investigate optimal dispatch policies under different threat situations and armed escort delays; the examples are based on combat scenarios in which United States Army MEDEVAC units support ground operations in Afghanistan.

  4. Optimal Replacement and Management Policies for Beef Cows

    OpenAIRE

    W. Marshall Frasier; George H. Pfeiffer

    1994-01-01

    Beef cow replacement studies have not reflected the interaction between herd management and the culling decision. We demonstrate techniques for modeling optimal beef cow replacement intervals and discrete management policies by incorporating the dynamic effects of management on future productivity when biological response is uncertain. Markovian decision analysis is used to identify optimal beef cow management on a ranch typical of the Sandhills region of Nebraska. Issues of breeding season l...

  5. Improving the yield from fermentative hydrogen production.

    Science.gov (United States)

    Kraemer, Jeremy T; Bagley, David M

    2007-05-01

    Efforts to increase H(2) yields from fermentative H(2) production include heat treatment of the inoculum, dissolved gas removal, and varying the organic loading rate. Although heat treatment kills methanogens and selects for spore-forming bacteria, the available evidence indicates H(2) yields are not maximized compared to bromoethanesulfonate, iodopropane, or perchloric acid pre-treatments and spore-forming acetogens are not killed. Operational controls (low pH, short solids retention time) can replace heat treatment. Gas sparging increases H(2) yields compared to un-sparged reactors, but no relationship exists between the sparging rate and H(2) yield. Lower sparging rates may improve the H(2) yield with less energy input and product dilution. The reasons why sparging improves H(2) yields are unknown, but recent measurements of dissolved H(2) concentrations during sparging suggest the assumption of decreased inhibition of the H(2)-producing enzymes is unlikely. Significant disagreement exists over the effect of organic loading rate (OLR); some studies show relatively higher OLRs improve H(2) yield while others show the opposite. Discovering the reasons for higher H(2) yields during dissolved gas removal and changes in OLR will help improve H(2) yields.

  6. Multiple Criteria Decision Making by Generalized Data Envelopment Analysis Introducing Aspiration Level Method

    International Nuclear Information System (INIS)

    Yun, Yeboon; Arakawa, Masao; Hiroshi, Ishikawa; Nakayama, Hirotaka

    2002-01-01

    It has been proved for problems with two objective functions that genetic algorithms (GAs) are well suited to generating Pareto optimal solutions, and decision making can then be easily performed on the basis of the visualized Pareto optimal solutions. However, it is difficult to visualize the Pareto optimal solutions obtained by GAs when the number of objective functions is more than 4. Hence, it is troublesome to grasp the trade-off among many objective functions, and decision makers hesitate to choose a final solution from a large number of Pareto optimal solutions. In order to solve these problems, we suggest an aspiration level approach to the method using generalized data envelopment analysis and GAs. We show that the proposed method supports decision makers in choosing their desired solution from many Pareto optimal solutions. Furthermore, it will be seen that engineering design can be carried out effectively by the proposed method, which makes the generation of several Pareto optimal solutions close to the aspiration level, and the associated trade-off analysis, easy

  7. An Indirect Simulation-Optimization Model for Determining Optimal TMDL Allocation under Uncertainty

    Directory of Open Access Journals (Sweden)

    Feng Zhou

    2015-11-01

    Full Text Available An indirect simulation-optimization model framework with enhanced computational efficiency and risk-based decision-making capability was developed to determine optimal total maximum daily load (TMDL) allocation under uncertainty. To convert the traditional direct simulation-optimization model into our indirect equivalent model framework, we proposed a two-step strategy: (1) application of interval regression equations derived by a Bayesian recursive regression tree (BRRT v2) algorithm, which approximates the original hydrodynamic and water-quality simulation models and accurately quantifies the inherent nonlinear relationship between nutrient load reductions and the credible interval of algal biomass with a given confidence interval; and (2) incorporation of the calibrated interval regression equations into an uncertain optimization framework, which is further converted to our indirect equivalent framework by the enhanced-interval linear programming (EILP) method and provides approximate-optimal solutions at various risk levels. The proposed strategy was applied to the Swift Creek Reservoir’s nutrient TMDL allocation (Chesterfield County, VA) to identify the minimum nutrient load allocations required from eight sub-watersheds to ensure compliance with user-specified chlorophyll criteria. Our results indicated that the BRRT-EILP model could identify critical sub-watersheds faster than the traditional one and requires lower reduction of nutrient loadings compared to traditional stochastic simulation and trial-and-error (TAE) approaches. This suggests that our proposed framework performs better in optimal TMDL development compared to the traditional simulation-optimization models and provides extreme and non-extreme tradeoff analysis under uncertainty for risk-based decision making.

  8. Maintenance optimization after RCM

    International Nuclear Information System (INIS)

    Doyle, E.K.; Lee, C.-G.; Cho, D.

    2005-01-01

    Variant forms of RCM (Reliability Centered Maintenance) have been the maintenance optimizing tools of choice in industry for the last 20 years. Several such optimization techniques have been implemented at the Bruce Nuclear Station. Further cost refinement of the Station preventive maintenance strategy, whereby decisions are based on statistical analysis of historical failure data, is now being evaluated. The evaluation includes a requirement to demonstrate that earlier optimization projects have long-term positive impacts. This proved to be a significant challenge. Eventually a methodology was developed using Crow/AMSAA (Army Materiel Systems Analysis Activity) plots to justify expenditures on further optimization efforts. (authors)

  9. Decision support system for optimally managing water resources to meet multiple objectives in the Savannah River Basin

    Science.gov (United States)

    Roehl, Edwin A.; Conrads, Paul

    2015-01-01

    Managers of large river basins face conflicting demands for water resources such as wildlife habitat, water supply, wastewater assimilative capacity, flood control, hydroelectricity, and recreation. The Savannah River Basin, for example, has experienced three major droughts since 2000 that resulted in record low water levels in its reservoirs, impacting dependent economies for years. The Savannah River estuary contains two municipal water intakes and the ecologically sensitive freshwater tidal marshes of the Savannah National Wildlife Refuge. The Port of Savannah is the fourth busiest in the United States, and modifications to the harbor to expand ship traffic since the 1970s have caused saltwater to migrate upstream, reducing the freshwater marsh’s acreage more than 50 percent. A planned deepening of the harbor includes flow-alteration features to minimize further migration of salinity, whose effectiveness will only be known after all construction is completed.One of the challenges of large basin management is the optimization of water use through ongoing regional economic development, droughts, and climate change. This paper describes a model of the Savannah River Basin designed to continuously optimize regulated flow to meet prioritized objectives set by resource managers and stakeholders. The model was developed from historical data using machine learning, making it more accurate and adaptable to changing conditions than traditional models. The model is coupled to an optimization routine that computes the daily flow needed to most efficiently meet the water-resource management objectives. The model and optimization routine are packaged in a decision support system that makes it easy for managers and stakeholders to use. Simulation results show that flow can be regulated to substantially reduce salinity intrusions in the Savannah National Wildlife Refuge, while conserving more water in the reservoirs. A method for using the model to assess the effectiveness of

  10. RAMS+C informed decision-making with application to multi-objective optimization of technical specifications and maintenance using genetic algorithms

    International Nuclear Information System (INIS)

    Martorell, S.; Villanueva, J.F.; Carlos, S.; Nebot, Y.; Sanchez, A.; Pitarch, J.L.; Serradell, V.

    2005-01-01

    The role of technical specifications and maintenance (TSM) activities at nuclear power plants (NPP) aims to increase reliability, availability and maintainability (RAM) of Safety-Related Equipment, which, in turn, must yield to an improved level of plant safety. However, more resources (e.g. costs, task force, etc.) have to be assigned in above areas to achieve better scores in reliability, availability, maintainability and safety (RAMS). Current situation at NPP shows different programs implemented at the plant that aim to the improvement of particular TSM-related parameters where the decision-making process is based on the assessment of the impact of the change proposed on a subgroup of RAMS+C attributes. This paper briefly reviews the role of TSM and two main groups of improvement programs at NPP, which suggest the convenience of considering the approach proposed in this paper for the Integrated Multi-Criteria Decision-Making on changes to TSM-related parameters based on RAMS+C criteria as a whole, as it can be seem as a decision-making process more consistent with the role and synergic effects of TSM and the objectives and goals of current improvement programs at NPP. The case of application to the Emergency Diesel Generator system demonstrates the viability and significance of the proposed approach for the Multi-objective Optimization of TSM-related parameters using a Genetic Algorithm

  11. An Analysis of the Optimal Multiobjective Inventory Clustering Decision with Small Quantity and Great Variety Inventory by Applying a DPSO

    Science.gov (United States)

    Li, Meng-Hua

    2014-01-01

    When an enterprise has thousands of varieties in its inventory, the use of a single management method is unlikely to be a feasible approach. A better way to manage this problem would be to categorise inventory items into several clusters according to inventory decisions and to use different management methods for managing different clusters. The present study applies DPSO (dynamic particle swarm optimisation) to a problem of clustering of inventory items. Without the requirement of prior inventory knowledge, inventory items are automatically clustered into a near-optimal number of clusters. The obtained clustering results should satisfy the inventory objective equation, which consists of different objectives such as total cost, backorder rate, demand relevance, and inventory turnover rate. This study integrates the above four objectives into a multiobjective equation, and inputs the actual inventory items of the enterprise into DPSO. In comparison with other clustering methods, the proposed method can consider different objectives and obtains an overall better solution, with better convergence results and inventory decisions. PMID:25197713

  12. Q-Learning Multi-Objective Sequential Optimal Sensor Parameter Weights

    Directory of Open Access Journals (Sweden)

    Raquel Cohen

    2016-04-01

    Full Text Available The goal of our solution is to deliver trustworthy decision making analysis tools which evaluate situations and potential impacts of such decisions through acquired information and add efficiency for continuing mission operations and analyst information. We discuss the use of cooperation in modeling and simulation and show quantitative results for design choices to resource allocation. The key contribution of our paper is to combine remote sensing decision making with Nash Equilibrium for sensor parameter weighting optimization. By calculating all Nash Equilibrium possibilities per period, optimization of sensor allocation is achieved for overall higher system efficiency. Our tool provides insight into what are the most important or optimal weights for sensor parameters and can be used to efficiently tune those weights.
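    The record's title refers to Q-learning; a minimal tabular Q-learning sketch on a made-up sensor-tasking problem is given below to show the update rule. The environment, rewards and state space are hypothetical and unrelated to the authors' system.

```python
# Minimal tabular Q-learning sketch on a toy sensor-tasking problem (all states,
# rewards and dynamics are hypothetical; this only illustrates the update rule).
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions = 4, 2          # e.g. coarse target situations x sensor modes
alpha, gamma, eps, episodes = 0.1, 0.9, 0.1, 5000

# Hypothetical environment: reward depends on matching the mode to the situation.
reward_table = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
def step(state, action):
    reward = reward_table[state, action] + 0.1 * rng.normal()
    next_state = rng.integers(n_states)          # situations evolve randomly here
    return reward, next_state

Q = np.zeros((n_states, n_actions))
state = 0
for _ in range(episodes):
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    # Q-learning update: move Q(s,a) toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned Q-table:\n", np.round(Q, 2))
print("greedy action per state:", np.argmax(Q, axis=1))
```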

  13. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Directory of Open Access Journals (Sweden)

    Y. H. Subagadis

    2014-09-01

    Full Text Available The conventional methods used to solve multi-criteria multi-stakeholder problems are less strongly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating objectives of different decision-makers avoiding water–society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  14. CEO emotional bias and investment decision, Bayesian network method

    Directory of Open Access Journals (Sweden)

    Jarboui Anis

    2012-08-01

    Full Text Available This research examines the determinants of firms’ investment, introducing a behavioral perspective that has received little attention in the corporate finance literature. The following central hypothesis emerges from a set of recently developed theories: investment decisions are influenced not only by fundamentals but also by other factors. One such factor is the CEO's bias toward the investment; this bias depends on cognition and emotions, because some leaders use them as heuristics for the investment decision instead of fundamentals. This paper shows how CEO emotional bias (optimism, loss aversion and overconfidence) affects investment decisions. The proposed model uses the Bayesian network method to examine this relationship. Emotional bias has been measured by means of a questionnaire comprising several items. The selected sample is composed of some 100 Tunisian executives. Our results reveal that the behavioral analysis of the investment decision implies that a leader affected by behavioral biases (optimism, loss aversion, and overconfidence) adjusts his investment choices based on his ability to assess alternatives (optimism and overconfidence) and his risk perception (loss aversion), in order to create shareholder value and ensure his place at the head of the management team.

  15. Neural signatures of experience-based improvements in deterministic decision-making.

    Science.gov (United States)

    Tremel, Joshua J; Laurent, Patryk A; Wolk, David A; Wheeler, Mark E; Fiez, Julie A

    2016-12-15

    Feedback about our choices is a crucial part of how we gather information and learn from our environment. It provides key information about decision experiences that can be used to optimize future choices. However, our understanding of the processes through which feedback translates into improved decision-making is lacking. Using neuroimaging (fMRI) and cognitive models of decision-making and learning, we examined the influence of feedback on multiple aspects of decision processes across learning. Subjects learned correct choices to a set of 50 word pairs across eight repetitions of a concurrent discrimination task. Behavioral measures were then analyzed with both a drift-diffusion model and a reinforcement learning model. Parameter values from each were then used as fMRI regressors to identify regions whose activity fluctuates with specific cognitive processes described by the models. The patterns of intersecting neural effects across models support two main inferences about the influence of feedback on decision-making. First, frontal, anterior insular, fusiform, and caudate nucleus regions behave like performance monitors, reflecting errors in performance predictions that signal the need for changes in control over decision-making. Second, temporoparietal, supplementary motor, and putamen regions behave like mnemonic storage sites, reflecting differences in learned item values that inform optimal decision choices. As information about optimal choices is accrued, these neural systems dynamically adjust, likely shifting the burden of decision processing from controlled performance monitoring to bottom-up, stimulus-driven choice selection. Collectively, the results provide a detailed perspective on the fundamental ability to use past experiences to improve future decisions. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Development of transportation asset management decision support tools : final report.

    Science.gov (United States)

    2017-08-09

    This study developed a web-based prototype decision support platform to demonstrate the benefits of transportation asset management in monitoring asset performance, supporting asset funding decisions, planning budget tradeoffs, and optimizing resourc...

  17. Applying rigorous decision analysis methodology to optimization of a tertiary recovery project

    International Nuclear Information System (INIS)

    Wackowski, R.K.; Stevens, C.E.; Masoner, L.O.; Attanucci, V.; Larson, J.L.; Aslesen, K.S.

    1992-01-01

    This paper reports that the intent of this study was to rigorously examine all of the possible expansion, investment, operational, and CO2 purchase/recompression scenarios (over 2500) to yield a strategy that would maximize the net present value of the CO2 project at the Rangely Weber Sand Unit. Traditional methods of project management, which involve analyzing large numbers of single-case economic evaluations, were found to be too cumbersome and inaccurate for an analysis of this scope. The decision analysis methodology utilized a statistical approach which resulted in a range of economic outcomes. Advantages of the decision analysis methodology included: a more organized approach to the classification of decisions and uncertainties; a clear sensitivity method to identify the key uncertainties; an application of probabilistic analysis through the decision tree; and a comprehensive display of the range of possible outcomes for communication to decision makers. This range made it possible to consider the upside and downside potential of the options and to weigh these against the Unit's strategies. Savings in time and manpower required to complete the study were also realized.

  18. Optimal Procedure for siting of Nuclear Power Plant

    International Nuclear Information System (INIS)

    Aziuddin, Khairiah Binti; Park, Seo Yeon; Roh, Myung Sub

    2013-01-01

    This study discusses a simulation approach for sensitivity analysis of the weights of multi-criteria decision models. The simulation procedures can also be used to aid the actual decision process, particularly when the task is to select a subset of superior alternatives. The aim of this study is to identify the criteria or parameters that are sensitive to the weighting factors and can affect the results of the decision-making process for determining the optimal nuclear power plant (NPP) site. To perform this study, we adhere to IAEA NS-R-3 and DS 433. The siting process for a nuclear installation consists of site survey and site selection stages. It generally consists of an investigation of a large region to select one or more candidate sites by surveying the sites. After comparing the ROI, two candidate sites, Wolsong and Kori, are compared for the final determination. Some assumptions are made owing to limitations and constraints encountered while performing this study. Sensitivity analysis of multi-criteria decision models is performed in this study to determine the optimal site in the site selection stage. Logical Decisions software is employed as a tool to perform this analysis. It helps to formulate the preferences and then rank the alternatives, provides clarification of the rankings, and hence aids the decision makers in evaluating the alternatives and, finally, in drawing a conclusion on the selection of the optimal site.

  19. Optimal Procedure for siting of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Aziuddin, Khairiah Binti; Park, Seo Yeon; Roh, Myung Sub [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    This study discusses a simulation approach for sensitivity analysis of the weights of multi-criteria decision models. The simulation procedures can also be used to aid the actual decision process, particularly when the task is to select a subset of superior alternatives. The aim of this study is to identify the criteria or parameters that are sensitive to the weighting factors and can affect the results of the decision-making process for determining the optimal nuclear power plant (NPP) site. To perform this study, we adhere to IAEA NS-R-3 and DS 433. The siting process for a nuclear installation consists of site survey and site selection stages. It generally consists of an investigation of a large region to select one or more candidate sites by surveying the sites. After comparing the ROI, two candidate sites, Wolsong and Kori, are compared for the final determination. Some assumptions are made owing to limitations and constraints encountered while performing this study. Sensitivity analysis of multi-criteria decision models is performed in this study to determine the optimal site in the site selection stage. Logical Decisions software is employed as a tool to perform this analysis. It helps to formulate the preferences and then rank the alternatives, provides clarification of the rankings, and hence aids the decision makers in evaluating the alternatives and, finally, in drawing a conclusion on the selection of the optimal site.

  20. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis.

    Science.gov (United States)

    Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks which currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
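
    A minimal numerical sketch of the grey relational scoring step described above (illustrative data and a hypothetical distinguishing coefficient rho, not the paper's engine measurements): candidate EGR settings are compared against an ideal reference series, and a weighted grey relational grade ranks them.

```python
# Illustrative grey relational analysis (a sketch, not the paper's model):
# candidate EGR rates are scored against an ideal reference series after
# normalisation; rho, the weights and the example data are hypothetical.
import numpy as np

def grey_relational_grades(data, rho=0.5, weights=None):
    """data: rows = alternatives (EGR rates), columns = normalised criteria in [0, 1],
    where 1 is the ideal value of each criterion."""
    data = np.asarray(data, dtype=float)
    diff = np.abs(1.0 - data)                           # distance to the ideal reference series
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + rho * dmax) / (diff + rho * dmax)   # grey relational coefficients
    if weights is None:
        weights = np.full(data.shape[1], 1.0 / data.shape[1])
    return coeff @ weights                              # grey relational grade per alternative

# Three candidate EGR rates scored on normalised NOx, soot and fuel-consumption criteria
print(grey_relational_grades([[0.9, 0.4, 0.7],
                              [0.6, 0.8, 0.8],
                              [0.3, 0.9, 0.5]]))
```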

  1. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Full Text Available Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria, and if the objectives vary in preference and scope, the decision becomes multiobjective in nature. In this paper, a vendor selection problem is formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and the order allocated among them. The multiobjective problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing the on-time deliveries, subject to the suppliers' constraints. The objective functions and the demand are treated as intuitionistic fuzzy sets. An intuitionistic fuzzy set can handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software Tora. The advantage of IFO is that it gives better results than fuzzy or crisp optimization. The proposed approach is illustrated by a numerical example.
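
    The sketch below is a heavily simplified crisp counterpart of this kind of vendor-allocation model, in the Zimmermann max-min spirit (hypothetical prices, capacities and aspiration bounds; the intuitionistic extension, which also handles a non-membership degree, is omitted): the smallest satisfaction level lambda of the price and quality goals is maximised subject to demand and capacity.

```python
# Simplified crisp max-min formulation of a fuzzy vendor-allocation problem
# (hypothetical data, not the paper's instance or its intuitionistic model).
from scipy.optimize import linprog

price   = [10.0, 12.0, 11.0]     # unit price per vendor (to be minimised)
quality = [0.80, 0.95, 0.90]     # quality score per unit (to be maximised)
cap, demand = [400, 300, 500], 800
p_lo, p_hi = 8000.0, 9600.0      # aspiration / tolerance bounds on total price
q_lo, q_hi = 640.0, 760.0        # tolerance / aspiration bounds on total quality

# Variables: x1, x2, x3 (order quantities) and lambda; maximise lambda.
c = [0, 0, 0, -1]
A_ub = [price + [p_hi - p_lo],                  # price membership >= lambda
        [-q for q in quality] + [q_hi - q_lo]]  # quality membership >= lambda
b_ub = [p_hi, -q_lo]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1, 1, 1, 0]], b_eq=[demand],
              bounds=[(0, cap[0]), (0, cap[1]), (0, cap[2]), (0, 1)],
              method="highs")
print(res.x[:3], res.x[3])       # order allocation and achieved satisfaction level
```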

  2. Guide to Decision-Making Getting it more right than wrong

    CERN Document Server

    Drummond, Helga

    2012-01-01

    We make decisions, and these decisions make us and our organisations. And in theory, decision-making should be easy: a problem is identified, the decision-makers generate solutions, and choose the optimal one - and powerful mathematical tools are available to facilitate the task. Yet if it is all so simple why do organisations, both private and public sector, keep making mistakes - the results of which are borne by shareholders, employees, taxpayers and ultimately society at large? This guide to decision making, by leading decision science academic Helga Drummond, aims to improve decision-maki

  3. Optimization of stochastic discrete systems and control on complex networks computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  4. Neural basis of quasi-rational decision making.

    Science.gov (United States)

    Lee, Daeyeol

    2006-04-01

    Standard economic theories conceive homo economicus as a rational decision maker capable of maximizing utility. In reality, however, people tend to approximate optimal decision-making strategies through a collection of heuristic routines. Some of these routines are driven by emotional processes, and others are adjusted iteratively through experience. In addition, routines specialized for social decision making, such as inference about the mental states of other decision makers, might share their origins and neural mechanisms with the ability to simulate or imagine outcomes expected from alternative actions that an individual can take. A recent surge of collaborations across economics, psychology and neuroscience has provided new insights into how such multiple elements of decision making interact in the brain.

  5. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  6. Optimized electricity expansions with external costs internalized and risk of severe accidents as a new criterion in the decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martin del Campo M, C.; Estrada S, G. J., E-mail: cmcm@fi-b.unam.mx [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico)

    2011-11-15

    The external cost of severe accidents was incorporated as a new element for the assessment of energy technologies in the expansion plans of the Mexican electric generating system. Optimizations of the electric expansions were made by internalizing the external cost into the objective function of the WASP-IV model as a variable cost, and these expansions were compared with the expansion plans that did not internalize them. Average external costs reported by the ExternE Project were used for each type of technology and were added to the variable component of the operation and maintenance cost in the study cases in which the externalities were internalized. Special attention was paid to studying the convenience of including nuclear energy in the generating mix. The comparative assessment of six expansion plans was made by means of the Position Vector of Minimum Regret Analysis (PVMRA) decision analysis tool. The expansion plans were ranked according to seven decision criteria which consider internal costs, the economic impact associated with incremental fuel prices, diversity, external costs, the foreign capital fraction, the carbon-free fraction, and the external costs of severe accidents. A set of data for the calculation of the last criterion was obtained from a report of the European Commission. We found that, with the external costs included in the optimization process of WASP-IV, better electric expansion plans with lower total (internal + external) generating costs were obtained. On the other hand, the plans which included the participation of nuclear power plants were in general relatively more attractive than the plans that did not. (Author)

  7. A Rational Decision Maker with Ordinal Utility under Uncertainty: Optimism and Pessimism

    OpenAIRE

    Han, Ji

    2009-01-01

    In game theory and artificial intelligence, decision making models often involve maximizing expected utility, which does not respect ordinal invariance. In this paper, the author discusses the possibility of preserving ordinal invariance and still making a rational decision under uncertainty.

  8. Risk Acceptance Criteria and/or Decision optimization

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1996-01-01

    Acceptance criteria applied in practical risk analysis are recapitulated, including the concept of the risk profile. Modelling of risk profiles is illustrated on the basis of compound Poisson process models. The current practice of authoritative acceptance criteria formulation is discussed from a decision theoretical point of view. It is argued that the phenomenon of risk aversion, rather than being of concern to the authority, should be of concern to the owner. Finally it is discussed whether there is an ethical problem in formally capitalising human lives with a positive interest rate. Keywords: Risk acceptance, Risk profile, Compound Poisson model for risk profile, Capitalization of human life, Risk aversion.

  9. Decision making with epistemic uncertainty under safety constraints: An application to seismic design

    Science.gov (United States)

    Veneziano, D.; Agarwal, A.; Karaca, E.

    2009-01-01

    The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project. © 2009 Elsevier Ltd. All rights reserved.

  10. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-04-09

    We study and answer questions related to the complexity of various important problems such as multi-frontal solvers for the hp-adaptive finite element method, sorting, and majority. We advocate the use of dynamic programming as a viable tool to study optimal algorithms for these problems. The main approach used to attack these problems is to model classes of algorithms that may solve a problem using a discrete model of computation, then to define cost functions on this discrete structure that reflect different complexity measures of the represented algorithms. As a last step, dynamic programming algorithms are designed and used to optimize those models (algorithms) and to obtain exact results on the complexity of the studied problems. The first part of the thesis presents a novel model of computation (the element partition tree) that represents a class of algorithms for multi-frontal solvers, along with cost functions reflecting various complexity measures such as time and space. It then introduces dynamic programming algorithms for multi-stage and bi-criteria optimization of element partition trees. In addition, it presents results based on optimal element partition trees for famous benchmark meshes such as meshes with point and edge singularities. New improved heuristics for those benchmark meshes were obtained based on insights from the optimal results found by our algorithms. The second part of the thesis starts by introducing a general problem to which different problems can be reduced, and shows how to use a decision table to model such a problem. We describe how decision trees and decision tests for this table correspond to adaptive and non-adaptive algorithms for the original problem. We present exact bounds on the average time complexity of adaptive algorithms for the eight-element sorting problem. Then bounds on adaptive and non-adaptive algorithms for a variant of the majority problem are introduced. Adaptive algorithms are modeled as decision trees whose depth

  11. Optimal control of raw timber production processes

    Science.gov (United States)

    Ivan Kolenka

    1978-01-01

    This paper demonstrates the possibility of optimal planning and control of timber harvesting activities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...

  12. Optimal Lease Contract for Remanufactured Equipment

    Science.gov (United States)

    Iskandar, B. P.; Wangsaputra, R.; Pasaribu, U. S.; Husniah, H.

    2018-03-01

    In the last two decades, the business of leasing products (or equipment) has grown significantly, and many companies acquire equipment through leasing. In this paper, we propose a new lease contract under which a product (or equipment) is leased for a period of time with a maximum usage per period (e.g. 1 year). This lease contract has only a time limit but no usage limit. If the total usage per period exceeds the maximum usage allowed in the contract, then the customer (as the lessee) is charged an additional cost. In general, the lessor (OEM) provides full coverage of maintenance, including PM and CM, under the lease contract. It is considered that the lessor offers the lease contract for a remanufactured product. We presume that the price of the lease contract for the remanufactured product is much lower than that of a new one, and hence it would be a more attractive option to the customer. The decision problem for the lessee is to select the best option offered that fits its requirements, and the decision problem for the lessor is to find the optimal maintenance effort for a given price of the lease option offered. We first find the optimal decisions independently for each party, and then the joint optimal decisions for both parties.

  13. Optimal Responsible Investment

    DEFF Research Database (Denmark)

    Jessen, Pernille

    The paper studies retail Socially Responsible Investment and portfolio allocation. It extends conventional portfolio theory by allowing for a personal value-based investment decision. When preferences for responsibility enter the framework of mean-variance analysis, it yields an optimal responsible investment model. An example of index investing illustrates the theory. Results show that it is crucial for the responsible investor to consider portfolio risk, expected return, and responsibility simultaneously in order to obtain an optimal portfolio. The model enables responsible investors...

  14. A multi attribute decision making method for selection of optimal assembly line

    Directory of Open Access Journals (Sweden)

    B. Vijaya Ramnath

    2011-01-01

    Full Text Available With globalization, sweeping technological development, and increasing competition, customers are placing greater demands on manufacturers to increase quality, flexibility, and on-time delivery of products at lower cost. Therefore, manufacturers must develop and maintain a high degree of coherence among competitive priorities, order-winning criteria and improvement activities. Thus, production managers are attempting to transform their organizations by adopting familiar and beneficial management philosophies such as cellular manufacturing (CM), lean manufacturing (LM), green manufacturing (GM), total quality management (TQM), agile manufacturing (AM), and just-in-time manufacturing (JIT). The main objective of this paper is to propose an optimal assembly method for an engine manufacturer's assembly line in India. Currently, the Indian manufacturer follows the traditional assembly method, in which the raw materials for assembly are kept along the sides of the conveyor line. It consumes more floor space, more work-in-process inventory, more operator walking time and more operator walking distance per day. In order to reduce the above-mentioned wastes, lean kitting assembly is suggested by some managers. Another group of managers suggests JIT assembly, as it incurs much lower inventory cost compared with other types of assembly processes. Hence, a multi-attribute decision-making model, namely the analytical hierarchy process (AHP), is applied to analyse the alternative assembly methods based on various important factors.
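
    As an illustration of the AHP step (made-up pairwise judgements, not the study's data), the sketch below derives a priority vector for three assembly alternatives from the principal eigenvector of a comparison matrix.

```python
# Minimal AHP sketch (illustrative judgements, not the study's data): the
# priority of each assembly method is the principal eigenvector of a pairwise
# comparison matrix built from the decision factors.
import numpy as np

# Pairwise comparisons of three alternatives (traditional, kitting, JIT) on one criterion,
# using Saaty's 1-9 scale; A[i, j] states how strongly alternative i is preferred over j.
A = np.array([[1.0, 1/3, 1/5],
              [3.0, 1.0, 1/2],
              [5.0, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # normalised priority vector
print(dict(zip(["traditional", "kitting", "JIT"], w.round(3))))
```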

  15. Optimal statistical decisions about some alternative financial models

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; Stummer, W.

    2007-01-01

    Roč. 137, č. 2 (2007), s. 441-471 ISSN 0304-4076 R&D Projects: GA MŠk(CZ) 1M0572; GA ČR GA201/02/1391; GA AV ČR IAA1075403 Institutional research plan: CEZ:AV0Z10750506 Keywords: Black-Scholes-Merton models * Relative entropies * Power divergences * Hellinger integrals * Total variation distance * Bayesian decisions * Neyman-Pearson testing Subject RIV: BD - Theory of Information Impact factor: 1.990, year: 2007

  16. Optimal Decision Fusion for Urban Land-Use/Land-Cover Classification Based on Adaptive Differential Evolution Using Hyperspectral and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Yanfei Zhong

    2017-08-01

    Full Text Available Hyperspectral images and light detection and ranging (LiDAR) data have, respectively, the high spectral resolution and accurate elevation information required for urban land-use/land-cover (LULC) classification. To combine the respective advantages of hyperspectral and LiDAR data, this paper proposes an optimal decision fusion method based on adaptive differential evolution, namely ODF-ADE, for urban LULC classification. In the ODF-ADE framework the normalized difference vegetation index (NDVI), gray-level co-occurrence matrix (GLCM) and digital surface model (DSM) are extracted to form the feature map. Three different classifiers, the maximum likelihood classifier (MLC), support vector machine (SVM) and multinomial logistic regression (MLR), are used to classify the extracted features. To find the optimal weights for the different classification maps, weighted voting is used to obtain the classification result, and the weights of each classification map are optimized by the differential evolution algorithm, which uses a self-adaptive strategy to set its parameters adaptively. The final classification map is obtained after post-processing based on conditional random fields (CRF). The experimental results confirm that the proposed algorithm is very effective in urban LULC classification.
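
    The sketch below shows the weighted-voting idea in miniature (random stand-in probabilities, simple bounds, and scipy's generic differential evolution rather than the self-adaptive ODF-ADE variant): classifier weights are tuned so that the fused map best matches reference labels.

```python
# Sketch of the weighted-voting idea (not the ODF-ADE implementation): classifier
# weights are tuned with scipy's differential evolution so that the fused map best
# matches training labels. Data, bounds and the evolution strategy are simplified.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_pixels, n_classes = 200, 4
labels = rng.integers(0, n_classes, n_pixels)
# Per-classifier class probabilities (MLC, SVM, MLR stand-ins), shape (3, n_pixels, n_classes)
probs = rng.dirichlet(np.ones(n_classes), size=(3, n_pixels))

def fused_error(weights):
    w = np.asarray(weights) / np.sum(weights)
    fused = np.tensordot(w, probs, axes=1)        # weighted soft vote over classifiers
    return np.mean(fused.argmax(axis=1) != labels)

result = differential_evolution(fused_error, bounds=[(0.01, 1.0)] * 3, seed=0)
print(result.x / result.x.sum(), 1.0 - result.fun)   # optimal weights, fused accuracy
```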

  17. Optimal sequence of landfills in solid waste management

    Energy Technology Data Exchange (ETDEWEB)

    Andre, F.J. [Universidad Pablo de Olavide (Spain); Cerda, E. [Universidad Complutense de Madrid (Spain)

    2001-07-01

    Given that landfills are depletable and replaceable resources, the right approach when dealing with landfill management is to design an optimal sequence of landfills rather than to design every single landfill separately. In this paper, we use optimal control models, with mixed elements of both continuous- and discrete-time problems, to determine an optimal sequence of landfills as regards their capacity and lifetime. The resulting optimization problems involve splitting a planning horizon into several subintervals, the lengths of which have to be decided. In each of the subintervals some costs, the amount of which depends on the value of the decision variables, have to be borne. The results obtained may be applied to other economic problems such as private and public investments, consumption decisions on durable goods, etc. (Author)

  18. Joint Ordering and Pricing Decisions for New Repeat-Purchase Products

    Directory of Open Access Journals (Sweden)

    Xiang Wu

    2015-01-01

    Full Text Available This paper studies ordering and pricing problems for new repeat-purchase products. We incorporate the repeat-purchase rate and price effects into the Bass model to characterize the demand pattern. We consider two decision models: (1) a two-stage decision model, in which the sales division chooses a price to maximize the gross profit and the purchasing division subsequently determines an optimal ordering decision to minimize the total cost under the given demand, and (2) a joint decision model, in which the firm makes ordering and pricing decisions simultaneously to maximize the profit. We combine the generalized Bass model with a dynamic lot sizing model to formulate the joint decision model. We apply both models to a specific imported food provided by an online fresh produce retailer in Central China, solve them by Gaussian random-walk and Wagner-Whitin based algorithms, and observe three results. First, joint pricing and ordering decisions bring more significant profits than making pricing and ordering decisions sequentially. Second, a greater initiative in adoption significantly increases the price premium and profit. Finally, the optimal price shows a U-shaped relationship (i.e., it decreases first and increases later), and the profit increases gradually with the repeat-purchase rate when the rate is still not very high.
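
    A toy version of the Wagner-Whitin lot sizing step mentioned above (arbitrary demands and costs, not the retailer's data): dynamic programming chooses the ordering periods that minimise total setup plus holding cost for a known demand pattern, such as one generated by a Bass-type model.

```python
# Hedged sketch of Wagner-Whitin dynamic lot sizing (toy data, not the paper's instance):
# best[t] is the minimal cost of covering demand in periods 0..t-1.
def wagner_whitin(demand, setup_cost, holding_cost):
    n = len(demand)
    best = [0.0] * (n + 1)
    for t in range(1, n + 1):
        candidates = []
        for j in range(t):            # last order placed at period j covers periods j..t-1
            holding = sum(holding_cost * (k - j) * demand[k] for k in range(j, t))
            candidates.append(best[j] + setup_cost + holding)
        best[t] = min(candidates)
    return best[n]

print(wagner_whitin([20, 50, 10, 70, 40], setup_cost=100, holding_cost=1))
```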

  19. Optimization Research of Generation Investment Based on Linear Programming Model

    Science.gov (United States)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operations research and a mathematical method that assists people in carrying out scientific management. GAMS is an advanced simulation and optimization modeling language that combines many complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, the optimized investment decision-making for generation is simulated and analyzed on the basis of a linear programming model. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
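
    A minimal linear program in the same spirit (hypothetical plant data, solved with scipy rather than GAMS): installed capacities are chosen to minimise total cost while covering peak demand.

```python
# Illustrative capacity-investment LP (hypothetical data, not the paper's model).
from scipy.optimize import linprog

cost_per_mw = [1.2, 0.9, 1.5]        # annualised cost per MW for three plant types
max_capacity = [500, 300, 800]       # build limits per plant type (MW)
peak_demand = 1000                   # MW that must be covered

res = linprog(c=cost_per_mw,
              A_ub=[[-1, -1, -1]], b_ub=[-peak_demand],   # total capacity >= demand
              bounds=list(zip([0, 0, 0], max_capacity)),  # per-type build limits
              method="highs")
print(res.x, res.fun)   # optimal capacity mix and minimal total cost
```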

  20. A framework for sensitivity analysis of decision trees.

    Science.gov (United States)

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
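
    The following toy example illustrates the kind of robustness check described (it is not the authors' software): the expected value of each strategy is recomputed under pessimistic perturbations of a chance-node probability to see whether the expected-value-maximizing strategy remains stable.

```python
# Toy sensitivity check on a one-node decision tree (hypothetical payoffs and probabilities).
def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

strategies = {"risky": (0.6, 100.0, -40.0),   # (estimated success probability, payoffs)
              "safe":  (1.0, 30.0, 30.0)}

for shift in (0.0, -0.05, -0.10, -0.15):      # pessimistic perturbations of p_success
    values = {name: expected_value(max(p + shift, 0.0), win, lose)
              for name, (p, win, lose) in strategies.items()}
    best = max(values, key=values.get)
    print(f"shift {shift:+.2f}: best = {best}, values = {values}")
```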

  1. Portfolio optimization and performance evaluation

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Christensen, Michael

    2013-01-01

    Based on an exclusive business-to-business database comprising nearly 1,000 customers, the applicability of portfolio analysis is documented, and it is examined how such an optimization analysis can be used to explore the growth potential of a company. As opposed to any previous analyses, optimal customer portfolios are determined, and it is shown how marketing decision-makers can use this information in their marketing strategies to optimize the revenue growth of the company. Finally, ours is the first analysis which applies portfolio-based methods to measure customer performance, and it is shown how these performance measures complement the optimization analysis.

  2. Neural signatures of experience-based improvements in deterministic decision-making

    OpenAIRE

    Tremel, Joshua J.; Laurent, Patryk A.; Wolk, David A.; Wheeler, Mark E.; Fiez, Julie A.

    2016-01-01

    Feedback about our choices is a crucial part of how we gather information and learn from our environment. It provides key information about decision experiences that can be used to optimize future choices. However, our understanding of the processes through which feedback translates into improved decision-making is lacking. Using neuroimaging (fMRI) and cognitive models of decision-making and learning, we examined the influence of feedback on multiple aspects of decision processes across lear...

  3. Optimization of the air cargo supply chain

    Directory of Open Access Journals (Sweden)

    María Pérez Bernal

    2012-10-01

    Full Text Available Purpose: This paper aims to evaluate and optimize the various operations within the air cargo chain. It seeks to improve the efficiency of the air cargo supply chain and to provide more information to decision-makers so that they can optimize their own fields. Design/methodology/approach: The method used is process simulation modelling software, WITNESS, which provides decision-makers with information about the most relevant parameters subject to optimization. The input for the simulation is obtained from a qualitative analysis of the air cargo supply chain with the agents involved and from a study of external trade by air mode, given that their behaviour depends on the location. The case study focuses on a particular location, the case of Zaragoza Airport (Spain). Findings: This paper demonstrates that the efficiency of the air cargo supply chain can be increased by leveraging several parameters such as bottlenecks, resources or warehouses. Originality/value: It explores the use of simulation modelling software originally intended for manufacturing processes, extended here to support decision-making processes in the area of air cargo.

  4. Fuzzy-like multiple objective multistage decision making

    CERN Document Server

    Xu, Jiuping

    2014-01-01

    Decision making has inspired the reflection of many thinkers since ancient times. With the rapid development of science and society, appropriate dynamic decision making has been playing an increasingly important role in many areas of human activity, including engineering, management, economics and others. In most real-world problems, decision makers usually have to make decisions sequentially at different points in time and space, at different levels for a component or a system, while facing multiple and conflicting objectives and a hybrid uncertain environment where fuzziness and randomness co-exist in the decision making process. This has led to the development of fuzzy-like multiple objective multistage decision making. This book provides a thorough understanding of the concepts of dynamic optimization from a modern perspective and presents the state-of-the-art methodology for modeling, analyzing and solving the most typical multiple objective multistage decision making practical application problems under fuzzy-like un...

  5. Representing Boolean Functions by Decision Trees

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree, named a binary decision diagram or branching program, is widely known in logic design [2, 40]. This representation is equivalent to other forms, and in some cases it is more compact than a table of values or even a formula [44]. Representing a function in the form of a decision tree allows graph algorithms to be applied for various transformations [10]. Decision trees and branching programs are used for the effective hardware [15] and software [5] implementation of functions. For the implementation to be effective, the function representation should have minimal time and space complexity. The average depth of a decision tree characterizes the expected computing time, and the number of nodes in a branching program characterizes the number of functional elements required for implementation. Often these two criteria are incompatible, i.e. there is no solution that is optimal in both time and space complexity. © Springer-Verlag Berlin Heidelberg 2011.

  6. Ship speed optimization: Concepts, models and combined speed-routing scenarios

    DEFF Research Database (Denmark)

    Psaraftis, Harilaos N.; Kontovas, Christos A.

    2014-01-01

    The purpose of this paper is to clarify some important issues as regards ship speed optimization at the operational level and to develop models that optimize ship speed for a spectrum of routing scenarios in a single-ship setting. The paper's main contribution is the incorporation of those fundamental parameters and other considerations that weigh heavily in a ship owner's or charterer's speed decision and in his routing decision, wherever relevant. Various examples are given so as to illustrate the properties of the optimal solution and the various trade-offs that are involved.

  7. Decision aids for multiple-decision disease management as affected by weather input errors.

    Science.gov (United States)

    Pfender, W F; Gent, D H; Mahaffee, W F; Coop, L B; Fox, A D

    2011-06-01

    Many disease management decision support systems (DSSs) rely, exclusively or in part, on weather inputs to calculate an indicator for disease hazard. Error in the weather inputs, typically due to forecasting, interpolation, or estimation from off-site sources, may affect model calculations and management decision recommendations. The extent to which errors in weather inputs affect the quality of the final management outcome depends on a number of aspects of the disease management context, including whether management consists of a single dichotomous decision, or of a multi-decision process extending over the cropping season(s). Decision aids for multi-decision disease management typically are based on simple or complex algorithms of weather data which may be accumulated over several days or weeks. It is difficult to quantify accuracy of multi-decision DSSs due to temporally overlapping disease events, existence of more than one solution to optimizing the outcome, opportunities to take later recourse to modify earlier decisions, and the ongoing, complex decision process in which the DSS is only one component. One approach to assessing importance of weather input errors is to conduct an error analysis in which the DSS outcome from high-quality weather data is compared with that from weather data with various levels of bias and/or variance from the original data. We illustrate this analytical approach for two types of DSS, an infection risk index for hop powdery mildew and a simulation model for grass stem rust. Further exploration of analysis methods is needed to address problems associated with assessing uncertainty in multi-decision DSSs.

  8. Optimally Locating MARFORRES Units

    OpenAIRE

    Salmeron, Javier; Dell, Rob

    2015-01-01

    The U.S. Marine Forces Reserve (USMCR, MARFORRES) is conducting realignment studies where discretionary changes may benefit from formal mathematical analysis. This study has developed an optimization tool to guide and/or support Commander, MARFORRES (CMFR) decisions. A prototype of the optimization tool has been tested with data from the units and Reserve Training Centers (RTCs) in the San Francisco, CA and Sacramento, CA areas. Prepared for: MARFORRES, POC:...

  9. Decision tree ensembles for online operation of large smart grids

    International Nuclear Information System (INIS)

    Steer, Kent C.B.; Wirth, Andrew; Halgamuge, Saman K.

    2012-01-01

    Highlights: ► We present a new technique for the online control of large smart grids. ► We use a Decision Tree Ensemble in a Receding Horizon Controller. ► Decision Trees can approximate online optimisation approaches. ► Decision Trees can make adjustments to their output in real time. ► The new technique outperforms heuristic online optimisation approaches. - Abstract: Smart grids utilise omnidirectional data transfer to operate a network of energy resources. Associated technologies present operators with greater control over system elements and more detailed information on the system state. While these features may improve the theoretical optimal operating performance, determining the optimal operating strategy becomes more difficult. In this paper, we show how a decision tree ensemble or ‘forest’ can produce a near-optimal control strategy in real time. The approach substitutes the decision forest for the simulation–optimisation sub-routine commonly employed in receding horizon controllers. The method is demonstrated on a small and a large network, and compared to controllers employing particle swarm optimisation and evolutionary strategies. For the smaller network the proposed method performs comparably in terms of total energy usage, but delivers a greater demand deficit. On the larger network the proposed method is superior with respect to all measures. We conclude that the method is useful when the time required to evaluate possible strategies via simulation is high.
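
    A conceptual sketch of the substitution described (synthetic data and a plain scikit-learn forest, not the paper's controller): an ensemble is trained offline on states and the actions returned by a slow optimiser, then queried as a fast stand-in for the simulation-optimisation sub-routine at run time.

```python
# Conceptual sketch (not the paper's controller): a random-forest ensemble is trained
# offline on states and the control actions found by a slow optimiser, then used as a
# fast surrogate inside a receding horizon controller. All data here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
states = rng.uniform(size=(5000, 6))                  # e.g. loads, storage levels, prices
optimal_actions = states @ rng.uniform(size=(6, 2))   # stand-in for offline optimiser output

forest = RandomForestRegressor(n_estimators=100, random_state=1)
forest.fit(states, optimal_actions)

current_state = rng.uniform(size=(1, 6))
print(forest.predict(current_state))                  # near-optimal set-points in real time
```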

  10. Economic irrationality is optimal during noisy decision making.

    Science.gov (United States)

    Tsetsos, Konstantinos; Moran, Rani; Moreland, James; Chater, Nick; Usher, Marius; Summerfield, Christopher

    2016-03-15

    According to normative theories, reward-maximizing agents should have consistent preferences. Thus, when faced with alternatives A, B, and C, an individual preferring A to B and B to C should prefer A to C. However, it has been widely argued that humans can incur losses by violating this axiom of transitivity, despite strong evolutionary pressure for reward-maximizing choices. Here, adopting a biologically plausible computational framework, we show that intransitive (and thus economically irrational) choices paradoxically improve accuracy (and subsequent economic rewards) when decision formation is corrupted by internal neural noise. Over three experiments, we show that humans accumulate evidence over time using a "selective integration" policy that discards information about alternatives with momentarily lower value. This policy predicts violations of the axiom of transitivity when three equally valued alternatives differ circularly in their number of winning samples. We confirm this prediction in a fourth experiment reporting significant violations of weak stochastic transitivity in human observers. Crucially, we show that relying on selective integration protects choices against "late" noise that otherwise corrupts decision formation beyond the sensory stage. Indeed, we report that individuals with higher late noise relied more strongly on selective integration. These findings suggest that violations of rational choice theory reflect adaptive computations that have evolved in response to irreducible noise during neural information processing.

  11. A Decision Support Model and Tool to Assist Financial Decision-Making in Universities

    Science.gov (United States)

    Bhayat, Imtiaz; Manuguerra, Maurizio; Baldock, Clive

    2015-01-01

    In this paper, a model and tool are proposed to assist universities and other mission-based organisations to ascertain systematically the optimal portfolio of projects, in any year, meeting the organisation's risk tolerances and available funds. The model and tool presented build on previous work on university operations and decision support systems…

  12. Decision support system for the optimal location of electrical and electronic waste treatment plants: A case study in Greece

    International Nuclear Information System (INIS)

    Achillas, Ch.; Vlachokostas, Ch.; Moussiopoulos, N.; Banias, G.

    2010-01-01

    Environmentally sound end-of-life management of Electrical and Electronic Equipment has been recognised as a top priority issue internationally, both because of the waste stream's continuously increasing quantities and because of its content of valuable and also hazardous materials. In an effort to manage Waste Electrical and Electronic Equipment (WEEE), adequate infrastructure of treatment and recycling facilities is considered a prerequisite. A critical number of such plants must be installed in order to: (i) accommodate legislative needs, (ii) decrease transportation cost, and (iii) expand the reverse logistics network and cover more areas. However, WEEE recycling infrastructure requires high expenditures, and therefore decision makers need to be most cautious. In this context, special care should be given to the viability of the infrastructure, which is heavily dependent on the facilities' location. To this end, a methodology aimed at the optimal location of Units of Treatment and Recycling is developed, taking into consideration economic together with social criteria, in an effort to interlace local acceptance and financial viability. For the decision support system's needs, ELECTRE III is adopted as a multicriteria analysis technique. The methodology's applicability is demonstrated with a real-world case study in Greece.

  13. Poly-optimization: a paradigm in engineering design in mechatronics

    Energy Technology Data Exchange (ETDEWEB)

    Tarnowski, Wojciech [Koszalin University of Technology, Department of Control and Driving Systems, Institute of Mechatronics, Nanotechnology and Vacuum Technique, Koszalin (Poland); Krzyzynski, Tomasz; Maciejewski, Igor; Oleskiewicz, Robert [Koszalin University of Technology, Department of Mechatronics and Applied Mechanics, Institute of Mechatronics, Nanotechnology and Vacuum Technique, Koszalin (Poland)

    2011-02-15

    The paper deals with Engineering Design, understood as a general methodology of the design process. It is assumed that a designer has to solve a design task as an inverse problem in an iterative way. After each iteration, a decision should be taken on the basis of information that is called a centre of integration in a systematic design system. For this purpose, poly-optimal solutions may be used. Poly-optimization is presented and contrasted with Multi Attribute Decision Making, and the set of poly-optimal solutions is defined. Then Mechatronics is defined and its characteristics are given, to show that the mechatronic design process vitally needs CAD tools. Three examples are quoted to demonstrate the key role of poly-optimization in mechatronic design. (orig.)

  14. Optimal pricing of non-utility generated electric power

    International Nuclear Information System (INIS)

    Siddiqi, S.N.; Baughman, M.L.

    1994-01-01

    The importance of an optimal pricing policy for pricing non-utility generated power is pointed out in this paper. An optimal pricing policy leads to benefits for all concerned: the utility, industry, and the utility's other customers. In this paper, it is shown that reliability differentiated real-time pricing provides an optimal non-utility generated power pricing policy, from a societal welfare point of view. Firm capacity purchase, and hence an optimal price for purchasing firm capacity, are an integral part of this pricing policy. A case study shows that real-time pricing without firm capacity purchase results in improper investment decisions and higher costs for the system as a whole. Without explicit firm capacity purchase, the utility makes greater investment in capacity addition in order to meet its reliability criteria than is socially optimal. It is concluded that the non-utility generated power pricing policy presented in this paper and implied by reliability differentiated pricing policy results in social welfare-maximizing investment and operation decisions

  15. Economic comparison of common treatment protocols and J5 vaccination for clinical mastitis in dairy herds using optimized culling decisions.

    Science.gov (United States)

    Kessels, J A; Cha, E; Johnson, S K; Welcome, F L; Kristensen, A R; Gröhn, Y T

    2016-05-01

    This study used an existing dynamic optimization model to compare costs of common treatment protocols and J5 vaccination for clinical mastitis in US dairy herds. Clinical mastitis is an infection of the mammary gland causing major economic losses in dairy herds due to reduced milk production, reduced conception, and increased risk of mortality and culling for infected cows. Treatment protocols were developed to reflect common practices in dairy herds. These included targeted therapy following pathogen identification, and therapy without pathogen identification using a broad-spectrum antimicrobial or treating with the cheapest treatment option. The cost-benefit of J5 vaccination was also estimated. Effects of treatment were accounted for as changes in treatment costs, milk loss due to mastitis, milk discarded due to treatment, and mortality. Following ineffective treatments, secondary decisions included extending the current treatment, alternative treatment, discontinuing treatment, and pathogen identification followed by recommended treatment. Average net returns for treatment protocols and vaccination were generated using an existing dynamic programming model. This model incorporates cow and pathogen characteristics to optimize management decisions to treat, inseminate, or cull cows. Of the treatment protocols where 100% of cows received recommended treatment, pathogen-specific identification followed by recommended therapy yielded the highest average net returns per cow per year. Out of all treatment scenarios, the highest net returns were achieved with selecting the cheapest treatment option and discontinuing treatment, or alternate treatment with a similar spectrum therapy; however, this may not account for the full consequences of giving nonrecommended therapies to cows with clinical mastitis. Vaccination increased average net returns in all scenarios. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. A decision support system for planning biomass-based energy production

    Energy Technology Data Exchange (ETDEWEB)

    Frombo, Francesco; Robba, Michela [DIST, Department of Communication, Computer and System Sciences, University of Genoa, Via Opera Pia 13, 16145 Genova (Italy); Renewable Energy Laboratory, Modelling and Optimization, Via A. Magliotto 2, 17100 Savona (Italy); Minciardi, Riccardo; Sacile, Roberto [DIST, Department of Communication, Computer and System Sciences, University of Genoa, Via Opera Pia 13, 16145 Genova (Italy)

    2009-03-15

    Environmental decision support systems (EDSS) are recognized as valuable tools for environmental planning and management. In this paper, a geographic information system (GIS)-based EDSS for the optimal planning of forest biomass use for energy production is presented. A user-friendly interface allows the creation of Scenarios and the running of the developed decision and environmental models. In particular, the optimization model regards decisions over a long-term period (e.g. years) and includes decision variables related to plant locations, conversion processes (pyrolysis, gasification, combustion), and harvested biomass. Moreover, different energy products and different definitions of the harvesting and pre-treatment operations are taken into account. The correct management of the forest is considered through specific constraints, security factors, and procedures for parcel selection. The EDSS features and capabilities are described in detail, with specific reference to a case study. Discussion and further research are reported. (author)

  17. Economic optimization in new distribution system construction

    International Nuclear Information System (INIS)

    Freese, J.

    1994-01-01

    The substantial capital investment and the long-term nature of extension projects make it necessary, in particular for local utilities, to prepare their construction projects intensively. Against this background, the PC program MAFIOSY for calculating and optimizing the economics of pipeline extension projects has been developed to facilitate the decision-making process and to ensure an optimum decision. The optimum structure of a distribution network to be designed for a new service area is defined using the four-phase method set out below: Situation Audit; Determination of Potential; Determination of Economic and Technical Parameters; Optimization. (orig.)

  18. Impaired strategic decision making in schizophrenia.

    Science.gov (United States)

    Kim, Hyojin; Lee, Daeyeol; Shin, Young-Min; Chey, Jeanyung

    2007-11-14

    Adaptive decision making in dynamic social settings requires frequent re-evaluation of choice outcomes and revision of strategies. This requires an array of multiple cognitive abilities, such as working memory and response inhibition. Thus, the disruption of such abilities in schizophrenia can have significant implications for social dysfunctions in affected patients. In the present study, 20 schizophrenia patients and 20 control subjects completed two computerized binary decision-making tasks. In the first task, the participants played a competitive zero-sum game against a computer in which the predictable choice behavior was penalized and the optimal strategy was to choose the two targets stochastically. In the second task, the expected payoffs of the two targets were fixed and unaffected by the subject's choices, so the optimal strategy was to choose the target with the higher expected payoff exclusively. The schizophrenia patients earned significantly less money during the first task, even though their overall choice probabilities were not significantly different from the control subjects. This was mostly because patients were impaired in integrating the outcomes of their previous choices appropriately in order to maintain the optimal strategy. During the second task, the choices of patients and control subjects displayed more similar patterns. This study elucidated the specific components in strategic decision making that are impaired in schizophrenia. The deficit, which can be characterized as strategic stiffness, may have implications for the poor social adjustment in schizophrenia patients.

  19. 2D Decision-Making for Multi-Criteria Design Optimization

    National Research Council Canada - National Science Library

    Engau, A; Wiecek, M. M

    2006-01-01

    The high dimensionality encountered in engineering design optimization due to large numbers of performance criteria and specifications leads to cumbersome and sometimes unachievable tradeoff analyses...

  20. Food processing optimization using evolutionary algorithms | Enitan ...

    African Journals Online (AJOL)

    Evolutionary algorithms are widely used in single and multi-objective optimization. They are easy to use and provide solution(s) in one simulation run. They are used in food processing industries for decision making. Food processing presents constrained and unconstrained optimization problems. This paper reviews the ...

  1. CENTRAL PLATEAU REMEDIATION OPTIMIZATION STUDY

    Energy Technology Data Exchange (ETDEWEB)

    BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.; ZINSLI, L. C.; CUSACK, L. J.

    2012-09-19

    The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that result in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.

  2. Joint Ordering and Pricing Decisions for New Repeat-Purchase Products

    OpenAIRE

    Wu, Xiang; Zhang, Jinlong

    2015-01-01

    This paper studies ordering and pricing problems for new repeat-purchase products. We incorporate the repeat-purchase rate and price effects into the Bass model to characterize the demand pattern. We consider two decision models: (1) two-stage decision model, in which the sales division chooses a price to maximize the gross profit and the purchasing division determines an optimal ordering decision to minimize the total cost under a given demand subsequently, and (2) joint decision model, in w...

  3. Does training family physicians in shared decision making promote optimal use of antibiotics for acute respiratory infections? Study protocol of a pilot clustered randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Côté Luc

    2007-11-01

    Full Text Available Abstract Background In North America, although it varies according to the specific type of acute respiratory infection (ARI), use of antibiotics is estimated to be well above the expected prevalence of bacterial infections. The objective of this pilot clustered randomized controlled trial (RCT) is to assess the feasibility of a larger clustered RCT aiming at evaluating the impact of DECISION+, a continuing professional development (CPD) program in shared decision making, on the optimal use of antibiotics in the context of ARI. Methods/design This pilot study is a cluster RCT conducted with family physicians from Family Medicine Groups (FMGs) in the Quebec City area, Canada. Participating FMGs are randomised to an immediate DECISION+ group, a CPD program in shared decision making (experimental group), or a delayed DECISION+ group (control group). Data collection involves recruiting five patients consulting for ARI per physician from both study groups before (Phase 1) and after (Phase 2) exposure of the experimental group to the DECISION+ program, and after exposure of the control group to the DECISION+ program (Phase 3). The primary outcome measures to assess the feasibility of a larger RCT include: (1) the proportion of contacted FMGs that agree to participate; (2) the proportion of recruited physicians who participate in the DECISION+ program; (3) the level of satisfaction of physicians regarding DECISION+; and (4) the proportion of missing data in each data collection phase. Levels of agreement of the patient-physician dyad on the Decisional Conflict Scale and physicians' prescription profiles for ARI are assessed as secondary outcome measures. Discussion This study protocol is informative for researchers and clinicians interested in designing and/or conducting clustered RCTs with FMGs regarding the training of physicians in shared decision making. Trial Registration ClinicalTrials.gov Identifier: NCT00354315

  4. Joint Inventory, Pricing, and Advertising Decisions with Surplus and Stockout Loss Aversions

    Directory of Open Access Journals (Sweden)

    Bing-Bing Cao

    2016-01-01

    Newsvendor models that consider decision-makers' behavioral factors have remained a fruitful research area in operations management over the past decade. In this paper, we extend the current literature to examine joint inventory, pricing, and advertising decisions under loss aversion in the newsvendor setting. The purpose is to explore how loss aversion affects the optimal policy for order quantity, price, and advertising effort level. We present an integrated utility model that measures both the economic payoff and the loss-aversion utility of the newsvendor, where surplus loss aversion and stockout loss aversion are first defined and quantified separately. We then analyze the optimality conditions of the integrated model under the exogenous and endogenous price cases. In the exogenous price case, a unique optimal policy exists and is given in closed form. In the endogenous price case, the optimal policy is determined under mild conditions; we also provide the solutions when the order quantity factor or the advertising effort level is fixed. In addition, a sensitivity analysis shows that the loss aversions affect the optimal decisions on order quantity, price, and advertising effort level in a systematic way.
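
    The following sketch illustrates the exogenous-price case of a loss-averse newsvendor: surplus loss and stockout loss are weighted by separate loss-aversion coefficients, and a simple grid search over the order quantity maximizes a Monte Carlo estimate of the integrated utility. The utility form, demand distribution, and parameter values are assumptions made for illustration and differ from the exact model in the cited paper.

```python
# Loss-averse newsvendor under an exogenous price (assumed utility form).
# Surplus loss (leftover units) and stockout loss (unmet demand) carry
# separate loss-aversion coefficients; expected utility is estimated by
# Monte Carlo over a normal demand distribution.
import random
import statistics

def expected_utility(q, price=15.0, cost=8.0, salvage=3.0,
                     lam_surplus=1.5, lam_stockout=2.0,
                     mu=100.0, sigma=20.0, n_samples=5000, seed=1):
    rng = random.Random(seed)
    utils = []
    for _ in range(n_samples):
        d = max(0.0, rng.gauss(mu, sigma))
        sales = min(q, d)
        profit = price * sales + salvage * max(0.0, q - d) - cost * q
        surplus_loss = lam_surplus * (cost - salvage) * max(0.0, q - d)
        stockout_loss = lam_stockout * (price - cost) * max(0.0, d - q)
        utils.append(profit - surplus_loss - stockout_loss)
    return statistics.mean(utils)

if __name__ == "__main__":
    # Grid search over candidate order quantities
    best_q = max(range(60, 141, 5), key=expected_utility)
    print("order quantity with highest estimated utility:", best_q)
```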

  5. Tax-Rate Biases in Tax-Planning Decisions: Experimental Evidence

    OpenAIRE

    Amberger, Harald; Eberhartinger, Eva; Kasper, Helmut

    2016-01-01

    Contrary to standard economic theory, recent empirical findings suggest that firms do not always engage in economically optimal tax planning. We conduct a laboratory experiment and find robust evidence that decision biases offer a behavioral explanation for suboptimal tax planning. When facing time pressure in an intra-group cross-border financing decision, subjects apply heuristics based on the salience of statutory tax rates. This leads decision makers to underestimate the effects of tax-ba...

  6. Optimization in the decommissioning of uranium tailings

    International Nuclear Information System (INIS)

    1987-06-01

    This report examines in detail the problem of choosing the optimal decommissioning approach for uranium mill tailings sites. Various decision methods are discussed and evaluated, and their applications to similar decision problems are summarized. The report includes, by means of a demonstration, a step-by-step guide to how a number of selected techniques can be applied to a decommissioning problem. The strengths and weaknesses of the various methods are highlighted. A decision system approach is recommended for its flexibility and its incorporation of many of the strengths found in other decision methods.

  7. Model based decision support for planning of road maintenance

    NARCIS (Netherlands)

    van Harten, Aart; Worm, J.M.; Worm, J.M.

    1996-01-01

    In this article we describe a Decision Support Model, based on Operational Research methods, for the multi-period planning of maintenance of bituminous pavements. This model is a tool for the road manager to assist in generating an optimal maintenance plan for a road. Optimal means: minimising the

  8. A Visualization Technique for Accessing Solution Pool in Interactive Methods of Multiobjective Optimization

    OpenAIRE

    Filatovas, Ernestas; Podkopaev, Dmitry; Kurasova, Olga

    2015-01-01

    Interactive methods of multiobjective optimization repetitively derive Pareto optimal solutions based on the decision maker's preference information and present the obtained solutions for his/her consideration. Some interactive methods save the obtained solutions into a solution pool and, at each iteration, allow the decision maker to consider any of the solutions obtained earlier. This feature contributes to the flexibility of exploring the Pareto optimal set and learning about the op...

  9. Analysis and decision making method for radiation accident situation

    International Nuclear Information System (INIS)

    Jammet, H.; Hamard, J.

    1975-01-01

    Decisions on the application of countermeasures in accident situations must take into account the cost of these countermeasures and the feasibility of reducing the exposure. A contribution to the solution of this problem, based on the application of the principle of choice rationalization and on the optimization of the decision-making method, is presented. [fr]

  10. Total Path Length and Number of Terminal Nodes for Decision Trees

    KAUST Repository

    Hussain, Shahid

    2014-01-01

    This paper presents a new tool for the study of relationships between the total path length (average depth) and the number of terminal nodes of decision trees. These relationships are important from the point of view of decision tree optimization.
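
    A minimal sketch of the two quantities discussed above, computed for a toy binary decision tree represented as nested tuples; this representation and the example tree are illustrative assumptions, not the paper's formalism.

```python
# Compute the total path length (sum of terminal-node depths) and the number
# of terminal nodes for a binary decision tree given as nested tuples:
# a terminal node is any non-tuple value, an internal node is (left, right).

def tree_stats(node, depth=0):
    """Return (total_path_length, number_of_terminal_nodes)."""
    if not isinstance(node, tuple):          # terminal node
        return depth, 1
    total, leaves = 0, 0
    for child in node:
        t, n = tree_stats(child, depth + 1)
        total += t
        leaves += n
    return total, leaves

if __name__ == "__main__":
    tree = (("yes", ("yes", "no")), "no")    # toy decision tree
    total_path_length, terminal_nodes = tree_stats(tree)
    print("total path length:", total_path_length)
    print("terminal nodes:", terminal_nodes)
    print("average depth:", total_path_length / terminal_nodes)
```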

  11. Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant

    Directory of Open Access Journals (Sweden)

    Xinhao Jiang

    2012-05-01

    Optimal load distribution (OLD) among the generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China's Geheyan hydropower plant and that they are valuable for assessing the stability of generator units, showing the potential to reduce the number of times units cross vibration areas.
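
    As a toy illustration of the multiple-optimal-solution idea described above, the sketch below enumerates discretized load allocations among three generator units and collects every allocation attaining the minimum total cost (a brute-force stand-in for the dynamic-programming/shortest-path search used in the cited work). The unit cost curves, discretization, and total load are invented; ties are forced by giving two units identical curves.

```python
# Enumerate all discretized load allocations and keep every cost-minimal one,
# illustrating that several optimal allocations can coexist.
import itertools

# Invented "water consumption" (cost) curves; units 1 and 3 are identical,
# which forces ties and hence multiple optimal allocations.
UNITS = [
    lambda p: 0.010 * p * p + 1.0 * p,   # unit 1
    lambda p: 0.012 * p * p + 0.9 * p,   # unit 2
    lambda p: 0.010 * p * p + 1.0 * p,   # unit 3
]
STEP, MAX_LOAD = 10, 100                 # each unit can take 0..100 MW in 10 MW steps

def enumerate_optimal(total_load=160):
    loads = range(0, MAX_LOAD + 1, STEP)
    by_cost = {}
    for alloc in itertools.product(loads, repeat=len(UNITS)):
        if sum(alloc) != total_load:
            continue
        cost = round(sum(f(p) for f, p in zip(UNITS, alloc)), 6)
        by_cost.setdefault(cost, []).append(alloc)
    min_cost = min(by_cost)
    return min_cost, by_cost[min_cost]

if __name__ == "__main__":
    cost, allocations = enumerate_optimal()
    print("minimum total cost:", cost)
    for alloc in allocations:
        print("optimal allocation (MW per unit):", alloc)
```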

  12. Structured decision making for managing pneumonia epizootics in bighorn sheep

    Science.gov (United States)

    Sells, Sarah N.; Mitchell, Michael S.; Edwards, Victoria L.; Gude, Justin A.; Anderson, Neil J.

    2016-01-01

    Good decision-making is essential to conserving wildlife populations. Although there may be multiple ways to address a problem, perfect solutions rarely exist. Managers are therefore tasked with identifying decisions that will best achieve desired outcomes. Structured decision making (SDM) is a method of decision analysis used to identify the most effective, efficient, and realistic decisions while accounting for values and priorities of the decision maker. The stepwise process includes identifying the management problem, defining objectives for solving the problem, developing alternative approaches to achieve the objectives, and formally evaluating which alternative is most likely to accomplish the objectives. The SDM process can be more effective than informal decision-making because it provides a transparent way to quantitatively evaluate decisions for addressing multiple management objectives while incorporating science, uncertainty, and risk tolerance. To illustrate the application of this process to a management need, we present an SDM-based decision tool developed to identify optimal decisions for proactively managing risk of pneumonia epizootics in bighorn sheep (Ovis canadensis) in Montana. Pneumonia epizootics are a major challenge for managers due to long-term impacts to herds, epistemic uncertainty in timing and location of future epizootics, and consequent difficulty knowing how or when to manage risk. The decision tool facilitates analysis of alternative decisions for how to manage herds based on predictions from a risk model, herd-specific objectives, and predicted costs and benefits of each alternative. Decision analyses for 2 example herds revealed that meeting management objectives necessitates specific approaches unique to each herd. The analyses showed how and under what circumstances the alternatives are optimal compared to other approaches and current management. Managers can be confident that these decisions are effective, efficient, and
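
    The formal evaluation step of structured decision making can be illustrated with a simple weighted-scoring comparison of alternatives against objectives, sketched below. The alternatives, objectives, weights, and performance scores are entirely invented and are not taken from the cited bighorn sheep decision tool.

```python
# Toy multi-attribute scoring of management alternatives, illustrating the
# "formally evaluate which alternative best meets the objectives" step of
# structured decision making. All names and numbers are invented.

OBJECTIVE_WEIGHTS = {              # relative importance of each objective (sums to 1)
    "reduce_epizootic_risk": 0.5,
    "minimize_cost": 0.3,
    "maintain_hunting_opportunity": 0.2,
}

# Predicted performance of each alternative on each objective, scaled 0-1 (invented)
ALTERNATIVES = {
    "status_quo": {
        "reduce_epizootic_risk": 0.2, "minimize_cost": 0.9, "maintain_hunting_opportunity": 0.8},
    "reduce_domestic_sheep_contact": {
        "reduce_epizootic_risk": 0.7, "minimize_cost": 0.5, "maintain_hunting_opportunity": 0.7},
    "targeted_removal_and_monitoring": {
        "reduce_epizootic_risk": 0.6, "minimize_cost": 0.4, "maintain_hunting_opportunity": 0.6},
}

def weighted_score(performance):
    return sum(OBJECTIVE_WEIGHTS[obj] * performance[obj] for obj in OBJECTIVE_WEIGHTS)

if __name__ == "__main__":
    ranked = sorted(ALTERNATIVES.items(), key=lambda item: weighted_score(item[1]), reverse=True)
    for name, perf in ranked:
        print(f"{name:32s} weighted score = {weighted_score(perf):.2f}")
```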

  13. The impact of taxation on international assignment decisions: A principal-agent approach

    OpenAIRE

    Martini, Jan Thomas; Niemann, Rainer

    2013-01-01

    In many industries, such as management consulting, IT consulting, or construction, highly qualified employees, i.e., experts or executive managers, have to be assigned to temporary projects. In firms with many employees and various projects, this assignment decision involves a complex optimization procedure. Obviously, the employees' productivities in the respective projects are crucial for the employer's optimal assignment decision, but the assignment can also be affected by risk-incent...
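
    The underlying assignment decision can be illustrated with a small productivity-maximizing matching of experts to projects, sketched below by brute force over permutations. The productivity values are invented, and the tax and risk-incentive effects that are the focus of the cited paper are not modeled.

```python
# Match experts to temporary projects to maximize total productivity.
# Productivity values are invented; taxation and incentive effects from the
# cited paper are not modeled here.
from itertools import permutations

EXPERTS = ["expert_A", "expert_B", "expert_C"]
PROJECTS = ["project_1", "project_2", "project_3"]
PRODUCTIVITY = {   # productivity of expert e on project p (invented)
    ("expert_A", "project_1"): 9, ("expert_A", "project_2"): 4, ("expert_A", "project_3"): 6,
    ("expert_B", "project_1"): 5, ("expert_B", "project_2"): 8, ("expert_B", "project_3"): 3,
    ("expert_C", "project_1"): 6, ("expert_C", "project_2"): 5, ("expert_C", "project_3"): 7,
}

def best_assignment():
    best_value, best_plan = float("-inf"), None
    for perm in permutations(PROJECTS):
        value = sum(PRODUCTIVITY[(e, p)] for e, p in zip(EXPERTS, perm))
        if value > best_value:
            best_value, best_plan = value, list(zip(EXPERTS, perm))
    return best_value, best_plan

if __name__ == "__main__":
    value, plan = best_assignment()
    print("total productivity:", value)
    for expert, project in plan:
        print(f"  {expert} -> {project}")
```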

  14. A cognitive prosthesis for complex decision-making.

    Science.gov (United States)

    Tremblay, Sébastien; Gagnon, Jean-François; Lafond, Daniel; Hodgetts, Helen M; Doiron, Maxime; Jeuniaux, Patrick P J M H

    2017-01-01

    While simple heuristics can be ecologically rational and effective in naturalistic decision making contexts, complex situations require analytical decision making strategies, hypothesis-testing and learning. Sub-optimal decision strategies - using simplified as opposed to analytic decision rules - have been reported in domains such as healthcare, military operational planning, and government policy making. We investigate the potential of a computational toolkit called "IMAGE" to improve decision-making by developing structural knowledge and increasing understanding of complex situations. IMAGE is tested within the context of a complex military convoy management task through (a) interactive simulations, and (b) visualization and knowledge representation capabilities. We assess the usefulness of two versions of IMAGE (desktop and immersive) compared to a baseline. Results suggest that the prosthesis helped analysts in making better decisions, but failed to increase their structural knowledge about the situation once the cognitive prosthesis is removed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Generation Expansion Planning With Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    Energy Technology Data Exchange (ETDEWEB)

    Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui; Pinson, Pierre

    2017-07-01

    Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
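
    A tiny numerical illustration of decision-dependent uncertainty is sketched below: the probability of a favourable wind scenario is assumed to increase with the amount of wind capacity built, and candidate capacities are compared by expected cost through simple enumeration. All numbers and the probability function are invented; the cited paper instead solves a multistage mixed-integer reformulation.

```python
# Decision-dependent uncertainty in a toy expansion-planning problem: the
# probability of the high-wind scenario depends on the wind capacity built.
# All figures are invented and serve only to illustrate the concept.

DEMAND = 100.0                          # MW of load that must always be served
WIND_CAPEX, THERMAL_CAPEX = 0.6, 0.3    # investment cost per MW (relative units)
THERMAL_OPEX = 1.0                      # lifetime operating cost per MW served by thermal

def expected_cost(wind_mw):
    thermal_mw = DEMAND                 # thermal backup sized to cover full demand
    invest = WIND_CAPEX * wind_mw + THERMAL_CAPEX * thermal_mw
    # Decision-dependent probability of the high-wind scenario (assumed form)
    p_high = min(0.9, 0.3 + 0.004 * wind_mw)
    scenarios = [(p_high, 0.8), (1.0 - p_high, 0.3)]   # (probability, wind capacity factor)
    operating = 0.0
    for prob, cap_factor in scenarios:
        wind_energy = min(DEMAND, cap_factor * wind_mw)
        operating += prob * THERMAL_OPEX * (DEMAND - wind_energy)
    return invest + operating

if __name__ == "__main__":
    candidates = range(0, 151, 10)
    best = min(candidates, key=expected_cost)
    print("wind capacity (MW) with lowest expected cost:", best)
    print("expected total cost:", round(expected_cost(best), 2))
```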

  16. An optimization methodology for identifying robust process integration investments under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Elin; Berntsson, Thore [Department of Energy and Environment, Division of Heat and Power Technology, Chalmers University of Technology, SE-412 96 Goeteborg (Sweden); Stroemberg, Ann-Brith [Fraunhofer-Chalmers Research Centre for Industrial Mathematics, Chalmers Science Park, SE-412 88 Gothenburg (Sweden); Patriksson, Michael [Department of Mathematical Sciences, Chalmers University of Technology and Department of Mathematical Sciences, University of Gothenburg, SE-412 96 Goeteborg (Sweden)

    2009-02-15

    Uncertainties in future energy prices and policies strongly affect decisions on investments in process integration measures in industry. In this paper, we present a five-step methodology for the identification of robust investment alternatives incorporating explicitly such uncertainties in the optimization model. Methods for optimization under uncertainty (or, stochastic programming) are thus combined with a deep understanding of process integration and process technology in order to achieve a framework for decision-making concerning the investment planning of process integration measures under uncertainty. The proposed methodology enables the optimization of investments in energy efficiency with respect to their net present value or an environmental objective. In particular, as a result of the optimization approach, complex investment alternatives, allowing for combinations of energy efficiency measures, can be analyzed. Uncertainties as well as time-dependent parameters, such as energy prices and policies, are modelled using a scenario-based approach, enabling the identification of robust investment solutions. The methodology is primarily an aid for decision-makers in industry, but it will also provide insight for policy-makers into how uncertainties regarding future price levels and policy instruments affect the decisions on investments in energy efficiency measures. (author)
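
    The scenario-based idea can be illustrated with a simple screening of investment alternatives across energy-price scenarios, reporting both expected and worst-case net present value to flag robust choices. The figures and alternatives below are invented, and the sketch omits the combinations of measures and policy instruments treated in the cited methodology.

```python
# Scenario-based screening of process integration investments: compute the
# net present value (NPV) of each alternative under several energy-price
# scenarios and report expected and worst-case NPV. All figures are invented.

SCENARIOS = [                      # (probability, energy price in EUR/MWh)
    (0.3, 30.0),
    (0.5, 50.0),
    (0.2, 80.0),
]
DISCOUNT, YEARS = 0.08, 15

ALTERNATIVES = {                   # name: (investment EUR, energy saved MWh/yr)
    "heat_exchanger_network": (1.5e6, 6000.0),
    "new_evaporation_plant":  (6.0e6, 11000.0),
    "do_nothing":             (0.0, 0.0),
}

def npv(investment, saved_mwh, price):
    annuity = sum((1.0 / (1.0 + DISCOUNT)) ** t for t in range(1, YEARS + 1))
    return -investment + annuity * saved_mwh * price

if __name__ == "__main__":
    for name, (inv, saved) in ALTERNATIVES.items():
        npvs = [npv(inv, saved, price) for _, price in SCENARIOS]
        expected = sum(p * v for (p, _), v in zip(SCENARIOS, npvs))
        print(f"{name:25s} expected NPV = {expected / 1e6:6.2f} MEUR, "
              f"worst case = {min(npvs) / 1e6:6.2f} MEUR")
```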

  17. An optimization methodology for identifying robust process integration investments under uncertainty

    International Nuclear Information System (INIS)

    Svensson, Elin; Berntsson, Thore; Stroemberg, Ann-Brith; Patriksson, Michael

    2009-01-01

    Uncertainties in future energy prices and policies strongly affect decisions on investments in process integration measures in industry. In this paper, we present a five-step methodology for the identification of robust investment alternatives incorporating explicitly such uncertainties in the optimization model. Methods for optimization under uncertainty (or, stochastic programming) are thus combined with a deep understanding of process integration and process technology in order to achieve a framework for decision-making concerning the investment planning of process integration measures under uncertainty. The proposed methodology enables the optimization of investments in energy efficiency with respect to their net present value or an environmental objective. In particular, as a result of the optimization approach, complex investment alternatives, allowing for combinations of energy efficiency measures, can be analyzed. Uncertainties as well as time-dependent parameters, such as energy prices and policies, are modelled using a scenario-based approach, enabling the identification of robust investment solutions. The methodology is primarily an aid for decision-makers in industry, but it will also provide insight for policy-makers into how uncertainties regarding future price levels and policy instruments affect the decisions on investments in energy efficiency measures. (author)

  18. Optimizing Maintenance Planning in the Production Industry Using the Markovian Approach

    Directory of Open Access Journals (Sweden)

    B Kareem

    2012-12-01

    Maintenance is an essential activity in every manufacturing establishment, as manufacturing effectiveness depends on the functionality of production equipment and machinery in terms of productivity and operational life. Maintenance cost minimization can be achieved by adopting an appropriate maintenance planning policy. This paper applies the Markovian approach to the maintenance planning decision, thereby generating an optimal maintenance policy from the identified alternatives over a specified period of time. Markov chains, transition matrices, decision processes, and dynamic programming models were formulated for the decision problem related to the maintenance operations of a cable production company. Preventive and corrective maintenance data based on workloads and costs were collected from the company and used in this study. The results showed variability in the choice of the optimal maintenance policy adopted in the case study, and a post-optimality analysis of the process supported this finding. The proposed approach is promising for solving the maintenance scheduling decision problems of the company.
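
    A minimal Markov decision process sketch of the preventive-versus-corrective maintenance decision is given below, solved by value iteration over discounted expected cost. The states, transition probabilities, and costs are invented; the cited study builds its transition matrices and costs from the company's workload and cost data.

```python
# Small Markov decision process for the maintenance planning decision:
# machine states are "good", "worn", "failed"; actions are "run" or "maintain".
# Transition probabilities and costs are invented for illustration.

STATES = ["good", "worn", "failed"]
ACTIONS = ["run", "maintain"]

# P[action][state] -> {next_state: probability}
P = {
    "run":      {"good": {"good": 0.8, "worn": 0.2},
                 "worn": {"worn": 0.6, "failed": 0.4},
                 "failed": {"failed": 1.0}},
    "maintain": {"good": {"good": 1.0},
                 "worn": {"good": 0.9, "worn": 0.1},
                 "failed": {"good": 1.0}},
}
COST = {  # immediate cost of taking an action in a state
    "run":      {"good": 0.0, "worn": 2.0, "failed": 50.0},
    "maintain": {"good": 5.0, "worn": 6.0, "failed": 30.0},
}
GAMMA = 0.95

def value_iteration(iterations=500):
    v = {s: 0.0 for s in STATES}
    for _ in range(iterations):
        # Bellman update: minimal expected discounted cost per state
        v = {s: min(COST[a][s] + GAMMA * sum(p * v[t] for t, p in P[a][s].items())
                    for a in ACTIONS)
             for s in STATES}
    policy = {s: min(ACTIONS,
                     key=lambda a: COST[a][s]
                     + GAMMA * sum(p * v[t] for t, p in P[a][s].items()))
              for s in STATES}
    return v, policy

if __name__ == "__main__":
    values, policy = value_iteration()
    for s in STATES:
        print(f"state {s:6s}: optimal action = {policy[s]:8s}, expected cost = {values[s]:.1f}")
```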

  19. Impact of discretization of the decision variables in the search of optimized solutions for history matching and injection rate optimization; Impacto do uso de variaveis discretas na busca de solucoes otimizadas para o ajuste de historico e distribuicao de vazoes de injecao

    Energy Technology Data Exchange (ETDEWEB)

    Sousa, Sergio H.G. de; Madeira, Marcelo G. [Halliburton Servicos Ltda., Rio de Janeiro, RJ (Brazil)

    2008-07-01

    In the classical operations research arena, there is a notion that the search for optimized solutions in continuous solution spaces is easier than in discrete solution spaces, even when the latter is a subset of the former. In the upstream oil industry, there is additional complexity in optimization problems because there usually are no analytical expressions for the objective function, which therefore requires some form of simulation in order to be evaluated. Thus, the use of metaheuristic optimizers such as scatter search, tabu search, and genetic algorithms is common. In this metaheuristic context, there are advantages in transforming continuous solution spaces into equivalent discrete ones; the goal is usually to speed up the search for optimized solutions. However, these advantages can be masked when the problem has restrictions formed by linear combinations of its decision variables. In order to study these aspects of metaheuristic optimization, two optimization problems are proposed and solved with both continuous and discrete solution spaces: assisted history matching and injection rate optimization. Both cases operate on a model of the Wytch Farm onshore oil field located in England. (author)

  20. Spike-based decision learning of Nash equilibria in two-player games.

    Directory of Open Access Journals (Sweden)

    Johannes Friedrich

    Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change over time due to the co-adaptation of other agents' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for blackjack and the inspector game. It performs optimally according to a pure (deterministic) and a mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
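
    The sketch below is a simplified, non-spiking policy-gradient illustration of two agents learning a mixed-strategy (stochastic) Nash equilibrium in matching pennies, used here as a stand-in for the inspector game; it only conveys the general idea of stochastic strategies following a reward gradient and is not the spiking population model of the cited work.

```python
# Two agents with Bernoulli policies play matching pennies and update their
# action preferences with a REINFORCE-style gradient. The time-averaged
# strategies approach the 0.5/0.5 mixed Nash equilibrium. This is a
# deliberately simplified, non-spiking illustration.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def learn(episodes=50000, lr=0.05, seed=0):
    rng = random.Random(seed)
    theta1 = theta2 = 0.0               # preference for playing "heads"
    avg1 = avg2 = 0.0                   # running averages of the strategies
    for t in range(1, episodes + 1):
        p1, p2 = sigmoid(theta1), sigmoid(theta2)
        a1 = 1 if rng.random() < p1 else 0
        a2 = 1 if rng.random() < p2 else 0
        r1 = 1.0 if a1 == a2 else -1.0  # the "matcher" wins when actions match
        r2 = -r1                        # zero-sum game
        # REINFORCE update: d/dtheta log pi(a) = a - p for a Bernoulli policy
        theta1 += lr * r1 * (a1 - p1)
        theta2 += lr * r2 * (a2 - p2)
        avg1 += (p1 - avg1) / t
        avg2 += (p2 - avg2) / t
    return avg1, avg2

if __name__ == "__main__":
    p1, p2 = learn()
    print(f"time-averaged P(heads): player 1 = {p1:.2f}, player 2 = {p2:.2f} "
          "(mixed Nash equilibrium is 0.5 / 0.5)")
```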