WorldWideScience

Sample records for cost optimized interstellar

  1. Messaging with Cost-Optimized Interstellar Beacons

    Science.gov (United States)

    Benford, James; Benford, Gregory; Benford, Dominic

    2010-06-01

On Earth, how would we build galactic-scale beacons to attract the attention of extraterrestrials, as some have suggested we should do? From the point of view of expense to a builder on Earth, experience shows an optimum trade-off. This emerges by minimizing the cost of producing a desired power density at long range, which determines the maximum range of detectability of a transmitted signal. We derive general relations for cost-optimal aperture and power. For linear dependence of capital cost on transmitter power and antenna area, minimum capital cost occurs when the cost is equally divided between antenna gain and radiated power. For nonlinear power-law dependence, a similar simple division occurs. This is validated in cost data for many systems; industry uses this cost optimum as a rule of thumb. Costs of pulsed cost-efficient transmitters are estimated from these relations by using current cost parameters ($/W, $/m²) as a basis. We show the scaling and give examples of such beacons. Galactic-scale beacons can be built for a few billion dollars with our present technology. Such beacons have narrow "searchlight" beams and short "dwell times" when the beacon would be seen by an alien observer in their sky. More-powerful beacons are more efficient and have economies of scale: cost scales only linearly with range R, not as R², so the number of stars radiated to increases as the square of cost. On a cost basis, they will likely transmit at higher microwave frequencies, ~10 GHz. The natural corridor to broadcast is along the galactic radius or along the local spiral galactic arm we are in. A companion paper asks "If someone like us were to produce a beacon, how should we look for it?"
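
    The equal-division result quoted above can be checked numerically. The sketch below (all cost parameters hypothetical) minimizes a capital cost C = c_A·A + c_P·P subject to a fixed power-aperture product P·A, which is proportional to the delivered power density at a given range:

```python
# Numeric check of the equal-cost-split rule for a beacon:
# capital cost C = c_A * A + c_P * P, with the product P * A
# (proportional to EIRP, hence to power density at fixed range)
# held constant at K. Minimizing C = c_A*A + c_P*K/A over A gives
# A* = sqrt(c_P * K / c_A), at which c_A * A* == c_P * P*.
import math

def optimal_split(c_A, c_P, K):
    """c_A: $/m^2 of aperture, c_P: $/W of power, K: required P*A (W*m^2)."""
    A = math.sqrt(c_P * K / c_A)   # analytic minimizer of c_A*A + c_P*K/A
    P = K / A
    return A, P, c_A * A, c_P * P

A, P, antenna_cost, power_cost = optimal_split(c_A=500.0, c_P=3.0, K=1e9)
assert abs(antenna_cost - power_cost) < 1e-6 * antenna_cost  # equal division
```

    Perturbing the aperture in either direction while holding P·A fixed raises the total cost, which is the rule-of-thumb the abstract describes.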

  3. Heliostat cost optimization study

    Science.gov (United States)

    von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus

    2016-05-01

    This paper presents a methodology for a heliostat cost optimization study. First different variants of small, medium sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. Eventually the levelised electricity costs for a reference power tower plant are calculated. Before each annual simulation run the heliostat field is optimized. Calculated LCOEs are then used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
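
    The selection step described above (comparing heliostat variants by levelised cost of electricity rather than by unit cost alone) can be illustrated with a toy calculation. All numbers below (capital cost, O&M, annual yield, discount rate, lifetime) are invented for illustration and are not from the study:

```python
# Illustrative LCOE comparison across heliostat-field variants.
# LCOE = (annualized CAPEX + annual O&M) / annual energy. A cheaper
# heliostat is not necessarily the best choice once its optical
# quality (and hence annual yield) is accounted for.
def lcoe(capex, opex_per_year, annual_mwh, discount=0.07, years=25):
    # capital recovery factor annualizes the up-front investment
    crf = discount * (1 + discount)**years / ((1 + discount)**years - 1)
    return (capex * crf + opex_per_year) / annual_mwh  # $/MWh

small = lcoe(capex=180e6, opex_per_year=3.0e6, annual_mwh=4.2e5)
large = lcoe(capex=150e6, opex_per_year=2.5e6, annual_mwh=3.9e5)  # worse optics
best_variant = "small" if small < large else "large"
```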

  4. Starship Sails Propelled by Cost-Optimized Directed Energy

    Science.gov (United States)

    Benford, J.

    Microwave and laser-propelled sails are a new class of spacecraft using photon acceleration. It is the only method of interstellar flight that has no physics issues. Laboratory demonstrations of basic features of beam-driven propulsion, flight, stability (`beam-riding'), and induced spin, have been completed in the last decade, primarily in the microwave. It offers much lower cost probes after a substantial investment in the launcher. Engineering issues are being addressed by other applications: fusion (microwave, millimeter and laser sources) and astronomy (large aperture antennas). There are many candidate sail materials: carbon nanotubes and microtrusses, beryllium, graphene, etc. For acceleration of a sail, what is the cost-optimum high power system? Here the cost is used to constrain design parameters to estimate system power, aperture and elements of capital and operating cost. From general relations for cost-optimal transmitter aperture and power, system cost scales with kinetic energy and inversely with sail diameter and frequency. So optimal sails will be larger, lower in mass and driven by higher frequency beams. Estimated costs include economies of scale. We present several starship point concepts. Systems based on microwave, millimeter wave and laser technologies are of equal cost at today's costs. The frequency advantage of lasers is cancelled by the high cost of both the laser and the radiating optic. Cost of interstellar sailships is very high, driven by current costs for radiation source, antennas and especially electrical power. The high speeds necessary for fast interstellar missions make the operating cost exceed the capital cost. Such sailcraft will not be flown until the cost of electrical power in space is reduced orders of magnitude below current levels.
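
    The scaling claim above (system cost proportional to sail kinetic energy and inversely proportional to sail diameter and beam frequency) can be written as a toy relation; the constant k and all inputs below are purely illustrative, not the paper's numbers:

```python
# Toy version of the quoted scaling: cost ~ kinetic energy / (D * f).
# Doubling the sail diameter D or the beam frequency f halves the
# system cost at fixed mission kinetic energy; the constant k is
# purely illustrative.
def system_cost(mass_kg, v_ms, D_m, f_hz, k=1e-2):
    kinetic = 0.5 * mass_kg * v_ms**2   # sail kinetic energy, J
    return k * kinetic / (D_m * f_hz)

base = system_cost(mass_kg=100.0, v_ms=3e7, D_m=100.0, f_hz=100e9)
# larger, higher-frequency-driven sails come out cheaper:
assert abs(system_cost(100.0, 3e7, 200.0, 100e9) - base / 2) < 1e-9 * base
```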

  5. Starship Sails Propelled by Cost-Optimized Directed Energy

    CERN Document Server

    Benford, James

    2011-01-01

    Microwave propelled sails are a new class of spacecraft using photon acceleration. It is the only method of interstellar flight that has no physics issues. Laboratory demonstrations of basic features of beam-driven propulsion, flight, stability ('beam-riding'), and induced spin, have been completed in the last decade, primarily in the microwave. It offers much lower cost probes after a substantial investment in the launcher. Engineering issues are being addressed by other applications: fusion (microwave, millimeter and laser sources) and astronomy (large aperture antennas). There are many candidate sail materials: carbon nanotubes and microtrusses, graphene, beryllium, etc. For acceleration of a sail, what is the cost-optimum high power system? Here the cost is used to constrain design parameters to estimate system power, aperture and elements of capital and operating cost. From general relations for cost-optimal transmitter aperture and power, system cost scales with kinetic energy and inversely with sail di...

  6. Wind Electrolysis: Hydrogen Cost Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Saur, G.; Ramsden, T.

    2011-05-01

    This report describes a hydrogen production cost analysis of a collection of optimized central wind based water electrolysis production facilities. The basic modeled wind electrolysis facility includes a number of low temperature electrolyzers and a co-located wind farm encompassing a number of 3MW wind turbines that provide electricity for the electrolyzer units.

  7. Cost Overrun Optimism: Fact or Fiction

    Science.gov (United States)

    2016-02-29

Cost Overrun Optimism: FACT or FICTION? Maj David D. Christensen, USAF. Program managers are advocates by...3: 254–271. A Publication of the Defense Acquisition University http://www.dau.mil Cost Overrun Optimism: Fact or Fiction? According to Gansler...generally confirmed. From as early as the 10 percent completion point, the optimism of the projected cost overrun at completion is apparent. Throughout

  8. Portfolio Optimization Model with Transaction Costs

    Institute of Scientific and Technical Information of China (English)

    Shu-ping Chen; Chong Li; Sheng-hong Li; Xiong-wei Wu

    2002-01-01

The purpose of the article is to formulate, under the l∞ risk measure, a model of portfolio selection with transaction costs and then investigate the optimal strategy within the proposed model. The characterization of an optimal strategy and an efficient algorithm for finding it are given.

  9. Cost Optimization of Product Families using Analytic Cost Models

    DEFF Research Database (Denmark)

    Brunø, Thomas Ditlev; Nielsen, Peter

    2012-01-01

This paper presents a new method for analysing the cost structure of a mass customized product family. The method uses linear regression and backwards selection to reduce the complexity of a data set describing a number of historical product configurations and incurred costs. By reducing the data set, the configuration variables which best describe the variation in product costs are identified. The method is tested using data from a Danish manufacturing company and the results indicate that the method is able to identify the most critical configuration variables. The method can be applied in product family redesign projects focusing on cost reduction to identify which modules contribute the most to cost variation and should thus be optimized.

  11. Cost optimal levels for energy performance requirements

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike;

    This report summarises the work done within the Concerted Action EPBD from December 2010 to April 2011 in order to feed into the European Commission's proposal for a common European procedure for a Cost-Optimal methodology under the Directive on the Energy Performance of Buildings (recast) 2010/3...

  12. Optimal hedging of Derivatives with transaction costs

    CERN Document Server

    Aurell, E; Aurell, Erik; Muratore-Ginanneschi, Paolo

    2005-01-01

We investigate the optimal strategy over a finite time horizon for a portfolio of stock and bond and a derivative in a multiplicative Markovian market model with transaction costs (friction). The optimization problem is solved by a Hamilton-Jacobi-Bellman equation, which by the verification theorem has well-behaved solutions if certain conditions on a potential are satisfied. In the case at hand, these conditions simply imply arbitrage-free ("Black-Scholes") pricing of the derivative. While pricing is hence not changed, friction allows the portfolio to fluctuate around a delta hedge. In the limit of weak friction, we determine the optimal control to essentially be of two parts: a strong control, which tries to bring the stock-and-derivative portfolio towards a Black-Scholes delta hedge; and a weak control, which moves the portfolio by adding or subtracting a Black-Scholes hedge. For simplicity we assume growth-optimal investment criteria and quadratic friction.

  13. OPTIMIZING HIGHWAY PROFILES FOR INDIVIDUAL COST ITEMS

    Directory of Open Access Journals (Sweden)

    Essam Dabbour

    2013-12-01

According to current practice, the vertical alignment of a highway segment is usually selected by creating a profile showing the actual ground surface and selecting initial and final grades to minimize the overall cut and fill quantities. Those grades are connected together with a parabolic curve. However, in many highway construction or rehabilitation projects, the cost of cut may be substantially different from that of fill (e.g. in extremely hard soils where blasting is needed to cut the soil). In that case, an optimization process is needed to minimize the overall cost of cut and fill rather than their quantities. This paper proposes a nonlinear optimization model to select optimum vertical curve parameters based on individual cost items of cut and fill. The parameters selected by the optimization model include the initial grade, the final grade, the station and elevation of the point of vertical curvature (PVC), and the station and elevation of the point of vertical tangency (PVT). The model is flexible enough to include any design constraints for particular design problems. Different application examples are provided using the Evolutionary Algorithm in Microsoft Excel's Solver add-in. The application examples validated the model and demonstrated its advantage of minimizing the overall cost rather than the overall volume of cut and fill.
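
    A minimal sketch of the idea, not the paper's Solver-based model: pick the initial and final grades of a parabolic vertical curve by grid search so as to minimize an asymmetric cut/fill cost. The ground profile and unit costs below are hypothetical:

```python
# Grid-search sketch: with cut much more expensive than fill (e.g. rock
# requiring blasting), the cost-optimal grades differ from the
# volume-balanced ones. Ground elevations and unit costs are made up.
def curve_elev(x, elev_pvc, g1, g2, L):
    """Parabolic vertical curve: grade g1 at x=0, grade g2 at x=L."""
    return elev_pvc + g1 * x + (g2 - g1) * x**2 / (2 * L)

def earthwork_cost(g1, g2, ground, L, elev_pvc, c_cut=9.0, c_fill=3.0):
    n = len(ground)
    cost = 0.0
    for i, gz in enumerate(ground):
        x = L * i / (n - 1)
        dz = gz - curve_elev(x, elev_pvc, g1, g2, L)
        cost += c_cut * dz if dz > 0 else c_fill * (-dz)  # cut vs fill
    return cost

ground = [100.0, 101.5, 103.0, 102.0, 100.5, 99.0, 99.5]  # hypothetical profile
grades = [i / 1000 for i in range(-60, 61)]               # -6% .. +6%
best = min((earthwork_cost(g1, g2, ground, 600.0, 100.0), g1, g2)
           for g1 in grades for g2 in grades)
```

    A real model would also optimize the PVC/PVT stations and elevations and enforce design-speed constraints; the grid search here only shows why minimizing cost and minimizing volume give different answers.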

  14. GASIFICATION PLANT COST AND PERFORMANCE OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Samuel S. Tam

    2002-05-01

The goal of this series of design and estimating efforts was to start from the as-built design and actual operating data from the DOE sponsored Wabash River Coal Gasification Repowering Project and to develop optimized designs for several coal and petroleum coke IGCC power and coproduction projects. First, the team developed a design for a grass-roots plant equivalent to the Wabash River Coal Gasification Repowering Project to provide a starting point and a detailed mid-year 2000 cost estimate based on the actual as-built plant design and subsequent modifications (Subtask 1.1). This unoptimized plant has a thermal efficiency of 38.3% (HHV) and a mid-year 2000 EPC cost of 1,681 $/kW. This design was enlarged and modified to become a Petroleum Coke IGCC Coproduction Plant (Subtask 1.2) that produces hydrogen, industrial grade steam, and fuel gas for an adjacent Gulf Coast petroleum refinery in addition to export power. A structured Value Improving Practices (VIP) approach was applied to reduce costs and improve performance. The base case (Subtask 1.3) Optimized Petroleum Coke IGCC Coproduction Plant increased the power output by 16% and reduced the plant cost by 23%. The study looked at several options for gasifier sparing to enhance availability. Subtask 1.9 produced a detailed report on this availability analysis study. The Subtask 1.3 Next Plant, which retains the preferred spare gasification train approach, only reduced the cost by about 21%, but it has the highest availability (94.6%) and produces power at 30 $/MW-hr (at a 12% ROI). Thus, such a coke-fueled IGCC coproduction plant could fill a near term niche market. In all cases, the emissions performance of these plants is superior to the Wabash River project. Subtasks 1.5A and B developed designs for single-train coal and coke-fueled power plants. This side-by-side comparison of these plants, which contain the Subtask 1.3 VIP enhancements, showed their similarity both in design and cost (1,318 $/kW for the

  15. Cost Optimization and Technology Enablement COTSAT-1

    Science.gov (United States)

    Spremo, Stevan; Lindsay, Michael C.; Klupar, Peter Damian; Swank, Aaron J.

    2010-01-01

Cost Optimized Test of Spacecraft Avionics and Technologies (COTSAT-1) is an ongoing spacecraft research and development project at NASA Ames Research Center (ARC). The space industry was a hotbed of innovation and development at its birth. Many new technologies were developed for and first demonstrated in space. In the recent past this trend has reversed, with most new technology funding and research being driven by private industry. Most of the recent advances in spaceflight hardware have come from the cell phone industry, with a lag of about 10 to 15 years from lab demonstration to in-space usage. NASA has started a project designed to address this problem. The prototype spacecraft known as Cost Optimized Test of Spacecraft Avionics and Technologies (COTSAT-1) and CheapSat work to reduce these issues. This paper highlights the approach taken by NASA Ames Research Center to achieve significant subsystem cost reductions. The COTSAT-1 research system design incorporates the use of COTS (Commercial Off The Shelf), MOTS (Modified Off The Shelf), and GOTS (Government Off The Shelf) hardware for a remote sensing spacecraft. The COTSAT-1 team demonstrated building a fully functional spacecraft for $500K in parts and $2.0M in labor. The COTSAT-1 system, including a selected science payload, is described within this paper. Many of the advancements identified in the process of cost reduction can be attributed to the use of a one-atmosphere pressurized structure to house the spacecraft components. By using COTS hardware, the spacecraft program can utilize investments already made by commercial vendors. This ambitious project development philosophy/cycle has yielded the COTSAT-1 flight hardware. This paper highlights the advancements of the COTSAT-1 spacecraft leading to the delivery of the current flight hardware, which is now located at NASA Ames Research Center. This paper also addresses the plans for COTSAT-2.

  16. GASIFICATION PLANT COST AND PERFORMANCE OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Sheldon Kramer

    2003-09-01

This project developed optimized designs and cost estimates for several coal and petroleum coke IGCC coproduction projects that produced hydrogen, industrial grade steam, and hydrocarbon liquid fuel precursors in addition to power. The as-built design and actual operating data from the DOE sponsored Wabash River Coal Gasification Repowering Project was the starting point for this study that was performed by Bechtel, Global Energy and Nexant under Department of Energy contract DE-AC26-99FT40342. First, the team developed a design for a grass-roots plant equivalent to the Wabash River Coal Gasification Repowering Project to provide a starting point and a detailed mid-year 2000 cost estimate based on the actual as-built plant design and subsequent modifications (Subtask 1.1). This non-optimized plant has a thermal efficiency to power of 38.3% (HHV) and a mid-year 2000 EPC cost of 1,681 $/kW. This design was enlarged and modified to become a Petroleum Coke IGCC Coproduction Plant (Subtask 1.2) that produces hydrogen, industrial grade steam, and fuel gas for an adjacent Gulf Coast petroleum refinery in addition to export power. A structured Value Improving Practices (VIP) approach was applied to reduce costs and improve performance. The base case (Subtask 1.3) Optimized Petroleum Coke IGCC Coproduction Plant increased the power output by 16% and reduced the plant cost by 23%. The study looked at several options for gasifier sparing to enhance availability. Subtask 1.9 produced a detailed report on this availability analysis study. The Subtask 1.3 Next Plant, which retains the preferred spare gasification train approach, only reduced the cost by about 21%, but it has the highest availability (94.6%) and produces power at 30 $/MW-hr (at a 12% ROI). Thus, such a coke-fueled IGCC coproduction plant could fill a near term niche market. In all cases, the emissions performance of these plants is superior to the Wabash River project. 
Subtasks 1.5A and B developed designs for

  17. Costs and Difficulties of Interstellar 'Messaging' and the Need for International Debate on Potential Risks

    Science.gov (United States)

    Billingham, J.; Benford, James

    We advocate international consultations on societal and technical issues to address the risk of Messaging to Extraterrestrial Intelligence (METI) transmissions, and a moratorium on future transmissions until such issues are resolved. Instead, we recommend continuing to conduct SETI by listening, with no innate risk, while using powerful new search systems to give a better total probability of detection of beacons and messages than METI for the same cost, and with no need for a long obligatory wait for a response. Realistically, beacons are costly. In light of recent work on the economics of contact by radio, we offer alternatives to the current standard methods of SETI searches. METI transmissions to date are faint and very unlikely to be detected, even by nearby stars. We show that historical leakage from Earth has been undetectable for Earth-scale receiver systems. Future space microwave and laser power systems will likely be more detectable.

  18. Interstellar PAHs

    Science.gov (United States)

    Allamandola, Louis J.; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    Tremendous strides have been made in our understanding of interstellar material over the past twenty years thanks to significant, parallel developments in two closely related areas: observational astronomy and laboratory astrophysics. Twenty years ago the composition of interstellar dust was largely guessed at and the notion of abundant, gas phase, polycyclic aromatic hydrocarbons (PAHs) anywhere in the interstellar medium (ISM) considered impossible. Today the dust composition of the diffuse and dense ISM is reasonably well constrained and the spectroscopic case for interstellar PAHs, shockingly large molecules by early interstellar chemistry standards, is very strong.

  19. Introducing cost-optimal levels for energy requirements

    DEFF Research Database (Denmark)

    Wittchen, Kim Bjarne; Thomsen, Kirsten Engelund

    2012-01-01

The recast of the Directive on the Energy Performance of Buildings (EPBD) states that Member States (MS) must ensure that minimum energy performance requirements for buildings are set “with a view to achieve cost-optimal levels”, and that the cost-optimal level must be calculated in accordance with a comparative methodology. The ultimate goal of this is to achieve a cost-optimal improvement of buildings’ energy performance (new and existing) in reality.

  20. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    OpenAIRE

    Mihaela STET

    2016-01-01

The paper deals with the problems of logistics costs, highlighting some methods for estimating and determining specific costs for different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also revealed.

  1. Optimal pricing decision model based on activity-based costing

    Institute of Scientific and Technical Information of China (English)

    王福胜; 常庆芳

    2003-01-01

In order to find out the applicability of the optimal pricing decision model based on the conventional cost behavior model after activity-based costing has given a strong shock to the conventional cost behavior model and its assumptions, detailed analyses have been made using the activity-based cost behavior and cost-volume-profit analysis model. It is concluded from these analyses that the theory behind the construction of the optimal pricing decision model is still tenable under activity-based costing, but the conventional optimal pricing decision model must be modified as appropriate to the activity-based cost behavior model and cost-volume-profit analysis model; an optimal pricing decision model is really a product pricing decision model constructed by following the economic principle of maximizing profit.

  2. An optimal promotion cost control model for a markovian manpower ...

    African Journals Online (AJOL)

An optimal promotion cost control model for a markovian manpower system. ... A theory concerning the existence of an optimal promotion control strategy for controlling a Markovian ...

  3. Introducing cost-optimal levels for energy requirements

    DEFF Research Database (Denmark)

    Wittchen, Kim Bjarne; Thomsen, Kirsten Engelund

    2012-01-01

    The recast of the Directive on the Energy Performance of Buildings (EPBD) states that Member States (MS) must ensure that minimum energy performance requirements for buildings are set “with a view to achieve cost-optimal levels”, and that the cost-optimal level must be calculated in accordance wi...

  4. Cost-optimal levels for energy performance requirements

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike

    2011-01-01

    The CA conducted a study on experiences and challenges for setting cost optimal levels for energy performance requirements. The results were used as input by the EU Commission in their work of establishing the Regulation on a comparative methodology framework for calculating cost optimal levels o...

  5. Gasification Plant Cost and Performance Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Samuel Tam; Alan Nizamoff; Sheldon Kramer; Scott Olson; Francis Lau; Mike Roberts; David Stopek; Robert Zabransky; Jeffrey Hoffmann; Erik Shuster; Nelson Zhan

    2005-05-01

As part of an ongoing effort of the U.S. Department of Energy (DOE) to investigate the feasibility of gasification on a broader level, Nexant, Inc. was contracted to perform a comprehensive study to provide a set of gasification alternatives for consideration by the DOE. Nexant completed the first two tasks (Tasks 1 and 2) of the "Gasification Plant Cost and Performance Optimization Study" for the DOE's National Energy Technology Laboratory (NETL) in 2003. These tasks evaluated the use of the E-GAS™ gasification technology (now owned by ConocoPhillips) for the production of power either alone or with polygeneration of industrial grade steam, fuel gas, hydrocarbon liquids, or hydrogen. NETL expanded this effort in Task 3 to evaluate Gas Technology Institute's (GTI) fluidized bed U-GAS® gasifier. The Task 3 study had three main objectives. The first was to examine the application of the gasifier at an industrial application in upstate New York using a Southeastern Ohio coal. The second was to investigate the GTI gasifier in a stand-alone lignite-fueled IGCC power plant application, sited in North Dakota. The final goal was to train NETL personnel in the methods of process design and systems analysis. These objectives were divided into five subtasks. Subtasks 3.2 through 3.4 covered the technical analyses for the different design cases. Subtask 3.1 covered management activities, and Subtask 3.5 covered reporting. Conceptual designs were developed for several coal gasification facilities based on the fluidized bed U-GAS® gasifier. Subtask 3.2 developed two base case designs for industrial combined heat and power facilities using Southeastern Ohio coal that will be located at an upstate New York location. One base case design used an air-blown gasifier, and the other used an oxygen-blown gasifier in order to evaluate their relative economics. Subtask 3.3 developed an advanced design for an air

  6. Optimal Joint Liability Lending with Costly Peer Monitoring

    NARCIS (Netherlands)

    Carli, F.; Uras, R.B.

    2014-01-01

    This paper characterizes an optimal group loan contract with costly peer monitoring. Using a fairly standard moral hazard framework, we show that the optimal group lending contract could exhibit a joint-liability scheme. However, optimality of joint-liability requires the involvement of a group lead

  7. Exploiting cost distributions for query optimization

    NARCIS (Netherlands)

    Waas, F.; Pellenkoft, A.J.

    1998-01-01

    Large-scale query optimization is, besides its practical relevance, a hard test case for optimization techniques. Since exact methods cannot be applied due to the combinatorial explosion of the search space, heuristics and probabilistic strategies have been deployed for more than a decade. However,

  8. Optimization of life cycle management costs

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, A.K. [Stone & Webster Engineering Corp., Boston, MA (United States)]

    1994-12-31

As can be seen from the case studies, an LCM program needs to address and integrate, in the decision process, technical, political, licensing, remaining plant life, component replacement cycles, and financial issues. As part of the LCM evaluations, existing plant programs, ongoing replacement projects, short and long-term operation and maintenance issues, and life extension strategies must be considered. The development of the LCM evaluations and the cost benefit analysis identifies critical technical and life cycle cost parameters. These "discoveries" result from the detailed and effective use of a consistent, quantifiable, and well documented methodology. The systematic development and implementation of a plant-wide LCM program provides for an integrated and structured process that leads to the most practical and effective recommendations. Through the implementation of these recommendations and cost effective decisions, the overall power production costs can be controlled and ultimately lowered.

  9. Synthesis and Stochastic Assessment of Cost-Optimal Schedules

    OpenAIRE

    Mader, A.H.; Bohnenkamp, H.C.; Usenko, Y.S.; Jansen, D.N.; Hurink, J.L.; Hermanns, H.

    2006-01-01

    We treat the problem of generating cost-optimal schedules for orders with individual due dates and cost functions based on earliness/tardiness. Orders can run in parallel in a resource-constrained manufacturing environment, where resources are subject to stochastic breakdowns. The goal is to generate schedules while minimizing the expected costs. First, we estimate the distribution of each order type by simulation (assuming a reasonable machine/load model) and derive from the cost-function an...

  10. Optimizing Data Centre Energy and Environmental Costs

    Science.gov (United States)

    Aikema, David Hendrik

    Data centres use an estimated 2% of US electrical power, which accounts for much of their total cost of ownership. This consumption continues to grow, further straining power grids attempting to integrate more renewable energy. This dissertation focuses on assessing and reducing data centre environmental and financial costs. Emissions of projects undertaken to lower data centre environmental footprints can be assessed, and the emission reduction projects compared, using an ISO-14064-2-compliant greenhouse gas reduction protocol outlined herein. I was closely involved with the development of the protocol. Full lifecycle analysis and verifying that projects exceed business-as-usual expectations are addressed, and a test project is described. Consuming power when it is low cost or when renewable energy is available can reduce the financial and environmental costs of computing. Adaptation based on the power price showed 10--50% potential savings in typical cases, and local renewable energy use could be increased by 10--80%. Allowing a fraction of high-priority tasks to proceed unimpeded still allows significant savings. Power grid operators use mechanisms called ancillary services to address variation and system failures, paying organizations to alter power consumption on request. By bidding to offer these services, data centres may be able to lower their energy costs while reducing their environmental impact. If providing contingency reserves, which require only infrequent action, savings of up to 12% were seen in simulations. Greater power cost savings are possible for those ceding more control to the power grid operator. Coordinating multiple data centres adds overhead, and altering the data centre at which requests are processed based on changes in the financial or environmental costs of power is likely to increase this overhead. Tests of virtual machine migrations showed that in some cases there was no visible increase in power use while in others power use
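    The price-adaptive scheduling idea summarized above (run deferrable work when power is cheap) can be sketched as follows. This is an illustrative toy, not the dissertation's actual scheduler; the tariff and workload numbers are invented.

```python
def schedule_deferrable(prices, hours_needed):
    """Run a deferrable job in the cheapest `hours_needed` hours.

    prices       -- 24 hourly electricity prices ($/kWh), hypothetical
    hours_needed -- hours of compute the job requires
    Returns (chosen_hours, cost_per_kW_of_load)."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    chosen = sorted(ranked[:hours_needed])
    return chosen, sum(prices[h] for h in chosen)

# Invented day-ahead tariff: cheap nights, an afternoon peak.
prices = [0.04] * 6 + [0.08] * 8 + [0.15] * 4 + [0.08] * 6
hours, cost = schedule_deferrable(prices, 6)
baseline = sum(prices[8:14])   # the same 6-hour job started at 08:00 instead
print(hours, round(cost, 2), round(baseline, 2))
```

    With this hypothetical tariff, the deferred job runs entirely in the cheap night hours and costs half as much as the price-unaware baseline, which is the kind of saving the abstract reports for price adaptation.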

  11. Life Cycle Cost Optimization of a Bolig+ Zero Energy Building

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna

    included in the current building code, and ten renewable energy supply systems including both on-site and off-site options. The results indicated that although the off-site options have lower life cycle costs than the on-site alternatives, their application would promote renewable technologies over energy......, cost-optimal “zero” energy balance accounts only for the building related energy use....... owners’ approach to it. For this particular target group, the cost is often an obstacle when investing money in environmental or climate friendly products. Therefore, this PhD project took the perspective of a future private ZEB owner to investigate the cost-optimal Net ZEB definition applicable...

  12. Cost Optimizations in Runtime Testing and Diagnosis

    NARCIS (Netherlands)

    Gonzalez Sanchez, A.

    2011-01-01

    Software testing and diagnosis (debugging) is a time-consuming but rather important task for improving software reliability. It is therefore necessary to devise an appropriate verification strategy that not only achieves this reliability goal, but also does this at a minimum cost. Since exhaustive

  14. Refrigerator Optimal Scheduling to Minimise the Cost of Operation

    Directory of Open Access Journals (Sweden)

    Bálint Roland

    2016-12-01

    The cost optimal scheduling of a household refrigerator is presented in this work. The fundamental approach is the model predictive control methodology applied to the piecewise affine model of the refrigerator.

  15. Cost Optimization Through Open Source Software

    Directory of Open Access Journals (Sweden)

    Mark VonFange

    2010-12-01

    The cost of information technology (IT) as a percentage of overall operating and capital expenditures is growing as companies modernize their operations and as IT becomes an increasingly indispensable part of company resources. The price tag associated with IT infrastructure is a heavy one, and, in today's economy, companies need to look for ways to reduce overhead while maintaining quality operations and staying current with technology. With its advancements in availability, usability, functionality, choice, and power, free/libre open source software (F/LOSS) provides a cost-effective means for the modern enterprise to streamline its operations. iXsystems wanted to quantify the benefits associated with the use of open source software at their company headquarters. This article is the outgrowth of our internal analysis of using open source software instead of commercial software in all aspects of company operations.

  16. Optimal investment with fixed refinancing costs

    OpenAIRE

    Cummins, Jason G; Ingmar Nyman

    2001-01-01

    Case studies show that corporate managers seek financial independence to avoid interference by outside financiers. We incorporate this financial xenophobia as a fixed cost in a simple dynamic model of financing and investment. To avoid refinancing in the future, the firm alters its behavior depending on the extent of its financial xenophobia and the realization of a revenue shock. With a sufficiently adverse shock, the firm holds no liquidity. Otherwise, the firm precautionarily saves and hol...

  17. Optimal Covering Tours with Turn Costs

    OpenAIRE

    Arkin, Esther M.; Bender, Michael A.; Demaine, Erik D.; Fekete, Sandor P.; Mitchell, Joseph S.B.; Sethia, Saurabh

    2003-01-01

    We give the first algorithmic study of a class of ``covering tour'' problems related to the geometric Traveling Salesman Problem: Find a polygonal tour for a cutter so that it sweeps out a specified region (``pocket''), in order to minimize a cost that depends mainly on the number of turns. These problems arise naturally in manufacturing applications of computational geometry to automatic tool path generation and automatic inspection systems, as well as arc routing (``postman'') problems w...

  18. Objective function of cost in optimal tolerance allocation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    An objective function model is proposed for cost in optimizing and allocating tolerances with consideration of manufacturing conditions. With the fuzzy comprehensive evaluation method, a manufacturing difficulty coefficient is derived which takes into account several factors affecting the manufacturing cost, including the forming means of the blank, size, machining surface features, operator's skills, and machinability of materials. The coefficient is then converted into a weight factor used in the inverse-square model representing the relationship between cost and tolerance, and, hence, an objective function for cost is established for optimizing and allocating tolerance. The higher the manufacturing difficulty coefficient, the higher the relative manufacturing cost and the weight factor in the tolerance allocation, indicating that the tolerance has a greater effect on the total manufacturing cost and that a larger tolerance should therefore be allocated. Computer-aided tolerance allocation utilizing this model is more convenient, accurate and practicable.
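    An inverse-square cost model of this kind admits a closed-form optimum: minimizing C = Σ w_i/t_i² subject to a fixed tolerance stack-up Σ t_i = T gives, by the Lagrange conditions, t_i proportional to w_i^(1/3), so harder-to-machine features receive larger tolerances, as the abstract argues. A hedged sketch with invented weights and stack-up:

```python
def allocate_tolerances(weights, total_tolerance):
    """Closed-form allocation for the inverse-square cost model.

    Minimizes C = sum(w_i / t_i**2) subject to sum(t_i) = total_tolerance.
    The Lagrange conditions give t_i proportional to w_i**(1/3), so
    features with a higher manufacturing-difficulty weight receive a
    larger share. `weights` are hypothetical difficulty coefficients."""
    cube_roots = [w ** (1.0 / 3.0) for w in weights]
    s = sum(cube_roots)
    return [total_tolerance * c / s for c in cube_roots]

# Three features; the last is hardest to machine (weight 8 = 2**3),
# so it gets twice the tolerance of a weight-1 feature.
t = allocate_tolerances([1.0, 1.0, 8.0], total_tolerance=0.4)
print([round(x, 3) for x in t])   # -> [0.1, 0.1, 0.2]
```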

  19. Optimal Shielding for Minimum Materials Cost of Mass

    Energy Technology Data Exchange (ETDEWEB)

    Woolley, Robert D. [PPPL

    2014-08-01

    Material costs dominate some shielding design problems. This is certainly the case for manned nuclear power space applications for which shielding is essential and the cost of launching by rocket from earth is high. In such situations or in those where shielding volume or mass is constrained, it is important to optimize the design. Although trial and error synthesis methods may succeed a more systematic approach is warranted. Design automation may also potentially reduce engineering costs.

  20. Interstellar Extinction

    Science.gov (United States)

    Gontcharov, G. A.

    2016-12-01

    This review describes our current understanding of interstellar extinction, which differs substantially from the ideas of the 20th century. With infrared surveys of hundreds of millions of stars over the entire sky, such as 2MASS, SPITZER-IRAC, and WISE, we have looked at the densest and most rarefied regions of the interstellar medium at distances of a few kpc from the Sun. Observations at infrared and microwave wavelengths, where the bulk of the interstellar dust absorbs and radiates, have brought us closer to an understanding of the distribution of the dust particles on scales of the Galaxy and the universe. We are in the midst of a scientific revolution in our understanding of the interstellar medium and dust. Progress in, and the key results of, this revolution are still difficult to predict. Nevertheless, (a) a physically justified model has been developed for the spatial distribution of absorbing material over the nearest few kiloparsecs, including the Gould belt as a dust container, which gives an accurate estimate of the extinction for any object just by its galactic coordinates. It is also clear that (b) the interstellar medium contains roughly half the mass of matter in the galactic vicinity of the solar system (the other half is made up of stars, their remnants, and dark matter) and (c) the interstellar medium and, especially, dust, differ substantially in different regions of space and deep space cannot be understood by only studying near space.

  1. Cost-optimized climate stabilisation (OPTIKS)

    Energy Technology Data Exchange (ETDEWEB)

    Leimbach, Marian; Bauer, Nico; Baumstark, Lavinia; Edenhofer, Ottmar [Potsdam Institut fuer Klimafolgenforschung, Potsdam (Germany)

    2009-11-15

    This study analyses the implications of suggestions for the design of post-2012 climate policy regimes on the basis of model simulations. The focus of the analysis, the determination of regional mitigation costs and the technological development in the energy sector, also considers the feedbacks of investment and trade decisions of the regions that are linked by different global markets for emission permits, goods and resources. The analysed policy regimes are primarily differentiated by their allocation of emission rights. Moreover, they represent alternative designs of an international cap and trade system that is geared to meet the 2°C climate target. The present study analyses ambitious climate protection scenarios that require drastic reduction policies (reductions of 60%-80% globally until 2050). Immediate and multilateral action is needed in such scenarios. Given the rather small variance of mitigation costs in major regions like UCA, Europe, MEA and China, a policy regime should be chosen that provides high incentives to join an international agreement for the remaining regions. From this perspective either the C and C scenario (incentive for Russia) is preferable or the multi-stage approach (incentive for Africa and India). (orig.)

  2. Optimizing quality, service, and cost through innovation.

    Science.gov (United States)

    Walker, Kathleen; Allen, Jennifer; Andrews, Richard

    2011-01-01

    With dramatic increases in health care costs and growing concerns about the quality of health care services, nurse executives are seeking ways to transform their organizations to improve operational and financial performance while enhancing quality care and patient safety. Nurse leaders are challenged to meet new cost, quality, and service imperatives, and change cannot be achieved by traditional approaches; it must occur through innovation. Imagine an organization that can mitigate a $56 million loss in revenue and claim the following successes: Increase admissions by 8 a day, a $5.5 million annualized increase, by repurposing existing space. Decrease emergency department holding hours by an average of 174 hours a day, with a labor savings of $502,000 annually. Reduce overall inpatient length of stay by 0.5 day, with total compensation running $4.2 million less than the budget for the first quarter of 2010. Grow emergency department volume 272 visits beyond budget for the first quarter of 2010. Complete admission assessments and diagnostics in 90 minutes. This article addresses how these outcomes were achieved by transforming care delivery, creating a patient transition center, enhancing outreach referrals, and revising admission processes through collaboration and innovation.

  3. Implementing the cost-optimal methodology in EU countries

    DEFF Research Database (Denmark)

    Atanasiu, Bogdan; Kouloumpi, Ilektra; Thomsen, Kirsten Engelund

    Therefore, this study provides more evidence on the implementation of the cost-optimal methodology and highlights the implications of choosing different values for key factors (e.g. discount rates, simulation variants/packages, costs, energy prices) at national levels. The study demonstrates how existing...... national nZEB definitions can be tested for cost-optimality and explores additional implications of the EU decarbonisation and resource efficiency goals. Thus, the study will ultimately contribute to prompt the transition towards the implementation of nZEB by 2020. Project coordination is the Buildings...

  4. Feedback Solution to Optimal Switching Problems With Switching Cost.

    Science.gov (United States)

    Heydari, Ali

    2016-10-01

    The problem of optimal switching between nonlinear autonomous subsystems is investigated in this paper, where the objective is not only to bring the states close to the desired point, but also to adjust the switching pattern, in the sense of penalizing switching occurrences and assigning different preferences to the utilization of different modes. The mode sequence is unspecified and a switching cost term is used in the cost function to penalize each switch. It is shown that once a switching cost is incorporated, the optimal cost-to-go function depends on the subsystem that was active at the previous time step. Afterward, an approximate dynamic programming-based method is developed, which provides an approximation of the optimal solution to the problem in feedback form and for different initial conditions. Finally, the performance of the method is analyzed through numerical examples.
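    The key observation, that with a per-switch penalty the cost-to-go depends on the previously active subsystem, can be seen in a tiny finite-horizon dynamic program. The two-mode dynamics and all numbers below are invented for illustration; this is not the paper's approximate dynamic programming method.

```python
# Toy switched system: mode 0 drives the scalar state x toward 0 by one
# step, mode 1 holds x. Stage cost = |x| plus a penalty for switching
# modes. The value function is indexed by (state, previous mode).
STATES = range(-3, 4)
MODES = (0, 1)
SWITCH_COST = 2.0

def step(x, mode):
    if mode == 0:
        return x - 1 if x > 0 else (x + 1 if x < 0 else 0)
    return x

def solve(horizon):
    # V[(x, prev)] = optimal cost-to-go with `horizon` steps remaining
    V = {(x, m): 0.0 for x in STATES for m in MODES}
    for _ in range(horizon):
        V = {(x, prev): min(
                abs(x) + (SWITCH_COST if u != prev else 0.0)
                + V[(step(x, u), u)]
                for u in MODES)
             for x in STATES for prev in MODES}
    return V

V = solve(horizon=4)
# Starting from x=2 is cheaper if the "drive" mode is already active,
# because no switch penalty has to be paid:
print(V[(2, 0)], V[(2, 1)])
```

    The two printed values differ only through the previous mode, which is exactly the structural property the abstract establishes.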

  5. Optimal Power Cost Management Using Stored Energy in Data Centers

    CERN Document Server

    Urgaonkar, Rahul; Neely, Michael J; Sivasubramaniam, Anand

    2011-01-01

    Since the electricity bill of a data center constitutes a significant portion of its overall operational costs, reducing this has become important. We investigate cost reduction opportunities that arise by the use of uninterrupted power supply (UPS) units as energy storage devices. This represents a deviation from the usual use of these devices as mere transitional fail-over mechanisms between utility and captive sources such as diesel generators. We consider the problem of opportunistically using these devices to reduce the time average electric utility bill in a data center. Using the technique of Lyapunov optimization, we develop an online control algorithm that can optimally exploit these devices to minimize the time average cost. This algorithm operates without any knowledge of the statistics of the workload or electricity cost processes, making it attractive in the presence of workload and pricing uncertainties. An interesting feature of our algorithm is that its deviation from optimality reduces as the...

  6. Optimal Work Effort and Monitoring Cost

    Directory of Open Access Journals (Sweden)

    Tamara Todorova

    2012-12-01

    Using a simple job market equilibrium model we study the relationship between work effort and monitoring by firms. Some other determinants of work effort investigated include the educational level of the worker, the minimum or start-up salary as well as the economic conjuncture. As common logic dictates, optimal work effort increases with the amount of monitoring done by the employer. Quite contrary to common logic, though, we find that at the optimum employers observe and control good workers much more stringently and meticulously than poor workers. This is because under profit maximization most of the employer’s profit and surplus result from good workers and he risks losing a large amount of profit by not observing those. Managers monitor strictly more productive workers, fast learners and those starting at a higher autonomous level of monitoring, as those contribute more substantially to the firm’s profit.

  7. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    Directory of Open Access Journals (Sweden)

    Bolin Kristian

    2011-05-01

    Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set
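    The allocation problem described here can be made concrete with a hedged two-stage version (the paper uses three stages) and invented numbers: choose the number of subjects n and occasions per subject m to minimize the variance of the exposure mean, Var = σ²_B/n + σ²_W/(nm), under a power-law budget constraint.

```python
def best_allocation(budget, c_subj, c_occ, var_between, var_within,
                    exp_subj=1.0, exp_occ=1.0):
    """Grid search: Var(mean) = vb/n + vw/(n*m), subject to
    c_subj*n**exp_subj + c_occ*(n*m)**exp_occ <= budget.
    Power-law exponents allow the non-linear cost scenarios discussed;
    all unit costs and variance components below are made up."""
    best = None
    for n in range(1, 201):
        for m in range(1, 51):
            cost = c_subj * n ** exp_subj + c_occ * (n * m) ** exp_occ
            if cost > budget:
                break
            var = var_between / n + var_within / (n * m)
            if best is None or var < best[0]:
                best = (var, n, m)
    return best

# Cheap subjects and a large between-subjects variance component.
var, n, m = best_allocation(budget=100.0, c_subj=1.0, c_occ=1.0,
                            var_between=4.0, var_within=1.0)
print(n, m, round(var, 4))
```

    With these numbers the optimum measures each of 50 subjects exactly once, reproducing the "one occasion per subject" pattern that the abstract reports for many of its 225 scenarios.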

  8. Optimal Orthogonal Graph Drawing with Convex Bend Costs

    CERN Document Server

    Bläsius, Thomas; Wagner, Dorothea

    2012-01-01

    Traditionally, the quality of orthogonal planar drawings is quantified by either the total number of bends, or the maximum number of bends per edge. However, this neglects that in typical applications, edges have varying importance. Moreover, as bend minimization over all planar embeddings is NP-hard, most approaches focus on a fixed planar embedding. We consider the problem OptimalFlexDraw that is defined as follows. Given a planar graph G on n vertices with maximum degree 4 and for each edge e a cost function cost_e : N_0 --> R defining costs depending on the number of bends on e, compute an orthogonal drawing of G of minimum cost. Note that this optimizes over all planar embeddings of the input graphs, and the cost functions allow fine-grained control on the bends of edges. In this generality OptimalFlexDraw is NP-hard. We show that it can be solved efficiently if 1) the cost function of each edge is convex and 2) the first bend on each edge does not cause any cost (which is a condition similar to the posi...

  9. PSO Based Optimization of Testing and Maintenance Cost in NPPs

    Directory of Open Access Journals (Sweden)

    Qiang Chou

    2014-01-01

    Testing and maintenance activities of safety equipment in Nuclear Power Plants (NPPs) have drawn much attention for risk and cost control. The testing and maintenance activities are often implemented in compliance with the technical specification and maintenance requirements. Technical specification and maintenance-related parameters, that is, allowed outage time (AOT), maintenance period and duration, and so forth, are associated with controlling risk level and operating cost, which need to be minimized. The above problems can be formulated as a constrained multiobjective optimization model, which is widely used in many other engineering problems. Particle swarm optimization (PSO) has proved its capability to solve these kinds of problems. In this paper, we adopt PSO as an optimizer for the multiobjective optimization problem, iteratively trying to improve a candidate solution with regard to a given measure of quality. Numerical results have demonstrated the efficiency of our proposed algorithm.
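    A minimal PSO of the kind the abstract employs can be sketched on a scalarized toy version of the problem: tune a surveillance test interval T to trade off standby unavailability against testing cost. The objective shape is a textbook-style stand-in, not the paper's NPP model, and every number is invented.

```python
import random

def objective(T):
    """Weighted risk+cost for test interval T (hours); toy numbers."""
    failure_rate = 1e-4                   # hypothetical standby failure rate (/h)
    test_cost = 50.0                      # hypothetical cost per test
    unavail = failure_rate * T / 2.0      # mean fractional downtime
    yearly_cost = test_cost * 8760.0 / T  # more frequent tests cost more
    return 1e5 * unavail + yearly_cost

def pso(f, lo, hi, n_particles=20, iters=60, seed=1):
    """Basic global-best PSO on a 1-D box [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                 # per-particle best positions
    gbest = min(xs, key=f)        # swarm-best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

T_opt = pso(objective, 100.0, 5000.0)
# Analytic optimum of 5*T + 438000/T is T* = sqrt(438000/5) ~ 296 h.
print(round(T_opt))
```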

  10. Throughput Optimal Scheduling with Feedback Cost Reduction

    CERN Document Server

    Karaca, Mehmet; Ercetin, Ozgur; Alpcan, Tansu; Boche, Holger

    2012-01-01

    It is well known that opportunistic scheduling algorithms are throughput optimal under full knowledge of channel and network conditions. However, these algorithms achieve a hypothetical achievable rate region which does not take into account the overhead associated with channel probing and feedback required to obtain the full channel state information at every slot. We adopt a channel probing model where $\\beta$ fraction of time slot is consumed for acquiring the channel state information (CSI) of a single channel. In this work, we design a joint scheduling and channel probing algorithm named SDF by considering the overhead of obtaining the channel state information. We analytically prove that when the number of users in the network is greater than 3, then SDF algorithm can achieve $1+\\epsilon$ of the full rate region achieved when all users are probed. We also demonstrate numerically in a realistic simulation setting that this rate region can be achieved by probing only less than 50% of all channels in a CDM...

  11. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    OpenAIRE

    Bolin Kristian; Mathiassen Svend Erik

    2011-01-01

    Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover...

  12. Cost Optimization Using Hybrid Evolutionary Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    B. Kavitha

    2015-07-01

    The main aim of this research is to design a hybrid evolutionary algorithm for minimizing multiple problems of dynamic resource allocation in cloud computing. Resource allocation is one of the big problems in distributed systems when the client wants to decrease the cost of resource allocation for their task. In order to assign resources to the task, the client must consider both the monetary cost and the computational cost, and allocating resources with respect to both costs is difficult. To solve this problem, in this study we split the main task of the client into many subtasks and allocate resources to each subtask instead of selecting a single resource for the main task. The allocation of resources to each subtask is carried out by our proposed hybrid optimization algorithm. Here, we hybridize the Binary Particle Swarm Optimization (BPSO) and Binary Cuckoo Search (BCSO) algorithms, considering monetary cost and computational cost, which helps to minimize the cost for the client. Finally, experimentation is carried out and our proposed hybrid algorithm is compared with the BPSO and BCSO algorithms; the efficiency of the proposed hybrid optimization algorithm is also demonstrated.

  13. Life Cycle Cost Optimization of a BOLIG+ Zero Energy Building

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna

    included in the current building code, and ten renewable energy supply systems including both on-site and off-site options. The results indicated that although the off-site options have lower life cycle costs than the on-site alternatives, their application would promote renewable technologies over energy......, cost-optimal “zero” energy balance accounts only for the building related energy use....... building owners’ approach to it. For this particular target group, the cost is often an obstacle when investing money in environmental or climate friendly products. Therefore, this PhD project took the perspective of a future private ZEB owner to investigate the cost-optimal Net ZEB definition applicable...

  14. Investigations of a Cost-Optimal Zero Energy Balance

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna; Nørgaard, Jesper; Heiselberg, Per

    2012-01-01

    have indicated that with current energy prices and technology, a cost-optimal Net ZEB zero energy balance accounts for only the building related energy use. Moreover, a high user related energy use argues even more in favour of excluding appliances from the zero energy balance......., in particular the types of energy use that should be included in it. Since the user perspective and the cost of energy-efficiency technologies are so crucial for the successful adaptation of energy-conservation solutions, such as the Net ZEB concept, this paper has deployed the Life Cycle Cost (LCC) analysis...... and taken the viewpoint of a private building owner to investigate what types of energy use should be included in the cost-optimal zero energy balance. The analysis is conducted for five renewable energy supply systems and five user profiles with a study case of a multi-storey residential Net ZEB. The results...

  15. Increase Productivity and Cost Optimization in CNC Manufacturing

    Science.gov (United States)

    Musca, Gavril; Mihalache, Andrei; Tabacaru, Lucian

    2016-11-01

    The advantage of technologically assisted design lies in the easy modification of machining technologies to obtain machine alternation, tool changes, working parameter variation, or the modification of loads to which the tools are subjected. By determining tool movement during machining and by using the tool movement speeds needed for both positioning and manufacturing, we are able to compute the required machining time for each component of the machining operation in progress. The present study describes a cost optimization model for machining operations which uses the following components: machine and operator related cost, set-up and adjustment, unproductive costs (idle state), and direct and indirect costs. By using assisted design procedures for manufacturing technologies, we may obtain various variants of the technological model by modifying the machining strategy, tooling, working regimes, or the machine-tool that is used. Simulating those variants allows us to compare and establish the optimal manufacturing variant as well as the most productive one.

  16. Pareto Optimal Insurance Policies in the Presence of Administrative Costs

    OpenAIRE

    Aase, Knut K.

    2010-01-01

    In his classical article in The American Economic Review, Arthur Raviv (1979) examines Pareto optimal insurance contracts when there are ex-post insurance costs c induced by the indemnity I for loss x. Raviv’s main result is that a necessary and sufficient condition for the Pareto optimal deductible to be equal to zero is c′(I) = 0 for all I > 0 (or I = 0). We claim that another type of cost function is called for in household insurance, caused by frequent but relatively small ...
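    Restating the condition quoted in the abstract in compact notation (D* denoting the Pareto optimal deductible):

```latex
% Raviv (1979), as summarized in this abstract: the Pareto optimal
% deductible vanishes exactly when the ex-post administrative cost c
% does not vary with the indemnity I.
D^{*} = 0 \quad\Longleftrightarrow\quad c'(I) = 0 \quad \text{for all } I > 0 .
```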

  17. Optimal Road Capacity Building : Road Planning by Marginal Cost Pricing

    OpenAIRE

    NEMOTO, Toshinori; Misui, Yuki; Kajiwara, Akira

    2009-01-01

    The purpose of this study is to propose a new road planning and financing scheme based on short-term social marginal cost pricing that facilitates the establishment of optimal road standards in the long term. We conducted a simulation analysis based on the proposed planning scheme and observed that the simulation calculated the optimal road capacity in the future, and thus proved that the new planning scheme is feasible.

  18. Optimizing Ice Thermal Storage to Reduce Energy Cost

    Science.gov (United States)

    Hall, Christopher L.

    Energy cost for buildings is an issue of concern for owners across the U.S. The bigger the building, the greater the concern. A part of this is due to the energy required to cool the building and the way in which charges are set when paying for energy consumed during different times of the day. This study will prove that designing ice thermal storage properly will minimize energy cost in buildings. The effectiveness of ice thermal storage as a means to reduce energy costs lies within transferring the time of most energy consumption from on-peak to off-peak periods. Multiple variables go into the equation of finding the optimal use of ice thermal storage and they are all judged with the final objective of minimizing monthly energy costs. This research discusses the optimal design of ice thermal storage and its impact on energy consumption, energy demand, and the total energy cost. A tool for optimal design of ice thermal storage is developed, considering variables such as chiller and ice storage sizes and charging and discharge times. The simulations take place in a four-story building and investigate the potential of Ice Thermal Storage as a resource in reducing and minimizing energy cost for cooling. The simulations test the effectiveness of Ice Thermal Storage implemented into the four-story building in ten locations across the United States.
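    The on-peak/off-peak shift at the heart of this design problem can be illustrated with a back-of-envelope sketch. The tariff, cooling load, and ice-making efficiency penalty below are all invented, and the model ignores demand charges and storage sizing details that the study itself optimizes.

```python
def daily_cost(load_kwh, price, storage_kwh=0.0, ice_penalty=1.2):
    """load_kwh, price: 24-entry hourly cooling load (kWh) and tariff ($/kWh).
    Up to storage_kwh of the most expensive hours' cooling is served by ice
    made off-peak at the day's cheapest price, with an `ice_penalty`
    efficiency loss for making and storing the ice (all numbers invented)."""
    remaining = list(load_kwh)
    shifted = 0.0
    for h in sorted(range(24), key=lambda h: price[h], reverse=True):
        take = min(remaining[h], storage_kwh - shifted)
        remaining[h] -= take
        shifted += take
    return (sum(remaining[h] * price[h] for h in range(24))
            + shifted * ice_penalty * min(price))

price = [0.05] * 7 + [0.12] * 5 + [0.20] * 6 + [0.12] * 6   # toy TOU tariff
load = [0] * 8 + [30] * 10 + [10] * 6                       # toy cooling load
base = daily_cost(load, price)
with_ice = daily_cost(load, price, storage_kwh=120.0)
print(round(base, 2), round(with_ice, 2))
```

    Even with the 20% efficiency penalty, moving the most expensive cooling hours onto off-peak ice-making lowers the daily energy cost, which is the mechanism the study exploits.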

  19. Production Cost Optimization Model Based on CODP in Mass Customization

    Directory of Open Access Journals (Sweden)

    Yanhong Qin

    2013-01-01

    The key for enterprises implementing the postponement strategy is the right decision on the location of the Customer Order Decoupling Point (CODP), so as to fully achieve the scope economies of mass customization and the scale economies of mass production. To deal with the production cost optimization problem of a postponement system under various CODP locations, a basic production cost model and its M/M/1 extended model are proposed and compared, so as to optimize the overall production cost of the postponement system. Production modes can be classified as MTS (make to stock), ATO (assemble to order), MTO (make to order) and ETO (engineering to order) according to the inventory location, and the postponed production system considered here includes manufacturing cost, semi-finished inventory cost and the customer waiting cost caused by delayed delivery. By Matlab simulation, we can compute the optimal location of the CODP in each production mode, which can provide some management insight for the manufacturer to decide the right production mode and utilize resources efficiently.
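    The cost trade-off governing the CODP location can be sketched with a simplified version of the M/M/1 extended model described here: stages upstream of the CODP are built to stock and incur semi-finished inventory holding cost, while stages downstream are performed to order and add M/M/1 sojourn time (W = 1/(μ − λ)) to the customer's wait, penalized linearly. All rates, values, and weights below are invented.

```python
def total_cost(k, mus, lam, values, wait):
    """k = CODP index: stages 0..k-1 run to stock, stages k..N-1 to order."""
    inventory = sum(values[:k])                          # semi-finished holding
    sojourn = sum(1.0 / (mu - lam) for mu in mus[k:])    # M/M/1: W = 1/(mu-lam)
    return inventory + wait * sojourn

mus = [2.0, 2.0, 1.5, 1.5]   # hypothetical service rates per stage
lam = 1.0                    # order arrival rate (stable: lam < mu everywhere)
values = [1, 2, 4, 8]        # value added per stage -> holding cost upstream
costs = [round(total_cost(k, mus, lam, values, wait=3.0), 2) for k in range(5)]
best_k = min(range(5), key=lambda k: costs[k])
print(costs, best_k)
```

    With these invented numbers the cheapest choice holds stock through the first three stages and performs only the final, slowest stage to order, showing how the CODP balances inventory cost against customer waiting cost.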

  20. Interstellar holography

    NARCIS (Netherlands)

    Walker, M. A.; Koopmans, L. V. E.; Stinebring, D. R.; van Straten, W.

    2008-01-01

    The dynamic spectrum of a radio pulsar is an in-line digital hologram of the ionized interstellar medium. It has previously been demonstrated that such holograms permit image reconstruction, in the sense that one can determine an approximation to the complex electric field values as a function of Do

  1. The Application of Maximum Principle in Supply Chain Cost Optimization

    Directory of Open Access Journals (Sweden)

    Zhou Ling

    2013-09-01

    In this paper, using the maximum principle to analyze dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products and obtain the optimality conditions and results. On this basis, we further study the effect of the location of the CODP on the total cost, and the relation between the CODP, inventory policy and demand type, through data simulation. The simulation results show that the CODP is located in the downstream of the product life cycle and is a linear function of the product life cycle. The results indicate that the demand forecast is the main factor influencing the total cost, and that the mode of production according to the demand forecast is the deciding factor of the total cost. The model can also reflect the relation between the total cost of the two-stage supply chain, inventory, and demand.

  2. THE ATTAINABILITY OF THE PORTFOLIO OPTIMIZATION UNDER TRANSACTION COSTS

    Institute of Scientific and Technical Information of China (English)

    XU Shimeng; ZHANG Yuzhong

    2000-01-01

    We propose the concepts of a normalized market and average hedging strategies, and discuss the attainability of portfolio optimization under transaction costs and multiple stocks, using extended forms of the martingale approach and duality methods. Incidentally, we give an upper bound on the expectation of the portfolio at the terminal investment time.

  3. Cost optimal river dike design using probabilistic methods

    NARCIS (Netherlands)

    Bischiniotis, K.; Kanning, W.; Jonkman, S.N.

    2014-01-01

    This research focuses on the optimization of river dikes using probabilistic methods. Its aim is to develop a generic method that automatically estimates the failure probabilities of many river dike cross-sections and gives the one with the least cost, taking into account the boundary conditions and


  5. Cost-Optimal Structural Reliability of Offshore Wind Turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Tarp-Johansen, N. J.

    2004-01-01

    Wind turbines for electricity production are increasing drastically these years, both in production capability and in size. Offshore wind turbines with an electricity production of more than 5 MW are now being produced. The main failure modes are fatigue failure of wings, hub, shaft and main tower, local buckling of the main tower, and failure of the foundation. This paper considers reliability-based optimization of the tower and foundation. Different formulations of the objective function are considered, including benefits, building costs of the wind turbine, inspection and maintenance costs, and failure costs. Different reconstruction policies in case of failure are considered, including systematic reconstruction in case of failure, no reconstruction, and inspection and maintenance strategies. Illustrative examples for offshore wind turbines are presented, and as a part of the results optimal...


  7. Optimized low-cost-array field designs for photovoltaic systems

    Energy Technology Data Exchange (ETDEWEB)

    Post, H.N.; Carmichael, D.C.; Castle, J.A.

    1982-01-01

    As manager of the US Department of Energy Photovoltaic Systems Definition Project, Sandia National Laboratories is engaged in a comprehensive program to define and develop array field subsystems which can achieve the lowest possible lifecycle costs. The major activity of this program is described, namely, the design and development of optimized, modular array fields for photovoltaic (PV) systems. As part of this activity, design criteria and performance requirements for specific array subsystems including support structures, foundations, intermodule connections, field wiring, lightning protection, system grounding, site preparation, and monitoring and control have been defined and evaluated. Similarly, fully integrated flat-panel array field designs, optimized for lowest lifecycle costs, have been developed for system sizes ranging from 20 to 500 kW/sub p/. Key features, subsystem requirements, and projected costs for these array field designs are presented and discussed.

  8. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
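
The signal-to-noise measure described above can be illustrated with a minimal sketch of the Taguchi "larger-is-better" S/N ratio; the two design settings and their measurements are hypothetical:

```python
import math

def sn_larger_is_better(samples):
    """Taguchi 'larger-is-better' signal-to-noise ratio in dB:
    S/N = -10 * log10(mean(1 / y_i^2)) over repeated measurements y_i."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in samples) / len(samples))

# two hypothetical design settings, each measured under three noise conditions
setting_a = [9.8, 10.1, 10.0]   # consistent performance
setting_b = [7.0, 12.9, 10.1]   # similar mean, far more variation

# the more robust (noise-insensitive) setting scores the higher S/N ratio
print(sn_larger_is_better(setting_a) > sn_larger_is_better(setting_b))  # True
```

In a full robust-design study this ratio would be computed for every row of an orthogonal array to pick the most robust factor levels.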

  9. Interstellar Travel. (Latest citations from the Aerospace Database)

    Science.gov (United States)

    1996-01-01

    The bibliography contains citations concerning travel between the stars. Topics include cost considerations, hyperspace navigation, exploration, and propulsion systems for vehicles to be used in interstellar travel. Human factor issues and social aspects of interstellar travel are also discussed.

  10. Optimal Cost-Analysis and Design of Circular Footings

    Directory of Open Access Journals (Sweden)

    Prabir K. Basudhar

    2012-10-01

    Full Text Available The study pertains to the optimal cost-analysis and design of a circular footing subjected to generalized loadings using sequential unconstrained minimization technique (SUMT in conjunction with Powell’s conjugate direction method for multidimensional search and quadratic interpolation method for one dimensional minimization. The cost of the footing is minimized satisfying all the structural and geotechnical engineering design considerations. As extended penalty function method has been used to convert the constrained problem into an unconstrained one, the developed technique is capable of handling both feasible and infeasible initial design vector. The net saving in cost starting from the best possible manual design ranges from 10 to 20 %. For all practical purposes, the optimum cost is independent of the initial design point. It was observed that for better convergence, the transition parameter  should be chosen at least 100 times the initial penalty parameter kr .
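
The penalty-function idea behind SUMT can be sketched on a one-variable toy problem (all numbers hypothetical): an exterior quadratic penalty is added to the cost, the penalized problem is minimized, and the penalty parameter is increased until the constraint is enforced. A crude grid search stands in for Powell's conjugate directions and quadratic interpolation:

```python
def minimize_1d(f, lo=0.0, hi=10.0, steps=20000):
    """Crude grid search; stands in for the multidimensional search and
    one-dimensional minimization methods used in the paper."""
    return min((lo + (hi - lo) * i / steps for i in range(steps + 1)), key=f)

def sumt(cost, violation, r0=1.0, growth=10.0, iters=6):
    """Sequential unconstrained minimization: solve a series of
    exterior-penalty problems with an increasing penalty parameter r."""
    x, r = None, r0
    for _ in range(iters):
        x = minimize_1d(lambda v: cost(v) + r * violation(v) ** 2)
        r *= growth
    return x

# toy footing problem: material cost grows with size x, design needs x >= 2
x_opt = sumt(cost=lambda x: 3.0 * x,
             violation=lambda x: max(0.0, 2.0 - x))
print(round(x_opt, 2))  # 2.0: the constraint is active at the optimum
```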

  11. APECS: A family of optimization products for least cost generation

    Energy Technology Data Exchange (ETDEWEB)

    Petrill, E.; Stallings, J. [Electric Power Research Institute, Palo Alto, CA (United States); Shea, S. [Praxis Engineering, Inc., Milpitas, CA (United States)

    1996-05-01

    Reducing costs of power generation is the primary focus of many power generators today in efforts to prepare for competition in a deregulated market, to increase profitability, or to retain customers. To help power generators track and manage power generation costs, the Electric Power Research Institute (EPRI) offers APECS+, one of EPRI's APECS (Advisory Plant and Environmental Control System) family of optimization products for fossil power plants. The APECS family of products provides tools and techniques to optimize costs, as well as NOx emissions and performance, in fossil power plants. These products include APECS+, GNOCIS, and ULTRAMAX®. The products have varying degrees of functionality, and their application at a power plant will depend on the site-specific needs and resources in each case. This paper describes APECS+, the cost management product of the APECS family of optimization products. The other key products in this family, GNOCIS and ULTRAMAX®, are mentioned here and described in more detail in the literature.

  12. Research on optimal guaranteed cost control of flexible spacecraft

    Institute of Scientific and Technical Information of China (English)

    Wang Qingchao; Cai Peng

    2008-01-01

    This article is concerned with the modeling and control problems of flexible spacecraft. First, a state observer is designed to estimate the vibration mode on the basis of free-vibration models. Then, an optimal guaranteed cost controller is proposed to stabilize the system attitude and damp the vibration of the flexible beam at the same time. Numerical simulation examples show the feasibility and validity of the proposed method.

  13. Cost optimal levels for energy performance requirements:Executive summary

    OpenAIRE

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike; Erhorn, Hans; Poel, Bart; Hitchin, Roger

    2011-01-01

    This report summarises the work done within the Concerted Action EPBD from December 2010 to April 2011 in order to feed into the European Commission's proposal for a common European procedure for a Cost-Optimal methodology under the Directive on the Energy Performance of Buildings (recast) 2010/31/EU.

  14. Cost Evaluation and Portfolio Management Optimization for Biopharmaceutical Product Development

    OpenAIRE

    Nie, W.

    2015-01-01

    The pharmaceutical industry is suffering from declining R&D productivity and yet biopharmaceutical firms have been attracting increasing venture capital investment. Effective R&D portfolio management can deliver above average returns under increasing costs of drug development and the high risk of clinical trial failure. This points to the need for advanced decisional tools that facilitate decision-making in R&D portfolio management by efficiently identifying optimal solutions while accounting...

  15. Flash memories economic principles of performance, cost and reliability optimization

    CERN Document Server

    Richter, Detlev

    2014-01-01

    The subject of this book is to introduce a model-based quantitative performance indicator methodology applicable for performance, cost and reliability optimization of non-volatile memories. The complex example of flash memories is used to introduce and apply the methodology. It has been developed by the author based on an industrial 2-bit to 4-bit per cell flash development project. For the first time, design and cost aspects of 3D integration of flash memory are treated in this book. Cell, array, performance and reliability effects of flash memories are introduced and analyzed. Key performance parameters are derived to handle the flash complexity. A performance and array memory model is developed and a set of performance indicators characterizing architecture, cost and durability is defined.   Flash memories are selected to apply the Performance Indicator Methodology to quantify design and technology innovation. A graphical representation based on trend lines is introduced to support a requirement based pr...

  16. Current Cloud Computing Review and Cost Optimization by DERSP

    Directory of Open Access Journals (Sweden)

    M. Gomathy

    2014-03-01

    Full Text Available Cloud computing promises to deliver cost savings through the “pay as you use” paradigm. The focus is on adding computing resources when needed and releasing them when the need is serviced. Since cloud computing relies on providing computing power through multiple interconnected computers, there is a paradigm shift from one large machine to a combination of multiple smaller machine instances. In this paper, we review the current cloud computing scenario and provide a set of recommendations that can be used for designing custom applications suited for cloud deployment. We also present a comparative study of the change in cost incurred while using different combinations of machine instances for running an application on the cloud, and derive the case for optimal cost.
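
A comparison of instance combinations like the one described can be sketched by brute force; the instance names, prices and capacities below are invented for illustration:

```python
from itertools import product

# hypothetical hourly prices (EUR) and capacities (requests/s) per instance type
INSTANCES = {"small": (0.05, 100), "medium": (0.09, 220), "large": (0.17, 480)}

def cheapest_combination(required_capacity, max_each=5):
    """Enumerate counts of each instance type and keep the cheapest
    combination whose total capacity covers the demand."""
    best = None
    for counts in product(range(max_each + 1), repeat=len(INSTANCES)):
        cost = capacity = 0.0
        for n, (price, cap) in zip(counts, INSTANCES.values()):
            cost += n * price
            capacity += n * cap
        if capacity >= required_capacity and (best is None or cost < best[0]):
            best = (cost, dict(zip(INSTANCES, counts)))
    return best

print(cheapest_combination(500))
```

With these (made-up) prices, one small plus one large instance beats both two larges and five smalls for a 500 requests/s load.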

  17. Energy Cost Optimization in a Water Supply System Case Study

    Directory of Open Access Journals (Sweden)

    Daniel F. Moreira

    2013-01-01

    Full Text Available The majority of the life cycle costs (LCC of a pump are related to the energy spent in pumping, with the rest being related to the purchase and maintenance of the equipment. Any optimizations in the energy efficiency of the pumps result in a considerable reduction of the total operational cost. The Fátima water supply system in Portugal was analyzed in order to minimize its operational energy costs. Different pump characteristic curves were analyzed and modeled in order to achieve the most efficient operation point. To determine the best daily pumping operational scheduling pattern, genetic algorithm (GA optimization embedded in the modeling software was considered in contrast with a manual override (MO approach. The main goal was to determine which pumps and what daily scheduling allowed the best economical solution. At the end of the analysis it was possible to reduce the original daily energy costs by 43.7%. This was achieved by introducing more appropriate pumps and by intelligent programming of their operation. Given the heuristic nature of GAs, different approaches were employed and the most common errors were pinpointed, whereby this investigation can be used as a reference for similar future developments.
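
The scheduling idea can be sketched on a toy problem (tariff, pump rating and demand are hypothetical). With only eight 3-hour blocks the schedule space is small enough to enumerate exhaustively; a GA like the one used in the study would search the same space stochastically when it is too large to enumerate:

```python
from itertools import product

TARIFF = [0.07, 0.07, 0.21, 0.21, 0.21, 0.12, 0.12, 0.07]  # EUR/kWh per 3 h block
PUMP_KW, FLOW_M3H = 55.0, 120.0     # hypothetical pump rating and flow
DAILY_DEMAND_M3 = 1500.0

def best_schedule():
    """Exhaustively score every on/off pattern over the eight 3 h blocks
    and keep the cheapest one that meets the daily volume demand."""
    best = None
    for state in product((0, 1), repeat=8):
        volume = sum(s * FLOW_M3H * 3 for s in state)
        if volume < DAILY_DEMAND_M3:
            continue
        cost = sum(s * PUMP_KW * 3 * t for s, t in zip(state, TARIFF))
        if best is None or cost < best[0]:
            best = (cost, state)
    return best

cost, schedule = best_schedule()    # the pump runs only in the cheap tariff blocks
```

The optimum runs the pump in the five cheapest blocks and idles through the peak tariff, which is exactly the behaviour a cost-driven scheduler is after.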

  18. Cost Effectiveness Analysis of Optimal Malaria Control Strategies in Kenya

    Directory of Open Access Journals (Sweden)

    Gabriel Otieno

    2016-03-01

    Full Text Available Malaria remains a leading cause of mortality and morbidity among children under five and pregnant women in sub-Saharan Africa, but it is preventable and controllable provided the currently recommended interventions are properly implemented. Better utilization of malaria intervention strategies will ensure value for money and produce health improvements in the most cost-effective way. The purpose of the value-for-money drive is to develop a better understanding (and better articulation) of costs and results so that more informed, evidence-based choices can be made. Cost-effectiveness analysis is carried out to inform decision makers on how to determine where to allocate resources for malaria interventions. This study carries out a cost-effectiveness analysis of one or all possible combinations of the optimal malaria control strategies (Insecticide Treated Bednets (ITNs), treatment, Indoor Residual Spray (IRS) and Intermittent Preventive Treatment for Pregnant Women (IPTp)) for four different transmission settings, in order to assess the extent to which the intervention strategies are beneficial and cost effective. For the four different transmission settings in Kenya, the optimal solutions for the 15 strategies and their associated effectiveness are computed. Cost-effectiveness analysis using the Incremental Cost-Effectiveness Ratio (ICER) was done after ranking the strategies in order of increasing effectiveness (total infections averted). The findings show that for the endemic regions the combination of ITNs, IRS, and IPTp was the most cost-effective of all the combined strategies developed in this study for malaria disease control and prevention; for the epidemic-prone areas it is the combination of treatment and IRS; for seasonal areas it is the use of ITNs plus treatment; and for the low-risk areas it is the use of treatment only. Malaria transmission in Kenya can be minimized through tailor-made intervention strategies for malaria control
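
The ICER ranking step works as in the following sketch (the strategy names echo the abstract, but the cost and effectiveness numbers are invented; a full analysis would also eliminate dominated strategies before computing ratios):

```python
def icers(strategies):
    """Rank strategies by effectiveness (infections averted) and compute
    the incremental cost-effectiveness ratio of each versus the previous
    one in the ranking: ICER = (C2 - C1) / (E2 - E1)."""
    ordered = sorted(strategies, key=lambda s: s[2])   # by effectiveness
    out, (name, c_prev, e_prev) = [], ordered[0]
    out.append((name, None))                           # baseline has no ICER
    for name, c, e in ordered[1:]:
        out.append((name, (c - c_prev) / (e - e_prev)))
        c_prev, e_prev = c, e
    return out

# hypothetical (strategy, cost in USD, infections averted) tuples
print(icers([("ITNs", 1.0e6, 40_000),
             ("ITNs+IRS", 1.8e6, 52_000),
             ("ITNs+IRS+IPTp", 2.1e6, 55_000)]))
```

Each ratio reads as "extra dollars per extra infection averted" relative to the next-cheaper option, which is what decision makers compare against a willingness-to-pay threshold.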

  19. Optimizing bulk milk dioxin monitoring based on costs and effectiveness.

    Science.gov (United States)

    Lascano-Alcoser, V H; Velthuis, A G J; van der Fels-Klerx, H J; Hoogenboom, L A P; Oude Lansink, A G J M

    2013-07-01

    Dioxins are environmental pollutants, potentially present in milk products, which have negative consequences for human health and for the firms and farms involved in the dairy chain. Dioxin monitoring in feed and food has been implemented to detect their presence and estimate their levels in food chains. However, the costs and effectiveness of such programs have not been evaluated. In this study, the costs and effectiveness of bulk milk dioxin monitoring in milk trucks were estimated to optimize the sampling and pooling monitoring strategies aimed at detecting at least 1 contaminated dairy farm out of 20,000 at a target dioxin concentration level. Incidents of different proportions, in terms of the number of contaminated farms, and concentrations were simulated. A combined testing strategy, consisting of screening and confirmatory methods, was assumed as well as testing of pooled samples. Two optimization models were built using linear programming. The first model aimed to minimize monitoring costs subject to a minimum required effectiveness of finding an incident, whereas the second model aimed to maximize the effectiveness for a given monitoring budget. Our results show that a high level of effectiveness is possible, but at high costs. Given specific assumptions, monitoring with 95% effectiveness to detect an incident of 1 contaminated farm at a dioxin concentration of 2 pg of toxic equivalents/g of fat [European Commission's (EC) action level] costs €2.6 million per month. At the same level of effectiveness, a 73% cost reduction is possible when aiming to detect an incident where 2 farms are contaminated at a dioxin concentration of 3 pg of toxic equivalents/g of fat (EC maximum level). With a fixed budget of €40,000 per month, the probability of detecting an incident with a single contaminated farm at a dioxin concentration equal to the EC action level is 4.4%. This probability almost doubled (8.0%) when aiming to detect the same incident but with a dioxin
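
The effectiveness side of such a monitoring optimization reduces, in the simplest case, to a hypergeometric detection probability, sketched here with invented numbers:

```python
from math import comb

def detection_probability(trucks_total, trucks_contaminated, trucks_sampled):
    """Probability that sampling `trucks_sampled` trucks without
    replacement catches at least one of the `trucks_contaminated`
    trucks carrying the contaminated farm's milk (hypergeometric)."""
    miss = comb(trucks_total - trucks_contaminated, trucks_sampled) \
           / comb(trucks_total, trucks_sampled)
    return 1.0 - miss

# hypothetical: 1000 truckloads per month, 20 carry the farm's milk, 100 sampled
print(round(detection_probability(1000, 20, 100), 3))
```

Layering in pooling, dilution below the detection limit, and testing costs turns this into the linear programs described in the abstract.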

  20. A Low Cost Structurally Optimized Design for Diverse Filter Types.

    Science.gov (United States)

    Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar

    2016-01-01

    A wide range of image processing applications deploys two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment, so it calls for optimized solutions. Mostly the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrow-scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to effectually reduce their computational cost, along with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry based subtypes and also the special case of asymmetric filters. The two-fold optimized framework thus reduces filter computational cost by up to 75% as compared to the conventional approach, while its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image
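
The core saving from symmetry can be sketched directly: for a quadrant-symmetric kernel, mirrored samples are pre-added so each distinct coefficient is multiplied only once, cutting a 3 x 3 filter from 9 multiplications to 4. A minimal sketch (not the paper's composite structure, which also covers circular T-symmetry):

```python
def quadrant_symmetric_response(window, quarter):
    """Response of an N x N quadrant-symmetric filter (N = 2k+1) at the
    window centre: the four mirrored samples are summed first, so each
    distinct coefficient in the top-left quarter multiplies once."""
    n = len(window)
    k = n // 2
    acc = 0.0
    for i in range(k + 1):
        for j in range(k + 1):
            # set dedups points on the symmetry axes and at the centre
            pts = {(i, j), (i, n - 1 - j), (n - 1 - i, j), (n - 1 - i, n - 1 - j)}
            acc += quarter[i][j] * sum(window[r][c] for r, c in pts)
    return acc

# 3x3 averaging filter expressed by its top-left quarter (all coefficients 1/9)
quarter = [[1 / 9, 1 / 9], [1 / 9, 1 / 9]]
window = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(quadrant_symmetric_response(window, quarter))  # 5.0, the mean of 1..9
```

Four multiplications replace nine here; for larger kernels the saving approaches the 75% figure quoted in the abstract.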

  1. Optimal allocation of watershed management cost among different water users

    Institute of Scientific and Technical Information of China (English)

    Wang Zanxin; Margaret M.Calderon

    2006-01-01

    The issue of water scarcity highlights the importance of watershed management. Sound watershed management should make all water users share the incurred cost. This study analyzes the optimal allocation of watershed management cost among different water users. As a consumable, water should be allocated to different users in amounts such that their marginal utilities (MUs) or marginal products (MPs) of water are equal. The value of the MUs or MPs equals the water price that the watershed manager charges. When water is used simultaneously as a consumable and a non-consumable, the watershed manager produces the quantity of water at which the sum of the MUs and/or MPs for the two types of use equals the marginal cost of water production. Each water user should bear the portion of the watershed management cost given by the percentage that his MU or MP accounts for in the sum of the MUs and/or MPs. Thus, the price of consumable water does not equal the marginal cost of water production even if there is no public good.
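
The cost-sharing rule stated above is a one-liner; the MU/MP values and the total cost below are hypothetical:

```python
def cost_shares(marginal_values, total_cost):
    """Split the watershed management cost in proportion to each user's
    marginal utility/product (MU/MP) of water at the optimal allocation."""
    total = sum(marginal_values.values())
    return {user: total_cost * mv / total
            for user, mv in marginal_values.items()}

# hypothetical MU/MP values (per m3) at the optimum, and a 100,000 total cost
shares = cost_shares({"irrigation": 0.6, "municipal": 0.9, "hydropower": 0.5},
                     total_cost=100_000)
print(shares)
```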

  2. Linear versus quadratic portfolio optimization model with transaction cost

    Science.gov (United States)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Full Text Available Optimization models have become one of the decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model to fulfill their goals in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models respectively. The application of these models has been proven to be significant and popular. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return could be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over another and shed some light on the quest to find the best decision-making tool in investment for individual investors.

  3. Interstellar Ices

    CERN Document Server

    Boogert, A C A

    2003-01-01

    Currently ~36 different absorption bands have been detected in the infrared spectra of cold, dense interstellar and circumstellar environments. These are attributed to the vibrational transitions of ~17 different molecules frozen on dust grains. We review identification issues and summarize the techniques required to extract information on the physical and chemical evolution of these ices. Both laboratory simulations and line of sight studies are essential. Examples are given for ice bands observed toward high mass protostars, fields stars and recent work on ices in disks surrounding low mass protostars. A number of clear trends have emerged in recent years. One prominent ice component consists of an intimate mixture between H2O, CH3OH and CO2 molecules. Apparently a stable balance exists between low temperature hydrogenation and oxidation reactions on grain surfaces. In contrast, an equally prominent ice component, consisting almost entirely of CO, must have accreted directly from the gas phase. Thermal proc...

  4. INTERSTELLAR TURBULENCE

    Directory of Open Access Journals (Sweden)

    D. Falceta-Gonçalves

    2011-01-01

    Full Text Available The Interstellar Medium (ISM is a complex, multi-phase system, where the history of the stars occurs. The processes of birth and death of stars are strongly coupled to the dynamics of the ISM. The observed chaotic and diffusive motions of the gas characterize its turbulent nature. Understanding turbulence is crucial for understanding the star-formation process and the energy-mass feedback from evolved stars. Magnetic fields, threading the ISM, are also observed, making this effort even more difficult. In this work, I briefly review the main observations and the characterization of turbulence from these observable quantities. Following on, I provide a review of the physics of magnetized turbulence. Finally, I will show the main results from theoretical and numerical simulations, which can be used to reconstruct observable quantities, and compare these predictions to the observations.

  5. Life Cycle Cost optimization of a BOLIG+ Zero Energy Building

    Energy Technology Data Exchange (ETDEWEB)

    Marszal, A.J.

    2011-12-15

    Buildings consume approximately 40% of the world's primary energy use. Considering the total energy consumption throughout the whole life cycle of a building, the energy performance and supply is an important issue in the context of climate change, scarcity of energy resources and reduction of global energy consumption. An energy consuming as well as producing building, labelled as the Zero Energy Building (ZEB) concept, is seen as one of the solutions that could change the picture of energy consumption in the building sector, and thus contribute to the reduction of the global energy use. However, before being fully implemented in the national building codes and international standards, the ZEB concept requires a clear understanding and a uniform definition. The ZEB concept is an energy-conservation solution, whose successful adaptation in real life depends significantly on private building owners' approach to it. For this particular target group, the cost is often an obstacle when investing money in environmental or climate friendly products. Therefore, this PhD project took the perspective of a future private ZEB owner to investigate the cost-optimal Net ZEB definition applicable in the Danish context. The review of the various ZEB approaches indicated a general concept of a Zero Energy Building as a building with significantly reduced energy demand that is balanced by an equivalent energy generation from renewable sources. And, with this as a general framework, each ZEB definition should further specify: (1) the connection or the lack of it to the energy infrastructure, (2) the unit of the balance, (3) the period of the balance, (4) the types of energy use included in the balance, (5) the minimum energy performance requirements (6) the renewable energy supply options, and if applicable (7) the requirements of the building-grid interaction. Moreover, the study revealed that the future ZEB definitions applied in Denmark should mostly be focused on grid

  6. An Optimal Solution of Resource Provisioning Cost in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Arun Pandian

    2013-03-01

    Full Text Available In cloud computing, provisioning optimal resources to users becomes more and more important. Cloud computing users can access a pool of computing resources through the internet, and cloud providers charge for these resources based on usage. The resource plans offered are reservation and on-demand. Computing resources are provisioned via a cloud resource provisioning model, in which the resource cost is high owing to the difficulty of optimizing the cost under uncertainty. The uncertainty of the resource provisioning cost comprises the on-demand cost, the reservation cost and the expending cost, which makes it difficult to achieve an optimal resource provisioning cost in cloud computing. Stochastic integer programming is applied to this problem to obtain the optimal resource provisioning cost, and two-stage stochastic integer programming with recourse is applied to handle the complexity of optimization under uncertainty. The stochastic program is expressed as its deterministic equivalent formulation to solve over the probability distribution of all scenarios, reducing the on-demand cost. Benders decomposition is applied to break the resource optimization problem down into multiple subproblems, reducing the on-demand and reservation costs, and sample average approximation is applied to reduce the number of scenarios in the resource optimization problem, reducing the reservation and expending costs.
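
The sample average approximation step can be sketched for the simplest reservation-versus-on-demand trade-off (prices and demand distribution are invented): the first-stage reservation level is chosen to minimize the cost averaged over sampled demand scenarios, with shortfalls bought on demand in the second stage:

```python
import random

def saa_reservation(demand_samples, reserve_price, on_demand_price):
    """Sample Average Approximation sketch: pick the first-stage
    reservation level r minimizing the average total cost over the
    sampled demand scenarios; max(0, d - r) units are bought on demand."""
    def avg_cost(r):
        return sum(r * reserve_price + max(0, d - r) * on_demand_price
                   for d in demand_samples) / len(demand_samples)
    return min(range(max(demand_samples) + 1), key=avg_cost)

random.seed(1)
samples = [random.randint(50, 150) for _ in range(2000)]    # sampled scenarios
r_star = saa_reservation(samples, reserve_price=1.0, on_demand_price=3.0)
```

With on-demand three times the reserved price, the optimum sits near the 2/3 quantile of demand, matching the newsvendor critical ratio (3 - 1)/3.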

  7. Life Cycle Cost Optimization of a Bolig+ Zero Energy Building

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna

    in the Danish context. The review of the various ZEB approaches indicated a general concept of a Zero Energy Building as a building with significantly reduced energy demand that is balanced by an equivalent energy generation from renewable sources. And, with this as a general framework, each ZEB definition...... should further specify: (1) the connection or the lack of it to the energy infrastructure, (2) the unit of the balance, (3) the period of the balance, (4) the types of energy use included in the balance, (5) the minimum energy performance requirements (6) the renewable energy supply options...... case of a multi-storey residential Net ZEB aimed to determine the cost-optimal “zero” energy balance, minimum energy performance requirements and options of supplying renewable energy. The calculation encompassed three levels of energy frames, which mirrored the Danish low-energy building classes...

  8. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    Science.gov (United States)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
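
A threshold rebalanced portfolio can be sketched for the two-asset (stock/cash) case with proportional costs; the price path, threshold and cost rate below are hypothetical, and the cost is applied to the pre-cost trade size for simplicity:

```python
def threshold_rebalance(prices, target=0.5, threshold=0.1, cost_rate=0.01):
    """Two-asset (stock/cash) portfolio starting with wealth 1: trade back
    to the target stock fraction only when the fraction drifts past the
    threshold, paying a proportional cost on the traded amount."""
    stock, cash = target, 1.0 - target
    for p_prev, p_next in zip(prices, prices[1:]):
        stock *= p_next / p_prev                  # stock value tracks the price
        frac = stock / (stock + cash)
        if abs(frac - target) > threshold:        # threshold breached: rebalance
            wealth = stock + cash
            wealth -= abs(stock - target * wealth) * cost_rate
            stock, cash = target * wealth, (1.0 - target) * wealth
    return stock + cash

prices = [1.0, 1.3, 1.1, 1.6, 1.2, 1.5]
final_wealth = threshold_rebalance(prices)        # trades once, after the rally
```

Setting the threshold very high recovers plain buy-and-hold, which is how the no-trade region trades off transaction costs against drift from the target mix.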

  9. Cost Optimization of Cloud Computing Services in a Networked Environment

    Directory of Open Access Journals (Sweden)

    Eli WEINTRAUB

    2015-04-01

    Cloud computing service providers offer their customers services so as to maximize their revenues, whereas customers wish to minimize their costs. In this paper we concentrate on the consumers' point of view. Cloud computing services are organized according to a hierarchy: software application services, beneath them platform services, which in turn use infrastructure services. Providers currently offer software services as bundles comprising the software, platform and infrastructure services. Providers also offer platform services bundled with infrastructure services. Bundling prevents customers from splitting their service purchases between a provider of software and a different provider of the underlying platform or infrastructure. This bundling policy is likely to change in the long run, since it contradicts economic competition theory, produces an unfair pricing model and locks consumers in to specific service providers. In this paper we assume the existence of a free competitive market, in which consumers are free to switch their services among providers. We assume that free market competition will force vendors to adopt open standards, improve the quality of their services and offer a large variety of cloud services in all layers. Our model is aimed at the potential customer who wishes to find the combination of service providers that minimizes his costs. We propose three possible strategies for implementing the model in organizations. We formulate the mathematical model and illustrate its advantages compared to the pricing practices currently used by cloud computing consumers.
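The consumer's problem in this model, finding the cheapest mix of providers across the software, platform and infrastructure layers once bundling is removed, can be illustrated with a brute-force search over price lists. All provider names and prices below are invented for illustration; a real model would also encode compatibility constraints between layers.

```python
from itertools import product

# Hypothetical monthly prices per service layer.
software = {"SaaS-A": 120, "SaaS-B": 140}
platform = {"PaaS-X": 60, "PaaS-Y": 55}
infra    = {"IaaS-1": 90, "IaaS-2": 80}

# Exhaustively score every cross-layer combination and keep the cheapest.
combo, total = min(
    (((s, p, i), software[s] + platform[p] + infra[i])
     for s, p, i in product(software, platform, infra)),
    key=lambda t: t[1])
```

The cheapest unbundled mix here splits the three layers across different vendors, which is exactly the purchase pattern that bundling prevents.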

  10. Particle swarm optimization algorithm based low cost magnetometer calibration

    Science.gov (United States)

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low cost magnetometer. The main advantage of this technique is its use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The technique can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
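A minimal PSO of the kind the paper describes can be sketched as follows. The single-axis calibration example at the end (a sensor read in a unit field in two opposite orientations) is a toy illustration with assumed numbers, not the authors' experimental setup.

```python
import random

def pso(objective, dim, n_particles=30, iters=200, lo=-2.0, hi=2.0, seed=1):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                              # personal best positions
    pbest = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                       # global best
    w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = objective(X[i])
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < gbest:
                    gbest, G = f, X[i][:]
    return G, gbest

# Toy single-axis calibration: readings taken in a unit field with the sensor
# in two opposite orientations; recover scale s and bias b so that
# s * (reading - b) equals +1 and -1 (true values here: s = 1.0, b = 0.7).
readings = [1.7, -0.3]
def cal_error(p):
    s, b = p
    return (s * (readings[0] - b) - 1.0) ** 2 + (s * (readings[1] - b) + 1.0) ** 2

(s, b), err = pso(cal_error, dim=2)
```

No gradient or error model is supplied anywhere, which is the property the abstract highlights: the swarm only ever evaluates the fitness function.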

  11. Aircraft path planning for optimal imaging using dynamic cost functions

    Science.gov (United States)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller, more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path in acquiring imagery for structure from motion (SfM) reconstructions and performing radiation surveys. This method will allow SfM reconstructions to occur accurately and with minimal flight time so that the reconstructions can be executed efficiently. An assumption is made that we have 3D point cloud data available prior to the flight. A discrete set of scan lines are proposed for the given area that are scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which is able to efficiently perform a 3D reconstruction of a large area without the use of GPS data.

  12. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    Science.gov (United States)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are no longer in use, ensuring that compute/server resources are not over-provisioned. Amazon and Windows Azure are currently the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: (A) they require explicit policy definitions, such as server load thresholds, and therefore lack any predictive intelligence to make optimal decisions; (B) they do not decide on the right size of resource and thereby do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: (A) batch processing jobs (the Hadoop/Big Data case) and (B) transactional applications (any application that processes continuous request/response transactions). With reference to the classical queueing model, we try to model a scenario where servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: (1) Can we define a job queue, and a metric on it, that predicts the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? (2) Do we need to characterize the load (CPU/data) at the server level to determine the size requirement, and how do we learn that based on job type?

  13. Development and optimization of an analytical system for volatile organic compound analysis coming from the heating of interstellar/cometary ice analogues.

    Science.gov (United States)

    Abou Mrad, Ninette; Duvernay, Fabrice; Theulé, Patrice; Chiavassa, Thierry; Danger, Grégoire

    2014-08-19

    This contribution presents an original analytical system for studying volatile organic compounds (VOC) coming from the heating and/or irradiation of interstellar/cometary ice analogues (the VAHIIA system) through laboratory experiments. The VAHIIA system addresses three analytical constraints on chromatographic analysis: the slow desorption kinetics of VOC (many hours) in the vacuum chamber during laboratory experiments, the low pressure under which they sublime (10⁻⁹ mbar), and the presence of water in the ice analogues. The VAHIIA system, which we developed, calibrated, and optimized, is composed of two units. The first is a preconcentration unit providing VOC recovery. This unit is based on cryogenic trapping, which allows VOC preconcentration and provides an adequate pressure for their subsequent transfer to an injection unit. The latter is a gaseous injection unit allowing direct injection into the GC-MS of the VOC previously transferred from the preconcentration unit. The feasibility of online transfer through this interface is demonstrated. Nanomoles of VOC can be detected with the VAHIIA system, and the variability in replicate measurements is lower than 13%. The advantages of GC-MS in comparison to infrared spectroscopy are pointed out, GC-MS allowing unambiguous identification of compounds in complex mixtures. Beyond the application to astrophysical subjects, these analytical developments can be used for all systems requiring vacuum/cryogenic environments.

  14. Heuristic Optimization of Consumer Electricity Costs Using a Generic Cost Model

    Directory of Open Access Journals (Sweden)

    Chris Ogwumike

    2015-12-01

    Many new demand response strategies are emerging for energy management in smart grids. Real-Time Energy Pricing (RTP) is one important aspect of consumer Demand Side Management (DSM), which encourages consumers to participate in load scheduling. This can help reduce peak demand and improve power system efficiency. The use of Intelligent Decision Support Systems (IDSSs) for load scheduling has become necessary in order to enable consumers to respond to the changing economic value of energy across different hours of the day. The type of scheduling problem encountered by a consumer IDSS is typically NP-hard, which warrants the search for good heuristics with efficient computational performance and ease of implementation. This paper presents an extensive evaluation of a heuristic scheduling algorithm for use in a consumer IDSS. A generic cost model for hourly pricing is utilized, which can be configured for traditional on/off-peak pricing, RTP, Time of Use Pricing (TOUP), Two-Tier Pricing (2TP) and combinations thereof. The heuristic greedily schedules controllable appliances to minimize smart appliance energy costs and has polynomial worst-case computation time. Extensive computational experiments demonstrate the effectiveness of the algorithm, and the obtained results indicate that the gaps between the achieved and the optimal costs are negligible.
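A greedy placement of this flavour, each controllable appliance assigned to its cheapest feasible window of hours, can be sketched as below. This is an illustrative reconstruction under a simplified cost model (no inter-appliance capacity constraints), not the paper's exact heuristic.

```python
def greedy_schedule(prices, appliances):
    """Greedily place each controllable appliance in its cheapest contiguous
    window of hours.

    prices     : list of hourly tariffs (price per kWh)
    appliances : list of (name, kW, duration_hours) tuples; largest energy
                 consumers are placed first."""
    schedule, total = {}, 0.0
    for name, kw, dur in sorted(appliances, key=lambda a: -a[1] * a[2]):
        # cheapest feasible start hour for a run of `dur` consecutive hours
        best_start = min(range(len(prices) - dur + 1),
                         key=lambda t: sum(prices[t:t + dur]))
        schedule[name] = best_start
        total += kw * sum(prices[best_start:best_start + dur])
    return schedule, total
```

Each appliance costs one scan over the price vector, so the whole heuristic is polynomial in the number of hours and appliances, consistent with the worst-case claim in the abstract.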

  15. Optimal Privacy-Cost Trade-off in Demand-Side Management with Storage

    OpenAIRE

    Tan, Onur; Gündüz, Deniz; Gómez-Vilardebó, Jesús

    2015-01-01

    Demand-side energy storage management is studied from a joint privacy-energy cost optimization perspective. Assuming that the user's power demand profile as well as the electricity prices are known non-causally, the optimal energy management (EM) policy that jointly increases the privacy of the user and reduces his energy cost is characterized. The backward water-filling interpretation is provided for the optimal EM policy. While the energy cost is reduced by requesting more energy when the p...

  16. Cost optimization of biofuel production – The impact of scale, integration, transport and supply chain configurations

    NARCIS (Netherlands)

    de Jong, S.A.|info:eu-repo/dai/nl/41200836X; Hoefnagels, E.T.A.|info:eu-repo/dai/nl/313935998; Wetterlund, Elisabeth; Pettersson, Karin; Faaij, André; Junginger, H.M.|info:eu-repo/dai/nl/202130703

    2017-01-01

    This study uses a geographically-explicit cost optimization model to analyze the impact of, and interrelation between, four cost reduction strategies for biofuel production: economies of scale, intermodal transport, integration with existing industries, and distributed supply chain configurations.

  17. An attempt of reduction of optimization costs of complex industrial processes

    Science.gov (United States)

    Sztangret, Łukasz; Kusiak, Jan

    2017-09-01

    Reducing the computational cost of optimizing real industrial processes is crucial, because the models of these processes are often complex and demand time-consuming numerical computations. Iterative optimization procedures have to run the simulations many times, so the computational costs of the optimization may be unacceptably high. This is why new optimization methods and strategies that need fewer simulation runs are sought. The paper focuses on the problem of reducing the computational cost of optimization procedures. Its main goal is to present the new, efficient Approximation Based Optimization (ABO) and Modified Approximation Based Optimization (MABO) methods developed by the authors, which allow finding the global minimum in a smaller number of objective function calls. A detailed algorithm for the MABO method as well as the results of tests using several benchmark functions are presented. The efficiency of the MABO method was compared with heuristic methods, and the results show that the MABO method reduces the computational cost and improves the optimization accuracy.
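The core idea of approximation-based optimization, spending expensive simulation calls only where a cheap surrogate model predicts an optimum, can be sketched in one dimension with a quadratic surrogate. This illustrates the general ABO idea only; the authors' ABO/MABO algorithms are more elaborate.

```python
def abo_minimize(f, lo, hi, n_init=5, iters=20):
    """1-D approximation-based optimization sketch: fit a parabola through the
    three best samples and call the expensive function f only at its vertex."""
    xs = [lo + i * (hi - lo) / (n_init - 1) for i in range(n_init)]
    samples = sorted((f(x), x) for x in xs)            # (value, point) pairs
    for _ in range(iters):
        (f0, x0), (f1, x1), (f2, x2) = samples[:3]
        denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
        if denom == 0:
            break                                      # degenerate sample set
        # coefficients of the parabola a*x^2 + b*x + c through the three points
        a = (x2 * (f1 - f0) + x1 * (f0 - f2) + x0 * (f2 - f1)) / denom
        b = (x2 * x2 * (f0 - f1) + x1 * x1 * (f2 - f0) + x0 * x0 * (f1 - f2)) / denom
        if a <= 0:
            break                                      # surrogate not convex here
        x_new = max(lo, min(hi, -b / (2 * a)))         # clamp vertex to the domain
        if any(abs(x_new - x) < 1e-12 for _, x in samples):
            break                                      # vertex already sampled: done
        samples.append((f(x_new), x_new))
        samples.sort()
    return samples[0][1], samples[0][0]                # (arg min, min value)
```

On a smooth objective the expensive function is called only a handful of times, which is the cost saving the abstract is after.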

  18. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    OpenAIRE

    Ming Li; Guangdong Wu

    2014-01-01

    Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoff in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on multiobjective robust optimization method, a robust optimization model for time-cost tradeoff problem is developed. In order to illus...

  19. Optimal guaranteed cost control for fuzzy descriptor systems with time-varying delay

    Institute of Scientific and Technical Information of China (English)

    Tian Weihua; Zhang Huaguang

    2008-01-01

    Based on the delay-independent rule, the problem of optimal guaranteed cost control for a class of Takagi-Sugeno (T-S) fuzzy descriptor systems with time-varying delay is studied. A linear quadratic cost function is considered as the performance index of the closed-loop system. Sufficient conditions for the existence of guaranteed cost controllers via state feedback are given in terms of linear matrix inequalities (LMIs), and the design of an optimal guaranteed cost controller can be reduced to a convex optimization problem. It is shown that the designed controller not only guarantees the asymptotic stability of the closed-loop fuzzy descriptor delay system, but also provides an optimized upper bound on the guaranteed cost. Finally, a numerical example is given to illustrate the effectiveness of the proposed method and the performance of the optimal guaranteed cost controller.

  20. Optimalization of logistic costs by the controlling approach

    Directory of Open Access Journals (Sweden)

    Katarína Teplická

    2007-10-01

    This article deals with logistics cost problems: their tracking, reporting and evaluation in the firm. It gives basic information about possible approaches to effectively decreasing logistics costs by means of controlling, or by evaluation with the Balanced Scorecard. It outlines a short algorithm for reporting logistics costs by means of controlling.

  1. Simulator for Optimization of Software Project Cost and Schedule

    Directory of Open Access Journals (Sweden)

    P. K. Suri

    2008-01-01

    Each phase of the software design consumes some resources and hence has a cost associated with it. In most cases the cost will vary to some extent with the amount of time consumed by each phase. The total cost of the project, which is the aggregate of the activity costs and also depends upon the project duration, can be cut down to some extent. The aim is always to strike a balance between cost and time and to obtain an optimum software project schedule. An optimum minimum-cost project schedule implies the lowest possible cost and the associated time for software project management. In this research an attempt has been made to solve the cost and schedule problem of a software project using a PERT network showing the details of the activities to be carried out for software project development/management, with the help of crashing: reducing the software project duration at minimum cost by locating a minimal cut in the duration of an activity of the original project design network. This minimal cut is then utilized to identify the project phases which should experience a duration modification in order to achieve the total software duration reduction. Crashing PERT networks can save a significant amount of money in crashing and overrun costs for a company. Even if there are no direct costs in the form of penalties for late completion of projects, there are likely to be intangible costs because of reputation damage.
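The crashing step can be illustrated with a small greedy loop: shorten, one day at a time, the cheapest crashable activity on the current critical path until the target duration is met. This is a textbook sketch with hypothetical activities, not the paper's minimal-cut procedure, and a production version would handle multiple simultaneous critical paths more carefully.

```python
def crash_schedule(durations, crash_limit, crash_cost, paths, target):
    """Greedy project crashing.

    durations   : {activity: days}
    crash_limit : {activity: max days it can be shortened}
    crash_cost  : {activity: cost per day of shortening}
    paths       : every start-to-finish path, as tuples of activity names
    target      : desired project duration in days"""
    dur = dict(durations)
    limit = dict(crash_limit)
    total_cost = 0.0
    def path_len(p):
        return sum(dur[a] for a in p)
    while max(path_len(p) for p in paths) > target:
        critical = max(paths, key=path_len)            # current critical path
        candidates = [a for a in critical if limit[a] > 0]
        if not candidates:
            break                                      # cannot shorten any further
        a = min(candidates, key=lambda a: crash_cost[a])
        dur[a] -= 1                                    # crash cheapest activity by 1 day
        limit[a] -= 1
        total_cost += crash_cost[a]
    return dur, total_cost
```

Activities shared by several paths (like a final integration phase) tend to be crashed first when cheap, since shortening them shortens every path at once.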

  2. Optimalization of logistic costs by the controlling approach

    OpenAIRE

    Katarína Teplická

    2007-01-01

    This article deals with logistics cost problems: their tracking, reporting and evaluation in the firm. It gives basic information about possible approaches to effectively decreasing logistics costs by means of controlling, or by evaluation with the Balanced Scorecard. It outlines a short algorithm for reporting logistics costs by means of controlling.

  3. Interstellar Fullerene Compounds and Diffuse Interstellar Bands

    CERN Document Server

    Omont, Alain

    2015-01-01

    Recently, the presence of fullerenes in the interstellar medium (ISM) has been confirmed, especially with the first confirmed identification of two strong diffuse interstellar bands (DIBs) with C60+. This justifies reassessing the importance of interstellar fullerenes of various sizes with endohedral or exohedral inclusions and heterofullerenes (EEHFs). The phenomenology of fullerenes is complex. In addition to formation in shock shattering, fully dehydrogenated PAHs in diffuse interstellar (IS) clouds could perhaps efficiently transform into fullerenes, including EEHFs. But it is extremely difficult to assess their expected abundance, composition and size distribution, except for C60+. As often suggested, EEHFs share many properties with C60 as regards stability, formation/destruction, chemical processes and many basic spectral features. We address the importance of various EEHFs as possible DIB carriers. Specifically, we discuss IS properties and the contributions of fullerenes of various sizes and charge su...

  4. Interstellar Molecules Their Laboratory and Interstellar Habitat

    CERN Document Server

    Yamada, Koichi M T

    2011-01-01

    This book deals with the astrophysics and spectroscopy of interstellar molecules. In the introduction, an overview and history of interstellar observations are described to help the reader understand how modern astrophysics and molecular spectroscopy have developed interactively. The recent progress in this field after the 4th Cologne-Bonn-Zermatt symposium 2003 is briefly summarized. Furthermore, the basic knowledge of molecular spectroscopy, which is essential to correctly comprehend the astrophysical observations, is presented in compact form.

  5. Process Cost Modeling for Multi-Disciplinary Design Optimization

    Science.gov (United States)

    Bao, Han P.; Freeman, William (Technical Monitor)

    2002-01-01

    For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to

  6. Sequential optimization of matrix chain multiplication relative to different cost functions

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    In this paper, we present a methodology to optimize matrix chain multiplication sequentially relative to different cost functions, such as the total number of scalar multiplications, communication overhead in a multiprocessor environment, etc. For n matrices our optimization procedure requires O(n^3) arithmetic operations per cost function. This work is done in the framework of a dynamic programming extension that allows sequential optimization relative to different criteria. © 2011 Springer-Verlag Berlin Heidelberg.
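The standard dynamic program behind this runs in the stated O(n^3) time; the version below takes the per-merge cost as a parameter so that different criteria (scalar multiplications, communication overhead, etc.) can be plugged in:

```python
def matrix_chain(dims, cost=lambda p, q, r: p * q * r):
    """Minimum total cost of multiplying a chain of matrices.

    Matrix i has shape dims[i] x dims[i+1]; combining a (p x q) result with a
    (q x r) result costs cost(p, q, r) (scalar multiplications by default)."""
    n = len(dims) - 1
    best = [[0] * n for _ in range(n)]        # best[i][j]: cheapest cost for A_i..A_j
    for length in range(2, n + 1):            # subchain length
        for i in range(n - length + 1):
            j = i + length - 1
            best[i][j] = min(                 # try every split point k
                best[i][k] + best[k + 1][j] + cost(dims[i], dims[k + 1], dims[j + 1])
                for k in range(i, j))
    return best[0][n - 1]
```

Passing a different `cost` function re-runs the same O(n^3) recurrence under a new criterion, which mirrors the paper's point about optimizing one criterion at a time.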

  7. The effect of cost distributions on evolutionary optimization algorithms

    NARCIS (Netherlands)

    Waas, F.; Galindo-Legaria, C.A.

    2000-01-01

    According to the No-Free-Lunch theorems of Wolpert and Macready, we cannot expect one generic optimization technique to outperform others on average. For every optimization technique there exist "easy" and "hard" problems. However, little is known as to what criteria determine the particula

  8. Optimal monetary policy in a model with agency costs

    OpenAIRE

    Timothy Fuerst; Matthias Paustian; Charles Carlstorm

    2009-01-01

    is the optimal policy. We derive the targeting criterion that implements optimal monetary policy under commitment and show under what conditions the target depends on leads or lags of the risk premium. Finally, the paper demonstrates that the degree of price stickiness and/or the nature of monetary policy alter the endogenous propagation of net worth across time.

  9. Globally Optimal Path Planning with Anisotropic Running Costs

    Science.gov (United States)

    2013-03-01

    Proceedings of the American Control Conference, pp. ... Jacques, D. R. & Pachter, M. (2002) Air vehicle optimal trajectories between two radars, in Proceedings of the American Control Conference. Pachter, M. & Hebert, J. (2001) Optimal aircraft trajectories for radar exposure minimization, in Proceedings of the American Control Conference.

  10. One Kilogram Interstellar Colony Mission

    Science.gov (United States)

    Mole, A.

    Small interstellar colony probes based on nanotechnology will become possible long before giant multi-generation ships become affordable. A beam generator and magnetic sail can accelerate a one-kg probe to 0.1 c, braking against the interstellar magnetic field can decelerate it, and the field in a distant solar system can allow it to maneuver to an extrasolar planet. A heat shield is used for landing, and nanobots emerge to build ever-larger robots and construct colony infrastructure. Humans can then be generated from genomes stored as data in computer memory. Technology is evolving towards these capabilities and should reach the required level in fifty years. The plan appears to be affordable, with the principal cost being the beam generator, estimated at $17 billion.

  11. Optimizing calibration intervals for specific applications to reduce maintenance costs

    Energy Technology Data Exchange (ETDEWEB)

    Collier, Steve; Holland, Jack [Servomex Group, Crowborough (United Kingdom)]

    2009-11-01

    The introduction of the Servomex MultiExact 5400 analyzer has presented an opportunity to review the cost of ownership and how improvements to an analyzer's performance may be used to reduce it. Until now, gas analyzer manufacturers have taken a conservative approach to calibration intervals, based on site practices and experience covering a wide range of applications. However, if specific applications are considered, there is an opportunity to reduce costs by increasing calibration intervals. This paper demonstrates how maintenance costs may be reduced by increasing the calibration intervals of gas analyzers used for monitoring Air Separation Units (ASUs) without detracting from their performance.

  12. Cost-optimized design approach for steam condensers

    Energy Technology Data Exchange (ETDEWEB)

    Kurma Rao, P.S.V.

    1982-08-23

    Examines methods of selecting cooling water quantity for a steam surface condenser operating in a power plant or in a process. Equations and graphs for determining the optimum terminal temperature difference and the optimum cooling water velocity are presented. With a large water quantity and a large terminal temperature difference, the water cost will be higher, but the annual fixed charges for maintenance will be smaller because less surface area is required. With high velocities giving high heat transfer rates with less surface area, initial costs are low, but pumping costs will be higher.

  13. The Optimization of Transportation Costs in Logistics Enterprises with Time-Window Constraints

    Directory of Open Access Journals (Sweden)

    Qingyou Yan

    2015-01-01

    This paper presents a model for solving a multiobjective vehicle routing problem with soft time-window constraints that specify the earliest and latest arrival times of customers. If a customer is serviced before the earliest specified arrival time, extra inventory costs are incurred. If the customer is serviced after the latest arrival time, penalty costs must be paid. Both the total transportation cost and the required fleet size are minimized in this model, which also accounts for the given capacity limitations of each vehicle. The total transportation cost consists of direct transportation costs, extra inventory costs, and penalty costs. This multiobjective optimization is solved by using a modified genetic algorithm approach. The output of the algorithm is a set of optimal solutions that represent the trade-off between the total transportation cost and the fleet size required to service customers. The influence of these two factors is analyzed through a case study.
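The objective described above, direct travel cost plus earliness (inventory) and lateness (penalty) terms, is what a genetic algorithm would evaluate for each candidate route. A single-route version can be sketched as follows; the cost coefficients and the wait-until-window-opens behaviour are illustrative assumptions.

```python
def route_cost(route, travel_time, windows, unit_travel=1.0,
               early_cost=0.5, late_cost=2.0, depot=0, start=0.0):
    """Evaluate one vehicle route under soft time windows.

    route       : list of customer indices, visited in order from the depot
    travel_time : matrix of travel times (also used as travel distance here)
    windows     : {customer: (earliest, latest)} soft arrival windows"""
    t, cost, prev = start, 0.0, depot
    for c in route:
        leg = travel_time[prev][c]
        t += leg
        cost += unit_travel * leg                 # direct transportation cost
        earliest, latest = windows[c]
        if t < earliest:
            cost += early_cost * (earliest - t)   # early: goods held as inventory
            t = earliest                          # service starts when window opens
        elif t > latest:
            cost += late_cost * (t - latest)      # late: contractual penalty
        prev = c
    return cost + unit_travel * travel_time[prev][depot]   # return to depot
```

Inside a GA this function would be the fitness of one chromosome; fleet size enters as the second objective when routes for all vehicles are evaluated together.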

  14. The Application of Maximum Principle in Supply Chain Cost Optimization

    National Research Council Canada - National Science Library

    Zhou Ling; Wang Jun

    2013-01-01

    In this paper, using the maximum principle for analyzing dynamic cost, we propose a new two-stage supply chain model of the manufacturing-assembly mode for high-tech perishable products supply chain...

  15. Cost optimization of load carrying thin-walled precast high performance concrete sandwich panels

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hansen, Sanne; Hulin, Thomas

    2015-01-01

    and HPCSP’s geometrical parameters as well as on a material cost function in the HPCSP design. Cost functions are presented for High Performance Concrete (HPC), the insulation layer and the reinforcement, and include labour-related costs. The present study reports the economic data corresponding to specific manufacturing......The paper describes a procedure to find a structurally and thermally efficient design of load-carrying thin-walled precast High Performance Concrete Sandwich Panels (HPCSP) with an optimal economical solution. A systematic optimization approach is based on the selection of materials' performances....... The solution of the optimization problem is performed in the computer package Matlab® with the SQPlab package and integrates the processes of HPCSP design, quantity take-off and cost estimation. The proposed optimization process results in complex HPCSP design proposals that achieve minimum cost of HPCSP.

  16. OPTIMIZATION METHOD AND SOFTWARE FOR FUEL COST REDUCTION IN CASE OF ROAD TRANSPORT ACTIVITY

    Directory of Open Access Journals (Sweden)

    György Kovács

    2017-06-01

    Transport is one of the most expensive processes in the supply chain, and fuel cost is the highest cost component of transportation. The goal of the research is to optimize the transport costs of a given transport task, both by selecting the optimal petrol stations and by determining the optimal amount of fuel to refill at each. Recently, in practice, these two decisions have not been made centrally at the forwarding company, but have depended on the individual decisions of the drivers. The aim of this study is to elaborate a precise and reliable mathematical method for selecting the optimal refuelling stations and determining the optimal amount of refilled fuel to fulfil the transport demands. Based on the elaborated model, new decision-supporting software is developed for the economical fulfilment of transport trips.
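For a fixed route, a classic greedy strategy solves this pair of decisions: at each station, if a cheaper station is reachable on a full tank, buy just enough fuel to get there; otherwise fill up and drive to the cheapest reachable station. The sketch below (with illustrative units and station data) implements that decision rule; the paper's own model may differ in detail.

```python
def refuel_plan(stations, capacity_l, litres_per_km, destination_km):
    """Greedy fuel-purchase plan along a fixed route.

    stations : (position_km, price_per_litre) pairs sorted by position, the
               first at km 0 where the trip starts with an empty tank.
    Returns (total_cost, purchases) with purchases as (position, litres)."""
    stops = stations + [(destination_km, 0.0)]     # destination as a free "station"
    fuel, cost, i, purchases = 0.0, 0.0, 0, []
    while stops[i][0] < destination_km:
        pos, price = stops[i]
        reach = pos + capacity_l / litres_per_km   # furthest point on a full tank
        ahead = [(j, s) for j, s in enumerate(stops) if pos < s[0] <= reach]
        if not ahead:
            raise ValueError("infeasible: gap longer than full-tank range")
        cheaper = [(j, s) for j, s in ahead if s[1] < price]
        # go to the first cheaper station, else fill up for the cheapest reachable one
        j, (next_pos, _) = cheaper[0] if cheaper else min(ahead, key=lambda a: a[1][1])
        need = (next_pos - pos) * litres_per_km if cheaper else capacity_l
        buy = max(0.0, need - fuel)
        cost += buy * price
        if buy > 0:
            purchases.append((pos, buy))
        fuel += buy - (next_pos - pos) * litres_per_km
        i = j
    return cost, purchases
```

The rule buys as little as possible at expensive stations and as much as possible at cheap ones, which is exactly the behaviour a driver-by-driver ad hoc policy tends to miss.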

  17. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    Science.gov (United States)

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects of pump schedule optimization, targeting either cost or energy minimization, on water quality in the distribution system or in tanks.

  18. Optimization of stability index versus first strike cost

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-05-01

    This note studies the impact of maximizing the stability index rather than minimizing the first strike cost in choosing offensive missile allocations. It does so in the context of a model in which exchanges between vulnerable missile forces are modeled probabilistically, converted into first and second strike costs through approximations to the value target sets at risk, and the stability index is taken to be their ratio. The values of the allocation that minimizes the first strike cost are derived analytically for both attack preferences. The former recovers results derived earlier. The latter leads to an optimum at unity allocation, for which the stability index is determined analytically. For values of the attack preference greater than about unity, maximizing the stability index increases the cost of striking first by 10--15%. For smaller values of the attack preference, maximizing the index increases the second strike cost by a similar amount. Both are stabilizing, so if both sides could be trusted to target on missiles in order to minimize damage to value and maximize stability, the stability index for vulnerable missiles could be increased by about 15%. However, that would increase the cost to the first striker by about 15%. It is unclear why--having decided to strike--he would do so in a way that would increase damage to himself.

  19. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  20. Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings

    Directory of Open Access Journals (Sweden)

    Paolo Maria Congedo

    2016-10-01

    Full Text Available The recast Energy Performance of Buildings Directive (EPBD) describes a comparative methodological framework to promote energy efficiency and establish minimum energy performance requirements in buildings at the lowest cost. The aim of the cost-optimal methodology is to foster the achievement of nearly zero energy buildings (nZEBs), the new target for all new buildings by 2020, characterized by high performance with a low energy requirement covered almost entirely by renewable sources. The paper presents the results of the application of the cost-optimal methodology to two existing buildings located in the Mediterranean area. These buildings are a kindergarten and a nursery school that differ in construction period, materials and systems. Several combinations of measures have been applied to derive cost-effective efficient solutions for retrofitting. The cost-optimal level has been identified for each building, and the best performing solutions have been selected considering both a financial and a macroeconomic analysis. The results illustrate the suitability of the methodology to assess cost-optimality and energy efficiency in school building refurbishment. The research shows the variants providing the most cost-effective balance between costs and energy saving. The cost-optimal solution reduces primary energy consumption by 85% and gas emissions by 82%-83% in each reference building.

  1. Minimizing Energy Cost in Electric Arc Furnace Steel Making by Optimal Control Designs

    Directory of Open Access Journals (Sweden)

    Er-wei Bai

    2014-01-01

    Full Text Available Production cost in the steel industry is a challenging issue, and energy optimization is an important part of it. This paper proposes an optimal control design aimed at minimizing the production cost of electric arc furnace steel making. In particular, it is shown that, given the structure of an electric arc furnace, the production cost, which is a linear programming problem, can be solved with the tools of linear quadratic regulation control design, which not only provides an optimal solution but also is in feedback form. Modeling and control designs are validated against actual production data sets.
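
A minimal sketch of the LQR machinery this abstract leans on, reduced to a scalar discrete-time system with made-up parameters (this is not the furnace model): iterate the Riccati recursion to a fixed point and recover the feedback gain u = -K x.

```python
# Scalar discrete-time LQR sketch (illustrative parameters, not the paper's
# electric arc furnace model). Minimizes sum of q*x^2 + r*u^2 for x_{t+1} = a*x_t + b*u_t.

def lqr_gain(a, b, q, r, iters=200):
    p = q  # Riccati variable, initialized at the stage cost weight
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)      # candidate feedback gain
        p = q + a * a * p - a * b * p * k    # Riccati update
    return k, p

k, p = lqr_gain(a=1.0, b=0.5, q=1.0, r=1.0)
```

For these values the recursion converges to the positive root of the algebraic Riccati equation, P = (1 + sqrt(17))/2, and the closed loop a - b*K is stable.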

  2. Data of cost-optimality and technical solutions for high energy performance buildings in warm climate.

    Science.gov (United States)

    Zacà, Ilaria; D'Agostino, Delia; Maria Congedo, Paolo; Baglivo, Cristina

    2015-09-01

    The data reported in this article refer to the input and output information for the research article "Assessment of cost-optimality and technical solutions in high performance multi-residential buildings in the Mediterranean area" by Zacà et al. (in press) and the research article "Cost-optimal analysis and technical comparison between standard and high efficient mono residential buildings in a warm climate" by Baglivo et al. (Energy, 2015, 10.1016/j.energy.2015.02.062, in press).

  3. HECDOR: a heat exchanger cost and design optimization routine

    Energy Technology Data Exchange (ETDEWEB)

    Turner, S.E.; Madsen, W.W.

    1977-04-01

    An update is presented on a series of four computer codes developed by the Bureau of Mines. The programs were developed to evaluate design parameters and cost of heat exchangers. The major differences in three of the programs were concerned with pumping costs; the first (N = 1) used both fluids, the second (N = 2) used tube side fluid, and the third (N = 3) used shell side fluid as a base for prime parameters. All three assumed no change in phase. The fourth program (N = 4) assumed a change of phase on the shell side.

  4. Is interstellar archeology possible?

    Science.gov (United States)

    Carrigan, Richard A.

    2012-09-01

    Searching for signatures of cosmic-scale archeological artifacts such as Dyson spheres is an interesting alternative to conventional radio SETI. Uncovering such an artifact does not require the intentional transmission of a signal on the part of the original civilization. This type of search is called interstellar archeology or sometimes cosmic archeology. A variety of interstellar archeology signatures is discussed, including non-natural planetary atmospheric constituents, stellar doping, Dyson spheres, and signatures of stellar- and galactic-scale engineering. The concept of a Fermi bubble due to interstellar migration is reviewed in the discussion of galactic signatures. These potential interstellar archeological signatures are classified using the Kardashev scale, and a modified Drake equation is introduced. With few exceptions, interstellar archeological signatures are clouded and beyond current technological capabilities. However, searches for so-called cultural transmissions and for planetary atmosphere signatures are within reach.

  5. Optimal production policy for a remanufacturing system with virtual inventory cost

    Science.gov (United States)

    Nakashima, Kenichi; Gupta, Surendra M.

    2005-11-01

    This paper deals with a cost management problem of a remanufacturing system with stochastic demand. We model the system with consideration for two types of inventories. One is the actual product inventory in the factory. The other is the virtual inventory that is being used by the customer. Maintaining this virtual inventory requires an operational cost for observing and checking its quantity; we call this the virtual inventory cost and include it in the model. We define the state of the remanufacturing system by the two inventory levels. It is assumed that the cost function is composed of various cost factors such as holding, backlog and manufacturing costs. We obtain the optimal policy that minimizes the expected average cost per period. Numerical results reveal the effects of these factors on the optimal policy.

  6. Improved mine blast algorithm for optimal cost design of water distribution systems

    Science.gov (United States)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
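
A generic improvement loop in the spirit of such metaheuristics can be sketched as follows. This is not the IMBA algorithm itself, and the toy pipe-cost function with a penalty term stands in for an EPANET hydraulic evaluation; the diameter set, cost curve and capacity constraint are all assumed.

```python
import random

# Toy network design: pick pipe diameters from a discrete set to minimize a
# material cost while meeting a minimum "capacity", enforced via a penalty.
DIAMETERS = [0.3, 0.4, 0.5, 0.6, 0.8, 1.0]   # assumed available sizes (m)
N_PIPES, MIN_CAPACITY = 5, 2.0               # assumed network requirements

def cost(design):
    material = sum(250.0 * d ** 1.5 for d in design)     # assumed cost curve
    capacity = sum(d * d for d in design)                # crude hydraulic proxy
    penalty = 1e4 * max(0.0, MIN_CAPACITY - capacity)    # constraint penalty
    return material + penalty

def improve(seed=1, iterations=2000):
    rng = random.Random(seed)
    best = [rng.choice(DIAMETERS) for _ in range(N_PIPES)]
    best_cost = cost(best)
    for _ in range(iterations):
        cand = list(best)
        cand[rng.randrange(N_PIPES)] = rng.choice(DIAMETERS)  # local "shrapnel" move
        cand_cost = cost(cand)
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
    return best, best_cost

best_design, best_cost_value = improve()
```

Real optimizers such as MBA/IMBA add structured exploration and exploitation phases on top of this accept-if-better skeleton, and evaluate each candidate with a hydraulic solver instead of a closed-form proxy.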

  7. Efficient Guiding Towards Cost-Optimality in Uppaal

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas S.

    2001-01-01

    In this paper we present an algorithm for efficiently computing the minimum cost of reaching a goal state in the model of Uniformly Priced Timed Automata (UPTA). This model can be seen as a submodel of the recently suggested model of linearly priced timed automata, which extends timed automata...

  8. Game theory approach to optimal capital cost allocation in pollution control

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    This paper tries to integrate game theory, a very useful tool for resolving conflict phenomena, with the optimal capital cost allocation issue in total emission control. First, the necessity of allocating optimal capital costs fairly and reasonably among polluters in total emission control is analyzed. Then the possibility of applying game theory to the optimal capital cost allocation issue is expounded. Next, the cooperative N-person game model of optimal capital cost allocation and its solution methods, including the method based on the Shapley value, the least core method, weak least core methods, the proportional least core method, the CGA method, the MCRS method and so on, are delineated. Finally, through application of these methods, it is concluded that applying game theory to the optimal capital cost allocation issue helps to implement total emission control planning schemes successfully, to control pollution effectively, and to ensure sustainable development.
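
The Shapley-value allocation mentioned above can be sketched directly: each polluter is charged its average marginal contribution to the coalition cost over all orderings. The coalition costs below are assumed for illustration (joint treatment is cheaper than separate treatment), not taken from the paper.

```python
from itertools import permutations
from math import factorial

# Shapley-value cost allocation sketch: average marginal cost over all
# join orders. Efficiency (shares sum to the grand-coalition cost) holds
# by construction.

def shapley(players, coalition_cost):
    shares = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = coalition_cost[frozenset(coalition)]
            coalition.add(p)
            shares[p] += coalition_cost[frozenset(coalition)] - before
    n_orders = factorial(len(players))
    return {p: s / n_orders for p, s in shares.items()}

# Assumed joint treatment costs for three polluters A, B, C:
costs = {
    frozenset(): 0.0,
    frozenset({"A"}): 30.0, frozenset({"B"}): 40.0, frozenset({"C"}): 50.0,
    frozenset({"A", "B"}): 60.0, frozenset({"A", "C"}): 70.0,
    frozenset({"B", "C"}): 80.0, frozenset({"A", "B", "C"}): 90.0,
}
alloc = shapley(["A", "B", "C"], costs)   # {'A': 20.0, 'B': 30.0, 'C': 40.0}
```

Each polluter pays less than its stand-alone cost, so no one has an incentive to leave the joint scheme in this example.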

  9. On the optimality equation for average cost Markov control processes with Feller transition probabilities

    Science.gov (United States)

    Jaskiewicz, Anna; Nowak, Andrzej S.

    2006-04-01

    We consider Markov control processes with Borel state space and Feller transition probabilities, satisfying some generalized geometric ergodicity conditions. We provide a new theorem on the existence of a solution to the average cost optimality equation.

  10. Cost-Optimal Design of a 3-Phase Core Type Transformer by Gradient Search Technique

    Science.gov (United States)

    Basak, R.; Das, A.; Sensarma, A. K.; Sanyal, A. N.

    2014-04-01

    3-phase core type transformers are extensively used as power and distribution transformers in power systems, and their cost is a sizable proportion of the total system cost. Therefore they should be designed cost-optimally. The design methodology for reaching cost-optimality has been discussed in detail by authors like Ramamoorty, and in brief in some of the textbooks on electrical design. The paper gives a method for optimizing the design, in the presence of constraints specified by the customer and the regulatory authorities, through the gradient search technique. The starting point has been chosen within the allowable parameter space, and the steepest descent path has been followed for convergence. The step length has been judiciously chosen, and the program has been maneuvered to avoid local minima. The method appears to be best, as its convergence is the quickest among the different optimization techniques.
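
A minimal steepest descent loop of the kind described can be sketched as below. The quadratic cost is an assumed stand-in for the transformer cost surface, and the fixed step length is a simplification of the step-length selection the paper discusses.

```python
# Steepest (gradient) descent sketch on an assumed convex cost surrogate --
# not the transformer cost model itself.

def gradient(f, x, h=1e-6):
    # Forward-difference numerical gradient.
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - f(x)) / h)
    return g

def steepest_descent(f, x0, step=0.1, tol=1e-8, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        g = gradient(f, x)
        if sum(gi * gi for gi in g) < tol:   # stop when the gradient is small
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

def cost(v):
    # Assumed stand-in for the design cost: convex quadratic, minimum at (2, 3).
    return (v[0] - 2.0) ** 2 + 2.0 * (v[1] - 3.0) ** 2

x_opt = steepest_descent(cost, [0.0, 0.0])
```

In the actual design problem the variables would be transformer parameters (core dimensions, flux density, current density) and the constraints would bound the feasible region.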

  11. An Optimal Operating Strategy for Battery Life Cycle Costs in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yinghua Han

    2014-01-01

    Full Text Available The impact of petroleum-based vehicles on the environment and the cost and availability of fuel have led to an increased interest in electric vehicles as a means of transportation. The battery is a major component in an electric vehicle, and the economic viability of these vehicles depends on the availability of cost-effective batteries. This paper presents a generalized formulation for determining the optimal operating strategy and cost optimization for the battery, under the assumption that battery deterioration is stochastic. The proposed operating strategy is formulated as a nonlinear optimization problem considering reliability and failure number, and an explicit expression of the average cost rate over the battery lifetime is derived. Results show that the proposed operating strategy enhances availability and reliability at a low cost.

  12. Optimizing Transmission Service Cost of Khuzestan Regional Grid Based on Nsga-Ii Algorithm

    Science.gov (United States)

    Shooshtari, Alireza Tavakoli; Joorabian, Mahmood; Milani, Armin Ebrahimi; Gholamzadeh, Arash

    2011-06-01

    Any plan for modeling the components of transmission service costs should be able to consider congestion as well as loss cost. Assessing the real value of congestion and loss costs in each network contributes substantially to analyzing the grid's weaknesses in order to release capacity of the power network. As congestion and loss costs in the transmission grid decrease, the amount of power passing through transmission lines increases. Therefore, the transmission service cost will be optimized and the revenues of the regional electricity company from transmission services will be increased. In this paper, a new power flow algorithm with congestion and loss considerations for a power network is presented. To this end, optimal power flow and a multi-objective optimization algorithm, NSGA-II, are used. Real data from the Khuzestan regional power grid are used to confirm the efficiency of the proposed method.

  13. An Optimization System for Concrete Life Cycle Cost and Related CO2 Emissions

    Directory of Open Access Journals (Sweden)

    Tae Hyoung Kim

    2016-04-01

    Full Text Available An optimization system that supports the production of concrete while minimizing carbon dioxide (CO2) emissions or costs is presented; it incorporates an evolutionary algorithm for the materials' mix design stage, a trigonometric function for the transportation stage, and a stochastic model for the manufacturing stage. A case study demonstrates that applying the optimization system reduced CO2 emissions by 34% compared to the standard concrete production processes typically used. When minimizing the cost of concrete production was prioritized, the cost dropped by 1% compared to the cost of conventional concrete production. These findings confirm that this optimization system helps with the design of the concrete mix and the choice of a material supplier, thus reducing both CO2 emissions and costs.

  14. OPTIMAL LAND CONVERSION AND GROWTH WITH UNCERTAIN BIODIVERSITY COSTS

    OpenAIRE

    Anke Leroux; John Creedy

    2005-01-01

    An important characteristic defining the threat of environmental crises is the uncertainty about their consequences for future welfare. Random processes governing ecosystem dynamics and adaptation to anthropogenic change are important sources of prevailing ecological uncertainty and contribute to the problem of how to balance economic development against natural resource conservation. The aim of this study is to examine optimal growth subject to non-linear dynamic environmental constraints. I...

  15. A KBE-enabled design framework for cost/weight optimization study of aircraft composite structures

    Science.gov (United States)

    Wang, H.; La Rocca, G.; van Tooren, M. J. L.

    2014-10-01

    Traditionally, minimum weight is the objective when optimizing airframe structures. This optimization, however, does not consider the manufacturing cost, which actually determines the profit of the airframe manufacturer. To this purpose, a design framework has been developed that is able to perform cost/weight multi-objective optimization of an aircraft component, including large topology variations of the structural configuration. The key element of the proposed framework is a dedicated knowledge based engineering (KBE) application, called the multi-model generator, which enables modelling very different product configurations and variants and extracting all the data required to feed the weight and cost estimation modules, in a fully automated fashion. The weight estimation method developed in this research uses finite element analysis to calculate the internal stresses of the structural elements and an analytical composite plate sizing method to determine their minimum required thicknesses. The manufacturing cost estimation module was developed on the basis of a cost model available in the literature. The capability of the framework was successfully demonstrated by designing and optimizing the composite structure of a business jet rudder. The case study indicates that the design framework is able to find the Pareto optimal set for minimum structural weight and manufacturing cost very quickly. Based on the Pareto set, the rudder manufacturer is in a position to conduct internal trade-off studies between minimum weight and minimum cost solutions, as well as to offer the OEM a full set of optimized options to choose from, rather than a single feasible design.

  16. Optimal Policy for Brownian Inventory Models with General Convex Inventory Cost

    Institute of Scientific and Technical Information of China (English)

    Da-cheng YAO

    2013-01-01

    We study an inventory system in which products are ordered from outside to meet demands, and the cumulative demand is governed by a Brownian motion. Excessive demand is backlogged. We suppose that the shortage and holding costs associated with the inventory are given by a general convex function. The product ordering from outside incurs a linear ordering cost and a setup fee. There is a constant lead time when placing an order. The optimal policy is established so as to minimize the discounted cost, including the inventory cost and ordering cost.

  17. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ming Li

    2014-01-01

    Full Text Available Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoffs in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified times and costs with interval uncertainty. Based on a multiobjective robust optimization method, a robust optimization model for the time-cost tradeoff problem is developed. In order to illustrate the robust model, the nondominated sorting genetic algorithm-II (NSGA-II) is modified to solve the project example. The results show that, by adjusting the time and cost robust coefficients, robust Pareto sets for the time-cost tradeoff can be obtained according to different acceptable risk levels, from which the decision maker can choose the preferred construction alternative.
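
The deterministic core of the tradeoff, before uncertainty or NSGA-II enter, is a nondominated filter over mode combinations. The activity modes below are assumed for illustration, and the activities are taken to be in series so that durations add.

```python
from itertools import product

# Toy time-cost tradeoff: enumerate mode combinations for a small project and
# keep the nondominated (time, cost) points. Activities are assumed to be in
# series, so project duration is the sum of activity durations.
MODES = [
    [(5, 100), (3, 180)],   # activity 1: (duration, cost) per mode
    [(4, 120), (2, 220)],   # activity 2
    [(6, 90),  (4, 140)],   # activity 3
]

def pareto_front(points):
    # Keep points not weakly dominated by any distinct point (minimize both).
    front = [p for p in points
             if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
    return sorted(set(front))

schedules = [(sum(t for t, _ in combo), sum(c for _, c in combo))
             for combo in product(*MODES)]
front = pareto_front(schedules)   # the exact time-cost tradeoff curve
```

NSGA-II explores this front heuristically when the number of activities makes full enumeration infeasible; the robust variant in the paper additionally perturbs the durations and costs within their intervals.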

  18. Contingency Contractor Optimization Phase 3 Sustainment Cost by JCA Implementation Guide

    Energy Technology Data Exchange (ETDEWEB)

    Durfee, Justin David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Frazier, Christopher Rawls [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Arguello, Bryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bandlow, Alisa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Katherine A [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    This document provides guidance for implementing personnel group FTE costs by JCA Tier 1 or 2 categories in the Contingency Contractor Optimization Tool – Engineering Prototype (CCOT-P). CCOT-P currently only allows FTE costs by personnel group to differ by mission. Changes will need to be made to the user interface input pages and the database

  19. Integrated emission management for cost optimal EGR-SCR balancing in diesels

    NARCIS (Netherlands)

    Willems, F.P.T.; Mentink, P.R.; Kupper, F.; Eijnden, E.A.C. van den

    2013-01-01

    The potential of a cost-based optimization method is experimentally demonstrated on a Euro-VI heavy-duty diesel engine. Based on the actual engine-aftertreatment state, this model-based Integrated Emission Management (IEM) strategy minimizes operational (fuel and AdBlue) costs within emission constraints.

  1. The Optimal Solution of the Model with Physical and Human Capital Adjustment Costs

    Institute of Scientific and Technical Information of China (English)

    RAO Lan-lan; CAI Dong-han

    2004-01-01

    We prove that the model with physical and human capital adjustment costs has an optimal solution when the production function exhibits increasing returns, and that the structure of the vector fields of the model changes substantially when the production function turns from decreasing to increasing returns. It is also shown that the economy improves when the coefficients of the adjustment costs become small.

  2. The environmental cost of subsistence: Optimizing diets to minimize footprints.

    Science.gov (United States)

    Gephart, Jessica A; Davis, Kyle F; Emery, Kyle A; Leach, Allison M; Galloway, James N; Pace, Michael L

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are concerned not only with the monetary cost of food, but also with its environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study, based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that the minimizing diets tend to be similar across the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the environmental impacts of aquaculture versus capture fisheries, the growing share of aquaculture, and shifting compositions of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study
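
A toy, brute-force version of the footprint-minimizing diet problem can be sketched as follows. All nutrient and footprint numbers are invented for illustration (the actual study uses US nutritional data and linear programming); minimizing one footprint and then evaluating another exposes the tradeoff structure the abstract describes.

```python
from itertools import product

# Toy diet optimization: choose integer daily servings of a few foods to
# minimize the carbon footprint subject to protein and calorie floors,
# then read off the water footprint of the minimizing diet.
FOODS = {                 # per serving: (protein g, kcal, carbon kg, water L) -- assumed
    "beans": (8.0, 120, 0.2, 50),
    "fish":  (20.0, 150, 0.6, 30),
    "beef":  (25.0, 250, 6.0, 400),
}
PROTEIN_MIN, KCAL_MIN = 50.0, 2000.0   # assumed daily requirements
SERVING_RANGE = range(0, 16)           # 0..15 servings of each food

def totals(diet):
    p = sum(n * FOODS[f][0] for f, n in diet.items())
    k = sum(n * FOODS[f][1] for f, n in diet.items())
    c = sum(n * FOODS[f][2] for f, n in diet.items())
    w = sum(n * FOODS[f][3] for f, n in diet.items())
    return p, k, c, w

best = None   # (carbon, water, diet) of the carbon-minimal feasible diet
for combo in product(SERVING_RANGE, repeat=len(FOODS)):
    diet = dict(zip(FOODS, combo))
    p, k, c, w = totals(diet)
    if p >= PROTEIN_MIN and k >= KCAL_MIN and (best is None or c < best[0]):
        best = (c, w, diet)
```

With these assumed numbers the carbon-minimal diet is dominated by plant food topped up with fish, qualitatively matching the study's finding; repeating the search with the water column as the objective would give the corresponding water-minimal diet.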

  3. Efficient Guiding Towards Cost-Optimality in Uppaal

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas S.

    2001-01-01

    In this paper we present an algorithm for efficiently computing the minimum cost of reaching a goal state in the model of Uniformly Priced Timed Automata (UPTA). This model can be seen as a submodel of the recently suggested model of linearly priced timed automata, which extends timed automata with prices on both locations and transitions. The presented algorithm is based on a symbolic semantics of UPTA, and an efficient representation and operations based on difference bound matrices. In analogy with Dijkstra's shortest path algorithm, we show that the search order of the algorithm can be chosen...

  4. Heliostat field cost reduction by `slope drive' optimization

    Science.gov (United States)

    Arbes, Florian; Weinrebe, Gerhard; Wöhrbach, Markus

    2016-05-01

    An algorithm to optimize power tower heliostat fields employing heliostats with so-called slope drives is presented. It is shown that a field using heliostats with the slope drive axes configuration has the same performance as a field with conventional azimuth-elevation tracking heliostats. Even though heliostats with the slope drive configuration have a limited tracking range, field groups of heliostats with different axes or different drives are not needed for different positions in the heliostat field. The impacts of selected parameters on a benchmark power plant (PS10 near Seville, Spain) are analyzed.

  5. Distribution Inventory Cost Optimization Under Grey and Fuzzy Uncertainty

    Institute of Scientific and Technical Information of China (English)

    LIU Dongbo; HUANG Dao; CHEN Yujuan

    2006-01-01

    The grey fuzzy variable was defined for twofold uncertain parameters combining greyness and fuzziness. On the basis of the credibility and chance measures of grey fuzzy variables, an uncertain programming model for distribution center inventory was presented. Grey fuzzy simulation can generate input-output data for the uncertain functions, and a neural network trained on these data can approximate them. The hybrid intelligent algorithm, designed by embedding the trained neural network into a genetic algorithm, can optimize general grey fuzzy programming problems. Finally, a numerical example is provided to illustrate the effectiveness of the model and the hybrid intelligent algorithm.

  6. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Abubeker, Jewahir Ali

    2011-05-14

    This paper is devoted to an algorithm for the sequential optimization of paths in directed graphs relative to different cost functions. The considered algorithm is based on an extension of dynamic programming that allows the initial set of paths, and the set of optimal paths after each application of the optimization procedure, to be represented in the form of a directed acyclic graph.
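
The flavor of sequential optimization can be sketched with a shortest-path dynamic program over a DAG: first minimize cost1, then break ties by cost2, which is exactly what lexicographic comparison of (cost1, cost2) tuples achieves. The graph and edge costs below are assumed; the paper's algorithm works on the represented path sets rather than on single paths.

```python
# DP over a DAG with two edge costs, compared lexicographically: among all
# cost1-minimal paths, the one with minimal cost2 is selected.
EDGES = {            # node -> list of (next_node, cost1, cost2) -- assumed graph
    "s": [("a", 1, 5), ("b", 1, 1)],
    "a": [("t", 2, 1)],
    "b": [("t", 2, 9)],
    "t": [],
}
TOPO = ["s", "a", "b", "t"]   # a topological order of the DAG

def lexicographic_shortest(source, target):
    best = {n: None for n in EDGES}   # node -> best (cost1, cost2) from source
    best[source] = (0, 0)
    for u in TOPO:                    # relax edges in topological order
        if best[u] is None:
            continue
        for v, c1, c2 in EDGES[u]:
            cand = (best[u][0] + c1, best[u][1] + c2)
            if best[v] is None or cand < best[v]:   # tuple < is lexicographic
                best[v] = cand
    return best[target]
```

Here both s-a-t and s-b-t have cost1 = 3, and the DP keeps the one with the smaller secondary cost.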

  7. Explicit Solution of the Average-Cost Optimality Equation for a Pest-Control Problem

    Directory of Open Access Journals (Sweden)

    Epaminondas G. Kyriakidis

    2011-01-01

    Full Text Available We introduce a Markov decision process in continuous time for the optimal control of a simple symmetrical immigration-emigration process by the introduction of total catastrophes. It is proved that a particular control-limit policy is average cost optimal within the class of all stationary policies by verifying that the relative values of this policy are the solution of the corresponding optimality equation.

  8. Optimization of multiplexed RADseq libraries using low-cost adaptors.

    Science.gov (United States)

    Henri, Hélène; Cariou, Marie; Terraz, Gabriel; Martinez, Sonia; El Filali, Adil; Veyssiere, Marine; Duret, Laurent; Charlat, Sylvain

    2015-04-01

    Reduced representation genomics approaches, of which RADseq is currently the most popular form, offer the possibility to produce genome wide data from potentially any species, without previous genomic information. The application of RADseq to highly multiplexed libraries (including numerous specimens, and potentially numerous different species) is however limited by technical constraints. First, the cost of synthesis of Illumina adaptors including molecular identifiers (MIDs) becomes excessive when numerous specimens are to be multiplexed. Second, the necessity to empirically adjust the ratio of adaptors to genomic DNA concentration impedes the high throughput application of RADseq to heterogeneous samples, of variable DNA concentration and quality. In an attempt to solve these problems, we propose here some adjustments regarding the adaptor synthesis. First, we show that the common and unique (MID) parts of adaptors can be synthesized separately and subsequently ligated, which drastically reduces the synthesis cost, and thus allows multiplexing hundreds of specimens. Second, we show that self-ligation of adaptors, which makes the adaptor concentration so critical, can be simply prevented by using unphosphorylated adaptors, which significantly improves the ligation and sequencing yield.

  9. Cost benefit theory and optimal design of gene regulation functions

    Science.gov (United States)

    Kalisky, Tomer; Dekel, Erez; Alon, Uri

    2007-12-01

    Cells respond to the environment by regulating the expression of genes according to environmental signals. The relation between the input signal level and the expression of the gene is called the gene regulation function. It is of interest to understand the shape of a gene regulation function in terms of the environment in which it has evolved and the basic constraints of biological systems. Here we address this by presenting a cost-benefit theory for gene regulation functions that takes into account temporally varying inputs in the environment and stochastic noise in the biological components. We apply this theory to the well-studied lac operon of E. coli. The present theory explains the shape of this regulation function in terms of temporal variation of the input signals, and of minimizing the deleterious effect of cell-cell variability in regulatory protein levels. We also apply the theory to understand the evolutionary tradeoffs in setting the number of regulatory proteins and for selection of feed-forward loops in genetic circuits. The present cost-benefit theory can be used to understand the shape of other gene regulatory functions in terms of environment and noise constraints.
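
The cost-benefit optimum can be illustrated with assumed functional forms: a benefit that saturates with expression level and a cost that grows linearly yield an interior optimum. The shapes and parameters below are illustrative, not taken from the lac-operon analysis.

```python
# Toy cost-benefit model of gene expression: benefit saturates with the
# expression level E while cost grows linearly, giving an interior optimum.
B_MAX, K, COST_PER_UNIT = 10.0, 2.0, 0.5   # assumed parameters

def net_benefit(e):
    return B_MAX * e / (K + e) - COST_PER_UNIT * e

# Grid search for the optimal expression level on an assumed range.
grid = [i / 100 for i in range(0, 2001)]   # E in [0, 20]
e_opt = max(grid, key=net_benefit)
```

For these forms the optimum can also be found analytically by setting the derivative to zero, which gives E* = sqrt(B_MAX * K / COST_PER_UNIT) - K; the grid search lands on the same value.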

  10. Optimal administrative scale for planning public services: a social cost model applied to Flemish hospital care.

    Science.gov (United States)

    Blank, Jos L T; van Hulst, Bart

    2015-01-01

    In choosing the scale of public services, such as hospitals, both economic and public administrative considerations play important roles. The scale and the corresponding spatial distribution of public institutions have consequences for social costs, defined as the institutions' operating costs and the users' travel costs (which include the money and time costs). Insight into the relationship between scale and spatial distribution and social costs provides a practical guide for the best possible administrative planning level. This article presents a purely economic model that is suitable for deriving the optimal scale for public services. The model also reveals the corresponding optimal administrative planning level from an economic perspective. We applied this model to hospital care in Flanders for three different types of care. For its application, we examined the social costs of hospital services at different levels of administrative planning. The outcomes show that the social costs of rehabilitation in Flanders with planning at the urban level (38 areas) are 11% higher than those at the provincial level (five provinces). At the regional level (18 areas), the social costs of rehabilitation are virtually equal to those at the provincial level. For radiotherapy, there is a difference of 88% in the social costs between the urban and the provincial level. For general care, there are hardly any cost differences between the three administrative levels. Thus, purely from the perspective of social costs, rehabilitation should preferably be planned at the regional level, general services at the urban level and radiotherapy at the provincial level.
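
A toy version of the social cost model can make the mechanism concrete: operating costs rise with the number of facilities while travel costs fall, so an intermediate scale minimizes the total. All parameters below are assumed for illustration and are not the article's Flemish hospital data.

```python
import math

# Toy social cost model: social cost = operating cost + users' travel cost.
POPULATION = 6_000_000      # assumed service population
AREA = 13_500.0             # assumed region area in km^2
TRAVEL_COST_PER_KM = 0.5    # assumed money-plus-time cost per km per visit

def operating_cost(n_facilities):
    # Assumed: each facility has a fixed cost plus a sublinear variable cost,
    # so fewer, larger facilities enjoy economies of scale.
    patients_each = POPULATION / n_facilities
    return n_facilities * (5e6 + 300.0 * patients_each ** 0.9)

def travel_cost(n_facilities):
    # Mean distance to the nearest facility scales like sqrt(area / n).
    mean_km = 0.5 * math.sqrt(AREA / n_facilities)
    return POPULATION * TRAVEL_COST_PER_KM * mean_km

def social_cost(n):
    return operating_cost(n) + travel_cost(n)

n_opt = min(range(1, 101), key=social_cost)   # "optimal administrative scale"
```

Varying the fixed cost or the travel cost per kilometre moves the optimum, mirroring the article's finding that radiotherapy (high fixed cost) is best planned at a coarser level than general care.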

  11. Mixed H2/H∞ Optimal Guaranteed Cost Control of Uncertain Linear Systems

    Institute of Scientific and Technical Information of China (English)

    Guoding Chen; Maying Yang; Li Yu

    2004-01-01

    The mixed H2/H∞ guaranteed cost control problem via state feedback control laws is considered in this paper for linear systems with norm-bounded parameter uncertainty. Based on the linear matrix inequality (LMI) approach, sufficient conditions are derived for the existence of guaranteed cost controllers which guarantee not only a prespecified H∞ disturbance attenuation level on one controlled output for all admissible parameter uncertainties, but also the worst-case H2 performance index on the other controlled output to be no more than a specified bound. Furthermore, a convex optimization problem is formulated to design an optimal H2/H∞ guaranteed cost controller.

  12. Dynamic Portfolio Optimization with Transaction Costs and State-Dependent Drift

    DEFF Research Database (Denmark)

    Palczewski, Jan; Poulsen, Rolf; Schenk-Hoppe, Klaus Reiner

    2015-01-01

    The problem of dynamic portfolio choice with transaction costs is often addressed by constructing a Markov Chain approximation of the continuous time price processes. Using this approximation, we present an efficient numerical method to determine optimal portfolio strategies under time- and state-dependent drift and proportional transaction costs. This scenario arises when investors have behavioral biases or the actual drift is unknown and needs to be estimated. Our numerical method solves dynamic optimal portfolio problems with an exponential utility function for time-horizons of up to 40 years. It is applied to measure the value of information and the loss from transaction costs using the indifference principle.

  13. Optimal transportation network with concave cost functions: loop analysis and algorithms.

    Science.gov (United States)

    Shao, Zhen; Zhou, Haijun

    2007-06-01

    Transportation networks play a vital role in modern societies. Structural optimization of a transportation system under a given set of constraints is an issue of great practical importance. For a general transportation system whose total cost C is determined by C = Sigma_(i<j) C_ij, where C_ij is the cost of the link carrying flow I_ij between nodes i and j, the optimal network topology is a tree if C_ij proportional |I_ij|^gamma with 0 < gamma < 1. In this paper, a rigorous proof of the optimality of tree-formed networks is given. The simple intuitive picture of this proof then leads to an efficient global algorithm for searching for optimal structures of a given transportation system with concave cost functions.
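The role of concavity can be seen in a two-link toy comparison: with cost proportional to flow^gamma and gamma < 1, merging two flows onto a shared link is cheaper than routing them separately, which is the intuition behind tree-shaped optima. A minimal sketch with an assumed gamma:

```python
# Concave link costs (cost ~ flow**GAMMA with 0 < GAMMA < 1) reward merging
# flows onto shared links -- the intuition behind tree-shaped optimal networks.
GAMMA = 0.5   # assumed concavity exponent

def link_cost(flow):
    return flow ** GAMMA

# Two unit flows to a common sink: two parallel links versus one shared link.
separate = link_cost(1.0) + link_cost(1.0)   # 2.0
shared = link_cost(2.0)                      # 2**0.5, about 1.41 -> cheaper

# For a convex cost (gamma > 1) the comparison reverses and splitting wins.
convex_separate = 1.0 ** 2 + 1.0 ** 2        # 2.0
convex_shared = 2.0 ** 2                     # 4.0 -> more expensive
```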

  14. Using Electromagnetic Algorithm for Total Costs of Sub-contractor Optimization in the Cellular Manufacturing Problem

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Shahriari

    2016-12-01

    Full Text Available In this paper, we present a non-linear binary programming model for optimizing a specific cost in a cellular manufacturing system under controlled production conditions. The system parameters are determined by continuous distribution functions. The aim of the presented model is to optimize the total cost imposed by sub-contractors on the manufacturing system by determining how to allocate the machines and parts to each seller. In this system, the decision maker can control the occupation level of each machine. For solving the presented model, we used the electromagnetic meta-heuristic algorithm, with the Taguchi method for determining the optimal algorithm parameters.

  15. The environmental cost of subsistence: Optimizing diets to minimize footprints

    Energy Technology Data Exchange (ETDEWEB)

    Gephart, Jessica A.; Davis, Kyle F. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States); Emery, Kyle A. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States); University of California, Santa Barbara. Marine Science Institute, Santa Barbara, CA 93106 (United States); Leach, Allison M. [University of New Hampshire, 107 Nesmith Hall, 131 Main Street, Durham, NH, 03824 (United States); Galloway, James N.; Pace, Michael L. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States)

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the differing environmental impacts of aquaculture versus capture fisheries, the increasing share of aquaculture, and the shifting composition of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result
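Stigler's problem and its footprint variant are linear programs; because an LP optimum lies at a vertex of the feasible region, a two-food toy version can be solved by enumerating constraint intersections with no solver at all. All nutrient and footprint coefficients below are invented for illustration, not taken from the study:

```python
# Stigler-style diet LP in two foods: minimize a carbon footprint subject to
# nutrient floors.  All coefficients are illustrative, not from the study.
from itertools import combinations

# (carbon kg CO2e, protein g, kcal) per serving -- hypothetical values
beans = (0.9, 8.0, 120.0)
fish = (3.0, 20.0, 150.0)
PROTEIN_MIN, KCAL_MIN = 50.0, 2000.0

# Constraints of the form a*x_beans + b*x_fish >= r (last two are the axes).
lines = [
    (beans[1], fish[1], PROTEIN_MIN),   # protein floor
    (beans[2], fish[2], KCAL_MIN),      # calorie floor
    (1.0, 0.0, 0.0),                    # x_beans >= 0
    (0.0, 1.0, 0.0),                    # x_fish >= 0
]

def intersect(l1, l2):
    """Solve the 2x2 system where both constraints are tight."""
    (a1, b1, r1), (a2, b2, r2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(x, y):
    return all(a * x + b * y >= r - 1e-9 for a, b, r in lines)

# An LP optimum lies at a vertex of the feasible region: enumerate them all.
vertices = [p for l1, l2 in combinations(lines, 2)
            if (p := intersect(l1, l2)) and feasible(*p)]
best = min(vertices, key=lambda p: beans[0] * p[0] + fish[0] * p[1])
```

In this toy instance the optimum is beans-only, loosely echoing the record's finding that plant foods dominate footprint-minimizing diets.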

  16. Control and operation cost optimization of the HISS cryogenic system

    Science.gov (United States)

    Porter, J.; Bieser, F.; Anderson, D.

    1983-08-01

    The Heavy Ion Spectrometer System (HISS) relies upon superconducting coils of cryostable design to provide a maximum particle bending field of 3 tesla. A previous paper describes the cryogenic facility including helium refrigeration and gas management. A control strategy that has allowed full-time unattended operation, along with significant nitrogen and power cost reductions, is discussed. Reduction of liquid nitrogen consumption was accomplished by using the sensible heat available in the cold exhaust gas. Measured nitrogen throughput agrees with calculations for sensible heat utilization of zero to 70%. Calculated consumption saving over this range is 40 liters per hour for conductive losses to the supports only. It is found that the measured throughput differential for the total system is higher.

  17. Control and operation cost optimization of the HISS cryogenic system

    Energy Technology Data Exchange (ETDEWEB)

    Porter, J.; Bieser, F.; Anderson, D.

    1983-08-01

    The Heavy Ion Spectrometer System (HISS) relies upon superconducting coils of cryostable design to provide a maximum particle bending field of 3 tesla. A previous paper describes the cryogenic facility including helium refrigeration and gas management. This paper discusses a control strategy which has allowed full time unattended operation, along with significant nitrogen and power cost reductions. Reduction of liquid nitrogen consumption has been accomplished by making use of the sensible heat available in the cold exhaust gas. Measured nitrogen throughput agrees with calculations for sensible heat utilization of zero to 70%. Calculated consumption saving over this range is 40 liters per hour for conductive losses to the supports only. The measured throughput differential for the total system is higher.

  18. The Optimal Sequence of Production Orders, Taking into Account the Cost of Delays

    Directory of Open Access Journals (Sweden)

    Dylewski Robert

    2016-06-01

    Full Text Available In flexible manufacturing systems the most important element in determining the proper course of technological processes, transport and storage is the control and planning subsystem. The key planning task is to determine the optimal sequence of production orders. This paper proposes a new method of determining the optimal sequence of production orders with respect to the sum of the costs related to the delayed execution of orders. It takes into account the different unit costs of delay of individual orders and the amount of allowable delay involving no delay cost. In the single-machine problem, the sequence that is optimal with respect to the sum of delay costs may differ significantly from the sequence that is optimal with respect to the sum of delay times.
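The closing claim, that the cost-optimal and time-optimal sequences can differ, is easy to reproduce by brute force on a toy single-machine instance with unit delay costs and cost-free delay allowances. All order data are invented for the sketch:

```python
# Single-machine sequencing: compare the order minimizing total delay cost
# (per-order unit costs plus a cost-free delay allowance) with the order
# minimizing total delay time.  Order data are hypothetical.
from itertools import permutations

orders = {       # name: (processing_time, due_date, allowance, unit_cost)
    "A": (1, 0, 0, 1),
    "B": (5, 0, 1, 100),
}

def completion_times(seq):
    t = 0
    for name in seq:
        t += orders[name][0]
        yield name, t

def total_delay_cost(seq):
    cost = 0
    for name, t in completion_times(seq):
        _, d, a, w = orders[name]
        cost += w * max(0, t - d - a)   # cost only beyond the free allowance
    return cost

def total_delay_time(seq):
    return sum(max(0, t - orders[name][1]) for name, t in completion_times(seq))

by_cost = min(permutations(orders), key=total_delay_cost)
by_time = min(permutations(orders), key=total_delay_time)
```

Here the short order first minimizes total delay time, but the expensive order first minimizes total delay cost, so the two optima disagree.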

  19. Cost Optimization for Series-Parallel Petroleum Transportation Pipe-Lines under Reliability Constraints

    Directory of Open Access Journals (Sweden)

    M. Amara

    2014-01-01

    Full Text Available This paper uses an ant colony meta-heuristic optimization method to solve a cost-optimization problem in the petroleum industry. The problem is known as total investment-cost minimization of series-parallel transportation pipelines. Redundant electro-pumps coupled to the pipelines are included to achieve a desired level of availability. System availability is represented by a multi-state availability function. The electro-pumps and pipelines are characterized by their capacity, availability and cost, and the electro-pumps are chosen from a list of products available on the market. The proposed meta-heuristic seeks the minimal-cost petroleum transportation system configuration with the desired availability. To estimate the availability of the series-parallel pipelines, a fast method based on the universal moment generating function (UMGF) is suggested. The ant colony approach is used as the optimization technique. An example of a petroleum transportation system is presented.

  20. The galactic interstellar medium

    CERN Document Server

    Burton, WB; Genzel, R

    1992-01-01

    This volume contains the papers of three extended lectures addressing advanced topics in astronomy and astrophysics. The topics discussed include the most recent observational data on interstellar matter outside our galaxy and the physics and chemistry of molecular clouds.

  1. Diffuse interstellar absorption bands

    Institute of Scientific and Technical Information of China (English)

    XIANG FuYuan; LIANG ShunLin; LI AiGen

    2009-01-01

    The diffuse interstellar bands (DIBs) are a large number of absorption bands that are superposed on the interstellar extinction curve and are of interstellar origin. Since the discovery of the first two DIBs in the 1920s, the exact nature of DIBs still remains unclear. This article reviews the history of the detections of DIBs in the Milky Way and external galaxies, the major observational characteristics of DIBs, the correlations or anti-correlations among DIBs or between DIBs and other interstellar features (e.g. the prominent 2175 Angstrom extinction bump and the far-ultraviolet extinction rise), and the proposed candidate carriers. Whether they are also present in circumstellar environments is also discussed.

  2. Diffuse interstellar absorption bands

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    The diffuse interstellar bands (DIBs) are a large number of absorption bands that are superposed on the interstellar extinction curve and are of interstellar origin. Since the discovery of the first two DIBs in the 1920s, the exact nature of DIBs still remains unclear. This article reviews the history of the detections of DIBs in the Milky Way and external galaxies, the major observational characteristics of DIBs, the correlations or anti-correlations among DIBs or between DIBs and other interstellar features (e.g. the prominent 2175 Angstrom extinction bump and the far-ultraviolet extinction rise), and the proposed candidate carriers. Whether they are also present in circumstellar environments is also discussed.

  3. Optimal and maximin sample sizes for multicentre cost-effectiveness trials.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2015-10-01

    This paper deals with the optimal sample sizes for a multicentre trial in which the cost-effectiveness of two treatments in terms of net monetary benefit is studied. A bivariate random-effects model, with the treatment-by-centre interaction effect being random and the main effect of centres fixed or random, is assumed to describe both costs and effects. The optimal sample sizes concern the number of centres and the number of individuals per centre in each of the treatment conditions. These numbers maximize the efficiency or power for given research costs, or minimize the research costs at a desired level of efficiency or power. Information on model parameters and sampling costs is required to calculate these optimal sample sizes. In case of limited information on relevant model parameters, sample size formulas are derived for so-called maximin sample sizes, which guarantee a power level at the lowest study costs. Four different maximin sample sizes are derived based on the signs of the lower bounds of two model parameters, one of which is the worst case compared with the others. We numerically evaluate the efficiency of this worst case relative to the others. Finally, an expression is derived for calculating optimal and maximin sample sizes that yield sufficient power to test the cost-effectiveness of two treatments. © The Author(s) 2015.
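The "maximize efficiency for given research costs" side of the problem can be sketched by enumerating centre/patient combinations under a budget and keeping the variance-minimizing one. The two-level variance model and all cost figures below are illustrative stand-ins, not the paper's formulas:

```python
# Enumerate (number of centres k, patients per arm per centre n) minimizing
# the variance of the estimated treatment effect under a budget.  The
# variance model and costs are assumed for the sketch, not from the paper.
SIGMA_INT2 = 4.0    # treatment-by-centre interaction variance (assumed)
SIGMA_E2 = 100.0    # residual variance of net monetary benefit (assumed)
C_CENTRE, C_SUBJECT, BUDGET = 500.0, 50.0, 20000.0

def variance(k, n):
    # Two-level sketch: between-centre term plus residual term (two arms).
    return SIGMA_INT2 / k + 2 * SIGMA_E2 / (k * n)

def cost(k, n):
    # Per-centre overhead plus per-subject cost for both treatment arms.
    return k * (C_CENTRE + 2 * n * C_SUBJECT)

candidates = [(k, n) for k in range(2, 40) for n in range(2, 100)
              if cost(k, n) <= BUDGET]
k_opt, n_opt = min(candidates, key=lambda kn: variance(*kn))
```

The same enumeration, run with the lower bounds of the variance components, would give a maximin-style design in the spirit of the paper.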

  4. Modeling the lowest-cost splitting of a herd of cows by optimizing a cost function

    OpenAIRE

    Gajamannage, Kelum; Bollt, Erik M; Porter, Mason A.; Dawkins, Marian S.

    2016-01-01

    Animals live in groups to defend against predation and to obtain food. However, for some animals --- especially ones that spend long periods of time feeding --- there are costs if a group chooses to move on before their nutritional needs are satisfied. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous to some animals to split into subgroups of animals with similar nutritional needs. We model the costs and benefits of splitting by a herd of c...

  5. Interstellar organic chemistry.

    Science.gov (United States)

    Sagan, C.

    1972-01-01

    Most of the interstellar organic molecules have been found in the large radio source Sagittarius B2 toward the galactic center, and in such regions as W51 and the IR source in the Orion nebula. Questions of the reliability of molecular identifications are discussed together with aspects of organic synthesis in condensing clouds, degradational origin, synthesis on grains, UV natural selection, interstellar biology, and contributions to planetary biology.

  6. Interstellar organic chemistry.

    Science.gov (United States)

    Sagan, C.

    1972-01-01

    Most of the interstellar organic molecules have been found in the large radio source Sagittarius B2 toward the galactic center, and in such regions as W51 and the IR source in the Orion nebula. Questions of the reliability of molecular identifications are discussed together with aspects of organic synthesis in condensing clouds, degradational origin, synthesis on grains, UV natural selection, interstellar biology, and contributions to planetary biology.

  7. Cost Saving Opportunities in NSCLC Therapy by Optimized Diagnostics

    Directory of Open Access Journals (Sweden)

    Ilija Nenadić

    2017-07-01

    Full Text Available With an incidence of 68 new cases per 100,000 people per year, an estimated total number of up to 350,000 new non-small-cell lung cancer (NSCLC cases are diagnosed each year in the European Union. Up to 10% of NSCLC patients are eligible for therapy with novel ALK (anaplastic lymphoma kinase inhibitors, as they have been diagnosed with a mutation in the gene coding for ALK. The ALK inhibitor therapy costs add up to approx. 9,000 € per patient per month, with treatment durations of up to one year. Recent studies have shown that up to 10% of ALK cases are misdiagnosed by nearly 40% of pathologic investigations. The current state-of-the-art ALK diagnostic procedure comprises a Fluorescent in situ Hybridization (FISH assay accompanied by ALK inhibitor therapy (Crizotinib. The therapy success ranges between a full therapy failure and the complete remission of the tumor (i.e., healing, but the biomedical and systemic reasons for this range remain unknown so far. It appears that the variety of different ALK mutations and variants contributes to the discrepancy in therapy results. Although the major known fusion partner for ALK in NSCLC is the Echinoderm microtubule-associated protein-like 4 (EML4, of which a minimum of 15 variants have been described, an additional 20 further ALK fusion variants with other genes are known, of which three have already been found in NSCLC. We hypothesize that the wide variety of known (and unknown ALK mutations is associated with a variable therapy success, thus rendering current companion diagnostic procedures (FISH and therapy (Crizotinib only partly applicable in ALK-related NSCLC treatment. In cell culture, differing sensitivity to Crizotinib has been shown for some fusion variants, but it is as yet unknown which of them are really biologically active in cancer patients, and how the respective variants affect the response to Crizotinib treatment. Moreover, it has been demonstrated that translocated ALK genes can

  8. Multi-objective optimization approach for cost management during product design at the conceptual phase

    Science.gov (United States)

    Durga Prasad, K. G.; Venkata Subbaiah, K.; Narayana Rao, K.

    2014-03-01

    The effective cost management during the conceptual design phase of a product is essential to develop a product with minimum cost and desired quality. The integration of the methodologies of quality function deployment (QFD), value engineering (VE) and target costing (TC) could be applied to the continuous improvement of any product during product development. To optimize customer satisfaction and total cost of a product, a mathematical model is established in this paper. This model integrates QFD, VE and TC under a multi-objective optimization framework. A case study on a domestic refrigerator is presented to show the performance of the proposed model. Goal programming is adopted to attain the goals of maximum customer satisfaction and minimum cost of the product.
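The goal-programming idea, trading off a customer-satisfaction goal against a cost goal via penalized deviations, can be imitated on a discrete toy: pick one option per design attribute and minimize the weighted deviations from the two goals. Attributes, scores, costs, goals and weights below are all invented; the paper's actual model is a continuous QFD/VE/TC formulation:

```python
# Goal-programming flavour: choose one option per design attribute so that
# total customer satisfaction and total cost deviate as little as possible
# from their goals.  All attribute data are invented for this sketch.
from itertools import product

options = {
    "compressor": [("basic", 3, 40), ("quiet", 7, 65)],
    "insulation": [("standard", 4, 20), ("thick", 8, 35)],
    "shelving":   [("wire", 2, 10), ("glass", 6, 25)],
}  # each option: (name, satisfaction score, cost)

SAT_GOAL, COST_GOAL = 18, 100     # target satisfaction and target cost
W_SAT, W_COST = 2.0, 1.0          # priority weights on the two deviations

def deviation(choice):
    sat = sum(opt[1] for opt in choice)
    cost = sum(opt[2] for opt in choice)
    # Penalize under-achieving the satisfaction goal and over-shooting cost.
    return W_SAT * max(0, SAT_GOAL - sat) + W_COST * max(0, cost - COST_GOAL)

best = min(product(*options.values()), key=deviation)
```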

  9. Cost optimal building performance requirements. Calculation methodology for reporting on national energy performance requirements on the basis of cost optimality within the framework of the EPBD

    Energy Technology Data Exchange (ETDEWEB)

    Boermans, T.; Bettgenhaeuser, K.; Hermelink, A.; Schimschar, S. [Ecofys, Utrecht (Netherlands)

    2011-05-15

    On the European level, the principles for the requirements for the energy performance of buildings are set by the Energy Performance of Buildings Directive (EPBD). Dating from December 2002, the EPBD has set a common framework from which the individual Member States in the EU developed or adapted their individual national regulations. The EPBD in 2008 and 2009 underwent a recast procedure, with final political agreement having been reached in November 2009. The new Directive was then formally adopted on May 19, 2010. Among other clarifications and new provisions, the EPBD recast introduces a benchmarking mechanism for national energy performance requirements for the purpose of determining cost-optimal levels to be used by Member States for comparing and setting these requirements. The previous EPBD set out a general framework to assess the energy performance of buildings and required Member States to define maximum values for energy delivered to meet the energy demand associated with the standardised use of the building. However it did not contain requirements or guidance related to the ambition level of such requirements. As a consequence, building regulations in the various Member States have been developed by the use of different approaches (influenced by different building traditions, political processes and individual market conditions) and resulted in different ambition levels where in many cases cost optimality principles could justify higher ambitions. The EPBD recast now requests that Member States shall ensure that minimum energy performance requirements for buildings are set 'with a view to achieving cost-optimal levels'. The cost optimum level shall be calculated in accordance with a comparative methodology. The objective of this report is to contribute to the ongoing discussion in Europe around the details of such a methodology by describing possible details on how to calculate cost optimal levels and pointing towards important factors and

  10. Investigating nearby exoplanets via interstellar radar

    Science.gov (United States)

    Scheffer, Louis K.

    2014-01-01

    Interstellar radar is a potential intermediate step between passive observation of exoplanets and interstellar exploratory missions. Compared with passive observation, it has the traditional advantages of radar astronomy. It can measure surface characteristics, determine spin rates and axes, provide extremely accurate ranges, construct maps of planets, distinguish liquid from solid surfaces, find rings and moons, and penetrate clouds. It can do this even for planets close to the parent star. Compared with interstellar travel or probes, it also offers significant advantages. The technology required to build such a radar already exists, radar can return results within a human lifetime, and a single facility can investigate thousands of planetary systems. The cost, although too high for current implementation, is within the reach of Earth's economy.

  11. Investigating Nearby Exoplanets via Interstellar Radar

    CERN Document Server

    Scheffer, Louis K

    2013-01-01

    Interstellar radar is a potential intermediate step between passive observation of exoplanets and interstellar exploratory missions. Compared to passive observation, it has the traditional advantages of radar astronomy. It can measure surface characteristics, determine spin rates and axes, provide extremely accurate ranges, construct maps of planets, distinguish liquid from solid surfaces, find rings and moons, and penetrate clouds. It can do this even for planets close to the parent star. Compared to interstellar travel or probes, it also offers significant advantages. The technology required to build such a radar already exists, radar can return results within a human lifetime, and a single facility can investigate thousands of planetary systems. The cost, although high, is within the reach of Earth's economy, so it is cheaper as well.

  12. A bivariate optimal replacement policy with cumulative repair cost limit under cumulative damage model

    Indian Academy of Sciences (India)

    MIN-TSAI LAI; SHIH-CHIH CHEN

    2016-05-01

    In this paper, a bivariate replacement policy (n, T) for a cumulative shock damage process is presented that includes the concept of a cumulative repair cost limit. Arriving shocks are of two kinds. Each type-I shock causes a random amount of damage, and these damages are additive; when the total damage exceeds a failure level, the system goes into serious failure. A type-II shock puts the system into minor failure, and such a failure can be corrected by minimal repair. When a minor failure occurs, the repair cost is evaluated and minimal repair is executed if the accumulated repair cost is less than a predetermined limit L. The system is replaced at scheduled time T, at the n-th minor failure, or at serious failure. The long-term expected cost per unit time is derived using the expected costs as the optimality criterion. The minimum-cost policy is derived, and the existence and uniqueness of the optimal n* and T* are proved. This bivariate optimal replacement policy (n, T) is shown to be better than the optimal T* and the optimal n* policies alone.
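A minimal numerical analogue of the "expected cost per unit time" criterion is the classic age-replacement model, a simpler cousin of the paper's (n, T) policy: by the renewal-reward theorem the long-run cost rate is g(T) = (c_p R(T) + c_f (1 - R(T))) / E[min(X, T)], which can be minimized on a grid. The Weibull lifetime and cost figures are assumptions for the sketch:

```python
# Age-replacement sketch: preventive replacement at T costs C_P, replacement
# on failure costs C_F; minimize the renewal-reward cost rate over a grid.
# Weibull lifetime parameters and costs are illustrative assumptions.
import math

C_P, C_F = 1.0, 10.0          # preventive vs failure replacement cost
SHAPE, SCALE = 2.0, 100.0     # Weibull parameters (increasing failure rate)

def reliability(t):
    return math.exp(-((t / SCALE) ** SHAPE))

def mean_cycle_length(T, steps=1000):
    # E[min(X, T)] = integral of R(t) over [0, T], by the trapezoidal rule.
    h = T / steps
    return h * (0.5 * (reliability(0) + reliability(T))
                + sum(reliability(i * h) for i in range(1, steps)))

def cost_rate(T):
    r = reliability(T)
    return (C_P * r + C_F * (1 - r)) / mean_cycle_length(T)

T_opt = min(range(5, 300, 5), key=cost_rate)
```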

  13. Application of particle swarm optimization technique for optimal location of FACTS devices considering cost of installation and system loadability

    Energy Technology Data Exchange (ETDEWEB)

    Saravanan, M.; Slochanal, S. Mary Raja; Venkatesh, P.; Abraham, J. Prince Stephen [Electrical and Electronics Engineering Department, Thiagarajar College of Engineering, Madurai 625015 (India)

    2007-03-15

    This paper presents the application of particle swarm optimization (PSO) technique to find the optimal location of flexible AC transmission system (FACTS) devices with minimum cost of installation of FACTS devices and to improve system loadability (SL). While finding the optimal location, thermal limit for the lines and voltage limit for the buses are taken as constraints. Three types of FACTS devices, thyristor controlled series compensator (TCSC), static VAR compensator (SVC) and unified power flow controller (UPFC) are considered. The optimizations are performed on the parameters namely the location of FACTS devices, their setting, their type, and installation cost of FACTS devices. Two cases namely, single-type devices (same type of FACTS devices) and multi-type devices (combination of TCSC, SVC and UPFC) are considered. Simulations are performed on IEEE 6, 30 and 118 bus systems and Tamil Nadu Electricity Board (TNEB) 69 bus system, a practical system in India for optimal location of FACTS devices. The results obtained are quite encouraging and will be useful in electrical restructuring. (author)
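The PSO kernel itself is compact. Below is a generic sketch on a toy two-variable cost surface standing in for the installation-cost/loadability objective; the real objective would require a power-flow model of the bus system, and the inertia and acceleration constants used here are typical textbook values, not the paper's settings:

```python
# Minimal particle swarm optimization on a toy 2-D cost surface (a shifted
# sphere function), standing in for the FACTS cost/loadability objective.
import random

random.seed(1)

def cost(x, y):
    # Hypothetical objective with its minimum at (3, -2).
    return (x - 3) ** 2 + (y + 2) ** 2

N, STEPS, W, C1, C2 = 20, 200, 0.7, 1.5, 1.5
pos = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]                       # personal bests
gbest = min(pbest, key=lambda p: cost(*p))[:]     # global best

for _ in range(STEPS):
    for i in range(N):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(*pos[i]) < cost(*pbest[i]):
            pbest[i] = pos[i][:]
            if cost(*pos[i]) < cost(*gbest):
                gbest = pos[i][:]
```

In the paper's setting each particle would instead encode device locations, types and settings, with the cost evaluated through a load-flow computation.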

  14. Laboratory Astrochemistry: Interstellar PAHs

    Science.gov (United States)

    Salama, Farid; DeVincenzi, Donald L. (Technical Monitor)

    2000-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are now considered to be an important and ubiquitous component of the organic material in space. PAHs are found in a large variety of extraterrestrial materials such as interplanetary dust particles (IDPs) and meteoritic materials. PAHs are also good candidates to account for the infrared emission bands (UIRs) and the diffuse interstellar optical absorption bands (DIBs) detected in various regions of the interstellar medium. The recent observations made with the Infrared Space Observatory (ISO) have confirmed the ubiquitous nature of the UIR bands and their carriers. PAHs are thought to form through chemical reactions in the outflow from carbon-rich stars in a process similar to soot formation. Once injected in the interstellar medium, PAHs are further processed by the interstellar radiation field, interstellar shocks and energetic particles. A major, dedicated, laboratory effort has been undertaken to measure the physical and chemical characteristics of these complex molecules and their ions under experimental conditions that mimic the interstellar conditions. These measurements require collision-free conditions where the molecules and ions are cold and chemically isolated. The spectroscopy of PAHs under controlled conditions represents an essential diagnostic tool to study the evolution of extraterrestrial PAHs. The Astrochemistry Laboratory program will be discussed through its multiple aspects: (1) objectives, (2) approach and techniques adopted, (3) adaptability to the nature of the problem(s), and (4) results and implications for astronomy as well as for molecular spectroscopy. A review of the data generated through laboratory simulations of space environments and the role these data have played in our current understanding of the properties of interstellar PAHs will be presented. 
The discussion will also introduce the newest generation of laboratory experiments that are currently being developed in order to provide a

  15. Vertical and lateral flight optimization algorithm and missed approach cost calculation

    Science.gov (United States)

    Murrieta Mendoza, Alejandro

    Flight trajectory optimization is seen as a way of reducing flight costs, fuel burn and the emissions generated by fuel consumption. The objective of this work is to find the optimal trajectory between two points. To find the optimal trajectory, the parameters of weight, cost index, initial coordinates, and meteorological conditions along the route are provided to the algorithm. The algorithm finds the trajectory with the most economical global cost. The global cost is a compromise between fuel burned and flight time, determined using a cost index that assigns a cost in terms of fuel to the flight time. The optimization is achieved by calculating a candidate optimal cruise trajectory profile from all the combinations available in the aircraft performance database. From this candidate cruise profile, further cruise profiles are calculated taking into account the climb and descent costs. During cruise, step climbs are evaluated to optimize the trajectory. The different trajectories are compared and the most economical one is defined as the optimal vertical navigation profile. From the optimal vertical navigation profile, different lateral routes are tested. Taking advantage of the meteorological influence, the algorithm looks for the lateral navigation trajectory with the most economical global cost; that route is then selected as the optimal lateral navigation profile. The meteorological data were obtained from Environment Canada. The new way of obtaining data from the Environment Canada grid proposed in this work resulted in an important computation-time reduction compared with other methods such as bilinear interpolation. The algorithm developed here was evaluated on two different aircraft: the Lockheed L-1011 and the Sukhoi Russian regional jet. The algorithm was developed in MATLAB, and the validation was performed using Flight-Sim by Presagis and the FMS CMA-9000 by CMC Electronics -- Esterline. 
At the end of this work a
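The "global cost" compromise described in this record is conventionally expressed through the cost index, which converts flight time into an equivalent fuel quantity so that candidate profiles can be compared on a single number. A toy comparison with invented profile data (the cost-index units and figures are assumptions, not the thesis's values):

```python
# Global cost = fuel + CI * time: the cost index (CI) converts flight time
# into equivalent fuel so candidate cruise profiles compare on one number.
# All profile data below are invented for the sketch.
CI = 30.0   # kg of fuel per minute of flight time (illustrative units)

candidates = {                    # profile: (fuel burned kg, flight time min)
    "FL340, M0.78": (14200, 412),
    "FL360, M0.80": (13900, 405),
    "FL380, M0.82": (13850, 409),
}

def global_cost(fuel, minutes):
    return fuel + CI * minutes

best = min(candidates, key=lambda k: global_cost(*candidates[k]))
```

Note that the fuel-cheapest profile (FL380 here) need not win: a higher cost index shifts the optimum toward faster, thirstier profiles.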

  16. Research on Reliability and Cost Integrated Optimization Algorithm of Construction Project Logistics System

    Directory of Open Access Journals (Sweden)

    Xiaoping Bai

    2013-01-01

    Full Text Available The structure of the construction project logistics system is decomposed in detail, and some uncertain factors affecting system reliability are analyzed. This paper applies the probability-influence coordinate graph to screen out the logistics subsystems that have great influence on system reliability under failure conditions, and establishes an allocation model of the reliability index in the construction project logistics system under a cost constraint. It uses the presented model and algorithm to calculate the cost-based reliability and contrasts it with the reliability index assigned by the scoring method, obtaining the optimal distribution of the reliability index and the optimal cost. The presented methods and detailed steps can offer a meaningful reference for reliability optimization management of the logistics system in construction projects.

  17. Investment-Cost Optimization of Plastic Recycling System under Reliability Constraints

    Directory of Open Access Journals (Sweden)

    Abdelkader ZEBLAH

    2008-06-01

    Full Text Available This paper describes and uses an ant colony meta-heuristic optimization method to solve the redundancy optimization problem in plastic recycling industry. This problem is known as total investment-cost minimization of series-parallel plastic recycling system. Redundant components are included to achieve a desired level of availability. System availability is represented by a multi-state availability function. The plastic machines are characterized by their capacity, availability and cost. These machines are chosen among a list of products available on the market. The proposed meta-heuristic seeks to find the best minimal cost plastic recycling system configuration with desired availability. To estimate the series-parallel plastic machines availability, a fast method based on universal moment generating function (UMGF is suggested. The ant colony approach is used as an optimization technique. An example of plastic recycling system is presented.
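The UMGF ("u-function") machinery mentioned here is essentially polynomial composition over capacity distributions: represent each machine as a map capacity -> probability, combine parallel blocks by summing capacities and series blocks by taking the minimum, then read off availability against a demand level. A sketch with invented machine data:

```python
# UMGF sketch for series-parallel multi-state availability: a u-function maps
# capacity -> probability; parallel blocks add capacities, series blocks take
# the minimum.  Machine capacities and availabilities are invented.
from collections import defaultdict

def combine(u1, u2, op):
    out = defaultdict(float)
    for c1, p1 in u1.items():
        for c2, p2 in u2.items():
            out[op(c1, c2)] += p1 * p2
    return dict(out)

def parallel(u1, u2):
    return combine(u1, u2, lambda a, b: a + b)

def series(u1, u2):
    return combine(u1, u2, min)

# Two machines in parallel (capacity 50 each, availability 0.9), feeding a
# single downstream machine of capacity 80 with availability 0.95.
m = {50: 0.9, 0: 0.1}
stage1 = parallel(m, m)                    # capacities 0, 50, 100
system = series(stage1, {80: 0.95, 0: 0.05})

# Availability with respect to a demand of 60 units:
availability = sum(p for c, p in system.items() if c >= 60)
```

The speed of the method comes from this composition being polynomial multiplication over a small set of capacity states, rather than enumeration of all system states.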

  18. Cost Optimal Reliability Based Inspection and Replacement Planning of Piping Subjected to CO2 Corrosion

    DEFF Research Database (Denmark)

    Hellevik, S. G.; Langen, I.; Sørensen, John Dalsgaard

    1999-01-01

    A methodology for cost optimal reliability based inspection and replacement planning of piping subjected to CO2 corrosion is described. Both initial (design phase) and in-service planning are dealt with. The methodology is based on the application of methods for structural reliability analysis...... within the framework of Bayesian decision theory. The planning problem is formulated as an optimization problem where the expected lifetime costs are minimized with a constraint on the minimum acceptable reliability level. The optimization parameters are the number of inspections in the expected lifetime......, the inspection times and methods. In the design phase the nominal design wall thickness is also treated as an optimization parameter. The most important benefits gained through the application of the methodology are consistent evaluation of the consequences of different inspection and replacement plans...

  19. Optimal Power Flow With UPFC Using Fuzzy- PSO With NonSmooth Fuel Cost Function

    Directory of Open Access Journals (Sweden)

    A.Immanuel

    2015-05-01

    Full Text Available This paper presents an efficient and reliable evolutionary approach to solve the Optimal Power Flow problem in an electrical power network. The Particle Swarm Optimization (PSO) method is used to solve the Optimal Power Flow problem while incorporating a powerful and versatile Flexible Alternating Current Transmission Systems (FACTS) device, the Unified Power Flow Controller (UPFC). The UPFC is a recent member of the FACTS family with great flexibility: it can control active power, reactive power and voltage magnitudes simultaneously. In this paper the optimal location of the UPFC is found using a fuzzy approach, and its control settings are determined by PSO. The proposed approach is examined on the IEEE 30-bus system with different objective functions reflecting fuel cost minimization and fuel cost with valve-point effects. The test results show the effectiveness and robustness of the proposed approach compared with existing results in the literature.
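    The valve-point effect makes the fuel cost non-smooth, which is why gradient-free methods such as PSO are attractive here. A bare-bones single-variable PSO sketch; the cost coefficients and generator bounds below are invented for illustration, not taken from the paper:

    ```python
    import math
    import random

    random.seed(1)  # reproducible illustration

    def fuel_cost(p, a=100.0, b=2.0, c=0.01, e=50.0, f=0.063, p_min=10.0):
        """Quadratic fuel cost plus the non-smooth valve-point ripple term."""
        return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

    def pso(cost, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimizer on a single decision variable."""
        xs = [random.uniform(lo, hi) for _ in range(n)]
        vs = [0.0] * n
        pbest = xs[:]                      # per-particle best positions
        pcost = [cost(x) for x in xs]
        g = min(range(n), key=lambda i: pcost[i])
        gbest, gcost = pbest[g], pcost[g]  # swarm-wide best
        for _ in range(iters):
            for i in range(n):
                vs[i] = (w * vs[i]
                         + c1 * random.random() * (pbest[i] - xs[i])
                         + c2 * random.random() * (gbest - xs[i]))
                xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # respect generator limits
                ci = cost(xs[i])
                if ci < pcost[i]:
                    pbest[i], pcost[i] = xs[i], ci
                    if ci < gcost:
                        gbest, gcost = xs[i], ci
        return gbest, gcost

    best_p, best_c = pso(fuel_cost, 10.0, 200.0)
    ```

    A full OPF adds many variables and network constraints, but the update rule is the same.
    
    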

  20. Interstellar Antifreeze: Ethylene Glycol

    Science.gov (United States)

    Hollis, J. M.; Lovas, F. J.; Jewell, P. R.; Coudert, L. H.

    2002-01-01

    Interstellar ethylene glycol (HOCH2CH2OH) has been detected in emission toward the Galactic center source Sagittarius B2(N-LMH) by means of several millimeter-wave rotational-torsional transitions of its lowest energy conformer. The types and kinds of molecules found to date in interstellar clouds suggest a chemistry that favors aldehydes and their corresponding reduced alcohols, e.g., formaldehyde (H2CO)/methanol (CH3OH) and acetaldehyde (CH3CHO)/ethanol (CH3CH2OH). Similarly, ethylene glycol is the reduced alcohol of glycolaldehyde (CH2OHCHO), which has also been detected toward Sgr B2(N-LMH). While there is no consensus as to how any such large complex molecules are formed in interstellar clouds, atomic hydrogen (H) and carbon monoxide (CO) could form formaldehyde on grain surfaces, but surface chemistry beyond that point is uncertain. However, laboratory experiments have shown that the gas-phase reaction of atomic hydrogen (H) and solid-phase CO at 10-20 K can produce formaldehyde and methanol, and that alcohols and other complex molecules can be synthesized from cometary ice analogs when subjected to ionizing radiation at 15 K. Thus, the presence of aldehyde/reduced alcohol pairs in interstellar clouds implies that such molecules are a product of a low-temperature chemistry on grain surfaces or in grain ice mantles. This work suggests that aldehydes and their corresponding reduced alcohols provide unique observational constraints on the formation of complex interstellar molecules.

  1. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Mahayni, Malek A.

    2011-07-01

    Finding optimal paths in directed graphs is a wide area of research that has received much attention in theoretical computer science due to its importance in many applications (e.g., computer networks and road maps). Many algorithms have been developed to solve the optimal paths problem for different kinds of graphs. An algorithm that solves the problem of path optimization in directed graphs relative to different cost functions is described in [1]. It follows an approach extended from the dynamic programming approach, solving the problem sequentially, and works on directed graphs with positive weights and no loop edges. The aim of this thesis is to implement and evaluate that algorithm to find the optimal paths in directed graphs relative to two different cost functions ( , ). A possible interpretation of a directed graph is a network of roads, so the weights for the first function represent the lengths of roads, whereas the weights for the second function represent a constraint on the width or weight of a vehicle. The optimization aim for these two functions is to minimize the cost relative to the first function and maximize the constraint value associated with the second. This thesis also includes finding and proving the relation between the two cost functions: given a value of one function, we can find the best possible value of the other. This relation is proven theoretically and also implemented and experimented with using Matlab® [2].
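    The two-criteria idea (minimize road length first, then maximize the vehicle-width bottleneck among the shortest routes) can be sketched with a lexicographic Dijkstra search; the road network and weights below are made up:

    ```python
    import heapq

    def best_path(graph, src, dst):
        """Among all minimum-length src->dst paths, find the one maximizing the
        bottleneck width. graph: {u: [(v, length, width), ...]}, positive lengths."""
        best = {src: (0, float("inf"))}
        heap = [(0, -float("inf"), src)]  # key: (length asc, width desc)
        while heap:
            d, neg_w, u = heapq.heappop(heap)
            w = -neg_w
            if (d, w) != best.get(u, (None, None)):
                continue  # stale heap entry
            if u == dst:
                return d, w
            for v, length, width in graph.get(u, []):
                cand = (d + length, min(w, width))
                old = best.get(v)
                if old is None or (cand[0], -cand[1]) < (old[0], -old[1]):
                    best[v] = cand
                    heapq.heappush(heap, (cand[0], -cand[1], v))
        return None

    roads = {
        "A": [("B", 2, 3.5), ("C", 1, 2.0)],
        "B": [("D", 2, 4.0)],
        "C": [("B", 1, 2.5), ("D", 3, 5.0)],
    }
    print(best_path(roads, "A", "D"))  # prints (4, 3.5)
    ```

    Lexicographic keys work here because length only grows and bottleneck width only shrinks along a path.
    
    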

  2. Research on Optimal Delivery Days and Customer Satisfaction Based on Quality Costs

    Institute of Scientific and Technical Information of China (English)

    薛伟; 周宏彤; 陈亚绒

    2004-01-01

    Customer satisfaction is an important index to evaluate the competitiveness and efficiency of an enterprise. Every enterprise is confronted with the challenge of supplying customers with satisfactory products at the lowest cost and the highest manufacturing speed. Taking the delivery days of a coach company as a design variable, this paper builds an optimization model of customer satisfaction and suggests an effective method, based on analysis and research, to reduce costs and increase customer satisfaction.

  3. Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions

    KAUST Repository

    Odat, Enas M.

    2011-05-01

    The purpose of this dissertation is to present a methodology to model the global sequence alignment problem as a directed acyclic graph, which helps to extract all possible optimal alignments. Moreover, a mechanism to sequentially optimize the sequence alignment problem relative to different cost functions is suggested. Sequence alignment is of central importance in computational biology, where it is used to find evolutionary relationships between biological sequences. Many algorithms have been developed to solve this problem; the most famous are Needleman-Wunsch and Smith-Waterman, which are based on dynamic programming. In dynamic programming, the problem is divided into a set of overlapping subproblems, the solution of each subproblem is found, and these solutions are then combined into a final solution. In this thesis it has been proved that for two sequences of length m and n over a fixed alphabet, the suggested optimization procedure requires O(mn) arithmetic operations per cost function on a single-processor machine. The algorithm has been implemented in the C#.Net programming language, and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, depending also on the cost function used, while the number of executed operations increases polynomially.
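    For reference, the O(mn) dynamic program underlying Needleman-Wunsch global alignment looks roughly like this (the scoring values are the classic textbook choices, not the dissertation's cost functions):

    ```python
    def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-1):
        """Global alignment score via O(mn) dynamic programming."""
        m, n = len(s), len(t)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            dp[i][0] = i * gap          # align s[:i] against nothing
        for j in range(1, n + 1):
            dp[0][j] = j * gap          # align t[:j] against nothing
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = match if s[i - 1] == t[j - 1] else mismatch
                dp[i][j] = max(dp[i - 1][j - 1] + sub,  # match/substitute
                               dp[i - 1][j] + gap,      # gap in t
                               dp[i][j - 1] + gap)      # gap in s
        return dp[m][n]

    print(needleman_wunsch("GATTACA", "GCATGCU"))  # prints 0
    ```

    The DAG view in the thesis treats each dp cell as a node, so every optimal path through the table is an optimal alignment.
    
    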

  4. Cost-optimal power system extension under flow-based market coupling

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon; Jaegemann, Cosima; Lindenberger, Dietmar [Koeln Univ. (Germany). Energiewirtschaftliches Inst.; Brown, Tom; Cherevatskiy, Stanislav; Troester, Eckehard [Energynautics GmbH, Langen (Germany)

    2013-05-15

    Electricity market models, implemented as dynamic programming problems, have been applied widely to identify possible pathways towards a cost-optimal and low carbon electricity system. However, the joint optimization of generation and transmission remains challenging, mainly because different characteristics and rules apply to commercial and physical exchanges of electricity in meshed networks. This paper presents a methodology that allows power generation and transmission infrastructures to be optimized jointly through an iterative approach based on power transfer distribution factors (PTDFs). As PTDFs are linear representations of the physical load flow equations, they can be implemented in a linear programming environment suitable for large scale problems. The algorithm iteratively updates PTDFs when grid infrastructures are modified due to cost-optimal extension and thus yields an optimal solution with a consistent representation of physical load flows. The method is first demonstrated on a simplified three-node model, where it is found to be robust and convergent. It is then applied to the European power system in order to find its cost-optimal development under the prescription of strongly decreasing CO{sub 2} emissions until 2050.
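    The PTDF construction at the heart of the method can be shown on a toy three-node triangle network under the DC load-flow approximation; the susceptance values are arbitrary and the node/line labels are ours, not the paper's:

    ```python
    def ptdf_triangle(b01, b02, b12):
        """PTDFs of a 3-node triangle network (DC approximation). Node 2 is the
        slack; returns {line: [sensitivity to a unit injection at node 0,
        sensitivity to a unit injection at node 1]} with withdrawal at the slack."""
        # Reduced nodal susceptance matrix (slack row/column removed):
        B = [[b01 + b02, -b01],
             [-b01, b01 + b12]]
        det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
        Binv = [[B[1][1] / det, -B[0][1] / det],
                [-B[1][0] / det, B[0][0] / det]]
        lines = {(0, 1): b01, (0, 2): b02, (1, 2): b12}
        ptdf = {}
        for (i, j), b in lines.items():
            sens = []
            for n in (0, 1):  # unit injection at node n
                inj = [1.0 if k == n else 0.0 for k in (0, 1)]
                th = [Binv[0][0] * inj[0] + Binv[0][1] * inj[1],
                      Binv[1][0] * inj[0] + Binv[1][1] * inj[1],
                      0.0]  # slack angle fixed at zero
                sens.append(b * (th[i] - th[j]))  # DC flow on line (i, j)
            ptdf[(i, j)] = sens
        return ptdf

    flows = ptdf_triangle(1.0, 1.0, 1.0)  # equal susceptances: 2/3 direct, 1/3 detour
    ```

    When a line is reinforced, its susceptance changes, so the PTDF matrix must be recomputed, which is exactly the iteration the paper describes.
    
    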

  5. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    Science.gov (United States)

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost against life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly.
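    The trade-off all six algorithms approximate is the Pareto front, and the underlying dominance test is simple to state in code; the (operating cost, impact score) pairs below are invented, not results from the paper:

    ```python
    def pareto_front(points):
        """Non-dominated subset when both objectives are to be minimized."""
        return [p for p in points
                if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

    # Hypothetical (operating cost, LCA impact score) pairs for candidate DWPP settings:
    designs = [(3.2, 40.0), (2.8, 55.0), (3.0, 48.0), (3.5, 42.0), (2.8, 60.0)]
    front = pareto_front(designs)  # (3.5, 42.0) and (2.8, 60.0) are dominated
    ```

    NSGA-II and its competitors differ mainly in how they steer the population toward, and spread it along, this front.
    
    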

  6. The Local Interstellar Medium

    CERN Document Server

    Redfield, S

    2006-01-01

    The Local Interstellar Medium (LISM) is a unique environment that presents an opportunity to study general interstellar phenomena in great detail and in three dimensions. In particular, high resolution optical and ultraviolet spectroscopy have proven to be powerful tools for addressing fundamental questions concerning the physical conditions and three-dimensional (3D) morphology of this local material. After reviewing our current understanding of the structure of gas in the solar neighborhood, I will discuss the influence that the LISM can have on stellar and planetary systems, including LISM dust deposition onto planetary atmospheres and the modulation of galactic cosmic rays through the astrosphere - the balancing interface between the outward pressure of the magnetized stellar wind and the inward pressure of the surrounding interstellar medium. On Earth, galactic cosmic rays may play a role as contributors to ozone layer chemistry, planetary electrical discharge frequency, biological mutation rates, and cl...

  7. Interstellar and circumstellar fullerenes

    CERN Document Server

    Bernard-Salas, J; Jones, A P; Peeters, E; Micelotta, E R; Otsuka, M; Sloan, G C; Kemper, F; Groenewegen, M

    2014-01-01

    Fullerenes are a particularly stable class of carbon molecules in the shape of a hollow sphere or ellipsoid that might be formed in the outflows of carbon stars. Once injected into the interstellar medium (ISM), these stable species survive and are thus likely to be widespread in the Galaxy where they contribute to interstellar extinction, heating processes, and complex chemical reactions. In recent years, the fullerene species C60 (and to a lesser extent C70) have been detected in a wide variety of circumstellar and interstellar environments showing that when conditions are favourable, fullerenes are formed efficiently. Fullerenes are the first and only large aromatics firmly identified in space. The detection of fullerenes is thus crucial to provide clues as to the key chemical pathways leading to the formation of large complex organic molecules in space, and offers a great diagnostic tool to describe the environment in which they reside. Since fullerenes share many physical properties with PAHs, understand...

  8. A multiple ship routing and speed optimization problem under time, cost and environmental objectives

    DEFF Research Database (Denmark)

    Wen, M.; Pacino, Dario; Kontovas, C.A.

    2017-01-01

    The purpose of this paper is to investigate a multiple ship routing and speed optimization problem under time, cost and environmental objectives. A branch and price algorithm as well as a constraint programming model are developed that consider (a) fuel consumption as a function of payload, (b......) fuel price as an explicit input, (c) freight rate as an input, and (d) in-transit cargo inventory costs. The alternative objective functions are minimum total trip duration, minimum total cost and minimum emissions. Computational experience with the algorithm is reported on a variety of scenarios....

  9. Assessment of various failure theories for weight and cost optimized laminated composites using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Goyal, T. [Indian Institute of Technology Kanpur. Dept. of Aerospace Engineering, UP (India); Gupta, R. [Infotech Enterprises Ltd., Hyderabad (India)

    2012-07-01

    In this work, a minimum weight-cost design for laminated composites is presented. A genetic algorithm has been developed for the optimization process. Maximum-Stress, Tsai-Wu and Tsai-Hill failure criteria have been used, along with a buckling analysis parameter, for the margin-of-safety calculations. The design variables include three materials, namely Carbon-Epoxy, Glass-Epoxy and Kevlar-Epoxy; the number of plies; ply orientation angles, varying from -75 deg. to 90 deg. at intervals of 15 deg.; and ply thicknesses, which depend on the material in use. The total cost is the sum of material cost and layup cost, where layup cost is a function of the ply angle. Validation studies for solution convergence and weight-cost inverse proportionality are carried out, and one set of results for shear loading is validated against the literature for a particular case. A Pareto-optimal solution set is demonstrated for biaxial loading conditions and then extended to applied moments. It is found that the global optimum for a given loading condition is a function of the failure criterion: for shear loading, the Maximum-Stress criterion gives the lightest and cheapest optimized laminates, and the Tsai-Wu criterion the heaviest and costliest. Optimized weight results from the three criteria are plotted for a comparative study. This work yields a globally optimized laminated composite, together with a set of locally optimal laminates, for a given set of loading conditions. The algorithm also provides adequate data to support the use of different failure criteria for varying loadings. This work can find use in industry and/or academia considering the increased use of laminated composites in modern wind blades. (Author)

  10. Two-scale cost efficiency optimization of 5G wireless backhaul networks

    OpenAIRE

    Ge, Xiaohu; Tu, Song; Mao, Guoqiang; Lau, Vincent K. N.; Pan, Linghui

    2016-01-01

    To cater for the demands of future fifth generation (5G) ultra-dense small cell networks, the wireless backhaul network is an attractive solution for the urban deployment of 5G wireless networks. Optimization of 5G wireless backhaul networks is a key issue. In this paper we propose a two-scale optimization solution to maximize the cost efficiency of 5G wireless backhaul networks. Specifically, the number and positions of gateways are optimized in the long time scale of 5G wireless backhaul ne...

  12. Optimal control of a large dam, taking into account the water costs [New Edition

    CERN Document Server

    Abramov, Vyacheslav M

    2009-01-01

    This paper studies large dam models where the difference between the lower and upper levels, $L$, is assumed to be large. Passage across the levels leads to damage, and the damage costs of crossing the lower or upper level are proportional to the large parameter $L$. The input stream of water is described by a compound Poisson process, and the water cost depends upon the current level of water in the dam. The aim of the paper is to choose the parameters of the output stream (specifically defined in the paper) that minimize the long-run expenses. The particular problem where the input stream is Poisson and water costs are not taken into account was studied in [Abramov, \emph{J. Appl. Prob.}, 44 (2007), 249-258]. The present paper partially answers the question \textit{How does the structure of water costs affect the optimal solution?} In particular, the case of linear costs is studied.

  13. Cost-optimal levels of minimum energy performance requirements in the Danish Building Regulations

    Energy Technology Data Exchange (ETDEWEB)

    Aggerholm, S.

    2013-09-15

    The purpose of the report is to analyse the cost optimality of the energy requirements in the Danish Building Regulations 2010 (BR10) for new buildings and for existing buildings undergoing major renovation. The energy requirements in the Danish Building Regulations have by tradition always been based on the costs and benefits in the private economic or financial perspective; macroeconomic calculations have in the past only been made in addition. The cost optimum used in this report is thus based on the financial perspective. Due to the high energy taxes in Denmark there is a significant difference between the consumer price and the macroeconomic price of energy. Energy taxes are also paid by commercial consumers when the energy is used for building operation, e.g. heating, lighting, ventilation etc. For the new housing examples, the present minimum energy requirements in BR10 all show negative gaps, with deviations of up to 16 % from the point of cost optimality. With the planned tightening of the requirements for new houses in 2015 and 2020, the energy requirements can be expected to be tighter than the cost-optimal point if the costs of the needed improvements do not decrease correspondingly. For the new office building there is a gap of 31 % to the point of cost optimality relative to the 2010 requirement; relative to the 2015 and 2020 requirements there are negative gaps based on today's prices. If the gaps for all the new buildings are weighted to an average based on the mix of building types and heat supply for new buildings in Denmark, there is an average gap of 3 % for new buildings. The excessive tightness at today's prices is 34 % in relation to the 2015 requirement and 49 % in relation to the 2020 requirement. The component requirements for elements of the building envelope and for installations in existing buildings add up to significant energy efficiency

  14. Optimal replacement time estimation for machines and equipment based on cost function

    Directory of Open Access Journals (Sweden)

    J. Šebo

    2013-01-01

    Full Text Available The article deals with the multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines for which the optimization method is applicable belong to metallurgical and engineering production. Different models of the cost function are considered, with one and with two variables; the parameters of the models were calculated by the least squares method. Model testing shows that all are good enough, so simpler models suffice for estimating the optimal replacement time. In addition to testing the models, we developed a method (tested on a selected simple model) that enables us, in real time and with a limited data set, to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
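    A minimal sketch of the two ingredients mentioned above: a least-squares fit of a one-variable cost-rate model, followed by minimizing the average cost per unit time. The purchase price, cost model and observations are all hypothetical, and the closed form t* = sqrt(2*C0/b) holds only for this simple linear cost-rate model:

    ```python
    def fit_line(ts, ys):
        """Ordinary least squares fit of y ≈ a + b*t."""
        n = len(ts)
        mt, my = sum(ts) / n, sum(ys) / n
        b = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
        return my - b * mt, b

    def optimal_replacement(c0, b):
        """Minimize average cost A(t) = c0/t + a + b*t/2  =>  t* = sqrt(2*c0/b)."""
        return (2.0 * c0 / b) ** 0.5

    # Hypothetical yearly maintenance-cost observations (k€/year) for a machine
    # with purchase price 120 k€:
    years = [1, 2, 3, 4, 5]
    costs = [10.2, 14.1, 17.8, 22.3, 25.9]
    a, b = fit_line(years, costs)
    t_star = optimal_replacement(120.0, b)  # roughly 7.8 years for these numbers
    ```

    Note that only the slope b of the cost growth enters t*; the constant term a shifts the average cost curve but not its minimizer.
    
    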

  15. Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools

    Science.gov (United States)

    Amlie, Thomas T.

    2009-01-01

    A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…

  16. Towards Cost and Comfort Based Hybrid Optimization for Residential Load Scheduling in a Smart Grid

    Directory of Open Access Journals (Sweden)

    Nadeem Javaid

    2017-10-01

    Full Text Available In a smart grid, several optimization techniques have been developed to schedule load in the residential area. Most of these techniques aim at minimizing the energy consumption cost and the discomfort of the electricity consumer. However, maintaining a balance between the two conflicting objectives, energy consumption cost and user comfort, is still a challenging task. Therefore, in this paper, we aim to minimize the electricity cost and user discomfort while taking into account the peak energy consumption. In this regard, we implement and analyse the performance of a traditional dynamic programming (DP) technique and two heuristic optimization techniques, genetic algorithm (GA) and binary particle swarm optimization (BPSO), for residential load management. Based on these techniques, we propose a hybrid scheme named GAPSO for residential load scheduling, so as to optimize the desired objective function. In order to alleviate the complexity of the problem, the multi-dimensional knapsack problem is used to ensure that the load of the electricity consumer does not escalate during peak hours. The proposed model is evaluated based on two pricing schemes, day-ahead and critical peak pricing, for single and multiple days. Furthermore, feasible regions are calculated and analysed to develop a relationship between power consumption, electricity cost and user discomfort. The simulation results are compared with GA, BPSO and DP, and validate that the proposed hybrid scheme yields substantial savings in electricity bills with minimum user discomfort. Moreover, the results also show a phenomenal reduction in peak power consumption.
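    One way the knapsack formulation can act on a single peak hour is sketched below as a plain 0/1 knapsack solved by dynamic programming: choose which appliances may run under a peak-power cap while maximizing a comfort value. The appliance data, 0.1 kW resolution and single dimension are illustrative simplifications of the paper's multi-dimensional setting:

    ```python
    def schedule_peak_hour(appliances, power_cap):
        """0/1 knapsack: pick appliances to run in the peak hour so total power
        stays under the cap and total comfort value is maximized.
        appliances: list of (name, power_kW, comfort_value)."""
        cap = round(power_cap * 10)            # 0.1 kW resolution for the DP table
        best = [0.0] * (cap + 1)
        choice = [[] for _ in range(cap + 1)]
        for name, p, v in appliances:
            w = round(p * 10)
            for c in range(cap, w - 1, -1):    # reverse scan: each item used once
                if best[c - w] + v > best[c]:
                    best[c] = best[c - w] + v
                    choice[c] = choice[c - w] + [name]
        return best[cap], choice[cap]

    # Hypothetical loads: (name, power in kW, comfort value of running now)
    loads = [("AC", 1.5, 9.0), ("washer", 0.8, 4.0),
             ("dishwasher", 0.9, 3.5), ("dryer", 1.2, 5.0)]
    value, run_now = schedule_peak_hour(loads, 2.5)
    ```

    GAPSO explores such on/off decisions over all hours at once instead of solving each hour exactly.
    
    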

  17. Optimization of costs of Port Operations in Nigeria: A Scenario For ...

    African Journals Online (AJOL)

    2013-03-01

    Mar 1, 2013 ... This study attempts to optimize the cost of port operations in Nigeria. The quantitative estimates ... its customers at a lower price and also be able to make profits.

  18. Energy cost based design optimization method for medium temperature CPC collectors

    Science.gov (United States)

    Horta, Pedro; Osório, Tiago; Collares-Pereira, Manuel

    2016-05-01

    CPC collectors, approaching the ideal concentration limits established by non-imaging optics, can be designed with acceptance angles enabling fully stationary designs, useful for applications in the low temperature range, whereas higher concentration factors in turn require seasonal tracking strategies. Considering the CPC design options in terms of effective concentration factor, truncation, concentrator height, mirror perimeter, seasonal tracking, trough spacing, etc., an energy-cost-function-based design optimization method is presented in this article. Accounting for the impact of the design on its optical performance (optical efficiency, incidence angle modifier, diffuse acceptance) and thermal performance (dependent on the concentration factor), the optimization function integrates design (e.g. mirror area, frame length, trough spacing/shading), concept (e.g. rotating/stationary components, materials) and operation (e.g. O&M, tilt shifts and tracking strategy) costs into a collector-specific energy cost function, in €/(kWh.m2). The use of such a function constitutes a location- and operating-temperature-dependent design optimization procedure aiming at the lowest solar energy cost. Illustrating this approach, optimization results are presented for a (tubular) evacuated absorber CPC design operating in Morocco.

  19. Integrated emission management strategy for cost-optimal engine-aftertreatment operation

    NARCIS (Netherlands)

    Cloudt, R.P.M.; Willems, F.P.T.

    2011-01-01

    A new cost-based control strategy is presented that optimizes engine-aftertreatment performance under all operating conditions. This Integrated Emission Management strategy minimizes fuel consumption within the set emission limits by on-line adjustment of air management based on the actual state of

  20. Optimizing for confidence—costs and opportunities at the frontier between abstraction and reality

    NARCIS (Netherlands)

    Poss, R.

    2013-01-01

    Is there a relationship between computing costs and the confidence people place in the behavior of computing systems? What are the tuning knobs one can use to optimize systems for human confidence instead of correctness in purely abstract models? This report explores these questions by reviewing the

  1. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    Science.gov (United States)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost – even lower than assuming that the non-amputee’s ankle torques are cost-free.

  2. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    Science.gov (United States)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

    Optimization of nonlinear water resources management issues which have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, owing to a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" in Bitbucket).
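    The core surrogate idea (spend expensive simulations only where a cheap model predicts improvement) can be sketched in a few lines. This toy version fits a quadratic through the three best points rather than the radial basis functions used by pySOT, handles no integer variables, and the "expensive" function is a stand-in for a groundwater simulation:

    ```python
    def quad_vertex(p3):
        """x-coordinate of the vertex of the parabola through three (x, y) points."""
        (x1, y1), (x2, y2), (x3, y3) = p3
        num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
        den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
        return None if den == 0 else x2 - 0.5 * num / den

    def surrogate_minimize(f, lo, hi, budget=12):
        """Fit a cheap quadratic surrogate to the best points seen so far, evaluate
        the expensive f only at the surrogate's minimizer, and repeat."""
        pts = [(x, f(x)) for x in (lo, (lo + hi) / 2, hi)]  # initial design
        while len(pts) < budget:
            pts.sort(key=lambda t: t[1])       # surrogate uses the 3 best points
            x = quad_vertex(pts[:3])
            if x is None:
                break                          # degenerate surrogate
            x = min(hi, max(lo, x))            # stay inside the box
            if any(abs(x - px) < 1e-9 for px, _ in pts):
                break                          # proposal already sampled: converged
            pts.append((x, f(x)))              # the only expensive call per iteration
        return min(pts, key=lambda t: t[1])

    # Stand-in for an expensive simulation with a minimum at x = 2.7:
    x_best, f_best = surrogate_minimize(lambda x: (x - 2.7) ** 2 + 5.0, 0.0, 10.0)
    ```

    The published algorithm adds the machinery this sketch omits: multimodal landscapes, multiple dimensions, and integer (fixed-cost) decisions.
    
    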

  3. Cost Optimal Design of a Single-Phase Dry Power Transformer

    Directory of Open Access Journals (Sweden)

    Raju Basak

    2015-08-01

    Full Text Available Dry-type transformers are preferred to their oil-immersed counterparts for various reasons, particularly because their operation is free of fire hazard. The application of dry transformers was once limited to small ratings, but they are now used for considerably higher ratings, so their cost-optimal design has gained importance. This paper deals with the design procedure for achieving a cost-optimal design of a dry-type single-phase power transformer of small rating, subject to the usual design constraints on efficiency and voltage regulation. The selling cost of the transformer has been taken as the objective function. Only two key variables have been chosen, the turns per volt and the height-to-width ratio of the window, which affect the cost function most strongly; other variables have been chosen on the basis of designers' experience. Copper has been used as the conductor material and CRGOS as the core material to achieve higher efficiency, lower running cost and a compact design. The electric and magnetic loadings have been kept at their maximum values without violating the design constraints. The optimal solution has been obtained by exhaustive search using nested loops.
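    The nested-loop exhaustive search over the two key variables can be sketched as below. The cost and constraint models are toy stand-ins for the detailed electromagnetic design calculations, and all coefficients, ranges and thresholds are invented for illustration:

    ```python
    def evaluate(tpv, ratio):
        """Toy stand-in for the detailed design calculations (all figures invented)."""
        core = 80.0 / tpv + 2.0 * ratio        # core cost falls as turns/volt rise
        copper = 1.5 * tpv + 6.0 / ratio       # copper cost rises with turns/volt
        eff = 0.97 - 0.012 * abs(tpv - 4.0)    # efficiency penalty away from a sweet spot
        reg = 0.02 + 0.005 * abs(ratio - 3.0)  # voltage regulation model
        return eff, reg, core + copper

    def design_search():
        """Exhaustive nested-loop search over the two key design variables."""
        best = None
        for tpv in [x / 10 for x in range(20, 61)]:        # turns per volt: 2.0 .. 6.0
            for ratio in [x / 10 for x in range(20, 41)]:  # height:width: 2.0 .. 4.0
                eff, reg, cost = evaluate(tpv, ratio)
                if eff < 0.95 or reg > 0.04:               # design constraints
                    continue
                if best is None or cost < best[0]:
                    best = (cost, tpv, ratio)
        return best

    cost, tpv, ratio = design_search()
    ```

    With only two variables the grid stays small (here 41 x 21 candidates), which is why exhaustive search is a reasonable choice in the paper.
    
    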

  4. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    Science.gov (United States)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most existing studies have used structural reliability and Bayesian pre-posterior analysis for optimization. This paper proposes an extension of the previous approaches: a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs that combines elements of structural reliability/risk analysis (SRA) and Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between the life-cycle costs and the risk of structural failure. A numerical illustration with a generic deterioration model and one monitoring exercise in the life cycle of a system is presented. Two case scenarios, namely building an initially expensive but robust structure versus a cheaper but more quickly deteriorating one, and adopting an expensive monitoring system, are presented to aid the decision-making process.

  5. Cost-based optimization of a nuclear reactor core design: a preliminary model

    Energy Technology Data Exchange (ETDEWEB)

    Sacco, Wagner F.; Alves Filho, Hermes [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Modelagem Computacional]. E-mails: wfsacco@iprj.uerj.br; halves@iprj.uerj.br; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil). Div. de Reatores]. E-mail: cmnap@ien.gov.br

    2007-07-01

    A new formulation of a nuclear core design optimization problem is introduced in this article. Originally, the optimization problem consisted of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the radial power peaking factor in a three-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. Here, we address the same problem using the minimization of the fuel and cladding material costs as the objective function, with the radial power peaking factor as an operational constraint. This cost-based optimization problem is attacked by two metaheuristics: the standard genetic algorithm (SGA) and a recently introduced Metropolis-type algorithm called the Particle Collision Algorithm (PCA). The two algorithms are allotted the same computational effort and their results are compared. As the formulation presented here is preliminary, more elaborate models are also discussed. (author)
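
A minimal sketch of the Metropolis-flavoured Particle Collision Algorithm on a toy cost function (the perturbation scheme, scattering rule and all parameters are simplified assumptions, not the authors' exact implementation):

```python
import random

def pca_minimize(cost, lower, upper, iterations=2000, seed=7):
    """Simplified Particle Collision Algorithm: an improving trial move is
    'absorbed' (accepted and exploited); a worsening one may trigger
    'scattering' of the particle to a random point in the search space."""
    rng = random.Random(seed)
    x = rng.uniform(lower, upper)
    fx = cost(x)
    best_x, best_f = x, fx
    for _ in range(iterations):
        # A 'collision' near the current solution.
        trial = min(max(x + rng.uniform(-0.1, 0.1) * (upper - lower), lower), upper)
        ft = cost(trial)
        if ft < fx:                          # absorption: exploit the improvement
            x, fx = trial, ft
            if fx < best_f:
                best_x, best_f = x, fx
        elif rng.random() < 0.1:             # scattering: escape local traps
            x = rng.uniform(lower, upper)
            fx = cost(x)
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Toy stand-in for a core-design cost model (minimum at x = 2.0).
best_x, best_f = pca_minimize(lambda v: (v - 2.0) ** 2 + 1.0, 0.0, 5.0)
```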

  6. Reconstruction of the unknown optimization cost functions from experimental recordings during static multi-finger prehension

    Science.gov (United States)

    Niu, Xun; Terekhov, Alexander V.; Latash, Mark L.; Zatsiorsky, Vladimir M.

    2013-01-01

    The goal of the research is to reconstruct the unknown cost (objective) function(s) presumably used by the neural controller for sharing the total force among individual fingers in multi-finger prehension. The cost function was determined from experimental data by applying the recently developed Analytical Inverse Optimization (ANIO) method (Terekhov et al 2010). The core of the ANIO method is the Theorem of Uniqueness that specifies conditions for unique (with some restrictions) estimation of the objective functions. In the experiment, subjects (n=8) grasped an instrumented handle and maintained it at rest in the air with various external torques, loads, and target grasping forces applied to the object. The experimental data recorded from 80 trials showed a tendency to lie on a 2-dimensional hyperplane in the 4-dimensional finger-force space. Because the constraints in each trial were different, such a propensity is a manifestation of a neural mechanism (not the task mechanics). In agreement with the Lagrange principle for the inverse optimization, the plane of experimental observations was close to the plane resulting from the direct optimization. The latter plane was determined using the ANIO method. The unknown cost function was reconstructed successfully for each performer, as well as for the group data. The cost functions were found to be quadratic with non-zero linear terms. The cost functions obtained with the ANIO method yielded more accurate results than other optimization methods. The ANIO method has an evident potential for addressing the problem of optimization in motor control. PMID:22104742

  7. Optimization of solar cell contacts by system cost-per-watt minimization

    Science.gov (United States)

    Redfield, D.

    1977-01-01

    New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.

  8. Maximum power, ecological function and efficiency of an irreversible Carnot cycle. A cost and effectiveness optimization

    CERN Document Server

    Aragon-Gonzalez, G; Leon-Galicia, A; Morales-Gomez, J R

    2007-01-01

    In this work we include, for the Carnot cycle, irreversibilities of linear finite-rate heat transfer between the heat engine and its reservoirs, heat leak between the reservoirs, and internal dissipation of the working fluid. A first optimization of the power output, the efficiency and the ecological function of an irreversible Carnot cycle is performed with respect to the internal temperature ratio, the time ratio for heat exchange, and the allocation ratio of the heat exchangers. For the second and third optimizations, the optimum values for the time ratio and internal temperature ratio are substituted into the power equation, and the optimizations with respect to the cost and effectiveness ratio of the heat exchangers are then performed. Finally, a criterion of partial optimization for the class of irreversible Carnot engines is presented.
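
For orientation, when the heat leak and internal dissipations vanish, maximizing the power of such an endoreversible cycle with linear finite-rate heat transfer recovers the classical Curzon–Ahlborn efficiency at maximum power (stated here as the well-known limiting case; the paper's irreversible-cycle expressions generalize it):

```latex
\eta_{\mathrm{CA}} = 1 - \sqrt{\frac{T_C}{T_H}}
```

where $T_H$ and $T_C$ are the hot- and cold-reservoir temperatures.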

  9. Cost-Optimal Operation of Energy Storage Units: Benefits of a Problem-Specific Approach

    CERN Document Server

    Siemer, Lars; Kleinhans, David

    2015-01-01

    The integration of large shares of electricity produced by non-dispatchable Renewable Energy Sources (RES) leads to an increasingly volatile energy generation side, with temporary local overproduction. The application of energy storage units has the potential to use this excess electricity from RES efficiently and to prevent curtailment. The objective of this work is to calculate cost-optimal charging strategies for energy storage units used as buffers. For this purpose, a new mathematical optimization method is presented that is applicable to general storage-related problems. Due to a tremendous gain in efficiency of this method compared with standard solvers and proven optimality, calculations of complex problems as well as a high-resolution sensitivity analysis of multiple system combinations are feasible within a very short time. As an example technology, Power-to-Heat converters used in combination with thermal storage units are investigated in detail and optimal system configurations, including storage ...

  10. Vaccination and treatment as control interventions in an infectious disease model with their cost optimization

    Science.gov (United States)

    Kumar, Anuj; Srivastava, Prashant K.

    2017-03-01

    In this work, an optimal control problem with vaccination and treatment as control policies is proposed and analysed for an SVIR model. We choose vaccination and treatment as control policies because both interventions have practical advantages, are easy to implement, and are widely applied to control or curtail a disease. The corresponding total cost incurred is considered as a weighted combination of the costs due to the opportunity loss caused by infected individuals and the costs incurred in providing vaccination and treatment. The existence of optimal control paths for the problem is established. Further, these optimal paths are obtained analytically using Pontryagin's Maximum Principle. We analyse our results numerically to compare three important strategies of the proposed controls: vaccination only; both treatment and vaccination; and treatment only. We note that the first strategy (vaccination only) is less effective as well as expensive, although for a highly effective vaccine, vaccination alone may also work well in comparison with the treatment-only strategy. Among all the strategies, we observe that implementing both treatment and vaccination is the most effective and least expensive. Moreover, in this case the infective population is found to be relatively very low. Thus, we conclude that the combined effect of vaccination and treatment not only minimizes the cost burden due to opportunity loss and the applied control policies but also keeps the infective population in check.
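
A weighted cost functional of the kind described is commonly written as follows (the quadratic control terms and the weights are the standard textbook form, assumed here for illustration rather than taken from the paper):

```latex
J(u_1, u_2) = \int_0^{T} \left( A\, I(t) + c_1\, u_1^2(t) + c_2\, u_2^2(t) \right) \mathrm{d}t
```

where $I$ is the infective population, $u_1$ and $u_2$ are the vaccination and treatment controls, $A$ weights the opportunity loss due to infection, and $c_1$, $c_2$ weight the intervention costs; Pontryagin's Maximum Principle then yields the optimal paths $u_1^*$, $u_2^*$.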

  11. A Cost Optimized Fully Sustainable Power System for Southeast Asia and the Pacific Rim

    Directory of Open Access Journals (Sweden)

    Ashish Gulagi

    2017-04-01

    Full Text Available In this paper, a cost-optimal 100% renewable energy based system is obtained for Southeast Asia and the Pacific Rim region for the year 2030 on an hourly resolution for the whole year. For the optimization, the region was divided into 15 sub-regions and three different scenarios were set up based on the level of high-voltage direct current grid connections. The results obtained for the total system levelized cost of electricity showed a decrease from 66.7 €/MWh in a decentralized scenario to 63.5 €/MWh for a centralized grid-connected scenario. An integrated scenario was simulated to show the benefit of integrating the additional demand of industrial gas and desalinated water, which provided the system with the required flexibility and increased the efficiency of storage usage. This was reflected in a decrease of the system cost by 9.5% and of the total electricity generation by 5.1%. According to the results, grid integration on a larger scale decreases the total system cost and levelized cost of electricity by reducing the need for storage technologies, due to seasonal variations in weather and demand profiles. The intermittency of renewable technologies can be effectively stabilized to satisfy hourly demand at a low cost level. A 100% renewable energy based system could be a reality, economically and technically, in Southeast Asia and the Pacific Rim with the cost assumptions used in this research, and it may be more cost-competitive than the nuclear and fossil carbon capture and storage (CCS) alternatives.
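
As a hedged sketch of the levelized-cost bookkeeping behind such system results (all numeric inputs below are hypothetical placeholders, not the study's data), the LCOE combines annualized capital and operating cost over delivered energy:

```python
def capital_recovery_factor(rate, years):
    """Annuity factor converting an up-front investment into equal
    annual payments at the given discount rate."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, rate, years, opex_per_year, energy_per_year_mwh):
    """Levelized cost of electricity, currency per MWh:
    (annualized capex + annual O&M) / annual delivered energy."""
    annualized = capex * capital_recovery_factor(rate, years)
    return (annualized + opex_per_year) / energy_per_year_mwh

# Hypothetical system: 1 bn EUR capex, 7% WACC, 25-year lifetime,
# 20 M EUR/year O&M, 15 TWh/year (15,000,000 MWh) delivered.
value = lcoe(1_000_000_000, 0.07, 25, 20_000_000, 15_000_000)
```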

  12. Integrating sequencing technologies in personal genomics: optimal low cost reconstruction of structural variants.

    Directory of Open Access Journals (Sweden)

    Jiang Du

    2009-07-01

    Full Text Available The goal of human genome re-sequencing is obtaining an accurate assembly of an individual's genome. Recently, there has been great excitement in the development of many technologies for this (e.g. medium- and short-read sequencing from companies such as 454 and SOLiD, and high-density oligo-arrays from Affymetrix and NimbleGen), with even more expected to appear. The costs and sensitivities of these technologies differ considerably from each other. As an important goal of personal genomics is to reduce the cost of re-sequencing to an affordable point, it is worthwhile to consider optimally integrating technologies. Here, we build a simulation toolbox that will help us optimally combine different technologies for genome re-sequencing, especially in reconstructing large structural variants (SVs). SV reconstruction is considered the most challenging step in human genome re-sequencing. (It is sometimes even harder than de novo assembly of small genomes because of the duplications and repetitive sequences in the human genome.) To this end, we formulate canonical problems that are representative of issues in reconstruction and are of small enough scale to be computationally tractable and simulatable. Using semi-realistic simulations, we show how we can combine different technologies to optimally solve the assembly at low cost. With mappability maps, our simulations efficiently handle the inhomogeneous repeat-containing structure of the human genome and the computational complexity of practical assembly algorithms. They quantitatively show how combining different read lengths is more cost-effective than using one length, how an optimal mixed sequencing strategy for reconstructing large novel SVs usually also gives accurate detection of SNPs/indels, how paired-end reads can improve reconstruction efficiency, and how adding in arrays is more efficient than just sequencing for disentangling some complex SVs. Our strategy should facilitate the sequencing of

  13. Development of GIS-Based Decision Support System for Optimizing Transportation Cost in Underground Limestone Mining

    Science.gov (United States)

    Oh, Sungchan; Park, Jihwan; Suh, Jangwon; Lee, Sangho; Choi, Youngmin

    2014-05-01

    In the mining industry, large costs are incurred in the early stages of mine development, such as prospecting, exploration, and discovery. Recent changes in mining, however, have also raised the costs of operation, production, and environmental protection, because ore depletion at shallow depth has led to large-scale, deep mining. Therefore, many mining facilities are installed or relocated underground to reduce transportation cost as well as environmental pollution. This study presents a GIS-based decision support system that optimizes the transportation cost from various mining faces to the mine facility in underground mines. The development of this system consists of five steps. As a first step, mining maps containing underground geo-spatial information were collected. In the second step, the mine network and contour data in these maps were converted to GIS format for 3D visualization and spatial analysis: the original tunnel outline data were digitized with ground level and converted to a simplified network format, and the surface morphology contours were converted to a digital elevation model (DEM). The next step was to define the calculation algorithm for the transportation cost. Among the many components of transportation cost, this study focused on the fuel cost, because it can be estimated directly from the mining maps. The cost was calculated as the number of blastings multiplied by the haulage per blasting, the distance between the mining faces and the facility, the fuel cost per liter, and two (for downhill and uphill), divided by the fuel efficiency of the mining trucks. Finally, the decision support system, SNUTunnel, was implemented. For the application of SNUTunnel to an actual underground mine, Nammyeong Development Corporation, Korea, was selected as the study site. This mine produces limestone with a high content of calcite for paper and steel manufacture and for desulfurization, and its development is ongoing to reach a deeper calcite ore body, so the mine network is expanding
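
The fuel-cost formula described in the abstract can be written down directly (the input values in the example are hypothetical, not the study site's data):

```python
def transport_fuel_cost(n_blasting, haulage_per_blasting, distance_km,
                        fuel_price_per_liter, fuel_efficiency_km_per_liter):
    """Haulage fuel cost as described in the abstract: number of blasts
    x truckloads per blast x one-way distance x 2 (downhill and uphill)
    x fuel price per liter, divided by truck fuel efficiency (km/L)."""
    return (n_blasting * haulage_per_blasting * distance_km * 2
            * fuel_price_per_liter) / fuel_efficiency_km_per_liter

# Hypothetical inputs: 3 blasts, 10 truckloads each, 1.2 km one way,
# 1.5 currency units per liter, trucks doing 2.0 km per liter.
cost = transport_fuel_cost(3, 10, 1.2, 1.5, 2.0)
```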

  14. Optimizing Cost of Continuous Overlapping Queries over Data Streams by Filter Adaption

    KAUST Repository

    Xie, Qing

    2016-01-12

    The problem we aim to address is the optimization of cost management for executing multiple continuous queries on data streams, where each query is defined by several filters, each of which monitors a certain status of the data stream. In particular, a filter can be shared by different queries and can be expensive to evaluate. The conventional objective for such a problem is to minimize the overall execution cost of solving all queries by planning the order of filter evaluation in a shared strategy. However, in the streaming scenario, the characteristics of data items may change over time, which brings uncertainty to the outcome of individual filter evaluations and affects the query execution plan as well as the overall execution cost. In our work, considering the influence of such uncertain variation of data characteristics, we propose a framework for dynamic adjustment of the filter ordering for query execution on data streams, and focus on the issues of cost management. By incrementally monitoring and analyzing the results of filter evaluation, our proposed approach adapts effectively to varying stream behavior and adjusts the ordering of filter evaluation so as to optimize the execution cost. In order to achieve satisfactory performance and efficiency, we also discuss the trade-off between the adaptivity of our framework and the overhead incurred by filter adaption. The experimental results on synthetic and two real data sets (traffic and multimedia) show that our framework can effectively reduce and balance the overall query execution cost and maintain high adaptivity in the streaming scenario.
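
One classical baseline that an adaptive framework can re-run whenever monitored filter statistics drift is the textbook rank-ordering rule for independent filters in a conjunctive pipeline (this is a standard heuristic stated for illustration, not the paper's exact algorithm):

```python
def order_filters(filters):
    """Rank independent filters for pipelined evaluation: cheap, highly
    selective filters go first. 'sel' is the probability an item passes a
    filter; rank = cost / (1 - sel) is the classical ordering criterion."""
    return sorted(filters, key=lambda f: f["cost"] / (1.0 - f["sel"]))

def expected_cost(ordered):
    """Expected per-item cost: each filter runs only on items that
    passed all earlier filters."""
    total, pass_prob = 0.0, 1.0
    for f in ordered:
        total += pass_prob * f["cost"]
        pass_prob *= f["sel"]
    return total

# Hypothetical filters with per-item cost and pass probability.
filters = [
    {"name": "A", "cost": 4.0, "sel": 0.9},
    {"name": "B", "cost": 1.0, "sel": 0.5},
    {"name": "C", "cost": 2.0, "sel": 0.2},
]
plan = order_filters(filters)
```

Re-estimating each filter's `cost` and `sel` online and re-sorting is one cheap way to keep the plan aligned with changing stream characteristics.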

  15. Optimal scenario balance of reduction in costs and greenhouse gas emissions for municipal solid waste management

    Institute of Scientific and Technical Information of China (English)

    邓娜; 张强; 陈广武; 齐长青; 崔文谦; 张于峰; 马洪亭

    2015-01-01

    To reduce carbon intensity, an improved management method balancing the reduction in costs and greenhouse gas (GHG) emissions is required for Tianjin’s waste management system. Firstly, six objective functions, namely cost minimization, GHG minimization, eco-efficiency minimization, cost maximization, GHG maximization and eco-efficiency maximization, are built and subjected to the same constraints, with each objective function corresponding to one scenario. Secondly, GHG emissions and costs are derived from the waste flow of each scenario. Thirdly, the ranges of GHG emissions and costs of other potential scenarios are obtained and plotted by adjusting the waste flow with arbitrarily fine step sizes according to the correlation among the above six scenarios, and the optimal scenario is determined based on this range. The results suggest the following conclusions: 1) the scenarios located on the border between the cost-minimization and GHG-minimization scenarios form an optimum curve, and the GHG-minimization scenario has the smallest eco-efficiency on the curve; 2) simple pursuit of eco-efficiency minimization using fractional programming may be unreasonable; 3) balancing GHG emissions from incineration and landfills benefits Tianjin’s waste management system, as it reduces both GHG emissions and costs.

  16. OPTIMIZATION OF TIMES AND COSTS OF THE HORIZONTAL LAMINATOR PRODUCTION PROJECT USING THE PERT/CPM TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Fernando Henrique Lermen

    2016-09-01

    Full Text Available PERT/CPM is a technique widely used both in scheduling and in assessing project feasibility in terms of cost and time control. In order to optimize the time and costs involved in production, the work presented here applies the PERT/CPM technique to the production project of the Horizontal Laminator, a machine used to cut polyurethane foam blocks in the mattress industry. For the application of the PERT/CPM technique, the activities that compose the project were identified, along with the dependences between them, the normal and accelerated durations, and the normal and accelerated costs. In this study, deterministic estimates for the durations of the activities were considered. The results show that the project can be completed in 520 hours at a total cost of R$7,042.50 when all activities are performed in their normal durations. When all the activities that compose the critical path are accelerated, the project can be completed in 333.3 hours at a total cost of R$9,263.01. If the activity slacks are exploited, a final total cost of R$6,157.80 can be obtained without changing the new duration of the project. It is noteworthy that the final total cost of the project, if the slacks are used, is lower than the initial cost: after the application of the PERT/CPM technique, the total project cost decreases by 12.56%.
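
The critical-path computation at the heart of CPM can be sketched as follows (the activity network and durations below are illustrative stand-ins, not the Horizontal Laminator data):

```python
def cpm(activities):
    """CPM forward pass. 'activities' maps a name to (duration,
    [predecessors]). Returns the project duration and one critical path."""
    earliest = {}

    def finish(name):
        # Earliest finish = own duration + latest earliest-finish of predecessors.
        if name not in earliest:
            dur, preds = activities[name]
            earliest[name] = dur + max((finish(p) for p in preds), default=0.0)
        return earliest[name]

    for name in activities:
        finish(name)
    end = max(earliest, key=earliest.get)
    # Walk back along the predecessors that determine each earliest finish.
    path = [end]
    while activities[path[-1]][1]:
        _, preds = activities[path[-1]]
        path.append(max(preds, key=lambda p: earliest[p]))
    return earliest[end], list(reversed(path))

# Hypothetical activity network: (duration in hours, predecessors).
acts = {
    "design":   (10.0, []),
    "frame":    (20.0, ["design"]),
    "blade":    (15.0, ["design"]),
    "assembly": (12.0, ["frame", "blade"]),
    "test":     (5.0,  ["assembly"]),
}
duration, critical = cpm(acts)
```

Activities off the critical path (here "blade") carry slack; extending them toward cheaper normal durations is what drives the cost reduction described in the abstract.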

  17. Evaluation of Externality Costs in Life-Cycle Optimization of Municipal Solid Waste Management Systems

    DEFF Research Database (Denmark)

    Martinez Sanchez, Veronica; Levis, James W.; Damgaard, Anders

    2017-01-01

    " and "externality costs". Budget costs include market goods and services (economic impact), whereas externality costs include effects outside the economic system (e.g., environmental impact). This study demonstrates the applicability of S-LCC to SWM life-cycle optimization through a case study based on an average...... suburban U.S. county of 500 000 people generating 320 000 Mg of waste annually. Estimated externality costs are based on emissions of CO2, CH4, N2O, PM2.5, PM10, NOx, SO2, VOC, CO, NH3, Hg, Pb, Cd, Cr (VI), Ni, As, and dioxins. The results indicate that incorporating S-LCC into optimized SWM strategy...... development encourages the use of a mixed waste material recovery facility with residues going to incineration, and separated organics to anaerobic digestion. Results are sensitive to waste composition, energy mix and recycling rates. Most of the externality costs stem from SO2, NOx, PM2.5, CH4, fossil CO2...

  18. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    Science.gov (United States)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  19. The Optimal Control for the Output Feedback Stochastic System at the Risk-Sensitive Cost

    Institute of Scientific and Technical Information of China (English)

    戴立言; 潘子刚; 施颂椒

    2003-01-01

    The optimal control of the partially observable stochastic system at the risk-sensitive cost is considered in this paper. The system dynamics has a general correlation between system and measurement noise. And the risk-sensitive cost contains a general quadratic term (with cross terms and extra linear terms). The explicit solution of such a problem is presented here using the output feedback control method. This clean and direct derivation enables one to convert such partial observable problems into the equivalent complete observable control problems and use the routine ways to solve them.

  20. A cost optimization model for 100% renewable residential energy supply systems

    DEFF Research Database (Denmark)

    Milan, Christian; Bojesen, Carsten; Nielsen, Mads Pagh

    2012-01-01

    for the interdependencies between the different supply technologies as well as the construction energy of the installations, consumption profiles and on-site energy resource availability. This paper aims at developing such a model for the optimal sizing of renewable energy supply systems (RES) for residential Net ZEB......'s involving on-site production of heat and electricity in combination with electricity exchanged with the public grid. The model is based on linear programming and determines the optimal capacities for each relevant supply technology in terms of the overall system costs. It has been successfully applied...

  1. Convex Optimization for the Energy Management of Hybrid Electric Vehicles Considering Engine Start and Gearshift Costs

    Directory of Open Access Journals (Sweden)

    Tobias Nüesch

    2014-02-01

    Full Text Available This paper presents a novel method to solve the energy management problem for hybrid electric vehicles (HEVs with engine start and gearshift costs. The method is based on a combination of deterministic dynamic programming (DP and convex optimization. As demonstrated in a case study, the method yields globally optimal results while returning the solution in much less time than the conventional DP method. In addition, the proposed method handles state constraints, which allows for the application to scenarios where the battery state of charge (SOC reaches its boundaries.
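
A minimal sketch of the deterministic-DP layer with an engine-start penalty (a toy abstraction with hypothetical costs; the convex subproblem, gearshift costs and SOC constraints of the paper are omitted):

```python
def hev_dp(demand, ev_limit=1.0, ev_rate=0.5, fuel=2.0, start=3.0):
    """Toy DP over engine on/off decisions with an engine-start cost.
    All-electric driving is assumed feasible only when demand <= ev_limit;
    all parameters are hypothetical, not the paper's HEV model."""
    INF = float("inf")
    off, on = 0.0, INF  # minimal cost so far, ending in each engine state
    for d in demand:
        step_off = ev_rate * d if d <= ev_limit else INF
        new_off = min(off, on) + step_off if step_off < INF else INF
        # Turning the engine on from 'off' incurs the start penalty.
        new_on = min(on, off + start) + fuel
        off, on = new_off, new_on
    return min(off, on)

# Demand exceeds the electric limit in the middle two steps, forcing
# exactly one engine start if the DP schedules it optimally.
cost = hev_dp([0.5, 1.5, 1.5, 0.5])
```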

  2. Status of Solar Sail Propulsion: Moving Toward an Interstellar Probe

    Science.gov (United States)

    Johnson, Les; Young, Roy M.; Montgomery, Edward E., IV

    2006-01-01

    NASA's In-Space Propulsion Technology Program has developed the first-generation of solar sail propulsion systems sufficient to accomplish inner solar system science and exploration missions. These first-generation solar sails, when operational, will range in size from 40 meters to well over 100 meters in diameter and have an areal density of less than 13 grams-per-square meter. A rigorous, multiyear technology development effort culminated last year in the testing of two different 20-meter solar sail systems under thermal vacuum conditions. This effort provided a number of significant insights into the optimal design and expected performance of solar sails as well as an understanding of the methods and costs of building and using them. In a separate effort, solar sail orbital analysis tools for mission design were developed and tested. Laboratory simulations of the effects of long-term space radiation exposure were also conducted on two candidate solar sail materials. Detailed radiation and charging environments were defined for mission trajectories outside the protection of the earth's magnetosphere, in the solar wind environment. These were used in other analytical tools to prove the adequacy of sail design features for accommodating the harsh space environment. Preceding, and in conjunction with these technology efforts, NASA sponsored several mission application studies for solar sails, including one that would use an evolved sail capability to support humanity's first mission into nearby interstellar space. The proposed mission is called the Interstellar Probe. The Interstellar Probe might be accomplished in several ways. A 200-meter sail, with an areal density approaching 1 gram-per-square meter, could accelerate a robotic probe to the very edge of the solar system in just under 20 years from launch. A sail using the technology just demonstrated could make the same mission, but take significantly longer. Conventional chemical propulsion systems would require

  3. Optimizing staffing, quality, and cost in home healthcare nursing: theory synthesis.

    Science.gov (United States)

    Park, Claire Su-Yeon

    2017-08-01

    To propose a new theory pinpointing the optimal nurse staffing threshold delivering the maximum quality of care relative to attendant costs in home health care. Little knowledge exists on the theoretical foundation addressing the inter-relationship among quality of care, nurse staffing, and cost. Theory synthesis. Cochrane Library, PubMed, CINAHL, EBSCOhost Web and Web of Science (25 February - 26 April 2013; 20 January - 22 March 2015). Most of the existing theories/models lacked the detail necessary to explain the relationship among quality of care, nurse staffing and cost. Two notable exceptions are: 'Production Function for Staffing and Quality in Nursing Homes,' which describes an S-shaped trajectory between quality of care and nurse staffing and 'Thirty-day Survival Isoquant and Estimated Costs According to the Nurse Staff Mix,' which depicts a positive quadric relationship between nurse staffing and cost according to quality of care. A synthesis of these theories led to an innovative multi-dimensional econometric theory helping to determine the maximum quality of care for patients while simultaneously delivering nurse staffing in the most cost-effective way. The theory-driven threshold, navigated by Mathematical Programming based on the Duality Theorem in Mathematical Economics, will help nurse executives defend sufficient nurse staffing with scientific justification to ensure optimal patient care; help stakeholders set an evidence-based reasonable economical goal; and facilitate patient-centred decision-making in choosing the institution which delivers the best quality of care. A new theory to determine the optimum nurse staffing maximizing quality of care relative to cost was proposed. © 2017 The Author. Journal of Advanced Nursing © John Wiley & Sons Ltd.
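
The synthesized threshold idea can be illustrated numerically (the S-shaped quality curve, the quadratic cost curve, and all parameters below are hypothetical stand-ins, not the published production functions):

```python
import math

def quality(s):
    """S-shaped quality-of-care curve in staffing level s, echoing the
    'Production Function for Staffing and Quality' shape (illustrative)."""
    return 1.0 / (1.0 + math.exp(-2.0 * (s - 3.0)))

def cost(s):
    """Positive quadratic staffing-cost curve (illustrative parameters)."""
    return 1.0 + 0.5 * s + 0.1 * s * s

def optimal_staffing(grid_max=8.0, step=0.01):
    """Grid search for the staffing level maximizing quality per unit cost,
    a simple proxy for the theory-driven threshold."""
    best_s, best_ratio = None, -1.0
    s = step
    while s <= grid_max:
        r = quality(s) / cost(s)
        if r > best_ratio:
            best_s, best_ratio = s, r
        s += step
    return best_s, best_ratio

s_star, ratio = optimal_staffing()
```

Past the threshold, additional staffing buys little extra quality while cost keeps growing quadratically, which is why the ratio peaks at an interior point.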

  4. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    Science.gov (United States)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly costs into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.

  5. Optimal stopping of expected profit and cost yields in an investment under uncertainty

    CERN Document Server

    Djehiche, Boualem; Morlais, Marie Amélie

    2010-01-01

    We consider a finite-horizon optimal stopping problem related to trade-off strategies between the expected profit and cost cash-flows of an investment under uncertainty. The problem is first formulated in terms of a system of Snell envelopes for the profit and cost yields, which act as obstacles to each other. We then construct both a minimal and a maximal solution using an approximation scheme for the associated system of reflected backward SDEs. When the dependence of the cash-flows on the sources of uncertainty, such as fluctuating market prices, assumed to evolve according to a diffusion process, is made explicit, we also obtain a connection between these solutions and viscosity solutions of a system of variational inequalities (VI) with interconnected obstacles. We also provide two counter-examples showing that uniqueness of solutions of (VI) does not hold in general.

  6. Evaluation and Optimization of an Innovative Low-Cost Photovoltaic Solar Concentrator

    Directory of Open Access Journals (Sweden)

    Franco Cotana

    2011-01-01

    Full Text Available Many studies have shown that the cost of the energy produced by photovoltaic (PV) concentrators is strongly reduced with respect to flat panels, especially in those countries that have high solar irradiation. The cost drop comes from the reduction of the expensive high-efficiency photovoltaic surface through the use of optical concentrators of the solar radiation. In this paper, an experimental innovative PV low-concentration system is analysed. Numerical simulations were performed to determine the possible causes of energy losses in the prototype, primarily due to geometrical factors. In particular, the effect of the shadows produced by the mirrors on the prototype's performance was analysed: shadows are often neglected in the design phase of such systems. The study demonstrates that shadows may affect the performance of a hypothetical optimized PV low-concentration system by up to 15%. Finally, an economic evaluation was carried out comparing the proposed optimized system to a traditional flat PV panel.

  7. Cost optimization for series-parallel execution of a collection of intersecting operation sets

    Science.gov (United States)

    Dolgui, Alexandre; Levin, Genrikh; Rozin, Boris; Kasabutski, Igor

    2016-05-01

    A collection of intersecting sets of operations is considered. These sets of operations are performed successively; the operations of each set are activated simultaneously. Operation durations can be modified. The cost of each operation decreases with increasing operation duration, whereas the additional expenses for each set of operations are proportional to its time. The problem of selecting the durations of all operations that minimize the total cost, under a constraint on the completion time for the whole collection of operation sets, is studied. The mathematical model and a method to solve this problem are presented. The proposed method is based on a combination of Lagrangian relaxation and dynamic programming. The results of numerical experiments that illustrate the performance of the proposed method are presented. This approach was used for the optimization of multi-spindle machines and machining lines, but the problem is common in engineering optimization, and thus the techniques developed could be useful for other applications.
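The duration/cost trade-off described above can be illustrated with a toy dynamic program. The paper combines Lagrangian relaxation with dynamic programming; this sketch instead uses direct enumeration over discretized durations, and all instance data are invented for illustration.

```python
import itertools

# Hypothetical instance: two sets of operations; each operation has candidate
# (duration, cost) options, with cost decreasing as duration grows.  Operations
# within a set run simultaneously, so a set's elapsed time is its longest
# chosen duration; the sets run successively under a total-time budget.
SETS = [
    [[(1, 9.0), (2, 6.0), (3, 4.5)],      # set 0, operation 0
     [(1, 7.0), (2, 5.0), (3, 4.0)]],     # set 0, operation 1
    [[(2, 8.0), (3, 5.5), (4, 4.0)],      # set 1, operation 0
     [(1, 6.0), (2, 4.5), (4, 3.0)]],     # set 1, operation 1
]
OVERHEAD = 1.0   # per-time-unit expense while a set is active
T_MAX = 6

def set_options(ops):
    """Enumerate non-dominated (set_time, total_operation_cost) pairs for one set."""
    out = {}
    for combo in itertools.product(*ops):
        t = max(d for d, _ in combo)          # operations run simultaneously
        c = sum(c for _, c in combo)
        if t not in out or c < out[t]:
            out[t] = c
    return sorted(out.items())

def solve(sets, t_max):
    """DP over sets: best[t] = minimum cost when total elapsed time is t."""
    best = {0: 0.0}
    for ops in sets:
        nxt = {}
        for t_used, cost in best.items():
            for t_set, c_set in set_options(ops):
                t_new = t_used + t_set
                if t_new > t_max:
                    continue
                c_new = cost + c_set + OVERHEAD * t_set
                if t_new not in nxt or c_new < nxt[t_new]:
                    nxt[t_new] = c_new
        best = nxt
    return min(best.values())

print(round(solve(SETS, T_MAX), 2))  # → 24.0
```

Relaxing T_MAX lets every operation take its cheapest (longest) duration, while tightening it forces expensive short durations — the trade-off the abstract describes.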

  8. Interstellar hydrogen sulfide.

    Science.gov (United States)

    Thaddeus, P.; Kutner, M. L.; Penzias, A. A.; Wilson, R. W.; Jefferts, K. B.

    1972-01-01

    Hydrogen sulfide has been detected in seven Galactic sources by observation of a single line corresponding to the rotational transition from the 1₁₀ to the 1₀₁ levels at 168.7 GHz. The observations show that hydrogen sulfide is only a moderately common interstellar molecule, comparable in abundance to H2CO and CS, but somewhat less abundant than HCN and much less abundant than CO.

  9. Matrix decomposition and Lagrangian dual method for discrete portfolio optimization under concave transaction costs

    Institute of Scientific and Technical Information of China (English)

    GAO Zhen-xing; ZHANG Shi-tao; SUN Xiao-ling

    2009-01-01

    In this paper, the discrete mean-variance model is considered for portfolio selection under concave transaction costs. Using the Cholesky decomposition technique, the covariance matrix is decomposed to obtain a separable mixed integer nonlinear optimization problem. A branch-and-bound algorithm based on Lagrangian relaxation is then proposed. Computational results are reported for test problems with randomly generated data and data from the US stock market.
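The role of the Cholesky decomposition here is to turn the coupled quadratic risk term xᵀΣx into a separable sum of squares. A minimal numerical check, with an illustrative covariance matrix (not data from the paper):

```python
import numpy as np

# Illustrative covariance matrix (symmetric positive definite) and portfolio.
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
x = np.array([0.5, 0.3, 0.2])

L = np.linalg.cholesky(sigma)      # sigma = L @ L.T (L lower triangular)
y = L.T @ x                        # linear change of variables

# The coupled quadratic risk term becomes a separable sum of squares,
# which is what makes the branch-and-bound subproblems tractable.
risk_direct = x @ sigma @ x
risk_separable = np.sum(y ** 2)

print(round(risk_direct, 6))  # → 0.0299
```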

  10. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    Energy Technology Data Exchange (ETDEWEB)

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered a supplement to the Title I Design Report dated March 1986, wherein energy-related issues pertaining to building envelope and orientation, as well as electrical systems design, are discussed.

  11. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    Directory of Open Access Journals (Sweden)

    Álvaro Gutiérrez

    2011-11-01

    Full Text Available Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.

  12. Distributed bees algorithm parameters optimization for a cost efficient target allocation in swarms of robots.

    Science.gov (United States)

    Jevtić, Aleksandar; Gutiérrez, Alvaro

    2011-01-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the distributed bees algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA's control parameters by means of a genetic algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots' distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.
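A genetic algorithm tuning two control parameters against a deployment-cost function can be sketched as follows. The quadratic cost surface here stands in for an actual swarm simulation and is purely an assumption for illustration:

```python
import random

random.seed(1)

# Surrogate deployment-cost landscape standing in for a swarm simulation:
# cost is minimized at alpha=2.0, beta=0.5 (purely illustrative).
def deployment_cost(alpha, beta):
    return (alpha - 2.0) ** 2 + 4 * (beta - 0.5) ** 2

def evolve(pop_size=30, generations=40, mut=0.1):
    pop = [(random.uniform(0, 5), random.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: deployment_cost(*p))
        elite = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            (a1, b1), (a2, b2) = random.sample(elite, 2)
            a = (a1 + a2) / 2 + random.gauss(0, mut)   # crossover + mutation
            b = (b1 + b2) / 2 + random.gauss(0, mut)
            children.append((a, b))
        pop = elite + children
    return min(pop, key=lambda p: deployment_cost(*p))

alpha, beta = evolve()
print(round(alpha, 1), round(beta, 1))
```

In the paper, each fitness evaluation would be a full deployment run scoring the average distance traveled; the GA loop itself is unchanged.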

  13. Multi-period Optimal Portfolio Decision with Transaction Costs and HARA Utility Function

    Directory of Open Access Journals (Sweden)

    Zhen Wang

    2013-01-01

    Full Text Available The portfolio selection problem is one of the core research fields in modern financial management. Considering transaction costs in long-term investment makes portfolio selection problems more complex than when there are no transaction costs. In this paper, the general multi-period investment problem with a HARA utility function and proportional transaction costs is investigated. Using the dynamic programming method, an indirect utility function is defined for solving the portfolio selection problem. The optimal strategies and the boundary of the no-transaction region are obtained in explicit form, and the procedure for solving the original portfolio selection problem is given. A numerical example shows the feasibility and effectiveness of the method provided in this paper.
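The no-transaction region mentioned above has a simple intuition: inside a band around the target allocation, the expected gain from trading does not cover the proportional costs, so the investor holds. A minimal sketch with an assumed target and band width (the paper derives the boundary explicitly; these numbers are illustrative):

```python
# Simplest form of a no-transaction region: rebalance toward a target
# risky-asset weight only when the current weight drifts outside a band.
# Target and band width are illustrative assumptions, not the derived boundary.

TARGET, BAND = 0.5, 0.125

def rebalance(weight):
    lo, hi = TARGET - BAND, TARGET + BAND
    if weight < lo:
        return lo          # buy just enough to re-enter the region
    if weight > hi:
        return hi          # sell just enough to re-enter the region
    return weight          # inside the region: trading costs outweigh gains

print(rebalance(0.30))  # below the band -> trade up to 0.375
print(rebalance(0.55))  # inside the band -> hold
print(rebalance(0.80))  # above the band -> trade down to 0.625
```

Note that the optimal policy trades only to the nearest boundary, never all the way to the target — trading further would incur costs with no offsetting benefit.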

  14. Integration of water footprint accounting and costs for optimal pulp supply mix in paper industry

    DEFF Research Database (Denmark)

    Manzardo, Alessandro; Ren, Jingzheng; Piantella, Antonio

    2014-01-01

    Chemical pulp is one of the most important raw materials used in the paper industry. This material is known to make a significant contribution to the water footprint and cost of final paper products; therefore, chemical pulp is crucial in determining the competitiveness of final products. Several studies have focused on these aspects, but there have been no previous reports on the integrated application of raw material water footprint accounting and costs in the definition of the optimal supply mix of chemical pulps from different countries. The current models that have been applied specifically … that minimizes the water footprint accounting results and costs of chemical pulp, thereby facilitating the assessment of the water footprint by accounting for different chemical pulps purchased from various suppliers, with a focus on the efficiency of the production process. Water footprint accounting …

  15. Integrated optimization of management cost of hierarchical mobile IPv6 and its performance simulation

    Science.gov (United States)

    Peng, Xue-hai; Zhang, Hong-ke; Zhang, Si-dong

    2004-04-01

    Mobile IPv6 was designed to enable an IPv6 terminal to continue communicating seamlessly while changing its point of access to the network. Decreasing communication and management costs is a key issue in research on Internet mobility management. Hierarchical Mobile IPv6 was proposed to reduce the number of management messages in the backbone network. However, according to our cost models, resource consumption inside a hierarchical domain increases as an expense. Based on the idea of integrated optimization, an adaptive mobility management scheme (AMMS) is proposed in this paper, which decreases the total cost of delivering management messages and data payload, from the viewpoint of entire-network resources, by adaptively selecting a suitable mobility management scheme for each mobile node. Simulation results show that AMMS performs better than unmixed Mobile IPv6 and Hierarchical Mobile IPv6.
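The adaptive idea can be sketched with an assumed two-term cost model (signalling plus delivery); the constants and functional forms below are illustrative, not the paper's:

```python
# Assumed per-node cost model: Mobile IPv6 sends every binding update to the
# distant home agent but delivers packets directly; Hierarchical Mobile IPv6
# keeps updates local but routes packets through an anchor, adding delivery
# cost.  An adaptive scheme picks whichever is cheaper for a node's profile.

def cost_mipv6(moves, packets, c_bu=10.0, c_pkt=1.0):
    return moves * c_bu + packets * c_pkt            # costly updates, direct delivery

def cost_hmipv6(moves, packets, c_bu_local=2.0, c_pkt=1.5):
    return moves * c_bu_local + packets * c_pkt      # cheap local updates, longer path

def cost_adaptive(moves, packets):
    return min(cost_mipv6(moves, packets), cost_hmipv6(moves, packets))

fast_mover = (50, 100)    # many handovers, little traffic -> hierarchy wins
heavy_user = (2, 1000)    # few handovers, much traffic    -> flat scheme wins
for profile in (fast_mover, heavy_user):
    print(cost_adaptive(*profile))
```

Under these assumed constants the fast mover is cheaper under the hierarchical scheme (250 vs 600) and the heavy user under plain Mobile IPv6 (1020 vs 1504), which is the kind of per-node selection AMMS performs.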

  16. Cost-Optimal Pathways to 75% Fuel Reduction in Remote Alaskan Villages: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, Travis; Cutler, Dylan; Hirsch, Brian; Olis, Dan; Anderson, Kate

    2015-10-28

    There are thousands of isolated, diesel-powered microgrids that deliver energy to remote communities around the world at very high energy costs. The Remote Communities Renewable Energy program aims to help these communities reduce their fuel consumption and lower their energy costs through the use of high penetration renewable energy. As part of this program, the REopt modeling platform for energy system integration and optimization was used to analyze cost-optimal pathways toward achieving a combined 75% reduction in diesel fuel and fuel oil consumption in a select Alaskan village. In addition to the existing diesel generator and fuel oil heating technologies, the model was able to select from among wind, battery storage, and dispatchable electric heaters to meet the electrical and thermal loads. The model results indicate that while 75% fuel reduction appears to be technically feasible it may not be economically viable at this time. When the fuel reduction target was relaxed, the results indicate that by installing high-penetration renewable energy, the community could lower their energy costs by 21% while still reducing their fuel consumption by 54%.

  17. Genetic algorithm for project time-cost optimization in fuzzy environment

    Directory of Open Access Journals (Sweden)

    Khan Md. Ariful Haque

    2012-12-01

    Full Text Available Purpose: The aim of this research is to develop a more realistic approach to solving the project time-cost optimization problem under uncertain conditions, with fuzzy time periods. Design/methodology/approach: Deterministic models for time-cost optimization are never efficient when various uncertainty factors are considered. To make such problems realistic, triangular fuzzy numbers and the concept of the α-cut method from fuzzy logic theory are employed to model the problem. Because of the NP-hard nature of the project scheduling problem, a Genetic Algorithm (GA) has been used as a searching tool. Finally, Dev-C++ 4.9.9.2 has been used to code this solver. Findings: The solution has been performed under different combinations of GA parameters, and after result analysis, optimum values of those parameters have been found for the best solution. Research limitations/implications: To demonstrate the application of the developed algorithm, a real case has been chosen: a government-financed project launching a new product (a pre-paid electric meter). The algorithm is developed under some assumptions. Practical implications: The proposed model leads decision makers to choose the desired solution under different risk levels. Originality/value: Reports reveal that project optimization problems have never been solved under multiple uncertainty conditions. Here, the function has been optimized using the Genetic Algorithm search technique, with varied levels of risk and fuzzy time periods.
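The α-cut of a triangular fuzzy number, used here to model uncertain activity durations, reduces a fuzzy duration to a crisp interval at each membership level α:

```python
# A triangular fuzzy number (l, m, u) and its alpha-cut: at membership level
# alpha in [0, 1], the value is confined to the interval
# [l + alpha*(m - l), u - alpha*(u - m)] -- the standard alpha-cut formula.

def alpha_cut(l, m, u, alpha):
    return (l + alpha * (m - l), u - alpha * (u - m))

# Illustrative fuzzy activity duration: "about 10 days, between 8 and 14".
print(alpha_cut(8, 10, 14, 0.0))  # (8.0, 14.0)  full support, highest risk
print(alpha_cut(8, 10, 14, 0.5))  # (9.0, 12.0)
print(alpha_cut(8, 10, 14, 1.0))  # (10.0, 10.0) reduces to the modal value
```

Sweeping α then yields a family of crisp time-cost problems, each solvable by the GA at a chosen risk level, which matches the "varied levels of risk" in the abstract.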

  18. Optimal pricing policies for services with consideration of facility maintenance costs

    Science.gov (United States)

    Yeh, Ruey Huei; Lin, Yi-Fang

    2012-06-01

    For survival and success, pricing is an essential issue for service firms. This article deals with the pricing strategies for services with substantial facility maintenance costs. For this purpose, a mathematical framework that incorporates service demand and facility deterioration is proposed to address the problem. The facility and customers constitute a service system driven by Poisson arrivals and exponential service times. A service demand with increasing price elasticity and a facility lifetime with strictly increasing failure rate are also adopted in modelling. By examining the bidirectional relationship between customer demand and facility deterioration in the profit model, the pricing policies of the service are investigated. Then analytical conditions of customer demand and facility lifetime are derived to achieve a unique optimal pricing policy. The comparative statics properties of the optimal policy are also explored. Finally, numerical examples are presented to illustrate the effects of parameter variations on the optimal pricing policy.
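The structure of such a profit model can be sketched as price-elastic demand feeding an M/M/1 facility whose maintenance cost grows with utilization. All functional forms and constants below are assumptions for illustration, not the article's model:

```python
# Illustrative profit model: Poisson arrivals with price-elastic demand
# lam(p) = A * p**(-E), served at rate MU (an M/M/1 facility); maintenance
# cost grows with utilization rho = lam/MU as the facility deteriorates.

A, E, MU = 50.0, 1.8, 20.0
MAINT = 30.0   # maintenance cost scale (assumed)

def profit(p):
    lam = A * p ** (-E)
    rho = lam / MU
    if rho >= 1.0:               # demand exceeds capacity: infeasible price
        return float("-inf")
    return p * lam - MAINT * rho / (1.0 - rho)   # revenue - deterioration cost

# Simple grid search for the profit-maximizing price.
prices = [0.5 + 0.01 * k for k in range(1, 2000)]
best = max(prices, key=profit)
print(round(best, 2))
```

Raising the price cuts revenue but also lowers utilization and hence maintenance cost, so the optimum sits strictly above the pure revenue-maximizing price — the bidirectional demand/deterioration link the abstract describes.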

  19. A cost optimization model for 100% renewable residential energy supply systems

    DEFF Research Database (Denmark)

    Milan, Christian; Bojesen, Carsten; Nielsen, Mads Pagh

    2012-01-01

    The concept of net zero energy buildings (Net ZEB) has received increased attention throughout the last years. A well adapted and optimized design of the energy supply system is crucial for the performance of these buildings. To achieve this, a holistic approach is needed which accounts for the interdependencies between the different supply technologies as well as the construction energy of the installations, consumption profiles and on-site energy resource availability. This paper aims at developing such a model for the optimal sizing of renewable energy supply systems (RES) for residential Net ZEBs involving on-site production of heat and electricity in combination with electricity exchanged with the public grid. The model is based on linear programming and determines the optimal capacities for each relevant supply technology in terms of the overall system costs. It has been successfully applied …

  20. Supply Chain Production-distribution Cost Optimization under Grey Fuzzy Uncertainty

    Institute of Scientific and Technical Information of China (English)

    LIU Dong-bo; CHEN Yu-juan; HUANG Dao; TIAN Yu

    2008-01-01

    Most supply chain programming problems are restricted to deterministic situations or stochastic environments. Considering twofold uncertainty combining grey and fuzzy factors, this paper proposes a hybrid uncertain programming model to optimize the supply chain production-distribution cost. The programming parameters of the material suppliers, manufacturer, distribution centers, and customers are integrated into the presented model. On the basis of the chance measure and the credibility of a grey fuzzy variable, a grey fuzzy simulation methodology is proposed to generate input-output data for the uncertain functions. A designed neural network can expedite the simulation process after being trained on the generated input-output data. An improved Particle Swarm Optimization (PSO) algorithm based on the Differential Evolution (DE) algorithm optimizes the uncertain programming problems. A numerical example is presented to highlight the significance of the uncertain model and the feasibility of the solution strategy.

  1. A Data Driven Pre-cooling Framework for Energy Cost Optimization in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Vishwanath, Arun; Chandan, Vikas; Mendoza, Cameron; Blake, Charles

    2017-05-16

    Commercial buildings consume significant amounts of energy. Facility managers are increasingly grappling with the problem of reducing their buildings’ peak power, overall energy consumption and energy bills. In this paper, we first develop an optimization framework – based on a gray box model for zone thermal dynamics – to determine a pre-cooling strategy that simultaneously shifts the peak power to low energy tariff regimes, and reduces both the peak power and overall energy consumption by exploiting the flexibility in a building’s thermal comfort range. We then evaluate the efficacy of the pre-cooling optimization framework by applying it to building management system data, spanning several days, obtained from a large commercial building located in a tropical region of the world. The results from simulations show that optimal pre-cooling reduces peak power by over 50%, energy consumption by up to 30% and energy bills by up to 37%. Next, to enable ease of use of our framework, we also propose a shortest-path-based heuristic algorithm for solving the optimization problem and show that it has comparable performance with the optimal solution. Finally, we describe an application of the proposed optimization framework for developing countries to reduce the dependency on expensive fossil fuels, which are often used as a source for energy backup. We conclude by highlighting our real-world deployment of the optimal pre-cooling framework via a software service on the cloud platform of a major provider. Our pre-cooling methodology, based on the gray box optimization framework, incurs no capital expense and relies on data readily available from a building management system, thus enabling facility managers to take informed decisions for improving the energy and cost footprints of their buildings.

  2. Cost-effective river rehabilitation planning: optimizing for morphological benefits at large spatial scales.

    Science.gov (United States)

    Langhans, Simone D; Hermoso, Virgilio; Linke, Simon; Bunn, Stuart E; Possingham, Hugh P

    2014-01-01

    River rehabilitation aims to protect biodiversity or restore key ecosystem services but the success rate is often low. This is seldom because of insufficient funding for rehabilitation works but because trade-offs between costs and ecological benefits of management actions are rarely incorporated in the planning, and because monitoring is often inadequate for managers to learn by doing. In this study, we demonstrate a new approach to plan cost-effective river rehabilitation at large scales. The framework is based on the use of cost functions (relationship between costs of rehabilitation and the expected ecological benefit) to optimize the spatial allocation of rehabilitation actions needed to achieve given rehabilitation goals (in our case established by the Swiss water act). To demonstrate the approach with a simple example, we link costs of the three types of management actions that are most commonly used in Switzerland (culvert removal, widening of one riverside buffer and widening of both riversides) to the improvement in riparian zone quality. We then use Marxan, a widely applied conservation planning software, to identify priority areas to implement these rehabilitation measures in two neighbouring Swiss cantons (Aargau, AG and Zürich, ZH). The best rehabilitation plans identified for the two cantons met all the targets (i.e., restoring different types of morphological deficits with different actions), rehabilitating 80,786 m (AG) and 106,036 m (ZH) of the river network at a total cost of 106.1 Million CHF (AG) and 129.3 Million CHF (ZH). The best rehabilitation plan for the canton of AG consisted of more and better connected sub-catchments that were generally less expensive, compared to its neighbouring canton. The framework developed in this study can be used to inform river managers how and where best to spend their rehabilitation budget for a given set of actions, ensures the cost-effective achievement of desired rehabilitation outcomes, and helps

  3. Human factors issues for interstellar spacecraft

    Science.gov (United States)

    Cohen, Marc M.; Brody, Adam R.

    1991-01-01

    Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during transtellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.

  4. Impact of coverage-dependent marginal costs on optimal HPV vaccination strategies.

    Science.gov (United States)

    Ryser, Marc D; McGoff, Kevin; Herzog, David P; Sivakoff, David J; Myers, Evan R

    2015-06-01

    The effectiveness of vaccinating males against the human papillomavirus (HPV) remains a controversial subject. Many existing studies conclude that increasing female coverage is more effective than diverting resources into male vaccination. Recently, several empirical studies on HPV immunization have been published, providing evidence of the fact that marginal vaccination costs increase with coverage. In this study, we use a stochastic agent-based modeling framework to revisit the male vaccination debate in light of these new findings. Within this framework, we assess the impact of coverage-dependent marginal costs of vaccine distribution on optimal immunization strategies against HPV. Focusing on the two scenarios of ongoing and new vaccination programs, we analyze different resource allocation policies and their effects on overall disease burden. Our results suggest that if the costs associated with vaccinating males are relatively close to those associated with vaccinating females, then coverage-dependent, increasing marginal costs may favor vaccination strategies that entail immunization of both genders. In particular, this study emphasizes the necessity for further empirical research on the nature of coverage-dependent vaccination costs.
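The effect of coverage-dependent (convex) costs on gender allocation can be illustrated with a toy budget split; the cost and benefit functions below are assumptions for illustration, not the study's model:

```python
# Toy allocation model: with convex (coverage-dependent) marginal costs,
# splitting a fixed budget across both genders can beat concentrating it
# on one.  Cost of reaching coverage v in one group: c(v) = BASE*v + K*v**2
# (assumed convex form); benefit weights reflect females contributing more
# to transmission control (assumed).

BASE, K = 1.0, 4.0
W_FEMALE, W_MALE = 1.0, 0.7
BUDGET = 2.0

def cost(v):
    return BASE * v + K * v * v

def benefit(vf, vm):
    return W_FEMALE * vf + W_MALE * vm

def best_split(steps=1000):
    best = (0.0, 0.0)
    for i in range(steps + 1):
        vf = i / steps
        rem = BUDGET - cost(vf)
        if rem < 0:
            break
        # invert c(vm) = rem to get the male coverage the remainder buys
        vm = min(1.0, (-BASE + (BASE**2 + 4 * K * rem) ** 0.5) / (2 * K))
        if benefit(vf, vm) > benefit(*best):
            best = (vf, vm)
    return best

vf, vm = best_split()
print(round(vf, 2), round(vm, 2))
```

With linear costs the whole budget would go to the higher-weight group; making marginal costs rise with coverage pushes the optimum to an interior split that vaccinates both genders, which is the qualitative point of the abstract.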

  5. Impact of coverage-dependent marginal costs on optimal HPV vaccination strategies

    Directory of Open Access Journals (Sweden)

    Marc D. Ryser

    2015-06-01

    Full Text Available The effectiveness of vaccinating males against the human papillomavirus (HPV) remains a controversial subject. Many existing studies conclude that increasing female coverage is more effective than diverting resources into male vaccination. Recently, several empirical studies on HPV immunization have been published, providing evidence of the fact that marginal vaccination costs increase with coverage. In this study, we use a stochastic agent-based modeling framework to revisit the male vaccination debate in light of these new findings. Within this framework, we assess the impact of coverage-dependent marginal costs of vaccine distribution on optimal immunization strategies against HPV. Focusing on the two scenarios of ongoing and new vaccination programs, we analyze different resource allocation policies and their effects on overall disease burden. Our results suggest that if the costs associated with vaccinating males are relatively close to those associated with vaccinating females, then coverage-dependent, increasing marginal costs may favor vaccination strategies that entail immunization of both genders. In particular, this study emphasizes the necessity for further empirical research on the nature of coverage-dependent vaccination costs.

  6. The cost of leg forces in bipedal locomotion: a simple optimization study.

    Directory of Open Access Journals (Sweden)

    John R Rebula

    Full Text Available Simple optimization models show that bipedal locomotion may largely be governed by the mechanical work performed by the legs, minimization of which can automatically discover walking and running gaits. Work minimization can reproduce broad aspects of human ground reaction forces, such as a double-peaked profile for walking and a single peak for running, but the predicted peaks are unrealistically high and impulsive compared to the much smoother forces produced by humans. The smoothness might be explained better by a cost for the force rather than work produced by the legs, but it is unclear what features of force might be most relevant. We therefore tested a generalized force cost that can penalize force amplitude or its n-th time derivative, raised to the p-th power (or p-norm), across a variety of combinations for n and p. A simple model shows that this generalized force cost only produces smoother, human-like forces if it penalizes the rate rather than amplitude of force production, and only in combination with a work cost. Such a combined objective reproduces the characteristic profiles of human walking (R² = 0.96) and running (R² = 0.92), more so than minimization of either work or force amplitude alone (R² = -0.79 and R² = 0.22, respectively, for walking). Humans might find it preferable to avoid rapid force production, which may be mechanically and physiologically costly.

  7. The cost of leg forces in bipedal locomotion: a simple optimization study.

    Science.gov (United States)

    Rebula, John R; Kuo, Arthur D

    2015-01-01

    Simple optimization models show that bipedal locomotion may largely be governed by the mechanical work performed by the legs, minimization of which can automatically discover walking and running gaits. Work minimization can reproduce broad aspects of human ground reaction forces, such as a double-peaked profile for walking and a single peak for running, but the predicted peaks are unrealistically high and impulsive compared to the much smoother forces produced by humans. The smoothness might be explained better by a cost for the force rather than work produced by the legs, but it is unclear what features of force might be most relevant. We therefore tested a generalized force cost that can penalize force amplitude or its n-th time derivative, raised to the p-th power (or p-norm), across a variety of combinations for n and p. A simple model shows that this generalized force cost only produces smoother, human-like forces if it penalizes the rate rather than amplitude of force production, and only in combination with a work cost. Such a combined objective reproduces the characteristic profiles of human walking (R² = 0.96) and running (R² = 0.92), more so than minimization of either work or force amplitude alone (R² = -0.79 and R² = 0.22, respectively, for walking). Humans might find it preferable to avoid rapid force production, which may be mechanically and physiologically costly.
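The generalized force cost described above (the n-th time derivative of force, raised to the p-th power) is straightforward to evaluate numerically; the force profiles below are illustrative, not the study's data:

```python
import numpy as np

# Generalized force cost: integrate |d^n F / dt^n|^p over a force profile
# F(t).  n=0, p=2 penalizes amplitude; n=1 penalizes the rate of force
# production, which is the variant the study found produces smooth forces.

def force_cost(F, dt, n=1, p=2):
    d = F.copy()
    for _ in range(n):
        d = np.gradient(d, dt)        # successive numerical time derivatives
    return np.sum(np.abs(d) ** p) * dt

t = np.linspace(0, 1, 201)
dt = t[1] - t[0]
smooth = np.sin(np.pi * t) ** 2                        # gradual force development
abrupt = np.where((t > 0.25) & (t < 0.75), 1.0, 0.0)   # impulsive on/off profile

# The amplitude cost (n=0) is of similar size for both profiles, but the
# rate cost (n=1) heavily penalizes the abrupt one.
print(force_cost(smooth, dt, n=1, p=2) < force_cost(abrupt, dt, n=1, p=2))  # → True
```

In an optimization over gaits, adding this rate term to a work cost is what smooths the predicted ground reaction forces toward the human profiles.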

  8. Life span and reproductive cost explain interspecific variation in the optimal onset of reproduction.

    Science.gov (United States)

    Mourocq, Emeline; Bize, Pierre; Bouwhuis, Sandra; Bradley, Russell; Charmantier, Anne; de la Cruz, Carlos; Drobniak, Szymon M; Espie, Richard H M; Herényi, Márton; Hötker, Hermann; Krüger, Oliver; Marzluff, John; Møller, Anders P; Nakagawa, Shinichi; Phillips, Richard A; Radford, Andrew N; Roulin, Alexandre; Török, János; Valencia, Juliana; van de Pol, Martijn; Warkentin, Ian G; Winney, Isabel S; Wood, Andrew G; Griesser, Michael

    2016-02-01

    Fitness can be profoundly influenced by the age at first reproduction (AFR), but to date the AFR-fitness relationship only has been investigated intraspecifically. Here, we investigated the relationship between AFR and average lifetime reproductive success (LRS) across 34 bird species. We assessed differences in the deviation of the Optimal AFR (i.e., the species-specific AFR associated with the highest LRS) from the age at sexual maturity, considering potential effects of life history as well as social and ecological factors. Most individuals adopted the species-specific Optimal AFR and both the mean and Optimal AFR of species correlated positively with life span. Interspecific deviations of the Optimal AFR were associated with indices reflecting a change in LRS or survival as a function of AFR: a delayed AFR was beneficial in species where early AFR was associated with a decrease in subsequent survival or reproductive output. Overall, our results suggest that a delayed onset of reproduction beyond maturity is an optimal strategy explained by a long life span and costs of early reproduction. By providing the first empirical confirmations of key predictions of life-history theory across species, this study contributes to a better understanding of life-history evolution.

  9. Chemical composition of interstellar dust

    Science.gov (United States)

    Das, Ankan; Chakrabarti, Sandip Kumar; Majumdar, Liton; Sahu, Dipen

    Study of chemical evolution of interstellar medium is well recognized to be a challenging task. Interstellar medium (ISM) is a rich reservoir of complex molecules. So far, around 180 gas phase molecules and around 20 molecular species on the interstellar dust have been detected in various regions of ISM, especially in regions of star formation. In last decade, it was well established that gas phase reactions alone cannot explain molecular abundances in ISM. Chemical reactions which occur on interstellar dust grains are essential to explain formation of several molecules especially hydrogenated species including simplest and most abundant molecule H2. Interstellar grains provide surface for accreted species to meet and react. Therefore, an understanding of formation of molecules on grain surfaces is of prime importance. We concentrate mainly on water, methanol, carbon dioxide, which constitute nearly 90% of the grain mantle. These molecules are detected on grain surface due to their strong absorption bands arising out of multiple vibrational modes. Water is the most abundant species (with a surface coverage >60% ) on a grain in dense interstellar medium. CO2 is second most abundant molecule in interstellar medium with an abundance of around 20% with respect to H2O. However, this can vary from cloud to cloud. In clouds like W 33A it could be even less than 5% of water abundance. The next most abundant molecule is CO, which is well studied ice with an abundance varying between 2%\\ to 15% of water. Methanol (CH3OH) is also very abundant having abundance 2% to 30% of water. Measurement of water deuterium fractionation is a relevant tool for understanding mechanisms of water formation and evolution from prestellar phase to formation of planets and comets. We are also considering deuterated species in our simulation. We use Monte Carlo method (considering multilayer regime) to mimic the exact scenario. 
We study the chemical evolution of the interstellar grain mantle by varying
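The grain-surface Monte Carlo approach described in this record can be illustrated, in highly simplified form, by a single-grain Gillespie simulation with an H-only chemistry. The rate constants and the three-event network below are illustrative assumptions for a sketch, not the authors' reaction network:

```python
import random

def gillespie_grain(t_max=1e5, k_acc=1e-3, k_des=1e-5, k_react=1e-4, seed=1):
    """Toy kinetic Monte Carlo (Gillespie algorithm) of H accretion,
    desorption and H2 formation on a single dust grain."""
    rng = random.Random(seed)
    t, n_h = 0.0, 0
    accreted = desorbed = h2 = 0
    while t < t_max:
        rates = [k_acc,                          # an H atom lands on the grain
                 k_des * n_h,                    # an H atom evaporates
                 k_react * n_h * (n_h - 1) / 2]  # two H atoms meet -> H2
        total = sum(rates)
        t += rng.expovariate(total)              # waiting time to next event
        u = rng.random() * total                 # pick event in proportion to rate
        if u < rates[0]:
            n_h += 1; accreted += 1
        elif u < rates[0] + rates[1]:
            n_h -= 1; desorbed += 1
        else:
            n_h -= 2; h2 += 1
    return accreted, desorbed, h2, n_h

accreted, desorbed, h2, n_h = gillespie_grain()
```

By construction every accreted atom is eventually accounted for by desorption, incorporation into H2, or residence on the grain, which gives a simple mass-balance check on the simulation.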

  10. Achieving conservation when opportunity costs are high: optimizing reserve design in Alberta's oil sands region.

    Directory of Open Access Journals (Sweden)

    Richard R Schneider

    Full Text Available Recent studies have shown that conservation gains can be achieved when the spatial distributions of biological benefits and economic costs are incorporated in the conservation planning process. Using Alberta, Canada, as a case study we apply these techniques in the context of coarse-filter reserve design. Because targets for ecosystem representation and other coarse-filter design elements are difficult to define objectively we use a trade-off analysis to systematically explore the relationship between conservation targets and economic opportunity costs. We use the Marxan conservation planning software to generate reserve designs at each level of conservation target to ensure that our quantification of conservation and economic outcomes represents the optimal allocation of resources in each case. Opportunity cost is most affected by the ecological representation target and this relationship is nonlinear. Although petroleum resources are present throughout most of Alberta, and include highly valuable oil sands deposits, our analysis indicates that over 30% of public lands could be protected while maintaining access to more than 97% of the value of the region's resources. Our case study demonstrates that optimal resource allocation can be usefully employed to support strategic decision making in the context of land-use planning, even when conservation targets are not well defined.
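The trade-off curve between conservation targets and opportunity cost described above can be sketched with a much simpler stand-in for Marxan: a greedy selection of planning units by cost-effectiveness. The planning units and their (benefit, cost) values below are hypothetical, chosen only to show why the cost-target relationship is nonlinear:

```python
def reserve_cost(units, target_benefit):
    """Greedy stand-in for a reserve-design optimizer: select planning
    units in order of ascending cost per unit of ecological benefit
    until the conservation target is met. Returns (total_cost, indices)."""
    order = sorted(range(len(units)), key=lambda i: units[i][1] / units[i][0])
    got, cost, chosen = 0.0, 0.0, []
    for i in order:
        if got >= target_benefit:
            break
        benefit, unit_cost = units[i]
        got += benefit
        cost += unit_cost
        chosen.append(i)
    return cost, chosen

# Hypothetical planning units as (benefit, opportunity_cost) pairs;
# costs escalate sharply, mimicking valuable oil sands deposits.
units = [(5, 1), (5, 2), (5, 4), (5, 8), (5, 16)]
for frac in (0.2, 0.4, 0.6, 0.8):
    cost, _ = reserve_cost(units, frac * 25)
    print(frac, cost)
```

Because the cheapest units are taken first, each increment of the representation target costs more than the last, reproducing the nonlinear relationship the study reports.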

  11. Achieving conservation when opportunity costs are high: optimizing reserve design in Alberta's oil sands region.

    Science.gov (United States)

    Schneider, Richard R; Hauer, Grant; Farr, Dan; Adamowicz, W L; Boutin, Stan

    2011-01-01

    Recent studies have shown that conservation gains can be achieved when the spatial distributions of biological benefits and economic costs are incorporated in the conservation planning process. Using Alberta, Canada, as a case study we apply these techniques in the context of coarse-filter reserve design. Because targets for ecosystem representation and other coarse-filter design elements are difficult to define objectively we use a trade-off analysis to systematically explore the relationship between conservation targets and economic opportunity costs. We use the Marxan conservation planning software to generate reserve designs at each level of conservation target to ensure that our quantification of conservation and economic outcomes represents the optimal allocation of resources in each case. Opportunity cost is most affected by the ecological representation target and this relationship is nonlinear. Although petroleum resources are present throughout most of Alberta, and include highly valuable oil sands deposits, our analysis indicates that over 30% of public lands could be protected while maintaining access to more than 97% of the value of the region's resources. Our case study demonstrates that optimal resource allocation can be usefully employed to support strategic decision making in the context of land-use planning, even when conservation targets are not well defined.

  12. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

    Full Text Available Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.
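The idea of a sequential diagnostic strategy can be illustrated with a small greedy sketch (this is not either of the paper's two algorithms, and it ignores test placement cost): at each step, choose the test whose pass/fail outcome splits the remaining fault candidates most evenly per unit of test cost, then recurse on both outcomes. The fault and test names below are hypothetical:

```python
def build_strategy(faults, tests):
    """Greedy sequential diagnosis sketch. `faults` is a set of candidate
    faults; `tests` is a list of (name, cost, detected_faults) tuples.
    Returns a nested dict decision tree with fault lists at the leaves."""
    if len(faults) <= 1:
        return list(faults)              # leaf: fault isolated (or empty)
    best, best_score = None, -1.0
    for name, cost, detects in tests:
        pos = faults & detects           # faults that would make the test fail
        neg = faults - detects
        if not pos or not neg:
            continue                     # test does not discriminate here
        score = min(len(pos), len(neg)) / cost
        if score > best_score:
            best, best_score = (name, pos, neg), score
    if best is None:
        return list(faults)              # no remaining test separates these
    name, pos, neg = best
    return {name: {'fail': build_strategy(pos, tests),
                   'pass': build_strategy(neg, tests)}}

tests = [('t1', 1.0, {'f1', 'f2'}),
         ('t2', 1.0, {'f1', 'f3'}),
         ('t3', 2.0, {'f1'})]
tree = build_strategy({'f1', 'f2', 'f3', 'f4'}, tests)
```

Here t1 is applied first because it splits the four candidates 2/2 at unit cost; the expensive t3 is never needed.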

  13. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Science.gov (United States)

    Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin

    2015-01-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics. PMID:26457709

  14. An Integrated Modeling Approach to Evaluate and Optimize Data Center Sustainability, Dependability and Cost

    Directory of Open Access Journals (Sweden)

    Gustavo Callou

    2014-01-01

    Full Text Available Data centers have evolved dramatically in recent years, due to the advent of social networking services, e-commerce and cloud computing. The conflicting requirements are the high availability levels demanded against the low sustainability impact and cost values. The approaches that evaluate and optimize these requirements are essential to support designers of data center architectures. Our work aims to propose an integrated approach to estimate and optimize these issues with the support of the developed environment, Mercury. Mercury is a tool for dependability, performance and energy flow evaluation. The tool supports reliability block diagrams (RBD, stochastic Petri nets (SPNs, continuous-time Markov chains (CTMC and energy flow (EFM models. The EFM verifies the energy flow on data center architectures, taking into account the energy efficiency and power capacity that each device can provide (assuming power systems or extract (considering cooling components. The EFM also estimates the sustainability impact and cost issues of data center architectures. Additionally, a methodology is also considered to support the modeling, evaluation and optimization processes. Two case studies are presented to illustrate the adopted methodology on data center power systems.

  15. Evaluating risks, costs, and benefits of new and emerging therapies to optimize outcomes in multiple sclerosis.

    Science.gov (United States)

    Bandari, Daniel S; Sternaman, Debora; Chan, Theodore; Prostko, Chris R; Sapir, Tamar

    2012-01-01

    Multiple sclerosis (MS) is a complex, chronic, and often disabling neurological disease. Despite the recent incorporation of new treatment approaches early in the disease course, care providers still face difficult decisions as to which therapy will lead to optimal outcomes and when to initiate or escalate therapies. Such decisions require proper assessment of relative risks, costs, and benefits of new and emerging therapies, as well as addressing challenges with adherence to achieve optimal management and outcomes. At the 24th Annual Meeting Expo of the Academy of Managed Care Pharmacy (AMCP), held in San Francisco on April 18, 2012, a 4-hour activity titled "Analyzing and Applying the Evidence to Improve Cost-Benefit and Risk-Benefit Outcomes in Multiple Sclerosis" was conducted in association with AMCP's Continuing Professional Education Partner Program (CPEPP). The practicum, led by the primary authors of this supplement, featured didactic presentations, a roundtable session, and an expert panel discussion detailing research evidence, ideas, and discussion topics central to MS and its applications to managed care. To review (a) recent advances in MS management, (b) strategies to optimize the use of disease-modifying therapies for MS, (c) costs of current MS therapies, (d) strategies to promote adherence and compliance to disease-modifying therapies, and (e) potential strategies for managed care organizations to improve care of their MS patient populations and optimize clinical and economic outcomes.
Advances in magnetic resonance imaging and newer therapies have allowed earlier diagnosis and reduction of relapses, reduction in progression of disability, and reduction in total cost of care in the long term. Yet, even with the incorporation of new disease-modifying therapies into the treatment armamentarium of MS, challenges remain for patients, providers, caregivers, and managed care organizations as they have to make informed decisions based on the properties, risks, costs, and benefits

  16. Optimal guaranteed cost control for an uncertain discrete T-S fuzzy system with time-delay

    Institute of Scientific and Technical Information of China (English)

    Renming WANG; Thierry Marie GUERRA; Juntao PAN

    2009-01-01

    This paper considers the guaranteed cost control problem for a class of uncertain discrete T-S fuzzy systems with time delay and a given quadratic cost function. Sufficient conditions for the existence of such controllers are derived based on the linear matrix inequalities (LMI) approach by constructing a specific nonquadratic Lyapunov-Krasovskii functional and a nonlinear PDC-like control law. A convex optimization problem is also formulated to select the optimal guaranteed cost controller that minimizes the upper bound of the closed-loop cost function. Finally, numerical examples are presented to demonstrate the effectiveness of the proposed approaches.

  17. Optimal Willingness to Supply Wholesale Electricity Under Asymmetric Linearized Marginal Costs

    Directory of Open Access Journals (Sweden)

    David Hudgins

    2012-01-01

    Full Text Available This analysis derives the profit-maximizing willingness to supply functions for single-plant and multi-plant wholesale electricity suppliers that all incur linear marginal costs. The optimal strategy must result in linear residual demand functions in the absence of capacity constraints. This necessarily leads to a linear pricing rule structure that can be used by firm managers to construct their offer curves and to serve as a benchmark to evaluate firm profit-maximizing behavior. The procedure derives the cost functions and the residual demand curves for merged or multi-plant generators, and uses these to construct the individual generator plant offer curves for a multi-plant firm.
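The linear pricing structure described above follows from a standard first-order condition, sketched here under illustrative parameters (not the paper's data): against a linear residual demand p = α − βq, a plant with linear marginal cost MC = a + bq maximizes profit where marginal revenue equals marginal cost, giving q* = (α − a)/(2β + b):

```python
def optimal_quantity(alpha, beta, a, b):
    """Profit-maximizing output against linear residual demand
    p = alpha - beta*q for a plant with linear marginal cost
    MC = a + b*q. From MR = MC: alpha - 2*beta*q = a + b*q,
    hence q* = (alpha - a) / (2*beta + b)."""
    return (alpha - a) / (2 * beta + b)

# Hypothetical market: residual demand p = 100 - 0.5*q, MC = 20 + q
q = optimal_quantity(alpha=100.0, beta=0.5, a=20.0, b=1.0)
p = 100.0 - 0.5 * q
```

Because q* is linear in the demand intercept α, sweeping α traces out a linear offer (willingness-to-supply) curve, which is the benchmark structure the analysis derives.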

  18. Statistical Arbitrage and Optimal Trading with Transaction Costs in Futures Markets

    CERN Document Server

    Tsagaris, Theodoros

    2008-01-01

    We consider the Brownian market model and the problem of expected utility maximization of terminal wealth. Specifically, we examine the problem of maximizing the utility of terminal wealth in the presence of transaction costs for a fund/agent investing in futures markets. We offer some preliminary remarks about statistical arbitrage strategies, set the framework for futures markets, and introduce concepts such as margin, gearing and slippage. The setting is in discrete time, and the evolution of the futures prices is modelled as a discrete random sequence involving Itô sums. We assume that the drift and the Brownian motion driving the return process are non-observable and that the transaction costs are represented by the bid-ask spread. We provide an explicit solution to the optimal portfolio process, and we offer an example using logarithmic utility.

  19. Solving resource availability cost problem in project scheduling by pseudo particle swarm optimization

    Institute of Scientific and Technical Information of China (English)

    Jianjun Qi; Bo Guo; Hongtao Lei; Tao Zhang

    2014-01-01

    This paper considers a project scheduling problem with the objective of minimizing the resource availability cost required to finish all activities before the deadline. There are finish-start type precedence relations among the activities, which require several kinds of renewable resources. We simplify the process of solving the resource availability cost problem (RACP) by using the start time of each activity to encode the schedule. A novel heuristic algorithm is then proposed to make the search for the best solution efficient, and a pseudo particle swarm optimization (PPSO), combining PSO with a path relinking procedure, is presented to solve the RACP. Finally, comparative computational experiments are designed, and the computational results show that the proposed method is very effective in solving the RACP.
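As background, a minimal particle swarm optimization loop is sketched below on a toy continuous objective. This is plain PSO, not the paper's PPSO with path relinking, and all parameter values are conventional defaults chosen for illustration:

```python
import random

def pso(f, dim, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization: minimize f over a box.
    Each particle tracks its personal best; the swarm tracks a global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                     # personal best positions
    pf = [f(x) for x in X]                    # personal best values
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                    # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    G, gf = X[i][:], fx
    return G, gf

# Toy objective: sphere function, minimum 0 at the origin
best, val = pso(lambda x: sum(t * t for t in x), dim=3, bounds=(-5.0, 5.0))
```

In the RACP setting the continuous position vector would instead encode activity start times and the objective would be the resource availability cost of the implied schedule.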

  20. Systems and methods for energy cost optimization in a building system

    Science.gov (United States)

    Turney, Robert D.; Wenzel, Michael J.

    2016-09-06

    Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.

  1. Analytical Thermal and Cost Optimization of Micro-Structured Plate-Fin Heat Sink

    DEFF Research Database (Denmark)

    Rezaniakolaei, Alireza; Rosendahl, Lasse

    Microchannel heat sinks have been widely used in the field of thermo-fluids due to the rapid growth in technological applications which require high rates of heat transfer in relatively small spaces and volumes. In this work, a micro plate-fin heat sink is optimized parametrically to minimize the thermal resistance and to maximize the cost performance of the heat sink. The width and the height of the microchannels, and the fin thickness, are analytically optimized over a wide range of pumping power. Using an effective numerical test, the generated equations also give the optimum parameters at three sizes of the substrate plate of the heat sink. Results show that, at any pumping power, there are specific values of the channel width and fin thickness which produce minimum thermal resistance in the heat sink. The results also illustrate that a larger channel width and a smaller fin thickness lead...

  2. Optimal Multi-Modes Switching with the Switching Cost not necessarily Positive

    CERN Document Server

    Asri, Brahim El

    2012-01-01

    In this paper, we study the $m$-states optimal switching problem in finite horizon, when the switching cost functions are arbitrary and can be positive or negative. This has an economic motivation, for instance in the central evaluation of cases where an organization or state grants financial assistance to power plants that promote green energy, or use less polluting modes, in their production activity. We show the existence of an optimal strategy via a verification theorem, and then show the existence and uniqueness of the value processes by using an approximation scheme. In the Markovian framework, we show that the value processes can be characterized in terms of deterministic continuous functions of the state of the process. These latter functions are the unique viscosity solutions of a system of $m$ variational partial differential inequalities with inter-connected obstacles.

  3. Optimal climate policy is a utopia. From quantitative to qualitative cost-benefit analysis

    Energy Technology Data Exchange (ETDEWEB)

    Van den Bergh, Jeroen C.J.M. [Department of Spatial Economics, Faculty of Economics and Business Administration, and Institute for Environmental Studies, Free University, De Boelelaan 1105, 1081 HV, Amsterdam (Netherlands)

    2004-04-20

    The dominance of quantitative cost-benefit analysis (CBA) and optimality concepts in the economic analysis of climate policy is criticised. Among other things, it is argued to be based on a misplaced interpretation of policy for a complex climate-economy system as being analogous to individual inter-temporal welfare optimisation. The transfer of quantitative CBA and optimality concepts reflects an overly ambitious approach that does more harm than good. An alternative approach is to focus attention on extreme events, structural change and complexity. It is argued that a qualitative rather than a quantitative CBA that takes account of these aspects can support the adoption of a minimax regret approach or precautionary principle in climate policy. This means: implement stringent GHG reduction policies as soon as possible.

  4. Optimal Allocation of Smart Substations in a Distribution System Considering Interruption Costs of Customers

    DEFF Research Database (Denmark)

    Sun, Lei; You, Shi; Hu, Junjie;

    2016-01-01

    One of the major functions of a smart substation (SS) is to restore power supply to interrupted customers as quickly as possible after an outage. The high cost of a smart substation limits its widespread utilization. In this paper, a smart substation allocation model (SSAM) to determine the optimal... is simplified into a mixed integer linear programming problem which can be solved efficiently with commercial solvers. Finally, the performance of the proposed methodology is demonstrated by the standard RBTS-BUS 4 test system and a medium voltage power distribution system in Denmark.

  5. Exact and heuristic approaches to solve the Internet shopping optimization problem with delivery costs

    Directory of Open Access Journals (Sweden)

    Lopez-Loces Mario C.

    2016-06-01

    Full Text Available Internet shopping has been one of the most common online activities, carried out by millions of users every day. As the number of available offers grows, the difficulty in getting the best one among all the shops increases as well. In this paper we propose an integer linear programming (ILP model and two heuristic solutions, the MinMin algorithm and the cellular processing algorithm, to tackle the Internet shopping optimization problem with delivery costs. The obtained results improve those achieved by the state-of-the-art heuristics, and for small real case scenarios ILP delivers exact solutions in a reasonable amount of time.
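A greedy heuristic in the spirit of MinMin can be sketched in a few lines (this is an illustrative simplification, not the paper's exact algorithm): assign products one at a time to the shop with the lowest marginal cost, counting a shop's delivery fee only the first time it is used. The shops, prices and fees below are hypothetical:

```python
def greedy_shopping(prices, delivery):
    """Greedy sketch for Internet shopping with delivery costs.
    `prices[shop]` is a list of item prices; `delivery[shop]` is a
    one-off fee charged the first time anything is bought there.
    Returns (product -> shop assignment, total cost)."""
    n_products = len(next(iter(prices.values())))
    used, plan, total = set(), {}, 0.0
    for p in range(n_products):
        best_shop, best_cost = None, float('inf')
        for shop, row in prices.items():
            # marginal cost: item price plus delivery fee if shop is new
            cost = row[p] + (0 if shop in used else delivery[shop])
            if cost < best_cost:
                best_shop, best_cost = shop, cost
        used.add(best_shop)
        plan[p] = best_shop
        total += best_cost
    return plan, total

prices = {'A': [5, 7, 9], 'B': [6, 4, 10]}
delivery = {'A': 3, 'B': 5}
plan, total = greedy_shopping(prices, delivery)
```

In this instance the heuristic consolidates all three products at shop A: once A's delivery fee is paid, buying product 1 there (7) beats B's price plus delivery (4 + 5). An exact ILP would search over all such consolidation choices.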

  6. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Dong, Liang; Sun, Lu

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain, and the market demands were regarded as interval numbers instead of constants. An interval linear programming formulation was developed, and a method for solving it was presented. An illustrative case was studied by the proposed...

  7. Optimal Stochastic Control with Recursive Cost Functionals of Stochastic Differential Systems Reflected in a Domain

    CERN Document Server

    Li, Juan

    2012-01-01

    In this paper we study the optimal stochastic control problem for stochastic differential systems reflected in a domain. The cost functional is a recursive one, which is defined via generalized backward stochastic differential equations developed by Pardoux and Zhang [17]. The value function is shown to be the viscosity solution to the associated Hamilton-Jacobi-Bellman equation, which is a fully nonlinear parabolic partial differential equation with a nonlinear Neumann boundary condition. The method of stochastic "backward semigroups" introduced by Peng [18] is adapted to our context.

  8. A Conceptual Model to Optimize Operating Cost of Passenger Ships in Macau

    Directory of Open Access Journals (Sweden)

    Xin-long Xu

    2015-01-01

    Full Text Available To facilitate more convenient travel as the economy of Macau expands, the government of Macau has allowed shipping companies to add passenger ships and shipping lines. This paper demonstrates how shipping companies can reduce costs by optimizing passenger ships and crew size. It analyzes operating conditions for each shipping depot, including transit time, ships, and volume of passengers. A series of integer programming models is proposed. After a practical demonstration using Excel to solve the LP model, we show that the reduction in the number of passenger ships and crew size could reach 22.6% and 29.4%, respectively.

  9. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics of large-scale refrigeration plants challenge the predictive control design. It is shown, however, that taking into account the knowledge of different time scales in the dynamical subsystems makes possible a linear formulation of a centralized predictive controller. A realistic scenario of regulatory power services in the smart grid is considered and formulated...

  10. Efficiency-optimized low-cost TDPAC spectrometer using a versatile routing/coincidence unit

    Energy Technology Data Exchange (ETDEWEB)

    Renteria, M., E-mail: renteria@fisica.unlp.edu.ar; Bibiloni, A. G.; Darriba, G. N.; Errico, L. A.; Munoz, E. L.; Richard, D.; Runco, J. [Universidad Nacional de La Plata, Departamento de Fisica, Facultad de Ciencias Exactas (Argentina)

    2008-01-15

    A highly efficient, reliable, and low-cost γ-γ TDPAC spectrometer, PACAr, optimized for ¹⁸¹Hf-implanted low-activity samples, is presented. A versatile EPROM-based routing/coincidence unit was developed and implemented to be used with the memory-card-based multichannel analyzer hosted in a personal computer. The excellent energy resolution and very good overall resolution and efficiency of PACAr are analyzed and compared with advanced and already tested fast-fast and slow-fast PAC spectrometers.

  11. COST ANALYSIS AND OPTIMIZATION IN THE LOGISTIC SUPPLY CHAIN USING THE SIMPROLOGIC PROGRAM

    Directory of Open Access Journals (Sweden)

    Ilona MAŃKA

    2016-12-01

    Full Text Available This article aims to characterize the authors' SimProLOGIC program, version 2.1, which enables one to conduct a cost analysis of individual links, as well as of the entire logistic supply chain (LSC). The article also presents an example analysis of the parameters characterizing the supplier of subsystems in the examined logistic chain, and the results of an initial optimization, which makes it possible to improve the economic balance as well as the level of customer service for a sample test task.

  12. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    Science.gov (United States)

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties.
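The core of the interval treatment can be illustrated with plain interval arithmetic (a sketch of the uncertainty handling only, not the full interval linear program): with transport quantities fixed, interval unit costs propagate to an interval on the total cost. The transport modes and numbers below are hypothetical:

```python
def interval_cost(quantities, unit_costs):
    """Interval arithmetic sketch: given nonnegative shipped quantities
    and unit costs as (lo, hi) intervals, return the (best, worst)
    total cost of the plan."""
    lo = sum(q * c[0] for q, c in zip(quantities, unit_costs))
    hi = sum(q * c[1] for q, c in zip(quantities, unit_costs))
    return lo, hi

# Hypothetical plan: tonnes of grain moved by truck, rail, barge,
# each with an uncertain unit cost given as an interval
q = [100, 250, 400]
costs = [(2.0, 2.6), (1.1, 1.4), (0.6, 0.9)]
lo, hi = interval_cost(q, costs)
```

An interval linear program extends this idea by optimizing the quantities themselves so that both endpoints of the cost interval, and the interval-valued constraints, are handled simultaneously.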

  13. Optimal dividend policies with transaction costs for a class of jump-diffusion processes

    DEFF Research Database (Denmark)

    Hunting, Martin; Paulsen, Jostein

    2013-01-01

    This paper addresses the problem of finding an optimal dividend policy for a class of jump-diffusion processes. The jump component is a compound Poisson process with negative jumps, and the drift and diffusion components are assumed to satisfy some regularity and growth restrictions. Each dividend payment is reduced by a fixed and a proportional cost, meaning that if ξ is paid out by the company, the shareholders receive kξ−K, where k and K are positive. The aim is to maximize expected discounted dividends until ruin. It is proved that when the jumps belong to a certain class of light...

  14. Cost analysis of water and sediment diversions to optimize land building in the Mississippi River delta

    Science.gov (United States)

    Kenney, Melissa A.; Hobbs, Benjamin F.; Mohrig, David; Huang, Hongtai; Nittrouer, Jeffrey A.; Kim, Wonsuck; Parker, Gary

    2013-06-01

    Land loss in the Mississippi River delta caused by subsidence and erosion has resulted in habitat loss and increased exposure of settled areas to storm surge risks. There is debate over the most cost-efficient and geomorphologically feasible projects to build land by river diversions, namely, whether a larger number of small, or a lesser number of large, engineered diversions provide the most efficient outcomes. This study uses an optimization framework to identify portfolios of diversions that are efficient for three general restoration objectives: maximize land built, minimize cost, and minimize water diverted. The framework links the following models: (1) a hydraulic water and sediment diversion model that, for a given structural design for a diversion, estimates the volume of water and sediment diverted; (2) a geomorphological land-building model that estimates the amount of land built over a time period, given the volume of water and sediment; and (3) a statistical model of investment cost as a function of diversion depth and width. An efficient portfolio is found by optimizing one objective subject to constraints on achievement of the other two; then by permuting those constraints, we find distinct portfolios that represent trade-offs among the objectives. Although the analysis explores generic relationships among size, cost, and land building (and thus does not consider specific project proposals or locations), the results demonstrate that large-scale land building (>200 km2) programs that operate over a time span of 50 years require deep diversions because of the enhanced efficiency of sand extraction per unit water. This conclusion applies whether or not there are significant scale economies or diseconomies associated with wider and deeper diversions.

  15. Visualizing Interstellar's Wormhole

    Science.gov (United States)

    James, Oliver; von Tunzelmann, Eugénie; Franklin, Paul; Thorne, Kip S.

    2015-06-01

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie; (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie; (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres; (iv) Implementing this map, for example, in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole; (v) With the student's implementation, exploring how the wormhole's three parameters influence what the camera sees—which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole; (vi) Using the student's implementation, exploring the wormhole's Einstein ring and particularly the peculiar motions of star images near the ring, and exploring what it looks like to travel through a wormhole.
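Item (ii) above, constructing embedding diagrams, can be tried immediately on the simplest wormhole metric the paper discusses, the Ellis wormhole ds² = dl² + (b² + l²)dφ². For this metric the embedding surface has the closed form r(l) = √(b² + l²), z(l) = b·asinh(l/b), which the sketch below evaluates (the sampling range is an arbitrary choice):

```python
import math

def embedding(b, l):
    """Embedding-diagram point (r, z) for the Ellis wormhole
    ds^2 = dl^2 + (b^2 + l^2) dphi^2, with throat radius b and
    proper radial distance l: r(l) = sqrt(b^2 + l^2) is the
    circumferential radius and z(l) = b * asinh(l / b) the lift."""
    r = math.hypot(b, l)
    z = b * math.asinh(l / b)
    return r, z

# Sample the surface through the throat (l = 0) on both sides
points = [embedding(1.0, l / 10.0) for l in range(-30, 31)]
```

Revolving the resulting (r, z) curve about the z axis gives the familiar two-mouthed funnel; the throat sits at l = 0, where r = b and z = 0, and the surface is mirror-symmetric in r because r depends only on |l|.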

  16. Optimal combinations of control strategies and cost-effective analysis for visceral leishmaniasis disease transmission

    Science.gov (United States)

    Biswas, Santanu; Subramanian, Abhishek; ELMojtaba, Ibrahim M.; Chattopadhyay, Joydev; Sarkar, Ram Rup

    2017-01-01

    Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Implementation of various intervention strategies fail in controlling the spread of this disease due to issues of parasite drug resistance and resistance of sandfly vectors to insecticide sprays. Due to this, policy makers need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of periodic sandfly biting rate. Fitting the model for real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies namely, drug-based treatment of symptomatic and PKDL-infected individuals, insecticide treated bednets and spray of insecticides on the dynamics of infected human and vector populations. We propose that the strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays perform well in controlling the disease for the time period of intervention introduced. Performing a cost-effective analysis we identify that the same strategy also proves to be efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for elimination of visceral leishmaniasis. PMID:28222162

  17. Estimation of Partial Safety Factors and Target Failure Probability Based on Cost Optimization of Rubble Mound Breakwaters

    DEFF Research Database (Denmark)

    Kim, Seung-Woo; Suh, Kyung-Duck; Burcharth, Hans F.

    2010-01-01

    Breakwaters can be designed by cost optimization because human risk is seldom a governing factor. Most breakwaters, however, were constructed without considering cost optimization. In this study, the optimum return period, target failure probability and partial safety factors...... were evaluated by applying cost optimization to rubble mound breakwaters in Korea. The method applied was developed by Hans F. Burcharth and John D. Sorensen in relation to PIANC Working Group 47. The optimum return period was determined as 50 years in many cases and was found as 100 years...... of the national design standard, and the overall safety factor is then calculated as 1.09. The nominal diameter and weight of the armor are required to be 9% and 30% larger, respectively, than those of the existing design method. Moreover, partial safety factors considering cost optimization were compared...
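    The idea of an optimum return period can be sketched as a trade-off between capital cost, which grows with the design return period, and discounted expected damage, which falls with it. The cost coefficients below are invented (they are not the Burcharth-Sorensen formulation); with these toy numbers the optimum happens to fall at 100 years.

```python
import math

def lifetime_cost(return_period, lifetime=50, rate=0.05):
    """Capital cost plus discounted expected damage over the structure's
    lifetime, for a breakwater designed against a given return period.
    All cost coefficients are illustrative."""
    capital = 1.0e6 * (1.0 + 0.3 * math.log(return_period))
    annual_failure_prob = 1.0 / return_period
    repair_cost = 2.0e6
    expected_damage = sum(annual_failure_prob * repair_cost / (1.0 + rate) ** y
                          for y in range(1, lifetime + 1))
    return capital + expected_damage

candidate_periods = [5, 10, 25, 50, 100, 200, 500]
optimum_return_period = min(candidate_periods, key=lifetime_cost)
```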

  18. Cost-optimal levels for energy performance requirements: The Concerted Action's input to the Framework Methodology

    OpenAIRE

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike; Erhorn, Hans; Poel, Bart; Hitchin, Roger

    2011-01-01

    The CA conducted a study on experiences and challenges for setting cost optimal levels for energy performance requirements. The results were used as input by the EU Commission in their work of establishing the Regulation on a comparative methodology framework for calculating cost optimal levels of minimum energy performance requirements. In addition to the summary report released in August 2011, the full detailed report on this study is now also made available, just as the EC is about to publ...

  19. Designing cost-effective seawater reverse osmosis system under optimal energy options

    Energy Technology Data Exchange (ETDEWEB)

    Gilau, Asmerom M.; Small, Mitchell J. [Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213 (United States)

    2008-04-15

    Today, three billion people around the world have no access to clean drinking water and about 1.76 billion people live in areas already facing a high degree of water stress. This paper analyzes the cost-effectiveness of a stand-alone, small-scale, renewable-energy-powered seawater reverse osmosis (SWRO) system for developing countries. In this paper, we introduce a new methodology: an energy optimization model which simulates hourly power production from renewable energy sources. Applying the model using the wind and solar radiation conditions of Eritrea, East Africa, we have computed hourly water production for a two-stage SWRO system with a capacity of 35 m³/day. According to our results, specific energy consumption is about 2.33 kW h/m³, a lower value than that achieved in most previous designs. The use of a booster pump, an energy recovery turbine and an appropriate membrane allows the specific energy consumption to be decreased by about 70% compared to a less efficient design without these features. The energy recovery turbine results in a reduction in the water cost of about 41%. Our results show that a wind-powered system is the least costly and a PV-powered system the most expensive, with finished water costs of about $0.50 and $1.00/m³, respectively. By international standards, for example in China, these values are considered economically feasible. Detailed simulations of the RO system design, energy options, and power, water, and life-cycle costs are presented. (author)
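    The conversion from hourly renewable power to permeate volume at the reported specific energy consumption of 2.33 kW h/m³ can be sketched as follows; the hourly power profile is invented, while the 35 m³/day cap comes from the record.

```python
def daily_water_production(hourly_power_kw, sec_kwh_per_m3=2.33,
                           capacity_m3_per_day=35.0):
    """Permeate volume produced from a day of hourly power values (kW over
    one-hour steps, i.e. kWh), capped at the plant's rated capacity."""
    produced = sum(p / sec_kwh_per_m3 for p in hourly_power_kw)
    return min(produced, capacity_m3_per_day)

# invented wind-dominated profile: 24 hourly power values in kW
profile = [4.0] * 6 + [2.0] * 12 + [4.0] * 6
daily_m3 = daily_water_production(profile)
```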

  20. Managing simulation-based training: A framework for optimizing learning, cost, and time

    Science.gov (United States)

    Richmond, Noah Joseph

    This study provides a management framework for optimizing training programs for learning, cost, and time when using simulation based training (SBT) and reality based training (RBT) as resources. Simulation is shown to be an effective means for implementing activity substitution as a way to reduce risk. The risk profiles of 22 US Air Force vehicles are calculated, and the potential risk reduction is calculated under the assumption of perfect substitutability of RBT and SBT. Methods are subsequently developed to relax the assumption of perfect substitutability. The transfer effectiveness ratio (TER) concept is defined and modeled as a function of the quality of the simulator used, and the requirements of the activity trained. The Navy F/A-18 is then analyzed in a case study illustrating how learning can be maximized subject to constraints in cost and time, and also subject to the decision maker's preferences for the proportional and absolute use of simulation. Solution methods for optimizing multiple activities across shared resources are next provided. Finally, a simulation strategy including an operations planning program (OPP), an implementation program (IP), an acquisition program (AP), and a pedagogical research program (PRP) is detailed. The study provides the theoretical tools to understand how to leverage SBT, a case study demonstrating these tools' efficacy, and a set of policy recommendations to enable the US military to better utilize SBT in the future.
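    The transfer effectiveness ratio (TER) idea, maximizing effective learning subject to cost and time budgets, can be sketched as a brute-force grid search over the RBT/SBT mix. The hourly costs and the TER value below are hypothetical, not figures from the study.

```python
def best_training_mix(budget_cost, budget_hours, ter=0.6,
                      cost_rbt=10_000.0, cost_sbt=1_000.0, step=1.0):
    """Maximize effective learning = RBT hours + TER * SBT hours, subject
    to a cost budget and a total-hours budget.  Illustrative parameters."""
    best = (0.0, 0.0, 0.0)  # (learning, rbt_hours, sbt_hours)
    rbt = 0.0
    while rbt <= budget_hours:
        remaining_cost = budget_cost - rbt * cost_rbt
        if remaining_cost >= 0.0:
            sbt = min(budget_hours - rbt, remaining_cost / cost_sbt)
            learning = rbt + ter * sbt
            if learning > best[0]:
                best = (learning, rbt, sbt)
        rbt += step
    return best

learning, rbt_hours, sbt_hours = best_training_mix(100_000.0, 50.0)
```

    With these toy numbers, spending the whole budget on expensive RBT is dominated by a small RBT core topped up with cheap simulator hours.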

  1. Layer-switching cost and optimality in information spreading on multiplex networks.

    Science.gov (United States)

    Min, Byungjoon; Gwak, Sang-Hwan; Lee, Nanoom; Goh, K-I

    2016-02-18

    We study a model of information spreading on multiplex networks, in which agents interact through multiple interaction channels (layers), say online vs. offline communication layers, subject to a layer-switching cost for transmissions across different interaction layers. The model is characterized by a layer-wise, path-dependent transmissibility over a contact, which is dynamically determined by both the incoming and outgoing transmission layers. We formulate an analytical framework to deal with such path-dependent transmissibility and demonstrate the nontrivial interplay between multiplexity and spreading dynamics, including optimality. It is shown that the epidemic threshold and prevalence respond to the layer-switching cost non-monotonically and that the optimal conditions can change in abrupt, non-analytic ways, depending also on the densities of the network layers and the type of seed infections. Our results elucidate the essential role of multiplexity: its explicit consideration should be crucial for realistic modeling and prediction of spreading phenomena on multiplex social networks in an era of ever-diversifying social interaction layers.

  2. Manufacturing enterprise’s logistics operational cost simulation and optimization from the perspective of inter-firm network

    Directory of Open Access Journals (Sweden)

    Chun Fu

    2015-05-01

    Full Text Available Purpose: By studying the case of a Changsha engineering machinery manufacturing firm, this paper aims to identify optimization tactics for reducing an enterprise’s logistics operational cost. Design/methodology/approach: This paper builds a structural model of a manufacturing enterprise’s logistics operational costs from the perspective of the inter-firm network and simulates the model using system dynamics. Findings: It concludes that applying system dynamics to the study of a manufacturing enterprise’s logistics cost control can better reflect the relationships among factors in the system, and that the case firm can optimize its logistics costs by implementing joint distribution. Research limitations/implications: This study still lacks a comprehensive treatment of the number and quantification of the control factors. In the future, we should strengthen the collection of data and information about engineering manufacturing firms and improve the logistics operational cost model. Practical implications: This study puts forward optimization tactics to reduce an enterprise’s logistics operational cost, which is of great significance for an enterprise’s supply chain management optimization and logistics cost control. Originality/value: Differing from the existing literature, this paper builds a structural model of a manufacturing enterprise’s logistics operational costs from the perspective of the inter-firm network and simulates the model using system dynamics.

  3. Optimization of low-cost biosurfactant production from agricultural residues through response surface methodology.

    Science.gov (United States)

    Ebadipour, N; Lotfabad, T Bagheri; Yaghmaei, S; RoostaAzad, R

    2016-01-01

    Biosurfactants are surface-active compounds capable of reducing surface tension and interfacial tension. They are produced by various microorganisms and are promising replacements for chemical surfactants because of their biodegradability, nontoxicity, and ability to be produced from renewable sources. However, a major obstacle to producing biosurfactants at the industrial level is the lack of cost-effectiveness. In the present study, by using corn steep liquor (CSL), a low-cost agricultural waste, not only is the production cost reduced but a higher production yield is also achieved. Moreover, a response surface methodology (RSM) approach based on the Box-Behnken method was applied to optimize the biosurfactant production level. The results showed that biosurfactant production improved around 2.3-fold at the optimum condition, when CSL was at a concentration of 1.88 mL/L and yeast extract was reduced to 25 times less than that used in a basic soybean oil medium (SOM). The predicted and experimental values of the responses were in reasonable agreement (Pred-R(2) = 0.86 and adj-R(2) = 0.94). Optimization led to a drop in raw material price per unit of biosurfactant from $47 to $12/kg. Moreover, the biosurfactant product at a concentration of 84 mg/L could lower the surface tension of twice-distilled water from 72 mN/m to less than 28 mN/m and emulsify an equal volume of kerosene with an emulsification index (E24) of 68% in a two-phase mixture. These capabilities make this biosurfactant applicable in microbial enhanced oil recovery (MEOR), hydrocarbon remediation, and other petroleum industry surfactant applications.
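    The coded Box-Behnken design behind such an RSM optimization can be generated directly: every pair of factors takes the four (±1, ±1) combinations while the remaining factors sit at the centre, plus replicate centre runs. For three factors this yields the standard 12 edge runs plus centre points. A sketch, not tied to the paper's factor levels:

```python
from itertools import combinations, product

def box_behnken(n_factors=3, n_center=3):
    """Coded Box-Behnken design points for a given number of factors."""
    runs = []
    for pair in combinations(range(n_factors), 2):
        for levels in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[pair[0]], run[pair[1]] = levels
            runs.append(run)
    # replicate centre runs allow estimation of pure error
    runs.extend([0] * n_factors for _ in range(n_center))
    return runs

design = box_behnken()
```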

  4. Multiobjective optimization of water distribution systems accounting for economic cost, hydraulic reliability, and greenhouse gas emissions

    Science.gov (United States)

    Wu, Wenyan; Maier, Holger R.; Simpson, Angus R.

    2013-03-01

    In this paper, three objectives are considered for the optimization of water distribution systems (WDSs): the traditional objectives of minimizing economic cost and maximizing hydraulic reliability and the recently proposed objective of minimizing greenhouse gas (GHG) emissions. It is particularly important to include the GHG minimization objective for WDSs involving pumping into storages or water transmission systems (WTSs), as these systems are the main contributors of GHG emissions in the water industry. In order to better understand the nature of tradeoffs among these three objectives, the shape of the solution space and the location of the Pareto-optimal front in the solution space are investigated for WTSs and WDSs that include pumping into storages, and the implications of the interaction between the three objectives are explored from a practical design perspective. Through three case studies, it is found that the solution space is a U-shaped curve rather than a surface, as the tradeoffs among the three objectives are dominated by the hydraulic reliability objective. The Pareto-optimal front of real-world systems is often located at the "elbow" section and lower "arm" of the solution space (i.e., the U-shaped curve), indicating that it is more economic to increase the hydraulic reliability of these systems by increasing pipe capacity (i.e., pipe diameter) compared to increasing pumping power. Solutions having the same GHG emission level but different cost-reliability tradeoffs often exist. Therefore, the final decision needs to be made in conjunction with expert knowledge and the specific budget and reliability requirements of the system.
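    The Pareto-optimal front discussed above can be extracted from any set of candidate designs by non-domination filtering. The sketch below treats all three objectives as minimized (reliability negated); the candidate tuples in the usage are invented.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of objective tuples, every
    objective being minimized (maximize reliability by negating it)."""
    front = []
    for a in solutions:
        dominated = any(b != a and all(b[k] <= a[k] for k in range(len(a)))
                        for b in solutions)
        if not dominated:
            front.append(a)
    return front

# invented (cost, GHG, -reliability) candidates
candidates = [(10, 5, -0.90), (8, 7, -0.90), (12, 4, -0.95), (11, 6, -0.80)]
front = pareto_front(candidates)
```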

  5. Optimal search strategies for detecting cost and economic studies in EMBASE

    Directory of Open Access Journals (Sweden)

    Haynes R Brian

    2006-06-01

    Full Text Available Abstract Background Economic evaluations in the medical literature compare competing diagnosis or treatment methods for their use of resources and their expected outcomes. The best evidence currently available from research regarding both cost and economic comparisons will continue to expand as this type of information becomes more important in today's clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health cost and economics studies from EMBASE. Methods We conducted an analytic survey, comparing hand searches of journals with retrievals from EMBASE for candidate search terms and combinations. Six research assistants read all issues of 55 journals indexed by EMBASE for the publishing year 2000. We rated all articles using purpose and quality indicators and categorized them into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized for purpose (i.e., cost and economics and other clinical topics) and, depending on the purpose, as 'pass' or 'fail' for methodologic rigor. Candidate search strategies were developed for economic and cost studies, then run in the 55 EMBASE journals, and the retrievals were compared with the hand-search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results Combinations of search terms for detecting both cost and economic studies attained levels of 100% sensitivity, with specificity levels of 92.9% and 92.3% respectively. When maximizing for both sensitivity and specificity, the combination of terms for detecting cost studies increased sensitivity by 2.2% over the single term, but at a slight decrease in specificity of 0.9%. The maximized combination of terms
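    The performance figures reported for such search filters follow from a standard 2×2 comparison of filter retrievals against the hand-search gold standard; the counts in the usage below are invented, not the study's data.

```python
def search_filter_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, precision and accuracy of a search
    strategy versus a gold standard (tp/fp/fn/tn: confusion-matrix counts)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

metrics = search_filter_metrics(tp=50, fp=70, fn=0, tn=880)
```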

  6. AUTOMATIC WEB SERVICE SELECTION BY OPTIMIZING COST OF COMPOSITION IN SLAKY COMPOSER USING ASSIGNMENT MINIMIZATION APPROACH

    Directory of Open Access Journals (Sweden)

    P. Sandhya

    2012-12-01

    Full Text Available Web service composition is a means of building enterprises virtually by knitting relevant web services together on the fly. Automatic web service composition is done dynamically at runtime. Extensive research has been done in the field of automatic web service composition. However, all existing work focuses on providing client-oriented results, and hence there is little industry adoption of composition technology. In this paper we propose a new service collaboration stack that composes with realistic business metrics of a provider in addition to client metrics. Some of the service provider metrics include time planning, profit management, native intelligence, user adoption, environment, market scenario, vision and industry adoption. In this paper we focus on enhancing industry adoption by optimizing the cost of service composition. We propose the SLAKY composer, which treats the assignment of appropriate services during composition as an assignment minimization problem to reduce the cost of composition. We also extend the OWL-S profile sub-ontology to include cost as a service parameter.
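    Treating service selection as an assignment minimization problem can be sketched with a brute-force search over all slot-to-service permutations (fine for small matrices; the Hungarian algorithm is the scalable choice, and this is a stand-in for SLAKY's actual method). The cost matrix in the usage is invented.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Assign each composition slot a distinct service so the summed cost
    is minimal.  cost[slot][service] is the cost of that pairing."""
    n = len(cost)
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[slot][perm[slot]] for slot in range(n))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

assignment, total_cost = min_cost_assignment([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
```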

  7. Cost and surface optimization of a remote photovoltaic system for two kinds of panels' technologies

    Science.gov (United States)

    Avril, S.; Arnaud, G.; Colin, H.; Montignac, F.; Mansilla, C.; Vinard, M.

    2011-10-01

    Stand-alone photovoltaic (PV) systems are one of the promising electrification solutions to cover the demand of remote consumers, especially when coupled with a storage solution that would both increase the productivity of power plants and reduce the areas dedicated to energy production. This short communication presents a multi-objective design of a remote PV system coupled to battery and hydrogen storage systems, simultaneously minimizing the total levelized cost and the occupied area while fulfilling a constraint of consumer satisfaction. For this task, a multi-objective code based on particle swarm optimization was used to find the best combination of different energy devices. Both short- and mid-term scenarios based on forecast assumptions have been investigated. An application to the site of La Nouvelle on the French overseas island of La Réunion is proposed. It points to a strong cost advantage of using Heterojunction with Intrinsic Thin layer (HIT) rather than crystalline silicon (c-Si) cells in the short term. However, the discrimination between these two PV cell technologies is less obvious for the mid term: a strong constraint on the occupied area will favor HIT, whereas a strong constraint on the cost will favor c-Si.

  8. Visualizing Interstellar's Wormhole

    CERN Document Server

    James, Oliver; Franklin, Paul; Thorne, Kip S

    2015-01-01

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie. (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie. (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres. (iv) Implementing this map, for example in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when...

  9. Staged cost optimization of urban storm drainage systems based on hydraulic performance in a changing environment

    Directory of Open Access Journals (Sweden)

    M. Maharjan

    2008-06-01

    Full Text Available Urban flooding causes large economic losses, property damage and loss of lives. The impact of environmental changes, mainly urbanization and climatic change, leads to increased runoff and increased peak flows which the drainage system must be able to cope with to overcome possible damage and inconvenience caused by the induced flooding. Allowing for detention storage to complement the capacity of the drainage system network is one of the approaches to reduce urban floods. The traditional practice was to design systems against stationary environmental forcings – including design rainfall, landuse, etc. Due to the rapid change in the climate and environment, this approach is no longer economically viable and safe, and explicit consideration of changes that gradually take place during the life-time of the drainage system is warranted. In this paper, a staged cost optimization tool based on the hydraulic performance of the drainage system is presented. A one-dimensional hydraulic model is used for hydraulic evaluation of the network, together with a genetic algorithm based optimization tool, to determine optimal intervention timings and amounts throughout the lifespan of the drainage network. The model was applied in a case study area in the city of Porto Alegre, Brazil. It was concluded that considerable financial savings and/or an additional level of flood-safety can be achieved by approaching the design problem as a staged plan rather than a one-off scheme.

  10. Staged cost optimization of urban storm drainage systems based on hydraulic performance in a changing environment

    Science.gov (United States)

    Maharjan, M.; Pathirana, A.; Gersonius, B.; Vairavamoorthy, K.

    2009-04-01

    Urban flooding causes large economic losses, property damage and loss of lives. The impact of environmental changes, mainly urbanization and climatic change, leads to increased runoff and peak flows which the drainage system must be able to cope with to reduce potential damage and inconvenience. Allowing for detention storage to complement the conveyance capacity of the drainage system network is one of the approaches to reduce urban floods. Contemporary practice is to design systems against stationary environmental forcings - including design rainfall, landuse, etc. Due to the rapid change in the climate and the urban environment, this approach is no longer appropriate, and explicit consideration of gradual changes during the life-time of the drainage system is warranted. In this paper, a staged cost optimization tool based on the hydraulic performance of the drainage system is presented. A one-dimensional hydraulic model is used for hydraulic evaluation of the network, together with a genetic algorithm based optimization tool, to determine optimal intervention timings and responses over the analysis period. The model was applied in a case study area in the city of Porto Alegre, Brazil. It was concluded that considerable financial savings and/or an additional level of flood-safety can be achieved by approaching the design problem as a staged plan rather than a one-off scheme.

  11. Staged cost optimization of urban storm drainage systems based on hydraulic performance in a changing environment

    Directory of Open Access Journals (Sweden)

    M. Maharjan

    2009-04-01

    Full Text Available Urban flooding causes large economic losses, property damage and loss of lives. The impact of environmental changes, mainly urbanization and climatic change, leads to increased runoff and peak flows which the drainage system must be able to cope with to reduce potential damage and inconvenience. Allowing for detention storage to complement the conveyance capacity of the drainage system network is one of the approaches to reduce urban floods. Contemporary practice is to design systems against stationary environmental forcings – including design rainfall, landuse, etc. Due to the rapid change in the climate and the urban environment, this approach is no longer appropriate, and explicit consideration of gradual changes during the life-time of the drainage system is warranted. In this paper, a staged cost optimization tool based on the hydraulic performance of the drainage system is presented. A one-dimensional hydraulic model is used for hydraulic evaluation of the network, together with a genetic algorithm based optimization tool, to determine optimal intervention timings and responses over the analysis period. The model was applied in a case study area in the city of Porto Alegre, Brazil. It was concluded that considerable financial savings and/or an additional level of flood-safety can be achieved by approaching the design problem as a staged plan rather than a one-off scheme.

  12. Optimal cost and allocation for UPFC using HRGAPSO to improve power system security and loadability

    Energy Technology Data Exchange (ETDEWEB)

    Marouani, I.; Guesmi, T.; Hadj Abdallah, H.; Ouali, A. [Sfax Engineering National School, Electrical Department, BP: W, 3038 Sfax (Tunisia)

    2011-07-01

    With electricity market deregulation, the number of unplanned power exchanges increases. Some lines located on particular paths may become overloaded. It is advisable for the transmission system operator to have another way of controlling power flows in order to permit a more efficient and secure use of transmission lines. FACTS devices (Flexible AC Transmission Systems) could be a means of carrying out this function. In this paper, a unified power flow controller (UPFC) is located in order to maximize system loadability and the security index. The optimization problem is solved using a new evolutionary learning algorithm based on a hybrid of a real genetic algorithm (RGA) and particle swarm optimization (PSO), called HRGAPSO. The Newton-Raphson load flow algorithm is modified to consider the insertion of UPFC devices in the network. Simulation results validate the efficiency of this approach in improving security, reducing power system losses, minimizing the installation cost of the UPFC and increasing the power transfer capability of the existing power transmission lines. The optimization was performed on a 14-bus test system and implemented using MATLAB.

  13. Optimal cost and allocation for UPFC using HRGAPSO to improve power system security and loadability

    Directory of Open Access Journals (Sweden)

    Marouani I., Guesmi T., Hadj Abdallah H., Ouali A.

    2011-09-01

    Full Text Available With electricity market deregulation, the number of unplanned power exchanges increases. Some lines located on particular paths may become overloaded. It is advisable for the transmission system operator to have another way of controlling power flows in order to permit a more efficient and secure use of transmission lines. FACTS devices (Flexible AC Transmission Systems) could be a means of carrying out this function. In this paper, a unified power flow controller (UPFC) is located in order to maximize system loadability and the security index. The optimization problem is solved using a new evolutionary learning algorithm based on a hybrid of a real genetic algorithm (RGA) and particle swarm optimization (PSO), called HRGAPSO. The Newton-Raphson load flow algorithm is modified to consider the insertion of UPFC devices in the network. Simulation results validate the efficiency of this approach in improving security, reducing power system losses, minimizing the installation cost of the UPFC and increasing the power transfer capability of the existing power transmission lines. The optimization was performed on a 14-bus test system and implemented using MATLAB.

  14. Tårs 10000 m2 CSP + Flat Plate Solar Collector Plant - Cost-Performance Optimization of the Design

    DEFF Research Database (Denmark)

    Perers, Bengt; Furbo, Simon; Tian, Zhiyong

    2016-01-01

    , was established. The optimization showed that there was a synergy in combining CSP and FP collectors. Even though the present cost per m² of the CSP collectors is high, the total energy cost is minimized by installing a combination of collectors in such a solar heating plant. It was also found that the CSP...... collectors could increase flexibility in the control strategy of the plant. The TRNSYS-Genopt model is based on individually validated component models and collector parameters from experiments. Optimization of the cost performance of the plant has been conducted in this paper. The simulation model remains...... to be validated with annual measured data from the plant....

  15. Optimization of costs in the lighting planning. Total cost of ownership of a lighting device; Kosten optimieren bei der Lichtplanung. Total Cost of Ownership einer Beleuchtungsanlage

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Elke

    2010-11-15

    Not only the initial costs, but the total costs over the life cycle of the plant are crucial for an economical lighting installation. Apart from acquisition, the 'Total Cost of Ownership' perspective covers the costs of energy, replacement lamps and maintenance. Indirect follow-up costs, such as the additional air-conditioning expenditure needed to compensate for the heat load of the lighting, are likewise considered.
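    The TCO breakdown described here reduces to a discounted sum of yearly cost components on top of the acquisition cost; every figure and the cooling surcharge factor below are illustrative.

```python
def lighting_tco(acquisition, power_kw, hours_per_year, price_per_kwh,
                 lamps_per_year, lamp_price, maintenance_per_year,
                 cooling_factor=0.3, years=15, rate=0.04):
    """Discounted total cost of ownership of a lighting installation:
    acquisition + yearly (energy + cooling surcharge + lamps + maintenance).
    The cooling surcharge models the extra air-conditioning energy needed
    to remove the lighting heat load."""
    energy = power_kw * hours_per_year * price_per_kwh
    yearly = (energy + cooling_factor * energy
              + lamps_per_year * lamp_price + maintenance_per_year)
    discounted = sum(yearly / (1.0 + rate) ** y for y in range(1, years + 1))
    return acquisition + discounted
```

    Comparing two luminaires then amounts to comparing `lighting_tco(...)` values rather than acquisition prices alone.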

  16. Finite-Horizon Approximate Optimal Guaranteed Cost Control of Uncertain Nonlinear Systems With Application to Mars Entry Guidance.

    Science.gov (United States)

    Wu, Huai-Ning; Li, Mao-Mao; Guo, Lei

    2015-07-01

    This paper studies the finite-horizon optimal guaranteed cost control (GCC) problem for a class of time-varying uncertain nonlinear systems. The aim is to find a robust state-feedback controller such that the closed-loop system not only has a bounded response over a finite duration of time for all admissible uncertainties but also a minimal guaranteed cost. A neural network (NN) based approximate optimal GCC design is developed. Initially, by modifying the cost function to account for the nonlinear perturbation of the system, the optimal GCC problem is transformed into a finite-horizon optimal control problem for the nominal system. Subsequently, with the help of the modified cost function together with a parametrized bounding function for all admissible uncertainties, the solution to the optimal GCC problem is given in terms of a parametrized Hamilton-Jacobi-Bellman (PHJB) equation. Then, an NN method is developed to approximately solve the PHJB equation offline and thus obtain the nearly optimal GCC policy. Furthermore, the convergence of the approximate PHJB equation and the robust admissibility of the nearly optimal GCC policy are also analyzed. Finally, by applying the proposed design method to the entry guidance problem of the Mars lander, simulation results show the effectiveness of the proposed controller.

  17. Cost Optimization of Water Resources in Pernambuco, Brazil: Valuing Future Infrastructure and Climate Forecasts

    Science.gov (United States)

    Kumar, Ipsita; Josset, Laureline; Lall, Upmanu; Cavalcanti e Silva, Erik; Cordeiro Possas, José Marcelo; Cauás Asfora, Marcelo

    2017-04-01

    Optimal management of water resources is paramount in semi-arid regions to limit strains on society and the economy due to limited water availability. This problem is likely to become even more recurrent as droughts are projected to intensify in the coming years, causing increasing stress on the water supply in the affected areas. The state of Pernambuco, in Northeast Brazil, is one such case: one of its largest reservoirs, Jucazinho, has been at approximately 1% capacity throughout 2016, making the infrastructural challenges in the region very real. To ease some of these stresses and reduce the vulnerabilities of the water system, a new source of water from the Rio São Francisco is currently under development. Until its completion, water trucks have been regularly mandated to cover water deficits, but at a much higher cost, thus endangering the financial sustainability of the region. In this paper, we propose to evaluate the sustainability of the considered water system by formulating an optimization problem and determining the optimal operations to be conducted. We start with a comparative study of the current and future infrastructure's capability to cope with various climate conditions. We show that while the Rio São Francisco project mitigates the problems, neither implementation prevents failure, and both require reliance on water trucks during prolonged droughts. We also study the cost associated with the provision of water to the municipalities for several streamflow forecasts. In particular, we investigate the value of climate predictions for adapting operational decisions by comparing the results with a fixed policy derived from historical data. We show that the use of climate information reduces the water deficit and the overall operational costs. We conclude with a discussion of the potential of the approach to evaluate future infrastructure developments. This study is funded by the Inter-American Development Bank (IADB), and in
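    The value of climate forecasts for operations can be illustrated with a two-scenario toy model in which any shortfall left by the reservoir must be covered by far more expensive water trucks; probabilities, inflows and unit costs are all invented.

```python
def expected_supply_cost(policy, scenarios, demand=100.0,
                         reservoir_cost=1.0, truck_cost=8.0):
    """Expected cost of meeting demand when `policy` maps a forecast label
    to a planned reservoir release; water trucks cover any shortfall."""
    total = 0.0
    for prob, forecast, inflow in scenarios:
        release = min(policy(forecast), inflow)  # cannot release more than inflow
        trucked = max(demand - release, 0.0)
        total += prob * (release * reservoir_cost + trucked * truck_cost)
    return total

scenarios = [(0.5, "wet", 120.0), (0.5, "dry", 40.0)]
fixed_cost = expected_supply_cost(lambda f: 80.0, scenarios)    # ignores forecast
adaptive_cost = expected_supply_cost(
    lambda f: 120.0 if f == "wet" else 40.0, scenarios)         # uses forecast
```

    Here the forecast-informed policy beats the fixed historical policy, mirroring the paper's point that climate information reduces overall operational costs.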

  18. Detection of interstellar $CH_{3}$

    CERN Document Server

    Feuchtgruber, H; Van Dishoeck, E F; Wright, C M

    2000-01-01

    Observations with the Short Wavelength Spectrometer (SWS) onboard the {\it Infrared Space Observatory} (ISO) have led to the first detection of the methyl radical ${\rm CH_3}$ in the interstellar medium. The ...

  19. Turbulence in the Interstellar Medium

    CERN Document Server

    Falceta-Goncalves, D; Falgarone, E; Chian, A C -L

    2014-01-01

    Turbulence is ubiquitous in the interstellar medium and plays a major role in several processes, such as the formation of dense structures and stars, the stability of molecular clouds, the amplification of magnetic fields, and the re-acceleration and diffusion of cosmic rays. Despite its importance, interstellar turbulence, like turbulence in general, is far from being fully understood. In this review we present the basics of turbulence physics, focusing on the statistics of its structure and energy cascade. We explore the physics of compressible and incompressible turbulent flows, as well as magnetized cases. The most relevant observational techniques that provide quantitative insight into interstellar turbulence are also presented. We also discuss the main difficulties in developing a three-dimensional view of interstellar turbulence from these observations. Finally, we briefly present what could be the main sources of turbulence in the interstellar medium.

  20. Optimized SU-8 Processing for Low-Cost Microstructures Fabrication without Cleanroom Facilities

    Directory of Open Access Journals (Sweden)

    Vânia C. Pinto

    2014-09-01

    Full Text Available The study and optimization of epoxy-based negative photoresist (SU-8) microstructures through a low-cost process, without the need for a cleanroom facility, is presented in this paper. It is demonstrated that the ultraviolet (UV) exposure equipment commonly used in the Printed Circuit Board (PCB) industry can replace the more expensive and less available equipment, such as the mask aligner that has been used for SU-8 patterning over the last 15 years. Moreover, high-transparency masks, printed on a photomask, are used instead of expensive chromium masks. The fabrication of well-defined SU-8 microstructures with aspect ratios greater than 20 is successfully demonstrated with these facilities. The viability of using gray-scale technology in the photomasks for the fabrication of 3D microstructures is also reported. Finally, SU-8 microstructures for different applications are shown throughout the paper.

  1. Multiple sequence alignment with arbitrary gap costs: computing an optimal solution using polyhedral combinatorics.

    Science.gov (United States)

    Althaus, Ernst; Caprara, Alberto; Lenhof, Hans-Peter; Reinert, Knut

    2002-01-01

    Multiple sequence alignment is one of the dominant problems in computational molecular biology. Numerous scoring functions and methods have been proposed, most of which result in NP-hard problems. In this paper we propose, for the first time, a general formulation for multiple alignment with arbitrary gap costs based on an integer linear program (ILP). In addition we describe a branch-and-cut algorithm to effectively solve the ILP to optimality. We evaluate the performance of our approach in terms of running time and quality of the alignments using the BAliBase database of reference alignments. The results show that our implementation ranks among the best programs developed so far.
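
    As a much-simplified illustration of gap-cost scoring (not the authors' ILP, which handles the multiple-sequence case), the classical dynamic-programming recurrence for pairwise global alignment accepts an arbitrary gap-cost function g(k). The scoring parameters below are illustrative assumptions:

```python
def align(a, b, match=2, mismatch=-1, gap=lambda k: 2 + k):
    """Pairwise global alignment score with an arbitrary gap-cost
    function g(k) (the classical O(n*m*(n+m)) recurrence)."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    S = [[NEG] * (m + 1) for _ in range(n + 1)]
    S[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            best = NEG
            if i and j:  # extend by a match or mismatch
                best = S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            for k in range(1, i + 1):  # gap of length k in b
                best = max(best, S[i - k][j] - gap(k))
            for k in range(1, j + 1):  # gap of length k in a
                best = max(best, S[i][j - k] - gap(k))
            S[i][j] = best
    return S[n][m]
```

    For example, align("ACT", "AT") evaluates to 1: two matches (+4) minus one length-1 gap (g(1) = 3).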

  2. OSP(TM): Optimal Steel Making Process (Low cost steelmaking plus ecological benefits)

    Energy Technology Data Exchange (ETDEWEB)

    Wunsche, R. [EMPCO Ltd., Whitby, ON (Canada)

    2001-07-01

    The Optimal Steelmaking Process (OSP)(TM) utilizes a non-electric, multi-oxy-fuel powered metallurgical furnace. It is proposed that this furnace can replace the electric arc furnace for scrap recycling. The OSP(TM) process uses a novel scrap preheating apparatus, called ROTOBIN(TM), prior to feeding the scrap into the metallurgical furnace. ROTOBIN(TM) makes use of heat recovered from the hot waste gases emitted from the furnace exhaust and, in doing so, simultaneously reduces the contaminants in the scrap. Energy for the process is provided by the multi-oxy-fuel systems UNILANCE(TM) or COMBILANCE(TM). The benefits of the system in terms of reduced operating conversion costs, increased productivity, improved energy efficiency, improved safety and overall working environment, and reduced generation of air and solid pollutants are described. 3 tabs.

  3. Operation Cost Minimization of Droop-Controlled DC Microgrids Based on Real-Time Pricing and Optimal Power Flow

    DEFF Research Database (Denmark)

    Li, Chendan; de Bosio, Federico; Chaudhary, Sanjay Kumar

    2015-01-01

    In this paper, an optimal power flow problem is formulated in order to minimize the total operation cost by considering real-time pricing in DC microgrids. Each generation resource in the system, including the utility grid, is modeled in terms of operation cost, which combines the cost...... problem is solved in a heuristic way by using genetic algorithms. In order to test the proposed algorithm, a six-bus droop-controlled DC microgrid is used as a case-study. The obtained simulation results show that under variable renewable generation, load, and electricity prices, the proposed method can...... successfully dispatch the resources in the microgrid with lower total operation costs....
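
    To illustrate the kind of dispatch problem described above, the sketch below runs a toy genetic algorithm over a three-source droop-controlled DC microgrid with quadratic cost curves. All parameters (cost coefficients, load, capacity limits) are invented for the example and are not the paper's case-study data:

```python
import random

# Hypothetical cost curves c(P) = a*P + b*P^2 and capacities for three sources.
SOURCES = [(0.10, 0.010, 6.0), (0.12, 0.020, 6.0), (0.20, 0.005, 6.0)]
LOAD = 9.0  # demand to be balanced (kW)

def total_cost(p):
    return sum(a * x + b * x * x for (a, b, _), x in zip(SOURCES, p))

def fitness(genes):
    # genes fix the first two injections; the third covers the power balance
    p = list(genes) + [LOAD - sum(genes)]
    penalty = 100.0 * sum(max(0.0, -x) + max(0.0, x - cap)
                          for (_, _, cap), x in zip(SOURCES, p))
    return total_cost(p) + penalty, p

def solve(pop_size=40, gens=150, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 6), rng.uniform(0, 6)] for _ in range(pop_size)]
    for _ in range(gens):
        ranked = sorted(pop, key=lambda g: fitness(g)[0])
        pop = ranked[:10]                        # elitist survivors
        while len(pop) < pop_size:
            mom, dad = rng.sample(ranked[:20], 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.2) for x, y in zip(mom, dad)]
            pop.append(child)                    # blend crossover + mutation
    return fitness(min(pop, key=lambda g: fitness(g)[0]))
```

    With these assumed parameters the equal-marginal-cost optimum dispatches roughly (5.6, 2.3, 1.1) kW, and the GA lands close to it; a naive equal split of the load costs noticeably more.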

  4. Neural-network-based online HJB solution for optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems.

    Science.gov (United States)

    Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong

    2014-12-01

    In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.

  5. The Interstellar Conspiracy

    Science.gov (United States)

    Johnson, Les; Matloff, Gregory L.

    2005-01-01

    If we were designing a human-carrying starship that could be launched in the not-too-distant future, it would almost certainly not use a warp drive to instantaneously bounce around the universe, as is done in Isaac Asimov's classic Foundation series or in episodes of Star Trek or Star Wars. Sadly, those starships that seem to be within technological reach could not even travel at high relativistic speeds, as does the interstellar ramjet in Poul Anderson's Tau Zero. Warp speeds seem to be well outside the realm of currently understood physical law; proton-fusing ramjets may never be technologically feasible. Perhaps fortunately in our terrorist-plagued world, the economics of antimatter may never be attractive for large-scale starship propulsion. But interstellar travel will be possible within a few centuries, although it will certainly not be as fast as we might prefer. If humans learn how to hibernate, perhaps we will sleep our way to the stars, as does the crew in A. E. van Vogt's Far Centaurus. However, as discussed in a landmark paper in The Journal of the British Interplanetary Society, the most feasible approach to transporting a small human population to the planets (if any) of Alpha Centauri is the worldship. Such craft have often been featured in science fiction; see for example Arthur C. Clarke's Rendezvous with Rama and Robert A. Heinlein's Orphans of the Sky. Worldships are essentially mobile versions of the O'Neill free-space habitats. Constructed mostly from lunar and/or asteroidal materials, these solar-powered, multi-kilometer-dimension structures could house 10,000 to 100,000 humans in Earth-approximating environments. Artificial gravity would be provided by habitat rotation, and cosmic ray shielding would be provided by passive methods, such as habitat atmosphere and mass shielding, or by magnetic fields. A late 21st century space-habitat venture might support itself economically by constructing large solar-powered satellites to beam energy back to

  6. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    Directory of Open Access Journals (Sweden)

    Maciej Malawski

    2015-01-01

    Full Text Available This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from the synthetic workflows and from general purpose cloud benchmarks, as well as from the data measured in our own experiments with Montage, an astronomical application, executed on Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
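
    A heavily simplified version of the per-level decision (one level of identical tasks, hourly billing, tasks run serially on each instance) can be brute-forced as below. The instance catalogue is hypothetical, not the paper's measured cloud data:

```python
import math

# Hypothetical instance types: (name, hourly price in $, relative speed).
TYPES = [("small", 0.10, 1.0), ("large", 0.40, 2.5)]

def cheapest_assignment(n_tasks, task_hours, level_deadline, max_instances=20):
    """Cheapest (cost, type, count) for one workflow level under a deadline,
    paying for every started hour on every instance."""
    best = None
    for name, price, speed in TYPES:
        runtime = task_hours / speed                  # one task on this type
        for m in range(1, max_instances + 1):
            makespan = math.ceil(n_tasks / m) * runtime   # serial batches
            if makespan > level_deadline:
                continue                              # misses the deadline
            cost = m * math.ceil(makespan) * price        # hourly billing
            if best is None or cost < best[0]:
                best = (cost, name, m)
    return best
```

    For instance, ten 1-hour tasks under a 5-hour deadline are served cheapest by two "small" instances here; the real model additionally handles multiple clouds, per-cloud instance limits, and data transfer.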

  7. Multi-objective optimization of aircraft design for emission and cost reductions

    Institute of Scientific and Technical Information of China (English)

    Wang Yu; Yin Hailian; Zhang Shuai; Yu Xiongqing

    2014-01-01

    Pollutant gases emitted from the civil jet are doing more and more harm to the environment with the rapid development of the global commercial aviation transport. Low environmental impact has become a new requirement for aircraft design. In this paper, the estimation method for emission in the aircraft conceptual design stage is improved based on the International Civil Aviation Organization (ICAO) aircraft engine emissions databank and polynomial curve fitting methods. The greenhouse gas emission (CO2 equivalent) per seat per kilometer is proposed to measure the emissions. An approximate sensitivity analysis and a multi-objective optimization of aircraft design for tradeoff between greenhouse effect and direct operating cost (DOC) are performed with five geometry variables of wing configuration and two flight operational parameters. The results indicate that reducing the cruise altitude and Mach number may result in a decrease of the greenhouse effect but an increase of DOC. And the two flight operational parameters have a greater effect on the emissions than the wing configuration. The Pareto-optimal front shows that a decrease of 29.8% in DOC is attained at the expense of an increase of 10.8% in greenhouse gases.

  8. Multi-objective optimization of aircraft design for emission and cost reductions

    Directory of Open Access Journals (Sweden)

    Wang Yu

    2014-02-01

    Full Text Available Pollutant gases emitted from the civil jet are doing more and more harm to the environment with the rapid development of the global commercial aviation transport. Low environmental impact has become a new requirement for aircraft design. In this paper, the estimation method for emission in the aircraft conceptual design stage is improved based on the International Civil Aviation Organization (ICAO) aircraft engine emissions databank and polynomial curve fitting methods. The greenhouse gas emission (CO2 equivalent) per seat per kilometer is proposed to measure the emissions. An approximate sensitivity analysis and a multi-objective optimization of aircraft design for tradeoff between greenhouse effect and direct operating cost (DOC) are performed with five geometry variables of wing configuration and two flight operational parameters. The results indicate that reducing the cruise altitude and Mach number may result in a decrease of the greenhouse effect but an increase of DOC. And the two flight operational parameters have a greater effect on the emissions than the wing configuration. The Pareto-optimal front shows that a decrease of 29.8% in DOC is attained at the expense of an increase of 10.8% in greenhouse gases.

  9. Advanced Algorithm for Optimizing the Deployment Cost of Passive Optical Networks

    Directory of Open Access Journals (Sweden)

    Pavel Lafata

    2013-01-01

    Full Text Available The deployment of passive optical networks (PONs) is slow today, especially in Europe, because completely new optical infrastructures must be installed in the last-mile segments of access networks, which is always a very expensive process. One possibility is to design economically effective topologies and to optimize the deployment cost. This article describes a method for evaluating an algorithm that designs suboptimal economic solutions and topologies for PONs, focusing on optimizing the constructional length of distribution networks. While typical PON topologies are star or tree-star topologies, the first part of this article introduces a new subalgorithm for estimating the minimum star topology. The next section evaluates two subalgorithms for solving minimum constructional length problems. Finally, all these parts are merged into a complete algorithm that uses a clustering technique to solve for optimum topologies. However, the current version of the presented algorithm is purely based on mathematical theory and was implemented in the Matlab environment. It is therefore only able to design theoretical optimum topologies, without taking external conditions and real limitations into account. These real conditions will be implemented in the future, so that the algorithm can also be used for practical applications.
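
    The "minimum star topology" subproblem can be illustrated with a brute-force 1-median choice of the star centre (e.g. a passive splitter location); this toy sketch ignores the clustering stage and all real deployment constraints:

```python
import math

def best_star_center(nodes):
    """Node minimizing the total Euclidean cable length of a star
    topology connecting it to every other node (brute-force 1-median)."""
    return min(nodes, key=lambda c: sum(math.dist(c, p) for p in nodes))
```

    For four nodes at (0,0), (2,0), (1,0) and (1,5), the centre (1,0) wins with a total cable length of 7.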

  10. Use of software to optimize time and costs in the elaboration of the basic drawing

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Tchaikowisky M. [Faculdade de Tecnologia e Ciencias (FTC), Itabuna, BA (Brazil); Bresci, Claudio T.; Franca, Carlos M.M. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In order to choose areas to implement pipe yards, terminals, campsites, the pipeline path and other areas of assistance for the construction and assembly of the pipeline, previous location studies are necessary to elaborate the basic drawing. However, it is not always possible to contract a company registered in aerial survey to elaborate this type of drawing, whether for cost or time reasons, and the lack of a basic drawing can lead to an erroneous choice, or to one which reflects an exaggerated estimate or inadequate logistics. To minimize costs and optimize time without compromising quality, this study proposes the use of 'virtual globe' type software for mapping and geographic location, available on the internet, to assist in gathering the geographical, altimetric, hydrographic, road, environmental, socioeconomic and aerial image information necessary to elaborate the basic drawing. This article includes a case study of the Cacimbas-Barra do Riacho oil pipeline project and a proposed procedure to be used in the elaboration of basic drawings using data generated with the referred software. In 2007 the Pipeline Construction and Assembly sector, CMDPI unit of PETROBRAS Engenharia/IETEG/IEDT, was designated to select an area of 2 hectares where the storage yard for pipes to be used in the Cacimbas-Barra do Riacho oil pipeline in Espirito Santo state would be constructed. The area was chosen following some pre-requisites and using, as the main resources in gathering the information necessary to select the area, 'virtual globe' type software and CAD software with the capacity to import images and altimetric identification as well as to export drawing data to the mentioned software. The activity, much simpler than hiring an aerial survey, surpassed time and cost expectations while complying with the necessary technical demands. (author)

  11. Discovery of Interstellar CF+

    CERN Document Server

    Neufeld, D A; Menten, K M; Wolfire, M G; Black, J H; Schuller, F; Müller, H; Thorwirth, S; Gusten, R; Philipp, S

    2006-01-01

    We discuss the first astronomical detection of the CF+ (fluoromethylidynium) ion, obtained by observations of the J=1-0 (102.6 GHz), J=2-1 (205.2 GHz) and J=3-2 (307.7 GHz) rotational transitions toward the Orion Bar region. Our search for CF+, carried out using the IRAM 30m and APEX 12m telescopes, was motivated by recent theoretical models that predict CF+ abundances of a few times 1.E-10 in UV-irradiated molecular regions where C+ is present. The CF+ ion is produced by exothermic reactions of C+ with HF. Because fluorine atoms can react exothermically with H2, HF is predicted to be the dominant reservoir of fluorine, not only in well-shielded regions but also in the surface layers of molecular clouds where the C+ abundance is large. The observed CF+ line intensities imply the presence of CF+ column densities of at least 1.E+12 cm-2 over a region of size at least ~ 1 arcmin, in good agreement with theoretical predictions. They provide support for our current theories of interstellar fluorine chemistry, whic...

  12. Interstellar molecular clouds

    Science.gov (United States)

    Bally, J.

    1986-04-01

    The physical properties of the molecular phase of the interstellar medium are studied with regard to star formation and the structure of the Galaxy. Most observations of molecular clouds are made with single-dish, high-surface precision radio telescopes, with the best resolution attainable at 0.2 to 1 arcmin; the smallest structures that can be resolved are of order 10 to the 17th cm in diameter. It is now believed that: (1) most of the mass of the Galaxy is in the form of giant molecular clouds; (2) the largest clouds and those responsible for most massive star formation are concentrated in spiral arms; (3) the molecular clouds are the sites of perpetual star formation, and are significant in the chemical evolution of the Galaxy; (4) giant molecular clouds determine the evolution of the kinematic properties of galactic disk stars; (5) the total gas content is diminishing with time; and (6) most clouds have supersonic internal motions and do not form stars on a free-fall time scale. It is concluded that though progress has been made, more advanced instruments are needed to inspect the processes operating within stellar nurseries and to study the distribution of the molecular clouds in more distant galaxies. Instruments presently under construction which are designed to meet these ends are presented.

  13. Interstellar Solid Hydrogen

    CERN Document Server

    Lin, Ching Yeh; Walker, Mark A

    2011-01-01

    We consider the possibility that solid molecular hydrogen is present in interstellar space. If so, cosmic rays and energetic photons cause ionisation in the solid, leading to the formation of H6+. This ion is not produced by gas-phase reactions, and its radiative transitions therefore provide a signature of solid H2 in the astrophysical context. The vibrational transitions of H6+ are yet to be observed in the laboratory, but we have characterised them in a quantum-theoretical treatment of the molecule; our calculations include anharmonic corrections, which are large. Here we report on those calculations and compare our results with astronomical data. In addition to the H6+ isotopomer, we focus on the deuterated species (HD)3+, which is expected to dominate at low ionisation rates as a result of isotopic condensation reactions. We can reliably predict the frequencies of the fundamental bands for five modes of vibration. For (HD)3+ all of these are found to lie close to some of the strongest of the pervasive mid-in...

  14. Interstellar Dust Close to the Sun

    CERN Document Server

    Frisch, Priscilla C

    2012-01-01

    The low density interstellar medium (ISM) close to the Sun and inside of the heliosphere provides a unique laboratory for studying interstellar dust grains. Grain characteristics in the nearby ISM are obtained from observations of interstellar gas and dust inside of the heliosphere and the interstellar gas towards nearby stars. Comparison between the gas composition and solar abundances suggests that grains are dominated by olivines and possibly some form of iron oxide. Measurements of the interstellar Ne/O ratio by the Interstellar Boundary Explorer spacecraft indicate that a high fraction of interstellar oxygen in the ISM must be depleted onto dust grains. Local interstellar abundances are consistent with grain destruction in ~150 km/s interstellar shocks, provided that the carbonaceous component is hydrogenated amorphous carbon and carbon abundances are correct. Variations in relative abundances of refractories in gas suggest variations in the history of grain destruction in nearby ISM. The large observed ...

  15. Premium cost optimization of operational and maintenance of green building in Indonesia using life cycle assessment method

    Science.gov (United States)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Budiman, Rachmat; Riswanto

    2017-06-01

    Building has a big impact on environmental development. There are three general motives in building, namely the economy, society, and the environment. Total completed building construction in Indonesia increased by 116% during 2009 to 2011, and energy consumption increased by 11% within the last three years. In fact, 70% of energy consumption is used for electricity needs in commercial buildings, which has led to an increase of greenhouse gas emissions by 25%. The green building life cycle is known for its high upfront cost in Indonesia. The optimization in this research aims to improve building performance through several green concept alternatives. The research methodology is a mixed method of qualitative and quantitative approaches, through questionnaire surveys and a case study. The success of the optimization functions in the existing green building is assessed in the operational and maintenance phase with the Life Cycle Assessment method. Optimization results were chosen based on the largest building life cycle efficiency and the most effective cost recovery.

  16. Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability

    Science.gov (United States)

    2011-11-10

    cost PC, the inspection cost IC and an expected variable cost EVC [2, 32]. These costs are a function of quality and reliability. The lifecycle... expected variable cost EVC is a function of the time-dependent reliability, which is used to estimate the expected present value of repairing and/or

  17. Optimizing cost and minimizing energy loss in the recirculating race-track design of the LHeC electron linac

    CERN Document Server

    Skrabacz, J

    2008-01-01

    The objective of this project is to propose an optimal design of a recirculating electron linac for a future LHC-based e-p collider, the LHeC [1, 2]. Primary considerations are the cost, structure, shape, and size of the recirculating track, the optimal number of revolutions through which the e-beam should be accelerated, and the radiative energy loss in the bends. Secondary considerations are transverse emittance growth due to radiation, the number of dipoles needed in order to maintain an upper bound on the emittance growth, the average length of such dipoles, and the maximum bending dipole field needed to recirculate the beam. These effects will be studied macroscopically with respect to the overall structure, in that smaller effects related to the machine optics of the lattice structure will be neglected. The scope of the optimization problem is, in essence, a "first order" insight into optimal dimensions, centered on minimizing the most important parameter: cost.

  18. Supply-side-demand-side optimization and cost-environment tradeoffs for China's coal and electricity system

    Energy Technology Data Exchange (ETDEWEB)

    Xie Zhijun; Kuby, M. [Boston University, Boston, MA (United States). Dept. of Geography, Center for Energy and Environmental Studies

    1997-02-01

    The authors simultaneously optimize supply-side and demand-side investments for satisfying China's coal and electricity needs over a 15 year time horizon. The results are compared to equivalent results from a supply-side-only optimization assuming a business-as-usual demand scenario. It is estimated that, by shifting investment from energy production and transportation to energy efficiency improvement, China could meet the same energy service demand in 2000 for 7% less cost and 120 million tons (mt) less coal. Alternatively, for greater environmental protection, China could satisfy the same demands at the same cost using 275 mt coal. 27 refs., 6 figs.

  19. Toward a new spacecraft optimal design lifetime? Impact of marginal cost of durability and reduced launch price

    Science.gov (United States)

    Snelgrove, Kailah B.; Saleh, Joseph Homer

    2016-10-01

    The average design lifetime of satellites continues to increase, in part due to the expectation that the satellite cost per operational day decreases monotonically with increased design lifetime. In this work, we challenge this expectation by revisiting the durability choice problem for spacecraft in the face of reduced launch prices and under various cost-of-durability models. We first provide a brief overview of economic thought on durability and highlight its limitations as they pertain to our problem (e.g., the assumption of zero marginal cost of durability). We then investigate the combined influence of spacecraft cost of durability and launch price, and we identify conditions that give rise to cost-optimal design lifetimes that are shorter than the longest lifetime technically achievable. For example, we find that high costs of durability favor short design lifetimes, and that under these conditions the optimal choice is relatively robust to reductions in launch prices. By contrast, lower costs of durability favor longer design lifetimes, and the optimal choice is highly sensitive to reductions in launch price. In both cases, a reduction in launch prices translates into a reduction of the optimal design lifetime. Our results identify a number of situations in which satellite operators would be better served by spacecraft with shorter design lifetimes. Beyond cost issues and repeat purchases, other implications of long design lifetimes include the increased risk of technological slowdown, given the lower frequency of purchases and technology refresh, and the increased risk for satellite operators that the spacecraft will be technologically obsolete before the end of its life (with the corollary of loss of value and competitive advantage).
We conclude with the recommendation that, should pressure to extend spacecraft design lifetime continue, satellite manufacturers should explore opportunities to lease their spacecraft to operators, or to take a stake in the ownership
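
    The core tradeoff can be sketched with a toy cost-per-operational-day model: spacecraft cost grows as C0*T**alpha with design lifetime T (years), launch is a fixed price, and the cost-optimal lifetime is searched over a discrete horizon. All numbers are assumptions for illustration, not the paper's cost-of-durability models:

```python
def cost_per_day(T, alpha, launch, C0=100.0):
    # (spacecraft cost + launch price) spread over T years of operation
    return (C0 * T ** alpha + launch) / (365.0 * T)

def optimal_lifetime(alpha, launch, horizon=20):
    # cost-optimal design lifetime over a discrete 1..horizon year grid
    return min(range(1, horizon + 1),
               key=lambda T: cost_per_day(T, alpha, launch))
```

    With these assumed parameters, a strongly convex cost of durability (alpha = 1.5) makes a 1-year design life optimal, a mildly convex one (alpha = 1.1) gives an interior optimum that shrinks from 5 to 2 years when the launch price drops from 60 to 20, and a concave one (alpha = 0.8) pushes the optimum to the end of the horizon, mirroring the qualitative findings above.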

  20. A hybrid genetic algorithm-queuing multi-compartment model for optimizing inpatient bed occupancy and associated costs.

    Science.gov (United States)

    Belciug, Smaranda; Gorunescu, Florin

    2016-03-01

    Explore how efficient intelligent decision support systems, both easily understandable and straightforwardly implemented, can help modern hospital managers to optimize both bed occupancy and utilization costs. This paper proposes a hybrid genetic algorithm-queuing multi-compartment model for the patient flow in hospitals. A finite-capacity queuing model with phase-type service distribution is combined with a compartmental model, and an associated cost model is set up. An evolutionary-based approach is used to enhance the ability to optimize both bed management and associated costs. In addition, a "What-if analysis" shows how changing the model parameters could improve performance while controlling costs. The study uses bed-occupancy data collected at the Department of Geriatric Medicine, St. George's Hospital, London, for the period 1969-1984 and January 2000. The hybrid model revealed that a bed occupancy exceeding 91%, implying a patient rejection rate of around 1.1%, can be achieved with 159 beds plus 8 unstaffed beds. The same holding and penalty costs, but significantly different bed allocations (156 vs. 184 staffed beds, and 8 vs. 9 unstaffed beds, respectively) will result in significantly different costs (£755 vs. £1172). Moreover, once the arrival rate exceeds 7 patients/day, the costs associated with the finite-capacity system become significantly smaller than those associated with an Erlang B queuing model (£134 vs. £947). By encoding the whole information provided by both the queuing system and the cost model through chromosomes, the genetic algorithm represents an efficient tool for optimizing the bed allocation and associated costs. The methodology can be extended to different medical departments with minor modifications in structure and parameterization. Copyright © 2016 Elsevier B.V. All rights reserved.
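
    The Erlang B benchmark mentioned above (beds as servers, arrivals lost when all beds are full) has a standard, numerically stable recursion for the blocking probability of an M/M/m/m loss system; this is textbook material rather than the authors' hybrid model:

```python
def erlang_b(servers, offered_load):
    """Blocking probability B(m, a) of an M/M/m/m loss system, via the
    recursion B(m) = a*B(m-1) / (m + a*B(m-1)), with B(0) = 1."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b
```

    Here offered_load is the arrival rate times the mean stay; for example, erlang_b(2, 1.0) gives a blocking probability of 0.2.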

  1. Depolarization canals and interstellar turbulence

    Science.gov (United States)

    Fletcher, A.; Shukurov, A.

    Recent radio polarization observations have revealed a plethora of unexpected features in the polarized Galactic radio background that arise from propagation effects in the random (turbulent) interstellar medium. The canals are especially striking among them, a random network of very dark, narrow regions clearly visible in many directions against a bright polarized Galactic synchrotron background. There are no obvious physical structures in the ISM that may have caused the canals, and so they have been called Faraday ghosts. They evidently carry information about interstellar turbulence but only now is it becoming clear how this information can be extracted. Two theories for the origin of the canals have been proposed; both attribute the canals to Faraday rotation, but one invokes strong gradients in Faraday rotation in the sky plane (specifically, in a foreground Faraday screen) and the other only relies on line-of-sight effects (differential Faraday rotation). In this review we discuss the physical nature of the canals and how they can be used to explore statistical properties of interstellar turbulence. This opens studies of magnetized interstellar turbulence to new methods of analysis, such as contour statistics and related techniques of computational geometry and topology. In particular, we can hope to measure such elusive quantities as the Taylor microscale and the effective magnetic Reynolds number of interstellar MHD turbulence.

  2. Consistent cost curves for identification of optimal energy savings across industry and residential sectors

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik; Baldini, Mattia

    Energy savings are a key element in reaching ambitious climate targets and may contribute to increased productivity as well. For identification of the most attractive saving options cost curves for savings are constructed illustrating potentials of savings with associated costs. In optimisation...... with constructing and applying the cost curves in modelling: • Cost curves do not have the same cost interpretation across economic subsectors and end-use technologies (investment cost for equipment varies – including/excluding installation – adaptation costs – indirect production costs) • The time issue of when...... the costs are incurred and savings (difference in discount rates both private and social) • The issue of marginal investment in a case of replacement anyway or a full investment in the energy saving technology • Implementation costs (and probability of investment) differs across sectors • Cost saving...

  3. Optimal replacement of residential air conditioning equipment to minimize energy, greenhouse gas emissions, and consumer cost in the US

    Energy Technology Data Exchange (ETDEWEB)

    De Kleine, Robert D. [Center for Sustainable Systems, School of Natural Resources and Environment, University of Michigan, 440 Church St., Dana Bldg., Ann Arbor, MI 48109-1041 (United States); Keoleian, Gregory A., E-mail: gregak@umich.edu [Center for Sustainable Systems, School of Natural Resources and Environment, University of Michigan, 440 Church St., Dana Bldg., Ann Arbor, MI 48109-1041 (United States); Kelly, Jarod C. [Center for Sustainable Systems, School of Natural Resources and Environment, University of Michigan, 440 Church St., Dana Bldg., Ann Arbor, MI 48109-1041 (United States)

    2011-06-15

    A life cycle optimization of the replacement of residential central air conditioners (CACs) was conducted in order to identify replacement schedules that minimized three separate objectives: life cycle energy consumption, greenhouse gas (GHG) emissions, and consumer cost. The analysis was conducted for the time period 1985-2025 for Ann Arbor, MI and San Antonio, TX. Using annual sales-weighted efficiencies of residential CAC equipment, the tradeoff between potential operational savings and the burdens of producing new, more efficient equipment was evaluated. The optimal replacement schedule for each objective was identified for each location and service scenario. In general, minimizing energy consumption required frequent replacement (4-12 replacements), minimizing GHG emissions required fewer replacements (2-5), and minimizing cost required the fewest (1-3) over the time horizon. Scenario analyses of different federal efficiency standards, regional standards, and Energy Star purchases were conducted to quantify each policy's impact. For example, a 16 SEER regional standard in Texas was shown to reduce primary energy consumption by 13%, GHG emissions by 11%, or cost by 6-7% when performing optimal replacement of CACs from 2005 or before. The results also indicate that proper servicing should be a higher priority than optimal replacement to minimize environmental burdens. - Highlights: > Optimal replacement schedules for residential central air conditioners were found. > Minimizing energy required more frequent replacement than minimizing consumer cost. > Significant variation in optimal replacement was observed for Michigan and Texas. > Rebates for altering replacement patterns are not cost effective for GHG abatement. > Maintenance levels were significant in determining the energy and GHG impacts.
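The core trade-off here, operating savings from newer equipment versus the burden of manufacturing it, can be sketched as a small dynamic program over replacement decisions. The vintages, demand, and embodied energy below are invented for illustration and are not the study's data:

```python
from functools import lru_cache

# Hypothetical inputs: newer vintages are more efficient, but each
# replacement costs embodied (manufacturing) energy up front.
EFF = {y: 2.0 + 0.1 * (y - 2000) for y in range(2000, 2011)}  # efficiency by vintage
DEMAND = 100.0      # annual cooling demand, arbitrary energy units
EMBODIED = 40.0     # energy to manufacture one new unit

@lru_cache(maxsize=None)
def min_energy(year, vintage):
    """Least total energy from `year` through 2010 holding a unit of `vintage`."""
    if year > 2010:
        return 0.0
    keep = DEMAND / EFF[vintage] + min_energy(year + 1, vintage)
    replace = EMBODIED + DEMAND / EFF[year] + min_energy(year + 1, year)
    return min(keep, replace)

# Start 2001 with a year-2000 unit; never replacing would cost 10 * 50 = 500.
total = min_energy(2001, 2000)
```

With a cheap-enough replacement, the optimal schedule beats the keep-forever baseline; raising `EMBODIED` reproduces the paper's qualitative result that fewer replacements become optimal.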

  4. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    Science.gov (United States)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwater flow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process, and modelling such a complex system requires significant computational effort. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data is utilized by the optimization model to optimize the remediation cost. Repeatedly calling the simulator to satisfy the constraints is an extremely tedious and time-consuming process, and thus there is a need for a simulator which can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost-effective in-situ bioremediation system design for groundwater contaminated with BTEX (benzene, toluene, ethylbenzene, and xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM was made by a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) models, as these were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM, and the total cost obtained by the ELM-PSO approach is held to a minimum.
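As an illustration of the PSO half of the coupled approach, here is a minimal particle swarm minimizer applied to a hypothetical smooth cost surrogate standing in for the trained ELM. The design variables, bounds, and cost surface are all invented; the abstract does not specify them:

```python
import random

random.seed(0)  # for a reproducible run

def pso_minimize(cost, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO: each particle is pulled toward its personal
    best and the swarm's best position found so far."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Hypothetical smooth cost surrogate standing in for the trained ELM;
# minimum cost 10.0 at design point (3, 3).
surrogate = lambda x: sum((xi - 3.0) ** 2 for xi in x) + 10.0
best, val = pso_minimize(surrogate, dim=2, bounds=(0.0, 10.0))
```

In the paper's setting, `surrogate` would be the ELM trained on BIOPLUME III runs, which is what makes the repeated cost evaluations affordable.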

  5. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    OpenAIRE

    Zhaoxi Hong; Yixiong Feng; Zhongkai Li; Guangdong Tian; Jianrong Tan

    2017-01-01

    In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve the maximum reliability and minimum cost. Main challenges to conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluation provided by experts for essential factors, and dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization m...

  6. Theory of interstellar medium diagnostics

    Science.gov (United States)

    Fahr, H. J.

    1983-01-01

    The theoretical interpretation of observed interplanetary resonance luminescence patterns is one of the most promising methods to determine the state of the local interstellar medium (LISM). However, these methods lead to discrepant results that would be hard to understand in the framework of any physical LISM scenario. Assuming that the observational data are reliable, two possibilities which could help to resolve these discrepancies are discussed: (1) the current modeling of resonance luminescence patterns is unsatisfactory and has to be improved, and (2) the extrapolated interstellar parameters are not indicative of the unperturbed LISM state, but rather designate an intermediate state attained in the outer regions of the solar system. It is shown that a quantitative treatment of the neutral gas-plasma interaction effects in the interface between the heliospheric and the interstellar plasmas is of major importance for the correct understanding of the whole complex.

  7. Interstellar Isotopes: Prospects with ALMA

    Science.gov (United States)

    Charnley Steven B.

    2010-01-01

    Cold molecular clouds are natural environments for the enrichment of interstellar molecules in the heavy isotopes of H, C, N and O. Anomalously fractionated isotopic material is found in many primitive Solar System objects, such as meteorites and comets, that may trace interstellar matter that was incorporated into the Solar Nebula without undergoing significant processing. Models of the fractionation chemistry of H, C, N and O in dense molecular clouds, particularly in cores where substantial freeze-out of molecules on to dust has occurred, make several predictions that can be tested in the near future by molecular line observations. The range of fractionation ratios expected in different interstellar molecules will be discussed and the capabilities of ALMA for testing these models (e.g. in observing doubly-substituted isotopologues) will be outlined.

  8. A PC program to optimize system configuration for desired reliability at minimum cost

    Science.gov (United States)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
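A toy version of the redundancy-allocation problem can be sketched as follows. This is a simple greedy marginal-gain heuristic on a series system, not the pair-wise comparative progression technique the paper describes, and all component figures are hypothetical:

```python
def allocate_redundancy(reliabilities, costs, budget):
    """Greedily add the spare that buys the largest system-reliability
    gain per unit cost until the budget is exhausted.

    reliabilities[i]: reliability of one unit of component i (series system)
    costs[i]: cost of one extra redundant unit of component i
    """
    n = len(reliabilities)
    counts = [1] * n  # start with one of each component

    def system_rel():
        r = 1.0
        for i in range(n):
            # k parallel units of reliability p survive unless all k fail
            r *= 1.0 - (1.0 - reliabilities[i]) ** counts[i]
        return r

    spent = 0.0
    while True:
        base = system_rel()
        best_i, best_gain = None, 0.0
        for i in range(n):
            if spent + costs[i] > budget:
                continue
            counts[i] += 1
            gain = (system_rel() - base) / costs[i]
            counts[i] -= 1
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            return counts, system_rel()
        counts[best_i] += 1
        spent += costs[best_i]

# Three series components; cheap spares for the weakest link are added first.
counts, rel = allocate_redundancy([0.9, 0.95, 0.99], [2.0, 3.0, 5.0], budget=10.0)
```

Because redundancy has diminishing returns, a greedy rule like this can miss the true optimum on some instances, which is exactly the gap the paper's exact technique is meant to close.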

  9. Production Optimization for Plan of Gas Field Development Using Marginal Cost Analysis

    Directory of Open Access Journals (Sweden)

    Suprapto Soemardan

    2013-09-01

    Full Text Available Gas production rate is one of the most important variables affecting the feasibility plan of gas field development. It takes into account reservoir characteristics, gas reserves, number of wells, production facilities, government take and market conditions. In this research, a mathematical model of gas production optimization has been developed using marginal cost analysis to determine the optimum gas production rate for economic profit, employing the case study of the Matindok Field. The results show that the optimum gas production rate is mainly affected by gas price and the duration time of gas delivery. When the price of gas increases, the optimum gas production rate will increase, and it will become closer to the maximum production rate of the reservoir. Increasing the duration time of gas delivery will reduce the optimum gas production rate and increase maximum profit non-linearly.
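The marginal-cost logic can be shown with a toy price-taker model (the paper's reservoir and fiscal terms are far richer): profit is maximized at the rate where marginal cost has risen to meet the gas price.

```python
# Hypothetical numbers, not the Matindok Field data.
p = 8.0            # gas price per unit
a, b = 2.0, 0.05   # cost(q) = a*q + b*q**2, so MC(q) = a + 2*b*q rises with q

# Analytic optimum: set MC(q*) = p  ->  q* = (p - a) / (2*b) = 60.0
q_star = (p - a) / (2 * b)

# Brute-force check of the same optimum on an integer grid of rates.
profit = lambda q: p * q - (a * q + b * q * q)
grid_best = max(range(0, 101), key=profit)
```

Raising `p` moves `q_star` up toward the reservoir's maximum rate, matching the abstract's observation about increasing gas prices.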

  10. Optimized electricity expansions with external costs internalized and risk of severe accidents as a new criterion in the decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martin del Campo M, C.; Estrada S, G. J., E-mail: cmcm@fi-b.unam.mx [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico)

    2011-11-15

    The external cost of severe accidents was incorporated as a new element for the assessment of energy technologies in the expansion plans of the Mexican electric generating system. Optimizations of the electric expansions were made by internalizing the external cost into the objective function of the WASP-IV model as a variable cost, and these expansions were compared with the expansion plans that did not internalize them. Average external costs reported by the ExternE Project were used for each type of technology and were added to the variable component of operation and maintenance cost in the study cases in which the externalities were internalized. Special attention was paid to studying the convenience of including nuclear energy in the generating mix. The comparative assessment of six expansion plans was made by means of the Position Vector of Minimum Regret Analysis (PVMRA) decision analysis tool. The expansion plans were ranked according to seven decision criteria which consider internal costs, economic impact associated with incremental fuel prices, diversity, external costs, foreign capital fraction, carbon-free fraction, and external costs of severe accidents. A set of data for the calculation of the last criterion was obtained from a report of the European Commission. We found that with the external costs included in the optimization process of WASP-IV, better electric expansion plans, with lower total (internal + external) generating costs, were found. On the other hand, the plans which included the participation of nuclear power plants were in general relatively more attractive than the plans that did not. (Author)

  11. Using ant colony optimization on the quadratic assignment problem to achieve low energy cost in geo-distributed data centers

    Science.gov (United States)

    Osei, Richard

    There are many problems associated with operating a data center. Some of these problems include data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and increasing energy costs. Energy cost differs by location, and at most locations fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers will have longer-lasting servers/equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Some of the ways that data centers have tried to reduce energy costs include dynamically switching servers on and off based on the number of users and some predefined conditions, the use of environmental monitoring sensors, and the use of dynamic voltage and frequency scaling (DVFS), which enables processors to run at different combinations of frequencies and voltages to reduce energy cost. This thesis presents another method by which energy cost at data centers could be reduced. This method involves the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. In this work, front portals, which handle users' requests, were used as ants to find cost-effective ways to assign user requests to a server in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance increases as the input data grows. In a simulation with 3 geo-distributed data centers, and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm was able
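A much-simplified sketch of the idea: pheromone-guided ants assign requests to data centers by energy cost, with evaporation and deposit reinforcing cheap assignments. Note that here each request's cost is independent of the others, whereas the thesis's QAP formulation couples them; all numbers are hypothetical:

```python
import random

random.seed(1)  # for a reproducible run

def aco_assign(costs, n_ants=20, iters=100, rho=0.3, q=1.0):
    """Assign each request r to one data center c, minimizing total cost.

    costs[r][c]: energy cost of serving request r at data center c.
    """
    n_req, n_dc = len(costs), len(costs[0])
    tau = [[1.0] * n_dc for _ in range(n_req)]       # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            sol, total = [], 0.0
            for r in range(n_req):
                # desirability = pheromone * heuristic (inverse cost)
                w = [tau[r][c] / (1e-9 + costs[r][c]) for c in range(n_dc)]
                c = random.choices(range(n_dc), weights=w)[0]
                sol.append(c)
                total += costs[r][c]
            if total < best_cost:
                best, best_cost = sol, total
        # evaporate everywhere, then deposit on the best-so-far assignment
        for r in range(n_req):
            for c in range(n_dc):
                tau[r][c] *= (1.0 - rho)
            tau[r][best[r]] += q / best_cost
    return best, best_cost

# Three requests, two data centers with different energy prices.
costs = [[3.0, 1.0], [2.0, 5.0], [4.0, 1.0]]
best, best_cost = aco_assign(costs)
```

On this tiny instance the ants quickly lock onto the cheapest assignment; the thesis's contribution is making the same mechanism pay off when request placements interact at scale.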

  12. An Algorithm for the Design of a Cost-Optimized Topology for the Fixed Part of a GSM Network

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper proposes an algorithm to design a cost optimized topology for the fixed part of a mobile network. For the given locations of base stations and mobile switching centers, the algorithm decides the number and position of the base station controllers as well as the way the base stations are connected to the base stations controllers.

  13. Cost optimal and nearly zero-energy buildings (nZEB) definitions, calculation principles and case studies

    CERN Document Server

    Kurnitski, Jarek

    2013-01-01

    This book introduces technical definitions, system boundaries, energy calculation methods and input data for setting primary energy based minimum/cost optimal and nZEB requirements in national energy frames. Offers five case studies of nZEB office buildings.

  14. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    Science.gov (United States)

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
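CTAT's two-level split between fixed (core-process) and variable (production-process) costs implies a simple unit-cost relation. The function and figures below are a hypothetical illustration, not the model's published equation:

```python
def product_cost(annual_fixed, variable_per_batch, batches_per_year):
    """CTAT-style split: unit cost = share of the fixed (core-process)
    annual cost plus the variable (production-process) cost of one batch."""
    return annual_fixed / batches_per_year + variable_per_batch

# Hypothetical facility: 500k fixed running cost per year, 8k per batch.
unit = product_cost(500_000.0, 8_000.0, 50)   # -> 18000.0 per batch
```

Spreading the fixed cost over more batches drives the per-product cost down, which is why facility utilization dominates academic GMP fee structures.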

  15. A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts

    Science.gov (United States)

    2015-04-30

    greater than zero penalty cost avoided, but LTB costs will have to be paid. The cost avoidances in Equation 1 are uncertain due to uncertainties in...buys and to determine the optimum part quantity for a lifetime or bridge buy. The approach accommodates uncertainties in demand, holding costs, and end... uncertainties in demand, holding costs, and end of support dates. Introduction Background A significant problem facing many complex systems is

  16. Interstellar Initiative Web Page Design

    Science.gov (United States)

    Mehta, Alkesh

    1999-01-01

    This summer at NASA/MSFC, I have contributed to two projects: Interstellar Initiative Web Page Design and Lenz's Law Relative Motion Demonstration. In the Web Design Project, I worked on an Outline. The Web Design Outline was developed to provide a foundation for a Hierarchy Tree Structure. The Outline would help design a Website information base for future and near-term missions. The Website would give in-depth information on Propulsion Systems and Interstellar Travel. The Lenz's Law Relative Motion Demonstrator is discussed in this volume by Russell Lee.

  17. The Warped Science of Interstellar

    CERN Document Server

    Luminet, Jean-Pierre

    2015-01-01

    The science fiction film, Interstellar, tells the story of a team of astronauts searching a distant galaxy for habitable planets to colonize. Interstellar's story draws heavily from contemporary science. The film makes reference to a range of topics, from established concepts such as fast-spinning black holes, accretion disks, tidal effects, and time dilation, to far more speculative ideas such as wormholes, time travel, additional space dimensions, and the theory of everything. The aim of this article is to decipher some of the scientific notions which support the framework of the movie.

  18. Infrared emission from interstellar PAHs

    Science.gov (United States)

    Allamandola, L. J.; Tielens, A. G. G. M.; Barker, J. R.

    1987-01-01

    The mid-IR absorption and Raman spectra of polycyclic aromatic hydrocarbons (PAHs) and the mechanisms determining them are reviewed, and the implications for observations of similar emission spectra in interstellar clouds are considered. Topics addressed include the relationship between PAHs and amorphous C, the vibrational spectroscopy of PAHs, the molecular emission process, molecular anharmonicity, and the vibrational quasi-continuum. Extensive graphs, diagrams, and sample spectra are provided, and the interstellar emission bands are attributed to PAHs with 20-30 C atoms on the basis of the observed 3.3/3.4-micron intensity ratios.

  19. The formation of interstellar jets

    Science.gov (United States)

    Tenorio-Tagle, G.; Canto, J.; Rozyczka, M.

    1988-01-01

    The formation of interstellar jets by convergence of supersonic conical flows and the further dynamical evolution of these jets are investigated theoretically by means of numerical simulations. The results are presented in extensive graphs and characterized in detail. Strong radiative cooling is shown to result in jets with Mach numbers 2.5-29 propagating to lengths 50-100 times their original widths, with condensation of swept-up interstellar matter at Mach 5 or greater. The characteristics of so-called molecular outflows are well reproduced by the simulations of low-Mach-number and quasi-adiabatic jets.

  20. Determination of the optimal time and cost of manufacturing flow of an assembly using the Taguchi method

    Science.gov (United States)

    Petrila, S.; Brabie, G.; Chirita, B.

    2016-08-01

    The optimization of the manufacturing operations for parts and assemblies was carried out in order to minimize both the time and cost of production, as appropriate. The optimization was made using the Taguchi method, which is based on plans of experiments that vary the input and output factors. The Taguchi method is applied to optimize the flow of the analyzed assembly's production in the following ways: to find the optimal combination of the manufacturing operations; to choose the variant involving the use of high-performance equipment; and to base delivery operations on automation. The final aim of applying the Taguchi method is for the entire assembly to be produced at minimum cost and in a short time. The Taguchi philosophy of optimizing product quality is synthesized from three basic concepts: quality must be designed into the product, not inspected into it after it has been manufactured; the highest quality is obtained when the deviation from the proposed target is low, or when the action of uncontrollable factors has no influence on it, which translates into robustness; and the costs entailed by quality are expressed as a function of deviation from the nominal value [1]. When determining the number of experiments for studying a phenomenon by this method, more restrictive conditions apply [2].
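The signal-to-noise ratios at the heart of the Taguchi method are easy to illustrate. The sketch below uses hypothetical assembly times, not the paper's experimental plan:

```python
import math

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio for a smaller-is-better response (e.g. time, cost)."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

def sn_larger_is_better(ys):
    """Taguchi S/N ratio for a larger-is-better response."""
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in ys) / len(ys))

# Hypothetical assembly times (minutes) for two factor settings, three runs each.
runs = {"A1": [12.1, 11.8, 12.4], "A2": [10.2, 10.5, 10.1]}

# The preferred setting is the one with the higher (less negative) S/N ratio.
best = max(runs, key=lambda k: sn_smaller_is_better(runs[k]))
```

Ranking factor settings by S/N ratio rather than by mean response is what builds Taguchi-style robustness into the choice: low variance is rewarded alongside a good average.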

  1. [Receiver operating characteristic analysis and the cost--benefit analysis in determination of the optimal cut-off point].

    Science.gov (United States)

    Vránová, J; Horák, J; Krátká, K; Hendrichová, M; Kovaírková, K

    2009-01-01

    An overview of the use of Receiver Operating Characteristic (ROC) analysis within medicine is provided. A survey of the theory behind the analysis is offered together with a presentation on how to create a ROC curve and how to use Cost--Benefit analysis to determine the optimal cut-off point or threshold. The use of ROC analysis is exemplified in the "Cost--Benefit analysis" section of the paper. In these examples, it can be seen that the determination of the optimal cut-off point is mainly influenced by the prevalence and the severity of the disease, by the risks and adverse events of treatment or the diagnostic testing, by the overall costs of treating true and false positives (TP and FP), and by the risk of deficient or non-treatment of false negative (FN) cases.
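The cost-benefit determination of a cut-off point can be sketched directly: choose the threshold minimizing the expected cost of false negatives and false positives, weighted by disease prevalence. The scores and cost figures below are invented for illustration:

```python
def optimal_cutoff(scores_pos, scores_neg, prevalence, cost_fn, cost_fp):
    """Pick the classifier threshold minimizing expected misclassification cost.

    scores_pos / scores_neg: scores for diseased / healthy subjects
    cost_fn: cost of missing a case; cost_fp: cost of a false alarm
    """
    candidates = sorted(set(scores_pos) | set(scores_neg))
    best_t, best_cost = None, float("inf")
    for t in candidates:
        fnr = sum(s < t for s in scores_pos) / len(scores_pos)   # missed cases
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)  # false alarms
        expected = prevalence * fnr * cost_fn + (1 - prevalence) * fpr * cost_fp
        if expected < best_cost:
            best_t, best_cost = t, expected
    return best_t, best_cost

# Toy scores: making the disease costly to miss pushes the cut-off downward.
pos = [0.6, 0.7, 0.8, 0.9]
neg = [0.1, 0.2, 0.4, 0.65]
t_low, _ = optimal_cutoff(pos, neg, prevalence=0.1, cost_fn=50.0, cost_fp=1.0)
t_high, _ = optimal_cutoff(pos, neg, prevalence=0.1, cost_fn=1.0, cost_fp=1.0)
```

This reproduces the paper's point that the optimal cut-off is driven by prevalence and by the relative costs of the two error types, not by the ROC curve alone.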

  2. A novel agent based autonomous and service composition framework for cost optimization of resource provisioning in cloud computing

    Directory of Open Access Journals (Sweden)

    Aarti Singh

    2017-01-01

    Full Text Available A cloud computing environment offers a simplified, centralized platform of resources for use when needed at a low cost. One of the key functionalities of this type of computing is to allocate resources on individual demand. However, with the expanding requirements of cloud users, the need for efficient resource allocation is also emerging. The main role of the service provider is to effectively distribute and share the resources, which would otherwise result in resource wastage. In addition to the user getting the appropriate service according to the request, the cost of the respective resource is also optimized. In order to surmount the mentioned shortcomings and perform optimized resource allocation, this research proposes a new Agent based Automated Service Composition (A2SC) algorithm comprising request processing and automated service composition phases, which is not only responsible for searching comprehensive services but also considers reducing the cost of virtual machines which are consumed by on-demand services only.

  3. Cost-benefit study of consumer product take-back programs using IBM's WIT reverse logistics optimization tool

    Science.gov (United States)

    Veerakamolmal, Pitipong; Lee, Yung-Joon; Fasano, J. P.; Hale, Rhea; Jacques, Mary

    2002-02-01

    In recent years, there has been increased focus by regulators, manufacturers, and consumers on the issue of product end of life management for electronics. This paper presents an overview of a conceptual study designed to examine the costs and benefits of several different Product Take Back (PTB) scenarios for used electronics equipment. The study utilized a reverse logistics supply chain model to examine the effects of several different factors in PTB programs. The model was done using the IBM supply chain optimization tool known as WIT (Watson Implosion Technology). Using the WIT tool, we were able to determine a theoretical optimal cost scenario for PTB programs. The study was designed to assist IBM internally in determining theoretical optimal Product Take Back program models and determining potential incentives for increasing participation rates.

  4. On the question of interstellar travel

    Science.gov (United States)

    Wolfe, J. H.

    1985-01-01

    Arguments are presented which show that motives for interstellar travel by advanced technological civilizations based on an extrapolation of earth's history may be quite invalid. In addition, it is proposed that interstellar travel is so enormously expensive and perhaps so hazardous, that advanced civilizations do not engage in such practices because of the ease of information transfer via interstellar communication.

  5. Experimental interstellar organic chemistry - Preliminary findings

    Science.gov (United States)

    Khare, B. N.; Sagan, C.

    1973-01-01

    Review of the results of some explicit experimental simulation of interstellar organic chemistry consisting in low-temperature high-vacuum UV irradiation of condensed simple gases known or suspected to be present in the interstellar medium. The results include the finding that acetonitrile may be present in the interstellar medium. The implication of this and other findings are discussed.

  7. Interstellar Aldehydes and their corresponding Reduced Alcohols: Interstellar Propanol?

    Science.gov (United States)

    Etim, Emmanuel; Chakrabarti, Sandip Kumar; Das, Ankan; Gorai, Prasanta; Arunan, Elangannan

    2016-07-01

    There is a well-defined trend of aldehydes and their corresponding reduced alcohols among the known interstellar molecules: methanal (CH_2O) and methanol (CH_3OH); ethenone (C_2H_2O) and vinyl alcohol (CH_2CHOH); ethanal (C_2H_4O) and ethanol (C_2H_5OH); glycolaldehyde (C_2H_4O_2) and ethylene glycol (C_2H_6O_2). The reduced alcohol of propanal (CH_3CH_2CHO), which is propanol (CH_3CH_2CH_2OH), has not yet been observed, but its isomer, ethyl methyl ether (CH_3CH_2OCH_3), is a known interstellar molecule. In this article, different studies are carried out investigating the trend between aldehydes and their corresponding reduced alcohols and the deviations from it. Kinetically, and with respect to the formation route, the alcohols could have been produced from their corresponding aldehydes via two successive hydrogen additions. This is plausible because of (a) the unquestionably high abundance of hydrogen, (b) the presence of energy sources within some of the molecular clouds, and (c) the ease with which successive hydrogen addition reactions occur. In terms of stability, the observed alcohols are thermodynamically favorable as compared to their isomers. Regarding the formation process, the hydrogen addition reactions are believed to proceed on the surface of the interstellar grains, which leads to the effect of interstellar hydrogen bonding. From the studies, propanol and propan-2-ol are found to be more strongly attached to the surface of the interstellar dust grains, which reduces their overall gas-phase abundance as compared to their isomer ethyl methyl ether, which has been observed.

  8. Cost-optimal power system extension under flow-based market coupling and high shares of photovoltaics

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon; Jaegemann, Cosima; Lindenberger, Dietmar [Koeln Univ. (Germany). Inst. of Energy Economics; Cherevatskiy, Stanislav; Troester, Eckehard; Brown, Tom [Energynautics GmbH, Langen (Germany)

    2012-07-01

    Electricity market models, implemented as dynamic programming problems, have been applied widely to identify possible pathways towards a cost-optimal and low-carbon electricity system. However, the joint optimization of generation and transmission remains challenging, mainly due to the fact that different characteristics and rules apply to commercial and physical exchanges of electricity in meshed networks. This paper presents a methodology that allows power generation and transmission infrastructures to be optimized jointly through an iterative approach based on power transfer distribution factors (PTDFs). As PTDFs are linear representations of the physical load flow equations, they can be implemented in a linear programming environment suitable for large-scale problems such as the European power system. The algorithm iteratively updates the PTDFs when grid infrastructures are modified due to cost-optimal extension and thus yields an optimal solution with a consistent representation of physical load flows. The method is demonstrated on a simplified three-node model, where it is found to be stable and convergent. It is then scaled to the European level in order to find the optimal power system infrastructure development under the prescription of strongly decreasing CO{sub 2} emissions in Europe until 2050, with a specific focus on photovoltaic (PV) power. (orig.)
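The PTDF construction on a three-node demonstration case like the paper's can be reproduced with a few lines of linear algebra (DC load flow, unit reactances, hand-rolled 2x2 inverse to stay dependency-free). The network data here are illustrative, not taken from the paper:

```python
# Three-node ring, unit line susceptances; node 3 (index 2) is the slack bus.
lines = [(0, 1), (0, 2), (1, 2)]   # 0-indexed node pairs
b = [1.0, 1.0, 1.0]                # line susceptances 1/x

# Reduced nodal susceptance matrix over the non-slack nodes {0, 1}.
B = [[0.0, 0.0], [0.0, 0.0]]
for (i, j), bl in zip(lines, b):
    for n in (i, j):
        if n < 2:
            B[n][n] += bl
    if i < 2 and j < 2:
        B[i][j] -= bl
        B[j][i] -= bl

det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / det, -B[0][1] / det],
        [-B[1][0] / det, B[0][0] / det]]

def ptdf(line):
    """PTDF row: MW flow on `line` per 1 MW injected at each non-slack node."""
    (i, j), bl = lines[line], b[line]
    row_a = [1.0 if n == i else -1.0 if n == j else 0.0 for n in (0, 1)]
    return [bl * sum(row_a[k] * Binv[k][m] for k in range(2)) for m in range(2)]

# Inject 1 MW at node 1 (index 0), withdrawn at the slack: the direct line
# carries 2/3 and the two-hop path through node 2 carries 1/3.
flows = [ptdf(ln)[0] for ln in range(3)]   # -> [1/3, 2/3, 1/3]
```

Rebuilding this matrix whenever a line is reinforced is exactly the iterative update step the paper describes: the linear program keeps a valid physical flow representation as the cost-optimal grid grows.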

  9. Low cost and conformal microwave water-cut sensor for optimizing oil production process

    KAUST Repository

    Karimi, Muhammad Akram

    2015-08-01

    Efficient oil production and refining processes require precise measurement of the water content in oil (i.e., the water-cut), water being extracted from a production well as a byproduct. Traditional water-cut (WC) laboratory measurements are precise but incapable of providing real-time information, while recently reported in-line WC sensors (both in research and industry) are usually incapable of sensing the full WC range (0 – 100 %), and are bulky, expensive and non-scalable across the variety of pipe sizes used in the oil industry. This work presents a novel implementation of a planar microwave T-resonator for fully non-intrusive in situ WC sensing over the full range of operation, i.e., 0 – 100 %. As opposed to non-planar resonators, the choice of a planar resonator has enabled its direct implementation on the pipe surface using low-cost fabrication methods. The WC sensor makes use of the series resonance introduced by a λ/4 open shunt stub placed in the middle of a microstrip line. The detection mechanism is based on measuring the T-resonator's resonance frequency, which varies with the relative percentage of oil and water (due to the difference in their dielectric properties). In order to implement the planar T-resonator based sensor on the curved surface of the pipe, a novel approach utilizing two ground planes is proposed in this work. The innovative use of dual ground planes makes this sensor scalable to the wide range of pipe sizes present in the oil industry. The design and optimization of the sensor were performed in an electromagnetic Finite Element Method (FEM) solver, the High Frequency Structure Simulator (HFSS), and the dielectric properties of oil, water and their emulsions at different WCs used in the simulation model were measured using a SPEAG dielectric assessment kit (DAK-12). The simulation results were validated through characterization of fabricated prototypes. Initial rapid prototyping was completed using copper tape, after which a
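    The detection principle can be illustrated with a back-of-the-envelope model: a λ/4 open stub resonates near f = c/(4·L·√ε_eff), and the effective permittivity of an oil-water emulsion rises steeply with water fraction, so the resonance frequency falls as the water-cut rises. The stub length, permittivity values, and the Looyenga mixing rule below are illustrative assumptions, not the paper's calibrated model.

    ```python
    import math

    C0 = 3e8                # speed of light, m/s
    L_STUB = 0.03           # assumed stub length, m
    EPS_OIL, EPS_WATER = 2.2, 80.0   # typical relative permittivities

    def eps_mix(wc):
        """Looyenga mixing rule for an oil-water emulsion (wc = water fraction)."""
        return (wc * EPS_WATER ** (1 / 3) + (1 - wc) * EPS_OIL ** (1 / 3)) ** 3

    def resonance_hz(wc):
        """Quarter-wave stub series resonance, eps_eff approximated by eps_mix."""
        return C0 / (4 * L_STUB * math.sqrt(eps_mix(wc)))

    # Resonance frequency falls monotonically as the water-cut rises:
    f0, f100 = resonance_hz(0.0), resonance_hz(1.0)
    ```

    In practice the sensor would be calibrated against reference emulsions (as the authors did with the DAK-12 measurements), and the measured resonance frequency inverted through such a curve to read off the water-cut.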

  10. Optimal Vehicle Design Using the Integrated System and Cost Modeling Tool Suite

    Science.gov (United States)

    2010-08-01

    Topics include space vehicle costing (ACEIT); a new small-satellite development and production cost model with an O&M cost module, radiation exposure, and radiation detector response; reliability, availability, and risk analysis; and a tool suite spanning CEA, SRM modeling, POST, an inflation model, rotor blade design, Microsoft Project, ATSV, STK and SOAP for mission-specific orbit analysis, space vehicle design (SMAD), space vehicle propulsion, and orbit propagation.

  11. Herschel observations of interstellar chloronium

    NARCIS (Netherlands)

    Neufeld, David A.; Roueff, Evelyne; Snell, Ronald L.; Lis, Dariusz; Benz, Arnold O.; Bruderer, Simon; Black, John H.; De Luca, Massimo; Gerin, Maryvonne; Goldsmith, Paul F.; Gupta, Harshal; Indriolo, Nick; Le Bourlot, Jacques; Le Petit, Franck; Larsson, Bengt; Melnick, Gary J.; Menten, Karl M.; Monje, Raquel; Nagy, Zsofia; Phillips, Thomas G.; Sandqvist, Aage; Sonnentrucker, Paule; van der Tak, Floris; Wolfire, Mark G.

    2012-01-01

    Using the Herschel Space Observatory's Heterodyne Instrument for the Far-Infrared, we have observed para-chloronium (H2Cl+) toward six sources in the Galaxy. We detected interstellar chloronium absorption in foreground molecular clouds along the sight lines to the bright submillimeter continuum sources.

  12. Stardust Interstellar Preliminary Examination (ISPE)

    Science.gov (United States)

    Westphal, A. J.; Allen, C.; Bajt, S.; Basset, R.; Bastien, R.; Bechtel, H.; Bleuet, P.; Borg, J.; Brenker F.; Bridges, J.

    2009-01-01

    In January 2006 the Stardust sample return capsule returned to Earth bearing the first solid samples from a primitive solar system body, Comet 81P/Wild 2, and a collector dedicated to the capture and return of contemporary interstellar dust. Both collectors were approximately 0.1 m^2 in area and were composed of aerogel tiles (85% of the collecting area) and aluminum foils. The Stardust Interstellar Dust Collector (SIDC) was exposed to the interstellar dust stream for a total exposure factor of 20 m^2 day during two periods before the cometary encounter. The Stardust Interstellar Preliminary Examination (ISPE) is a three-year effort to characterize the collection using nondestructive techniques. The ISPE consists of six interdependent projects: (1) candidate identification through automated digital microscopy and a massively distributed, calibrated search; (2) candidate extraction and photodocumentation; (3) characterization of candidates through synchrotron-based Fourier Transform Infrared Spectroscopy (FTIR), Scanning X-Ray Fluorescence Microscopy (SXRF), and Scanning Transmission X-ray Microscopy (STXM); (4) search for and analysis of craters in foils through FESEM scanning, Auger Spectroscopy, and synchrotron-based Photoemission Electron Microscopy (PEEM); (5) modeling of interstellar dust transport in the solar system; and (6) laboratory simulations of hypervelocity dust impacts into the collecting media.

  13. An Optimal Cost Effectiveness Study on Zimbabwe Cholera Seasonal Data from 2008–2011

    Science.gov (United States)

    Sardar, Tridip; Mukhopadhyay, Soumalya; Bhowmick, Amiya Ranjan; Chattopadhyay, Joydev

    2013-01-01

    Incidence of cholera outbreaks is a serious issue in underdeveloped and developing countries. In Zimbabwe, after the massive outbreak in 2008–09, cholera cases and deaths have been reported every year from some provinces. The substantial number of reported cholera cases in some provinces during and after the 2008–09 epidemic indicates a plausible presence of seasonality in cholera incidence in those regions. We formulate a compartmental mathematical model with a periodic slow-fast transmission rate to study such recurrent occurrences, and fitted the model to cumulative cholera cases and deaths for different provinces of Zimbabwe from the beginning of the cholera outbreak in 2008–09 to June 2011. Daily and weekly reported cholera incidence data were collected from the Zimbabwe epidemiological bulletin, Zimbabwe Daily cholera updates and the Office for the Coordination of Humanitarian Affairs Zimbabwe (OCHA, Zimbabwe). For each province, the basic reproduction number (R0) in a periodic environment is estimated. To the best of our knowledge, this is probably a pioneering attempt to estimate R0 in a periodic environment using a real-life data set of a cholera epidemic for Zimbabwe. Our estimates of R0 agree with previous estimates for some provinces but differ significantly for Bulawayo, Mashonaland West, Manicaland, Matabeleland South and Matabeleland North. A seasonal trend in cholera incidence is observed in Harare, Mashonaland West, Mashonaland East, Manicaland and Matabeleland South. Our result suggests that slow transmission is a dominating factor for cholera transmission in most of these provinces. Our model projects cholera cases and cholera deaths from the end of the 2008–09 epidemic to January 1, 2012. We also determine an optimal cost-effective control strategy among the four government-undertaken interventions, namely promoting hand-hygiene & clean water distribution, vaccination, treatment and sanitation, for each province. PMID:24312540
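    The core mechanism of such models, a compartmental epidemic with a periodically forced transmission rate, can be sketched with a generic seasonally forced SIR system. This is not the authors' slow-fast cholera model; the sinusoidal forcing, parameter values, and forward-Euler integration are all illustrative assumptions.

    ```python
    import math

    def seasonal_sir(beta0=0.3, eps=0.2, gamma=0.1, days=730, dt=0.1):
        """Forward-Euler integration of an SIR model whose transmission rate
        is periodically forced: beta(t) = beta0 * (1 + eps*cos(2*pi*t/365)).
        All parameter values are illustrative, not fitted to the Zimbabwe data."""
        S, I, R = 0.99, 0.01, 0.0          # fractions of the population
        peak = I
        for k in range(int(days / dt)):
            beta = beta0 * (1 + eps * math.cos(2 * math.pi * (k * dt) / 365))
            new_inf = beta * S * I          # incidence term
            rec = gamma * I                 # recovery term
            S, I, R = S - dt * new_inf, I + dt * (new_inf - rec), R + dt * rec
            peak = max(peak, I)
        return S, I, R, peak

    S_end, I_end, R_end, peak = seasonal_sir()
    ```

    With a time-averaged transmission rate beta0 and recovery rate gamma, the classical ratio beta0/gamma gives a first rough estimate of R0; in a genuinely periodic environment, as the abstract notes, R0 must instead be computed from the full periodic system.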

  14. Screening and optimization of low-cost medium for Pseudomonas putida Rs-198 culture using RSM.

    Science.gov (United States)

    Peng, Yanjie; He, Yanhui; Wu, Zhansheng; Lu, Jianjiang; Li, Chun

    2014-01-01

    The plant growth-promoting rhizobacterial strain Pseudomonas putida Rs-198 was isolated from salinized soils of Xinjiang Province. We optimized the composition of a low-cost medium for P. putida Rs-198 based on its bacterial concentration, as well as its phosphate-dissolving and indole acetic acid (IAA)-producing capabilities, using the response surface methodology (RSM), and a mathematical model was developed to show the effect of each medium component and its interactions on phosphate dissolution and IAA production. The model predicted a maximum phosphate concentration of 63.23 mg/L inorganic phosphate in medium containing 49.22 g/L corn flour, 14.63 g/L soybean meal, 2.03 g/L K₂HPO₄, 0.19 g/L MnSO₄ and 5.00 g/L NaCl. The maximum IAA concentration (18.73 mg/L) was predicted in medium containing 52.41 g/L corn flour, 15.82 g/L soybean meal, 2.40 g/L K₂HPO₄, 0.17 g/L MnSO₄ and 5.00 g/L NaCl. These predicted values were also verified through experiments, with a cell density of 10^13 cfu/mL, phosphate dissolution of 64.33 mg/L, and an IAA concentration of 18.08 mg/L. The excellent correlation between predicted and measured values for each model confirms the validity of both response models. The study aims to provide a basis for industrialized fermentation using P. putida Rs-198.
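    The RSM step fits a full second-order polynomial to the response and locates its stationary point. A minimal sketch of that procedure for two factors, on synthetic data with a known optimum (the factors and response here are placeholders, not the paper's medium components):

    ```python
    import numpy as np

    # Synthetic 2-factor response with a known maximum at (x1, x2) = (1.0, -0.5)
    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(30, 2))
    y = 10 - (X[:, 0] - 1.0) ** 2 - 2 * (X[:, 1] + 0.5) ** 2

    # Full second-order RSM design matrix: 1, x1, x2, x1^2, x2^2, x1*x2
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    b = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares coefficients

    # Stationary point: solve grad = 0 for the fitted quadratic surface
    Bmat = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    x_opt = np.linalg.solve(Bmat, -b[1:3])
    ```

    In a real RSM study the same fit is made to measured responses (here, phosphate dissolution or IAA yield) over a designed set of medium compositions, and the stationary point gives the predicted optimal composition.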

  15. Screening and optimization of low-cost medium for Pseudomonas putida Rs-198 culture using RSM

    Directory of Open Access Journals (Sweden)

    Yanjie Peng

    2014-12-01

    The plant growth-promoting rhizobacterial strain Pseudomonas putida Rs-198 was isolated from salinized soils of Xinjiang Province. We optimized the composition of a low-cost medium for P. putida Rs-198 based on its bacterial concentration, as well as its phosphate-dissolving and indole acetic acid (IAA)-producing capabilities, using the response surface methodology (RSM), and a mathematical model was developed to show the effect of each medium component and its interactions on phosphate dissolution and IAA production. The model predicted a maximum phosphate concentration of 63.23 mg/L inorganic phosphate in medium containing 49.22 g/L corn flour, 14.63 g/L soybean meal, 2.03 g/L K2HPO4, 0.19 g/L MnSO4 and 5.00 g/L NaCl. The maximum IAA concentration (18.73 mg/L) was predicted in medium containing 52.41 g/L corn flour, 15.82 g/L soybean meal, 2.40 g/L K2HPO4, 0.17 g/L MnSO4 and 5.00 g/L NaCl. These predicted values were also verified through experiments, with a cell density of 10^13 cfu/mL, phosphate dissolution of 64.33 mg/L, and an IAA concentration of 18.08 mg/L. The excellent correlation between predicted and measured values for each model confirms the validity of both response models. The study aims to provide a basis for industrialized fermentation using P. putida Rs-198.

  16. Simulation, optimization and analysis of cost of biodiesel plant pot route enzymatic; Simulacao, otimizacao e analise de custo de planta de biodiesel via rota enzimatica

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Jocelia S.; Ferreira, Andrea L.O. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Silva, Giovanilton F. [Tecnologia Bioenergetica - Tecbio, Fortaleza, CE (Brazil)

    2008-07-01

    The aim of this work was the simulation, optimization, and cost analysis of biodiesel production via an enzymatic route. A methodology of economic calculation and sensitivity analysis was carried out for this process, and computational software based on balance equations was used to obtain the biodiesel cost. The economic analysis was based on the capital cost of the biofuel. The whole process was evaluated through analysis of fixed capital cost, total manufacturing cost, raw material cost, and chemical cost. The economic calculations for biodiesel production proved efficient, and the model is meant for use in assessing the effects of changes in different types of oils on the estimated biodiesel production cost. (author)

  17. Physical processes in the interstellar medium

    CERN Document Server

    Spitzer, Lyman

    2008-01-01

    Physical Processes in the Interstellar Medium discusses the nature of interstellar matter, with a strong emphasis on basic physical principles, and summarizes the present state of knowledge about the interstellar medium by providing the latest observational data. Physics and chemistry of the interstellar medium are treated, with frequent references to observational results. The overall equilibrium and dynamical state of the interstellar gas are described, with discussions of explosions produced by star birth and star death and the initial phases of cloud collapse leading to star formation.

  18. Central Plant Optimization for Waste Energy Reduction (CPOWER). ESTCP Cost and Performance Report

    Science.gov (United States)

    2016-12-01

    Project EW-201349, Central Plant Optimization for Waste Energy Reduction (CPOWER), December 2016; this document has been cleared for public release. Girija Parthasarathy, Honeywell, 1985 Douglas Drive North, Golden Valley, MN 55422. The report demonstrates a technology that commands all equipment in a central plant through the Building Automation System (BAS).

  19. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    CERN Document Server

    Brochu, Eric; de Freitas, Nando

    2010-01-01

    We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments---active user modelling with preferences, and hierarchical reinforcement learning---and a discussion of the pros and cons of Bayesian optimization based on our experiences.
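    The loop the tutorial describes, fit a probabilistic surrogate, maximize an acquisition function that trades off exploration against exploitation, evaluate the expensive objective at the chosen point, and repeat, can be sketched with a small Gaussian-process surrogate and the expected-improvement acquisition. The RBF kernel, length scale, grid, and toy objective below are illustrative assumptions, not the tutorial's examples.

    ```python
    import numpy as np
    from math import erf, sqrt, pi

    def kernel(a, b, ls=0.3):
        """Squared-exponential (RBF) covariance between two 1-D point sets."""
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

    def gp_posterior(Xobs, yobs, Xq, noise=1e-6):
        """Zero-mean GP posterior mean and variance at query points Xq."""
        Kinv = np.linalg.inv(kernel(Xobs, Xobs) + noise * np.eye(len(Xobs)))
        Ks = kernel(Xobs, Xq)
        mu = Ks.T @ Kinv @ yobs
        var = 1.0 - np.einsum('ij,ji->i', Ks.T @ Kinv, Ks)
        return mu, np.maximum(var, 1e-12)

    def expected_improvement(mu, var, best):
        """EI for maximization: balances high mean against high uncertainty."""
        sd = np.sqrt(var)
        z = (mu - best) / sd
        cdf = 0.5 * (1 + np.array([erf(v / sqrt(2)) for v in z]))
        pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
        return (mu - best) * cdf + sd * pdf

    def objective(x):       # expensive black-box stand-in (maximum at x = 0.7)
        return -(x - 0.7) ** 2

    Xq = np.linspace(0, 1, 201)            # candidate grid
    Xobs = np.array([0.1, 0.9])            # initial design
    yobs = objective(Xobs)
    for _ in range(10):                    # BO loop: fit, pick, evaluate
        mu, var = gp_posterior(Xobs, yobs, Xq)
        x_next = Xq[np.argmax(expected_improvement(mu, var, yobs.max()))]
        Xobs = np.append(Xobs, x_next)
        yobs = np.append(yobs, objective(x_next))

    best_x = Xobs[np.argmax(yobs)]
    ```

    After a handful of evaluations the sampled points concentrate near the optimum: early iterations favor regions of high posterior uncertainty (exploration), later ones regions with high posterior mean (exploitation), exactly the trade-off the abstract describes.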

  20. Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant

    Science.gov (United States)

    El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.

    Fuel cell power plants (FCPP) as a combined source of heat, power and hydrogen (CHP&H) can be considered as a potential option to supply both thermal and electrical loads. Hydrogen produced from the FCPP can be stored for future use of the FCPP or can be sold for profit. In such a system, tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and Hill-Climbing based approach to evaluate the impact of change of the above mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize overall operating cost.

  1. Optimal DO Setpoint Decision and Electric Cost Saving in Aerobic Reactor Using Respirometer and Air Blower Control

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kwang Su; Yoo, Changkyoo [Kyung Hee University, Yongin (Korea, Republic of); Kim, Minhan [Pangaea21 Ltd., Seongnam (Korea, Republic of); Kim, Jongrack [UnUsoft Ltd., Seoul (Korea, Republic of)

    2014-10-15

    The main objectives of wastewater treatment operation are to maintain effluent water quality and to minimize operating cost. However, optimal operation is difficult because of changes in influent flow rate and concentrations, the nonlinear dynamics of microbial growth rates, and other environmental factors. Therefore, many wastewater treatment plants are operated with far more aeration or chemical dosing than necessary. In this study, an optimal control scheme for dissolved oxygen (DO) is suggested to prevent over-aeration and reduce the electricity cost of plant operation while maintaining the DO concentration needed for the metabolism of the microorganisms in the oxic reactor. The oxygen uptake rate (OUR) is measured in real time to identify the influent characteristics and the oxygen requirement of the microorganisms in the oxic reactor. The optimal DO set-point needed for the microorganisms is suggested based on real-time measurement of the microbial oxygen uptake and control of the air blower. Both stable effluent quality and minimal electricity cost are thus achieved by the suggested optimal set-point decision system, which provides the necessary oxygen supply to the microorganisms while coping with variations in influent loading.

  2. The Sun's dusty interstellar environment

    Science.gov (United States)

    Sterken, Veerle

    2016-07-01

    Interstellar dust from our immediate interstellar neighborhood travels through the solar system at speeds of ca. 26 km/s: the relative speed of the solar system with respect to the local interstellar cloud. On its way, its trajectories are altered by several forces, such as the solar radiation pressure force and the Lorentz force, the latter acting on the charged dust particles as they fly through the interplanetary magnetic field. These trajectories differ per particle type and size and lead to varying fluxes and directions of the flow inside the solar system that depend on location but also on the phase of the solar cycle. Hence, these fluxes and directions depend strongly on the configuration of the inner and outer regions of the heliosphere. Several missions have measured this dust in the solar system directly. The Ulysses dust detector data encompass 16 years of interstellar dust fluxes and approximate directions, Stardust captured and successfully returned to Earth a few of these particles, and the Cassini dust detector allowed compositional information to be obtained from impacts on the instrument. In this talk, we give an overview of the current status of interstellar dust research through the measurements made inside the solar system, and we put them in perspective with the knowledge obtained by more classical astronomical means. In particular, we focus on the interaction of the dust with the interplanetary magnetic field, and on what we learn about the dust (and the fields) by comparing the available dust data to computer simulations of dust trajectories. Finally, we synthesize the different methods of observation and their results, and give a preview of new research opportunities in the coming year(s).

  3. PAHs in Translucent Interstellar Clouds

    Science.gov (United States)

    Salama, Farid; Galazutdinov, G.; Krelowski, J.; Biennier, L.; Beletsky, Y.; Song, I.

    2011-05-01

    We discuss the proposal relating the origin of some of the diffuse interstellar bands (DIBs) to neutral polycyclic aromatic hydrocarbons (PAHs) present in translucent interstellar clouds. The spectra of several cold, isolated gas-phase PAHs have been measured in the laboratory under experimental conditions that mimic interstellar conditions and are compared with an extensive set of astronomical spectra of reddened, early-type stars. This comparison provides, for the first time, accurate upper limits for the abundances of specific PAH molecules along specific lines of sight, something that is not attainable from IR observations alone. The comparison of these unique laboratory data with high-resolution, high-S/N-ratio astronomical observations leads to two major findings: (1) a finding specific to the individual molecules probed in this study, leading to the clear and unambiguous conclusion that the abundance of these specific neutral PAHs must be very low in the individual translucent interstellar clouds probed in this survey (PAH features remain below the level of detection); and (2) a general finding that neutral PAHs exhibit intrinsic band profiles similar to the profile of the narrow DIBs, indicating that the carriers of the narrow DIBs must have closely similar molecular structures and characteristics. This study is the first quantitative survey of neutral PAHs in the optical range, and it opens the way for unambiguous quantitative searches of PAHs in a variety of interstellar and circumstellar environments. // Reference: F. Salama et al. (2011) ApJ 728 (1), 154 // Acknowledgements: F.S. acknowledges the support of NASA's Space Mission Directorate APRA Program. J.K. acknowledges the financial support of the Polish State (grant N203 012 32/1550). The authors are deeply grateful to the ESO archive as well as to the ESO staff members for their active support.

  4. Optimizing continuous miner coal production systems based on production and production cost

    Energy Technology Data Exchange (ETDEWEB)

    Chugh, Y.P.; Patwardhan, A.; Moharana, A. [Southern Illinois University, Carbondale, IL (United States). Department of Mining and Minerals Resources Engineering

    2005-07-01

    The purpose is to develop a face production cost model for room-and-pillar mining, to integrate the model with SIU-Suboleski production (SSP) modeling software, and to show that combining face production cost with ROM production provides a better indicator for comparing equipment costs. Several production planning scenarios were modeled. An SSP model was developed that incorporates capital, operating, and production costs for each option and calculates an estimate of face production cost on a raw coal basis. 6 refs., 5 figs., 6 tabs.

  5. Application of a Multi-Objective Optimization Method to Provide Least Cost Alternatives for NPS Pollution Control

    Science.gov (United States)

    Maringanti, Chetan; Chaubey, Indrajeet; Arabi, Mazdak; Engel, Bernard

    2011-09-01

    Nonpoint source (NPS) pollutants such as phosphorus, nitrogen, sediment, and pesticides are the foremost sources of water contamination in many water bodies of the Midwestern agricultural watersheds. This problem is expected to grow with the increasing demand for corn as grain or stover for biofuel production. Best management practices (BMPs) have been proven to effectively reduce NPS pollutant loads from agricultural areas. However, in a watershed with multiple farms and multiple BMPs feasible for implementation, it becomes a daunting task to choose the right combination of BMPs that provides maximum pollution reduction at least implementation cost. Multi-objective algorithms capable of searching among a large number of solutions are required to meet the given watershed management objectives. Genetic algorithms have been the most popular optimization algorithms for BMP selection and placement. However, previous BMP optimization models did not study pesticides, which are very commonly used in corn areas. Also, with corn stover projected as a viable alternative for biofuel production, there might be unintended consequences of reduced residue in corn fields on water quality. Therefore, there is a need to study the impact of different levels of residue management in combination with other BMPs at a watershed scale. In this research the following BMPs were selected for placement in the watershed: (a) residue management, (b) filter strips, (c) parallel terraces, (d) contour farming, and (e) tillage. We present a novel method of combining different NPS pollutants into a single objective function, which, along with the net costs, was used as one of the two objective functions during optimization. In this study we used the BMP tool, a database that contains the pollution reduction and cost information of the different BMPs under consideration and provides pollutant loads during optimization. The BMP optimization was performed using a NSGA
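    The essence of the two-objective setup, a combined pollutant score versus implementation cost, is that no single "best" BMP portfolio exists, only a Pareto front of non-dominated trade-offs, which is what an NSGA-type genetic algorithm searches for. The sketch below shows only the non-dominated filtering step on hypothetical portfolios; a full NSGA additionally does crowding-distance sorting, selection, crossover, and mutation.

    ```python
    import random

    # Hypothetical BMP portfolios as (combined pollutant load, implementation
    # cost) pairs; the load stands in for a weighted sum of normalized N, P,
    # sediment, and pesticide loads, as in the single objective described above.
    random.seed(1)
    portfolios = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(50)]

    def pareto_front(points):
        """Return the non-dominated set when both objectives are minimized."""
        return [p for p in points
                if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                           for q in points)]

    front = pareto_front(portfolios)
    ```

    A watershed manager then picks a point from the front according to the budget: the cheapest acceptable portfolio, or the best pollutant reduction affordable at a given cost.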

  6. Optimizing power plant cycling operations while reducing generating plant damage and costs

    Energy Technology Data Exchange (ETDEWEB)

    Lefton, S.A.; Besuner, P.H.; Grimsrud, P. [Aptech Engineering Services, Inc., Sunnyvale, CA (United States); Bissel, A. [Electric Supply Board, Dublin (Ireland)

    1998-12-31

    This presentation describes a method for analyzing, quantifying, and minimizing the total cost of fossil, combined cycle, and pumped hydro power plant cycling operation. The method has been developed, refined, and applied during engineering studies at some 160 units in the United States and 8 units in the Irish Electric Supply Board (ESB) generating system. The basic premise of these studies was that utilities are underestimating the cost of cycling operation. The studies showed that the cost of cycling conventional boiler/turbine fossil power plants can range between $2,500 and $500,000 per start-stop cycle. It was found that utilities typically estimate these costs at factors of 3 to 30 below actual costs and, thus, often significantly underestimate their true cycling costs. Knowledge of the actual, or total, cost of cycling will reduce power production costs by enabling utilities to more accurately dispatch their units to manage unit life expectancies, maintenance strategies and reliability. Utility management responses to these costs are presented and utility cost savings have been demonstrated. (orig.) 7 refs.

  7. METHODOLOGY FOR DETERMINING THE OPTIMAL CLEANING PERIOD OF HEAT EXCHANGERS BY USING THE CRITERIA OF MINIMUM COST

    Directory of Open Access Journals (Sweden)

    Yanileisy Rodríguez Calderón

    2015-04-01

    One of the most serious problems in the process industry is that maintenance of heat exchangers is often planned without applying methodologies based on economic criteria to optimize the cleaning periods of the transfer surfaces, resulting in additional costs for the company and for the country. This work develops and proposes a methodology based on the criterion of minimum cost for determining the optimal cleaning period. An example is given of the application of this method to the intercoolers of a centrifugal compressor with a high fouling level. The fouling occurs because sea water carrying many microorganisms is used as the cooling agent, which severely fouls the water-side transfer surfaces. The methodology can be generalized to other applications.
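    A minimum-cost criterion of this kind balances a fixed cost per cleaning against an operating penalty that grows as fouling accumulates. Under the simple assumption that the fouling penalty grows linearly in time (the cost figures below are placeholders, not the paper's data), the average cost rate over a cycle has a closed-form minimizer:

    ```python
    import math

    # Illustrative parameters: each cleaning costs C_CLEAN; fouling raises the
    # operating (energy) penalty linearly at a rate of A currency-units/day^2.
    C_CLEAN = 5000.0
    A = 2.5

    def cost_rate(T):
        """Average cost per day over a cleaning cycle of length T days:
        cleaning cost spread over the cycle plus the mean fouling penalty."""
        return C_CLEAN / T + A * T / 2.0

    # d(cost_rate)/dT = -C_CLEAN/T^2 + A/2 = 0  =>  T* = sqrt(2*C_CLEAN/A)
    T_opt = math.sqrt(2.0 * C_CLEAN / A)
    ```

    Note that at the optimum the two terms are equal, i.e. the prorated cleaning cost matches the mean fouling penalty, the same equal-split structure that appears in the beacon cost optimum discussed at the top of this page.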

  8. Interstellar dust. Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft.

    Science.gov (United States)

    Westphal, Andrew J; Stroud, Rhonda M; Bechtel, Hans A; Brenker, Frank E; Butterworth, Anna L; Flynn, George J; Frank, David R; Gainsforth, Zack; Hillier, Jon K; Postberg, Frank; Simionovici, Alexandre S; Sterken, Veerle J; Nittler, Larry R; Allen, Carlton; Anderson, David; Ansari, Asna; Bajt, Saša; Bastien, Ron K; Bassim, Nabil; Bridges, John; Brownlee, Donald E; Burchell, Mark; Burghammer, Manfred; Changela, Hitesh; Cloetens, Peter; Davis, Andrew M; Doll, Ryan; Floss, Christine; Grün, Eberhard; Heck, Philipp R; Hoppe, Peter; Hudson, Bruce; Huth, Joachim; Kearsley, Anton; King, Ashley J; Lai, Barry; Leitner, Jan; Lemelle, Laurence; Leonard, Ariel; Leroux, Hugues; Lettieri, Robert; Marchant, William; Ogliore, Ryan; Ong, Wei Jia; Price, Mark C; Sandford, Scott A; Sans Tresseras, Juan-Angel; Schmitz, Sylvia; Schoonjans, Tom; Schreiber, Kate; Silversmit, Geert; Solé, Vicente A; Srama, Ralf; Stadermann, Frank; Stephan, Thomas; Stodolna, Julien; Sutton, Stephen; Trieloff, Mario; Tsou, Peter; Tyliszczak, Tolek; Vekemans, Bart; Vincze, Laszlo; Von Korff, Joshua; Wordsworth, Naomi; Zevin, Daniel; Zolensky, Michael E

    2014-08-15

    Seven particles captured by the Stardust Interstellar Dust Collector and returned to Earth for laboratory analysis have features consistent with an origin in the contemporary interstellar dust stream. More than 50 spacecraft debris particles were also identified. The interstellar dust candidates are readily distinguished from debris impacts on the basis of elemental composition and/or impact trajectory. The seven candidate interstellar particles are diverse in elemental composition, crystal structure, and size. The presence of crystalline grains and multiple iron-bearing phases, including sulfide, in some particles indicates that individual interstellar particles diverge from any one representative model of interstellar dust inferred from astronomical observations and theory.

  9. Optimal costs of HIV pre-exposure prophylaxis for men who have sex with men.

    Science.gov (United States)

    McKenney, Jennie; Chen, Anders; Hoover, Karen W; Kelly, Jane; Dowdy, David; Sharifi, Parastu; Sullivan, Patrick S; Rosenberg, Eli S

    2017-01-01

    Men who have sex with men (MSM) are disproportionately affected by HIV due to their increased risk of infection. Oral pre-exposure prophylaxis (PrEP) is a highly effective HIV-prevention strategy for MSM. Despite evidence of its effectiveness, PrEP uptake in the United States has been slow, in part due to its cost. As jurisdictions and health organizations begin to think about PrEP scale-up, the high cost to society needs to be understood. We modified a previously described decision-analysis model to estimate the cost per quality-adjusted life-year (QALY) gained, over a 1-year duration of PrEP intervention and a lifetime time horizon. Using updated parameter estimates, we calculated: 1) the cost per QALY gained, stratified over 4 strata of PrEP cost (a function of both drug cost and provider costs); and 2) the PrEP drug cost per year required to fall at or under 4 cost-per-QALY-gained thresholds. When PrEP drug costs were reduced by 60% (with no sexual disinhibition) to 80% (assuming 25% sexual disinhibition), PrEP was cost-effective (at <$100,000 per QALY averted) in all scenarios of base-case or better adherence, as long as the background HIV prevalence was greater than 10%. For PrEP to be cost saving at base-case adherence/efficacy levels and at a background prevalence of 20%, drug cost would need to be reduced to $8,021 per year with no disinhibition, and to $2,548 with disinhibition. Results from our analysis suggest that PrEP drug costs need to be reduced in order to be cost-effective across a range of background HIV prevalence. Moreover, our results provide guidance on the pricing of generic emtricitabine/tenofovir disoproxil fumarate, in order to provide those at high risk for HIV an affordable prevention option without financial burden on individuals or jurisdictions scaling up coverage.
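    The cost-per-QALY metric underlying this kind of analysis is the incremental cost-effectiveness ratio (ICER): extra cost divided by extra QALYs relative to a comparator, judged against a willingness-to-pay threshold. A minimal sketch with entirely made-up numbers (the $100,000/QALY threshold is the one used in the abstract; the costs and QALYs are placeholders, not the paper's estimates):

    ```python
    def icer(cost_new, qaly_new, cost_base, qaly_base):
        """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
        return (cost_new - cost_base) / (qaly_new - qaly_base)

    # Hypothetical example: the intervention adds $120,000 of lifetime cost and
    # 1.5 QALYs per person relative to no intervention.
    ratio = icer(cost_new=400_000, qaly_new=21.5, cost_base=280_000, qaly_base=20.0)
    cost_effective = ratio < 100_000     # willingness-to-pay threshold
    ```

    An intervention is called cost saving (a stronger claim than cost-effective) when the incremental cost is negative, i.e. it both saves money and gains QALYs, which is the condition the drug-price targets in the abstract are derived from.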

  10. Robust optimization for nonlinear time-delay dynamical system of dha regulon with cost sensitivity constraint in batch culture

    Science.gov (United States)

    Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong

    2016-09-01

    Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements, and a cost sensitivity constraint for ensuring that an acceptable level of system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high; thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method. Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory.

  11. Scaling of swim speed and stroke frequency in geometrically similar penguins: they swim optimally to minimize cost of transport

    Science.gov (United States)

    Sato, Katsufumi; Shiomi, Kozue; Watanabe, Yuuki; Watanuki, Yutaka; Takahashi, Akinori; Ponganis, Paul J.

    2010-01-01

    It has been predicted that geometrically similar animals would swim at the same speed with stroke frequency scaling with mass^(-1/3). In the present study, morphological and behavioural data obtained from free-ranging penguins (seven species) were compared. Morphological measurements support the geometrical similarity. However, cruising speeds of 1.8-2.3 m s^(-1) were significantly related to mass^0.08 and stroke frequencies were proportional to mass^(-0.29). These scaling relationships do not agree with the previous predictions for geometrically similar animals. We propose a theoretical model, considering metabolic cost, work against mechanical forces (drag and buoyancy), pitch angle and dive depth. This new model predicts that: (i) the optimal swim speed, which minimizes the energy cost of transport, is proportional to (basal metabolic rate/drag)^(1/3), independent of buoyancy, pitch angle and dive depth; (ii) the optimal speed is related to mass^0.05; and (iii) stroke frequency is proportional to mass^(-0.28). The observed scaling relationships of penguins support these predictions, which suggests that breath-hold divers swim optimally to minimize the cost of transport, including mechanical and metabolic energy, during dives. PMID:19906666
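The optimal-speed result in this record can be illustrated with a small numerical sketch. Assuming total power P(v) = BMR + a·v³ (hypothetical values for BMR and the drag coefficient a, not the paper's data), the cost of transport P(v)/v is minimized where 2·a·v³ = BMR, i.e. at v ∝ (BMR/drag)^(1/3):

```python
# Minimal sketch of the cost-of-transport argument (hypothetical BMR and drag
# coefficient 'a'; units arbitrary). Total power: P(v) = BMR + a*v**3, so the
# cost of transport COT(v) = P(v)/v is minimized where 2*a*v**3 = BMR.

def cost_of_transport(v, bmr, a):
    """Energy per unit distance: (basal metabolic rate + drag power) / speed."""
    return (bmr + a * v ** 3) / v

def optimal_speed(bmr, a):
    """Analytic minimizer: d/dv[(bmr + a*v^3)/v] = 0  =>  v = (bmr/(2a))^(1/3)."""
    return (bmr / (2.0 * a)) ** (1.0 / 3.0)

def grid_minimize(f, lo, hi, n=100_000):
    """Brute-force numerical check of the analytic optimum."""
    best_v, best_c = lo, f(lo)
    for i in range(1, n + 1):
        v = lo + (hi - lo) * i / n
        c = f(v)
        if c < best_c:
            best_v, best_c = v, c
    return best_v

bmr, a = 10.0, 1.0
v_star = optimal_speed(bmr, a)
v_num = grid_minimize(lambda v: cost_of_transport(v, bmr, a), 0.1, 10.0)
assert abs(v_star - v_num) < 1e-3
```

Note that buoyancy and pitch angle drop out of the minimizer, matching prediction (i) above.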

  12. Life cycle cost optimization of buildings with regard to energy use, thermal indoor environment and daylight

    DEFF Research Database (Denmark)

    Nielsen, Toke Rammer; Svendsen, Svend

    2002-01-01

    Buildings represent a large economic investment and have long service lives, through which expenses for heating, cooling, maintenance and replacement depend on the chosen building design. Therefore, the building cost should not only be evaluated by the initial investment cost but rather by the ...

  13. Living renal donors: optimizing the imaging strategy--decision- and cost-effectiveness analysis

    NARCIS (Netherlands)

    Y.S. Liem (Ylian Serina); M.C.J.M. Kock (Marc); W. Weimar (Willem); K. Visser (Karen); M.G.M. Hunink (Myriam); J.N.M. IJzermans (Jan)

    2003-01-01

    PURPOSE: To determine the most cost-effective strategy for preoperative imaging performed in potential living renal donors. MATERIALS AND METHODS: In a decision-analytic model, the societal cost-effectiveness of digital subtraction angiography (DSA), gadolinium-enhanced

  14. Social welfare and the Affordable Care Act: is it ever optimal to set aside comparative cost?

    Science.gov (United States)

    Mortimer, Duncan; Peacock, Stuart

    2012-10-01

    The creation of the Patient-Centered Outcomes Research Institute (PCORI) under the Affordable Care Act has set comparative effectiveness research (CER) at centre stage of US health care reform. Comparative cost analysis has remained marginalised and it now appears unlikely that the PCORI will require comparative cost data to be collected as an essential component of CER. In this paper, we review the literature to identify ethical and distributional objectives that might motivate calls to set priorities without regard to comparative cost. We then present argument and evidence to consider whether there is any plausible set of objectives and constraints against which priorities can be set without reference to comparative cost. We conclude that setting aside comparative cost, even after accounting for ethical and distributional constraints, would truly be to act as if money were no object.

  15. Depolarization canals and interstellar turbulence

    CERN Document Server

    Fletcher, A; Fletcher, Andrew; Shukurov, Anvar

    2006-01-01

    Recent radio polarization observations have revealed a plethora of unexpected features in the polarized Galactic radio background that arise from propagation effects in the random (turbulent) interstellar medium. The canals are especially striking among them, a random network of very dark, narrow regions clearly visible in many directions against a bright polarized Galactic synchrotron background. There are no obvious physical structures in the ISM that may have caused the canals, and so they have been called Faraday ghosts. They evidently carry information about interstellar turbulence but only now is it becoming clear how this information can be extracted. Two theories for the origin of the canals have been proposed; both attribute the canals to Faraday rotation, but one invokes strong gradients in Faraday rotation in the sky plane (specifically, in a foreground Faraday screen) and the other only relies on line-of-sight effects (differential Faraday rotation). In this review we discuss the physical nature o...

  16. Interstellar Grains: 50 Years On

    CERN Document Server

    Wickramasinghe, N Chandra

    2011-01-01

    Our understanding of the nature of interstellar grains has evolved considerably over the past half century with the present author and Fred Hoyle being intimately involved at several key stages of progress. The currently fashionable graphite-silicate-organic grain model has all its essential aspects unequivocally traceable to original peer-reviewed publications by the author and/or Fred Hoyle. The prevailing reluctance to accept these clear-cut priorities may be linked to our further work that argued for interstellar grains and organics to have a biological provenance - a position perceived as heretical. The biological model, however, continues to provide a powerful unifying hypothesis for a vast amount of otherwise disconnected and disparate astronomical data.

  17. Ionization in nearby interstellar gas

    Science.gov (United States)

    Frisch, P. C.; Welty, D. E.; York, D. G.; Fowler, J. R.

    1990-01-01

    Due to dielectronic recombination, neutral magnesium represents an important tracer for the warm low-density gas around the solar system. New Mg I 2852 absorption-line data from IUE are presented, including detections in a few stars within 40 pc of the sun. The absence of detectable Mg I in Alpha CMa and other stars sets limits on the combined size and electron density of the interstellar cloud which gives rise to the local interstellar wind. For a cloud radius greater than 1 pc and density of 0.1/cu cm, the local cloud has a low fractional ionization, n(e)/n(tot) less than 0.05, if magnesium is undepleted, equilibrium conditions prevail, the cloud temperature is 11,750 K, and 80 percent of the magnesium in the sightline is Mg II.

  18. Ionization in nearby interstellar gas

    Energy Technology Data Exchange (ETDEWEB)

    Frisch, P.C.; Welty, D.E.; York, D.G.; Fowler, J.R. (Chicago Univ., IL (USA); New Mexico State Univ., Las Cruces (USA))

    1990-07-01

    Due to dielectronic recombination, neutral magnesium represents an important tracer for the warm low-density gas around the solar system. New Mg I 2852 absorption-line data from IUE are presented, including detections in a few stars within 40 pc of the sun. The absence of detectable Mg I in Alpha CMa and other stars sets limits on the combined size and electron density of the interstellar cloud which gives rise to the local interstellar wind. For a cloud radius greater than 1 pc and density of 0.1/cu cm, the local cloud has a low fractional ionization, n(e)/n(tot) less than 0.05, if magnesium is undepleted, equilibrium conditions prevail, the cloud temperature is 11,750 K, and 80 percent of the magnesium in the sightline is Mg II. 85 refs.

  19. Optimization of transport network in the Basin of Yangtze River with minimization of environmental emission and transport/investment costs

    Directory of Open Access Journals (Sweden)

    Haiping Shi

    2016-08-01

    Full Text Available The capacity of the ship-lock at the Three Gorges Dam has become a bottleneck of waterway transport and has caused serious congestion. In this article, a continual network design model is established to solve the problem by minimizing the transport cost and environmental emission as well as the infrastructure construction cost. In this bi-level model, the upper-level model gives the schemes of ship-lock expansion or construction of a pass-dam highway. The lower-level model assigns the containers in the multi-mode network and calculates the transport cost, environmental emission, and construction investment. A solution algorithm for the model is proposed. In the numerical study, scenario analyses are performed to evaluate the schemes and determine the optimal one in the context of different traffic demands. The result shows that expanding the ship-lock is better than constructing a pass-dam highway.

  20. Non-fragile robust optimal guaranteed cost control of uncertain 2-D discrete state-delayed systems

    Science.gov (United States)

    Tandon, Akshata; Dhawan, Amit

    2016-10-01

    This paper is concerned with the problem of non-fragile robust optimal guaranteed cost control for a class of uncertain two-dimensional (2-D) discrete state-delayed systems described by the general model with norm-bounded uncertainties. Our attention is focused on the design of non-fragile state feedback controllers such that the resulting closed-loop system is asymptotically stable and the closed-loop cost function value is not more than a specified upper bound for all admissible parameter uncertainties and controller gain variations. A sufficient condition for the existence of such controllers is established under the linear matrix inequality framework. Moreover, a convex optimisation problem is proposed to select a non-fragile robust optimal guaranteed cost controller stabilising the 2-D discrete state-delayed system as well as achieving the least guaranteed cost for the resulting closed-loop system. The proposed method is compared with the previously reported criterion. Finally, illustrative examples are given to show the potential of the proposed technique.

  1. The optimization model of the vendor selection for the joint procurement from a total cost of ownership perspective

    Directory of Open Access Journals (Sweden)

    Fubin Pan

    2015-09-01

    Full Text Available Purpose: This paper establishes a mathematical programming model of vendor selection for joint procurement from a total cost of ownership perspective. Design/methodology/approach: A fuzzy genetic algorithm is employed to solve the model, and a data set from a ball-bearing purchasing problem is used for numerical analysis. Findings: The results show that the optimization model performs well and can reduce the total cost of procurement. Originality/value: The contribution of this paper is threefold. First, a literature review and classification of published vendor selection models is given. Second, a mathematical programming model of vendor selection for joint procurement from a total cost of ownership perspective is established. Third, an empirical study illustrates the application of the proposed model to evaluate and identify the best vendors for ball-bearing procurement; the results show that it could reduce total costs by as much as twenty percent after optimization.

  2. Optimal Control Method for Wind Farm to Support Temporary Primary Frequency Control with Minimized Wind Energy Cost

    DEFF Research Database (Denmark)

    Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan

    2015-01-01

    This study proposes an optimal control method for a wind farm (WF) based on variable speed wind turbines (VSWTs) to support temporary primary frequency control. This control method consists of two layers: temporary frequency support control (TFSC) of the VSWT, and temporary support power optimal dispatch (TSPOD) of the WF. With TFSC, the VSWT can temporarily provide extra power to support system frequency under varying and wide-range wind speed. In the WF control centre, TSPOD optimally dispatches the frequency support power orders to the VSWTs that operate under different wind speeds, minimises the wind energy cost of frequency support, and satisfies the support capabilities of the VSWTs. The effectiveness of the whole control method is verified in the IEEE-RTS built in MATLAB/Simulink, and compared with a published de-loading method.

  3. Representing culture in interstellar messages

    Science.gov (United States)

    Vakoch, Douglas A.

    2008-09-01

    As scholars involved with the Search for Extraterrestrial Intelligence (SETI) have contemplated how we might portray humankind in any messages sent to civilizations beyond Earth, one of the challenges they face is adequately representing the diversity of human cultures. For example, in a 2003 workshop in Paris sponsored by the SETI Institute, the International Academy of Astronautics (IAA) SETI Permanent Study Group, the International Society for the Arts, Sciences and Technology (ISAST), and the John Templeton Foundation, a varied group of artists, scientists, and scholars from the humanities considered how to encode notions of altruism in interstellar messages. Though the group represented 10 countries, most were from Europe and North America, leading to the group's recommendation that subsequent discussions on the topic should include more globally representative perspectives. As a result, the IAA Study Group on Interstellar Message Construction and the SETI Institute sponsored a follow-up workshop in Santa Fe, New Mexico, USA in February 2005. The Santa Fe workshop brought together scholars from a range of disciplines including anthropology, archaeology, chemistry, communication science, philosophy, and psychology. Participants included scholars familiar with interstellar message design as well as specialists in cross-cultural research who had participated in the Symposium on Altruism in Cross-cultural Perspective, held just prior to the workshop during the annual conference of the Society for Cross-cultural Research. The workshop included discussion of how cultural understandings of altruism can complement and critique the more biologically based models of altruism proposed for interstellar messages at the 2003 Paris workshop. This paper, written by the chair of both the Paris and Santa Fe workshops, will explore the challenges of communicating concepts of altruism that draw on both biological and cultural models.

  4. Photodissociation of interstellar N2

    CERN Document Server

    Li, Xiaohu; Visser, Ruud; Ubachs, Wim; Lewis, Brenton R; Gibson, Stephen T; van Dishoeck, Ewine F

    2013-01-01

    Molecular nitrogen is one of the key species in the chemistry of interstellar clouds and protoplanetary disks and the partitioning of nitrogen between N and N2 controls the formation of more complex prebiotic nitrogen-containing species. The aim of this work is to gain a better understanding of the interstellar N2 photodissociation processes based on recent detailed theoretical and experimental work and to provide accurate rates for use in chemical models. We simulated the full high-resolution line-by-line absorption + dissociation spectrum of N2 over the relevant 912-1000 Å wavelength range, by using a quantum-mechanical model which solves the coupled-channels Schrödinger equation. The simulated N2 spectra were compared with the absorption spectra of H2, H, CO, and dust to compute photodissociation rates in various radiation fields and shielding functions. The effects of the new rates in interstellar cloud models were illustrated for diffuse and translucent clouds, a dense photon dominated region and a ...

  5. Locally optimal control under unknown dynamics with learnt cost function: application to industrial robot positioning

    Science.gov (United States)

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    Recent methods of Reinforcement Learning have enabled to solve difficult, high dimensional, robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method by simulating, in the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, in which the cost function is written as a state variable.

  6. Combining Magnetic and Electric Sails for Interstellar Deceleration

    Science.gov (United States)

    Perakis, Nikolaos; Hein, Andreas M.

    2016-07-01

    The main benefit of an interstellar mission is to carry out in-situ measurements within a target star system. To allow for extended in-situ measurements, the spacecraft needs to be decelerated. One of the currently most promising technologies for deceleration is the magnetic sail, which uses the deflection of interstellar matter via a magnetic field to decelerate the spacecraft. However, while the magnetic sail is very efficient at high velocities, its performance decreases with lower speeds. This leads to deceleration durations of several decades depending on the spacecraft mass. Within the context of Project Dragonfly, initiated by the Initiative for Interstellar Studies (i4is), this paper proposes a novel concept for decelerating a spacecraft on an interstellar mission by combining a magnetic sail with an electric sail. Combining the sails compensates for each technology's shortcomings: a magnetic sail is more effective at higher velocities than the electric sail and vice versa. It is demonstrated that using both sails sequentially outperforms using only the magnetic or electric sail for various mission scenarios and velocity ranges, at a constant total spacecraft mass. For example, for decelerating from 5% c to interplanetary velocities, a spacecraft with both sails needs about 29 years, whereas the electric sail alone would take 35 years and the magnetic sail about 40 years, with a total spacecraft mass of 8250 kg. Furthermore, it is assessed how the combined deceleration system affects the optimal overall mission architecture for different spacecraft masses and cruising speeds. Future work would investigate how operating both systems in parallel instead of sequentially would affect performance. Moreover, uncertainties in the density of interstellar matter and sail properties need to be explored.
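The sequential-use idea can be sketched with a toy deceleration model. The power-law drag coefficients below are assumptions chosen only so that the magnetic sail dominates at high speed and the electric sail at low speed; they are not the paper's detailed sail models:

```python
# Toy model of sequential sail deceleration: at each step apply whichever sail
# gives the larger deceleration. Drag laws and coefficients are assumed
# (magnetic ~ v^2, electric ~ v); units are arbitrary.

def decel_time(v0, v_end, use_mag=True, use_el=True, dt=0.01):
    """Euler integration of dv/dt = -max(a_mag, a_el) until v <= v_end."""
    v, t = v0, 0.0
    while v > v_end:
        a_mag = 1e-4 * v ** 2 if use_mag else 0.0  # magnetic sail (assumed law)
        a_el = 5e-3 * v if use_el else 0.0         # electric sail (assumed law)
        v -= max(a_mag, a_el) * dt
        t += dt
    return t

# Decelerating from v = 100 to v = 1: the combined system uses the magnetic
# sail above the crossover speed (v = 50 here) and the electric sail below it,
# and beats either sail used alone, mirroring the paper's sequential result.
t_both = decel_time(100.0, 1.0)
t_mag = decel_time(100.0, 1.0, use_el=False)
t_el = decel_time(100.0, 1.0, use_mag=False)
assert t_both < t_el < t_mag
```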

  7. Determining the optimal costs on engineering equipment control systems for smart house

    Directory of Open Access Journals (Sweden)

    O.D. Samarin

    2012-01-01

    Full Text Available This paper reviews candidate solutions for reducing energy consumption in civil buildings. The features of implementing engineering-equipment measures for a smart house are shown. The basic components of the operating costs of such a building are presented. The dependence of cost changes on the depth of implementation of the engineering measures is investigated. The economically optimal degree of building equipment is determined by the criterion of minimizing cumulative discounted costs. The presentation is illustrated by graphs and numerical examples of calculations.
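The "cumulative discounted costs" criterion can be sketched numerically. All cost figures, the discount rate, and the horizon below are hypothetical, chosen only to show how an interior optimum in the depth of measures arises:

```python
# Sketch of choosing the depth of engineering measures that minimizes capital
# cost plus the discounted stream of operating costs (all figures assumed).

def total_discounted_cost(capex, annual_opex, rate=0.05, years=20):
    """Net present cost: capex + sum of opex / (1+rate)^t over the horizon."""
    return capex + sum(annual_opex / (1.0 + rate) ** t for t in range(1, years + 1))

# depth of measures -> (capital cost, annual operating cost); assumed values:
# deeper measures cut operating costs but raise the initial investment.
options = {
    0.25: (100.0, 40.0),
    0.50: (180.0, 25.0),
    0.75: (300.0, 18.0),
    1.00: (480.0, 15.0),
}

best = min(options, key=lambda d: total_discounted_cost(*options[d]))
```

With these numbers the optimum is interior (the intermediate depth 0.50 wins), which is exactly the economically optimal degree of equipping the abstract describes.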

  8. The Optimization of Costs and the Carbon Content in Cast Iron

    Directory of Open Access Journals (Sweden)

    M. Grzybowska

    2007-07-01

    Full Text Available The article introduces approaches to optimizing the cast-iron charge using the mathematical programming tools of MATLAB. Results of industrial tests using a charge containing sheet-metal scrap are presented. It is shown that the optimization task can be formulated as a linear programming problem. This allows more effective use of foundry production capacity and minimizes the losses arising from an inappropriate selection of the raw-material composition. The optimization of the cast-iron melting process is discussed.

  9. Optimizing Reactors Selection and Sequencing:Minimum Cost versus Minimum Volume

    Institute of Scientific and Technical Information of China (English)

    Rachid Chebbi

    2014-01-01

    The present investigation targets the minimum cost of reactors in series for the case of a single chemical reaction, considering plug flow and stirred tank reactor(s) in the sequence of flow reactors. Using Guthrie's cost correlations, three typical cases were considered based on the profile of the reciprocal reaction rate versus conversion. Significant differences were found compared to the classical approach targeting minimum total reactor volume.
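The minimum-volume baseline mentioned above is a Levenspiel-type calculation. A small sketch, assuming hypothetical first-order kinetics r = k·(1 - x) (not the paper's data): the PFR volume is proportional to the integral of dx/r, while a CSTR operates at its outlet conversion, so its volume is proportional to (x_out - x_in)/r(x_out):

```python
# Levenspiel-style volume comparison (hypothetical first-order kinetics).
import math

k = 1.0          # rate constant (assumed)
x_target = 0.9   # required overall conversion

def rate(x):
    return k * (1.0 - x)

def pfr_volume(x_in, x_out, n=10_000):
    """Trapezoidal integration of 1/r(x) over conversion (dimensionless volume)."""
    h = (x_out - x_in) / n
    s = 0.5 * (1.0 / rate(x_in) + 1.0 / rate(x_out))
    for i in range(1, n):
        s += 1.0 / rate(x_in + i * h)
    return s * h

def cstr_volume(x_in, x_out):
    """A CSTR runs entirely at the outlet conversion."""
    return (x_out - x_in) / rate(x_out)

v_pfr = pfr_volume(0.0, x_target)                           # PFR alone
v_mix = cstr_volume(0.0, 0.5) + pfr_volume(0.5, x_target)   # CSTR then PFR
assert v_pfr < v_mix
```

Since 1/r increases monotonically with conversion here, the PFR alone minimizes total volume; the paper's point is that minimizing *cost* (e.g. via Guthrie's correlations) can favor a different sequence than minimizing volume.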

  10. Use of Aggregate Emission Reduction Cost Functions in Designing Optimal Regional SO2 Abatement Strategies

    OpenAIRE

    Mariam, Yohannes; Barre, Mike; Molburg, John

    1997-01-01

    The 1990 Canadian long-range transport of air pollutants and acid deposition report divided North America into 40 sources of emission and 15 sensitive receptor sites. For the purpose of national policy making and international negotiation, the use of these large sources and few receptors may prove adequate. Due to inadequate information regarding cost of reducing emissions from each point source, it was felt necessary to design a method to generate cost functions for emission regions. T...

  11. Cost control in nursing homes by means of economies of scale and care profile optimization.

    Science.gov (United States)

    Hoess, Victoria; Bachler, Adi; Ostermann, Herwig; Staudinger, Roland

    2009-01-01

    The call to enlarge or merge nursing homes in order to lower costs rests on the assumption that economies of scale exist within the cost structure of these homes. Economies of scale means that an increasing number of residents will reduce the costs per person needing care. However, the existence and the extent of economies of scale as such in nursing homes are the subject of controversy because studies of this issue performed in nursing homes up to now have yielded contradictory results. In this study, researchers demonstrated economies of scale in nursing homes in Tyrol, Austria, and showed that the composition of the nursing home residents with respect to their care needs influences the development of the average costs. Changing the size of the facility and/or influencing the average care level can have a considerable influence on the progression of average costs in nursing homes. Cost reductions can be achieved by increasing the size of the facility or by improved distribution of the care levels of the persons in need of care.

  12. Laboratory spectroscopic studies of interstellar ice analogues

    OpenAIRE

    Puletti, F

    2014-01-01

    In recent years, the molecular chemistry in interstellar environments has proven to be far more complex than was initially expected. We live in a molecular universe that is rich with molecules formed both in the gas phase and on the surface of interstellar icy dust grains. Two important classes of interstellar molecules are sulphur-bearing species and complex organic molecules, i.e., molecules containing carbon and containing more than 6 atoms. The former are relevant because of their potenti...

  13. Physics of the interstellar and intergalactic medium

    CERN Document Server

    Draine, Bruce T

    2010-01-01

    This is a comprehensive and richly illustrated textbook on the astrophysics of the interstellar and intergalactic medium--the gas and dust, as well as the electromagnetic radiation, cosmic rays, and magnetic and gravitational fields, present between the stars in a galaxy and also between galaxies themselves. Topics include radiative processes across the electromagnetic spectrum; radiative transfer; ionization; heating and cooling; astrochemistry; interstellar dust; fluid dynamics, including ionization fronts and shock waves; cosmic rays; distribution and evolution of the interstellar medium

  14. Interstellar Extinction by Spheroidal Dust Grains

    OpenAIRE

    Gupta, Ranjan; Mukai, Tadashi; Vaidya, D. B.; Sen, Asoke K.; Okada, Yasuhiko

    2005-01-01

    Observations of interstellar extinction and polarization indicate that the interstellar medium consists of aligned non-spherical dust grains which show variation in the interstellar extinction curve for wavelengths ranging from NIR to UV. To model the extinction and polarization, one cannot use the Mie theory which assumes the grains as solid spheres. We have used a T-matrix based method for computing the extinction efficiencies of spheroidal silicate and graphite grains of different shapes (...

  15. Structure and Dynamics of the Interstellar Medium

    Science.gov (United States)

    Tenorio-Tagle, Guillermo; Moles, Mariano; Melnick, Jorge

    Here for the first time is a book that treats practically all aspects of modern research in interstellar matter astrophysics. 20 review articles and 40 carefully selected and refereed papers give a thorough overview of the field and convey the flavor of enthusiastic colloquium discussions to the reader. The book includes sections on: - Molecular clouds, star formation and HII regions - Mechanical energy sources - Discs, outflows, jets and HH objects - The Orion Nebula - The extragalactic interstellar medium - Interstellar matter at high galactic latitudes - The structure of the interstellar medium

  16. Optimal insemination and replacement decisions to minimize the cost of pathogen-specific clinical mastitis in dairy cows.

    Science.gov (United States)

    Cha, E; Kristensen, A R; Hertl, J A; Schukken, Y H; Tauer, L W; Welcome, F L; Gröhn, Y T

    2014-01-01

    Mastitis is a serious production-limiting disease, with effects on milk yield, milk quality, and conception rate, and an increase in the risk of mortality and culling. The objective of this study was 2-fold: (1) to develop an economic optimization model that incorporates all the different types of pathogens that cause clinical mastitis (CM) categorized into 8 classes of culture results, and account for whether the CM was a first, second, or third case in the current lactation and whether the cow had a previous case or cases of CM in the preceding lactation; and (2) to develop this decision model to be versatile enough to add additional pathogens, diseases, or other cow characteristics as more information becomes available without significant alterations to the basic structure of the model. The model provides economically optimal decisions depending on the individual characteristics of the cow and the specific pathogen causing CM. The net returns for the basic herd scenario (with all CM included) were $507/cow per year, where the incidence of CM (cases per 100 cow-years) was 35.6, of which 91.8% of cases were recommended for treatment under an optimal replacement policy. The cost per case of CM was $216.11. The CM cases comprised (incidences, %) Staphylococcus spp. (1.6), Staphylococcus aureus (1.8), Streptococcus spp. (6.9), Escherichia coli (8.1), Klebsiella spp. (2.2), other treated cases (e.g., Pseudomonas; 1.1), other not treated cases (e.g., Trueperella pyogenes; 1.2), and negative culture cases (12.7). The average cost per case, even under optimal decisions, was greatest for Klebsiella spp. ($477), followed by E. coli ($361), other treated cases ($297), and other not treated cases ($280). This was followed by the gram-positive pathogens; among these, the greatest cost per case was due to Staph. aureus ($266), followed by Streptococcus spp. ($174) and Staphylococcus spp. ($135); negative culture had the lowest cost ($115). The model recommended treatment for ...

  17. DYNALIGHT DESKTOP: A GREENHOUSE CONTROL SYSTEM TO OPTIMIZE THE ENERGY COST-EFFICIENCY OF SUPPLEMENTARY LIGHTING

    DEFF Research Database (Denmark)

    Mærsk-Møller, Hans Martin; Kjær, Katrine Heinsvig; Ottosen, Carl-Otto;

    2016-01-01

    In Northern Europe the production of ornamental pot plants in greenhouses requires use of supplemental light, as light is a restricting climatic factor for growth from late autumn until early spring. To make this production ecologically and economically sustainable there is an urgent need for energy and cost-efficient climate control strategies that do not compromise product quality. In this paper, we present a novel approach addressing dynamic control of supplemental light in greenhouses aiming to decrease electricity costs and energy consumption without loss in plant productivity. Our approach uses weather forecasts and electricity prices to compute energy and cost-efficient supplemental light plans, which fulfil the production goals of the grower. The approach is supported by a set of newly developed planning software, which interfaces with a greenhouse climate computer. The planning ...

  18. ANALYSIS OF SAFETY RELIEF VALVE PROOF TEST DATA TO OPTIMIZE LIFECYCLE MAINTENANCE COSTS

    Energy Technology Data Exchange (ETDEWEB)

    Gross, Robert; Harris, Stephen

    2007-08-01

    Proof test results were analyzed and compared with a proposed life cycle curve, or hazard function, and the limit of useful life. Relief valve proof testing procedures, statistical modeling, data collection processes, and time-in-service trends are presented. The resulting analysis of test data allows for the estimation of the probability of failure on demand (PFD). Extended maintenance intervals to the limit of useful life, as well as methodologies and practices for improving relief valve performance and reliability, are discussed. A generic cost-benefit analysis and an expected life cycle cost reduction conclude that approximately $90 million in maintenance costs might be avoided for a population of 3000 valves over 20 years.
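The PFD estimation behind this kind of analysis can be sketched with the standard low-demand approximation (in the style of IEC 61508, not necessarily this paper's exact statistical model): PFD_avg ≈ λ_du·T/2, where λ_du is the dangerous undetected failure rate and T the proof-test interval. The failure counts and intervals below are hypothetical:

```python
# Sketch of estimating PFD_avg for a periodically proof-tested relief valve.
# Formula and all input numbers are assumptions, not the paper's data.

HOURS_PER_YEAR = 8760.0

def failure_rate_from_tests(failures, valve_years):
    """Crude point estimate of the failure rate from pooled proof-test data."""
    return failures / (valve_years * HOURS_PER_YEAR)   # failures per hour

def pfd_avg(lambda_du_per_hour, test_interval_hours):
    """Average probability of failure on demand over one test interval."""
    return lambda_du_per_hour * test_interval_hours / 2.0

# Hypothetical example: 6 failures found across 3000 valve-years of service,
# with the proof-test interval extended to 5 years.
lam = failure_rate_from_tests(6, 3000)
pfd = pfd_avg(lam, 5 * HOURS_PER_YEAR)   # ~0.005
```

Doubling the test interval doubles the estimated PFD_avg, which is the trade-off between extended maintenance intervals and reliability that the abstract weighs against cost.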

  19. Deuterium enrichment of interstellar dusts

    Science.gov (United States)

    Das, Ankan; Chakrabarti, Sandip Kumar; Majumdar, Liton; Sahu, Dipen

    2016-07-01

    The high abundances of some simple interstellar species can be explained by considering the chemistry that occurs on interstellar dust grains. Because of its simplicity, the rate equation method is widely used to study surface chemistry. However, because the recombination efficiency for the formation of any surface species is highly dependent on various physical and chemical parameters, the Monte Carlo method is best suited for addressing the randomness of the processes. We carry out Monte Carlo simulations to study deuterium enrichment of interstellar grain mantles under various physical conditions. Based on their physical properties, various types of clouds are considered. We find that in diffuse cloud regions, very strong radiation fields persist and only a few layers of surface species are formed. In translucent cloud regions with a moderate radiation field, a significant number of layers would be produced and surface coverage is mainly dominated by photo-dissociation products such as C, CH_3, CH_2D, OH and OD. In the intermediate dense cloud regions (having a number density of total hydrogen nuclei in all forms of ~2 × 10^4 cm^-3), water and methanol along with their deuterated derivatives are efficiently formed. For much higher density regions (~10^6 cm^-3), water and methanol production is suppressed but the surface coverage of CO, CO_2, O_2 and O_3 is dramatically increased. We find a very high degree of fractionation of water and methanol. Observational results support a high fractionation of methanol, but water fractionation is surprisingly found to be low. This contradicts our model results, indicating alternative routes for de-fractionation of water.

  20. HERSCHEL OBSERVATIONS OF INTERSTELLAR CHLORONIUM

    Energy Technology Data Exchange (ETDEWEB)

    Neufeld, David A.; Indriolo, Nick [Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St., Baltimore, MD 21218 (United States); Roueff, Evelyne; Le Bourlot, Jacques; Le Petit, Franck [Observatoire de Paris-Meudon, LUTH UMR 8102, 5 Pl. Jules Janssen, F-92195 Meudon Cedex (France); Snell, Ronald L. [Astronomy Department, University of Massachusetts at Amherst, Amherst, MA 01003 (United States); Lis, Dariusz; Monje, Raquel; Phillips, Thomas G. [Astronomy Department, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Benz, Arnold O. [Institute of Astronomy, ETH Zurich, 8092 Zurich (Switzerland); Bruderer, Simon [Max Planck Institut fuer Extraterrestrische Physik, Giessenbachstrasse 1, D-85748, Garching (Germany); Black, John H.; Larsson, Bengt [Department of Earth and Space Sciences, Chalmers University of Technology, Onsala (Sweden); De Luca, Massimo; Gerin, Maryvonne [LERMA, UMR 8112 du CNRS, Observatoire de Paris, Ecole Normale Superieure, UPMC and UCP (France); Goldsmith, Paul F.; Gupta, Harshal [JPL, California Institute of Technology, Pasadena, CA (United States); Melnick, Gary J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Menten, Karl M. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, D-53121 Bonn (Germany); Nagy, Zsofia [Kapteyn Astronomical Institute University of Groningen, Groningen (Netherlands); and others

    2012-03-20

    Using the Herschel Space Observatory's Heterodyne Instrument for the Far-Infrared, we have observed para-chloronium (H_2Cl^+) toward six sources in the Galaxy. We detected interstellar chloronium absorption in foreground molecular clouds along the sight lines to the bright submillimeter continuum sources Sgr A (+50 km s^{-1} cloud) and W31C. Both the para-H_2^{35}Cl^+ and para-H_2^{37}Cl^+ isotopologues were detected, through observations of their 1_{11}-0_{00} transitions at rest frequencies of 485.42 and 484.23 GHz, respectively. For an assumed ortho-to-para ratio (OPR) of 3, the observed optical depths imply that chloronium accounts for ~4%-12% of chlorine nuclei in the gas phase. We detected interstellar chloronium emission from two sources in the Orion Molecular Cloud 1: the Orion Bar photodissociation region and the Orion South condensation. For an assumed OPR of 3 for chloronium, the observed emission line fluxes imply total beam-averaged column densities of ~2 × 10^{13} cm^{-2} and ~1.2 × 10^{13} cm^{-2}, respectively, for chloronium in these two sources. We obtained upper limits on the para-H_2^{35}Cl^+ line strengths toward H_2 Peak 1 in the Orion Molecular Cloud and toward the massive young star AFGL 2591. The chloronium abundances inferred in this study are typically at least a factor of ~10 larger than the predictions of steady-state theoretical models of the chemistry of interstellar molecules containing chlorine. Several explanations for this discrepancy were investigated, but none has proven satisfactory, and thus the large observed abundances of chloronium remain puzzling.

  1. Balancing development costs and sales to optimize the development time of product line additions

    NARCIS (Netherlands)

    Langerak, F.; Griffin, A.; Hultink, E.J.

    2010-01-01

    Development teams often use mental models to simplify development time decision making because a comprehensive empirical assessment of the trade-offs across the metrics of development time, development costs, proficiency in market-entry timing, and new product sales is simply not feasible. Surprisin

  2. Trade-off between carbon dioxide emissions and logistics costs based on multiobjective optimization

    NARCIS (Netherlands)

    Kim, N.S.; Janic, M.; Van Wee, G.P.

    2009-01-01

    This paper examines the relationship between freight transport costs and carbon dioxide (CO2) emissions in given intermodal and truck-only freight networks. When this trade-off, represented by that relationship, is changed, the freight mode share and route choice are also modified. To

  3. Towards cost optimal design and maintenance of steel structures under stray current interference

    NARCIS (Netherlands)

    Peelen, W.H.A.; Krom, A.H.M.; Polder, R.B.

    2006-01-01

    Steel is a cost-effective and durable building material. Corrosion is one of the major degradation mechanisms of steel structures, also for underground structures like sheet pile walls. Traction currents leaking into the ground (called stray currents here) and interfering with the structure can caus

  4. Cost optimal allocation of amplifiers and DCMs in WDM ring networks.

    Science.gov (United States)

    Minagar, Amir; Premaratne, Malin; Tran, An V

    2006-10-30

    Designing metropolitan wavelength division multiplexing (WDM) ring networks with minimum deployment cost is a demanding issue in telecommunication network planning. We have previously presented amplifier placement methods that minimize the number of amplifiers in WDM rings for the case in which all amplifiers follow a single gain model. In this paper, we take into account different types of amplifiers with predefined fixed characteristics and costs. We also formulate fiber dispersion limitations on the ring design, and present two efficient methods for placing amplifiers and dispersion compensation modules (DCMs) in WDM rings to minimize the total deployment cost of the system. The first method deals with both linear and nonlinear equations and uses a mixed integer nonlinear programming (MINLP) solver, whereas the second method applies a linear approximation of the nonlinear constraints and uses a mixed integer linear programming (MILP) solver to minimize the total cost of the system. We carry out simulation experiments to confirm the applicability of the methods and compare the results for various network configurations.
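A minimal sketch of the cost-minimizing placement idea, on a tiny ring with an invented amplifier catalog. The paper formulates the problem (including dispersion constraints) for MINLP/MILP solvers; this toy simply enumerates all type assignments.

```python
from itertools import product

# Toy stand-in for cost-minimizing amplifier placement: pick one amplifier
# type per fiber span so its gain covers the span loss, minimizing total
# cost. The catalog and span losses below are invented for illustration.

AMP_TYPES = [(10, 4.0), (17, 6.5), (23, 9.0)]   # (gain in dB, cost) -- assumed

def min_cost_placement(span_losses_db):
    best = None
    for choice in product(range(len(AMP_TYPES)), repeat=len(span_losses_db)):
        feasible = all(AMP_TYPES[t][0] >= loss
                       for t, loss in zip(choice, span_losses_db))
        if feasible:
            cost = sum(AMP_TYPES[t][1] for t in choice)
            if best is None or cost < best[0]:
                best = (cost, choice)
    return best

cost, choice = min_cost_placement([8.0, 15.0, 20.0])
print(cost, choice)   # cheapest feasible amplifier type per span
```

Exhaustive enumeration is exponential in the number of spans, which is exactly why the paper resorts to MILP/MINLP formulations for realistic rings.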

  5. HOPE: Helmet Optimization in Europe. The final report of COST Action TU1101.

    NARCIS (Netherlands)

    Bogerd, C.P.; Annaheim, S.; Halldin, P.; Houtenbos, M.; Otte, D.; Shinar, D.; Walker, I. & Willinger, R.

    2015-01-01

    The primary purpose of COST Action TU1101 was to stimulate collaboration and networking amongst European scientists working in the field of bicycle helmet safety and improvement. By gathering together in a single, collective Action, researchers can improve the collection and dissemination of data fr

  9. Is local participation always optimal for sustainable action? The costs of consensus-building in Local Agenda 21.

    Science.gov (United States)

    Brandt, Urs Steiner; Svendsen, Gert Tinggaard

    2013-11-15

    Is local participation always optimal for sustainable action? Local Agenda 21 is a relevant case here, as it broadly calls for consensus-building among stakeholders. Consensus-building is, however, costly. We show that the costs of making local decisions are likely to rapidly exceed the benefits. Why? Because as the number of participants grows, it becomes more likely that the group will include individuals who hold an extreme position and are unwilling to compromise. Thus, the net gain of self-organization should be compared with those of its alternatives, for example voting, market solutions, or not making any choices at all. Even though the informational value of meetings may be helpful to policy makers, the model shows that it also decreases as the number of participants increases. Overall, the result is a thought-provoking scenario for Local Agenda 21, as it highlights the risk of less sustainable action in the future.
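The core argument, that the chance of consensus collapses as the group grows, can be illustrated with a stylized model; the probability q and the single-blocker assumption are mine, not the authors'.

```python
# Stylized illustration (invented numbers, not the authors' model): suppose
# each participant is independently "extreme" with probability q, and any
# single extremist blocks consensus. The chance of agreement then shrinks
# geometrically with group size n, so consensus-building costs grow with n.

q = 0.05  # assumed probability that a participant holds an extreme position

def p_consensus(n):
    """Probability that a group of n participants contains no extremist."""
    return (1 - q) ** n

for n in (5, 10, 20, 40):
    print(n, round(p_consensus(n), 3))
```

Even with only a 5% chance of extremism per person, a 40-member group reaches consensus less than 13% of the time in this toy model, which is the qualitative point of the abstract.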

  10. Grain Destruction in Interstellar Shocks

    OpenAIRE

    1995-01-01

    Interstellar shock waves can erode and destroy grains present in the shocked gas, primarily as the result of sputtering and grain-grain collisions. Uncertainties in current estimates of sputtering yields are reviewed. Results are presented for the simple case of sputtering of fast grains being stopped in cold gas. An upper limit is derived for sputtering of refractory grains in C-type MHD shocks: shock speeds $v_s \gtrsim 50$ km s$^{-1}$ are required for return of more than 30% of the silicate to t...

  11. A Search for Interstellar Pyrimidine

    CERN Document Server

    Kuan, Yi-Jehng; Yan, Chi-Hung; Charnley, Steven B.; Kisiel, Zbigniew; Ehrenfreund, Pascale; Huang, Hui-Chun

    2003-01-01

    We have searched three hot molecular cores for submillimeter emission from the nucleic acid building-block pyrimidine. We obtain upper limits to the total pyrimidine (beam-averaged) column densities towards Sgr B2(N), Orion KL and W51 e1/e2 of 1.7E+14 cm^{-2}, 2.4E+14 cm^{-2} and 3.4E+14 cm^{-2}, respectively. The associated upper limits to the pyrimidine fractional abundances lie in the range (0.3-3)E-10. Implications of this result for interstellar organic chemistry, and for the prospects of detecting nitrogen heterocycles in general, are briefly discussed.

  12. Discovery of Interstellar Heavy Water

    OpenAIRE

    Butner, H. M.; Charnley, S. B.; Ceccarelli, C.; Rodgers, S.D.; Pardo Carrión, Juan Ramón; Parise, B.; Cernicharo, José; Davis, G. R.

    2007-01-01

    We report the discovery of doubly deuterated water (D2O, heavy water) in the interstellar medium. Using the James Clerk Maxwell Telescope and the Caltech Submillimeter Observatory 10 m telescope, we detected the 1_10–1_01 transition of para-D2O at 316.7998 GHz in both absorption and emission toward the protostellar binary system IRAS 16293-2422. Assuming that the D2O exists primarily in the warm regions where water ices have been evaporated (i.e., in a "hot corino" environment), we determi...

  13. Identifying Cost-Effective Water Resources Management Strategies: Watershed Management Optimization Support Tool (WMOST)

    Science.gov (United States)

    The Watershed Management Optimization Support Tool (WMOST) is a public-domain software application designed to aid decision makers with integrated water resources management. The tool allows water resource managers and planners to screen a wide-range of management practices for c...

  14. Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru;

    2005-01-01

    In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating...

  15. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  17. An Improved Particle Swarm Optimization for Selective Single Machine Scheduling with Sequence Dependent Setup Costs and Downstream Demands

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

    This paper investigates a special single machine scheduling problem derived from practical industries, namely, selective single machine scheduling with sequence-dependent setup costs and downstream demands. Unlike traditional single machine scheduling, this problem additionally takes into account the selection of jobs and the demands of downstream lines. The problem is formulated as a mixed integer linear programming model, and an improved particle swarm optimization (PSO) is proposed to solve it. To enhance the exploitation ability of the PSO, an adaptive neighborhood search with varying search depth is developed based on the decision characteristics of the problem. To improve search diversity and make the proposed PSO capable of escaping local optima, an elite solution pool is introduced into the PSO. Computational results on extensive test instances show that the proposed PSO can obtain optimal solutions for small problems and outperforms CPLEX and several other powerful algorithms on large problems.
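For readers unfamiliar with the base algorithm, here is a minimal vanilla PSO on a toy continuous problem. It omits the paper's contributions (the adaptive neighborhood search and the elite solution pool), and all parameter values are illustrative.

```python
import random

# Minimal vanilla particle swarm optimization (PSO): each particle is pulled
# toward its own best position and the swarm's best position. Everything
# below, parameters included, is an illustrative sketch, not the paper's PSO.

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    Pf = [f(x) for x in X]                     # personal best values
    g = min(range(n_particles), key=Pf.__getitem__)
    G, Gf = P[g][:], Pf[g]                     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < Pf[i]:                     # update personal best
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:                    # and global best
                    G, Gf = X[i][:], fx
    return G, Gf

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5, 5))
print(val)
```

Applying PSO to a discrete scheduling problem, as the paper does, additionally requires an encoding of job sequences as particle positions.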

  18. An estimation method for direct maintenance cost of aircraft components based on particle swarm optimization with immunity algorithm

    Institute of Scientific and Technical Information of China (English)

    WU Jing-min; ZUO Hong-fu; CHEN Yong

    2005-01-01

    A particle swarm optimization (PSO) algorithm improved by an immunity algorithm (IA) was presented. Memory and self-regulation mechanisms of the IA were used to keep the PSO from plunging into local optima. Vaccination and immune selection mechanisms were used to prevent oscillation during the evolutionary process. The algorithm was introduced through an application to direct maintenance cost (DMC) estimation for aircraft components. Experimental results show that the algorithm is simple to implement and runs quickly. It solves the combinatorial optimization problem of component DMC estimation with simple and readily available parameters. It also has higher accuracy than individual methods, such as PLS, BP and v-SVM, and better performance than other combined methods, such as basic PSO with a BP neural network.

  19. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

    This paper aims to study a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested into a market characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of the PPM's portfolios in mean-variance space are obtained. It is found that the capital market line can be attained when the initial fund and the contribution rate are zero. It is also found that the optimal portfolio process involves an inter-temporal hedging term that offsets any shocks to the stochastic salary of the PPM.

  20. Optimality of Upper-Arm Reaching Trajectories Based on the Expected Value of the Metabolic Energy Cost.

    Science.gov (United States)

    Taniai, Yoshiaki; Nishii, Jun

    2015-08-01

    When we move our body to perform a movement task, our central nervous system selects a movement trajectory from an infinite number of possible trajectories under constraints that have been acquired through evolution and learning. Minimization of the energy cost has been suggested as a potential candidate for a constraint determining locomotor parameters, such as stride frequency and stride length; however, other constraints have been proposed for a human upper-arm reaching task. In this study, we examined whether the minimum metabolic energy cost model can also explain the characteristics of the upper-arm reaching trajectories. Our results show that the optimal trajectory that minimizes the expected value of energy cost under the effect of signal-dependent noise on motor commands expresses not only the characteristics of reaching movements of typical speed but also those of slower movements. These results suggest that minimization of the energy cost would be a basic constraint not only in locomotion but also in upper-arm reaching.

  1. Optimal Scheduling of Energy Storage System for Self-Sustainable Base Station Operation Considering Battery Wear-Out Cost

    Directory of Open Access Journals (Sweden)

    Yohwan Choi

    2016-06-01

    A self-sustainable base station (BS), in which renewable resources and an energy storage system (ESS) are interoperably utilized as power sources, is a promising approach to saving energy and operational cost in communication networks. However, the high price of batteries and the low utilization of an ESS intended only for uninterruptible power supply (UPS) necessitate more active utilization of the ESS. This paper proposes a multi-functional framework for the ESS, using dynamic programming (DP), to realize a sustainable BS. We develop an optimal charging and discharging scheduling algorithm that incorporates a detailed battery wear-out model in order to minimize operational cost as well as to prolong battery lifetime. Our approach significantly reduces total cost compared to the conventional method that does not consider battery wear-out. Extensive experiments over several scenarios show that total cost is reduced by up to 70.6% while battery wear-out is also reduced by 53.6%. A virtue of the proposed framework is its wide applicability: beyond the sustainable BS, it can in principle be applied to other types of load.
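The scheduling idea can be sketched as a dynamic program over a discretized battery state of charge. The prices, capacity, load, and the linear wear-cost coefficient below are invented for illustration; the paper's wear-out model is far more detailed.

```python
# Sketch: schedule battery charge/discharge against hourly prices with a
# dynamic program over a discretized state of charge (SoC). All numbers are
# assumed; wear is charged linearly per unit of energy cycled.

def ess_schedule(prices, load, cap=4, wear=0.05):
    """Minimum cost of serving `load` every hour, buying from the grid and
    optionally shifting energy through a battery of `cap` discrete units."""
    INF = float("inf")
    cost = {s: (0.0 if s == 0 else INF) for s in range(cap + 1)}  # start empty
    for price in prices:
        nxt = {s: INF for s in range(cap + 1)}
        for s, c0 in cost.items():
            if c0 == INF:
                continue
            for s2 in range(cap + 1):            # next SoC: charge or discharge
                grid = load + (s2 - s)           # energy bought this hour
                if grid < 0:                     # no export back to the grid
                    continue
                c = c0 + price * grid + wear * abs(s2 - s)
                if c < nxt[s2]:
                    nxt[s2] = c
        cost = nxt
    return min(cost.values())

# Alternating cheap/expensive prices reward charging in the cheap hours:
print(ess_schedule([1.0, 5.0, 1.0, 5.0], load=1))
```

Without the battery this price series costs 12; the DP shifts purchases into the cheap hours while paying the wear cost for every unit cycled, illustrating the wear-aware trade-off the paper optimizes.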

  2. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    Science.gov (United States)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not impacting and potentially benefiting ATC. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper will discuss the approach to minimizing TASAR's cost for implementation and accelerating readiness for near-term implementation.

  3. Optimal Inventory Policy Involving Ordering Cost Reduction, Back-Order Discounts, and Variable Lead Time Demand by Minimax Criterion

    Directory of Open Access Journals (Sweden)

    Jong-Wuu Wu

    2009-01-01

    This paper treats the backorder rate as a control variable in order to widen the applications of a continuous review inventory model. Moreover, we consider a backorder rate proposed by combining Ouyang and Chuang (2001) (or Lee (2005)) with Pan and Hsiao (2001) to present a new form. Thus, the backorder rate depends on the amount of shortages and on the backorder price discounts. In addition, we treat the ordering cost as a decision variable. We then develop an algorithmic procedure to find the optimal inventory policy by the minimax criterion. Finally, a numerical example is given to illustrate the results.

  4. A Cost-Effective Approach to Optimizing Microstructure and Magnetic Properties in Ce17Fe78B6 Alloys

    Directory of Open Access Journals (Sweden)

    Xiaohua Tan

    2017-07-01

    Optimizing fabrication parameters for rapid solidification of Re-Fe-B (Re = rare earth) alloys can lead to nanocrystalline products with hard magnetic properties without any heat treatment. In this work, we enhanced the magnetic properties of Ce17Fe78B6 ribbons by engineering both the microstructure and the volume fraction of the Ce2Fe14B phase through optimization of the chamber pressure and the wheel speed used for quenching the liquid. We explored the relationship between these two parameters (chamber pressure and wheel speed), and proposed an approach to identifying the experimental conditions most likely to yield a homogeneous microstructure and reproducible magnetic properties. The optimized experimental conditions resulted in a microstructure with homogeneously dispersed Ce2Fe14B and CeFe2 nanocrystals. The best magnetic properties were obtained at a chamber pressure of 0.05 MPa and a wheel speed of 15 m·s−1. Without the conventional heat treatment that is usually required, key magnetic properties were maximized by optimizing the processing parameters of rapid solidification in a cost-effective manner.

  5. Data of cost-optimal solutions and retrofit design methods for school renovation in a warm climate.

    Science.gov (United States)

    Zacà, Ilaria; Tornese, Giuliano; Baglivo, Cristina; Congedo, Paolo Maria; D'Agostino, Delia

    2016-12-01

    "Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings" (Paolo Maria Congedo, Delia D'Agostino, Cristina Baglivo, Giuliano Tornese, Ilaria Zacà) [1] is the paper to which this article refers. It reports data related to the establishment of several variants of energy-efficient retrofit measures selected for two existing school buildings located in the Mediterranean area. In compliance with the cost-optimal analysis described in the Energy Performance of Buildings Directive and its guidelines [2], [3], these data are useful for the integration of renewable energy sources and high-performance technical systems in school renovation. The data on cost-efficient high-performance solutions are provided in tables that are explained in the following sections. The data focus on the school refurbishment sector, to which European policies and investments are directed. A methodological approach already used in previous studies on new buildings is followed (Baglivo Cristina, Congedo Paolo Maria, D'Agostino Delia, Zacà Ilaria, 2015; Ilaria Zacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo; Baglivo Cristina, Congedo Paolo Maria, D'Agostino Delia, Zacà Ilaria, 2015; Ilaria Zacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo, 2015; Paolo Maria Congedo, Cristina Baglivo, Ilaria Zacà, Delia D'Agostino, 2015) [4], [5], [6], [7], [8]. The files give the cost-optimal solutions for a kindergarten (REF1) and a nursery school (REF2) located in Sanarica and Squinzano (province of Lecce, Southern Italy). The two reference buildings differ in construction period, materials and systems. The eleven tables provided contain data about the localization of the buildings, the geometrical features and thermal properties of the envelope, and the energy efficiency measures related to walls, windows, heating, cooling, DHW and renewables. Output values of energy consumption, gas emission and costs are given for a

  6. Photodissociation of OH in interstellar clouds

    NARCIS (Netherlands)

    Dishoeck, van E.F.; Dalgarno, A.

    1984-01-01

    Calculations are presented of the lifetime of OH against photodissociation by the interstellar radiation field as a function of depth into interstellar clouds containing grains of various scattering properties. The effectiveness of the different photodissociation channels changes with depth into a c

  7. Transport Studies Enabling Efficiency Optimization of Cost-Competitive Fuel Cell Stacks (aka AURORA: Areal Use and Reactant Optimization at Rated Amperage)

    Energy Technology Data Exchange (ETDEWEB)

    Conti, Amedeo [Nuvera Fuel Cells, Inc., Billerica, MA (United States); Dross, Robert [Nuvera Fuel Cells, Inc., Billerica, MA (United States)

    2013-12-06

    Hydrogen fuel cells are recognized as one of the most viable solutions for mobility in the 21st century; however, there are technical challenges that must be addressed before the technology can become available for mass production. One of the most demanding aspects is the costs of present-day fuel cells which are prohibitively high for the majority of envisioned markets. The fuel cell community recognizes two major drivers to an effective cost reduction: (1) decreasing the noble metals content, and (2) increasing the power density in order to reduce the number of cells needed to achieve a specified power level. To date, the majority of development work aimed at increasing the value metric (i.e. W/mg-Pt) has focused on the reduction of precious metal loadings, and this important work continues. Efforts to increase power density have been limited by two main factors: (1) performance limitations associated with mass transport barriers, and (2) the historical prioritization of efficiency over cost. This program is driven by commercialization imperatives, and challenges both of these factors. The premise of this Program, supported by proprietary cost modeling by Nuvera, is that DOE 2015 cost targets can be met by simultaneously exceeding DOE 2015 targets for Platinum loadings (using materials with less than 0.2 mg-Pt/cm2) and MEA power density (operating at higher than 1.0 Watt/cm2). The approach of this program is to combine Nuvera’s stack technology, which has demonstrated the ability to operate stably at high current densities (> 1.5 A/cm2), with low Platinum loading MEAs developed by Johnson Matthey in order to maximize Pt specific power density and reduce stack cost. A predictive performance model developed by PSU/UTK is central to the program allowing the team to study the physics and optimize materials/conditions specific to low Pt loading electrodes and ultra-high current density and operation.

  8. Energy Efficient Low-Cost Virtual Backbone Construction for Optimal Routing in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    K. Mohaideen Pitchai

    2014-01-01

    Many prominent applications in wireless sensor networks require that collected information be routed to end nodes in an efficient manner. In general, weighted connected dominating set (WCDS) based routing is a promising approach for enhancing routing efficiency in sensor networks, and backbones have been used extensively in routing. Here, an efficient WCDS algorithm is proposed for constructing a virtual backbone with low total cost, low hop spanning ratio, and a minimum number of dominators. We report a systematic approach with three phases. The initial phase considers the issues of revoking a partial CDS tree from a complete CDS tree. The secondary and final phases complete the design of the algorithm by determining the dominators through an iterative process. Our findings reveal better performance than the existing algorithms in terms of total cost, hop spanning ratio, and number of dominators.
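As an illustration of the backbone idea (not the paper's three-phase WCDS algorithm), a simple greedy connected dominating set can be grown by repeatedly adding the already-covered node that dominates the most uncovered neighbors. The topology below is an invented toy graph, assumed connected.

```python
# Greedy connected dominating set (CDS) sketch: dominators form a connected
# set such that every node is either in the set or adjacent to it. Real WCDS
# algorithms also weight nodes (e.g., by residual energy); this toy does not.

def greedy_cds(adj):
    start = max(adj, key=lambda v: len(adj[v]))        # highest-degree seed
    cds, covered = {start}, {start} | adj[start]
    while covered != set(adj):
        # Grow from already-covered nodes so the CDS stays connected,
        # picking the candidate that dominates the most uncovered nodes.
        cand = max((v for v in covered if v not in cds),
                   key=lambda v: len(adj[v] - covered))
        cds.add(cand)
        covered |= adj[cand]
    return cds

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4, 6}, 6: {5}}
print(sorted(greedy_cds(adj)))
```

Restricting candidates to already-covered nodes is what keeps the growing dominator set connected, which is the property a routing backbone needs.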

  9. The optimal suppression of a low-cost technology by a durable-good monopoly

    OpenAIRE

    Karp, Larry; Perloff, Jeffrey M.

    1994-01-01

    If a durable-good monopoly can use either of two technologies whose properties are known to consumers, the monopoly uses only the technology with the lowest average cost at low levels of production. If consumers only know about technologies in use, the monopoly may use an inferior technology initially to increase its profits, keeping the new, efficient technology secret and switching later. Thus, in either case, an inferior technology may be used; however, switching between technologies occur...

  10. Construction of Optimal-Path Maps for Homogeneous-Cost-Region Path-Planning Problems

    Science.gov (United States)

    1989-09-01

    HCA opposite-edge boundaries and HCA corner-cutting boundaries. HCA opposite-edge boundaries are the generalization of obstacle opposite-edge... APPENDIX D - HIGH-COST EXTERIOR-GOAL HCA INTERIOR-BOUNDARY CONSTRUCTION SOURCE CODE. File "bdrygen", updated 12 Jan 89. This program generates boundaries for HCA interiors and writes them to two files; "bdry out" is a file of Prolog facts recording the boundary and terrain information

  11. Cost optimizing of large-scale offshore wind farms. Appendix E to J. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-07-01

    This Volume 3 contains reports prepared by SEAS Distribution A.m.b.A. and partly by Risoe National Laboratory. Appendix E: Grid connection. Appendix F: Review of potential offshore sites. Appendix G: Optimisation of wind farm layout. Appendix G.1: Evaluation of Power Production from Three Offshore Wind Farms. Appendix H: Investments, operation and maintenance costs. Appendix I: Authority approvals and environmental investigations. Appendix J: Outline description of demonstration project. Appendix G.1 has been prepared by Risoe National Laboratory. (au)

  12. Optimal Dividend and Dynamic Reinsurance Strategies with Capital Injections and Proportional Costs

    Institute of Scientific and Technical Information of China (English)

    Yi-dong WU; Jun-yi GUO

    2012-01-01

    We consider an optimization problem of an insurance company in the diffusion setting, which controls the dividends payout as well as the capital injections. To maximize the cumulative expected discounted dividends minus the penalized discounted capital injections until the ruin time, there is a possibility of (cheap or non-cheap) proportional reinsurance. We solve the control problems by constructing two categories of suboptimal models, one without capital injections and one with no bankruptcy by capital injection. Then we derive the explicit solutions for the value function and totally characterize the optimal strategies. Particularly, for cheap reinsurance, they are the same as those in the model of no bankruptcy.

  13. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost incurred during operation. On the assumption that vibration directly reflects operation reliability and the degree of wear, operation reliability can be expressed as the normalized vibration level. The behavior of vibration with operating point was studied, and it was concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. In this shape there is a narrow sweet spot (80 to 100 percent of BEP flow) where vibration levels are low, and away from resonance the vibration also scales with the square of the rotation speed. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedules produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
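    The bathtub-shaped vibration curve and speed-squared scaling described in this record can be sketched as a toy model. All coefficients below (minimum vibration level, quadratic steepness, normalization ceiling) are illustrative assumptions, not values from the paper:

```python
def vibration_level(flow_frac_bep, speed_frac, v_min=1.0, k=20.0):
    """Toy bathtub model: vibration is lowest near 90% of BEP flow and
    rises quadratically away from it; it scales with speed squared
    (no resonance assumed).  Coefficients are illustrative only."""
    base = v_min + k * (flow_frac_bep - 0.9) ** 2
    return base * speed_frac ** 2

def operation_reliability(flow_frac_bep, speed_frac, v_max=10.0):
    """Reliability expressed as the normalized complement of vibration."""
    v = min(vibration_level(flow_frac_bep, speed_frac), v_max)
    return 1.0 - v / v_max

# Operating in the 80-100% BEP 'sweet spot' scores higher reliability
# than running far off the best-efficiency point:
print(operation_reliability(0.9, 1.0), operation_reliability(0.5, 1.0))
```

    With these assumed coefficients, operating near 90% of BEP flow at rated speed yields a markedly higher reliability score than operating at 50% of BEP flow.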

  14. DATA MINING METHODOLOGY FOR DETERMINING THE OPTIMAL MODEL OF COST PREDICTION IN SHIP INTERIM PRODUCT ASSEMBLY

    Directory of Open Access Journals (Sweden)

    Damir Kolich

    2016-03-01

    Full Text Available In order to accurately predict the costs of the thousands of interim products assembled in shipyards, skilled engineers must develop detailed Gantt charts for each interim product separately, which takes many hours. It is therefore helpful to develop a prediction tool that estimates the cost of interim products accurately and quickly without the need for skilled engineers; this drives down shipyard costs and improves competitiveness. Data mining is used extensively for developing prediction models in other industries. Since ships consist of thousands of interim products, it is logical to develop a data mining methodology for a shipyard or any other manufacturing industry where interim products are produced. The methodology involves analysis of existing interim products and data collection. Pre-processing and principal component analysis are done to make the data “user-friendly” for later prediction processing and the development of both accurate and robust models. The support vector machine is demonstrated to be the better model when the number of tuples is low; however, as the number of tuples increases beyond 10,000, the artificial neural network model is recommended.

  15. Detection of interstellar hydrogen peroxide

    CERN Document Server

    Bergman, P; Liseau, R; Larsson, B; Olofsson, H; Menten, K M; Güsten, R

    2011-01-01

    The molecular species hydrogen peroxide, HOOH, is likely to be a key ingredient in the oxygen and water chemistry in the interstellar medium. Our aim with this investigation is to determine how abundant HOOH is in the cloud core ρ Oph A. By observing several transitions of HOOH in the (sub)millimeter regime we seek to identify the molecule and also to determine the excitation conditions through a multilevel excitation analysis. We have detected three spectral lines toward the SM1 position of ρ Oph A at velocity-corrected frequencies that coincide very closely with those measured from laboratory spectroscopy of HOOH. A fourth line was detected at the 4σ level. We also found through mapping observations that the HOOH emission extends (about 0.05 pc) over the densest part of the ρ Oph A cloud core. We derive an abundance of HOOH relative to that of H2 in the SM1 core of about 1×10⁻¹⁰. To our knowledge, this is the first reported detection of HOOH in the interstellar medium.

  16. Herschel observations of interstellar chloronium

    CERN Document Server

    Neufeld, David A; Snell, Ronald L; Lis, Dariusz; Benz, Arnold O; Bruderer, Simon; Black, John H; De Luca, Massimo; Gerin, Maryvonne; Goldsmith, Paul F; Gupta, Harshal; Indriolo, Nick; Bourlot, Jacques Le; Petit, Franck Le; Larsson, Bengt; Melnick, Gary J; Menten, Karl M; Monje, Raquel; Nagy, Zsofia; Phillips, Thomas G; Sandqvist, Aage; Sonnentrucker, Paule; van der Tak, Floris; Wolfire, Mark G

    2012-01-01

    Using the Herschel Space Observatory's Heterodyne Instrument for the Far-Infrared (HIFI), we have observed para-chloronium (H2Cl+) toward six sources in the Galaxy. We detected interstellar chloronium absorption in foreground molecular clouds along the sight-lines to the bright submillimeter continuum sources Sgr A (+50 km/s cloud) and W31C. Both the para-H2-35Cl+ and para-H2-37Cl+ isotopologues were detected, through observations of their 1(11)-0(00) transitions at rest frequencies of 485.42 and 484.23 GHz, respectively. For an assumed ortho-to-para ratio of 3, the observed optical depths imply that chloronium accounts for ~ 4 - 12% of chlorine nuclei in the gas phase. We detected interstellar chloronium emission from two sources in the Orion Molecular Cloud 1: the Orion Bar photodissociation region and the Orion South condensation. For an assumed ortho-to-para ratio of 3 for chloronium, the observed emission line fluxes imply total beam-averaged column densities of ~ 2.0E+13 cm-2 and ~ 1.2E+13 cm-2, respect...

  17. On the Nature of Interstellar Grains

    Science.gov (United States)

    Hoyle, F.; Wickramasinghe, C.

    Data on interstellar extinction are interpreted to imply an identification of interstellar grains with naturally freeze-dried bacteria and algae. The total mass of such bacterial and algal cells in the galaxy is enormous, ~10⁴⁰ g. The identification is based on Mie scattering calculations for an experimentally determined size distribution of bacteria. Agreement between our model calculations and astronomical data is remarkably precise over the wavelength intervals 1 μm⁻¹ ... pigments. The strongest of the diffuse interstellar bands are provisionally assigned to carotenoid-chlorophyll pigment complexes such as exist in algae and pigmented bacteria. The λ2200 Å interstellar absorption feature could be due to 'degraded' cellulose strands which form spherical graphitic particles, but could equally well be due to protein-lipid-nucleic acid complexes in bacteria and viruses. Interstellar extinction at wavelengths λ < 1800 Å could be due to scattering by virus particles.

  18. Realistic Detectability of Close Interstellar Comets

    CERN Document Server

    Cook, Nathaniel V; Granvik, Mikael; Stephens, Denise C

    2016-01-01

    During the planet formation process, billions of comets are created and ejected into interstellar space. The detection and characterization of such interstellar comets (also known as extra-solar planetesimals or extra-solar comets) would give us in situ information about the efficiency and properties of planet formation throughout the galaxy. However, no interstellar comets have ever been detected, despite the fact that their hyperbolic orbits would make them readily identifiable as unrelated to the solar system. Moro-Martín et al. 2009 have made a detailed and reasonable estimate of the properties of the interstellar comet population. We extend their estimates of detectability with a numerical model that allows us to consider "close" interstellar comets, e.g., those that come within the orbit of Jupiter. We include several constraints on a "detectable" object that allow for realistic estimates of the frequency of detections expected from the Large Synoptic Survey Telescope (LSST) and other surveys. The inf...

  19. Thermodynamics and Charging of Interstellar Iron Nanoparticles

    Science.gov (United States)

    Hensley, Brandon S.; Draine, B. T.

    2017-01-01

    Interstellar iron in the form of metallic iron nanoparticles may constitute a component of the interstellar dust. We compute the stability of iron nanoparticles to sublimation in the interstellar radiation field, finding that iron clusters can persist down to a radius of ≃4.5 Å, and perhaps smaller. We employ laboratory data on small iron clusters to compute the photoelectric yields as a function of grain size and the resulting grain charge distribution in various interstellar environments, finding that iron nanoparticles can acquire negative charges, particularly in regions with high gas temperatures and ionization fractions. If ≳10% of the interstellar iron is in the form of ultrasmall iron clusters, the photoelectric heating rate from dust may be increased by up to tens of percent relative to dust models with only carbonaceous and silicate grains.

  20. Thermodynamics and Charging of Interstellar Iron Nanoparticles

    CERN Document Server

    Hensley, Brandon S

    2016-01-01

    Interstellar iron in the form of metallic iron nanoparticles may constitute a component of the interstellar dust. We compute the stability of iron nanoparticles to sublimation in the interstellar radiation field, finding that iron clusters can persist down to a radius of ≃4.5 Å, and perhaps smaller. We employ laboratory data on small iron clusters to compute the photoelectric yields as a function of grain size and the resulting grain charge distribution in various interstellar environments, finding that iron nanoparticles can acquire negative charges, particularly in regions with high gas temperatures and ionization fractions. If ≳10% of the interstellar iron is in the form of ultrasmall iron clusters, the photoelectric heating rate from dust may be increased by up to tens of percent relative to dust models with only carbonaceous and silicate grains.

  1. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades vs. Cost-Optimized Solutions; Chicago, Illinois (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2014-07-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for fifteen Chicagoland single family housing archetypes, called housing groups. In the present study, 800 IHP homes are first matched to one of these fifteen housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages are substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt-recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  2. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Milby, M. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Baker, J. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States)

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR® (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for 15 Chicagoland single family housing archetypes. In the present study, 800 IHP homes are first matched to one of these 15 housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages are substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt-recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  3. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S.; Milby, M.; Baker, J.

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR® (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for fifteen Chicagoland single family housing archetypes, called housing groups. In the present study, 800 IHP homes are first matched to one of these fifteen housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages are substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt-recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  4. Tailoring the optimal control cost function to a desired output: application to minimizing phase errors in short broadband excitation pulses

    Science.gov (United States)

    Skinner, Thomas E.; Reiss, Timo O.; Luy, Burkhard; Khaneja, Navin; Glaser, Steffen J.

    2005-01-01

    The de facto standard cost function has been used heretofore to characterize the performance of pulses designed using optimal control theory. The freedom to choose new, creative quality factors designed for specific purposes is demonstrated. While the methodology has more general applicability, its utility is illustrated by comparison to a consistently chosen example—broadband excitation. The resulting pulses are limited to the same maximum RF amplitude used previously and tolerate the same variation in RF homogeneity deemed relevant for standard high-resolution NMR probes. Design criteria are unchanged: transformation of Iz → Ix over resonance offsets of ±20 kHz and RF variability of ±5%, with a peak RF amplitude equal to 17.5 kHz. However, the new cost effectively trades a small increase in residual z magnetization for improved phase in the transverse plane. Compared to previous broadband excitation by optimized pulses (BEBOP), significantly shorter pulses are achievable, with only marginally reduced performance. Simulations transform Iz to greater than 0.98 Ix, with phase deviations of the final magnetization less than 2°, over the targeted ranges of resonance offset and RF variability. Experimental performance is in excellent agreement with the simulations.

  5. Cost-based Optimal Distributed Generation Planning with Considering Voltage Depended Loads and Power Factor of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Babak Yousefi-Khangah

    2014-12-01

    Full Text Available If the location and size of Distributed Generation (DG) are determined accurately, DG can improve the network situation and reduce operation costs. In this paper, various market conditions are considered to maximize the benefit of DG's presence and to strike a trade-off among the advantages of DG, the network situation, and Distribution Company (DISCO) owners. To determine the optimal location and size of DG, two methods, cost minimization and nodal pricing, are combined. In addition to evaluating the impact of parameters such as variations in energy price and load on the objective function, the effect of these parameters on the location and size of DG is considered. To confirm the results, voltage-dependent loads and variation of the power factor of the DG units are applied, and the effect of power factor on the optimal location and size of DG is shown. A method is proposed to reconcile the differing results caused by different power factors. To observe the long-term impact of DG's presence in the network, annual load growth over five years is considered. The study is carried out on the IEEE 30-bus test system.

  6. An Algorithm for Optimized Time, Cost, and Reliability in a Distributed Computing System

    Directory of Open Access Journals (Sweden)

    Pankaj Saxena

    2013-03-01

    Full Text Available A Distributed Computing System (DCS) refers to multiple computer systems working on a single problem. A distributed system consists of a collection of autonomous computers, connected through a network, which enables the computers to coordinate their activities and to share the resources of the system. In distributed computing, a single problem is divided into many parts, and each part is solved by a different computer. As long as the computers are networked, they can communicate with each other to solve the problem. A DCS consists of multiple software components that reside on multiple computers but run as a single system. The computers in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network. The ultimate goal of distributed computing is to maximize performance in a time-effective, cost-effective, and reliability-effective manner. In a DCS the whole workload is divided into small and independent units, called tasks, which are allocated onto the available processors. A DCS also ensures fault tolerance and enables resource accessibility in the event that one of the components fails. The problem addressed is that of assigning tasks to a distributed computing system: given a set of communicating tasks to be executed on a set of processors, to which processor should each task be assigned to obtain more reliable results in less time and at lower cost? The assignment of task modules is done statically. In this paper an efficient algorithm for task allocation in terms of optimal time, cost, or reliability is presented for the case where the number of tasks exceeds the number of processors.
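    The static allocation problem summarized above can be illustrated with a brute-force sketch. The execution-cost matrix is invented for illustration, and the makespan objective is one simple stand-in for the paper's combined time/cost/reliability criterion:

```python
from itertools import product

def best_assignment(exec_cost):
    """Exhaustively assign each task to some processor so that the
    makespan (maximum processor load) is minimized.  exec_cost[t][p]
    is the cost of running task t on processor p."""
    n_tasks = len(exec_cost)
    n_procs = len(exec_cost[0])
    best = None
    for assign in product(range(n_procs), repeat=n_tasks):
        load = [0.0] * n_procs
        for t, p in enumerate(assign):
            load[p] += exec_cost[t][p]
        makespan = max(load)
        if best is None or makespan < best[0]:
            best = (makespan, assign)
    return best

# Four tasks on two processors (more tasks than processors),
# with illustrative per-processor execution costs:
costs = [[4, 6], [3, 5], [7, 2], [5, 4]]
print(best_assignment(costs))
```

    Exhaustive search is only feasible for tiny instances; the record's point is precisely that larger instances need an efficient allocation algorithm.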

  7. Cost optimizing of large-scale offshore wind farms. Summary and conclusion. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-07-01

    The project comprises investigation of the technical and economical possibilities of large-scale offshore wind farms at 3 locations in the eastern Danish waters: Roedsand and Gedser Rev, located south of the islands of Falster and Lolland, and Omoe Staagrunde, located south-west of the island of Zealand, plus experiences obtained from British and German offshore wind energy projects. The project included wind and wave measurements at the above 3 locations, data collection, data processing, meteorological analysis, modelling of wind turbine structure, studies of grid connection, design and optimisation of foundations, plus estimates of investments and operation and maintenance costs. All costs are in ECU on a 1997 basis. The main conclusions of the project, financed by the European Commission, are: Areas are available for large-scale offshore wind farms in the Danish waters; A large wind potential is found on the sites; Park layouts of projects consisting of around 100 wind turbines each have been developed; Design of the foundations has been optimised radically compared to previous designs; A large potential for optimising the wind turbine design and operation has been found; Grid connection of the first proposed large wind farms is possible with only minor reinforcement of the transmission system; The visual impact is not prohibitive for the projects; A production cost of 4-5 ECU-cent/kWh is competitive with current onshore projects. All in all, the results from this project have proven to be very useful for the further development of large-scale wind farms in the Danish waters, and thereby an inspiration for similar projects in other (European) countries. (LN)

  8. Optimizing Feedlot Diagnostic Testing Strategies Using Test Characteristics, Disease Prevalence, and Relative Costs of Misdiagnosis.

    Science.gov (United States)

    Theurer, Miles E; White, Brad J; Renter, David G

    2015-11-01

    Diagnostic tests are commonly used by feedlot practitioners and range from clinical observations to more advanced physiologic testing. Diagnostic sensitivity and specificity, the estimated prevalence in the population, and the costs of misdiagnoses need to be considered when selecting a diagnostic test strategy and interpreting results. This article describes methods for evaluating diagnostic strategies using economic outcomes, so that the most appropriate strategy for the expected situation can be selected. The diagnostic sensitivity, specificity, and expected prevalence influence the likelihood of misdiagnosis in a given population, and the estimated direct economic impact can be used to quantify differences among diagnostic strategies.
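    The trade-off described in this record (test characteristics, prevalence, and relative costs of misdiagnosis) reduces to an expected-cost calculation that can be sketched directly; all numbers below are hypothetical, not taken from the article:

```python
def expected_misdiagnosis_cost(sensitivity, specificity, prevalence,
                               cost_false_neg, cost_false_pos):
    """Expected misdiagnosis cost per animal for one diagnostic strategy."""
    p_false_neg = prevalence * (1.0 - sensitivity)          # sick but missed
    p_false_pos = (1.0 - prevalence) * (1.0 - specificity)  # healthy but treated
    return p_false_neg * cost_false_neg + p_false_pos * cost_false_pos

# Hypothetical comparison of two strategies (illustrative numbers only):
clinical_only = expected_misdiagnosis_cost(0.62, 0.63, 0.10, 250.0, 40.0)
added_testing = expected_misdiagnosis_cost(0.85, 0.90, 0.10, 250.0, 40.0)
print(round(clinical_only, 2), round(added_testing, 2))
```

    Under these assumed inputs, the strategy with better sensitivity and specificity has the lower expected misdiagnosis cost, but the ranking can reverse when prevalence or the relative costs of false negatives and false positives change, which is the article's central point.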

  9. Design Optimization of Mixed-Criticality Real-Time Applications on Cost-Constrained Partitioned Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2011-01-01

    In this paper we are interested in implementing mixed-criticality hard real-time applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. Mixed...... of different SILs can share a partition only if they are all elevated to the highest SIL among them. Such elevation leads to increased development costs. We are interested in determining (i) the mapping of tasks to processors, (ii) the assignment of tasks to partitions, (iii) the sequence and size of the time...

  10. Simplified Occupancy Grid Indoor Mapping Optimized for Low-Cost Robots

    Directory of Open Access Journals (Sweden)

    Javier Garrido

    2013-10-01

    Full Text Available This paper presents a mapping system that is suitable for small mobile robots. An ad hoc algorithm for mapping based on the Occupancy Grid method has been developed. The algorithm includes some simplifications in order to be used with low-cost hardware resources. The proposed mapping system has been built in order to be completely autonomous and unassisted. The proposal has been tested with a mobile robot that uses infrared sensors to measure distances to obstacles and uses an ultrasonic beacon system for localization, besides wheel encoders. Finally, experimental results are presented.
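    The Occupancy Grid method that the record simplifies can be sketched in its standard log-odds form along a single sensor ray; the inverse-sensor-model probabilities below (0.7 occupied at the hit, 0.3 before it) are illustrative assumptions, not the paper's values:

```python
import math

# Log-odds increments for a simplified inverse sensor model
# (values are illustrative, not from the paper).
L_OCC = math.log(0.7 / 0.3)   # cell at the measured range: likely occupied
L_FREE = math.log(0.3 / 0.7)  # cells before it: likely free

def update_ray(log_odds, hit_cell):
    """Update a 1-D strip of the grid for one range measurement:
    cells before the hit become freer, the hit cell more occupied."""
    for i in range(hit_cell):
        log_odds[i] += L_FREE
    log_odds[hit_cell] += L_OCC
    return log_odds

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

grid = [0.0] * 5             # unknown: p = 0.5 everywhere
update_ray(grid, 3)
update_ray(grid, 3)          # a second, identical measurement
print([round(probability(l), 2) for l in grid])
```

    Repeated consistent measurements push cell probabilities toward 0 or 1, while cells beyond the hit stay at 0.5; the paper's simplifications for low-cost hardware would replace the floating-point log-odds with cheaper integer counters.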

  11. A bridge network maintenance framework for Pareto optimization of stakeholders/users costs

    Energy Technology Data Exchange (ETDEWEB)

    Orcesi, Andre D., E-mail: andre.orcesi@lcpc.f [Laboratoire Central des Ponts et Chaussees, Section Durabilite des Ouvrages d'Art, 58 boulevard Lefebvre, F-75732 Paris Cedex 15 (France); Cremona, Christian F., E-mail: Christian.Cremona@developpement-durable.gouv.f [Commissariat General au Developpement Durable, Direction de la recherche et de l'innovation, Tour Pascal B, F-92055, La Defense Cedex (France)

    2010-11-15

    For managing highway bridges, stakeholders require efficient and practical decision making techniques. In a context of limited bridge management budget, it is crucial to determine the most effective breakdown of financial resources over the different structures of a bridge network. Bridge management systems (BMSs) have been developed for such a purpose. However, they generally rely on an individual approach. The influence of the position of bridges in the transportation network, and the consequences of inadequate service for the network users due to maintenance actions or bridge failure, are not taken into consideration. Therefore, maintenance strategies obtained with current BMSs do not necessarily lead to an optimal level of service (LOS) of the bridge network for the users of the transportation network. Besides, the assessment of the structural performance of highway bridges usually requires access to the geometrical and mechanical properties of its components. Such information might not be available for all structures in a bridge network for which managers try to schedule and prioritize maintenance strategies. On the contrary, visual inspections are performed regularly and information is generally available for all structures of the bridge network. The objective of this paper is threefold: (i) propose an advanced network-level bridge management system considering the position of each bridge in the transportation network, (ii) use information obtained at visual inspections to assess the performance of bridges, and (iii) compare optimal maintenance strategies, obtained with a genetic algorithm, when considering interests of users and bridge owner either separately as conflicting criteria, or simultaneously as a common interest for the whole community. In each case, safety and serviceability aspects are taken into account in the model when determining optimal strategies. The theoretical and numerical developments are applied to a French bridge network.
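    Treating owner and user interests as conflicting criteria, as in the record above, comes down to keeping the non-dominated maintenance strategies. A minimal dominance filter, with invented (owner cost, user cost) pairs rather than data from the paper:

```python
def pareto_front(strategies):
    """Keep the non-dominated (owner_cost, user_cost) points: a strategy
    is dominated if another one is no worse on both costs."""
    front = []
    for s in strategies:
        dominated = any(o[0] <= s[0] and o[1] <= s[1] and o != s
                        for o in strategies)
        if not dominated:
            front.append(s)
    return front

# Hypothetical maintenance strategies (owner cost, user delay cost):
candidates = [(10, 90), (20, 60), (30, 65), (40, 30), (80, 25)]
print(pareto_front(candidates))
```

    A genetic algorithm, as used in the paper, explores a far larger strategy space, but its output is interpreted the same way: the decision maker picks from the surviving trade-off curve.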

  12. An optimization model for the costs allocation in spare parts collaborative networks

    Science.gov (United States)

    Ronzoni, Chiara; Ferrara, Andrea; Grassi, Andrea

    2017-07-01

    The paper focuses on aftermarket spare parts in the automotive industry. In particular, only products with infrequent and low-quantity demand are considered. This work is an extension of a previous work by the same authors in which a stochastic model for the optimal inventory policy of spare parts was presented. In this paper the authors improve what was called "the second layer" in the previous model, that is, the allocation of products among a collaborative network of distributors. The improvement relates to the basic model and to the introduction of new products into the network.

  13. Cost and CO2 emission optimization of precast prestressed concrete U-beam road bridges by a hybrid glowworm swarm algorithm

    OpenAIRE

    Yepes Piqueras, Víctor; Martí Albiñana, José Vicente

    2015-01-01

    This paper describes a methodology to optimize cost and CO2 emissions when designing precast-prestressed concrete road bridges with a double U-shape cross-section. To this end, a hybrid glowworm swarm optimization algorithm (SAGSO) is used to combine the synergy effect of the local search with simulated annealing (SA) and the global search with glowworm swarm optimization (GSO). The solution is defined by 40 variables, including the geometry, materials and reinforcement of the beam and the sl...

  14. A Flexible Job Shop Scheduling Problem with Controllable Processing Times to Optimize Total Cost of Delay and Processing

    Directory of Open Access Journals (Sweden)

    Hadi Mokhtari

    2015-11-01

    Full Text Available In this paper, the flexible job shop scheduling problem with machine flexibility and controllable processing times is studied. The main idea is that the processing times of operations may be controlled by consumption of additional resources. The purpose of this paper is to find the best trade-off between processing cost and delay cost in order to minimize the total costs. The proposed model, flexible job shop scheduling with controllable processing times (FJCPT), is formulated as an integer non-linear programming (INLP) model and then converted into an integer linear programming (ILP) model. Due to the NP-hardness of FJCPT, conventional analytic optimization methods are not efficient. Hence, in order to solve the problem, a Scatter Search (SS), an efficient metaheuristic method, is developed. To show the effectiveness of the proposed method, numerical experiments are conducted. The efficiency of the proposed algorithm is compared with that of a genetic algorithm (GA) available in the literature for solving the FJSP. The results show that the proposed SS provides better solutions than the existing GA.
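    The core trade-off, spending resources to compress processing times versus paying delay penalties, can be sketched for a single job. The cost parameters below are invented for illustration; the paper's FJCPT model handles many jobs, machines, and operations simultaneously:

```python
def total_cost(duration, normal, due, crash_cost_per_unit, delay_cost_per_unit):
    """Processing cost rises as the job is compressed below its normal
    duration; delay cost accrues past the due date (one-job sketch)."""
    crashing = crash_cost_per_unit * max(0, normal - duration)
    delay = delay_cost_per_unit * max(0, duration - due)
    return crashing + delay

def best_duration(normal, minimum, due, crash_cost, delay_cost):
    """Scan feasible integer durations for the cheapest trade-off."""
    return min(range(minimum, normal + 1),
               key=lambda d: total_cost(d, normal, due, crash_cost, delay_cost))

# A job normally taking 10 time units, compressible to 6, due at time 8;
# crashing costs 3 per unit saved, lateness costs 5 per unit late:
print(best_duration(10, 6, 8, 3, 5))
```

    With these assumed rates, the optimum compresses the job just enough to finish exactly at the due date; cheaper crashing or harsher delay penalties would shift that balance.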

  15. Optimal Materials and Deposition Technique Lead to Cost-Effective Solar Cell with Best-Ever Conversion Efficiency (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2012-07-01

    This fact sheet describes how the SJ3 solar cell was invented, explains how the technology works, and why it won an R&D 100 Award. Based on NREL and Solar Junction technology, the commercial SJ3 concentrator solar cell, with 43.5% conversion efficiency at 418 suns, uses a lattice-matched multijunction architecture that has near-term potential for cells with ~50% efficiency. Multijunction solar cells have higher conversion efficiencies than any other type of solar cell. But developers of utility-scale and space applications crave even better efficiencies at lower costs to be both cost-effective and able to meet the demand for power. The SJ3 multijunction cell, developed by Solar Junction with assistance from foundational technological advances by the National Renewable Energy Laboratory, has the highest efficiency to date, almost 2% (absolute) more than the current industry-standard multijunction cell, yet at a comparable cost. So what did it take to create this cell with 43.5% efficiency at 418-sun concentration? A combination of materials with carefully designed properties, a manufacturing technique allowing precise control, and an optimized device design.

  16. A Novel Hybrid Algorithm for Software Cost Estimation Based on Cuckoo Optimization and K-Nearest Neighbors Algorithms

    Directory of Open Access Journals (Sweden)

    E. E. Miandoab

    2016-06-01

    Full Text Available The inherent uncertainty of factors such as technology and creativity in evolving software development is a major challenge for the management of software projects. To address this challenge the project manager, in addition to examining the project progress, may cope with problems such as increased operating costs, lack of resources, and failure to implement key activities, in order to better plan the project. Software Cost Estimation (SCE) models do not fully cover new approaches, and this lack of coverage causes problems for both consumers and producers. To avoid these problems, many methods have been proposed. Model-based methods are the most familiar technique, but they rely on a single formula with constant values and therefore cannot keep pace with developments in software engineering. Accordingly, researchers have tried to solve the SCE problem using machine learning algorithms, data mining algorithms, and artificial neural networks. In this paper, a hybrid algorithm that combines the Cuckoo Optimization Algorithm (COA) and the K-Nearest Neighbors (KNN) algorithm is used. The hybrid algorithm is run on six different data sets and evaluated against eight evaluation criteria. The results show an improved accuracy of estimated cost.
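
    The analogy-based (KNN) half of the approach this record describes can be sketched as follows. This is a minimal estimator only: the cuckoo-search step that the paper uses to tune the model is omitted, and the feature set and historical data below are hypothetical.

    ```python
    from math import sqrt

    def knn_estimate(history, query, k=3):
        """Estimate a project's cost as the mean actual cost of its k most
        similar historical projects (Euclidean distance on feature vectors)."""
        dist = lambda a, b: sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        nearest = sorted(history, key=lambda p: dist(p[0], query))[:k]
        return sum(cost for _, cost in nearest) / k

    # (features, actual cost) pairs; features here are e.g. (size in KLOC, team experience)
    history = [((10, 3), 120.0), ((12, 3), 140.0), ((40, 5), 400.0), ((8, 2), 100.0)]
    print(knn_estimate(history, (11, 3), k=3))  # → 120.0 (mean of the 3 closest projects)
    ```

    In the paper's setting, the metaheuristic would search over feature weights (or k) to minimize an error criterion such as MMRE on a training set; the plain KNN above is the building block being tuned.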

  17. A low-cost, high-yield fabrication method for producing optimized biomimetic dry adhesives

    Science.gov (United States)

    Sameoto, D.; Menon, C.

    2009-11-01

    We present a low-cost, large-scale method of fabricating biomimetic dry adhesives. This process is useful because it uses all photosensitive polymers with minimum fabrication costs or complexity to produce molds for silicone-based dry adhesives. A thick-film lift-off process is used to define molds using AZ 9260 photoresist, with a slow acting, deep UV sensitive material, PMGI, used as both an adhesion promoter for the AZ 9260 photoresist and as an undercutting material to produce mushroom-shaped fibers. The benefits to this process are ease of fabrication, wide range of potential layer thicknesses, no special surface treatment requirements to demold silicone adhesives and easy stripping of the full mold if process failure does occur. Sylgard® 184 silicone is used to cast full sheets of biomimetic dry adhesives off 4" diameter wafers, and different fiber geometries are tested for normal adhesion properties. Additionally, failure modes of the adhesive during fabrication are noted and strategies for avoiding these failures are discussed. We use this fabrication method to produce different fiber geometries with varying cap diameters and test them for normal adhesion strengths. The results indicate that the cap diameters relative to post diameters for mushroom-shaped fibers dominate the adhesion properties.

  18. Cost optimizing of large-scale offshore wind farms. Appendix K to Q. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-07-01

    This Volume 4 contains reports prepared by SEAS Distribution A.m.b.A., Risoe National Laboratory, Nellemann, Nielsen and Rauschenberger A/S (NNR), Universidad Politecnica de Madrid, National Wind Power Ltd. and Stadtwerke Rostock AG. Appendix K - Wind and wave measurements; Appendix L - Establishment of design basis (wind, wave and ice loads). Appendix M - Wake effects and wind farm modelling. Appendix N - Functional requirements and optimisation of wind turbines. Appendix O - Operation and maintenance system. Appendix O.1 - Helicopter Service (alternative). Appendix P - Cost optimising of large scale offshore wind farms in UK waters. Appendix Q - Cost optimising of large scale offshore wind farms in German waters. Appendices K, L and N have been prepared by Risoe National Laboratory. Appendix M has been prepared by Universidad Politecnica de Madrid. Appendix O has been prepared by SEAS Distribution A.m.b.A. Appendix O.1 has been prepared by Nellemann, Nielsen and Rauschenberger A/S. Appendix P has been prepared by National Wind Power Ltd. Appendix Q has been prepared by Stadtwerke Rostock AG. (au)

  19. The Cost-Optimal Distribution of Wind and Solar Generation Facilities in a Simplified Highly Renewable European Power System

    Science.gov (United States)

    Kies, Alexander; von Bremen, Lüder; Schyska, Bruno; Chattopadhyay, Kabitri; Lorenz, Elke; Heinemann, Detlev

    2016-04-01

    The transition of the European power system from fossil generation towards renewable sources is driven by reasons such as decarbonisation and sustainability. Renewable power sources like wind and solar have, due to their weather dependency, fluctuating feed-in profiles, which makes their system integration a difficult task. To overcome this issue, several solutions have been investigated in the past, such as the optimal mix of wind and PV [1] and the extension of the transmission grid or storage [2]. In this work, the optimal distribution of wind turbines and solar modules in Europe is investigated. For this purpose, feed-in data with an hourly temporal resolution and a spatial resolution of 7 km covering Europe for the renewable sources wind, photovoltaics and hydro were used. Together with historical load data and a transmission model, a simplified pan-European power system was simulated. Under the cost assumptions of [3], the levelized cost of electricity (LCOE) for this simplified system consisting of generation, consumption, transmission and backup units is calculated. With respect to the LCOE, the optimal distribution of generation facilities in Europe is derived. It is shown that, by optimal placement of renewable generation facilities, the LCOE can be reduced by more than 10% compared to a meta-study scenario [4] and a self-sufficient scenario (every country produces on average as much from renewable sources as it consumes). This is mainly caused by a shift of generation facilities towards highly suitable locations, reduced backup need and increased transmission need. The results of the optimization will be shown and implications for the extension of renewable shares in the European power mix will be discussed. The work is part of the RESTORE 2050 project (Wuppertal Institute, Next Energy, University of Oldenburg), financed by the Federal Ministry of Education and Research (BMBF, Fkz. 03SFF0439A). [1] Kies, A. et al.
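
    The levelized cost of electricity used as the objective in this record is, in its simplest form, the ratio of discounted lifetime cost to discounted lifetime energy. A minimal sketch with illustrative numbers (not the study's cost assumptions from its reference [3]):

    ```python
    def lcoe(capex, opex_per_year, energy_per_year, lifetime_years, discount_rate):
        """Levelized cost of electricity: total discounted cost divided by
        total discounted energy over the plant lifetime."""
        disc = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
        cost = capex + sum(opex_per_year * d for d in disc)    # up-front + discounted O&M
        energy = sum(energy_per_year * d for d in disc)        # discounted generation
        return cost / energy

    # illustrative wind-farm figures: 1500 EUR/kW capex, 50 EUR/kW-yr O&M,
    # 2500 kWh/kW-yr yield, 20-year lifetime, 5% discount rate
    print(round(lcoe(1500.0, 50.0, 2500.0, 20, 0.05), 4))  # EUR per kWh
    ```

    Placing generators at better sites raises `energy_per_year` for the same capex, which is exactly the lever by which the optimal spatial distribution lowers the system-wide LCOE.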

  20. Polarimetry of the Interstellar Medium

    Science.gov (United States)

    Sandford, Scott; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    The talk will review what is known about the composition of ices and organics in the dense and diffuse interstellar media (ISM). Mixed molecular ices make up a significant fraction of the solid materials in dense molecular clouds, and it is now known that thermal and radiation processing of these ices results in the production of more complex organic species, some of which may survive transport into forming stellar systems and the diffuse ISM. Molecular species identified in interstellar ices include H2O, CH3OH, CO, CH4, CO2, and, somewhat surprisingly, H2. Theoretical and laboratory studies of the processing of interstellar analog ices containing these species indicate that species like HCO, H2CO, CH3, and NH3 are readily made and should also be present. The irradiation of mixed molecular ices containing these species, when followed by warming, leads to the production of a large variety of more complex species, including ethanol (CH3CH2OH), formamide (HC(=O)NH2), acetamide (CH3C(=O)NH2), nitriles or isonitriles (R-CN or R-NC), hexamethylenetetramine (HMT; C6H12N4), a number of polymeric species related to polyoxymethylene [POM, (-CH2O-)n], and ketones (R-C(=O)-R'). Spectral studies of dust in the diffuse ISM indicate the presence of fairly complex organics, some of which may be related to the organics produced in dense molecular clouds. Spectral comparisons indicate that the diffuse ISM organics may be quite similar to meteoritic kerogens, i.e. they may consist largely of aromatic moieties interlinked by short aliphatic bridges. Interestingly, recent evidence indicates that the galactic distribution of this material closely matches that of silicates, but does not correlate directly with visual extinction. This implies that a large fraction of the visual extinction is caused by a material other than these organics and silicates and that this other material has a significantly different distribution within the galaxy.

  1. Interstellar Dust Inside and Outside the Heliosphere

    CERN Document Server

    Krueger, Harald

    2008-01-01

    In the early 1990s, after its Jupiter flyby, the Ulysses spacecraft identified interstellar dust in the solar system. Since then the in-situ dust detector on board Ulysses has continuously monitored interstellar grains with masses up to 10^-13 kg, penetrating deep into the solar system. While Ulysses measured the interstellar dust stream at high ecliptic latitudes between 3 and 5 AU, interstellar impactors were also measured with the in-situ dust detectors on board Cassini, Galileo and Helios, covering a heliocentric distance range between 0.3 and 3 AU in the ecliptic plane. The interstellar dust stream in the inner solar system is altered by the solar radiation pressure force, gravitational focussing and the interaction of charged grains with the time-varying interplanetary magnetic field. The grains act as tracers of the physical conditions in the local interstellar cloud (LIC). Our in-situ measurements imply the existence of a population of 'big' interstellar grains (up to 10^-13 kg) and a gas-to-dust-mass ratio i...

  2. Cost optimizing of large-scale offshore wind farms in UK waters

    Energy Technology Data Exchange (ETDEWEB)

    Bean, D. [National Wind Power Ltd. (United Kingdom)]

    1999-07-01

    As part of the study 'Cost Optimising of Large Scale Offshore Wind Farms', National Wind Power's objective is to broaden the scope of the study into the UK context. The suitability of an offshore wind farm development has been reviewed for a variety of regions around the UK, culminating in the selection of a reference site off the east coast of England. A design basis for this site has been derived, and a preliminary foundation design has been performed from within the consortium. Due primarily to the increased wave exposure at the UK reference site, the resulting gravity and monopile designs were larger and therefore more expensive than their Danish counterparts. A summary of the required consents for an offshore wind farm in UK waters is presented, together with an update on the recent consultation process initiated by the UK Government on offshore wind energy. (au) EFP-96; JOULE-3. 22 refs.

  3. Technical improvements applied to a double-pass setup for performance and cost optimization

    Science.gov (United States)

    Sanabria, Ferran; Arévalo, Mikel Aldaba; Díaz-Doutón, Fernando; García-Guerra, Carlos Enrique; Ramo, Jaume Pujol

    2014-06-01

    We have studied the possibility of improving the performance, simplifying, and reducing the cost of a double-pass system by the use of alternative technologies. The system for correcting the spherical correction has been based on a focusable electro-optical lens, and a recording device based on CMOS technology and a superluminescent diode (SLED) light source have been evaluated separately. The suitability of the CMOS camera has been demonstrated, while the SLED could not break the speckle by itself. The final experimental setup, consisting of a CMOS camera and a laser diode, has been compared with a commercial double-pass system, proving its usefulness for ocular optical quality and scattering measurements.

  4. Polarized Emission from Interstellar Dust

    CERN Document Server

    Vaillancourt, J E

    2006-01-01

    Observations of far-infrared (FIR) and submillimeter (SMM) polarized emission are used to study magnetic fields and dust grains in dense regions of the interstellar medium (ISM). These observations place constraints on models of molecular clouds, star-formation, grain alignment mechanisms, and grain size, shape, and composition. The FIR/SMM polarization is strongly dependent on wavelength. We have attributed this wavelength dependence to sampling different grain populations at different temperatures. To date, most observations of polarized emission have been in the densest regions of the ISM. Extending these observations to regions of the diffuse ISM, and to microwave frequencies, will provide additional tests of grain and alignment models. An understanding of polarized microwave emission from dust is key to an accurate measurement of the polarization of the cosmic microwave background. The microwave polarization spectrum will put limits on the contributions to polarized emission from spinning dust and vibrat...

  5. Optimal design of green and grey stormwater infrastructure for small urban catchment based on life-cycle cost-effectiveness analysis

    Science.gov (United States)

    Yang, Y.; Chui, T. F. M.

    2016-12-01

    Green infrastructure (GI) is identified as a sustainable and environmentally friendly alternative to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with their costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to derive optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure options is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment in, and contributing area of, each type of stormwater control. The relationship between investment and optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments, while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can easily be applied to other sites worldwide and further developed into powerful decision support systems.

  6. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    Directory of Open Access Journals (Sweden)

    Zhaoxi Hong

    2017-08-01

    Full Text Available In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve maximum reliability at minimum cost. The main challenges in conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluations provided by experts for essential factors, and the dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization method integrating fuzzy reasoning Petri nets (FRPN), interval expert evaluation and cultural-based dynamic multi-objective particle swarm optimization (DMOPSO) using crowding distance sorting is proposed in this paper. Subsystem division is performed based on failure decoupling, and subsystem weights are then calculated with FRPN, reflecting dynamic and uncertain failure propagation, together with interval expert evaluation considering six essential factors. A mathematical model of reliability-based and cost-oriented product optimization is established, and the cultural-based DMOPSO with crowding distance sorting is utilized to obtain the optimized design scheme. The efficiency and effectiveness of the proposed method are demonstrated by the numerical example of the optimization design for a computer numerically controlled (CNC) machine tool.

  7. Discovery of Interstellar Hydrogen Fluoride

    Science.gov (United States)

    Neufeld, David A.; Zmuidzinas, Jonas; Schilke, Peter; Phillips, Thomas G.

    1997-01-01

    We report the first detection of interstellar hydrogen fluoride. Using the Long Wavelength Spectrometer of the Infrared Space Observatory (ISO), we have detected the 121.6973 micron J = 2-1 line of HF in absorption toward the far-infrared continuum source Sagittarius B2. The detection is statistically significant at the 13 sigma level. On the basis of our model for the excitation of HF in Sgr B2, the observed line equivalent width of 1.0 nm implies a hydrogen fluoride abundance of approximately 3 x 10(exp -10) relative to H2. If the elemental abundance of fluorine in Sgr B2 is the same as that in the solar system, then HF accounts for approximately 2% of the total number of fluorine nuclei. We expect hydrogen fluoride to be the dominant reservoir of gas-phase fluorine in Sgr B2, because it is formed rapidly in exothermic reactions of atomic fluorine with either water or molecular hydrogen; thus, the measured HF abundance suggests a substantial depletion of fluorine onto dust grains. Similar conclusions regarding depletion have previously been reached for the case of chlorine in dense interstellar clouds. We also find evidence at a lower level of statistical significance (approximately 5 sigma) for an emission feature at the expected position of the 4(sub 32)-4(sub 23) 121.7219 micron line of water. The emission-line equivalent width of 0.5 nm for the water feature is consistent with the water abundance of 5 x 10(exp -6) relative to H2 that has been inferred previously from observations of the hot core of Sgr B2.

  8. Interstellar Grains: Effect of Inclusions on Extinction

    CERN Document Server

    Katyal, Nisha; Vaidya, D B

    2011-01-01

    A composite dust grain model which simultaneously explains the observed interstellar extinction, polarization, IR emission and the abundance constraints is required. We present a composite grain model, which is made up of a host silicate oblate spheroid and graphite inclusions. The interstellar extinction curve is evaluated in the spectral region 3.4-0.1 μm using the extinction efficiencies of the composite spheroidal grains for three axial ratios. Extinction curves are computed using the discrete dipole approximation (DDA). The model curves are subsequently compared with the average observed interstellar extinction curve and with an extinction curve derived from the IUE catalogue data.

  9. Interstellar grains: Effect of inclusions on extinction

    Science.gov (United States)

    Katyal, N.; Gupta, R.; Vaidya, D. B.

    2011-10-01

    A composite dust grain model which simultaneously explains the observed interstellar extinction, polarization, IR emission and the abundance constraints, is required. We present a composite grain model, which is made up of a host silicate oblate spheroid and graphite inclusions. The interstellar extinction curve is evaluated in the spectral region 3.4-0.1 μm using the extinction efficiencies of composite spheroidal grains for three axial ratios. Extinction curves are computed using the discrete dipole approximation (DDA). The model curves are subsequently compared with the average observed interstellar extinction curve and with an extinction curve derived from the IUE catalogue data.

  10. Trajectories for a Near Term Mission to the Interstellar Medium

    Science.gov (United States)

    Arora, Nitin; Strange, Nathan; Alkalai, Leon

    2015-01-01

    Trajectories for rapid access to the interstellar medium (ISM) with a Kuiper Belt Object (KBO) flyby, launching between 2022 and 2030, are described. An impulsive-patched-conic broad search algorithm combined with a local optimizer is used for the trajectory computations. Two classes of trajectories, (1) with a powered Jupiter flyby and (2) with a perihelion maneuver, are studied and compared. Planetary flybys combined with leveraging maneuvers reduce launch C3 requirements (by factor of 2 or more) and help satisfy mission-phasing constraints. Low launch C3 combined with leveraging and a perihelion maneuver is found to be enabling for a near-term potential mission to the ISM.

  11. System-Cost-Optimized Smart EVSE for Residential Application: Final Technical Report including Manufacturing Plan

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Charles [Delta Products, Triangle Park, NC (United States)

    2015-05-15

    In the 2nd quarter of 2012, a program was formally initiated at Delta Products to develop a smart-grid-enabled Electric Vehicle Supply Equipment (EVSE) product for residential use. The project was funded in part by the U.S. Department of Energy (DOE), under award DE-OE0000590. Delta Products was the prime contractor to DOE during the three-year duration of the project. In addition to Delta Products, several supplier-partners were engaged in this research and development (R&D) program, including Detroit Edison DTE, Mercedes Benz Research and Development North America, and kVA. This report summarizes the program and describes its key research outcomes. A technical history of the project activities is provided, which describes the key steps taken in the research and the findings made at successive stages of the multi-stage work. The evolution of an EVSE prototype system is described in detail, culminating in prototypes shipped to Department of Energy laboratories for final qualification. After the program history is reviewed, the key attributes of the resulting EVSE are described in terms of functionality, performance, and cost. The results clearly demonstrate the ability of this EVSE to meet or exceed DOE's targets for this program, including: construction of a working product-intent prototype of a smart-grid-enabled EVSE, with suitable connectivity to grid management and home-energy management systems, revenue-grade metering, and related technical functions; and cost reduction of 50% or more compared to typical market-priced EVSEs at the time of DOE's funding opportunity announcement (FOA), which was released in mid-2011. In addition to meeting all the program goals, the program was completed within the original budget and timeline established at the time of the award. The summary program budget and timeline, comparing planned versus actual values, is provided for reference, along with several supporting explanatory notes. Technical

  12. Drop cost and wavelength optimal two-period grooming with ratio 4

    CERN Document Server

    Bermond, Jean-Claude; Gionfriddo, Lucia; Quattrocchi, Gaetano; Valls, Ignasi Sau

    2009-01-01

    We study grooming for two-period optical networks, a variation of the traffic grooming problem for WDM ring networks introduced by Colbourn, Quattrocchi, and Syrotiuk. In the two-period grooming problem, during the first period of time there is all-to-all uniform traffic among $n$ nodes, each request using $1/C$ of the bandwidth; during the second period there is all-to-all uniform traffic only among a subset $V$ of $v$ nodes, each request now being allowed to use $1/C'$ of the bandwidth, where $C' < C$. We determine the minimum drop cost (minimum number of ADMs) for any $n, v$ and $C=4$ and $C' \in \{1,2,3\}$. To do this, we use tools of graph decompositions. Indeed, the two-period grooming problem corresponds to minimizing the total number of vertices in a partition of the edges of the complete graph $K_n$ into subgraphs, where each subgraph has at most $C$ edges and furthermore contains at most $C'$ edges of the complete graph on $v$ specified vertices. Subject to the condition that the two-p...

  13. Cost Optimization Technique of Task Allocation in Heterogeneous Distributed Computing System

    Directory of Open Access Journals (Sweden)

    Faizul Navi Khan

    2013-11-01

    Full Text Available A Distributed Computing System (DCS) is a network of workstations, personal computers and/or other computing systems. Such a system may be heterogeneous in the sense that its computing nodes may have different speeds and memory capacities. A DCS accepts tasks from users and executes different modules of these tasks on various nodes of the system. Task allocation in a DCS is a common problem, and a good number of task allocation algorithms have been proposed in the literature. In such an environment, an application running in the DCS can be accessed from every node within the DCS. If the number of tasks is less than or equal to the number of available processors, the tasks can be assigned without any trouble; the allocation becomes complicated when the number of tasks exceeds the number of processors. The problem of allocating 'm' tasks to 'n' processors (m > n) in a DCS is addressed here through a new modified task allocation technique. The model presented in this paper allocates tasks to processors of different processing capacities to increase the performance of the DCS. The technique is based on considering the processing cost of each task on each processor, assigning all tasks according to processor availability and processing capacity so that all tasks of the application are executed in the DCS.
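
    A cost-based allocation of m tasks to n processors (m > n) of the kind this record describes can be sketched as a greedy heuristic. The cost matrix below is illustrative, and the placement rule is a simplification for illustration, not the paper's exact technique:

    ```python
    def allocate(cost, n_proc):
        """Greedily assign m tasks (m > n_proc) to processors: tasks with the
        larger minimum cost are placed first, each on the processor whose
        accumulated cost after adding the task is smallest.
        cost[i][j] = processing cost of task i on processor j."""
        load = [0.0] * n_proc  # accumulated processing cost per processor
        order = sorted(range(len(cost)), key=lambda i: -min(cost[i]))
        placed = {}
        for i in order:
            j = min(range(n_proc), key=lambda p: load[p] + cost[i][p])
            load[j] += cost[i][j]
            placed[i] = j
        return placed, max(load)

    # 4 tasks on 2 heterogeneous processors (illustrative costs)
    cost = [[4, 8], [6, 3], [5, 5], [2, 9]]
    placed, makespan = allocate(cost, 2)
    print(placed, makespan)  # task -> processor map and the busiest processor's load
    ```

    Placing the most expensive tasks first gives them the widest choice of lightly loaded processors, which is the same intuition behind classic longest-processing-time scheduling heuristics.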

  14. Optimization of energy consumption and cost effectiveness of modular buildings by using renewable energy sources

    Directory of Open Access Journals (Sweden)

    Peter Tauš

    2015-10-01

    Full Text Available The problems of temporary structures are generally dealt with by the use of modular buildings. These meet the requirement of low cost, as opposed to convenience of use or energy efficiency in operation. Using the latest technologies in the production of modular buildings has improved their operation sufficiently that they can now be used for the full range of purposes associated with permanent buildings. Office buildings, warehouses and conference rooms have become a common standard. In Slovakia, modular buildings are already a normal part of cities and municipalities: social housing, schools and kindergartens have all been built using this technology. During the assessment phase of these buildings, energy efficiency is always the priority. This article aims at establishing the economic potential of modular buildings in the field of renewable energy sources. For the formulation of the problem and the definition of the boundaries of the studied parameters, we proposed a four-dimensional competency decision-making space. This determines the examination process that should identify areas in which it is appropriate to consider and assess the use of renewable energy sources.

  15. Optimal Cost of a Solar Photovoltaic System for a Remote House in Bihar

    Directory of Open Access Journals (Sweden)

    Sujit Kumar Jha

    2015-07-01

    Full Text Available Energy plays a vital role in the growth of a country. Solar energy is one of the most important renewable energy resources and can play a vital role in replacing fossil resources to generate clean energy. Due to technological developments in solar power technologies, solar energy can be used for cooling, heating and meeting the daily electricity demand of the world, and has emerged as a viable alternative for producing clean energy in the future. The paper describes the technological development of the PV model, its present status and future opportunities in the context of Bihar, India. The study was carried out in Bihar, where global solar radiation data are required for the calculation and assessment of the working principles of a PV system installed at a remotely located house to provide adequate power backup. Based on the solar radiation data available for Bihar, India, the cost of a suitable PV model for a house has been computed from an analysis of the daily power requirement of the house.

  16. Dynamic flood webmapping: an operational and cost-limited tool to optimize crisis management

    Directory of Open Access Journals (Sweden)

    Strappazzon Quentin

    2016-01-01

    Full Text Available Due to strong climate variations and the multiplication of flood events, protection-based strategies are no longer sufficient to handle a watershed-scale crisis. Monitoring, prediction and alert procedures are required to ensure effective crisis and post-crisis management, which explains the recent interest in real-time prediction systems. Nevertheless, such systems, when fully implemented with an in-situ monitoring network, meteorological forecast inputs, hydrological and hydraulic modelling and flood mapping, are often postponed or cancelled because of both their cost and their time scale. That is why Prolog Ingénierie and the SyAGE have developed, as an economically and technically sustainable alternative, a tool providing shared access to real-time mapping of current and predicted flooded areas, along with a dynamic listing of exposed stakes (such as public buildings, sensitive infrastructures, environmental sites and roads). The maps are updated by combining predicted water levels in the river with a flood envelope library (based on 1D/2D hydraulic model results for a wide panel of discharges and hydraulic structure states). This tool has already been implemented on the downstream part of the Yerres River, a tributary of the Seine River in France.

  17. Development & Optimization of Materials and Processes for a Cost Effective Photoelectrochemical Hydrogen Production System. Final report

    Energy Technology Data Exchange (ETDEWEB)

    McFarland, Eric W

    2011-01-17

    The overall project objective was to apply high-throughput experimentation and combinatorial methods together with novel syntheses to discover and optimize efficient, practical, and economically sustainable materials for photoelectrochemical production of bulk hydrogen from water. Automated electrochemical synthesis and photoelectrochemical screening systems were designed, constructed and used to study a variety of new photoelectrocatalytic materials. We evaluated photocatalytic performance in the dark and under illumination, with or without applied bias, in a high-throughput manner and performed detailed evaluations of many materials. Significant attention was given to α-Fe2O3-based semiconductor materials, and thin films with different dopants were synthesized by co-electrodeposition techniques. Approximately 30 dopants including Al, Zn, Cu, Ni, Co, Cr, Mo, Ti, Pt, etc. were investigated. Hematite thin films doped with Al, Ti, Pt, Cr, and Mo exhibited significant improvements in efficiency for photoelectrochemical water splitting compared with undoped hematite. In several cases we collaborated with theorists who used density functional theory to help explain performance trends and suggest new materials. The best materials were investigated in detail by X-ray diffraction (XRD), scanning electron microscopy (SEM), ultraviolet-visible spectroscopy (UV-Vis), and X-ray photoelectron spectroscopy (XPS). The photoelectrocatalytic performance of the thin films was evaluated and their incident photon

  18. Cosmocultural Evolution: Cosmic Motivation for Interstellar Travel?

    Science.gov (United States)

    Lupisella, M.

    Motivations for interstellar travel can vary widely from practical survival motivations to wider-ranging moral obligations to future generations. But it may also be fruitful to explore what, if any, "cosmic" relevance there may be regarding interstellar travel. Cosmocultural evolution can be defined as the coevolution of cosmos and culture, with cultural evolution playing an important and perhaps critical role in the overall evolution of the universe. Strong versions of cosmocultural evolution might suggest that cultural evolution may have unlimited potential as a cosmic force. In such a worldview, the advancement of cultural beings throughout the universe could have significant cosmic relevance, perhaps providing additional motivation for interstellar travel. This paper will explore some potential philosophical and policy implications for interstellar travel of a cosmocultural evolutionary perspective and other related concepts, including some from a recent NASA book, Cosmos and Culture: Cultural Evolution in a Cosmic Context.

  19. Physical Processes in the Interstellar Medium

    CERN Document Server

    Klessen, Ralf S

    2014-01-01

    Interstellar space is filled with a dilute mixture of charged particles, atoms, molecules and dust grains, called the interstellar medium (ISM). Understanding its physical properties and dynamical behavior is of pivotal importance to many areas of astronomy and astrophysics. Galaxy formation and evolution, the formation of stars, cosmic nucleosynthesis, the origin of large complex, prebiotic molecules and the abundance, structure and growth of dust grains which constitute the fundamental building blocks of planets, all these processes are intimately coupled to the physics of the interstellar medium. However, despite its importance, its structure and evolution is still not fully understood. Observations reveal that the interstellar medium is highly turbulent, consists of different chemical phases, and is characterized by complex structure on all resolvable spatial and temporal scales. Our current numerical and theoretical models describe it as a strongly coupled system that is far from equilibrium and where th...

  20. Silicate Composition of the Interstellar Medium

    CERN Document Server

    Fogerty, Shane; Watson, Dan M; Sargent, Benjamin A; Koch, Ingrid

    2016-01-01

    The composition of silicate dust in the diffuse interstellar medium and in protoplanetary disks around young stars informs our understanding of the processing and evolution of the dust grains leading up to planet formation. Analysis of the well-known 9.7 μm feature indicates that small amorphous silicate grains represent a significant fraction of interstellar dust and are also major components of protoplanetary disks. However, this feature is typically modelled assuming amorphous silicate dust of olivine and pyroxene stoichiometries. Here, we analyze interstellar dust with models of silicate dust that include non-stoichiometric amorphous silicate grains. Modelling the optical depth along lines of sight toward the extinguished objects Cyg OB2 No. 12 and ζ Ophiuchi, we find evidence for interstellar amorphous silicate dust with stoichiometry intermediate between olivine and pyroxene, which we simply refer to as "polivene." Finally, we compare these results to models of silicate emission from the Trapez...

  1. Evaluation of interpolation effects on upsampling and accuracy of cost functions-based optimized automatic image registration.

    Science.gov (United States)

    Mahmoudzadeh, Amir Pasha; Kashou, Nasser H

    2013-01-01

    Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.
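
    The quantitative metrics named in the abstract are straightforward to compute. A minimal sketch on a 1-D toy signal (not the study's SPGR MRI data), comparing nearest-neighbour against linear upsampling:

    ```python
    import numpy as np

    def mse(reference, estimate):
        """Mean squared error between a high-resolution reference and an upsampled signal."""
        diff = reference.astype(np.float64) - estimate.astype(np.float64)
        return float(np.mean(diff ** 2))

    def psnr(reference, estimate, peak=255.0):
        """Peak signal-to-noise ratio in dB; higher means more detail was preserved."""
        err = mse(reference, estimate)
        return float("inf") if err == 0.0 else 10.0 * np.log10(peak ** 2 / err)

    # Toy 1-D "slice": subsample by 4, then upsample with two of the simpler schemes.
    hr = 255.0 * np.sin(np.linspace(0.0, np.pi, 64))   # stand-in for a high-resolution signal
    x_lr = np.arange(0, 64, 4)
    lr = hr[x_lr]                                      # emulated low-resolution acquisition
    nearest = np.repeat(lr, 4)                         # nearest-neighbour reconstruction
    linear = np.interp(np.arange(64), x_lr, lr)        # linear (1-D analogue of trilinear)

    print(psnr(hr, nearest), psnr(hr, linear))         # linear scores higher on this smooth signal
    ```

    The study's higher-order kernels (Lagrangian, windowed Sinc, B-splines) follow the same evaluation pattern, just with different reconstruction filters.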

  2. Capacity optimization of battery-generator hybrid power system: Toward minimizing maintenance cost in expeditionary basecamp/operational energy applications

    Science.gov (United States)

    Onwuanumkpe, Jude C.

    Low and transient load conditions are known to have a deleterious impact on the efficiency and health of diesel generators (DGs). Extensive operation under such loads reduces fuel and energy conversion efficiency and contributes to diesel engine degradation, damage, or catastrophic failure. Non-ideal loads are prevalent in expeditionary base camps that support contingency operations in austere environments or remote locations where grid electricity is either non-existent or inaccessible. The impact of such loads on DGs exacerbates already overburdened basecamp energy logistics requirements. There is a need, therefore, to eliminate or prevent the occurrence of non-ideal loads. Although advances in diesel engine technologies have improved their performance, DGs remain vulnerable to the consequences of non-ideal loads and the inherent inefficiencies of combustion. The mechanisms through which DGs respond to and mitigate non-ideal loads are also mechanically stressful and energy-intensive. Thus, this research investigated the idea of using batteries to prevent DGs from encountering non-ideal loads as a way to reduce basecamp energy logistics requirements. Using a simple semi-empirical approach, the study modeled and simulated a battery-DG hybrid system under various load conditions. The simulation allowed for synthesis of a design space in which specified battery and generator capacities can achieve optimal savings in fuel consumption and maintenance cost. Results show that a right-sized battery-diesel generator system allows for more than 50% cost savings relative to a standalone generator.
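
    The core idea of shielding the DG from non-ideal loads can be sketched as a one-step rule-based dispatch. All thresholds, capacities and the charging headroom below are illustrative assumptions, not values from the thesis.

    ```python
    def dispatch(load_kw, soc_kwh, dg_rated_kw=60.0, low_load_frac=0.3, dt_h=1.0):
        """One time-step of a rule-based hybrid dispatch: keep the diesel generator (DG)
        off under low loads (the battery supplies them); otherwise run the DG near
        rated power and use the headroom to recharge the battery."""
        if load_kw < low_load_frac * dg_rated_kw and soc_kwh >= load_kw * dt_h:
            # Low/transient load: battery carries it so the DG never sees it.
            return {"dg_kw": 0.0, "soc_kwh": soc_kwh - load_kw * dt_h}
        dg_kw = min(dg_rated_kw, load_kw + 10.0)   # assumed 10 kW headroom charges the battery
        return {"dg_kw": dg_kw, "soc_kwh": soc_kwh + (dg_kw - load_kw) * dt_h}

    print(dispatch(10.0, 50.0))   # low load: DG stays off, battery discharges
    print(dispatch(50.0, 10.0))   # normal load: DG runs, battery recharges
    ```

    A capacity-optimization study such as this one would sweep battery and DG sizes over load profiles and evaluate fuel and maintenance cost for each combination.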

  3. Communal sewage systems. Dimensioning, extension, optimization, cost. 2. tot. new rev. ed.; Kommunale Klaeranlagen. Bemessung, Erweiterung, Optimierung und Kosten

    Energy Technology Data Exchange (ETDEWEB)

    Guenthert, F.W.; Reicherter, E.; Gallent, W. [Universitaet der Bundeswehr Muenchen, Neubiberg (Germany). Inst. fuer Wasserwesen; Baumann, P. [Wave GmbH, Stuttgart (Germany); Guender, B. [Berghof Filtrations- und Anlagentechnik GmbH und Co. KG, Eningen (Germany); Kapp, H. [Fachhochschule Biberach - Hochschule fuer Bauwesen und Wirtschaft, Biberach an der Riss (Germany); Mueller, J. [Technische Universitaet, Braunschweig (Germany). Institut fuer Siedlungswasserwirtschaft; Riedl, J. [Wasserwirtschaftsamt Weilheim (Germany); Steinle, E. [Dr. Steinle Ingenieurgesellschaft fuer Abwassertechnik, Weyam (Germany); Strohmeier, A. [Fachhochschule Aachen (Germany)

    2001-07-01

    The book presents practical solutions to current problems encountered in the projecting and construction of sewage systems: fundamentals of biological sewage treatment, biological processes (activation, immersion disks, submerged fixed bed), further treatment of effluents; dimensioning of activation systems, alternative processes, extension and optimization of existing sewage treatment systems, simulations and cost of process selection; further treatment of effluents with phosphorus elimination and filtration systems; sewage sludge treatment and its effects on waste water treatment and cost. [Translated from German] The book offers practical solutions to the questions arising today in the planning and construction of wastewater treatment plants. The first edition, on the dimensioning of municipal treatment plants, covered in detail the fundamentals of biological wastewater treatment, the various processes (activated sludge plants, trickling filter plants, plants with submerged fixed beds) and advanced wastewater treatment. Building on these fundamentals, this second edition contains the design approaches for the activated sludge process (aeration and secondary clarification) as they have since been developed further. It also presents process margins and process alternatives (SBR plants) for the construction, extension and optimization of existing wastewater treatment plants. Fundamentals of simulation and of determining the costs incurred, for the evaluation of process variants, are intended to help the user find the operationally and economically best solution. Advanced wastewater treatment with phosphorus elimination and filtration plants is presented in detail in two contributions. Finally, sewage sludge treatment is discussed, in particular its effects on wastewater treatment and the associated costs. (orig.)

  4. Evaluation of Interpolation Effects on Upsampling and Accuracy of Cost Functions-Based Optimized Automatic Image Registration

    Directory of Open Access Journals (Sweden)

    Amir Pasha Mahmoudzadeh

    2013-01-01

    Full Text Available Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.

  5. Physical Processes in the Interstellar Medium

    OpenAIRE

    2014-01-01

    Interstellar space is filled with a dilute mixture of charged particles, atoms, molecules and dust grains, called the interstellar medium (ISM). Understanding its physical properties and dynamical behavior is of pivotal importance to many areas of astronomy and astrophysics. Galaxy formation and evolution, the formation of stars, cosmic nucleosynthesis, the origin of large complex, prebiotic molecules and the abundance, structure and growth of dust grains which constitute the fundamental buil...

  6. Scouting the spectrum for interstellar travellers

    CERN Document Server

    Garcia-Escartin, Juan Carlos

    2012-01-01

    Advanced civilizations capable of interstellar travel, if they exist, are likely to have advanced propulsion methods. Spaceships moving at high speeds would leave a particular signature which could be detected from Earth. We propose a search based on the properties of light reflecting from objects travelling at relativistic speeds. Based on the same principles, we also propose a simple interstellar beacon with a solar sail.

  7. Optimal timing of CO{sub 2} mitigation costs; Optimales Timing von CO{sub 2}-Vermeidungskosten

    Energy Technology Data Exchange (ETDEWEB)

    Schleich, J. [Fraunhofer-Institut fuer Systemtechnik und Innovationsforschung (ISI), Karlsruhe (Germany). Abt. Energietechnik und Energiepolitik

    1999-07-01

    The lecture refers to and summarizes the essence of available publications by experts worldwide dealing with carbon dioxide mitigation policy and major aspects of global implementation, such as the time-related distribution of greenhouse gas reduction costs, the modelling of optimal carbon dioxide mitigation strategies, and the optimal time path. Various approaches of a variety of authors mentioned in the bibliography are briefly discussed, as well as case studies and models. (orig./CB) [Translated from German] The temporal distribution of greenhouse gas abatement costs is at the centre of the current political and economic debate on the design of optimal CO{sub 2} reduction strategies (Jochem 1998). The core question concerns the optimal time path, i.e. when CO{sub 2} reductions should be undertaken. While the existence of the greenhouse effect per se is not called into question, several studies conclude that from an economic point of view it makes sense to postpone abatement activities at first and then to abate correspondingly more later. This wait-and-see strategy, currently also favoured by the US government, finds support in the literature from Nordhaus (1994), Manne, Mendelson and Richels (1995), Peck and Teisberg (1993), and Wigley, Richels, and Edmonds (1996). Probably best known are the results of the DICE model developed by Nordhaus (1994), according to which it is optimal to let the CO{sub 2} emission rate triple over the next hundred years. The opposite position is taken by Cline (1992), Azar and Sterner (1996), Schultz and Kastings (1997) and Hasselman et al. (1997), who, like several EU countries (e.g. Germany, Denmark), advocate a strategy of early emission reductions. (orig.)

  8. Research on the Modeling and Optimization of Individual Cost-free Normal Students Cultivating Cost%师范生免费教育的生均培养成本建模与优化研究

    Institute of Scientific and Technical Information of China (English)

    禹小英

    2012-01-01

    [Translated from Chinese] The government currently applies the "Two Frees and One Allowance" policy to cost-free normal students. According to this policy and the specific situation of the cost of cost-free teacher education at Hunan First Normal University, a per-student cultivating cost model can be constructed and optimized as follows: per-student cultivating cost = teaching cost / standard number of students + students' basic living cost / standard number of students; teaching cost = staff expenditure + public expenditure + allowances to individuals and families + depreciation of fixed assets.
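
    The stated cost model translates directly into code. The figures below are purely illustrative placeholders, not Hunan First Normal University data.

    ```python
    def teaching_cost(staff, public, allowances, depreciation):
        # Teaching cost = staff expenditure + public expenditure
        #               + allowances to individuals and families + fixed-asset depreciation
        return staff + public + allowances + depreciation

    def per_student_cost(teaching, living, n_students):
        # Per-student cultivating cost = teaching cost / standard number of students
        #                              + basic living cost / standard number of students
        return teaching / n_students + living / n_students

    # Illustrative figures only (in yuan), chosen for a round result.
    tc = teaching_cost(staff=3_000_000, public=1_200_000,
                       allowances=500_000, depreciation=300_000)
    print(per_student_cost(tc, living=800_000, n_students=1000))  # -> 5800.0 per student
    ```

    Optimizing the per-student cost then means examining each additive term against the standard student count, since every term scales the total linearly.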

  9. The hydrogen coverage of interstellar PAHs

    Science.gov (United States)

    Tielens, A. G. G. M.; Allamandola, L. J.; Barker, J. R.; Cohen, M.

    1987-01-01

    The rate at which the CH bonds in interstellar Polycyclic Aromatic Hydrocarbons (PAHs) rupture due to the absorption of a UV photon has been calculated. The results show that small PAHs (less than or equal to 25 carbon atoms) are expected to be partially dehydrogenated in regions with intense UV fields, while large PAHs (greater than or equal to 25 carbon atoms) are expected to be completely hydrogenated in those regions. Because estimates of the carbon content of interstellar PAHs lie in the range of 20 to 25 carbon atoms, dehydrogenation is probably not very important. Because of the absence of other emission features besides the 11.3 micrometer feature in ground-based 8 to 13 micrometer spectra, it has been suggested that interstellar PAHs are partially dehydrogenated. However, IRAS 8 to 22 micrometer spectra of most sources that show strong 7.7 and 11.2 micrometer emission features also show a plateau of emission extending from about 11.3 to 14 micrometers. Like the 11.3 micrometer feature, this new feature is attributed to the CH out-of-plane bending mode in PAHs. This new feature shows that interstellar PAHs are not as dehydrogenated as estimated from ground-based 8 to 13 micrometer spectra. It also constrains the molecular structure of interstellar PAHs. In particular, it seems that very condensed PAHs, such as coronene and circumcoronene, dominate the interstellar PAH mixture, as expected from stability arguments.

  10. Interstellar grain chemistry and organic molecules

    Science.gov (United States)

    Allamandola, L. J.; Sandford, S. A.

    1990-01-01

    The detection of prominent infrared absorption bands at 3250, 2170, 2138, 1670 and 1470 cm(-1) (3.08, 4.61, 4.677, 5.99 and 6.80 μm) associated with molecular clouds shows that mixed molecular (icy) grain mantles are an important component of the interstellar dust in the dense interstellar medium. These ices, which contain many organic molecules, may also be the production site of the more complex organic grain mantles detected in the diffuse interstellar medium. Theoretical calculations employing gas phase as well as grain surface reactions predict that the ices should be dominated only by the simple molecules H2O, H2CO, N2, CO, O2, NH3, CH4, possibly CH3OH, and their deuterated counterparts. However, spectroscopic observations in the 2500 to 1250 cm(-1) (4 to 8 μm) range show substantial variation from source to source. By comparing these astronomical spectra with the spectra of laboratory-produced analogs of interstellar ices, one can determine the composition and abundance of the materials frozen on the grains in dense clouds. Experiments are described in which the chemical evolution of an interstellar ice analog is determined during irradiation and subsequent warm-up. Particular attention is paid to the types of moderately complex organic materials produced during these experiments which are likely to be present in interstellar grains and cometary ices.

  11. Amino Acid Formation on Interstellar Dust Particles

    Science.gov (United States)

    Meierhenrich, U. J.; Munoz Caro, G. M.; Barbier, B.; Brack, A.; Thiemann, W.; Goesmann, F.; Rosenbauer, H.

    2003-04-01

    In the dense interstellar medium dust particles accrete ice layers of known molecular composition. In the diffuse interstellar medium these ice layers are subjected to energetic UV irradiation. Here, photoreactions form complex organic molecules. The interstellar processes were recently successfully simulated in two laboratories. At NASA Ames Research Center three amino acids were detected in interstellar ice analogues [1]; contemporaneously, our European team reported on the identification of 16 amino acids therein [2]. Amino acids are the molecular building blocks of proteins in living organisms. The identification of amino acids on the simulated icy surface of interstellar dust particles strongly supports the assumption that the precursor molecules of life were delivered from interstellar and interplanetary space via (micro-)meteorites and/or comets to the early Earth. The results shall be verified by the COSAC experiment onboard the ESA cometary mission Rosetta [3]. [1] M.P. Bernstein, J.P. Dworkin, S.A. Sandford, G.W. Cooper, L.J. Allamandola: Nature 416 (2002), 401-403. [2] G.M. Muñoz Caro, U.J. Meierhenrich, W.A. Schutte, B. Barbier, A. Arcones Sergovia, H. Rosenbauer, W.H.-P. Thiemann, A. Brack, J.M. Greenberg: Nature 416 (2002), 403-406. [3] U. Meierhenrich, W.H.-P. Thiemann, H. Rosenbauer: Chirality 11 (1999), 575-582.

  12. Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  13. Optimization

    CERN Document Server

    Pearce, Charles

    2009-01-01

    Focuses on mathematical structure, and on real-world applications. This book includes developments in several optimization-related topics such as decision theory, linear programming, turnpike theory, duality theory, convex analysis, and queuing theory.

  14. Study on Strategy for Optimizing Cost Structure of Agricultural Produce Logistics%农产品物流成本结构优化策略研究

    Institute of Scientific and Technical Information of China (English)

    欧晓钢

    2013-01-01

    In this paper, we introduce the development modes and cost accounting of agricultural produce logistics, discuss the requirements for optimizing the cost structure of agricultural produce logistics, and finally, at several different levels, present strategies for optimizing the cost structure and thereby reducing the cost of agricultural produce logistics.

  15. Characterization of Interstellar Organic Molecules

    Science.gov (United States)

    Gençaǧa, Deniz; Carbon, Duane F.; Knuth, Kevin H.

    2008-11-01

    Understanding the origins of life has been one of the greatest dreams throughout history. It is now known that star-forming regions contain complex organic molecules, known as Polycyclic Aromatic Hydrocarbons (PAHs), each of which has particular infrared spectral characteristics. By understanding which PAH species are found in specific star-forming regions, we can better understand the biochemistry that takes place in interstellar clouds. Identifying and classifying PAHs is not an easy task: we can only observe a single superposition of PAH spectra at any given astrophysical site, with the PAH species perhaps numbering in the hundreds or even thousands. This is a challenging source separation problem since we have only one observation composed of numerous mixed sources. However, it is made easier with the help of a library of hundreds of PAH spectra. In order to separate PAH molecules from their mixture, we need to identify the specific species and their unique concentrations that would produce the given mixture. We develop a Bayesian approach for this problem where sources are separated from their mixture by the Metropolis-Hastings algorithm. Separated PAH concentrations are provided with their error bars, illustrating the uncertainties involved in the estimation process. The approach is demonstrated on synthetic spectral mixtures using spectral resolutions from the Infrared Space Observatory (ISO). Performance of the method is tested for different noise levels.
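
    A minimal sketch of the Bayesian separation idea: a single observed mixture is modelled as a linear combination of library templates, and a Metropolis-Hastings random walk over the concentration vector yields both estimates and error bars. The toy library, noise level, and step size below are assumptions, not the paper's actual setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "library" of 3 template spectra over 50 bins (stand-ins for real PAH spectra).
    library = rng.random((3, 50))
    true_conc = np.array([0.6, 0.3, 0.1])
    observed = true_conc @ library + rng.normal(0.0, 0.01, 50)   # the single observed mixture

    def log_posterior(conc, sigma=0.01):
        if np.any(conc < 0.0):                    # flat prior restricted to non-negative concentrations
            return -np.inf
        resid = observed - conc @ library
        return -0.5 * np.sum(resid ** 2) / sigma ** 2

    # Metropolis-Hastings random walk, warm-started from the least-squares solution.
    conc = np.linalg.lstsq(library.T, observed, rcond=None)[0]
    samples = []
    for _ in range(10000):
        prop = conc + rng.normal(0.0, 0.005, 3)
        if np.log(rng.random()) < log_posterior(prop) - log_posterior(conc):
            conc = prop                            # accept the proposed concentrations
        samples.append(conc)
    samples = np.array(samples)
    est, err = samples.mean(axis=0), samples.std(axis=0)   # posterior mean and error bars
    ```

    With hundreds of templates the same machinery applies; only the dimensionality of the concentration vector and the mixing time of the chain change.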

  16. Interstellar Transfer of Planetary Microbiota

    Science.gov (United States)

    Wallis, Max K.; Wickramasinghe, N. C.

    Panspermia theories require the transport of micro-organisms in a viable form from one astronomical location to another. The evidence of material ejection from planetary surfaces, of dynamical orbit evolution and of potential survival on landing is setting a firm basis for interplanetary panspermia. Pathways for interstellar panspermia are less clear. We compare the direct route, whereby life-bearing planetary ejecta exit the solar system and risk radiation hazards en route to nearby stellar systems, and an indirect route whereby ejecta hitch a ride within the shielded environment of comets of the Edgeworth-Kuiper Belt that are subsequently expelled from the solar system. We identify solutions to the delivery problem. Delivery to fully-fledged planetary systems of either the direct ejecta or the ejecta borne by comets depends on dynamical capture and is of very low efficiency. However, delivery into a proto-planetary disc of an early solar-type nebula and into pre-stellar molecular clouds is effective, because the solid grains efficiently sputter the incoming material in hypervelocity collisions. The total mass of terrestrial fertile material delivered to nearby pre-stellar systems as the solar system moves through the galaxy is from kilogrammes up to a tonne. Subject to further study of bio-viability under irradiation and fragmenting collisions, a few kg of original grains and sputtered fragments could be sufficient to seed the planetary system with a wide range of solar system micro-organisms.

  17. The interstellar medium in galaxies

    CERN Document Server

    1997-01-01

    It has been more than five decades since Henk van de Hulst predicted the observability of the 21-cm line of neutral hydrogen (HI). Since then, use of the 21-cm line has greatly improved our knowledge in many fields and has been used for galactic structure studies, studies of the interstellar medium (ISM) in the Milky Way and other galaxies, studies of the mass distribution of the Milky Way and other galaxies, studies of spiral structure, studies of high velocity gas in the Milky Way and other galaxies, for measuring distances using the Tully-Fisher relation, etc. Regarding studies of the ISM, there have been a number of instrumental developments over the past decade: large CCDs became available on optical telescopes, radio synthesis offered sensitive imaging capabilities, not only in the classical 21-cm HI line but also in the mm-transitions of CO and other molecules, and X-ray imaging capabilities became available to measure the hot component of the ISM. These developments meant that the Milky Way was n...

  18. Rotational spectroscopy of interstellar PAHs

    CERN Document Server

    Ali-Haïmoud, Yacine

    2013-01-01

    Polycyclic aromatic hydrocarbons (PAHs) have long been part of the standard model of the interstellar medium, and are believed to play important roles in its physics and chemistry. Yet, up to now it has not been possible to identify any specific molecule among them. In this paper, a new observational avenue is suggested to detect individual PAHs, using their rotational line emission at radio frequencies. Previous PAH searches based on rotational spectroscopy have only targeted the bowl-shaped corannulene molecule, with the underlying assumption that other polar PAHs are triaxial and as a consequence their rotational emission is diluted over a very large number of lines and unusable for detection purposes. In this paper the rotational spectrum of quasi-symmetric PAHs is computed analytically, as a function of the level of triaxiality. It is shown that the asymmetry of planar, nitrogen-substituted symmetric PAHs is small enough that their rotational spectrum, when observed with a resolution of about a MHz, has ...

  19. Physical Processes of Interstellar Turbulence

    CERN Document Server

    Vazquez-Semadeni, Enrique

    2012-01-01

    I discuss the role of self-gravity and radiative heating and cooling in shaping the nature of the turbulence in the interstellar medium (ISM) of our galaxy. The heating and cooling cause it to be highly compressible and, in some regimes of density and temperature, to become thermally unstable, tending to spontaneously segregate into warm/diffuse and cold/dense phases. On the other hand, turbulence is an inherently mixing process, tending to replenish the density and temperature ranges that would be forbidden under thermal processes alone. The turbulence in the ionized ISM appears to be transonic (i.e., with sonic Mach numbers M_s ~ 1), and thus to behave essentially incompressibly. However, in the neutral medium, thermal instability causes the sound speed of the gas to fluctuate by up to factors of ~30, and thus the flow can be highly supersonic with respect to the dense/cold gas, although numerical simulations suggest that this behavior corresponds more to the ensemble of cold clumps than to the clumps'...

  20. Measurement and correction of variations in interstellar dispersion in high-precision pulsar timing

    CERN Document Server

    Keith, M J; Shannon, R M; Hobbs, G B; Manchester, R N; Bailes, M; Bhat, N D R; Burke-Spolaor, S; Champion, D J; Chaudhary, A; Hotan, A W; Khoo, J; Kocz, J; Oslowski, S; Ravi, V; Reynolds, J E; Sarkissian, J; van Straten, W; Yardley, D R B

    2012-01-01

    Signals from radio pulsars show a wavelength-dependent delay due to dispersion in the interstellar plasma. At a typical observing wavelength, this delay can vary by tens of microseconds on five-year time scales, far in excess of signals of interest to pulsar timing arrays, such as that induced by a gravitational-wave background. Measurement of these delay variations is not only crucial for the detection of such signals, but also provides an unparalleled measurement of the turbulent interstellar plasma at au scales. In this paper we demonstrate that without consideration of wavelength-independent red noise, 'simple' algorithms to correct for interstellar dispersion can attenuate signals of interest to pulsar timing arrays. We present a robust method for this correction, which we validate through simulations, and apply it to observations from the Parkes Pulsar Timing Array. Correction for dispersion variations comes at the cost of increased band-limited white noise. We discuss scheduling to minimise this additi...
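
    The wavelength dependence referred to above is the standard cold-plasma dispersion delay, t = K_DM · DM / ν², with K_DM ≈ 4.149 ms when the dispersion measure DM is in pc cm⁻³ and the frequency ν in GHz. A small sketch, with an illustrative DM value and observing frequencies:

    ```python
    K_DM = 4.148808  # standard dispersion constant, ms GHz^2 pc^-1 cm^3

    def dispersion_delay_ms(dm, freq_ghz):
        """Extra propagation delay (ms) at freq_ghz for dispersion measure dm (pc cm^-3)."""
        return K_DM * dm / freq_ghz ** 2

    dm = 20.0  # illustrative dispersion measure
    # Delay difference between two observing bands (illustrative frequencies):
    print(dispersion_delay_ms(dm, 1.4) - dispersion_delay_ms(dm, 3.1))  # ~33.7 ms
    ```

    Because the delay scales as ν⁻², observing the same pulsar in widely separated bands lets DM variations be measured and subtracted, at the cost of redistributing some white noise, as the abstract notes.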