WorldWideScience

Sample records for cost optimized interstellar

  1. Searching for Cost-Optimized Interstellar Beacons

    Science.gov (United States)

    Benford, Gregory; Benford, James; Benford, Dominic

    2010-06-01

What would SETI beacon transmitters be like if built by civilizations that had a variety of motives but cared about cost? In a companion paper, we presented how, for fixed power density in the far field, a cost-optimum interstellar beacon system could be built. Here, we consider how we should search for a beacon if it were produced by a civilization similar to ours. High-power transmitters could be built for a wide variety of motives other than the need for two-way communication; this would include beacons built to be seen over thousands of light-years. Extraterrestrial beacon builders would likely have to contend with economic pressures just as their terrestrial counterparts do. Cost, spectral lines near 1 GHz, and interstellar scintillation favor radiating frequencies substantially above the classic "water hole." Therefore, the transmission strategy for a distant, cost-conscious beacon would be a rapid scan of the galactic plane with the intent to cover the angular space. Such pulses would be infrequent events for the receiver. Such beacons built by distant, advanced, wealthy societies would have very different characteristics from what SETI researchers seek. Future searches should pay special attention to areas along the galactic disk where SETI searches have seen coherent signals that have not recurred on the limited listening time intervals we have used. We will need to wait for recurring events that may arrive in intermittent bursts. Several new SETI search strategies have emerged from these ideas. We propose a new test for beacons that is based on the Life Plane hypotheses.

  2. Equipment cost optimization

    International Nuclear Information System (INIS)

    Ribeiro, E.M.; Farias, M.A.; Dreyer, S.R.B.

    1995-01-01

Considering the importance of the cost of material and equipment in the overall cost profile of an oil company, which in the case of Petrobras represents approximately 23% of the total operational cost or 10% of sales, an organization for the optimization of such costs has been established within Petrobras. Programs are developed aiming at: optimization of the life-cycle cost of material and equipment; optimization of industrial process costs through material development. This paper describes the methodology used in the management of the development programs and presents some examples of concluded and ongoing programs, which are conducted in permanent cooperation with suppliers, technical laboratories and research institutions and have shown relevant results.

  3. Starship Sails Propelled by Cost-Optimized Directed Energy

    Science.gov (United States)

    Benford, J.

Microwave and laser-propelled sails are a new class of spacecraft using photon acceleration. It is the only method of interstellar flight that has no physics issues. Laboratory demonstrations of basic features of beam-driven propulsion, flight, stability ('beam-riding'), and induced spin have been completed in the last decade, primarily in the microwave. It offers much lower cost probes after a substantial investment in the launcher. Engineering issues are being addressed by other applications: fusion (microwave, millimeter and laser sources) and astronomy (large aperture antennas). There are many candidate sail materials: carbon nanotubes and microtrusses, beryllium, graphene, etc. For acceleration of a sail, what is the cost-optimum high power system? Here the cost is used to constrain design parameters to estimate system power, aperture and elements of capital and operating cost. From general relations for cost-optimal transmitter aperture and power, system cost scales with kinetic energy and inversely with sail diameter and frequency. So optimal sails will be larger, lower in mass and driven by higher frequency beams. Estimated costs include economies of scale. We present several starship point concepts. Systems based on microwave, millimeter wave and laser technologies are of equal cost at today's costs. The frequency advantage of lasers is cancelled by the high cost of both the laser and the radiating optic. The cost of interstellar sailships is very high, driven by current costs for the radiation source, antennas and especially electrical power. The high speeds necessary for fast interstellar missions make the operating cost exceed the capital cost. Such sailcraft will not be flown until the cost of electrical power in space is reduced orders of magnitude below current levels.
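The cost-scaling claim can be made concrete with a toy model. Suppose capital cost is C = k_A·A + k_P·P for transmitter aperture area A and radiated power P, and the far-field power-density requirement fixes the product A·P = S. Minimizing C under that constraint splits spending equally between aperture and power, which is the standard beamer cost-optimization result; the constants below are purely hypothetical, chosen only to illustrate the algebra.

```python
import math

def optimal_beamer(k_area, k_power, S):
    """Minimize C = k_area*A + k_power*P subject to A*P = S.

    By AM-GM (or a Lagrange multiplier), the optimum sits where
    k_area*A = k_power*P, i.e. spending is split equally between
    aperture and power. Toy model; constants are hypothetical.
    """
    A = math.sqrt(k_power * S / k_area)   # optimal aperture area
    P = S / A                             # optimal radiated power
    return A, P, k_area * A + k_power * P

# hypothetical unit costs: $1000/m^2 of aperture, $10/W of power,
# and a required aperture-power product of 1e12 m^2*W
A, P, cost = optimal_beamer(k_area=1e3, k_power=10.0, S=1e12)
print(A, P, cost)
```

At the optimum the two cost terms are equal, so doubling either unit cost raises total cost only by a factor of sqrt(2), which is why the estimates in the abstract are relatively insensitive to any single cost assumption.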

  4. Optimization of administrative management costs

    OpenAIRE

    Podolchak, N.; Chepil, B.

    2015-01-01

It is important to determine the optimal level of administrative costs in order to achieve the main targets of any enterprise, to perform definite tasks, to implement these tasks, and not to worsen the condition and motivation of the workers. It is also essential to keep in mind long-run strategic goals in the area of HR. Therefore, the main idea in using an optimization model for assessing the effectiveness of management costs will be to find the minimum level of expenses within the given l...

  5. Cost Overrun Optimism: Fact or Fiction

    Science.gov (United States)

    2016-02-29

Cost Overrun Optimism: FACT or FICTION? Maj David D. Christensen, USAF. Program managers are advocates by... (search-snippet reference fragments: ...Base, OH; Horngren, C. T. (1990). In G. Foster (Ed.), Cost accounting: A managerial emphasis (7th ed.). Englewood Cliffs, NJ: Prentice Hall; Morrison...; ...Accounting Office; Gansler, J. S. (1989). Affording defense. Cambridge, MA: The MIT Press; Heise, S. R. (1991). A review of cost performance index...; Image designed by Diane Fleischer)

  6. Cost Optimization of Product Families using Analytic Cost Models

    DEFF Research Database (Denmark)

    Brunø, Thomas Ditlev; Nielsen, Peter

    2012-01-01

    This paper presents a new method for analysing the cost structure of a mass customized product family. The method uses linear regression and backwards selection to reduce the complexity of a data set describing a number of historical product configurations and incurred costs. By reducing the data...... set, the configuration variables which best describe the variation in product costs are identified. The method is tested using data from a Danish manufacturing company and the results indicate that the method is able to identify the most critical configuration variables. The method can be applied...... in product family redesign projects focusing on cost reduction to identify which modules contribute the most to cost variation and should thus be optimized....
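A minimal sketch of the regression-plus-backward-selection idea described above, on synthetic configuration data (the variable names, stopping threshold, and data are assumptions, not from the paper):

```python
import numpy as np

def backward_select(X, y, names, max_rel_increase=0.05):
    """Backward elimination for a linear cost model (sketch).

    Repeatedly drops the configuration variable whose removal increases
    the residual sum of squares the least, stopping once any removal
    would raise the SSE by more than `max_rel_increase` relative to the
    current fit. The surviving variables are the ones that best explain
    cost variation.
    """
    keep = list(range(X.shape[1]))

    def sse(cols):
        A = np.column_stack([X[:, cols], np.ones(len(y))])  # intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)

    current = sse(keep)
    while len(keep) > 1:
        best_sse, drop = min((sse([c for c in keep if c != j]), j)
                             for j in keep)
        if current > 0 and best_sse > current * (1 + max_rel_increase):
            break
        keep.remove(drop)
        current = best_sse
    return [names[j] for j in keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# hypothetical configurator variables; only the first two drive cost
y = 120.0 * X[:, 0] + 40.0 * X[:, 1] + rng.normal(scale=1.0, size=200)
selected = backward_select(X, y, ["length", "motor_size", "colour", "packaging"])
print(selected)
```

On this synthetic family the procedure discards the two noise variables and retains the two genuine cost drivers, mirroring the paper's use of the method to identify the most critical configuration variables.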

  7. Cost optimal levels for energy performance requirements

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike

    This report summarises the work done within the Concerted Action EPBD from December 2010 to April 2011 in order to feed into the European Commission's proposal for a common European procedure for a Cost-Optimal methodology under the Directive on the Energy Performance of Buildings (recast) 2010/3...

  8. Interstellar Extinction

    OpenAIRE

    Gontcharov, George

    2017-01-01

This review describes our current understanding of interstellar extinction. This differs substantially from the ideas of the 20th century. With infrared surveys of hundreds of millions of stars over the entire sky, such as 2MASS, SPITZER-IRAC, and WISE, we have looked at the densest and most rarefied regions of the interstellar medium at distances of a few kpc from the Sun. Observations at infrared and microwave wavelengths, where the bulk of the interstellar dust absorbs and radiates, have br...

  9. Optimizing Teleportation Cost in Distributed Quantum Circuits

    Science.gov (United States)

    Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh

    2018-03-01

The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and, for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future work on distributed quantum circuits.
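A minimal sketch of the configuration-search idea (not the paper's exact algorithm): enumerate, for every two-qubit gate, which of the two subsystems executes it, count the teleportations each configuration implies, and keep the cheapest. The circuit and the one-teleportation-per-move cost model below are assumptions.

```python
from itertools import product

def min_teleportations(n_qubits, home, gates):
    """Brute-force the execution partition (0 or 1) of every two-qubit
    gate and count qubit teleportations, returning the minimum.

    `home[q]` is the initial partition of qubit q; `gates` is a list of
    (q1, q2) pairs in execution order. Toy cost model: moving a qubit
    to the other partition costs one teleportation.
    """
    best = None
    for sides in product((0, 1), repeat=len(gates)):
        loc = list(home)
        cost = 0
        for (q1, q2), side in zip(gates, sides):
            for q in (q1, q2):
                if loc[q] != side:      # qubit absent: teleport it over
                    loc[q] = side
                    cost += 1
        best = cost if best is None else min(best, cost)
    return best

# hypothetical 4-qubit circuit split across two processors
best = min_teleportations(4, home=[0, 0, 1, 1],
                          gates=[(0, 2), (0, 2), (1, 3)])
print(best)
```

Note how executing both (0, 2) gates on the same side lets the second gate reuse the first teleportation, which is exactly the kind of saving the configuration search is meant to find; the exhaustive enumeration is exponential in the number of nonlocal gates, so it only illustrates the objective, not a scalable method.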

  10. GASIFICATION PLANT COST AND PERFORMANCE OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Samuel S. Tam

    2002-05-01

The goal of this series of design and estimating efforts was to start from the as-built design and actual operating data from the DOE sponsored Wabash River Coal Gasification Repowering Project and to develop optimized designs for several coal and petroleum coke IGCC power and coproduction projects. First, the team developed a design for a grass-roots plant equivalent to the Wabash River Coal Gasification Repowering Project to provide a starting point and a detailed mid-year 2000 cost estimate based on the actual as-built plant design and subsequent modifications (Subtask 1.1). This unoptimized plant has a thermal efficiency of 38.3% (HHV) and a mid-year 2000 EPC cost of 1,681 $/kW. This design was enlarged and modified to become a Petroleum Coke IGCC Coproduction Plant (Subtask 1.2) that produces hydrogen, industrial grade steam, and fuel gas for an adjacent Gulf Coast petroleum refinery in addition to export power. A structured Value Improving Practices (VIP) approach was applied to reduce costs and improve performance. The base case (Subtask 1.3) Optimized Petroleum Coke IGCC Coproduction Plant increased the power output by 16% and reduced the plant cost by 23%. The study looked at several options for gasifier sparing to enhance availability. Subtask 1.9 produced a detailed report on this availability analysis. The Subtask 1.3 Next Plant, which retains the preferred spare gasification train approach, only reduced the cost by about 21%, but it has the highest availability (94.6%) and produces power at 30 $/MW-hr (at a 12% ROI). Thus, such a coke-fueled IGCC coproduction plant could fill a near term niche market. In all cases, the emissions performance of these plants is superior to the Wabash River project. Subtasks 1.5A and B developed designs for single-train coal and coke-fueled power plants. This side-by-side comparison of these plants, which contain the Subtask 1.3 VIP enhancements, showed their similarity both in design and cost (1,318 $/kW for the

  11. Costs and benefits of realism and optimism.

    Science.gov (United States)

    Bortolotti, Lisa; Antrobus, Magdalena

    2015-03-01

    What is the relationship between rationality and mental health? By considering the psychological literature on depressive realism and unrealistic optimism, it was hypothesized that, in the context of judgments about the self, accurate cognitions are psychologically maladaptive and inaccurate cognitions are psychologically adaptive. Recent studies recommend being cautious in drawing any general conclusion about the style of thinking and mental health. Recent investigations suggest that people with depressive symptoms are more accurate than controls in tasks involving time perception and estimates of personal circumstances, but not in other tasks. Unrealistic optimism remains a robust phenomenon across a variety of tasks and domains, and researchers are starting to explore its neural bases. However, the challenge is to determine to what extent and in what way unrealistic optimism is beneficial. We should revisit the hypothesis that optimistic cognitions are psychologically adaptive, whereas realistic thinking is not. Realistic beliefs and expectations can be conducive to wellbeing and good functioning, and wildly optimistic cognitions have considerable psychological costs.

  12. Costs and benefits of realism and optimism

    Science.gov (United States)

    Bortolotti, Lisa; Antrobus, Magdalena

    2015-01-01

    Purpose of review What is the relationship between rationality and mental health? By considering the psychological literature on depressive realism and unrealistic optimism, it was hypothesized that, in the context of judgments about the self, accurate cognitions are psychologically maladaptive and inaccurate cognitions are psychologically adaptive. Recent studies recommend being cautious in drawing any general conclusion about the style of thinking and mental health. Recent findings Recent investigations suggest that people with depressive symptoms are more accurate than controls in tasks involving time perception and estimates of personal circumstances, but not in other tasks. Unrealistic optimism remains a robust phenomenon across a variety of tasks and domains, and researchers are starting to explore its neural bases. However, the challenge is to determine to what extent and in what way unrealistic optimism is beneficial. Summary We should revisit the hypothesis that optimistic cognitions are psychologically adaptive, whereas realistic thinking is not. Realistic beliefs and expectations can be conducive to wellbeing and good functioning, and wildly optimistic cognitions have considerable psychological costs. PMID:25594418

  13. Cost Optimal System Identification Experiment Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    A structural system identification experiment design method is formulated in the light of decision theory, structural reliability theory and optimization theory. The experiment design is based on a preposterior analysis, well-known from the classical decision theory. I.e. the decisions concerning...... reflecting the cost of the experiment and the value of obtained additional information. An example concerning design of an experiment for parametric identification of a single degree of freedom structural system shows the applicability of the experiment design method....... the experiment design are not based on obtained experimental data. Instead the decisions are based on the expected experimental data assumed to be obtained from the measurements, estimated based on prior information and engineering judgement. The design method provides a system identification experiment design...
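The preposterior idea above, choosing the experiment before any data exist by averaging the posterior decision cost over the data one expects, can be illustrated with a toy binary damage-state example. All costs and probabilities below are hypothetical: without any experiment the best expected cost is min(10, 0.2 x 100) = 10, so the cheap noisy test turns out not to be worth buying while the accurate one is.

```python
def expected_total_cost(prior_damaged, accuracy, experiment_cost,
                        repair_cost=10.0, failure_cost=100.0):
    """Preposterior analysis for a binary damage state (toy numbers).

    The experiment flags 'damaged' with the given accuracy; after
    seeing the outcome we choose the cheaper of repairing or doing
    nothing, and average that posterior cost over the two possible
    outcomes, weighted by their prior predictive probabilities.
    """
    total = experiment_cost
    for z_damaged in (True, False):
        # prior predictive probability of this outcome
        p_z = (accuracy * prior_damaged + (1 - accuracy) * (1 - prior_damaged)
               if z_damaged else
               accuracy * (1 - prior_damaged) + (1 - accuracy) * prior_damaged)
        if p_z == 0:
            continue
        # posterior probability of damage given the outcome (Bayes)
        post = ((accuracy if z_damaged else 1 - accuracy)
                * prior_damaged) / p_z
        # optimal posterior decision: repair, or accept the failure risk
        total += p_z * min(repair_cost, post * failure_cost)
    return total

cheap = expected_total_cost(0.2, accuracy=0.7, experiment_cost=1.0)
precise = expected_total_cost(0.2, accuracy=0.95, experiment_cost=3.0)
print(cheap, precise)
```

The design decision is thus made on expected data alone, exactly in the spirit of the preposterior analysis the abstract describes: the accurate but expensive experiment has the lower expected total cost and would be selected.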

  14. Interstellar ammonia

    International Nuclear Information System (INIS)

    Ho, P.T.P.; Townes, C.H.

    1983-01-01

    Investigations and results on interstellar NH3 are discussed. The physics of the molecule, its interstellar excitation, and its formation and dissociation mechanisms are reviewed. The observing techniques and instruments, including single-antenna facilities, infrared and submillimeter techniques, and interferometric studies using the Very Large Array are briefly considered. Spectral data analysis is discussed, including the derivation of optical depths, excitation measurements, ortho-para measurements, and cross sections. Progress achieved in understanding the properties and evolution of the interstellar medium through NH3 studies is reviewed, including observations of nearby dark clouds and of clumping effects in molecular clouds, as well as interferometric observations of hot molecular cores in Orion, W51, and Sagittarius A. Research results on extragalactic NH3, far-infrared, submillimeter, and midinfrared NH3 observations are described. 101 references

  15. Interstellar holography

    NARCIS (Netherlands)

    Walker, M. A.; Koopmans, L. V. E.; Stinebring, D. R.; van Straten, W.

    2008-01-01

    The dynamic spectrum of a radio pulsar is an in-line digital hologram of the ionized interstellar medium. It has previously been demonstrated that such holograms permit image reconstruction, in the sense that one can determine an approximation to the complex electric field values as a function of

  16. Interstellar matter

    International Nuclear Information System (INIS)

    Mezger, P.G.

    1978-01-01

An overview of the formation of our galaxy is presented, followed by a summary of recent work in star formation and related topics. Selected discussions are given on interstellar matter, including the absorption characteristics of dust, the fully ionised component of the ISM, the energy density of Lyman-continuum photons in the solar neighbourhood, and the diffuse galactic IR radiation.

  17. Implementing the cost-optimal methodology in EU countries

    DEFF Research Database (Denmark)

    Atanasiu, Bogdan; Kouloumpi, Ilektra; Thomsen, Kirsten Engelund

    This study presents three cost-optimal calculations. The overall aim is to provide a deeper analysis and to provide additional guidance on how to properly implement the cost-optimality methodology in Member States. Without proper guidance and lessons from exemplary case studies using realistic...... input data (reflecting the likely future development), there is a risk that the cost-optimal methodology may be implemented at sub-optimal levels. This could lead to a misalignment between the defined cost-optimal levels and the long-term goals, leaving a significant energy saving potential unexploited....... Therefore, this study provides more evidence on the implementation of the cost-optimal methodology and highlights the implications of choosing different values for key factors (e.g. discount rates, simulation variants/packages, costs, energy prices) at national levels. The study demonstrates how existing...
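The sensitivity to key factors such as discount rates can be illustrated with a minimal global-cost calculation in the spirit of the EPBD cost-optimal methodology (maintenance and residual value are omitted for brevity; both renovation packages and all prices are hypothetical): a low discount rate favours the deep renovation, a high one the basic package, so the choice of rate can move the cost-optimal level.

```python
def global_cost(investment, annual_energy_cost, years=30, discount_rate=0.03):
    """EPBD-style global cost (sketch): upfront investment plus the
    discounted running energy costs over the calculation period.
    Maintenance, replacement, and residual value are omitted; all
    figures are hypothetical.
    """
    pv = sum(annual_energy_cost / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return investment + pv

# two hypothetical renovation packages for the same building (EUR)
basic = dict(investment=20_000, annual_energy_cost=1_500)
deep = dict(investment=45_000, annual_energy_cost=400)

for r in (0.01, 0.06):
    gb = global_cost(**basic, discount_rate=r)
    gd = global_cost(**deep, discount_rate=r)
    print(f"rate {r:.0%}: basic={gb:,.0f}  deep={gd:,.0f}")
```

With these numbers the ranking flips between 1% and 6%, which is precisely why the study stresses realistic input data: a discount rate chosen too high can declare a sub-optimal package "cost-optimal" and leave savings unexploited.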

  18. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    Directory of Open Access Journals (Sweden)

    Mihaela STET

    2016-12-01

The paper deals with the problem of logistics costs, highlighting methods for estimating and determining the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  19. METHODS FOR DETERMINATION AND OPTIMIZATION OF LOGISTICS COSTS

    OpenAIRE

    Mihaela STET

    2016-01-01

The paper deals with the problem of logistics costs, highlighting methods for estimating and determining the specific costs of different transport modes in freight distribution. Besides transport costs, the other costs in the supply chain are highlighted, as well as the costing methods used in logistics activities. In this context, some means of optimizing transport costs in the logistics chain are also presented.

  20. System Approach of Logistic Costs Optimization Solution in Supply Chain

    OpenAIRE

    Majerčák, Peter; Masárová, Gabriela; Buc, Daniel; Majerčáková, Eva

    2013-01-01

This paper focuses on the possibility of using cost simulation in the supply chain, where costs are at a relatively high level. Our goal is to determine the costs using logistic costs optimization, which must necessarily be used in business activities in supply chain management. The paper emphasizes the need to perform optimization across the whole supply chain rather than in isolation. Our goal is to compare the classic approach, in which every part tracks its costs in isolation and tries to minimize them, with the system (l...

  1. Optimal skill distribution under convex skill costs

    Directory of Open Access Journals (Sweden)

    Tin Cheuk Leung

    2018-03-01

This paper studies the optimal distribution of skills in an optimal income tax framework with convex skill costs. The problem is cast as a social planning problem in which a redistributive planner chooses how to distribute a given amount of aggregate skills across people. We find that the optimal skill distribution is either perfectly equal or perfectly unequal; an interior level of skill inequality is never optimal.

  2. Cost optimization on example of hotel-restaurant complex enterprises

    Directory of Open Access Journals (Sweden)

    Volkovska I.V.

    2017-08-01

Optimizing costs is important for increasing the competitiveness and profitability of an enterprise; the purpose of this study is therefore to establish and visualize the basis of cost optimization using the example of hotel-restaurant complex enterprises. The essence of cost optimization is investigated through an analysis of the views of various scholars. It is established that cost optimization is the process of planning, accounting, analysis, and control of costs in order to find and select the most effective methods of management under conditions of limited resources. The author has developed a sequence of cost optimization for enterprises of the hotel-restaurant complex, which helps to structure the cost management process. This sequence identifies areas where costs can be reduced and the technical and economic conditions under which they can be changed; its implementation is important for cost management at the enterprise. It is also proposed to optimize costs using the simplex method and to carry out a quantitative assessment of service quality by the qualimetric method. For rational use of scarce resources, alternative ways of using them should be formed. The article proposes grouping costs by XYZ-analysis, with individual approaches to cost management, namely target costing, the theory of constraints, and lean manufacturing. For this purpose, the author develops a table to be filled in to compare which costs can be reduced or replaced and by what means, adds recommendations for filling in the table, and notes that with this analysis transaction and unjustified costs can be controlled. Thus, with such a sequence of actions, funds can be redistributed to optimize costs and save money that can be directed to enterprise development. The conclusion is made of the need of system analysis to use
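The XYZ grouping mentioned in the abstract classifies cost items by the stability of their consumption or cost pattern, conventionally via the coefficient of variation. A minimal sketch, where the thresholds and the sample data are assumptions rather than values from the article:

```python
from statistics import mean, pstdev

def xyz_class(series, x_max=0.10, y_max=0.25):
    """Classify a cost series by its coefficient of variation (CV):
    X = stable (CV <= 10%), Y = fluctuating (CV <= 25%), Z = irregular.
    The 10%/25% thresholds are conventional but remain a modelling
    choice; series values are hypothetical monthly cost lines.
    """
    cv = pstdev(series) / mean(series)
    return "X" if cv <= x_max else "Y" if cv <= y_max else "Z"

# hypothetical monthly cost lines for a hotel-restaurant complex
print(xyz_class([100, 102, 98, 101]))   # stable utilities
print(xyz_class([100, 60, 150, 90]))    # seasonal food purchasing
```

X items then suit tight target-costing control, while Z items call for buffer-oriented approaches, matching the article's idea of individual management approaches per cost group.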

  3. Cost optimization for buildings with hybrid ventilation systems

    Science.gov (United States)

    Ji, Kun; Lu, Yan

    2018-02-13

    A method including: computing a total cost for a first zone in a building, wherein the total cost is equal to an actual energy cost of the first zone plus a thermal discomfort cost of the first zone; and heuristically optimizing the total cost to identify temperature setpoints for a mechanical heating/cooling system and a start time and an end time of the mechanical heating/cooling system, based on external weather data and occupancy data of the first zone.
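The claimed method can be sketched with a toy zone model: total cost is actual energy cost plus a thermal discomfort cost, heuristically optimized here by a coarse grid search over candidate setpoints. All coefficients and the comfort model below are hypothetical illustrations, not taken from the patent.

```python
def total_cost(setpoint, weather_c=30.0, occupied=True,
               energy_price=0.15, comfort_c=22.0, discomfort_price=0.4):
    """Toy zone model: cooling energy grows with the gap between the
    outdoor temperature and the setpoint; discomfort cost grows with
    the gap between the setpoint and the occupants' comfort temperature
    (charged only while the zone is occupied). All coefficients are
    hypothetical.
    """
    energy = energy_price * max(0.0, weather_c - setpoint)
    discomfort = (discomfort_price * abs(setpoint - comfort_c)
                  if occupied else 0.0)
    return energy + discomfort

# heuristic optimization: coarse grid search over candidate setpoints
candidates = [20 + 0.5 * i for i in range(17)]   # 20.0 .. 28.0 C
best = min(candidates, key=total_cost)
print(best)
```

With these weights the discomfort penalty dominates the (cheap) cooling energy, so the optimizer settles on the comfort temperature; raising the energy price or marking the zone unoccupied shifts the optimum toward warmer setpoints, which is the trade-off the patent's weather- and occupancy-aware optimization exploits.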

  4. Gasification Plant Cost and Performance Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Samuel Tam; Alan Nizamoff; Sheldon Kramer; Scott Olson; Francis Lau; Mike Roberts; David Stopek; Robert Zabransky; Jeffrey Hoffmann; Erik Shuster; Nelson Zhan

    2005-05-01

As part of an ongoing effort of the U.S. Department of Energy (DOE) to investigate the feasibility of gasification on a broader level, Nexant, Inc. was contracted to perform a comprehensive study to provide a set of gasification alternatives for consideration by the DOE. Nexant completed the first two tasks (Tasks 1 and 2) of the "Gasification Plant Cost and Performance Optimization Study" for the DOE's National Energy Technology Laboratory (NETL) in 2003. These tasks evaluated the use of the E-GAS™ gasification technology (now owned by ConocoPhillips) for the production of power either alone or with polygeneration of industrial grade steam, fuel gas, hydrocarbon liquids, or hydrogen. NETL expanded this effort in Task 3 to evaluate Gas Technology Institute's (GTI) fluidized bed U-GAS® gasifier. The Task 3 study had three main objectives. The first was to examine the application of the gasifier at an industrial application in upstate New York using a Southeastern Ohio coal. The second was to investigate the GTI gasifier in a stand-alone lignite-fueled IGCC power plant application, sited in North Dakota. The final goal was to train NETL personnel in the methods of process design and systems analysis. These objectives were divided into five subtasks. Subtasks 3.2 through 3.4 covered the technical analyses for the different design cases. Subtask 3.1 covered management activities, and Subtask 3.5 covered reporting. Conceptual designs were developed for several coal gasification facilities based on the fluidized bed U-GAS® gasifier. Subtask 3.2 developed two base case designs for industrial combined heat and power facilities using Southeastern Ohio coal that will be located at an upstate New York location. One base case design used an air-blown gasifier, and the other used an oxygen-blown gasifier in order to evaluate their relative economics. Subtask 3.3 developed an advanced design for an air

  5. Cost benefit analysis for optimization of radiation protection

    International Nuclear Information System (INIS)

    Lindell, B.

    1984-01-01

ICRP recommends three basic principles for radiation protection. One is the justification of the source: any use of radiation should be justified with regard to its benefit. The second is the optimization of radiation protection, i.e. all radiation exposure should be kept as low as reasonably achievable. The third principle is that there should be a limit on the radiation dose that any individual receives. Cost benefit assessment or cost benefit analysis is one tool to achieve the optimization, but optimization is not identical with cost benefit analysis. In principle, cost benefit analysis for the optimization of radiation protection seeks the minimum of the sum of the cost of protection and the cost of detriment. (Mori, K.)
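The optimization principle can be sketched numerically: with a rising cost of protection X(p) and a falling monetized detriment alpha*S(p) (alpha being the money value assigned to a unit of collective dose), the optimum is the protection level that minimizes their sum. The cost curves and the alpha value below are hypothetical.

```python
def total_cost(protection, alpha=1000.0):
    """ICRP-style objective (sketch): cost of protection plus
    alpha * collective dose, with a linearly rising protection cost
    and a hyperbolically falling residual dose. Curves and the alpha
    value (money per person-sievert) are hypothetical.
    """
    x = 50.0 * protection             # cost of protection measures
    s = 20.0 / (1.0 + protection)     # residual collective dose
    return x + alpha * s

# search candidate protection levels on a coarse grid
levels = [i / 10 for i in range(0, 201)]
best = min(levels, key=total_cost)
print(best, total_cost(best))
```

Analytically the minimum sits where the marginal cost of extra protection equals the marginal monetized dose reduction (here at p = 19); dose limits for individuals would then be imposed on top of this collective optimum, since the sum says nothing about how the residual dose is distributed.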

  6. Optimal Joint Liability Lending with Costly Peer Monitoring

    NARCIS (Netherlands)

    Carli, Francesco; Uras, R.B.

    2014-01-01

    This paper characterizes an optimal group loan contract with costly peer monitoring. Using a fairly standard moral hazard framework, we show that the optimal group lending contract could exhibit a joint-liability scheme. However, optimality of joint-liability requires the involvement of a group

  7. Interstellar grains

    Energy Technology Data Exchange (ETDEWEB)

    Hoyle, F.; Wickramasinghe, N.C.

    1980-11-01

Interstellar extinction of starlight was observed and plotted as a function of inverse wavelength. Agreement with the calculated effects of the particle distribution is shown. The main kinds of grain distinguished are: (1) graphite spheres of radius 0.02 microns, making up 10% of the total grain mass; (2) small dielectric spheres of radius 0.04 microns, making up 25%; and (3) hollow dielectric cylinders containing metallic iron, with diameters of 2/3 microns, making up 45%. The remaining 20% consists of other metals, metal oxides, and polysiloxanes. Absorption factor evidence suggests that the main dielectric component of the grains is organic material.

  8. Interstellar chemistry.

    Science.gov (United States)

    Klemperer, William

    2006-08-15

    In the past half century, radioastronomy has changed our perception and understanding of the universe. In this issue of PNAS, the molecular chemistry directly observed within the galaxy is discussed. For the most part, the description of the molecular transformations requires specific kinetic schemes rather than chemical thermodynamics. Ionization of the very abundant molecular hydrogen and atomic helium followed by their secondary reactions is discussed. The rich variety of organic species observed is a challenge for complete understanding. The role and nature of reactions involving grain surfaces as well as new spectroscopic observations of interstellar and circumstellar regions are topics presented in this special feature.

  9. Optimization of life cycle management costs

    International Nuclear Information System (INIS)

    Banerjee, A.K.

    1994-01-01

As can be seen from the case studies, a LCM program needs to address and integrate, in the decision process, technical, political, licensing, remaining plant life, component replacement cycles, and financial issues. As part of the LCM evaluations, existing plant programs, ongoing replacement projects, short and long-term operation and maintenance issues, and life extension strategies must be considered. The development of the LCM evaluations and the cost benefit analysis identifies critical technical and life cycle cost parameters. These "discoveries" result from the detailed and effective use of a consistent, quantifiable, and well documented methodology. The systematic development and implementation of a plant-wide LCM program provides for an integrated and structured process that leads to the most practical and effective recommendations. Through the implementation of these recommendations and cost effective decisions, the overall power production costs can be controlled and ultimately lowered.

  10. Cost-Optimal ATCs in Zonal Electricity Markets

    DEFF Research Database (Denmark)

    Jensen, Tue Vissing; Kazempour, Jalal; Pinson, Pierre

    2017-01-01

from the physical ATCs based on security indices only typically used in zonal electricity markets today. Determining cost-optimal ATCs requires viewing ATCs as an endogenous market construct, and leads naturally to the definition of a market entity whose responsibility is to optimize ATCs....... The optimization problem which this entity solves is a stochastic bilevel problem, which we decompose to yield a computationally tractable formulation. We show that cost-optimal ATCs depend non-trivially on the underlying network structure, and the problem of finding a set of cost-optimal ATCs is in general non...... by a factor of 2 or more, and ATCs which are zero between well-connected areas. Our results indicate that the perceived efficiency gap between zonal and nodal markets may be exaggerated if non-optimal ATCs are used....

  11. Optimizing Data Centre Energy and Environmental Costs

    Science.gov (United States)

    Aikema, David Hendrik

Data centres use an estimated 2% of US electrical power which accounts for much of their total cost of ownership. This consumption continues to grow, further straining power grids attempting to integrate more renewable energy. This dissertation focuses on assessing and reducing data centre environmental and financial costs. Emissions of projects undertaken to lower the data centre environmental footprints can be assessed and the emission reduction projects compared using an ISO-14064-2-compliant greenhouse gas reduction protocol outlined herein. I was closely involved with the development of the protocol. Full lifecycle analysis and verifying that projects exceed business-as-usual expectations are addressed, and a test project is described. Consuming power when it is low cost or when renewable energy is available can be used to reduce the financial and environmental costs of computing. Adaptation based on the power price showed 10-50% potential savings in typical cases, and local renewable energy use could be increased by 10-80%. Allowing a fraction of high-priority tasks to proceed unimpeded still allows significant savings. Power grid operators use mechanisms called ancillary services to address variation and system failures, paying organizations to alter power consumption on request. By bidding to offer these services, data centres may be able to lower their energy costs while reducing their environmental impact. If providing contingency reserves which require only infrequent action, savings of up to 12% were seen in simulations. Greater power cost savings are possible for those ceding more control to the power grid operator. Coordinating multiple data centres adds overhead, and altering at which data centre requests are processed based on changes in the financial or environmental costs of power is likely to increase this overhead.
Tests of virtual machine migrations showed that in some cases there was no visible increase in power use, while in others power use …

  12. Estimation of optimal educational cost per medical student.

    Science.gov (United States)

    Yang, Eunbae B; Lee, Seunghee

    2009-09-01

    This study aims to estimate the optimal educational cost per medical student. A private medical college in Seoul was targeted by the study, and its 2006 learning environment and data from the 2003–2006 budget and settlement were carefully analyzed. Through interviews with 3 medical professors and 2 experts in the economics of education, the study attempted to establish an educational cost estimation model, which yields an empirically computed estimate of the optimal cost per student in medical college. The estimation model was based primarily upon the educational cost, which consisted of direct educational costs (47.25%), support costs (36.44%), fixed asset purchases (11.18%), and costs for student affairs (5.14%). These results indicate that the optimal cost per student is approximately 20,367,000 won per semester; thus, training a doctor costs 162,936,000 won over 4 years. Consequently, we inferred that the tuition levels of a local medical college or professional medical graduate school cover one quarter to one half of the per-student cost. The findings of this study do not necessarily imply an increase in medical college tuition; the estimation of the per-student cost of training a doctor is one matter, and the issue of who should bear this burden is another. For further study, we should consider the college type and its location for general application of the estimation method, in addition to living expenses and opportunity costs.
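
    A minimal sketch reproducing the abstract's arithmetic (values taken from the record above): the per-semester estimate times 8 semesters over 4 years should equal the quoted total cost of training a doctor.

    ```python
    # Check: 20,367,000 won per semester over 8 semesters (4 years)
    per_semester_won = 20_367_000
    semesters = 8
    total = per_semester_won * semesters
    print(total)  # 162936000, matching the quoted 162,936,000 won
    ```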

  13. Site specific optimization of wind turbines energy cost: Iterative approach

    International Nuclear Information System (INIS)

    Rezaei Mirghaed, Mohammad; Roshandel, Ramin

    2013-01-01

    Highlights: • Optimization model of wind turbine parameters plus rectangular farm layout is developed. • Results show that levelized cost for a single turbine fluctuates between 46.6 and 54.5 $/MW h. • Modeling results for two specific farms reported optimal sizing and farm layout. • Results show that levelized cost of the wind farms fluctuates between 45.8 and 67.2 $/MW h. - Abstract: The present study was aimed at developing a model to optimize the sizing parameters and farm layout of wind turbines according to the wind resource and economic aspects. The proposed model, including aerodynamic, economic and optimization sub-models, is used to achieve minimum levelized cost of electricity. The blade element momentum theory is utilized for aerodynamic modeling of pitch-regulated horizontal axis wind turbines. Also, a comprehensive cost model including capital costs of all turbine components is considered. An iterative approach is used to develop the optimization model. The modeling results are presented for three potential regions in Iran: Khaf, Ahar and Manjil. The optimum configurations and sizing for a single turbine with minimum levelized cost of electricity are presented. The optimal cost of energy for one turbine is calculated to be about 46.7, 54.5 and 46.6 dollars per MW h in the studied sites, respectively. In addition, the optimal size of turbines, annual electricity production, capital cost, and wind farm layout for two different rectangular and square shaped farms in the proposed areas have been recognized. According to the results, the optimal system configuration corresponds to a minimum levelized cost of electricity of about 45.8 to 67.2 dollars per MW h in the studied wind farms.
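
    The record minimizes levelized cost of electricity; the paper's full cost model is not given here, but a standard simplified LCOE based on the capital recovery factor can be sketched as follows. All input numbers are illustrative assumptions, not values from the paper.

    ```python
    def crf(rate, years):
        """Capital recovery factor for discount rate `rate` over `years`."""
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    def lcoe(capex, om_per_year, aep_mwh, rate=0.08, years=20):
        """Simplified levelized cost of electricity in $/MWh:
        annualized capital cost plus O&M, divided by annual energy production."""
        return (crf(rate, years) * capex + om_per_year) / aep_mwh

    # Illustrative turbine: $1.5M capital, $30k/yr O&M, 3500 MWh/yr output
    print(round(lcoe(capex=1_500_000, om_per_year=30_000, aep_mwh=3_500), 1))  # ~52.2 $/MWh
    ```

    With these assumed inputs the result happens to fall inside the 45.8–67.2 $/MW h band quoted above.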

  14. RO-75, Reverse Osmosis Plant Design Optimization and Cost Optimization

    International Nuclear Information System (INIS)

    Glueckstern, P.; Reed, S.A.; Wilson, J.V.

    1999-01-01

    1 - Description of problem or function: RO75 is a program for the optimization of the design and economics of one- or two-stage seawater reverse osmosis plants. 2 - Method of solution: RO75 evaluates the performance of the applied membrane module (productivity and salt rejection) at assumed operating conditions. These conditions include the site parameters - seawater salinity and temperature, the membrane module operating parameters - pressure and product recovery, and the membrane module predicted long-term performance parameters - lifetime and long-term flux decline. RO75 calculates the number of first and second stage (if applied) membrane modules needed to obtain the required product capacity and quality and evaluates the required pumping units and the power recovery turbine (if applied). 3 - Restrictions on the complexity of the problem: The program does not optimize or design the membrane properties and the internal structure and flow characteristics of the membrane modules; it assumes operating characteristics defined by the membrane manufacturers.

  15. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and then a questionnaire survey was employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index” and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design were needed to improve cost performance. The new form of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance through providing an evaluation and optimization model, which helps managers to …

  16. Refrigerator Optimal Scheduling to Minimise the Cost of Operation

    Directory of Open Access Journals (Sweden)

    Bálint Roland

    2016-12-01

    Full Text Available The cost optimal scheduling of a household refrigerator is presented in this work. The fundamental approach is the model predictive control methodology applied to the piecewise affine model of the refrigerator.
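
    The record applies model predictive control to a piecewise affine refrigerator model. As a toy sketch of the underlying idea (not the paper's model), the snippet below uses a first-order discrete thermal model with on/off compressor control, hourly electricity prices, and a temperature comfort band, and finds the cheapest feasible schedule over a short horizon by brute-force search. All dynamics coefficients, prices, and limits are invented placeholders.

    ```python
    from itertools import product

    # Assumed first-order dynamics: T[k+1] = a*T[k] + (1-a)*(T_amb - b*u[k]),
    # where u[k] in {0, 1} is the compressor on/off decision.
    a, b, T_amb = 0.9, 25.0, 20.0
    prices = [0.30, 0.10, 0.10, 0.30, 0.40, 0.15]  # assumed $/kWh per step
    P_on = 0.1  # kWh consumed per step when the compressor runs

    def simulate(T0, schedule):
        T, temps = T0, []
        for u in schedule:
            T = a * T + (1 - a) * (T_amb - b * u)
            temps.append(T)
        return temps

    def optimal_schedule(T0):
        """Cheapest on/off schedule keeping the temperature in [2, 8] degC."""
        best = None
        for sched in product([0, 1], repeat=len(prices)):
            temps = simulate(T0, sched)
            if all(2.0 <= T <= 8.0 for T in temps):
                cost = sum(p * u * P_on for p, u in zip(prices, sched))
                if best is None or cost < best[0]:
                    best = (cost, sched)
        return best

    print(optimal_schedule(5.0))  # cooling is shifted toward the cheap price steps
    ```

    A real MPC controller would re-solve this problem at every step over a receding horizon instead of enumerating schedules.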

  17. Costs and benefits of realism and optimism

    OpenAIRE

    Bortolotti, Lisa; Antrobus, Magdalena

    2015-01-01

    Purpose of review What is the relationship between rationality and mental health? By considering the psychological literature on depressive realism and unrealistic optimism, it was hypothesized that, in the context of judgments about the self, accurate cognitions are psychologically maladaptive and inaccurate cognitions are psychologically adaptive. Recent studies recommend being cautious in drawing any general conclusion about the style of thinking and mental health. Recent findings Recent i...

  18. Cost Optimization Through Open Source Software

    Directory of Open Access Journals (Sweden)

    Mark VonFange

    2010-12-01

    Full Text Available The cost of information technology (IT as a percentage of overall operating and capital expenditures is growing as companies modernize their operations and as IT becomes an increasingly indispensable part of company resources. The price tag associated with IT infrastructure is a heavy one, and, in today's economy, companies need to look for ways to reduce overhead while maintaining quality operations and staying current with technology. With its advancements in availability, usability, functionality, choice, and power, free/libre open source software (F/LOSS provides a cost-effective means for the modern enterprise to streamline its operations. iXsystems wanted to quantify the benefits associated with the use of open source software at their company headquarters. This article is the outgrowth of our internal analysis of using open source software instead of commercial software in all aspects of company operations.

  19. Optimal investment with fixed refinancing costs

    OpenAIRE

    Jason G. Cummins; Ingmar Nyman

    2001-01-01

    Case studies show that corporate managers seek financial independence to avoid interference by outside financiers. We incorporate this financial xenophobia as a fixed cost in a simple dynamic model of financing and investment. To avoid refinancing in the future, the firm alters its behavior depending on the extent of its financial xenophobia and the realization of a revenue shock. With a sufficiently adverse shock, the firm holds no liquidity. Otherwise, the firm precautionarily saves and hol...

  20. A Study of Joint Cost Inclusion in Linear Programming Optimization

    Directory of Open Access Journals (Sweden)

    P. Armaos

    2013-08-01

    Full Text Available The concept of structural optimization has been a topic of research over the past century. Linear programming optimization has proven to be the most reliable method of structural optimization. Global advances in linear programming optimization have recently been driven by University of Sheffield researchers to include joint cost, self-weight, and buckling considerations. Including a joint cost aims to reduce the number of joints in an optimized structural solution, transforming it into a practically viable solution. The topic of the current paper is to investigate the effects of joint cost inclusion as it is currently implemented in the optimization code. An extended literature review on this subject was conducted prior to familiarization with small-scale optimization software. Using IntelliFORM software, a structured series of problems was set up and analyzed. The joint cost tests examined benchmark problems and the consequent changes in member topology as the design domain expanded. The findings of the analyses were remarkable and are commented on further in the paper. The distinct topologies of solutions created by optimization processes are also recognized. Finally, an alternative strategy of penalizing joints is presented.

  1. Cost-optimized climate stabilisation (OPTIKS)

    Energy Technology Data Exchange (ETDEWEB)

    Leimbach, Marian; Bauer, Nico; Baumstark, Lavinia; Edenhofer, Ottmar [Potsdam Institut fuer Klimafolgenforschung, Potsdam (Germany)

    2009-11-15

    This study analyses the implications of suggestions for the design of post-2012 climate policy regimes on the basis of model simulations. The focus of the analysis, the determination of regional mitigation costs and the technological development in the energy sector, also considers the feedbacks of investment and trade decisions of the regions that are linked by different global markets for emission permits, goods and resources. The analysed policy regimes are primarily differentiated by their allocation of emission rights. Moreover, they represent alternative designs of an international cap and trade system that is geared to meet the 2 °C climate target. The present study analyses ambitious climate protection scenarios that require drastic reduction policies (reductions of 60%-80% globally by 2050). Immediate and multilateral action is needed in such scenarios. Given the rather small variance of mitigation costs in major regions like UCA, Europe, MEA and China, a policy regime should be chosen that provides high incentives for the remaining regions to join an international agreement. From this perspective, either the C and C scenario (incentive for Russia) or the multi-stage approach (incentive for Africa and India) is preferable. (orig.)

  2. Optimizing quality, service, and cost through innovation.

    Science.gov (United States)

    Walker, Kathleen; Allen, Jennifer; Andrews, Richard

    2011-01-01

    With dramatic increases in health care costs and growing concerns about the quality of health care services, nurse executives are seeking ways to transform their organizations to improve operational and financial performance while enhancing quality care and patient safety. Nurse leaders are challenged to meet new cost, quality, and service imperatives, and change cannot be achieved by traditional approaches; it must occur through innovation. Imagine an organization that can mitigate a $56 million loss in revenue and claim the following successes: Increase admissions by 8 a day, an annualized increase of $5.5 million, by repurposing existing space. Decrease emergency department holding hours by an average of 174 hours a day, with a labor savings of $502,000 annually. Reduce overall inpatient length of stay by 0.5 day, with total compensation running $4.2 million less than the budget for the first quarter of 2010. Grow emergency department volume by 272 visits more than budgeted for the first quarter of 2010. Complete admission assessments and diagnostics in 90 minutes. This article will address how these outcomes were achieved by transforming care delivery, creating a patient transition center, enhancing outreach referrals, and revising admission processes through collaboration and innovation.

  3. Cost-Optimal Analysis for Nearly Zero Energy Buildings Design and Optimization: A Critical Review

    Directory of Open Access Journals (Sweden)

    Maria Ferrara

    2018-06-01

    Full Text Available Since the introduction of the recast EPBD European Directive 2010/31/EU, many studies on the cost-effective feasibility of nearly zero-energy buildings (NZEBs) have been carried out, both by academic research bodies and by national bodies. In particular, the introduction of the cost-optimal methodology has given a strong impulse to research in this field. This paper presents a comprehensive and significant review of scientific works based on the application of cost-optimal analysis in Europe since the EPBD recast entered into force, pointing out the differences among the analyzed studies and comparing their outcomes before the new recast of the EPBD enters into force in 2018. The analysis is conducted with special regard to the methods used for energy performance assessment, global cost calculation, and the selection of the energy efficiency measures leading to design optimization. A critical discussion about the assumptions on which the studies are based and the resulting gaps between the resulting cost-optimal performance and the zero energy target is provided, together with a summary of the resulting cost-optimal sets of technologies to be used for cost-optimal NZEB design in different contexts. It is shown that the cost-optimal approach is an effective method for delineating the future of NZEB design throughout Europe, while emerging criticalities and open research issues are presented.

  4. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time and overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing the maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.
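
    The record derives closed-form expressions for discounted maintenance cost; those formulas are not reproduced here, but the quantity being optimized can be sketched numerically. The snippet below is a hedged Monte Carlo estimate of the expected discounted cost of a simple age-based replacement policy: a component with an assumed Weibull lifetime is replaced preventively at age `t_p` (cost `c_p`) or correctively on failure (cost `c_f`), with continuous discounting at rate `r`. All parameter values are invented for illustration.

    ```python
    import math
    import random

    def discounted_cost(t_p, c_p=1.0, c_f=5.0, r=0.05,
                        horizon=200.0, runs=5000, seed=1):
        """Monte Carlo expected discounted cost of age-replacement at t_p."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(runs):
            t, cost = 0.0, 0.0
            while t < horizon:
                life = rng.weibullvariate(10.0, 2.5)  # assumed lifetime model
                if life < t_p:            # failure before planned replacement
                    t += life
                    cost += c_f * math.exp(-r * t)
                else:                     # preventive replacement at age t_p
                    t += t_p
                    cost += c_p * math.exp(-r * t)
            total += cost
        return total / runs

    # A moderate preventive age vs. effectively running to failure:
    print(round(discounted_cost(6.0), 2), round(discounted_cost(30.0), 2))
    ```

    With these assumptions, preventive replacement at a moderate age yields a lower expected discounted cost than running to failure, which is the kind of trade-off the paper's policy optimization formalizes.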

  5. Value management: optimizing quality, service, and cost.

    Science.gov (United States)

    Makadon, Harvey J; Bharucha, Farzan; Gavin, Michael; Oliveira, Jason; Wietecha, Mark

    2010-01-01

    Hospitals have wrestled with balancing quality, service, and cost for years, and the visibility and urgency around measuring and communicating real metrics has grown exponentially in the last decade. However, even today, most hospital leaders cannot articulate or demonstrate the "value" they provide to patients and payers. Instead of developing a strategic direction that is based around a core value proposition, they focus their strategic efforts on tactical decisions like physician recruitment, facility expansion, and physician alignment. In the healthcare paradigm of the next decade, alignment of various tactical initiatives will require a more coherent understanding of the hospital's core value positioning. The authors draw on their experience in a variety of healthcare settings to suggest that for most hospitals, quality (i.e., clinical outcomes and patient safety) will become the most visible indicator of value, and introduce a framework to help healthcare providers influence their value positioning based on this variable.

  6. PSO Based Optimization of Testing and Maintenance Cost in NPPs

    Directory of Open Access Journals (Sweden)

    Qiang Chou

    2014-01-01

    Full Text Available Testing and maintenance activities of safety equipment have drawn much attention in nuclear power plants (NPPs) for risk and cost control. The testing and maintenance activities are often implemented in compliance with the technical specifications and maintenance requirements. Technical specification and maintenance-related parameters, that is, allowed outage time (AOT), maintenance period and duration, and so forth, are associated with controlling the risk level and operating cost of an NPP, both of which need to be minimized. The above problems can be formulated as a constrained multiobjective optimization model, a form widely used in many other engineering problems. Particle swarm optimization (PSO) has proven its capability to solve these kinds of problems. In this paper, we adopt PSO as an optimizer to solve the multiobjective optimization problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Numerical results have demonstrated the efficiency of our proposed algorithm.
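
    A minimal global-best PSO sketch illustrating the optimizer the record adopts. The objective below is an invented placeholder with a penalty term standing in for a constraint (the paper's NPP cost/risk model is not available here).

    ```python
    import random

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimize f over box `bounds` with global-best particle swarm."""
        rng = random.Random(seed)
        dim = len(bounds)
        xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
        vs = [[0.0] * dim for _ in range(n_particles)]
        pbest = [list(x) for x in xs]
        pval = [f(x) for x in xs]
        g = min(range(n_particles), key=lambda i: pval[i])
        gbest, gval = list(pbest[g]), pval[g]
        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    vs[i][d] = (w * vs[i][d]
                                + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                                + c2 * rng.random() * (gbest[d] - xs[i][d]))
                    # clamp the particle back into the feasible box
                    xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
                v = f(xs[i])
                if v < pval[i]:
                    pbest[i], pval[i] = list(xs[i]), v
                    if v < gval:
                        gbest, gval = list(xs[i]), v
        return gbest, gval

    # Toy objective: cost rises away from (2, 3), penalty if x + y > 6.
    cost = lambda x: (x[0] - 2) ** 2 + (x[1] - 3) ** 2 + 100 * max(0.0, x[0] + x[1] - 6)
    best, val = pso(cost, [(0, 10), (0, 10)])
    print([round(b, 2) for b in best], round(val, 4))
    ```

    The penalty-function treatment of constraints shown here is one common option; the paper's constrained multiobjective formulation would add further machinery (e.g., Pareto archiving).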

  7. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    Science.gov (United States)

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the need of the medical researcher to spend monetary resources devoted to exposure assessment with optimal cost-efficiency, i.e., to obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e., efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying the optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of, or the entire, optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used …
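
    The linear-cost, two-stage special case of this allocation problem can be sketched directly: with n subjects measured on k occasions each, the variance of the exposure mean is s2_between/n + s2_within/(n*k), minimized under a budget with per-subject and per-occasion unit costs. This is a hedged simplification of the paper's three-stage, power-function model; all numbers are illustrative.

    ```python
    def best_allocation(budget, c_subject, c_occasion, s2_between, s2_within):
        """Enumerate (n subjects, k occasions) within budget; return the
        allocation minimizing Var(mean) = s2_b/n + s2_w/(n*k)."""
        best = None
        for n in range(1, budget // (c_subject + c_occasion) + 1):
            k = (budget - c_subject * n) // (c_occasion * n)  # occasions affordable
            if k < 1:
                continue
            var = s2_between / n + s2_within / (n * k)
            if best is None or var < best[0]:
                best = (var, n, k)
        return best

    # Expensive recruitment, cheap occasions, small between/within variance
    # ratio: the deviation case described in the abstract, where repeated
    # occasions per subject pay off.
    print(best_allocation(1000, c_subject=100, c_occasion=5,
                          s2_between=0.2, s2_within=1.0))
    ```

    With these assumed costs the optimum uses several occasions per subject rather than one, matching the abstract's observation about when the one-occasion rule breaks down.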

  8. Allocation base of general production costs as optimization of prime costs

    Directory of Open Access Journals (Sweden)

    Levytska I.O.

    2017-03-01

    Full Text Available Qualified management aimed at optimizing financial results is a key factor in today's economy. Effective management decisions depend on adequate information about the costs of the production process in all its aspects: their structure, their types, and the accounting policies used to record them. General production costs, the so-called indirect costs that are not directly related to the production process but support its functioning by maintaining structural divisions and creating the necessary conditions of production, play a significant role in calculating the prime cost of goods (works, services). An accurate estimate of the prime cost of goods (works, services) must therefore include the indirect (general production) costs, allocated on a properly chosen base. The choice of allocation base for general production costs is a significant decision that depends on the nature of the business and must guarantee a fair distribution with respect to the largest share of direct expenses in the total structure of production costs. The study clarifies the essence of general production costs through an analysis of key definitions by leading Ukrainian economists. The proposed optimal approach is to allocate general production costs as direct production costs within each subsidiary division (department) separately, rather than selecting a single base for their total amount.

  9. Need for Cost Optimization of Space Life Support Systems

    Science.gov (United States)

    Jones, Harry W.; Anderson, Grant

    2017-01-01

    As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.

  10. Cost-optimal levels for energy performance requirements

    DEFF Research Database (Denmark)

    Thomsen, Kirsten Engelund; Aggerholm, Søren; Kluttig-Erhorn, Heike

    2011-01-01

    The CA conducted a study on experiences and challenges in setting cost-optimal levels for energy performance requirements. The results were used as input by the EU Commission in their work of establishing the Regulation on a comparative methodology framework for calculating cost-optimal levels of minimum energy performance requirements. In addition to the summary report released in August 2011, the full detailed report on this study is now also made available, just as the EC is about to publish its proposed Regulation for MS to apply in their process to update national building requirements.

  11. Optimal treatment cost allocation methods in pollution control

    International Nuclear Information System (INIS)

    Chen Wenying; Fang Dong; Xue Dazhi

    1999-01-01

    Total emission control is an effective pollution control strategy. However, Chinese application of total emission control lacks reasonable and fair methods for optimal treatment cost allocation, a critical issue in total emission control. The author considers four approaches to allocate treatment costs. The first approach is to set up a multiple-objective planning model and to solve the model using the shortest distance ideal point method. The second approach is to define a degree of satisfaction with the cost allocation results for each polluter and to establish a method based on this concept. The third is to apply bargaining and arbitration theory to develop a model. The fourth is to establish a cooperative N-person game model, which can be solved using the Shapley value method, the core method, the Cost Gap Allocation method or the Minimum Costs-Remaining Savings method. These approaches are compared using a practical case study.
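
    Of the methods listed, the Shapley value is the simplest to sketch: each polluter is charged its average marginal contribution to the coalition's treatment cost over all join orders. The three-player coalition cost function below is an invented example (joint treatment cheaper than separate treatment), not data from the paper.

    ```python
    from itertools import permutations

    # Assumed coalition treatment costs for polluters A, B, C
    cost = {
        frozenset(): 0,
        frozenset("A"): 30, frozenset("B"): 40, frozenset("C"): 50,
        frozenset("AB"): 60, frozenset("AC"): 70, frozenset("BC"): 80,
        frozenset("ABC"): 96,
    }

    def shapley(players, cost):
        """Average marginal cost contribution over all join orders."""
        alloc = {p: 0.0 for p in players}
        perms = list(permutations(players))
        for order in perms:
            coalition = frozenset()
            for p in order:
                alloc[p] += cost[coalition | {p}] - cost[coalition]
                coalition = coalition | {p}
        return {p: v / len(perms) for p, v in alloc.items()}

    alloc = shapley("ABC", cost)
    print({p: round(v, 2) for p, v in sorted(alloc.items())})
    # The allocations always sum to the grand-coalition cost (96 here).
    ```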

  12. Optimization of costs versus radiation exposures in decommissioning

    International Nuclear Information System (INIS)

    Konzek, G.J.

    1979-01-01

    The estimated worth of decommissioning optimization planning during each phase of the reactor's life cycle is dependent on many variables. The major variables are tabulated and relatively ranked. For each phase, optimization qualitative values (i.e., cost, safety, maintainability, ALARA, and decommissioning considerations) are estimated and ranked according to their short-term and long-term potential benefits. These estimates depend on the quality of the input data, interpretation of that data, and engineering judgment. Once identified and ranked, these considerations form an integral part of the information data base from which estimates, decisions, and alternatives are derived. The optimization of costs and the amount of occupational radiation exposure reductions are strongly interrelated during decommissioning. Realizing that building the necessary infrastructure for decommissioning will take time is an important first step in any decommissioning plan. In addition, the following conclusions are established to achieve optimization of costs and reduced occupational radiation exposures: the assignment of cost versus man-rem is item-specific and sensitive to the expertise of many interrelated disciplines; a commitment to long-term decommissioning planning by management will provide the conditions needed to achieve optimization; and, to be most effective, costs and exposure reduction are sensitive to the nearness of the decommissioning operation. For a new plant, it is best to start at the beginning of the cycle, update continually, consider innovations, and realize the full potential and benefits of this concept. For an older plant, the life cycle methodology permits a comprehensive review of the plant history and the formulation of an orderly decommissioning program based on planning, organization, and effort.

  13. Interstellar Probe: First Step to the Stars

    Science.gov (United States)

    McNutt, R. L., Jr.

    2017-12-01

    The idea of an "Interstellar Probe," a robotic spacecraft traveling into the nearby interstellar medium for the purpose of scientific investigation, dates to the mid-1960s. The Voyager Interstellar Mission (VIM), an "accidental" 40-year-old by-product of the Grand Tour of the solar system, has provided initial answers to the problem of the global heliospheric configuration and the details of its interface with interstellar space. But the twin Voyager spacecraft have, at most, only another decade of lifetime, and only Voyager 1 has emerged from the heliosheath interaction region. To understand the nature of the interaction, a near-term mission to the "near-by" interstellar medium with modern and focused instrumentation remains a compelling priority. Imaging of energetic neutral atoms (ENAs) by the Ion Neutral CAmera (INCA) on Cassini and from the Interstellar Boundary Explorer (IBEX) in Earth orbit have provided significant new insights into the global interaction region but point to discrepancies with our current understanding. Exploring "as far as possible" into "pristine" interstellar space can resolve these. Hence, reaching large heliocentric distances rapidly is a driver for an Interstellar Probe. Such a mission is timely; understanding the interstellar context of exoplanet systems - and perhaps the context for the emergence of life both here and there - hinges upon what we can discover within our own stellar neighborhood. With current spacecraft technology and high-capability launch vehicles, such as the Space Launch System (SLS), a small, but extremely capable spacecraft, could be dispatched to the near-by interstellar medium with at least twice the speed of the Voyagers. Challenges remain with payload mass and power constraints for optimized science measurements. Mission longevity, as experienced by, but not designed into, the Voyagers, communications capability, and radioisotope power system performance and lifetime are solvable engineering challenges. 
Such …

  14. Design optimization for cost and quality: The robust design approach

    Science.gov (United States)

    Unal, Resit

    1990-01-01

    Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
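
    The statistical measure the record describes can be sketched concretely: one common Taguchi formulation is the "larger-the-better" signal-to-noise ratio, SN = -10·log10(mean(1/y_i²)). This is a standard form from robust design, offered here as a hedged illustration rather than the specific measure used in the paper; the response values are invented.

    ```python
    import math

    def sn_larger_is_better(ys):
        """Taguchi larger-the-better signal-to-noise ratio in dB."""
        return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

    # Two candidate parameter settings with the same mean response (10):
    # the setting that is less sensitive to noise scores a higher SN ratio.
    stable = [9.9, 10.0, 10.1]
    noisy = [6.0, 10.0, 14.0]
    print(round(sn_larger_is_better(stable), 2),
          round(sn_larger_is_better(noisy), 2))
    ```

    In a full robust design study, SN ratios like this would be computed for each row of an orthogonal array to pick the parameter levels most robust to noise factors.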

  15. Cost-effectiveness analysis of optimal strategy for tumor treatment

    International Nuclear Information System (INIS)

    Pang, Liuyong; Zhao, Zhong; Song, Xinyu

    2016-01-01

    We propose and analyze an antitumor model with combined immunotherapy and chemotherapy. Firstly, we explore the treatment effects of single immunotherapy and single chemotherapy, respectively. Results indicate that neither immunotherapy nor chemotherapy alone is adequate to cure a tumor. Hence, we apply optimal control theory to investigate how the combination of immunotherapy and chemotherapy should be implemented, for a certain time period, in order to reduce the number of tumor cells while minimizing the implementation cost of the treatment strategy. Secondly, we establish the existence of the optimality system and use Pontryagin’s Maximum Principle to characterize the optimal levels of the two treatment measures. Furthermore, we calculate the incremental cost-effectiveness ratios to analyze the cost-effectiveness of all possible combinations of the two treatment measures. Finally, numerical results show that the combination of immunotherapy and chemotherapy is the most cost-effective strategy for tumor treatment, and is able to eliminate an entire tumor of size 4.470 × 10^8 in a year.

  16. Optimal Cost-Analysis and Design of Circular Footings

    Directory of Open Access Journals (Sweden)

    Prabir K. Basudhar

    2012-10-01

    The study pertains to the optimal cost-analysis and design of a circular footing subjected to generalized loadings using the sequential unconstrained minimization technique (SUMT) in conjunction with Powell’s conjugate direction method for multidimensional search and the quadratic interpolation method for one-dimensional minimization. The cost of the footing is minimized while satisfying all the structural and geotechnical engineering design considerations. As the extended penalty function method has been used to convert the constrained problem into an unconstrained one, the developed technique is capable of handling both feasible and infeasible initial design vectors. The net saving in cost, starting from the best possible manual design, ranges from 10 to 20%. For all practical purposes, the optimum cost is independent of the initial design point. It was observed that, for better convergence, the transition parameter should be chosen to be at least 100 times the initial penalty parameter kr.

  17. Optimization approach for saddling cost of medical cyclotrons with fuzziness

    International Nuclear Information System (INIS)

    Abass, S.A.; Massoud, E.M.A.

    2007-01-01

    Most radiation fields are combinations of different kinds of radiation. The radiations of most significance are fast neutrons, thermal neutrons, primary gammas and secondary gammas. Thermos's composite shielding materials are designed to attenuate these types of radiation. The shielding design requires an accurate cost-benefit analysis based on an uncertainty optimization technique. The theory of fuzzy sets has been employed to formulate and solve the problem of cost-benefit analysis of a medical cyclotron. This medical radioisotope production cyclotron is based in Sydney, Australia.

  18. Investigations of a Cost-Optimal Zero Energy Balance

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna; Nørgaard, Jesper; Heiselberg, Per

    2012-01-01

    The Net Zero Energy Building (Net ZEB) concept is worldwide recognised as a promising solution for decreasing buildings’ energy use. Nevertheless, a consistent definition of the Net ZEB concept is constantly under discussion. One of the points on the Net ZEB agenda is the zero energy balance...... and has taken the viewpoint of a private building owner to investigate which types of energy use should be included in the cost-optimal zero energy balance. The analysis is conducted for five renewable energy supply systems and five user profiles with a study case of a multi-storey residential Net ZEB. The results...... have indicated that, with current energy prices and technology, a cost-optimal Net ZEB zero energy balance accounts for only the building-related energy use. Moreover, a high user-related energy use argues even more in favour of excluding appliances from the zero energy balance....

  19. Optimization of costs for the DOEL 3 steam generator replacement

    International Nuclear Information System (INIS)

    Leblois, C.

    1994-01-01

    Several aspects of steam generator replacement economics are discussed on the basis of the recent replacement carried out at the Doel 3 unit. The choice between repair and replacement policies, as well as the selection of the intervention date, was based on a comparison of costs in which various possible scenarios were examined. The contractual approach for the different works to be performed was also an important point, as was the project organization, in which CAD played an important role. This organization made it possible to optimize the outage duration and to carry out numerous interventions in the reactor building in parallel with the replacement itself. A final aspect of the optimization of costs is the possibility to uprate the plant power. In the case of Doel 3, the plant restarted with a nominal power increased by 10%, of which 5.7% was made possible by the increase of the SG heat transfer area. (Author) 6 refs

  20. Cost optimization model and its heuristic genetic algorithms

    International Nuclear Information System (INIS)

    Liu Wei; Wang Yongqing; Guo Jilin

    1999-01-01

    Interest and escalation account for a large proportion of the cost of nuclear power plant construction. In order to optimize this cost, a mathematical model of cost optimization for nuclear power plant construction was proposed, which takes the maximum net present value as the optimization goal. The model is based on the activity networks of the project and is an NP problem. A heuristic genetic algorithm (HGA) for the model was introduced. In the algorithm, a solution is represented by a string of numbers, each of which denotes the priority of an activity for assigned resources. An HGA with this encoding method can overcome the difficulty of obtaining feasible solutions that arises when traditional GAs are used to solve the model. The critical path of the activity network is found using the concept of a predecessor matrix. An example was computed with the HGA programmed in the C language. The results indicate that the model is suitable for the objective, and the algorithm is effective in solving the model
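
    A minimal sketch of the priority-based encoding described above, with a hypothetical four-activity network: decoding a priority string always yields a precedence-feasible sequence, which is the property that lets the heuristic GA avoid infeasible offspring.

```python
# Hypothetical project: activity -> (duration, set of predecessor activities).
activities = {
    "A": (3, set()),
    "B": (2, {"A"}),
    "C": (4, {"A"}),
    "D": (2, {"B", "C"}),
}

def decode(priorities):
    """Decode a priority string into a precedence-feasible activity sequence:
    repeatedly start the eligible activity with the highest priority."""
    sequence, done = [], set()
    remaining = set(activities)
    while remaining:
        # An activity is eligible once all of its predecessors are done.
        eligible = [a for a in remaining if activities[a][1] <= done]
        nxt = max(eligible, key=lambda a: priorities[a])
        sequence.append(nxt)
        done.add(nxt)
        remaining.remove(nxt)
    return sequence

# Any chromosome of priorities decodes to a feasible sequence, so crossover
# and mutation can never produce precedence violations.
seq = decode({"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.5})
```

    In the full model each decoded sequence would then be scheduled against resource limits and scored by net present value; here only the decoding step is shown.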

  1. The Interstellar Medium

    CERN Document Server

    Lequeux, James

    2005-01-01

    Describing interstellar matter in our galaxy in all of its various forms, this book also considers the physical and chemical processes that are occurring within this matter. The first seven chapters present the various components making up the interstellar matter and detail the ways that we are able to study them. The following seven chapters are devoted to the physical, chemical and dynamical processes that control the behaviour of interstellar matter. These include the instabilities and cloud collapse processes that lead to the formation of stars. The last chapter summarizes the transformations that can occur between the different phases of the interstellar medium. Emphasizing methods over results, "The Interstellar Medium" is written for graduate students, for young astronomers, and also for any researchers who have developed an interest in the interstellar medium.

  2. Energy Cost Optimization in a Water Supply System Case Study

    Directory of Open Access Journals (Sweden)

    Daniel F. Moreira

    2013-01-01

    The majority of the life cycle costs (LCC) of a pump are related to the energy spent in pumping, with the rest being related to the purchase and maintenance of the equipment. Any optimizations in the energy efficiency of the pumps result in a considerable reduction of the total operational cost. The Fátima water supply system in Portugal was analyzed in order to minimize its operational energy costs. Different pump characteristic curves were analyzed and modeled in order to achieve the most efficient operation point. To determine the best daily pumping operational scheduling pattern, genetic algorithm (GA) optimization embedded in the modeling software was considered in contrast with a manual override (MO) approach. The main goal was to determine which pumps and what daily scheduling allowed the best economical solution. At the end of the analysis it was possible to reduce the original daily energy costs by 43.7%. This was achieved by introducing more appropriate pumps and by intelligent programming of their operation. Given the heuristic nature of GAs, different approaches were employed and the most common errors were pinpointed, so that this investigation can serve as a reference for similar future developments.

  3. Cost optimization of induction linac drivers for linear colliders

    International Nuclear Information System (INIS)

    Barletta, W.A.

    1986-01-01

    Recent developments in high-reliability components for linear induction accelerators (LIAs) make possible the use of these devices as economical power drivers for very high gradient linear colliders. A particularly attractive realization of this ''two-beam accelerator'' approach is to configure the LIA as a monolithic relativistic klystron operating at 10 to 12 GHz with induction cells providing periodic reacceleration of the high current beam. Based upon a recent engineering design of a state-of-the-art, 10- to 20-MeV LIA at Lawrence Livermore National Laboratory, this paper presents an algorithm for scaling the cost of the relativistic klystron to the parameter regime of interest for the next generation of high energy physics machines. The algorithm allows optimization of the collider luminosity with respect to cost by varying the characteristics (pulse length, drive current, repetition rate, etc.) of the klystron. It also allows us to explore cost sensitivities as a guide to research strategies for developing advanced accelerator technologies

  4. Flash memories economic principles of performance, cost and reliability optimization

    CERN Document Server

    Richter, Detlev

    2014-01-01

    The subject of this book is a model-based quantitative performance indicator methodology applicable to the performance, cost and reliability optimization of non-volatile memories. The complex example of flash memories is used to introduce and apply the methodology. It has been developed by the author based on an industrial 2-bit to 4-bit per cell flash development project. For the first time, design and cost aspects of 3D integration of flash memory are treated in this book. Cell, array, performance and reliability effects of flash memories are introduced and analyzed. Key performance parameters are derived to handle the flash complexity. A performance and array memory model is developed and a set of performance indicators characterizing architecture, cost and durability is defined. Flash memories are selected to apply the Performance Indicator Methodology to quantify design and technology innovation. A graphical representation based on trend lines is introduced to support a requirement based pr...

  5. A Low Cost Structurally Optimized Design for Diverse Filter Types

    Science.gov (United States)

    Kazmi, Majida; Aziz, Arshad; Akhtar, Pervez; Ikram, Nassar

    2016-01-01

    A wide range of image processing applications deploy two-dimensional (2D) filters for performing diversified tasks such as image enhancement, edge detection, noise suppression, multi-scale decomposition and compression. All of these tasks require multiple types of 2D filters simultaneously to acquire the desired results. The resource-hungry conventional approach is not a viable option for implementing these computationally intensive 2D filters, especially in a resource-constrained environment; this calls for optimized solutions. Mostly, the optimization of these filters is based on exploiting structural properties. A common shortcoming of all previously reported optimized approaches is their restricted applicability to only a specific filter type. These narrow-scoped solutions completely disregard the versatility attribute of advanced image processing applications and in turn offset their effectiveness while implementing a complete application. This paper presents an efficient framework which exploits the structural properties of 2D filters to effectually reduce their computational cost, along with the added advantage of versatility in supporting diverse filter types. A composite symmetric filter structure is introduced which exploits the identities of quadrant and circular T-symmetries in two distinct filter regions simultaneously. These T-symmetries effectually reduce the number of filter coefficients and consequently the multiplier count. The proposed framework at the same time empowers this composite filter structure with the additional capability of realizing all of its Ψ-symmetry based subtypes and also its special asymmetric filter case. The two-fold optimized framework thus reduces filter computational cost by up to 75% as compared to the conventional approach, while its versatility attribute not only supports diverse filter types but also offers further cost reduction via resource sharing for sequential implementation of diversified image
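
    The coefficient folding that symmetry enables can be illustrated for plain quadrant symmetry (a simpler case than the paper's composite T-symmetric structure): mirror-image samples are added first, so a 5×5 kernel needs 9 rather than 25 multiplications per output point, approaching the 75% reduction as the kernel grows. All numbers below are randomly generated purely for the consistency check.

```python
import random

random.seed(0)
M = 2  # kernel radius -> (2M+1) x (2M+1) = 5x5 kernel

# Unique quadrant coefficients q[i][j], 0 <= i, j <= M.
q = [[random.uniform(-1, 1) for _ in range(M + 1)] for _ in range(M + 1)]
# Quadrant symmetry: h[i][j] = q[|i|][|j|] for -M <= i, j <= M.
h = [[q[abs(i)][abs(j)] for j in range(-M, M + 1)] for i in range(-M, M + 1)]

# One input window (the samples under the kernel at a single output point).
x = [[random.uniform(-1, 1) for _ in range(2 * M + 1)] for _ in range(2 * M + 1)]

# Direct form: (2M+1)^2 = 25 multiplications.
direct = sum(h[i][j] * x[i][j] for i in range(2 * M + 1) for j in range(2 * M + 1))

# Folded form: add mirror-image samples first, then multiply once per unique
# coefficient -> (M+1)^2 = 9 multiplications (64% fewer here, ~75% as M grows).
folded = 0.0
for i in range(M + 1):
    for j in range(M + 1):
        rows = {M + i, M - i}    # sets, so the centre row/column counts once
        cols = {M + j, M - j}
        folded += q[i][j] * sum(x[r][c] for r in rows for c in cols)
```

    The folded and direct forms produce identical outputs; only the multiplier count changes, which is what hardware-oriented filter optimizations exploit.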

  6. Cost Effectiveness Analysis of Optimal Malaria Control Strategies in Kenya

    Directory of Open Access Journals (Sweden)

    Gabriel Otieno

    2016-03-01

    Malaria remains a leading cause of mortality and morbidity among children under five and pregnant women in sub-Saharan Africa, but it is preventable and controllable provided currently recommended interventions are properly implemented. Better utilization of malaria intervention strategies will ensure value for money and produce health improvements in the most cost-effective way. The purpose of the value-for-money drive is to develop a better understanding (and better articulation) of costs and results so that more informed, evidence-based choices can be made. Cost-effectiveness analysis is carried out to inform decision makers on where to allocate resources for malaria interventions. This study carries out a cost-effectiveness analysis of one or all possible combinations of the optimal malaria control strategies (Insecticide-Treated Bednets (ITNs), Treatment, Indoor Residual Spraying (IRS) and Intermittent Preventive Treatment for Pregnant Women (IPTp)) for four different transmission settings in order to assess the extent to which the intervention strategies are beneficial and cost-effective. For the four different transmission settings in Kenya, the optimal solutions for the 15 strategies and their associated effectiveness are computed. Cost-effectiveness analysis using the Incremental Cost-Effectiveness Ratio (ICER) was done after ranking the strategies in order of increasing effectiveness (total infections averted). The findings show that for the endemic regions the combination of ITNs, IRS, and IPTp was the most cost-effective of all the combined strategies developed in this study for malaria control and prevention; for the epidemic-prone areas, it is the combination of treatment and IRS; for seasonal areas, the use of ITNs plus treatment; and for the low-risk areas, the use of treatment only. Malaria transmission in Kenya can be minimized through tailor-made intervention strategies for malaria control.
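
    The ICER calculation described above can be sketched as follows; the costs and infections averted are invented placeholders, not figures from the study.

```python
def icers(strategies):
    """Rank strategies by effectiveness (infections averted) and compute each
    one's incremental cost-effectiveness ratio versus the previous strategy."""
    ranked = sorted(strategies, key=lambda s: s["averted"])
    result, prev_cost, prev_eff = [], 0.0, 0.0
    for s in ranked:
        icer = (s["cost"] - prev_cost) / (s["averted"] - prev_eff)
        result.append((s["name"], icer))
        prev_cost, prev_eff = s["cost"], s["averted"]
    return result

# Invented placeholder numbers (cost in $, effectiveness in infections averted).
strategies = [
    {"name": "Treatment only", "cost": 2.0e6, "averted": 1.0e5},
    {"name": "ITNs + Treatment", "cost": 5.0e6, "averted": 2.2e5},
    {"name": "ITNs + IRS + IPTp", "cost": 7.0e6, "averted": 3.5e5},
]
ranking = icers(strategies)
# A strategy with a higher ICER than a more effective alternative would be
# flagged as (extendedly) dominated and dropped before the final comparison.
```

    Each ICER is the extra cost per additional infection averted relative to the next-less-effective option, which is how the study compares its 15 strategy combinations.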

  7. Linear versus quadratic portfolio optimization model with transaction cost

    Science.gov (United States)

    Razak, Norhidayah Bt Ab; Kamil, Karmila Hanim; Elias, Siti Masitah

    2014-06-01

    Optimization models have been introduced as decision-making tools in investment. Hence, it is always a big challenge for investors to select the best model that can fulfill their goals in investment with respect to risk and return. In this paper we aim to discuss and compare the portfolio allocation and performance generated by quadratic and linear portfolio optimization models, namely the Markowitz and Maximin models, respectively. The application of these models has been proven to be significant and popular among others. However, transaction cost has been debated as one of the important aspects that should be considered in portfolio reallocation, as portfolio return can be significantly reduced when transaction cost is taken into consideration. Therefore, recognizing the importance of considering transaction cost when calculating portfolio return, we formulate this paper using data from Shariah-compliant securities listed on Bursa Malaysia. It is expected that results from this paper will effectively justify the advantage of one model over the other and shed some light in the quest to find the best decision-making tool in investment for individual investors.
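
    A toy illustration of the Maximin model with a proportional transaction cost, assuming hypothetical scenario returns; a coarse grid search over two assets stands in for the linear-programming solver the model would normally use.

```python
# Hypothetical period returns for (asset 1, asset 2) under three scenarios.
scenarios = [(0.08, 0.02), (0.01, 0.05), (-0.03, 0.04)]
current = (1.0, 0.0)   # weights held before rebalancing
FEE = 0.01             # proportional transaction cost per unit traded

def worst_net_return(w1):
    """Worst-case scenario return of weights (w1, 1 - w1), net of the
    transaction cost charged on the rebalanced amount."""
    w = (w1, 1.0 - w1)
    cost = FEE * sum(abs(w[i] - current[i]) for i in range(2))
    return min(w[0] * r1 + w[1] * r2 for r1, r2 in scenarios) - cost

# Maximin: maximize the worst-case net return (grid search in lieu of an LP).
best_w1 = max((i / 100.0 for i in range(101)), key=worst_net_return)
```

    Charging the fee on the rebalanced amount shifts the optimum towards the currently held portfolio, which is the effect the paper studies when transaction cost is included in the return calculation.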

  8. Nature of interstellar turbulence

    International Nuclear Information System (INIS)

    Altunin, V.

    1981-01-01

    A significant role in producing the pattern of interstellar scintillation observed in discrete radio sources may be played by the magnetoacoustic turbulence that will be generated as shock waves propagate at velocity V_sh ≈ 20-100 km/s through the interstellar medium, as well as by irregularities in the stellar wind emanating from OB-type stars

  9. Interstellar hydrogen bonding

    Science.gov (United States)

    Etim, Emmanuel E.; Gorai, Prasanta; Das, Ankan; Chakrabarti, Sandip K.; Arunan, Elangannan

    2018-06-01

    This paper reports the first extensive study of the existence and effects of interstellar hydrogen bonding. The reactions that occur on the surface of the interstellar dust grains are the dominant processes by which interstellar molecules are formed. Water molecules constitute about 70% of the interstellar ice. These water molecules serve as the platform for hydrogen bonding. High-level quantum chemical simulations for the hydrogen bond interaction between 20 interstellar molecules (known and possible) and water are carried out using different ab initio methods. It is evident that if the formation of these species is mainly governed by the ice phase reactions, there is a direct correlation between the binding energies of these complexes and the gas phase abundances of these interstellar molecules. Interstellar hydrogen bonding may cause lower gas abundance of the complex organic molecules (COMs) at low temperature. From these results, ketenes, whose less stable isomers that are more strongly bonded to the surface of the interstellar dust grains have been observed, are proposed as suitable candidates for astronomical observations.

  10. Life Cycle Cost Optimization of a BOLIG+ Zero Energy Building

    DEFF Research Database (Denmark)

    Marszal, Anna Joanna

    . However, before being fully implemented in the national building codes and international standards, the ZEB concept requires a clear understanding and a uniform definition. The ZEB concept is an energy-conservation solution, whose successful adaptation in real life depends significantly on private...... building owners’ approach to it. For this particular target group, the cost is often an obstacle when investing money in environmental or climate friendly products. Therefore, this PhD project took the perspective of a future private ZEB owner to investigate the cost-optimal Net ZEB definition applicable...... in the Danish context. The review of the various ZEB approaches indicated a general concept of a Zero Energy Building as a building with significantly reduced energy demand that is balanced by an equivalent energy generation from renewable sources. And, with this as a general framework, each ZEB definition...

  12. Physical Protection System Upgrades - Optimizing for Performance and Cost

    International Nuclear Information System (INIS)

    Hicks, Mary Jane; Bouchard, Ann M.

    1999-01-01

    CPA--Cost and Performance Analysis--is an architecture that supports analysis of physical protection systems and upgrade options. ASSESS (Analytic System and Software for Evaluating Security Systems), a tool for evaluating performance of physical protection systems, currently forms the cornerstone for evaluating detection probabilities and delay times of the system. Cost and performance data are offered to the decision-maker at the systems level and to technologists at the path-element level. A new optimization engine has been attached to the CPA methodology to automate analyses of many combinations (portfolios) of technologies. That engine controls a new analysis sequencer that automatically modifies ASSESS PPS files (facility descriptions), automatically invokes ASSESS Outsider analysis and then saves results for post-processing. Users can constrain the search to an upper bound on total cost, to a lower bound on level of performance, or to include specific technologies or technology types. This process has been applied to a set of technology development proposals to identify those portfolios that provide the most improvement in physical security for the lowest cost to install, operate and maintain at a baseline facility
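
    The portfolio sweep that the optimization engine automates can be caricatured as a small exhaustive search under a cost bound; the upgrade options, costs, and (assumed additive) performance gains below are hypothetical, standing in for the ASSESS-derived detection and delay results.

```python
from itertools import combinations

# Hypothetical upgrade options: name -> (installed cost, performance gain);
# additive gains are an illustration-only assumption.
options = {
    "fence sensor": (40, 0.10),
    "camera upgrade": (60, 0.15),
    "delay barrier": (50, 0.20),
    "access control": (30, 0.08),
}
BUDGET = 120

def best_portfolio(options, budget):
    """Score every technology portfolio and keep the best one under budget,
    mimicking the automated sweep the analysis sequencer performs."""
    best, best_gain = set(), 0.0
    for r in range(len(options) + 1):
        for combo in combinations(options, r):
            cost = sum(options[o][0] for o in combo)
            gain = sum(options[o][1] for o in combo)
            if cost <= budget and gain > best_gain:
                best, best_gain = set(combo), gain
    return best, best_gain

portfolio, gain = best_portfolio(options, BUDGET)
```

    The same search can be constrained the other way around, as the abstract notes: fix a lower bound on performance and minimize cost, or force specific technologies into every candidate portfolio.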

  13. Testing of Strategies for the Acceleration of the Cost Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ponciroli, Roberto [Argonne National Lab. (ANL), Argonne, IL (United States); Vilim, Richard B. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-31

    The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economic dispatch (ED) and capacity planning solutions for the hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e. the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each one of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides for very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each one of the unit components. The preconditioner samples a set of stochastic power scenarios for each one of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated.
After having simulated a sufficient number of power histories, the configuration which ensures the highest profit is selected as the optimal
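
    A minimal sketch of such a Monte Carlo preconditioner, with a made-up single-period dispatch model in place of the Modelica component simulations: random dispatch levels are sampled, a profit function scores each, and the best sample becomes the optimizer's initial guess. All prices and capacities are hypothetical.

```python
import random

random.seed(1)

# Made-up single-period dispatch: BOP output is split between the grid and
# hydrogen production, and the SES covers any grid shortfall.
ELEC_PRICE, H2_PRICE, SES_COST = 30.0, 45.0, 38.0   # $/MWh-equivalent
DEMAND, BOP_CAP, IP_CAP = 100.0, 120.0, 40.0        # MW

def profit(to_h2):
    """Revenue from electricity and hydrogen minus the SES generation cost."""
    to_grid = min(BOP_CAP - to_h2, DEMAND)
    ses = DEMAND - to_grid          # shortfall covered by the SES
    return ELEC_PRICE * to_grid + H2_PRICE * to_h2 - SES_COST * ses

def precondition(samples=2000):
    """Sample random hydrogen-production levels and return the most
    profitable one as the initial guess handed to the optimizer."""
    candidates = [random.uniform(0.0, IP_CAP) for _ in range(samples)]
    return max(candidates, key=profit)

guess = precondition()   # lands near the analytic optimum of 20 MW here
```

    A cheap scoring function makes thousands of samples affordable, so the expensive coupled optimization can start close to a good dispatch instead of from an arbitrary point.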

  14. Designing the optimal bit: balancing energetic cost, speed and reliability.

    Science.gov (United States)

    Deshpande, Abhishek; Gopalkrishnan, Manoj; Ouldridge, Thomas E; Jones, Nick S

    2017-08-01

    We consider the challenge of operating a reliable bit that can be rapidly erased. We find that both erasing and reliability times are non-monotonic in the underlying friction, leading to a trade-off between erasing speed and bit reliability. Fast erasure is possible at the expense of low reliability at moderate friction, and high reliability comes at the expense of slow erasure in the underdamped and overdamped limits. Within a given class of bit parameters and control strategies, we define 'optimal' designs of bits that meet the desired reliability and erasing time requirements with the lowest operational work cost. We find that optimal designs always saturate the bound on the erasing time requirement, but can exceed the required reliability time if critically damped. The non-trivial geometry of the reliability and erasing time scales allows us to exclude large regions of parameter space as suboptimal. We find that optimal designs are either critically damped or close to critical damping under the erasing procedure.

  15. No Cost – Low Cost Compressed Air System Optimization in Industry

    Science.gov (United States)

    Dharma, A.; Budiarsa, N.; Watiniasih, N.; Antara, N. G.

    2018-04-01

    Energy conservation is a systematic, integrated effort to preserve energy sources and improve the efficiency of energy utilization; energy should be used efficiently without curtailing the energy services that are needed. Energy conservation efforts are applied at all stages of utilization, from the energy resources to the final use, by using efficient technology and cultivating an energy-efficient lifestyle. The most common way is to promote end-use energy efficiency in industry and to overcome barriers to achieving such efficiency by using system energy optimization programs. The facts show that energy-saving efforts in the process usually focus only on replacing equipment and not on improving the system as a whole. In this research, a framework for sustainable energy reduction in companies that have or have not implemented an energy management system (EnMS) is developed: a systematic technical approach to accurately evaluating a compressed-air system and its optimization potential through observation, measurement and verification of environmental conditions and processes. The physical quantities of the system, such as air flow, pressure and electrical power, measured at given times, are then processed using comparative analysis methods in this industry, to show that the system-level approach provides greater energy savings than the component approach, at no cost to the lowest cost (no cost - low cost). The process of evaluating energy utilization and energy-saving opportunities will provide recommendations for increasing efficiency in the industry, reducing CO2 emissions and improving environmental quality.

  16. Life Cycle Cost optimization of a BOLIG+ Zero Energy Building

    Energy Technology Data Exchange (ETDEWEB)

    Marszal, A.J.

    2011-12-15

    Buildings consume approximately 40% of the world's primary energy use. Considering the total energy consumption throughout the whole life cycle of a building, the energy performance and supply is an important issue in the context of climate change, scarcity of energy resources and reduction of global energy consumption. A building that both consumes and produces energy, labelled the Zero Energy Building (ZEB), is seen as one of the solutions that could change the picture of energy consumption in the building sector, and thus contribute to the reduction of global energy use. However, before being fully implemented in the national building codes and international standards, the ZEB concept requires a clear understanding and a uniform definition. The ZEB concept is an energy-conservation solution whose successful adaptation in real life depends significantly on private building owners' approach to it. For this particular target group, the cost is often an obstacle when investing money in environmental or climate friendly products. Therefore, this PhD project took the perspective of a future private ZEB owner to investigate the cost-optimal Net ZEB definition applicable in the Danish context. The review of the various ZEB approaches indicated a general concept of a Zero Energy Building as a building with significantly reduced energy demand that is balanced by an equivalent energy generation from renewable sources. And, with this as a general framework, each ZEB definition should further specify: (1) the connection, or the lack of it, to the energy infrastructure, (2) the unit of the balance, (3) the period of the balance, (4) the types of energy use included in the balance, (5) the minimum energy performance requirements, (6) the renewable energy supply options, and, if applicable, (7) the requirements of the building-grid interaction. Moreover, the study revealed that the future ZEB definitions applied in Denmark should mostly be focused on grid

  17. NASA's interstellar probe mission

    International Nuclear Information System (INIS)

    Liewer, P.C.; Ayon, J.A.; Wallace, R.A.; Mewaldt, R.A.

    2000-01-01

    NASA's Interstellar Probe will be the first spacecraft designed to explore the nearby interstellar medium and its interaction with our solar system. As envisioned by NASA's Interstellar Probe Science and Technology Definition Team, the spacecraft will be propelled by a solar sail to reach >200 AU in 15 years. Interstellar Probe will investigate how the Sun interacts with its environment and will directly measure the properties and composition of the dust, neutrals and plasma of the local interstellar material which surrounds the solar system. In the mission concept developed in the spring of 1999, a 400-m diameter solar sail accelerates the spacecraft to ∼15 AU/year, roughly 5 times the speed of Voyager 1 and 2. The sail is used to first bring the spacecraft to ∼0.25 AU to increase the radiation pressure before heading out in the interstellar upwind direction. After jettisoning the sail at ∼5 AU, the spacecraft coasts to 200-400 AU, exploring the Kuiper Belt, the boundaries of the heliosphere, and the nearby interstellar medium

  18. Derivative-free optimization under uncertainty applied to costly simulators

    International Nuclear Information System (INIS)

    Pauwels, Benoit

    2016-01-01

    The modeling of complex phenomena encountered in industrial issues can lead to the study of numerical simulation codes. These simulators may require extensive execution time (from hours to days), involve uncertain parameters and even be intrinsically stochastic. Importantly, within the context of simulation-based optimization, the derivatives of the outputs with respect to the inputs may be nonexistent, inaccessible or too costly to approximate reasonably. This thesis is organized in four chapters. The first chapter discusses the state of the art in derivative-free optimization and uncertainty modeling. The next three chapters introduce three independent - although connected - contributions to the field of derivative-free optimization in the presence of uncertainty. The second chapter addresses the emulation of costly stochastic simulation codes - stochastic in the sense that simulations run with the same input parameters may lead to distinct outputs. Such was the subject of the CODESTOCH project carried out at the Summer mathematical research center on scientific computing and its applications (CEMRACS) during the summer of 2013, together with two Ph.D. students from Électricité de France (EDF) and the Atomic Energy and Alternative Energies Commission (CEA). We designed four methods to build emulators for functions whose values are probability density functions. These methods were tested on two toy functions and applied to industrial simulation codes concerned with three complex phenomena: the spatial distribution of molecules in a hydrocarbon system (IFPEN), the life cycle of large electric transformers (EDF) and the repercussions of a hypothetical accident in a nuclear plant (CEA). Emulation was a preliminary process towards optimization in the first two cases. In the third chapter we consider the influence of inaccurate objective function evaluations on direct search - a classical derivative-free optimization method. In real settings inaccuracy may never vanish

  19. Optimal design of degradation tests in presence of cost constraint

    International Nuclear Information System (INIS)

    Wu, S.-J.; Chang, C.-T.

    2002-01-01

Degradation testing is a useful technique for providing information about the lifetime of highly reliable products; in such a test we obtain degradation measurements over time. In general, the degradation data are modeled by a nonlinear regression model with random coefficients. If we can obtain the estimates of the parameters under the model, then the failure time distribution can be estimated. However, in order to obtain a precise estimate of a percentile of the failure time distribution, one needs to design an optimal degradation test. Therefore, this study proposes an approach to determine the number of units to test, the inspection frequency, and the termination time of a degradation test under a predetermined experimental cost such that the variance of the estimator of the percentile of the failure time distribution is minimized. The method is applied to a numerical example and a sensitivity analysis is discussed

  20. The galactic interstellar medium

    CERN Document Server

    Burton, WB; Genzel, R

    1992-01-01

    This volume contains the papers of three extended lectures addressing advanced topics in astronomy and astrophysics. The topics discussed include the most recent observational data on interstellar matter outside our galaxy and the physics and chemistry of molecular clouds.

  1. Dynamics of interstellar matter

    International Nuclear Information System (INIS)

    Kahn, F.D.

    1975-01-01

    A review of the dynamics of interstellar matter is presented, considering the basic equations of fluid flow, plane waves, shock waves, spiral structure, thermal instabilities and early star cocoons. (B.R.H.)

  2. Aircraft path planning for optimal imaging using dynamic cost functions

    Science.gov (United States)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller and more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path in acquiring imagery for structure from motion (SfM) reconstructions and performing radiation surveys. This method allows SfM reconstructions to be carried out accurately and with minimal flight time so that the reconstructions can be executed efficiently. An assumption is made that 3D point cloud data are available prior to the flight. A discrete set of scan lines is proposed for the given area, with each line scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach to image-based terrain mapping, which is able to efficiently perform a 3D reconstruction of a large area without the use of GPS data.
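The transition between scan lines can be sketched with a simple greedy rule. The snippet below is our own illustrative simplification, not the authors' algorithm (which also scores scan-line visibility and folds aircraft dynamics into the cost): the aircraft always flies to the nearest unvisited scan line next, entering at its closer endpoint.

```python
import math

def order_scan_lines(lines, start=(0.0, 0.0)):
    """Greedy nearest-endpoint ordering of imaging scan lines.

    Each line is ((x1, y1), (x2, y2)). The aircraft repeatedly flies to
    the unvisited line with the closest endpoint, enters at that
    endpoint and exits at the other. Returns the oriented visit order.
    """
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    todo, pos, order = list(lines), start, []
    while todo:
        best = min(todo, key=lambda ln: min(d(pos, ln[0]), d(pos, ln[1])))
        todo.remove(best)
        a, b = best if d(pos, best[0]) <= d(pos, best[1]) else (best[1], best[0])
        order.append((a, b))
        pos = b  # exit at the far endpoint of the scan line
    return order
```

A full planner would additionally weigh visibility scores and the turn-radius and climb limits of the aircraft when costing the transitions between lines.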

  3. Interstellar organic chemistry.

    Science.gov (United States)

    Sagan, C.

    1972-01-01

    Most of the interstellar organic molecules have been found in the large radio source Sagittarius B2 toward the galactic center, and in such regions as W51 and the IR source in the Orion nebula. Questions of the reliability of molecular identifications are discussed together with aspects of organic synthesis in condensing clouds, degradational origin, synthesis on grains, UV natural selection, interstellar biology, and contributions to planetary biology.

  4. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    Science.gov (United States)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are not in use; it ensures compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resource and thereby do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs → the Hadoop/big data case; B. transactional applications → any application that processes continuous transactions (requests/responses). With reference to classical queueing models, we try to model a scenario where servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue and use its metrics to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to descend to the level of load (CPU/data) on each server to characterize the size requirement? How do we learn that based on job type?
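The sizing question in (1) can be made concrete with a toy quasi-stationary calculation. The sketch below is our own illustration, assuming a simple capacity/price model rather than anything from the abstract: it picks the server type with the best price per unit of usable capacity and sizes the pool for a predicted arrival rate.

```python
import math

def optimal_pool(arrival_rate, server_types, max_util=0.8):
    """Cost-optimal homogeneous pool for a predicted arrival rate.

    server_types maps a name to (capacity in jobs/s, price per hour).
    The cheapest type per unit of usable capacity is chosen, and the
    pool is sized so utilization stays below max_util.
    Returns (type name, server count, hourly cost).
    """
    name, (cap, price) = min(server_types.items(),
                             key=lambda kv: kv[1][1] / kv[1][0])
    n = math.ceil(arrival_rate / (cap * max_util))
    return name, n, n * price
```

In an elastic-scaling loop this computation would be re-run whenever the predicted job arrival rate changes, which is the quasi-stationary view the abstract alludes to.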

  5. Optimal Design and Operation of In-Situ Chemical Oxidation Using Stochastic Cost Optimization Toolkit

    Science.gov (United States)

    Kim, U.; Parker, J.; Borden, R. C.

    2014-12-01

    In-situ chemical oxidation (ISCO) has been applied at many dense non-aqueous phase liquid (DNAPL) contaminated sites. A stirred reactor-type model was developed that considers DNAPL dissolution using a field-scale mass transfer function, instantaneous reaction of oxidant with aqueous and adsorbed contaminant and with readily oxidizable natural oxygen demand ("fast NOD"), and second-order kinetic reactions with "slow NOD." DNAPL dissolution enhancement as a function of oxidant concentration and inhibition due to manganese dioxide precipitation during permanganate injection are included in the model. The DNAPL source area is divided into multiple treatment zones with different areas, depths, and contaminant masses based on site characterization data. The performance model is coupled with a cost module that involves a set of unit costs representing specific fixed and operating costs. Monitoring of groundwater and/or soil concentrations in each treatment zone is employed to assess ISCO performance and make real-time decisions on oxidant reinjection or ISCO termination. Key ISCO design variables include the oxidant concentration to be injected, time to begin performance monitoring, groundwater and/or soil contaminant concentrations to trigger reinjection or terminate ISCO, number of monitoring wells or geoprobe locations per treatment zone, number of samples per sampling event and location, and monitoring frequency. Design variables for each treatment zone may be optimized to minimize expected cost over a set of Monte Carlo simulations that consider uncertainty in site parameters. The model is incorporated in the Stochastic Cost Optimization Toolkit (SCOToolkit) program, which couples the ISCO model with a dissolved plume transport model and with modules for other remediation strategies. An example problem is presented that illustrates design tradeoffs required to deal with characterization and monitoring uncertainty. Monitoring soil concentration changes during ISCO

  6. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

    Science.gov (United States)

    Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

    2013-06-01

    We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
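The mechanics of a threshold rebalanced portfolio are easy to state. The following sketch is ours, not the authors' code (the threshold and cost values are arbitrary): a two-asset portfolio trades back to the target allocation only when the drifted allocation breaches the threshold band, paying a proportional cost on the traded volume.

```python
def threshold_rebalance(prices, target=0.5, eps=0.05, cost=0.001):
    """Simulate a two-asset threshold rebalanced portfolio.

    prices: sequence of (x1, x2) one-period price relatives. The
    portfolio trades back to the `target` fraction in asset 1 only when
    the drifted fraction leaves [target - eps, target + eps], paying a
    proportional transaction cost `cost` on the traded volume.
    Returns the cumulative wealth (initial wealth 1).
    """
    wealth, frac = 1.0, target
    for x1, x2 in prices:
        growth = frac * x1 + (1.0 - frac) * x2   # one-period growth
        wealth *= growth
        frac = frac * x1 / growth                # allocation drifts
        if abs(frac - target) > eps:             # threshold breached
            traded = abs(frac - target) * wealth # volume traded back
            wealth -= cost * traded
            frac = target
    return wealth
```

The paper's contribution is choosing the threshold that maximizes expected wealth under the log-normal price model; this sketch only shows the trading rule being optimized.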

  7. Particle swarm optimization algorithm based low cost magnetometer calibration

    Science.gov (United States)

Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data, from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The bias and scale factor errors estimated by the proposed algorithm improve the heading accuracy, and the results are statistically significant. The approach can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments
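A minimal version of such a calibration can be written with a textbook particle swarm. In the sketch below, which is our illustration (the swarm constants and the sphere-fit cost are conventional choices, not values taken from the paper), calibrated samples (m - b) / s are driven toward a sphere of constant field magnitude.

```python
import math
import random

def pso_calibrate(raw, field=1.0, n_particles=40, iters=300, seed=7):
    """Estimate magnetometer bias (bx, by, bz) and scale (sx, sy, sz)
    with a basic particle swarm. Calibrated samples (m - b) / s should
    have constant norm `field`; the swarm minimizes the squared norm
    error. Swarm constants (0.7 inertia, 1.5 cognitive/social weights)
    are conventional defaults, not values from the paper.
    Returns (best parameter vector [b, s], best cost).
    """
    rnd = random.Random(seed)

    def cost(p):
        b, s = p[:3], p[3:]
        err = 0.0
        for m in raw:
            r = math.sqrt(sum(((m[i] - b[i]) / s[i]) ** 2 for i in range(3)))
            err += (r - field) ** 2
        return err

    lo = [-1.0, -1.0, -1.0, 0.5, 0.5, 0.5]   # search box: bias then scale
    hi = [1.0, 1.0, 1.0, 2.0, 2.0, 2.0]
    pos = [[rnd.uniform(lo[d], hi[d]) for d in range(6)]
           for _ in range(n_particles)]
    vel = [[0.0] * 6 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(6):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

With synthetic measurements generated from known bias and scale values, the swarm drives the sphere-fit residual down without any gradient information, which is the property the abstract highlights.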

  8. Charging cost optimization for EV buses using neural network based energy predictor

    NARCIS (Netherlands)

    Nageshrao, S.P.; Jacob, J.; Wilkins, S.

    2017-01-01

    For conventional buses, based on the decades of their operational knowledge, public transport companies are able to optimize their cost of operation. However, with recent trend in the usage of electric buses, cost optimal operation can become challenging. In this paper an offline optimal charging

  9. Investigating nearby exoplanets via interstellar radar

    Science.gov (United States)

    Scheffer, Louis K.

    2014-01-01

    Interstellar radar is a potential intermediate step between passive observation of exoplanets and interstellar exploratory missions. Compared with passive observation, it has the traditional advantages of radar astronomy. It can measure surface characteristics, determine spin rates and axes, provide extremely accurate ranges, construct maps of planets, distinguish liquid from solid surfaces, find rings and moons, and penetrate clouds. It can do this even for planets close to the parent star. Compared with interstellar travel or probes, it also offers significant advantages. The technology required to build such a radar already exists, radar can return results within a human lifetime, and a single facility can investigate thousands of planetary systems. The cost, although too high for current implementation, is within the reach of Earth's economy.

  10. TRU Waste Management Program. Cost/schedule optimization analysis

    International Nuclear Information System (INIS)

    Detamore, J.A.; Raudenbush, M.H.; Wolaver, R.W.; Hastings, G.A.

    1985-10-01

    This Current Year Work Plan presents in detail a description of the activities to be performed by the Joint Integration Office Rockwell International (JIO/RI) during FY86. It breaks down the activities into two major work areas: Program Management and Program Analysis. Program Management is performed by the JIO/RI by providing technical planning and guidance for the development of advanced TRU waste management capabilities. This includes equipment/facility design, engineering, construction, and operations. These functions are integrated to allow transition from interim storage to final disposition. JIO/RI tasks include program requirements identification, long-range technical planning, budget development, program planning document preparation, task guidance development, task monitoring, task progress information gathering and reporting to DOE, interfacing with other agencies and DOE lead programs, integrating public involvement with program efforts, and preparation of reports for DOE detailing program status. Program Analysis is performed by the JIO/RI to support identification and assessment of alternatives, and development of long-term TRU waste program capabilities. These analyses include short-term analyses in response to DOE information requests, along with performing an RH Cost/Schedule Optimization report. Systems models will be developed, updated, and upgraded as needed to enhance JIO/RI's capability to evaluate the adequacy of program efforts in various fields. A TRU program data base will be maintained and updated to provide DOE with timely responses to inventory related questions

  11. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Mahayni, Malek A.

    2011-01-01

    developed to solve the optimal paths problem with different kinds of graphs. An algorithm that solves the problem of paths’ optimization in directed graphs relative to different cost functions is described in [1]. It follows an approach extended from

  12. Cost optimization of biofuel production – The impact of scale, integration, transport and supply chain configurations

    NARCIS (Netherlands)

    de Jong, S.A.|info:eu-repo/dai/nl/41200836X; Hoefnagels, E.T.A.|info:eu-repo/dai/nl/313935998; Wetterlund, Elisabeth; Pettersson, Karin; Faaij, André; Junginger, H.M.|info:eu-repo/dai/nl/202130703

    2017-01-01

    This study uses a geographically-explicit cost optimization model to analyze the impact of and interrelation between four cost reduction strategies for biofuel production: economies of scale, intermodal transport, integration with existing industries, and distributed supply chain configurations

  13. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    OpenAIRE

    Li, Ming; Wu, Guangdong

    2014-01-01

    Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoff in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified time and cost with interval uncertainty. Based on multiobjective robust optimization method, a robust optimization model for time-cost tradeoff problem is developed. In order to illus...

  14. Diffuse interstellar clouds

    International Nuclear Information System (INIS)

    Black, J.H.

    1987-01-01

    The author defines and discusses the nature of diffuse interstellar clouds. He discusses how they contribute to the general extinction of starlight. The atomic and molecular species that have been identified in the ultraviolet, visible, and near infrared regions of the spectrum of a diffuse cloud are presented. The author illustrates some of the practical considerations that affect absorption line observations of interstellar atoms and molecules. Various aspects of the theoretical description of diffuse clouds required for a full interpretation of the observations are discussed

  15. Infrared diffuse interstellar bands

    Science.gov (United States)

    Galazutdinov, G. A.; Lee, Jae-Joon; Han, Inwoo; Lee, Byeong-Cheol; Valyavin, G.; Krełowski, J.

    2017-05-01

    We present high-resolution (R ˜ 45 000) profiles of 14 diffuse interstellar bands in the ˜1.45 to ˜2.45 μm range based on spectra obtained with the Immersion Grating INfrared Spectrograph at the McDonald Observatory. The revised list of diffuse bands with accurately estimated rest wavelengths includes six new features. The diffuse band at 15 268.2 Å demonstrates a very symmetric profile shape and thus can serve as a reference for finding the 'interstellar correction' to the rest wavelength frame in the H range, which suffers from a lack of known atomic/molecular lines.

  16. Optimalization of logistic costs by the controlling approach

    Directory of Open Access Journals (Sweden)

    Katarína Teplická

    2007-10-01

Full Text Available The article deals with logistics cost problems: their tracking, reporting and evaluation in the firm. It gives basic information about possible ways of effectively decreasing logistics costs through a controlling approach or Balanced Scorecard evaluation. It outlines a short algorithm for reporting logistics costs by means of controlling.

  17. Optimalization of logistic costs by the controlling approach

    OpenAIRE

    Katarína Teplická

    2007-01-01

The article deals with logistics cost problems: their tracking, reporting and evaluation in the firm. It gives basic information about possible ways of effectively decreasing logistics costs through a controlling approach or Balanced Scorecard evaluation. It outlines a short algorithm for reporting logistics costs by means of controlling.

  18. A theoretical cost optimization model of reused flowback distribution network of regional shale gas development

    International Nuclear Information System (INIS)

    Li, Huajiao; An, Haizhong; Fang, Wei; Jiang, Meng

    2017-01-01

The logistics of timing and transporting the flowback generated by each shale gas well to the next well is a big challenge. Because more and more flowback is stored temporarily near the shale gas wells and reused in shale gas development, both transportation cost and storage cost are a heavy burden for developers. This research proposes a theoretical cost optimization model that yields the optimal flowback distribution solution for regional multi-well shale gas development from a holistic perspective. We then use empirical data from the Marcellus Shale for an empirical study. In addition, we compare the optimal flowback distribution solution obtained by considering both transportation cost and storage cost with the solutions that minimize only the transportation cost or only the storage cost. - Highlights: • A theoretical cost optimization model to get the optimal flowback distribution solution. • An empirical study using shale gas data from Bradford County in the Marcellus Shale. • Visualization of optimal flowback distribution solutions under different scenarios. • Transportation cost is a more important factor for reducing the cost. • Helps developers cut the storage and transportation costs of reusing flowback.
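The transport-versus-storage trade-off can be illustrated with a small allocation example. The greedy sketch below is our own simplification, not the paper's model (which is a full cost optimization rather than a greedy rule): flowback is shipped along the cheapest transport links first, and any unshipped volume pays the storage cost at its source well.

```python
def distribute_flowback(supply, demand, t_cost, s_cost):
    """Greedy sketch of the flowback allocation trade-off.

    supply: well -> flowback volume available for reuse; demand: well ->
    volume needed; t_cost: (source, destination) -> transport cost per
    unit; s_cost: source -> storage cost per unit for unshipped volume.
    Ships along the cheapest links first; a real solver would use an
    LP / min-cost-flow formulation, so this greedy rule is illustrative
    only. Returns (shipment plan, total cost).
    """
    supply, demand = dict(supply), dict(demand)
    links = sorted((c, i, j) for (i, j), c in t_cost.items())
    plan, total = [], 0.0
    for c, i, j in links:
        q = min(supply.get(i, 0.0), demand.get(j, 0.0))
        if q > 0:
            plan.append((i, j, q))
            total += c * q
            supply[i] -= q
            demand[j] -= q
    total += sum(q * s_cost[i] for i, q in supply.items())  # leftovers stored
    return plan, total
```

Comparing this total against runs with the storage term or the transport term dropped mirrors the scenario comparison described in the abstract.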

  19. Nebulae and interstellar matter

    International Nuclear Information System (INIS)

    1987-01-01

    The South African Astronomical Observatory (SAAO) has investigated the IRAS source 1912+172. This source appears to be a young planetary nebula with a binary central star. During 1986 SAAO has also studied the following: hydrogen deficient planetary nebulae; high speed flows in HII regions, and the wavelength dependence of interstellar polarization. 2 figs

  20. A cost-efficient method to optimize package size in emerging markets

    NARCIS (Netherlands)

    Gamez-Alban, H.M.; Soto-Cardona, O.C.; Mejia Argueta, C.; Sarmiento, A.T.

    2015-01-01

    Packaging links the entire supply chain and coordinates all participants in the process to give a flexible and effective response to customer needs in order to maximize satisfaction at optimal cost. This research proposes an optimization model to define the minimum total cost combination of outer

  1. Ionization of Interstellar Hydrogen

    Science.gov (United States)

    Whang, Y. C.

    1996-09-01

Interstellar hydrogen can penetrate through the heliopause, enter the heliosphere, and may become ionized by photoionization and by charge exchange with solar wind protons. A fluid model is introduced to study the flow of interstellar hydrogen in the heliosphere. The flow is governed by moment equations obtained from integration of the Boltzmann equation over the velocity space. Under the assumption that the flow is steady axisymmetric and the pressure is isotropic, we develop a method of solution for this fluid model. This model and the method of solution can be used to study the flow of neutral hydrogen with various forms of ionization rate β and boundary conditions for the flow on the upwind side. We study the solution of a special case in which the ionization rate β is inversely proportional to R^2 and the interstellar hydrogen flow is uniform at infinity on the upwind side. We solve the moment equations directly for the normalized density N_H/N_N∞, bulk velocity V_H/V_N∞, and temperature T_H/T_N∞ of interstellar hydrogen as functions of r/λ and z/λ, where λ is the ionization scale length. The solution is compared with the kinetic theory solution of Lallement et al. The fluid solution is much less time-consuming than the kinetic theory solutions. Since the ionization rate for production of pickup protons is directly proportional to the local density of neutral hydrogen, the high-resolution solution of interstellar neutral hydrogen obtained here will be used to study the global distribution of pickup protons.

  2. Process Cost Modeling for Multi-Disciplinary Design Optimization

    Science.gov (United States)

    Bao, Han P.; Freeman, William (Technical Monitor)

    2002-01-01

    For early design concepts, the conventional approach to cost is normally some kind of parametric weight-based cost model. There is now ample evidence that this approach can be misleading and inaccurate. By the nature of its development, a parametric cost model requires historical data and is valid only if the new design is analogous to those for which the model was derived. Advanced aerospace vehicles have no historical production data and are nowhere near the vehicles of the past. Using an existing weight-based cost model would only lead to errors and distortions of the true production cost. This report outlines the development of a process-based cost model in which the physical elements of the vehicle are costed according to a first-order dynamics model. This theoretical cost model, first advocated by early work at MIT, has been expanded to cover the basic structures of an advanced aerospace vehicle. Elemental costs based on the geometry of the design can be summed up to provide an overall estimation of the total production cost for a design configuration. This capability to directly link any design configuration to realistic cost estimation is a key requirement for high payoff MDO problems. Another important consideration in this report is the handling of part or product complexity. Here the concept of cost modulus is introduced to take into account variability due to different materials, sizes, shapes, precision of fabrication, and equipment requirements. The most important implication of the development of the proposed process-based cost model is that different design configurations can now be quickly related to their cost estimates in a seamless calculation process easily implemented on any spreadsheet tool. In successive sections, the report addresses the issues of cost modeling as follows. First, an introduction is presented to provide the background for the research work. Next, a quick review of cost estimation techniques is made with the intention to

  3. Cost Information – an Objective Necessity in Optimizing Decision Making

    OpenAIRE

    Petre Mihaela – Cosmina; Petroianu Grazia - Oana

    2012-01-01

Overall growth can be registered at the macro and micro level without achieving development, and then only under conditions of continuous improvement of the methods and techniques of organization and management within the unit. Cost and cost information play an important role, being considered and recognized as useful and effective tools within the reach of any leader. They have features, such as their multiple facets, that facilitate continuous improvement of the business unit. Cost awareness represents a decisive fact...

  4. Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe.

    Science.gov (United States)

    D'Agostino, Delia; Parker, Danny

    2018-04-01

This data article refers to the research paper "A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe" [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow one to visualize energy consumption before and after the optimization, the selected efficiency measures, costs and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with incremental and cumulative costs in each location. Further data are available about building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances and systems.

  5. Data on cost-optimal Nearly Zero Energy Buildings (NZEBs) across Europe

    Directory of Open Access Journals (Sweden)

    Delia D'Agostino

    2018-04-01

Full Text Available This data article refers to the research paper "A model for the cost-optimal design of Nearly Zero Energy Buildings (NZEBs) in representative climates across Europe" [1]. The reported data deal with the design optimization of a residential building prototype located in representative European locations. The study focuses on cost-optimal choices and efficiency measures in new buildings depending on the climate. The data linked within this article relate to the modelled building energy consumption, renewable production, potential energy savings, and costs. The data allow one to visualize energy consumption before and after the optimization, the selected efficiency measures, costs and renewable production. The reduction of electricity and natural gas consumption towards the NZEB target can be visualized together with incremental and cumulative costs in each location. Further data are available about building geometry, costs, CO2 emissions, envelope, materials, lighting, appliances and systems.

  6. Sequential optimization of matrix chain multiplication relative to different cost functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

In this paper, we present a methodology to optimize matrix chain multiplication sequentially relative to different cost functions such as total number of scalar multiplications, communication overhead in a multiprocessor environment, etc. For n matrices our optimization procedure requires O(n^3) arithmetic operations per one cost function. This work is done in the framework of a dynamic programming extension that allows sequential optimization relative to different criteria. © 2011 Springer-Verlag Berlin Heidelberg.
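For a single criterion, the underlying dynamic program is the textbook one. Here is a minimal sketch for the cost function "total number of scalar multiplications", i.e. the base case that the paper's multi-criteria extension builds on:

```python
def matrix_chain_cost(dims):
    """Classical O(n^3) dynamic program for the minimum number of scalar
    multiplications needed to multiply a chain of n matrices, where
    matrix i has shape dims[i] x dims[i+1].
    """
    n = len(dims) - 1                      # number of matrices
    cost = [[0] * n for _ in range(n)]     # cost[i][j]: best for chain i..j
    for length in range(2, n + 1):         # chain lengths 2..n
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j))      # split point between i..k and k+1..j
    return cost[0][n - 1]
```

For dims = [10, 30, 5, 60] the optimal parenthesization is (AB)C with 4500 scalar multiplications. Replacing the scalar-multiplication term with another cost function, and rerunning the same table, is the per-criterion step the abstract describes.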

  7. The marginal cost of public funds is one at the optimal tax system

    NARCIS (Netherlands)

    B. Jacobs (Bas)

    2018-01-01

This paper develops a Mirrlees framework with skill and preference heterogeneity to analyze optimal linear and nonlinear redistributive taxes, optimal provision of public goods, and the marginal cost of public funds (MCF). It is shown that the MCF equals one at the optimal tax system,

  8. Growth Optimal Portfolio Selection Under Proportional Transaction Costs with Obligatory Diversification

    International Nuclear Information System (INIS)

    Duncan, T.; Pasik Duncan, B.; Stettner, L.

    2011-01-01

A continuous-time long-run growth optimal (or optimal logarithmic utility) portfolio with proportional transaction costs, consisting of a fixed proportional cost and a cost proportional to the volume of transactions, is considered. The asset prices are modeled as exponents of a diffusion with jumps whose parameters depend on a finite state Markov process of economic factors. An obligatory portfolio diversification is introduced, according to which it is required to invest at least a fixed small portion of wealth in each asset.

  9. Global warming and carbon taxation. Optimal policy and the role of administration costs

    International Nuclear Information System (INIS)

    Williams, M.

    1995-01-01

This paper develops a model relating CO2 emissions to atmospheric concentrations, global temperature change and economic damages. For a variety of parameter assumptions, the model provides estimates of the marginal cost of emissions in various years. The optimal carbon tax is a function of the marginal emission cost and the costs of administering the tax. This paper demonstrates that under any reasonable assumptions, the optimal carbon tax is zero for at least several decades. (author)

  10. Optimal Algorithms and a PTAS for Cost-Aware Scheduling

    NARCIS (Netherlands)

    L. Chen; N. Megow; R. Rischke; L. Stougie (Leen); J. Verschae

    2015-01-01

We consider a natural generalization of classical scheduling problems in which using a time unit for processing a job causes some time-dependent cost which must be paid in addition to the standard scheduling cost. We study the scheduling objectives of minimizing the makespan and the

  11. Cost-risk optimization of nondestructive inspection level

    International Nuclear Information System (INIS)

    Johnson, D.P.

    1978-01-01

This paper develops a quantitative methodology for determining the nondestructive inspection (NDI) level that will result in a minimum-cost product, considering both type one inspection errors (acceptance of defective material units) and type two inspection errors (rejection of sound material units). This methodology represents an advance over fracture mechanics - nondestructive inspection (FM-NDI) design systems that do not consider type two inspection errors or the pre-inspection material quality. The inputs required for the methodology developed in this paper are (1) the rejection probability as a function of inspection size and imperfection size, (2) the flaw frequency (FF) as a function of imperfection size, (3) the probability of failure given that the material unit contains an imperfection of a given size, as a function of that size, (4) the manufacturing cost per material unit, (5) the inspection cost per material unit, and (6) the average cost per failure, including indirect costs. Four methods are identified for determining the flaw frequency and three methods are identified for determining the conditional failure probability (one of these methods is probabilistic fracture mechanics). Methods for determining the rejection probability are discussed elsewhere. The NDI-FF methodology can have significant impact where the cost of failures represents a significant fraction of the manufacturing costs, or when a significant fraction of the components are being rejected by the inspection. (Auth.)
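The cost-risk tradeoff can be made concrete with a small expected-cost calculation over the six inputs listed above. The sketch below is our own discretized illustration; in particular, the geometric remake-loop accounting is an assumption of ours, not the paper's exact formulation.

```python
def expected_unit_cost(flaw_freq, p_fail, p_reject, c_make, c_inspect, c_fail):
    """Expected total cost per accepted (shipped) unit at a given NDI level.

    Discretized over imperfection size classes: flaw_freq[k] is the
    probability a unit falls in class k (class 0 may mean 'no flaw'),
    p_fail[k] the conditional failure probability, p_reject[k] the
    probability inspection rejects the unit. Rejected units are remade
    and re-inspected (type two errors inflate this term); accepted
    defective units risk the failure cost (type one errors).
    """
    p_rej = sum(f * r for f, r in zip(flaw_freq, p_reject))
    p_fail_acc = sum(f * (1.0 - r) * pf
                     for f, r, pf in zip(flaw_freq, p_reject, p_fail))
    p_acc = 1.0 - p_rej
    attempts = 1.0 / p_acc                 # geometric remake loop
    return attempts * (c_make + c_inspect) + (p_fail_acc / p_acc) * c_fail
```

Sweeping p_reject over candidate inspection levels and taking the minimum of this expression is the optimization the paper formalizes: tightening inspection cuts the failure term but inflates the remake term, and vice versa.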

  12. Interstellar extinction correlations

    International Nuclear Information System (INIS)

    Jones, A.P.; Williams, D.A.; Duley, W.W.

    1987-01-01

A recently proposed model for interstellar grains, in which the extinction arises from small silicate cores with mantles of hydrogenated amorphous carbon (HAC or α-C:H) and from large but thinly coated silicate grains, can successfully explain many of the observed properties of interstellar dust. The small silicate cores give rise to the 2200 Å extinction feature. The extinction in the visual is produced by the large silicates and the HAC mantles on the small cores, whilst the far-UV extinction arises in the HAC mantles with a small contribution from the silicate grains. The grain model requires that the silicate material is the more resilient component and that variations in the observed extinction from region to region are due to the nature and depletion of the carbon in the HAC mantles. (author)

  13. OPTIMIZATION METHOD AND SOFTWARE FOR FUEL COST REDUCTION IN CASE OF ROAD TRANSPORT ACTIVITY

    Directory of Open Access Journals (Sweden)

    György Kovács

    2017-06-01

The transport activity is one of the most expensive processes in the supply chain, and fuel is the highest cost among the cost components of transportation. The goal of the research is to optimize the transport costs of a given transport task, both by selecting the optimal petrol stations and by determining the optimal amount of refilled fuel. In current practice, these two decisions are not made centrally at the forwarding company but depend on the individual judgement of the driver. The aim of this study is to elaborate a precise and reliable mathematical method for selecting the optimal refuelling stations and determining the optimal amount of refilled fuel to fulfil the transport demands. Based on the elaborated model, new decision-support software is developed for the economical fulfilment of transport trips.
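
    The two coupled decisions above (where to refuel, how much to refill) can be illustrated with the classic fixed-route refuelling greedy: at each station, buy just enough fuel to reach the next cheaper reachable station, otherwise fill the tank. This is a hedged sketch with made-up station data, not the paper's actual model.

```python
def plan_refuelling(stations, trip_km, tank_l, consumption_l_per_km, start_fuel_l=0.0):
    """Return a list of (station_index, litres_to_buy) for a fixed route.

    stations: list of (position_km, price_per_litre), sorted by position.
    Greedy rule: buy only enough to reach the next cheaper reachable
    station; if none is reachable, fill the tank completely.
    """
    # Treat the destination as a free "station" so the scan terminates there.
    stops = stations + [(trip_km, 0.0)]
    fuel, pos, plan = start_fuel_l, 0.0, []
    for i, (s_pos, price) in enumerate(stops[:-1]):
        fuel -= (s_pos - pos) * consumption_l_per_km   # burn fuel driving here
        assert fuel >= -1e-9, "route infeasible with this tank size"
        pos = s_pos
        reach = tank_l / consumption_l_per_km          # km reachable on a full tank
        target = None                                  # next cheaper reachable stop
        for n_pos, n_price in stops[i + 1:]:
            if n_pos - pos > reach:
                break
            if n_price < price:
                target = n_pos
                break
        need = ((target - pos) if target is not None else reach) * consumption_l_per_km
        buy = max(0.0, min(need, tank_l) - fuel)
        if buy > 0:
            plan.append((i, round(buy, 2)))
            fuel += buy
    return plan
```

    For example, with stations at 0, 100 and 200 km priced 1.50, 1.40 and 1.60 per litre, a 300 km trip, a 25 L tank and 0.1 L/km consumption, the plan buys 10 L at the first station (just enough to reach the cheaper one) and 20 L at the second.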

  14. A bivariate optimal replacement policy with cumulative repair cost ...

    Indian Academy of Sciences (India)

    Min-Tsai Lai

Keywords: shock model; cumulative damage model; cumulative repair cost limit; preventive maintenance model. The system considered is subject to two types of shocks: one type is failure shock, and the other type is damage …

  15. Cost optimization of load carrying thin-walled precast high performance concrete sandwich panels

    DEFF Research Database (Denmark)

    Hodicky, Kamil; Hansen, Sanne; Hulin, Thomas

    2015-01-01

The paper describes a procedure to find the structurally and thermally efficient design of load-carrying thin-walled precast High Performance Concrete Sandwich Panels (HPCSP) with an optimal economical solution. A systematic optimization approach is based on the selection of material performances and HPCSP geometrical parameters, as well as on the material cost function in the HPCSP design. Cost functions are presented for High Performance Concrete (HPC), the insulation layer and reinforcement, and include labour-related costs; the study reports the economic data corresponding to specific manufacturing… The solution of the optimization problem is performed in the computer package Matlab® with the SQPlab package and integrates the processes of HPCSP design, quantity take-off and cost estimation. The proposed optimization process results in complex HPCSP design proposals that achieve minimum cost of HPCSP.

  16. Optimal Power Cost Management Using Stored Energy in Data Centers

    OpenAIRE

    Urgaonkar, Rahul; Urgaonkar, Bhuvan; Neely, Michael J.; Sivasubramaniam, Anand

    2011-01-01

    Since the electricity bill of a data center constitutes a significant portion of its overall operational costs, reducing this has become important. We investigate cost reduction opportunities that arise by the use of uninterrupted power supply (UPS) units as energy storage devices. This represents a deviation from the usual use of these devices as mere transitional fail-over mechanisms between utility and captive sources such as diesel generators. We consider the problem of opportunistically ...

  17. Reducing Operating Costs by Optimizing Space in Facilities

    Science.gov (United States)

    2012-03-01

Cleaning: includes labor costs for in-house and contract service, payroll, taxes, and fringe benefits, plus salaried supervisors and managers, as well as… Labor costs include payroll, taxes, and fringe benefits for employees and contracted workers. Personnel include operating engineers, general…

  18. A RECREATION OPTIMIZATION MODEL BASED ON THE TRAVEL COST METHOD

    OpenAIRE

    Hof, John G.; Loomis, John B.

    1983-01-01

    A recreation allocation model is developed which efficiently selects recreation areas and degree of development from an array of proposed and existing sites. The model does this by maximizing the difference between gross recreation benefits and travel, investment, management, and site-opportunity costs. The model presented uses the Travel Cost Method for estimating recreation benefits within an operations research framework. The model is applied to selection of potential wilderness areas in C...

  19. Optimized Cost per Click in Taobao Display Advertising

    OpenAIRE

    Zhu, Han; Jin, Junqi; Tan, Chang; Pan, Fei; Zeng, Yifan; Li, Han; Gai, Kun

    2017-01-01

    Taobao, as the largest online retail platform in the world, provides billions of online display advertising impressions for millions of advertisers every day. For commercial purposes, the advertisers bid for specific spots and target crowds to compete for business traffic. The platform chooses the most suitable ads to display in tens of milliseconds. Common pricing methods include cost per mille (CPM) and cost per click (CPC). Traditional advertising systems target certain traits of users and...

  20. Evolution of interstellar grains

    International Nuclear Information System (INIS)

    Greenberg, J.M.

    1984-01-01

    The principal aim of this chapter is to derive the properties of interstellar grains as a probe of local physical conditions and as a basis for predicting such properties as related to infrared emissivity and radiative transfer which can affect the evolution of dense clouds. The first sections will develop the criteria for grain models based directly on observations of gas and dust. A summary of the chemical evolution of grains and gas in diffuse and dense clouds follows. (author)

  1. Optimal cost for strengthening or destroying a given network

    Science.gov (United States)

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

Strengthening or destroying a network is a central issue in designing resilient networks and in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network at minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node, and accordingly define a cost function c(k), the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the group of degrees to be strengthened or destroyed that minimizes the total cost of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks, and that the list is the same for both strengthening and destroying a network at minimum cost. In spite of this similarity, however, the two cases differ in their critical threshold pc, the fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.
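
    As a toy illustration of the cost-function idea (not the paper's universal priority-list derivation), one can rank degrees by cost per removed edge endpoint, c(k)/k, and greedily pick the cheapest group of degrees until a target fraction of edge endpoints is removed:

```python
def cheapest_removal(degrees, c, target_fraction):
    """Toy greedy sketch: choose node degrees to remove, cheapest per unit
    degree first, until the removed nodes carry `target_fraction` of all
    edge endpoints. Returns (chosen degrees, total cost).
    `c` is the cost function c(k); the greedy rule is an illustrative
    assumption, not the paper's derivation."""
    total = sum(degrees)
    order = sorted(degrees, key=lambda k: c(k) / k)   # priority by cost per degree
    removed, cost, chosen = 0, 0.0, []
    for k in order:
        if removed >= target_fraction * total:
            break
        chosen.append(k)
        removed += k
        cost += c(k)
    return chosen, cost
```

    With c(k) = k², for instance, the per-degree cost c(k)/k grows with k, so low-degree nodes are the cheaper targets under this cost model.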

  2. Optimizing CO2 avoided cost by means of repowering

    International Nuclear Information System (INIS)

    Escosa, Jesus M.; Romeo, Luis M.

    2009-01-01

Repowering fossil fuel power plants by means of gas turbines has traditionally been considered as a way to increase power output and reduce NOx and SO2 emissions, both at low cost and with short outage periods. At present, reduction in CO2 emissions represents an additional advantage of repowering, due to partial fuel shift and an overall efficiency increase. This is especially important in existing installations where a mandatory CO2 reduction must be carried out in a short time and in a cost-effective manner. Feedwater and parallel repowering schemes have been analysed using thermodynamic, environmental and economic simulations. The objective is not only to evaluate the cost of electricity and the efficiency increase of the overall system, but also to calculate and minimize the cost of CO2 avoided as a function of gas turbine power output. Integration of larger gas turbines reduces the overall CO2 emissions, but there is a compromise between CO2 reduction due to fuel shift and an optimum integration of waste heat into the power plant that minimizes the CO2 avoided cost. Results highlight repowering as a suitable technology for reducing CO2 emissions of existing power plants by 10-30% at costs well below 20 Euro/tCO2. It could help to control emissions until carbon capture technologies reach commercial development.

  3. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    Science.gov (United States)

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) has traditionally aimed at energy cost reduction, with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (and, thereby, carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy successfully found the lowest-energy operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours): particularly in the summer, optimizing the system to minimize energy use incurred cost increases of 64% and 184% compared with the cost-optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.

  4. The Optimization of Transportation Costs in Logistics Enterprises with Time-Window Constraints

    Directory of Open Access Journals (Sweden)

    Qingyou Yan

    2015-01-01

This paper presents a model for solving a multiobjective vehicle routing problem with soft time-window constraints that specify the earliest and latest arrival times of customers. If a customer is serviced before the earliest specified arrival time, extra inventory costs are incurred; if the customer is serviced after the latest arrival time, penalty costs must be paid. Both the total transportation cost and the required fleet size are minimized in this model, which also accounts for the given capacity limitations of each vehicle. The total transportation cost consists of direct transportation costs, extra inventory costs, and penalty costs. This multiobjective optimization is solved by using a modified genetic algorithm approach. The output of the algorithm is a set of optimal solutions that represent the trade-off between total transportation cost and the fleet size required to service customers. The influence of these two factors is analyzed through a case study.
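
    The cost structure described (direct transport cost, plus inventory cost for early service and penalty cost for late service) can be sketched as follows; the rates and numbers are illustrative assumptions, not taken from the paper:

```python
def route_cost(arrivals, windows, rate_km, dist_km, inv_rate, pen_rate):
    """Total cost of one vehicle route under soft time windows.

    arrivals: arrival time at each customer.
    windows:  (earliest, latest) pair for each customer.
    rate_km * dist_km is the direct transportation cost; earliness is
    charged at inv_rate per time unit, lateness at pen_rate per time unit.
    """
    cost = rate_km * dist_km
    for t, (earliest, latest) in zip(arrivals, windows):
        if t < earliest:                      # serviced early -> inventory cost
            cost += inv_rate * (earliest - t)
        elif t > latest:                      # serviced late -> penalty cost
            cost += pen_rate * (t - latest)
    return cost
```

    A genetic algorithm would then search over customer-to-vehicle assignments and visit orders, scoring each candidate route with a function of this shape.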

  5. Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings

    Directory of Open Access Journals (Sweden)

    Paolo Maria Congedo

    2016-10-01

The recast of the Energy Performance of Buildings Directive (EPBD) describes a comparative methodological framework to promote energy efficiency and establish minimum energy performance requirements in buildings at the lowest costs. The aim of the cost-optimal methodology is to foster the achievement of nearly zero energy buildings (nZEBs), the new target for all new buildings by 2020, characterized by high performance with a low energy requirement almost covered by renewable sources. The paper presents the results of the application of the cost-optimal methodology to two existing buildings located in the Mediterranean area: a kindergarten and a nursery school that differ in construction period, materials and systems. Several combinations of measures have been applied to derive cost-effective efficient solutions for retrofitting. The cost-optimal level has been identified for each building, and the best-performing solutions have been selected considering both a financial and a macroeconomic analysis. The results illustrate the suitability of the methodology for assessing cost-optimality and energy efficiency in school building refurbishment. The research shows the variants providing the most cost-effective balance between costs and energy saving: the cost-optimal solution reduces primary energy consumption by 85% and gas emissions by 82%-83% in each reference building.

  6. Response surface method to optimize the low cost medium for ...

    African Journals Online (AJOL)

    A protease producing Bacillus sp. GA CAS10 was isolated from ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. Plackett-Burman method was applied to identify ...

  7. An efficient cost function for the optimization of an n-layered isotropic cloaked cylinder

    International Nuclear Information System (INIS)

    Paul, Jason V; Collins, Peter J; Coutu, Ronald A Jr

    2013-01-01

    In this paper, we present an efficient cost function for optimizing n-layered isotropic cloaked cylinders. Cost function efficiency is achieved by extracting the expression for the angle independent scatterer contribution of an associated Green's function. Therefore, since this cost function is not a function of angle, accounting for every bistatic angle is not necessary and thus more efficient than other cost functions. With this general and efficient cost function, isotropic cloaked cylinders can be optimized for many layers and material parameters. To demonstrate this, optimized cloaked cylinders made of 10, 20 and 30 equal thickness layers are presented for TE and TM incidence. Furthermore, we study the effect layer thickness has on optimized cloaks by optimizing a 10 layer cloaked cylinder over the material parameters and individual layer thicknesses. The optimized material parameters in this effort do not exhibit the dual nature that is evident in the ideal transformation optics design. This indicates that the inevitable field penetration and subsequent PEC boundary condition at the cylinder must be taken into account for an optimal cloaked cylinder design. Furthermore, a more effective cloaked cylinder can be designed by optimizing both layer thickness and material parameters than by additional layers alone. (paper)

  8. Parametric optimization for the low cost production of nanostructure ...

    African Journals Online (AJOL)

    In recent years, nanocrystalline materials have drawn the attention of researchers in the field of materials science engineering due to its enhanced mechanical properties such as high strength and high hardness. However, the cost of nanocrystalline materials is prohibitively high, primarily due to the expensive equipments ...

  9. Cost-optimization of the IPv4 zeroconf protocol

    NARCIS (Netherlands)

    Bohnenkamp, H.C.; van der Stok, Peter; Hermanns, H.; Vaandrager, Frits

    2003-01-01

This paper investigates the tradeoff between reliability and effectiveness for the IPv4 Zeroconf protocol, proposed by Cheshire/Aboba/Guttman in 2002, dedicated to the self-configuration of IP network interfaces. We develop a simple stochastic cost model of the protocol, where reliability is measured

  10. Optimization of Placement Driven by the Cost of Wire Crossing

    National Research Council Canada - National Science Library

    Kapur, Nevin

    1997-01-01

    .... We implemented a prototype placement algorithm TOCO that minimizes the cost of wire crossing, and a universal unit-grid based placement evaluator place_eval. We have designed a number of statistical experiments to demonstrate the feasibility and the promise of the proposed approach.

  11. The Cost-Optimal Size of Future Reusable Launch Vehicles

    Science.gov (United States)

    Koelle, D. E.

    2000-07-01

The paper answers the question: what is the optimum vehicle size — in terms of LEO payload capability — for a future reusable launch vehicle? It is shown that there exists an optimum vehicle size that results in minimum specific transportation cost. The optimum vehicle size depends on the total annual cargo mass (LEO equivalent) envisaged, which defines at the same time the optimum number of launches per year (LpA). Based on the TRANSCOST-Model algorithms, a wide range of vehicle sizes — from 20 to 100 Mg payload in LEO — as well as launch rates — from 2 to 100 per year — have been investigated. A design chart shows how much the vehicle size as well as the launch rate influence the specific transportation cost (in MYr/Mg and US$/kg). The comparison with actual ELVs (Expendable Launch Vehicles) and Semi-Reusable Vehicles (a combination of a reusable first stage with an expendable second stage) shows that there exists only one economic solution for an essential reduction of space transportation cost: the Fully Reusable Vehicle Concept, with rocket propulsion and vertical take-off. The Single-Stage Configuration (SSTO) has the best economic potential; its feasibility is not only a matter of technology level but also of the vehicle size as such. Increasing the vehicle size (launch mass) reduces the technology requirements because the law of scale provides a better mass fraction and payload fraction — practically at no cost. The optimum vehicle design (after specification of the payload capability) requires a trade-off between lightweight (and more expensive) technology vs. more conventional (and cheaper) technology. It is shown that the use of more conventional technology, accepting a somewhat larger vehicle, is the more cost-effective and less risky approach.

  12. Optimization of economic load dispatch of higher order general cost polynomials and its sensitivity using modified particle swarm optimization

    International Nuclear Information System (INIS)

    Saber, Ahmed Yousuf; Chakraborty, Shantanu; Abdur Razzak, S.M.; Senjyu, Tomonobu

    2009-01-01

This paper presents a modified particle swarm optimization (MPSO) for the constrained economic load dispatch (ELD) problem. Real cost functions are more complex than conventional second-order cost functions when multi-fuel operations, valve-point effects, accurate curve fitting, etc., are considered in a deregulated, changing market. The proposed modified particle swarm optimization (PSO) consists of a problem-dependent variable number of promising values (in the velocity vector), a unit vector, and an error- and iteration-dependent step length. It reliably and accurately tracks a continuously changing solution of the complex cost function, and no extra effort is needed for the complex higher-order cost polynomials in ELD. Constraint management is incorporated in the modified PSO, which balances local and global searching abilities, and an appropriate fitness function helps it converge quickly. To prevent the method from stalling, stagnant/idle particles are reset. The sensitivity of the higher-order cost polynomials is also analyzed visually to illustrate their importance for the optimization of ELD. Finally, benchmark data sets and methods are used to show the effectiveness of the proposed method. (author)
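
    For orientation, a plain (unmodified) PSO minimizing a single unit's cost polynomial over its generation limits can be sketched as below. The paper's MPSO adds problem-dependent velocity handling and resetting of stagnant particles, which are not shown here, and the example cost coefficients are invented:

```python
import random

def pso_dispatch(cost, lo, hi, n_particles=30, iters=200, seed=1):
    """Minimise a one-dimensional cost function over [lo, hi] with plain PSO.
    Inertia 0.7 and acceleration coefficients 1.5/1.5 are common textbook
    defaults, not values from the paper."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                       # personal best positions
    gbest = min(x, key=cost)           # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            v[i] = 0.7 * v[i] + 1.5 * r1 * (pbest[i] - x[i]) + 1.5 * r2 * (gbest - x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))    # clamp to generation limits
            if cost(x[i]) < cost(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=cost)
    return gbest

# A hypothetical 4th-order fuel cost curve for one generator (made-up numbers):
fuel = lambda p: 0.0001 * p**4 - 0.05 * p**3 + 10 * p**2 - 300 * p + 5000
```

    The same loop works unchanged for any higher-order polynomial, which is the point the abstract makes: the search does not care about the order of the cost curve.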

  13. Optimizing Cost of Continuous Overlapping Queries over Data Streams by Filter Adaption

    KAUST Repository

    Xie, Qing; Zhang, Xiangliang; Li, Zhixu; Zhou, Xiaofang

    2016-01-01

    The problem we aim to address is the optimization of cost management for executing multiple continuous queries on data streams, where each query is defined by several filters, each of which monitors certain status of the data stream. Specially

  14. Optimization of costs of Port Operations in Nigeria: A Scenario For ...

    African Journals Online (AJOL)

    2013-03-01

Department of Maritime Management Technology, Federal … This study attempts to optimize the cost of port operations in Nigeria. [Linear-programming sensitivity-table residue (slack, original value, lower/upper bounds) omitted.]

  15. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Abubeker, Jewahir Ali; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

This paper is devoted to the consideration of an algorithm for sequential optimization of paths in directed graphs relative to different cost functions. The considered algorithm is based on an extension of dynamic programming which allows

  16. Optimal design of a NGNP heat exchanger with cost model

    International Nuclear Information System (INIS)

    Ridluan, Artit; Danchus, William; Tokuhiro, Akira

    2009-01-01

With steady increase in energy consumption, the vulnerability of the fossil fuel supply, and environmental concerns, the U.S. Department of Energy (DOE) has initiated the Next Generation Nuclear Power Plant (NGNP), also known as the Very High Temperature Reactor (VHTR). The VHTR is planned to be operational by 2021, with possible demonstration of a hydrogen generating plant. Various engineering design studies on both the reactor plant and the energy conversion system are underway. For this and related Generation IV plants, the goal is not only to meet safety criteria but also to be efficient, economically competitive, and environmentally friendly (proliferation resistant). Traditionally, heat exchanger (HX) design is based on two main approaches: Log-Mean Temperature Difference (LMTD) and effectiveness-NTU (ε-NTU). These methods yield the dimensions of the HX under anticipated conditions, and vice versa. However, one is not assured that the dimensions calculated give the best-performing HX when economics are also considered. Here, we develop and show a specific optimization algorithm (exercise) using LMTD and simple (optimal) design theory to establish a reference case for the Printed Circuit Heat Exchanger (PCHE). Computational Fluid Dynamics (CFD) was further used as a design tool to investigate the optimal design of PCHE thermohydraulic flow. The CFD results were validated against the Blasius correlation before being subjected to optimal design analyses. Benchmark results for pipe flow indicated that the predictive ability of SST k-ω is superior to the other turbulence models considered (standard and RNG k-ε, and RSM); the difference between CFD and the empirical expression is less than 10%. (author)
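
    The LMTD approach mentioned above sizes a heat exchanger from Q = U·A·ΔT_lm, where ΔT_lm is the log-mean of the terminal temperature differences. A minimal sketch of the generic textbook formula (not the paper's PCHE-specific algorithm; the numbers in the usage note are illustrative):

```python
import math

def lmtd(dt_in, dt_out):
    """Log-mean temperature difference between the two ends of the exchanger."""
    if abs(dt_in - dt_out) < 1e-12:
        return dt_in                       # limiting case: equal end differences
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

def required_area(q_w, u, dt_in, dt_out):
    """Heat-transfer area A from Q = U * A * LMTD.

    Assumes counter-flow and no correction factor, for simplicity.
    q_w in watts, u in W/(m^2*K), temperature differences in kelvin.
    """
    return q_w / (u * lmtd(dt_in, dt_out))
```

    For terminal differences of 60 K and 20 K, the LMTD is about 36.4 K, so transferring 100 kW with U = 500 W/(m²·K) needs roughly 5.5 m² of area.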

  17. An Optimal Operating Strategy for Battery Life Cycle Costs in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yinghua Han

    2014-01-01

The impact of petroleum-based vehicles on the environment, and the cost and availability of fuel, have led to an increased interest in electric vehicles as a means of transportation. The battery is a major component of an electric vehicle, and the economic viability of these vehicles depends on the availability of cost-effective batteries. This paper presents a generalized formulation for determining the optimal operating strategy and cost optimization for the battery, assuming that its deterioration is stochastic. Under these assumptions, the proposed operating strategy is formulated as a nonlinear optimization problem considering reliability and the number of failures, and an explicit expression of the average cost rate is derived for the battery lifetime. Results show that the proposed operating strategy enhances availability and reliability at low cost.

  18. Control and operation cost optimization of the HISS cryogenic system

    International Nuclear Information System (INIS)

    Porter, J.; Anderson, D.; Bieser, F.

    1984-01-01

    This chapter describes a control strategy for the Heavy Ion Spectrometer System (HISS), which relies upon superconducting coils of cryostable design to provide a particle bending field of 3 tesla. The control strategy has allowed full time unattended operation and significant operating cost reductions. Microprocessor control of flash boiling style LIN circuits has been successful. It is determined that the overall operating cost of most cryogenic systems using closed loop helium systems can be minimized by properly balancing the total heat load between the helium and nitrogen circuits to take advantage of the non-linearity which exists in the power input to 4K refrigeration characteristic. Variable throughput compressors have the advantage of turndown capability at steady state. It is concluded that a hybrid system using digital and analog input for control, data display and alarms enables full time unattended operation

  19. Lean Accounting - An Ingenious Solution for Cost Optimization

    OpenAIRE

    Dimi Ofileanu; Dan Ioan Topor

    2014-01-01

The aim of this work is to present a new concept in management accounting: Lean Accounting. The work explains how the lean concept was born, its benefits for the production systems of factories, and the necessity of applying lean accounting in factories that have implemented lean production, taking into account both its advantages and the limitations of other cost management methods in those factories.

  20. A Transactions Cost Economics (TCE) Approach to Optimal Contract Type

    OpenAIRE

    Franck, Raymond; Melese, Francois; Dillard, John

    2006-01-01

Proceedings Paper (for Acquisition Research Program) This study examines defense acquisition through the new lens of Transaction Cost Economics (TCE). TCE is an emergent field in economics that has multiple applications to defense acquisition practices. TCE's original focus was to guide "make-or-buy?" decisions that define the boundaries of a firm. This study reviews insights afforded by TCE that impact government outsourcing ("buy" decisions), paying special attention to defense pro...

  1. Defense Remote Handled Transuranic Waste Cost/Schedule Optimization Study

    International Nuclear Information System (INIS)

    Pierce, G.D.; Wolaver, R.W.; Carson, P.H.

    1986-11-01

The purpose of this study is to provide the DOE information with which it can establish the most efficient program for the long-term management and disposal, in the Waste Isolation Pilot Plant (WIPP), of remote handled (RH) transuranic (TRU) waste. To fulfill this purpose, a comprehensive review of waste characteristics, existing and projected waste inventories, processing and transportation options, and WIPP requirements was made. Cost differences between waste management alternatives were analyzed and compared to an established baseline. The result of this study is an information package that DOE can use as the basis for policy decisions. As part of this study, a comprehensive list of alternatives for each element of the baseline was developed and reviewed with the sites. The principal conclusions of the study follow. A single processing facility for RH TRU waste is both necessary and sufficient. The RH TRU processing facility should be located at Oak Ridge National Laboratory (ORNL). Shielding of RH TRU to contact-handled levels is not an economic alternative in general, but is acceptable for specific waste streams. Compaction is only cost effective at the ORNL processing facility, with a possible exception at Hanford for small compaction of paint cans of newly generated glovebox waste. It is more cost effective to ship certified waste to WIPP in 55-gal drums than in canisters, assuming a suitable drum cask becomes available; since some waste forms cannot be packaged in drums, a canister/shielded-cask capability is also required. To achieve the desired disposal rate, the ORNL processing facility must be operational by 1996. Implementing the conclusions of this study can save approximately $110 million, compared to the baseline, in facility, transportation, and interim storage costs through the year 2013. 10 figs., 28 tabs

  2. Cost and radiation exposure optimization of demineralizer operation

    International Nuclear Information System (INIS)

    Bernal, F.E.; Burn, R.R.; Cook, G.M.; Simonetti, L.; Simpson, P.A.

    1985-01-01

    A pool water demineralizer is utilized at a research reactor to minimize impurities that become radioactive; to minimize impurities that react chemically with reactor components; to maintain optical clarity of the pool water; and to minimize aluminum fuel cladding corrosion by maintaining a slightly acidic pH. Balanced against these advantages are the dollar costs of equipment, resins, recharging chemicals, and maintenance; the man-rem costs of radiation exposure during maintenance, demineralizer recharges, and resin replacement; and hazardous chemical exposure. At the Ford Nuclear Reactor (FNR), maintenance of the demineralizer system is the second largest source of radiation exposure to operators. Theoretical and practical aspects of demineralizer operation are discussed. The most obvious way to reduce radiation exposure due to demineralizer system operation is to perform recharges after the reactor has been shut down for the maximum possible time. Setting a higher depletion limit and operating with the optimum system lineup reduce the frequency between recharges, saving both exposure and cost. Recharge frequency and resin lifetime seem to be relatively independent of the quality of the chemicals used and the personnel performing recharges, provided consistent procedures are followed

  3. Human factors issues for interstellar spacecraft

    Science.gov (United States)

    Cohen, Marc M.; Brody, Adam R.

    1991-01-01

Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during interstellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.

  4. Optimal replacement time estimation for machines and equipment based on cost function

    OpenAIRE

    J. Šebo; J. Buša; P. Demeč; J. Svetlík

    2013-01-01

    The article deals with a multidisciplinary issue of estimating the optimal replacement time for the machines. Considered categories of machines, for which the optimization method is usable, are of the metallurgical and engineering production. Different models of cost function are considered (both with one and two variables). Parameters of the models were calculated through the least squares method. Models testing show that all are good enough, so for estimation of optimal replacement time is ...
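The core idea the abstract describes (fitting a cost model to data by least squares, then reading off the replacement time that minimizes average annual cost) can be sketched as follows. The linear maintenance-cost model, the purchase price, and the synthetic data below are illustrative assumptions, not the paper's model or parameters:

```python
import numpy as np

# Hypothetical one-variable model: annual maintenance cost grows linearly
# with machine age, m(t) = a + b*t.  The average annual cost of owning the
# machine for t years is then
#     C(t) = P/t + a + b*t/2,
# which is minimized at t* = sqrt(2*P/b).
P = 50_000.0                      # purchase price (illustrative)

# Noisy observations of maintenance cost vs. age (synthetic data).
rng = np.random.default_rng(0)
age = np.arange(1, 11, dtype=float)
maint = 2_000.0 + 800.0 * age + rng.normal(0, 150, age.size)

# Least-squares fit of the maintenance model's parameters a, b.
b, a = np.polyfit(age, maint, 1)

t_opt = np.sqrt(2 * P / b)        # optimal replacement age in years
print(f"fitted a={a:.0f}, b={b:.0f}, optimal replacement ~{t_opt:.1f} years")
```

The same recipe extends to the two-variable cost models the abstract mentions, with the closed-form minimizer replaced by a numerical search.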

  5. A problem of optimization for the specific cost of installed electric power in nuclear plants

    Energy Technology Data Exchange (ETDEWEB)

    Sultan, M A; Khattab, M S [Reactors Dept. nuclear research centre, atomic energy authority, Cairo, (Egypt)

    1995-10-01

    The optimization problem analyzed in this paper is related to the thermal cycle parameters in nuclear power stations having steam generators. The optimization minimizes the specific cost of installed power with respect to the average operating saturation temperature in the station thermal cycle. The analysis considers the maximum fuel cladding temperature as a limiting factor in the optimization process, as it is related to the safe operation of the reactor. 4 figs.

  6. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Abubeker, Jewahir Ali

    2011-05-14

    This paper is devoted to the consideration of an algorithm for the sequential optimization of paths in directed graphs relative to different cost functions. The considered algorithm is based on an extension of dynamic programming which makes it possible to represent the initial set of paths, and the set of optimal paths after each application of the optimization procedure, in the form of a directed acyclic graph.
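The sequential-optimization idea (restrict to the paths optimal for the first cost function, then optimize the next cost function over that set) can be illustrated on a toy graph. The DAG, the weights, and the brute-force path enumeration below are purely illustrative; the paper's algorithm represents the path sets symbolically as DAGs rather than enumerating paths:

```python
# Toy DAG with two cost components per edge.  Stage 1 keeps only the paths
# from "s" to "t" that are optimal for cost component 0; stage 2 then
# optimizes cost component 1 over that restricted set.
edges = {
    ("s", "a"): (1, 5), ("s", "b"): (1, 1),
    ("a", "t"): (2, 1), ("b", "t"): (2, 4),
}

def all_paths(u, t, path=()):
    # Enumerate all u -> t paths (feasible only for tiny acyclic examples).
    path = path + (u,)
    if u == t:
        yield path
        return
    for (x, v) in edges:
        if x == u:
            yield from all_paths(v, t, path)

def cost(path, idx):
    return sum(edges[(u, v)][idx] for u, v in zip(path, path[1:]))

paths = list(all_paths("s", "t"))
best1 = min(cost(p, 0) for p in paths)
stage1 = [p for p in paths if cost(p, 0) == best1]   # optimal set for cost 0
best = min(stage1, key=lambda p: cost(p, 1))          # then optimize cost 1
print(best, cost(best, 0), cost(best, 1))
```

Here both paths tie on the first cost function, so the second cost function breaks the tie.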

  7. Non-Linear Transaction Costs Inclusion in Mean-Variance Optimization

    Directory of Open Access Journals (Sweden)

    Christian Johannes Zimmer

    2005-12-01

    In this article we propose a new way to include transaction costs in a mean-variance portfolio optimization. We consider brokerage fees, the bid/ask spread, and the market impact of the trade. A pragmatic algorithm is proposed which approximates the optimal portfolio, and we show that it converges in the absence of restrictions. Using Brazilian financial market data, we compare our approximation algorithm with the results of a non-linear optimizer.

  8. Integration of safety engineering into a cost optimized development program.

    Science.gov (United States)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  9. Efficient Guiding Towards Cost-Optimality in UPPAAL

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Fehnker, Ansgar; Hune, Thomas S.

    2001-01-01

    In this paper we present an algorithm for efficiently computing the minimum cost of reaching a goal state in the model of Uniformly Priced Timed Automata (UPTA). This model can be seen as a submodel of the recently suggested model of linearly priced timed automata, which extends timed automata with prices on both locations and transitions. The presented algorithm is based on a symbolic semantics of UPTA, and an efficient representation and operations based on difference bound matrices. In analogy with Dijkstra’s shortest path algorithm, we show that the search order of the algorithm can be chosen...

  10. Integrated emission management for cost optimal EGR-SCR balancing in diesels

    NARCIS (Netherlands)

    Willems, F.P.T.; Mentink, P.R.; Kupper, F.; Eijnden, E.A.C. van den

    2013-01-01

    The potential of a cost-based optimization method is experimentally demonstrated on a Euro-VI heavy-duty diesel engine. Based on the actual engine-aftertreatment state, this model-based Integrated Emission Management (IEM) strategy minimizes operational (fuel and AdBlue) costs within emission

  11. Health Cost Risk and Optimal Retirement Provision : A Simple Rule for Annuity Demand

    NARCIS (Netherlands)

    Peijnenburg, J.M.J.; Nijman, T.E.; Werker, B.J.M.

    2010-01-01

    We analyze the effect of health cost risk on optimal annuity demand and consumption/savings decisions. Many retirees are exposed to sizeable out-of-pocket medical expenses, while annuities potentially impair the ability to get liquidity to cover these costs and smooth consumption. We find that if

  12. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom); Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions such as the quadratic deviation of dose to the planning target volume (PTV), integral dose to organs-at-risk (OARs), dose-threshold and dose-volume constraints for OARs were studied. Combinations of the cost functions were also considered. The simple cost function terms such as the quadratic PTV dose and integral dose to OAR cost function terms are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint cost function terms are able to produce local minima in the example case studied. (author)
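The "configuration space analysis" procedure can be imitated in a few lines: run a downhill (local) search many times from random starting points and inspect the distribution of final cost values; more than one cluster of final costs signals local minima. The one-dimensional toy cost and the hand-rolled downhill search below are illustrative stand-ins for the beam-weight cost functions and the downhill simplex used in the paper:

```python
import random

# Toy cost with two minima (near x = -1 and x = +1); the 0.3*x term makes
# them unequal, so a trapped search ends at a visibly different final cost.
def cost(x):
    return (x**2 - 1.0)**2 + 0.3 * x

def downhill(x, step=0.1, iters=200):
    # Greedy local search: step whichever way improves; halve the step
    # when stuck.  A crude 1-D analogue of a downhill simplex.
    for _ in range(iters):
        for dx in (step, -step):
            if cost(x + dx) < cost(x):
                x += dx
                break
        else:
            step /= 2.0
    return cost(x)

random.seed(1)
finals = [round(downhill(random.uniform(-2, 2)), 2) for _ in range(200)]
clusters = sorted(set(finals))
print(clusters)   # two distinct final costs => the search space has a local minimum
```

A single cluster of final costs would indicate that a fast downhill engine is safe; multiple clusters indicate it can be trapped, as found for the dose-volume constraint terms.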

  13. Contingency Contractor Optimization Phase 3 Sustainment Cost by JCA Implementation Guide

    Energy Technology Data Exchange (ETDEWEB)

    Durfee, Justin David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Frazier, Christopher Rawls [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Arguello, Bryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bandlow, Alisa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Katherine A [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    This document provides implementation guidance for implementing personnel group FTE costs by JCA Tier 1 or 2 categories in the Contingency Contractor Optimization Tool – Engineering Prototype (CCOT-P). CCOT-P currently only allows FTE costs by personnel group to differ by mission. Changes will need to be made to the user interface inputs pages and the database

  14. Robust Optimization for Time-Cost Tradeoff Problem in Construction Projects

    Directory of Open Access Journals (Sweden)

    Ming Li

    2014-01-01

    Construction projects are generally subject to uncertainty, which influences the realization of time-cost tradeoffs in project management. This paper addresses a time-cost tradeoff problem under uncertainty, in which activities in projects can be executed in different construction modes corresponding to specified times and costs with interval uncertainty. Based on a multiobjective robust optimization method, a robust optimization model for the time-cost tradeoff problem is developed. In order to illustrate the robust model, the nondominated sorting genetic algorithm-II (NSGA-II) is modified to solve the project example. The results show that, by adjusting the time and cost robust coefficients, robust Pareto sets for the time-cost tradeoff can be obtained according to different acceptable risk levels, from which the decision maker can choose the preferred construction alternative.
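The building block of such Pareto-based searches, the non-dominance test over (time, cost) pairs, can be sketched directly. The alternatives below are made-up numbers; NSGA-II adds evolutionary search and crowding-distance selection on top of this test:

```python
# Hypothetical (time, cost) construction alternatives; both objectives are
# minimized.  The Pareto set is the time-cost tradeoff frontier.
alternatives = [(10, 90), (12, 70), (15, 60), (11, 95), (14, 65), (16, 80)]

def dominates(a, b):
    # a dominates b if it is no worse in both objectives and differs in one.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

pareto = [a for a in alternatives
          if not any(dominates(b, a) for b in alternatives)]
print(sorted(pareto))   # non-dominated alternatives only
```

From this frontier, a decision maker picks the alternative matching their acceptable risk level, exactly the final step the abstract describes.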

  15. Molecular diagnostics of interstellar shocks

    International Nuclear Information System (INIS)

    Hartquist, T.W.; Oppenheimer, M.; Dalgarno, A.

    1980-01-01

    The chemistry of molecules in shocked regions of the interstellar gas is considered and calculations are carried out for a region subjected to a shock at a velocity of 8 km s⁻¹. Substantial enhancements are predicted in the concentrations of the molecules H2S, SO, and SiO compared to those anticipated in cold interstellar clouds.

  16. Molecular diagnostics of interstellar shocks

    Science.gov (United States)

    Hartquist, T. W.; Dalgarno, A.; Oppenheimer, M.

    1980-02-01

    The chemistry of molecules in shocked regions of the interstellar gas is considered and calculations are carried out for a region subjected to a shock at a velocity of 8 km/sec. Substantial enhancements are predicted in the concentrations of the molecules H2S, SO, and SiO compared to those anticipated in cold interstellar clouds.

  17. Observational constraints on interstellar chemistry

    International Nuclear Information System (INIS)

    Winnewisser, G.

    1984-01-01

    The author points out presently existing observational constraints in the detection of interstellar molecular species and the limits they may cast on our knowledge of interstellar chemistry. The constraints which arise from the molecular side are summarised and some technical difficulties encountered in detecting new species are discussed. Some implications for our understanding of molecular formation processes are considered. (Auth.)

  19. Optimal investment strategies and hedging of derivatives in the presence of transaction costs (Invited Paper)

    Science.gov (United States)

    Muratore-Ginanneschi, Paolo

    2005-05-01

    Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
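The "holding interval" policy described here can be sketched as a simple simulation: trade back to the ideal stock fraction only when the realized fraction of wealth in stocks leaves a band around it, paying a proportional cost on the amount traded. The target fraction, band width, fee, and return distribution below are all illustrative assumptions:

```python
import random

random.seed(0)
target, band, fee = 0.6, 0.02, 0.002    # ideal fraction, half-width, cost rate
stock, cash = 0.6, 0.4                  # initial wealth split
trades = 0

for _ in range(1_000):
    stock *= 1.0 + random.gauss(0.0003, 0.01)   # multiplicative stock move
    frac = stock / (stock + cash)
    if abs(frac - target) > band:               # left the holding interval
        wealth = stock + cash
        fee_paid = fee * abs(stock - target * wealth)  # proportional cost
        stock = target * (wealth - fee_paid)           # rebalance to target
        cash = (1.0 - target) * (wealth - fee_paid)
        trades += 1

print(trades, round(stock + cash, 4))
```

Widening the band trades less often (lower costs, larger drift from the ideal); shrinking it does the reverse, which is the tradeoff the optimal interval size resolves.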

  20. Using Electromagnetic Algorithm for Total Costs of Sub-contractor Optimization in the Cellular Manufacturing Problem

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Shahriari

    2016-12-01

    In this paper, we present a non-linear binary program for optimizing a specific cost in a cellular manufacturing system under controlled production conditions. The system parameters are determined by continuous distribution functions. The aim of the presented model is to optimize the total cost imposed by sub-contractors on the manufacturing system by determining how to allocate the machines and parts to each seller. In this system, the decision maker can control the occupation level of each machine in the system. For solving the presented model, we used the electromagnetic meta-heuristic algorithm, with the Taguchi method used to determine the optimal algorithm parameters.

  1. The shutdown reactor: Optimizing spent fuel storage cost

    International Nuclear Information System (INIS)

    Pennington, C.W.

    1995-01-01

    Several studies have indicated that the most prudent way to store fuel at a shutdown reactor site safely and economically is through the use of a dry storage facility licensed under 10CFR72. While such storage is certainly safe, is it true that the dry ISFSI represents the safest and most economical approach for the utility? While no one is really able to answer that question definitively as yet, Holtec has studied this issue for some time and believes that both an economic and a safety case can be made for an optimization strategy that calls for the use of both wet and dry ISFSI storage of spent fuel at some plants. For the sake of brevity, this paper summarizes some of Holtec's findings with respect to the economics of maintaining some fuel in wet storage at a shutdown reactor. The safety issue, or more importantly the perception of safety of spent fuel in wet storage, still varies too much with the eye of the beholder, and until a more rigorous presentation of safety analyses can be made in a regulatory setting, it is not practically useful to argue about how many angels can sit on the head of a safety-related pin. Holtec is prepared to present such analyses, but this does not appear to be the proper venue. Thus, this paper simply looks at certain economic elements of a wet ISFSI at a shutdown reactor to make a prima facie case that wet storage has some attractiveness at a shutdown reactor and should not be rejected out of hand. Indeed, an optimization study at certain plants may well show the economic vitality of keeping some fuel in the pool and converting the NRC licensing coverage from 10CFR50 to 10CFR72. If the economics look attractive, then the safety issue may be confronted with a compelling interest.

  2. The environmental cost of subsistence: Optimizing diets to minimize footprints

    International Nuclear Information System (INIS)

    Gephart, Jessica A.; Davis, Kyle F.; Emery, Kyle A.; Leach, Allison M.; Galloway, James N.; Pace, Michael L.

    2016-01-01

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the environmental impacts of aquaculture versus capture fisheries, increasing in aquaculture, and shifting compositions of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study
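A miniature version of such a footprint-minimizing diet problem can be written as a linear program: minimize c·x subject to N x ≥ r and x ≥ 0. The foods, footprint coefficients, and nutrient numbers below are entirely hypothetical, and since an LP optimum lies at a vertex of the feasible region, this sketch enumerates vertices instead of calling a simplex solver:

```python
import itertools
import numpy as np

# Hypothetical instance: choose servings x (beans, fish, beef) minimizing
# total carbon footprint c @ x, subject to nutrient minima N @ x >= r, x >= 0.
c = np.array([0.8, 3.0, 27.0])          # kg CO2e per serving
N = np.array([[9.0, 20.0, 26.0],        # protein per serving
              [2.0, 1.0, 2.5]])         # iron per serving
r = np.array([50.0, 10.0])              # daily minima

# Stack all constraints as A @ x >= b (nutrients plus non-negativity).
A = np.vstack([N, np.eye(3)])
b = np.concatenate([r, np.zeros(3)])

best, best_x = np.inf, None
for rows in itertools.combinations(range(len(A)), 3):
    idx = list(rows)
    M = A[idx]
    if abs(np.linalg.det(M)) < 1e-9:    # skip degenerate intersections
        continue
    x = np.linalg.solve(M, b[idx])      # candidate vertex: 3 active constraints
    if np.all(A @ x >= b - 1e-9) and c @ x < best:   # feasible and cheaper
        best, best_x = c @ x, x
print(best_x, best)
```

Consistent with the abstract's qualitative finding, the minimized toy diet excludes the livestock product entirely. Re-running with a different cost vector (e.g. a water footprint) gives that footprint's minimum diet, and evaluating each minimum diet under the other cost vectors yields the tradeoff table the study constructs.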

  3. The environmental cost of subsistence: Optimizing diets to minimize footprints

    Energy Technology Data Exchange (ETDEWEB)

    Gephart, Jessica A.; Davis, Kyle F. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States); Emery, Kyle A. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States); University of California, Santa Barbara. Marine Science Institute, Santa Barbara, CA 93106 (United States); Leach, Allison M. [University of New Hampshire, 107 Nesmith Hall, 131 Main Street, Durham, NH, 03824 (United States); Galloway, James N.; Pace, Michael L. [University of Virginia, Department of Environmental Sciences, 291 McCormick Road, Charlottesville, VA 22904 (United States)

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the environmental impacts of aquaculture versus capture fisheries, increasing in aquaculture, and shifting compositions of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result

  4. Visualizing Interstellar's Wormhole

    Science.gov (United States)

    James, Oliver; von Tunzelmann, Eugénie; Franklin, Paul; Thorne, Kip S.

    2015-06-01

    Christopher Nolan's science fiction movie Interstellar offers a variety of opportunities for students in elementary courses on general relativity theory. This paper describes such opportunities, including: (i) At the motivational level, the manner in which elementary relativity concepts underlie the wormhole visualizations seen in the movie; (ii) At the briefest computational level, instructive calculations with simple but intriguing wormhole metrics, including, e.g., constructing embedding diagrams for the three-parameter wormhole that was used by our visual effects team and Christopher Nolan in scoping out possible wormhole geometries for the movie; (iii) Combining the proper reference frame of a camera with solutions of the geodesic equation, to construct a light-ray-tracing map backward in time from a camera's local sky to a wormhole's two celestial spheres; (iv) Implementing this map, for example, in Mathematica, Maple or Matlab, and using that implementation to construct images of what a camera sees when near or inside a wormhole; (v) With the student's implementation, exploring how the wormhole's three parameters influence what the camera sees—which is precisely how Christopher Nolan, using our implementation, chose the parameters for Interstellar's wormhole; (vi) Using the student's implementation, exploring the wormhole's Einstein ring and particularly the peculiar motions of star images near the ring, and exploring what it looks like to travel through a wormhole.

  5. Interstellar molecules and masers

    International Nuclear Information System (INIS)

    Nguyen-Q-Rieu; Guibert, J.

    1978-01-01

    The study of dense and dark clouds, in which hydrogen is mostly in molecular form, became possible with the discovery of interstellar molecules emitting at centimeter and millimeter wavelengths. The molecular lines are generally not in local thermal equilibrium (LTE). Their intensity can often be explained by invoking a population inversion mechanism. Maser emission lines due to the OH, H2O and SiO molecules are among the most intense molecular lines. The H2CO molecule, detected in absorption in front of the cold cosmic background radiation of 2.7 K, illustrates the inverse phenomenon, antimaser absorption. For a radio transition of frequency ν, the inversion rate Δn (the relative population difference between the upper and lower levels) as well as the maser gain can be determined from radio observations. In the case of the OH lines in the ²Π₃/₂, J=3/2 state, the inversion rates of approximately 1 to 2% derived from the observations are comparable with those obtained in the laboratory. The determination of the excitation mechanisms of the masers, through the statistical equilibrium and radiative transfer equations, requires knowledge of the collisional and radiative transition probabilities. A pumping model that can satisfactorily explain the radio observations of some interstellar OH clouds is discussed

  6. Dynamic Portfolio Optimization with Transaction Costs and State-Dependent Drift

    DEFF Research Database (Denmark)

    Palczewski, Jan; Poulsen, Rolf; Schenk-Hoppe, Klaus Reiner

    2015-01-01

    The problem of dynamic portfolio choice with transaction costs is often addressed by constructing a Markov chain approximation of the continuous-time price processes. Using this approximation, we present an efficient numerical method to determine optimal portfolio strategies under time- and state-dependent drift and proportional transaction costs. This scenario arises when investors have behavioral biases or the actual drift is unknown and needs to be estimated. Our numerical method solves dynamic optimal portfolio problems with an exponential utility function for time-horizons of up to 40 years. It is applied to measure the value of information and the loss from transaction costs using the indifference principle.

  7. Cost optimization of long-cycle LWR operation

    International Nuclear Information System (INIS)

    Handwerk, C.S.; Driscoll, M.J.; McMahon, M.V.; Todreas, N.E.

    1997-01-01

    The continuing emphasis on improvement of plant capacity factor, as a major means to make nuclear energy more cost competitive in the current deregulatory environment, motivates heightened interest in long intra-refueling intervals and high burnup in LWR units. This study examines the economic implications of these trends, to determine the envelope of profitable fuel management tactics. One batch management is found to be significantly more expensive than two-batch management. Parametric studies were carried out varying the most important input parameters. If ultra-high burnup can be achieved, then n = 3 or even n = 4 management may be preferable. For n = 1 or 2, economic performance declines at higher burnups, hence providing no great incentive for moving further in that direction. Values for n > 2 are also attractive because, for a given burnup target, required enrichment decreases as n increases. This study was limited to average batch burnups below 60,000 MWd/MT

  8. Control and operation cost optimization of the HISS cryogenic system

    International Nuclear Information System (INIS)

    Porter, J.; Bieser, F.; Anderson, D.

    1983-08-01

    The Heavy Ion Spectrometer System (HISS) relies upon superconducting coils of cryostable design to provide a maximum particle bending field of 3 tesla. A previous paper describes the cryogenic facility including helium refrigeration and gas management. This paper discusses a control strategy which has allowed full time unattended operation, along with significant nitrogen and power cost reductions. Reduction of liquid nitrogen consumption has been accomplished by making use of the sensible heat available in the cold exhaust gas. Measured nitrogen throughput agrees with calculations for sensible heat utilization of zero to 70%. Calculated consumption saving over this range is 40 liters per hour for conductive losses to the supports only. The measured throughput differential for the total system is higher

  9. Limited applicability of cost-effectiveness and cost-benefit analyses for the optimization of radon remedial measures

    International Nuclear Information System (INIS)

    Jiránek, Martin; Rovenská, Kateřina

    2010-01-01

    Ways of using different decision-aiding techniques for optimizing and evaluating radon remedial measures have been studied on a large set of data obtained from the remediation of 32 houses that had an original indoor radon concentration above 1,000 Bq/m³ (around 0.2% of all dwellings in the Czech Republic have a radon concentration higher than 1,000 Bq/m³). Detailed information about radon concentrations before and after remediation, the type and extent of remedial measures, and installation and operation costs were used as the input parameters for a comparison of costs and for determining the efficiencies, for a cost-benefit analysis and a cost-effectiveness analysis, in order to find out whether these criteria and techniques provide sufficient and relevant information for the improvement and optimization of remediation. The study has delivered notable new results. It is confirmed that the installation costs of remedial measures do not depend on the original level of indoor radon concentration, but on the technical state of the building. In addition, the study reveals that the efficiency of remediation does not depend on the installation costs. Each of the studied remedial measures will on average save 0.3 lives and gain 4.3 years of life. On one hand, the general decision-aiding techniques - cost-benefit analysis and cost-effectiveness analysis - lead to the conclusion that remedial measures reducing the indoor radon concentration from values above 1,000 Bq/m³ to values below the action level of 400 Bq/m³ are always acceptable and reasonable. On the other hand, these analytical techniques can neither help the designer to choose the proper remedial measure nor provide information resulting in improved remediation. (author)

  10. Modeling the lowest-cost splitting of a herd of cows by optimizing a cost function

    Science.gov (United States)

    Gajamannage, Kelum; Bollt, Erik M.; Porter, Mason A.; Dawkins, Marian S.

    2017-06-01

    Animals live in groups to defend against predation and to obtain food. However, for some animals—especially ones that spend long periods of time feeding—there are costs if a group chooses to move on before their nutritional needs are satisfied. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous for some groups of animals to split into subgroups with similar nutritional needs. We model the costs and benefits of splitting in a herd of cows using a cost function that quantifies individual variation in hunger, desire to lie down, and predation risk. We model the costs associated with hunger and lying desire as the standard deviations of individuals within a group, and we model predation risk as an inverse exponential function of the group size. We minimize the cost function over all plausible groups that can arise from a given herd and study the dynamics of group splitting. We examine how the cow dynamics and cost function depend on the parameters in the model and consider two biologically-motivated examples: (1) group switching and group fission in a herd of relatively homogeneous cows, and (2) a herd with an equal number of adult males (larger animals) and adult females (smaller animals).
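The shape of the model (within-group standard deviations of hunger and lying desire, plus a predation-risk term that decays exponentially with group size) can be sketched by brute force on a tiny herd. The parameter values and the equal weighting of the three terms are illustrative assumptions, not the paper's calibration:

```python
import math
from itertools import combinations
from statistics import pstdev

# Hypothetical herd: three "not hungry" cows and three "hungry" cows.
hunger = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
desire = [0.2, 0.1, 0.15, 0.7, 0.8, 0.75]   # desire to lie down

def group_cost(idx):
    # Conflict = within-group spread of needs; risk decays with group size.
    if not idx:
        return 0.0
    h = [hunger[i] for i in idx]
    d = [desire[i] for i in idx]
    return pstdev(h) + pstdev(d) + math.exp(-len(idx))

cows = range(len(hunger))
best = None
for k in range(len(hunger) // 2 + 1):       # all two-way splits (incl. no split)
    for grp in combinations(cows, k):
        rest = tuple(i for i in cows if i not in grp)
        total = group_cost(grp) + group_cost(rest)
        if best is None or total < best[0]:
            best = (total, grp, rest)
print(best[1], best[2])   # cheapest split of the herd
```

On this toy herd, the cheapest configuration splits the cows by nutritional state, the fission behavior the model is built to capture.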

  11. Production of Low Cost Carbon-Fiber through Energy Optimization of Stabilization Process

    Directory of Open Access Journals (Sweden)

    Gelayol Golkarnarenji

    2018-03-01

    To produce high-quality and low-cost carbon fiber-based composites, optimization of the carbon fiber production process and its properties is one of the main keys. The stabilization process is the most important step in carbon fiber production; it consumes a large amount of energy, and its optimization can reduce the cost to a large extent. In this study, two intelligent optimization techniques, namely Support Vector Regression (SVR) and Artificial Neural Network (ANN), were studied and compared using a limited dataset, to predict a physical property (density) of oxidatively stabilized PAN fiber (OPF) in the second zone of a stabilization oven within a carbon fiber production line. The results were then used to optimize the energy consumption in the process. The case study can be beneficial to chemical industries involved in carbon fiber manufacturing, for assessing and optimizing different stabilization process conditions at large.

  12. Multi-objective optimization approach for cost management during product design at the conceptual phase

    Science.gov (United States)

    Durga Prasad, K. G.; Venkata Subbaiah, K.; Narayana Rao, K.

    2014-03-01

    The effective cost management during the conceptual design phase of a product is essential to develop a product with minimum cost and desired quality. The integrated methodologies of quality function deployment (QFD), value engineering (VE) and target costing (TC) can be applied to the continuous improvement of any product during product development. To optimize customer satisfaction and the total cost of a product, a mathematical model is established in this paper. This model integrates QFD, VE and TC under a multi-objective optimization framework. A case study on a domestic refrigerator is presented to show the performance of the proposed model. Goal programming is adopted to attain the goals of maximum customer satisfaction and minimum product cost.
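A minimal goal-programming step of the kind the abstract describes might look like the following. The attribute levels, satisfaction scores, costs, goals, and the weighting of deviations are all hypothetical:

```python
from itertools import product

# Each design attribute has candidate levels with a (satisfaction, cost)
# pair; goal programming picks the combination whose totals deviate least
# from the satisfaction and cost goals.
attributes = [
    [(3, 40), (5, 60)],          # (satisfaction, cost) options, attribute 1
    [(2, 30), (4, 55)],          # attribute 2
    [(4, 50), (6, 90)],          # attribute 3
]
goal_satisfaction, goal_cost = 13, 150

def deviation(combo):
    s = sum(option[0] for option in combo)
    c = sum(option[1] for option in combo)
    # Penalize satisfaction shortfall and cost overshoot (weights illustrative).
    return max(goal_satisfaction - s, 0) + max(c - goal_cost, 0) / 10.0

best = min(product(*attributes), key=deviation)
print(best, deviation(best))
```

Real QFD/VE/TC models replace this enumeration with a solver and derive the scores from customer-requirement weightings, but the deviation-minimizing objective is the same.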

  13. Building a Model for Optimization of Informational-Analytical Ensuring of Cost Management of Industrial Enterprise

    Directory of Open Access Journals (Sweden)

    Lisovskyi Ihor V

    2015-09-01

    The article examines peculiarities of building a model of informational-analytical optimization of cost management. The main sources of information, together with approaches to cost management of industrial enterprises, have been identified. In order to ensure the successful operation of an enterprise under growing manifestations of crisis, continuous improvement of the enterprise management system, along with the most important elements necessary for its normal functioning, should be carried out. One of these important elements is the costs of the enterprise. Accordingly, for effective cost management, the most appropriate management approaches and tools must be used, based on proper informational-analytical support of all processes. The article proposes an optimization model of informational-analytical ensuring of cost management of industrial enterprises, which will serve as a ground for more informed and economically feasible solutions. A combination of best practices and tools to improve the efficiency of enterprise management has been proposed.

  14. Risk-Assessment Score and Patient Optimization as Cost Predictors for Ventral Hernia Repair.

    Science.gov (United States)

    Saleh, Sherif; Plymale, Margaret A; Davenport, Daniel L; Roth, John Scott

    2018-04-01

    Ventral hernia repair (VHR) is associated with complications that significantly increase healthcare costs. This study explores the associations between hospital costs for VHR and surgical complication risk-assessment scores, need for cardiac or pulmonary evaluation, and smoking or obesity counseling. An IRB-approved retrospective study of patients having undergone open VHR over 3 years was performed. The Ventral Hernia Risk Score (VHRS) for surgical site occurrence and surgical site infection, and the Ventral Hernia Working Group grade, were calculated for each case. Also recorded were preoperative cardiology or pulmonary evaluations, smoking cessation and weight reduction counseling, and patient goal achievement. Hospital costs were obtained from the cost accounting system for the VHR hospitalization, stratified by major clinical cost drivers. Univariate regression analyses were used to compare the predictive power of the risk scores. Multivariable analysis was performed to develop a cost prediction model. The mean cost of index VHR hospitalization was $20,700. Total and operating room costs correlated with increasing CDC wound class, VHRS surgical site infection score, VHRS surgical site occurrence score, American Society of Anesthesiologists class, and Ventral Hernia Working Group grade (all p < 0.05), and these factors explained a significant share of the variance in costs. Patient optimization significantly reduced direct and operating room costs (p < 0.05). Cardiac evaluation was associated with increased costs. Ventral hernia repair hospital costs are more accurately predicted by CDC wound class than by VHR risk scores. A straightforward 6-factor model predicted most cost variation for VHR. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  15. Cost optimal building performance requirements. Calculation methodology for reporting on national energy performance requirements on the basis of cost optimality within the framework of the EPBD

    Energy Technology Data Exchange (ETDEWEB)

    Boermans, T.; Bettgenhaeuser, K.; Hermelink, A.; Schimschar, S. [Ecofys, Utrecht (Netherlands)

    2011-05-15

    On the European level, the principles for the requirements for the energy performance of buildings are set by the Energy Performance of Buildings Directive (EPBD). Dating from December 2002, the EPBD has set a common framework from which the individual Member States in the EU developed or adapted their individual national regulations. The EPBD in 2008 and 2009 underwent a recast procedure, with final political agreement having been reached in November 2009. The new Directive was then formally adopted on May 19, 2010. Among other clarifications and new provisions, the EPBD recast introduces a benchmarking mechanism for national energy performance requirements for the purpose of determining cost-optimal levels to be used by Member States for comparing and setting these requirements. The previous EPBD set out a general framework to assess the energy performance of buildings and required Member States to define maximum values for energy delivered to meet the energy demand associated with the standardised use of the building. However it did not contain requirements or guidance related to the ambition level of such requirements. As a consequence, building regulations in the various Member States have been developed by the use of different approaches (influenced by different building traditions, political processes and individual market conditions) and resulted in different ambition levels where in many cases cost optimality principles could justify higher ambitions. The EPBD recast now requests that Member States shall ensure that minimum energy performance requirements for buildings are set 'with a view to achieving cost-optimal levels'. The cost optimum level shall be calculated in accordance with a comparative methodology. The objective of this report is to contribute to the ongoing discussion in Europe around the details of such a methodology by describing possible details on how to calculate cost optimal levels and pointing towards important factors and

  16. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    Science.gov (United States)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using Eco-indicator 99. A numerical example is given to show the implementation of the model, solved using the OptQuest solver of Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize both the production cost and the environmental impact.
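A minimal sketch of such a bi-objective cutting-parameter search is shown below, using a weighted sum of cost and eco-impact over a grid of cutting speed and feed rate with a surface-roughness constraint; the coefficient values and model forms are placeholder assumptions, not the paper's calibrated CNC model or its OptQuest setup.

```python
# Illustrative weighted-sum search over cutting parameters (assumed models).

def production_cost(v, f):          # v: cutting speed (m/min), f: feed (mm/rev)
    machining_time = 100.0 / (v * f)            # assumed removal-rate model
    tool_wear = 1e-4 * v ** 1.5                 # assumed wear-cost term
    return 0.5 * machining_time + tool_wear

def eco_impact(v, f):               # assumed eco-indicator-style energy score
    return 0.02 * v / f

def roughness(v, f):                # assumed surface-roughness model (micrometers)
    return 125.0 * f ** 2 / v

W_COST, W_ECO = 0.6, 0.4            # assumed objective weights
best = None
for v in range(50, 301, 10):
    for f in (0.1, 0.15, 0.2, 0.25, 0.3):
        if roughness(v, f) > 3.2:               # Ra limit constraint
            continue
        score = W_COST * production_cost(v, f) + W_ECO * eco_impact(v, f)
        if best is None or score < best[0]:
            best = (score, v, f)
print(best)  # (weighted score, cutting speed, feed rate)
```

A real model would normalize the two objectives before weighting, or trace the whole Pareto front by sweeping the weights.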

  17. A system-level cost-of-energy wind farm layout optimization with landowner modeling

    International Nuclear Information System (INIS)

    Chen, Le; MacDonald, Erin

    2014-01-01

    Highlights: • We model the role of landowners in determining the success of wind projects. • A cost-of-energy (COE) model with realistic landowner remittances is developed. • These models are included in a system-level wind farm layout optimization. • Basic verification indicates the optimal COE is in-line with real-world data. • Land plots crucial to a project’s success can be identified with the approach. - Abstract: This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally-sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.

  18. Interstellar dust and extinction

    International Nuclear Information System (INIS)

    Mathis, J.S.

    1990-01-01

    It is noted that the term interstellar dust refers to materials with rather different properties, and that the mean extinction law of Seaton (1979) or Savage and Mathis (1979) should be replaced by the expression given by Cardelli et al. (1989), using the appropriate value of total-to-selective extinction. The older laws were appropriate for the diffuse ISM but dust in clouds differs dramatically in its extinction law. Dust is heavily processed while in the ISM by being included within clouds and cycled back into the diffuse ISM many times during its lifetime. Hence, grains probably reflect only a trace of their origin, although meteoritic inclusions with isotopic anomalies demonstrate that some tiny particles survive intact from a supernova origin to the present. 186 refs

  19. The diffuse interstellar medium

    Science.gov (United States)

    Cox, Donald P.

    1990-01-01

    The last 20 years of efforts to understand the diffuse ISM are reviewed, with recent changes to fundamental aspects highlighted. Attention is given to the interstellar pressure and its components, the weight of the ISM, the midplane pressure contributions, and pressure contributions at 1 kpc. The velocity dispersions, cosmic-ray pressure, and magnetic field pressure that can be expected for gas in a high magnetic field environment are addressed. The intercloud medium is described, with reference to the work of Cox and Slavin (1989). Various caveats are discussed and a number of areas for future investigation are identified. Steps that could be taken toward a successful phase segregation model are discussed.

  20. Cost analysis in interventional radiology-A tool to optimize management costs

    International Nuclear Information System (INIS)

    Clevert, D.-A.; Stickel, M.; Jung, E.M.; Reiser, M.; Rupp, N.

    2007-01-01

    Objective: The objective of the study was to analyze methods to reduce cost in interventional radiology departments by reorganizing procurement. Materials and methods: All products used in the Department of Interventional Radiology were inventoried. An ABC analysis was completed, and A-products (high-value and high-turnover products) underwent an XYZ analysis, which predicted demand on the basis of ordering frequency. Criteria for a procurement strategy for the different material categories were then fixed. The net working capital (NWC) was calculated using an interest rate of 8%/year. Results: Total annual material turnover was €353,000. The value of all A-products determined by the inventory was €260,000. Changes in the A-product procurement strategy tapped a cost reduction potential of €14,500/year. The resulting total saving was €17,200. Improved stores management added another €37,500. The total cost cut of €52,000 is equivalent to 14.7% of annual expenses. Conclusion: A flexible procurement strategy helps to reduce the storage and capital tie-up costs of A-products in interventional radiology without affecting the quality of service provided to patients.
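The ABC classification step can be sketched as follows: rank items by annual turnover value and classify the top roughly 80% of cumulative value as A, the next 15% as B, and the remainder as C. The item names and values are invented for illustration, not the department's actual inventory.

```python
# Minimal ABC-analysis sketch with illustrative item turnover values (in €).
items = {"stent": 90_000, "catheter": 60_000, "guidewire": 30_000,
         "contrast": 12_000, "dressing": 5_000, "tape": 3_000}

total = sum(items.values())
classes, cum = {}, 0.0
# Walk items in descending value order, accumulating the share of total value.
for name, value in sorted(items.items(), key=lambda kv: -kv[1]):
    cum += value
    share = cum / total
    classes[name] = "A" if share <= 0.80 else ("B" if share <= 0.95 else "C")
print(classes)
```

A-products would then be the candidates for the XYZ demand-predictability analysis and a tighter procurement strategy.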

  1. Interstellar scattering and resolution limitations

    International Nuclear Information System (INIS)

    Dennison, B.

    1987-01-01

    Density irregularities in both the interplanetary medium and the ionized component of the interstellar medium scatter radio waves, resulting in limitations on the achievable resolution. Interplanetary scattering (IPS) is weak for most observational situations, and in principle the resulting phase corruption can be corrected for when observing with sufficiently many array elements. Interstellar scattering (ISS), on the other hand, is usually strong at frequencies below about 8 GHz, in which case intrinsic structure information over a range of angular scales is irretrievably lost. With the earth-space baselines now planned, it will be possible to search directly for interstellar refraction, which is suspected of modulating the fluxes of background sources. 14 references

  2. The distribution of interstellar dust

    International Nuclear Information System (INIS)

    Clocchiatti, A.; Marraco, H.G.

    1986-01-01

    We propose the interstellar matter structural function as a tool to derive the features of the interstellar dust distribution. We study this function by solving some ideal dust distribution models. We then describe the method used to find a reliable computing algorithm for the observational case. Finally, we describe the steps to build a model of the interstellar matter composed of spherically symmetric clouds. The density distribution for each of these clouds is D(r) = D0·exp[-(r/r0)²]. The preliminary results obtained are summarised. (author)

  3. Recent interstellar molecular line work

    International Nuclear Information System (INIS)

    Winnewisser, G.

    1975-01-01

    A summary of recent interstellar molecular line work is presented. Transitions of the following molecules have been detected in Sgr B2: vinyl cyanide (H2CCHCN), formic acid (HCOOH), dimethyl ether ((CH3)2O), and isotopically labelled cyanoacetylene (13C): HC13CCN and HCC13CN. The data on cyanoacetylene give an upper limit to the abundance ratio 12C/13C of 36 ± 5. A short discussion of interstellar chemistry leads to the conclusion that hydrocarbons such as acetylene (HCCH), ethylene (H2CCH2), and ethane (H3CCH3) should be present in interstellar clouds. 13 refs

  4. Four Interstellar Dust Candidates from the Stardust Interstellar Dust Collector

    Science.gov (United States)

    Westphal, A. J.; Allen, C.; Bajt, S.; Bechtel, H. A.; Borg, J.; Brenker, F.; Bridges, J.; Brownlee, D. E.; Burchell, M.; Burghammer, M.

    2011-01-01

    In January 2006, the Stardust sample return capsule returned to Earth bearing the first solid samples from a primitive solar system body, Comet 81P/Wild 2, and a collector dedicated to the capture and return of contemporary interstellar dust. Both collectors were approx. 0.1 sq m in area and were composed of aerogel tiles (85% of the collecting area) and aluminum foils. The Stardust Interstellar Dust Collector (SIDC) was exposed to the interstellar dust stream for a total exposure factor of 20 sq m · day. The Stardust Interstellar Preliminary Examination (ISPE) is a consortium-based project to characterize the collection using nondestructive techniques. The goals and restrictions of the ISPE are described, and a summary of the analytical techniques is given.

  5. Projected reduction in healthcare costs in Belgium after optimization of iodine intake: impact on costs related to thyroid nodular disease.

    Science.gov (United States)

    Vandevijvere, Stefanie; Annemans, Lieven; Van Oyen, Herman; Tafforeau, Jean; Moreno-Reyes, Rodrigo

    2010-11-01

    Several surveys in the last 50 years have repeatedly indicated that Belgium is affected by mild iodine deficiency. Within the framework of the national food and health plan in Belgium, a selective, progressive, and monitored strategy was proposed in 2009 to optimize iodine intake. The objective of the present study was to perform a health economic evaluation of the consequences of inadequate iodine intake in Belgium, focusing on undisputed and measurable health outcomes such as thyroid nodular disease and its associated morbidity (hyperthyroidism). For the estimation of direct, indirect, medical, and nonmedical costs related to thyroid nodular diseases in Belgium, data from the Federal Public Service of Public Health, Food Chain Safety and Environment, the National Institute for Disease and Disability Insurance (RIZIV/INAMI), the Information Network about the prescription of reimbursable medicines (FARMANET), Intercontinental Marketing Services, and expert opinions were used. These costs translate into savings after implementation of the iodization program and are defined as costs due to thyroid nodular disease throughout the article. Costs related to the iodization program are referred to as program costs. Only figures dating from before the start of the intervention were exploited. Only adult and elderly people (≥18 years) were taken into account in this study because thyroid nodular diseases predominantly affect this age group. The yearly costs due to thyroid nodular diseases caused by mild iodine deficiency in the Belgian adult population are ∼€38 million. It is expected that the iodization program will result in additional costs of ∼€54,000 per year and decrease the prevalence of thyroid nodular diseases by 38% after a 4-5-year period. The net savings after establishment of the program are therefore estimated to be at least €14 million a year. Optimization of iodine intake in Belgium should be quite cost effective, if only considering its impact on
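A back-of-the-envelope check of the abstract's figures (€38 million annual disease cost, a 38% prevalence reduction, €54,000 yearly program cost) reproduces the stated net savings:

```python
# Values taken directly from the abstract above.
annual_cost = 38_000_000          # € yearly cost of nodular disease from mild deficiency
prevalence_reduction = 0.38       # expected decrease after a 4-5-year period
program_cost = 54_000             # € yearly cost of the iodization program

gross_savings = annual_cost * prevalence_reduction
net_savings = gross_savings - program_cost
print(round(net_savings))  # ≈ €14.4 million, consistent with "at least €14 million"
```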

  6. Interstellar dust within the life cycle of the interstellar medium

    OpenAIRE

    Demyk K.

    2012-01-01

    Cosmic dust is omnipresent in the Universe. Its presence influences the evolution of the astronomical objects which in turn modify its physical and chemical properties. The nature of cosmic dust, its intimate coupling with its environment, constitute a rich field of research based on observations, modelling and experimental work. This review presents the observations of the different components of interstellar dust and discusses their evolution during the life cycle of the interstellar medium.

  7. Optimal ordering quantities for substitutable deteriorating items under joint replenishment with cost of substitution

    Science.gov (United States)

    Mishra, Vinod Kumar

    2017-09-01

    In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model, the inventory level of both items is depleted by demand and deterioration, and when an item is out of stock, its demand is partially fulfilled by the other item; all unsatisfied demand is lost. Each substituted item incurs a cost of substitution, and the demand and deterioration rates are considered deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over the model without substitution.
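As a toy illustration of minimizing total inventory cost over a joint replenishment cycle, the sketch below grid-searches the cycle length T for two items with constant demand and deterioration; the parameter values and the simplified cost-rate expression are assumptions for illustration, not the paper's formulation (which also models substitution).

```python
# Toy joint-replenishment sketch: find the cycle length T minimizing the
# per-time cost of ordering + holding + deterioration for two items.

D1, D2 = 100.0, 80.0         # constant demand rates (units/time)
H1, H2 = 0.5, 0.6            # holding cost per unit per time
THETA1, THETA2 = 0.05, 0.04  # deterioration rates
K = 50.0                     # joint ordering cost per cycle
C1, C2 = 2.0, 2.5            # unit purchase costs

def cycle_cost_rate(T):
    # Approximate per-time cost: average inventory ~ D*T/2 for each item.
    holding = (H1 * D1 + H2 * D2) * T / 2
    deterioration = (C1 * THETA1 * D1 + C2 * THETA2 * D2) * T / 2
    return K / T + holding + deterioration

# Grid search T in (0, 5] with step 0.01.
T_best = min((t / 100 for t in range(1, 501)), key=cycle_cost_rate)
print(T_best, round(cycle_cost_rate(T_best), 2))
```

With these numbers the cost rate is 50/T + 58T, whose continuous minimizer sqrt(50/58) ≈ 0.93 matches the grid result; joint ordering shares the single setup cost K across both items.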

  8. Enabling the First Interstellar Missions

    Science.gov (United States)

    Lubin, P.

    2017-12-01

    All propulsion systems that leave the Earth are based on chemical reactions. Chemical reactions, at best, have an efficiency relative to rest mass of 10^-10 (about 1 eV per bond). All the mass in the universe converted to chemical reactions would not propel even a single proton to relativistic speeds. While chemistry will get us to Mars, it will not allow interstellar capability in any reasonable mission time. Barring new physics, we are left with few realistic solutions. None of our current propulsion systems, including nuclear, are capable of the relativistic speeds needed for exploring the many nearby stellar systems and exoplanets. However, recent advances in photonics and directed energy systems now allow us to realize what was, only a decade ago, simply science fiction: the ability to seriously conceive of and plan for relativistic flight. Systems ranging from fully functional gram-level wafer-scale spacecraft capable of speeds greater than c/4, which could reach the nearest star in 20 years, to spacecraft for large missions capable of supporting human life, with masses of more than 10^5 kg (100 tons) for rapid interplanetary transit at speeds greater than 1000 km/s, can be realized. With this technology spacecraft can be propelled to speeds currently unimaginable. Photonics, like electronics, and unlike chemical propulsion, is an exponential technology with a current doubling time of about 20 months. This is the key. The cost of such a system is amortized over the essentially unlimited number of launches. In addition, the same photon driver can be used for many other purposes, including beamed energy to power high-Isp ion engines, remote asteroid composition analysis, and planetary defense. This would be a profound change in human capability with enormous implications. Known as Starlight, we are now in a NASA Phase II study. The FY 2017 congressional appropriations request directs NASA to study the feasibility of an interstellar mission to coincide with the 100th

  9. IMAGINE: Interstellar MAGnetic field INference Engine

    Science.gov (United States)

    Steininger, Theo

    2018-03-01

    IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.

  10. Central Plant Optimization for Waste Energy Reduction (CPOWER). ESTCP Cost and Performance Report

    Science.gov (United States)

    2016-12-01

    Plant equipment is typically sized and operated to meet all demands, and not necessarily for fuel economy or energy efficiency; plant operators run the equipment according to a pre-set, fixed strategy. Site-specific optimal operating strategies were developed for the heat exchanger (based on the site protocol) and for the chilled-water thermal energy storage tank served by the central plant. The hypothesis tested was that optimized operation reduces wasted energy and energy costs by smart …

  11. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Mahayni, Malek A.

    2011-07-01

    Finding optimal paths in directed graphs is a wide area of research that has received much attention in theoretical computer science due to its importance in many applications (e.g., computer networks and road maps). Many algorithms have been developed to solve the optimal path problem on different kinds of graphs. An algorithm that solves the problem of path optimization in directed graphs relative to different cost functions is described in [1]. It follows an approach extended from dynamic programming, as it solves the problem sequentially and works on directed graphs with positive weights and no loop edges. The aim of this thesis is to implement and evaluate that algorithm to find the optimal paths in directed graphs relative to two different cost functions ( , ). A possible interpretation of a directed graph is a network of roads, so the weights for the first function represent the lengths of roads, whereas the weights for the second function represent a constraint on the width or weight of a vehicle. The optimization aim for these two functions is to minimize the cost relative to the first function and maximize the constraint value associated with the second function. This thesis also includes finding and proving the relation between the two different cost functions ( , ): when given a value of one function, we can find the best possible value for the other function. This relation is proven theoretically and also implemented and experimented with using Matlab® [2].
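One common way to optimize paths sequentially with respect to two such cost functions (a length to minimize and a width constraint to maximize) is sketched below: first find the best achievable bottleneck width, then run a length-minimizing search restricted to edges meeting that width. The small graph and both searches are illustrative, not the thesis's algorithm from [1].

```python
import heapq

# Each edge carries (neighbor, length, width); a path's width is the minimum
# edge width along it (the bottleneck). Graph values are invented.
edges = {
    "A": [("B", 4, 10), ("C", 2, 5)],
    "B": [("D", 5, 10)],
    "C": [("D", 1, 5)],
    "D": [],
}

def widest_bottleneck(src, dst):
    # Dijkstra-style search maximizing the minimum edge width on the path.
    best = {src: float("inf")}
    heap = [(-best[src], src)]
    while heap:
        neg_w, u = heapq.heappop(heap)
        if -neg_w < best.get(u, -1):
            continue  # stale heap entry
        for v, _, width in edges[u]:
            cand = min(-neg_w, width)
            if cand > best.get(v, -1):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best[dst]

def shortest_with_min_width(src, dst, min_width):
    # Ordinary Dijkstra restricted to edges of width >= min_width.
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, length, width in edges[u]:
            if width < min_width:
                continue
            if d + length < dist.get(v, float("inf")):
                dist[v] = d + length
                heapq.heappush(heap, (d + length, v))
    return dist[dst]

w = widest_bottleneck("A", "D")
print(w, shortest_with_min_width("A", "D", w))  # 10 9
```

Here A-C-D is shorter (length 3) but its bottleneck width is only 5, so the sequential procedure prefers A-B-D (width 10, length 9).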

  12. Cost efficiency and optimal scale of electricity distribution firms in Taiwan: An application of metafrontier analysis

    International Nuclear Information System (INIS)

    Huang, Y.-J.; Chen, K.-H.; Yang, C.-H.

    2010-01-01

    This paper analyzes the cost efficiency and optimal scale of Taiwan's electricity distribution industry. Due to the substantial difference in network density, firms may differ widely in production technology. We employ the stochastic metafrontier approach to estimate the cost efficiency of 24 distribution units during the period 1997-2002. Empirical results find that the average cost efficiency is overestimated using the traditional stochastic frontier model, especially for low density regions. The average cost efficiency of the high density group is significantly higher than that of the low density group as it benefits from network economies. This study also calculates both short-term and long-term optimal scales of electricity distribution firms, lending policy implications for the deregulation of the electricity distribution industry.

  13. A Cost Optimized Fully Sustainable Power System for Southeast Asia and the Pacific Rim

    OpenAIRE

    Ashish Gulagi; Dmitrii Bogdanov; Christian Breyer

    2017-01-01

    In this paper, a cost optimal 100% renewable energy based system is obtained for Southeast Asia and the Pacific Rim region for the year 2030 on an hourly resolution for the whole year. For the optimization, the region was divided into 15 sub-regions and three different scenarios were set up based on the level of high voltage direct current grid connections. The results obtained for a total system levelized cost of electricity showed a decrease from 66.7 €/MWh in a decentralized scenario to 63...

  14. Implementation of the cost-optimal methodology according to the EPBD recast

    DEFF Research Database (Denmark)

    Wittchen, Kim Bjarne; Thomsen, Kirsten Engelund

    2012-01-01

    The EPBD recast states that Member States (MS) must ensure that minimum energy performance requirements for buildings are set “with a view to achieving cost-optimal levels”. The cost-optimal level must be calculated in accordance with a comparative methodology.

  15. Riddling bifurcation and interstellar journeys

    International Nuclear Information System (INIS)

    Kapitaniak, Tomasz

    2005-01-01

    We show that riddling bifurcation which is characteristic for low-dimensional attractors embedded in higher-dimensional phase space can give physical mechanism explaining interstellar journeys described in science-fiction literature

  16. Cost-optimal power system extension under flow-based market coupling

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon; Jaegemann, Cosima; Lindenberger, Dietmar [Koeln Univ. (Germany). Energiewirtschaftliches Inst.; Brown, Tom; Cherevatskiy, Stanislav; Troester, Eckehard [Energynautics GmbH, Langen (Germany)

    2013-05-15

    Electricity market models, implemented as dynamic programming problems, have been applied widely to identify possible pathways towards a cost-optimal and low-carbon electricity system. However, the joint optimization of generation and transmission remains challenging, mainly because different characteristics and rules apply to commercial and physical exchanges of electricity in meshed networks. This paper presents a methodology that allows power generation and transmission infrastructures to be optimized jointly through an iterative approach based on power transfer distribution factors (PTDFs). As PTDFs are linear representations of the physical load flow equations, they can be implemented in a linear programming environment suitable for large-scale problems. The algorithm iteratively updates the PTDFs when grid infrastructure is modified by cost-optimal extension and thus yields an optimal solution with a consistent representation of physical load flows. The method is first demonstrated on a simplified three-node model, where it is found to be robust and convergent. It is then applied to the European power system in order to find its cost-optimal development under the prescription of strongly decreasing CO2 emissions until 2050.

  17. Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions

    KAUST Repository

    Odat, Enas M.

    2011-05-01

    The purpose of this dissertation is to present a methodology to model the global sequence alignment problem as a directed acyclic graph, which helps to extract all possible optimal alignments. Moreover, a mechanism to sequentially optimize the sequence alignment problem relative to different cost functions is suggested. Sequence alignment is especially important in computational biology, where it is used to find evolutionary relationships between biological sequences. Many algorithms have been developed to solve this problem. The most famous algorithms are Needleman-Wunsch and Smith-Waterman, which are based on dynamic programming. In dynamic programming, a problem is divided into a set of overlapping subproblems, the solution of each subproblem is found, and finally the solutions to these subproblems are combined into a final solution. In this thesis it has been proved that for two sequences of length m and n over a fixed alphabet, the suggested optimization procedure requires O(mn) arithmetic operations per cost function on a single-processor machine. The algorithm has been simulated using the C#.NET programming language and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, depending also on the cost function that is used. Finally, the number of executed operations increases polynomially as the sequence length increases linearly.
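The dynamic-programming idea behind Needleman-Wunsch can be sketched in a few lines; the scoring scheme below (match +1, mismatch -1, gap -1) is a common textbook choice, not necessarily the one used in the thesis, and the sketch returns only the optimal score rather than enumerating all optimal alignments.

```python
# Minimal Needleman-Wunsch global alignment score in O(mn) table fills.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    # First row/column: aligning a prefix against nothing costs only gaps.
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    # Each cell takes the best of diagonal (match/mismatch) or a gap move.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[m][n]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```

Enumerating every optimal alignment, as the thesis does, amounts to tracing all maximizing back-pointer paths through this table, which is where the DAG view comes in.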

  18. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    Science.gov (United States)

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of the optimization reveal that a good reduction in both operating cost and environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
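At the core of all these multi-objective algorithms is the Pareto-dominance test. A minimal non-dominated filter for bi-objective minimization (operating cost, LCA impact) is sketched below on invented points, not the DWPP simulator's outputs.

```python
# Candidate solutions as (operating cost, LCA impact), both to be minimized.
points = [(10, 8), (9, 9), (12, 5), (11, 7), (9, 12), (13, 4)]

def dominates(p, q):
    # p dominates q if p is no worse in every objective and better in one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# Keep only points not dominated by any other point: the Pareto front.
pareto = [p for p in points if not any(dominates(q, p) for q in points)]
print(sorted(pareto))
```

Algorithms such as NSGA-II repeatedly apply this test (in its non-dominated sorting step) to rank a whole population rather than a single front.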

  19. Heat exchanger inventory cost optimization for power cycles with one feedwater heater

    International Nuclear Information System (INIS)

    Qureshi, Bilal Ahmed; Antar, Mohamed A.; Zubair, Syed M.

    2014-01-01

    Highlights: • Cost optimization of heat exchanger inventory in power cycles is investigated. • Analysis for an endoreversible power cycle with an open feedwater heater is shown. • Different constraints on the power cycle are investigated. • The constant heat addition scenario resulted in the lowest value of the cost function. - Abstract: Cost optimization of heat exchanger inventory in power cycles with one open feedwater heater is undertaken. In this regard, a thermoeconomic analysis for an endoreversible power cycle with an open feedwater heater is shown. The scenarios of constant heat rejection and addition rates, constant power, as well as constant rate of heat transfer in the open feedwater heater are studied. All cost functions displayed minima with respect to the high-side absolute temperature ratio (θ1). In this case, the effects of the Carnot temperature ratio (Φ1), the absolute temperature ratio (ξ) and the phase-change absolute temperature ratio for the feedwater heater (Φ2) are qualitatively the same. Furthermore, the constant heat addition scenario resulted in the lowest value of the cost function. For all cost functions, the smaller the value of the phase-change absolute temperature ratio for the feedwater heater (Φ2), the lower the cost at the minima. As the feedwater heater to hot end unit cost ratio decreases, the minimum total conductance required increases.

  20. Sequential Optimization of Global Sequence Alignments Relative to Different Cost Functions

    KAUST Repository

    Odat, Enas M.

    2011-01-01

    The algorithm has been simulated using the C#.Net programming language and a number of experiments have been done to verify the proved statements. The results of these experiments show that the number of optimal alignments is reduced after each step of optimization. Furthermore, it has been verified that as the sequence length increases linearly, the number of optimal alignments increases exponentially, at a rate that also depends on the cost function used. Finally, the number of executed operations increases polynomially as the sequence length increases linearly.

  1. The Interstellar Conspiracy

    Science.gov (United States)

    Johnson, Les; Matloff, Gregory L.

    2005-01-01

    If we were designing a human-carrying starship that could be launched in the not-too-distant future, it would almost certainly not use a warp drive to instantaneously bounce around the universe, as is done in Isaac Asimov's classic Foundation series or in episodes of Star Trek or Star Wars. Sadly, those starships that seem to be within technological reach could not even travel at high relativistic speeds, as does the interstellar ramjet in Poul Anderson's Tau Zero. Warp speeds seem to be well outside the realm of currently understood physical law; proton-fusing ramjets may never be technologically feasible. Perhaps fortunately in our terrorist-plagued world, the economics of antimatter may never be attractive for large-scale starship propulsion. But interstellar travel will be possible within a few centuries, although it will certainly not be as fast as we might prefer. If humans learn how to hibernate, perhaps we will sleep our way to the stars, as do the crew in A. E. van Vogt's Far Centaurus. However, as discussed in a landmark paper in The Journal of the British Interplanetary Society, the most feasible approach to transporting a small human population to the planets (if any) of Alpha Centauri is the worldship. Such craft have often been featured in science fiction. See for example Arthur C. Clarke's Rendezvous with Rama, and Robert A. Heinlein's Orphans of the Sky. Worldships are essentially mobile versions of the O'Neill free-space habitats. Constructed mostly from lunar and/or asteroidal materials, these solar-powered, multi-kilometer-dimension structures could house 10,000 to 100,000 humans in Earth-approximating environments. Artificial gravity would be provided by habitat rotation, and cosmic ray shielding would be provided by passive methods, such as habitat atmosphere and mass shielding, or magnetic fields. A late 21st century space-habitat venture might support itself economically by constructing large solar-powered satellites to beam energy back to

  2. Optimal replacement time estimation for machines and equipment based on cost function

    Directory of Open Access Journals (Sweden)

    J. Šebo

    2013-01-01

    The article deals with a multidisciplinary issue of estimating the optimal replacement time for machines. The categories of machines considered, for which the optimization method is usable, are those of metallurgical and engineering production. Different models of the cost function are considered (with both one and two variables). Parameters of the models were calculated through the least squares method. Testing shows that all the models fit well enough, so the simpler models are sufficient for estimating the optimal replacement time. In addition to testing the models, we developed a method (tested on a selected simple model) which enables us, in real time and with a limited data set, to indicate the optimal replacement time. The indicated time moment is close enough to the optimal replacement time t*.
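As a toy instance of such a one-variable cost model, assume an acquisition cost A and a linearly rising maintenance-cost rate m(t) = a + b·t; minimizing the average cost per unit time then has a closed form. The model and all parameter values here are hypothetical, not the article's fitted ones:

```python
import math

def optimal_replacement_time(acquisition, a, b):
    """Minimize average cost per year g(t) = (A + a*t + b*t**2/2) / t.

    Assumes a linearly rising maintenance-cost rate m(t) = a + b*t,
    so cumulative maintenance cost is a*t + b*t**2/2.
    Setting dg/dt = 0 gives the classic result t* = sqrt(2*A/b).
    """
    return math.sqrt(2.0 * acquisition / b)
```

For example, with A = 10000, a = 500 and b = 80 (arbitrary currency units), the machine should be replaced roughly every 15.8 years; the average-cost curve is flat near t*, which is why the article's real-time indication can tolerate a limited data set.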

  3. Assessment of various failure theories for weight and cost optimized laminated composites using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Goyal, T. [Indian Institute of Technology Kanpur. Dept. of Aerospace Engineering, UP (India); Gupta, R. [Infotech Enterprises Ltd., Hyderabad (India)

    2012-07-01

    In this work, minimum weight-cost design for laminated composites is presented. A genetic algorithm has been developed for the optimization process. Maximum-Stress, Tsai-Wu and Tsai-Hill failure criteria have been used along with a buckling analysis parameter for the margin-of-safety calculations. The design variables include three materials, namely Carbon-Epoxy, Glass-Epoxy and Kevlar-Epoxy; the number of plies; ply orientation angles, varying from -75 deg. to 90 deg. in intervals of 15 deg.; and ply thicknesses, which depend on the material in use. The total cost is the sum of material cost and layup cost. Layup cost is a function of the ply angle. Validation studies for solution convergence and weight-cost inverse proportionality are carried out. One set of results for shear loading is also validated against the literature for a particular case. A Pareto-optimal solution set is demonstrated for biaxial loading conditions. It is then extended to applied moments. It is found that the global optimum for a given loading condition is a function of the failure criterion for shear loading, with the Maximum Stress criterion giving the lightest-cheapest and the Tsai-Wu criterion giving the heaviest-costliest optimized laminates. Optimized weight results from the three criteria are plotted for a comparative study. This work gives a globally optimized laminated composite and also a set of other locally optimal laminates for a given set of loading conditions. The current algorithm also provides adequate data to supplement the use of different failure criteria for varying loadings. This work can find use in the industry and/or academia considering the increased use of laminated composites in modern wind blades. (Author)
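A genetic algorithm of the kind described can be sketched as follows. This is a deliberately simplified toy: the fitness below is a stand-in for the paper's weight-cost objective with failure-criterion penalties, and every parameter value is illustrative:

```python
import random

ANGLES = list(range(-75, 91, 15))  # allowed ply angles in 15 deg steps

def toy_fitness(plies):
    # stand-in objective: weight grows with ply count, layup cost with |angle|
    return sum(1.0 + abs(angle) / 90.0 for angle in plies)

def genetic_minimize(n_plies=8, pop_size=30, generations=60, p_mut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(ANGLES) for _ in range(n_plies)] for _ in range(pop_size)]
    best = min(pop, key=toy_fitness)
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            # tournament selection of two parents
            mom, dad = (min(rng.sample(pop, 3), key=toy_fitness) for _ in range(2))
            cut = rng.randrange(1, n_plies)       # one-point crossover
            child = mom[:cut] + dad[cut:]
            # per-gene mutation to a random allowed angle
            child = [rng.choice(ANGLES) if rng.random() < p_mut else g
                     for g in child]
            children.append(child)
        pop = children
        best = min(pop + [best], key=toy_fitness)  # elitism: keep best seen
    return best
```

In the real problem the chromosome would also encode material choice and ply thickness, and the fitness would evaluate the chosen failure criterion and buckling margin.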

  4. Life cycle costs for the optimized production of hydrogen and biogas from microalgae

    International Nuclear Information System (INIS)

    Meyer, Markus A.; Weiss, Annika

    2014-01-01

    Despite the known advantages of microalgae compared with other biomass providers or fossil fuels, microalgae are predominately produced for high-value products. Economic constraints might limit the commercial energetic use of microalgae. Therefore, we identify the LCCs (life cycle costs) and economic hot spots for photoautotrophic hydrogen generation from photoautotrophically grown Chlamydomonas reinhardtii in a novel staggered PBR (photobioreactor) and the anaerobic digestion of the residual biomass to obtain biogas. The novel PBR aims at minimizing energy consumption for mixing and aeration and at optimizing the light conditions for algal growth. The LCCs per MJ amounted to 12.17 Euro for hydrogen and 0.99 Euro for biogas in 2011 for Germany. Market prices per MJ of 0.02 Euro for biogas and 0.04 Euro for hydrogen are considerably exceeded. Major contributors to the operating costs, about 70% of total LCCs, are personnel and overhead costs. About 92% of the investment costs are attributable to the PBR, of which 61% are membrane costs. Choosing Madrid as an alternative production location, with higher incident solar irradiation and lower personnel costs, reduces LCCs by about 40%. Projecting LCCs to 2030 with experience curves, the LCCs still exceed future market prices. - Highlights: • Life cycle cost assessment of hydrogen and biogas from microalgae in a novel photobioreactor. • Current and future (2030) economically viable production unlikely in Germany. • Personnel and photobioreactor costs are major cost drivers. • Changing the production location may significantly reduce the life cycle costs.

  5. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    Science.gov (United States)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGAs) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.

  6. Optimal Allocation of Smart Substations in a Distribution System Considering Interruption Costs of Customers

    DEFF Research Database (Denmark)

    Sun, Lei; You, Shi; Hu, Junjie

    2016-01-01

    One of the major functions of a smart substation (SS) is to restore power supply to interrupted customers as quickly as possible after an outage. The high cost of a smart substation limits its widespread utilization. In this paper, a smart substation allocation model (SSAM) to determine the optimal number and allocation of smart substations in a given distribution system is presented, with the upgrade costs of substations and the interruption costs of customers taken into account. Besides, the reliability criterion is also properly considered in the model. By linearization strategies, the SSAM…

  7. A multiple ship routing and speed optimization problem under time, cost and environmental objectives

    DEFF Research Database (Denmark)

    Wen, M.; Pacino, Dario; Kontovas, C.A.

    2017-01-01

    The purpose of this paper is to investigate a multiple ship routing and speed optimization problem under time, cost and environmental objectives. A branch and price algorithm as well as a constraint programming model are developed that consider (a) fuel consumption as a function of payload, (b) fuel price as an explicit input, (c) freight rate as an input, and (d) in-transit cargo inventory costs. The alternative objective functions are minimum total trip duration, minimum total cost and minimum emissions. Computational experience with the algorithm is reported on a variety of scenarios.

  8. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    Science.gov (United States)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted and the corresponding performance of the production schedule under variable parameter conditions is evaluated.
Also, parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the
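A minimal continuous particle swarm optimizer of the kind named in the title can be sketched as follows. The quadratic cost-per-part surrogate and every coefficient below are invented for illustration, not taken from the thesis:

```python
import random

def cost_per_part(x):
    # illustrative convex surrogate: energy cost rises with production
    # rate, and cost is lowest at some intermediate maintenance interval
    rate, interval = x
    return (rate - 3.0) ** 2 + (interval - 5.0) ** 2 + 2.0

def pso_minimize(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]            # per-particle best positions
    gbest = min(pbest, key=f)             # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # move and clamp to the box constraints
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=f)
    return gbest
```

On this surrogate the swarm converges near rate = 3 and interval = 5; the thesis's actual objective would be evaluated by simulating the joint schedule instead.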

  9. Cost-optimal energy performance renovation measures of educational buildings in cold climate

    International Nuclear Information System (INIS)

    Niemelä, Tuomo; Kosonen, Risto; Jokisalo, Juha

    2016-01-01

    Highlights: • The proposed national nZEB target can be cost-effectively achieved in renovations. • Energy saving potential of HVAC systems is significant compared to the building envelope. • Modern renewable energy production technologies are cost-efficient and recommendable. • Improving the indoor climate conditions in deep renovations is recommendable. • Simulation-based optimization method is efficient in building performance analyses. - Abstract: The paper discusses cost-efficient energy performance renovation measures for typical educational buildings built in the 1960s and 1970s in cold climate regions. The study analyzes the impact of different energy renovation measures on the energy efficiency and economic viability in a Finnish case-study educational building located in the Lappeenranta University of Technology (LUT) campus area. The main objective of the study was to determine the cost-optimal energy performance renovation measures to meet the proposed national nearly zero-energy building (nZEB) requirements, which are defined according to the primary energy consumption of buildings. The main research method of the study was simulation-based optimization (SBO) analysis, which was used to determine the cost-optimal renovation solutions. The results of the study indicate that the minimum national energy performance requirement for new educational buildings (E_primary ⩽ 170 kWh/(m²·a)) can be cost-effectively achieved in deep renovations of educational buildings. In addition, the proposed national nZEB targets are also well achievable, while significantly improving the indoor climate (thermal comfort and indoor air quality) conditions at the same time. Cost-effective solutions included renovation of the original ventilation system, a ground source heat pump system with relatively small dimensioning power output, new energy-efficient windows and a relatively large area of PV panels for solar-based electricity production. The results and

  10. Cost-optimal levels of minimum energy performance requirements in the Danish Building Regulations

    Energy Technology Data Exchange (ETDEWEB)

    Aggerholm, S.

    2013-09-15

    The purpose of the report is to analyse the cost optimality of the energy requirements in the Danish Building Regulations 2010 (BR10) for new buildings and for existing buildings undergoing major renovation. The energy requirements in the Danish Building Regulations have by tradition always been based on the costs and benefits related to the private economic or financial perspective; macroeconomic calculations have in the past only been made in addition. The cost optimum used in this report is thus based on the financial perspective. Due to the high energy taxes in Denmark there is a significant difference between the consumer price and the macroeconomic price for energy. Energy taxes are also paid by commercial consumers when the energy is used for building operation, e.g. heating, lighting, ventilation etc. In relation to the new housing examples, the present minimum energy requirements in BR10 all show negative gaps, with a deviation of up to 16% from the point of cost optimality. With the planned tightening of the requirements for new houses in 2015 and in 2020, the energy requirements can be expected to be tighter than the cost-optimal point if the costs of the needed improvements do not decrease correspondingly. In relation to the new office building there is a gap of 31% to the point of cost optimality in relation to the 2010 requirement. In relation to the 2015 and 2020 requirements there are negative gaps to the point of cost optimality based on today's prices. If the gaps for all the new buildings are weighted to an average based on the mix of building types and heat supply for new buildings in Denmark, there is an average gap of 3% for new buildings. The excessive tightness with today's prices is 34% in relation to the 2015 requirement and 49% in relation to the 2020 requirement. The component requirements for elements of the building envelope and for installations in existing buildings add up to significant energy efficiency

  11. Long term developments in irradiated natural uranium processing costs. Optimal size and siting of plants

    International Nuclear Information System (INIS)

    Thiriet, L.

    1964-01-01

    The aim of this paper is to help solve the problem of the selection of optimal sizes and sites for spent nuclear fuel processing plants associated with power capacity programmes already installed. Firstly, the structure of the capital and running costs of irradiated natural uranium processing plants is studied, as well as the influence of plant size on these costs and structures. Shipping costs from the production site to the plant must also be added to processing costs. An attempt to reach a minimum cost for the production of a country or a group of countries must therefore take into account both the size and the location of the plants. The foreseeable shipping costs and their structure (freight, insurance, container cost and depreciation) for spent natural uranium are indicated. Secondly, for various annual spent fuel reprocessing programmes, the optimal sizes and locations of the plants are determined. The sensitivity of the results to the basic assumptions relative to processing costs, shipping costs, the starting-up year of the plant programme and the length of the period considered is also tested. This rather complex problem, of a combinatorial nature, is solved through dynamic programming methods. It is shown that these methods can also be applied to the problem of selecting the optimal sizes and locations of processing plants for MTR-type fuel elements, related to research reactor programmes, as well as to future plutonium element processing plants related to breeder reactors. Thirdly, the case where yearly extraction of the plutonium contained in the irradiated natural uranium is not compulsory is examined; some stockpiling of the fuel is then allowed in some years, entailing delayed processing. The load factor of such plants is thus greatly improved with respect to that of plants where the annual plutonium demand is strictly satisfied. By including spent natural uranium stockpiling costs, an optimal rhythm of introduction and optimal sizes for spent fuel

  12. Derating design for optimizing reliability and cost with an application to liquid rocket engines

    International Nuclear Information System (INIS)

    Kim, Kyungmee O.; Roh, Taeseong; Lee, Jae-Woo; Zuo, Ming J.

    2016-01-01

    Derating is the operation of an item at a stress that is lower than its rated design value. Previous research has indicated that reliability can be increased from operational derating. In order to derate an item in field operation, however, an engineer must rate the design of the item at a stress level higher than the operational stress level, which increases the item's nominal failure rate and development costs. At present, there is no model available to quantify the cost and reliability that considers the design uprating as well as the operational derating. In this paper, we establish the reliability expression in terms of the derating level assuming that the nominal failure rate is constant with time for a fixed rated design value. The total development cost is expressed in terms of the rated design value and the number of tests necessary to demonstrate the reliability requirement. The properties of the optimal derating level are explained for maximizing the reliability or for minimizing the cost. As an example, the proposed model is applied to the design of liquid rocket engines. - Highlights: • Modeled the effect of derating design on the reliability and the development cost. • Discovered that derating design may reduce the cost of reliability demonstration test. • Optimized the derating design parameter for reliability maximization or cost minimization.
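The trade-off described can be illustrated with a small sketch using an assumed inverse-power-law stress-life model: operating at a fraction `derate` of the rated stress scales the nominal failure rate, while rating the design higher (a smaller `derate` for the same operating stress) is assumed to raise development cost. All function forms and numbers are hypothetical, not the paper's rocket-engine model:

```python
import math

def reliability(derate, lam_rated=0.02, p=3.0, mission_time=100.0):
    """Mission reliability under an assumed inverse-power-law stress model:
    the failure rate scales as (operating stress / rated stress) ** p."""
    lam_op = lam_rated * derate ** p
    return math.exp(-lam_op * mission_time)

def cheapest_derating(required_rel, base_cost=1.0):
    """Pick the derating level that meets the reliability requirement at
    minimum development cost (cost assumed to grow as the item is uprated)."""
    grid = [i / 100 for i in range(50, 101)]  # derate fractions 0.50..1.00
    feasible = [(base_cost / d ** 2, d) for d in grid
                if reliability(d) >= required_rel]
    return min(feasible, default=None)
```

Under these assumptions, tightening the reliability requirement pushes the optimum toward stronger derating (smaller `d`) at higher development cost, which is the qualitative behaviour the paper formalizes.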

  13. Search for interstellar methane

    International Nuclear Information System (INIS)

    Knacke, R.F.; Kim, Y.H.; Noll, K.S.; Geballe, T.R.

    1990-01-01

    Researchers searched for interstellar methane in the spectra of infrared sources embedded in molecular clouds. New observations of several lines of the P and R branches of the ν3 band of CH4 near 3.3 microns give column densities in the range N < (1-2) × 10^16 cm^-2. The resulting abundance ratios are [CH4]/[CO] < 3.3 × 10^-2 toward GL961 in NGC 2244 and < 2.4 × 10^-3 toward GL989 in the NGC 2264 molecular cloud. The limits, and those determined in earlier observations of BN in Orion and GL490, suggest that there is little methane in molecular clouds. The result agrees with predictions of chemical models. Exceptions could occur in clouds where oxygen may be depleted, for example by H2O freezing onto grains. The present observations probably did not sample such regions

  14. Detection of interstellar methylcyanoacetylene

    International Nuclear Information System (INIS)

    Broten, N.W.; MacLeod, J.M.; Avery, L.W.; Irvine, W.M.; Hoeglund, B.; Friberg, P.; Hjalmarson

    1984-01-01

    A new interstellar molecule, methylcyanoacetylene (CH3C3N), has been detected in the molecular cloud TMC-1. The J = 8 → 7, J = 7 → 6, J = 6 → 5, and J = 5 → 4 transitions have been observed. For the first three of these, both the K = 0 and K = 1 components are present, while for J = 5 → 4, only the K = 0 line has been detected. The observed frequencies were calculated by assuming a value of the radial velocity V_LSR = 5.8 km s^-1 for TMC-1, typical of other molecules in the cloud. All observed frequencies are within 10 kHz of the calculated frequencies, which are based on the 1982 laboratory constants of Moises et al., so the identification is secure. The lines are broadened by hyperfine splitting, and the J = 5 → 4, K = 0 transition shows incipient resolution into three hyperfine components. The rotational temperature determined from these observations is quite low, close to 2.7 K, and the derived column density is of order 10^12 cm^-2

  15. Evaluating Application-Layer Traffic Optimization Cost Metrics for P2P Multimedia Streaming

    DEFF Research Database (Denmark)

    Poderys, Justas; Soler, José

    2017-01-01

    To help users of P2P communication systems perform better-than-random selection of communication peers, the Internet Engineering Task Force standardized the Application-Layer Traffic Optimization (ALTO) protocol. The ALTO-provided data-routing cost metric can be used to rank peers in P2P communication…

  16. Towards Cost and Comfort Based Hybrid Optimization for Residential Load Scheduling in a Smart Grid

    Directory of Open Access Journals (Sweden)

    Nadeem Javaid

    2017-10-01

    In a smart grid, several optimization techniques have been developed to schedule load in the residential area. Most of these techniques aim at minimizing the energy consumption cost and maximizing the comfort of the electricity consumer. However, maintaining a balance between the two conflicting objectives, energy consumption cost and user comfort, is still a challenging task. Therefore, in this paper, we aim to minimize the electricity cost and user discomfort while taking into account the peak energy consumption. In this regard, we implement and analyse the performance of a traditional dynamic programming (DP) technique and two heuristic optimization techniques: genetic algorithm (GA) and binary particle swarm optimization (BPSO) for residential load management. Based on these techniques, we propose a hybrid scheme named GAPSO for residential load scheduling, so as to optimize the desired objective function. In order to alleviate the complexity of the problem, a multi-dimensional knapsack formulation is used to ensure that the load of the electricity consumer will not escalate during peak hours. The proposed model is evaluated under two pricing schemes, day-ahead and critical peak pricing, for single and multiple days. Furthermore, feasible regions are calculated and analysed to develop a relationship between power consumption, electricity cost, and user discomfort. The simulation results are compared with GA, BPSO and DP, and validate that the proposed hybrid scheme yields substantial savings in electricity bills with minimum user discomfort. Moreover, the results also show a phenomenal reduction in peak power consumption.
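The knapsack step that caps peak-hour load can be illustrated with its single-constraint special case: pick which shiftable appliances run in a peak slot so that total power stays under a cap while total comfort value is maximized. The appliance data below are invented for the example:

```python
def schedule_peak_hour(appliances, power_cap):
    """0/1 knapsack: choose which shiftable appliances run in a peak-hour
    slot so total power stays within the cap and comfort value is maximal.

    appliances: list of (name, integer power units, comfort value).
    """
    # dp[w] = (best comfort value, chosen names) within power budget w
    dp = [(0, ())] * (power_cap + 1)
    for name, power, value in appliances:
        # iterate budgets downward so each appliance is used at most once
        for w in range(power_cap, power - 1, -1):
            cand = (dp[w - power][0] + value, dp[w - power][1] + (name,))
            if cand[0] > dp[w][0]:
                dp[w] = cand
    return dp[power_cap]
```

The paper's multi-dimensional version adds one such capacity constraint per peak time slot; the DP then runs over a vector of budgets instead of a single one.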

  17. Integrated Emission Management strategy for cost-optimal engine-aftertreatment operation

    NARCIS (Netherlands)

    Cloudt, R.P.M.; Willems, F.P.T.

    2011-01-01

    A new cost-based control strategy is presented that optimizes engine-aftertreatment performance under all operating conditions. This Integrated Emission Management strategy minimizes fuel consumption within the set emission limits by on-line adjustment of air management based on the actual state of

  18. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    Science.gov (United States)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost - even lower than assuming that the non-amputee’s ankle torques are cost-free.

  19. Cost-optimal electricity systems with increasing renewable energy penetration for islands across the globe

    NARCIS (Netherlands)

    Blok, K.; van Velzen, Leonore

    2018-01-01

    Cost-optimal electricity system configurations with increasing renewable energy penetration were determined in this article for six islands of different geographies, sizes and contexts, utilizing photovoltaic energy, wind energy, pumped hydro storage and battery storage. The results of the

  20. Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools

    Science.gov (United States)

    Amlie, Thomas T.

    2009-01-01

    A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…
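The product-mix problem the article describes is a small linear program; for two products it can even be solved by enumerating the corner points of the feasible region directly. This sketch uses invented profit and resource numbers rather than the article's spreadsheet examples:

```python
from itertools import combinations

def product_mix(profits, constraints):
    """Maximize profits[0]*x + profits[1]*y subject to a*x + b*y <= c
    for each (a, b, c) in constraints, with x, y >= 0."""
    lines = constraints + [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # axes x=0, y=0
    best = (0.0, (0.0, 0.0))  # producing nothing is always feasible
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no corner point
        # intersection of the two boundary lines (Cramer's rule)
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if x < -1e-9 or y < -1e-9:
            continue
        if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
            profit = profits[0] * x + profits[1] * y
            if profit > best[0]:
                best = (profit, (x, y))
    return best
```

With unit profits (40, 30) and resource rows (1, 2, 40) for labor hours and (2, 1, 50) for machine hours, the optimum is 20 units of the first product and 10 of the second, for a profit of 1100; larger instances are exactly what the article's spreadsheet Solver handles.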

  1. BEST-4, Fuel Cycle and Cost Optimization for Discrete Power Levels

    International Nuclear Information System (INIS)

    1973-01-01

    1 - Nature of physical problem solved: Determination of the optimal power strategy for a fuel cycle, for discrete power levels and n temporal stages, taking into account replacement energy costs and de-rating. 2 - Method of solution: Dynamic programming. 3 - Restrictions on the complexity of the problem: Restrictions may arise from the number of power levels and temporal stages, due to machine limitations.
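The dynamic-programming solution method can be illustrated with a generic stage-wise recursion over discrete power levels; the cost table and transition penalty below are invented, not BEST-4's actual formulation:

```python
def optimal_power_strategy(levels, stage_cost, transition_cost):
    """Stage-wise dynamic program over discrete power levels.

    stage_cost[t][i]: cost of running at level i during stage t
    (e.g. fuel plus replacement-energy cost -- illustrative).
    transition_cost(i, j): cost of changing level between stages.
    Returns (minimum total cost, chosen level index per stage).
    """
    n_levels = len(levels)
    # best[i] = (cost so far, level path) for strategies ending at level i
    best = {i: (stage_cost[0][i], [i]) for i in range(n_levels)}
    for t in range(1, len(stage_cost)):
        best = {
            j: min(
                (best[i][0] + transition_cost(i, j) + stage_cost[t][j],
                 best[i][1] + [j])
                for i in best)
            for j in range(n_levels)
        }
    return min(best.values())
```

The work per stage is quadratic in the number of levels, which is why the record notes that many levels and stages can hit machine limitations on 1973-era hardware.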

  2. Model-based predictive control scheme for cost optimization and balancing services for supermarket refrigeration Systems

    NARCIS (Netherlands)

    Weerts, H.H.M.; Shafiei, S.E.; Stoustrup, J.; Izadi-Zamanabadi, R.; Boje, E.; Xia, X.

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics present in large-scale refrigeration plants challenge the predictive

  3. Cost optimization in the (S-1,S) lost sales inventory model with multiple demand classes

    NARCIS (Netherlands)

    Kranenburg, A.A.; Houtum, van G.J.J.A.N.

    2007-01-01

    For the (S-1,S) lost sales inventory model with multiple demand classes that have different lost sales penalty cost parameters, three accurate and efficient heuristic algorithms are presented that, at a given base stock level, aim to find optimal values for the critical levels, i.e., values that

  4. Life cycle cost optimization of buildings with regard to energy use, thermal indoor environment and daylight

    DEFF Research Database (Denmark)

    Nielsen, Toke Rammer; Svendsen, Svend

    2002-01-01

    Buildings represent a large economical investment and have long service lives through which expenses for heating, cooling, maintenance and replacement depend on the chosen building design. Therefore, the building cost should not only be evaluated by the initial investment cost but rather by the life cycle cost, taking all expenses in the building's service life into consideration. The performance of buildings is also important, as the performance influences the comfort of the occupants, heating demand etc. Different performance requirements are stated in building codes, standards, and by the customer. The influence of different design variables on life cycle cost and building performance is very complicated, and the design variables can be combined in an almost unlimited number of ways. Optimization can be applied to achieve a building design with low life cycle cost and good performance…

  5. Cost optimization of the dimensions of the antennas of a solar power satellite system

    Energy Technology Data Exchange (ETDEWEB)

    Vasilev, A.V.; Klassen, V.I.; Laskin, N.N.; Tobolev, A.K.

    1983-05-01

    The problem of the cost optimization of the dimensions of the antennas of a solar power satellite system is formulated. The optimization problem is twofold: (1) for a given power delivered to the microwave transmitting antenna (TA), to determine the dimensions Lt (the characteristic dimension of the TA) and Lr (the characteristic dimension of the rectenna) which minimize the unit-power cost function for a given amplitude-phase distribution in the aperture of the TA, and (2) for a power delivered to the TA which is proportional to the aperture area, to determine the dimensions Lt and Lr which minimize the unit-power cost function for a given amplitude-phase distribution in the aperture of the TA. Two possible variants of the solution of this problem are considered: (1) the case of a linear antenna (the two-dimensional problem), and (2) the case of square apertures (the three-dimensional problem). A specific example of optimization is considered, where the cost of the TA is $1000/sq m and the cost of the rectenna is $12/sq m. 11 references.

  6. Cost Optimal Design of a Single-Phase Dry Power Transformer

    Directory of Open Access Journals (Sweden)

    Raju Basak

    2015-08-01

    Full Text Available Dry-type transformers are preferred to their oil-immersed counterparts for various reasons, particularly because their operation is hazard-free. In earlier days the application of dry transformers was limited to small ratings, but they are now used for considerably higher ratings, so their cost-optimal design has gained importance. This paper deals with the procedure for achieving a cost-optimal design of a dry-type single-phase power transformer of small rating, subject to the usual design constraints on efficiency and voltage regulation. The selling cost of the transformer has been taken as the objective function. Only two key variables have been chosen, the turns per volt and the height-to-width ratio of the window, as these affect the cost function most strongly. Other variables have been chosen on the basis of designers' experience. Copper has been used as the conductor material and CRGO silicon steel as the core material to achieve higher efficiency, lower running cost and a compact design. The electrical and magnetic loadings have been kept at their maximum values without violating the design constraints. The optimal solution has been obtained by exhaustive search using nested loops.
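The exhaustive search with nested loops described above can be sketched as follows. The cost surface, efficiency model, and variable ranges below are invented stand-ins, not the paper's design equations; only the search structure (two nested loops over the two key variables with a constraint check) mirrors the record.

```python
# Exhaustive-search sketch over the two key design variables named in the
# abstract: turns per volt and the window height:width ratio.  All models
# here are hypothetical placeholders.

def design_cost(turns_per_volt, hw_ratio):
    # Fake convex cost surface with its minimum near (4.0, 3.0).
    return (turns_per_volt - 4.0) ** 2 + 0.5 * (hw_ratio - 3.0) ** 2 + 10.0

def efficiency(turns_per_volt, hw_ratio):
    # Fake efficiency model; always satisfies the constraint below.
    return 0.95 + 0.005 * min(turns_per_volt, 5.0) / 5.0

def exhaustive_search():
    best = None
    for tpv_tenths in range(20, 61):        # turns/volt in [2.0, 6.0]
        for hw_tenths in range(15, 51):     # height:width in [1.5, 5.0]
            tpv, hw = tpv_tenths / 10.0, hw_tenths / 10.0
            if efficiency(tpv, hw) < 0.95:  # design constraint check
                continue
            c = design_cost(tpv, hw)
            if best is None or c < best[0]:
                best = (c, tpv, hw)
    return best

cost, tpv, hw = exhaustive_search()
```

The grid resolution (0.1 per variable) is arbitrary; a finer grid trades run time for precision, which is the usual limitation of exhaustive search.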

  7. Optimal decoding and information transmission in Hodgkin-Huxley neurons under metabolic cost constraints.

    Science.gov (United States)

    Kostal, Lubomir; Kobayashi, Ryota

    2015-10-01

    Information theory quantifies the ultimate limits on reliable information transfer by means of the channel capacity. However, the channel capacity is known to be an asymptotic quantity, assuming unlimited metabolic cost and computational power. We investigate a single-compartment Hodgkin-Huxley type neuronal model under the spike-rate coding scheme and address how the metabolic cost and the decoding complexity affect the optimal information transmission. We find that the sub-threshold stimulation regime, although attaining the smallest capacity, allows for the most efficient balance between the information transmission and the metabolic cost. Furthermore, we determine post-synaptic firing rate histograms that are optimal from the information-theoretic point of view, which enables the comparison of our results with experimental data. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Nonfragile Guaranteed Cost Control and Optimization for Interconnected Systems of Neutral Type

    Directory of Open Access Journals (Sweden)

    Heli Hu

    2013-01-01

    Full Text Available The design and optimization problems of the nonfragile guaranteed cost control are investigated for a class of interconnected systems of neutral type. A novel scheme, viewing the interconnections with time-varying delays as effective information rather than disturbances, is developed to decrease the conservatism. Many techniques for decomposing and magnifying the matrices are utilized to obtain the guaranteed cost of the considered system. Also, an algorithm is proposed to solve the nonlinear problem of the interconnected matrices. Based on this algorithm, the minimization of the guaranteed cost of the considered system is obtained by optimization. Further, the state feedback control is extended to the case in which the underlying system depends on uncertain parameters. Finally, two numerical examples are given to illustrate the proposed method, and some comparisons are made to show the advantages of the schemes for dealing with the interconnections.

  9. Cost Optimization of Mortars Containing Different Pigments and Their Freeze-Thaw Resistance Properties

    Directory of Open Access Journals (Sweden)

    Sadık Alper Yıldızel

    2016-01-01

    Full Text Available Nowadays it is common to use colored concrete or mortar in prefabricated concrete and reinforced concrete construction elements. Within the scope of this study, colored mortars were obtained by adding brown, yellow, black, and red pigments to white cement. These mixtures were examined for compressive strength, unit weight, water absorption, and freeze-thaw resistance. After these properties were compared, a cost optimization was conducted in order to compare pigment costs. The outcomes showed that the pore structure plays an important role in the durability of architectural mortar applications, and the cost optimization results show that light-colored minerals can be used instead of white cements.

  10. The French biofuels mandates under cost uncertainty - an assessment based on robust optimization

    International Nuclear Information System (INIS)

    Lorne, Daphne; Tchung-Ming, Stephane

    2012-01-01

    This paper investigates the impact of primary energy and technology cost uncertainty on the achievement of renewable, and especially biofuel, policies (mandates and norms) in France by 2030. A robust optimization technique that can handle uncertainty sets of high dimensionality is implemented in a TIMES-based long-term planning model of the French energy transport and electricity sectors. The energy system costs and potential benefits (GHG emissions abatements, diversification) of the French renewable mandates are assessed within this framework. The results of this systemic analysis highlight how setting norms and mandates reduces the variability of CO2 emissions reductions and supply-mix diversification when the costs of technological progress and prices are uncertain. Beyond that, we discuss the usefulness of robust optimization as a complement to other techniques for integrating uncertainty into large-scale energy models. (authors)

  11. Integrated approach to optimize operation and maintenance costs for operating nuclear power plants

    International Nuclear Information System (INIS)

    2006-06-01

    In the context of increasingly open electricity markets and the 'unbundling' of generating companies from former utility monopolies, an area of major concern is the economic performance of the existing fleet of nuclear power plants. Nuclear power, inevitably, must compete directly with other electricity generation sources. Coping with this competitive pressure is a challenge that the nuclear industry should meet if the nuclear option is to remain a viable one. This competitive environment has significant implications for nuclear plant operations, including, among others, the need for more cost-effective management of plant activities and greater use of analytical tools to balance the costs and benefits of proposed activities, in order to optimize operation and maintenance costs and thus ensure the economic competitiveness of existing nuclear power plants. In the framework of the activities on the Nuclear Economic Performance Information System (NEPIS), the IAEA embarked on developing guidance on the optimization of operation and maintenance costs for nuclear power plants. The report was prepared building on the principle that optimization of the operation and maintenance costs of a nuclear power plant is a key component of a broader integrated strategic business planning process, whose overall result is the achievement of the organization's business objectives. It provides advice on the optimization of O and M costs in the framework of strategic business planning, with additional details on operational planning and controlling. This TECDOC was elaborated in 2004-2005 in the framework of the IAEA's programme on Nuclear Power Plant Operating Performance and Life Cycle Management, with the support of two consultants meetings and one technical meeting, and based on contributions provided by participants. It can serve as a useful reference for the management and operation staff within utilities, nuclear power plant operators and regulators and other organizations involved in

  12. Analysis of efficiency and marketing trends cost optimization in enterprises of baking branch

    Directory of Open Access Journals (Sweden)

    Lukan О.М.

    2017-06-01

    Full Text Available Today, little attention is paid to marketing activities in the bakery industry. Limited financial resources and the lack of a comprehensive assessment of the effectiveness of marketing activities lead to reduced marketing budgets and a decrease in the profitability of the enterprise as a whole. Therefore, despite the complexity of analyzing the cost effectiveness of marketing activities, in market conditions it is necessary to control the level of costs and to form optimal marketing budgets. The paper determines that the main direction of marketing-activity evaluation is the analysis of the cost effectiveness of its implementation. A scientific-methodical approach to the analysis of the effectiveness of marketing costs in the bakery industry is suggested. The analysis rests on the assumption that marketing costs are a factor variable determining the patterns of change in the resulting indicators of the financial and economic activities of the enterprise, such as net income from sales of products, gross profit, financial results from operating activities, and net profit (loss). The main directions for optimizing marketing activities at bakery enterprises are given.

  13. Interstellar Sweat Equity

    Science.gov (United States)

    Cohen, M. H.; Becker, R. E.; O'Donnell, D. J.; Brody, A. R.

    So, you have just launched aboard the Starship, headed to an exoplanet light years from Earth. You will spend the rest of your natural life on this journey in the expectation and hope that your grandchildren will arrive safely, land, and build a new settlement. You will need to govern the community onboard the Starship. This system of governance must meet unique requirements for participation, representation, and decision-making. On a spaceship that can fly and operate by itself, what will the crewmembers do for their generations in transit? Certainly, they will train and train again to practice the skills they will need upon arrival at a new world. However, this vicarious practice neither suffices to prepare the future pioneers for their destiny at a new star nor will it provide them with satisfaction in their own work. To hone the crewmembers' inventive and technical skills, to challenge and prepare them for pioneering, the crew would build and expand the interstellar ship in transit. This transstellar "sweat equity" gives a stake in the enterprise to all the people, providing meaningful and useful activity to the new generations of crewmembers. They build all the new segments of the vessel from raw materials - including atmosphere - stored on board. Construction of new pressure shell modules would be one option, but they also reconstruct or fill in existing pressurized volumes. The crew makes new life support system components and develops new agricultural modules in anticipation of their future needs. Upon arrival at the new star or planet, the crew shall apply these robustly developed skills and self-sufficient spirit to their new home.

  14. A Study on Development of a Cost Optimal and Energy Saving Building Model: Focused on Industrial Building

    Directory of Open Access Journals (Sweden)

    Hye Yeon Kim

    2016-03-01

    Full Text Available This study suggests an optimization method for the life cycle cost (LCC) in an economic feasibility analysis when applying energy saving techniques in the early design stage of a building. Literature and previous studies were reviewed to select appropriate optimization and LCC analysis techniques. The energy simulation (EnergyPlus) and computational program (MATLAB) were linked to provide an automated optimization process. The results suggest that this process can produce a cost optimization model that minimizes the LCC. To aid in understanding the model, a case study on an industrial building was performed to illustrate the operation of the cost optimization model, including energy savings. An energy optimization model is also presented to illustrate the need for the cost optimization model.

  15. Energy Hub’s Structural and Operational Optimization for Minimal Energy Usage Costs in Energy Systems

    Directory of Open Access Journals (Sweden)

    Thanh Tung Ha

    2018-03-01

    Full Text Available The structure and optimal operation of an Energy Hub (EH) have a tremendous influence on the hub's performance and reliability. This paper envisions an innovative methodology that prominently increases the synergy between structural and operational optimization and targets system cost affordability. The generalized energy system structure is presented theoretically with all selective hub sub-modules, including electric heater (EHe) and solar source block sub-modules. To minimize energy usage cost, an energy hub is proposed that consists of 12 kinds of elements (i.e., energy resources, conversion, and storage functions) and is modeled mathematically in the General Algebraic Modeling System (GAMS), which indicates the optimal hub structure's corresponding elements with binary variables (0, 1). Simulation results are compared across 144 scenarios covering all 144 categories of hub structures, with the corresponding optimal operation cost calculated for each scenario. These case studies demonstrate the effectiveness of the suggested model and methodology. Finally, avenues for future research are also prospected.
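The structure search described above, with binary include/exclude variables per hub element, can be illustrated by brute-force enumeration on a toy scale. The component names, fixed costs, and savings below are entirely invented; a real model would use a MILP solver such as the GAMS formulation mentioned in the record.

```python
from itertools import product

# Enumeration sketch of the hub-structure search: each candidate
# structure is a tuple of 0/1 flags (as in the model's binary variables),
# and each structure gets an operating cost from a toy cost model.

components = ["chp", "boiler", "electric_heater", "solar", "storage"]

def operating_cost(structure):
    # Fake model: each included component adds a fixed cost but reduces
    # the energy purchase cost; storage only pays off alongside solar.
    fixed   = {"chp": 30, "boiler": 10, "electric_heater": 8,
               "solar": 25, "storage": 12}
    savings = {"chp": 45, "boiler": 12, "electric_heater": 5, "solar": 30}
    cost = 100.0  # baseline energy purchase cost
    for name, used in zip(components, structure):
        if used:
            cost += fixed[name]
            cost -= savings.get(name, 0)
    if structure[components.index("storage")] and structure[components.index("solar")]:
        cost -= 20  # synergy bonus: storage shifts solar surplus
    return cost

# Try every 0/1 structure and keep the cheapest.
best = min(product((0, 1), repeat=len(components)), key=operating_cost)
```

With five binary flags there are only 32 structures, so enumeration is trivial; the record's 144 categories are similarly small, which is why scenario-by-scenario comparison is feasible there.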

  16. Optimal household refrigerator replacement policy for life cycle energy, greenhouse gas emissions, and cost

    International Nuclear Information System (INIS)

    Kim, Hyung Chul; Keoleian, Gregory A.; Horie, Yuhta A.

    2006-01-01

    Although the last decade witnessed dramatic progress in refrigerator efficiencies, inefficient, outdated refrigerators are still in operation, sometimes consuming more than twice as much electricity per year as modern, efficient models. Replacing old refrigerators before their designed lifetime could be a useful policy to conserve electric energy and reduce greenhouse gas emissions. However, from a life cycle perspective, product replacement decisions also induce additional economic and environmental burdens associated with the disposal of old models and the production of new models. This paper discusses optimal lifetimes of mid-sized refrigerator models in the US, using a life cycle optimization model based on dynamic programming. Model runs were conducted to find optimal lifetimes that minimize energy, global warming potential (GWP), and cost objectives over a time horizon between 1985 and 2020. The baseline results show that, depending on model years, optimal lifetimes range from 2-7 years for the energy objective and 2-11 years for the GWP objective. On the other hand, an 18-year lifetime minimizes the economic cost incurred during the time horizon. Model runs with a time horizon between 2004 and 2020 show that current owners should replace refrigerators that consume more than 1000 kWh/year of electricity (typical mid-sized 1994 models and older) as an efficient strategy from both cost and energy perspectives.
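The keep-or-replace structure of the life cycle optimization above can be sketched as a small dynamic program. The efficiency trend and production-energy figure below are invented placeholders, not the paper's data; only the decision structure (each year, keep the current unit or replace it with the current model year, minimizing cumulative life-cycle energy) follows the record.

```python
# Dynamic-programming sketch of the replacement decision over a fixed
# horizon.  All numbers are illustrative assumptions.

def annual_use(model_year):
    # Fake efficiency trend: newer models use less electricity, kWh/yr.
    return max(400.0, 1400.0 - 40.0 * (model_year - 1985))

PRODUCTION_ENERGY = 2000.0  # assumed kWh-equivalent to make a new unit

def min_lifecycle_energy(start_year, horizon_end):
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def go(year, model_year):
        if year >= horizon_end:
            return 0.0
        # Keep the current unit one more year, or replace it now.
        keep = annual_use(model_year) + go(year + 1, model_year)
        replace = PRODUCTION_ENERGY + annual_use(year) + go(year + 1, year)
        return min(keep, replace)

    return go(start_year, start_year)

total = min_lifecycle_energy(1985, 2020)
```

The same recursion applies to the GWP or cost objective by swapping the per-year and per-replacement terms, which is how a single model can produce the three different optimal-lifetime ranges quoted in the abstract.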

  17. Aging Cost Optimization for Planning and Management of Energy Storage Systems

    Directory of Open Access Journals (Sweden)

    Saman Korjani

    2017-11-01

    Full Text Available In recent years, many studies have proposed the use of energy storage systems (ESSs) for the mitigation of intermittent renewable energy source (RES) power output. However, the correct estimation of ESS degradation costs is still an open issue, due to the difficulty of estimating their aging in the presence of intermittent power inputs. This is particularly true for battery ESSs (BESSs), which have been proven to exhibit complex aging functions. Unfortunately, this collides with considering aging costs when performing ESS planning and management procedures, which are crucial for the exploitation of this technology. In order to overcome this issue, this paper presents the genetic algorithm-based multi-period optimal power flow (GA-MPOPF) procedure, which aims to economically optimize the management of ESSs by taking into account their degradation costs. The proposed methodology has been tested in two different applications: the planning of the correct positioning of a Li-ion BESS in the PG&E 69-bus network in the presence of high RES penetration, and the definition of its management strategy. Simulation results show that GA-MPOPF is able to optimize the ESS usage for time scales of up to one month, even for complex operating cost functions, while showing excellent convergence properties.

  18. Nutritionally Optimized, Culturally Acceptable, Cost-Minimized Diets for Low Income Ghanaian Families Using Linear Programming.

    Science.gov (United States)

    Nykänen, Esa-Pekka A; Dunning, Hanna E; Aryeetey, Richmond N O; Robertson, Aileen; Parlesak, Alexandr

    2018-04-07

    The Ghanaian population suffers from a double burden of malnutrition. Cost of food is considered a barrier to achieving a health-promoting diet. Food prices were collected in major cities and in rural areas in southern Ghana. Linear programming (LP) was used to calculate nutritionally optimized diets (food baskets (FBs)) for a low-income Ghanaian family of four that fulfilled energy and nutrient recommendations in both rural and urban settings. Calculations included implementing cultural acceptability for families living in extreme and moderate poverty (food budget under USD 1.9 and 3.1 per day respectively). Energy-appropriate FBs minimized for cost, following Food Balance Sheets (FBS), lacked key micronutrients such as iodine, vitamin B12 and iron for the mothers. Nutritionally adequate FBs were achieved in all settings when optimizing for a diet cheaper than USD 3.1. However, when delimiting cost to USD 1.9 in rural areas, wild foods had to be included in order to meet nutritional adequacy. Optimization suggested to reduce roots, tubers and fruits and to increase cereals, vegetables and oil-bearing crops compared with FBS. LP is a useful tool to design culturally acceptable diets at minimum cost for low-income Ghanaian families to help advise national authorities how to overcome the double burden of malnutrition.
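The linear-programming diet problem above can be illustrated on a two-food toy instance. Prices, nutrient contents, and requirements below are invented, not the study's Ghanaian food data. For a linear program the optimum lies at a vertex of the feasible region, so with only two foods the candidate vertices can be enumerated directly; real food-basket models use an LP solver over many foods and nutrients.

```python
# Two-food toy diet LP: minimize daily cost subject to minimum energy
# and protein.  All data are assumptions for illustration.

COST    = (0.5, 1.2)        # USD/kg: maize flour, beans (assumed)
ENERGY  = (3600.0, 3400.0)  # kcal/kg
PROTEIN = (90.0, 220.0)     # g/kg
REQ_ENERGY, REQ_PROTEIN = 2200.0, 60.0  # per person per day (assumed)

def feasible(x):
    e = ENERGY[0] * x[0] + ENERGY[1] * x[1]
    p = PROTEIN[0] * x[0] + PROTEIN[1] * x[1]
    return (x[0] >= -1e-9 and x[1] >= -1e-9
            and e >= REQ_ENERGY - 1e-6 and p >= REQ_PROTEIN - 1e-6)

def vertices():
    # Axis intercepts of each constraint plus their intersection.
    vs = [(REQ_ENERGY / ENERGY[0], 0.0), (0.0, REQ_ENERGY / ENERGY[1]),
          (REQ_PROTEIN / PROTEIN[0], 0.0), (0.0, REQ_PROTEIN / PROTEIN[1])]
    det = ENERGY[0] * PROTEIN[1] - ENERGY[1] * PROTEIN[0]  # Cramer's rule
    vs.append(((REQ_ENERGY * PROTEIN[1] - ENERGY[1] * REQ_PROTEIN) / det,
               (ENERGY[0] * REQ_PROTEIN - REQ_ENERGY * PROTEIN[0]) / det))
    return [v for v in vs if feasible(v)]

best = min(vertices(), key=lambda v: COST[0] * v[0] + COST[1] * v[1])
daily_cost = COST[0] * best[0] + COST[1] * best[1]
```

In this toy instance both constraints bind at the optimum, which is the typical situation the record describes: the cheapest adequate basket sits exactly on the nutrient requirements.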

  19. Optimization of solar cell contacts by system cost-per-watt minimization

    Science.gov (United States)

    Redfield, D.

    1977-01-01

    New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.
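The core trade-off in the record, more grid metal shades the cell but lowers resistive loss, can be caricatured with a one-variable loss model. The functional form and constant below are illustrative assumptions, not Redfield's cost-per-watt analysis.

```python
import math

# Toy shading-vs-resistance trade-off: shading loses a fraction s of
# light, while series-resistance loss is modeled as k/s.  Minimizing
#   L(s) = s + k/s   gives the analytic optimum   s* = sqrt(k).
# Both the model and k are invented for illustration.

def total_loss(s, k=0.0025):
    return s + k / s

def optimal_shadow_fraction(k=0.0025):
    # Grid search over s in (0, 0.2]; should recover s* = sqrt(k).
    candidates = [i / 10000.0 for i in range(1, 2001)]
    return min(candidates, key=lambda s: total_loss(s, k))

s_star = optimal_shadow_fraction()
```

Note that in this toy model the optimal shadow fraction depends only on k (the resistance-loss constant), loosely echoing the record's observation that the optimum shadow fraction is independent of metal cost.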

  20. Evaluation of Externality Costs in Life-Cycle Optimization of Municipal Solid Waste Management Systems

    DEFF Research Database (Denmark)

    Martinez Sanchez, Veronica; Levis, James W.; Damgaard, Anders

    2017-01-01

    The development of sustainable solid waste management (SWM) systems requires consideration of both economic and environmental impacts. Societal life-cycle costing (S-LCC) provides a quantitative framework to estimate both economic and environmental impacts, by including "budget costs" … a suburban U.S. county of 500 000 people generating 320 000 Mg of waste annually. Estimated externality costs are based on emissions of CO2, CH4, N2O, PM2.5, PM10, NOx, SO2, VOC, CO, NH3, Hg, Pb, Cd, Cr(VI), Ni, As, and dioxins. The results indicate that incorporating S-LCC into optimized SWM strategy development encourages the use of a mixed waste material recovery facility, with residues going to incineration and separated organics to anaerobic digestion. Results are sensitive to waste composition, energy mix and recycling rates. Most of the externality costs stem from SO2, NOx, PM2.5, CH4, and fossil CO2…

  1. Optimal replacement policy of products with repair-cost threshold after the extended warranty

    Institute of Scientific and Technical Information of China (English)

    Lijun Shang; Zhiqiang Cai

    2017-01-01

    The reliability of the product sold under a warranty is usually maintained by the manufacturer during the warranty period. After the expiry of the warranty, however, the consumer confronts a problem about how to maintain the reliability of the product. This paper proposes, from the consumer's perspective, a replacement policy after the extended warranty, under the assumption that the product is sold under the renewable free replacement warranty (RFRW) policy in which the replacement is dependent on the repair-cost threshold. The proposed replacement policy is the replacement after the extended warranty is performed by the consumer based on the repair-cost threshold or preventive replacement (PR) age, which are decision variables. The expected cost rate model is derived from the consumer's perspective. The existence and uniqueness of the optimal solution that minimizes the expected cost rate per unit time are offered. Finally, a numerical example is presented to exemplify the proposed model.
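The expected-cost-rate minimization in the record can be illustrated with the classic age-replacement model, used here as a simpler stand-in for the paper's RFRW/repair-cost-threshold policy. With survival function R, preventive cost c_p, and failure cost c_f > c_p, the long-run cost rate is C(T) = (c_p R(T) + c_f (1 - R(T))) / ∫₀ᵀ R(t) dt; the lifetime distribution and cost values below are assumptions.

```python
import math

# Age-replacement sketch with a Weibull lifetime; grid search over the
# preventive replacement age T.  Parameters are illustrative.

def survival(t, shape=2.0, scale=10.0):
    """Weibull survival function R(t)."""
    return math.exp(-((t / scale) ** shape))

def cost_rate(T, c_p=1.0, c_f=5.0, steps=2000):
    # Trapezoidal integral of R over [0, T] = expected cycle length.
    dt = T / steps
    integral = sum(survival(i * dt) + survival((i + 1) * dt)
                   for i in range(steps)) * dt / 2.0
    r = survival(T)
    return (c_p * r + c_f * (1.0 - r)) / integral

def best_age(ts):
    return min(ts, key=cost_rate)

candidates = [0.5 * k for k in range(1, 61)]  # T in [0.5, 30]
T_star = best_age(candidates)
```

Because the Weibull shape parameter exceeds 1 (an increasing failure rate), a finite optimal replacement age exists; with a constant failure rate there would be nothing to gain from preventive replacement.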

  2. Operation optimization of a distributed energy system considering energy costs and exergy efficiency

    International Nuclear Information System (INIS)

    Di Somma, M.; Yan, B.; Bianco, N.; Graditi, G.; Luh, P.B.; Mongibello, L.; Naso, V.

    2015-01-01

    Highlights: • Operation optimization model of a Distributed Energy System (DES). • Multi-objective strategy to optimize energy cost and exergy efficiency. • Exergy analysis in building energy supply systems. - Abstract: With the growing demand of energy on a worldwide scale, improving the efficiency of energy resource use has become one of the key challenges. Application of exergy principles in the context of building energy supply systems can achieve rational use of energy resources by taking into account the different quality levels of energy resources as well as those of building demands. This paper is on the operation optimization of a Distributed Energy System (DES). The model involves multiple energy devices that convert a set of primary energy carriers with different energy quality levels to meet given time-varying user demands at different energy quality levels. By promoting the usage of low-temperature energy sources to satisfy low-quality thermal energy demands, the waste of high-quality energy resources can be reduced, thereby improving the overall exergy efficiency. To consider the economic factor as well, a multi-objective linear programming problem is formulated. The Pareto frontier, including the best possible trade-offs between the economic and exergetic objectives, is obtained by minimizing a weighted sum of the total energy cost and total primary exergy input using branch-and-cut. The operation strategies of the DES under different weights for the two objectives are discussed. The operators of DESs can choose the operation strategy from the Pareto frontier based on costs, essential in the short run, and sustainability, crucial in the long run. The contribution of each energy device in reducing energy costs and the total exergy input is also analyzed. In addition, results show that the energy cost can be much reduced and the overall exergy efficiency can be significantly improved by the optimized operation of the DES as compared with the
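The weighted-sum multi-objective approach above can be shown on a toy discrete version: each candidate operation strategy gets an invented (energy cost, primary exergy input) pair, and sweeping the weight between the two objectives traces the Pareto frontier. The strategy names and numbers are illustrative, not the paper's DES model.

```python
# Weighted-sum sketch of the cost/exergy trade-off.  All values invented.

strategies = {
    "boiler_only":   (100.0, 260.0),  # (energy cost, primary exergy in)
    "chp_priority":  (120.0, 200.0),
    "solar_plus_hp": (150.0, 140.0),
    "dominated_mix": (160.0, 250.0),  # worse on both axes
}

def best_for_weight(w):
    """Minimize w*cost + (1-w)*exergy over the strategy set."""
    return min(strategies, key=lambda s: w * strategies[s][0]
                                         + (1 - w) * strategies[s][1])

# Sweep the weight to collect the strategies on the Pareto frontier.
frontier = {best_for_weight(w / 10.0) for w in range(11)}
```

The dominated strategy never appears in the frontier for any weight, which mirrors how the weighted-sum scan in the record exposes only the best possible trade-offs between the economic and exergetic objectives.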

  3. Integrating sequencing technologies in personal genomics: optimal low cost reconstruction of structural variants.

    Directory of Open Access Journals (Sweden)

    Jiang Du

    2009-07-01

    Full Text Available The goal of human genome re-sequencing is obtaining an accurate assembly of an individual's genome. Recently, there has been great excitement in the development of many technologies for this (e.g. medium and short read sequencing from companies such as 454 and SOLiD, and high-density oligo-arrays from Affymetrix and NimbelGen, with even more expected to appear. The costs and sensitivities of these technologies differ considerably from each other. As an important goal of personal genomics is to reduce the cost of re-sequencing to an affordable point, it is worthwhile to consider optimally integrating technologies. Here, we build a simulation toolbox that will help us optimally combine different technologies for genome re-sequencing, especially in reconstructing large structural variants (SVs. SV reconstruction is considered the most challenging step in human genome re-sequencing. (It is sometimes even harder than de novo assembly of small genomes because of the duplications and repetitive sequences in the human genome. To this end, we formulate canonical problems that are representative of issues in reconstruction and are of small enough scale to be computationally tractable and simulatable. Using semi-realistic simulations, we show how we can combine different technologies to optimally solve the assembly at low cost. With mapability maps, our simulations efficiently handle the inhomogeneous repeat-containing structure of the human genome and the computational complexity of practical assembly algorithms. They quantitatively show how combining different read lengths is more cost-effective than using one length, how an optimal mixed sequencing strategy for reconstructing large novel SVs usually also gives accurate detection of SNPs/indels, how paired-end reads can improve reconstruction efficiency, and how adding in arrays is more efficient than just sequencing for disentangling some complex SVs. Our strategy should facilitate the sequencing of

  4. A methodology based in particle swarm optimization algorithm for preventive maintenance focused in reliability and cost

    International Nuclear Information System (INIS)

    Luz, Andre Ferreira da

    2009-01-01

    In this work, a Particle Swarm Optimization (PSO) algorithm is developed for preventive maintenance optimization. The proposed methodology, which allows flexible intervals between maintenance interventions instead of the usual fixed periods, allows a better adaptation of scheduling to the failure rates of components under aging. At the same time, this flexibility makes the planning of preventive maintenance a more difficult task. Motivated by the fact that PSO has proved to be very competitive compared to other optimization tools, this work investigates the use of PSO as an alternative optimization tool. Considering that PSO works in a real and continuous space, using it for discrete optimization, in which scheduling may comprise a variable number of maintenance interventions, is a challenge; the PSO model developed in this work overcomes this difficulty. The proposed PSO searches for the best maintenance policy and considers several aspects, such as the probability of needing repair (corrective maintenance), the cost of such repairs, typical outage times, costs of preventive maintenance, the impact of maintenance on the reliability of the system as a whole, and the probability of imperfect maintenance. To evaluate the proposed methodology, we investigate an electro-mechanical system consisting of three pumps and four valves, the High Pressure Injection System (HPIS) of a PWR. Results show that PSO is quite efficient in finding optimal preventive maintenance policies for the HPIS. (author)
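A minimal global-best PSO, the standard textbook form rather than the paper's HPIS-specific variant, is sketched below. The particles encode two maintenance intervals and the cost function is an invented stand-in for the repair/downtime/PM cost balance described above.

```python
import random

# Standard gbest PSO sketch on a toy maintenance-cost surface.

def maintenance_cost(x):
    t1, t2 = x
    # Fake cost with its minimum at intervals (6, 9); not the HPIS model.
    return (t1 - 6.0) ** 2 + (t2 - 9.0) ** 2 + 3.0

def pso(f, dim=2, n=20, iters=200, lo=0.0, hi=20.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]          # personal best positions
    g = min(P, key=f)[:]           # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia + cognitive + social velocity update.
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g, f(g)

best_x, best_cost = pso(maintenance_cost)
```

The record's harder problem, a variable number of interventions, would additionally require a discrete encoding on top of this continuous core, which is exactly the difficulty the abstract highlights.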

  5. A Cost Optimized Fully Sustainable Power System for Southeast Asia and the Pacific Rim

    Directory of Open Access Journals (Sweden)

    Ashish Gulagi

    2017-04-01

    Full Text Available In this paper, a cost-optimal 100% renewable energy based system is obtained for the Southeast Asia and Pacific Rim region for the year 2030 on an hourly resolution for the whole year. For the optimization, the region was divided into 15 sub-regions and three different scenarios were set up based on the level of high voltage direct current grid connections. The results obtained for the total system levelized cost of electricity showed a decrease from 66.7 €/MWh in a decentralized scenario to 63.5 €/MWh for a centralized grid-connected scenario. An integrated scenario was simulated to show the benefit of integrating the additional demand for industrial gas and desalinated water, which provided the system with the required flexibility and increased the efficiency of the usage of storage technologies. This was reflected in a decrease of the system cost by 9.5% and of the total electricity generation by 5.1%. According to the results, grid integration on a larger scale decreases the total system cost and levelized cost of electricity by reducing the need for storage technologies due to seasonal variations in weather and demand profiles. The intermittency of renewable technologies can be effectively stabilized to satisfy hourly demand at a low cost level. A 100% renewable energy based system could be a reality economically and technically in Southeast Asia and the Pacific Rim with the cost assumptions used in this research, and it may be more cost-competitive than the nuclear and fossil carbon capture and storage (CCS) alternatives.
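The levelized cost of electricity figures quoted above come from a full system model, but the basic LCOE arithmetic can be shown in a few lines: annualize capital cost with a capital recovery factor and divide total annual cost by annual energy. The input numbers below are illustrative, not the study's cost assumptions.

```python
# Back-of-the-envelope LCOE sketch.  All inputs are assumed values.

def crf(rate, years):
    """Capital recovery factor: annuity per unit of capital."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex_per_kw, opex_per_kw_yr, capacity_factor,
         rate=0.07, years=25):
    annual_cost = capex_per_kw * crf(rate, years) + opex_per_kw_yr  # €/kW/yr
    annual_energy_kwh = 8760 * capacity_factor                     # kWh/kW/yr
    return 1000.0 * annual_cost / annual_energy_kwh                # €/MWh

# Hypothetical solar plant: 800 €/kW capex, 15 €/kW/yr opex, CF 18 %.
solar = lcoe(capex_per_kw=800.0, opex_per_kw_yr=15.0, capacity_factor=0.18)
```

A system-level LCOE like the 63.5-66.7 €/MWh range in the record is essentially a generation-weighted aggregate of such per-technology figures plus storage and grid costs.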

  6. Cost optimization of wind turbines for large-scale offshore wind farms

    International Nuclear Information System (INIS)

    Fuglsang, P.; Thomsen, K.

    1998-02-01

    This report contains a preliminary investigation of site-specific design of offshore wind turbines for a large offshore wind farm project at Roedsand currently being proposed by ELKRAFT/SEAS. The results were found using a design tool for wind turbines that involves numerical optimization and aeroelastic calculation of response. The wind climate was modeled in detail and a cost function was used to estimate manufacturing and installation costs. Cost of energy is higher for offshore installations. A comparison of an offshore wind farm site with a typical stand-alone onshore site showed an increase of 28% in annual production due to the difference in wind climate. Extreme loads and blade fatigue loads were nearly identical; however, fatigue loads on other main components increased significantly. Optimizations were carried out to find the optimum overall offshore wind turbine design. A wind turbine for the offshore wind farm should differ from a stand-alone onshore wind turbine: the overall design changes were increased swept area and rated power combined with reduced rotor speed and tower height. Cost was reduced by 12% for the final 5D/14D offshore wind turbine, from 0.306 DKr/kWh to 0.270 DKr/kWh. These figures include capital costs from manufacture and installation but not ongoing maintenance costs. These results make offshore wind farms more competitive with the reference onshore stand-alone wind turbine. A corresponding reduction in cost of energy could not be found for the stand-alone onshore wind turbine. Furthermore, fatigue loads on wind turbines in onshore wind farms will increase, and their cost of energy will increase in favor of offshore wind farms. (au) EFP-95; EU-JOULE-3; 21 tabs., 7 ills., 8 refs.

  7. Comet Halley and interstellar chemistry

    International Nuclear Information System (INIS)

    Snyder, L.E.

    1989-01-01

    How complex is the chemistry of the interstellar medium? How far does it evolve, and how has it interacted with the chemistry of the solar system? Are the galactic chemical processes destroyed, preserved, or even enhanced in comets? Are biogenic molecules formed in space, and have the formation mechanisms interacted in any way with prebiotic organic chemical processes on the early Earth? Radio molecular studies of comets are important for probing deep into the coma and nuclear region and thus may help answer these questions. Comets are believed to be pristine samples of the debris left from the formation of the solar system and may have been the carrier between interstellar and terrestrial prebiotic chemistries. Recent observations of Comet Halley and subsequent comets have provided an excellent opportunity to study the relationship between interstellar molecular chemistry and cometary chemistry.

  8. Convex Optimization for the Energy Management of Hybrid Electric Vehicles Considering Engine Start and Gearshift Costs

    Directory of Open Access Journals (Sweden)

    Tobias Nüesch

    2014-02-01

    This paper presents a novel method to solve the energy management problem for hybrid electric vehicles (HEVs) with engine start and gearshift costs. The method is based on a combination of deterministic dynamic programming (DP) and convex optimization. As demonstrated in a case study, the method yields globally optimal results while returning the solution in much less time than the conventional DP method. In addition, the proposed method handles state constraints, which allows application to scenarios where the battery state of charge (SOC) reaches its boundaries.
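
The DP half of such a method can be sketched on a toy model. The demand profile, engine power levels, fuel rate, and start penalty below are invented for illustration (the paper's actual vehicle model and the convex-optimization part are not reproduced here); the key idea shown is augmenting the state with the engine on/off flag so that the start cost is charged only on an off-to-on transition:

```python
# Backward dynamic programming over (battery state of charge, engine on/off).
demand = [4.0, 6.0, 2.0, 8.0, 3.0]          # kW demanded at each time step (made up)
soc_grid = [i * 0.5 for i in range(21)]     # battery energy 0..10 kWh, 0.5 kWh steps
eng_opts = [0.0, 5.0, 10.0]                 # allowed engine power levels (kW)
dt, fuel_rate, start_cost = 1.0, 0.3, 1.0   # h, fuel cost per kWh, start penalty
INF = float("inf")

def dp_policy():
    """Returns the optimal cost-to-go V[(soc_index, engine_on)] at time 0
    and the per-step policy (chosen engine power for each state)."""
    V = {(i, on): 0.0 for i in range(len(soc_grid)) for on in (0, 1)}
    policy = []
    for t in reversed(range(len(demand))):
        newV, pol = {}, {}
        for i, soc in enumerate(soc_grid):
            for on in (0, 1):
                best, arg = INF, None
                for pe in eng_opts:
                    pb = demand[t] - pe              # battery covers the rest
                    soc_next = soc - pb * dt
                    j = round(soc_next / 0.5)
                    if j < 0 or j >= len(soc_grid):
                        continue                     # battery limit violated
                    on_next = 1 if pe > 0 else 0
                    cost = (fuel_rate * pe * dt
                            + (start_cost if on_next and not on else 0.0)
                            + V[(j, on_next)])
                    if cost < best:
                        best, arg = cost, pe
                newV[(i, on)], pol[(i, on)] = best, arg
        V, policy = newV, [pol] + policy
    return V, policy

V0, policy = dp_policy()
```

Starting half-charged with the engine off (state `(10, 0)`), the optimal plan runs the engine in one contiguous block, paying the start penalty exactly once.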

  9. Optimizing Cost of Continuous Overlapping Queries over Data Streams by Filter Adaption

    KAUST Repository

    Xie, Qing

    2016-01-12

    The problem we aim to address is the optimization of cost management for executing multiple continuous queries on data streams, where each query is defined by several filters, each of which monitors certain status of the data stream. In particular, a filter can be shared by different queries and be expensive to evaluate. The conventional objective for such a problem is to minimize the overall execution cost of all queries by planning the order of filter evaluation under a shared strategy. In a streaming scenario, however, the characteristics of data items may change over time, which brings uncertainty to the outcome of individual filter evaluations and affects the query execution plan as well as the overall execution cost. In our work, considering the influence of this uncertain variation in data characteristics, we propose a framework for the dynamic adjustment of filter ordering for query execution on data streams, focusing on the issues of cost management. By incrementally monitoring and analyzing the results of filter evaluation, the proposed approach adapts effectively to varying stream behavior and adjusts the optimal ordering of filter evaluation so as to optimize the execution cost. To achieve satisfactory performance and efficiency, we also discuss the trade-off between the adaptivity of our framework and the overhead incurred by filter adaptation. Experimental results on synthetic and two real data sets (traffic and multimedia) show that our framework can effectively reduce and balance the overall query execution cost while remaining highly adaptive in a streaming scenario.
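
For independent conjunctive filters, the classical static baseline ranks filters by cost divided by drop probability, so a cheap filter that rejects many items runs first; an adaptive framework of the kind described would re-derive this ordering as its selectivity estimates change. A sketch with hypothetical filter statistics:

```python
def optimal_order(filters):
    """Classical rank ordering for independent conjunctive filters:
    ascending cost / (probability of dropping the item)."""
    return sorted(filters, key=lambda f: f["cost"] / (1.0 - f["pass_prob"]))

def expected_cost(order):
    """Expected evaluation cost per item: filter i runs only if every
    earlier filter in the order passed."""
    total, p_reach = 0.0, 1.0
    for f in order:
        total += p_reach * f["cost"]
        p_reach *= f["pass_prob"]
    return total

# Hypothetical per-filter cost and pass probability (selectivity) estimates.
filters = [
    {"name": "A", "cost": 5.0, "pass_prob": 0.2},   # expensive but selective
    {"name": "B", "cost": 1.0, "pass_prob": 0.9},   # cheap, rarely drops items
    {"name": "C", "cost": 2.0, "pass_prob": 0.3},   # cheap-ish and selective
]
best = optimal_order(filters)
```

Here the rank order C, A, B cuts the expected per-item cost from 5.56 (naive order A, B, C) to 3.56; in the adaptive setting, the same ranking is simply recomputed whenever the monitored pass probabilities drift.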

  10. Life-cycle cost assessment of optimally designed reinforced concrete buildings under seismic actions

    International Nuclear Information System (INIS)

    Mitropoulou, Chara Ch.; Lagaros, Nikos D.; Papadrakakis, Manolis

    2011-01-01

    Life-cycle cost analysis (LCCA) is an assessment tool for studying the performance of systems in many fields of engineering. In earthquake engineering, LCCA demands the calculation of the cost components that are related to the performance of the structure at multiple earthquake hazard levels. Incremental static and dynamic analyses are two procedures that can be used for estimating the seismic capacity of a structural system and can therefore be incorporated into the LCCA methodology. In this work, the effects of the analysis procedure, the number of seismic records imposed, the performance criterion used and the structural type (regular or irregular) on the life-cycle cost analysis of 3D reinforced concrete structures are investigated. Furthermore, the influence of uncertainties on the seismic response of structural systems and their impact on LCCA is examined. Uncertainty in the material properties, the cross-section dimensions and the record incident angle is taken into account by incorporating the Latin hypercube sampling method into the incremental dynamic analysis procedure. In addition, the LCCA methodology is used as an assessment tool for the designs obtained by means of prescriptive and performance-based optimum design methodologies. The first is obtained from a single-objective optimization problem, where the initial construction cost is the objective to be minimized, while the second comes from a two-objective optimization problem where the life-cycle cost is an additional objective also to be minimized.
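
The Latin hypercube sampling step can be sketched as follows; the three variables and their ranges below are hypothetical stand-ins for the material, cross-section, and incident-angle uncertainties mentioned above:

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample: each variable's range is split into n equal
    strata, each stratum is hit exactly once, and the strata are paired at
    random across variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        perm = list(range(n_samples))
        rng.shuffle(perm)                 # random stratum assignment per sample
        for i in range(n_samples):
            u = (perm[i] + rng.random()) / n_samples   # point inside the stratum
            samples[i][d] = lo + u * (hi - lo)
    return samples

# Hypothetical ranges: concrete strength (MPa), column width (m), incident angle (deg)
pts = latin_hypercube(10, [(20.0, 40.0), (0.30, 0.50), (0.0, 180.0)])
```

With 10 samples, every tenth of each variable's range is covered exactly once, which is why far fewer incremental dynamic analyses are needed than with plain Monte Carlo sampling.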

  11. An Invocation Cost Optimization Method for Web Services in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Lianyong Qi

    2017-01-01

    The advent of cloud computing technology has enabled users to invoke various web services in a “pay-as-you-go” manner. However, due to the flexible pricing model of web services in the cloud environment, a cloud user's service invocation cost may be influenced by many factors (e.g., service invocation time), which poses a great challenge for cost-effective web service invocation. In view of this challenge, in this paper we first investigate the multiple factors that influence the invocation cost of a cloud service, for example, the user's job size, service invocation time, and service quality level; afterwards, a novel Cloud Service Cost Optimization Method named CS-COM is put forward that considers the above impact factors. Finally, a set of experiments are designed, deployed, and tested to validate the feasibility of our proposal in terms of cost optimization. The experimental results show that the proposed CS-COM method outperforms other related methods.

  12. Estimation of Partial Safety Factors and Target Failure Probability Based on Cost Optimization of Rubble Mound Breakwaters

    DEFF Research Database (Denmark)

    Kim, Seung-Woo; Suh, Kyung-Duck; Burcharth, Hans F.

    2010-01-01

    Breakwaters are designed by considering cost optimization because human risk is seldom involved. Most breakwaters, however, were constructed without considering cost optimization. In this study, the optimum return period, target failure probability and the partial safety factors...

  13. Hybrid Cloud Computing Architecture Optimization by Total Cost of Ownership Criterion

    Directory of Open Access Journals (Sweden)

    Elena Valeryevna Makarenko

    2014-12-01

    Achieving the goals of information security is a key factor in the decision to outsource information technology and, in particular, to decide on the migration of organizational data, applications, and other resources to an infrastructure based on cloud computing. The key issue in selecting the optimal architecture and in the subsequent migration of business applications and data into the cloud-based information environment of an organization is the total cost of ownership of the IT infrastructure. This paper focuses on solving the problem of minimizing the total cost of ownership of a cloud.

  14. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    Science.gov (United States)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  15. Optimal scenario balance of reduction in costs and greenhouse gas emissions for municipal solid waste management

    Institute of Scientific and Technical Information of China (English)

    邓娜; 张强; 陈广武; 齐长青; 崔文谦; 张于峰; 马洪亭

    2015-01-01

    To reduce carbon intensity, an improved management method balancing the reduction in costs and greenhouse gas (GHG) emissions is required for Tianjin's waste management system. Firstly, six objective functions, namely cost minimization, GHG minimization, eco-efficiency minimization, cost maximization, GHG maximization and eco-efficiency maximization, are built and subjected to the same constraints, with each objective function corresponding to one scenario. Secondly, GHG emissions and costs are derived from the waste flow of each scenario. Thirdly, the range of GHG emissions and costs of other potential scenarios is obtained and plotted by adjusting waste flows with infinitely many possible step sizes according to the correlation among the above six scenarios, and the optimal scenario is determined based on this range. The results suggest the following conclusions: 1) the scenarios located on the border between the cost-minimization and GHG-minimization scenarios create an optimum curve, and the GHG-minimization scenario has the smallest eco-efficiency on the curve; 2) simple pursuit of eco-efficiency minimization using fractional programming may be unreasonable; 3) balancing GHG emissions between incineration and landfills benefits Tianjin's waste management system as it reduces GHG emissions and costs.
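
The idea of tracing the curve of scenarios between cost minimization and GHG minimization can be sketched as a grid enumeration of waste allocations with Pareto filtering; the per-tonne cost and GHG figures below are invented for illustration and are not Tianjin's actual data:

```python
# Hypothetical per-tonne cost (currency units) and GHG (kg CO2e) per treatment.
options = {"landfill": (20.0, 600.0),
           "incineration": (60.0, 250.0),
           "recycling": (45.0, 100.0)}
TOTAL = 100.0   # tonnes of waste to allocate

def frontier(step=5):
    """Enumerate allocations on a percentage grid and keep the Pareto-optimal
    (cost, GHG) points: the 'optimum curve' between the cost-minimal and
    GHG-minimal scenarios."""
    pts = []
    for a in range(0, 101, step):
        for b in range(0, 101 - a, step):
            shares = {"landfill": a, "incineration": b, "recycling": 100 - a - b}
            cost = sum(options[k][0] * shares[k] / 100.0 for k in options) * TOTAL
            ghg = sum(options[k][1] * shares[k] / 100.0 for k in options) * TOTAL
            pts.append((cost, ghg, shares))
    pareto = [p for p in pts
              if not any(q[0] <= p[0] and q[1] <= p[1]
                         and (q[0] < p[0] or q[1] < p[1]) for q in pts)]
    return sorted(pareto, key=lambda p: p[0])

curve = frontier()
```

Moving along the resulting curve, every extra unit of cost buys a strict reduction in GHG, which is exactly the trade-off range from which the optimal scenario is then chosen.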

  16. Formation of interstellar anions

    Science.gov (United States)

    Senent, Maria Luisa

    2012-05-01

    The recent detection of negatively charged species in the ISM [1] has instigated enthusiasm for anions in the astrophysical community [2]. Many of these species are new and require characterization, and how they are formed in astrophysical sources is a question of major relevance. The presence of anions in the ISM was first predicted theoretically on the basis of electron affinities and of the stabilities of negative linear chain molecules. Although they were considered very early in astrochemical models [3-4], their discovery is recent because their abundances seem to be relatively low. These have to be understood in terms of molecular stabilities, reaction probabilities, and radiative and collisional excitations. We present our theoretical work on even carbon chains of type Cn and CnH (n = 2, 4, 6), focused on the understanding of anion abundances, using highly correlated ab initio methods. We performed spectroscopic studies of various isomers that can play important roles as intermediates [5-8]. In previous papers [9-10], we compared C2H and C2H- collisional rates responsible for observed line intensities. Currently, we study hydrogen attachment (Cn + H → CnH and Cn- + H → CnH-) and associative detachment processes (Cn- + H → CnH + e-) for 2, 4 and 6 carbon atom chains [11]. [1] M.C. McCarthy, C.A. Gottlieb, H. Gupta, P. Thaddeus, Astrophys. J., 652, L141 (2006) [2] V.M. Bierbaum, J. Cernicharo, R. Bachiller, eds., 2011, pp. 383-389. [3] A. Dalgarno, R.A. McCray, Astrophys. J., 181, 95 (1973) [4] E. Herbst, Nature, 289, 656 (1981) [5] H. Massó, M.L. Senent, P. Rosmus, M. Hochlaf, J. Chem. Phys., 124, 234304 (2006) [6] M.L. Senent, M. Hochlaf, Astrophys. J., 708, 1452 (2010) [7] H. Massó, M.L. Senent, J. Phys. Chem. A, 113, 12404 (2009) [8] D. Hammoutene, M. Hochlaf, M.L. Senent, submitted. [9] A. Spielfiedel, N. Feautrier, F. Najar, D. Ben Abdallah, F. Dayou, M.L. Senent, F. Lique, Mon. Not. R. Astron. Soc., 421, 1891 (2012) [10] F. Dumouchel, A. Spielfiedel, M

  17. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    Science.gov (United States)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Calibrations to the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. For SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophies and technologies must be employed to design and produce reliable, high-quality space systems at low cost. Recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM), a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and the use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high-quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. They have been used successfully in Japan and the United States to design reliable, high-quality products at low cost in areas such as automobiles and consumer electronics; however, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, and to describe the current state of applications and their role in identifying cost-sensitive design parameters.
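
A minimal example of the Taguchi-style signal-to-noise analysis runs as follows; the L4(2^3) array is standard, but the response data and the "larger-the-better" interpretation are made up for illustration:

```python
import math

# L4(2^3) orthogonal array: four runs covering three two-level factors in a
# balanced way; each run is repeated twice (hypothetical response values).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
responses = [[12.0, 13.0], [15.0, 14.0], [21.0, 20.0], [18.0, 19.0]]

def sn_larger_better(ys):
    """Taguchi signal-to-noise ratio for larger-the-better responses:
    S/N = -10 * log10( mean(1 / y^2) )."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def main_effects():
    """Average S/N at each level of each factor; the level with the higher
    S/N is the preferred (more robust) setting."""
    sn = [sn_larger_better(ys) for ys in responses]
    effects = []
    for factor in range(3):
        level = {0: [], 1: []}
        for row, ratio in zip(L4, sn):
            level[row[factor]].append(ratio)
        effects.append((sum(level[0]) / 2, sum(level[1]) / 2))
    return effects

effects = main_effects()
```

With only four runs instead of the 2^3 = 8 of a full factorial, the balanced array still separates the main effect of each factor, which is the economy that makes DOE attractive for cost-sensitive design studies.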

  18. OPTIMIZATION OF TIMES AND COSTS OF PROJECT OF HORIZONTAL LAMINATOR PRODUCTION USING PERT/CPM TECHNICAL

    Directory of Open Access Journals (Sweden)

    Fernando Henrique Lermen

    2016-09-01

    PERT/CPM is a technique widely used both in scheduling and in assessing project feasibility in terms of cost and time control. In order to optimize the time and costs involved in production, the work presented here applies the PERT/CPM technique to the production project of the Horizontal Laminator, a machine used to cut polyurethane foam blocks in the mattress industry. For the application of the PERT/CPM technique, the activities that compose the project were identified, along with the dependencies between them, the normal and accelerated durations, and the normal and accelerated costs. In this study, deterministic estimates for the durations of the activities were considered. The results show that the project can be completed in 520 hours at a total cost of R$7,042.50 when all activities are performed in their normal durations. When all the activities on the critical path are accelerated, the project can be completed in 333.3 hours at a total cost of R$9,263.01. If the activity slacks are exploited, a final total cost of R$6,157.80 can be obtained without changing the new duration of the project. It is noteworthy that the final total cost of the project, if the slacks are used, will be lower than the initial cost: after the application of the PERT/CPM technique, the total project cost decreases by 12.56%.
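
The forward/backward pass underlying CPM can be sketched on a toy network (these are not the laminator project's actual activities or durations):

```python
# Toy activity network: activity -> (duration in hours, predecessors)
acts = {
    "A": (3, []), "B": (5, ["A"]), "C": (2, ["A"]),
    "D": (4, ["B", "C"]), "E": (1, ["C"]), "F": (2, ["D", "E"]),
}

def cpm(activities):
    """Forward pass for earliest starts, backward pass for latest finishes;
    activities with zero slack form the critical path."""
    order, seen = [], set()
    def visit(a):                        # topological order via DFS
        for p in activities[a][1]:
            if p not in seen:
                visit(p)
        seen.add(a)
        order.append(a)
    for a in activities:
        if a not in seen:
            visit(a)
    es = {}
    for a in order:                      # forward pass: earliest start
        es[a] = max((es[p] + activities[p][0] for p in activities[a][1]), default=0)
    duration = max(es[a] + activities[a][0] for a in activities)
    lf = {}
    for a in reversed(order):            # backward pass: latest finish
        succ = [b for b in activities if a in activities[b][1]]
        lf[a] = min((lf[b] - activities[b][0] for b in succ), default=duration)
    slack = {a: lf[a] - activities[a][0] - es[a] for a in activities}
    critical = [a for a in order if slack[a] == 0]
    return duration, critical, slack

duration, critical, slack = cpm(acts)
```

Only the zero-slack activities (here A, B, D, F) determine the project duration, which is why accelerating the critical path shortens the project while exploiting the slack of the other activities cuts cost without changing it.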

  19. Chemisputtering of interstellar graphite grains

    International Nuclear Information System (INIS)

    Draine, B.T.

    1979-01-01

    The rate of erosion of interstellar graphite grains as a result of chemical reactions with H, N, and O is estimated using the available experimental evidence. It is argued that ''chemical sputtering'' yields for interstellar graphite grains will be much less than unity, contrary to earlier estimates by Barlow and Silk. Chemical sputtering of graphite grains in evolving H II regions is found to be unimportant, except in extremely compact (n_H ≳ 10^5 cm^-3) H II regions. Alternative explanations are considered for the apparent weakness of the λ = 2175 Å extinction ''bump'' in the direction of several early-type stars.

  20. Interstellar Initiative Web Page Design

    Science.gov (United States)

    Mehta, Alkesh

    1999-01-01

    This summer at NASA/MSFC, I have contributed to two projects: Interstellar Initiative Web Page Design and Lenz's Law Relative Motion Demonstration. In the Web Design Project, I worked on an Outline. The Web Design Outline was developed to provide a foundation for a Hierarchy Tree Structure. The Outline would help design a Website information base for future and near-term missions. The Website would give in-depth information on Propulsion Systems and Interstellar Travel. The Lenz's Law Relative Motion Demonstrator is discussed in this volume by Russell Lee.

  1. Interstellar matter within elliptical galaxies

    Science.gov (United States)

    Jura, Michael

    1988-01-01

    Multiwavelength observations of elliptical galaxies are reviewed, with an emphasis on their implications for theoretical models proposed to explain the origin and evolution of the interstellar matter. Particular attention is given to interstellar matter at T less than 100 K (atomic and molecular gas and dust), gas at T = about 10,000 K, and gas at T = 10 to the 6th K or greater. The data are shown to confirm the occurrence of mass loss from evolved stars, significant accretion from companion galaxies, and cooling inflows; no evidence is found for large mass outflow from elliptical galaxies.

  2. Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems

    DEFF Research Database (Denmark)

    Weerts, Hermanus H. M.; Shafiei, Seyed Ehsan; Stoustrup, Jakob

    2014-01-01

    A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics of large-scale refrigeration plants challenge the predictive control design. It is shown, however, that taking into account the knowledge of different time scales in the dynamical subsystems makes a linear formulation of a centralized predictive controller possible. A realistic scenario of regulatory power services in the smart grid is considered and formulated in the same objective as the cost optimization one. A simulation benchmark validated against real data and including the significant dynamics of the system is employed to show the effectiveness of the proposed control scheme.

  3. External costs in the global energy optimization models. A tool in favour of sustainability

    International Nuclear Information System (INIS)

    Cabal Cuesta, H.

    2007-01-01

    The aim of this work is the analysis of the effects of internalizing GHG external costs in energy systems, which may provide a useful tool to support decision makers in reaching energy system sustainability. External cost internalization has been carried out using two methods. First, CO2 externalities of different power generation technologies have been internalized to evaluate their effects on the economic competitiveness of these present and future technologies. The other method consisted of analysing and optimizing the global energy system, from an economic and environmental point of view, using the global energy optimization model generator TIMES with a time horizon of 50 years. Finally, some scenarios regarding environmental and economic strategic measures have been analysed. (Author)

  4. Optimal cost design of base-isolated pool structures for the storage of nuclear spent fuel

    International Nuclear Information System (INIS)

    Ko, H. M.; Park, K. S.; Song, J. H.

    1999-01-01

    A method of cost-effectiveness evaluation for seismically isolated pool structures is presented. Input ground motion is modeled as a spectral density function compatible with the response spectrum for a combination of acceleration coefficient and site coefficient. Interaction effects between the flexible walls and the contained fluid are considered in the form of an added mass matrix. Wall thickness and isolator stiffness are adopted as design variables for the optimization. The transfer function vector of the structure-isolator system is derived from the equation of motion. A spectral analysis method based on random vibration theory is used for the calculation of failure probability. The exemplifying designs and analyses show that the cost-effectiveness of isolated pool structures is relatively high in low-to-moderate seismic regions and stiff soil conditions, and that the sensitivity of the optimal design variables to the assumed damage scales is relatively low in such regions.

  5. Integration of water footprint accounting and costs for optimal pulp supply mix in paper industry

    DEFF Research Database (Denmark)

    Manzardo, Alessandro; Ren, Jingzheng; Piantella, Antonio

    2014-01-01

    studies have focused on these aspects, but there have been no previous reports on the integrated application of raw material water footprint accounting and costs in the definition of the optimal supply mix of chemical pulps from different countries. The current models that have been applied specifically...... that minimizes the water footprint accounting results and costs of chemical pulp, thereby facilitating the assessment of the water footprint by accounting for different chemical pulps purchased from various suppliers, with a focus on the efficiency of the production process. Water footprint accounting...... was adapted to better represent the efficiency of pulp and paper production. A multi-objective model for supply mix optimization was also developed using multi-criteria decision analysis (MCDA). Water footprint accounting confirmed the importance of the production efficiency of chemical pulp, which affected...

  6. Operating cost reduction by optimization of I and C backfitting strategy

    International Nuclear Information System (INIS)

    Kraft, Heinz-U.

    2002-01-01

    The safe and economic operation of a nuclear power plant requires a large scope of automation systems acting properly in combination. The associated maintenance costs, necessary to test these systems periodically and to repair or replace them partly or completely, are one important factor in the overall operating costs of a nuclear power plant. Reducing these costs by reducing the maintenance effort could decrease the availability of the power plant and thereby increase the operating costs significantly. The minimization of the overall operating costs therefore requires a well-balanced maintenance strategy taking all these opposing influences into account. The replacement of an existing I and C system by a new one reduces maintenance costs in the long term and increases plant availability, but requires some investment in the short term. On the other hand, the repair of an I and C system avoids investments but does not solve the aging problems; maintenance costs will increase in the long term and plant availability could decrease. An optimized maintenance strategy can be elaborated on a plant-specific basis, taking into account the residual lifetime of the plant, the properties of the installed I and C systems, and their influence on plant availability. As a general result of such optimizations performed by FANP, it has been found as a rule that the replacement of I and C systems becomes the most economic option the longer the expected remaining lifetime is and the more strongly the I and C system influences the availability of the plant. (author)

  7. Optimizing staffing, quality, and cost in home healthcare nursing: theory synthesis.

    Science.gov (United States)

    Park, Claire Su-Yeon

    2017-08-01

    To propose a new theory pinpointing the optimal nurse staffing threshold delivering the maximum quality of care relative to attendant costs in home health care. Little knowledge exists on the theoretical foundation addressing the inter-relationship among quality of care, nurse staffing, and cost. Theory synthesis. Cochrane Library, PubMed, CINAHL, EBSCOhost Web and Web of Science (25 February - 26 April 2013; 20 January - 22 March 2015). Most existing theories/models lacked the detail necessary to explain the relationship among quality of care, nurse staffing and cost. Two notable exceptions are 'Production Function for Staffing and Quality in Nursing Homes,' which describes an S-shaped trajectory between quality of care and nurse staffing, and 'Thirty-day Survival Isoquant and Estimated Costs According to the Nurse Staff Mix,' which depicts a positive quadratic relationship between nurse staffing and cost according to quality of care. A synthesis of these theories led to an innovative multi-dimensional econometric theory helping to determine the maximum quality of care for patients while simultaneously delivering nurse staffing in the most cost-effective way. The theory-driven threshold, navigated by mathematical programming based on the duality theorem in mathematical economics, will help nurse executives defend sufficient nurse staffing with scientific justification to ensure optimal patient care; help stakeholders set an evidence-based, reasonable economic goal; and facilitate patient-centred decision-making in choosing the institution which delivers the best quality of care. A new theory to determine the optimum nurse staffing maximizing quality of care relative to cost was proposed. © 2017 The Author. Journal of Advanced Nursing © John Wiley & Sons Ltd.

  8. Experimental interstellar organic chemistry - Preliminary findings

    Science.gov (United States)

    Khare, B. N.; Sagan, C.

    1973-01-01

    Review of the results of some explicit experimental simulations of interstellar organic chemistry, consisting of low-temperature, high-vacuum UV irradiation of condensed simple gases known or suspected to be present in the interstellar medium. The results include the finding that acetonitrile may be present in the interstellar medium. The implications of this and other findings are discussed.

  9. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    Science.gov (United States)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.

  10. Construction Performance Optimization toward Green Building Premium Cost Based on Greenship Rating Tools Assessment with Value Engineering Method

    Science.gov (United States)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat

    2017-07-01

    The green building concept has become important in the current building life cycle to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the green building rating target while optimizing life cycle cost; the study thereby helps building stakeholders determine the building fixtures needed to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, such as green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. Then, using the value engineering method, green building fixtures are optimized based on building function and cost aspects. Findings indicate that construction performance optimization affects green building achievement by increasing the energy and water efficiency factors and improving life cycle cost effectiveness, especially through the choice of green building fixtures.

  11. COST ANALYSIS AND OPTIMIZATION IN THE LOGISTIC SUPPLY CHAIN USING THE SIMPROLOGIC PROGRAM

    OpenAIRE

    Ilona MAŃKA; Adam MAŃKA

    2016-01-01

    This article aims to characterize the authors' SimProLOGIC program, version 2.1, which enables one to conduct a cost analysis of individual links, as well as of the entire logistic supply chain (LSC). It also presents an example analysis of the parameters characterizing a supplier of subsystems in the examined logistic chain, and the results of an initial optimization, which makes it possible to improve the economic balance as well as the level of customer service...

  12. Modelling and Optimal Control of Typhoid Fever Disease with Cost-Effective Strategies.

    Science.gov (United States)

    Tilahun, Getachew Teshome; Makinde, Oluwole Daniel; Malonza, David

    2017-01-01

    We propose and analyze a compartmental nonlinear deterministic mathematical model for typhoid fever outbreaks and optimal control strategies in a community with varying population. The model is studied qualitatively using the stability theory of differential equations, and the basic reproduction number, which serves as the epidemic indicator, is obtained from the largest eigenvalue of the next-generation matrix. Both local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined. The model exhibits a forward transcritical bifurcation, and a sensitivity analysis is performed. The optimal control problem is designed by applying Pontryagin's maximum principle with three control strategies, namely, prevention through sanitation, proper hygiene, and vaccination; treatment through application of appropriate medicine; and screening of carriers. The cost functional accounts for the cost involved in prevention, screening, and treatment, together with the total number of infected persons averted. Numerical results for the typhoid outbreak dynamics and its optimal control reveal that a combination of prevention and treatment is the most cost-effective strategy to eradicate the disease.
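
    The next-generation-matrix construction mentioned in this record can be sketched numerically: the basic reproduction number is the spectral radius of F·V⁻¹, where F holds new-infection rates and V the transitions between infected compartments. The two-compartment layout and all rate values below are invented for illustration, not the paper's fitted typhoid model.

```python
import numpy as np

# Toy two-compartment example (infectious I, carrier C); all rates are
# illustrative assumptions, not the paper's typhoid parameters.
beta_I, beta_C = 0.30, 0.10   # transmission rates from I and from C
gamma = 0.20                  # recovery rate of I
delta = 0.05                  # progression rate I -> C
mu_C = 0.02                   # clearance rate of carriers

F = np.array([[beta_I, beta_C],      # new infections all enter compartment I
              [0.0,    0.0   ]])
V = np.array([[gamma + delta, 0.0 ],  # total outflow from I
              [-delta,        mu_C]]) # inflow to C from I, outflow from C

# R0 = spectral radius (largest |eigenvalue|) of the next-generation matrix
R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(R0)   # ≈ 2.2 for these toy rates: an epidemic can take off
```

    With these placeholder rates the carriers contribute most of R0, which is why screening appears as a control strategy alongside prevention and treatment.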

  13. Addressing imperfect maintenance modelling uncertainty in unavailability and cost based optimization

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Ana [Department of Statistics and Operational Research, Polytechnic University of Valencia, Camino de Vera, s/n, 46071 Valencia (Spain); Carlos, Sofia [Department of Chemical and Nuclear Engineering, Polytechnic University of Valencia, Camino de Vera, s/n, 46071 Valencia (Spain); Martorell, Sebastian [Department of Chemical and Nuclear Engineering, Polytechnic University of Valencia, Camino de Vera, s/n, 46071 Valencia (Spain)], E-mail: smartore@iqn.upv.es; Villanueva, Jose F. [Department of Chemical and Nuclear Engineering, Polytechnic University of Valencia, Camino de Vera, s/n, 46071 Valencia (Spain)

    2009-01-15

    Optimization of testing and maintenance activities performed in the different systems of a complex industrial plant is of great interest as the plant availability and economy strongly depend on the maintenance activities planned. Traditionally, two types of models, i.e. deterministic and probabilistic, have been considered to simulate the impact of testing and maintenance activities on equipment unavailability and the cost involved. Both models present uncertainties that are often categorized as either aleatory or epistemic uncertainties. The second group applies when there is limited knowledge on the proper model to represent a problem, and/or the values associated to the model parameters, so the results of the calculation performed with them incorporate uncertainty. This paper addresses the problem of testing and maintenance optimization based on unavailability and cost criteria and considering epistemic uncertainty in the imperfect maintenance modelling. It is framed as a multiple criteria decision making problem where unavailability and cost act as uncertain and conflicting decision criteria. A tolerance interval based approach is used to address uncertainty with regard to effectiveness parameter and imperfect maintenance model embedded within a multiple-objective genetic algorithm. A case of application for a stand-by safety related system of a nuclear power plant is presented. The results obtained in this application show the importance of considering uncertainties in the modelling of imperfect maintenance, as the optimal solutions found are associated with a large uncertainty that influences the final decision making depending on, for example, if the decision maker is risk averse or risk neutral.

  14. Replacement policy of residential lighting optimized for cost, energy, and greenhouse gas emissions

    Science.gov (United States)

    Liu, Lixi; Keoleian, Gregory A.; Saitou, Kazuhiro

    2017-11-01

    Accounting for 10% of the electricity consumption in the US, artificial lighting represents one of the easiest ways to cut household energy bills and greenhouse gas (GHG) emissions by upgrading to energy-efficient technologies such as compact fluorescent lamps (CFL) and light emitting diodes (LED). However, given the high initial cost and rapidly improving trajectory of solid-state lighting today, estimating the right time to switch over to LEDs from a cost, primary energy, and GHG emissions perspective is not a straightforward problem. This is an optimal replacement problem that depends on many determinants, including how often the lamp is used, the state of the initial lamp, and the trajectories of lighting technology and of electricity generation. In this paper, multiple replacement scenarios of a 60 watt-equivalent A19 lamp are analyzed and for each scenario, a few replacement policies are recommended. For example, at an average use of 3 hr day⁻¹ (US average), it may be optimal both economically and energetically to delay the adoption of LEDs until 2020 with the use of CFLs, whereas purchasing LEDs today may be optimal in terms of GHG emissions. In contrast, incandescent and halogen lamps should be replaced immediately. Based on expected LED improvement, upgrading LED lamps before the end of their rated lifetime may provide cost and environmental savings over time by taking advantage of the higher energy efficiency of newer models.
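
    The optimal replacement problem this record describes can be sketched as a small finite-horizon dynamic program: each year, either keep the current lamp or replace it, with the LED purchase price assumed to decline over time. The prices, operating costs, and horizon below are invented placeholders, not the paper's data.

```python
from functools import lru_cache

# Illustrative numbers only (not the paper's data): annual operating
# cost per technology, and purchase prices with LEDs getting cheaper.
OP = {"incandescent": 7.9, "cfl": 1.8, "led": 1.3}   # $/year, assumed
END = 2027                                           # planning horizon

def price(tech, year):
    """Assumed purchase price; the LED trajectory is hypothetical."""
    if tech == "led":
        return max(1.0, 5.0 - 0.5 * (year - 2017))
    return {"incandescent": 1.0, "cfl": 2.0}[tech]

@lru_cache(maxsize=None)
def best(year, tech):
    """Minimum total cost from `year` to END, currently owning `tech`."""
    if year >= END:
        return 0.0
    keep = OP[tech] + best(year + 1, tech)
    switch = min(price(t, year) + OP[t] + best(year + 1, t) for t in OP)
    return min(keep, switch)

print(best(2017, "incandescent"))   # far below the 79.0 cost of never replacing
```

    Even this toy version reproduces the record's qualitative conclusion: incandescent lamps should be replaced immediately, while the keep-CFL-then-adopt-LED and adopt-LED-now paths can be nearly cost-equivalent depending on how fast LED prices fall.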

  15. Addressing imperfect maintenance modelling uncertainty in unavailability and cost based optimization

    International Nuclear Information System (INIS)

    Sanchez, Ana; Carlos, Sofia; Martorell, Sebastian; Villanueva, Jose F.

    2009-01-01

    Optimization of testing and maintenance activities performed in the different systems of a complex industrial plant is of great interest as the plant availability and economy strongly depend on the maintenance activities planned. Traditionally, two types of models, i.e. deterministic and probabilistic, have been considered to simulate the impact of testing and maintenance activities on equipment unavailability and the cost involved. Both models present uncertainties that are often categorized as either aleatory or epistemic uncertainties. The second group applies when there is limited knowledge on the proper model to represent a problem, and/or the values associated to the model parameters, so the results of the calculation performed with them incorporate uncertainty. This paper addresses the problem of testing and maintenance optimization based on unavailability and cost criteria and considering epistemic uncertainty in the imperfect maintenance modelling. It is framed as a multiple criteria decision making problem where unavailability and cost act as uncertain and conflicting decision criteria. A tolerance interval based approach is used to address uncertainty with regard to effectiveness parameter and imperfect maintenance model embedded within a multiple-objective genetic algorithm. A case of application for a stand-by safety related system of a nuclear power plant is presented. The results obtained in this application show the importance of considering uncertainties in the modelling of imperfect maintenance, as the optimal solutions found are associated with a large uncertainty that influences the final decision making depending on, for example, if the decision maker is risk averse or risk neutral

  16. Distributed Bees Algorithm Parameters Optimization for a Cost Efficient Target Allocation in Swarms of Robots

    Directory of Open Access Journals (Sweden)

    Álvaro Gutiérrez

    2011-11-01

    Swarms of robots can use their sensing abilities to explore unknown environments and deploy on sites of interest. In this task, a large number of robots is more effective than a single unit because of their ability to quickly cover the area. However, the coordination of large teams of robots is not an easy problem, especially when the resources for the deployment are limited. In this paper, the Distributed Bees Algorithm (DBA), previously proposed by the authors, is optimized and applied to distributed target allocation in swarms of robots. Improved target allocation in terms of deployment cost efficiency is achieved through optimization of the DBA’s control parameters by means of a Genetic Algorithm. Experimental results show that with the optimized set of parameters, the deployment cost measured as the average distance traveled by the robots is reduced. The cost-efficient deployment is in some cases achieved at the expense of increased robots’ distribution error. Nevertheless, the proposed approach allows the swarm to adapt to the operating conditions when available resources are scarce.

  17. Marginal abatement cost curves and the optimal timing of mitigation measures

    International Nuclear Information System (INIS)

    Vogt-Schilb, Adrien; Hallegatte, Stéphane

    2014-01-01

    Decision makers facing abatement targets need to decide which abatement measures to implement, and in which order. Measure-explicit marginal abatement cost curves depict the cost and abating potential of available mitigation options. Using a simple intertemporal optimization model, we demonstrate why this information is not sufficient to design emission reduction strategies. Because the measures required to achieve ambitious emission reductions cannot be implemented overnight, the optimal strategy to reach a short-term target depends on longer-term targets. For instance, the best strategy to achieve Europe's −20% by 2020 target may be to implement some expensive, high-potential, and long-to-implement options required to meet the −75% by 2050 target. Using just the cheapest abatement options to reach the 2020 target can create a carbon-intensive lock-in and make the 2050 target too expensive to reach. Designing mitigation policies requires information on the speed at which various measures to curb greenhouse gas emissions can be implemented, in addition to the information on the costs and potential of such measures provided by marginal abatement cost curves. - Highlights: • Classification of existing Marginal Abatement Cost Curves (MACC). • MACCs do not provide separated data on the speed at which measures can be implemented. • Optimal measures to reach a short-term target depend on longer-term targets. • Unique carbon price or aggregated emission-reduction target may be insufficient. • Room for short-term sectoral policies if agents are myopic or governments cannot commit
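
    The lock-in argument in this record can be illustrated with two invented measures (not the paper's model): measure A is cheap but limited in potential, while measure B is expensive, high-potential, and constrained to ramp up slowly. With these made-up numbers, a cheapest-first strategy that defers B until after the 2020 target leaves the 2050 target unreachable.

```python
# Toy intertemporal illustration; all numbers are invented assumptions.
RAMP_B = 1.5    # max additional abatement units of B deployable per year
POT_A = 20.0    # total abatement potential of cheap measure A

def abatement_2050(start_B):
    """Total abatement reached in 2050 if measure B starts ramping in `start_B`."""
    b = max(0.0, 2050 - start_B) * RAMP_B
    return POT_A + b

# Cheapest-first: meet the -20-by-2020 target with A alone, start B in 2020.
print(abatement_2050(start_B=2020))   # 65.0 -- misses the -75-by-2050 target
# Forward-looking: begin the slow, expensive measure B back in 2012.
print(abatement_2050(start_B=2012))   # 77.0 -- meets both targets
```

    The point is exactly the record's: the marginal abatement cost curve alone (cost and potential of A and B) cannot distinguish these two strategies; the ramp-rate constraint does.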

  18. A cost optimization model for 100% renewable residential energy supply systems

    DEFF Research Database (Denmark)

    Milan, Christian; Bojesen, Carsten; Nielsen, Mads Pagh

    2012-01-01

    The concept of net zero energy buildings (Net ZEB) has received increased attention throughout the last years. A well adapted and optimized design of the energy supply system is crucial for the performance of these buildings. To achieve this, a holistic approach is needed which accounts for the [...] involving on-site production of heat and electricity in combination with electricity exchanged with the public grid. The model is based on linear programming and determines the optimal capacities for each relevant supply technology in terms of the overall system costs. It has been successfully applied...

  19. Optimal pricing policies for services with consideration of facility maintenance costs

    Science.gov (United States)

    Yeh, Ruey Huei; Lin, Yi-Fang

    2012-06-01

    For survival and success, pricing is an essential issue for service firms. This article deals with pricing strategies for services with substantial facility maintenance costs. For this purpose, a mathematical framework that incorporates service demand and facility deterioration is proposed to address the problem. The facility and customers constitute a service system driven by Poisson arrivals and exponential service times. A service demand with increasing price elasticity and a facility lifetime with a strictly increasing failure rate are also adopted in the modelling. By examining the bidirectional relationship between customer demand and facility deterioration in the profit model, the pricing policies of the service are investigated. Analytical conditions on customer demand and facility lifetime are then derived to achieve a unique optimal pricing policy. The comparative statics properties of the optimal policy are also explored. Finally, numerical examples are presented to illustrate the effects of parameter variations on the optimal pricing policy.
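
    A stripped-down version of this kind of profit model can be sketched numerically. The demand curve, service rate, and maintenance cost below are hypothetical: exponential demand gives the increasing price elasticity the record mentions, and each served customer adds maintenance cost through facility utilization.

```python
import math

# Illustrative parameters (not the paper's model)
MU = 12.0      # service rate of the facility (customers/hour)
MAINT = 60.0   # maintenance cost rate per unit of utilization

def demand(p):
    """Arrival rate at price p; elasticity 0.1*p increases with price."""
    return 10.0 * math.exp(-0.1 * p)

def profit(p):
    lam = demand(p)
    return p * lam - MAINT * (lam / MU)   # revenue minus maintenance load

# one-dimensional grid search for the profit-maximizing price
p_star = max((round(k * 0.01, 2) for k in range(1, 4001)), key=profit)
print(p_star)   # analytically p* = MAINT/MU + 10 = 15.0 for this toy model
```

    For this toy profit function the optimum is unique, mirroring the record's derivation of conditions under which a unique optimal pricing policy exists.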

  20. Less wireless costs : optimizing firms aim to cut wireless service bills

    International Nuclear Information System (INIS)

    Mahony, J.

    2006-01-01

    The Calgary-based firm Alliance is offering optimized billing to oil companies, many of which spend more than $100,000 a month on wireless services for devices such as cellular telephones, pagers and Blackberries. In particular, Alliance is focusing on cutting the cost of wireless for corporate clients by analyzing client-usage patterns and choosing the most cost-efficient rate plans offered by the telecoms. Alliance suggests that do-it-yourself optimization is too complex for the average user, given the very large choice of rate plans. Using algorithms, Alliance software goes through all the wireless service contract options from the telecoms to choose the best plan for a company's needs. Optimizers claim their clients will see significant savings on wireless, in the order of 20 to 50 per cent. This article presented a brief case history of a successful optimization plan for Nabors Canada LP. Alliance allows its clients to view their billing information on their web-based server. Call records can be viewed by device or company division. 1 ref., 1 fig
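
    The core of this kind of rate-plan optimization is a minimization over published tariffs given observed usage. The plans and usage figures below are hypothetical, and real optimizers such as Alliance's handle far more tariff dimensions (data, roaming, pooled minutes) than this sketch does.

```python
import math

# Hypothetical rate plans: (name, monthly fee $, included minutes, $/overage minute)
PLANS = [
    ("basic",     20.0, 100,      0.40),
    ("plus",      35.0, 300,      0.30),
    ("unlimited", 60.0, math.inf, 0.00),
]

def monthly_cost(plan, minutes_used):
    name, fee, included, overage = plan
    return fee + max(0, minutes_used - included) * overage

def best_plan(minutes_used):
    """Cheapest plan for a device, given its observed usage pattern."""
    return min(PLANS, key=lambda pl: monthly_cost(pl, minutes_used))

print(best_plan(250)[0])   # "plus": ~35 vs ~80 on basic and 60 on unlimited
```

    Run per device against each month's call records, this already captures the 20-to-50-per-cent savings mechanism the article describes: heavy users sitting on small plans, and light users on large ones.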

  1. Genetic algorithm for project time-cost optimization in fuzzy environment

    Directory of Open Access Journals (Sweden)

    Khan Md. Ariful Haque

    2012-12-01

    Purpose: The aim of this research is to develop a more realistic approach to solve the project time-cost optimization problem under uncertain conditions, with fuzzy time periods. Design/methodology/approach: Deterministic models for time-cost optimization are never efficient considering various uncertainty factors. To make such problems realistic, triangular fuzzy numbers and the concept of the α-cut method in fuzzy logic theory are employed to model the problem. Because of the NP-hard nature of the project scheduling problem, a Genetic Algorithm (GA) has been used as a searching tool. Finally, Dev-C++ 4.9.9.2 has been used to code this solver. Findings: The solution has been performed under different combinations of GA parameters and, after result analysis, optimum values of those parameters have been found for the best solution. Research limitations/implications: For demonstration of the application of the developed algorithm, a project on a new product (a pre-paid electric meter) under government financing has been chosen as a real case. The algorithm is developed under some assumptions. Practical implications: The proposed model leads decision makers to choose the desired solution under different risk levels. Originality/value: Reports reveal that project optimization problems have never been solved under multiple uncertainty conditions. Here, the function has been optimized using the Genetic Algorithm search technique, with varied levels of risk and fuzzy time periods.
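
    The triangular-fuzzy-number machinery this record relies on reduces, at each confidence level α, to a crisp interval. A minimal sketch, with a hypothetical activity duration rather than anything from the paper's case study:

```python
def alpha_cut(tri, alpha):
    """Crisp interval of a triangular fuzzy number (a, m, b) at level alpha.

    alpha = 0 returns the full support (highest uncertainty / risk);
    alpha = 1 collapses to the most-likely value m.
    """
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

# hypothetical fuzzy activity duration: at least 4, most likely 6, at most 9 days
duration = (4.0, 6.0, 9.0)
print(alpha_cut(duration, 0.0))   # (4.0, 9.0)
print(alpha_cut(duration, 0.5))   # (5.0, 7.5)
print(alpha_cut(duration, 1.0))   # (6.0, 6.0)
```

    In the record's setup, the GA then searches schedules whose durations are these α-cut intervals, so the decision maker's chosen α plays the role of a risk level.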

  2. Towards an Automatic Parameter-Tuning Framework for Cost Optimization on Video Encoding Cloud

    Directory of Open Access Journals (Sweden)

    Xiaowei Li

    2012-01-01

    The emergence of cloud encoding services allows many content owners, such as online video vendors, to transcode their digital videos without infrastructure setup. Such service providers charge customers only for their resource consumption. For both the service provider and customers, lowering the resource consumption while maintaining the quality is valuable and desirable. Thus, choosing a cost-effective encoding parameter configuration is essential and challenging, due to the tradeoff between bitrate, encoding speed, and resulting quality. In this paper, we explore the feasibility of an automatic parameter-tuning framework, based on which the above objective can be achieved. We introduce a simple service model, which combines the bitrate and encoding speed into a single value: encoding cost. Then, we conduct an empirical study to examine the relationship between the encoding cost and various parameter settings. Our experiment is based on the one-pass Constant Rate Factor method in x264, which can achieve relatively stable perceptive quality, and we vary each parameter we choose to observe how the encoding cost changes. The experiment results show that the tested parameters can be independently tuned to minimize the encoding cost, which makes the automatic parameter-tuning framework feasible and promising for optimizing the cost on a video encoding cloud.

  3. Some considerations on cost-benefit analysis in the optimization of radiation protection

    International Nuclear Information System (INIS)

    Doi, Masahiro; Nakashima, Yoshiyuki.

    1988-01-01

    To carry out cost-benefit analysis in the optimization of radiation protection, we first have to overcome a paradoxical ethical problem: how to convert radiological detriments into monetary value. Radiological detriments comprise not only the objective health detriment (the alpha-detriment) but also subjective non-health detriments due to psychological stress over radiological risks (beta-detriments). Nevertheless, the problem of cost-benefit analysis cannot be neglected, given that protection costs tend to be cut back like other basic production costs from a managerial point of view. The authors propose the following two situations concerning the treatment of radiological detriments in decision-making processes for the optimization of radiation protection: (1) since protection decision-making processes for workers are part of the total safety planning of the facility of interest, beta-detriments for workers should be discussed and determined in labour-management negotiations; (2) in the case of the public, subjective non-health detriments arise from the gap between radiation risks and radiation risk perception, which can be clarified by social research techniques. In addition, this study clarifies criteria for planning social research on beta-detriments and constructs a theoretical model designed for this purpose. (author)

  4. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.
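
    The Pareto domination relationship at the heart of this approach is easy to state in code. The sketch below filters hypothetical (feature-cost, classification-error) pairs down to the nondominated set; it is only the domination test, not the PSO search, encoding, archive, or crowding-distance machinery the paper adds around it.

```python
def dominates(u, v):
    """True if u is no worse than v in every objective and strictly better in one
    (both objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    """Nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (feature cost, classification error) pairs for four subsets
subsets = [(1, 5), (2, 3), (3, 4), (4, 1)]
print(pareto_front(subsets))   # (3, 4) drops out: dominated by (2, 3)
```

    Each surviving point is a feature subset a decision-maker might legitimately prefer, which is why the algorithm returns the whole front rather than a single solution.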

  5. Cost-Optimal Pathways to 75% Fuel Reduction in Remote Alaskan Villages: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, Travis; Cutler, Dylan; Hirsch, Brian; Olis, Dan; Anderson, Kate

    2015-10-28

    There are thousands of isolated, diesel-powered microgrids that deliver energy to remote communities around the world at very high energy costs. The Remote Communities Renewable Energy program aims to help these communities reduce their fuel consumption and lower their energy costs through the use of high penetration renewable energy. As part of this program, the REopt modeling platform for energy system integration and optimization was used to analyze cost-optimal pathways toward achieving a combined 75% reduction in diesel fuel and fuel oil consumption in a select Alaskan village. In addition to the existing diesel generator and fuel oil heating technologies, the model was able to select from among wind, battery storage, and dispatchable electric heaters to meet the electrical and thermal loads. The model results indicate that while 75% fuel reduction appears to be technically feasible it may not be economically viable at this time. When the fuel reduction target was relaxed, the results indicate that by installing high-penetration renewable energy, the community could lower their energy costs by 21% while still reducing their fuel consumption by 54%.

  6. Interstellar turbulence and shock waves

    International Nuclear Information System (INIS)

    Bykov, A.M.

    1982-01-01

    Random deflections of shock fronts propagated through the turbulent interstellar medium can produce the strong electron-density fluctuations on scales l ≳ 10¹³ cm inferred from pulsar radio scintillations. The development of turbulence in the hot-phase ISM is discussed

  7. Stardust Interstellar Preliminary Examination (ISPE)

    Science.gov (United States)

    Westphal, A. J.; Allen, C.; Bajt, S.; Basset, R.; Bastien, R.; Bechtel, H.; Bleuet, P.; Borg, J.; Brenker F.; Bridges, J.

    2009-01-01

    In January 2006 the Stardust sample return capsule returned to Earth bearing the first solid samples from a primitive solar system body, Comet 81P/Wild 2, and a collector dedicated to the capture and return of contemporary interstellar dust. Both collectors were approximately 0.1 m² in area and were composed of aerogel tiles (85% of the collecting area) and aluminum foils. The Stardust Interstellar Dust Collector (SIDC) was exposed to the interstellar dust stream for a total exposure factor of 20 m²·day during two periods before the cometary encounter. The Stardust Interstellar Preliminary Examination (ISPE) is a three-year effort to characterize the collection using nondestructive techniques. The ISPE consists of six interdependent projects: (1) candidate identification through automated digital microscopy and a massively distributed, calibrated search; (2) candidate extraction and photodocumentation; (3) characterization of candidates through synchrotron-based Fourier Transform Infrared Spectroscopy (FTIR), Scanning X-Ray Fluorescence Microscopy (SXRF), and Scanning Transmission X-ray Microscopy (STXM); (4) search for and analysis of craters in foils through FESEM scanning, Auger Spectroscopy, and synchrotron-based Photoemission Electron Microscopy (PEEM); (5) modeling of interstellar dust transport in the solar system; (6) laboratory simulations of hypervelocity dust impacts into the collecting media.

  8. Magnetite and the interstellar medium

    International Nuclear Information System (INIS)

    Landaberry, S.C.; Magalhaes, A.M.

    1976-01-01

    Recent observations concerning interstellar circular polarization are explained by a simple two-cloud model using magnetite (Fe₃O₄) grains as polarizing agents. Three stars covering a wide range of linear polarization spectral shapes were selected. Reasonably low column densities are required in order to interpret polarization data

  9. Radioiodine (I-131) treatment for uncomplicated hyperthyroidism: An assessment of optimal dose and cost-effectiveness

    International Nuclear Information System (INIS)

    Paul, A.K.; Rahman, H.A.; Jahan, N.

    2002-01-01

    Aim: Radioiodine (I-131) is increasingly being considered for the treatment of hyperthyroidism, but there is no general agreement on the initial dose. To determine the cost-effectiveness and optimal dose of I-131 to cure the disease, we prospectively studied the outcome of radioiodine therapy in 423 patients. Material and Methods: A fixed dose of 6, 8, 10, 12 or 15 mCi of I-131 was administered to each patient according to thyroid gland size. Individuals with multinodular goitre or an autonomous toxic nodule were excluded from this study. Patients were classified as cured if their clinical and biochemical status was either euthyroid or hypothyroid at one year without further treatment by antithyroid drugs or radioiodine. Costs were assessed by analyzing the total cost of care, including office visits, laboratory testing, radioiodine treatment, average conveyance and income loss of patient and attendant, and thyroxine replacement, for a period of 2 years from the day of I-131 administration. Results: The results showed a progressive increase of the cure rate across the doses of 6, 8 and 10 mCi, at 67%, 76.5% and 85.7% respectively, whereas the cure rate for the doses of 12 and 15 mCi was 87.9% and 88.8% respectively. Cure was directly related to the dose between 6 and 10 mCi, but at higher doses the cure rate increased only marginally, at the expense of increased total body radiation. There was little variation in total costs, although they were higher for low-dose therapy; the cost ratio between the 6 mCi regimen and the 10 mCi regimen was 1.04:1. Conclusion: We conclude that an initial dose of 10 mCi of I-131 may be optimal for curing hyperthyroidism while also limiting total costs.

  10. Physical processes in the interstellar medium

    CERN Document Server

    Spitzer, Lyman

    2008-01-01

    Physical Processes in the Interstellar Medium discusses the nature of interstellar matter, with a strong emphasis on basic physical principles, and summarizes the present state of knowledge about the interstellar medium by providing the latest observational data. Physics and chemistry of the interstellar medium are treated, with frequent references to observational results. The overall equilibrium and dynamical state of the interstellar gas are described, with discussions of explosions produced by star birth and star death and the initial phases of cloud collapse leading to star formation.

  11. PAHs in Translucent Interstellar Clouds

    Science.gov (United States)

    Salama, Farid; Galazutdinov, G.; Krelowski, J.; Biennier, L.; Beletsky, Y.; Song, I.

    2011-05-01

    We discuss the proposal relating the origin of some of the diffuse interstellar bands (DIBs) to neutral polycyclic aromatic hydrocarbons (PAHs) present in translucent interstellar clouds. The spectra of several cold, isolated gas-phase PAHs have been measured in the laboratory under experimental conditions that mimic interstellar conditions and are compared with an extensive set of astronomical spectra of reddened, early-type stars. This comparison provides, for the first time, accurate upper limits for the abundances of specific PAH molecules along specific lines of sight, something that is not attainable from IR observations alone. The comparison of these unique laboratory data with high-resolution, high-S/N-ratio astronomical observations leads to two major findings: (1) a finding specific to the individual molecules probed in this study, leading to the clear and unambiguous conclusion that the abundance of these specific neutral PAHs must be very low in the individual translucent interstellar clouds probed in this survey (PAH features remain below the level of detection); and (2) a general finding that neutral PAHs exhibit intrinsic band profiles similar to the profiles of the narrow DIBs, indicating that the carriers of the narrow DIBs must have similar molecular structures and characteristics. This study is the first quantitative survey of neutral PAHs in the optical range, and it opens the way for unambiguous quantitative searches for PAHs in a variety of interstellar and circumstellar environments. // Reference: F. Salama et al. (2011) ApJ. 728 (1), 154 // Acknowledgements: F.S. acknowledges the support of the NASA's Space Mission Directorate APRA Program. J.K. acknowledges the financial support of the Polish State (grant N203 012 32/1550). The authors are deeply grateful to the ESO archive as well as to the ESO staff members for their active support.

  12. Life span and reproductive cost explain interspecific variation in the optimal onset of reproduction.

    Science.gov (United States)

    Mourocq, Emeline; Bize, Pierre; Bouwhuis, Sandra; Bradley, Russell; Charmantier, Anne; de la Cruz, Carlos; Drobniak, Szymon M; Espie, Richard H M; Herényi, Márton; Hötker, Hermann; Krüger, Oliver; Marzluff, John; Møller, Anders P; Nakagawa, Shinichi; Phillips, Richard A; Radford, Andrew N; Roulin, Alexandre; Török, János; Valencia, Juliana; van de Pol, Martijn; Warkentin, Ian G; Winney, Isabel S; Wood, Andrew G; Griesser, Michael

    2016-02-01

    Fitness can be profoundly influenced by the age at first reproduction (AFR), but to date the AFR-fitness relationship has only been investigated intraspecifically. Here, we investigated the relationship between AFR and average lifetime reproductive success (LRS) across 34 bird species. We assessed differences in the deviation of the Optimal AFR (i.e., the species-specific AFR associated with the highest LRS) from the age at sexual maturity, considering potential effects of life history as well as social and ecological factors. Most individuals adopted the species-specific Optimal AFR, and both the mean and Optimal AFR of species correlated positively with life span. Interspecific deviations of the Optimal AFR were associated with indices reflecting a change in LRS or survival as a function of AFR: a delayed AFR was beneficial in species where early AFR was associated with a decrease in subsequent survival or reproductive output. Overall, our results suggest that a delayed onset of reproduction beyond maturity is an optimal strategy explained by a long life span and costs of early reproduction. By providing the first empirical confirmations of key predictions of life-history theory across species, this study contributes to a better understanding of life-history evolution. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  13. The cost of leg forces in bipedal locomotion: a simple optimization study.

    Directory of Open Access Journals (Sweden)

    John R Rebula

    Simple optimization models show that bipedal locomotion may largely be governed by the mechanical work performed by the legs, minimization of which can automatically discover walking and running gaits. Work minimization can reproduce broad aspects of human ground reaction forces, such as a double-peaked profile for walking and a single peak for running, but the predicted peaks are unrealistically high and impulsive compared to the much smoother forces produced by humans. The smoothness might be explained better by a cost for the force rather than the work produced by the legs, but it is unclear what features of force might be most relevant. We therefore tested a generalized force cost that can penalize force amplitude or its n-th time derivative, raised to the p-th power (or p-norm), across a variety of combinations of n and p. A simple model shows that this generalized force cost only produces smoother, human-like forces if it penalizes the rate rather than the amplitude of force production, and only in combination with a work cost. Such a combined objective reproduces the characteristic profiles of human walking (R² = 0.96) and running (R² = 0.92), more so than minimization of either work or force amplitude alone (R² = -0.79 and R² = 0.22, respectively, for walking). Humans might find it preferable to avoid rapid force production, which may be mechanically and physiologically costly.
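
    The generalized force cost this record describes — the p-th power of the n-th time derivative of force, integrated over the gait — can be sketched with finite differences. The two force profiles below are synthetic stand-ins for ground reaction forces, not the study's model output: a smooth hump and an impulsive spike with equal total impulse.

```python
import numpy as np

def force_cost(F, dt, n=1, p=2):
    """Approximate integral of |d^n F / dt^n|^p via n-th finite differences."""
    d = np.diff(F, n=n) / dt**n
    return float(np.sum(np.abs(d) ** p) * dt)

t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
smooth = np.sin(np.pi * t)                          # smooth single-hump force
spike = np.where(np.abs(t - 0.5) < 0.05, 1.0, 0.0)  # impulsive, work-model-like peak
spike *= np.sum(smooth) / np.sum(spike)             # rescale to equal total impulse

# Penalizing the *rate* of force (n=1) strongly favors the smooth profile,
# even though both profiles deliver the same impulse.
print(force_cost(smooth, dt) < force_cost(spike, dt))
```

    This mirrors the record's conclusion in miniature: an amplitude penalty (n = 0) cannot distinguish these profiles nearly as sharply as a rate penalty does, which is why only the rate cost yields smooth, human-like forces.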

  14. Near Zero Energy House (NZEH) Design Optimization to Improve Life Cycle Cost Performance Using Genetic Algorithm

    Science.gov (United States)

    Latief, Y.; Berawi, M. A.; Koesalamwardi, A. B.; Supriadi, L. S. R.

    2018-03-01

A Near Zero Energy House (NZEH) is a house that achieves energy efficiency through renewable energy technologies and passive house design. Currently, NZEH costs are quite high due to the expensive equipment and materials for solar panels, insulation, fenestration and other renewable energy technology. Therefore, a study to obtain the optimum design of a NZEH is necessary, the aim being an economical life cycle cost performance. One optimization method that can be utilized is the Genetic Algorithm, which obtains the optimum design from combinations of NZEH design variables. This paper discusses a study to identify the NZEH design that provides optimum life cycle cost performance using a Genetic Algorithm. In this study, an experiment through extensive design simulations of a one-level house model was conducted. As a result, the study provides the optimum combination of NZEH design variables, namely building orientation, window-to-wall ratio, and glazing type, that maximizes the energy generated by the photovoltaic panels and thus supports an optimum life cycle cost performance of the house.
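A minimal sketch of a genetic-algorithm search over discrete NZEH design variables. The variable pools and the life-cycle-cost surrogate `lcc` are invented for illustration; the actual study evaluated candidate designs through building-energy simulation, not a closed-form function.

```python
import random

random.seed(42)

ORIENT = [0, 90, 180, 270]  # building orientation (degrees from north)
WWR = [0.2, 0.3, 0.4]       # window-to-wall ratio
GLAZE = [0, 1, 2]           # glazing type index

def lcc(d):
    """Hypothetical life-cycle cost surrogate (illustrative only)."""
    o, w, g = d
    energy = 100 - 20 * (o == 180) - 30 * w + 5 * g  # cheaper when south-facing
    capital = 50 + 40 * w + 10 * g
    return energy + capital

def random_design():
    return (random.choice(ORIENT), random.choice(WWR), random.choice(GLAZE))

def mutate(d):
    pools = (ORIENT, WWR, GLAZE)
    i = random.randrange(3)
    d = list(d)
    d[i] = random.choice(pools[i])  # re-draw one gene
    return tuple(d)

def crossover(a, b):
    return tuple(a[i] if random.random() < 0.5 else b[i] for i in range(3))

def ga(pop_size=10, gens=20):
    """Elitist GA: keep the best half, refill with mutated crossovers."""
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lcc)
        elite = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=lcc)

best = ga()
```

On this tiny 36-point design space the GA should recover the same optimum a brute-force sweep finds.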

  15. The optimal vertical structure in the electricity industry when the incumbent has a cost advantage

    International Nuclear Information System (INIS)

    Kurakawa, Yukihide

    2013-01-01

    This paper studies how the vertical structure of the electricity industry affects the social welfare when the incumbent has a cost advantage in generation relative to the entrants. The model consists of a generation sector and a transmission sector. In the generation sector the incumbent and entrants compete in a Cournot fashion taking as given the access charge to the transmission network set in advance by the regulator to maximize the social welfare. Two vertical structures, integration and separation, are considered. Under vertical separation the transmission network is established as an organization independent of every generator, whereas under vertical integration it is a part of the incumbent's organization. The optimal vertical structure is shown to depend on the number of entrants. If the number of entrants is smaller than a certain threshold, vertical separation is superior in welfare to vertical integration, and vice versa. This is because the choice of vertical structure produces a trade-off in the effects on competition promotion and production efficiency. If a break-even constraint is imposed in the transmission sector, however, vertical integration is shown to be always superior in welfare. - Highlights: • We examine the optimal vertical structure in the electricity industry. • We model a generation sector in which the incumbent has a cost advantage. • A trade-off between production efficiency and competition promotion occurs. • The optimal vertical structure depends on the number of entrants. • Vertical integration is always superior if a break-even constraint is imposed
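The generation-sector competition described above is Cournot with asymmetric constant marginal costs; with linear inverse demand P = a − bQ the equilibrium has a closed form. A sketch under invented numbers (the paper's model additionally includes the regulated access charge, omitted here for brevity):

```python
def cournot(a, b, costs):
    """Cournot-Nash outputs for inverse demand P = a - b*Q and constant
    marginal costs c_i: Q = sum_j (a - c_j) / (b*(n+1)), q_i = (a - c_i)/b - Q."""
    n = len(costs)
    Q = sum(a - c for c in costs) / (b * (n + 1))
    return [(a - c) / b - Q for c in costs]

# incumbent with a cost advantage (c = 10) versus two symmetric entrants (c = 20)
costs = [10, 20, 20]
q = cournot(a=100, b=1, costs=costs)
P = 100 - sum(q)  # equilibrium price; the low-cost incumbent produces the largest share
```

Each firm's first-order condition a − bQ − b·q_i = c_i holds at the returned quantities, which is a quick correctness check.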

  16. An Integrated Modeling Approach to Evaluate and Optimize Data Center Sustainability, Dependability and Cost

    Directory of Open Access Journals (Sweden)

    Gustavo Callou

    2014-01-01

Data centers have evolved dramatically in recent years, due to the advent of social networking services, e-commerce and cloud computing. Designers face conflicting requirements: the high availability levels demanded versus low sustainability impact and cost. Approaches that evaluate and optimize these requirements are essential to support designers of data center architectures. Our work proposes an integrated approach to estimate and optimize these issues with the support of a developed environment, Mercury. Mercury is a tool for dependability, performance and energy flow evaluation. The tool supports reliability block diagrams (RBD), stochastic Petri nets (SPNs), continuous-time Markov chains (CTMC) and energy flow (EFM) models. The EFM verifies the energy flow on data center architectures, taking into account the energy efficiency and power capacity that each device can provide (assuming power systems) or extract (considering cooling components). The EFM also estimates the sustainability impact and cost issues of data center architectures. Additionally, a methodology is considered to support the modeling, evaluation and optimization processes. Two case studies are presented to illustrate the adopted methodology on data center power systems.
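For series/parallel structures, the RBD part of such an evaluation reduces to products of availabilities. A minimal sketch (the device availabilities are invented, and Mercury's models are far richer, covering SPN/CTMC and energy-flow analysis as well):

```python
def series(avails):
    """Series RBD: the system is up only if every block is up."""
    p = 1.0
    for a in avails:
        p *= a
    return p

def parallel(avails):
    """Parallel RBD: the system is down only if every block is down."""
    p = 1.0
    for a in avails:
        p *= (1.0 - a)
    return 1.0 - p

# hypothetical power path: one UPS (99.9%) feeding two redundant PDUs (99% each)
A = series([0.999, parallel([0.99, 0.99])])
```

Redundancy lifts the PDU stage to 99.99%, so the UPS dominates the combined unavailability.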

  17. Life-cycle cost as basis to optimize waste collection in space and time: A methodology for obtaining a detailed cost breakdown structure.

    Science.gov (United States)

    Sousa, Vitor; Dias-Ferreira, Celia; Vaz, João M; Meireles, Inês

    2018-05-01

Extensive research has been carried out on waste collection costs, mainly to differentiate the costs of distinct waste streams and for spatial optimization of waste collection services (e.g. routes, number, and location of waste facilities). However, waste collection managers also face the challenge of optimizing assets in time, for instance deciding when to replace and how to maintain, or which technological solution to adopt. These issues require more detailed knowledge about the waste collection services' cost breakdown structure. The present research adapts the methodology for buildings' life-cycle cost (LCC) analysis, detailed in ISO 15686-5:2008, to waste collection assets. The proposed methodology is then applied to the waste collection assets owned and operated by a real municipality in Portugal (Cascais Ambiente - EMAC). The goal is to highlight the potential of the LCC tool in providing a baseline for the time optimization of the waste collection service and assets, namely assisting decisions regarding equipment operation and replacement.
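The core of an LCC breakdown is present-value aggregation of acquisition, operation, and replacement costs. A sketch in the spirit of ISO 15686-5 (all figures are invented, and the standard defines many more cost categories such as maintenance, disposal, and residual value):

```python
def lcc_npv(capex, annual_opex, horizon_years, rate, replacements=()):
    """Discounted life-cycle cost: acquisition + annual O&M + scheduled
    replacements. `replacements` is a sequence of (year, cost) pairs."""
    pv = capex
    for t in range(1, horizon_years + 1):
        pv += annual_opex / (1 + rate) ** t          # discounted yearly O&M
    for year, cost in replacements:
        pv += cost / (1 + rate) ** year              # discounted one-off events
    return pv

# hypothetical collection truck: purchase, fuel + maintenance, mid-life overhaul
total = lcc_npv(capex=150_000, annual_opex=30_000, horizon_years=10,
                rate=0.05, replacements=[(5, 40_000)])
```

With a zero discount rate the function degenerates to a plain sum, which makes a convenient sanity check.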

  18. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Directory of Open Access Journals (Sweden)

    Shigang Zhang

    2015-10-01

Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics.
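One common building block for sequential strategy generation is choosing, at each step, the test with the best information gain per unit cost; folding a one-time placement cost into the denominator gives a rough flavor of the placement-aware problem. A toy sketch (fault priors, outcome partitions, and costs are all invented; the paper's two algorithms are considerably more sophisticated):

```python
import math

def entropy(probs):
    """Shannon entropy of a sub-distribution, normalized by its own mass."""
    tot = sum(probs)
    return -sum(p / tot * math.log2(p / tot) for p in probs if p > 0)

def expected_gain(faults, outcomes):
    """Mutual information between the fault state and a test's outcome."""
    h0 = entropy([p for _, p in faults])
    total = sum(p for _, p in faults)
    h_cond = 0.0
    for group in outcomes:
        ps = [p for f, p in faults if f in group]
        if ps:
            h_cond += sum(ps) / total * entropy(ps)
    return h0 - h_cond

def pick_test(faults, tests):
    """Greedy: maximize information gain per unit (execution + placement) cost."""
    return max(tests, key=lambda t: expected_gain(faults, t["outcomes"]) /
                                    (t["exec_cost"] + t["placement_cost"]))

faults = [("f1", 0.25), ("f2", 0.25), ("f3", 0.25), ("f4", 0.25)]
tests = [
    {"name": "t_cheap", "exec_cost": 1, "placement_cost": 1,
     "outcomes": [{"f1", "f2"}, {"f3", "f4"}]},      # 1 bit for total cost 2
    {"name": "t_costly", "exec_cost": 1, "placement_cost": 9,
     "outcomes": [{"f1"}, {"f2"}, {"f3"}, {"f4"}]},  # 2 bits for total cost 10
]
best = pick_test(faults, tests)
```

Here the fully discriminating test loses because its placement cost outweighs the extra bit of information, which is exactly the trade-off placement-aware generation must capture.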

  19. Optimal Sequential Diagnostic Strategy Generation Considering Test Placement Cost for Multimode Systems

    Science.gov (United States)

    Zhang, Shigang; Song, Lijun; Zhang, Wei; Hu, Zheng; Yang, Yongmin

    2015-01-01

    Sequential fault diagnosis is an approach that realizes fault isolation by executing the optimal test step by step. The strategy used, i.e., the sequential diagnostic strategy, has great influence on diagnostic accuracy and cost. Optimal sequential diagnostic strategy generation is an important step in the process of diagnosis system construction, which has been studied extensively in the literature. However, previous algorithms either are designed for single mode systems or do not consider test placement cost. They are not suitable to solve the sequential diagnostic strategy generation problem considering test placement cost for multimode systems. Therefore, this problem is studied in this paper. A formulation is presented. Two algorithms are proposed, one of which is realized by system transformation and the other is newly designed. Extensive simulations are carried out to test the effectiveness of the algorithms. A real-world system is also presented. All the results show that both of them have the ability to solve the diagnostic strategy generation problem, and they have different characteristics. PMID:26457709

  20. Optimal climate policy is a utopia. From quantitative to qualitative cost-benefit analysis

    International Nuclear Information System (INIS)

    Van den Bergh, Jeroen C.J.M.

    2004-01-01

    The dominance of quantitative cost-benefit analysis (CBA) and optimality concepts in the economic analysis of climate policy is criticised. Among others, it is argued to be based in a misplaced interpretation of policy for a complex climate-economy system as being analogous to individual inter-temporal welfare optimisation. The transfer of quantitative CBA and optimality concepts reflects an overly ambitious approach that does more harm than good. An alternative approach is to focus the attention on extreme events, structural change and complexity. It is argued that a qualitative rather than a quantitative CBA that takes account of these aspects can support the adoption of a minimax regret approach or precautionary principle in climate policy. This means: implement stringent GHG reduction policies as soon as possible

  1. Optimization Method of a Low Cost, High Performance Ceramic Proppant by Orthogonal Experimental Design

    Science.gov (United States)

    Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.

    2017-09-01

This study focused on an optimization method for a ceramic proppant with both low cost and high performance that meets the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). An orthogonal experimental design of L9(3⁴) was employed to study the significance sequence of three factors: the weight ratio of white clay to bauxite, the dolomite content and the sintering temperature. For the crush resistance, both the range analysis and the variance analysis indicated that the optimal experimental condition was a weight ratio of white clay to bauxite of 3/7, a dolomite content of 3 wt.% and a temperature of 1350°C. For the bulk density, the most important factor was the sintering temperature, followed by the dolomite content, and then the ratio of white clay to bauxite.
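Range analysis on an L9(3⁴) array works by averaging the response at each level of each factor and ranking factors by the spread R of those level means. A sketch with invented responses (not the paper's data), arranged so that temperature dominates, as the abstract reports for bulk density:

```python
# L9(3^4) orthogonal array, levels coded 0/1/2
# (columns: clay/bauxite ratio, dolomite content, sintering temperature, unused)
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
# hypothetical responses for the nine runs (illustrative numbers only)
y = [9.1, 8.2, 7.0, 8.8, 7.5, 8.9, 7.9, 9.4, 8.1]

def range_analysis(array, responses, factor):
    """Mean response at each level of one factor, and the range R = max - min."""
    means = []
    for level in (0, 1, 2):
        vals = [responses[i] for i, row in enumerate(array) if row[factor] == level]
        means.append(sum(vals) / len(vals))
    return means, max(means) - min(means)

# rank factors by R, largest first: the leading factor is the most significant
ranks = sorted(range(3), key=lambda f: -range_analysis(L9, y, f)[1])
```

Because each level of each factor appears exactly three times in the orthogonal array, the level means are balanced comparisons despite only nine runs.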

  2. Optimization of PHEV Power Split Gear Ratio to Minimize Fuel Consumption and Operation Cost

    Science.gov (United States)

    Li, Yanhe

    A Plug-in Hybrid Electric Vehicle (PHEV) is a vehicle powered by a combination of an internal combustion engine and an electric motor with a battery pack. The battery pack can be charged by plugging the vehicle to the electric grid and from using excess engine power. The research activity performed in this thesis focused on the development of an innovative optimization approach of PHEV Power Split Device (PSD) gear ratio with the aim to minimize the vehicle operation costs. Three research activity lines have been followed: • Activity 1: The PHEV control strategy optimization by using the Dynamic Programming (DP) and the development of PHEV rule-based control strategy based on the DP results. • Activity 2: The PHEV rule-based control strategy parameter optimization by using the Non-dominated Sorting Genetic Algorithm (NSGA-II). • Activity 3: The comprehensive analysis of the single mode PHEV architecture to offer the innovative approach to optimize the PHEV PSD gear ratio.
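Activity 1's use of Dynamic Programming can be illustrated with a toy energy-management problem: decide at each drive step how much of the demand the battery covers, given that grid-charged battery energy is cheaper per unit than engine fuel. All prices, demands and the integer state grid are invented; the thesis operates on a full vehicle model.

```python
from functools import lru_cache

def pev_dp(demand, batt0, elec_price=0.1, fuel_price=0.3):
    """Dynamic programming over battery state of charge (integer kWh grid).
    Each step's demand is met by battery energy (pre-paid at the grid price)
    and/or engine fuel; returns the minimum total operating cost."""
    @lru_cache(maxsize=None)
    def cost_to_go(step, soc):
        if step == len(demand):
            return 0.0
        best = float("inf")
        for use_batt in range(min(soc, demand[step]) + 1):
            fuel = demand[step] - use_batt
            best = min(best, fuel * fuel_price + cost_to_go(step + 1, soc - use_batt))
        return best

    return batt0 * elec_price + cost_to_go(0, batt0)
```

With fuel three times the grid price, the optimal policy drains the battery fully, and the DP cost matches the hand-computed optimum.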

  3. Optimal combinations of control strategies and cost-effective analysis for visceral leishmaniasis disease transmission.

    Directory of Open Access Journals (Sweden)

    Santanu Biswas

Visceral leishmaniasis (VL) is a deadly neglected tropical disease that poses a serious problem in various countries all over the world. Various intervention strategies have failed to control the spread of this disease due to parasite drug resistance and the resistance of sandfly vectors to insecticide sprays. Policy makers therefore need to develop novel strategies or resort to a combination of multiple intervention strategies to control the spread of the disease. To address this issue, we propose an extensive SIR-type model for anthroponotic visceral leishmaniasis transmission with seasonal fluctuations modeled in the form of a periodic sandfly biting rate. Fitting the model to real data reported in South Sudan, we estimate the model parameters and compare the model predictions with known VL cases. Using optimal control theory, we study the effects of popular control strategies, namely drug-based treatment of symptomatic and PKDL-infected individuals, insecticide-treated bednets and spraying of insecticides, on the dynamics of infected human and vector populations. We find that the strategies remain ineffective in curbing the disease individually, as opposed to the use of optimal combinations of the mentioned strategies. Testing the model for different optimal combinations while considering periodic seasonal fluctuations, we find that the optimal combination of treatment of individuals and insecticide sprays performs well in controlling the disease for the time period of intervention. A cost-effectiveness analysis identifies that the same strategy also proves efficacious and cost-effective. Finally, we suggest that our model would be helpful for policy makers to predict the best intervention strategies for specific time periods and their appropriate implementation for the elimination of visceral leishmaniasis.

  4. Optimal Willingness to Supply Wholesale Electricity Under Asymmetric Linearized Marginal Costs

    Directory of Open Access Journals (Sweden)

    David Hudgins

    2012-01-01

This analysis derives the profit-maximizing willingness to supply functions for single-plant and multi-plant wholesale electricity suppliers that all incur linear marginal costs. The optimal strategy must result in linear residual demand functions in the absence of capacity constraints. This necessarily leads to a linear pricing rule structure that can be used by firm managers to construct their offer curves and to serve as a benchmark to evaluate firm profit-maximizing behavior. The procedure derives the cost functions and the residual demand curves for merged or multi-plant generators, and uses these to construct the individual generator plant offer curves for a multi-plant firm.

  5. Systems and methods for energy cost optimization in a building system

    Energy Technology Data Exchange (ETDEWEB)

    Turney, Robert D.; Wenzel, Michael J.

    2016-09-06

    Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
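The cost structure described, time-varying energy prices plus a demand charge active only in certain pricing periods, can be illustrated by brute-force search over a tiny horizon. The prices, demand rate, and discrete power levels below are invented, and the patent uses cascaded model predictive control rather than enumeration:

```python
from itertools import product

def schedule_cost(powers, prices, demand_rate, peak_window):
    """Energy cost plus a demand charge on the peak draw inside the on-peak
    window; windows outside `peak_window` behave like masked (inactive) periods."""
    energy = sum(p * pr for p, pr in zip(powers, prices))
    peak = max((powers[t] for t in peak_window), default=0)
    return energy + demand_rate * peak

def best_schedule(total_energy, horizon, prices, demand_rate, peak_window, levels):
    """Brute force: meet `total_energy` over the horizon at minimum cost."""
    best = None
    for powers in product(levels, repeat=horizon):
        if sum(powers) != total_energy:
            continue
        c = schedule_cost(powers, prices, demand_rate, peak_window)
        if best is None or c < best[0]:
            best = (c, powers)
    return best

# 4 steps, steps 2-3 are on-peak and expensive; shift all load off-peak
cost, powers = best_schedule(total_energy=6, horizon=4, prices=[1, 1, 3, 3],
                             demand_rate=2, peak_window={2, 3}, levels=range(4))
```

The optimizer defers all consumption to the cheap window, zeroing both the on-peak energy cost and the demand charge, which is the qualitative behavior the patent's controller targets.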

  6. Efficiency-optimized low-cost TDPAC spectrometer using a versatile routing/coincidence unit

    International Nuclear Information System (INIS)

    Renteria, M.; Bibiloni, A. G.; Darriba, G. N.; Errico, L. A.; Munoz, E. L.; Richard, D.; Runco, J.

    2008-01-01

A highly efficient, reliable, and low-cost γ-γ TDPAC spectrometer, PACAr, optimized for ¹⁸¹Hf-implanted low-activity samples, is presented. A versatile EPROM-based routing/coincidence unit was developed and implemented to be used with the memory-card-based multichannel analyzer hosted in a personal computer. The excellent energy resolution and the very good overall resolution and efficiency of PACAr are analyzed and compared with advanced and already tested fast-fast and slow-fast PAC spectrometers.

  7. Time- and Cost-Optimal Parallel Algorithms for the Dominance and Visibility Graphs

    Directory of Open Access Journals (Sweden)

    D. Bhagavathi

    1996-01-01

The compaction step of integrated circuit design motivates associating several kinds of graphs with a collection of non-overlapping rectangles in the plane. These graphs are intended to capture various visibility relations amongst the rectangles in the collection. The contribution of this paper is to propose time- and cost-optimal algorithms to construct two such graphs, namely, the dominance graph (DG, for short) and the visibility graph (VG, for short). Specifically, we show that with a collection of n non-overlapping rectangles as input, both these structures can be constructed in Θ(log n) time using n processors in the CREW model.

  8. The Optimal Pricing of Computer Software and Other Products with High Switching Costs

    OpenAIRE

    Pekka Ahtiala

    2004-01-01

    The paper studies the determinants of the optimum prices of computer programs and their upgrades. It is based on the notion that because of the human capital invested in the use of a computer program by its user, this product has high switching costs, and on the finding that pirates are responsible for generating over 80 per cent of new software sales. A model to maximize the present value of the program to the program house is constructed to determine the optimal prices of initial programs a...

  9. Climate agreements: Optimal taxation of fossil fuels and the distribution of costs and benefits across countries

    Energy Technology Data Exchange (ETDEWEB)

    Holtsmark, Bjart

    1997-12-31

This report analyses the response of governments to a climate agreement that commits them to reduce their CO₂ emissions. It develops a formula for optimal taxation of fossil fuels in open economies subject both to an emission constraint and a public budget constraint. The theory captures how national governments' behaviours are sensitive to the size of the benefits from revenue recycling and how these benefits adjust the distribution of abatement costs. The empirical part of the report illustrates the significance of the participating countries' current and potential fossil fuel taxation schemes and their roles in the fossil fuel markets. 23 refs., 11 figs., 2 tabs.

  10. Optimal dividend policies with transaction costs for a class of jump-diffusion processes

    DEFF Research Database (Denmark)

    Hunting, Martin; Paulsen, Jostein

    2013-01-01

This paper addresses the problem of finding an optimal dividend policy for a class of jump-diffusion processes. The jump component is a compound Poisson process with negative jumps, and the drift and diffusion components are assumed to satisfy some regularity and growth restrictions. Each dividend payment is reduced by a fixed and a proportional cost, meaning that if ξ is paid out by the company, the shareholders receive kξ−K, where k and K are positive. The aim is to maximize expected discounted dividends until ruin. It is proved that when the jumps belong to a certain class of light...

  11. Efficiency-optimized low-cost TDPAC spectrometer using a versatile routing/coincidence unit

    Energy Technology Data Exchange (ETDEWEB)

    Renteria, M., E-mail: renteria@fisica.unlp.edu.ar; Bibiloni, A. G.; Darriba, G. N.; Errico, L. A.; Munoz, E. L.; Richard, D.; Runco, J. [Universidad Nacional de La Plata, Departamento de Fisica, Facultad de Ciencias Exactas (Argentina)

    2008-01-15

A highly efficient, reliable, and low-cost γ-γ TDPAC spectrometer, PACAr, optimized for ¹⁸¹Hf-implanted low-activity samples, is presented. A versatile EPROM-based routing/coincidence unit was developed and implemented to be used with the memory-card-based multichannel analyzer hosted in a personal computer. The excellent energy resolution and the very good overall resolution and efficiency of PACAr are analyzed and compared with advanced and already tested fast-fast and slow-fast PAC spectrometers.

  12. Interstellar Gas Flow Vector and Temperature Determination over 5 Years of IBEX Observations

    International Nuclear Information System (INIS)

    Möbius, E; Heirtzler, D; Kucharek, H; Lee, M A; Leonard, T; Schwadron, N; Bzowski, M; Kubiak, M A; Sokół, J M; Fuselier, S A; McComas, D J; Wurz, P

    2015-01-01

The Interstellar Boundary Explorer (IBEX) observes the interstellar neutral gas flow trajectories at their perihelion in Earth's orbit every year from December through early April, when the Earth's orbital motion is into the oncoming flow. These observations have defined a narrow region of possible, but very tightly coupled, interstellar neutral flow parameters, with inflow speed, latitude, and temperature as well-defined functions of inflow longitude. The best-fit flow vector differs by ≈ 3° and is lower by ≈ 3 km/s than that obtained previously with Ulysses GAS, but the temperature is comparable. The possible coupled parameter space reaches to the previous flow vector, but only for a substantially higher temperature (by ≈ 2000 K). Along with recent pickup ion observations and including historical observations of the interstellar gas, these findings have led to a discussion of whether the interstellar gas flow into the solar system has been stable or variable over time. These intriguing possibilities call for more detailed analysis and a longer database. IBEX has accumulated observations over six interstellar flow seasons. We review key observations and refinements in the analysis, in particular towards narrowing the uncertainties in the temperature determination. We also address ongoing attempts to optimize the flow vector determination through varying the IBEX spacecraft pointing, and discuss related implications for the local interstellar cloud and its interaction with the heliosphere.

  13. Modelling interstellar extinction: Pt. 1

    International Nuclear Information System (INIS)

    Jones, A.P.

    1988-01-01

Several methods of calculating the extinction of porous silicate grains are discussed, including effective medium theories and hollow spherical shells. Porous silicate grains are shown to produce enhanced infrared, ultraviolet and far-ultraviolet extinction, and this effect can be used to reduce the abundance of carbon required to match the average interstellar extinction; matching the visual extinction, however, is rather more problematic. We have shown that the enhanced extinction at long and short wavelengths has different origins, and have explained why the visual extinction is little affected by porosity. The implications of porous grains in the interstellar medium are discussed with particular reference to surface chemistry, the polarization of starlight, and their dynamical evolution. (author)

  14. Interstellar Grains: 50 Years on

    Science.gov (United States)

    Wickramasinghe, N. C.

    Our understanding of the nature of interstellar grains has evolved considerably over the past half century with the present author and Fred Hoyle being intimately involved at several key stages of progress. The currently fashionable graphite-silicate-organic grain model has all its essential aspects unequivocally traceable to original peer-reviewed publications by the author and/or Fred Hoyle. The prevailing reluctance to accept these clear-cut priorities may be linked to our further work that argued for interstellar grains and organics to have a biological provenance -- a position perceived as heretical. The biological model, however, continues to provide a powerful unifying hypothesis for a vast amount of otherwise disconnected and disparate astronomical data.

  15. Why do interstellar grains exist

    International Nuclear Information System (INIS)

    Seab, C.G.; Hollenbach, D.J.; Mckee, C.F.; Tielens, A.G.G.M.

    1986-01-01

There exists a discrepancy between calculated destruction rates of grains in the interstellar medium and postulated sources of new grains. This problem was examined by modelling the global life cycle of grains in the galaxy. The model includes: grain destruction due to supernova shock waves; grain injection from cool stars, planetary nebulae, star formation, novae, and supernovae; grain growth by accretion in dark clouds; and a mixing scheme between phases of the interstellar medium. Grain growth in molecular clouds is considered as a mechanism for increasing the formation rate. To decrease the shock destruction rate, several new physical processes are included, such as partial vaporization effects in grain-grain collisions, breakdown of the small Larmor radius approximation for betatron acceleration, and relaxation of the steady-state shock assumption.

  16. Design Optimization and Construction of the Thyratron/PFN Based Cost Model Modulator for the NLC

    International Nuclear Information System (INIS)

    Koontz, Roland F

    1999-01-01

As design studies and various R and D efforts continue on Next Linear Collider (NLC) systems, much R and D work is being done on X-Band klystron development and on pulse modulators to drive these X-Band klystrons. A workshop on this subject was held at SLAC in June of 1998, and a follow-up workshop is scheduled at SLAC June 23-25, 1999. At the 1998 workshop, several avenues of R and D were proposed using solid state switching, induction LINAC principles, high voltage hard tubes, and a few more esoteric ideas. An optimized version of the conventional thyratron-PFN-pulse transformer modulator, for which there is extensive operating experience, is also a strong candidate for use in the NLC. Such a modulator is currently under construction for baseline demonstration purposes. The performance of this "Cost Model" modulator will be compared to other developing technologies. Important parameters including initial capital cost, operating and maintenance cost, reliability, maintainability, and power efficiency, in addition to the usual operating parameters of pulse flatness, timing and pulse-height jitter, etc., will be considered in the choice of a modulator design for the NLC. This paper updates the progress on this "Cost Model" modulator design and construction.

  17. Vehicle path-planning in three dimensions using optics analogs for optimizing visibility and energy cost

    Science.gov (United States)

    Rowe, Neil C.; Lewis, David H.

    1989-01-01

    Path planning is an important issue for space robotics. Finding safe and energy-efficient paths in the presence of obstacles and other constraints can be complex although important. High-level (large-scale) path planning for robotic vehicles was investigated in three-dimensional space with obstacles, accounting for: (1) energy costs proportional to path length; (2) turn costs where paths change trajectory abruptly; and (3) safety costs for the danger associated with traversing a particular path due to visibility or invisibility from a fixed set of observers. Paths optimal with respect to these cost factors are found. Autonomous or semi-autonomous vehicles were considered operating either in a space environment around satellites and space platforms, or aircraft, spacecraft, or smart missiles operating just above lunar and planetary surfaces. One class of applications concerns minimizing detection, as for example determining the best way to make complex modifications to a satellite without being observed by hostile sensors; another example is verifying there are no paths (holes) through a space defense system. Another class of applications concerns maximizing detection, as finding a good trajectory between mountain ranges of a planet while staying reasonably close to the surface, or finding paths for a flight between two locations that maximize the average number of triangulation points available at any time along the path.
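The energy/safety trade-off described above can be illustrated with a shortest-path search on a 3-D grid where entering an observer-visible cell incurs an extra cost. This is a Dijkstra sketch with invented weights; the paper's planner also penalizes abrupt turns, which is omitted here for brevity.

```python
import heapq

def plan(grid_size, obstacles, visible, start, goal, vis_weight=5.0):
    """Dijkstra on a 3-D grid. Edge cost = unit move (energy proxy) plus a
    penalty for entering a cell visible to the observers. Returns min cost."""
    def neighbors(p):
        x, y, z = p
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = (x + dx, y + dy, z + dz)
            if all(0 <= c < grid_size for c in q) and q not in obstacles:
                yield q

    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, p = heapq.heappop(heap)
        if p == goal:
            return d
        if d > dist.get(p, float("inf")):
            continue  # stale queue entry
        for q in neighbors(p):
            nd = d + 1.0 + (vis_weight if q in visible else 0.0)
            if nd < dist.get(q, float("inf")):
                dist[q] = nd
                heapq.heappush(heap, (nd, q))
    return None  # goal unreachable

# a single watched cell on the direct route forces a detour around it
detour_cost = plan(3, set(), {(1, 0, 0)}, (0, 0, 0), (2, 0, 0))
direct_cost = plan(3, set(), set(), (0, 0, 0), (2, 0, 0))
```

Minimizing detection corresponds to a large `vis_weight`; maximizing detection (e.g. staying near triangulation points) would simply flip the sign of the penalty.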

  18. Fermilab D-0 Experimental Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    International Nuclear Information System (INIS)

    Krstulovich, S.F.

    1987-01-01

    This report is developed as part of the Fermilab D-0 Experimental Facility Project Title II Design Documentation Update. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis

  19. Origins of amorphous interstellar grains

    International Nuclear Information System (INIS)

    Hasegawa, H.

    1984-01-01

The existence of amorphous interstellar grains has been suggested from infrared observations. Some carbon stars show far infrared emission with a λ⁻¹ wavelength dependence. Far infrared emission supposed to be due to silicate grains often shows the λ⁻¹ wavelength dependence. Mid infrared spectra around 10 μm have a broad structure. These may be due to amorphous silicate grains. The condition under which grains condensed from the cosmic gas are amorphous is discussed. (author)

  20. Representing culture in interstellar messages

    Science.gov (United States)

    Vakoch, Douglas A.

    2008-09-01

As scholars involved with the Search for Extraterrestrial Intelligence (SETI) have contemplated how we might portray humankind in any messages sent to civilizations beyond Earth, one of the challenges they face is adequately representing the diversity of human cultures. For example, in a 2003 workshop in Paris sponsored by the SETI Institute, the International Academy of Astronautics (IAA) SETI Permanent Study Group, the International Society for the Arts, Sciences and Technology (ISAST), and the John Templeton Foundation, a varied group of artists, scientists, and scholars from the humanities considered how to encode notions of altruism in interstellar messages. Though the group represented 10 countries, most were from Europe and North America, leading to the group's recommendation that subsequent discussions on the topic should include more globally representative perspectives. As a result, the IAA Study Group on Interstellar Message Construction and the SETI Institute sponsored a follow-up workshop in Santa Fe, New Mexico, USA in February 2005. The Santa Fe workshop brought together scholars from a range of disciplines including anthropology, archaeology, chemistry, communication science, philosophy, and psychology. Participants included scholars familiar with interstellar message design as well as specialists in cross-cultural research who had participated in the Symposium on Altruism in Cross-cultural Perspective, held just prior to the workshop during the annual conference of the Society for Cross-cultural Research. The workshop included discussion of how cultural understandings of altruism can complement and critique the more biologically based models of altruism proposed for interstellar messages at the 2003 Paris workshop. This paper, written by the chair of both the Paris and Santa Fe workshops, will explore the challenges of communicating concepts of altruism that draw on both biological and cultural models.

  1. Interstellar Grains: 50 Years On

    OpenAIRE

    Wickramasinghe, N. Chandra

    2011-01-01

    Our understanding of the nature of interstellar grains has evolved considerably over the past half century with the present author and Fred Hoyle being intimately involved at several key stages of progress. The currently fashionable graphite-silicate-organic grain model has all its essential aspects unequivocally traceable to original peer-reviewed publications by the author and/or Fred Hoyle. The prevailing reluctance to accept these clear-cut priorities may be linked to our further work tha...

  2. Interstellar space: the astrochemist's laboratory

    International Nuclear Information System (INIS)

    Allen, M.A.

    1976-01-01

    A mechanism for the formation of molecules on small (radius ≤ 0.04 μm) interstellar grains is proposed. A simplified H₂ formation model is then presented that utilizes this surface reaction mechanism. This approach is further developed into an ab initio chemical model for dense interstellar clouds that incorporates 598 grain surface reactions, with small grains again providing the key reaction area. Gas-phase molecules are depleted through collisions with grains. The abundances of 372 chemical species are calculated as a function of time and are found to be of sufficient magnitude to explain most observations. The reaction rates for ion-molecule chemistry are approximately the same, therefore indicating that surface and gas-phase chemistry may be coupled in certain regions. The composition of grain mantles is shown to be a function of grain radius. In certain grain size ranges, large molecules containing two or more heavy atoms are more predominant than lighter "ices": H₂O, NH₃, and CH₄. It is possible that absorption due to these large molecules in the mantles may contribute to the observed 3 μm band in astronomical spectra. The second part of this thesis is an account of a radio astronomy observational program to detect new transitions of both previously observed and as yet undetected interstellar molecules. The negative results yield order-of-magnitude upper limits to the column densities of the lower transition states of the various molecules. One special project was the search for the Λ-doublet transitions of the ²Π₃/₂, J = 3/2 state of OD. The resulting upper limit for the OD/OH column density ratio towards the galactic center is 1/400 and is discussed with reference to theories about deuterium enrichment in interstellar molecules

  3. On the ionization of interstellar magnesium

    International Nuclear Information System (INIS)

    Gurzadyan, G.A.

    1977-01-01

    It has been shown that two concentric ionization zones of interstellar magnesium must exist around each star: an internal zone, with a radius coinciding with that of the hydrogen ionization zone S_H; and an external zone, with a radius greater than S_H by one order of magnitude. Unlike interstellar hydrogen, interstellar magnesium is ionized throughout the Galaxy. It also transpires that the ionizing radiation of ordinary hot stars cannot provide for the observed high degree of ionization of interstellar magnesium. The discrepancy can be eliminated by assuming the existence of circumstellar clouds or additional ionization sources of interstellar magnesium (X-ray background radiation, high-energy particles, etc.). Stars of the B5 and B0 classes play the main role in the formation of ionization zones of interstellar magnesium; the contribution of O-class stars is negligible (<1%). (Auth.)

  4. Tobacco BY-2 Media Component Optimization for a Cost-Efficient Recombinant Protein Production.

    Science.gov (United States)

    Häkkinen, Suvi T; Reuter, Lauri; Nuorti, Ninni; Joensuu, Jussi J; Rischer, Heiko; Ritala, Anneli

    2018-01-01

    Plant cells constitute an attractive platform for the production of recombinant proteins as more and more animal-free products and processes are desired. One of the challenges in using plant cells as production hosts has been the cost of expensive culture medium components. In this work, the aim was to optimize the levels of the most expensive components in the nutrient medium without compromising the accumulation of biomass and recombinant protein yields. A wild-type BY-2 culture and a transgenic tobacco BY-2 line expressing a green fluorescent protein-Hydrophobin I (GFP-HFBI) fusion protein were used to determine the most inexpensive medium composition. One particularly high-accumulating BY-2 clone, named 'Hulk', produced 1.1 ± 0.2 g/l GFP-HFBI in suspension and kept its high performance during prolonged subculturing. In addition, both cultures were successfully cryopreserved, enabling truly industrial application of this plant cell host. With the optimized culture medium, a 43-55% cost reduction with regard to biomass and up to a 69% reduction with regard to recombinant protein production was achieved.

  5. Cost and optimal feed-in tariff for small scale photovoltaic systems in China

    International Nuclear Information System (INIS)

    Rigter, Jasper; Vidican, Georgeta

    2010-01-01

    China has recently become a dominant player in the solar photovoltaic (PV) industry, producing more than one-third of the global supply of solar cells in 2008. However, as of 2008, less than 1% of global installations were based in China. Recently, the government has stated its grand ambitions of expanding the share of electricity derived from solar power. As part of this initiative, policy makers are currently in the process of drafting a feed-in tariff policy to support the development of the solar energy market. In this paper, we aim to calculate what the level of such a tariff should be. We develop a closed-form equation for the cost of PV, and use forecasts of solar system prices to derive an optimal feed-in tariff, including a degression rate. The focus is on the potential of residential and small-scale commercial solar PV installations. We show that the cost of small-scale PV in China decreased rapidly during the period 2005-2009. Our analysis also shows that optimal feed-in tariffs vary widely between regions within China, and that grid parity could be reached in large parts of the country depending on the expected escalation in electricity prices. (author)
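
    The cost side of such feed-in tariff analyses rests on a levelized cost of energy (LCOE) calculation. The sketch below is a generic textbook LCOE formula, not the paper's closed-form equation, and all numbers are illustrative, not data from the study:

```python
def lcoe(capex, opex_per_year, energy_per_year, discount_rate, lifetime, degradation=0.0):
    """Levelized cost of energy: discounted lifetime costs / discounted lifetime output."""
    costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                        for t in range(1, lifetime + 1))
    energy = sum(energy_per_year * (1 - degradation) ** (t - 1) / (1 + discount_rate) ** t
                 for t in range(1, lifetime + 1))
    return costs / energy  # currency units per kWh

# Hypothetical small rooftop system: 3 kW at ~1200 kWh/kW/yr
print(round(lcoe(capex=9000.0, opex_per_year=90.0, energy_per_year=3600.0,
                 discount_rate=0.08, lifetime=25, degradation=0.008), 3))
```

    A tariff set just above this LCOE, then lowered each year by the degression rate, is the usual way such schemes track expected system price declines.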

  6. Mechanisms of heating the interstellar matter

    International Nuclear Information System (INIS)

    Lequeux, J.

    1975-01-01

    Our knowledge of the interstellar medium has improved considerably in recent years, thanks in particular to radio astronomy and ultraviolet space astronomy. This medium is a natural laboratory where conditions are varied and very different from what can be realised in terrestrial laboratories. To illustrate its interest for physicists, one of the most interesting but controversial points of interstellar astronomy is discussed here: the mechanisms for heating and cooling the interstellar medium. [fr]

  7. Manufacturing enterprise’s logistics operational cost simulation and optimization from the perspective of inter-firm network

    Directory of Open Access Journals (Sweden)

    Chun Fu

    2015-05-01

    Purpose: By studying the case of a Changsha engineering machinery manufacturing firm, this paper aims to find optimization tactics to reduce the enterprise's logistics operational cost. Design/methodology/approach: This paper builds a structure model of the manufacturing enterprise's logistics operational costs from the perspective of the inter-firm network and simulates the model based on system dynamics. Findings: It concludes that applying system dynamics to the study of manufacturing enterprises' logistics cost control can better reflect the relationships among factors in the system, and that the case firm can optimize its logistics costs by implementing joint distribution. Research limitations/implications: This study still lacks comprehensive consideration of the number and quantification of the control factors. In the future, we should strengthen the collection of data and information about engineering machinery manufacturing firms and improve the logistics operational cost model. Practical implications: This study puts forward optimization tactics to reduce enterprise logistics operational costs, which are of great significance for supply chain management optimization and logistics cost control. Originality/value: Differing from the existing literature, this paper builds a structure model of manufacturing enterprises' logistics operational costs from the perspective of the inter-firm network and simulates the model based on system dynamics.

  8. Is there an optimal pension fund size? A scale-economy analysis of administrative and investment costs

    NARCIS (Netherlands)

    Bikker, J.A.

    2013-01-01

    This paper investigates scale economies and the optimal scale of pension funds, estimating different cost functions with varying assumptions about the shape of the underlying average cost function: U-shaped versus monotonically declining. Using unique data for Dutch pension funds over 1992-2009, we

  9. Joint cost of energy under an optimal economic policy of hybrid power systems subject to uncertainty

    International Nuclear Information System (INIS)

    Díaz, Guzmán; Planas, Estefanía; Andreu, Jon; Kortabarria, Iñigo

    2015-01-01

    Economic optimization of hybrid systems is usually performed by means of LCoE (levelized cost of energy) calculation. Previous works deal with the LCoE calculation of the whole hybrid system while disregarding an important issue: the stochastic components of the system units must be jointly considered. This paper deals with this issue and proposes a new, fast optimal policy that properly calculates the LCoE of a hybrid system and finds the lowest LCoE. The proposed policy also considers the implied competition among power sources when the variability of gas and electricity prices is taken into account. Additionally, it presents a comparison between the LCoE of the hybrid system and those of its individual generation technologies, by means of a fast and robust algorithm based on vector logical computation. Numerical case analyses based on realistic data are presented that evaluate the contribution of the technologies in a hybrid power system to the joint LCoE. - Highlights: • We perform the LCoE calculation with the stochastic components jointly considered. • We propose a fast and optimal policy that minimizes the LCoE. • We compare the obtained LCoEs by means of a fast and robust algorithm. • We take into account the competition between gas and electricity prices

  10. Staged cost optimization of urban storm drainage systems based on hydraulic performance in a changing environment

    Directory of Open Access Journals (Sweden)

    M. Maharjan

    2009-04-01

    Urban flooding causes large economic losses, property damage and loss of lives. The impact of environmental changes, mainly urbanization and climate change, leads to increased runoff and peak flows which the drainage system must be able to cope with to reduce potential damage and inconvenience. Allowing for detention storage to complement the conveyance capacity of the drainage network is one approach to reducing urban floods. Contemporary practice is to design systems against stationary environmental forcings, including design rainfall, land use, etc. Due to the rapid change in the climate and the urban environment, this approach is no longer appropriate, and explicit consideration of gradual changes during the lifetime of the drainage system is warranted. In this paper, a staged cost optimization tool based on the hydraulic performance of the drainage system is presented. A one-dimensional hydraulic model is used for hydraulic evaluation of the network, together with a genetic-algorithm-based optimization tool, to determine optimal intervention timings and responses over the analysis period. The model was applied to a case study area in the city of Porto Alegre, Brazil. It was concluded that considerable financial savings and/or an additional level of flood safety can be achieved by approaching the design problem as a staged plan rather than a one-off scheme.

  11. LIFE-CYCLE COST MODEL AND DESIGN OPTIMIZATION OF BASE ISOLATED BUILDING STRUCTURES

    Directory of Open Access Journals (Sweden)

    Chara C. Mitropoulou

    2016-11-01

    The design of economic structures that can withstand, without catastrophic failure, all possible loading conditions during their service life and absorb the induced seismic energy in a controlled fashion has been the subject of intensive research. Modern buildings usually contain extremely sensitive and costly equipment that is vital in business, commerce, education and/or health care. The building contents are frequently more valuable than the buildings themselves. Furthermore, hospitals, communication and emergency centres, police and fire stations must be operational when needed most: immediately after an earthquake. Conventional construction can cause very high floor accelerations in stiff buildings and large interstorey drifts in flexible structures. These two factors cause difficulties in ensuring the safety of both the building and its contents. For this reason base-isolated structures are considered an efficient alternative to conventional fixed-base design practice. In this study a systematic assessment of optimized fixed-base and base-isolated reinforced concrete buildings is presented in terms of their initial and total cost, taking into account the life-cycle cost of the structures.

  12. Optimal search strategies for detecting cost and economic studies in EMBASE

    Directory of Open Access Journals (Sweden)

    Haynes R Brian

    2006-06-01

    Abstract Background Economic evaluations in the medical literature compare competing diagnosis or treatment methods for their use of resources and their expected outcomes. The best evidence currently available from research regarding both cost and economic comparisons will continue to expand as this type of information becomes more important in today's clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health cost and economics studies from EMBASE. Methods We conducted an analytic survey, comparing hand searches of journals with retrievals from EMBASE for candidate search terms and combinations. Six research assistants read all issues of 55 journals indexed by EMBASE for the publishing year 2000. We rated all articles using purpose and quality indicators and categorized them into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized for purpose (i.e., cost and economics, and other clinical topics) and, depending on the purpose, as 'pass' or 'fail' for methodologic rigor. Candidate search strategies were developed for economic and cost studies, then run in the 55 EMBASE journals, the retrievals being compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. Results Combinations of search terms for detecting both cost and economic studies attained levels of 100% sensitivity, with specificity levels of 92.9% and 92.3% respectively. When maximizing for both sensitivity and specificity, the combination of terms for detecting cost studies increased sensitivity by 2.2% over the single term, but at a slight decrease in specificity of 0.9%.
The maximized combination of terms
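
    The retrieval statistics reported above come from a standard 2×2 comparison of search retrievals against the hand-search gold standard. A minimal sketch, with hypothetical counts rather than the study's data:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard retrieval metrics from a 2x2 table
    (search retrieval vs. hand-search gold standard)."""
    return {
        "sensitivity": tp / (tp + fn),            # fraction of relevant studies retrieved
        "specificity": tn / (tn + fp),            # fraction of irrelevant studies excluded
        "precision":   tp / (tp + fp),            # fraction of retrievals that are relevant
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts, not the study's data:
stats = diagnostic_stats(tp=95, fp=40, fn=5, tn=860)
print({k: round(v, 3) for k, v in stats.items()})
```

    Maximizing sensitivity versus specificity is then a matter of choosing which of these numbers a candidate term combination is allowed to trade away.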

  13. Activity-Based Costing & Warm Fuzzies - Costing, Presentation & Framing Influences on Decision-Making ~ A Business Optimization Simulation ~

    OpenAIRE

    Harrison, David Shelby

    1998-01-01

    Activity-Based Costing is presented in accounting text books as a costing system that can be used to make valuable managerial decisions. Accounting journals regularly report the successful implementations and benefits of activity-based costing systems for particular businesses. Little experimental or empirical evidence exists, however, that has demonstrated the benefits of activity-based costing under controlled conditions. Similarly, although case studies report conditions that may or may...

  14. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    Science.gov (United States)

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.

  15. Cost Optimization of Water Resources in Pernambuco, Brazil: Valuing Future Infrastructure and Climate Forecasts

    Science.gov (United States)

    Kumar, Ipsita; Josset, Laureline; Lall, Upmanu; Cavalcanti e Silva, Erik; Cordeiro Possas, José Marcelo; Cauás Asfora, Marcelo

    2017-04-01

    Optimal management of water resources is paramount in semi-arid regions to limit strains on society and the economy due to limited water availability. This problem is likely to become even more recurrent as droughts are projected to intensify in the coming years, causing increasing stresses to the water supply in the concerned areas. The state of Pernambuco in Northeast Brazil is one such case, where one of the largest reservoirs, Jucazinho, was at approximately 1% capacity throughout 2016, making the infrastructural challenges in the region very real. To ease some of the infrastructural stresses and reduce vulnerabilities of the water system, a new source of water from the Rio São Francisco is currently under development. Until its completion, water trucks have been regularly mandated to cover water deficits, but at a much higher cost, thus endangering the financial sustainability of the region. In this paper, we propose to evaluate the sustainability of the considered water system by formulating an optimization problem and determining the optimal operations to be conducted. We start with a comparative study of the capabilities of the current and future infrastructure to face various climate conditions. We show that while the Rio São Francisco project mitigates the problems, neither implementation prevents failure, and both require reliance on water trucks during prolonged droughts. We also study the cost associated with the provision of water to the municipalities for several streamflow forecasts. In particular, we investigate the value of climate predictions for adapting operational decisions by comparing the results with a fixed policy derived from historical data. We show that the use of climate information permits the reduction of the water deficit and reduces overall operational costs. We conclude with a discussion on the potential of the approach to evaluate future infrastructure developments. This study is funded by the Inter-American Development Bank (IADB), and in

  16. Parameterizing the interstellar dust temperature

    Science.gov (United States)

    Hocuk, S.; Szűcs, L.; Caselli, P.; Cazaux, S.; Spaans, M.; Esplugues, G. B.

    2017-08-01

    The temperature of interstellar dust particles is of great importance to astronomers. It plays a crucial role in the thermodynamics of interstellar clouds, because of the gas-dust collisional coupling. It is also a key parameter in astrochemical studies that governs the rate at which molecules form on dust. In 3D (magneto)hydrodynamic simulations a simple expression for the dust temperature is often adopted, because of computational constraints, while astrochemical modelers tend to keep the dust temperature constant over a large range of parameter space. Our aim is to provide an easy-to-use parametric expression for the dust temperature as a function of visual extinction (A_V) and to shed light on the critical dependencies of the dust temperature on the grain composition. We obtain an expression for the dust temperature by semi-analytically solving the dust thermal balance for different types of grains and compare to a collection of recent observational measurements. We also explore the effect of ices on the dust temperature. Our results show that a mixed carbonaceous-silicate type dust with a high carbon volume fraction matches the observations best. We find that ice formation allows the dust to be warmer by up to 15% at high optical depths (A_V > 20 mag) in the interstellar medium. Our parametric expression for the dust temperature is T_d = [11 + 5.7 × tanh(0.61 − log10(A_V))] × χ_uv^(1/5.9), where χ_uv is in units of the Draine (1978, ApJS, 36, 595) UV field.
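
    The parametric expression quoted in the abstract is straightforward to evaluate; a minimal sketch (the function name and default χ_uv = 1 are ours, not the paper's):

```python
import math

def dust_temperature(av, chi_uv=1.0):
    """Parametric dust temperature [K] from the abstract's expression:
    T_d = [11 + 5.7 * tanh(0.61 - log10(A_V))] * chi_uv**(1/5.9),
    with A_V the visual extinction in mag and chi_uv the UV field
    strength in Draine (1978) units."""
    return (11.0 + 5.7 * math.tanh(0.61 - math.log10(av))) * chi_uv ** (1.0 / 5.9)

print(round(dust_temperature(1.0), 1))  # prints 14.1 (moderate extinction, standard field)
```

    As expected for shielded grains, the expression decreases monotonically with A_V and scales weakly (power 1/5.9) with the strength of the UV field.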

  17. Probing the diffuse interstellar medium with diffuse interstellar bands

    Science.gov (United States)

    Theodorus van Loon, Jacco; Bailey, Mandy; Farhang, Amin; Javadi, Atefeh; Khosroshahi, Habib

    2015-08-01

    For a century already, a large number of absorption bands have been known at optical wavelengths, called the diffuse interstellar bands (DIBs). While their carriers remain unidentified, the relative strengths of these bands in various environments make them interesting new probes of the diffuse interstellar medium (ISM). We present the results from two large, dedicated campaigns to map the ISM using DIBs measured in the high signal-to-noise spectra of hundreds of early-type stars: [1] in and around the Local Bubble using ESO's New Technology Telescope and the Isaac Newton Telescope, and [2] across both Magellanic Clouds using the Very Large Telescope and the Anglo-Australian Telescope. We discuss the implications for the structure and dynamics of the ISM, as well as the constraints these maps place on the nature of the carriers of the DIBs. Partial results have appeared in the recent literature (van Loon et al. 2013; Farhang et al. 2015a,b; Bailey, PhD thesis 2014) with the remainder being prepared for publication now.

  18. Grain destruction in interstellar shocks

    International Nuclear Information System (INIS)

    Seab, C.G.; Shull, J.M.

    1984-01-01

    One of the principal methods for removing grains from the Interstellar Medium is to destroy them in shock waves. Previous theoretical studies of shock destruction have generally assumed only a single size and type of grain; most do not account for the effect of the grain destruction on the structure of the shock. Earlier calculations have been improved in three ways: first, by using a ''complete'' grain model including a distribution of sizes and types of grains; second, by using a self-consistent shock structure that incorporates the changing elemental depletions as the grains are destroyed; and third, by calculating the shock-processed ultraviolet extinction curves for comparison with observations. (author)

  19. Activity Based Costing (ABC) as an Approach to Optimize Purchasing Performance in Hospitality Industry

    Directory of Open Access Journals (Sweden)

    Mohamed S. El-Deeb

    2011-07-01

    ABC (Activity-Based Costing) has proved successful for both products and services. The researchers propose a new model, applying the ABC approach in the purchasing department, one of the most dynamic departments in the service sector, to optimize the performance of purchasing activities. The researchers propose purchasing measures targeting customers' loyalty and ensuring the continuous flow of supplies. A questionnaire was used as the data collection tool for verifying the hypothesis of the research, and the data obtained were analyzed using the Statistical Package for the Social Sciences (SPSS). The results of the research are based on a limited survey distributed to a number of hotels in the Greater Cairo region, targeting three hundred purchasing managers and staff in five-star hotels. It is recognized that further research is necessary to establish the exact nature of the causal linkages between the proposed performance measures and strategic intent in order to gain insights into practice elsewhere.

  20. Design Optimization of Mixed-Criticality Real-Time Applications on Cost-Constrained Partitioned Architectures

    DEFF Research Database (Denmark)

    Tamas-Selicean, Domitian; Pop, Paul

    2011-01-01

    In this paper we are interested to implement mixed-criticality hard real-time applications on a given heterogeneous distributed architecture. Applications have different criticality levels, captured by their Safety-Integrity Level (SIL), and are scheduled using static-cyclic scheduling. Mixed......-criticality tasks can be integrated onto the same architecture only if there is enough spatial and temporal separation among them. We consider that the separation is provided by partitioning, such that applications run in separate partitions, and each partition is allocated several time slots on a processor. Tasks...... slots on each processor and (iv) the schedule tables, such that all the applications are schedulable and the development costs are minimized. We have proposed a Tabu Search-based approach to solve this optimization problem. The proposed algorithm has been evaluated using several synthetic and real...

  1. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    Science.gov (United States)

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainty. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the prices of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear program was developed, and a method for solving it was presented. An illustrative case was studied with the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chains under uncertainty. Copyright © 2015 Elsevier Ltd. All rights reserved.
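
    One common way to handle interval coefficients in such a model is to solve the optimistic and pessimistic endpoint problems separately, yielding an interval for the optimal cost (valid here because the minimum cost is monotone in both the unit costs and the demand). The toy sketch below uses a single market and a greedy exact solution of the underlying one-market transportation LP; the paper's multi-zone formulation and solution method are more general, and all numbers are hypothetical:

```python
def min_cost_allocation(costs, capacities, demand):
    """Exact greedy solution of a one-market transportation LP:
    serve demand from the cheapest sources first."""
    total, remaining = 0.0, demand
    for unit_cost, cap in sorted(zip(costs, capacities)):
        take = min(cap, remaining)
        total += unit_cost * take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return total

def interval_lp(cost_lo, cost_hi, cap, demand_lo, demand_hi):
    """Best/worst-case cost under interval uncertainty:
    optimistic endpoint = low costs, low demand;
    pessimistic endpoint = high costs, high demand."""
    return (min_cost_allocation(cost_lo, cap, demand_lo),
            min_cost_allocation(cost_hi, cap, demand_hi))

# Two hypothetical biofuel plants shipping to one market center:
lo, hi = interval_lp(cost_lo=[2.0, 3.0], cost_hi=[2.5, 3.8],
                     cap=[60, 100], demand_lo=80, demand_hi=120)
print(lo, hi)  # cost interval for the optimal plan
```

    A decision maker can then plan against the pessimistic endpoint while knowing how much the optimistic scenario would save.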

  2. Identifying specific interstellar polycyclic aromatic hydrocarbons

    International Nuclear Information System (INIS)

    Mulas, Giacomo; Malloci, Giuliano; Porceddu, Ignazio

    2005-01-01

    Interstellar Polycyclic Aromatic Hydrocarbons (PAHs) have been thought to be ubiquitous for more than twenty years, yet no single species in this class has been identified in the Interstellar Medium (ISM) to date. The unprecedented sensitivity and resolution of present Infrared Space Observatory (ISO) and forthcoming Herschel observations in the far infrared spectral range will offer a unique way out of this embarrassing impasse

  3. Can spores survive in interstellar space

    Energy Technology Data Exchange (ETDEWEB)

    Weber, P.; Greenberg, J.M.

    1985-08-01

    Inactivation of spores (Bacillus subtilis) by vacuum ultraviolet radiation has been investigated in the laboratory under simulated interstellar conditions. Damage produced at the normal interstellar particle temperature of 10 K is less than at higher temperatures, the major damage being produced by radiation in the 2000-3000 Å range. The results place constraints on the panspermia hypothesis. (author).

  5. MEASURING THE FRACTAL STRUCTURE OF INTERSTELLAR CLOUDS

    NARCIS (Netherlands)

    VOGELAAR, MGR; WAKKER, BP

    1994-01-01

    To study the structure of interstellar matter we have applied the concept of fractal curves to the brightness contours of maps of interstellar clouds and from these estimated the fractal dimension for some of them. We used the so-called perimeter-area relation as the basis for these estimates. We
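
    The perimeter-area relation mentioned in the abstract can be turned into a simple estimator: for contours measured at a fixed resolution, P ∝ A^(D/2), so the fractal dimension D is twice the slope of log P against log A. A self-contained sketch using synthetic circle data (not the authors' cloud maps):

```python
import math

def fractal_dimension(areas, perimeters):
    """Estimate the fractal dimension from the perimeter-area relation
    P ~ A**(D/2): D is twice the least-squares slope of log P vs. log A."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perimeters]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 * slope

# Smooth contours (P ~ sqrt(A)) should give D close to 1:
circles = [(math.pi * r * r, 2 * math.pi * r) for r in (1.0, 2.0, 5.0, 10.0)]
print(round(fractal_dimension([a for a, _ in circles],
                              [p for _, p in circles]), 2))  # prints 1.0
```

    Real interstellar cloud contours give slopes well above 1/2, i.e. perimeters that grow faster with area than smooth shapes allow, which is what motivates the fractal description.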

  6. Interstellar grains - the 75th anniversary

    International Nuclear Information System (INIS)

    Li Aigen

    2005-01-01

    The year 2005 marks 75 years since Trumpler (1930) provided the first definitive proof of interstellar grains by demonstrating the existence of general absorption and reddening of starlight in the galactic plane. This article reviews our progressive understanding of the nature of interstellar dust

  7. Multi-objective optimization of aircraft design for emission and cost reductions

    Directory of Open Access Journals (Sweden)

    Wang Yu

    2014-02-01

    Pollutant gases emitted by civil jet aircraft are doing more and more harm to the environment with the rapid development of global commercial air transport. Low environmental impact has become a new requirement for aircraft design. In this paper, the method for estimating emissions in the aircraft conceptual design stage is improved based on the International Civil Aviation Organization (ICAO) aircraft engine emissions databank and polynomial curve fitting methods. The greenhouse gas emission (CO2 equivalent) per seat per kilometer is proposed to measure the emissions. An approximate sensitivity analysis and a multi-objective optimization of aircraft design for the tradeoff between greenhouse effect and direct operating cost (DOC) are performed with five geometry variables of the wing configuration and two flight operational parameters. The results indicate that reducing the cruise altitude and Mach number may decrease the greenhouse effect but increase DOC, and that the two flight operational parameters have more effect on the emissions than the wing configuration. The Pareto-optimal front shows that a decrease of 29.8% in DOC is attained at the expense of an increase of 10.8% in greenhouse gases.
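
    The polynomial curve fitting step can be sketched with a generic least-squares fit over engine data points; the fuel-flow and emission-index numbers below are hypothetical illustrations, not values from the ICAO databank:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (adequate for the low degrees used in curve fitting engine data)."""
    n = degree + 1
    # Normal equations: A[i][j] = sum x^(i+j), b[i] = sum y * x^i
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution; coefficients returned lowest order first
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j] for j in range(i + 1, n))) / A[i][i]
    return coeffs

# Hypothetical emission-index points (g pollutant / kg fuel) vs fuel flow (kg/s):
flow = [0.2, 0.4, 0.6, 0.8, 1.0]
ei = [5.1, 8.0, 12.2, 17.9, 25.2]
print([round(c, 2) for c in polyfit(flow, ei, 2)])
```

    Evaluating the fitted polynomial at the fuel flows of a candidate mission profile then yields the per-flight emissions fed into the multi-objective optimization.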

  8. Optimal design and allocation of electrified vehicles and dedicated charging infrastructure for minimum life cycle greenhouse gas emissions and cost

    International Nuclear Information System (INIS)

    Traut, Elizabeth; Hendrickson, Chris; Klampfl, Erica; Liu, Yimin; Michalek, Jeremy J.

    2012-01-01

    Electrified vehicles can reduce greenhouse gas (GHG) emissions by shifting energy demand from gasoline to electricity. GHG reduction potential depends on vehicle design, adoption, driving and charging patterns, charging infrastructure, and electricity generation mix. We construct an optimization model to study these factors by determining optimal design of conventional vehicles, hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), and battery electric vehicles (BEVs) with optimal allocation of vehicle designs and dedicated workplace charging infrastructure in the fleet for minimum life cycle cost or GHG emissions over a range of scenarios. We focus on vehicles with similar body size and acceleration to a Toyota Prius under government 5-cycle driving conditions. We find that under the current US grid mix, PHEVs offer only small GHG emissions reductions compared to HEVs, and workplace charging is insignificant. With grid decarbonization, PHEVs and BEVs offer substantial GHG emissions reductions, and workplace charging provides additional benefits. HEVs are optimal or near-optimal for minimum cost in most scenarios. High gas prices and low vehicle and battery costs are the major drivers for PHEVs and BEVs to enter and dominate the cost-optimal fleet. Carbon prices have little effect. Cost and range restrictions limit penetration of BEVs. - Highlights: ► We pose an MINLP model to minimize cost and GHG emissions of electrified vehicles. ► We design PHEVs and BEVs and assign vehicles and charging infrastructure in US fleet. ► Under US grid mix, PEVs provide minor GHG reductions and work chargers do little. ► HEVs are robust; PEVs and work charging potential improve with a decarbonized grid. ► We quantify factors needed for PEVs to enter and dominate the optimal fleet.

  9. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    Directory of Open Access Journals (Sweden)

    Maciej Malawski

    2015-01-01

    Full Text Available This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows, from general-purpose cloud benchmarks, and from measurements in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
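    Hourly billing is the key cost feature of such IaaS pricing models: a task finishing in 2.5 h is billed for 3 h. A much-simplified sketch of deadline-constrained cost minimization over a single serial job (the paper's model covers whole workflow ensembles via AMPL/CMPL; the instance names, speedups, and prices below are illustrative assumptions):

```python
import math

def cheapest_instance(work_hours, deadline_hours, instances):
    """Pick the cheapest instance type that finishes `work_hours` of
    serial work within the deadline.  Hourly billing: partial hours
    are billed as full hours.  `instances` maps a name to a
    (speedup, price_per_hour) pair.  Returns (name, cost) or None."""
    best = None
    for name, (speed, price) in instances.items():
        runtime = work_hours / speed
        if runtime > deadline_hours:
            continue  # this type misses the deadline
        cost = math.ceil(runtime) * price
        if best is None or cost < best[1]:
            best = (name, cost)
    return best

# Hypothetical catalogue: (speedup relative to "small", $/hour).
catalogue = {
    "small":  (1.0, 0.10),
    "medium": (2.0, 0.25),
    "large":  (4.0, 0.60),
}
choice = cheapest_instance(work_hours=6.0, deadline_hours=4.0,
                           instances=catalogue)
```

    With a 4 h deadline the "small" type is too slow, and "medium" beats "large" because rounding 1.5 h up to 2 billed hours makes the faster machine more expensive.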

  10. Computerized systems analysis and optimization of aircraft engine performance, weight, and life cycle costs

    Science.gov (United States)

    Fishbach, L. H.

    1979-01-01

    The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.

  11. Optimization of airfoil-type PCHE for the recuperator of small scale brayton cycle by cost-based objective function

    International Nuclear Information System (INIS)

    Kwon, Jin Gyu; Kim, Tae Ho; Park, Hyun Sun; Cha, Jae Eun; Kim, Moo Hwan

    2016-01-01

    Highlights: • Suggest Nusselt number and Fanning friction factor correlations for the airfoil-type PCHE. • Show that cost-based optimization is applicable to the airfoil-type PCHE. • Suggest the recuperator design for the SCIEL test loop at KAERI using a cost-based objective function with correlations from numerical analysis. - Abstract: The supercritical carbon dioxide (SCO_2) Brayton cycle gives high power-cycle efficiency with small size. Printed circuit heat exchangers (PCHE) are a proper selection for the Brayton cycle because of their operability at high temperature and high pressure with small size. The airfoil fin PCHE, suggested by Kim et al. (2008b), can provide high heat transfer, like the zigzag channel PCHE, with low pressure drop, like the straight channel PCHE. Unlike the zigzag channel PCHE, however, the airfoil fin PCHE had not been optimized. For optimization of the airfoil fin PCHE, the operating condition of the recuperator of the SCO_2 Integral Experiment Loop (SCIEL) Brayton cycle test loop at the Korea Atomic Energy Research Institute (KAERI) was used. We performed CFD analysis for various airfoil fin configurations using ANSYS CFX 15.0 and made correlations for predicting the Nusselt number and the Fanning friction factor. The recuperator was designed by a simple energy balance code with our correlations. Using the cost-based objective function with production cost and operation cost derived from the size and pressure drop of the recuperator, we evaluated airfoil fin configurations by total cost and suggested the optimal configuration of the airfoil fin PCHE.
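    The cost-based objective combines a production cost that grows with heat-transfer area and an operation cost driven by the pumping power needed to overcome the pressure drop. A hedged sketch of such an objective (all prices, the lifetime, and the sample configurations are illustrative assumptions, not values from the paper):

```python
def total_cost(area_m2, pressure_drop_pa, flow_m3s,
               area_price=500.0,          # $/m^2 of heat-transfer area
               electricity_price=0.10,    # $/kWh
               pump_efficiency=0.8,
               lifetime_hours=8760.0 * 20):
    """Cost-based objective for a heat-exchanger configuration:
    production cost grows with area; operation cost is the lifetime
    energy bill of the pumping power overcoming the pressure drop."""
    production = area_price * area_m2
    pump_power_kw = pressure_drop_pa * flow_m3s / pump_efficiency / 1000.0
    operation = pump_power_kw * lifetime_hours * electricity_price
    return production + operation

# The trade-off: a larger core costs more to build but less to pump.
compact = total_cost(area_m2=50.0, pressure_drop_pa=80_000.0, flow_m3s=0.05)
roomy = total_cost(area_m2=120.0, pressure_drop_pa=20_000.0, flow_m3s=0.05)
```

    Evaluating this objective over candidate fin configurations (with the pressure drop and area predicted from correlations) is what singles out the cost-optimal geometry.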

  12. Optimization of airfoil-type PCHE for the recuperator of small scale brayton cycle by cost-based objective function

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Jin Gyu [Division of Advanced Nuclear Engineering, POSTECH, Pohang 790-784 (Korea, Republic of); Kim, Tae Ho [Department of Mechanical Engineering, POSTECH, Pohang 790-784 (Korea, Republic of); Park, Hyun Sun, E-mail: hejsunny@postech.ac.kr [Division of Advanced Nuclear Engineering, POSTECH, Pohang 790-784 (Korea, Republic of); Cha, Jae Eun [Korea Atomic Energy Research Institute, Daejeon 305-353 (Korea, Republic of); Kim, Moo Hwan [Division of Advanced Nuclear Engineering, POSTECH, Pohang 790-784 (Korea, Republic of); Korea Institute of Nuclear Safety, Daejeon 305-338 (Korea, Republic of)

    2016-03-15

    Highlights: • Suggest Nusselt number and Fanning friction factor correlations for the airfoil-type PCHE. • Show that cost-based optimization is applicable to the airfoil-type PCHE. • Suggest the recuperator design for the SCIEL test loop at KAERI using a cost-based objective function with correlations from numerical analysis. - Abstract: The supercritical carbon dioxide (SCO_2) Brayton cycle gives high power-cycle efficiency with small size. Printed circuit heat exchangers (PCHE) are a proper selection for the Brayton cycle because of their operability at high temperature and high pressure with small size. The airfoil fin PCHE, suggested by Kim et al. (2008b), can provide high heat transfer, like the zigzag channel PCHE, with low pressure drop, like the straight channel PCHE. Unlike the zigzag channel PCHE, however, the airfoil fin PCHE had not been optimized. For optimization of the airfoil fin PCHE, the operating condition of the recuperator of the SCO_2 Integral Experiment Loop (SCIEL) Brayton cycle test loop at the Korea Atomic Energy Research Institute (KAERI) was used. We performed CFD analysis for various airfoil fin configurations using ANSYS CFX 15.0 and made correlations for predicting the Nusselt number and the Fanning friction factor. The recuperator was designed by a simple energy balance code with our correlations. Using the cost-based objective function with production cost and operation cost derived from the size and pressure drop of the recuperator, we evaluated airfoil fin configurations by total cost and suggested the optimal configuration of the airfoil fin PCHE.

  13. THERMODYNAMICS AND CHARGING OF INTERSTELLAR IRON NANOPARTICLES

    Energy Technology Data Exchange (ETDEWEB)

    Hensley, Brandon S. [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Draine, B. T., E-mail: brandon.s.hensley@jpl.nasa.gov [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2017-01-10

    Interstellar iron in the form of metallic iron nanoparticles may constitute a component of the interstellar dust. We compute the stability of iron nanoparticles to sublimation in the interstellar radiation field, finding that iron clusters can persist down to a radius of ≃4.5 Å, and perhaps smaller. We employ laboratory data on small iron clusters to compute the photoelectric yields as a function of grain size and the resulting grain charge distribution in various interstellar environments, finding that iron nanoparticles can acquire negative charges, particularly in regions with high gas temperatures and ionization fractions. If ≳10% of the interstellar iron is in the form of ultrasmall iron clusters, the photoelectric heating rate from dust may be increased by up to tens of percent relative to dust models with only carbonaceous and silicate grains.

  14. Use of software to optimize time and costs in the elaboration of the basic drawing

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Tchaikowisky M. [Faculdade de Tecnologia e Ciencias (FTC), Itabuna, BA (Brazil); Bresci, Claudio T.; Franca, Carlos M.M. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In order to choose areas for pipe yards, terminals, campsites, the pipeline path and other support areas for the construction and assembly of a pipeline, previous location studies are necessary to elaborate the basic drawing. However, it is not always possible to contract a company registered in aerial survey to elaborate this type of drawing, whether for cost or time, and the lack of a basic drawing can lead to an erroneous choice, an exaggerated estimate, or inadequate logistics. In order to minimize costs and optimize time without compromising quality, this study proposes the use of 'virtual globe'-type software for mapping and geographic location, available on the internet, to assist in gathering the geographical, altimetric, hydrographic, road, environmental, socioeconomic and aerial-image information necessary to elaborate the basic drawing. This article includes a case study of the Cacimbas-Barra do Riacho oil pipeline project and a proposed procedure for elaborating basic drawings using data generated with the referred software. In 2007 the Pipeline Construction and Assembly sector, CMDPI unit of PETROBRAS Engenharia/IETEG/IEDT, was designated to select an area of 2 hectares on which the storage yard for pipes to be used in the Cacimbas-Barra do Riacho oil pipeline in Espirito Santo state would be constructed. The area was chosen following some pre-requisites, using as the main resources for gathering the necessary information 'virtual globe'-type software and CAD software capable of importing images and altimetric identification as well as exporting drawing data. The activity, much simpler than hiring an aerial survey, surpassed time and cost expectations while complying with the necessary technical demands. (author)

  15. Knocking on Industry’s Door: Needs in Product-Cost Optimization in the Early Product Life Cycle Stages

    Directory of Open Access Journals (Sweden)

    Matthias Walter

    2017-12-01

    Full Text Available While theoretical concepts for product-costing methodologies have evolved over the decades, little emphasis has been placed on their integration into modern information systems. During a co-innovation workshop at SAP SE, we initiated our collaborative research with selected large-scale enterprises from the discrete manufacturing industry. Moreover, we conducted interviews with business experts to gain a sophisticated understanding of the cost-optimization process itself. As a result, we present an exemplary optimization process with an emphasis on the specific characteristics of the product development stage. Based upon this example, we identified associated deficits in information system support. No current software fulfills the enterprises' requirements regarding cost optimization in the early stages of a product's life cycle; thus, the respective processes lack integration in corporate environments. Taking this on, our article compiles a detailed problem identification and suggests approaches to overcome these hurdles.

  16. Premium cost optimization of operational and maintenance of green building in Indonesia using life cycle assessment method

    Science.gov (United States)

    Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Budiman, Rachmat; Riswanto

    2017-06-01

    Buildings have a big impact on environmental development. There are three general motives in building, namely the economy, society, and the environment. Total completed building construction in Indonesia increased by 116% during 2009 to 2011, and energy consumption increased by 11% within the last three years. In fact, 70% of energy consumption is used for electricity needs in commercial buildings, which has led to an increase in greenhouse gas emissions of 25%. Green building life cycle cost is known for its high upfront cost in Indonesia. The optimization in this research aims to improve building performance with green concept alternatives. The research methodology is a mixed method of qualitative and quantitative approaches through questionnaire surveys and a case study. The success of the optimization functions in the existing green building is assessed in the operational and maintenance phase with the Life Cycle Assessment method. Optimization results were chosen based on the largest building life cycle efficiency and the most cost-effective payback.

  17. Efficiency and cost optimization of a regenerative Organic Rankine Cycle power plant through the multi-objective approach

    International Nuclear Information System (INIS)

    Gimelli, A.; Luongo, A.; Muccillo, M.

    2017-01-01

    Highlights: • A multi-objective optimization method for ORC design is addressed. • The trade-off between electric efficiency and overall heat exchanger area is evaluated. • The heat exchanger area is used as an objective function to minimize plant cost. • MDM is considered as the organic working fluid for the thermodynamic cycle. • Electric efficiency: 14.1–18.9%. Overall heat exchanger area: 446–1079 m². - Abstract: Multi-objective optimization could be, in the industrial sector, a fundamental strategic approach for defining the target design specifications and operating parameters of new competitive products for the market, especially in the renewable energy and energy savings fields. Vector optimization enables the determination of a set of optimal solutions characterized by different costs, sizes, efficiencies and other key features. The designer can subsequently select the solution with the best compromise between the objective functions for the specific application and constraints. In this paper, a multi-objective optimization problem addressing an Organic Rankine Cycle system is solved, with the electric efficiency and the overall heat exchanger area as the quantities to be optimized. Considering that the overall capital cost of the ORC system is dominated by the cost of the heat exchangers rather than that of the pump and turbine, this area is related to the cost of the plant, and so it was used to indirectly optimize the economic performance of the system. For this reason, although cost data have not been used, the heat exchanger area was used as a second objective function to minimize the plant cost. Pareto-optimal solutions highlighted a trade-off between the two conflicting objective functions. Octamethyltrisiloxane (MDM) was considered as the organic working fluid, while the following input parameters were used as decision variables: minimum and maximum pressure of the thermodynamic cycle; superheating and subcooling

  18. Optimizing inspection intervals—Reliability and availability in terms of a cost model: A case study on railway carriers

    International Nuclear Information System (INIS)

    Wolde, Mike ten; Ghobbar, Adel A.

    2013-01-01

    This paper addresses the problem railway carriers face in carrying out inspections and maintenance in the most cost-optimal way. Often there is external pressure to improve reliability and availability and to reduce costs significantly. To overcome this problem, this paper suggests a model that finds the optimal inspection interval. Not all maintenance companies keep inspection intervals that match the actual reliability of a system, and the inspection intervals are not necessarily at a cost optimum. This research retrieves actual failure and repair data and combines it with the availability of a system to find the optimal inspection interval in terms of cost. The application of the optimization approach to a railway carrier maintenance company in the Netherlands is also presented. -- Highlights: ► New optimization technique for reliability and availability by minimizing costs. ► Adaptable to multiple types of repairs. ► Usable as a practical model. ► Applicable to the FMECA, RCM and ISO31000 risk management methods
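    A textbook version of the underlying trade-off: inspecting more often costs more in inspections, inspecting less often leaves failures undetected longer. Under a constant failure rate the classic cost-rate model has a closed-form optimum; a sketch with hypothetical figures, not the railway data of the paper:

```python
import math

def cost_rate(interval_h, inspection_cost, downtime_cost_per_h, failure_rate):
    """Expected cost per hour of periodic inspection: one inspection
    per interval, plus failures (rate per hour) that stay undetected
    for half an interval on average."""
    return (inspection_cost / interval_h
            + downtime_cost_per_h * failure_rate * interval_h / 2.0)

def optimal_interval(inspection_cost, downtime_cost_per_h, failure_rate):
    """Closed-form minimizer of cost_rate (set its derivative to zero)."""
    return math.sqrt(2.0 * inspection_cost
                     / (downtime_cost_per_h * failure_rate))

# Hypothetical figures: $200 per inspection, $50 per hour of undetected
# downtime, one failure per 1000 h on average.
t_star = optimal_interval(200.0, 50.0, 1.0 / 1000.0)  # about 89 h
```

    Real models, like the one in the paper, replace the constant failure rate with measured failure and repair data and fold in availability, but the cost-versus-interval balance is the same.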

  19. Toward a new spacecraft optimal design lifetime? Impact of marginal cost of durability and reduced launch price

    Science.gov (United States)

    Snelgrove, Kailah B.; Saleh, Joseph Homer

    2016-10-01

    The average design lifetime of satellites continues to increase, in part due to the expectation that the satellite cost per operational day decreases monotonically with increased design lifetime. In this work, we challenge this expectation by revisiting the durability choice problem for spacecraft in the face of reduced launch price and under various cost-of-durability models. We first provide a brief overview of the economic thought on durability and highlight its limitations as they pertain to our problem (e.g., the assumption of zero marginal cost of durability). We then investigate the combined influence of spacecraft cost of durability and launch price, and we identify conditions that give rise to cost-optimal design lifetimes that are shorter than the longest lifetime technically achievable. For example, we find that high costs of durability favor short design lifetimes, and that under these conditions the optimal choice is relatively robust to reduction in launch prices. By contrast, lower costs of durability favor longer design lifetimes, and the optimal choice is highly sensitive to reduction in launch price. In both cases, reduction in launch prices translates into reduction of the optimal design lifetime. Our results identify a number of situations for which satellite operators would be better served by spacecraft with shorter design lifetimes. Beyond cost issues and repeat purchases, other implications of long design lifetime include the increased risk of technological slowdown given the lower frequency of purchases and technology refresh, and the increased risk for satellite operators that the spacecraft will be technologically obsolete before the end of its life (with the corollary of loss of value and competitive advantage). We conclude with the recommendation that, should pressure to extend spacecraft design lifetime continue, satellite manufacturers should explore opportunities to lease their spacecraft to operators, or to take a stake in the ownership.

  20. Multiobjective optimization applied to structural sizing of low cost university-class microsatellite projects

    Science.gov (United States)

    Ravanbakhsh, Ali; Franchini, Sebastián

    2012-10-01

    In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. The involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding their in-orbit mission and scientific payloads at the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets owing to the fact that most of them are launched as auxiliary payloads, for which the launch cost is reduced considerably. The satellite structure subsystem is the one most affected by the launcher constraints. This can affect different aspects, including dimensions, strength and frequency requirements. In this paper, the main focus is on developing a structural design sizing tool containing not only the primary structure properties as variables but also system-level variables such as the payload mass budget and the satellite total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. Finally, a Genetic Algorithm (GA) multiobjective optimization is applied to the design space. The result is a Pareto-optimal front based on two objectives, minimum satellite total mass and maximum payload mass budget, which gives useful insight to the design team at the early phases of the design.

  1. Using machine-learning to optimize phase contrast in a low-cost cellphone microscope

    Science.gov (United States)

    Wartmann, Rolf; Schadwinkel, Harald; Heintzmann, Rainer

    2018-01-01

    Cellphones equipped with high-quality cameras and powerful CPUs as well as GPUs are widespread. This opens new prospects to use such existing computational and imaging resources to perform medical diagnosis in developing countries at very low cost. Many relevant samples, like biological cells or waterborne parasites, are almost fully transparent. As they do not exhibit absorption, but alter the light's phase only, they are almost invisible in brightfield microscopy. Expensive equipment and procedures for microscopic contrasting or sample staining are often not available. Dedicated illumination approaches, tailored to the sample under investigation, help to boost the contrast. This is achieved by a programmable illumination source, which also allows measuring the phase gradient using differential phase contrast (DPC) [1, 2] or even the quantitative phase using the derived qDPC approach [3]. By applying machine-learning techniques, such as a convolutional neural network (CNN), it is possible to learn from a given dataset a relationship between the samples to be examined and their optimal light-source shapes, in order to increase e.g. phase contrast, enabling real-time applications. For the experimental setup, we developed a 3D-printed smartphone microscope for less than $100 using off-the-shelf components only, such as a low-cost video projector. The fully automated system assures true Koehler illumination with an LCD as the condenser aperture and a reversed smartphone lens as the microscope objective. We show that the effect of a varied light-source shape, using the pre-trained CNN, not only improves the phase contrast but also gives the impression of an improvement in optical resolution without adding any special optics, as demonstrated by measurements. PMID:29494620

  2. Optimal replacement of residential air conditioning equipment to minimize energy, greenhouse gas emissions, and consumer cost in the US

    International Nuclear Information System (INIS)

    De Kleine, Robert D.; Keoleian, Gregory A.; Kelly, Jarod C.

    2011-01-01

    A life cycle optimization of the replacement of residential central air conditioners (CACs) was conducted in order to identify replacement schedules that minimized three separate objectives: life cycle energy consumption, greenhouse gas (GHG) emissions, and consumer cost. The analysis was conducted for the time period of 1985-2025 for Ann Arbor, MI and San Antonio, TX. Using annual sales-weighted efficiencies of residential CAC equipment, the tradeoff between potential operational savings and the burdens of producing new, more efficient equipment was evaluated. The optimal replacement schedule for each objective was identified for each location and service scenario. In general, minimizing energy consumption required frequent replacement (4-12 replacements), minimizing GHG required fewer replacements (2-5 replacements), and minimizing cost required the fewest replacements (1-3 replacements) over the time horizon. Scenario analysis of different federal efficiency standards, regional standards, and Energy Star purchases were conducted to quantify each policy's impact. For example, a 16 SEER regional standard in Texas was shown to either reduce primary energy consumption 13%, GHGs emissions by 11%, or cost by 6-7% when performing optimal replacement of CACs from 2005 or before. The results also indicate that proper servicing should be a higher priority than optimal replacement to minimize environmental burdens. - Highlights: → Optimal replacement schedules for residential central air conditioners were found. → Minimizing energy required more frequent replacement than minimizing consumer cost. → Significant variation in optimal replacement was observed for Michigan and Texas. → Rebates for altering replacement patterns are not cost effective for GHG abatement. → Maintenance levels were significant in determining the energy and GHG impacts.
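    The replacement-scheduling trade-off above (operational savings of newer, more efficient equipment versus the burden of producing it) can be written as a small dynamic program; a sketch with hypothetical burden figures, not the CAC data of the study:

```python
def best_replacement_schedule(embodied, annual_use):
    """Dynamic program over replacement years.  A unit bought in year b
    carries production burden embodied[b] and consumes annual_use[b]
    per year until replaced (its efficiency is frozen at purchase).
    Returns (minimum total burden, sorted purchase years)."""
    n = len(embodied)                      # planning horizon in years
    best = [float("inf")] * (n + 1)        # best[y]: burden of years 0..y-1
    best[0] = 0.0
    last_buy = [0] * (n + 1)
    for end in range(1, n + 1):
        for buy in range(end):             # last unit bought in year `buy`
            total = (best[buy] + embodied[buy]
                     + annual_use[buy] * (end - buy))
            if total < best[end]:
                best[end] = total
                last_buy[end] = buy
    years, y = [], n                       # walk back through the choices
    while y > 0:
        years.append(last_buy[y])
        y = last_buy[y]
    return best[n], sorted(years)

# Hypothetical 6-year horizon: production burdens fall over time and
# newer units are more efficient (arbitrary energy units).
embodied = [10.0, 9.0, 8.0, 7.0, 6.0, 5.0]
annual = [5.0, 4.0, 3.0, 2.5, 2.0, 1.8]
total, schedule = best_replacement_schedule(embodied, annual)
```

    Running the same program with a cost objective instead of an energy objective typically yields fewer replacements, which mirrors the paper's finding that minimizing cost requires the fewest replacements.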

  3. Radio propagation through the turbulent interstellar plasma

    International Nuclear Information System (INIS)

    Rickett, B.J.

    1990-01-01

    The current understanding of interstellar scattering is reviewed, and its impact on radio astronomy is examined. The features of interstellar plasma turbulence are also discussed. It is concluded that methods involving the investigation of the flux variability of pulsars and extragalactic sources and the VLBI visibility curves constitute new techniques for probing the ISM. However, scattering causes a seeing limitation in radio observations. It is now clear that variation due to RISS (refractive interstellar scintillations) is likely to be important for several classes of variable sources, especially low-frequency variables and centimeter-wave flickering. 168 refs

  4. Physics of the galaxy and interstellar matter

    International Nuclear Information System (INIS)

    Scheffler, H.; Elsasser, H.

    1988-01-01

    This book is based on the authors' long standing experience in teaching astronomy courses. It presents in a modern and complete way our present picture of the physics of the Milky Way system. The first part of the book deals with topics of more empirical character, such as the positions and motions of stars, the structure and kinetics of the stellar systems and interstellar phenomena. The more advanced second part is devoted to the interpretation of observational results, i.e. to the physics of interstellar gas and dust, to stellar dynamics, to the theory of spiral structures and the dynamics of interstellar gas

  5. Structure and evolution of the interstellar medium

    International Nuclear Information System (INIS)

    Chieze, J.P.

    1985-10-01

    We give a two-dimensional hydrodynamical analysis of HI cloud collisions in order to determine the mass spectrum of diffuse interstellar clouds. We have taken into account evaporation and abrasion by supernova blast waves. The conditions for cloud merging or fragmentation are made precise. Applications to the interstellar medium model of McKee and Ostriker are also discussed. On the other hand, we show that molecular clouds belong to a one-parameter family which can be identified with the sequence of gravitationally unstable states of clouds bounded by the uniform pressure of the coronal phase of the interstellar medium. Hierarchical fragmentation of molecular clouds is analysed in this context [fr

  6. INTERSTELLAR GAS FLOW PARAMETERS DERIVED FROM INTERSTELLAR BOUNDARY EXPLORER-Lo OBSERVATIONS IN 2009 AND 2010: ANALYTICAL ANALYSIS

    International Nuclear Information System (INIS)

    Möbius, E.; Bochsler, P.; Heirtzler, D.; Kucharek, H.; Lee, M. A.; Leonard, T.; Schwadron, N. A.; Wu, X.; Petersen, L.; Valovcin, D.; Wurz, P.; Bzowski, M.; Kubiak, M. A.; Fuselier, S. A.; Crew, G.; Vanderspek, R.; McComas, D. J.; Saul, L.

    2012-01-01

    Neutral atom imaging of the interstellar gas flow in the inner heliosphere provides the most detailed information on physical conditions of the surrounding interstellar medium (ISM) and its interaction with the heliosphere. The Interstellar Boundary Explorer (IBEX) measured neutral H, He, O, and Ne for three years. We compare the He and combined O+Ne flow distributions for two interstellar flow passages in 2009 and 2010 with an analytical calculation, which is simplified because the IBEX orientation provides observations at almost exactly the perihelion of the gas trajectories. This method allows separate determination of the key ISM parameters: inflow speed, longitude, and latitude, as well as temperature. A combined optimization, as in complementary approaches, is thus not necessary. Based on the observed peak position and width in longitude and latitude, inflow speed, latitude, and temperature are found as a function of inflow longitude. The latter is then constrained by the variation of the observed flow latitude as a function of observer longitude and by the ratio of the widths of the distribution in longitude and latitude. Identical results are found for 2009 and 2010: an He flow vector somewhat outside previous determinations (λ_ISM∞ = 79.0° +3.0°/−3.5°, β_ISM∞ = −4.9° ± 0.2°, V_ISM∞ = 23.5 +3.0/−2.0 km s⁻¹, T_He = 5000–8200 K), suggesting a larger inflow longitude and lower speed. The O+Ne temperature range, T_O+Ne = 5300–9000 K, is found to be close to the upper range for He and consistent with an isothermal medium for all species within current uncertainties.

  7. Operation Cost Minimization of Droop-Controlled DC Microgrids Based on Real-Time Pricing and Optimal Power Flow

    DEFF Research Database (Denmark)

    Li, Chendan; de Bosio, Federico; Chaudhary, Sanjay Kumar

    2015-01-01

    In this paper, an optimal power flow problem is formulated in order to minimize the total operation cost by considering real-time pricing in DC microgrids. Each generation resource in the system, including the utility grid, is modeled in terms of operation cost, which combines the cost... The problem is solved in a heuristic way by using genetic algorithms. In order to test the proposed algorithm, a six-bus droop-controlled DC microgrid is used as a case study. The obtained simulation results show that under variable renewable generation, load, and electricity prices, the proposed method can...
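    For intuition about the cost-minimization objective, a single-bus, loss-free simplification can be solved exactly by merit-order dispatch (the paper solves the full droop-controlled optimal power flow with genetic algorithms; the sources, prices, and limits below are illustrative assumptions):

```python
def merit_order_dispatch(prices, limits, demand):
    """Greedy merit-order dispatch: fill demand from the cheapest
    source first, respecting each source's capacity limit.  Exact only
    for this simplified single-bus, loss-free case; the full OPF with
    droop control and network constraints needs a real solver."""
    order = sorted(range(len(prices)), key=lambda i: prices[i])
    output = [0.0] * len(prices)
    remaining = demand
    for i in order:
        take = min(limits[i], remaining)
        output[i] = take
        remaining -= take
        if remaining <= 0:
            break
    return output, sum(p * c for p, c in zip(output, prices))

prices = [0.02, 0.15, 0.08]   # PV, utility grid, battery ($/kWh)
limits = [3.0, 5.0, 2.0]      # capacity of each source (kW)
dispatch, cost = merit_order_dispatch(prices, limits, demand=6.0)
```

    Under real-time pricing the `prices` vector changes each period, so the cost-optimal dispatch shifts between the local resources and the utility grid, which is exactly the behavior the paper's simulations demonstrate.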

  8. On optimal upgrade level for used products under given cost structures

    International Nuclear Information System (INIS)

    Shafiee, Mahmood; Finkelstein, Maxim; Chukova, Stefanka

    2011-01-01

    In spite of the growing share of the second-hand market, customers of used products often encounter the following three problems: (a) they are uncertain regarding the durability and performance of these products due to a lack of information on the item's past usage and maintenance history, (b) they are uncertain about the accurate pricing of warranties and the post-warranty repair costs, and (c) sometimes, right after the sale, used items may have a high failure rate and could be harmful to their new owner. Due to these problems, dealers currently carry out actions such as overhaul and upgrade of used products before their release. Reliability improvement for used products, which is closely related to the concept of warranty, is a relatively new concept and has received very limited attention. This paper develops a stochastic model which results in the derivation of the optimal expected upgrade level under given structures of the profit and failure rate functions. We provide a numerical study to illustrate our results.
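    A toy version of the trade-off (all dynamics and costs below are hypothetical, not the paper's formulation) makes the optimal-upgrade idea concrete: a higher upgrade level costs more up front but reduces expected warranty repairs.

```python
import math

# Illustrative toy model (not the paper's exact formulation): a used item of
# age A has power-law failure intensity lam(t) = b * t**(p-1).  An upgrade of
# level u in [0, 1] rejuvenates the item to virtual age (1 - u) * A at a
# convex cost, and the dealer pays for all failures during a warranty period
# W.  The optimal upgrade level balances the two costs.

A, W = 5.0, 2.0              # item age and warranty length (years)
b, p = 0.4, 2.5              # hypothetical failure-law parameters
c_up, c_rep = 1500.0, 200.0  # upgrade cost scale and cost per repair

def expected_cost(u):
    v = (1.0 - u) * A                       # virtual age after upgrade
    # expected number of failures in (v, v + W] for intensity b * t**(p-1)
    n_fail = b * ((v + W) ** p - v ** p) / p
    return c_up * u ** 2 + c_rep * n_fail   # upgrade cost + expected repairs

# grid search for the optimal upgrade level
u_star = min((i / 100 for i in range(101)), key=expected_cost)
print(f"optimal upgrade level u* = {u_star:.2f}, "
      f"expected cost = {expected_cost(u_star):.0f}")
```

    With these made-up numbers the optimum is interior: a partial upgrade beats both "sell as is" (u = 0) and "full overhaul" (u = 1).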

  9. D1+ Simulator: A cost and risk optimized approach to nuclear power plant simulator modernization

    International Nuclear Information System (INIS)

    Wischert, W.

    2006-01-01

    D1-Simulator has been operated by Kraftwerks-Simulator-Gesellschaft (KSG) and Gesellschaft für Simulatorschulung (GfS) at the Simulator Centre in Essen since 1977. The full-scope control room training simulator, used for Kernkraftwerk Biblis (KWB), is based on a PDP-11 hardware platform and is mainly programmed in ASSEMBLER language. The simulator has maintained a continuously high availability throughout the years thanks to specialized hardware and software support from the KSG maintenance team. Nevertheless, D1-Simulator reveals substantial limitations with respect to computer capacity and spares and suffers progressively from the non-availability of hardware replacement parts. In order to ensure long-term maintainability within the framework of the consensus on nuclear energy, a two-year refurbishing program has been launched by KWB focusing on quality and budgetary aspects. The so-called D1+ Simulator project is based on the re-use of validated data from existing simulators. Allowing for flexible project management methods, the project outlines a cost- and risk-optimized approach to Nuclear Power Plant (NPP) simulator modernization. D1+ Simulator is being built by KSG/GfS in close collaboration with KWB and the simulator vendor THALES by re-using a modern hardware and software development environment from D56-Simulator, used by Kernkraftwerk Obrigheim (KWO) before its decommissioning in 2005. The simulator project, launched in 2004, is expected to be completed by the end of 2006. (author)

  10. Cost Benefit Optimization of the Israeli Medical Diagnostic X-Ray Exposure

    International Nuclear Information System (INIS)

    Ben-Shlomo, A.; Shlesinger, T.; Shani, G.; Kushilevsky, A.

    1999-01-01

    Diagnostic and therapeutic radiology plays a major role in modern medicine. A preliminary survey was carried out during 1997 on 3 major Israeli hospitals in order to assess the extent of exposure of the population to medical x-rays (1). The survey found that the annual collective dose of the Israeli population from x-ray medical imaging procedures (excluding radiotherapy) is about 7,500 man-Sv. The results of the survey were analyzed in order to: 1. Carry out a cost-benefit optimization procedure related to the means that should be used to reduce the exposure of Israeli patients undergoing x-ray procedures. 2. Establish a set of practical recommendations to reduce the x-ray radiation exposure of patients and to increase image quality. 3. Establish a number of basic rules to be utilized by health policy makers in Israel. Based on the ICRP-60 linear model risk assessments (2), the annual risk arising from the 7,500 man-Sv medical x-ray collective dose in Israel has been found to be a potential addition of 567 cancer cases per year, 244 of which would be fatal, and the potential additional birth of 3-4 children with severe genetic damage per year. This assessment takes into account the differential risk and the collective dose according to the age distribution of the exposed Israeli population, and excludes patients with chronic diseases.

  11. Inclusion of tank configurations as a variable in the cost optimization of branched piped-water networks

    Science.gov (United States)

    Hooda, Nikhil; Damani, Om

    2017-06-01

    The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also contain intermediate tanks that act as buffers between incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically, the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is made manually in an ad hoc fashion before any optimization is done. It is therefore desirable to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration into the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
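    The discrete sizing problem at the core of this formulation can be sketched on a toy two-link network. All diameters, costs, flows, and head limits below are invented, and exhaustive search stands in for the paper's integer linear program (which additionally splits links into segments and selects tank configurations):

```python
import itertools

# For each link of a small branched chain, pick one commercial diameter so
# that the residual head at every demand node stays above its minimum, at
# minimum capital cost.  All data are hypothetical.

# candidate diameters: (diameter [m], cost per metre)
DIAMETERS = [(0.10, 250.0), (0.15, 480.0), (0.20, 800.0), (0.25, 1200.0)]

# links: name -> (length [m], flow [m^3/s]); heads chain source -> 1 -> 2
LINKS = {"source->1": (800.0, 0.030), "1->2": (600.0, 0.015)}
SOURCE_HEAD = 60.0                  # head available at the source [m]
MIN_HEAD = {"1": 15.0, "2": 12.0}   # required residual head at demand nodes

def headloss(length, flow, diam, c_hw=130.0):
    """Hazen-Williams friction loss [m] for a link of a single diameter."""
    return 10.67 * length * (flow / c_hw) ** 1.852 / diam ** 4.87

best = None
for choice in itertools.product(DIAMETERS, repeat=len(LINKS)):
    total = sum(d[1] * LINKS[name][0] for d, name in zip(choice, LINKS))
    head, feasible = SOURCE_HEAD, True
    for (diam, _), (name, (length, flow)) in zip(choice, LINKS.items()):
        head -= headloss(length, flow, diam)
        if head < MIN_HEAD.get(name.split("->")[1], 0.0):
            feasible = False
            break
    if feasible and (best is None or total < best[0]):
        best = (total, [d for d, _ in choice])

cost, diams = best
print(f"cheapest feasible design: diameters {diams} at cost {cost:.0f}")
```

    Enumeration is only viable for tiny networks; the ILP formulation scales by keeping the cost linear in the segment lengths of each candidate diameter.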

  12. Optimizing diagnostic workup in the DRG environment: Dynamic algorithms and minimizing radiologic costs may cost your hospital money

    International Nuclear Information System (INIS)

    Saint-Louis, L.A.; Henschke, C.I.; Balter, S.; Whalen, J.P.; Balter, P.

    1987-01-01

    In certain diagnosis-related group (DRG) categories, the availability of sufficient CT scanners or of new equipment, such as MR equipment, can expedite the definitive workup. This reduces the average length of stay and hospital cost. We analyzed the total hospital and radiologic charges by DRG category for all patients admitted to our hospital in 1985 and 1986. Although the cost per procedure is relatively high, the radiologic component is a small percentage of total hospital costs (median, 3%; maximum, <10%). The authors developed alternative diagnostic algorithms for radiologic-intensive DRG categories. Different diagnostic algorithms proposed for the same clinical problems were compared analytically in terms of impact on the hospital (cost, equipment availability, and length of stay). An example is the workup for FUO (fever of unknown origin). The traditional approach uses plain x-rays and gallium scans and resorts to CT only when localizing symptoms are present. An alternative approach is to perform CT only. Although more CT examinations would be required, there is a considerable reduction in the length of hospital stay and in overall charges. Neurologic and thoracic workups are given as examples of classes of problems that can be addressed analytically: sequencing of the workup; prevalence; patient population; resource allocation; and introduction of new imaging modalities.

  13. Cost-effectiveness optimization of a solar hot water heater with integrated storage system

    International Nuclear Information System (INIS)

    Kamaruzzaman Sopian; Syahri, M.; Shahrir, A.; Mohd Yusof Othman; Baharuddin Yatim

    2006-01-01

    Solar processes are generally characterized by high first cost and low operating cost. The basic economic problem is therefore one of comparing a known initial investment with estimated future operating costs. This paper presents the cost-benefit ratio of a solar collector with an integrated storage system. The annual cost (AC) and the annual energy gain (AEG) of the collector are evaluated, and the ratio AC/AEG, the cost-benefit ratio, is presented for different combinations of mass flow rate, solar collector length, and channel depth. Using these cost-effectiveness curves, the user can select optimum design features corresponding to the minimum AC/AEG.

  14. Evidence-based medicine is affordable: the cost-effectiveness of current compared with optimal treatment in rheumatoid and osteoarthritis.

    Science.gov (United States)

    Andrews, Gavin; Simonella, Leonardo; Lapsley, Helen; Sanderson, Kristy; March, Lyn

    2006-04-01

    To determine the cost-effectiveness of averting the burden of disease, we used secondary population data and meta-analyses of various government-funded services and interventions to investigate the costs and benefits of various levels of treatment for rheumatoid arthritis (RA) and osteoarthritis (OA) in adults using a burden-of-disease framework. Population burden was calculated for both diseases in the absence of any treatment as years lived with disability (YLD), ignoring the years of life lost. We then estimated the proportion of burden averted with current interventions, the proportion that could be averted with optimally implemented current evidence-based guidelines, and the direct treatment cost-effectiveness ratio in dollars per YLD averted for both treatment levels. The majority of people with arthritis sought medical treatment. Current treatment for RA averted 26% of the burden, with a cost-effectiveness ratio of $19,000 per YLD averted. Optimal, evidence-based treatment would avert 48% of the burden, with a cost-effectiveness ratio of $12,000 per YLD averted. Current treatment of OA in Australia averted 27% of the burden, with a cost-effectiveness ratio of $25,000 per YLD averted. Optimal, evidence-based treatment would avert 39% of the burden, with an unchanged cost-effectiveness ratio of $25,000 per YLD averted. While the precise dollar costs in each country will differ, the relativities at this level of coverage should remain the same. There is no evidence that closing the gap between evidence and practice would result in a drop in efficiency.

  15. Good Manufacturing Practices (GMP) manufacturing of advanced therapy medicinal products: a novel tailored model for optimizing performance and estimating costs.

    Science.gov (United States)

    Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra

    2013-03-01

    Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions.

  16. Optimized electricity expansions with external costs internalized and risk of severe accidents as a new criterion in the decision analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martin del Campo M, C.; Estrada S, G. J., E-mail: cmcm@fi-b.unam.mx [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico)

    2011-11-15

    The external cost of severe accidents was incorporated as a new element for the assessment of energy technologies in the expansion plans of the Mexican electric generating system. Optimizations of the electric expansions were made by internalizing the external cost into the objective function of the WASP-IV model as a variable cost, and these expansions were compared with the expansion plans that did not internalize them. Average external costs reported by the ExternE project were used for each type of technology and were added to the variable component of the operation and maintenance cost in the study cases in which the externalities were internalized. Special attention was paid to studying the convenience of including nuclear energy in the generating mix. The comparative assessment of six expansion plans was made by means of the Position Vector of Minimum Regret Analysis (PVMRA) decision analysis tool. The expansion plans were ranked according to seven decision criteria which consider internal costs, economic impact associated with incremental fuel prices, diversity, external costs, foreign capital fraction, carbon-free fraction, and external costs of severe accidents. A set of data for the calculation of the last criterion was obtained from a report of the European Commission. We found that with the external costs included in the optimization process of WASP-IV, better electric expansion plans, with lower total (internal + external) generating costs, were obtained. On the other hand, the plans which included the participation of nuclear power plants were in general relatively more attractive than the plans that did not. (Author)

  17. Consistent cost curves for identification of optimal energy savings across industry and residential sectors

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik; Baldini, Mattia

    Several challenges arise with constructing and applying the cost curves in modelling: • Cost curves do not have the same cost interpretation across economic subsectors and end-use technologies (investment cost for equipment varies – including/excluding installation – adaptation costs – indirect production costs) • The time issue of when the costs are incurred and savings accrue (difference in discount rates, both private and social) • The issue of marginal investment in a case of replacement anyway or a full investment in the energy saving technology • Implementation costs (and probability of investment) differ across sectors • Cost saving options are not additive, meaning that marginal energy savings from one option depend on what other options are implemented. We address the importance of these issues and illustrate with Danish cases how large the difference in savings cost curves can be if different methodologies are used. For example...

  18. Cost-benefit study of consumer product take-back programs using IBM's WIT reverse logistics optimization tool

    Science.gov (United States)

    Veerakamolmal, Pitipong; Lee, Yung-Joon; Fasano, J. P.; Hale, Rhea; Jacques, Mary

    2002-02-01

    In recent years, there has been increased focus by regulators, manufacturers, and consumers on the issue of product end-of-life management for electronics. This paper presents an overview of a conceptual study designed to examine the costs and benefits of several different Product Take Back (PTB) scenarios for used electronics equipment. The study utilized a reverse logistics supply chain model to examine the effects of several different factors in PTB programs. The model was built using the IBM supply chain optimization tool known as WIT (Watson Implosion Technology). Using the WIT tool, we were able to determine a theoretical optimal cost scenario for PTB programs. The study was designed to assist IBM internally in determining theoretical optimal Product Take Back program models and in identifying potential incentives for increasing participation rates.

  19. Abundances in the diffuse interstellar medium

    International Nuclear Information System (INIS)

    Harris, A.W.

    1988-04-01

    The wealth of interstellar absorption line data obtained with the Copernicus and IUE satellites has opened up a new era in studies of the interstellar gas. It is now well established that certain elements, generally those with high condensation temperatures, are substantially under-abundant in the gas-phase relative to total solar or cosmic abundances. This depletion of elements is due to the existence of solid material in the form of dust grains in the interstellar medium. Surprisingly, however, recent surveys indicate that even volatile elements such as Zn and S are significantly depleted in many sight lines. Developments in this field which have been made possible by the large base of UV interstellar absorption line data built up over recent years are reviewed and the implications of the results for our understanding of the physical processes governing depletion are discussed. (author)

  20. The composition of circumstellar and interstellar dust

    NARCIS (Netherlands)

    Tielens, AGGM; Woodward, CE; Biscay, MD; Shull, JM

    2001-01-01

    A large number of solid dust components have been identified through analysis of stardust recovered from meteorites, and analysis of IR observations of circumstellar shells and the interstellar medium. These include graphite, hydrogenated amorphous carbon, diamond, PAHs, silicon-, iron-, and

  1. Experimental interstellar organic chemistry: Preliminary findings

    Science.gov (United States)

    Khare, B. N.; Sagan, C.

    1971-01-01

    In a simulation of interstellar organic chemistry in dense interstellar clouds or on grain surfaces, formaldehyde, water vapor, ammonia and ethane are deposited on a quartz cold finger and ultraviolet-irradiated in high vacuum at 77 K. The HCHO photolytic pathway, which produces an aldehyde radical and a superthermal hydrogen atom, initiates solid phase chain reactions leading to a range of new compounds, including methanol, ethanol, acetaldehyde, acetonitrile, acetone, methyl formate, and possibly formic acid. Higher nitriles are anticipated. Genetic relations among these interstellar organic molecules (e.g., the Cannizzaro and Tishchenko reactions) must exist. Some of them, rather than being synthesized from smaller molecules, may be degradation products of larger organic molecules, such as hexamethylenetetramine, which are candidate constituents of the interstellar grains. The experiments reported here may also be relevant to cometary chemistry.

  2. Cost and performance optimization of natural draft dry cooling towers using genetic algorithm. Paper no. IGEC-1-002

    International Nuclear Information System (INIS)

    Shokuhmand, H.; Ghaempanah, B.

    2005-01-01

    In this paper the cost-performance optimization of natural draft dry cooling towers with a specific kind of heat exchanger, known as Forgo T60, has been investigated. These cooling towers are used in combined-cycle and steam-cycle power plants. The optimization has been done using a genetic algorithm. The objective function has two parts: minimizing the cost and maximizing the performance. In the first part the geometrical and operating parameters are defined, and in the second part the performance of the designed tower for different ambient temperatures during a year is calculated, considering the characteristic curve of the turbine. The applied genetic algorithm has been tuned using data from some working power cycles. The results show it is possible to find an optimum for all design parameters; however, the result depends strongly on the accuracy of the cost analysis. (author)
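    The two-part objective lends itself to a compact genetic-algorithm sketch. Everything below is hypothetical: the design variables, bounds, and cost/performance terms are invented stand-ins for the Forgo T60 model, and the GA uses simple truncation selection with blend crossover and clamped Gaussian mutation:

```python
import random

random.seed(3)  # deterministic run for this sketch

# Each individual encodes two design variables, e.g. tower height h [m] and
# heat-exchanger frontal area a [m^2].  The fitness to minimise combines a
# capital-cost term (grows with size) with a performance-shortfall penalty
# (shrinks with size), mirroring the paper's two-part objective.

BOUNDS = [(60.0, 180.0), (2000.0, 8000.0)]   # (h, a), hypothetical ranges

def objective(ind):
    h, a = ind
    capital = 0.1 * h ** 1.5 + 0.04 * a      # cost part
    shortfall = 6.4e7 / (h * a)              # performance-penalty part
    return capital + shortfall

def mutate(ind, rate=0.3):
    # Gaussian perturbation, clamped back into the design bounds
    return [min(hi, max(lo, x + random.gauss(0, 0.1 * (hi - lo))))
            if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

def crossover(p, q):
    w = random.random()                      # blend (arithmetic) crossover
    return [w * x + (1 - w) * y for x, y in zip(p, q)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(40)]
for _ in range(60):                          # generations
    pop.sort(key=objective)
    elite = pop[:10]                         # truncation selection + elitism
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(30)]

best = min(pop, key=objective)
print(f"best design: h={best[0]:.1f} m, a={best[1]:.0f} m^2, "
      f"objective={objective(best):.1f}")
```

    Keeping the elite unmutated guarantees the best objective value never worsens between generations; the paper's point about cost-analysis accuracy shows up here as sensitivity of the optimum to the invented coefficients.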

  3. Optimal control of switching time in switched stochastic systems with multi-switching times and different costs

    Science.gov (United States)

    Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian

    2017-08-01

    In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be used directly in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
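    A scalar toy instance (all numbers hypothetical, and a finite-difference gradient standing in for the paper's variational formula) shows how the gradient of the cost with respect to a single switching time locates the optimum:

```python
import math

# The state runs subsystem 1 (dx/dt = a1*x, fast decay but high running
# cost q1) on [0, tau], then subsystem 2 (slow decay, cheap) on [tau, T].
# The cost J(tau) = integral of q_i * x(t)^2 has a closed form for scalar
# linear dynamics, and with these numbers dJ/dtau crosses zero exactly once,
# so the optimal switching instant can be pinned down from the gradient sign.

a1, a2, q1, q2, x0, T = -2.0, -0.2, 4.0, 1.0, 1.0, 3.0

def cost(tau):
    j1 = q1 * x0**2 * (math.exp(2*a1*tau) - 1) / (2*a1)
    j2 = (q2 * x0**2 * math.exp(2*a1*tau)
          * (math.exp(2*a2*(T - tau)) - 1) / (2*a2))
    return j1 + j2

def dcost(tau, h=1e-6):
    return (cost(tau + h) - cost(tau - h)) / (2*h)   # central difference

lo, hi = 0.0, T          # dJ/dtau is negative at 0 and positive at T here
for _ in range(60):      # bisection on the sign of the gradient
    mid = 0.5 * (lo + hi)
    if dcost(mid) < 0:
        lo = mid
    else:
        hi = mid
tau_star = 0.5 * (lo + hi)
print(f"optimal switching time tau* = {tau_star:.3f}, "
      f"J(tau*) = {cost(tau_star):.4f}")
```

    For these constants the stationarity condition e^(2*a2*(T - tau)) = (10*q2 - q1)/(9*q2) gives tau* ≈ 1.986, which the gradient bisection reproduces.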

  4. Update on an Interstellar Asteroid

    Science.gov (United States)

    Kohler, Susanna

    2018-01-01

    What's the news coming from the research world on the interstellar asteroid visitor, asteroid 1I/'Oumuamua? Read on for an update from a few of the latest studies. What is 'Oumuamua? In late October 2017, the discovery of minor planet 1I/'Oumuamua was announced. This body, which researchers first labeled as a comet and later revised to an asteroid, had just zipped around the Sun and was already in the process of speeding away when we trained our telescopes on it. Its trajectory, however, marked it as being a visitor from outside our solar system: the first known visitor of its kind. Since 'Oumuamua's discovery, scientists have been gathering as many observations of this body as possible before it vanishes into the distance. Simultaneously, theorists have leapt at the opportunity to explain its presence and the implications its passage has for our understanding of our surroundings. Here we present just a few of the latest studies that have been published on this first detected interstellar asteroid, including several timely studies published in our new journal, Research Notes of the AAS. [Figure: The galactic velocity of 'Oumuamua does not coincide with that of any of the nearest stars to us. Mamajek 2018] Where Did 'Oumuamua Come From? Are we sure 'Oumuamua didn't originate in our solar system and get scattered into a weird orbit? Jason Wright (The Pennsylvania State University) demonstrates via a series of calculations that no known solar system body could have scattered 'Oumuamua onto its current orbit, nor could any still-unknown object bound to our solar system. Eric Mamajek (Caltech and University of Rochester) shows that the kinematics of 'Oumuamua are consistent with what we might expect of interstellar field objects, though he argues that its kinematics suggest it is unlikely to have originated from many of the nearest stellar systems. What Are 'Oumuamua's Properties? [Figure: 'Oumuamua's light curve. Bannister et al. 2017] A team of University of Maryland scientists led by Matthew Knight captured a light curve of 'Oumuamua using

  5. Projected reduction in healthcare costs in Belgium after optimization of iodine intake: impact on costs related to thyroid nodular disease

    OpenAIRE

    Vandevijvere, Stefanie; Annemans, Lieven; Van Oyen, Herman; Tafforeau, Jean; Moreno-Reyes, Rodrigo

    2010-01-01

    Background: Several surveys in the last 50 years have repeatedly indicated that Belgium is affected by mild iodine deficiency. Within the framework of the national food and health plan in Belgium, a selective, progressive, and monitored strategy was proposed in 2009 to optimize iodine intake. The objective of the present study was to perform a health economic evaluation of the consequences of inadequate iodine intake in Belgium, focusing on undisputed and measurable health outcomes such as th...

  6. Cost-optimal power system extension under flow-based market coupling and high shares of photovoltaics

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon; Jaegemann, Cosima; Lindenberger, Dietmar [Koeln Univ. (Germany). Inst. of Energy Economics; Cherevatskiy, Stanislav; Troester, Eckehard; Brown, Tom [Energynautics GmbH, Langen (Germany)

    2012-07-01

    Electricity market models, implemented as dynamic programming problems, have been applied widely to identify possible pathways towards a cost-optimal and low-carbon electricity system. However, the joint optimization of generation and transmission remains challenging, mainly due to the fact that different characteristics and rules apply to commercial and physical exchanges of electricity in meshed networks. This paper presents a methodology that allows power generation and transmission infrastructures to be optimized jointly through an iterative approach based on power transfer distribution factors (PTDFs). As PTDFs are linear representations of the physical load flow equations, they can be implemented in a linear programming environment suitable for large-scale problems such as the European power system. The algorithm iteratively updates the PTDFs when grid infrastructures are modified due to cost-optimal extension and thus yields an optimal solution with a consistent representation of physical load flows. The method is demonstrated on a simplified three-node model, where it is found to be stable and convergent. It is then scaled to the European level in order to find the optimal power system infrastructure development under the prescription of strongly decreasing CO2 emissions in Europe until 2050, with a specific focus on photovoltaic (PV) power. (orig.)
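    The PTDF construction underlying this approach can be sketched for a three-node network. The line susceptances and the injection pattern below are assumed for illustration, and a dense matrix inverse stands in for the sparse factorizations a continental-scale model would need:

```python
import numpy as np

# PTDFs linearise the DC load flow: each line's flow becomes a linear
# function of nodal injections, so physical flows can enter a linear program
# as constraints.  Re-deriving the matrix after each grid reinforcement is
# the iterative step described in the abstract.

lines = [(0, 1), (1, 2), (0, 2)]        # from-bus, to-bus
b = np.array([10.0, 8.0, 5.0])          # line susceptances [p.u.], assumed
n_bus, slack = 3, 0

A = np.zeros((len(lines), n_bus))       # branch-bus incidence matrix
for k, (i, j) in enumerate(lines):
    A[k, i], A[k, j] = 1.0, -1.0

B_node = A.T @ np.diag(b) @ A           # nodal susceptance matrix
keep = [i for i in range(n_bus) if i != slack]
B_red = B_node[np.ix_(keep, keep)]      # drop the slack row and column

ptdf = np.zeros((len(lines), n_bus))    # slack column stays zero
ptdf[:, keep] = np.diag(b) @ A[:, keep] @ np.linalg.inv(B_red)

inj = np.array([0.0, 1.0, -1.0])        # transfer 1 p.u. from bus 1 to bus 2
flows = ptdf @ inj
print("PTDF =\n", ptdf.round(3))
print("line flows for injection", inj, "->", flows.round(3))
```

    The resulting flows split the transfer between the direct line 1-2 and the detour through bus 0 in proportion to the susceptances, and they balance at every bus.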

  7. Newly detected molecules in dense interstellar clouds

    Science.gov (United States)

    Irvine, William M.; Avery, L. W.; Friberg, P.; Matthews, H. E.; Ziurys, L. M.

    Several new interstellar molecules have been identified, including C2S, C3S, C5H, C6H and (probably) HC2CHO in the cold, dark cloud TMC-1; in addition, the first interstellar phosphorus-containing molecule, PN, has been discovered in the Orion "plateau" source. Further results include observations of 13C3H2 and C3HD, and the first detection of HCOOH (formic acid) in a cold cloud.

  8. Carbon chain molecules in interstellar clouds

    International Nuclear Information System (INIS)

    Winnewisser, G.; Walmsley, C.M.

    1979-01-01

    A survey of the distribution of long carbon chain molecules in interstellar clouds shows that their abundances are correlated. The various formation schemes for these molecules are discussed. It is concluded that the ion-molecule type formation mechanisms are more promising than their competitors. They also have the advantage of allowing predictions which can be tested by observations. Acetylene (C2H2) and diacetylene (HCCCCH) may be very abundant in interstellar clouds. (Auth.)

  9. Design, Development and Optimization of a Low Cost System for Digital Industrial Radiology

    International Nuclear Information System (INIS)

    2013-01-01

    regional training courses in which participants from Member States were given training in DIR techniques. The IAEA also supported establishing facilities for DIR techniques in some Member States. Realizing the need for easy construction and assembly of a low cost, more economically viable system for DIR technology, the IAEA conducted a coordinated research project (CRP) during 2007-2010 for research and development in the field of digital radiology, with the participation of 12 Member State laboratories. The current publication on design, development and optimization of a low cost DIR system is based on the findings of this CRP and inputs from other experts. The report provides guidelines to enable interested Member States to build their own DIR system in an affordable manner

  10. Low cost and conformal microwave water-cut sensor for optimizing oil production process

    KAUST Repository

    Karimi, Muhammad Akram

    2015-08-01

    Efficient oil production and refining processes require the precise measurement of the water content in oil (i.e., water-cut), which is extracted out of a production well as a byproduct. Traditional water-cut (WC) laboratory measurements are precise but incapable of providing real-time information, while recently reported in-line WC sensors (both in research and industry) are usually incapable of sensing the full WC range (0-100%), and are bulky, expensive, and non-scalable for the variety of pipe sizes used in the oil industry. This work presents a novel implementation of a planar microwave T-resonator for fully non-intrusive in situ WC sensing over the full range of operation, i.e., 0-100%. As opposed to non-planar resonators, the choice of a planar resonator has enabled its direct implementation on the pipe surface using low-cost fabrication methods. The WC sensor makes use of the series resonance introduced by a λ/4 open shunt stub placed in the middle of a microstrip line. The detection mechanism is based on the measurement of the T-resonator's resonance frequency, which varies with the relative percentage of oil and water (due to the difference in their dielectric properties). In order to implement the planar T-resonator based sensor on the curved surface of the pipe, a novel approach utilizing two ground planes is proposed in this work. The innovative use of dual ground planes makes this sensor scalable to the wide range of pipe sizes present in the oil industry. The design and optimization of this sensor were performed in an electromagnetic finite element method (FEM) solver, the High Frequency Structure Simulator (HFSS), and the dielectric properties of oil, water, and their emulsions at different WCs used in the simulation model were measured using a SPEAG dielectric assessment kit (DAK-12). The simulation results were validated through characterization of fabricated prototypes. Initial rapid prototyping was completed using copper tape, after which a
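    The sensing principle can be illustrated with a back-of-envelope calculation. The stub length, permittivities, and the choice of the Looyenga mixing rule below are assumptions for illustration, not the paper's design values:

```python
import math

# A quarter-wave open stub resonates near f = c / (4 * L * sqrt(eps_eff)).
# Because the effective permittivity of an oil/water emulsion rises steeply
# with water fraction, the resonance frequency drops as water-cut increases,
# which is what the T-resonator measures.

C0 = 3.0e8                       # speed of light [m/s]
L = 0.05                         # stub length [m], hypothetical
EPS_OIL, EPS_WATER = 2.2, 80.0   # typical relative permittivities

def eps_mix(wc):
    """Looyenga mixing rule (one common choice) for water fraction wc."""
    return (wc * EPS_WATER ** (1/3) + (1 - wc) * EPS_OIL ** (1/3)) ** 3

def f_res(wc):
    return C0 / (4 * L * math.sqrt(eps_mix(wc)))

for wc in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"water-cut {wc:4.0%}: eps_eff = {eps_mix(wc):6.1f}, "
          f"f_res = {f_res(wc)/1e9:5.2f} GHz")
```

    In a real microstrip the fields are only partly inside the liquid, so eps_eff and the frequency shift would be smaller than this fully filled estimate; the monotone frequency-vs-WC trend is the point.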

  11. An optimized OPC and MDP flow for reducing mask write time and mask cost

    Science.gov (United States)

    Yang, Ellyn; Li, Cheng He; Park, Se Jin; Zhu, Yu; Guo, Eric

    2010-09-01

    In the process of optical proximity correction (OPC), layout edges or fragments migrate to their proper positions in order to minimize edge placement error (EPE). During this fragment migration, several factors other than EPE can also be taken into account as part of the cost function for optimal fragment displacement. Several such factors are devised in favor of OPC stability, which can accommodate high mask error enhancement factor (MEEF), lack of process window, catastrophic pattern failures such as pinching/bridging, and improper fragmentation. As technology nodes become finer, a conflict arises between OPC accuracy and stability. For metal layers especially, OPC has focused on stability at the expense of accuracy. For this purpose, several techniques have been introduced: target smoothing, process-window-aware OPC, model-based retargeting, and adaptive OPC. By utilizing these techniques, OPC enables more stable patterning instead of realizing the design target exactly on the wafer. Inevitably, post-OPC layouts become more complicated because these techniques invoke additional edges or fragments prior to correction or during OPC iteration. As a result, the jogs of a post-OPC layer can increase dramatically, which results in a huge shot count after data fracturing. In other words, there is a trade-off between data complexity and the various methods for OPC stability. In this paper, these relationships have been investigated with respect to several technology nodes. Mask shot count reduction is achieved by reducing the number of jogs for which the EPE difference is within a pre-specified value. The effect of jog smoothing on OPC output, in terms of OPC performance and mask data preparation, was studied quantitatively for the respective technology nodes.

  12. Optimal real time cost-benefit based demand response with intermittent resources

    International Nuclear Information System (INIS)

    Zareen, N.; Mustafa, M.W.; Sultana, U.; Nadia, R.; Khattak, M.A.

    2015-01-01

    The ever-increasing price of conventional energy resources and the related environmental concerns have forced the exploration of alternative energy sources. The inherent uncertainty of power generation and demand, both strongly influenced by the electricity market, has posed severe challenges for DRPs (Demand Response Programs). The success of such uncertain energy systems under new market structures is critically determined by the advancement of innovative technical and financial tools. The recent exponential growth of DG (distributed generation) demands both grid reliability and financial cost-benefit analysis for deregulated electricity market stakeholders. Based on SGT (signaling game theory), this paper presents a novel user-aware demand-management approach in which prices are coupled with grid-condition uncertainties to manage peak residential loads. The degree of information disturbance is considered a key factor for evaluating electricity bidding mechanisms in the presence of independent multi-generation resources and price-elastic demand. A correlation between the cost-benefit price and the variable reliability of the grid is established under uncertain generation and demand conditions. The impacts of the strategies on load shape, customer benefit, and the reduction of energy consumption are inspected and compared with Time-of-Use-based DRPs. Simulation results show that the proposed DRP can significantly reduce or even eliminate peak-hour energy consumption, leading to a substantial rise in revenues, with an 18% increase in load reduction and a considerable improvement in system reliability. - Highlights: • Proposed an optimal real-time cost-benefit-based demand response model. • Used signaling game theory for information disturbances in a deregulated market. • Introduced a correlation between the cost-benefit price and variable grid reliability. • Derived robust bidding strategies for successful utility/customer participation.

  13. An Optimal Cost Effectiveness Study on Zimbabwe Cholera Seasonal Data from 2008–2011

    Science.gov (United States)

    Sardar, Tridip; Mukhopadhyay, Soumalya; Bhowmick, Amiya Ranjan; Chattopadhyay, Joydev

    2013-01-01

    The incidence of cholera outbreaks is a serious issue in underdeveloped and developing countries. In Zimbabwe, after the massive outbreak in 2008–09, cholera cases and deaths have been reported every year from some provinces. The substantial number of reported cholera cases in some provinces during and after the 2008–09 epidemic indicates a plausible presence of seasonality in cholera incidence in those regions. We formulate a compartmental mathematical model with a periodic slow-fast transmission rate to study such recurrent occurrences and fit the model to cumulative cholera cases and deaths for different provinces of Zimbabwe from the beginning of the 2008–09 outbreak to June 2011. Daily and weekly reported cholera incidence data were collected from the Zimbabwe epidemiological bulletin, Zimbabwe Daily Cholera Updates, and the Office for the Coordination of Humanitarian Affairs Zimbabwe (OCHA, Zimbabwe). For each province, the basic reproduction number (R0) in a periodic environment is estimated. To the best of our knowledge, this is probably a pioneering attempt to estimate R0 in a periodic environment using a real-life cholera epidemic data set for Zimbabwe. Our estimates of R0 agree with previous estimates for some provinces but differ significantly for Bulawayo, Mashonaland West, Manicaland, Matabeleland South, and Matabeleland North. A seasonal trend in cholera incidence is observed in Harare, Mashonaland West, Mashonaland East, Manicaland, and Matabeleland South. Our results suggest that slow transmission is a dominating factor for cholera transmission in most of these provinces. Our model projects cholera cases and deaths from the end of the 2008–09 epidemic to January 1, 2012. We also determine an optimal cost-effective control strategy among the four government-undertaken interventions, namely promoting hand hygiene & clean water distribution, vaccination, treatment, and sanitation, for each province. PMID:24312540

  14. Optimal distributed energy resources and the cost of reduced greenhouse gas emissions in a large retail shopping centre

    International Nuclear Information System (INIS)

    Braslavsky, Julio H.; Wall, Josh R.; Reedman, Luke J.

    2015-01-01

    Highlights: • Optimal options for distributed energy resources are analysed for a shopping centre. • A multiobjective optimisation model is formulated and solved using DER-CAM. • Cost and emission trade-offs are compared in four key optimal investment scenarios. • Moderate investment in DER technologies lowers emissions by 29.6% and costs by 8.5%. • Larger investment in DER technologies lowers emissions by 72% at 47% higher costs. - Abstract: This paper presents a case study on optimal options for distributed energy resource (DER) technologies to reduce greenhouse gas emissions in a large retail shopping centre located in Sydney, Australia. Large retail shopping centres take the largest share of energy consumed by all commercial buildings, and present a strong case for adoption of DER technologies to reduce energy costs and emissions. However, the complexity of optimally designing and operating DER systems has hindered their widespread adoption in practice. This paper examines and demonstrates the value of DER in reducing the carbon footprint of the shopping centre by formulating and solving a multiobjective optimisation problem using the Distributed Energy Resources Customer Adoption Model (DER-CAM) tool. An economic model of the shopping centre is developed in DER-CAM using on-site-specific demand, tariffs, and performance data for each DER technology option available. Four key optimal DER technology investment scenarios are then analysed by comparing: (1) solution trade-offs of costs and emissions, (2) the cost of reduced emissions attained in each investment scenario, and (3) investment benefits with respect to the business-as-usual scenario. The analysis shows that a moderate investment in combined cooling, heat and power (CCHP) technology alone can reduce annual energy costs by 8.5% and carbon dioxide-equivalent emissions by 29.6%. A larger investment in CCHP technology, in conjunction with on-site solar photovoltaic (PV) generation, can deliver
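
The cost/emissions trade-off quoted in the highlights can be explored with a simple weighted-sum scalarization of the scenarios. The percentage changes come from the abstract; the scalarization itself, and the assumption that the "47% higher costs" figure is measured against business-as-usual, are our own illustration, not part of the DER-CAM study.

```python
# Scenarios from the abstract: (annual cost change %, emissions change %)
# relative to business-as-usual (BAU). Treating +47% cost as relative to
# BAU is an assumption made for this sketch.
scenarios = {
    "BAU":       (0.0,    0.0),
    "CCHP only": (-8.5,  -29.6),   # moderate investment
    "CCHP + PV": (+47.0, -72.0),   # larger investment
}

def best(weight_emissions):
    """Weighted-sum scalarization of the cost/emissions trade-off;
    weight_emissions in [0, 1] expresses how much the decision-maker
    values emissions reduction relative to cost."""
    def score(v):
        cost, emis = v
        return (1 - weight_emissions) * cost + weight_emissions * emis
    return min(scenarios, key=lambda k: score(scenarios[k]))

for w in (0.0, 0.5, 1.0):
    print(f"emissions weight {w:.1f} -> {best(w)}")
```

A purely cost-driven operator picks the moderate CCHP investment (it lowers both cost and emissions), while only a strongly emissions-weighted objective justifies the larger CCHP + PV investment.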

  15. Optimal Vehicle Design Using the Integrated System and Cost Modeling Tool Suite

    Science.gov (United States)

    2010-08-01

    Tools and topics referenced in this record (the abstract itself is garbled in the source): CEA, an SRM model, POST, ACEIT, an inflation model, rotor blade design, Microsoft Project, ATSV, STK, and SOAP; space vehicle design (SMAD), space vehicle propulsion, orbit propagation, space vehicle costing (ACEIT), new small-sat model development and production cost, an O&M cost module, radiation exposure, radiation detector response, reliability, OML, availability, and risk.

  16. Cost related sensitivity analysis for optimal operation of a grid-parallel PEM fuel cell power plant

    Science.gov (United States)

    El-Sharkh, M. Y.; Tanrioven, M.; Rahman, A.; Alam, M. S.

    Fuel cell power plants (FCPP) as a combined source of heat, power and hydrogen (CHP&H) can be considered as a potential option to supply both thermal and electrical loads. Hydrogen produced from the FCPP can be stored for future use of the FCPP or can be sold for profit. In such a system, tariff rates for purchasing or selling electricity, the fuel cost for the FCPP/thermal load, and hydrogen selling price are the main factors that affect the operational strategy. This paper presents a hybrid evolutionary programming and Hill-Climbing based approach to evaluate the impact of change of the above mentioned cost parameters on the optimal operational strategy of the FCPP. The optimal operational strategy of the FCPP for different tariffs is achieved through the estimation of the following: hourly generated power, the amount of thermal power recovered, power trade with the local grid, and the quantity of hydrogen that can be produced. Results show the importance of optimizing system cost parameters in order to minimize overall operating cost.
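
A minimal sketch of the hill-climbing half of such an approach (the paper hybridizes it with evolutionary programming): hourly FCPP output levels are perturbed at random and a move is kept only if it lowers the combined cost of fuel, hydrogen credit, and grid trade. All tariffs, demands, and cost coefficients below are invented for illustration and are not the paper's data.

```python
import random

# Illustrative hourly data (not from the paper): demand in kW, tariff $/kWh
demand = [40, 35, 30, 50, 80, 90, 70, 60]
tariff = [0.05, 0.05, 0.04, 0.08, 0.12, 0.15, 0.10, 0.07]
FUEL = 0.06        # $/kWh of FCPP electrical output (assumed)
H2_CREDIT = 0.02   # $/kWh credit for co-produced hydrogen (assumed)
CAP = 100.0        # FCPP capacity, kW

def cost(p):
    """Operating cost: FCPP fuel minus hydrogen credit, plus grid
    purchases (or minus sales) at the hourly tariff."""
    total = 0.0
    for pi, d, t in zip(p, demand, tariff):
        total += pi * (FUEL - H2_CREDIT) + (d - pi) * t
    return total

def hill_climb(steps=2000, seed=1):
    rng = random.Random(seed)
    p = [0.0] * len(demand)          # start with all power from the grid
    best = cost(p)
    for _ in range(steps):
        q = list(p)
        h = rng.randrange(len(q))    # perturb one hour's FCPP output
        q[h] = min(CAP, max(0.0, q[h] + rng.uniform(-10, 10)))
        c = cost(q)
        if c < best:                 # accept only improving moves
            p, best = q, c
    return p, best

p, best_cost = hill_climb()
print(f"optimized daily operating cost: ${best_cost:.2f}")
```

The structure mirrors the sensitivity question studied in the paper: changing FUEL, H2_CREDIT, or the tariff vector shifts the hours in which the FCPP runs.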

  17. Optimal cost-effective designs of Phase II proof of concept trials and associated go-no go decisions.

    Science.gov (United States)

    Chen, Cong; Beckman, Robert A

    2009-01-01

    This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
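
The idea of choosing the type I error rate and power jointly by optimizing a cost-aware criterion can be illustrated with a toy grid search. The utility function below (prior-weighted benefit of a correct "go" minus false-positive cost and patient exposure) is our own illustrative stand-in, NOT the efficiency score defined in the manuscript; the sample-size formula is the standard normal approximation for a two-arm comparison.

```python
from statistics import NormalDist
from itertools import product

nd = NormalDist()

def n_per_arm(alpha, power, delta):
    """Two-arm sample size, normal approximation, one-sided alpha,
    standardized effect size delta."""
    za, zb = nd.inv_cdf(1 - alpha), nd.inv_cdf(power)
    return 2 * ((za + zb) / delta) ** 2

def expected_utility(alpha, power, delta=0.5, rho=0.3,
                     benefit=1000.0, fp_cost=400.0, per_patient=1.0):
    """Illustrative criterion (not the paper's score): prior probability
    rho of a truly active drug weights the benefit of a correct 'go',
    against the costs of false 'go' decisions and patient exposure."""
    n_total = 2 * n_per_arm(alpha, power, delta)
    return (rho * power * benefit
            - (1 - rho) * alpha * fp_cost
            - per_patient * n_total)

grid = product((0.05, 0.1, 0.2), (0.7, 0.8, 0.9))
best = max(grid, key=lambda ap: expected_utility(*ap))
print("cost-optimal (alpha, power):", best)
```

Note how the optimum drifts away from the confirmatory-trial convention (alpha = 0.05, power = 0.8) once exposure cost and the prior probability of activity enter the criterion, which is the qualitative point of the manuscript.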

  18. Components in the interstellar medium

    International Nuclear Information System (INIS)

    Martin, E.R.

    1981-01-01

    An analysis is made of the lines of sight toward 32 stars with a procedure that gives velocity components for various interstellar ions. The column densities found for species expected to be relatively undepleted are used to estimate the column density of neutral hydrogen in each component. Whenever possible, the molecular hydrogen excitation temperature, abundances (relative to S II), electron density, and hydrogen volume density are calculated for each component. The results for each star are combined to give total HI column density as a function of (LSR) velocity. The derived velocities correspond well with those found in optical studies. The mean electron density is found to be approximately constant with velocity, but the mean hydrogen volume density is found to vary. The data presented here are consistent with the assumption that some of the velocity components are due to circumstellar material. The total HI column density toward a given star is generally in agreement with Lyman alpha measurements, but ionization and abundance effects are important toward some stars. The total HI column density is found to vary exponentially with velocity (for N(HI) > 10^17 cm^-2), with an indication that the velocity dispersion at low column densities (N(HI) < 10^17 cm^-2) is approximately constant. An estimate is made of the kinetic energy density due to cloud motion which depends only on the total HI column density as a function of velocity. The value of 9 × 10^42 erg/pc^3 is in good agreement with a theoretical prediction

  19. Characterization of Interstellar Organic Molecules

    International Nuclear Information System (INIS)

    Gencaga, Deniz; Knuth, Kevin H.; Carbon, Duane F.

    2008-01-01

    Understanding the origins of life has been one of the greatest dreams throughout history. It is now known that star-forming regions contain complex organic molecules, known as Polycyclic Aromatic Hydrocarbons (PAHs), each of which has particular infrared spectral characteristics. By understanding which PAH species are found in specific star-forming regions, we can better understand the biochemistry that takes place in interstellar clouds. Identifying and classifying PAHs is not an easy task: we can only observe a single superposition of PAH spectra at any given astrophysical site, with the PAH species perhaps numbering in the hundreds or even thousands. This is a challenging source separation problem since we have only one observation composed of numerous mixed sources. However, it is made easier with the help of a library of hundreds of PAH spectra. In order to separate PAH molecules from their mixture, we need to identify the specific species and their unique concentrations that would provide the given mixture. We develop a Bayesian approach for this problem where sources are separated from their mixture by Metropolis Hastings algorithm. Separated PAH concentrations are provided with their error bars, illustrating the uncertainties involved in the estimation process. The approach is demonstrated on synthetic spectral mixtures using spectral resolutions from the Infrared Space Observatory (ISO). Performance of the method is tested for different noise levels.
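
A minimal Metropolis-Hastings sketch of the source-separation step described above: given a small synthetic library and a single mixed spectrum, random-walk MH samples the posterior over concentrations, yielding estimates with error bars. The three-template library, noise level, and flat prior are toy stand-ins for the hundreds of laboratory PAH spectra and the full Bayesian model used in the paper.

```python
import random, math

random.seed(0)

# Toy spectral library: 3 "PAH" template spectra over 20 channels
# (synthetic Gaussians standing in for laboratory PAH spectra).
K, N = 3, 20
lib = [[math.exp(-((ch - 4 * (k + 1)) / 2.0) ** 2) for ch in range(N)]
       for k in range(K)]
true_c = [0.5, 0.2, 0.3]
sigma = 0.01
obs = [sum(c * lib[k][ch] for k, c in enumerate(true_c))
       + random.gauss(0, sigma) for ch in range(N)]

def log_post(c):
    """Gaussian likelihood, flat prior on non-negative concentrations."""
    if any(x < 0 for x in c):
        return -math.inf
    ll = 0.0
    for ch in range(N):
        model = sum(c[k] * lib[k][ch] for k in range(K))
        ll -= (obs[ch] - model) ** 2 / (2 * sigma ** 2)
    return ll

def metropolis_hastings(steps=20000, step=0.02):
    c = [0.1] * K
    lp = log_post(c)
    samples = []
    for _ in range(steps):
        prop = [x + random.gauss(0, step) for x in c]
        lpp = log_post(prop)
        if lpp - lp > math.log(random.random()):   # accept/reject
            c, lp = prop, lpp
        samples.append(list(c))
    return samples[steps // 2:]                    # discard burn-in

post = metropolis_hastings()
means = [sum(s[k] for s in post) / len(post) for k in range(K)]
sds = [math.sqrt(sum((s[k] - means[k]) ** 2 for s in post) / len(post))
       for k in range(K)]
for k in range(K):
    print(f"c[{k}] = {means[k]:.3f} +/- {sds[k]:.3f}  (true {true_c[k]})")
```

The posterior standard deviations play the role of the error bars mentioned in the abstract: they quantify how well each template's concentration is constrained by the single mixed observation.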

  20. The photoevaporation of interstellar clouds

    International Nuclear Information System (INIS)

    Bertoldi, F.

    1989-01-01

    The dynamics of the photoevaporation of interstellar clouds and its consequences for the structure and evolution of H II regions are studied. An approximate analytical solution for the evolution of photoevaporating clouds is derived under the realistic assumption of axisymmetry. The effects of magnetic fields are taken into account in an approximate way. The evolution of a neutral cloud subjected to the ionizing radiation of an OB star has two distinct stages. When a cloud is first exposed to the radiation, the increase in pressure due to the ionization at the surface of the cloud leads to a radiation-driven implosion: an ionization front drives a shock into the cloud, ionizes part of it and compresses the remaining into a dense globule. The initial implosion is followed by an equilibrium cometary stage, in which the cloud maintains a semistationary comet-shaped configuration; it slowly evaporates while accelerating away from the ionizing star until the cloud has been completely ionized, reaches the edge of the H II region, or dies. Expressions are derived for the cloud mass-loss rate and acceleration. To investigate the effect of the cloud photoevaporation on the structure of H II regions, the evolution of an ensemble of clouds of a given mass distribution is studied. It is shown that the compressive effect of the ionizing radiation can induce star formation in clouds that were initially gravitationally stable, both for thermally and magnetically supported clouds

  1. The interstellar medium in galaxies

    CERN Document Server

    1997-01-01

    It has been more than five decades since Henk van de Hulst predicted the observability of the 21-cm line of neutral hydrogen (HI). Since then, use of the 21-cm line has greatly improved our knowledge in many fields and has been used for galactic structure studies, studies of the interstellar medium (ISM) in the Milky Way and other galaxies, studies of the mass distribution of the Milky Way and other galaxies, studies of spiral structure, studies of high-velocity gas in the Milky Way and other galaxies, for measuring distances using the Tully-Fisher relation, etc. Regarding studies of the ISM, there have been a number of instrumental developments over the past decade: large CCDs became available on optical telescopes, radio synthesis offered sensitive imaging capabilities, not only in the classical 21-cm HI line but also in the mm-transitions of CO and other molecules, and X-ray imaging capabilities became available to measure the hot component of the ISM. These developments meant that Milky Way was n...

  2. Wavelength dependence of interstellar polarization

    International Nuclear Information System (INIS)

    Mavko, G.E.

    1974-01-01

    The wavelength dependence of interstellar polarization was measured for twelve stars in three regions of the Milky Way. A 120 Å bandpass was used to measure the polarization at a maximum of sixteen wavelengths evenly spaced between 2.78 μm^-1 (3600 Å) and 1.28 μm^-1 (7800 Å). For such a wide wavelength range, the wavelength resolution is superior to that of any previously reported polarization measurements. The new scanning polarimeter built by W. A. Hiltner of the University of Michigan was used for the observations. Very broad structure was found in the wavelength dependence of the polarization. Extensive investigations were carried out to show that the structure was not caused by instrumental effects. The broad structure observed is shown to be in agreement with concurrent extinction measurements for the same stars. Also, the observed structure is of the type predicted when a homogeneous silicate grain model is fitted to the observed extinction. The results are in agreement with the hypothesis that the very broad band structure seen in the extinction is produced by the grains. (Diss. Abstr. Int., B)

  3. Equation of costs and function objective for the optimization of the design of nets of flow of liquids to pressure

    International Nuclear Information System (INIS)

    Narvaez R, Paulo Cesar; Galeano P, Haiver

    2002-01-01

    The optimal design problem for liquid distribution systems has been viewed as the selection of pipe sizes and pumps that minimizes overall cost while satisfying the flow and pressure constraints. A set of methods exists for the least-cost design of liquid distribution networks (6). In recent years, some of them have been studied broadly: linear programming (1, 4, 5, 7), non-linear programming (8, 9), and genetic algorithms (3, 10, 13). This paper describes the development of a cost equation and an objective function for liquid distribution networks which, together with the mathematical model and the solution method for the flow problem developed by Narvaez (11), were used in a computer model that applies a genetic algorithm to the least-cost design of liquid distribution networks
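
A toy version of this approach, a genetic algorithm choosing commercial diameters for a small series network so as to minimize pipe cost under a Hazen-Williams head-loss constraint, can be sketched as follows. The network, diameters, costs, and penalty scheme are invented for illustration and are unrelated to the cost equation or flow model of Narvaez (11).

```python
import random

random.seed(2)

# Candidate commercial diameters (m) and unit costs ($/m) -- illustrative
DIAMS = [0.10, 0.15, 0.20, 0.25, 0.30]
COSTS = [25,   40,   60,   85,   115]
LENGTHS = [100.0, 150.0, 80.0]      # three pipes in series (toy network)
Q = 0.05                            # design flow, m^3/s
H_AVAIL = 30.0                      # available head, m
C_HW = 130.0                        # Hazen-Williams roughness coefficient

def head_loss(d_idx):
    """Hazen-Williams head loss summed over the series pipes."""
    hl = 0.0
    for L, i in zip(LENGTHS, d_idx):
        hl += 10.67 * L * (Q / C_HW) ** 1.852 / DIAMS[i] ** 4.87
    return hl

def fitness(d_idx):
    """Pipe cost plus a large penalty if the head constraint is violated."""
    cost = sum(COSTS[i] * L for i, L in zip(d_idx, LENGTHS))
    violation = max(0.0, head_loss(d_idx) - H_AVAIL)
    return cost + 1e5 * violation

def ga(pop_size=30, gens=60, pmut=0.2):
    pop = [[random.randrange(len(DIAMS)) for _ in LENGTHS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LENGTHS))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < pmut:          # random mutation
                child[random.randrange(len(child))] = \
                    random.randrange(len(DIAMS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print("diameters:", [DIAMS[i] for i in best], "cost:", fitness(best))
```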

  4. A theoretical quantum chemical study of alanine formation in interstellar medium

    Science.gov (United States)

    Shivani; Pandey, Parmanad; Misra, Alka; Tandon, Poonam

    2017-08-01

    The interstellar medium, the vast space between the stars, is a rich reservoir of molecular material ranging from simple diatomic molecules to more complex, astrobiologically important molecules such as amino acids, nucleobases, and other organic species. Radical-radical and radical-neutral interaction schemes are very important for the formation of comparatively complex molecules in low-temperature chemistry. An attempt has been made to explore the possibility of the formation of complex organic molecules in the interstellar medium through detected interstellar molecules such as CH3CN and HCOOH. The gas-phase reactions are studied theoretically using quantum chemical techniques, employing density functional theory (DFT) at the B3LYP/6-311G(d,p) level. The reaction energies, potential barriers, and optimized structures of all geometries involved in the reaction path are discussed, and we report the potential energy surfaces for the reactions considered in this work.

  5. Cost-effectiveness of optimizing prevention in patients with coronary heart disease: the EUROASPIRE III health economics project.

    Science.gov (United States)

    De Smedt, Delphine; Kotseva, Kornelia; De Bacquer, Dirk; Wood, David; De Backer, Guy; Dallongeville, Jean; Lehto, Seppo; Pajak, Andrzej; Reiner, Zeljko; Vanuzzo, Diego; Georgiev, Borislav; Gotcheva, Nina; Annemans, Lieven

    2012-11-01

    The EUROASPIRE III survey indicated that the guidelines on cardiovascular disease prevention are poorly implemented in patients with established coronary heart disease (CHD). The purpose of this health economics project was to assess the potential clinical effectiveness and cost-effectiveness of optimizing cardiovascular prevention in eight EUROASPIRE III countries (Belgium, Bulgaria, Croatia, Finland, France, Italy, Poland, and the U.K.). Methods and results: The individual risk for subsequent cardiovascular events was estimated based on published Framingham equations. Based on the EUROASPIRE III data, the type of suboptimal prevention, if any, was identified for each individual, and the effects of optimized tailored prevention (smoking cessation, diet and exercise, better management of elevated blood pressure and/or LDL-cholesterol) were estimated. Costs of prevention and savings from avoided events were based on country-specific data. A willingness-to-pay threshold of €30,000 per quality-adjusted life year (QALY) was used. The robustness of the results was validated by sensitivity analyses. Overall, the cost-effectiveness analyses for the eight countries showed mainly favourable results, with an average incremental cost-effectiveness ratio (ICER) of €12,484 per QALY. Only in the minority of patients at the lowest risk of recurrent events does intensifying preventive therapy seem not cost-effective. Also, the single impact of intensified cholesterol control seems less cost-effective, possibly because the initial 2-year risk of these patients was already fairly low, hence the room for improvement is rather limited. These results underscore the societal value of optimizing prevention in most patients with established CHD, but also highlight the need to prioritize patients at higher risk and the need for more studies comparing intensified prevention with usual care in these patients.
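
The decision rule used in such an analysis, comparing an incremental cost-effectiveness ratio (ICER) against the €30,000/QALY willingness-to-pay threshold, amounts to a one-line computation. The per-patient costs and QALYs below are hypothetical illustrations, not figures from the study.

```python
WTP = 30_000.0  # willingness-to-pay threshold, EUR per QALY

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient numbers (NOT from the study): intensified
# prevention costs 2,500 EUR more and yields 0.2 extra QALYs.
ratio = icer(6_500.0, 1.4, 4_000.0, 1.2)
print(f"ICER = {ratio:,.0f} EUR/QALY ->",
      "cost-effective" if ratio < WTP else "not cost-effective")
```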

  6. Simulation, optimization and analysis of cost of biodiesel plant pot route enzymatic; Simulacao, otimizacao e analise de custo de planta de biodiesel via rota enzimatica

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Jocelia S.; Ferreira, Andrea L.O. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Silva, Giovanilton F. [Tecnologia Bioenergetica - Tecbio, Fortaleza, CE (Brazil)

    2008-07-01

    The aim of this work was the simulation and optimization of a biodiesel plant and the determination of the cost of biodiesel produced via the enzymatic route. Accordingly, a methodology of economic calculation and sensitivity analysis was carried out for this process. Computational software based on balance equations was used to obtain the biodiesel cost. The economic analysis was based on the capital cost of the biofuel. The whole process was evaluated through analysis of fixed capital cost, total manufacturing cost, raw material cost, and chemicals cost. The economic calculations for biodiesel production proved efficient. The model is intended for assessing the effects of changes in different types of oils on the estimated biodiesel production cost. (author)

  7. Tårs 10000 m2 CSP + Flat Plate Solar Collector Plant - Cost-Performance Optimization of the Design

    DEFF Research Database (Denmark)

    Perers, Bengt; Furbo, Simon; Tian, Zhiyong

    2016-01-01

    A novel solar heating plant with Concentrating Solar Power (CSP) collectors and Flat Plate (FP) collectors has been in operation in Tårs since July 2015. To investigate the economic performance of the plant, a TRNSYS-Genopt model, including a solar collector field and thermal storage tank, was established. The optimization showed that there was a synergy in combining CSP and FP collectors. Even though the present cost per m² of the CSP collectors is high, the total energy cost is minimized by installing a combination of collectors in such a solar heating plant. It was also found that the CSP...

  8. New methods to minimize the preventive maintenance cost of series-parallel systems using ant colony optimization

    International Nuclear Information System (INIS)

    Samrout, M.; Yalaoui, F.; Chatelet, E.; Chebbo, N.

    2005-01-01

    This article is based on a previous study by Bris, Chatelet and Yalaoui [Bris R, Chatelet E, Yalaoui F. New method to minimise the preventive maintenance cost of series-parallel systems. Reliab Eng Syst Saf 2003;82:247-55], who used a genetic algorithm to minimize the preventive maintenance cost of series-parallel systems. We propose to improve on their results by developing a new method based on another technique, Ant Colony Optimization (ACO). The resolution consists of determining the solution vector of system component inspection periods, T_P. The calculations were implemented in Matlab. Highly interesting results and improvements over the previous studies were obtained
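
A compact ACO sketch in the spirit of the article: each ant picks an inspection period per component, and pheromone on good (component, period) pairs is reinforced so that the trail converges toward the cheapest vector of inspection periods. The cost-rate model (inspection cost per interval plus a fouling-style penalty growing with the interval) and all parameter values are illustrative assumptions, not those of the referenced study.

```python
import random

random.seed(3)

# Candidate inspection periods (h) and per-component parameters:
# (inspection cost $, failure rate 1/h, downtime penalty $/h) -- invented.
PERIODS = [250, 500, 1000, 2000, 4000]
COMPONENTS = [
    (50.0, 1e-4, 1.0),
    (80.0, 5e-5, 2.0),
    (30.0, 2e-4, 1.5),
]

def cost_rate(comp, T):
    """Cost per hour: inspections at interval T, plus an expected
    undetected-failure penalty proportional to the mean detection
    delay T/2 (simple proxy model)."""
    c_insp, lam, pen = comp
    return c_insp / T + 0.5 * lam * T * pen

def total_cost(choice):
    return sum(cost_rate(c, PERIODS[i]) for c, i in zip(COMPONENTS, choice))

def aco(n_ants=20, iters=50, rho=0.1, q=1.0):
    # tau[i][j]: pheromone = desirability of period j for component i
    tau = [[1.0] * len(PERIODS) for _ in COMPONENTS]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ant in range(n_ants):
            choice = [random.choices(range(len(PERIODS)), weights=t)[0]
                      for t in tau]
            c = total_cost(choice)
            if c < best_cost:
                best, best_cost = choice, c
        for i in range(len(COMPONENTS)):        # evaporate, then reinforce
            for j in range(len(PERIODS)):
                tau[i][j] *= (1 - rho)
            tau[i][best[i]] += q / best_cost
    return [PERIODS[j] for j in best], best_cost

periods, rate = aco()
print("optimal inspection periods:", periods, f"cost rate {rate:.3f} $/h")
```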

  9. Cost optimization of ADS design: Comparative study of externally driven heterogeneous and homogeneous two-zone subcritical reactor systems

    International Nuclear Information System (INIS)

    Gulik, Volodymyr; Tkaczyk, Alan H.

    2014-01-01

    Highlights: • The optimization of two-zone homogeneous subcritical systems has been performed. • A Serpent model for two-zone heterogeneous subcritical systems has been developed. • The optimization of two-zone heterogeneous subcritical systems has been carried out. • Economically optimal core composition of two-zone subcritical system was found. • The neutron spectra of the heterogeneous subcritical systems have been obtained. - Abstract: Subcritical systems driven by external neutron sources, commonly known as Accelerator-Driven System (ADS), are one type of advanced nuclear reactor exhibiting attractive characteristics, distinguished from the traditional critical systems by their intrinsic safety features. In addition, an ADS can be used for the transmutation of the nuclear waste, accumulated during the operation of existing reactors. The optimization of a subcritical nuclear reactor in terms of materials (fuel content, coolant, etc.), geometrical, and economical parameters is a crucial step in the process of their design and construction. This article describes the optimization modeling performed for homogeneous and heterogeneous two-zone subcritical systems in terms of geometry of the fuel zones. Economical assessment was also carried out for the costs of the fuel in the core of the system. Optimization modeling was performed with the Serpent-1.1.18 Monte Carlo code. The model of a two-zone subcritical system with a fast inner and a thermal gas-cooled graphite-moderated outer zone was developed, simulated, and analyzed. The optimal value for the pitch of fuel elements in the thermal outer zone was investigated from the viewpoint of the cost of subcritical system. As the main goal of ADS development is nuclear waste transmutation, neutron spectra for both fast and thermal zones were obtained for different system configurations. The results of optimization modeling of homogeneous and heterogeneous two-zone subcritical systems show that an optimal

  10. Organic chemistry and biology of the interstellar medium

    Science.gov (United States)

    Sagan, C.

    1973-01-01

    Interstellar organic chemistry is discussed as the field of study emerging from the discovery of microwave lines of formaldehyde and of hydrogen cyanide in the interstellar medium. The reliability of molecular identifications and comparisons of interstellar and cometary compounds are considered, along with the degradational origin of simple organics. It is pointed out that the contribution of interstellar organic chemistry to problems in biology is not substantive but analogical. The interstellar medium reveals the operation of chemical processes which, on earth and perhaps on vast numbers of planets throughout the universe, led to the origin of life, but the actual molecules of the interstellar medium are unlikely to play any significant biological role.

  11. Optimizing power plant cycling operations while reducing generating plant damage and costs

    Energy Technology Data Exchange (ETDEWEB)

    Lefton, S.A.; Besuner, P.H.; Grimsrud, P. [Aptech Engineering Services, Inc., Sunnyvale, CA (United States); Bissel, A. [Electric Supply Board, Dublin (Ireland)

    1998-12-31

    This presentation describes a method for analyzing, quantifying, and minimizing the total cost of fossil, combined cycle, and pumped hydro power plant cycling operation. The method has been developed, refined, and applied during engineering studies at some 160 units in the United States and 8 units at the Irish Electric Supply Board (ESB) generating system. The basic premise of these studies was that utilities are underestimating the cost of cycling operation. The studies showed that the cost of cycling conventional boiler/turbine fossil power plants can range from between $2,500 and $500,000 per start-stop cycle. It was found that utilities typically estimate these costs by factors of 3 to 30 below actual costs and, thus, often significantly underestimate their true cycling costs. Knowledge of the actual, or total, cost of cycling will reduce power production costs by enabling utilities to more accurately dispatch their units to manage unit life expectancies, maintenance strategies and reliability. Utility management responses to these costs are presented and utility cost savings have been demonstrated. (orig.) 7 refs.

  13. Configuration optimization of series flow double-effect water-lithium bromide absorption refrigeration systems by cost minimization

    DEFF Research Database (Denmark)

    Mussati, Sergio F.; Cignitti, Stefano; Mansouri, Seyed Soheil

    2018-01-01

    An optimal process configuration for double-effect water-lithium bromide absorption refrigeration systems with series flow – where the solution is first passed through the high-temperature generator – is obtained by minimization of the total annual cost for a required cooling capacity. To this end......) takes place entirely at the high-temperature zone, and the sizes and operating conditions of the other process units change accordingly in order to meet the problem specification with the minimal total annual cost. This new configuration was obtained for wide ranges of the cooling capacity (150–450 k.......9%, respectively. Most importantly, the obtained optimal solution eliminates the low-temperature solution heat exchanger from the conventional configuration, rendering a new process configuration. The energy integration between the weak and strong lithium bromide solutions (cold and hot streams, respectively...

  14. METHODOLOGY FOR DETERMINING THE OPTIMAL CLEANING PERIOD OF HEAT EXCHANGERS BY USING THE CRITERIA OF MINIMUM COST

    Directory of Open Access Journals (Sweden)

    Yanileisy Rodríguez Calderón

    2015-04-01

Full Text Available One of the most serious problems in the process industry is that maintenance planning for heat exchangers often does not apply methodologies based on economic criteria to optimize surface-cleaning periods, resulting in additional costs for the company and for the country. This work develops and proposes a methodology based on the minimum-cost criterion for determining the optimal cleaning period. An example applies the method to the intercoolers of a centrifugal compressor with a high fouling level. The fouling occurs because seawater containing many microorganisms is used as the cooling agent, and it severely fouls the water-side transfer surfaces. The methodology can be generalized to other applications.
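The minimum-cost criterion for a cleaning period can be illustrated with a minimal sketch. Assuming a fixed cleaning cost and fouling losses that grow linearly with time since the last cleaning (both figures below are hypothetical, not taken from the paper), the average cost per day over a period T has the closed-form minimizer T* = sqrt(2C/k):

```python
import math

def cost_rate(T, clean_cost, fouling_rate):
    """Average cost per day over a cleaning period T: the cleaning cost is
    amortized over T, and fouling losses grow linearly (k*t), so they
    integrate to k*T^2/2 over one period."""
    return (clean_cost + 0.5 * fouling_rate * T**2) / T

def optimal_period(clean_cost, fouling_rate):
    """Closed-form minimizer of cost_rate: T* = sqrt(2*C/k)."""
    return math.sqrt(2.0 * clean_cost / fouling_rate)

if __name__ == "__main__":
    C, k = 5000.0, 10.0  # $/cleaning and $/day of extra loss per day of fouling (assumed)
    T_star = optimal_period(C, k)
    print(round(T_star, 1), round(cost_rate(T_star, C, k), 2))
```

A brute-force scan of `cost_rate` over a daily grid lands within one day of `T_star`, which is a quick sanity check for more realistic (nonlinear) fouling models.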

  15. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    Science.gov (United States)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.
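A basic building block of any such multiobjective tool is Pareto-dominance filtering of the evaluated designs: keeping only candidates that no other candidate beats in every objective. This is a generic sketch with illustrative objective values, not the engine simulator from the paper:

```python
def dominates(a, b):
    """a dominates b (minimization) if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# three conflicting objectives per candidate design (illustrative values only)
designs = [(1.0, 5.0, 3.0), (2.0, 4.0, 2.0), (1.5, 6.0, 4.0), (3.0, 3.0, 1.0)]
print(pareto_front(designs))
```

An interactive method, as described in the abstract, then lets the decision maker navigate among the remaining non-dominated solutions rather than aggregating the objectives into a single score.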

  16. Optimal inspection and replacement periods of the safety system in Wolsung Nuclear Power Plant Unit 1 with an optimized cost perspective

    International Nuclear Information System (INIS)

    Jinil Mok; Poong Hyun Seong

    1996-01-01

In this work, a model is developed for determining the optimal inspection and replacement periods of the safety system in Wolsung Nuclear Power Plant Unit 1, with the aim of minimizing the economic loss caused by inadvertent trips and by system failure. The model uses a cost-benefit analysis, and the part concerning the optimal inspection period accounts for human error. The model is based on three factors: (i) the cumulative failure distribution function of the safety system; (ii) the probability that the safety system does not operate, due to system failure or human error, when it is needed in an emergency condition; and (iii) the average probability that the reactor is tripped due to the failure of system components or human error. The model is then applied to evaluate the safety system in Wolsung Nuclear Power Plant Unit 1. The optimal replacement periods calculated with the proposed model differ from those used in Wolsung NPP Unit 1 by days to months, whereas the optimal inspection periods are in about the same range. (author)
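The role of the cumulative failure distribution in such a model can be illustrated with the classical age-replacement cost rate, a simpler relative of the model above (the Weibull parameters and cost figures here are assumed, not from the study): replace at age T, pay a high cost if the unit fails first, and minimize expected cost per unit time.

```python
import math

def weibull_cdf(t, shape=2.0, scale=100.0):
    """Probability that the system has failed by time t (assumed Weibull law)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def cost_rate(T, c_fail=50.0, c_prev=1.0, n=1000):
    """Age-replacement cost rate: expected cost per renewal cycle divided by
    the expected cycle length integral_0^T (1 - F(t)) dt (midpoint rule)."""
    dt = T / n
    mean_cycle = sum((1.0 - weibull_cdf((i + 0.5) * dt)) * dt for i in range(n))
    return (c_fail * weibull_cdf(T) + c_prev * (1.0 - weibull_cdf(T))) / mean_cycle

# scan candidate replacement ages (in days) for the minimum cost rate
best_T = min(range(5, 200), key=cost_rate)
print(best_T)
```

With failures 50 times costlier than preventive replacement, the optimum falls well before the Weibull scale of 100 days, which mirrors the abstract's point that economically optimal periods can differ noticeably from the periods in use.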

  17. On Graphene in the Interstellar Medium

    Science.gov (United States)

    Chen, X. H.; Li, Aigen; Zhang, Ke

    2017-11-01

The possible detection of C24, a planar graphene that was recently reported to be in several planetary nebulae by García-Hernández et al., inspires us to explore whether and how much graphene could exist in the interstellar medium (ISM) and how it would reveal its presence through its ultraviolet (UV) extinction and infrared (IR) emission. In principle, interstellar graphene could arise from the photochemical processing of polycyclic aromatic hydrocarbon (PAH) molecules, which are abundant in the ISM, due to the complete loss of their hydrogen atoms, and/or from graphite, which is thought to be a major dust species in the ISM, via fragmentation caused by grain–grain collisional shattering. Both quantum-chemical computations and laboratory experiments have shown that the exciton-dominated electronic transitions in graphene cause a strong absorption band near 2755 Å. We calculate the UV absorption of graphene and place an upper limit of ∼5 ppm of C/H (i.e., ∼1.9% of the total interstellar C) on the interstellar graphene abundance. We also model the stochastic heating of graphene C24 in the ISM, excited by single starlight photons of the interstellar radiation field, and calculate its IR emission spectra. By comparing the model emission spectra with those observed in the ISM, we again derive an interstellar graphene abundance of <5 ppm of C/H.

  18. Optimal Control Method for Wind Farm to Support Temporary Primary Frequency Control with Minimized Wind Energy Cost

    DEFF Research Database (Denmark)

    Wang, Haijiao; Chen, Zhe; Jiang, Quanyuan

    2015-01-01

This study proposes an optimal control method for a wind farm (WF) based on variable speed wind turbines (VSWTs) to support temporary primary frequency control. The control method consists of two layers: temporary frequency support control (TFSC) of the VSWT, and temporary support power optimal...... dispatch (TSPOD) of the WF. With TFSC, the VSWT can temporarily provide extra power to support system frequency under varying and wide-range wind speed. In the WF control centre, TSPOD optimally dispatches the frequency support power orders to the VSWTs that operate under different wind speeds, minimises...... the wind energy cost of frequency support, and satisfies the support capabilities of the VSWTs. The effectiveness of the whole control method is verified in the IEEE-RTS built in MATLAB/Simulink, and compared with a published de-loading method....

  19. Optimal costs of HIV pre-exposure prophylaxis for men who have sex with men.

    Directory of Open Access Journals (Sweden)

    Jennie McKenney

Full Text Available Men who have sex with men (MSM) are disproportionately affected by HIV due to their increased risk of infection. Oral pre-exposure prophylaxis (PrEP) is a highly effective HIV-prevention strategy for MSM. Despite evidence of its effectiveness, PrEP uptake in the United States has been slow, in part due to its cost. As jurisdictions and health organizations begin to think about PrEP scale-up, the high cost to society needs to be understood. We modified a previously described decision-analysis model to estimate the cost per quality-adjusted life-year (QALY) gained, over a 1-year duration of PrEP intervention and a lifetime time horizon. Using updated parameter estimates, we calculated: (1) the cost per QALY gained, stratified over 4 strata of PrEP cost (a function of both drug cost and provider costs); and (2) the PrEP drug cost per year required to fall at or under 4 cost-per-QALY-gained thresholds. When PrEP drug costs were reduced by 60% (with no sexual disinhibition) to 80% (assuming 25% sexual disinhibition), PrEP was cost-effective (at <$100,000 per QALY averted) in all scenarios of base-case or better adherence, as long as the background HIV prevalence was greater than 10%. For PrEP to be cost saving at base-case adherence/efficacy levels and at a background prevalence of 20%, drug cost would need to be reduced to $8,021 per year with no disinhibition, and to $2,548 with disinhibition. Results from our analysis suggest that PrEP drug costs need to be reduced in order to be cost-effective across a range of background HIV prevalence. Moreover, our results provide guidance on the pricing of generic emtricitabine/tenofovir disoproxil fumarate, in order to provide those at high risk for HIV an affordable prevention option without placing a financial burden on individuals or on jurisdictions scaling up coverage.
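The cost-per-QALY logic used throughout such analyses reduces to the incremental cost-effectiveness ratio (ICER) and a willingness-to-pay threshold. This is a generic sketch with made-up numbers, not the study's model or data:

```python
def icer(cost_new, qaly_new, cost_base, qaly_base):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    of the intervention versus the comparator."""
    return (cost_new - cost_base) / (qaly_new - qaly_base)

def classify(ratio, threshold=100_000):
    """Standard decision rule against a $/QALY willingness-to-pay threshold."""
    if ratio < 0:
        return "cost-saving"  # cheaper AND more effective
    return "cost-effective" if ratio <= threshold else "not cost-effective"

# illustrative lifetime costs and QALYs (hypothetical, not from the study)
r = icer(cost_new=260_000, qaly_new=21.5, cost_base=250_000, qaly_base=21.0)
print(r, classify(r))
```

The $100,000-per-QALY threshold matches the one quoted in the abstract; "cost-saving" corresponds to the scenario where reduced drug prices make PrEP both cheaper and more effective than no intervention.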

  20. Utilization of Supercapacitors in Adaptive Protection Applications for Resiliency against Communication Failures: A Size and Cost Optimization Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Hany F [Florida Intl Univ., Miami, FL (United States); El Hariri, Mohamad [Florida Intl Univ., Miami, FL (United States); Elsayed, Ahmed [Florida Intl Univ., Miami, FL (United States); Mohammed, Osama [Florida Intl Univ., Miami, FL (United States)

    2017-03-30

Microgrids’ adaptive protection techniques rely on communication signals from the point of common coupling to adjust the corresponding relays’ settings for either grid-connected or islanded modes of operation. However, during communication outages or in the event of a cyberattack, relay settings are not changed, and adaptive protection schemes are rendered unsuccessful. Due to their fast response, supercapacitors, which are present in the microgrid to feed pulse loads, could also be utilized to enhance the resiliency of adaptive protection schemes to communication outages. Proper sizing of the supercapacitors is therefore important in order to maintain stable system operation and also regulate the protection scheme’s cost. This paper presents a two-level optimization scheme for minimizing the supercapacitor size along with optimizing its controller parameters. The latter leads to a reduction of the supercapacitor fault-current contribution and an increase in that of other AC resources in the microgrid in the extreme case of a fault occurring simultaneously with a pulse load. It was also shown that the size of the supercapacitor can be reduced if the pulse load is temporarily disconnected during the transient fault period. Simulations showed that the supercapacitor size and controller parameters resulting from the proposed two-level optimization scheme fed sufficient fault current for different types of faults while minimizing the cost of the protection scheme.

  1. Living renal donors: optimizing the imaging strategy--decision- and cost-effectiveness analysis

    NARCIS (Netherlands)

    Y.S. Liem (Ylian Serina); M.C.J.M. Kock (Marc); W. Weimar (Willem); K. Visser (Karen); M.G.M. Hunink (Myriam); J.N.M. IJzermans (Jan)

    2003-01-01

PURPOSE: To determine the most cost-effective strategy for preoperative imaging performed in potential living renal donors. MATERIALS AND METHODS: In a decision-analytic model, the societal cost-effectiveness of digital subtraction angiography (DSA), gadolinium-enhanced

  2. Social welfare and the Affordable Care Act: is it ever optimal to set aside comparative cost?

    Science.gov (United States)

    Mortimer, Duncan; Peacock, Stuart

    2012-10-01

The creation of the Patient-Centered Outcomes Research Institute (PCORI) under the Affordable Care Act has set comparative effectiveness research (CER) at centre stage of US health care reform. Comparative cost analysis has remained marginalised, and it now appears unlikely that the PCORI will require comparative cost data to be collected as an essential component of CER. In this paper, we review the literature to identify ethical and distributional objectives that might motivate calls to set priorities without regard to comparative cost. We then present argument and evidence to consider whether there is any plausible set of objectives and constraints against which priorities can be set without reference to comparative cost. We conclude that to set aside comparative cost, even after accounting for ethical and distributional constraints, would truly be to act as if money were no object. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Optimization of transport network in the Basin of Yangtze River with minimization of environmental emission and transport/investment costs

    Directory of Open Access Journals (Sweden)

    Haiping Shi

    2016-08-01

Full Text Available The capacity of the ship-lock at the Three Gorges Dam has become a bottleneck of waterway transport and has caused serious congestion. In this article, a continual network design model is established to address the problem by minimizing transport cost and environmental emissions as well as infrastructure construction cost. In this bi-level model, the upper level gives the schemes of ship-lock expansion or construction of a pass-dam highway. The lower level assigns the containers in the multi-mode network and calculates the transport cost, environmental emissions, and construction investment. A solution algorithm for the model is proposed. In the numerical study, scenario analyses evaluate the schemes and determine the optimal one under different traffic demands. The results show that expanding the ship-lock is better than constructing a pass-dam highway.

  4. Photodissociation and excitation of interstellar molecules

    International Nuclear Information System (INIS)

    Dishoeck, E.F. van.

    1984-01-01

    Apart from a rather long introduction containing some elementary astrophysics, quantum chemistry and spectroscopy and an incomplete, historical review of molecular observations, this thesis is divided into three sections. In part A, a rigorous quantum chemical and dynamical study is made of the photodissociation processes in the OH and HCl molecules. In part B, the cross sections obtained in part A are used in various astrophysical problems such as the study of the abundances of the OH and HCl molecules in interstellar clouds, the use of the OH abundance as a measure of the cosmic ray ionization rate, the lifetime of the OH radical in comets and the abundance of OH in the solar photosphere. Part C discusses the excitation of the C 2 molecule under interstellar conditions, its use as a diagnostic probe of the temperature, density and strength of the radiation field in interstellar clouds. Quadrupole moments and oscillator strengths are analyzed. (Auth.)

  5. On the nature of interstellar turbulence

    International Nuclear Information System (INIS)

    Altunin, V.I.

    1981-01-01

Possible causes of the turbulence in the interstellar medium that is manifested in pulsar scintillation and in the scattering of radio emission from extragalactic sources near the Galactic plane are discussed. Sources and conditions for the emergence of turbulence in HII region shells, supernova remnants, and stellar winds that produce the observed scattering effects are considered. It is shown that a certain role in the formation of the interstellar scintillation pattern of discrete radio sources can be played by magnetosonic turbulence, which arises from shock waves propagating through the interstellar medium at velocities Vsub(sh) of approximately 20-100 km/s, as well as by inhomogeneities in the stellar winds of OB-class stars. (in Russian)

  6. Physics of the interstellar and intergalactic medium

    CERN Document Server

    Draine, Bruce T

    2010-01-01

    This is a comprehensive and richly illustrated textbook on the astrophysics of the interstellar and intergalactic medium--the gas and dust, as well as the electromagnetic radiation, cosmic rays, and magnetic and gravitational fields, present between the stars in a galaxy and also between galaxies themselves. Topics include radiative processes across the electromagnetic spectrum; radiative transfer; ionization; heating and cooling; astrochemistry; interstellar dust; fluid dynamics, including ionization fronts and shock waves; cosmic rays; distribution and evolution of the interstellar medium; and star formation. While it is assumed that the reader has a background in undergraduate-level physics, including some prior exposure to atomic and molecular physics, statistical mechanics, and electromagnetism, the first six chapters of the book include a review of the basic physics that is used in later chapters. This graduate-level textbook includes references for further reading, and serves as an invaluable resourc...

  7. The use of lifetime functions in the optimization of interventions on existing bridges considering maintenance and failure costs

    Energy Technology Data Exchange (ETDEWEB)

Yang, Seung-Ie [Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: yangsione@dreamwiz.com; Frangopol, Dan M. [Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: dan.frangopol@colorado.edu; Kawakami, Yoriko [Hanshin Expressway Public Corporation, Kobe Maintenance Department, 16-1 Shinko-cho Chuo-ku Kobe City, Hyogo, 650-0041 (Japan)]. E-mail: yoriko-kawakami@hepc.go.jp; Neves, Luis C. [Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Campus Box 428, Boulder, CO 80309-0428 (United States)]. E-mail: lneves@civil.uminho.pt

    2006-06-15

    In the last decade, it became clear that life-cycle cost analysis of existing civil infrastructure must be used to optimally manage the growing number of aging and deteriorating structures. The uncertainties associated with deteriorating structures require the use of probabilistic methods to properly evaluate their lifetime performance. In this paper, the deterioration and the effect of maintenance actions are analyzed considering the performance of existing structures characterized by lifetime functions. These functions allow, in a simple manner, the consideration of the effect of aging on the decrease of the probability of survival of a structure, as well as the effect of maintenance actions. Models for the effects of proactive and reactive preventive maintenance, and essential maintenance actions are presented. Since the probability of failure is different from zero during the entire service life of a deteriorating structure and depends strongly on the maintenance strategy, the cost of failure is included in this analysis. The failure of one component in a structure does not usually lead to failure of the structure and, as a result, the safety of existing structures must be analyzed using a system reliability framework. The optimization consists of minimizing the sum of the cumulative maintenance and expected failure cost during the prescribed time horizon. Two examples of application of the proposed methodology are presented. In the first example, the sum of the maintenance and failure costs of a bridge in Colorado is minimized considering essential maintenance only and a fixed minimum acceptable probability of failure. In the second example, the expected lifetime cost, including maintenance and expected failure costs, of a multi-girder bridge is minimized considering reactive preventive maintenance actions.
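The objective described above (cumulative maintenance cost plus expected failure cost over a horizon) can be sketched with a deliberately simplified model: n equally spaced maintenance actions, each assumed to fully reset a linearly growing hazard. All parameter values are hypothetical, not from the bridge examples in the paper.

```python
import math

def expected_total_cost(n_maint, horizon=50.0, hazard_slope=0.002,
                        c_maint=20.0, c_fail=1000.0):
    """Maintenance cost plus expected failure cost over the horizon.
    Each of n_maint equally spaced actions resets the deterioration 'age'
    to zero; the hazard grows linearly with age, h(age) = hazard_slope*age,
    so each segment of length L contributes hazard_slope*L^2/2."""
    segment = horizon / (n_maint + 1)
    cum_hazard = (n_maint + 1) * hazard_slope * segment**2 / 2.0
    p_fail = 1.0 - math.exp(-cum_hazard)        # survival-function argument
    return n_maint * c_maint + c_fail * p_fail

# enumerate maintenance frequencies and pick the cheapest
best_n = min(range(0, 20), key=expected_total_cost)
print(best_n, round(expected_total_cost(best_n), 2))
```

The trade-off is the one the abstract describes: too few actions leave a large expected failure cost, too many inflate the maintenance bill, and the optimum balances the two over the prescribed time horizon.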

  8. The use of lifetime functions in the optimization of interventions on existing bridges considering maintenance and failure costs

    International Nuclear Information System (INIS)

    Yang, Seung-Ie; Frangopol, Dan M.; Kawakami, Yoriko; Neves, Luis C.

    2006-01-01

    In the last decade, it became clear that life-cycle cost analysis of existing civil infrastructure must be used to optimally manage the growing number of aging and deteriorating structures. The uncertainties associated with deteriorating structures require the use of probabilistic methods to properly evaluate their lifetime performance. In this paper, the deterioration and the effect of maintenance actions are analyzed considering the performance of existing structures characterized by lifetime functions. These functions allow, in a simple manner, the consideration of the effect of aging on the decrease of the probability of survival of a structure, as well as the effect of maintenance actions. Models for the effects of proactive and reactive preventive maintenance, and essential maintenance actions are presented. Since the probability of failure is different from zero during the entire service life of a deteriorating structure and depends strongly on the maintenance strategy, the cost of failure is included in this analysis. The failure of one component in a structure does not usually lead to failure of the structure and, as a result, the safety of existing structures must be analyzed using a system reliability framework. The optimization consists of minimizing the sum of the cumulative maintenance and expected failure cost during the prescribed time horizon. Two examples of application of the proposed methodology are presented. In the first example, the sum of the maintenance and failure costs of a bridge in Colorado is minimized considering essential maintenance only and a fixed minimum acceptable probability of failure. In the second example, the expected lifetime cost, including maintenance and expected failure costs, of a multi-girder bridge is minimized considering reactive preventive maintenance actions

  9. Experiments on chemical and physical evolution of interstellar grain mantles

    International Nuclear Information System (INIS)

    Greenberg, J.M.

    1984-01-01

    The Astrophysical Laboratory at the University of Leiden is the first to succeed in simulating the essential conditions in interstellar space as they affect the evolution of interstellar grains. (author)

  10. Locally optimal control under unknown dynamics with learnt cost function: application to industrial robot positioning

    Science.gov (United States)

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

Recent methods of Reinforcement Learning have made it possible to solve difficult, high-dimensional robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method by simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, which consists in writing the cost function as a state variable.
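The core idea of fitting a locally quadratic cost from data can be illustrated with an ordinary least-squares fit of c ≈ a*y² + b*y + d to (measurement, cost) samples. This is a generic one-dimensional sketch using only the standard library, not the paper's algorithm:

```python
def fit_quadratic(ys, cs):
    """Least-squares fit of c ~ a*y^2 + b*y + d via the 3x3 normal
    equations, solved with Gaussian elimination (stdlib only)."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for y, c in zip(ys, cs):
        row = [y * y, y, 1.0]          # design-matrix row
        for i in range(3):
            rhs[i] += row[i] * c       # accumulate X^T c
            for j in range(3):
                A[i][j] += row[i] * row[j]   # accumulate X^T X
    # forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            rhs[r] -= f * rhs[col]
    # back substitution
    coef = [0.0] * 3
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(A[r][j] * coef[j] for j in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, d]

# synthetic samples from an assumed ground-truth cost c = 2*y^2 - 3*y + 1
ys = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
cs = [2 * y * y - 3 * y + 1 for y in ys]
print([round(v, 6) for v in fit_quadratic(ys, cs)])
```

In the paper's setting the fit is local (around the current trajectory) and multivariate, but the principle is the same: once the cost is quadratic in the measured quantity, the iLQG machinery can consume it directly.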

  11. Surface chemistry on interstellar oxide grains

    International Nuclear Information System (INIS)

    Denison, P.; Williams, D.A.

    1981-01-01

Detailed calculations are made to test the predictions of Duley, Millar and Williams (1978) concerning the chemical reactivity of interstellar oxide grains. A method is established for calculating interaction energies between atoms and the perfect crystal, with or without surface vacancy sites. The possibility of reactions between incident atoms and adsorbed atoms is investigated. It is concluded that H2 formation can occur on the perfect crystal surfaces, and that for other diatomic molecules the important formation sites are the Fsub(s)- and Vsup(2-)sub(s)-centres. The outline by Duley, Millar and Williams (1979) of interstellar oxide grain growth and destruction is justified by these calculations. (author)

  12. Optimal sizing of plug-in fuel cell electric vehicles using models of vehicle performance and system cost

    International Nuclear Information System (INIS)

    Xu, Liangfei; Ouyang, Minggao; Li, Jianqiu; Yang, Fuyuan; Lu, Languang; Hua, Jianfeng

    2013-01-01

Highlights: ► An analytical model for vehicle performance and power-train parameters. ► Quantitative relationships between vehicle performance and power-train parameters. ► Optimal sizing rules that help design an optimal PEM fuel cell power-train. ► On-road testing showing the performance of the proposed vehicle. -- Abstract: This paper presents an optimal sizing method for plug-in proton exchange membrane (PEM) fuel cell and lithium-ion battery (LIB) powered city buses. We propose a theoretical model describing the relationship between component parameters and vehicle performance. Analysis results show that, within the working range of the electric motor, the maximal velocity and driving distance depend linearly on the parameters of the components, e.g. fuel cell efficiency, fuel cell output power, stored hydrogen mass, vehicle auxiliary power, battery capacity, and battery average resistance. Accelerating time is also linearly dependent on the abovementioned parameters, except for those of the battery. Next, we minimize fixed and operating costs by formulating an optimal sizing problem that uses the requirements on vehicle performance as constraints. By solving this problem, we obtain several optimal sizing rules. Finally, we use these rules to design a plug-in PEM fuel cell city bus and present performance results obtained by on-road testing.
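The linear dependence of driving distance on component parameters can be illustrated with a bare-bones energy balance. This is an assumed model with illustrative numbers (the function, parameter names, and figures are not the paper's equations): usable on-board energy divided by net traction energy per km, with auxiliary power folded in via the average speed.

```python
def range_km(h2_kg, fc_eff, batt_kwh, aux_kw, avg_speed_kmh, drive_kwh_per_km):
    """Range estimate from an energy balance (illustrative model).
    Auxiliary power is converted to an equivalent kWh/km at the
    average speed; hydrogen energy uses the lower heating value."""
    LHV_H2 = 33.3  # kWh per kg of hydrogen (lower heating value)
    usable = h2_kg * LHV_H2 * fc_eff + batt_kwh
    per_km = drive_kwh_per_km + aux_kw / avg_speed_kmh
    return usable / per_km

d = range_km(h2_kg=10, fc_eff=0.5, batt_kwh=40, aux_kw=5,
             avg_speed_kmh=40, drive_kwh_per_km=1.2)
print(round(d, 1))
```

Because `usable` is linear in hydrogen mass, fuel cell efficiency, and battery capacity, the estimated range scales linearly in each of them, which is the qualitative behavior the abstract reports for the full model.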

  13. On the “cost-optimal levels” of energy performance requirements and its economic evaluation in Italy

    Directory of Open Access Journals (Sweden)

    Lamberto Tronchin

    2014-10-01

Full Text Available The European energy policies in the climate and energy package, known as the “20-20-20” targets, define ambitious but achievable national energy objectives. Among the Directives closely related to the 2020 targets, the EU Energy Performance of Buildings Directive (EPBD Recast, DIR 2010/31/EU) is the main European legislative instrument for improving the energy performance of buildings, taking into account outdoor climatic and local conditions, as well as indoor climate requirements and cost-effectiveness. The EPBD recast now requires that Member States ensure that minimum energy performance requirements for buildings are set “with a view to achieving cost-optimal levels”. The cost-optimal level shall be calculated in accordance with a comparative methodology framework, leaving the Member States to determine which of these calculations is to become the national benchmark against which national minimum energy performance requirements will be assessed. The European standards (ENs, Umbrella Document V7, prCEN/TR 15615) are intended to support the EPBD by providing the calculation methods and associated material to obtain the overall energy performance of a building. For Italy, the Energy Performance of Building Simulations (EPBS) must be calculated with the standard UNI TS 11300; the energy behaviour of buildings is referred to standard rather than real use, without local climate or dynamic energy evaluation. Since retrofitting of existing buildings offers significant opportunities for reducing energy consumption and greenhouse gas emissions, a retrofitting case study is described and the cost-optimal level EU procedure is analysed in an Italian context. Following this procedure, it is shown not only that the energy cost depends on several conditions, most of which are not indexed at national level, but also that the cost of improvement depends on local variables and contract tenders. The case study highlights the difficulties of applying EU rules, and

  14. Optimal insemination and replacement decisions to minimize the cost of pathogen-specific clinical mastitis in dairy cows.

    Science.gov (United States)

    Cha, E; Kristensen, A R; Hertl, J A; Schukken, Y H; Tauer, L W; Welcome, F L; Gröhn, Y T

    2014-01-01

    Mastitis is a serious production-limiting disease, with effects on milk yield, milk quality, and conception rate, and an increase in the risk of mortality and culling. The objective of this study was 2-fold: (1) to develop an economic optimization model that incorporates all the different types of pathogens that cause clinical mastitis (CM) categorized into 8 classes of culture results, and account for whether the CM was a first, second, or third case in the current lactation and whether the cow had a previous case or cases of CM in the preceding lactation; and (2) to develop this decision model to be versatile enough to add additional pathogens, diseases, or other cow characteristics as more information becomes available without significant alterations to the basic structure of the model. The model provides economically optimal decisions depending on the individual characteristics of the cow and the specific pathogen causing CM. The net returns for the basic herd scenario (with all CM included) were $507/cow per year, where the incidence of CM (cases per 100 cow-years) was 35.6, of which 91.8% of cases were recommended for treatment under an optimal replacement policy. The cost per case of CM was $216.11. The CM cases comprised (incidences, %) Staphylococcus spp. (1.6), Staphylococcus aureus (1.8), Streptococcus spp. (6.9), Escherichia coli (8.1), Klebsiella spp. (2.2), other treated cases (e.g., Pseudomonas; 1.1), other not treated cases (e.g., Trueperella pyogenes; 1.2), and negative culture cases (12.7). The average cost per case, even under optimal decisions, was greatest for Klebsiella spp. ($477), followed by E. coli ($361), other treated cases ($297), and other not treated cases ($280). This was followed by the gram-positive pathogens; among these, the greatest cost per case was due to Staph. aureus ($266), followed by Streptococcus spp. ($174) and Staphylococcus spp. ($135); negative culture had the lowest cost ($115). 
The model recommended treatment for

  15. Interstellar communication. II. Application to the solar gravitational lens

    Science.gov (United States)

    Hippke, Michael

    2018-01-01

We have shown in paper I of this series [1] that interstellar communication to nearby (pc) stars is possible at data rates of bits per second per Watt between a 1 m sized probe and a large receiving telescope (E-ELT, 39 m), when optimizing all parameters such as frequency at 300-400 nm. We now apply our framework of interstellar extinction and quantum state calculations for photon encoding to the solar gravitational lens (SGL), which enlarges the aperture (and thus the photon flux) of the receiving telescope by a factor of >10^9. For the first time, we show that the use of the SGL for communication purposes is possible. This was previously unclear because the Einstein ring is placed inside the solar coronal noise, and contributing factors are difficult to determine. We calculate point-spread functions, aperture sizes, heliocentric distance, and optimum communication frequency. The best wavelength for nearby (meter-sized telescopes, an improvement of 10^7 compared to using the same receiving telescope without the SGL. A 1 m telescope in the SGL can receive data at rates comparable to a km-class "normal" telescope.

  16. Design Optimization of Time- and Cost-Constrained Fault-Tolerant Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2005-01-01

    In this paper we present an approach to the design optimization of fault-tolerant embedded systems for safety-critical applications. Processes are statically scheduled and communications are performed using the time-triggered protocol. We use process re-execution and replication for tolerating...... transient faults. Our design optimization approach decides the mapping of processes to processors and the assignment of fault-tolerant policies to processes such that transient faults are tolerated and the timing constraints of the application are satisfied. We present several heuristics which are able...

  17. Cost optimization of a real-time GIS-based management system for hazardous waste transportation.

    Science.gov (United States)

Zhu, Yun; Lin, Che-Jen; Zhong, Yilong; Zhou, Qing; Chen, Chunyi

    2010-08-01

In this paper, the design and cost analysis of a real-time, geographical information system (GIS) based management system for hazardous waste transportation are described. The implementation of such a system can effectively prevent illegal dumping and support emergency responses during the transportation of hazardous wastes. A case study was conducted in Guangzhou, China to build a small-scale, real-time management system for waste transportation. Two alternatives were evaluated in terms of system capability and cost structure. Alternative I was the building of a complete real-time monitoring and management system within a governing agency, whereas alternative II combined the existing management framework with a commercial Telematics service to achieve the desired level of monitoring and management. The technological framework under consideration included locating transportation vehicles using a global positioning system (GPS), exchanging vehicle location data via the Internet and Intranet, managing hazardous waste transportation using a government management system, and responding to emergencies during transportation. Analysis of the cost structure showed that alternative II lowered the capital and operation costs by 38% and 56%, respectively, in comparison with alternative I. It is demonstrated that efficient management can be achieved through integration of the existing technological components, with additional cost benefits achieved by streamlined software interfacing.

  18. The Ultimate Destination: Choice of Interplanetary Exploration Path can define Future of Interstellar Spaceflight

    Science.gov (United States)

    Silin, D. V.

Manned interstellar spaceflight faces multiple challenges of great magnitude, among them extremely large distances and the lack of known habitable planets other than Earth. Many of these challenges apply, to the same or a lesser degree, to manned space exploration within the Solar System; if these issues are resolved on an interplanetary scale, a better position from which to pursue interstellar exploration will have been reached. However, very little progress (if any) has been achieved in manned space exploration since the end of the Space Race. There is no lack of proposed missions, but all of them require considerable technological and financial effort to implement while yielding no tangible benefits that would justify their costs. To overcome this obstacle, the highest priority in future space exploration plans should be assigned to the creation of added value in outer space. This goal can be reached if the costs of space transportation and of constructing and maintaining space-based structures are reduced. Achieving this requires mastering several key technologies, such as near-Earth object mining and space-based manufacturing, agriculture, and structure assembly. To keep cost and difficulty under control, the next exploration steps can be limited to nearby destinations such as geostationary orbit, low lunar orbit, the lunar surface, and the Sun-Earth L1 vicinity. Completion of such a program would create a solid foundation for further exploration and colonization of the Solar System, solve common challenges of interplanetary and interstellar spaceflight, and produce useful results for the majority of the human population. Another important result is that the perception of suitable destinations for interstellar missions would change significantly: if it becomes possible to create habitable, self-sufficient artificial environments in nearby interplanetary space, Earth-like habitable planets will no longer be required to expand beyond our Solar System.

  19. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals.

    Science.gov (United States)

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y; Miller, Patrick J O

    2013-01-01

Flying and terrestrial animals must spend energy to move while supporting their weight against gravity. In contrast, supported by buoyancy, aquatic animals can minimize the energy cost of supporting their body weight, and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies have suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save locomotion costs. We manipulated the body density of seals using detachable weights and floats, and compared the stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals made smaller stroke efforts to swim at a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density for minimizing the cost of transport in horizontal swimming by seals.

  20. Demand and generation cost uncertainty modelling in power system optimization studies

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Bruno Andre; Saraiva, Joao Tome [INESC Porto and Departamento de Engenharia Electrotecnica e Computadores, Faculdade de Engenharia da Universidade do Porto, FEUP, Campus da FEUP Rua Roberto Frias 378, 4200 465 Porto (Portugal)

    2009-06-15

This paper describes the formulations and the solution algorithms developed to include uncertainties in the generation cost function and in the demand on DC OPF studies. The uncertainties are modelled by trapezoidal fuzzy numbers and the solution algorithms are based on multiparametric linear programming techniques. These models are a development of an initial formulation detailed in several publications co-authored by the second author of this paper. We now develop a more complete model and a more accurate solution algorithm, in the sense that it is now possible to capture the widest possible range of values of the output variables, reflecting both demand and generation cost uncertainties. On the other hand, by modelling demand and generation cost uncertainties simultaneously, we represent in a more realistic way the volatility that is currently inherent to power systems. Finally, the paper includes a case study to illustrate the application of these models based on the IEEE 24 bus test system. (author)
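As a hedged illustration of the uncertainty model named in the abstract, the sketch below implements the standard four-point trapezoidal fuzzy number and its α-cut intervals; it is not the authors' code, and the numeric values are hypothetical.

```python
# Minimal sketch of trapezoidal fuzzy numbers, the representation used
# above to model uncertain demand and generation costs (standard
# textbook definitions, not the paper's implementation).

class TrapezoidalFuzzy:
    """Fuzzy number (a, b, c, d): membership rises linearly on [a, b],
    equals 1 on [b, c], and falls linearly on [c, d]."""

    def __init__(self, a, b, c, d):
        assert a <= b <= c <= d
        self.a, self.b, self.c, self.d = a, b, c, d

    def alpha_cut(self, alpha):
        """Interval of values with membership >= alpha (0 <= alpha <= 1)."""
        lo = self.a + alpha * (self.b - self.a)
        hi = self.d - alpha * (self.d - self.c)
        return lo, hi

    def __add__(self, other):
        # Fuzzy addition reduces to interval addition on every alpha-cut,
        # which for trapezoids is pointwise addition of the four corners.
        return TrapezoidalFuzzy(self.a + other.a, self.b + other.b,
                                self.c + other.c, self.d + other.d)

# Hypothetical bus demands of "about 100 MW" and "about 50 MW":
total = TrapezoidalFuzzy(90, 95, 105, 110) + TrapezoidalFuzzy(45, 48, 52, 55)
print(total.alpha_cut(1.0))  # core interval: (143.0, 157.0)
print(total.alpha_cut(0.0))  # support interval: (135.0, 165.0)
```

In a multiparametric LP setting such as the one described, each α-cut supplies an interval of right-hand-side values over which the DC OPF is re-solved parametrically.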

  1. REVISITING ULYSSES OBSERVATIONS OF INTERSTELLAR HELIUM

    International Nuclear Information System (INIS)

    Wood, Brian E.; Müller, Hans-Reinhard; Witte, Manfred

    2015-01-01

We report the results of a comprehensive reanalysis of Ulysses observations of interstellar He atoms flowing through the solar system, the goal being to reassess the interstellar He flow vector and to search for evidence of variability in this vector. We find no evidence that the He beam seen by Ulysses changes at all from 1994-2007. The direction of flow changes by no more than ∼0.°3 and the speed by no more than ∼0.3 km s⁻¹. A global fit to all acceptable He beam maps from 1994-2007 yields the following He flow parameters: V_ISM = 26.08 ± 0.21 km s⁻¹, λ = 75.54 ± 0.°19, β = –5.44 ± 0.°24, and T = 7260 ± 270 K; where λ and β are the ecliptic longitude and latitude direction in J2000 coordinates. The flow vector is consistent with the original analysis of the Ulysses team, but our temperature is significantly higher. The higher temperature somewhat mitigates a discrepancy that exists in the He flow parameters measured by Ulysses and the Interstellar Boundary Explorer, but does not resolve it entirely. Using a novel technique to infer photoionization loss rates directly from Ulysses data, we estimate a density of n_He = 0.0196 ± 0.0033 cm⁻³ in the interstellar medium.

  2. REVISITING ULYSSES OBSERVATIONS OF INTERSTELLAR HELIUM

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Brian E. [Naval Research Laboratory, Space Science Division, Washington, DC 20375 (United States); Müller, Hans-Reinhard [Department of Physics and Astronomy, Dartmouth College, Hanover, NH 03755 (United States); Witte, Manfred, E-mail: brian.wood@nrl.navy.mil [Max-Planck-Institute for Solar System Research, Katlenburg-Lindau D-37191 (Germany)

    2015-03-01

We report the results of a comprehensive reanalysis of Ulysses observations of interstellar He atoms flowing through the solar system, the goal being to reassess the interstellar He flow vector and to search for evidence of variability in this vector. We find no evidence that the He beam seen by Ulysses changes at all from 1994-2007. The direction of flow changes by no more than ∼0.°3 and the speed by no more than ∼0.3 km s⁻¹. A global fit to all acceptable He beam maps from 1994-2007 yields the following He flow parameters: V_ISM = 26.08 ± 0.21 km s⁻¹, λ = 75.54 ± 0.°19, β = –5.44 ± 0.°24, and T = 7260 ± 270 K; where λ and β are the ecliptic longitude and latitude direction in J2000 coordinates. The flow vector is consistent with the original analysis of the Ulysses team, but our temperature is significantly higher. The higher temperature somewhat mitigates a discrepancy that exists in the He flow parameters measured by Ulysses and the Interstellar Boundary Explorer, but does not resolve it entirely. Using a novel technique to infer photoionization loss rates directly from Ulysses data, we estimate a density of n_He = 0.0196 ± 0.0033 cm⁻³ in the interstellar medium.

  3. Interstellar propagation of low energy cosmic rays

    International Nuclear Information System (INIS)

    Cesarsky, C.J.

    1975-01-01

Wave-particle interactions prevent low energy cosmic rays from propagating at velocities much faster than the Alfven velocity, reducing their range by a factor of order 50. Therefore, supernova remnants cannot fill the neutral portions of the interstellar medium with 2 MeV cosmic rays.

  4. SILICATE COMPOSITION OF THE INTERSTELLAR MEDIUM

    Energy Technology Data Exchange (ETDEWEB)

    Fogerty, S.; Forrest, W.; Watson, D. M.; Koch, I. [Department of Physics and Astronomy, University of Rochester, Rochester, NY 14627 (United States); Sargent, B. A., E-mail: sfogerty@pas.rochester.edu [Center for Imaging Science and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623 (United States)

    2016-10-20

The composition of silicate dust in the diffuse interstellar medium and in protoplanetary disks around young stars informs our understanding of the processing and evolution of the dust grains leading up to planet formation. An analysis of the well-known 9.7 μm feature indicates that small amorphous silicate grains represent a significant fraction of interstellar dust and are also major components of protoplanetary disks. However, this feature is typically modeled assuming amorphous silicate dust of olivine and pyroxene stoichiometries. Here, we analyze interstellar dust with models of silicate dust that include non-stoichiometric amorphous silicate grains. Modeling the optical depth along lines of sight toward the extinguished objects Cyg OB2 No. 12 and ζ Ophiuchi, we find evidence for interstellar amorphous silicate dust with stoichiometry intermediate between olivine and pyroxene, which we simply refer to as “polivene.” Finally, we compare these results to models of silicate emission from the Trapezium and protoplanetary disks in Taurus.

  5. THE AGE OF THE LOCAL INTERSTELLAR BUBBLE

    International Nuclear Information System (INIS)

    Abt, Helmut A.

    2011-01-01

The Local Interstellar Bubble is an irregular region from 50 to 150 pc from the Sun in which the interstellar gas density is 10⁻²-10⁻³ of that outside the bubble and the interstellar temperature is 10⁶ K. Evidently most of the gas was swept out by one or more supernovae. I explored the stellar contents and ages of the region from visual double stars, spectroscopic doubles, single stars, open clusters, emission regions, X-ray stars, planetary nebulae, and pulsars. The bubble has three sub-regions. The region toward the galactic center has stars as early as O9.5 V and with ages of 2-4 Myr. It also has a pulsar (PSR J1856-3754) with a spin-down age of 3.76 Myr. That pulsar is likely to be the remnant of the supernova that drove away most of the gas. The central lobe has stars as early as B7 V and therefore an age of about 160 Myr or less. The Pleiades lobe has stars as early as B3 and therefore an age of about 50 Myr. There are no obvious pulsars that resulted from the supernovae that cleared out those areas. As found previously by Welsh and Lallement, the bubble has five B stars along its perimeter that show high-temperature ions of O VI and C II along their lines of sight, confirming its high interstellar temperature.

  6. Fluorescent excitation of interstellar H2

    NARCIS (Netherlands)

    Black, J.H.; Dishoeck, van E.F.

    1987-01-01

The infrared emission spectrum of H2 excited by ultraviolet absorption, followed by fluorescence, was investigated using comprehensive models of interstellar clouds to compute the spectrum and to assess the sensitivity of the intensity to various cloud properties, such as density, size, temperature,

  7. Organics in meteorites - Solar or interstellar?

    Science.gov (United States)

    Alexander, Conel M. O'D.; Cody, George D.; Fogel, Marilyn; Yabuta, Hikaru

    2008-10-01

    The insoluble organic material (IOM) in primitive meteorites is related to the organic material in interplanetary dust particles and comets, and is probably related to the refractory organic material in the diffuse interstellar medium. If the IOM is representative of refractory ISM organics, models for how and from what it formed will have to be revised.

  8. Optical observations of nearby interstellar gas

    Science.gov (United States)

    Frisch, P. C.; York, D. G.

    1984-11-01

Observations indicated that a cloud with a heliocentric velocity of approximately -28 km/s and a hydrogen column density possibly on the order of, or greater than, 5 × 10¹⁹ cm⁻² is located within the nearest 50 to 80 parsecs in the direction of Ophiuchus. This is a surprisingly large column density of material for this distance range. The patchy nature of the absorption from the cloud indicates that it may not be a feature with uniform properties, but rather one with small scale structure which includes local enhancements in the column density. This cloud is probably associated with the interstellar cloud at about the same velocity in front of the 20 parsec distant star alpha Oph (Frisch 1981, Crutcher 1982), and the weak interstellar polarization found in stars as near as 35 parsecs in this general region (Tinbergen 1982). These data also indicate that some portion of the -14 km/s cloud must also lie within the 100 parsec region. Similar observations of both Na I and Ca II interstellar absorption features were performed in other lines of sight. Similar interstellar absorption features were found in a dozen stars between 20 and 100 parsecs of the Sun.

  9. SILICATE COMPOSITION OF THE INTERSTELLAR MEDIUM

    International Nuclear Information System (INIS)

    Fogerty, S.; Forrest, W.; Watson, D. M.; Koch, I.; Sargent, B. A.

    2016-01-01

The composition of silicate dust in the diffuse interstellar medium and in protoplanetary disks around young stars informs our understanding of the processing and evolution of the dust grains leading up to planet formation. An analysis of the well-known 9.7 μm feature indicates that small amorphous silicate grains represent a significant fraction of interstellar dust and are also major components of protoplanetary disks. However, this feature is typically modeled assuming amorphous silicate dust of olivine and pyroxene stoichiometries. Here, we analyze interstellar dust with models of silicate dust that include non-stoichiometric amorphous silicate grains. Modeling the optical depth along lines of sight toward the extinguished objects Cyg OB2 No. 12 and ζ Ophiuchi, we find evidence for interstellar amorphous silicate dust with stoichiometry intermediate between olivine and pyroxene, which we simply refer to as “polivene.” Finally, we compare these results to models of silicate emission from the Trapezium and protoplanetary disks in Taurus.

  10. Interstellar Extinction in the Gaia Photometric Systems

    Directory of Open Access Journals (Sweden)

    Bridžius A.

    2003-12-01

Three medium-band photometric systems proposed for the Gaia space mission are intercompared in determining color excesses for stars of spectral classes from O to M at V = 18 mag. A possibility of obtaining a three-dimensional map of the interstellar extinction is discussed.

  11. MEASURING THE FRACTAL STRUCTURE OF INTERSTELLAR CLOUDS

    NARCIS (Netherlands)

    VOGELAAR, MGR; WAKKER, BP; SCHWARZ, UJ

    1991-01-01

    To study the structure of interstellar clouds we used the so-called perimeter-area relation to estimate fractal dimensions. We studied the reliability of the method by applying it to artificial fractals and discuss some of the problems and pitfalls. Results for two different cloud types
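The perimeter-area method mentioned above can be sketched in a few lines: for contour shapes extracted from a cloud map, P ∝ A^(D/2), so the fractal dimension D is twice the slope of log P against log A. This is a generic illustration of the standard relation, not the authors' code; the circle example is a sanity check, since smooth Euclidean contours should give D ≈ 1.

```python
# Sketch of the perimeter-area relation for estimating a fractal
# dimension of cloud contours (standard method, illustrative only).
import numpy as np

def fractal_dimension(perimeters, areas):
    """Least-squares estimate of D from matched contour perimeter/area
    lists, using log P = (D/2) log A + const."""
    logP = np.log(np.asarray(perimeters, dtype=float))
    logA = np.log(np.asarray(areas, dtype=float))
    slope, _ = np.polyfit(logA, logP, 1)  # fit a straight line in log-log
    return 2.0 * slope

# Smooth (non-fractal) contours such as circles give D close to 1:
radii = np.array([1.0, 2.0, 5.0, 10.0])
print(fractal_dimension(2 * np.pi * radii, np.pi * radii**2))  # ~1.0
```

Applying this to real maps additionally requires choosing an intensity threshold and tracing closed contours, which is where the pitfalls discussed in the abstract (resolution limits, contour closure) arise.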

  12. INTERSTELLAR MAGNETIC FIELD SURROUNDING THE HELIOPAUSE

    International Nuclear Information System (INIS)

    Whang, Y. C.

    2010-01-01

This paper presents a three-dimensional analytical solution, in the limit of very low plasma β-ratio, for the distortion of the interstellar magnetic field surrounding the heliopause. The solution is obtained using a line-dipole method, i.e., the integration of a point dipole along a semi-infinite line; it represents the magnetic field perturbation caused by the presence of the heliopause. The solution allows the undisturbed magnetic field to have any inclination angle. The heliosphere is considered as having a blunt-nosed geometry on the upwind side; it asymptotically approaches a cylindrical geometry with an open exit for the continuous outflow of the solar wind on the downwind side. The heliopause is treated as a magnetohydrodynamic tangential discontinuity; the interstellar magnetic field lines at the boundary are tangential to the heliopause. The interstellar magnetic field is substantially distorted by the presence of the heliopause. The solution shows the draping of the field lines around the heliopause. The magnetic field strength varies substantially near the surface of the heliopause. The effect on the magnetic field due to the presence of the heliopause penetrates very deep into interstellar space; the depth of penetration is of the same order of magnitude as the scale length of the heliosphere.

  13. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  14. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  15. Optimal decoding and information transmission in Hodgkin–Huxley neurons under metabolic cost constraints

    Czech Academy of Sciences Publication Activity Database

    Košťál, Lubomír; Kobayashi, R.

    2015-01-01

    Roč. 136, Oct 2015 (2015), s. 3-10 ISSN 0303-2647 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : neuronal coding * information transfer * optimal decoding Subject RIV: BD - Theory of Information Impact factor: 1.495, year: 2015

  16. An Invocation Cost Optimization Method for Web Services in Cloud Environment

    OpenAIRE

    Qi, Lianyong; Yu, Jiguo; Zhou, Zhili

    2017-01-01

The advent of cloud computing technology has enabled users to invoke various web services in a “pay-as-you-go” manner. However, due to the flexible pricing model of web services in the cloud environment, a cloud user's service invocation cost may be influenced by many factors (e.g., service invocation time), which brings a great challenge for cloud users' cost-effective web service invocation. In view of this challenge, in this paper, we first investigate the multiple factors that influence the in...

  17. The influence of the interstellar medium on climate and life

    International Nuclear Information System (INIS)

    Talbot, R.J. Jr.

    1980-01-01

Recent studies of the gas and dust between the stars, the interstellar medium, reveal a complex chemistry which indicates that prebiotic organic chemistry is ubiquitous. The relationship between this interstellar chemistry and the organic chemistry of the early solar system and the Earth is explored. The interstellar medium is also considered likely to have a continuing influence upon the climate of the Earth and other planets. Life forms as we know them are not only descendants of the organic evolution begun in the interstellar medium; their continuing evolution is also molded through occasional interactions between the interstellar medium, the Sun and the climate on Earth. (author)

  18. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Directory of Open Access Journals (Sweden)

    Charles I Nkeki

    2014-11-01

This paper aims to study a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation while the contribution of the pension plan member (PPM) is tax exempt. It is assumed that the flow of contributions of a PPM is invested in a market characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of a PPM's portfolios in mean-variance space are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involves an inter-temporal hedging term that will offset any shocks to the stochastic salary of the PPM.
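The cash-plus-stock mean-variance trade-off underlying this kind of model can be illustrated with the classic static result that the optimal risky fraction is (μ - r) / (γσ²). This is a generic textbook sketch, not the paper's stochastic model, and the parameter values are hypothetical.

```python
# Hedged illustration of the single-stock/cash mean-variance trade-off:
# for expected stock return mu, risk-free rate r, volatility sigma and
# risk aversion gamma, the static optimal risky fraction is
# (mu - r) / (gamma * sigma**2). Values below are hypothetical.

def optimal_risky_fraction(mu, r, sigma, gamma):
    """Fraction of wealth held in the stock; the remainder stays in the
    cash account. Larger risk aversion gamma shrinks the stock holding."""
    return (mu - r) / (gamma * sigma ** 2)

w = optimal_risky_fraction(mu=0.08, r=0.03, sigma=0.2, gamma=2.5)
print(w)  # about 0.5, i.e. roughly half the fund in the stock
```

The paper's dynamic setting adds taxation, administrative costs and a stochastic salary, which is what produces the additional inter-temporal hedging term mentioned in the abstract.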

  19. Structural optimization procedure of a composite wind turbine blade for reducing both material cost and blade weight

    Science.gov (United States)

    Hu, Weifei; Park, Dohyun; Choi, DongHoon

    2013-12-01

A composite blade structure for a 2 MW horizontal-axis wind turbine is optimally designed. The design requirements are to simultaneously minimize material cost and blade weight while satisfying constraints on stress ratio, tip deflection, fatigue life and laminate layup requirements. The stress ratio and tip deflection under extreme gust loads and the fatigue life under a stochastic normal wind load are evaluated. A blade-element wind load model is proposed to capture the wind pressure difference due to the change in blade height during rotor rotation. For fatigue life evaluation, the stress result of an implicit nonlinear dynamic analysis under a time-varying fluctuating wind is converted to histograms of the mean and amplitude of the maximum stress ratio using the rainflow counting algorithm. Miner's rule is employed to predict the fatigue life. After integrating and automating the whole analysis procedure, an evolutionary algorithm is used to solve the discrete optimization problem.
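The fatigue-life step described above can be sketched as follows: rainflow-counted stress cycles are combined with an S-N curve through Miner's linear damage rule. The S-N parameters and the cycle histogram below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of fatigue-life prediction with Miner's rule, the method
# named above. The S-N curve constants C and m are hypothetical.

def cycles_to_failure(amplitude, C=1e21, m=9.0):
    """Illustrative Basquin-form S-N curve: N = C * S**(-m)."""
    return C * amplitude ** (-m)

def miner_damage(cycle_histogram):
    """cycle_histogram: iterable of (stress_amplitude, cycle_count) pairs
    from rainflow counting. Miner's rule sums n_i / N_i; failure is
    predicted when the accumulated damage reaches 1."""
    return sum(n / cycles_to_failure(s) for s, n in cycle_histogram)

# Hypothetical counts per stress-amplitude bin over one reference year:
histogram = [(10.0, 1e6), (15.0, 1e5), (20.0, 1e4)]
damage_per_year = miner_damage(histogram)
predicted_life_years = 1.0 / damage_per_year
```

In the paper's procedure the histogram would come from rainflow counting of the simulated stress time series, with mean-stress effects handled before the damage sum; that correction is omitted here for brevity.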

  20. An Improved Particle Swarm Optimization for Selective Single Machine Scheduling with Sequence Dependent Setup Costs and Downstream Demands

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

This paper investigates a special single machine scheduling problem derived from practical industries, namely, selective single machine scheduling with sequence-dependent setup costs and downstream demands. Different from traditional single machine scheduling, this problem further takes into account the selection of jobs and the demands of downstream lines. The problem is formulated as a mixed integer linear programming model and an improved particle swarm optimization (PSO) is proposed to solve it. To enhance the exploitation ability of the PSO, an adaptive neighborhood search with varying search depth is developed based on the decision characteristics of the problem. To improve search diversity and make the proposed PSO capable of escaping local optima, an elite solution pool is introduced into the PSO. Computational results based on extensive test instances show that the proposed PSO can obtain optimal solutions for small problems and outperforms CPLEX and some other powerful algorithms on large problems.
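For readers unfamiliar with the baseline algorithm, the sketch below shows the generic PSO velocity/position update that improvements like the above build on; the paper's adaptive neighborhood search and elite pool are not reproduced, and all constants are conventional defaults rather than the authors' settings.

```python
# Generic particle swarm optimization for continuous minimization
# (illustrative baseline only; not the improved PSO of the paper).
import random

def pso(objective, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # personal bests
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (pbest) + social pull (gbest).
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = objective(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# Minimize the sphere function as a smoke test:
best, best_val = pso(lambda x: sum(xi * xi for xi in x), dim=3, bounds=(-5, 5))
```

A scheduling application would additionally need an encoding from particle positions to job selections and sequences, which is where most of the problem-specific design effort in the paper lies.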

  1. Spatial data mining of pipeline data provides new wave of O and M capital cost optimization opportunities

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, D. [QM4 Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    This paper discussed the cost optimization benefits of spatial data mining in upstream oil and gas pipeline operations. The data mining method was used to enhance the characterization and management of internal corrosion risk and to optimize pipeline corrosion inhibition, as well as to identify pipeline network hydraulic bottlenecks. The data mining method formed part of a quality-based pipeline integrity management program. Results of the data mining study highlighted trends in well operational data and historical pipeline failure events. Use of the methodology resulted in significant savings. It was demonstrated that the key to a successful pipeline management model is a complete inventory characterization and determination of failure susceptibility profiles through the application of rigorous data standards. 4 tabs., 8 figs.

  2. A Cost-Effective Approach to Optimizing Microstructure and Magnetic Properties in Ce₁₇Fe₇₈B₆ Alloys.

    Science.gov (United States)

    Tan, Xiaohua; Li, Heyun; Xu, Hui; Han, Ke; Li, Weidan; Zhang, Fang

    2017-07-28

Optimizing fabrication parameters for rapid solidification of Re-Fe-B (Re = rare earth) alloys can lead to nanocrystalline products with hard magnetic properties without any heat treatment. In this work, we enhanced the magnetic properties of Ce₁₇Fe₇₈B₆ ribbons by engineering both the microstructure and the volume fraction of the Ce₂Fe₁₄B phase through optimization of the chamber pressure and the wheel speed used to quench the liquid. We explored the relationship between these two parameters (chamber pressure and wheel speed), and proposed an approach to identifying the experimental conditions most likely to yield a homogeneous microstructure and reproducible magnetic properties. Optimized experimental conditions resulted in a microstructure with homogeneously dispersed Ce₂Fe₁₄B and CeFe₂ nanocrystals. The best magnetic properties were obtained at a chamber pressure of 0.05 MPa and a wheel speed of 15 m·s⁻¹. Without the conventional heat treatment that is usually required, key magnetic properties were maximized by optimizing the processing parameters of rapid solidification in a cost-effective manner.

  3. Optimal distribution feeder reconfiguration for increasing the penetration of plug-in electric vehicles and minimizing network costs

    International Nuclear Information System (INIS)

    Kavousi-Fard, Abdollah; Abbasi, Alireza; Rostami, Mohammad-Amin; Khosravi, Abbas

    2015-01-01

The appearance of PEVs (Plug-in Electric Vehicles) in the future transportation sector brings forward opportunities and challenges from the grid perspective. Increased utilization of PEVs will result in problems such as greater total losses, an unbalanced load factor, feeder congestion and voltage drops. PEVs are mobile energy storage units dispersed all over the network, with benefits to both owners and utilities where V2G (Vehicle-to-Grid) operation is possible. The intelligent bidirectional power flow between the grid and a large number of vehicles adds complexity to the system and requires operative tools to schedule V2G energy and subdue PEV impacts. In this paper, DFR (Distribution Feeder Reconfiguration) is utilized to optimally coordinate PEV operation in a stochastic framework. Uncertainty in PEV characteristics can arise from several sources, from the location and time of grid connection to the driving pattern and battery SoC (State-of-Charge). The proposed stochastic problem is solved with a self-adaptive evolutionary swarm algorithm based on the SSO (Social Spider Optimization) algorithm. Numerical studies verify the efficacy of the proposed DFR in improving system performance and the optimal dispatch of V2G. - Highlights: • Consideration of the effect of PEVs on distribution feeder reconfiguration. • Increasing the penetration of PEVs. • Introduction of a new artificial optimization algorithm. • Modeling the uncertainty in the network. • Investigation of the degradation cost of batteries in V2G technology.

  4. Balancing development costs and sales to optimize the development time of product line additions

    NARCIS (Netherlands)

    Langerak, F.; Griffin, A.; Hultink, E.J.

    2010-01-01

    Development teams often use mental models to simplify development time decision making because a comprehensive empirical assessment of the trade-offs across the metrics of development time, development costs, proficiency in market-entry timing, and new product sales is simply not feasible.

  5. Software for optimizing the cost of forming tools; Kosten fuer Umformwerkzeuge softwaretechnisch durchgehend im Griff

    Energy Technology Data Exchange (ETDEWEB)

    Fries, Daniel [Autoform Engineering GmbH, Zuerich (Switzerland)

    2009-08-31

In spite of some major technical challenges, market prices of tools are so low that it may be cheaper to buy a ready-made tool on the world market than to buy the raw material for it. An example from Daimler AG shows, however, that tool cost calculation may still be economically interesting in the car manufacturing and component manufacturing industries. (orig.)

  6. Challenges of healthcare administration: optimizing quality and value at an affordable cost in pediatric cardiology.

    Science.gov (United States)

    Cohen, Mitchell I; Frias, Patricio A

    2018-01-01

The purpose of this review is to explore the paradigm shift in healthcare delivery that will need to take place over the next few years, away from supply-driven health care and toward quality-driven, transparent health care whose focus is the consumer's best interest. The current healthcare system is fragmented and costs continue to rise. The best way to contain costs is to improve quality for the consumer, the patient. Physicians and hospitals need to align in a team-based approach that allows physicians to understand current costs and how to strive toward a focus on healthcare outcomes. Pediatric cardiology is a unique discipline that cares for patients with complex congenital conditions spanning their lifetime, and it involves not just cardiology but also surgery, intensive care, anesthesia, nursing, and a host of inpatient and ambulatory services. Understanding what matters to the patient and his/her family, and presenting quality outcomes in a transparent fashion, will gradually allow the emphasis to shift away from physician visits, tests ordered, and procedures performed. This can only be achieved if physicians are given the appropriate tools to understand costs, value, and outcomes, and if models are in place in which hospitals and physicians are aligned. The transformation to a value-based healthcare system is beginning; pediatric cardiologists need to be educated, given the appropriate resources, and provided with feedback, and patients need to be part of the solution so that care providers can understand what matters most to them.

  7. Optimizing droop coefficients for minimum cost operation of islanded micro-grids

    DEFF Research Database (Denmark)

    Sanseverino, E. Riva; Tran, Q. T.T.; Zizzo, G.

    2017-01-01

    This paper shows how minimum cost energy management can be carried out for islanded micro-grids considering an expanded state that also includes the system's frequency. Each of the configurations outputted by the energy management system at each hour are indeed technically sound and coherent from...

  8. TRIANGULATION OF THE INTERSTELLAR MAGNETIC FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Schwadron, N. A.; Moebius, E. [University of New Hampshire, Durham, NH 03824 (United States); Richardson, J. D. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Burlaga, L. F. [Goddard Space Flight Center, Greenbelt, MD 20771 (United States); McComas, D. J. [Southwest Research Institute, San Antonio, TX 78228 (United States)

    2015-11-01

    Determining the direction of the local interstellar magnetic field (LISMF) is important for understanding the heliosphere’s global structure, the properties of the interstellar medium, and the propagation of cosmic rays in the local galactic medium. Measurements of interstellar neutral atoms by Ulysses for He and by SOHO/SWAN for H provided some of the first observational insights into the LISMF direction. Because secondary neutral H is partially deflected by the interstellar flow in the outer heliosheath and this deflection is influenced by the LISMF, the relative deflection of H versus He provides a plane—the so-called B–V plane in which the LISMF direction should lie. Interstellar Boundary Explorer (IBEX) subsequently discovered a ribbon, the center of which is conjectured to be the LISMF direction. The most recent He velocity measurements from IBEX and those from Ulysses yield a B–V plane with uncertainty limits that contain the centers of the IBEX ribbon at 0.7–2.7 keV. The possibility that Voyager 1 has moved into the outer heliosheath now suggests that Voyager 1's direct observations provide another independent determination of the LISMF. We show that the LISMF direction measured by Voyager 1 is >40° off from the IBEX ribbon center and the B–V plane. Taking into account the temporal gradient of the field direction measured by Voyager 1, we extrapolate to a field direction that passes directly through the IBEX ribbon center (0.7–2.7 keV) and the B–V plane, allowing us to triangulate the LISMF direction and estimate the gradient scale size of the magnetic field.

  9. Dust in the Diffuse Neutral Interstellar Medium

    Science.gov (United States)

    Sofia, Ulysses J.

    2008-05-01

    Studies of interstellar dust have always relied heavily upon Laboratory Astrophysics for interpretation. Laboratory values, in the broad sense that includes theory, are needed for the most basic act of measuring interstellar abundances, to the more complex determination of what grains are responsible for particular extinction. The symbiotic relationship between astronomical observations and Laboratory Astrophysics has prompted both fields to move forward, especially in the era of high-resolution ultraviolet spectroscopy when new elemental species could be interpreted and observations were able to show the limits of laboratory determinations. Thanks to this synergy, we currently have a good idea of the quantity of the most abundant elements incorporated into dust in diffuse neutral interstellar clouds: carbon, oxygen, iron, silicon and magnesium. Now the task is to figure out how, chemically and physically, those elements are integrated into interstellar grains. We can do this by comparing extinction curves to grain populations in radiative transfer models. The limitation at the present time is the availability of optical constants in the infrared through ultraviolet for species that are likely to exist in dust, i.e., those that are easy to form in the physical environments around stars and in molecular clouds. Extinction in some lines of sight can be fit within current abundance limits and with the optical constants that are available. However the inability to reproduce other extinction curves suggests that optical constants can be improved, either in quality for compounds that have been measured, or quantity in the sense of providing data for more materials. This talk will address the current state and the future of dust studies in the diffuse neutral interstellar medium. This work is supported by the grant HST-AR-10979.01-A from the Space Telescope Science Institute to Whitman College.

  10. Transport Studies Enabling Efficiency Optimization of Cost-Competitive Fuel Cell Stacks (aka AURORA: Areal Use and Reactant Optimization at Rated Amperage)

    Energy Technology Data Exchange (ETDEWEB)

    Conti, Amedeo [Nuvera Fuel Cells, Inc., Billerica, MA (United States); Dross, Robert [Nuvera Fuel Cells, Inc., Billerica, MA (United States)

    2013-12-06

    Hydrogen fuel cells are recognized as one of the most viable solutions for mobility in the 21st century; however, there are technical challenges that must be addressed before the technology can become available for mass production. One of the most demanding aspects is the cost of present-day fuel cells, which is prohibitively high for the majority of envisioned markets. The fuel cell community recognizes two major drivers for effective cost reduction: (1) decreasing the noble metal content, and (2) increasing the power density in order to reduce the number of cells needed to achieve a specified power level. To date, the majority of development work aimed at increasing the value metric (i.e. W/mg-Pt) has focused on the reduction of precious metal loadings, and this important work continues. Efforts to increase power density have been limited by two main factors: (1) performance limitations associated with mass transport barriers, and (2) the historical prioritization of efficiency over cost. This program is driven by commercialization imperatives and challenges both of these factors. The premise of this program, supported by proprietary cost modeling by Nuvera, is that DOE 2015 cost targets can be met by simultaneously exceeding DOE 2015 targets for platinum loadings (using materials with less than 0.2 mg-Pt/cm2) and MEA power density (operating at higher than 1.0 Watt/cm2). The approach of this program is to combine Nuvera’s stack technology, which has demonstrated the ability to operate stably at high current densities (> 1.5 A/cm2), with low platinum loading MEAs developed by Johnson Matthey in order to maximize Pt-specific power density and reduce stack cost. A predictive performance model developed by PSU/UTK is central to the program, allowing the team to study the physics and optimize materials/conditions specific to low Pt loading electrodes and ultra-high current density operation.
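
    The Pt-specific value metric mentioned above (W/mg-Pt) follows directly from the two program targets; a minimal sketch of the arithmetic (the function name is illustrative, not from the report):

```python
def pt_specific_power(power_density_w_cm2, pt_loading_mg_cm2):
    """Value metric in W/mg-Pt: MEA power density per unit platinum loading."""
    return power_density_w_cm2 / pt_loading_mg_cm2

# Program targets from the abstract: >1.0 W/cm2 at <0.2 mg-Pt/cm2,
# i.e. a Pt-specific power density of at least 5 W/mg-Pt.
target = pt_specific_power(1.0, 0.2)
```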

  11. Cost Optimization on Energy Consumption of Punching Machine Based on Green Manufacturing Method at PT Buana Intan Gemilang

    Directory of Open Access Journals (Sweden)

    Prillia Ayudia

    2017-01-01

    Full Text Available PT Buana Intan Gemilang is a company engaged in the textile industry. Curtain textile production requires a punching machine to control the fabric process. Because the operator still works manually, the cost of electrical energy consumption is high. To solve this problem, green manufacturing can be implemented on the punching machine. The method is, first, to identify the company's color category (black, brown, gray or green) using a questionnaire; second, to select the improvement area to be optimized and analyzed, focusing at this stage on the energy and technology areas; and third, to apply the process by modifying the technology, implementing an automation system on the punching machine so that the green level of the machine process increases. The result obtained after implementing the method is a saving in electrical energy cost of Rp 1.068.159/day.

  12. Quantum cost optimized design of 4-bit reversible universal shift register using reduced number of logic gate

    Science.gov (United States)

    Maity, H.; Biswas, A.; Bhattacharjee, A. K.; Pal, A.

    In this paper, we propose a quantum cost (QC) optimized design of a 4-bit reversible universal shift register (RUSR) using a reduced number of reversible logic gates. The proposed design is very useful in quantum computing due to its low QC, small number of reversible logic gates and low delay. The QC, number of gates and garbage outputs (GOs) of the proposed work are 64, 8 and 16, respectively. The improvement over previous designs is also presented: compared with the latest reported results, the QC is improved by 5.88% to 70.9% and the number of gates by 60% to 83.33%.

  13. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    Science.gov (United States)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not impacting and potentially benefiting ATC. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper will discuss the approach to minimizing TASAR's cost for implementation and accelerating readiness for near-term implementation.

  14. Data of cost-optimal solutions and retrofit design methods for school renovation in a warm climate.

    Science.gov (United States)

    Zacà, Ilaria; Tornese, Giuliano; Baglivo, Cristina; Congedo, Paolo Maria; D'Agostino, Delia

    2016-12-01

    "Efficient Solutions and Cost-Optimal Analysis for Existing School Buildings" (Paolo Maria Congedo, Delia D'Agostino, Cristina Baglivo, Giuliano Tornese, Ilaria Zacà) [1] is the paper that refers to this article. It reports the data related to the establishment of several variants of energy efficient retrofit measures selected for two existing school buildings located in the Mediterranean area. In compliance with the cost-optimal analysis described in the Energy Performance of Buildings Directive and its guidelines (EU Directive, EU 244) [2], [3], these data are useful for the integration of renewable energy sources and high performance technical systems for school renovation. The data of cost-efficient high performance solutions are provided in tables that are explained within the following sections. The data focus on the school refurbishment sector, to which European policies and investments are directed. A methodological approach already used in previous studies about new buildings is followed (Baglivo Cristina, Congedo Paolo Maria, D'Agostino Delia, Zacà Ilaria, 2015; Ilaria Zacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo; Baglivo Cristina, Congedo Paolo Maria, D'Agostino Delia, Zacà Ilaria, 2015; Ilaria Zacà, Delia D'Agostino, Paolo Maria Congedo, Cristina Baglivo, 2015; Paolo Maria Congedo, Cristina Baglivo, Ilaria Zacà, Delia D'Agostino, 2015) [4], [5], [6], [7], [8]. The files give the cost-optimal solutions for a kindergarten (REF1) and a nursery (REF2) school located in Sanarica and Squinzano (province of Lecce, Southern Italy). The two reference buildings differ in construction period, materials and systems. The eleven tables provided contain data about the localization of the buildings, geometrical features and thermal properties of the envelope, as well as the energy efficiency measures related to walls, windows, heating, cooling, DHW and renewables. Output values of energy consumption, gas emission and costs are given for a

  15. The loop I superbubble and the local interstellar magnetic field

    International Nuclear Information System (INIS)

    Frisch, Priscilla Chapman

    2014-01-01

    Recent data on the interstellar magnetic field in the low density nearby interstellar medium suggest a new perspective for understanding interstellar clouds within 40 pc. The directions of the local interstellar magnetic field found from measurements of optically polarized starlight and the very local field found from the Ribbon of energetic neutral atoms discovered by IBEX nearly agree. The geometrical relation between the local magnetic field, the positions and kinematics of local interstellar clouds, and the Loop I S1 superbubble, suggest that the Sun is located in the boundary of this evolved superbubble. The quasiperpendicular angle between the bulk kinematics and magnetic field of the local ISM indicates that a complete picture of low density interstellar clouds needs to include information on the interstellar magnetic field.

  16. Optimal design of a system containing mixed redundancies with respect to reliability and cost

    International Nuclear Information System (INIS)

    Misra, K.B.

    1975-01-01

    A nuclear system generally consists of subsystems that may employ any of the partial, standby, and active redundancies, and is, therefore, a system with mixed types of redundancy. Optimization of the reliability or availability of such systems at the design stage is a difficult problem. There appears to be no published work on the optimal design of maintained systems consisting of mixed redundancies. An attempt is therefore made to present the basis of design and the solution technique to achieve this. An algorithm is described which makes the solution of this mathematically difficult problem possible. Some examples are demonstrated. To achieve further efficiency, a study was organized and recommendations for obtaining a minimum solution time are provided. Although, in the illustration, only linear constraints and reliability as the only design parameter have been considered, the algorithm works well with nonlinear constraints and can be used with other design parameters as well. (author)

  17. Optimal Cost Avoidance Investment and Pricing Strategies for Performance-Based Post-Production Service Contracts

    Science.gov (United States)

    2011-04-30

    a BS degree in Mathematics and an MS degree in Statistics and Financial and Actuarial Mathematics from Kiev National Taras Shevchenko University...degrees from Rutgers University in Industrial Engineering (PhD and MS) and Statistics (MS) and from Universidad Nacional Autonoma de Mexico in Actuarial ...Science. His research efforts focus on developing mathematical models for the analysis, computation, and optimization of system performance with

  18. Optimal design of uptime-guarantee contracts under IGFR valuations and convex costs

    OpenAIRE

    Hezarkhni, Behzad

    2016-01-01

    An uptime-guarantee contract commits a service provider to maintain the functionality of a customer’s equipment at least for certain fraction of working time during a contracted period. This paper addresses the optimal design of uptime-guarantee contracts for the service provider when the customer’s valuation of a contract with a given guaranteed uptime level has an Increasing Generalized Failure Rate (IGFR) distribution. We first consider the case where the service provider proposes only one...

  19. Maintenance cost optimization in condition based maintenance: a case study for critical facilities

    OpenAIRE

    Filippo De Carlo; Maria Antonietta Arleo

    2013-01-01

    The increasing availability required of industrial plants and the limited budget often available to assure it require a careful formulation of maintenance optimization models. This need is primary for process plants, for which minimization of stops and maximization of availability are essential for ensuring targeted production and, therefore, profitability. In this context, the choice of the maintenance strategy is hence fundamental, depending on the system features and then on the eff...

  20. From Consumption to Prosumption - Operational Cost Optimization for Refrigeration System With Heat Waste Recovery

    DEFF Research Database (Denmark)

    Minko, Tomasz; Garcia, Jesus Lago; Bendtsen, Jan Dimon

    2017-01-01

    Implementation of liquid cooling transforms a refrigeration system into a combined cooling and heating system. Reclaimed heat can be used for building heating purposes or can be sold. Carbon dioxide based refrigeration systems are considered to have a particularly high potential for becoming efficient...... heat energy producers. In this paper a CO2 system that operates in the subcritical region is examined. The modelling approach is presented and used for operation optimisation by way of non-linear model predictive control techniques. Assuming that the heat is sold when using both objective functions......, it turns out that the system has negative operational cost. When the Cost Minimization objective function is used, daily revenue is about 7.9 [eur]; for the Prosumption one it is 11.9 [eur].

  1. Fuzzy Stochastic Optimal Guaranteed Cost Control of Bio-Economic Singular Markovian Jump Systems.

    Science.gov (United States)

    Li, Li; Zhang, Qingling; Zhu, Baoyan

    2015-11-01

    This paper establishes a bio-economic singular Markovian jump model by considering the price of the commodity as a Markov chain. A controller is designed for this system such that its biomass reaches the specified range with the least cost in finite time. Firstly, the system is described by a Takagi-Sugeno fuzzy model. Secondly, a new design method for fuzzy state-feedback controllers is presented to ensure not only the regularity, nonimpulsiveness, and stochastic singular finite-time boundedness of this kind of system, but also an upper bound on the cost function, in the form of strict linear matrix inequalities. Finally, two examples, including a practical example of eel seedling breeding, are given to illustrate the merit and usability of the approach proposed in this paper.

  2. The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization

    OpenAIRE

    B. Marasović; S. Pivac; S. V. Vukasović

    2015-01-01

    Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with the modern portfolio theory maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing portfolio must be included in any realistic analysis. In this paper rebalancing an investment portfolio in the pr...

  3. DATA MINING METHODOLOGY FOR DETERMINING THE OPTIMAL MODEL OF COST PREDICTION IN SHIP INTERIM PRODUCT ASSEMBLY

    Directory of Open Access Journals (Sweden)

    Damir Kolich

    2016-03-01

    Full Text Available In order to accurately predict the costs of the thousands of interim products that are assembled in shipyards, it is necessary to use skilled engineers to develop detailed Gantt charts for each interim product separately, which takes many hours. It is helpful to develop a prediction tool to estimate the cost of interim products accurately and quickly without the need for skilled engineers. This will drive down shipyard costs and improve competitiveness. Data mining is used extensively for developing prediction models in other industries. Since ships consist of thousands of interim products, it is logical to develop a data mining methodology for a shipyard or any other manufacturing industry where interim products are produced. The methodology involves analysis of existing interim products and data collection. Pre-processing and principal component analysis are done to make the data “user-friendly” for later prediction processing and the development of both accurate and robust models. The support vector machine is demonstrated to be the better model when the number of tuples is low. However, as the number of tuples increases beyond 10000, the artificial neural network model is recommended.

  4. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. On the assumption that vibration directly relates to operation reliability and to the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of the vibration with respect to the operating point was studied, and it can be concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. In this shape there is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and away from resonance the vibration also scales with the square of the rotation speed. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function is added to the traditional model to form the new one. Compared with the traditional method, the results show that the new model corrects the schedule produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
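
    The bathtub-shaped vibration law and the reliability normalization described above can be sketched as follows (the functional form, the vibration floor and the shape parameter k are assumptions for illustration, not the paper's fitted model):

```python
def vibration_level(q_frac, n_frac, v_min=1.0, k=8.0):
    """Idealized bathtub-shaped vibration versus flow.

    q_frac: flow as a fraction of best-efficiency-point (BEP) flow
    n_frac: rotation speed as a fraction of rated speed
    v_min:  vibration floor inside the 80-100% BEP sweet spot (assumed)
    k:      steepness of the bathtub walls (assumed shape parameter)
    """
    # Distance outside the 0.8-1.0 BEP sweet spot drives vibration up.
    if q_frac < 0.8:
        excess = 0.8 - q_frac
    elif q_frac > 1.0:
        excess = q_frac - 1.0
    else:
        excess = 0.0
    # Away from resonance, vibration scales with the square of rotation speed.
    return (v_min + k * excess**2) * n_frac**2

def operation_reliability(q_frac, n_frac, v_max=5.0):
    """Reliability modeled as one minus the normalized vibration level."""
    v = min(vibration_level(q_frac, n_frac), v_max)
    return 1.0 - v / v_max
```

    Such a term can then be added to a traditional least-cost scheduling objective so that the optimizer is steered toward operating points near the BEP sweet spot.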

  5. Detection of organic matter in interstellar grains.

    Science.gov (United States)

    Pendleton, Y J

    1997-06-01

    Star formation and the subsequent evolution of planetary systems occurs in dense molecular clouds, which are composed, in part, of interstellar dust grains gathered from the diffuse interstellar medium (DISM). Radio observations of the interstellar medium reveal the presence of organic molecules in the gas phase and infrared observational studies provide details concerning the solid-state features in dust grains. In particular, a series of absorption bands have been observed near 3.4 microns (approximately 2940 cm-1) towards bright infrared objects which are seen through large column densities of interstellar dust. Comparisons of organic residues, produced under a variety of laboratory conditions, to the diffuse interstellar medium observations have shown that aliphatic hydrocarbon grains are responsible for the spectral absorption features observed near 3.4 microns (approximately 2940 cm-1). These hydrocarbons appear to carry the -CH2- and -CH3 functional groups in the abundance ratio CH2/CH3 approximately 2.5, and the amount of carbon tied up in this component is greater than 4% of the cosmic carbon available. On a galactic scale, the strength of the 3.4 microns band does not scale linearly with visual extinction, but instead increases more rapidly for objects near the Galactic Center. A similar trend is noted in the strength of the Si-O absorption band near 9.7 microns. The similar behavior of the C-H and Si-O stretching bands suggests that these two components may be coupled, perhaps in the form of grains with silicate cores and refractory organic mantles. The ubiquity of the hydrocarbon features seen in the near infrared near 3.4 microns throughout our Galaxy and in other galaxies demonstrates the widespread availability of such material for incorporation into the many newly forming planetary systems. The similarity of the 3.4 microns features in any organic material containing aliphatic hydrocarbons underscores the need for complete astronomical observational

  6. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S.; Milby, M.; Baker, J.

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR(R) (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for fifteen Chicagoland single family housing archetypes, called housing groups. In the present study, 800 IHP homes are first matched to one of these fifteen housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages is substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost-effectiveness.

  7. Evaluation of Missed Energy Saving Opportunity Based on Illinois Home Performance Program Field Data: Homeowner Selected Upgrades Versus Cost-Optimized Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Yee, S. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Milby, M. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States); Baker, J. [Partnership for Advanced Residential Retrofit, Chicago, IL (United States)

    2014-06-01

    Expanding on previous research by PARR, this study compares measure packages installed during 800 Illinois Home Performance with ENERGY STAR® (IHP) residential retrofits to those recommended as cost-optimal by Building Energy Optimization (BEopt) modeling software. In previous research, cost-optimal measure packages were identified for 15 Chicagoland single family housing archetypes. In the present study, 800 IHP homes are first matched to one of these 15 housing groups, and then the average measures being installed in each housing group are modeled using BEopt to estimate energy savings. For most housing groups, the differences between recommended and installed measure packages is substantial. By comparing actual IHP retrofit measures to BEopt-recommended cost-optimal measures, missed savings opportunities are identified in some housing groups; also, valuable information is obtained regarding housing groups where IHP achieves greater savings than BEopt-modeled, cost-optimal recommendations. Additionally, a measure-level sensitivity analysis conducted for one housing group reveals which measures may be contributing the most to gas and electric savings. Overall, the study finds not only that for some housing groups, the average IHP retrofit results in more energy savings than would result from cost-optimal, BEopt recommended measure packages, but also that linking home categorization to standardized retrofit measure packages provides an opportunity to streamline the process for single family home energy retrofits and maximize both energy savings and cost effectiveness.

  8. Optimal control of greenhouse gas emissions and system cost for integrated municipal solid waste management with considering a hierarchical structure.

    Science.gov (United States)

    Li, Jing; He, Li; Fan, Xing; Chen, Yizhong; Lu, Hongwei

    2017-08-01

    This study presents a synergic optimization of control for greenhouse gas (GHG) emissions and system cost in integrated municipal solid waste (MSW) management on the basis of bi-level programming. The bi-level program is formulated by integrating minimization of GHG emissions at the leader level and of system cost at the follower level into a general MSW framework. Different from traditional single- or multi-objective approaches, the proposed bi-level programming is capable not only of addressing the tradeoffs but also of dealing with the leader-follower relationship between different decision makers, who have dissimilar perspectives and interests. Placing GHG emission control at the leader level emphasizes the significant environmental concern in MSW management. A bi-level decision-making process based on satisfactory degree is then suitable for solving highly nonlinear problems with computational effectiveness. The capabilities and effectiveness of the proposed bi-level programming are illustrated by an application to a MSW management problem in Canada. Results show that the obtained optimal management strategy can bring considerable revenues, approximately 76 to 97 million dollars. Considering control of GHG emissions, it gives priority to the development of the recycling facility throughout the whole period, especially in the latter periods. In terms of capacity, the existing landfill is sufficient for the next 30 years without development of new landfills, while expansion of the composting and recycling facilities should receive more attention.
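
    A toy version of the leader-follower structure can be sketched by enumeration: the leader fixes facility capacities to minimize GHG emissions, anticipating that the follower will then allocate waste at minimum cost (all unit costs, emission factors and capacity options below are hypothetical, not from the study):

```python
import itertools

# Hypothetical unit costs ($/t) and GHG factors (t CO2e/t) per facility.
COST = {"landfill": 30, "compost": 45, "recycle": 60}
GHG = {"landfill": 1.2, "compost": 0.4, "recycle": 0.1}
WASTE = 100  # tonnes to allocate

def follower(caps, step=10):
    """Follower: allocate waste to minimize cost within the given capacities."""
    best = None
    for lf in range(0, WASTE + 1, step):
        for cp in range(0, WASTE - lf + 1, step):
            alloc = {"landfill": lf, "compost": cp, "recycle": WASTE - lf - cp}
            if any(alloc[f] > caps[f] for f in alloc):
                continue
            cost = sum(COST[f] * alloc[f] for f in alloc)
            if best is None or cost < best[0]:
                best = (cost, alloc)
    return best

def leader():
    """Leader: choose capacities minimizing GHG of the follower's response."""
    best = None
    for sizes in itertools.product([40, 70, 100], repeat=3):
        caps = dict(zip(["landfill", "compost", "recycle"], sizes))
        resp = follower(caps)
        if resp is None:
            continue  # no feasible allocation under these capacities
        cost, alloc = resp
        ghg = sum(GHG[f] * alloc[f] for f in alloc)
        if best is None or ghg < best[0]:
            best = (ghg, cost, caps, alloc)
    return best
```

    The key bi-level feature is that the leader never allocates waste directly; it only shapes the feasible region within which the cost-minimizing follower acts.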

  9. A bridge network maintenance framework for Pareto optimization of stakeholders/users costs

    International Nuclear Information System (INIS)

    Orcesi, Andre D.; Cremona, Christian F.

    2010-01-01

    For managing highway bridges, stakeholders require efficient and practical decision making techniques. In a context of limited bridge management budget, it is crucial to determine the most effective breakdown of financial resources over the different structures of a bridge network. Bridge management systems (BMSs) have been developed for such a purpose. However, they generally rely on an individual approach. The influence of the position of bridges in the transportation network, and the consequences of inadequate service for the network users due to maintenance actions or bridge failure, are not taken into consideration. Therefore, maintenance strategies obtained with current BMSs do not necessarily lead to an optimal level of service (LOS) of the bridge network for the users of the transportation network. Besides, the assessment of the structural performance of highway bridges usually requires access to the geometrical and mechanical properties of its components. Such information might not be available for all structures in a bridge network for which managers try to schedule and prioritize maintenance strategies. On the contrary, visual inspections are performed regularly and information is generally available for all structures of the bridge network. The objective of this paper is threefold: (i) propose an advanced network-level bridge management system considering the position of each bridge in the transportation network, (ii) use information obtained at visual inspections to assess the performance of bridges, and (iii) compare optimal maintenance strategies, obtained with a genetic algorithm, when considering interests of users and bridge owner either separately as conflicting criteria, or simultaneously as a common interest for the whole community. In each case, safety and serviceability aspects are taken into account in the model when determining optimal strategies. The theoretical and numerical developments are applied to a French bridge network.
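
    When owner and user costs are treated as conflicting criteria, the genetic algorithm returns a Pareto front rather than a single optimum. A minimal dominance filter over candidate maintenance strategies (the strategy data are hypothetical) can be sketched as:

```python
def pareto_front(strategies):
    """Return strategies not dominated on (owner_cost, user_cost).

    A strategy dominates another if it is no worse on both costs
    and strictly better on at least one.
    """
    front = []
    for i, (o1, u1) in enumerate(strategies):
        dominated = any(
            o2 <= o1 and u2 <= u1 and (o2 < o1 or u2 < u1)
            for j, (o2, u2) in enumerate(strategies) if j != i
        )
        if not dominated:
            front.append((o1, u1))
    return front

# Hypothetical (owner_cost, user_cost) pairs for five maintenance strategies.
candidates = [(10, 5), (8, 7), (9, 6), (12, 4), (10, 6)]
front = pareto_front(candidates)  # (10, 6) is dominated by (10, 5)
```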

  10. Cost-optimized configuration of computing instances for large sized cloud systems

    Directory of Open Access Journals (Sweden)

    Woo-Chan Kim

    2017-09-01

    Cloud computing services are becoming more popular for various reasons, which include ‘having no need for capital expenditure’ and ‘the ability to quickly meet business demands’. However, what seems to be an attractive option may become a substantial expenditure as more projects are moved into the cloud. Cloud service companies provide different pricing options to their customers that can potentially lower the customers’ spending on the cloud. Choosing the right combination of pricing options can be formulated as a mixed-integer linear programming problem, which can be solved with standard optimization techniques.
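    A toy version of this pricing decision (all prices and demand figures hypothetical, not from the paper) can be solved by enumerating the single integer variable directly: how many reserved instances to commit to for a year, with on-demand instances covering any monthly excess. A realistic formulation would hand the same structure, with many instance types and pricing tiers, to a MIP solver.

```python
# Hypothetical pricing: a reserved instance costs 450/yr flat; on-demand
# capacity costs 0.10/hr. Monthly demand (instances needed) over 12 months.
demand = [4, 5, 6, 9, 12, 14, 15, 13, 10, 7, 5, 4]
HOURS = 730           # approximate hours per month
RESERVED_YEARLY = 450.0
ONDEMAND_HOURLY = 0.10

def cost(n_reserved):
    """Reserved instances cover the baseline; on-demand fills each month's excess."""
    od_hours = sum(max(d - n_reserved, 0) * HOURS for d in demand)
    return n_reserved * RESERVED_YEARLY + od_hours * ONDEMAND_HOURLY

best = min(range(max(demand) + 1), key=cost)
print(best, round(cost(best), 2))
```

    The break-even logic is visible in the numbers: one more reserved instance pays off only if it displaces enough on-demand hours to cover its flat yearly price.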

  11. The effect of optimal wall loads and blanket technologies on the cost of fusion electricity

    International Nuclear Information System (INIS)

    Knight, P.J.; Ward, D.J.

    2000-01-01

    This paper presents a discussion of trends in fusion economics based on technology as well as physics arguments. Based on relatively simple physics considerations, supported by detailed systems code calculations, it is shown that optimal wall loads are not high. The results of systems code calculations focussing on the economic impact of different blanket technologies are described. These suggest that the economically favourable thermodynamic efficiencies of some blankets capable of operating at higher temperatures may be counterbalanced by the economic penalties of their shorter lifetimes.

  12. Advanced steam power plant concepts with optimized life-cycle costs: A new approach for maximum customer benefit

    Energy Technology Data Exchange (ETDEWEB)

    Seiter, C.

    1998-07-01

    Coal-based power generation is currently enjoying a renaissance. New highly efficient and cost-effective plant concepts together with environmental protection technologies are the main factors in this development. In addition, coal is available on the world market at attractive prices, and in many places it is more readily available than gas. At the economic leading edge, standard power plant concepts have been developed to meet the requirements of emerging power markets. These concepts incorporate the technological state of the art and are designed to achieve the lowest life-cycle costs. Low capital, fuel and operating costs in combination with short lead times are the main assets that make these plants attractive, especially for IPPs and developers. Other aspects of these comprehensive concepts include turnkey construction and the willingness to participate in BOO/BOT projects. One of the various examples of such a concept, the 2 x 610-MW Paiton Private Power Project Phase II in Indonesia, is described in this paper. At the technological leading edge, Siemens has always made a major contribution and has been a pacemaker for new developments in steam power plant technology. Modern coal-fired steam power plants use computer-optimized process and plant design as well as advanced materials, and achieve efficiencies exceeding 45%. One excellent example of this high technology is the world's largest lignite-fired steam power plant, Schwarze Pumpe in Germany, which is equipped with two 800-MW Siemens steam turbine generators with supercritical steam parameters. The world's largest 50-Hz single-shaft turbine generator with supercritical steam parameters, rated at 1025 MW for the Niederaussem lignite-fired steam power plant in Germany, is a further example of sophisticated Siemens steam turbine technology and sets a new benchmark in this field.

  13. The cost of model reference adaptive control - Analysis, experiments, and optimization

    Science.gov (United States)

    Messer, R. S.; Haftka, R. T.; Cudney, H. H.

    1993-01-01

    In this paper the performance of Model Reference Adaptive Control (MRAC) is studied in numerical simulations and verified experimentally, with the objective of understanding how differences between the plant and the reference model affect the control effort. MRAC is applied analytically and experimentally to a single-degree-of-freedom system, and analytically to a MIMO system with controlled differences between the model and the plant. It is shown that the control effort is sensitive to differences between the plant and the reference model. The effects of increased damping in the reference model are considered, and it is shown that requiring the controller to provide increased damping actually decreases the required control effort when differences between the plant and reference model exist. This result is useful because one of the first attempts to counteract the increased control effort due to such differences might be to require less damping; however, this would actually increase the control effort. Optimization of the weighting matrices is shown to help reduce the increase in required control effort. However, it was found that the optimization eventually resulted in a design that required an extremely high sampling rate for successful realization.
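    The mechanics of such a study can be illustrated with the classic first-order MRAC example under the MIT-rule gradient law. The plant, reference model and adaptation gain below are textbook-style assumptions, not the paper's experimental setup; the accumulated integral of u² stands in for the control effort being analyzed.

```python
# First-order plant dy/dt = -a*y + b*u tracking a reference model
# dym/dt = -am*ym + bm*r, adapted with the MIT rule (hypothetical parameters).
def simulate(a=1.0, b=0.5, am=2.0, bm=2.0, gamma=1.0, dt=0.001, t_end=50.0):
    y = ym = th1 = th2 = f1 = f2 = 0.0
    u_energy = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        r = 1.0 if (t % 20.0) < 10.0 else -1.0    # square-wave reference
        u = th1 * r - th2 * y                     # adaptive control law
        e = y - ym                                # tracking error
        # Sensitivity filters used by the MIT-rule approximation
        f1 += dt * (-am * f1 + bm * r)
        f2 += dt * (-am * f2 + bm * y)
        th1 += dt * (-gamma * e * f1)             # gradient descent on e^2
        th2 += dt * (gamma * e * f2)
        y += dt * (-a * y + b * u)                # plant (Euler integration)
        ym += dt * (-am * ym + bm * r)            # reference model
        u_energy += dt * u * u                    # accumulated control effort
    return e, u_energy, th1, th2

e, effort, th1, th2 = simulate()
print(abs(e), effort, th1, th2)
```

    Rerunning `simulate` with a mismatched `am` (more reference-model damping than the plant can match for free) shows how `u_energy` grows with the plant/model difference, which is the quantity the paper's analysis tracks.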

  14. Simplified Occupancy Grid Indoor Mapping Optimized for Low-Cost Robots

    Directory of Open Access Journals (Sweden)

    Javier Garrido

    2013-10-01

    This paper presents a mapping system that is suitable for small mobile robots. An ad hoc mapping algorithm based on the occupancy grid method has been developed. The algorithm includes some simplifications so that it can run on low-cost hardware. The proposed mapping system has been built to operate completely autonomously and unassisted. The proposal has been tested with a mobile robot that uses infrared sensors to measure distances to obstacles and an ultrasonic beacon system for localization, in addition to wheel encoders. Finally, experimental results are presented.
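    At the core of any occupancy grid mapper is a per-cell log-odds update along each range beam. The increments, grid size and inverse sensor model below are generic assumptions for illustration, not the specific simplifications of this paper:

```python
import math

# Minimal log-odds occupancy grid: each range reading marks the cells along
# the beam as free and the end cell as occupied (hypothetical sensor model).
L_OCC, L_FREE = 0.85, -0.4            # log-odds increments
W, H = 20, 20
grid = [[0.0] * W for _ in range(H)]  # log-odds 0.0 == p = 0.5 (unknown)

def bresenham(x0, y0, x1, y1):
    """Integer cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x0 += sx
        if e2 < dx:
            err += dx; y0 += sy
    return cells

def update(robot, hit):
    """Apply one beam: free space along the ray, occupied at the hit cell."""
    ray = bresenham(robot[0], robot[1], hit[0], hit[1])
    for (cx, cy) in ray[:-1]:
        grid[cy][cx] += L_FREE
    hx, hy = ray[-1]
    grid[hy][hx] += L_OCC

def probability(cx, cy):
    """Occupancy probability recovered from the cell's log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid[cy][cx]))

update((2, 2), (10, 2))
update((2, 2), (10, 2))   # a repeated observation reinforces the belief
print(probability(10, 2), probability(5, 2))
```

    The additive log-odds form is what makes the method cheap enough for low-cost hardware: each beam costs one line rasterization and a handful of additions, with no floating-point products per cell.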

  15. Cost Reduction of Taxi Enterprises at the Expense of Automobile Fleet Optimization

    Directory of Open Access Journals (Sweden)

    Novytskyi А.V.

    2016-05-01

    The operation of a taxi service is analysed using queuing-theory techniques. The probability of service denial is shown to be the key quality criterion of transport service for taxi companies, and the total expenditure of the queuing system is an expedient objective function for estimating the efficiency of a taxi service. Applying queuing-theory techniques makes it possible to identify the number of operating vehicles that is optimal, according to the minimum-cost criterion, for a specific environment.
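    In a loss model of this kind, the probability of service denial with n vehicles and offered load a (arrival rate times mean trip time) is given by the Erlang-B formula, and the fleet size can then be chosen to minimize total cost. The load and cost figures below are illustrative assumptions, not values from the paper:

```python
# Erlang-B sketch: probability that all n taxis are busy (service denial),
# plus a toy total cost combining fleet cost and lost-ride cost.
def erlang_b(n, a):
    """Blocking probability, computed with the numerically stable recurrence."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def total_cost(n, a, fleet_cost=50.0, denial_cost=400.0, demand=100.0):
    """Cost of operating n vehicles plus expected cost of denied requests."""
    lost = demand * erlang_b(n, a)
    return n * fleet_cost + lost * denial_cost

a = 8.0  # offered load in Erlangs, e.g. 16 requests/h * 0.5 h per trip
best_n = min(range(1, 31), key=lambda n: total_cost(n, a))
print(best_n, round(erlang_b(best_n, a), 4))
```

    The minimum arises exactly as the abstract suggests: below the optimum, denial costs dominate; above it, each extra vehicle costs more than the denials it prevents.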

  16. A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts

    Science.gov (United States)

    2015-12-01

    As is shown in Fig. 8, by increasing the buy size, the total cost of the system with a fixed EOS of 40 years and a fixed WACC of 3% decreases...1000 uncertain demand event paths are considered in both analyses. For a 3% WACC, as illustrated in Fig. 9-a, the DES method gives an optimum buy size...consistency, other WACC values were tested as well. For example, for a 12% WACC, as illustrated in Fig. 9-b, the DES method gives an optimum buy size range

  17. Low cost fuel cell diffusion layer configured for optimized anode water management

    Science.gov (United States)

    Owejan, Jon P; Nicotera, Paul D; Mench, Matthew M; Evans, Robert E

    2013-08-27

    A fuel cell comprises a cathode gas diffusion layer, a cathode catalyst layer, an anode gas diffusion layer, an anode catalyst layer and an electrolyte. The diffusion resistance of the anode gas diffusion layer when operated with anode fuel is higher than the diffusion resistance of the cathode gas diffusion layer. The anode gas diffusion layer may comprise filler particles having in-plane platelet geometries, and may be made with lower-cost materials and manufacturing processes than currently available commercial carbon fiber substrates. The difference in diffusion resistance between the anode and cathode gas diffusion layers may allow for passive water balance control.

  18. Optimal facility and equipment specification to support cost-effective recycling

    International Nuclear Information System (INIS)

    Redus, K.S.; Yuracko, K.L.

    1998-01-01

    The authors demonstrate a project management approach for decontamination and decommissioning (D and D) projects to select those facility areas or equipment systems on which to concentrate resources so that project materials disposition costs are minimized, safety requirements are always met, recycle and reuse goals are achieved, and programmatic or stakeholder concerns are addressed. The authors examine a facility that contains realistic areas and equipment, and they apply the approach to illustrate the different results that can be obtained depending on the strength or weakness of safety risk requirements, goals for recycle and reuse of materials, and programmatic or stakeholder concerns.

  19. The Cost-Optimal Distribution of Wind and Solar Generation Facilities in a Simplified Highly Renewable European Power System

    Science.gov (United States)

    Kies, Alexander; von Bremen, Lüder; Schyska, Bruno; Chattopadhyay, Kabitri; Lorenz, Elke; Heinemann, Detlev

    2016-04-01

    The transition of the European power system from fossil generation towards renewable sources is driven by different motivations such as decarbonisation and sustainability. Renewable power sources like wind and solar have, due to their weather dependency, fluctuating feed-in profiles, which makes their system integration a difficult task. To overcome this issue, several solutions have been investigated in the past, such as the optimal mix of wind and PV [1] and the extension of the transmission grid or of storage [2]. In this work, the optimal distribution of wind turbines and solar modules in Europe is investigated. For this purpose, feed-in data with an hourly temporal resolution and a spatial resolution of 7 km covering Europe were used for the renewable sources wind, photovoltaics and hydro. Together with historical load data and a transmission model, a simplified pan-European power system was simulated. Under the cost assumptions of [3], the levelized cost of electricity (LCOE) for this simplified system consisting of generation, consumption, transmission and backup units is calculated. With respect to the LCOE, the optimal distribution of generation facilities in Europe is derived. It is shown that by optimal placement of renewable generation facilities the LCOE can be reduced by more than 10% compared to a meta-study scenario [4] and a self-sufficient scenario (every country produces on average as much from renewable sources as it consumes). This is mainly caused by a shift of generation facilities towards highly suitable locations, a reduced backup need and an increased transmission need. The results of the optimization will be shown and implications for the extension of renewable shares in the European power mix will be discussed. The work is part of the RESTORE 2050 project (Wuppertal Institute, Next Energy, University of Oldenburg), which is financed by the Federal Ministry of Education and Research (BMBF, Fkz. 03SFF0439A). [1] Kies, A. et al.
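    The LCOE metric used in such studies divides discounted lifetime cost by discounted lifetime energy production. A minimal calculation, with hypothetical wind turbine figures rather than the study's data, looks like:

```python
# Minimal LCOE sketch: discounted lifetime cost over discounted lifetime
# energy. All figures below are hypothetical placeholders.
def lcoe(capex, opex_per_year, energy_per_year, rate, lifetime):
    disc_cost = capex + sum(opex_per_year / (1 + rate) ** t
                            for t in range(1, lifetime + 1))
    disc_energy = sum(energy_per_year / (1 + rate) ** t
                      for t in range(1, lifetime + 1))
    return disc_cost / disc_energy

# Example: 3 MW turbine, 4.2 M EUR capex, 80 kEUR/yr O&M, 25% capacity factor.
energy = 3.0 * 8760 * 0.25 * 1000   # kWh produced per year
print(round(lcoe(4.2e6, 8.0e4, energy, 0.05, 20), 4), "EUR/kWh")
```

    The system-level optimization in the paper minimizes exactly this kind of ratio, but summed over all generation, transmission and backup assets; moving capacity to high-capacity-factor sites raises `energy_per_year` per unit of capex, which is why placement alone can cut the LCOE.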

  20. Optimization of temperature differences in a utilizer in relation to the lowest sum of capital and operating cost

    International Nuclear Information System (INIS)

    Kustrin, I.; Tuma, M.

    1992-01-01

    Our environment and nature are currently overburdened with emissions of noxious substances. Coal-fired steam boilers are therefore not very popular, and wherever possible they are being replaced by devices which are less harmful to the environment because they use a different fuel. This paper discusses replacing a steam boiler with a gas turbine and a utilizer. A mathematical model for optimizing capital and operating costs is presented. The model optimizes the degree of preheating of the flue gases, i.e., the temperature of the entering flue gases. The smallest temperature difference (pinch point) was not estimated by pinch technology, because the presented example is relatively simple; the pinch-point temperature difference was instead chosen according to values reported in various literature sources. The optimization is supplemented with an analysis of the thermal and exergetic efficiencies of the utilizer under different conditions (average temperature difference between the hot gases and the water or steam, exit temperature of the hot gases), which condition the choice of the type of utilizer.
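    The trade-off being optimized, where a smaller temperature difference demands more heat-transfer surface (capital cost) while a larger one wastes more recoverable heat (operating cost), can be sketched numerically. All coefficients below are hypothetical placeholders, not values from the paper's model:

```python
# Toy capital-vs-operating trade-off for a heat-recovery utilizer
# (hypothetical coefficients throughout).
U = 0.05           # kW/(m^2 K), overall heat transfer coefficient
Q = 5000.0         # kW heat duty to be transferred
AREA_COST = 20.0   # EUR per m^2 per year (annualized capital cost)
LOSS_COST = 5000.0 # EUR per year per K of pinch (fuel for unrecovered heat)

def annual_cost(dT):
    """Required surface grows as dT shrinks; losses grow as dT widens."""
    area = Q / (U * dT)
    return AREA_COST * area + LOSS_COST * dT

# Sweep pinch temperature differences from 1.0 K to 99.9 K in 0.1 K steps.
best_dT = min((dT / 10.0 for dT in range(10, 1000)), key=annual_cost)
print(round(best_dT, 1), round(annual_cost(best_dT)))
```

    For this cost form the optimum also follows analytically, dT* = sqrt(AREA_COST * Q / (U * LOSS_COST)), which the sweep reproduces; the paper's model adds the thermal and exergetic efficiency analysis on top of this basic balance.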