WorldWideScience

Sample records for hubble systems optimize

  1. Hubble Space Telescope nickel hydrogen battery system briefing

    Science.gov (United States)

    Nawrocki, David; Saldana, David; Rao, Gopal

    1993-01-01

    The topics covered are presented in viewgraph form and include the following: the Hubble Space Telescope (HST) Mission; system constraints; battery specification; battery module; simplified block diagram; cell design summary; present status; voltage decay; system depth of discharge; pressure since launch; system capacity; eclipse time vs. trickle charge; capacity test objectives; and capacity during tests.

  2. Version 1 of the Hubble Source Catalog

    Science.gov (United States)

    Whitmore, Bradley C.; Allam, Sahar S.; Budavári, Tamás; Casertano, Stefano; Downes, Ronald A.; Donaldson, Thomas; Fall, S. Michael; Lubow, Stephen H.; Quick, Lee; Strolger, Louis-Gregory; Wallace, Geoff; White, Richard L.

    2016-06-01

    The Hubble Source Catalog is designed to help optimize science from the Hubble Space Telescope (HST) by combining the tens of thousands of visit-based source lists in the Hubble Legacy Archive (HLA) into a single master catalog. Version 1 of the Hubble Source Catalog includes WFPC2, ACS/WFC, WFC3/UVIS, and WFC3/IR photometric data generated using SExtractor software to produce the individual source lists. The catalog includes roughly 80 million detections of 30 million objects involving 112 different detector/filter combinations, and about 160,000 HST exposures. Source lists from Data Release 8 of the HLA are matched using an algorithm developed by Budavári & Lubow. The mean photometric accuracy for the catalog as a whole is better than 0.10 mag, with relative accuracy as good as 0.02 mag in certain circumstances (e.g., bright isolated stars). The relative astrometric residuals are typically within 10 mas, with a mode (i.e., most common value) of 2.3 mas. The absolute astrometric accuracy is better than 0.1 arcsec for most sources, but can be much larger for a fraction of fields that could not be matched to the Pan-STARRS, SDSS, or 2MASS reference systems. In this paper we describe the database design with emphasis on those aspects that enable users to fully exploit the catalog while avoiding common misunderstandings and potential pitfalls. We provide usage examples to illustrate some of the science capabilities and data quality characteristics, and briefly discuss plans for future improvements to the Hubble Source Catalog.

  3. Cosmological Hubble constant and nuclear Hubble constant

    International Nuclear Information System (INIS)

    Horbuniev, Amelia; Besliu, Calin; Jipa, Alexandru

    2005-01-01

    The evolution of the Universe after the Big Bang and the evolution of the dense and highly excited nuclear matter formed by relativistic nuclear collisions are investigated and compared. Values of the Hubble constants for cosmological and nuclear processes are obtained. For nucleus-nucleus collisions at high energies the nuclear Hubble constant is obtained in the framework of different models involving the hydrodynamic flow of the nuclear matter. A significant difference between the values of the two Hubble constants, cosmological and nuclear, is observed.

  4. Hubble Space Telescope electrical power system

    Science.gov (United States)

    Whitt, Thomas H.; Bush, John R., Jr.

    1990-01-01

    The Hubble Space Telescope (HST) electrical power system (EPS) is supplying between 2000 and 2400 W of continuous power to the electrical loads. The major components of the EPS are the 5000-W back surface field reflector solar array, the six nickel-hydrogen (NiH2) 22-cell 88-Ah batteries, and the charge current controllers, which, in conjunction with the flight computer, control battery charging. The operation of the HST EPS and the results of the HST NiH2 six-battery test are discussed, and preliminary flight data are reviewed. The HST NiH2 six-battery test is a breadboard of the HST EPS on test at Marshall Space Flight Center.
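    For scale, the figures quoted above imply a sizeable stored-energy reserve. The sketch below is a rough back-of-envelope estimate only; the ~1.25 V nominal NiH2 cell voltage is an assumption, not a figure from the abstract.

        # Rough energy-budget check using the numbers quoted above: six 22-cell,
        # 88 Ah NiH2 batteries feeding a 2000-2400 W load. The nominal cell
        # voltage of ~1.25 V is an assumption, not a figure from the abstract.
        CELL_VOLTAGE = 1.25          # volts per NiH2 cell (assumed nominal value)
        CELLS_PER_BATTERY = 22
        CAPACITY_AH = 88.0
        BATTERIES = 6

        battery_voltage = CELL_VOLTAGE * CELLS_PER_BATTERY             # ~27.5 V
        stored_kwh = BATTERIES * CAPACITY_AH * battery_voltage / 1000.0
        for load_w in (2000.0, 2400.0):
            print(f"{stored_kwh:.1f} kWh could carry a {load_w:.0f} W load "
                  f"for ~{stored_kwh * 1000.0 / load_w:.1f} h if fully discharged")

    In practice only a small fraction of that capacity is cycled each orbit, which is consistent with the abstract's emphasis on depth of discharge and trickle charging.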

  5. HUBBLE VISION: A Planetarium Show About Hubble Space Telescope

    Science.gov (United States)

    Petersen, Carolyn Collins

    1995-05-01

    In 1991, a planetarium show called "Hubble: Report From Orbit" outlining the current achievements of the Hubble Space Telescope was produced by the independent planetarium production company Loch Ness Productions, for distribution to facilities around the world. The program was subsequently converted to video. In 1994, that program was updated and re-produced under the name "Hubble Vision" and offered to the planetarium community. It is periodically updated and remains a sought-after and valuable resource within the community. This paper describes the production of the program, and the role of the astronomical community in the show's production (and subsequent updates). The paper is accompanied by a video presentation of Hubble Vision.

  6. Hubble Legacy Archive And The Public

    Science.gov (United States)

    Harris, Jessica; Whitmore, B.; Eisenhamer, B.; Bishop, M.; Knisely, L.

    2012-01-01

    The Hubble Legacy Archive (HLA) at the Space Telescope Science Institute (STScI) hosts the Image of the Month (IOTM) Series. The HLA is a joint project of STScI, the Space Telescope European Coordinating Facility (ST-ECF), and the Canadian Astronomy Data Centre (CADC). The HLA is designed to optimize science from the Hubble Space Telescope by providing online enhanced Hubble products and advanced browsing capabilities. The IOTMs are created for astronomers and the public to highlight various features within the HLA, such as the "Interactive Display", "Footprint", and "Inventory" features, to name a few. We have been working with the Office of Public Outreach (OPO) to create a standards-based educational module for middle school to high school students based on the IOTM "Rings and the Moons of Uranus". The set of Uranus activities is highlighted by a movie that displays the orbits of five of Uranus' largest satellites. We made the movie based on eight visits of Uranus from 2000-06-16 to 2000-06-18, using the PC chip on the Wide Field Planetary Camera 2 (WFPC2) and filter F850LP (proposal ID: 8680). Students will be engaged in activities that allow them to "discover" the rings and satellites around Uranus, calculate the orbits of the satellites, and analyze real data from Hubble.

  7. A knowledge-based system for monitoring the electrical power system of the Hubble Space Telescope

    Science.gov (United States)

    Eddy, Pat

    1987-01-01

    The design and the prototype for the expert system for the Hubble Space Telescope's electrical power system are discussed. This prototype demonstrated the capability to use real time data from a 32k telemetry stream and to perform operational health and safety status monitoring, detect trends such as battery degradation, and detect anomalies such as solar array failures. This prototype, along with the pointing control system and data management system expert systems, forms the initial Telemetry Analysis for Lockheed Operated Spacecraft (TALOS) capability.

  8. A Scientific Revolution: The Hubble and James Webb Space Telescopes

    Science.gov (United States)

    Gardner, Jonathan P.

    2010-01-01

    Astronomy is going through a scientific revolution, responding to a flood of data from the Hubble Space Telescope, other space missions, and large telescopes on the ground. In this talk, I will discuss some of the important discoveries of the last decade, from dwarf planets in the outer Solar System to the mysterious dark energy that overcomes gravity to accelerate the expansion of the Universe. The next decade will be equally bright with the newly refurbished Hubble and the promise of its successor, the James Webb Space Telescope. An infrared-optimized 6.5m space telescope, Webb is designed to find the first galaxies that formed in the early universe and to peer into the dusty gas clouds where stars and planets are born. With MEMS technology, a deployed primary mirror and a tennis-court sized sunshield, the mission presents many technical challenges. I will describe Webb's scientific goals, its design and recent progress in constructing the observatory. Webb is scheduled for launch in 2014.

  9. Hubble 15 years of discovery

    CERN Document Server

    Lindberg Christensen, Lars; Kornmesser, M

    2006-01-01

    Hubble: 15 Years of Discovery was a key element of the European Space Agency's 15th anniversary celebration activities for the 1990 launch of the NASA/ESA Hubble Space Telescope. As an observatory in space, Hubble is one of the most successful scientific projects of all time, both in terms of scientific output and its immediate public appeal.

  10. A natural language query system for Hubble Space Telescope proposal selection

    Science.gov (United States)

    Hornick, Thomas; Cohen, William; Miller, Glenn

    1987-01-01

    The proposal selection process for the Hubble Space Telescope is assisted by a robust and easy-to-use query program (TACOS). The system parses an English-subset sentence regardless of the order of the keyword phrases, allowing the user greater flexibility than a standard command query language. Capabilities for macro and procedure definition are also integrated. The system was designed for flexibility in both use and maintenance. In addition, TACOS can be applied to any knowledge domain that can be expressed in terms of a single relation. The system was implemented mostly in Common LISP. The TACOS design is described in detail, with particular attention given to the implementation methods of sentence processing.

  11. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    Science.gov (United States)

    Gardner, Jonathan P.

    2009-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z greater than 6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z greater than 10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (less than 50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems, and discuss recent progress in constructing the observatory.

  12. A Unique test for Hubble's new Solar Arrays

    Science.gov (United States)

    2000-10-01

    pairs. The arrays use high-efficiency solar cells and an advanced structural system to support the solar panels. Unlike the earlier sets, which roll up like window shades, the new arrays are rigid. ESA provided Hubble's first two sets of solar arrays, and built and tested the motors and electronics of the new set provided by NASA Goddard Space Flight Center. Now, this NASA/ESA test has benefits that extend beyond Hubble to the world-wide aerospace community. It will greatly expand basic knowledge of the jitter phenomenon. Engineers across the globe can apply these findings to other spacecraft that are subjected to regular, dramatic changes in sunlight and temperature. Note to editors: The Hubble Project. The Hubble Space Telescope is a project of international co-operation between the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA). The partnership agreement between ESA and NASA was signed on 7 October 1977. ESA has provided two pairs of solar panels and one of Hubble's scientific instruments (the Faint Object Camera), as well as a number of other components, and supports NASA during routine Servicing Missions to the telescope. In addition, 15 European scientists are working at the Space Telescope Science Institute in Baltimore (STScI), which is responsible for the scientific operation of the Hubble Observatory and is managed by the Association of Universities for Research in Astronomy (AURA) for NASA. In return, European astronomers have guaranteed access to 15% of Hubble's observing time. The Space Telescope European Coordinating Facility (ST-ECF), hosted at the European Southern Observatory (ESO) in Garching bei München, Germany, supports European Hubble users. ESA and ESO jointly operate the ST-ECF.

  13. The new European Hubble archive

    Science.gov (United States)

    De Marchi, Guido; Arevalo, Maria; Merin, Bruno

    2016-01-01

    The European Hubble Archive (hereafter eHST), hosted at ESA's European Space Astronomy Centre, was released for public use in October 2015. The eHST is now fully integrated with the other ESA science archives to ensure long-term preservation of the Hubble data, consisting of more than 1 million observations from 10 different scientific instruments. The public HST data, the Hubble Legacy Archive, and the high-level science data products are now all available to scientists through a single, carefully designed and user-friendly web interface. In this talk, I will show how the eHST can help boost archival research, including how to search for sources in the field of view thanks to precise footprints projected onto the sky, how to obtain enhanced previews of imaging data and interactive spectral plots, and how to directly link observations with already published papers. To maximise the scientific exploitation of Hubble's data, the eHST offers connectivity to virtual observatory tools, easily integrates with the recently released Hubble Source Catalog, and is fully accessible through ESA's archives multi-mission interface.

  14. Building the Hubble Space Telescope

    International Nuclear Information System (INIS)

    O'dell, C.R.

    1989-01-01

    The development of the design for the Hubble Space Telescope (HST) is discussed. The HST optical system is described and illustrated. The financial and policy issues related to the development of the HST are considered. The actual construction of the HST optical telescope is examined. Also, consideration is given to the plans for the HST launch

  15. Hubble again views Saturn's Rings Edge-on

    Science.gov (United States)

    1995-01-01

    Saturn's magnificent ring system is seen tilted edge-on -- for the second time this year -- in this NASA Hubble Space Telescope picture taken on August 10, 1995, when the planet was 895 million miles (1,440 million kilometers) away. Hubble snapped the image as Earth sped back across Saturn's ring plane to the sunlit side of the rings. Last May 22, Earth dipped below the ring plane, giving observers a brief look at the backlit side of the rings. Ring-plane crossing events occur approximately every 15 years. Earthbound observers won't have as good a view until the year 2038. Several of Saturn's icy moons are visible as tiny starlike objects in or near the ring plane. They are, from left to right, Enceladus, Tethys, Dione, and Mimas. 'The Hubble data shows numerous faint satellites close to the bright rings, but it will take a couple of months to precisely identify them,' according to Steve Larson (University of Arizona). During the May ring plane crossing, Hubble detected two, and possibly four, new moons orbiting Saturn. These new observations also provide a better view of the faint E ring, 'to help determine the size of particles and whether they will pose a collision hazard to the Cassini spacecraft,' said Larson. The picture was taken with Hubble's Wide Field Planetary Camera 2 in wide field mode. This image is a composite view, where a long exposure of the faint rings has been combined with a shorter exposure of Saturn's disk to bring out more detail. When viewed edge-on, the rings are so dim they almost disappear because they are very thin -- probably less than a mile thick. The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center for NASA's Office of Space Science. This image and other images and data received from the Hubble Space Telescope are posted on the World Wide Web on the Space Telescope Science Institute home page at URL http://oposite.stsci.edu/pubinfo/

  16. The Far-Field Hubble Constant

    Science.gov (United States)

    Lauer, Tod

    1995-07-01

    We request deep, near-IR (F814W) WFPC2 images of five nearby Brightest Cluster Galaxies (BCG) to calibrate the BCG Hubble diagram by the Surface Brightness Fluctuation (SBF) method. Lauer & Postman (1992) show that the BCG Hubble diagram measured out to 15,000 km s^-1 is highly linear. Calibration of the Hubble diagram zeropoint by SBF will thus yield an accurate far-field measure of H_0 based on the entire volume within 15,000 km s^-1, thus circumventing any strong biases caused by local peculiar velocity fields. This method of reaching the far field is contrasted with those using distance ratios between Virgo and Coma, or any other limited sample of clusters. HST is required, as the ground-based SBF method is limited to much smaller distances. Our team developed the SBF method, produced the first BCG Hubble diagram based on a full-sky, volume-limited BCG sample, played major roles in the calibration of WFPC and WFPC2, and is conducting observations of local galaxies that will validate the SBF zeropoint (through GTO programs). This work uses the SBF method to tie both the Cepheid and Local Group giant-branch distances generated by HST to the large-scale Hubble flow, which is most accurately traced by BCGs.

  17. The Development of a Virtual Company to Support the Reengineering of the NASA/Goddard Hubble Space Telescope Control Center System

    Science.gov (United States)

    Lehtonen, Ken

    1999-01-01

    This is a report to the Third Annual International Virtual Company Conference, on The Development of a Virtual Company to Support the Reengineering of the NASA/Goddard Hubble Space Telescope (HST) Control Center System. It begins with a HST Science "Commercial": Brief Tour of Our Universe showing various pictures taken from the Hubble Space Telescope. The presentation then reviews the project background and goals. Evolution of the Control Center System ("CCS Inc.") is then reviewed. Topics of Interest to "virtual companies" are reviewed: (1) "How To Choose A Team" (2) "Organizational Model" (3) "The Human Component" (4) "'Virtual Trust' Among Teaming Companies" (5) "Unique Challenges to Working Horizontally" (6) "The Cultural Impact" (7) "Lessons Learned".

  18. Solar system anomalies: Revisiting Hubble's law

    Science.gov (United States)

    Plamondon, R.

    2017-12-01

    This paper investigates the impact of a new metric recently published [R. Plamondon and C. Ouellet-Plamondon, in On Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories, edited by K. Rosquist, R. T. Jantzen, and R. Ruffini (World Scientific, Singapore, 2015), p. 1301] for studying the space-time geometry of a static symmetric massive object. This metric depends on a complementary error function (erfc) potential that characterizes the emergent gravitation field predicted by the model. This results in two types of deviations as compared to computations made on the basis of a Newtonian potential: a constant and a radial outcome. One key feature of the metric is that it postulates the existence of an intrinsic physical constant σ, the massive object-specific proper length that scales measurements in its surroundings. Although σ must be evaluated experimentally, we use a heuristic to estimate its value and point out some latent relationships between the Hubble constant, the secular increase in the astronomical unit, and the Pioneer delay. Indeed, highlighting the systematic errors that emerge when the effect of σ is neglected, one can link the Hubble constant H_0 to σ_Sun and the secular increase V_AU to σ_Earth. The accuracy of the resulting numerical predictions, H_0 = 74.42(0.02) (km/s)/Mpc and V_AU ≅ 7.8 cm yr^-1, calls for more investigations of this new metric by specific experts. Moreover, we investigate the expected impacts of the new metric on the flyby anomalies, and we revisit the Pioneer delay. It is shown that both phenomena could be partly taken into account within the context of this unifying paradigm, with quite accurate numerical predictions. A correction for the osculating asymptotic velocity at the perigee of the order of 10 mm/s and an inward radial acceleration of 8.34 × 10^-10 m/s^2 affecting the Pioneer spacecraft could be explained by this new model.

  19. European astronaut selected for the third Hubble Space Telescope

    Science.gov (United States)

    1998-08-01

    only observed relatively near celestial objects, like the planets in our solar system, but also looked thousands of millions of light years into space, taking images of the most distant galaxies ever seen. "The observations and spectral measurements taken with Hubble have improved our understanding of the origin and age of the universe. In some cases, the Hubble Space Telescope has already changed our thinking about the evolution of planetary systems, stars and galaxies," points out Roger Bonnet, ESA's Director of Science. Astronomers throughout the world are using the telescope. European astronomers have a significant share in the scientific utilisation of Hubble. The Space Telescope Science Institute in Baltimore, USA, coordinates and schedules the various observations. Europe's centre for coordinating observations from Hubble, the Space Telescope European Coordination Facility, is located at the Headquarters of the European Southern Observatory (ESO) at Garching, near Munich, Germany. The Hubble Space Telescope is the first spacecraft ever built that has been designed for extensive in-orbit maintenance and refurbishment by astronauts. Unlike other satellites launched on unmanned rockets, Hubble is accessible by astronauts in orbit. It has numerous grapple fixtures and handholds for ease of access and the safety of astronauts. Hence the telescope's planned 15-year continuous operating time, despite the harsh environmental conditions, and the ability to upgrade it with more powerful instruments as technology progresses. At regular intervals of 3 to 4 years, the US Space Shuttle visits the telescope in orbit to replace components which have failed or reached the nominal end of their operational lifetime and to replace and upgrade instruments with newer, better ones. STS-104 will be the third Hubble servicing mission, after STS-61 in December 1993 and STS-82 in February 1997. To increase Hubble's scientific capability, Nicollier and his fellow crew members from NASA

  20. Dark Energy and the Hubble Law

    Science.gov (United States)

    Chernin, A. D.; Dolgachev, V. P.; Domozhilova, L. M.

    The Big Bang predicted by Friedmann could not be empirically discovered in the 1920s, since global cosmological distances (more than 300-1000 Mpc) were not available for observations at that time. Lemaitre and Hubble studied receding motions of galaxies at local distances of less than 20-30 Mpc and found that the motions followed the (nearly) linear velocity-distance relation, now known as Hubble's law. For decades, the real nature of this phenomenon has remained a mystery, in Sandage's words. After the discovery of dark energy, it was suggested that the dynamics of local expansion flows is dominated by omnipresent dark energy, and it is the dark energy antigravity that is able to introduce the linear velocity-distance relation to the flows. This implies that Hubble's law observed at local distances was in fact the first observational manifestation of dark energy. If this is the case, the commonly accepted criteria of scientific discovery lead to the conclusion: in 1927, Lemaitre discovered dark energy and Hubble confirmed this in 1929.

  1. Hubble expansion in a Euclidean framework

    International Nuclear Information System (INIS)

    Alfven, H.

    1979-01-01

    There now seems to be strong evidence for a non-cosmological interpretation of the QSO redshift - in any case, so strong that it is of interest to investigate the consequences. The purpose of this paper is to construct a model of the Hubble expansion which is as far as possible from the conventional Big Bang model without coming into conflict with any well-established observational results (while introducing no new laws of physics). This leads to an essentially Euclidean metagalactic model (see Table I) with very little mass outside one-third or half of the Hubble radius. The total kinetic energy of the Hubble expansion needs to be only about 5% of the rest mass energy. Present observations support a backwards-in-time extrapolation of the Hubble expansion to a 'minimum size galaxy' R_m, which may have any value in the range 0 < R_m < 10^26 cm. Other arguments speak in favor of a size close to the upper value, say R_m = 10^26 cm (Table II). As this size is probably about 100 times the Schwarzschild limit, an essentially Euclidean description is allowed. The kinetic energy of the Hubble expansion may derive from intense QSO-like activity in the minimum size metagalaxy, with an energy release corresponding to the annihilation of a few solar masses per galaxy per year. Some of the conclusions based on the Big Bang hypothesis are criticized and in several cases alternative interpretations are suggested. A comparison between the Euclidean and the conventional models is given in Table III. (orig.)

  2. Hubble Space Telescope Observations of cD Galaxies and Their Globular Cluster Systems

    Science.gov (United States)

    Jordán, Andrés; Côté, Patrick; West, Michael J.; Marzke, Ronald O.; Minniti, Dante; Rejkuba, Marina

    2004-01-01

    We have used WFPC2 on the Hubble Space Telescope (HST) to obtain F450W and F814W images of four cD galaxies (NGC 541 in Abell 194, NGC 2832 in Abell 779, NGC 4839 in Abell 1656, and NGC 7768 in Abell 2666) in the range 5400 km s^-1 [...]. An analysis of their globular cluster (GC) systems reveals no anomalies in terms of specific frequencies, metallicity gradients, average metallicities, or the metallicity offset between the globular clusters and the host galaxy. We show that the latter offset appears roughly constant at Δ[Fe/H] ~ 0.8 dex for early-type galaxies spanning a luminosity range of roughly 4 orders of magnitude. We combine the globular cluster metallicity distributions with an empirical technique described in a series of earlier papers to investigate the form of the protogalactic mass spectrum in these cD galaxies. We find that the observed GC metallicity distributions are consistent with those expected if cD galaxies form through the cannibalism of numerous galaxies and protogalactic fragments that formed their stars and globular clusters before capture and disruption. However, the properties of their GC systems suggest that dynamical friction is not the primary mechanism by which these galaxies are assembled. We argue that cDs instead form rapidly, via hierarchical merging, prior to cluster virialization. Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Based in part on observations obtained at the European Southern Observatory, for VLT program 68.D-0130(A).

  3. HST's 10th anniversary, ESA and Hubble : changing our vision

    Science.gov (United States)

    2000-04-01

    With the astronauts who took part in the most recent Servicing Mission (SM3A) in attendance, ESA is taking the opportunity to give a first complete overview of Europe's major contribution to the HST mission. It will also review the first ten years of operations and the outstanding results that have "changed our vision" of the cosmos. A new, fully European outreach initiative - the "European Space Agency Hubble Information Centre" - will be presented and officially launched; it has been set up by ESA to provide information on Hubble from a European perspective. A public conference will take place in the afternoon to celebrate Hubble's achievements midway through its life. Ten years of outstanding performance: Launched on 24 April 1990, Hubble is now midway through its operating life and it is considered one of the most successful space science missions ever. So far more than 10,000 scientific papers based on Hubble results have been published and European scientists have contributed to more than 25% of these. Not only has Hubble produced a rich harvest of scientific results, it has impressed the man in the street with its beautiful images of the sky. Thousands of headlines all over the world have given direct proof of the public's great interest in the mission - 'The deepest images ever', 'The sharpest view of the Universe', 'Measurements of the earliest galaxies' and many others, all reflecting Hubble's performance as a top-class observatory. The Servicing Missions that keep the observatory and its instruments in prime condition are one of the innovative ideas behind Hubble. Astronauts have serviced Hubble three times, and ESA astronauts have taken part in two of these missions. Claude Nicollier (CH) worked with American colleagues on the First Servicing Mission, when Hubble's initial optical problems were repaired. On the latest, Servicing Mission 3A, both Claude Nicollier and Jean-François Clervoy (F) were members of the crew. Over the next 10 years European

  4. The Hubble Constant

    Directory of Open Access Journals (Sweden)

    Neal Jackson

    2015-09-01

    I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of the all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72–74 km s^-1 Mpc^-1, with typical errors of 2–3 km s^-1 Mpc^-1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67–68 km s^-1 Mpc^-1 and typical errors of 1–2 km s^-1 Mpc^-1. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
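    The two quoted ranges can be turned into a rough significance estimate. The sketch below is a back-of-envelope check only, assuming independent Gaussian errors and using illustrative midpoints of the quoted ranges rather than the numbers of any single measurement.

        # Back-of-envelope check of the discrepancy described in the review above,
        # assuming independent Gaussian errors. The midpoint values are illustrative
        # picks from the quoted ranges, not results of any specific paper.
        from math import hypot

        h0_local, err_local = 73.0, 2.5   # representative object-based value (km/s/Mpc)
        h0_cmb, err_cmb = 67.5, 1.5       # representative CMB-based value (km/s/Mpc)

        diff = h0_local - h0_cmb
        err = hypot(err_local, err_cmb)   # errors added in quadrature
        print(f"difference = {diff:.1f} +/- {err:.1f} km/s/Mpc ({diff / err:.1f} sigma)")

    With these illustrative midpoints the tension comes out at roughly 2 sigma, in line with the "mild discrepancy" described above.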

  5. Chandra Independently Determines Hubble Constant

    Science.gov (United States)

    2006-08-01

    A critically important number that specifies the expansion rate of the Universe, the so-called Hubble constant, has been independently determined using NASA's Chandra X-ray Observatory. This new value matches recent measurements using other methods and extends their validity to greater distances, thus allowing astronomers to probe earlier epochs in the evolution of the Universe. "The reason this result is so significant is that we need the Hubble constant to tell us the size of the Universe, its age, and how much matter it contains," said Max Bonamente from the University of Alabama in Huntsville and NASA's Marshall Space Flight Center (MSFC) in Huntsville, Ala., lead author on the paper describing the results. "Astronomers absolutely need to trust this number because we use it for countless calculations." [Illustration: Sunyaev-Zeldovich Effect] The Hubble constant is calculated by measuring the speed at which objects are moving away from us and dividing by their distance. Most of the previous attempts to determine the Hubble constant have involved using a multi-step, or distance ladder, approach in which the distance to nearby galaxies is used as the basis for determining greater distances. The most common approach has been to use a well-studied type of pulsating star known as a Cepheid variable, in conjunction with more distant supernovae to trace distances across the Universe. Scientists using this method and observations from the Hubble Space Telescope were able to measure the Hubble constant to within 10%. However, only independent checks would give them the confidence they desired, considering that much of our understanding of the Universe hangs in the balance. [Chandra X-ray image of MACS J1149.5+223] By combining X-ray data from Chandra with radio observations of galaxy clusters, the team determined the distances to 38 galaxy clusters ranging from 1.4 billion to 9.3 billion
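    The release describes the Hubble constant as a recession velocity divided by a distance. A minimal sketch of that arithmetic, using a hypothetical galaxy rather than the Chandra cluster sample (whose numbers are truncated above), together with the rough 1/H0 age scale:

        # Minimal sketch of the arithmetic described above: H0 = velocity / distance.
        # The galaxy below is hypothetical, not part of the Chandra cluster sample.
        KM_PER_MPC = 3.0857e19          # kilometres in one megaparsec
        SECONDS_PER_GYR = 3.156e16      # seconds in one gigayear

        def hubble_constant(velocity_km_s: float, distance_mpc: float) -> float:
            """H0 in (km/s)/Mpc from a recession velocity and a distance."""
            return velocity_km_s / distance_mpc

        def hubble_time_gyr(h0: float) -> float:
            """Rough age scale 1/H0 in Gyr (ignores deceleration and acceleration)."""
            return (KM_PER_MPC / h0) / SECONDS_PER_GYR

        h0 = hubble_constant(velocity_km_s=7000.0, distance_mpc=100.0)
        print(f"H0 = {h0:.0f} (km/s)/Mpc, 1/H0 = {hubble_time_gyr(h0):.1f} Gyr")

    For H0 near 70 (km/s)/Mpc the 1/H0 timescale is about 14 Gyr, which is the link between the Hubble constant and the age of the Universe mentioned in the release.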

  6. HUBBLE PINPOINTS WHITE DWARFS IN GLOBULAR CLUSTER

    Science.gov (United States)

    2002-01-01

    Peering deep inside a cluster of several hundred thousand stars, NASA's Hubble Space Telescope uncovered the oldest burned-out stars in our Milky Way Galaxy. Located in the globular cluster M4, these small, dying stars - called white dwarfs - are giving astronomers a fresh reading on one of the biggest questions in astronomy: How old is the universe? The ancient white dwarfs in M4 are about 12 to 13 billion years old. After accounting for the time it took the cluster to form after the big bang, astronomers found that the age of the white dwarfs agrees with previous estimates for the universe's age. In the top panel, a ground-based observatory snapped a panoramic view of the entire cluster, which contains several hundred thousand stars within a volume of 10 to 30 light-years across. The Kitt Peak National Observatory's 0.9-meter telescope took this picture in March 1995. The box at left indicates the region observed by the Hubble telescope. The Hubble telescope studied a small region of the cluster. A section of that region is seen in the picture at bottom left. A sampling of an even smaller region is shown at bottom right. This region is only about one light-year across. In this smaller region, Hubble pinpointed a number of faint white dwarfs. The blue circles pinpoint the dwarfs. It took nearly eight days of exposure time over a 67-day period to find these extremely faint stars. Globular clusters are among the oldest clusters of stars in the universe. The faintest and coolest white dwarfs within globular clusters can yield a globular cluster's age. Earlier Hubble observations showed that the first stars formed less than 1 billion years after the universe's birth in the big bang. So, finding the oldest stars puts astronomers within arm's reach of the universe's age. M4 is 7,000 light-years away in the constellation Scorpius. Hubble's Wide Field and Planetary Camera 2 made the observations from January through April 2001. These optical observations were combined to

  7. Theoretical colours and isochrones for some Hubble Space Telescope colour systems. II

    Science.gov (United States)

    Paltoglou, G.; Bell, R. A.

    1991-01-01

    A grid of synthetic surface brightness magnitudes for 14 bandpasses of the Hubble Space Telescope Faint Object Camera is presented, as well as a grid of UBV, uvby, and Faint Object Camera surface brightness magnitudes derived from the Gunn-Stryker spectrophotometric atlas. The synthetic colors are used to examine the transformations between the ground-based Johnson UBV and Stromgren uvby systems and the Faint Object Camera UBV and uvby. Two new four-color systems, similar to the Stromgren system, are proposed for the determination of abundance, temperature, and surface gravity. The synthetic colors are also used to calculate color-magnitude isochrones from the list of theoretical tracks provided by VandenBerg and Bell (1990). It is shown that by using the appropriate filters it is possible to minimize the dependence of this color difference on metallicity. The effects of interstellar reddening on various Faint Object Camera colors are analyzed, as are the observational requirements for obtaining data of a given signal-to-noise ratio for each of the 14 bandpasses.
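    For orientation, the sketch below shows the generic synthetic-photometry step behind such grids: a magnitude computed by integrating a spectrum through a bandpass, here with a photon-weighted mean flux density. The toy spectrum, bandpass, and zero point are illustrative assumptions, not the authors' actual procedure.

        # Hedged sketch of synthetic photometry: m = -2.5*log10(<F_lambda>) + zp,
        # where <F_lambda> is the photon-weighted mean flux through the bandpass.
        # The toy spectrum, bandpass, and zero point are illustrative only.
        from math import log10

        def synthetic_mag(wavelengths, flux, throughput, zero_point=0.0):
            """Photon-weighted synthetic magnitude on a common wavelength grid (trapezoid rule)."""
            num, den = 0.0, 0.0
            for i in range(len(wavelengths) - 1):
                dlam = wavelengths[i + 1] - wavelengths[i]
                num += 0.5 * (flux[i] * throughput[i] * wavelengths[i]
                              + flux[i + 1] * throughput[i + 1] * wavelengths[i + 1]) * dlam
                den += 0.5 * (throughput[i] * wavelengths[i]
                              + throughput[i + 1] * wavelengths[i + 1]) * dlam
            return -2.5 * log10(num / den) + zero_point

        # Toy flat spectrum through a toy box bandpass (wavelengths in Angstroms).
        wl = [4000.0, 4500.0, 5000.0, 5500.0]
        print(synthetic_mag(wl, flux=[1e-13] * 4, throughput=[0.0, 0.8, 0.8, 0.0]))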

  8. Hubble and the Language of Images

    Science.gov (United States)

    Levay, Z. G.

    2005-12-01

    Images released from the Hubble Space Telescope have been very highly regarded by the astronomy-attentive public for at least a decade. Due in large part to these images, Hubble has become an iconic figure, even among the general public. This iconic status is both a boon and a burden for those who produce the stream of images flowing from this telescope. While the benefits of attention are fairly obvious, the negative aspects are less visible. One of the most persistent challenges is the need to continue to deliver images that "top" those released before. In part this can be accomplished because of Hubble's upgraded instrumentation. But it can also be a source of pressure that could, if left unchecked, erode ethical boundaries in our communication with the public. These pressures are magnified in an atmosphere of uncertainty with regard to the future of the mission.

  9. Hubble Source Catalog

    Science.gov (United States)

    Lubow, S.; Budavári, T.

    2013-10-01

    We have created an initial catalog of objects observed by the WFPC2 and ACS instruments on the Hubble Space Telescope (HST). The catalog is based on observations taken on more than 6000 visits (telescope pointings) of ACS/WFC and more than 25000 visits of WFPC2. The catalog is obtained by cross matching by position in the sky all Hubble Legacy Archive (HLA) Source Extractor source lists for these instruments. The source lists describe properties of source detections within a visit. The calculations are performed on a SQL Server database system. First we collect overlapping images into groups, e.g., Eta Car, and determine nearby (approximately matching) pairs of sources from different images within each group. We then apply a novel algorithm for improving the cross matching of pairs of sources by adjusting the astrometry of the images. Next, we combine pairwise matches into maximal sets of possible multi-source matches. We apply a greedy Bayesian method to split the maximal matches into more reliable matches. We test the accuracy of the matches by comparing the fluxes of the matched sources. The result is a set of information that ties together multiple observations of the same object. A byproduct of the catalog is greatly improved relative astrometry for many of the HST images. We also provide information on nondetections that can be used to determine dropouts. With the catalog, for the first time, one can carry out time domain, multi-wavelength studies across a large set of HST data. The catalog is publicly available. Much more can be done to expand the catalog capabilities.
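    As an illustration of the pairwise positional-matching step described above, the sketch below flags detections from two visits whose positions agree to within a small angular tolerance. It is a toy example with hypothetical coordinates, not the HLA/SQL Server pipeline or the Budavári & Lubow algorithm itself.

        # Toy sketch of pairwise positional matching between two visit-based source
        # lists: detections are paired when their angular separation is below a
        # tolerance. Not the actual HLA pipeline or cross-match algorithm.
        from math import radians, degrees, sin, cos, asin, sqrt

        def angular_sep_arcsec(ra1, dec1, ra2, dec2):
            """Angular separation (arcsec) between two sky positions given in degrees."""
            ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
            a = sin((dec2 - dec1) / 2) ** 2 + cos(dec1) * cos(dec2) * sin((ra2 - ra1) / 2) ** 2
            return degrees(2 * asin(sqrt(a))) * 3600.0

        def match_pairs(list_a, list_b, tol_arcsec=0.1):
            """Return index pairs (i, j) whose positions agree to within tol_arcsec."""
            pairs = []
            for i, (ra1, dec1) in enumerate(list_a):
                for j, (ra2, dec2) in enumerate(list_b):
                    if angular_sep_arcsec(ra1, dec1, ra2, dec2) < tol_arcsec:
                        pairs.append((i, j))
            return pairs

        # Hypothetical detections (RA, Dec in degrees) from two visits of the same field.
        visit1 = [(161.26490, -59.68440), (161.26600, -59.68300)]
        visit2 = [(161.26491, -59.68441), (161.27000, -59.68500)]
        print(match_pairs(visit1, visit2))  # -> [(0, 0)]

    The real catalog improves on this brute-force idea by first grouping overlapping images, adjusting the relative astrometry, and then resolving multi-source matches with a greedy Bayesian step, as described in the abstract.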

  10. BEAUTY IN THE EYE OF HUBBLE

    Science.gov (United States)

    2002-01-01

    A dying star, IC 4406, dubbed the 'Retina Nebula' is revealed in this month's Hubble Heritage image. Like many other so-called planetary nebulae, IC 4406 exhibits a high degree of symmetry; the left and right halves of the Hubble image are nearly mirror images of the other. If we could fly around IC4406 in a starship, we would see that the gas and dust form a vast donut of material streaming outward from the dying star. From Earth, we are viewing the donut from the side. This side view allows us to see the intricate tendrils of dust that have been compared to the eye's retina. In other planetary nebulae, like the Ring Nebula (NGC 6720), we view the donut from the top. The donut of material confines the intense radiation coming from the remnant of the dying star. Gas on the inside of the donut is ionized by light from the central star and glows. Light from oxygen atoms is rendered blue in this image; hydrogen is shown as green, and nitrogen as red. The range of color in the final image shows the differences in concentration of these three gases in the nebula. Unseen in the Hubble image is a larger zone of neutral gas that is not emitting visible light, but which can be seen by radio telescopes. One of the most interesting features of IC 4406 is the irregular lattice of dark lanes that criss-cross the center of the nebula. These lanes are about 160 astronomical units wide (1 astronomical unit is the distance between the Earth and Sun). They are located right at the boundary between the hot glowing gas that produces the visual light imaged here and the neutral gas seen with radio telescopes. We see the lanes in silhouette because they have a density of dust and gas that is a thousand times higher than the rest of the nebula. The dust lanes are like a rather open mesh veil that has been wrapped around the bright donut. The fate of these dense knots of material is unknown. Will they survive the nebula's expansion and become dark denizens of the space between the stars

  11. Hubble induced mass after inflation in spectator field models

    Energy Technology Data Exchange (ETDEWEB)

    Fujita, Tomohiro [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Harigaya, Keisuke, E-mail: tomofuji@stanford.edu, E-mail: keisukeh@icrr.u-tokyo.ac.jp [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    Spectator field models such as the curvaton scenario and the modulated reheating are attractive scenarios for the generation of the cosmic curvature perturbation, as the constraints on inflation models are relaxed. In this paper, we discuss the effect of Hubble induced masses on the dynamics of spectator fields after inflation. We pay particular attention to the Hubble induced mass generated by the kinetic energy of an oscillating inflaton, which is generically unsuppressed but often overlooked. In the curvaton scenario, the Hubble induced mass relaxes the constraints on the properties of the inflaton and the curvaton, such as the reheating temperature and the inflation scale. We comment on the implication of our discussion for baryogenesis in the curvaton scenario. In the modulated reheating, the predictions of models, e.g. the non-Gaussianity, can be considerably altered. Furthermore, we propose a new model of the modulated reheating utilizing the Hubble induced mass, which realizes a wide range of the local non-Gaussianity parameter.
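    For reference, a Hubble-induced mass of this kind is usually written as a quadratic correction to the spectator potential whose dimensionless coefficient depends on the model (e.g. on couplings to the inflaton kinetic energy). The schematic form below is that generic parametrization, not a result derived in the paper.

        % Schematic Hubble-induced mass term for a spectator field \sigma; the
        % dimensionless coefficient c is model dependent and its sign is not fixed.
        % Generic parametrization only, not the specific result of the paper.
        \Delta V(\sigma) = \tfrac{1}{2}\, c\, H^{2}(t)\, \sigma^{2},
        \qquad
        3 M_{\rm Pl}^{2} H^{2}(t) = \rho_{\rm inflaton}(t)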

  12. Testing the isotropy of the Hubble expansion

    OpenAIRE

    Migkas, K.; Plionis, M.

    2016-01-01

    We have used the Union2.1 SNIa compilation to search for possible Hubble expansion anisotropies, dividing the sky into 9 solid angles containing roughly the same number of SNIa, as well as into two Galactic hemispheres. We identified only one sky region, containing 82 SNIa (~15% of the total sample with z > 0.02), that indeed appears to share a Hubble expansion significantly different from the rest of the sample. However, this behaviour can be attributed to the joint "erratic" behaviour of ...
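    A minimal sketch of the kind of region-by-region comparison described above: at low redshift the luminosity distance is approximately cz/H0, so each supernova yields an H0 estimate that can be averaged per sky region. The regions, redshifts, and distances below are hypothetical, and this is not the authors' actual fitting procedure.

        # Toy region-by-region Hubble-expansion comparison. For z << 1 the
        # luminosity distance is roughly d_L = c*z/H0, so each low-z supernova
        # gives an H0 estimate c*z/d_L; we average these per sky region.
        C_KM_S = 299792.458  # speed of light in km/s

        def h0_per_region(snia):
            """snia: list of (region_label, redshift, luminosity_distance_Mpc)."""
            regions = {}
            for region, z, d_l in snia:
                regions.setdefault(region, []).append(C_KM_S * z / d_l)
            return {region: sum(vals) / len(vals) for region, vals in regions.items()}

        # Hypothetical low-z supernovae: (sky region, z, d_L in Mpc).
        sample = [("north", 0.03, 128.0), ("north", 0.05, 215.0),
                  ("south", 0.04, 170.0), ("south", 0.06, 252.0)]
        for region, h0 in h0_per_region(sample).items():
            print(f"{region}: H0 ~ {h0:.1f} km/s/Mpc")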

  13. Hubble 2020: Outer Planet Atmospheres Legacy (OPAL) Program

    Science.gov (United States)

    Simon, Amy

    2017-08-01

    Long time base observations of the outer planets are critical in understanding the atmospheric dynamics and evolution of the gas giants. We propose yearly monitoring of each giant planet for the remainder of Hubble's lifetime to provide a lasting legacy of increasingly valuable data for time-domain studies. The Hubble Space Telescope is a unique asset to planetary science, allowing high spatial resolution data with absolute photometric knowledge. For the outer planets, gas/ice giant planets Jupiter, Saturn, Uranus and Neptune, many phenomena happen on timescales of years to decades, and the data we propose are beyond the scope of a typical GO program. Hubble is the only platform that can provide high spatial resolution global studies of cloud coloration, activity, and motion on a consistent time basis to help constrain the underlying mechanics.

  14. Carnegie Hubble Program: A Mid-Infrared Calibration of the Hubble Constant

    Science.gov (United States)

    Freedman, Wendy L.; Madore, Barry F.; Scowcroft, Victoria; Burns, Chris; Monson, Andy; Persson, S. Eric; Seibert, Mark; Rigby, Jane

    2012-01-01

    Using a mid-infrared calibration of the Cepheid distance scale based on recent observations at 3.6 micrometers with the Spitzer Space Telescope, we have obtained a new, high-accuracy calibration of the Hubble constant. We have established the mid-IR zero point of the Leavitt law (the Cepheid period-luminosity relation) using time-averaged 3.6 micrometer data for 10 high-metallicity, Milky Way Cepheids having independently measured trigonometric parallaxes. We have adopted the slope of the PL relation using time-averaged 3.6 micrometer data for 80 long-period Large Magellanic Cloud (LMC) Cepheids falling in the period range 0.8 < log(P) < 1.8. We find a new reddening-corrected distance modulus to the LMC of 18.477 +/- 0.033 (systematic) mag. We re-examine the systematic uncertainties in H_0, also taking into account new data over the past decade. In combination with the new Spitzer calibration, the systematic uncertainty in H_0 over that obtained by the Hubble Space Telescope Key Project has decreased by over a factor of three. Applying the Spitzer calibration to the Key Project sample, we find a value of H_0 = 74.3 +/- 2.1 (systematic) km s^-1 Mpc^-1, corresponding to a 2.8% systematic uncertainty in the Hubble constant. This result, in combination with WMAP7 measurements of the cosmic microwave background anisotropies and assuming a flat universe, yields a value of the equation of state for dark energy, w_0 = -1.09 +/- 0.10. Alternatively, relaxing the constraints on flatness and the number of relativistic species, and combining our results with those of WMAP7, Type Ia supernovae, and baryon acoustic oscillations yields w_0 = -1.08 +/- 0.10 and a value of N_eff = 4.13 +/- 0.67, mildly consistent with the existence of a fourth neutrino species.
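    As a quick consistency check of the numbers quoted above, the LMC distance modulus mu = 18.477 mag corresponds to a physical distance through d = 10^(mu/5 + 1) parsecs:

        # Convert the quoted LMC distance modulus (18.477 mag) into kiloparsecs
        # using d = 10**(mu/5 + 1) parsecs.
        def modulus_to_kpc(mu_mag: float) -> float:
            return 10.0 ** (mu_mag / 5.0 + 1.0) / 1.0e3

        print(f"LMC distance ~ {modulus_to_kpc(18.477):.1f} kpc")  # about 49.6 kpc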

  15. HUBBLE SPIES MOST DISTANT SUPERNOVA EVER SEEN

    Science.gov (United States)

    2002-01-01

    Using NASA's Hubble Space Telescope, astronomers pinpointed a blaze of light from the farthest supernova ever seen, a dying star that exploded 10 billion years ago. The detection and analysis of this supernova, called 1997ff, is greatly bolstering the case for the existence of a mysterious form of dark energy pervading the cosmos, making galaxies hurl ever faster away from each other. The supernova also offers the first glimpse of the universe slowing down soon after the Big Bang, before it began speeding up. This panel of images, taken with the Wide Field and Planetary Camera 2, shows the supernova's cosmic neighborhood; its home galaxy; and the dying star itself. Astronomers found this supernova in 1997 during a second look at the northern Hubble Deep Field [top panel], a tiny region of sky first explored by the Hubble telescope in 1995. The image shows the myriad of galaxies Hubble spied when it peered across more than 10 billion years of time and space. The white box marks the area where the supernova dwells. The photo at bottom left is a close-up view of that region. The white arrow points to the exploding star's home galaxy, a faint elliptical. Its redness is due to the billions of old stars residing there. The picture at bottom right shows the supernova itself, distinguished by the white dot in the center. Although this stellar explosion is among the brightest beacons in the universe, it could not be seen directly in the Hubble images. The stellar blast is so distant from Earth that its light is buried in the glow of its host galaxy. To find the supernova, astronomers compared two pictures of the 'deep field' taken two years apart. One image was of the original Hubble Deep Field; the other, the follow-up deep-field picture taken in 1997. Using special computer software, astronomers then measured the light from the galaxies in both images. Noting any changes in light output between the two pictures, the computer identified a blob of light in the 1997 picture

  16. Dismantling Hubble's Legacy?

    OpenAIRE

    Way, Michael J.

    2013-01-01

    Edwin Hubble is famous for a number of discoveries that are well known to amateur and professional astronomers, students and the general public. The origins of these discoveries are examined and it is demonstrated that, in each case, a great deal of supporting evidence was already in place. In some cases the discoveries had either already been made, or competing versions were not adopted for complex scientific and sociological reasons.

  17. System performance optimization

    International Nuclear Information System (INIS)

    Bednarz, R.J.

    1978-01-01

    The System Performance Optimization has become an important and difficult field for large scientific computer centres. Important because the centres must satisfy increasing user demands at the lowest possible cost. Difficult because the System Performance Optimization requires a deep understanding of hardware, software and workload. The optimization is a dynamic process depending on the changes in hardware configuration, the current level of the operating system, and the user-generated workload. With the increasing complexity of computer systems and software, the scope for optimization manoeuvres broadens. The hardware of two manufacturers, IBM and CDC, is discussed. Four IBM and two CDC operating systems are described. The description concentrates on the organization of the operating systems, the job scheduling and I/O handling. The performance definitions, workload specification and tools for system simulation are given. The measurement tools for the System Performance Optimization are described. The measurement results and the various methods used for operating system tuning are discussed. (Auth.)

  18. New Hubble Servicing Mission to upgrade instruments

    Science.gov (United States)

    2006-10-01

    The history of the NASA/ESA Hubble Space Telescope is dominated by the familiar sharp images and amazing discoveries that have had an unprecedented scientific impact on our view of the world and our understanding of the universe. Nevertheless, such important contributions to science and humankind have only been possible as result of regular upgrades and enhancements to Hubble’s instrumentation. Using the Space Shuttle for this fifth Servicing Mission underlines the important role that astronauts have played and continue to play in increasing the Space Telescope’s lifespan and scientific power. Since the loss of Columbia in 2003, the Shuttle has been successfully launched on three missions, confirming that improvements made to it have established the required high level of safety for the spacecraft and its crew. “There is never going to be an end to the science that we can do with a machine like Hubble”, says David Southwood, ESA’s Director of Science. “Hubble is our way of exploring our origins. Everyone should be proud that there is a European element to it and that we all are part of its success at some level.” This Servicing Mission will not just ensure that Hubble can function for perhaps as much as another ten years; it will also increase its capabilities significantly in key areas. This highly visible mission is expected to take place in 2008 and will feature several space walks. As part of the upgrade, two new scientific instruments will be installed: the Cosmic Origins Spectrograph and Wide Field Camera 3. Each has advanced technology sensors that will dramatically improve Hubble’s potential for discovery and enable it to observe faint light from the youngest stars and galaxies in the universe. With such an astounding increase in its science capabilities, this orbital observatory will continue to penetrate the most distant regions of outer space and reveal breathtaking phenomena. “Today, Hubble is producing more science than ever before in

  19. Finding our Origins with the Hubble and James Webb Space Telescopes

    Science.gov (United States)

    Gardner, Jonathan P.

    2009-01-01

    NASA is planning a successor to the Hubble Space Telescope designed to study the origins of galaxies, stars, planets and life in the universe. In this talk, Dr. Gardner will discuss the origin and evolution of galaxies, beginning with the Big Bang and tracing what we have learned with Hubble through to the present day. He will show that results from studies with Hubble have led to plans for its successor, the James Webb Space Telescope. Webb is scheduled to launch in 2014, and is designed to find the first galaxies that formed in the distant past and to penetrate the dusty clouds of gas where stars are still forming today. He will compare Webb to Hubble, and discuss recent progress in the construction of the observatory.

  20. Nickel-hydrogen battery testing for Hubble Space Telescope

    Science.gov (United States)

    Baggett, Randy M.; Whitt, Thomas H.

    1989-01-01

    The authors identify objectives and provide data from several nickel-hydrogen battery tests designed to evaluate the possibility of launching Ni-H2 batteries on the Hubble Space Telescope (HST). Test results from a 14-cell battery, a 12-cell battery, and a 4-cell pack are presented. Results of a thermal vacuum test to verify the battery-module/bay heat rejection capacity are reported. A 6-battery system simulation breadboard is described, and test results are presented.

  1. Replacement vs. Renovation: The Reincarnation of Hubble Middle School

    Science.gov (United States)

    Ogurek, Douglas J.

    2010-01-01

    At the original Hubble Middle School, neither the views (a congested Roosevelt Road and glimpses of downtown Wheaton) nor the century-old facility that offered them was very inspiring. Built at the start of the 20th century, the 250,000-square-foot building was converted from Wheaton Central High School to Hubble Middle School in the early 1980s.…

  2. A Hubble Diagram for Quasars

    Directory of Open Access Journals (Sweden)

    Susanna Bisogni

    2018-01-01

    The cosmological model is at present not tested between the redshift of the farthest observed supernovae (z ~ 1.4) and that of the Cosmic Microwave Background (z ~ 1,100). Here we introduce a new method to measure the cosmological parameters: we show that quasars can be used as “standard candles” by employing the non-linear relation between their intrinsic UV and X-ray emission as an absolute distance indicator. We built a sample of ~1,900 quasars with available UV and X-ray observations, and produced a Hubble Diagram up to z ~ 5. The analysis of the quasar Hubble Diagram, when used in combination with supernovae, provides robust constraints on the matter and energy content in the cosmos. The application of this method to forthcoming, larger quasar samples will also provide tight constraints on the dark energy equation of state and its possible evolution with time.
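    The sketch below shows the algebra implied by such a method, under the assumption of a relation log L_X = gamma*log L_UV + beta with L = 4*pi*d_L^2*F in both bands, so the luminosity distance can be solved for from the two observed fluxes. The slope and normalization used here are illustrative placeholders, not values from the paper.

        # Heavily hedged sketch of a luminosity-distance estimate from a non-linear
        # X-ray/UV relation log L_X = gamma*log L_UV + beta, with L = 4*pi*d_L^2*F
        # in both bands. gamma and beta below are illustrative placeholders only.
        from math import log10, pi

        def log10_dl(flux_uv: float, flux_x: float,
                     gamma: float = 0.6, beta: float = 8.0) -> float:
            """log10 of the luminosity distance, in whatever length unit beta implies."""
            # Solve log(4*pi*dL^2*Fx) = gamma*log(4*pi*dL^2*Fuv) + beta for log dL.
            return (gamma * log10(flux_uv) - log10(flux_x) + beta
                    + (gamma - 1.0) * log10(4.0 * pi)) / (2.0 * (1.0 - gamma))

    In practice the normalization has to be calibrated externally or marginalized over, which is one reason the quasar Hubble Diagram is used in combination with supernovae in the analysis described above.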

  3. Delivering Hubble Discoveries to the Classroom

    Science.gov (United States)

    Eisenhamer, B.; Villard, R.; Weaver, D.; Cordes, K.; Knisely, L.

    2013-04-01

    Today's classrooms are significantly influenced by current news events, delivered instantly into the classroom via the Internet. Educators are challenged daily to transform these events into student learning opportunities. In the case of space science, current news events may be the only chance for educators and students to explore the marvels of the Universe. Inspired by these circumstances, the education and news teams developed the Star Witness News science content reading series. These online news stories (also available in downloadable PDF format) mirror the content of Hubble press releases and are designed for upper elementary and middle school level readers to enjoy. Educators can use Star Witness News stories to reinforce students' reading skills while exposing students to the latest Hubble discoveries.

  4. Cataclysmic variables, Hubble-Sandage variables and eta Carinae

    International Nuclear Information System (INIS)

    Bath, G.T.

    1980-01-01

    The Hubble-Sandage variables are the most luminous stars in external galaxies. They were first investigated by Hubble and Sandage (1953) for use as distance indicators. Their main characteristics are high luminosity, blue colour indices, and irregular variability. Spectroscopically they show hydrogen and helium in emission with occasionally weaker FeII and [FeII], and no Balmer jump (Humphreys 1975, 1978). In this respect they closely resemble cataclysmic variables, particularly dwarf novae. In the quiescent state dwarf novae show broad H and HeI, together with a strong UV continuum. In contrast to the spectroscopic similarities, the luminosities could hardly differ more. Rather than being the brightest stars known, quiescent dwarf novae are as faint or fainter than the sun. It is suggested that the close correspondence between the spectral appearance of the two classes combined with the difference in luminosity is well accounted for by a model of Hubble-Sandage variables in which the same physical processes are occurring, but on a larger scale. (Auth.)

  5. Cosmic Collisions The Hubble Atlas of Merging Galaxies

    CERN Document Server

    Christensen, Lars Lindberg; Martin, Davide

    2009-01-01

    Lars Lindberg Christensen, Raquel Yumi Shida & Davide De Martin Cosmic Collisions: The Hubble Atlas of Merging Galaxies Like majestic ships in the grandest night, galaxies can slip ever closer until their mutual gravitational interaction begins to mold them into intricate figures that are finally, and irreversibly, woven together. It is an immense cosmic dance, choreographed by gravity. Cosmic Collisions contains a hundred new images of colliding galaxies from the NASA/ESA Hubble Space Telescope, many of them thus far unpublished. It is believed that many present-day galaxies, including the Milky Way, were assembled from such a coalescence of smaller galaxies, occurring over billions of years. Triggered by the colossal and violent interaction between the galaxies, stars form from large clouds of gas in firework bursts, creating brilliant blue star clusters. The importance of these cosmic encounters reaches far beyond the stunning Hubble images. They may, in fact, be among the most important processes that shape ...

  6. YOUNG PLANETARY NEBULAE: HUBBLE SPACE TELESCOPE IMAGING AND A NEW MORPHOLOGICAL CLASSIFICATION SYSTEM

    International Nuclear Information System (INIS)

    Sahai, Raghvendra; Villar, Gregory G.; Morris, Mark R.

    2011-01-01

    Using Hubble Space Telescope images of 119 young planetary nebulae (PNs), most of which have not previously been published, we have devised a comprehensive morphological classification system for these objects. This system generalizes a recently devised system for pre-planetary nebulae, which are the immediate progenitors of PNs. Unlike previous classification studies, we have focused primarily on young PNs rather than all PNs, because the former best show the influences or symmetries imposed on them by the dominant physical processes operating at the first and primary stage of the shaping process. Older PNs develop instabilities, interact with the ambient interstellar medium, and are subject to the passage of photoionization fronts, all of which obscure the underlying symmetries and geometries imposed early on. Our classification system is designed to suffer minimal prejudice regarding the underlying physical causes of the different shapes and structures seen in our PN sample; however, in many cases, physical causes are readily suggested by the geometry, along with the kinematics that have been measured in some systems. Secondary characteristics in our system, such as ansae, indicate the impact of a jet upon a slower-moving, prior wind; a waist is the signature of a strong equatorial concentration of matter, whether it be outflowing or in a bound Keplerian disk; and point symmetry indicates a secular trend, presumably precession, in the orientation of the central driver of a rapid, collimated outflow.

  7. Hubble expansion in static spacetime

    International Nuclear Information System (INIS)

    Rossler, Otto E.; Froehlich, Dieter; Movassagh, Ramis; Moore, Anthony

    2007-01-01

    A recently proposed mechanism for light-path expansion in a static spacetime is based on the moving-lenses paradigm. Since the latter is valid independently of whether space expands or not, a static universe can be used to better see the implications. The moving-lenses paradigm is related to the paradigm of dynamical friction. If this is correct, a Hubble-like law is implicit. It is described quantitatively. A bend in the Hubble-like line is predictably implied. The main underlying assumption is Price's Principle (PI₃). If the theory is sound, the greatest remaining problem in cosmology becomes the origin of hydrogen. Since Blandford's jet production mechanism for quasars is too weak, a generalized Hawking radiation hidden in the walls of cosmic voids is invoked. A second prediction is empirical: slow pattern changes in the cosmic microwave background. A third is ultra-high redshifts for Giacconi quasars. Bruno's eternal universe in the spirit of Augustine becomes a bit less outlandish.

  8. The Carnegie Hubble Program

    Science.gov (United States)

    Freedman, Wendy L.; Madore, Barry F.; Scowcroft, Vicky; Monson, Andy; Persson, S. E.; Rigby, Jane; Sturch, Laura; Stetson, Peter

    2011-01-01

    We present an overview of and preliminary results from an ongoing comprehensive program that has a goal of determining the Hubble constant to a systematic accuracy of 2%. As part of this program, we are currently obtaining 3.6 micron data using the Infrared Array Camera (IRAC) on Spitzer, and the program is designed to include JWST in the future. We demonstrate that the mid-infrared period-luminosity relation for Cepheids at 3.6 microns is the most accurate means of measuring Cepheid distances to date. At 3.6 microns, it is possible to minimize the known remaining systematic uncertainties in the Cepheid extragalactic distance scale. We discuss the advantages of 3.6 micron observations in minimizing systematic effects in the Cepheid calibration of the Hubble constant including the absolute zero point, extinction corrections, and the effects of metallicity on the colors and magnitudes of Cepheids. We are undertaking three independent tests of the sensitivity of the mid-IR Cepheid Leavitt Law to metallicity, which when combined will allow a robust constraint on the effect. Finally, we are providing a new mid-IR Tully-Fisher relation for spiral galaxies.

  9. Characterizing the Evolution of Circumstellar Systems with the Hubble Space Telescope and the Gemini Planet Imager

    Science.gov (United States)

    Wolff, Schuyler G.

    2018-01-01

    The study of circumstellar disks at a variety of evolutionary stages is essential to understand the physical processes leading to planet formation. The recent development of high contrast instruments designed to directly image the structures surrounding nearby stars, such as the Gemini Planet Imager (GPI), together with coronagraphic data from the Hubble Space Telescope (HST), have made detailed studies of circumstellar systems possible. In my thesis work I detail the observation and characterization of three systems. GPI polarization data for the transition disk PDS 66 show a double ring and gap structure with a temporally variable azimuthal asymmetry. This evolved morphology could indicate shadowing from some feature in the innermost regions of the disk, a gap-clearing planet, or a localized change in the dust properties of the disk. Millimeter continuum data of the DH Tau system places limits on the dust mass that is contributing to the strong accretion signature on the wide-separation planetary mass companion, DH Tau b. The lower than expected dust mass constrains the possible formation mechanism, with core accretion followed by dynamical scattering being the most likely. Finally, I present HST scattered light observations of the flared, edge-on protoplanetary disk ESO Hα 569. I combine these data with a spectral energy distribution to model the key structural parameters such as the geometry (disk outer radius, vertical scale height, radial flaring profile), total mass, and dust grain properties in the disk using the radiative transfer code MCFOST. In order to conduct this work, I developed a new tool set to optimize the fitting of disk parameters using the MCMC code emcee to efficiently explore the high dimensional parameter space. This approach allows us to self-consistently and simultaneously fit a wide variety of observables in order to place constraints on the physical properties of a given disk, while also rigorously assessing the uncertainties in the derived parameters.
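
    The thesis pipeline wraps the radiative-transfer code MCFOST in the MCMC sampler emcee; that model is far too heavy to reproduce here, so the sketch below keeps only the sampling pattern, with a toy spectrum generator standing in for MCFOST and illustrative parameter names, priors, and data.

```python
import numpy as np
import emcee

# Stand-in for a radiative-transfer model: in the thesis this step would call
# MCFOST with disk parameters (scale height, flaring index, dust mass, ...).
def model_sed(params, wavelengths):
    log_mass, scale_height = params
    return log_mass - 0.1 * scale_height * np.log10(wavelengths)

def log_probability(params, wavelengths, data, err):
    log_mass, scale_height = params
    if not (-8.0 < log_mass < -2.0 and 1.0 < scale_height < 30.0):
        return -np.inf                      # flat priors outside a plausible box
    resid = (data - model_sed(params, wavelengths)) / err
    return -0.5 * np.sum(resid**2)          # Gaussian likelihood

wavelengths = np.logspace(-0.5, 3.0, 30)    # toy wavelength grid (microns)
truth = (-4.0, 10.0)
data = model_sed(truth, wavelengths) + 0.05 * np.random.randn(wavelengths.size)
err = np.full_like(data, 0.05)

ndim, nwalkers = 2, 32
p0 = np.array(truth) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability,
                                args=(wavelengths, data, err))
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)   # posterior draws
print(samples.mean(axis=0))
```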

  10. Hubble Space Telescope: a Vision to 2020 and Beyond: The Hubble Source Catalog

    Science.gov (United States)

    Strolger, Louis-Gregory

    2016-01-01

    The Hubble Source Catalog (HSC) is an initiative centered on what science would be enabled by a master catalog of all the sources HST has imaged over its lifetime. The first version of this catalog was released in early 2015, and included approximately 30 million sources from archived direct imaging with WFPC2, ACS (through 2011), and WFC3 (to 2014). Version 2, scheduled for release in early 2016, will feed off the Hubble Legacy Archive DR9 release, updating the ACS sources with more detections, and more direct imaging, through to mid-2015. This talk will overview the properties and goals of the HSC in terms of its source detection, object resolution, confusion limits, and overall astrometric and photometric precision. I will also discuss the connections to other MAST activities (e.g., the Discovery Portal interface), to STScI and user products (e.g., the Spectroscopic Catalog and High-Level Science Products), and to community resources (e.g., Pan-STARRS, SDSS, and eventually GAIA). The HSC successfully amalgamates the diverse observations with HST, and despite the limitations in uniformity on the sky, will be an important reference for JWST, LSST, and other future telescopes.

  11. HUBBLE SPACE TELESCOPE ASTROMETRY OF THE PROCYON SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Bond, Howard E. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Gilliland, Ronald L.; Kozhurina-Platais, Vera; Nelan, Edmund P. [Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218 (United States); Schaefer, Gail H. [The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023 (United States); Demarque, Pierre; Girard, Terrence M. [Department of Astronomy, Yale University, Box 208101, New Haven, CT 06520 (United States); Holberg, Jay B. [Lunar and Planetary Laboratory, University of Arizona, 1541 E. University Blvd., Tucson, AZ 85721 (United States); Gudehus, Donald [Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303 (United States); Mason, Brian D. [U.S. Naval Observatory, 3450 Massachusetts Ave., Washington, DC 20392 (United States); Burleigh, Matthew R.; Barstow, Martin A., E-mail: heb11@psu.edu [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom)

    2015-11-10

    The nearby star Procyon is a visual binary containing the F5 IV-V subgiant Procyon A, orbited in a 40.84-year period by the faint DQZ white dwarf (WD) Procyon B. Using images obtained over two decades with the Hubble Space Telescope, and historical measurements back to the 19th century, we have determined precise orbital elements. Combined with measurements of the parallax and the motion of the A component, these elements yield dynamical masses of 1.478 ± 0.012 M⊙ and 0.592 ± 0.006 M⊙ for A and B, respectively. The mass of Procyon A agrees well with theoretical predictions based on asteroseismology and its temperature and luminosity. Use of a standard core-overshoot model agrees best for a surprisingly high amount of core overshoot. Under these modeling assumptions, Procyon A’s age is ∼2.7 Gyr. Procyon B’s location in the H-R diagram is in excellent agreement with theoretical cooling tracks for WDs of its dynamical mass. Its position in the mass–radius plane is also consistent with theory, assuming a carbon–oxygen core and a helium-dominated atmosphere. Its progenitor’s mass was 1.9–2.2 M⊙, depending on its amount of core overshoot. Several astrophysical puzzles remain. In the progenitor system, the stars at periastron were separated by only ∼5 AU, which might have led to tidal interactions and even mass transfer; yet there is no direct evidence that these have occurred. Moreover the orbital eccentricity has remained high (∼0.40). The mass of Procyon B is somewhat lower than anticipated from the initial-to-final-mass relation seen in open clusters. The presence of heavy elements in its atmosphere requires ongoing accretion, but the place of origin is uncertain.
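
    As a quick consistency check, not taken from the paper itself, Kepler's third law in solar units ties the quoted dynamical masses and orbital period to the size of the relative orbit:

```latex
% Kepler's third law in solar units: a [AU], P [yr], masses [M_sun]
a^{3} = (M_A + M_B)\,P^{2} = (1.478 + 0.592)\times(40.84)^{2} \approx 3.45\times 10^{3}
\quad\Longrightarrow\quad a \approx 15.1\ \mathrm{AU}.
```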

  12. Hubble Space Telescope Astrometry of the Procyon System

    Science.gov (United States)

    Bond, Howard E.; Gilliland, Ronald L.; Schaefer, Gail H.; Demarque, Pierre; Girard, Terrence M.; Holberg, Jay B.; Gudehus, Donald; Mason, Brian D.; Kozhurina-Platais, Vera; Burleigh, Matthew R.

    2015-01-01

    The nearby star Procyon is a visual binary containing the F5 IV-V subgiant Procyon A, orbited in a 40.84-year period by the faint DQZ white dwarf (WD) Procyon B. Using images obtained over two decades with the Hubble Space Telescope, and historical measurements back to the 19th century, we have determined precise orbital elements. Combined with measurements of the parallax and the motion of the A component, these elements yield dynamical masses of 1.478 ± 0.012 M⊙ and 0.592 ± 0.006 M⊙ for A and B, respectively. The mass of Procyon A agrees well with theoretical predictions based on asteroseismology and its temperature and luminosity. Use of a standard core-overshoot model agrees best for a surprisingly high amount of core overshoot. Under these modeling assumptions, Procyon A's age is approximately 2.7 Gyr. Procyon B's location in the H-R diagram is in excellent agreement with theoretical cooling tracks for WDs of its dynamical mass. Its position in the mass-radius plane is also consistent with theory, assuming a carbon-oxygen core and a helium-dominated atmosphere. Its progenitor's mass was 1.9-2.2 M⊙, depending on its amount of core overshoot. Several astrophysical puzzles remain. In the progenitor system, the stars at periastron were separated by only approximately 5 AU, which might have led to tidal interactions and even mass transfer; yet there is no direct evidence that these have occurred. Moreover the orbital eccentricity has remained high (approximately 0.40). The mass of Procyon B is somewhat lower than anticipated from the initial-to-final-mass relation seen in open clusters. The presence of heavy elements in its atmosphere requires ongoing accretion, but the place of origin is uncertain.

  13. Theoretical colours and isochrones for some Hubble Space Telescope colour systems

    Science.gov (United States)

    Edvardsson, B.; Bell, R. A.

    1989-01-01

    Synthetic spectra for effective temperatures of 4000-7250 K, logarithmic surface gravities typical of dwarfs and subgiants, and metallicities from solar values to 0.001 of the solar metallicity were used to derive a grid of synthetic surface brightness magnitudes for 21 of the Hubble Space Telescope Wide Field Camera (WFC) band passes. The absolute magnitudes of these 21 band passes are also obtained for a set of globular cluster isochrones with different helium abundances, metallicities, oxygen abundances, and ages. The usefulness and efficiency of different sets of broad and intermediate bandwidth WFC colors for determining ages and metallicities for globular clusters are evaluated.

  14. Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter

    Science.gov (United States)

    Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon

    2018-03-01

    We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. Then, we restrict the models to the asymptotically homogeneous models each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late time ΛLTB models are parametrized by four parameters including the value of the cosmological constant and the local Hubble parameter. The other two parameters are used to parametrize the observed distance-redshift relation. Then, the ΛLTB models are constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis of the CMB temperature and polarization anisotropies, we found that the inhomogeneous universe models with vanishing cosmological constant are ruled out, as expected. However, a significant under-density around us is still compatible with the angular power spectrum of the CMB and the local Hubble parameter.

  15. A Toy Cosmology Using a Hubble-Scale Casimir Effect

    Directory of Open Access Journals (Sweden)

    Michael E. McCulloch

    2014-02-01

    The visible mass of the observable universe agrees with that needed for a flat cosmos, and the reason for this is not known. It is shown that this can be explained by modelling the Hubble volume as a black hole that emits Hawking radiation inwards, disallowing wavelengths that do not fit exactly into the Hubble diameter, since partial waves would allow an inference of what lies outside the horizon. This model of “horizon wave censorship” is equivalent to a Hubble-scale Casimir effect. This incomplete toy model is presented to stimulate discussion. It predicts a minimum mass and acceleration for the observable universe which are in agreement with the observed mass and acceleration, and predicts that the observable universe gains mass as it expands and was hotter in the past. It also predicts a suppression of variation on the largest cosmic scales that agrees with the low-l cosmic microwave background anomaly seen by the Planck satellite.

  16. Hubble peers inside a celestial geode

    Science.gov (United States)

    2004-08-01

    Credits: ESA/NASA, Yaël Nazé (University of Liège, Belgium) and You-Hua Chu (University of Illinois, Urbana, USA). In this unusual image, the NASA/ESA Hubble Space Telescope captures a rare view of the celestial equivalent of a geode - a gas cavity carved by the stellar wind and intense ultraviolet radiation from a young hot star. Real geodes are handball-sized, hollow rocks that start out as bubbles in volcanic or sedimentary rock. Only when these inconspicuous round rocks are split in half by a geologist do we get a chance to appreciate the inside of the rock cavity that is lined with crystals. In the case of Hubble's 35 light-year diameter ‘celestial geode’, the transparency of its bubble-like cavity of interstellar gas and dust reveals the treasures of its interior. The object, called N44F, is being inflated by a torrent of fast-moving particles (what astronomers call a 'stellar wind') from an exceptionally hot star (the bright star just below the centre of the bubble) once buried inside a cold dense cloud. Compared with our Sun (which is losing mass through the so-called 'solar wind'), the central star in N44F is ejecting more than 100 million times more mass per second, and the hurricane of particles moves much faster, at 7 million km per hour.

  17. Distance to M33 determined from magnitude corrections to Hubble's original cepheid photometry

    International Nuclear Information System (INIS)

    Sandage, A.

    1983-01-01

    New photoelectric photometry in Selected Area 45, and transfers from a faint photoelectric sequence adjacent to the south-preceding arm in M33, have been made to the comparison stars for Hubble's Cepheids in M33. Progressive magnitude corrections are required to Hubble's M33 scales, reaching 2.8 mag at the limit of the Mount Wilson 2.5-m Hooker reflector. Hubble's Cepheid light curves have been corrected to the B photoelectric system, and new photometric parameters are given for 35 of his variables. The P-L relation agrees in zero point to within 0.2 mag of the P-L relation from independent data by Sandage and Carlson for 12 new Cepheids in an outlying region of M33. Application of an adopted absolute P-L relation, calibrated by Martin, Warren, and Feast, to these data gives an apparent blue modulus of (m-M)^AB_M33 = 25.35, which is 0.67 mag fainter than a previously adopted value, and represents a factor of 4.2 increase of Hubble's earliest distance. Three consequences of this larger apparent distance modulus are (1) the mean absolute magnitude of the first three brightest red supergiants in M33 is ⟨M_V^max(3)⟩ = -8.7 rather than approximately -8.0, complicating but not destroying use of red supergiants as distance indicators, (2) the mean absolute magnitude of the two brightest blue irregular supergiant variables is ⟨M_B(2)⟩ = -9.95, which is close to the value for the brightest known supergiants in the galaxy, and (3) the absolute magnitude of M33 itself is brighter than heretofore assumed
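
    For reference, converting the quoted apparent modulus to a distance is a one-line calculation (kept here only as an illustration; the apparent blue modulus is not corrected for extinction, so the true distance is somewhat smaller):

```latex
(m-M)^{AB}_{M33} = 25.35 \;\Longrightarrow\;
d = 10^{[(m-M)+5]/5}\ \mathrm{pc} = 10^{6.07}\ \mathrm{pc} \approx 1.2\ \mathrm{Mpc}.
```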

  18. Metrical connection in space-time, Newton's and Hubble's laws

    International Nuclear Information System (INIS)

    Maeder, A.

    1978-01-01

    The theory of gravitation in general relativity is not scale invariant. Here, we follow Dirac's proposition of a scale invariant theory of gravitation (i.e. a theory in which the equations keep their form when a transformation of scale is made). We examine some concepts of Weyl's geometry, like the metrical connection, the scale transformations and invariance, and we discuss their consequences for the equation of the geodetic motion and for its Newtonian limit. Under general conditions, we show that the only non-vanishing component of the coefficient of metrical connection may be identified with Hubble's constant. In this framework, the equivalent to the Newtonian approximation for the equation of motion contains an additional acceleration term H dr/dt, which produces an expansion of gravitational systems. The velocity of this expansion is shown to increase linearly with the distance between interacting objects. The relative importance of this new expansion term to the Newtonian one varies like (2ρ_c/ρ)^(1/2), where ρ_c is the critical density of the Einstein-de Sitter model and ρ is the mean density of the considered gravitational configuration. Thus, this 'generalized expansion' is important essentially for systems of mean density not too much above the critical density. Finally, our main conclusion is that in the integrable Weyl geometry, Hubble's law - like Newton's law - would appear as an intrinsic property of gravitation, being only the most visible manifestation of a general effect characterizing the gravitational interaction.

  19. Automation of Hubble Space Telescope Mission Operations

    Science.gov (United States)

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry

    2012-01-01

    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations including telemetry and forward link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions, and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities required in the new automated operations era. This paper will provide a high level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  20. A guide to hubble space telescope objects their selection, location, and significance

    CERN Document Server

    Chen, James L

    2015-01-01

    From the authors of "How to Find the Apollo Landing Sites," this is a guide to connecting the view above with the history of recent scientific discoveries from the Hubble Space Telescope. Each selected HST photo is shown with a sky map and a photograph or drawing to illustrate where to find it and how it should appear from a backyard telescope. Here is the casual observer's chance to locate the deep space objects visually, and appreciate the historic Hubble photos in comparison to what is visible from a backyard telescope. HST objects of all types are addressed, from Messier objects, Caldwell objects, and NGC objects, and are arranged in terms of what can be seen during the seasons. Additionally, the reader is given an historical perspective on the work of Edwin Hubble, while locating and viewing the deep space objects that changed astronomy forever.  Countless people have seen the amazing photographs taken by the Hubble Space Telescope. But how many people can actually point out where in the sky ...

  1. Optimal Control and Optimization of Stochastic Supply Chain Systems

    CERN Document Server

    Song, Dong-Ping

    2013-01-01

    Optimal Control and Optimization of Stochastic Supply Chain Systems examines its subject in the context of the presence of a variety of uncertainties. Numerous examples with intuitive illustrations and tables are provided, to demonstrate the structural characteristics of the optimal control policies in various stochastic supply chains and to show how to make use of these characteristics to construct easy-to-operate sub-optimal policies.                 In Part I, a general introduction to stochastic supply chain systems is provided. Analytical models for various stochastic supply chain systems are formulated and analysed in Part II. In Part III the structural knowledge of the optimal control policies obtained in Part II is utilized to construct easy-to-operate sub-optimal control policies for various stochastic supply chain systems accordingly. Finally, Part IV discusses the optimisation of threshold-type control policies and their robustness. A key feature of the book is its tying together of ...

  2. Optimal Alarm Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system is simply an optimal level-crossing predictor that can be designed to elicit the fewest false alarms for a fixed detection probability. It...

  3. Second generation spectrograph for the Hubble Space Telescope

    Science.gov (United States)

    Woodgate, B. E.; Boggess, A.; Gull, T. R.; Heap, S. R.; Krueger, V. L.; Maran, S. P.; Melcher, R. W.; Rebar, F. J.; Vitagliano, H. D.; Green, R. F.; Wolff, S. C.; Hutchings, J. B.; Jenkins, E. B.; Linsky, J. L.; Moos, H. W.; Roesler, F.; Shine, R. A.; Timothy, J. G.; Weistrop, D. E.; Bottema, M.; Meyer, W.

    1986-01-01

    The preliminary design for the Space Telescope Imaging Spectrograph (STIS), which has been selected by NASA for definition study for future flight as a second-generation instrument on the Hubble Space Telescope (HST), is presented. STIS is a two-dimensional spectrograph that will operate from 1050 A to 11,000 A at the limiting HST resolution of 0.05 arcsec FWHM, with spectral resolutions of 100, 1200, 20,000, and 100,000 and a maximum field-of-view of 50 x 50 arcsec. Its basic operating modes include echelle mode, long slit mode, slitless spectrograph mode, coronagraphic spectroscopy, photon time-tagging, and direct imaging. Research objectives are active galactic nuclei, the intergalactic medium, global properties of galaxies, the origin of stellar systems, stellar spectral variability, and spectrographic mapping of solar system processes.

  4. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2012-12-01

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in' with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method to optimize this process to ensure all required components most optimally fit inside the Notebook system. © 2012 IEEE.

  5. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2013-01-10

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in' with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method to optimize this process to ensure all required components most optimally fit inside the Notebook system.

  6. Observations of the Hubble Deep Field with the Infrared Space Observatory .4. Association of sources with Hubble Deep Field galaxies

    DEFF Research Database (Denmark)

    Mann, R.G.; Oliver, S.J.; Serjeant, S.B.G.

    1997-01-01

    We discuss the identification of sources detected by the Infrared Space Observatory (ISO) at 6.7 and 15 mu m in the Hubble Deep Field (HDF) region. We conservatively associate ISO sources with objects in existing optical and near-infrared HDF catalogues using the likelihood ratio method, confirming these results (and, in one case, clarifying them) with independent visual searches. We find 15 ISO sources to be reliably associated with bright [I-814(AB)] HDF galaxies, and one with an I-814(AB)=19.9 star, while a further 11 are associated with objects in the Hubble Flanking Fields (10 galaxies and one star). Amongst optically bright HDF galaxies, ISO tends to detect luminous, star-forming galaxies at fairly high redshift and with disturbed morphologies, in preference to nearby ellipticals.

  7. Type Ia supernova Hubble residuals and host-galaxy properties

    International Nuclear Information System (INIS)

    Kim, A. G.; Aldering, G.; Aragon, C.; Bailey, S.; Fakhouri, H. K.; Antilogus, P.; Bongard, S.; Canto, A.; Cellier-Holzem, F.; Fleury, M.; Guy, J.; Baltay, C.; Buton, C.; Feindt, U.; Greskovic, P.; Kowalski, M.; Childress, M.; Chotard, N.; Copin, Y.; Gangler, E.

    2014-01-01

    Kim et al. introduced a new methodology for determining peak-brightness absolute magnitudes of type Ia supernovae from multi-band light curves. We examine the relation between their parameterization of light curves and Hubble residuals, based on photometry synthesized from the Nearby Supernova Factory spectrophotometric time series, with global host-galaxy properties. The K13 Hubble residual step with host mass is 0.013 ± 0.031 mag for a supernova subsample with data coverage corresponding to the K13 training; at <<1σ, the step is not significant and lower than previous measurements. Relaxing the data coverage requirement, the Hubble residual step with host mass is 0.045 ± 0.026 mag for the larger sample; a calculation using the modes of the distributions, less sensitive to outliers, yields a step of 0.019 mag. The analysis in this article uses K13 inferred luminosities, as distinguished from previous works that use magnitude corrections as a function of SALT2 color and stretch parameters: steps at >2σ significance are found in SALT2 Hubble residuals in samples split by the values of their K13 x(1) and x(2) light-curve parameters. x(1) affects the light-curve width and color around peak (similar to the Δm_15 and stretch parameters), and x(2) affects colors, the near-UV light-curve width, and the light-curve decline 20-30 days after peak brightness. The novel light-curve analysis, increased parameter set, and magnitude corrections of K13 may be capturing features of SN Ia diversity arising from progenitor stellar evolution.
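
    The "step" quoted above is simply the difference between the weighted-mean Hubble residuals of supernovae in high-mass and low-mass hosts; a minimal sketch with invented inputs (not the Nearby Supernova Factory data) is:

```python
import numpy as np

def mass_step(residuals, errors, log_mass, split=10.0):
    """Weighted mean Hubble-residual difference across a host-mass split.

    residuals : Hubble residuals in mag; errors : their 1-sigma uncertainties;
    log_mass  : log10 host stellar mass; split : dividing mass in dex.
    """
    hi, lo = log_mass >= split, log_mass < split

    def wmean(x, e):
        w = 1.0 / e**2
        return np.sum(w * x) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

    m_hi, e_hi = wmean(residuals[hi], errors[hi])
    m_lo, e_lo = wmean(residuals[lo], errors[lo])
    return m_hi - m_lo, np.hypot(e_hi, e_lo)

# Hypothetical inputs, only to show the calling pattern.
rng = np.random.default_rng(0)
log_mass = rng.uniform(8.5, 11.5, 200)
residuals = 0.02 * (log_mass >= 10.0) + 0.12 * rng.standard_normal(200)
errors = np.full(200, 0.12)
step, step_err = mass_step(residuals, errors, log_mass)
print(f"step = {step:+.3f} +/- {step_err:.3f} mag")
```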

  8. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

    This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shed, optimal reconfiguration of distribution network, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural network, fuzzy theory, genetic algorithm, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, the total transfer capability computation in multiple areas, are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn...

  9. The Hubble Legacy Archive: Data Processing in the Era of AstroDrizzle

    Science.gov (United States)

    Strolger, Louis-Gregory; Hubble Legacy Archive Team, The Hubble Source Catalog Team

    2015-01-01

    The Hubble Legacy Archive (HLA) expands the utility of Hubble Space Telescope wide-field imaging data by providing high-level composite images and source lists, perusable and immediately available online. The latest HLA data release (DR8.0) marks a fundamental change in how these image combinations are produced, using DrizzlePac tools and Astrodrizzle to reduce geometric distortion and provide improved source catalogs for all publicly available data. We detail the HLA data processing and source list schemas, what products are newly updated and available for WFC3 and ACS, and how these data products are further utilized in the production of the Hubble Source Catalog. We also discuss plans for future development, including updates to WFPC2 products and field mosaics.

  10. Distributed Optimization System

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  11. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems...... optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general....

  12. A nuclear data approach for the Hubble constant measurements

    Directory of Open Access Journals (Sweden)

    Pritychenko Boris

    2017-01-01

    An extraordinary number of Hubble constant measurements challenges physicists with the selection of the best numerical value. The standard U.S. Nuclear Data Program (USNDP) codes and procedures have been applied to resolve this issue. The nuclear data approach has produced the most probable or recommended Hubble constant value of 67.2(69) (km/sec)/Mpc. This recommended value is based on the last 20 years of experimental research and includes contributions from different types of measurements. The present result implies (14.55 ± 1.51) × 10^9 years as a rough estimate for the age of the Universe. The complete list of recommended results is given and possible implications are discussed.
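
    The age estimate quoted above follows from the Hubble time 1/H0; as a rough check (ignoring the dependence on the detailed expansion history):

```latex
t_{H} = \frac{1}{H_{0}}
      = \frac{3.086\times 10^{19}\ \mathrm{km\,Mpc^{-1}}}{67.2\ \mathrm{km\,s^{-1}\,Mpc^{-1}}}
      \approx 4.6\times 10^{17}\ \mathrm{s} \approx 1.46\times 10^{10}\ \mathrm{yr},
```

    which is consistent with the ~14.55 × 10^9 years quoted in the record.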

  13. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier  approaches, based on some degree of heuristics, to the use of  more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applie...

  14. Reliability and optimization of structural systems

    International Nuclear Information System (INIS)

    Thoft-Christensen, P.

    1987-01-01

    The proceedings contain 28 papers presented at the 1st working conference. The working conference was organized by the IFIP Working Group 7.5. The proceedings also include 4 papers which were submitted, but for various reasons not presented at the working conference. The working conference was attended by 50 participants from 18 countries. The conference was the first scientific meeting of the new IFIP Working Group 7.5 on 'Reliability and Optimization of Structural Systems'. The purpose of the Working Group 7.5 is to promote modern structural system optimization and reliability theory, to advance international cooperation in the field of structural system optimization and reliability theory, to stimulate research, development and application of structural system optimization and reliability theory, to further the dissemination and exchange of information on reliability and optimization of structural systems, and to encourage education in structural system optimization and reliability theory. (orig./HP)

  15. Simulation-based optimization of thermal systems

    International Nuclear Information System (INIS)

    Jaluria, Yogesh

    2009-01-01

    This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results
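
    As a toy illustration of the multi-objective aspect mentioned above, a weighted-sum trade-off between heat transfer and pressure drop can be handed to an off-the-shelf optimizer; the two surrogate models below are invented placeholders, not a validated thermal simulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy trade-off: higher coolant velocity improves heat transfer but raises
# the pressure drop. Both surrogates are placeholders for a real simulation.
def heat_transfer(v):      # arbitrary monotone surrogate (to be maximized)
    return 10.0 * v**0.8

def pressure_drop(v):      # arbitrary surrogate (to be minimized)
    return 4.0 * v**1.8

def weighted_objective(v, w=0.5):
    # Weighted sum: minimize -w*Q + (1-w)*dP; sweeping w traces a Pareto front.
    return -w * heat_transfer(v) + (1.0 - w) * pressure_drop(v)

for w in (0.3, 0.5, 0.7):
    res = minimize_scalar(lambda v: weighted_objective(v, w),
                          bounds=(0.1, 5.0), method="bounded")
    print(f"w={w}: v*={res.x:.2f}, Q={heat_transfer(res.x):.1f}, "
          f"dP={pressure_drop(res.x):.1f}")
```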

  16. Measurement of Hubble constant: non-Gaussian errors in HST Key Project data

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Meghendra [Dr. A.P.J. Abdul Kalam Technical University, Uttar Pradesh, Lucknow, 226021 India (India); Gupta, Shashikant; Pandey, Ashwini [Amity University Haryana, Gurgaon, Haryana, 122413 India (India); Sharma, Satendra, E-mail: meghendrasingh_db@yahoo.co.in, E-mail: shashikantgupta.astro@gmail.com, E-mail: satyamkashwini@gmail.com, E-mail: ssharma_phy@yahoo.co.uk [Yobe State University, Damaturu, Yobe State (Nigeria)

    2016-08-01

    Assuming the Central Limit Theorem, experimental uncertainties in any data set are expected to follow the Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test the above, and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the uncertainties in the above measurement are non-Gaussian.
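
    The authors' data are not reproduced in this record, but the test itself reduces to a single call once the measurements are normalized by their quoted uncertainties; the values below are made up:

```python
import numpy as np
from scipy import stats

# Hypothetical H0 measurements (km/s/Mpc) with 1-sigma uncertainties.
h0 = np.array([72.0, 69.8, 74.3, 67.4, 70.1, 73.2, 68.6, 71.5])
sigma = np.array([2.5, 1.9, 2.2, 1.4, 2.8, 2.0, 1.6, 2.3])

# Normalized deviations from the weighted mean should be ~N(0, 1)
# if the uncertainties are Gaussian with zero systematic offset.
w = 1.0 / sigma**2
h0_mean = np.sum(w * h0) / np.sum(w)
z = (h0 - h0_mean) / sigma

# One-sample Kolmogorov-Smirnov test against the standard normal.
statistic, p_value = stats.kstest(z, "norm")
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
```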

  17. Dependability of self-optimizing mechatronic systems

    CERN Document Server

    Rammig, Franz; Schäfer, Wilhelm; Sextro, Walter

    2014-01-01

    Intelligent technical systems, which combine mechanical, electrical and software engineering with control engineering and advanced mathematics, go far beyond the state of the art in mechatronics and open up fascinating perspectives. Among these systems are so-called self-optimizing systems, which are able to adapt their behavior autonomously and flexibly to changing operating conditions. Self-optimizing systems create high value for example in terms of energy and resource efficiency as well as reliability. The Collaborative Research Center 614 "Self-optimizing Concepts and Structures in Mechanical Engineering" pursued the long-term aim to open up the active paradigm of self-optimization for mechanical engineering and to enable others to develop self-optimizing systems. This book is directed to researchers and practitioners alike. It provides a design methodology for the development of self-optimizing systems consisting of a reference process, methods, and tools. The reference process is divided into two phase...

  18. Optimal perturbations for nonlinear systems using graph-based optimal transport

    Science.gov (United States)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
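
    The set-oriented machinery (transfer operators acting on a graph discretization) is beyond a short snippet, but the building block solved at each perturbation time, discrete Monge-Kantorovich transport with quadratic cost, can be written as a small linear program; the sketch below is generic rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_transport_plan(mu, nu, x_src, x_dst):
    """Discrete Monge-Kantorovich OT with quadratic cost via linear programming.

    mu, nu : source/target probability vectors (each summing to 1).
    x_src, x_dst : support points of the two measures (1-D here).
    Returns the coupling pi minimizing sum_ij pi_ij * |x_i - y_j|^2.
    """
    n, m = len(mu), len(nu)
    cost = (x_src[:, None] - x_dst[None, :])**2        # quadratic cost matrix
    c = cost.ravel()                                    # row-major: pi_ij at i*m+j

    # Marginal constraints: sum_j pi_ij = mu_i and sum_i pi_ij = nu_j.
    a_eq = np.zeros((n + m, n * m))
    for i in range(n):
        a_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        a_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([mu, nu])

    res = linprog(c, A_eq=a_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(n, m)

# Toy usage: move a measure concentrated on the left to one on the right.
x = np.linspace(0.0, 1.0, 5)
mu = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
nu = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
print(np.round(optimal_transport_plan(mu, nu, x, x), 3))
```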

  19. OPF-Based Optimal Location of Two Systems Two Terminal HVDC to Power System Optimal Operation

    Directory of Open Access Journals (Sweden)

    Mehdi Abolfazli

    2013-04-01

    In this paper a suitable mathematical model of the two-terminal HVDC system, in the form of a power injection model, is provided for optimal power flow (OPF) and OPF-based optimal location. The ability of voltage source converter (VSC)-based HVDC to independently control active and reactive power is well represented by the model. The model is used to develop an OPF-based optimal location algorithm for two two-terminal HVDC systems to minimize the total fuel cost and active power losses as the objective function. The optimization framework is modeled as non-linear programming (NLP) and solved with the Matlab and GAMS software packages. The proposed algorithm is implemented on the IEEE 14- and 30-bus test systems. The simulation results show the ability of two two-terminal HVDC systems to improve power system operation. Furthermore, the two two-terminal HVDC systems are compared with PST and OUPFC in power system operation from economical and technical aspects.

  20. HUBBLE provides multiple views of how to feed a black hole

    Science.gov (United States)

    1998-05-01

    Although the cause-and-effect relationships are not yet clear, the views provided by complementary images from two instruments aboard the Hubble Space Telescope are giving astronomers new insights into the powerful forces being exerted in this complex maelstrom. Researchers believe these forces may even have shifted the axis of the massive black hole from its expected orientation. The Hubble wide-field camera visible image of the merged Centaurus A galaxy, also called NGC 5128, shows in sharp clarity a dramatic dark lane of dust girdling the galaxy. Blue clusters of newborn stars are clearly resolved, and silhouettes of dust filaments are interspersed with blazing orange-glowing gas. Located only 10 million light-years away, this peculiar-looking galaxy contains the closest active galactic nucleus to Earth and has long been considered an example of an elliptical galaxy disrupted by a recent collision with a smaller companion spiral galaxy. Using the infrared vision of Hubble, astronomers have penetrated this wall of dust for the first time to see a twisted disk of hot gas swept up in the black hole's gravitational whirlpool. The suspected black hole is so dense it contains the mass of perhaps a billion stars, compacted into a small region of space not much larger than our Solar System. Resolving features as small as seven light-years across, Hubble has shown astronomers that the hot gas disk is tilted in a different direction from the black hole's axis -- like a wobbly wheel around an axle. The black hole's axis is identified by the orientation of a high-speed jet of material, glowing in X-rays and radio frequencies, blasted from the black hole at 1/100th the speed of light. This gas disk presumably fueling the black hole may have formed so recently it is not yet aligned to the black hole's spin axis, or it may simply be influenced more by the galaxy's gravitational tug than by the black hole's. "This black hole is doing its own thing. Aside from receiving fresh

  1. On the determination of the Hubble constant

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.; Harutyunyan, V.V.; Kocharyan, A.A.

    1990-10-01

    The possibility of an alternative determination of the distance scale of the Universe and the Hubble constant based on the numerical analysis of the hierarchical nature of the large scale Universe (galaxies, clusters and superclusters) is proposed. The results of computer experiments performed by means of special numerical algorithms are represented. (author). 9 refs, 7 figs

  2. Optimal Control of Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Vadim Azhmyakov

    2007-01-01

    In the present work, we consider a class of nonlinear optimal control problems, which can be called “optimal control problems in mechanics.” We deal with control systems whose dynamics can be described by a system of Euler-Lagrange or Hamilton equations. Using the variational structure of the solution of the corresponding boundary-value problems, we reduce the initial optimal control problem to an auxiliary problem of multiobjective programming. This technique makes it possible to apply some consistent numerical approximations of a multiobjective optimization problem to the initial optimal control problem. For solving the auxiliary problem, we propose an implementable numerical algorithm.

  3. Optimization of a wearable power system

    Energy Technology Data Exchange (ETDEWEB)

    Kovacevic, I.; Round, S. D.; Kolar, J. W.; Boulouchos, K.

    2008-07-01

    In this paper the optimization of a wearable power system comprising an internal combustion engine, motor/generator, inverter/rectifier, Li-battery pack, DC/DC converters, and controller is performed. The Wearable Power System must have the capability to supply an average of 20 W for 4 days with a peak power of 200 W and have a system weight of less than 4 kg. The main objectives are to select the engine, fuel and battery type, to match the weight of fuel and the number of battery cells, to find the optimal working point of the engine, and to minimize the system weight. The minimization problem is defined in Matlab as a nonlinear constrained optimization task. The optimization procedure returns the optimal system design parameters: the Li-polymer battery with eight cells connected in series for a 28 V DC output voltage, the selection of a gasoline/oil fuel mixture, and the optimal engine working point of 12 krpm for a 4.5 cm³ 4-stroke engine. (author)
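
    A stripped-down version of the constrained weight minimization described above can be posed with a general-purpose solver; the energy and mass coefficients below are placeholders rather than the paper's engine, converter and battery models, and the cell count is relaxed to a continuous variable.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variables: x = [fuel_mass_kg, n_battery_cells] (cells relaxed to float).
# All constants are illustrative placeholders, not the paper's values.
E_REQUIRED_WH = 20.0 * 24 * 4      # 20 W average for 4 days
FUEL_WH_PER_KG = 1200.0            # usable electrical energy per kg of fuel
CELL_WH = 8.0                      # energy per Li cell
CELL_KG = 0.045                    # mass per Li cell
FIXED_KG = 0.9                     # engine + generator + converters + controller

def total_mass(x):
    fuel_kg, n_cells = x
    return FIXED_KG + fuel_kg + CELL_KG * n_cells

def energy_margin(x):              # must be >= 0 to meet the mission energy
    fuel_kg, n_cells = x
    return FUEL_WH_PER_KG * fuel_kg + CELL_WH * n_cells - E_REQUIRED_WH

res = minimize(total_mass, x0=[1.5, 8.0],
               bounds=[(0.0, 3.0), (4.0, 16.0)],
               constraints=[{"type": "ineq", "fun": energy_margin}],
               method="SLSQP")
fuel_kg, n_cells = res.x
print(f"fuel = {fuel_kg:.2f} kg, cells = {n_cells:.1f}, "
      f"system mass = {total_mass(res.x):.2f} kg")
```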

  4. Hubble Captures Volcanic Eruption Plume From Io

    Science.gov (United States)

    1997-01-01

    The Hubble Space Telescope has snapped a picture of a 400-km-high (250-mile-high) plume of gas and dust from a volcanic eruption on Io, Jupiter's large innermost moon.Io was passing in front of Jupiter when this image was taken by the Wide Field and Planetary Camera 2 in July 1996. The plume appears as an orange patch just off the edge of Io in the eight o'clock position, against the blue background of Jupiter's clouds. Io's volcanic eruptions blast material hundreds of kilometers into space in giant plumes of gas and dust. In this image, material must have been blown out of the volcano at more than 2,000 mph to form a plume of this size, which is the largest yet seen on Io.Until now, these plumes have only been seen by spacecraft near Jupiter, and their detection from the Earth-orbiting Hubble Space Telescope opens up new opportunities for long-term studies of these remarkable phenomena.The plume seen here is from Pele, one of Io's most powerful volcanos. Pele's eruptions have been seen before. In March 1979, the Voyager 1 spacecraft recorded a 300-km-high eruption cloud from Pele. But the volcano was inactive when the Voyager 2 spacecraft flew by Jupiter in July 1979. This Hubble observation is the first glimpse of a Pele eruption plume since the Voyager expeditions.Io's volcanic plumes are much taller than those produced by terrestrial volcanos because of a combination of factors. The moon's thin atmosphere offers no resistance to the expanding volcanic gases; its weak gravity (one-sixth that of Earth) allows material to climb higher before falling; and its biggest volcanos are more powerful than most of Earth's volcanos.This image is a contrast-enhanced composite of an ultraviolet image (2600 Angstrom wavelength), shown in blue, and a violet image (4100 Angstrom wavelength), shown in orange. The orange color probably occurs because of the absorption and/or scattering of ultraviolet light in the plume. This light from Jupiter passes through the plume and is

  5. Enhancing Hubble's vision service missions that expanded our view of the universe

    CERN Document Server

    Shayler, David J

    2016-01-01

    After a 20-year struggle to place a large, sophisticated optical telescope in orbit, the Hubble Space Telescope was finally launched in 1990, though its primary mirror was soon found to be flawed. A dramatic mission in 1993 installed corrective optics so that the intended science program could finally begin. Those events are related in a companion to this book, The Hubble Space Telescope: From Concept to Success. Enhancing Hubble’s Vision: Service Missions That Expanded Our View of the Universe tells the story of the four missions between 1997 and 2009 that repaired, serviced and upgraded the instruments on the telescope to maintain its state-of-the-art capabilities. It draws on first-hand interviews with those closely involved in the project. The spacewalking skills and experience gained from maintaining and upgrading Hubble had direct application to the construction of the International Space Station and helped with its maintenance. These skills can be applied to future human and robotic satellite servic...

  6. Construction of a Hubble Diagram: A Tool for Teaching Astronomy

    Directory of Open Access Journals (Sweden)

    Giovanni Cardona Rodriguez

    2016-06-01

    This article presents an activity that can support the work of teachers who run astronomy clubs and want to address the evolution of the Universe. Hubble's law is reconstructed by building a Hubble diagram from Sloan Digital Sky Survey (SDSS) data, from which the value of the Hubble parameter is obtained and the expansion of the Universe is inferred. This educational activity allows teachers to guide their students along the path Hubble followed to determine his law; some implications of applying it in the context of physics teacher training and astronomy clubs are also discussed.

  7. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered, especially systems whose reliability model is a series system of parallel systems. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  8. HVAC system optimization - in-building section

    Energy Technology Data Exchange (ETDEWEB)

    Lu Lu; Wenjian Cai; Lihua Xie; Shujiang Li; Yeng Chai Soh [Nanyang Technological Univ., Singapore (Singapore). School of Electrical and Electronic Engineering

    2005-01-01

    This paper presents a practical method to optimize the in-building section of centralized heating, ventilation and air-conditioning (HVAC) systems, which consists of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy-consuming devices are established. By considering the variation of the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model the duct and pipe networks and obtain optimal differential pressure (DP) set points based on limited sensor information. A mixed-integer nonlinear constrained optimization of system energy is formulated and solved by a modified genetic algorithm. The main feature of the paper is a systematic approach to optimizing the overall system energy consumption rather than that of individual components. A simulation study for a typical centralized HVAC system is provided to compare the proposed optimization method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)

  9. Fuzzy logic control and optimization system

    Science.gov (United States)

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  10. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem of designing structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement.... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally...

  11. HUBBLE CAPTURES MERGER BETWEEN QUASAR AND GALAXY

    Science.gov (United States)

    2002-01-01

    This NASA Hubble Space Telescope image shows evidence for a merger between a quasar and a companion galaxy. This surprising result might require theorists to rethink their explanations for the nature of quasars, the most energetic objects in the universe. The bright central object is the quasar itself, located several billion light-years away. The two wisps to the left of the bright central object are remnants of a bright galaxy that have been disrupted by the mutual gravitational attraction between the quasar and the companion galaxy. This provides clear evidence for a merger between the two objects. Since their discovery in 1963, quasars (quasi-stellar objects) have been enigmatic because they emit prodigious amounts of energy from a very compact source. The most widely accepted model is that a quasar is powered by a supermassive black hole in the core of a galaxy. These new observations proved a challenge for theorists, as no current models predict the complex quasar interactions unveiled by Hubble. The image was taken with the Wide Field Planetary Camera-2. Credit: John Bahcall, Institute for Advanced Study, NASA.

  12. Optimal Design of Gravity Pipeline Systems Using Genetic Algorithm and Mathematical Optimization

    Directory of Open Access Journals (Sweden)

    maryam rohani

    2015-03-01

    In recent years, the optimal design of pipeline systems has become increasingly important in the water industry. In this study, the two methods of genetic algorithm and mathematical optimization were employed for the optimal design of pipeline systems, with the objective of avoiding the water-hammer effect caused by valve closure. The problem of optimal design of a pipeline system is a constrained one, which is converted to an unconstrained optimization problem using an external penalty function approach in the mathematical programming method. The quality of the optimal solution greatly depends on the value of the penalty factor, which is calculated by an iterative method during the optimization procedure such that the computational effort is simultaneously minimized. The results obtained were used to compare the GA and mathematical optimization methods and to determine their efficiency and capabilities for the problem under consideration. It was found that the mathematical optimization method exhibited a slightly better performance compared to the GA method.
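A sketch of the general external-penalty-plus-evolutionary-search idea described above, under stated assumptions: the cost model and the constraint are toy stand-ins (not the paper's hydraulic transient model), and SciPy's differential evolution plays the role of the genetic algorithm.

```python
# External quadratic penalty wrapped around a toy pipe-sizing problem,
# solved with an evolutionary optimizer. All numbers are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

def pipe_cost(d):                      # toy cost: grows with the two diameters
    return 120.0 * d[0]**1.5 + 90.0 * d[1]**1.5

def head_margin(d):                    # toy constraint, must be >= 0
    return d[0] + 0.8 * d[1] - 1.2     # stands in for "no column separation"

def penalized(d, r=1e4):
    g = head_margin(d)
    return pipe_cost(d) + r * min(0.0, g)**2   # penalty only when violated

result = differential_evolution(penalized, bounds=[(0.1, 1.5), (0.1, 1.5)],
                                seed=1, tol=1e-8)
print(result.x, pipe_cost(result.x), head_margin(result.x))
```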

  13. Hubble Space Telescope: Should NASA Proceed with a Servicing Mission?

    National Research Council Canada - National Science Library

    Morgan, Daniel

    2006-01-01

    The National Aeronautics and Space Administration (NASA) estimates that without a servicing mission to replace key components, the Hubble Space Telescope will cease scientific operations in 2008 instead of 2010...

  14. Constraints on the progenitor system of the type Ia supernova 2014J from pre-explosion Hubble space telescope imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Patrick L.; Fox, Ori D.; Filippenko, Alexei V.; Shen, Ken J.; Zheng, WeiKang; Graham, Melissa L.; Tucker, Brad E. [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States); Cenko, S. Bradley [NASA/Goddard Space Flight Center, Code 662, Greenbelt, MD 20771 (United States); Prato, Lisa [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Schaefer, Gail, E-mail: pkelly@astro.berkeley.edu [The CHARA Array of Georgia State University, Mount Wilson Observatory, Mount Wilson, CA 91023 (United States)

    2014-07-20

    We constrain the properties of the progenitor system of the highly reddened Type Ia supernova (SN Ia) 2014J in Messier 82 (M82; d ≈ 3.5 Mpc). We determine the supernova (SN) location using Keck-II K-band adaptive optics images, and we find no evidence for flux from a progenitor system in pre-explosion near-ultraviolet through near-infrared Hubble Space Telescope (HST) images. Our upper limits exclude systems having a bright red giant companion, including symbiotic novae with luminosities comparable to that of RS Ophiuchi. While the flux constraints are also inconsistent with predictions for comparatively cool He-donor systems (T ≲ 35,000 K), we cannot preclude a system similar to V445 Puppis. The progenitor constraints are robust across a wide range of R_V and A_V values, but significantly greater values than those inferred from the SN light curve and spectrum would yield proportionally brighter luminosity limits. The comparatively faint flux expected from a binary progenitor system consisting of white dwarf stars would not have been detected in the pre-explosion HST imaging. Infrared HST exposures yield more stringent constraints on the luminosities of very cool (T < 3000 K) companion stars than was possible in the case of SN Ia 2011fe.

  15. Constraints on the progenitor system of the type Ia supernova 2014J from pre-explosion Hubble space telescope imaging

    International Nuclear Information System (INIS)

    Kelly, Patrick L.; Fox, Ori D.; Filippenko, Alexei V.; Shen, Ken J.; Zheng, WeiKang; Graham, Melissa L.; Tucker, Brad E.; Cenko, S. Bradley; Prato, Lisa; Schaefer, Gail

    2014-01-01

    We constrain the properties of the progenitor system of the highly reddened Type Ia supernova (SN Ia) 2014J in Messier 82 (M82; d ≈ 3.5 Mpc). We determine the supernova (SN) location using Keck-II K-band adaptive optics images, and we find no evidence for flux from a progenitor system in pre-explosion near-ultraviolet through near-infrared Hubble Space Telescope (HST) images. Our upper limits exclude systems having a bright red giant companion, including symbiotic novae with luminosities comparable to that of RS Ophiuchi. While the flux constraints are also inconsistent with predictions for comparatively cool He-donor systems (T ≲ 35,000 K), we cannot preclude a system similar to V445 Puppis. The progenitor constraints are robust across a wide range of R_V and A_V values, but significantly greater values than those inferred from the SN light curve and spectrum would yield proportionally brighter luminosity limits. The comparatively faint flux expected from a binary progenitor system consisting of white dwarf stars would not have been detected in the pre-explosion HST imaging. Infrared HST exposures yield more stringent constraints on the luminosities of very cool (T < 3000 K) companion stars than was possible in the case of SN Ia 2011fe.

  16. The Prevalence of Tobacco, Hubble-Bubble, Alcoholic Drinks, Drugs, and Stimulants among High-School Students

    Directory of Open Access Journals (Sweden)

    Roghayeh Alaee

    2011-08-01

    Introduction: The purpose of the present study was to investigate the prevalence of tobacco, hubble-bubble, alcoholic drinks, and other drugs among Karaj high-school students in 2011. Methods: The research method was a descriptive cross-sectional study. Participants were 447 girl and boy high-school students of Karaj, selected by cluster random sampling. For data gathering, a drug abuse questionnaire and a risk and protective factors inventory were administered to the selected sample. Results: According to the results, 57% of the students said that they had had experience with some kind of drug, including tobacco, hubble-bubble, alcoholic drinks, and other drugs, at least once in their lives. Among soft drugs the highest prevalence was for hubble-bubble, tobacco, and alcoholic drinks, and among hard drugs for ecstasy, opium, hashish, meth, crack, and heroin, respectively. Conclusion: Soft drugs, including hubble-bubble, tobacco, and alcoholic drinks, are the most common among Karaj high-school students. The prevalence of hard drugs among them is rather low.

  17. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    This research presents the main elements of query optimization in distributed systems. First, the data architecture, in relation to the system-level architecture of a distributed environment, is presented. Then the architecture of a distributed database management system (DDBMS) is described at the conceptual level, followed by a presentation of the distributed query execution steps on these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.

  18. Artificial intelligence in power system optimization

    CERN Document Server

    Ongsakul, Weerakorn

    2013-01-01

    With the considerable increase of AI applications, AI is being increasingly used to solve optimization problems in engineering. In the past two decades, the applications of artificial intelligence in power systems have attracted much research. This book covers the current level of applications of artificial intelligence to the optimization problems in power systems. It serves as a textbook for graduate students in electric power system management and is also useful for those who are interested in using artificial intelligence in power system optimization.

  19. Results of a technical analysis of the Hubble Space Telescope nickel-cadmium and nickel-hydrogen batteries

    Science.gov (United States)

    Manzo, Michelle A.

    1991-01-01

    The Hubble Space Telescope (HST) Program Office requested the expertise of the NASA Aerospace Flight Battery Systems Steering Committee (NAFBSSC) in the conduct of an independent assessment of the HST's battery system to assist in their decision of whether to fly nickel-cadmium or nickel-hydrogen batteries on the telescope. In response, a subcommittee to the NAFBSSC was organized with membership comprised of experts with background in the nickel-cadmium/nickel-hydrogen secondary battery/power systems areas. The work and recommendations of that subcommittee are presented.

  20. Optimization of multi-branch switched diversity systems

    KAUST Repository

    Nam, Haewoon

    2009-10-01

    A performance optimization based on the optimal switching threshold(s) for a multi-branch switched diversity system is discussed in this paper. For the conventional multi-branch switched diversity system with a single switching threshold, the optimal switching threshold is a function of both the average channel SNR and the number of diversity branches, and computing it is not a simple task when the number of diversity branches is high. The newly proposed multi-branch switched diversity system is based on a sequence of switching thresholds, instead of a single switching threshold, where each diversity branch uses a different switching threshold for signal comparison. Because each switching threshold in the sequence can be optimized based only on the number of remaining diversity branches, the proposed system makes it easy to find these switching thresholds. Furthermore, selected numerical and simulation results show that the proposed switched diversity system with the sequence of optimal switching thresholds outperforms the conventional system with the single optimal switching threshold. © 2009 IEEE.
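A Monte Carlo sketch of what "optimizing a switching threshold" means in practice, under simplifying assumptions: a memoryless per-snapshot model of dual-branch switched diversity in Rayleigh fading (not the analytical multi-branch optimization of the paper), with illustrative parameters.

```python
# Sweep the switching threshold of a simplified two-branch switched
# diversity scheme and pick the threshold that maximizes average output SNR.
import numpy as np

rng = np.random.default_rng(0)
avg_snr = 10.0                       # average branch SNR (linear), illustrative
n = 200_000
g1 = rng.exponential(avg_snr, n)     # Rayleigh fading -> exponential branch SNRs
g2 = rng.exponential(avg_snr, n)

def mean_output_snr(threshold):
    # stay on branch 1 if it exceeds the threshold, otherwise switch to branch 2
    out = np.where(g1 >= threshold, g1, g2)
    return out.mean()

thresholds = np.linspace(0.0, 3.0 * avg_snr, 61)
gains = [mean_output_snr(t) for t in thresholds]
best = thresholds[int(np.argmax(gains))]
print(f"empirically optimal switching threshold ≈ {best:.2f} (average SNR {avg_snr})")
```

In this simplified model the empirical optimum lands near the average branch SNR, which is the kind of dependence on channel statistics the abstract refers to.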

  1. Power system optimization

    International Nuclear Information System (INIS)

    Bogdan, Zeljko; Cehil, Mislav

    2007-01-01

    Long-term gas purchase contracts usually determine delivery and payment for gas on a regular hourly basis, independently of demand-side consumption. In order to use fuel gas in an economically viable way, optimization of gas distribution for covering consumption must be introduced. In this paper, a mathematical model of the electric utility system used for optimizing gas distribution over electric generators is presented. The utility system comprises an installed capacity of 1500 MW of thermal power plants, 400 MW of combined heat and power plants, 330 MW of a nuclear power plant and 1600 MW of hydro power plants. Based on a known demand curve, the optimization model selects plants according to the prescribed criteria: it first engages run-of-river hydro plants, then the public cogeneration plants, the nuclear plant and the thermal power plants. Storage hydro plants are used for covering peak-load consumption. In case of a shortage of installed capacity, cross-border purchases are allowed. Usage of dual-fuel equipment (gas/oil), which is available in some thermal plants, is also controlled by the optimization procedure. It is shown that by using such a model it is possible to properly plan the amount of fuel gas to be contracted. The contracted amount can easily be distributed over generators efficiently and without losses (no breaks in delivery). The model helps in optimizing the gas-to-oil fuel ratio for plants with combined burners and enables planning of power plant overhauls over the year in a viable and efficient way. (author)
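A toy merit-order dispatch in the spirit of the priority order described above. The split of the 1600 MW of hydro into run-of-river and storage capacity is assumed for illustration; the real model also handles gas contracts and dual-fuel choices, which are not sketched here.

```python
# Engage plants in the stated priority order and cover any remaining
# demand with cross-border purchases. Capacities are illustrative.
priority = [                      # (name, available capacity in MW)
    ("run-of-river hydro", 600),  # assumed split of the 1600 MW of hydro
    ("public cogeneration", 400),
    ("nuclear", 330),
    ("thermal (gas/oil)", 1500),
    ("storage hydro (peak)", 1000),
]

def dispatch(demand_mw):
    plan, remaining = [], demand_mw
    for name, cap in priority:
        used = min(cap, remaining)
        plan.append((name, used))
        remaining -= used
    plan.append(("cross-border purchase", max(0.0, remaining)))
    return plan

for name, mw in dispatch(3100):
    print(f"{name:25s} {mw:6.0f} MW")
```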

  2. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    Science.gov (United States)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics with system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.

  3. European astronomers' successes with the Hubble Space Telescope*

    Science.gov (United States)

    1997-02-01

    [Figure: Laguna Nebula] Their work spans all aspects of astronomy, from the planets to the most distant galaxies and quasars, and the following examples are just a few European highlights from Hubble's second phase, 1994-96. A scarcity of midget stars: Stars less massive and fainter than the Sun are much more numerous in the Milky Way Galaxy than the big bright stars that catch the eye. Guido De Marchi and Francesco Paresce of the European Southern Observatory at Garching, Germany, have counted them. With the wide-field WFPC2 camera, they have taken sample censuses within six globular clusters, which are large gatherings of stars orbiting independently in the Galaxy. In every case they find that the commonest stars have an output of light that is only one-hundredth of the Sun's. They are ten times more numerous than stars like the Sun. More significant for theories of the Universe is a scarcity of very faint stars. Some astronomers have suggested that vast numbers of such stars could account for the mysterious dark matter, which makes stars and galaxies move about more rapidly than expected from the mass of visible matter. But that would require an ever-growing count of objects at low brightnesses, and De Marchi and Paresce find the opposite to be the case -- the numbers diminish. There may be a minimum size below which Nature finds starmaking difficult. The few examples of very small stars seen so far by astronomers may be, not the heralds of a multitude of dark-matter stars, but rarities. Unchanging habits in starmaking: Confirmation that very small stars are scarce comes from Gerry Gilmore of the Institute of Astronomy in Cambridge (UK). He leads a European team that analyses long-exposure images from the WFPC2 camera, obtained as a by-product when another instrument is examining a selected object. The result is an almost random sample of well-observed stars and galaxies. The most remarkable general conclusion is that the make-up of stellar populations never seems to

  4. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods thereof. The solutions are obtained in the limit of large times as an asymptotically stable point of the dynamics; they are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing, and because of this parallelization they can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems. This is illustrated for an example in distributed robotic systems.
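A minimal instance of a penalty-derived dynamical system, not taken from the chapter: a gradient flow on a relaxed max-cut energy for a small graph, with a quartic penalty driving the variables towards ±1. The graph and constants are illustrative, and the converged cut is generally near-optimal rather than guaranteed optimal, matching the abstract's caveat.

```python
# Gradient flow dx/dt = -grad E(x) on E(x) = sum_(i,j) x_i x_j + lam * sum (1 - x_i^2)^2,
# integrated with explicit Euler steps. The sign pattern at convergence is read off as a cut.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small example graph
n, lam, dt, steps = 4, 1.0, 0.01, 5000

def grad(x):
    g = np.zeros(n)
    for i, j in edges:                 # gradient of the coupling term sum x_i * x_j
        g[i] += x[j]
        g[j] += x[i]
    g += lam * 4.0 * x * (x**2 - 1.0)  # gradient of the penalty (1 - x_i^2)^2
    return g

x = 0.1 * np.random.default_rng(3).standard_normal(n)   # small random start
for _ in range(steps):
    x -= dt * grad(x)

spins = np.sign(x)
cut = sum(spins[i] != spins[j] for i, j in edges)
print("spins:", spins, "cut edges:", cut)
```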

  5. NASA Astrophysics E/PO: A Quarter Century of Discovery and Inspiration with the Hubble Space Telescope

    Science.gov (United States)

    Jirdeh, Hussein; Straughn, Amber; Smith, Denise Anne; Eisenhamer, Bonnie

    2015-08-01

    April 24, 2015 marked the 25th anniversary of the launch of the Hubble Space Telescope. In its quarter-century in orbit, the Hubble Space Telescope has transformed the way we understand the Universe, helped us find our place among the stars, and paved the way to incredible advancements in science and technology. In this presentation, we explain how NASA and ESA, including the Space Telescope Science Institute (STScI) and partners, are using the 25th anniversary of Hubble’s launch as a unique opportunity to communicate to students, educators, and the public the significance of the past quarter-century of discovery with the Hubble Space Telescope. We describe the various programs, resources, and experiences we are utilizing to enhance the public understanding of Hubble’s many contributions to the scientific world. These include educator professional development opportunities, exhibits, events, traditional and social media, and resources for educators (formal K-12, informal, and higher education). We also highlight how we are capitalizing on Hubble’s cultural popularity to make the scientific connection to NASA’s next Great Observatory, the James Webb Space Telescope. This presentation highlights many of the opportunities by which students, educators, and the public are joining in the anniversary activities, both in-person and online. Find out more at hubble25th.org and follow #Hubble25 on social media.

  6. Optimization of some eco-energetic systems

    International Nuclear Information System (INIS)

    Purica, I.; Pavelescu, M.; Stoica, M.

    1976-01-01

    An optimization problem for two eco-energetic systems is described. The first one is close to the actual eco-energetic system in Romania, while the second is a new one, based on nuclear energy as the primary source and hydrogen energy as the secondary source. The optimization problem solved is to find the optimal structure of the systems so that the adopted objective functions, namely the unitary energy cost C and the total pollution P, are minimized at the same time. The problem can be modelled as a bimatrix cooperative mathematical game without side payments. We demonstrate the superiority of the new eco-energetic system. (author)
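The paper treats the two objectives (unitary cost C and total pollution P) as a cooperative game; the sketch below instead uses a plain weighted-sum scalarization, a different but common way to expose the same two-objective trade-off. The cost and pollution curves are hypothetical placeholders, not the paper's model.

```python
# Weighted-sum scalarization of two competing objectives over a single
# structural variable (share of nuclear/hydrogen in the energy mix).
import numpy as np

x = np.linspace(0.0, 1.0, 201)       # share of nuclear/hydrogen in the mix
C = 1.0 + 0.6 * x**2                 # hypothetical unitary cost
P = 1.0 - 0.9 * x + 0.2 * x**2       # hypothetical pollution index

for w in (0.2, 0.5, 0.8):            # weight on cost versus pollution
    i = int(np.argmin(w * C + (1 - w) * P))
    print(f"w={w:.1f}: x*={x[i]:.2f}  C={C[i]:.2f}  P={P[i]:.2f}")
```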

  7. Optimal Set-Point Synthesis in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2007-01-01

    This paper presents optimal set-point synthesis for a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger and a water-to-air heat exchanger. The objective function is composed of the electrical power for the different components (fans, primary/secondary pump, tertiary pump, and the air-to-air heat exchanger wheel) and a fraction of the thermal power used by the HVAC system. The goals that have to be achieved by the HVAC system appear as constraints in the optimization problem. To solve the optimization problem, a steady-state model of the HVAC system is derived, while different supplying hydronic circuits are studied for the water-to-air heat exchanger. Finally, the optimal set-points and the optimal supplying hydronic circuit are obtained.

  8. Topology optimized permanent magnet systems

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Bahl, Christian; Insinga, Andrea Roberto

    2017-01-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, an increase of 100% compared to a previous optimized design.

  9. HUBBLE'S PANORAMIC PORTRAIT OF A VAST STAR-FORMING REGION

    Science.gov (United States)

    2002-01-01

    NASA's Hubble Space Telescope has snapped a panoramic portrait of a vast, sculpted landscape of gas and dust where thousands of stars are being born. This fertile star-forming region, called the 30 Doradus Nebula, has a sparkling stellar centerpiece: the most spectacular cluster of massive stars in our cosmic neighborhood of about 25 galaxies. The mosaic picture shows that ultraviolet radiation and high-speed material unleashed by the stars in the cluster, called R136 [the large blue blob left of center], are weaving a tapestry of creation and destruction, triggering the collapse of looming gas and dust clouds and forming pillar-like structures that are incubators for nascent stars. The photo offers an unprecedented, detailed view of the entire inner region of 30 Doradus, measuring 200 light-years wide by 150 light-years high. The nebula resides in the Large Magellanic Cloud (a satellite galaxy of the Milky Way), 170,000 light-years from Earth. Nebulas like 30 Doradus are the 'signposts' of recent star birth. High-energy ultraviolet radiation from the young, hot, massive stars in R136 causes the surrounding gaseous material to glow. Previous Hubble telescope observations showed that R136 contains several dozen of the most massive stars known, each about 100 times the mass of the Sun and about 10 times as hot. These stellar behemoths all formed at the same time about 2 million years ago. The stars in R136 are producing intense 'stellar winds' (streams of material traveling at several million miles an hour), which are wreaking havoc on the gas and dust in the surrounding neighborhood. The winds are pushing the gas away from the cluster and compressing the inner regions of the surrounding gas and dust clouds [the pinkish material]. The intense pressure is triggering the collapse of parts of the clouds, producing a new generation of star formation around the central cluster. The new stellar nursery is about 30 to 50 light-years from R136. Most of the stars in the

  10. Topology optimized permanent magnet systems

    Science.gov (United States)

    Bjørk, R.; Bahl, C. R. H.; Insinga, A. R.

    2017-09-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  11. First results from the Hubble OPAL Program: Jupiter in 2015

    Science.gov (United States)

    Simon, Amy A.; Wong, Michael H.; Orton, Glenn S.

    2015-11-01

    The Hubble 2020: Outer Planet Atmospheres Legacy (OPAL) program is a Director's Discretionary program designed to generate two yearly global maps for each of the outer planets to enable long-term studies of atmospheric color, structure and two-dimensional wind fields. This presentation focuses on Jupiter results from the first year of the campaign. Data were acquired January 19, 2015 with the WFC3/UVIS camera and the F275W, F343N, F395N, F467M, F502N, F547M, F631N, F658N, and F889N filters. Global maps were generated and are publicly available through the High Level Science Products archive: https://archive.stsci.edu/prepds/opal/ Using cross-correlation on the global maps, the zonal wind profile was measured between +/- 50 degrees latitude and is in family with Voyager and Cassini era profiles. There are some variations in mid- to high-latitude wind jet magnitudes, particularly at +40° and -35° planetographic latitude. The Great Red Spot continues to maintain an intense orange coloration, as it did in 2014. However, the interior shows changed structure, including a reduced core and new filamentary features. Finally, a wave not previously seen in Hubble images was also observed and is interpreted as a baroclinic instability with associated cyclone formation near 16° N latitude. A similar feature was observed faintly in Voyager 2 images, and is consistent with the Hubble feature in location and scale.

  12. System Design and Performance of the Two-Gyro Science Mode For the Hubble Space Telescope

    Science.gov (United States)

    Prior, Michael; Dunham, Larry

    2005-01-01

    For fifteen years, the science mission of the Hubble Space Telescope (HST) required using at least three of the six on-board rate gyros for attitude control. Failed gyros were eventually replaced through Space Shuttle servicing missions. The tragic loss of the Space Shuttle Columbia resulted in the cancellation of all planned Shuttle-based missions to HST. While a robotic servicing mission is currently being planned instead, controlling with alternate sensors to replace failed gyros can extend HST science gathering until a servicing mission can be performed, and also extend science at HST's end of life. Additionally, sufficient performance may allow a permanent transition to operations with fewer than three gyros (by intentionally turning off working gyros, saving them for later use), allowing for an even greater science mission extension. To meet this need, a Two-Gyro Science (TGS) mode has been designed and implemented using magnetometers (Magnetic Sensing System - MSS), Fixed Head Star Trackers (FHSTs), and Fine Guidance Sensors (FGSs) to control the vehicle rate about the missing gyro input axis. The development of the TGS capability is the largest re-design of HST operations undertaken, since it affects several major spacecraft subsystems, most heavily the Pointing Control System (PCS) and Flight Software (FSW). Equally important are the extensive modifications and enhancements of the planning and scheduling system, which must now be capable of scheduling science observations while taking into account several new constraints imposed by the TGS operational modes (such as FHST availability and magnetic field geometry) that will impact science-gathering efficiency and target availability. This paper discusses the systems engineering design, development, and performance of the TGS mode, now in its final stages of completion.

  13. Factors influencing the profitability of optimizing control systems

    International Nuclear Information System (INIS)

    Broussaud, A.; Guyot, O.

    1999-01-01

    Optimizing control systems supplement conventional Distributed Control Systems and Programmable Logic Controllers. They continuously implement set points that aim at maximizing the profitability of plant operation, and they are becoming an integral part of modern mineral processing plants. This trend is justified by economic considerations, optimizing control being among the most cost-effective methods of improving metallurgical plant performance. The paper successively analyzes three sets of factors that influence the profitability of optimizing control systems, and provides guidelines for analyzing the potential value of an optimizing control system at a given operation: external factors, such as economic factors and factors related to plant feed; features of the optimizing control system; and the subsequent maintenance of the optimizing control system. It is shown that payback times for optimizing control projects are typically measured in days. The OCS software used by the authors for their applications is described briefly. (author)

  14. An Optimal Lower Eigenvalue System

    Directory of Open Access Journals (Sweden)

    Yingfan Liu

    2011-01-01

    An optimal lower eigenvalue system is studied, and main theorems, including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability, are obtained. As applications, solvability results for some von Neumann-type input-output inequalities, growth and optimal growth factors, as well as Leontief-type balanced and optimal balanced growth paths, are also obtained.

  15. Optimization in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Geraldo R.M. da [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia

    1994-12-31

    This paper discusses some of the advantages and disadvantages of the optimal power flow. It shows some of the difficulties of implementation and proposes solutions. An analysis is made comparing the power flow program, BIGPOWER/CESP, and the optimal power flow program, FPO/SEL, developed by the author, when applied to the CEPEL-ELETRONORTE and CESP systems. (author) 8 refs., 5 tabs.

  16. Electric power system applications of optimization

    CERN Document Server

    Momoh, James A

    2008-01-01

    Contents: Introduction; Structure of a Generic Electric Power System; Power System Models; Power System Control; Power System Security Assessment; Power System Optimization as a Function of Time; Review of Optimization Techniques Applicable to Power Systems; Electric Power System Models; Complex Power Concepts; Three-Phase Systems; Per Unit Representation; Synchronous Machine Modeling; Reactive Capability Limits; Prime Movers and Governing Systems; Automatic Gain Control; Transmission Subsystems; Y-Bus Incorporating the Transformer Effect; Load Models; Available Transfer Capability; Illustrative Examples; Power ...

  17. Hubble's Law Implies Benford's Law for Distances to Galaxies ...

    Indian Academy of Sciences (India)

    in both time and space, predicts that conformity to Benford's law will improve as more data on distances to galaxies becomes available. Conversely, with the logical derivation of this law presented here, the recent empirical observations may be viewed as independent evidence of the validity of Hubble's law.

  18. Remarks on the low value obtained for the Hubble constant

    International Nuclear Information System (INIS)

    Jaakkola, Toivo

    1975-01-01

    Some remarks are made on the basis of the data given by Sandage and Tammann, suggesting that these authors have over-estimated the distances to the most luminous galaxies and obtained a value too low for the Hubble constant.

  19. Long term trending of engineering data for the Hubble Space Telescope

    Science.gov (United States)

    Cox, Ross M.

    1993-01-01

    A major goal in spacecraft engineering analysis is the detection of component failures before the fact. Trending is the process of monitoring subsystem states to discern unusual behaviors. This involves reducing vast amounts of data about a component or subsystem into a form that helps humans discern underlying patterns and correlations. A long-term trending system has been developed for the Hubble Space Telescope. Besides processing the data for 988 distinct telemetry measurements each day, it produces plots of 477 important parameters for the entire 24 hours. Daily updates to the trend files also produce 339 thirty-day trend plots each month. The total system combines command procedures to control the execution of the C-based data processing program, user-written FORTRAN routines, and commercial off-the-shelf plotting software. This paper includes a discussion of the performance of the trending system and of its limitations.
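A small sketch of the data-reduction step the abstract describes, under stated assumptions: the actual HST system used C and FORTRAN with commercial plotting software, whereas this illustration uses pandas on a hypothetical one-month telemetry stream, reducing raw samples to daily min/mean/max values suitable for long-term trend plots.

```python
# Reduce a hypothetical 1-minute telemetry stream to daily trend statistics.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
t = pd.date_range("1993-01-01", periods=30 * 24 * 60, freq="min")  # one month of 1-min samples
battery_temp = (20 + 2 * np.sin(np.arange(t.size) * 2 * np.pi / (24 * 60))
                + rng.normal(0, 0.3, t.size))                      # hypothetical measurement

telemetry = pd.Series(battery_temp, index=t, name="battery_temp_C")
daily_trend = telemetry.resample("1D").agg(["min", "mean", "max"])
print(daily_trend.head())
# daily_trend.plot() would produce the kind of 30-day trend plot described above
```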

  20. Truss systems and shape optimization

    Science.gov (United States)

    Pricop, Mihai Victor; Bunea, Marian; Nedelcu, Roxana

    2017-07-01

    Structure optimization is an important topic because of its benefits and wide applicability range, from civil engineering to aerospace and automotive industries, contributing to a greener industry and life. Truss finite elements are still in use in many research/industrial codes for their simple stiffness matrix and naturally match the requirements of cellular materials, especially considering various 3D printing technologies. Optimality Criteria combined with Solid Isotropic Material with Penalization is the optimization method of choice, particularized for truss systems. Globally locked structures are obtained using locally locked lattice organization, corresponding to structured or unstructured meshes. Post-processing is important for downstream application of the method, to make a faster link to CAD systems. To export the optimal structure to CATIA, a CATScript file is automatically generated. Results, findings and conclusions are given for two- and three-dimensional cases.

  1. An independent determination of the local Hubble constant

    Science.gov (United States)

    Fernández Arenas, David; Terlevich, Elena; Terlevich, Roberto; Melnick, Jorge; Chávez, Ricardo; Bresolin, Fabio; Telles, Eduardo; Plionis, Manolis; Basilakos, Spyros

    2018-02-01

    The relationship between the integrated Hβ line luminosity and the velocity dispersion of the ionized gas of H II galaxies and giant H II regions represents an exciting standard candle that presently can be used up to redshifts z ∼ 4. Locally it is used to obtain precise measurements of the Hubble constant by combining the slope of the relation obtained from nearby (z ≤ 0.2) H II galaxies with the zero-point determined from giant H II regions belonging to an `anchor sample' of galaxies for which accurate redshift-independent distance moduli are available. We present new data for 36 giant H II regions in 13 galaxies of the anchor sample that includes the megamaser galaxy NGC 4258. Our data are the result of the first 4 yr of observation of our primary sample of 130 giant H II regions in 73 galaxies with Cepheid-determined distances. Our best estimate of the Hubble parameter is 71.0 ± 2.8 (random) ± 2.1 (systematic) km s^-1 Mpc^-1. This result is the product of an independent approach and, although at present less precise than the latest SNIa results, it is amenable to substantial improvement.

  2. Strategies for Optimal Design of Structural Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    Reliability-based design of structural systems is considered, especially systems whose reliability model is a series system of parallel systems. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  3. Dissecting the Gravitational lens B1608+656 : II. Precision Measurements of the Hubble Constant, Spatial Curvature, and the Dark Energy Equation of State

    NARCIS (Netherlands)

    Suyu, S. H.; Marshall, P. J.; Auger, M. W.; Hilbert, S.; Blandford, R. D.; Koopmans, L. V. E.; Fassnacht, C. D.; Treu, T.

    2010-01-01

    Strong gravitational lens systems with measured time delays between the multiple images provide a method for measuring the "time-delay distance" to the lens, and thus the Hubble constant. We present a Bayesian analysis of the strong gravitational lens system B1608+656, incorporating (1) new, deep
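A sketch of the time-delay-distance quantity behind this measurement, using astropy for a fiducial flat ΛCDM cosmology. The lens and source redshifts are the commonly quoted values for B1608+656 (an assumption here, not stated in the abstract); the H0 and Om0 values are purely illustrative.

```python
# Time-delay distance D_dt = (1 + z_lens) * D_d * D_s / D_ds for a fiducial cosmology.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

z_lens, z_src = 0.6304, 1.394           # commonly quoted redshifts for B1608+656
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)  # fiducial values

D_d = cosmo.angular_diameter_distance(z_lens)
D_s = cosmo.angular_diameter_distance(z_src)
D_ds = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)

D_dt = (1 + z_lens) * D_d * D_s / D_ds  # time-delay distance
print(D_dt.to(u.Mpc))
# D_dt scales as 1/H0, so comparing a measured D_dt with this fiducial
# prediction rescales H0 accordingly.
```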

  4. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  5. On the Luminosity Distance and the Hubble Constant

    OpenAIRE

    Yuri Heymann

    2013-01-01

    By differentiating luminosity distance with respect to time using its standard formula we find that the peculiar velocity is a time varying velocity of light. Therefore, a new definition of the luminosity distance is provided such that the peculiar velocity is equal to c. Using this definition a Hubble constant H0 = 67.3 km s−1 Mpc−1 is obtained from supernovae data.

  6. The ESA Hubble 15th Anniversary Campaign: A Trans-European collaboration project

    Science.gov (United States)

    Zoulias, Manolis; Christensen, Lars Lindberg; Kornmesser, Martin

    2006-08-01

    On April 24th 2005, the NASA/ESA Hubble Space Telescope had been in orbit for 15 years. The anniversary was celebrated by ESA with the production of an 83-minute scientific movie and a 120-page book, both titled ``Hubble, 15 years of discovery''. In order to cross language and distribution barriers, a network of 16 translators and 22 partners from more than 10 countries was established. The DVD was distributed in approximately 700,000 copies throughout Europe. The project was amongst the largest of its kind with respect to collaboration, distribution and audience impact. It clearly demonstrated how international collaboration can produce effective cross-cultural educational and outreach products for astronomy.

  7. Dynamical friction: The Hubble diagram as a cosmological test

    International Nuclear Information System (INIS)

    Gunn, J.E.; Tinsley, B.M.

    1976-01-01

    Effects on the Hubble diagram of the frictional accretion of small cluster galaxies by large ones, to which Ostriker and Tremaine have recently drawn attention, must be accurately determined if the magnitude-redshift relation is to become a viable cosmological test. We find that the process might be detectable through the concomitant change in galaxy colors, but that its effect on the dispersion of magnitudes of first-ranked cluster galaxies would be negligible even if the change in average magnitude is very important. The sign of the effect of accretion on the luminosity observed within a given aperture depends on the structures of the galaxies involved. The size of the effect not only depends sensitively on the galaxy structures, but is also amplified when the relatively recent collapse times of the clusters are taken into account. It is vital to answer the complicated observational and theoretical questions raised by these preliminary calculations, because the Hubble diagram remains the most promising approach to the deceleration parameter q_0. Local tests of the density of the universe do not give equivalent information

  8. Time-optimal feedback control for linear systems

    International Nuclear Information System (INIS)

    Mirica, S.

    1976-01-01

    The paper deals with the results of qualitative investigations of the time-optimal feedback control for linear systems with constant coefficients. In the first section, after some definitions and notations, two examples are given and it is shown that even the time-optimal control problem for linear systems with constant coefficients which looked like ''completely solved'' requires a further qualitative investigation of the stability to ''permanent perturbations'' of optimal feedback control. In the second section some basic results of the linear time-optimal control problem are reviewed. The third section deals with the definition of Boltyanskii's ''regular synthesis'' and its connection to Filippov's theory of right-hand side discontinuous differential equations. In the fourth section a theorem is proved concerning the stability to perturbations of time-optimal feedback control for linear systems with scalar control. In the last two sections it is proved that, if the matrix which defines the system has only real eigenvalues or is three-dimensional, the time-optimal feedback control defines a regular synthesis and therefore is stable to perturbations. (author)

  9. New Cosmic Horizons: Space Astronomy from the V2 to the Hubble Space Telescope

    Science.gov (United States)

    Leverington, David

    2001-02-01

    Preface; 1. The sounding rocket era; 2. The start of the space race; 3. Initial exploration of the Solar System; 4. Lunar exploration; 5. Mars and Venus; early results; 6. Mars and Venus; the middle period; 7. Venus, Mars and cometary spacecraft post-1980; 8. Early missions to the outer planets; 9. The Voyager missions to the outer planets; 10. The Sun; 11. Early spacecraft observations of non-solar system sources; 12. A period of rapid growth; 13. The high energy astronomy observatory programme; 14. IUE, IRAS and Exosat - spacecraft for the early 1980s; 15. Hiatus; 16. Business as usual; 17. The Hubble Space Telescope.

  10. NEW OBSERVATIONAL CONSTRAINTS ON THE υ ANDROMEDAE SYSTEM WITH DATA FROM THE HUBBLE SPACE TELESCOPE AND HOBBY-EBERLY TELESCOPE

    International Nuclear Information System (INIS)

    McArthur, Barbara E.; Benedict, G. Fritz.; Martioli, Eder; Barnes, Rory; Korzennik, Sylvain; Nelan, Ed; Butler, R. Paul

    2010-01-01

    We have used high-cadence radial velocity (RV) measurements from the Hobby-Eberly Telescope with existing velocities from the Lick, Elodie, Harlan J. Smith, and Whipple 60'' telescopes combined with astrometric data from the Hubble Space Telescope Fine Guidance Sensors to refine the orbital parameters and determine the orbital inclinations and position angles of the ascending node of components υ And A c and d. With these inclinations and using M_* = 1.31 M_Sun as the primary mass, we determine the actual masses of two of the companions: υ And A c is 13.98 (+2.3/-5.3) M_Jup, and υ And A d is 10.25 (+0.7/-3.3) M_Jup. These measurements represent the first astrometric determination of the mutual inclination between objects in an extrasolar planetary system, which we find to be 29.9 ± 1.0 degrees. The combined RV measurements also reveal a long-period trend indicating a fourth planet in the system. We investigate the dynamic stability of this system and analyze regions of stability, which suggest a probable mass of υ And A b. Finally, our parallaxes confirm that υ And B is a stellar companion of υ And A.

  11. Optimal Design of Pumped Pipeline Systems Using Genetic Algorithm and Mathematical Optimization

    Directory of Open Access Journals (Sweden)

    Mohammadhadi Afshar

    2007-12-01

    In recent years, much attention has been paid to the optimal design of pipeline systems. In this study, the problem of optimal pipeline system design is solved through a genetic algorithm and mathematical optimization. Pipe diameters and their thicknesses are considered as decision variables, to be designed in a manner that avoids water column separation and excessive pressures in the event of pump failure. The capabilities of the genetic algorithm and the mathematical programming method are compared for the problem under consideration. For simulation of transient flows, the explicit method of characteristics is used, in which devices such as pumps are defined as boundary conditions of the equations describing the hydraulic behavior of pipe segments. The problem of optimal design of pipeline systems is a constrained problem, which is converted to an unconstrained optimization problem using an external penalty function approach. The efficiency of the proposed approaches is verified in one example and the results are presented.

  12. Trajectory Optimization for Differential Flat Systems

    OpenAIRE

    Kahina Louadj; Benjamas Panomruttanarug; Alexandre Carlos Brandao Ramos; Felix Mora-Camino

    2016-01-01

    The purpose of this communication is to investigate the applicability of variational calculus to the optimization of the operation of differentially flat systems. After introducing characteristic properties of differentially flat systems, the applicability of variational calculus to the optimization of flat output trajectories is demonstrated. Two illustrative examples are also presented.

  13. Beyond the Hubble Constant

    Science.gov (United States)

    1995-08-01

    about the distances to galaxies and thereby about the expansion rate of the Universe. A simple way to determine the distance to a remote galaxy is by measuring its redshift, calculate its velocity from the redshift and divide this by the Hubble constant, H0. For instance, the measured redshift of the parent galaxy of SN 1995K (0.478) yields a velocity of 116,000 km/sec, somewhat more than one-third of the speed of light (300,000 km/sec). From the universal expansion rate, described by the Hubble constant (H0 = 20 km/sec per million lightyears as found by some studies), this velocity would indicate a distance to the supernova and its parent galaxy of about 5,800 million lightyears. The explosion of the supernova would thus have taken place 5,800 million years ago, i.e. about 1,000 million years before the solar system was formed. However, such a simple calculation works only for relatively ``nearby'' objects, perhaps out to some hundred million lightyears. When we look much further into space, we also look far back in time and it is not excluded that the universal expansion rate, i.e. the Hubble constant, may have been different at earlier epochs. This means that unless we know the change of the Hubble constant with time, we cannot determine reliable distances of distant galaxies from their measured redshifts and velocities. At the same time, knowledge about such change or lack of the same will provide unique information about the time elapsed since the Universe began to expand (the ``Big Bang''), that is, the age of the Universe and also its ultimate fate. The Deceleration Parameter q0 Cosmologists are therefore eager to determine not only the current expansion rate (i.e., the Hubble constant, H0) but also its possible change with time (known as the deceleration parameter, q0). Although a highly accurate value of H0 has still not become available, increasing attention is now given to the observational determination of the second parameter, cf. also the Appendix at the
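A worked version of the arithmetic in the passage above. The press release's 116,000 km/s figure presumably comes from a specific cosmological conversion it does not spell out; the relativistic Doppler formula used here is a common approximation that lands in the same range, and dividing the quoted velocity by the quoted H0 reproduces the ~5,800 million light-year figure exactly.

```python
# Redshift -> velocity -> naive distance, using the numbers quoted in the text.
c = 300_000.0      # km/s, speed of light as used in the passage
z = 0.478          # redshift of SN 1995K's parent galaxy
H0 = 20.0          # km/s per million light-years, as quoted

# Relativistic Doppler conversion (one common approximation):
v = c * ((1 + z)**2 - 1) / ((1 + z)**2 + 1)    # ~1.1e5 km/s, close to the quoted 116,000
distance_Mly = 116_000.0 / H0                  # using the article's own velocity value
print(f"v ≈ {v:,.0f} km/s;  distance ≈ {distance_Mly:,.0f} million light-years")
```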

  14. Hubble Space Telescope - Scientific, Technological and Social Contributions to the Public Discourse on Science

    Science.gov (United States)

    Wiseman, Jennifer

    2012-01-01

    The Hubble Space Telescope has unified the world with a sense of awe and wonder for 21 years and is currently more scientifically powerful than ever. I will present highlights of discoveries made with the Hubble Space Telescope, including details of planetary weather, star formation, extra-solar planets, colliding galaxies, and a universe expanding with the acceleration of dark energy. I will also present the unique technical challenges and triumphs of this phenomenal observatory, and discuss how our discoveries in the cosmos affect our sense of human unity, significance, and wonder.

  15. Visual prosthesis wireless energy transfer system optimal modeling.

    Science.gov (United States)

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the visual prosthesis energy supply problem, and theoretical modeling of the system is a prerequisite for optimal energy transfer system design. On the basis of the ideal model of the wireless energy transfer system, and according to the visual prosthesis application conditions, the system model is optimized. During the optimal modeling, planar spiral coils are taken as the coupling devices between the energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and in particular the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. The simulation data of the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions, the parasitic capacitance of the inductance and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. A further comparison with experimental data verifies the validity and accuracy of the optimized model. The optimized model provides useful theoretical guidance for further research on wireless energy transfer systems, and a more precise model reference for solving the power supply problem in clinical applications of visual prostheses.
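A small sketch of how coupling and coil quality factors bound the efficiency of a two-coil resonant inductive link, using a standard textbook expression for the maximum achievable link efficiency rather than the paper's detailed circuit model. The coil values are illustrative; the tissue and parasitic effects discussed above would effectively lower the quality factor on the implanted side.

```python
# Maximum link efficiency of a two-coil resonant inductive link as a
# function of coupling coefficient k and coil quality factors Q1, Q2.
import numpy as np

def max_link_efficiency(k, q1, q2):
    u = k * k * q1 * q2
    return u / (1.0 + np.sqrt(1.0 + u))**2

for k in (0.01, 0.05, 0.1, 0.2):          # illustrative coupling values
    print(f"k={k:.2f}: eta_max = {max_link_efficiency(k, 100, 40):.3f}")
```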

  16. Joint optimization of regional water-power systems

    DEFF Research Database (Denmark)

    Cardenal, Silvio Javier Pereira; Mo, Birger; Gjelsvik, Anders

    2016-01-01

    A method for the joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers in an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that the current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity, was sub-optimal. Optimal allocation was achieved by managing reservoirs...

  17. Linear quadratic optimization for positive LTI system

    Science.gov (United States)

    Muhafzan, Yenti, Syafrida Wirma; Zulakmal

    2017-05-01

    Nowadays, linear quadratic optimization subject to a positive linear time-invariant (LTI) system constitutes an interesting study, since it can serve as a mathematical model for a variety of real problems whose variables have to be nonnegative and whose trajectories must remain nonnegative. In this paper we propose a method to generate an optimal control for linear quadratic optimization subject to a positive LTI system. A sufficient condition that guarantees the existence of such an optimal control is discussed.
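A minimal sketch of the unconstrained linear quadratic regulator for an LTI system, using SciPy's continuous-time Riccati solver; the positivity (nonnegativity) requirements the paper focuses on need additional conditions beyond this sketch, and the system matrices below are illustrative.

```python
# Standard continuous-time LQR: solve the algebraic Riccati equation and
# form the optimal state-feedback gain u = -K x.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # illustrative system matrix
B = np.array([[0.0],
              [1.0]])          # illustrative input matrix
Q = np.eye(2)                  # state weighting
R = np.array([[1.0]])          # input weighting

P = solve_continuous_are(A, B, Q, R)     # A'P + PA - P B R^-1 B' P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # optimal gain
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```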

  18. The Hubble Space Telescope: UV, Visible, and Near-Infrared Pursuits

    Science.gov (United States)

    Wiseman, Jennifer

    2010-01-01

    The Hubble Space Telescope continues to push the limits on world-class astrophysics. Cameras including the Advanced Camera for Surveys and the new panchromatic Wide Field Camera 3, which was installed in last year's successful servicing mission SM4, offer imaging from near-infrared through ultraviolet wavelengths. Spectroscopic studies of sources from black holes to exoplanet atmospheres are making great advances through the versatile use of STIS, the Space Telescope Imaging Spectrograph. The new Cosmic Origins Spectrograph, also installed last year, is the most sensitive UV spectrograph to fly in space and is uniquely suited to address particular scientific questions on galaxy halos, the intergalactic medium, and the cosmic web. With these outstanding capabilities on HST come complex needs for laboratory astrophysics support, including atomic and line identification data. I will provide an overview of Hubble's current capabilities and the scientific programs and goals that particularly benefit from the studies of laboratory astrophysics.

  19. Performance optimization of queueing systems with perturbation realization

    KAUST Repository

    Xia, Li

    2012-04-01

    After the intensive studies of queueing theory in the past decades, many excellent results in performance analysis have been obtained, and successful examples abound. However, exploring special features of queueing systems directly in performance optimization still seems to be a territory not very well cultivated. Recent progress in perturbation analysis (PA) and sensitivity-based optimization provides a new perspective on performance optimization of queueing systems. PA utilizes the structural information of queueing systems to efficiently extract the performance sensitivity information from a sample path of the system. This paper gives a brief review of PA and performance optimization of queueing systems, focusing on a fundamental concept called perturbation realization factors, which captures the special dynamic feature of a queueing system. With the perturbation realization factors as building blocks, the performance derivative formula and performance difference formula can be obtained. With performance derivatives, gradient-based optimization can be derived, while with the performance difference, policy iteration and optimality equations can be derived. These two fundamental formulas provide a foundation for performance optimization of queueing systems from a sensitivity-based point of view. We hope this survey may provide some inspiration on this promising research topic. © 2011 Elsevier B.V. All rights reserved.
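
    The following toy sketch illustrates the idea of gradient-based optimization of a queueing system mentioned above; for simplicity it differentiates the closed-form M/M/1 mean queue length instead of estimating perturbation realization factors from a sample path, and the arrival rate and cost weights are assumptions.

```python
# Toy gradient-based optimization of the service rate of an M/M/1 queue
# (illustrative only; the paper's sample-path/perturbation-realization
# estimators are not used).  lam, cost weights and step size are assumptions.
lam = 1.0          # arrival rate
c_mu = 2.0         # cost per unit of service capacity
c_L = 5.0          # holding cost per customer in the system

def cost(mu):
    L = lam / (mu - lam)            # mean number in an M/M/1 system
    return c_mu * mu + c_L * L

def dcost(mu):
    return c_mu - c_L * lam / (mu - lam) ** 2   # analytic performance derivative

mu = 2.0                            # initial service rate (must exceed lam)
for _ in range(200):
    mu = max(lam + 1e-3, mu - 0.05 * dcost(mu))  # projected gradient step

print(f"optimized service rate ~ {mu:.3f}, cost {cost(mu):.3f}")
# For comparison, the analytic optimum is mu* = lam + sqrt(c_L * lam / c_mu).
```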

  20. Making Data Mobile: The Hubble Deep Field Academy iPad app

    Science.gov (United States)

    Eisenhamer, Bonnie; Cordes, K.; Davis, S.; Eisenhamer, J.

    2013-01-01

    Many school districts are purchasing iPads for educators and students to use as learning tools in the classroom. Educators often prefer these devices to desktop and laptop computers because they offer portability and an intuitive design, while having a larger screen size when compared to smart phones. As a result, we began investigating the potential of adapting online activities for use on Apple’s iPad to enhance the dissemination and usage of these activities in instructional settings while continuing to meet educators’ needs. As a pilot effort, we are developing an iPad app for the “Hubble Deep Field Academy” - an activity that is currently available online and commonly used by middle school educators. The Hubble Deep Field Academy app features the HDF-North image while centering on the theme of how scientists use light to explore and study the universe. It also includes features such as embedded links to vocabulary, images and videos, teacher background materials, and readings about Hubble’s other deep field surveys. Our goal is to impact students’ engagement in STEM-related activities, while enhancing educators’ usage of NASA data via new and innovative mediums. We also hope to develop and share lessons learned with the E/PO community that can be used to support similar projects. We plan to test the Hubble Deep Field Academy app during the school year to determine if this new activity format is beneficial to the education community.

  1. Quasar Host Galaxies/Neptune Rotation/Galaxy Building Blocks/Hubble Deep Field/Saturn Storm

    Science.gov (United States)

    2001-01-01

    Computerized animations simulate a quasar erupting in the core of a normal spiral galaxy, the collision of two interacting galaxies, and the evolution of the universe. Hubble Space Telescope (HST) images show six quasars' host galaxies (including spirals, ellipticals, and colliding galaxies) and six clumps of galaxies approximately 11 billion light years away. A false color time lapse movie of Neptune displays the planet's 16-hour rotation, and the evolution of a storm on Saturn is seen through a video of the planet's rotation. A zoom sequence starts with a ground-based image of the constellation Ursa Major and ends with the Hubble Deep Field through progressively narrower and deeper views.

  2. MOVING OBJECTS IN THE HUBBLE ULTRA DEEP FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Mukremin; Gianninas, Alexandros [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 W. Brooks St., Norman, OK 73019 (United States); Von Hippel, Ted, E-mail: kilic@ou.edu, E-mail: alexg@nhn.ou.edu, E-mail: ted.vonhippel@erau.edu [Embry-Riddle Aeronautical University, 600 S. Clyde Morris Blvd., Daytona Beach, FL 32114 (United States)

    2013-09-01

    We identify proper motion objects in the Hubble Ultra Deep Field (UDF) using the optical data from the original UDF program in 2004 and the near-infrared data from the 128 orbit UDF 2012 campaign. There are 12 sources brighter than I = 27 mag that display >3σ significant proper motions. We do not find any proper motion objects fainter than this magnitude limit. Combining optical and near-infrared photometry, we model the spectral energy distribution of each point source using stellar templates and state-of-the-art white dwarf models. For I ≤ 27 mag, we identify 23 stars with K0-M6 spectral types and two faint blue objects that are clearly old, thick disk white dwarfs. We measure a thick disk white dwarf space density of 0.1-1.7 × 10^{-3} pc^{-3} from these two objects. There are no halo white dwarfs in the UDF down to I = 27 mag. Combining the Hubble Deep Field North, South, and the UDF data, we do not see any evidence for dark matter in the form of faint halo white dwarfs, and the observed population of white dwarfs can be explained with the standard Galactic models.

  3. Optimization of multi-branch switched diversity systems

    KAUST Repository

    Nam, Haewoon; Alouini, Mohamed-Slim

    2009-01-01

    A performance optimization based on the optimal switching threshold(s) for a multi-branch switched diversity system is discussed in this paper. For the conventional multi-branch switched diversity system with a single switching threshold

  4. High-Performance Reaction Wheel Optimization for Fine-Pointing Space Platforms: Minimizing Induced Vibration Effects on Jitter Performance plus Lessons Learned from Hubble Space Telescope for Current and Future Spacecraft Applications

    Science.gov (United States)

    Hasha, Martin D.

    2016-01-01

    The Hubble Space Telescope (HST) applies large-diameter optics (2.5-m primary mirror) for diffraction-limited resolution spanning an extended wavelength range (approx. 100-2500 nm). Its Pointing Control System (PCS) Reaction Wheel Assemblies (RWAs), in the Support Systems Module (SSM), acquired an unprecedented set of high-sensitivity Induced Vibration (IV) data for 5 flight-certified RWAs: dwelling at set rotation rates. Focused on 4 key ratios, force and moment harmonic values (in 3 local principal directions) are extracted in the RWA operating range (0-3000 RPM). The IV test data, obtained under ambient lab conditions, are investigated in detail, evaluated, compiled, and curve-fitted; variational trends, core causes, and unforeseen anomalies are addressed. In aggregate, these values constitute a statistically-valid basis to quantify ground test-to-test variations and facilitate extrapolations to on-orbit conditions. Accumulated knowledge of bearing-rotor vibrational sources, corresponding harmonic contributions, and salient elements of IV key variability factors are discussed. An evolved methodology is presented for absolute assessments and relative comparisons of macro-level IV signal magnitude due to micro-level construction-assembly geometric details/imperfections stemming from both electrical drive and primary bearing design parameters. Based upon studies of same-size/similar-design momentum wheels' IV changes, upper estimates due to transitions from ground tests to orbital conditions are derived. Recommended HST RWA choices are discussed relative to system optimization/tradeoffs of Line-Of-Sight (LOS) vector-pointing focal-plane error driven by higher IV transmissibilities through low-damped structural dynamics that stimulate optical elements. Unique analytical disturbance results for orbital HST accelerations are described applicable to microgravity efforts. Conclusions, lessons learned, historical context/insights, and perspectives on future applications

  5. Optimal design of distributed control and embedded systems

    CERN Document Server

    Çela, Arben; Li, Xu-Guang; Niculescu, Silviu-Iulian

    2014-01-01

    Optimal Design of Distributed Control and Embedded Systems focuses on the design of special control and scheduling algorithms based on system structural properties as well as on analysis of the influence of induced time-delay on system performance. It treats the optimal design of distributed and embedded control systems (DCESs) with respect to communication and calculation-resource constraints, quantization aspects, and potential time-delays induced by the associated communication and calculation model. Particular emphasis is put on optimal control signal scheduling based on the system state. In order to render this complex optimization problem feasible in real time, a time decomposition based on the periodicity induced by the static scheduling is employed. The authors present a co-design approach which subsumes the synthesis of the optimal control laws and the generation of an optimal schedule of control signals on real-time networks as well as the execution of control tasks on a single processor. The a...

  6. Type I supernovae and angular anisotropy of the Hubble constant

    International Nuclear Information System (INIS)

    Le Denmat, Gerard; Vigier, J.-P.

    1975-01-01

    The observation of type I supernovae in distant galaxies yields a homogeneous sample of sources with which to evaluate their true distances. An examination of their distribution in the sky provides a significant confirmation of the angular anisotropy of the Hubble constant already observed by Rubin, Rubin and Ford.

  7. Cogeneration system simulation/optimization

    International Nuclear Information System (INIS)

    Puppa, B.A.; Chandrashekar, M.

    1992-01-01

    Companies are increasingly turning to computer software programs to improve and streamline the analysis of cogeneration systems. This paper introduces a computer program which originated with research at the University of Waterloo. The program can simulate and optimize any type of layout of cogeneration plant. An application of the program to a cogeneration feasibility study for a university campus is described. The Steam and Power Plant Optimization System (SAPPOS) is a PC software package which allows users to model any type of steam/power plant on a component-by-component basis. Individual energy/steam balances can be done quickly to model any scenario. A typical days-per-month cogeneration simulation can also be carried out to provide a detailed monthly cash flow and energy forecast. This paper reports that SAPPOS can be used for scoping, feasibility, and preliminary design work, along with financial studies, gas contract studies, and optimizing the operation of completed plants. In the feasibility study presented, SAPPOS is used to evaluate both diesel engine and gas turbine combined cycle options

  8. UVUDF: Ultraviolet Imaging of the Hubble Ultra Deep Field with Wide-Field Camera 3

    Science.gov (United States)

    Teplitz, Harry I.; Rafelski, Marc; Kurczynski, Peter; Bond, Nicholas A.; Grogin, Norman; Koekemoer, Anton M.; Atek, Hakim; Brown, Thomas M.; Coe, Dan; Colbert, James W.; Ferguson, Henry C.; Finkelstein, Steven L.; Gardner, Jonathan P.; Gawiser, Eric; Giavalisco, Mauro; Gronwall, Caryl; Hanish, Daniel J.; Lee, Kyoung-Soo; de Mello, Duilia F.; Ravindranath, Swara; Ryan, Russell E.; Siana, Brian D.; Scarlata, Claudia; Soto, Emmaris; Voyer, Elysse N.; Wolfe, Arthur M.

    2013-12-01

    We present an overview of a 90 orbit Hubble Space Telescope treasury program to obtain near-ultraviolet imaging of the Hubble Ultra Deep Field using the Wide Field Camera 3 UVIS detector with the F225W, F275W, and F336W filters. This survey is designed to: (1) investigate the episode of peak star formation activity in galaxies at 1 < z ... The number of dropouts at redshifts 1.7, 2.1, and 2.7 is largely consistent with the number predicted by published luminosity functions. We also confirm that the image mosaics have sufficient sensitivity and resolution to support the analysis of the evolution of star-forming clumps, reaching 28-29th magnitude depth at 5σ in a 0.2 arcsec radius aperture, depending on filter and observing epoch. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #12534.

  9. An Architectural Style for Optimizing System Qualities in Adaptive Embedded Systems using Multi-Objective Optimization

    NARCIS (Netherlands)

    de Roo, Arjan; Sözer, Hasan; Aksit, Mehmet

    Customers of today's complex embedded systems demand the optimization of multiple system qualities under varying operational conditions. To be able to influence the system qualities, the system must have parameters that can be adapted. Constraints may be defined on the value of these parameters.

  10. Optimal Control for a Class of Chaotic Systems

    Directory of Open Access Journals (Sweden)

    Jianxiong Zhang

    2012-01-01

    Full Text Available This paper proposes the optimal control methods for a class of chaotic systems via state feedback. By converting the chaotic systems to the form of uncertain piecewise linear systems, we can obtain the optimal controller minimizing the upper bound on cost function by virtue of the robust optimal control method of piecewise linear systems, which is cast as an optimization problem under constraints of bilinear matrix inequalities (BMIs. In addition, the lower bound on cost function can be achieved by solving a semidefinite programming (SDP. Finally, numerical examples are given to illustrate the results.

  11. Optimal Model-Based Control in HVAC Systems

    DEFF Research Database (Denmark)

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2008-01-01

    This paper presents optimal model-based control of a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger (a rotary wheel heat recovery) and a water-to-air heat exchanger. First, a dynamic model of the HVAC system is developed. Then the optimal control structure is designed and implemented. The HVAC system is split into two subsystems. By selecting the right set-points and appropriate cost functions for each subsystem controller, the optimal control strategy is respected to guarantee the minimum thermal and electrical energy consumption. Finally, the controller is applied to control the mentioned HVAC system and the results show that the expected goals are fulfilled.

  12. Computing the optimal path in stochastic dynamical systems

    International Nuclear Information System (INIS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-01-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.
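
    As a rough illustration of what "computing the optimal path" means in practice (not the authors' Lyapunov-exponent-based method), the sketch below minimizes a discretized Freidlin-Wentzell action for a one-dimensional double-well system; the drift, horizon and discretization are assumptions.

```python
# Illustrative sketch (not the authors' method): find an optimal switching path
# for dx = (x - x^3) dt + noise by minimizing a discretized Freidlin-Wentzell
# action over the interior points of the path (noise intensity absorbed into
# the prefactor).  Endpoints, horizon and grid are assumed example values.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x - x**3          # drift of the double-well system (wells at -1, +1)
N, T = 100, 20.0                # number of time steps and horizon (assumptions)
dt = T / N

def action(interior):
    x = np.concatenate(([-1.0], interior, [1.0]))        # path with fixed endpoints
    xdot = np.diff(x) / dt
    return 0.25 * np.sum((xdot - f(x[:-1])) ** 2) * dt   # discretized action

x0 = np.linspace(-1.0, 1.0, N + 1)[1:-1]                 # straight-line initial guess
res = minimize(action, x0, method="L-BFGS-B")
optimal_path = np.concatenate(([-1.0], res.x, [1.0]))
print("minimized action:", res.fun)
```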

  13. Optimization and Optimal Control in Automotive Systems

    NARCIS (Netherlands)

    Waschl, H.; Kolmanovsky, I.V.; Steinbuch, M.; Re, del L.

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of more and

  14. Hubble's View of Little Blue Dots

    Science.gov (United States)

    Kohler, Susanna

    2018-02-01

    The recent discovery of a new type of tiny, star-forming galaxy is the latest in a zoo of detections shedding light on our early universe. What can we learn from the unique little blue dots found in archival Hubble data? Peas, Berries, and Dots. [Figure: green pea galaxies identified by citizen scientists with Galaxy Zoo; credit Richard Nowell & Carolin Cardamone.] As telescope capabilities improve and we develop increasingly deeper large-scale surveys of our universe, we continue to learn more about small, faraway galaxies. In recent years, increasing sensitivity first enabled the detection of green peas: luminous, compact, low-mass (less than 10 billion solar masses; compare this to the Milky Way's 1 trillion solar masses!) galaxies with high rates of star formation. Not long thereafter, we discovered galaxies that form stars similarly rapidly but are even smaller, only 330 million solar masses, spanning less than 3,000 light-years in size. These tiny powerhouses were termed blueberries for their distinctive color. Now, scientists Debra and Bruce Elmegreen (of Vassar College and the IBM Research Division, respectively) report the discovery of galaxies that have even higher star formation rates and even lower masses: little blue dots. Exploring Tiny Star Factories. The Elmegreens discovered these unique galaxies by exploring archival Hubble data. The Hubble Frontier Fields data consist of deep images of six distant galaxy clusters and the parallel fields next to them. It was in the archival data for two Frontier Field Parallels, those for clusters Abell 2744 and MACS J0416.1-2403, that the authors noticed several galaxies that stand out as tiny, bright, blue objects that are nearly point sources. [Figure: top, a few examples of the little blue dots recently identified in two Hubble Frontier Field Parallels; bottom, stacked images for three different groups of little blue dots; Elmegreen & Elmegreen 2017.] The authors performed a search through the two Frontier Field Parallels, discovering a total of 55 little blue dots.

  15. Design of Thermal Systems Using Topology Optimization

    DEFF Research Database (Denmark)

    Haertel, Jan Hendrik Klaas

    The goal of this thesis is to apply topology optimization to the design of different thermal systems, such as heat sinks and heat exchangers, in order to improve the thermal performance of these systems compared to conventional designs. The design of thermal systems is a complex task that has ... of optimized designs are presented within this thesis. The main contribution of the thesis is the development of several numerical optimization models that are applied to different design challenges within thermal engineering. Topology optimization is applied in an industrial project to design the heat rejection ... printed dry-cooled power plant condensers using a simplified thermofluid topology optimization model is presented in another study. A benchmarking of the optimized geometries against a conventional heat exchanger design is conducted and the topology-optimized designs show a superior performance. A thermofluid ...

  16. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
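
    For readers unfamiliar with the setting, the sketch below simulates a plain (non-optimal) distributed consensus protocol for single-integrator agents on a small directed graph; the adjacency matrix and coupling gain are assumptions, and the inverse-optimal design of the paper is not reproduced here.

```python
# Minimal sketch of a distributed consensus protocol on a directed graph
# (illustrative only; it is not the inverse-optimal design from the paper).
# Each agent has single-integrator dynamics x_i' = u_i and applies
# u_i = -c * sum_j a_ij (x_i - x_j) using information from its in-neighbors.
import numpy as np

A = np.array([[0, 1, 0, 0],     # a_ij = 1 if agent i receives information from agent j
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
c = 1.0                                # coupling gain (assumed)
x = np.array([3.0, -1.0, 0.5, 2.0])    # initial agent states
dt = 0.01

for _ in range(3000):
    u = -c * (A.sum(axis=1) * x - A @ x)   # -c * L x, L = in-degree Laplacian
    x = x + dt * u

print("states after simulation:", x)       # close to a common consensus value
```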

  17. Thermodynamic optimization of geometry in engineering flow systems

    Energy Technology Data Exchange (ETDEWEB)

    Bejan, A.; Jones, J.A. [Duke Univ., Durham, NC (United States)

    2000-07-01

    This review draws attention to an emerging body of work that relies on global thermodynamic optimization in the pursuit of flow system architecture. Exergy analysis establishes the theoretical performance limit. Thermodynamic optimization (or entropy generation minimization) brings the design as closely as permissible to the theoretical limit. The design is destined to remain imperfect because of constraints (finite sizes, times, and costs). Improvements are registered by spreading the imperfection (e.g., flow resistances) through the system. Resistances compete against each other and must be optimized together. Optimal spreading means spatial distribution, geometric form, topology, and geography. System architecture springs out of constrained global optimization. The principle is illustrated by simple examples: the optimization of dimensions, spacings, and the distribution (allocation) of heat transfer surface to the two heat exchangers of a power plant. Similar opportunities for deducing flow architecture exist in more complex systems for power and refrigeration. Examples show that the complete structure of heat exchangers for environmental control systems of aircraft can be derived based on this principle. (authors)

  18. Combined Optimal Control System for excavator electric drive

    Science.gov (United States)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

    The article presents a synthesis of combined optimal control algorithms for the AC drive of the excavator rotation mechanism. The synthesis consists in regulating the external coordinates on the basis of the theory of optimal systems and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality-criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. Tuning the system by choosing the optimality-criterion coefficients allows one to select the required characteristics of the drive: the dynamic moment (M) and the transient time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and at the same time reduce the transient time (tpp).

  19. Optimization and Control of Cyber-Physical Vehicle Systems

    Directory of Open Access Journals (Sweden)

    Justin M. Bradley

    2015-09-01

    Full Text Available A cyber-physical system (CPS is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  20. Optimization and Control of Cyber-Physical Vehicle Systems.

    Science.gov (United States)

    Bradley, Justin M; Atkins, Ella M

    2015-09-11

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  1. Optimal Control Development System for Electrical Drives

    Directory of Open Access Journals (Sweden)

    Marian GAICEANU

    2008-08-01

    Full Text Available In this paper an optimal electrical drive development system is presented. It covers both electrical drive types: DC and AC. In order to implement the optimal control for the AC drive system, an Altivar 71 inverter, a Frato magnetic particle brake (as load), a three-phase induction machine, and a dSpace 1104 controller have been used. The on-line solution of the matrix Riccati differential equation (MRDE) is computed by the dSpace 1104 controller, based on the corresponding feedback signals, generating the optimal speed reference for the AC drive system. The optimal speed reference is tracked by the Altivar 71 inverter, leading to energy reduction in the AC drive. The classical control (consisting of rotor field oriented control with PI controllers) and the optimal one have been implemented by designing an adequate ControlDesk interface. The three-phase induction machine (IM) is controlled at constant flux. Therefore, the linear dynamic mathematical model of the IM has been obtained. The optimal control law provides transient regimes with minimal energy consumption. The solution obtained by integration of the MRDE is oriented towards numerical implementation by using a zero-order hold. The development system is very useful for researchers, doctoral students, or experts training in electrical drives. The experimental results are shown.
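
    A minimal off-line sketch of the central computation mentioned above, namely integration of the matrix Riccati differential equation backwards in time, is given below; the 2x2 plant, weights and horizon are assumed toy values, not the induction-machine model or the on-line dSpace implementation.

```python
# Rough sketch of solving the matrix Riccati differential equation (MRDE)
# off-line for a small LTI model (the paper solves it on-line on a dSpace
# controller; the 2x2 system below is an assumed toy model).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])
S = np.zeros((2, 2))      # terminal weight, P(T) = S
T = 5.0

def mrde(tau, p_flat):    # substitute tau = T - t: the backward MRDE becomes a forward ODE
    P = p_flat.reshape(2, 2)
    dP = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
    return dP.ravel()

sol = solve_ivp(mrde, (0.0, T), S.ravel(), max_step=0.01)
P0 = sol.y[:, -1].reshape(2, 2)           # P at t = 0 (i.e. tau = T)
K0 = np.linalg.inv(R) @ B.T @ P0          # time-varying gain evaluated at t = 0
print("P(0) =\n", P0, "\nK(0) =", K0)
```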

  2. Optimization of biotechnological systems through geometric programming

    Directory of Open Access Journals (Sweden)

    Torres Nestor V

    2007-09-01

    Full Text Available Abstract Background In the past, tasks of model based yield optimization in metabolic engineering were either approached with stoichiometric models or with structured nonlinear models such as S-systems or linear-logarithmic representations. These models stand out among most others, because they allow the optimization task to be converted into a linear program, for which efficient solution methods are widely available. For pathway models not in one of these formats, an Indirect Optimization Method (IOM was developed where the original model is sequentially represented as an S-system model, optimized in this format with linear programming methods, reinterpreted in the initial model form, and further optimized as necessary. Results A new method is proposed for this task. We show here that the model format of a Generalized Mass Action (GMA system may be optimized very efficiently with techniques of geometric programming. We briefly review the basics of GMA systems and of geometric programming, demonstrate how the latter may be applied to the former, and illustrate the combined method with a didactic problem and two examples based on models of real systems. The first is a relatively small yet representative model of the anaerobic fermentation pathway in S. cerevisiae, while the second describes the dynamics of the tryptophan operon in E. coli. Both models have previously been used for benchmarking purposes, thus facilitating comparisons with the proposed new method. In these comparisons, the geometric programming method was found to be equal or better than the earlier methods in terms of successful identification of optima and efficiency. Conclusion GMA systems are of importance, because they contain stoichiometric, mass action and S-systems as special cases, along with many other models. Furthermore, it was previously shown that algebraic equivalence transformations of variables are sufficient to convert virtually any types of dynamical models into
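
    The following didactic sketch shows a power-law (GMA-style) rate expression optimized with geometric programming via CVXPY's disciplined geometric programming mode; the rate law, exponents and resource constraint are invented for illustration and are not the fermentation or tryptophan models discussed in the abstract.

```python
# Didactic sketch of optimizing a GMA-style power-law rate with geometric
# programming via CVXPY's disciplined geometric programming (DGP) mode.
# The rate law, exponents and resource constraint are invented for illustration.
import cvxpy as cp

e1 = cp.Variable(pos=True)     # enzyme/metabolite levels (must stay positive)
e2 = cp.Variable(pos=True)

flux = 0.8 * e1**0.5 * e2**0.3        # power-law (monomial) rate expression
constraints = [
    e1 + e2 <= 10.0,                  # shared resource pool (posynomial <= constant)
    e1 >= 0.1, e2 >= 0.1,             # lower bounds
]

prob = cp.Problem(cp.Maximize(flux), constraints)
prob.solve(gp=True)                   # geometric programming mode
print("optimal levels:", e1.value, e2.value, "max flux:", prob.value)
```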

  3. A Guided Inquiry on Hubble Plots and the Big Bang

    Science.gov (United States)

    Forringer, Ted

    2014-01-01

    In our science for non-science majors course "21st Century Physics," we investigate modern "Hubble plots" (plots of velocity versus distance for deep space objects) in order to discuss the Big Bang, dark matter, and dark energy. There are two potential challenges that our students face when encountering these topics for the…
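
    A classroom-style numerical companion to the activity described above: fitting the Hubble constant as the slope of a toy velocity-distance relation. The data points below are made up for illustration, not actual galaxy or supernova measurements.

```python
# Toy Hubble plot: fit the Hubble constant as the slope of v = H0 * d.
# The data points are invented for classroom illustration.
import numpy as np

d = np.array([20, 50, 110, 180, 260, 340])              # distances in Mpc (invented)
v = np.array([1350, 3600, 7600, 12800, 18300, 23500])   # recession velocities in km/s

H0 = np.sum(d * v) / np.sum(d * d)     # least-squares slope through the origin
print(f"H0 ~ {H0:.1f} km/s/Mpc")
print(f"Hubble time ~ {978.0 / H0:.1f} Gyr")  # 1 km/s/Mpc corresponds to about 1/977.8 per Gyr
```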

  4. Hierarchical models and iterative optimization of hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Rasina, Irina V. [Ailamazyan Program Systems Institute, Russian Academy of Sciences, Peter One str. 4a, Pereslavl-Zalessky, 152021 (Russian Federation); Baturina, Olga V. [Trapeznikov Control Sciences Institute, Russian Academy of Sciences, Profsoyuznaya str. 65, 117997, Moscow (Russian Federation); Nasatueva, Soelma N. [Buryat State University, Smolina str.24a, Ulan-Ude, 670000 (Russian Federation)

    2016-06-08

    A class of hybrid control systems based on a two-level discrete-continuous model is considered. The concept of this model was proposed and developed in preceding works as a concretization of the general multi-step system with the related optimality conditions. A new iterative optimization procedure for such systems is developed, based on localization of the global optimality conditions via contraction of the control set.

  5. Optimization of Parameters of Asymptotically Stable Systems

    Directory of Open Access Journals (Sweden)

    Anna Guerman

    2011-01-01

    Full Text Available This work deals with numerical methods of parameter optimization for asymptotically stable systems. We formulate a special mathematical programming problem that allows us to determine the optimal parameters of a stabilizer. This problem involves solutions to a differential equation. We show how to choose the mesh in order to obtain a discrete problem guaranteeing the necessary accuracy. The developed methodology is illustrated by an example concerning optimization of parameters for a satellite stabilization system.

  6. The Hubble Space Telescope nickel-hydrogen battery design

    Science.gov (United States)

    Nawrocki, D. E.; Armantrout, J. D.; Standlee, D. J.; Baker, R. C.; Lanier, J. R.

    1990-01-01

    Details are presented of the HST (Hubble Space Telescope) battery cell, battery package, and module mechanical and electrical designs. Also included are a summary of acceptance, qualification, and vibration tests and thermal vacuum testing. Unique details of battery cell charge retention performance characteristics associated with prelaunch hold conditions are discussed. Special charge control methods to minimize thermal dissipation during pad charging operations are summarized. This module design meets all NASA fracture control requirements for manned missions.

  7. The Structural Optimization System CAOS

    DEFF Research Database (Denmark)

    Rasmussen, John

    1990-01-01

    CAOS is a system for structural shape optimization. It is closely integrated in a Computer Aided Design environment and controlled entirely from the CAD-system AutoCAD. The mathematical foundation of the system is briefly presented and a description of the CAD-integration strategy is given together...

  8. The effect of interacting dark energy on local measurements of the Hubble constant

    International Nuclear Information System (INIS)

    Odderskov, Io; Baldi, Marco; Amendola, Luca

    2016-01-01

    In the current state of cosmology, where cosmological parameters are being measured to percent accuracy, it is essential to understand all sources of error to high precision. In this paper we present the results of a study of the local variations in the Hubble constant measured at the distance scale of the Coma Cluster, and test the validity of correcting for the peculiar velocities predicted by gravitational instability theory. The study is based on N-body simulations, and includes models featuring a coupling between dark energy and dark matter, as well as two ΛCDM simulations with different values of σ8. It is found that the variance in the local flows is significantly larger in the coupled models, which increases the uncertainty in the local measurements of the Hubble constant in these scenarios. By comparing the results from the different simulations, it is found that most of the effect is caused by the higher value of σ8 in the coupled cosmologies, though this cannot account for all of the additional variance. Given the discrepancy between different estimates of the Hubble constant in the universe today, cosmological models causing a greater cosmic variance is something that we should be aware of.

  9. The effect of interacting dark energy on local measurements of the Hubble constant

    Energy Technology Data Exchange (ETDEWEB)

    Odderskov, Io [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, Aarhus C (Denmark); Baldi, Marco [Dipartimento di Fisica e Astronomia, Alma Mater Studiorum Università di Bologna, viale Berti Pichat 6/2, I-40127, Bologna (Italy); Amendola, Luca, E-mail: isho07@phys.au.dk, E-mail: marco.baldi5@unibo.it, E-mail: l.amendola@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Ruprecht-Karls-Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-05-01

    In the current state of cosmology, where cosmological parameters are being measured to percent accuracy, it is essential to understand all sources of error to high precision. In this paper we present the results of a study of the local variations in the Hubble constant measured at the distance scale of the Coma Cluster, and test the validity of correcting for the peculiar velocities predicted by gravitational instability theory. The study is based on N-body simulations, and includes models featuring a coupling between dark energy and dark matter, as well as two ΛCDM simulations with different values of σ8. It is found that the variance in the local flows is significantly larger in the coupled models, which increases the uncertainty in the local measurements of the Hubble constant in these scenarios. By comparing the results from the different simulations, it is found that most of the effect is caused by the higher value of σ8 in the coupled cosmologies, though this cannot account for all of the additional variance. Given the discrepancy between different estimates of the Hubble constant in the universe today, cosmological models causing a greater cosmic variance is something that we should be aware of.

  10. Astronomers celebrate a year of new Hubble results

    Science.gov (United States)

    1995-02-01

    "We are beginning to understand that because of these observations we are going to have to change the way we look at the Universe," said ESA's Dr Duccio Macchetto, Associate Director for Science Programs at the Space Telescope Science Institute (STScI), Baltimore, Maryland, USA. The European Space Agency plays a major role in the Hubble Space Telescope programme. The Agency provided one of the telescope's four major instruments, called the Faint Object Camera, and two sets of electricity-generating solar arrays. In addition, 15 ESA scientific and technical staff work at the STScI. In return for this contribution, European astronomers are entitled to 15 percent of the telescope's observing time, although currently they account for 20 percent of all observations. "This is a testimony to the quality of the European science community", said Dr Roger Bonnet, Director of Science at ESA. "We are only guaranteed 15 percent of the telescope's use, but consistently receive much more than that." Astronomers from universities, observatories and research institutes across Europe lead more than 60 investigations planned for the telescope's fifth observing cycle, which begins this summer. Many more Europeans contribute to teams led by other astronomers. Looking back to the very start of time European astronomer Dr Peter Jakobsen used ESA's Faint Object Camera to confirm that helium was present in the early Universe. Astronomers had long predicted that 90 percent of the newly born Universe consisted of hydrogen, with helium making up the remainder. Before the refurbished Hubble came along, it was easy to detect the hydrogen, but the primordial helium remained elusive. The ultraviolet capabilities of the telescope, combined with the improvement in spatial resolution following the repair, made it possible for Dr Jakobsen to obtain an image of a quasar close to the edge of the known Universe. A spectral analysis of this picture revealed the quasar's light, which took 13 billion years

  11. Optimal control of switched systems arising in fermentation processes

    CERN Document Server

    Liu, Chongyang

    2014-01-01

    The book presents, in a systematic manner, the optimal controls under different mathematical models in fermentation processes. Variant mathematical models – i.e., those for multistage systems; switched autonomous systems; time-dependent and state-dependent switched systems; multistage time-delay systems and switched time-delay systems – for fed-batch fermentation processes are proposed and the theories and algorithms of their optimal control problems are studied and discussed. By putting forward novel methods and innovative tools, the book provides a state-of-the-art and comprehensive systematic treatment of optimal control problems arising in fermentation processes. It not only develops nonlinear dynamical system, optimal control theory and optimization algorithms, but can also help to increase productivity and provide valuable reference material on commercial fermentation processes.

  12. Hubble Space Telescope, Faint Object Camera

    Science.gov (United States)

    1981-01-01

    This drawing illustrates Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.

  13. Optimal Vibration Control for Tracked Vehicle Suspension Systems

    Directory of Open Access Journals (Sweden)

    Yan-Jun Liang

    2013-01-01

    Full Text Available A technique for optimal vibration control with an exponential decay rate, together with simulations for vehicle active suspension systems, is developed. A mechanical model and the dynamic system for a class of tracked vehicle suspension vibration control are established and the corresponding state-space form of the system is described. In order to prolong the working life of the suspension system and improve ride comfort, based on the active suspension vibration control devices and using an optimal control approach, an optimal vibration controller with exponential decay rate is designed. Numerical simulations are carried out, and the control effects of the ordinary optimal controller and the proposed controller are compared. Numerical simulation results illustrate the effectiveness of the proposed technique.
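
    One standard way to obtain an optimal controller with a prescribed exponential decay rate is to solve the algebraic Riccati equation for the shifted system (A + alpha*I, B); the sketch below applies this to a single-mass suspension model whose parameters and weights are assumptions, not the tracked-vehicle model of the paper.

```python
# Hedged sketch of an "optimal control with exponential decay rate" design:
# the standard LQR trick of solving the ARE for (A + alpha*I, B) so that the
# closed-loop poles have real parts below -alpha.  The single-mass suspension
# model and weights are assumptions for illustration.
import numpy as np
from scipy.linalg import solve_continuous_are

m, k, c = 500.0, 2.0e4, 1.5e3            # sprung mass [kg], stiffness [N/m], damping [N s/m]
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([1e4, 1.0])                  # penalize displacement strongly
R = np.array([[1e-4]])                   # control effort weight
alpha = 2.0                              # prescribed decay rate [1/s]

P = solve_continuous_are(A + alpha * np.eye(2), B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
poles = np.linalg.eigvals(A - B @ K)
print("feedback gain:", K)
print("closed-loop poles:", poles)       # real parts should lie left of -alpha
```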

  14. Observational constraints on Hubble parameter in viscous generalized Chaplygin gas

    Science.gov (United States)

    Thakur, P.

    2018-04-01

    A cosmological model with viscous generalized Chaplygin gas (VGCG) is considered here to determine observational constraints on its equation-of-state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the Baryonic Acoustic Oscillation peak parameter, the CMB shift parameter and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0 = 70.24^{+0.34}_{-0.36} and zt = 0.76^{+0.07}_{-0.07}, respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike information criterion and the Bayesian information criterion have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
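
    As a schematic illustration of how such background-data constraints are obtained, the sketch below chi-square fits a Hubble-parameter history to a handful of H(z) points; for simplicity it uses a flat ΛCDM expansion law rather than the VGCG equation of state, and the data values are placeholders, not the OHD compilation used in the paper.

```python
# Illustrative sketch of constraining the present Hubble parameter from H(z)
# data by chi-square minimization.  A flat LambdaCDM expansion history is used
# instead of the paper's viscous generalized Chaplygin gas, and the data points
# and errors are invented placeholders, not the actual OHD set.
import numpy as np
from scipy.optimize import minimize

z   = np.array([0.1, 0.4, 0.7, 1.0, 1.5])
Hz  = np.array([72.0, 83.0, 97.0, 115.0, 148.0])   # km/s/Mpc (placeholder data)
sig = np.array([5.0, 6.0, 7.0, 9.0, 12.0])

def H_model(z, H0, Om):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def chi2(theta):
    H0, Om = theta
    return np.sum(((Hz - H_model(z, H0, Om)) / sig) ** 2)

res = minimize(chi2, x0=[70.0, 0.3], bounds=[(50, 90), (0.05, 0.6)])
H0_fit, Om_fit = res.x
zt = (2 * (1 - Om_fit) / Om_fit) ** (1 / 3) - 1     # LCDM transition redshift where q(z) = 0
print(f"best fit: H0 = {H0_fit:.1f} km/s/Mpc, Omega_m = {Om_fit:.2f}, z_t = {zt:.2f}")
```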

  15. Spike: Artificial intelligence scheduling for Hubble space telescope

    Science.gov (United States)

    Johnston, Mark; Miller, Glenn; Sponsler, Jeff; Vick, Shon; Jackson, Robert

    1990-01-01

    Efficient utilization of spacecraft resources is essential, but the accompanying scheduling problems are often computationally intractable and are difficult to approximate because of the presence of numerous interacting constraints. Artificial intelligence techniques were applied to the scheduling of the NASA/ESA Hubble Space Telescope (HST). This presents a particularly challenging problem since a yearlong observing program can contain some tens of thousands of exposures which are subject to a large number of scientific, operational, spacecraft, and environmental constraints. New techniques were developed for machine reasoning about scheduling constraints and goals, especially in cases where uncertainty is an important scheduling consideration and where resolving conflicts among competing preferences is essential. These techniques were utilized in a set of workstation-based scheduling tools (Spike) for HST. Graphical displays of activities, constraints, and schedules are an important feature of the system. High-level scheduling strategies using both rule-based and neural network approaches were developed. While the specific constraints implemented are those most relevant to HST, the framework developed is far more general and could easily handle other kinds of scheduling problems. The concept and implementation of the Spike system are described along with some experiments in adapting Spike to other spacecraft scheduling domains.

  16. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty two programs for linear and nonlinear optimization problems with or without constraints have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. The SCOOP-I is designed to be an efficient, reliable, useful and also flexible system for general applications. The system enables one to find global optimization point for a wide class of problems by selecting the most appropriate optimization method built in it. (author)

  17. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  18. A determination of H-0 with the class gravitational lens B1608+656. II. Mass models and the Hubble constant from lensing

    NARCIS (Netherlands)

    Koopmans, LVE; Fassnacht, CD

    1999-01-01

    We present mass models of the four-image gravitational lens system B1608 + 656, based on information obtained through VLBA imaging, VLA monitoring, and Hubble Space Telescope (HST) WFPC2 and NICMOS imaging. We have determined a mass model for the lens galaxies that reproduces (1) all image positions

  19. Constraining the evolution of the Hubble Parameter using cosmic chronometers

    Science.gov (United States)

    Dickinson, Hugh

    2017-08-01

    Substantial investment is being made in space- and ground-based missions with the goal of revealing the nature of the observed cosmic acceleration. This is one of the most important unsolved problems in cosmology today. We propose here to constrain the evolution of the Hubble parameter [H(z)] between 1.3 < z ... fundamental nature of dark energy.

  20. Optimizing the Gating System for Steel Castings

    Directory of Open Access Journals (Sweden)

    Jan Jezierski

    2018-04-01

    Full Text Available The article presents an attempt to optimize a gating system for the production of cast steel castings. It is based on John Campbell's theory and presents original results of computer modelling of typical and optimized gating systems for cast steel castings. The current state of the art in steel casting foundry practice was compared with several optimization proposals. The aim was to find a compromise between the best, theoretically proven gating system version and a version that would be affordable in industrial conditions. The results show that it is possible to achieve a uniform and slow pouring process, even for heavy castings, so as to preserve their internal quality.
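
    One elementary ingredient of gating-system design is sizing the choke from the pouring rate and the metallostatic head via a Bernoulli-type estimate; the sketch below shows this calculation with assumed values (mass, pouring time, head, discharge coefficient), whereas the optimization in the paper is carried out with casting-simulation software.

```python
# Back-of-the-envelope sketch of one step in gating-system sizing: choosing the
# choke cross-section from pouring mass, pouring time and effective sprue head
# using a Bernoulli-type velocity estimate.  All values are illustrative
# assumptions, not data from the paper.
import math

mass = 250.0          # poured steel mass [kg]
t_pour = 12.0         # pouring time [s]
rho = 7000.0          # liquid steel density [kg/m^3]
h = 0.25              # effective metallostatic head [m]
cd = 0.8              # discharge (friction/loss) coefficient, assumed

v = cd * math.sqrt(2 * 9.81 * h)          # mean metal velocity at the choke [m/s]
m_dot = mass / t_pour                     # required mass flow rate [kg/s]
A_choke = m_dot / (rho * v)               # choke area [m^2]
print(f"choke velocity ~ {v:.2f} m/s, choke area ~ {A_choke * 1e4:.1f} cm^2")
```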

  1. "HUBBLE, the astronomer, the telescope, the results"

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    The fundamental discoveries made by Edwin Hubble in the first quarter of the last century will be presented. The space telescope bearing his name will be introduced, as well as the strategy put in place by NASA and the European Space Agency for its operation and its maintenance on orbit. The personal experience of the speaker, who participated in two of the five servicing missions, will be described and illustrated with pictures taken on orbit. Finally, the main results obtained by the orbital observatory will be presented, in particular those related to the large-scale structure of the Universe and its early history

  2. Genetic optimization of steam multi-turbines system

    International Nuclear Information System (INIS)

    Olszewski, Pawel

    2014-01-01

    An optimization analysis of a partially loaded cogeneration, multiple-stage steam turbine system was numerically investigated using an own-developed code (C++). The system can be controlled by the following variables: fresh steam temperature, pressure, and flow rates through all stages of the steam turbines. Five strategies, four thermodynamic and one economic, which quantify system operation, were defined and discussed as optimization functions. The mathematical model of the steam turbines calculates steam properties according to the formulation proposed by the International Association for the Properties of Water and Steam. The genetic algorithm GENOCOP was implemented as the solving engine for the nonlinear problem with constraint handling. Using the formulated methodology, an example solution for a partially loaded system, composed of five steam turbines with different characteristics (30 input variables), was obtained for the five strategies. The genetic algorithm found multiple solutions (various input parameter sets) giving similar overall results. In a real application this allows for appropriate scheduling of machine operation that would even out the time loading of all system components. Based on these results, three strategies were chosen as the most comprehensive: maximization of first-law (energy) efficiency, maximization of exergy efficiency, and minimization of total equivalent energy. These strategies can be successfully used in the optimization of real cogeneration applications. - Highlights: • A genetic optimization model for a set of five different steam turbines was presented. • Four thermodynamic optimization strategies were proposed and discussed. • The influence of operational parameters (steam pressure, temperature, flow) was examined. • The genetic algorithm generated optimal solutions giving the best values of the estimators. • It has been found that a similar energy effect can be obtained for various inputs
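
    The sketch below mimics, in a very reduced form, the kind of genetic-algorithm load allocation described above: a small population-based search splits a fixed power demand among five turbines with assumed parabolic efficiency curves. It is not the GENOCOP code, and the turbine data, GA settings and single-objective criterion are illustrative assumptions.

```python
# Minimal genetic-algorithm sketch for sharing a power load among several
# turbines with assumed parabolic efficiency curves.  Not the GENOCOP code;
# turbine data, GA settings and the single-objective criterion are illustrative.
import numpy as np

rng = np.random.default_rng(0)
demand = 120.0                                     # MW to be delivered
p_max = np.array([40.0, 50.0, 60.0, 30.0, 45.0])   # turbine capacities [MW]
a = np.array([0.90, 0.88, 0.92, 0.85, 0.89])       # peak efficiencies (assumed)

def efficiency(p):
    x = p / p_max                                  # parabolic curve peaking at 80 % load
    return a * (1.0 - 1.5 * (x - 0.8) ** 2)

def fitness(p):
    fuel = np.sum(p / np.clip(efficiency(p), 1e-3, None))   # equivalent fuel power
    penalty = 1e3 * abs(np.sum(p) - demand)                  # meet the demand
    return -(fuel + penalty)

pop = rng.uniform(0.2, 1.0, size=(60, 5)) * p_max            # initial population
for gen in range(300):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]                  # keep the best half
    children = parents[rng.integers(0, 30, 30)] + rng.normal(0, 1.0, (30, 5))
    children = np.clip(children, 0.0, p_max)                 # respect capacity limits
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("load split [MW]:", np.round(best, 1), "total:", round(best.sum(), 1))
```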

  3. Hubble Observes Surface of Titan

    Science.gov (United States)

    1994-01-01

    Scientists for the first time have made images of the surface of Saturn's giant, haze-shrouded moon, Titan. They mapped light and dark features over the surface of the satellite during nearly a complete 16-day rotation. One prominent bright area they discovered is a surface feature 2,500 miles across, about the size of the continent of Australia.Titan, larger than Mercury and slightly smaller than Mars, is the only body in the solar system, other than Earth, that may have oceans and rainfall on its surface, albeit oceans and rain of ethane-methane rather than water. Scientists suspect that Titan's present environment -- although colder than minus 289 degrees Fahrenheit, so cold that water ice would be as hard as granite -- might be similar to that on Earth billions of years ago, before life began pumping oxygen into the atmosphere.Peter H. Smith of the University of Arizona Lunar and Planetary Laboratory and his team took the images with the Hubble Space Telescope during 14 observing runs between Oct. 4 - 18. Smith announced the team's first results last week at the 26th annual meeting of the American Astronomical Society Division for Planetary Sciences in Bethesda, Md. Co-investigators on the team are Mark Lemmon, a doctoral candidate with the UA Lunar and Planetary Laboratory; John Caldwell of York University, Canada; Larry Sromovsky of the University of Wisconsin; and Michael Allison of the Goddard Institute for Space Studies, New York City.Titan's atmosphere, about four times as dense as Earth's atmosphere, is primarily nitrogen laced with such poisonous substances as methane and ethane. This thick, orange, hydrocarbon haze was impenetrable to cameras aboard the Pioneer and Voyager spacecraft that flew by the Saturn system in the late 1970s and early 1980s. The haze is formed as methane in the atmosphere is destroyed by sunlight. The hydrocarbons produced by this methane destruction form a smog similar to that found over large cities, but is much thicker

  4. Topology optimization of nano-photonic systems

    DEFF Research Database (Denmark)

    Elesin, Yuriy; Wang, Fengwen; Andkjær, Jacob Anders

    2012-01-01

    We describe recent developments within nano-photonic systems design based on topology optimization. Applications include linear and non-linear optical waveguides, slow-light waveguides, as well as all-dielectric cloaks that minimize scattering or back-scattering from hard obstacles.

  5. Office lighting systems: Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Dagnino, U. (ENEL, Milan (Italy))

    1990-09-01

    Relative to office lighting systems, in particular those making use of tubular fluorescent lamps currently available on the international market, this paper tries to develop lighting system design optimization criteria. The comparative assessment of the various design possibilities considers operating cost, energy consumption, and occupational comfort/safety aspects such as lighting level uniformity and balance, reduction of glare and reflection, natural/artificial lighting balance, programmed switching, computerized control systems for multi-use requirements in large areas, and programmed maintenance for greater efficiency and reliability.

  6. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    Full Text Available Purpose. To demonstrate the feasibility of the proposed integrated optimization of various MTS parameters, which reduces capital investment as well as operational and maintenance expenses and thereby makes the use of MTS reasonable. At present, Maglev Transport Systems (MTS) are hardly used for High-Speed Ground Transportation (HSGT); significant capital investment and high operational and maintenance costs are the main reasons. Therefore, this article justifies the use of the Theory of Complex Optimization of Transport (TCOT), developed by one of the co-authors, to reduce MTS costs. Methodology. Following TCOT, the authors developed an abstract model of the generalized transport system (AMSTG). This model mathematically determines the optimal balance between all components of the system and thus provides the ultimate adaptation of any transport system to the conditions of its application. To identify areas for effective use of MTS, the authors also developed, within TCOT, a dynamic model of the distribution and expansion of spheres of effective use of transport systems (DMRRSEPTS). Based on this model, the most efficient transport system was selected for each individual track. The main criterion used when determining the efficiency of MTS is the specific transportation tariff obtained from the payback calculation of the total incurred expenses over a standard payback period or the term of a credit. Findings. The completed calculations for four types of MTS (TRANSRAPID, MLX01, TRANSMAG and TRANSPROGRESS) demonstrated the efficiency of the integrated optimization of the parameters of such systems. This research made it possible to expand the scope of effective use of MTS by about a factor of two. The achieved results were presented at many international conferences in Germany, Switzerland, the United States, China, Ukraine, etc. Using MTS as an

  7. Joint optimization of regional water-power systems

    Science.gov (United States)

    Pereira-Cardenal, Silvio J.; Mo, Birger; Gjelsvik, Anders; Riegels, Niels D.; Arnbjerg-Nielsen, Karsten; Bauer-Gottwein, Peter

    2016-06-01

    Energy and water resources systems are tightly coupled; energy is needed to deliver water and water is needed to extract or produce energy. Growing pressure on these resources has raised concerns about their long-term management and highlights the need to develop integrated solutions. A method for joint optimization of water and electric power systems was developed in order to identify methodologies to assess the broader interactions between water and energy systems. The proposed method is to include water users and power producers into an economic optimization problem that minimizes the cost of power production and maximizes the benefits of water allocation, subject to constraints from the power and hydrological systems. The method was tested on the Iberian Peninsula using simplified models of the seven major river basins and the power market. The optimization problem was successfully solved using stochastic dual dynamic programming. The results showed that current water allocation to hydropower producers in basins with high irrigation productivity, and to irrigation users in basins with high hydropower productivity was sub-optimal. Optimal allocation was achieved by managing reservoirs in very distinct ways, according to the local inflow, storage capacity, hydropower productivity, and irrigation demand and productivity. This highlights the importance of appropriately representing the water users' spatial distribution and marginal benefits and costs when allocating water resources optimally. The method can handle further spatial disaggregation and can be extended to include other aspects of the water-energy nexus.
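
    As a toy illustration of the allocation trade-off described above (not the authors' stochastic dual dynamic programming formulation), the sketch below splits a single reservoir's seasonal release between irrigation and hydropower with SciPy's linear-programming routine; the benefit coefficients, capacities, and available water are hypothetical.

        # Toy single-reservoir, single-period water allocation (hypothetical numbers).
        # Maximize irrigation plus hydropower benefits subject to available water and
        # canal/turbine capacities; linprog minimizes, so the benefits are negated.
        from scipy.optimize import linprog

        benefit = [-0.05, -0.02]           # benefit per hm3 for irrigation, hydropower (negated)
        A_ub = [[1.0, 1.0]]                # total release cannot exceed the available water
        b_ub = [120.0]                     # available water this period (hm3)
        bounds = [(0.0, 80.0),             # irrigation canal capacity (hm3)
                  (0.0, 100.0)]            # turbine capacity (hm3)

        res = linprog(benefit, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        irrigation, hydropower = res.x
        print(f"irrigation {irrigation:.1f} hm3, hydropower {hydropower:.1f} hm3, "
              f"total benefit {-res.fun:.2f}")

    The real problem adds reservoir dynamics, stochastic inflows, and many coupled basins, which is what motivates the stochastic dual dynamic programming used by the authors.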

  8. Optimal Real-time Dispatch for Integrated Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Firestone, Ryan Michael [Univ. of California, Berkeley, CA (United States)

    2007-05-31

    This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) comprised of on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone; and

  9. HUBBLE SPACE TELESCOPE DETECTION OF THE DOUBLE PULSAR SYSTEM J0737–3039 IN THE FAR-ULTRAVIOLET

    International Nuclear Information System (INIS)

    Durant, Martin; Kargaltsev, Oleg; Pavlov, George G.

    2014-01-01

    We report on detection of the double pulsar system J0737–3039 in the far-UV with the Advanced Camera for Surveys/Solar-blind Channel detector aboard Hubble Space Telescope. We measured the energy flux F = (4.6 ± 1.0) × 10^–17 erg cm^–2 s^–1 in the 1250-1550 Å band, which corresponds to the extinction-corrected luminosity L ≈ 1.5 × 10^28 erg s^–1 for the distance d = 1.1 kpc and a plausible reddening E(B – V) = 0.1. If the detected emission comes from the entire surface of one of the neutron stars with a 13 km radius, the surface blackbody temperature is in the range T ≅ (2-5) × 10^5 K for a reasonable range of interstellar extinction. Such a temperature requires an internal heating mechanism to operate in old neutron stars, or, less likely, it might be explained by heating of the surface of the less energetic Pulsar B by the relativistic wind of Pulsar A. If the far-ultraviolet emission is non-thermal (e.g., produced in the magnetosphere of Pulsar A), its spectrum exhibits a break between the UV and X-rays
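
    The quoted luminosity follows from the measured flux through L = 4πd²F plus an extinction correction; the short check below reproduces the order of magnitude, where the adopted far-UV extinction ratio A_FUV ≈ 8.8 E(B–V) is an illustrative assumption rather than a value taken from the paper.

        # Back-of-envelope check of L ~ 1.5e28 erg/s from the quoted far-UV flux.
        import math

        F = 4.6e-17                 # observed 1250-1550 A flux, erg cm^-2 s^-1
        d = 1.1 * 3.086e21          # 1.1 kpc in cm
        A_fuv = 8.8 * 0.1           # assumed far-UV extinction for E(B-V) = 0.1, in mag

        L_obs = 4.0 * math.pi * d**2 * F         # uncorrected band luminosity (~6.7e27)
        L_corr = L_obs * 10.0**(0.4 * A_fuv)     # extinction-corrected (~1.5e28)
        print(f"L_obs ~ {L_obs:.2e} erg/s, L_corr ~ {L_corr:.2e} erg/s")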

  10. Dwarf Galaxies with Gentle Star Formation and the Counts of Galaxies in the Hubble Deep Field

    OpenAIRE

    Campos, Ana

    1997-01-01

    In this paper the counts and colors of the faint galaxies observed in the Hubble Deep Field are fitted by means of simple luminosity evolution models that incorporate a numerous population of fading dwarfs. The observed color distribution of the very faint galaxies now allows us to put constraints on the star formation history in dwarfs. It is shown that the star-forming activity in these small systems has to proceed in a gentle way, i.e., through episodes where each one lasts much longer tha...

  11. Hubble Space Telescope: The Telescope, the Observations & the Servicing Mission

    Science.gov (United States)

    1999-11-01

    Today the HST Archives contain more than 260 000 astronomical observations. More than 13 000 astronomical objects have been observed by hundreds of different groups of scientists. Direct proof of the scientific significance of this project is the record-breaking number of papers published: over 2400 to date. Some of HST's most memorable achievements are: * the discovery of myriads of very faint galaxies in the early Universe, * unprecedented, accurate measurements of distances to the farthest galaxies, * significant improvement in the determination of the Hubble constant and thus the age of the Universe, * confirmation of the existence of black holes, * a far better understanding of the birth, life and death of stars, * a very detailed look at the secrets of the process by which planets are created. Europe and HST: ESA's contribution to HST represents a nominal investment of 15%. ESA provided one of the two imaging instruments - the Faint Object Camera (FOC) - and the solar panels. It also has 15 scientists and computer staff working at the Space Telescope Science Institute in Baltimore (Maryland). In Europe the astronomical community receives observational assistance from the Space Telescope European Coordinating Facility (ST-ECF) located in Garching, Munich. In return for ESA's investment, European astronomers have access to approximately 15% of the observing time. In reality the actual observing time competitively allocated to European astronomers is closer to 20%. Looking back at almost ten years of operation, the head of ST-ECF, European HST Project Scientist Piero Benvenuti states: "Hubble has been of paramount importance to European astronomy, much more than the mere 20% of observing time. It has given the opportunity for European scientists to use a top class instrument that Europe alone would not be able to build and operate. In specific areas of research they have now, mainly due to HST, achieved international leadership." One of the major reasons for

  12. Optimized Evaluation System to Athletic Food Safety

    OpenAIRE

    Shanshan Li

    2015-01-01

    This study presented a new method of optimizing the evaluation function in athletic food safety information programming by particle swarm optimization. The parameters of the evaluation function are adjusted automatically by a self-optimizing method based on competition, in which the food information system plays against itself with different evaluation functions. The results show that the particle swarm optimization is successfully app...

  13. Hubble Images Reveal Jupiter's Auroras

    Science.gov (United States)

    1996-01-01

    These images, taken by the Hubble Space Telescope, reveal changes in Jupiter's auroral emissions and how small auroral spots just outside the emission rings are linked to the planet's volcanic moon, Io. The images represent the most sensitive and sharply-detailed views ever taken of Jovian auroras. The top panel pinpoints the effects of emissions from Io, which is about the size of Earth's moon. The black-and-white image on the left, taken in visible light, shows how Io and Jupiter are linked by an invisible electrical current of charged particles called a 'flux tube.' The particles - ejected from Io (the bright spot on Jupiter's right) by volcanic eruptions - flow along Jupiter's magnetic field lines, which thread through Io, to the planet's north and south magnetic poles. This image also shows the belts of clouds surrounding Jupiter as well as the Great Red Spot. The black-and-white image on the right, taken in ultraviolet light about 15 minutes later, shows Jupiter's auroral emissions at the north and south poles. Just outside these emissions are the auroral spots. Called 'footprints,' the spots are created when the particles in Io's 'flux tube' reach Jupiter's upper atmosphere and interact with hydrogen gas, making it fluoresce. In this image, Io is not observable because it is faint in the ultraviolet. The two ultraviolet images at the bottom of the picture show how the auroral emissions change in brightness and structure as Jupiter rotates. These false-color images also reveal how the magnetic field is offset from Jupiter's spin axis by 10 to 15 degrees. In the right image, the north auroral emission is rising over the left limb; the south auroral oval is beginning to set. The image on the left, obtained on a different date, shows a full view of the north aurora, with a strong emission inside the main auroral oval. The images were taken by the telescope's Wide Field and Planetary Camera 2 between May 1994 and September 1995. This image and other images and data

  14. The Hubble series: convergence properties and redshift variables

    International Nuclear Information System (INIS)

    Cattoen, Celine; Visser, Matt

    2007-01-01

    In cosmography, cosmokinetics and cosmology, it is quite common to encounter physical quantities expanded as a Taylor series in the cosmological redshift z. Perhaps the most well-known exemplar of this phenomenon is the Hubble relation between distance and redshift. However, we now have considerable high-z data available; for instance, we have supernova data at least back to redshift z ∼ 1.75. This opens up the theoretical question as to whether or not the Hubble series (or more generally any series expansion based on the z-redshift) actually converges for large redshift. Based on a combination of mathematical and physical reasonings, we argue that the radius of convergence of any series expansion in z is less than or equal to 1, and that z-based expansions must break down for z > 1, corresponding to a universe less than half of its current size. Furthermore, we shall argue on theoretical grounds for the utility of an improved parametrization y = z/(1 + z). In terms of the y-redshift, we again argue that the radius of convergence of any series expansion in y is less than or equal to 1, so that y-based expansions are likely to be good all the way back to the big bang (y = 1), but that y-based expansions must break down for y < -1, now corresponding to a universe more than twice its current size
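
    The improved expansion variable advocated here is a simple rational map of the redshift; as a sketch (standard definitions, not an excerpt from the paper):

        y = \frac{z}{1+z}, \qquad z = \frac{y}{1-y}, \qquad 1 + z = \frac{a_0}{a}.

    With this choice the big bang (z → ∞) maps to y = 1, the present epoch to y = 0, and y = -1 corresponds to z = -1/2, i.e. a = 2a_0, which matches the convergence limits quoted in the abstract.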

  15. Optimal Control and Forecasting of Complex Dynamical Systems

    CERN Document Server

    Grigorenko, Ilya

    2006-01-01

    This important book reviews applications of optimization and optimal control theory to modern problems in physics, nano-science and finance. The theory presented here can be efficiently applied to various problems, such as the determination of the optimal shape of a laser pulse to induce certain excitations in quantum systems, the optimal design of nanostructured materials and devices, or the control of chaotic systems and minimization of the forecast error for a given forecasting model (for example, artificial neural networks). Starting from a brief review of the history of variational calcul

  16. Optimal Tax Depreciation under a Progressive Tax System

    NARCIS (Netherlands)

    Wielhouwer, J.L.; De Waegenaere, A.M.B.; Kort, P.M.

    2000-01-01

    The focus of this paper is on the effect of a progressive tax system on optimal tax depreciation. By using dynamic optimization we show that an optimal strategy exists, and we provide an analytical expression for the optimal depreciation charges. Depreciation charges initially decrease over time,

  17. Optimal Design and Operation of Permanent Irrigation Systems

    Science.gov (United States)

    Oron, Gideon; Walker, Wynn R.

    1981-01-01

    Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are selected at the minimum cost point. The optimization procedure involves a nonlinear, mixed integer approach capable of achieving a variety of optimal solutions leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, a smaller flow rate in the pipe system, and outlet (sprinkler or emitter) discharge.

  18. The variance of the locally measured Hubble parameter explained with different estimators

    DEFF Research Database (Denmark)

    Odderskov, Io Sandberg Hess; Hannestad, Steen; Brandbyge, Jacob

    2017-01-01

    We study the expected variance of measurements of the Hubble constant, H0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H0 from CMB measurements and the value measured in the local universe, these considerations are important in light

  19. The Hubble Tarantula Treasury Project

    Science.gov (United States)

    Sabbi, Elena; Lennon, D. J.; Anderson, J.; Van Der Marel, R. P.; Aloisi, A.; Boyer, M. L.; Cignoni, M.; De Marchi, G.; de Mink, S. E.; Evans, C. J.; Gallagher, J. S.; Gordon, K. D.; Gouliermis, D.; Grebel, E.; Koekemoer, A. M.; Larsen, S. S.; Panagia, N.; Ryon, J. E.; Smith, L. J.; Tosi, M.; Zaritsky, D. F.

    2014-01-01

    The Tarantula Nebula (a.k.a. 30 Doradus) in the Large Magellanic Cloud is one of the most famous objects in astronomy, with first astronomical references being more than 150 years old. Today the Tarantula Nebula and its ionizing cluster R136 are considered one of the few known starburst regions in the Local Group and an ideal test bed to investigate the temporal and spatial evolution of a prototypical starburst on a sub-cluster scale. The Hubble Tarantula Treasury Project (HTTP) is a panchromatic imaging survey of the stellar populations and ionized gas in the Tarantula Nebula that reaches into the sub-solar mass regime (eBook that explains how stars form and evolve using images from HTTP. The eBook utilizes emerging technology that works in conjunction with the built-in accessibility features in the Apple iPad to allow totally blind users to interactively explore complex astronomical images.

  20. Analysis and Optimization of Distributed Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    and scheduling policies. In this context, the task of designing such systems is becoming increasingly difficult. The success of new adequate design methods depends on the availability of efficient analysis as well as optimization techniques. In this paper, we present both analysis and optimization approaches … characteristic to this class of systems: mapping of functionality, the optimization of the access to the communication channel, and the assignment of scheduling policies to processes. Optimization heuristics aiming at producing a schedulable system, with a given amount of resources, are presented.

  1. Control Methods Utilizing Energy Optimizing Schemes in Refrigeration Systems

    DEFF Research Database (Denmark)

    Larsen, L.S; Thybo, C.; Stoustrup, Jakob

    2003-01-01

    The potential energy savings in refrigeration systems using energy optimal control have been shown to be substantial. This however requires an intelligent control that drives the refrigeration system towards the energy optimal state. This paper proposes an approach for a control which drives the condenser pressure towards an optimal state. The objective of this is to present a feasible method that can be used for energy optimizing control. A simulation model of a simple refrigeration system will be used as the basis for testing the control method.

  2. Adaptive Multi-Agent Systems for Constrained Optimization

    Science.gov (United States)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of) the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing, that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queen's problem and K-SAT validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.

  3. Astronaut Anna Fisher in NBS Training For Hubble Space Telescope

    Science.gov (United States)

    1980-01-01

    The Hubble Space Telescope (HST) is a cooperative program of the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) to operate a long-lived space-based observatory. It was the flagship mission of NASA's Great Observatories program. The HST program began as an astronomical dream in the 1940s. During the 1970s and 1980s, the HST was finally designed and built, becoming operational in the 1990s. The HST was deployed into a low-Earth orbit on April 25, 1990 from the cargo bay of the Space Shuttle Discovery (STS-31). The design of the HST took into consideration its length of service and the necessity of repairs and equipment replacement by making the body modular. In doing so, subsequent shuttle missions could recover the HST, replace faulty or obsolete parts, and re-release it. Marshall Space Flight Center's (MSFC's) Neutral Buoyancy Simulator (NBS) served as the test center for shuttle astronauts training for Hubble related missions. Shown is astronaut Anna Fisher training on a mock-up of a modular section of the HST for an axial scientific instrument change out.

  4. Optimal control applications in electric power systems

    CERN Document Server

    Christensen, G S; Soliman, S A

    1987-01-01

    Significant advances in the field of optimal control have been made over the past few decades. These advances have been well documented in numerous fine publications, and have motivated a number of innovations in electric power system engineering, but they have not yet been collected in book form. Our purpose in writing this book is to provide a description of some of the applications of optimal control techniques to practical power system problems. The book is designed for advanced undergraduate courses in electric power systems, as well as graduate courses in electrical engineering, applied mathematics, and industrial engineering. It is also intended as a self-study aid for practicing personnel involved in the planning and operation of electric power systems for utilities, manufacturers, and consulting and government regulatory agencies. The book consists of seven chapters. It begins with an introductory chapter that briefly reviews the history of optimal control and its power system applications and also p...

  5. Optimization of large-scale industrial systems : an emerging method

    Energy Technology Data Exchange (ETDEWEB)

    Hammache, A.; Aube, F.; Benali, M.; Cantave, R. [Natural Resources Canada, Varennes, PQ (Canada). CANMET Energy Technology Centre

    2006-07-01

    This paper reviewed optimization methods of large-scale industrial production systems and presented a novel systematic multi-objective and multi-scale optimization methodology. The methodology was based on a combined local optimality search with global optimality determination, and advanced system decomposition and constraint handling. The proposed method focused on the simultaneous optimization of the energy, economy and ecology aspects of industrial systems (E³-ISO). The aim of the methodology was to provide guidelines for decision-making strategies. The approach was based on evolutionary algorithms (EA) with specifications including hybridization of global optimality determination with a local optimality search; a self-adaptive algorithm to account for the dynamic changes of operating parameters and design variables occurring during the optimization process; interactive optimization; advanced constraint handling and decomposition strategy; and object-oriented programming and parallelization techniques. Flowcharts of the working principles of the basic EA were presented. It was concluded that the EA uses a novel decomposition and constraint handling technique to enhance the Pareto solution search procedure for multi-objective problems. 6 refs., 9 figs.

  6. Hartmann wavefront sensing of the corrective optics for the Hubble Space Telescope

    Science.gov (United States)

    Davila, Pam S.; Eichhorn, William L.; Wilson, Mark E.

    1994-06-01

    There is no doubt that astronomy with the `new, improved' Hubble Space Telescope will significantly advance our knowledge and understanding of the universe for years to come. The Corrective Optics Space Telescope Axial Replacement (COSTAR) was designed to restore the image quality to nearly diffraction limited performance for three of the first generation instruments; the faint object camera, the faint object spectrograph, and the Goddard high resolution spectrograph. Spectacular images have been obtained from the faint object camera after the installation of the corrective optics during the first servicing mission in December of 1993. About 85% of the light in the central core of the corrected image is contained within a circle with a diameter of 0.2 arcsec. This is a vast improvement over the previous 15 to 17% encircled energies obtained before COSTAR. Clearly COSTAR is a success. One reason for the overwhelming success of COSTAR was the ambitious and comprehensive test program conducted by various groups throughout the program. For optical testing of COSTAR on the ground, engineers at Ball Aerospace designed and built the refractive Hubble simulator to produce known amounts of spherical aberration and astigmatism at specific points in the field of view. The design goal for this refractive aberrated simulator (RAS) was to match the aberrations of the Hubble Space Telescope to within λ/20 rms over the field at a wavelength of 632.8 nm. When the COSTAR optics were combined with the RAS optics, the corrected COSTAR output images were produced. These COSTAR images were recorded with a high resolution 1024 by 1024 array CCD camera, the Ball image analyzer (BIA). The image quality criterion used for assessment of COSTAR performance was encircled energy in the COSTAR focal plane. This test with the BIA was very important because it was a direct measurement of the point spread function. But it was difficult with this test to say anything quantitative about the

  7. Optimization theory for large systems

    CERN Document Server

    Lasdon, Leon S

    2002-01-01

    Important text examines most significant algorithms for optimizing large systems and clarifying relations between optimization procedures. Much data appear as charts and graphs and will be highly valuable to readers in selecting a method and estimating computer time and cost in problem-solving. Initial chapter on linear and nonlinear programming presents all necessary background for subjects covered in rest of book. Second chapter illustrates how large-scale mathematical programs arise from real-world problems. Appendixes. List of Symbols.

  8. Precise Estimates of the Physical Parameters for the Exoplanet System HD 17156 Enabled by Hubble Space Telescope Fine Guidance Sensor Transit and Asteroseismic Observations

    DEFF Research Database (Denmark)

    Nutzman, Philip; Gilliland, Ronald L.; McCullough, Peter R.

    2011-01-01

    We present observations of three distinct transits of HD 17156b obtained with the Fine Guidance Sensors on board the Hubble Space Telescope. We analyzed both the transit photometry and previously published radial velocities to find the planet-star radius ratio Rp/R* = 0.07454 ± 0.00035, … -composition gas giant of the same mass and equilibrium temperature. For the three transits, we determine the times of mid-transit to a precision of 6.2 s, 7.6 s, and 6.9 s, and the transit times for HD 17156 do not show any significant departures from a constant period. The joint analysis of transit photometry

  9. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  10. Adaptive stimulus optimization for sensory systems neuroscience.

    Science.gov (United States)

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.

  11. N-Springs pump and treat system optimization study

    International Nuclear Information System (INIS)

    1997-03-01

    This letter report describes and presents the results of a system optimization study conducted to evaluate the N-Springs pump and treat system. The N-Springs pump and treat is designed to remove strontium-90 (90Sr) found in the groundwater in the 100-NR-2 Operable Unit near the Columbia River. The goal of the system optimization study was to assess and quantify what conditions and operating parameters could be employed to enhance the operating and cost effectiveness of the recently upgraded pump and treat system. This report provides the results of the system optimization study, reports the cost effectiveness of operating the pump and treat at various operating modes and 90Sr removal goals, and provides recommendations for operating the pump and treat

  12. Optimally Controlled Flexible Fuel Powertrain System

    Energy Technology Data Exchange (ETDEWEB)

    Hakan Yilmaz; Mark Christie; Anna Stefanopoulou

    2010-12-31

    The primary objective of this project was to develop a true Flex Fuel Vehicle capable of running on any blend of ethanol from 0 to 85% with reduced penalty in usable vehicle range. A research and development program, targeting 10% improvement in fuel economy using a direct injection (DI) turbocharged spark ignition engine was conducted. In this project a gasoline-optimized high-technology engine was considered and the hardware and configuration modifications were defined for the engine, fueling system, and air path. Combined with a novel engine control strategy, control software, and calibration this resulted in a highly efficient and clean FFV concept. It was also intended to develop robust detection schemes of the ethanol content in the fuel integrated with adaptive control algorithms for optimized turbocharged direct injection engine combustion. The approach relies heavily on software-based adaptation and optimization striving for minimal modifications to the gasoline-optimized engine hardware system. Our ultimate objective was to develop a compact control methodology that takes advantage of any ethanol-based fuel mixture and not compromise the engine performance under gasoline operation.

  13. Optimal sizing of energy storage system for microgrids

    Indian Academy of Sciences (India)

    strategies and optimal allocation methods of the ESS devices are required for the MG. ... for the optimal design of systems managed optimally according to different .... Energy storage hourly operating and maintenance cost is defined as a ...

  14. Predictive Analytics for Coordinated Optimization in Distribution Systems

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-04-13

    This talk will present NREL's work on developing predictive analytics that enables the optimal coordination of all the available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented. One focuses on developing short-term state forecasting-based optimal voltage regulation in distribution systems; and the other one focuses on actively engaging electricity consumers to benefit distribution system operations.

  15. Optimal economic and environment operation of micro-grid power systems

    International Nuclear Information System (INIS)

    Elsied, Moataz; Oukaour, Amrane; Gualous, Hamid; Lo Brutto, Ottavio A.

    2016-01-01

    Highlights: • Real-time energy management system for Micro-Grid power systems is introduced. • The management system considered cost objective function and emission constraints. • The optimization problem is solved using Binary Particle Swarm Algorithm. • Advanced real-time interface libraries are used to run the optimization code. - Abstract: In this paper, an advanced real-time energy management system is proposed in order to optimize micro-grid performance in a real-time operation. The proposed strategy of the management system capitalizes on the power of binary particle swarm optimization algorithm to minimize the energy cost and carbon dioxide and pollutant emissions while maximizing the power of the available renewable energy resources. Advanced real-time interface libraries are used to run the optimization code. The simulation results are considered for three different scenarios considering the complexity of the proposed problem. The proposed management system along with its control system is experimentally tested to validate the simulation results obtained from the optimization algorithm. The experimental results highlight the effectiveness of the proposed management system for micro-grids operation.

  16. Optimizing the design of international safeguards inspection systems

    International Nuclear Information System (INIS)

    Markin, J.T.; Coulter, C.A.; Gutmacher, R.G.; Whitty, W.J.

    1983-01-01

    Efficient implementation of international inspections for verifying the operation of a nuclear facility requires that available resources be allocated among inspection activities to maximize detection of misoperation. This report describes a design and evaluation method for selecting an inspection system that is optimal for accomplishing inspection objectives. The discussion includes methods for identifying system objectives, defining performance measures, and choosing between candidate systems. Optimization theory is applied in selecting the most preferred inspection design for a single nuclear facility, and an extension to optimal allocation of inspection resources among States containing multiple facilities is outlined. 3 figures, 5 tables

  17. Optimalization of selected RFID systems Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximizing the read range of an RFID transponder. This is done by optimizing the magnetic field intensity at the transponder location and the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID with an inductive loop, i.e. systems working in the near electromagnetic field.

  18. Modeling of biological intelligence for SCM system optimization.

    Science.gov (United States)

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.
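
    As a deliberately minimal illustration of the genetic-algorithm family mentioned in this and the following records, the sketch below evolves a bit-string assignment of orders to one of two suppliers against a toy cost model; the unit costs and imbalance penalty are invented for illustration and are not part of the cited work.

        # Minimal genetic algorithm: assign each of 20 orders to supplier 0 or 1,
        # minimizing a toy cost (hypothetical unit costs plus an imbalance penalty).
        import random

        random.seed(1)
        N_ORDERS, POP, GENS = 20, 40, 60
        unit_cost = [(random.uniform(1, 3), random.uniform(1, 3)) for _ in range(N_ORDERS)]

        def cost(chrom):
            base = sum(unit_cost[i][g] for i, g in enumerate(chrom))
            imbalance = abs(sum(chrom) - N_ORDERS / 2)     # crude capacity-balance penalty
            return base + 0.5 * imbalance

        def mate(a, b):
            cut = random.randrange(1, N_ORDERS)            # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_ORDERS)                 # single-bit mutation
            child[i] ^= 1
            return child

        pop = [[random.randint(0, 1) for _ in range(N_ORDERS)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=cost)
            parents = pop[:POP // 2]                       # truncation selection
            pop = parents + [mate(random.choice(parents), random.choice(parents))
                             for _ in range(POP - len(parents))]

        print("best cost:", round(cost(min(pop, key=cost)), 2))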

  19. Modeling of Biological Intelligence for SCM System Optimization

    Directory of Open Access Journals (Sweden)

    Shengyong Chen

    2012-01-01

    Full Text Available This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms.

  20. Overall Optimization for Offshore Wind Farm Electrical System

    DEFF Research Database (Denmark)

    Hou, Peng; Hu, Weihao; Chen, Cong

    2017-01-01

    Based on particle swarm optimization (PSO), an optimization platform for offshore wind farm electrical system (OWFES) is proposed in this paper, where the main components of an offshore wind farm and key technical constraints are considered as input parameters. The offshore wind farm electrical...... system is optimized in accordance with initial investment by considering three aspects: the number and siting of offshore substations (OS), the cable connection layout of both collection system (CS) and transmission system (TS) as well as the selection of electrical components in terms of voltage level...... that save 3.01% total cost compared with the industrial layout, and can be a useful tool for OWFES design and evaluation....
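
    A bare-bones version of the particle swarm update underlying such a platform is sketched below on a toy two-variable cost surface; the inertia and acceleration coefficients are common textbook defaults, and the quadratic cost function merely stands in for the wind-farm electrical-system cost model, none of which is taken from the paper.

        # Minimal particle swarm optimization on a toy 2-D cost function.
        import random

        random.seed(0)
        DIM, SWARM, ITERS = 2, 30, 100
        W, C1, C2 = 0.7, 1.5, 1.5      # inertia and acceleration coefficients

        def cost(x):                   # stand-in for an electrical-system cost model
            return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

        pos = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(SWARM)]
        vel = [[0.0] * DIM for _ in range(SWARM)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=cost)

        for _ in range(ITERS):
            for i in range(SWARM):
                for d in range(DIM):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (W * vel[i][d]
                                 + C1 * r1 * (pbest[i][d] - pos[i][d])
                                 + C2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if cost(pos[i]) < cost(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=cost)

        print("best point:", [round(v, 3) for v in gbest], "cost:", round(cost(gbest), 5))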

  1. Modeling of Biological Intelligence for SCM System Optimization

    Science.gov (United States)

    Chen, Shengyong; Zheng, Yujun; Cattani, Carlo; Wang, Wanliang

    2012-01-01

    This article summarizes some methods from biological intelligence for modeling and optimization of supply chain management (SCM) systems, including genetic algorithms, evolutionary programming, differential evolution, swarm intelligence, artificial immune, and other biological intelligence related methods. An SCM system is adaptive, dynamic, open self-organizing, which is maintained by flows of information, materials, goods, funds, and energy. Traditional methods for modeling and optimizing complex SCM systems require huge amounts of computing resources, and biological intelligence-based solutions can often provide valuable alternatives for efficiently solving problems. The paper summarizes the recent related methods for the design and optimization of SCM systems, which covers the most widely used genetic algorithms and other evolutionary algorithms. PMID:22162724

  2. Optimal planning of electric vehicle charging station at the distribution system using hybrid optimization algorithm

    DEFF Research Database (Denmark)

    Awasthi, Abhishek; Venkitusamy, Karthikeyan; Padmanaban, Sanjeevikumar

    2017-01-01

    India's ever increasing population has made it necessary to develop alternative modes of transportation, with electric vehicles being the most preferred option. The major obstacle is the deteriorating impact on the utility distribution system brought about by improper setup of these charging stations. This paper deals with the optimal planning (siting and sizing) of charging station infrastructure in the city of Allahabad, India. This city is one of the upcoming smart cities, where an electric vehicle transportation pilot project is under way as part of a Government of India initiative. In this context, a hybrid algorithm based on a genetic algorithm and an improved version of conventional particle swarm optimization is utilized for finding the optimal placement of charging stations in the Allahabad distribution system. The particle swarm optimization algorithm re-optimizes the received sub-optimal solution (site

  3. Heuristic Optimization Techniques for Determining Optimal Reserve Structure of Power Generating Systems

    DEFF Research Database (Denmark)

    Ding, Yi; Goel, Lalit; Wang, Peng

    2012-01-01

    Electric power generating systems are typical examples of multi-state systems (MSS). Sufficient reserve is critically important for maintaining generating system reliabilities. The reliability of a system can be increased by increasing the reserve capacity, noting that at the same time the reserve cost of the system will also increase. The reserve structure of a MSS should be determined based on striking a balance between the required reliability and the reserve cost. The objective of reserve management for a MSS is to schedule the reserve at the minimum system reserve cost while maintaining the required level of supply reliability to its customers. In previous research, Genetic Algorithm (GA) has been used to solve most reliability optimization problems. However, the GA is not very computationally efficient in some cases. In this chapter a new heuristic optimization technique—the particle swarm

  4. Market-Based and System-Wide Fuel Cycle Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul Philip Hood [Univ. of Wisconsin, Madison, WI (United States); Scopatz, Anthony [Univ. of South Carolina, Columbia, SC (United States); Gidden, Matthew [Univ. of Wisconsin, Madison, WI (United States); Carlsen, Robert [Univ. of Wisconsin, Madison, WI (United States); Mouginot, Baptiste [Univ. of Wisconsin, Madison, WI (United States); Flanagan, Robert [Univ. of South Carolina, Columbia, SC (United States)

    2017-06-13

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  5. Market-Based and System-Wide Fuel Cycle Optimization

    International Nuclear Information System (INIS)

    Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew; Carlsen, Robert; Mouginot, Baptiste; Flanagan, Robert

    2017-01-01

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  6. Thermodynamic optimization of the Cu-Nd system

    International Nuclear Information System (INIS)

    Wang Peisheng; Zhou Liangcai; Du Yong; Xu Honghui; Liu Shuhong; Chen Li; Ouyang Yifang

    2011-01-01

    Research highlights: → The enthalpies of formation of the compounds Cu6Nd, Cu5Nd, Cu2Nd and αCuNd were calculated using DFT. → The thermodynamic constraints to eliminate the artificial phase relations were imposed during the thermodynamic optimization procedure. → The Cu-Nd system was optimized under the thermodynamic constraints. - Abstract: The thermodynamic constraints to eliminate artificial phase relations were introduced with the Cu-Nd system as an example. The enthalpies of formation of the compounds Cu6Nd, Cu5Nd, Cu2Nd and αCuNd are calculated using density functional theory. Taking into account all the experimental data and the first-principles calculated enthalpies of formation of these compounds, the thermodynamic optimization of the Cu-Nd system was performed under the proposed thermodynamic constraints. It is demonstrated that the thermodynamic constraints are critical to obtain a set of thermodynamic parameters for the Cu-Nd system, which can avoid the appearance of all the artificial phase relations.

  7. Multiobjective optimal placement of switches and protective devices in electric power distribution systems using ant colony optimization

    Energy Technology Data Exchange (ETDEWEB)

    Tippachon, Wiwat; Rerkpreedapong, Dulpichet [Department of Electrical Engineering, Kasetsart University, 50 Phaholyothin Rd., Ladyao, Jatujak, Bangkok 10900 (Thailand)

    2009-07-15

    This paper presents a multiobjective optimization methodology to optimally place switches and protective devices in electric power distribution networks. Identifying the type and location of them is a combinatorial optimization problem described by a nonlinear and nondifferential function. The multiobjective ant colony optimization (MACO) has been applied to this problem to minimize the total cost while simultaneously minimizing two distribution network reliability indices: the system average interruption frequency index (SAIFI) and the system average interruption duration index (SAIDI). Actual distribution feeders are used in the tests, and test results have shown that the algorithm can determine the set of optimal nondominated solutions. It allows the utility to obtain the optimal type and location of devices to achieve the best system reliability with the lowest cost. (author)
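
    For reference, the two reliability indices minimized here have the standard definitions below, where N_i is the number of customers at load point i, λ_i its failure rate, and U_i its annual outage time (a sketch of textbook formulas, not an excerpt from the paper):

        \mathrm{SAIFI} = \frac{\sum_i \lambda_i N_i}{\sum_i N_i}
        \quad \text{(interruptions per customer per year)}, \qquad
        \mathrm{SAIDI} = \frac{\sum_i U_i N_i}{\sum_i N_i}
        \quad \text{(outage hours per customer per year)}.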

  8. Neural Network for Optimization of Existing Control Systems

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1995-01-01

    The purpose of this paper is to develop methods to use Neural Network based Controllers (NNC) as an optimization tool for existing control systems.

  9. Optimal planning of integrated multi-energy systems

    DEFF Research Database (Denmark)

    van Beuzekom, I.; Gibescu, M.; Pinson, Pierre

    2017-01-01

    In this paper, a mathematical approach for the optimal planning of integrated energy systems is proposed. In order to address the challenges of future, RES-dominated energy systems, the model deliberates between the expansion of traditional energy infrastructures, the integration … and sustainability goals for 2030 and 2045. Optimal green- and brownfield designs for a district's future integrated energy system are compared using a one-step, as well as a two-step planning approach. As expected, the greenfield designs are more cost efficient, as their results are not constrained by the existing

  10. Dynamical interpretation of the Hubble sequence of galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Dallaporta, N; Secco, L [Padua Univ. (Italy). Istituto di Astronomia

    1977-08-01

    Brosche (1970) has proposed a theory in which the energy loss due to collisions among gas clouds contained in a galaxy constitutes the driving mechanism for its evolution, through virial equilibrium states, which, from an initial spherical shape, makes it contract towards an elongated form; moreover, the value of the total angular momentum, assumed as given by uniform rotation, is assumed to determine the galaxy type on the Hubble sequence and to influence strongly the contraction time from the initial spherical to the final flat configuration. The authors modify Brosche's scheme by assuming as models the rotating polytropes of Chandrasekhar and Lebovitz with variable density from centre to border. As a consequence of this change, centrifugal shedding of matter is attained at the equator of the contracting ellipsoid for a configuration with an axial ratio different from zero, so that, hereafter, a flat disk is formed surrounding the internal bulge, with a decreasing overall eccentricity; the rotation curve assumes then an aspect qualitatively similar to the one observed for spiral galaxies. The feedback of star formation which, by exhausting the material of the gas clouds, is able to stop the driving mechanism of evolution before the final flat stage is attained has also been considered at several positions according to the value of the angular momentum. Numerical calculations seem to indicate that one can obtain in this way, by varying the angular momentum and the initial number of clouds, different galaxy types (elliptical, lenticular, spiral) resembling those of the Hubble sequence.

  11. Reward optimization of a repairable system

    International Nuclear Information System (INIS)

    Castro, I.T.; Perez-Ocon, R.

    2006-01-01

    This paper analyzes a system subject to repairable and non-repairable failures. Non-repairable failures lead to replacement of the system. Repairable failures first lead to repair, but after a fixed number of repairs they lead to replacement. Operating and repair times follow phase type distributions (PH-distributions) and the pattern of the operating times is modelled by a geometric process. In this context, the problem is to find the optimal number of repairs, which maximizes the long-run average reward per unit time. To this end, the optimal number is determined and it is obtained by efficient numerical procedures

  12. Reward optimization of a repairable system

    Energy Technology Data Exchange (ETDEWEB)

    Castro, I.T. [Departamento de Matematicas, Facultad de Veterinaria, Universidad de Extremadura, Avenida de la Universidad, s/n. 10071 Caceres (Spain)]. E-mail: inmatorres@unex.es; Perez-Ocon, R. [Departamento de Estadistica e Investigacion Operativa, Facultad de Ciencias, Universidad de Granada, Avenida de Severo Ochoa, s/n. 18071 Granada (Spain)]. E-mail: rperezo@ugr.es

    2006-03-15

    This paper analyzes a system subject to repairable and non-repairable failures. Non-repairable failures lead to replacement of the system. Repairable failures first lead to repair, but after a fixed number of repairs they lead to replacement. Operating and repair times follow phase type distributions (PH-distributions) and the pattern of the operating times is modelled by a geometric process. In this context, the problem is to find the optimal number of repairs, which maximizes the long-run average reward per unit time. To this end, the optimal number is determined and it is obtained by efficient numerical procedures.

  13. Hierarchical Control for Optimal and Distributed Operation of Microgrid Systems

    DEFF Research Database (Denmark)

    Meng, Lexuan

    manages the power flow with external grids, while the economic and optimal operation of MGs is not guaranteed by applying the existing schemes. Accordingly, this project is dedicated to the study of real-time optimization methods for MGs, including the review of optimization algorithms, system level mathematical modeling, and the implementation of real-time optimization into existing hierarchical control schemes. Efficiency enhancement in DC MGs and optimal unbalance compensation in AC MGs are taken as the optimization objectives in this project. Necessary system dynamic modeling and stability analysis …, a discrete-time domain modeling method is proposed to establish an accurate system level model. Taking into account the different sampling times of real world plant, digital controller and communication devices, the system is modeled with these three parts separately, and with full consideration

  14. Optimization of a particle optical system in a multiprocessor environment

    International Nuclear Information System (INIS)

    Wei Lei; Yin Hanchun; Wang Baoping; Tong Linsu

    2002-01-01

    In the design of a charged particle optical system, many geometrical and electric parameters have to be optimized to improve the performance characteristics. In every optimization cycle, the electromagnetic field and particle trajectories have to be calculated. Therefore, the optimization of a charged particle optical system is seriously limited by the available computer resources. Apart from this, numerical errors of calculation may also influence the convergence of the merit function. This article studies how to improve the optimization of charged particle optical systems. A new method is used to determine the gradient matrix. With this method, the accuracy of the Jacobian matrix can be improved. In this paper, the charged particle optical system is optimized with a Message Passing Interface (MPI). The electromagnetic field, particle trajectories and gradients of optimization variables are calculated on networks of workstations. Therefore, the speed of optimization has been greatly increased. It is possible to design a complicated charged particle optical system with optimum quality in an MPI environment. Finally, an electron gun for a cathode ray tube has been optimized in an MPI environment to verify the method proposed in this paper.
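
    The "gradient matrix" referred to here is the Jacobian of the system's performance measures with respect to the design parameters. The paper's improved method is not spelled out in the abstract; as a generic baseline, a central-difference estimate of such a Jacobian looks like the sketch below, with the toy "optical system" response being purely hypothetical.

        # Generic central-difference Jacobian J[i][j] = d f_i / d p_j of a
        # vector-valued merit function f(p); central differences reduce the
        # numerical error that can spoil the convergence of the merit function.
        def jacobian(f, params, h=1e-4):
            f0 = f(params)
            J = [[0.0] * len(params) for _ in range(len(f0))]
            for j in range(len(params)):
                plus, minus = list(params), list(params)
                plus[j] += h
                minus[j] -= h
                fp, fm = f(plus), f(minus)
                for i in range(len(f0)):
                    J[i][j] = (fp[i] - fm[i]) / (2.0 * h)
            return J

        def spot_metrics(p):           # hypothetical stand-in for field/trajectory codes
            focus, voltage = p
            return [(focus - 2.0) ** 2 + 0.1 * voltage,    # spot size
                    0.5 * focus * voltage]                 # distortion

        print(jacobian(spot_metrics, [1.0, 3.0]))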

  15. Constraining dark energy with Hubble parameter measurements: an analysis including future redshift-drift observations

    International Nuclear Information System (INIS)

    Guo, Rui-Yun; Zhang, Xin

    2016-01-01

    The nature of dark energy affects the Hubble expansion rate (namely, the expansion history) H(z) through an integral over w(z). However, the usual observables are luminosity distances or angular diameter distances, which measure the distance-redshift relation; the properties of dark energy thus affect the distances (and the growth factor) only through a further integration over functions of H(z). Direct measurements of the Hubble parameter H(z) at different redshifts are therefore of great importance for constraining the properties of dark energy. In this paper, we show how typical dark energy models, for example the ΛCDM, wCDM, CPL, and holographic dark energy models, can be constrained by the current direct measurements of H(z) (31 data points in total, covering the redshift range z ∈ [0.07, 2.34]). Future redshift-drift observations (also referred to as the Sandage-Loeb test) can also directly measure H(z) at higher redshifts, covering the range z ∈ [2, 5]. We thus discuss what role the redshift-drift observations can play in constraining dark energy with the Hubble parameter measurements. We show that the constraints on dark energy can be improved greatly with the H(z) data from only a 10-year observation of redshift drift. (orig.)
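
    The constraining power of direct H(z) data can be sketched with a simple chi-square, shown below for a flat wCDM expansion history. The three data points and the grid ranges are illustrative placeholders, not the 31 measurements used in the paper.

```python
# Minimal sketch: chi-square of a flat wCDM expansion history against direct
# H(z) measurements.  The data points below are invented for illustration.
import numpy as np

def hubble(z, H0, Om, w):
    """H(z) for a flat wCDM model with constant equation of state w."""
    z = np.asarray(z, dtype=float)
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * (1 + z) ** (3 * (1 + w)))

z_obs = np.array([0.07, 0.57, 2.34])       # illustrative redshifts
H_obs = np.array([69.0, 92.4, 222.0])      # illustrative H(z) values [km/s/Mpc]
sigma = np.array([19.6, 4.5, 7.0])         # illustrative 1-sigma uncertainties

def chi2(theta):
    H0, Om, w = theta
    return float(np.sum(((hubble(z_obs, H0, Om, w) - H_obs) / sigma) ** 2))

# Crude grid scan over (H0, Om, w); a real analysis would use MCMC sampling.
grid = [(H0, Om, w)
        for H0 in np.linspace(60, 75, 31)
        for Om in np.linspace(0.2, 0.4, 21)
        for w in np.linspace(-1.5, -0.5, 21)]
print("best-fit (H0, Om, w):", min(grid, key=chi2))
```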

  16. Optimal Real-time Dispatch for Integrated Energy Systems

    DEFF Research Database (Denmark)

    Anvari-Moghaddam, Amjad; Guerrero, Josep M.; Rahimi-Kian, Ashkan

    2016-01-01

    With the emergence of small-scale integrated energy systems (IESs), there is significant potential to increase the functionality of a typical demand-side management (DSM) strategy and typical implementation of building-level distributed energy resources (DERs). By integrating DSM and DERs...... into a cohesive, networked package that fully utilizes smart energy-efficient end-use devices, advanced building control/automation systems, and integrated communications architectures, it is possible to efficiently manage energy and comfort at the end-use location. In this paper, an ontology-driven multi......-agent control system with intelligent optimizers is proposed for optimal real-time dispatch of an integrated building and microgrid system considering coordinated demand response (DR) and DERs management. The optimal dispatch problem is formulated as a mixed integer nonlinear programming problem (MINLP...

  17. A Software Tool for Optimal Sizing of PV Systems in Malaysia

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a MATLAB-based, user-friendly software tool called PV.MY for optimal sizing of photovoltaic (PV) systems. The software is capable of predicting meteorological variables such as solar energy, ambient temperature and wind speed using an artificial neural network (ANN), optimizes the PV module/array tilt angle, optimizes the inverter size and calculates the optimal capacities of the PV array, battery, wind turbine and diesel generator in hybrid PV systems. The ANN-based model for meteorological prediction uses four variables, namely sunshine ratio, day number and location coordinates. As for PV system sizing, iterative methods are used for determining the optimal sizing of three types of PV systems: the standalone PV system, the hybrid PV/wind system and the hybrid PV/diesel generator system. The loss of load probability (LLP) technique is used for the optimization, in which the energy source capacities are the variables to be optimized subject to a very low LLP. As for determining the optimal PV panel tilt angle and inverter size, the Liu and Jordan model for solar energy incident on a tilted surface is used in optimizing the monthly tilt angle, while a model for the inverter efficiency curve is used in the optimization of the inverter size.
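
    The LLP-based iterative sizing idea can be sketched as a brute-force sweep over candidate array and battery sizes with an hourly energy balance, as below. This is a toy stand-in for the PV.MY tool: the solar and load profiles, efficiencies and unit costs are all assumptions.

```python
# Toy sketch of LLP-based sizing for a standalone PV/battery system (not the
# PV.MY tool itself).  Solar profile, load, efficiencies and unit costs are
# illustrative assumptions.
import numpy as np

hours = np.arange(24 * 30)                                           # one synthetic month
solar = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)      # kW per kWp
load = 0.8 + 0.4 * (np.sin((hours % 24 - 18) / 24 * 2 * np.pi) > 0)  # kW, evening peak

def llp(pv_kwp, batt_kwh, eff=0.9):
    """Loss-of-load probability: unmet energy / total demand."""
    soc, unmet = 0.5 * batt_kwh, 0.0
    for s, d in zip(solar, load):
        net = pv_kwp * s - d
        if net >= 0:
            soc = min(batt_kwh, soc + net * eff)       # charge with surplus
        else:
            draw = min(soc, -net / eff)                # discharge to cover deficit
            unmet += (-net) - draw * eff
            soc -= draw
    return unmet / load.sum()

best = None
for pv in np.arange(1.0, 8.0, 0.5):
    for batt in np.arange(1.0, 20.0, 1.0):
        if llp(pv, batt) <= 0.01:                      # 1% LLP target
            cost = 1000 * pv + 250 * batt              # assumed unit costs
            if best is None or cost < best[0]:
                best = (cost, pv, batt)
print(best)
```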

  18. Analysis and Optimization of Heterogeneous Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    . The success of such new design methods depends on the availability of analysis and optimization techniques. In this paper, we present analysis and optimization techniques for heterogeneous real-time embedded systems. We address in more detail a particular class of such systems called multi-clusters, composed...... to frames. Optimization heuristics for frame packing aiming at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  19. Asteroseismology of the Transiting Exoplanet Host HD 17156 with Hubble Space Telescope Fine Guidance Sensor

    DEFF Research Database (Denmark)

    Gilliland, Ronald L.; McCullough, Peter R.; Nelan, Edmund P.

    2011-01-01

    light curve. Using the density constraint from asteroseismology, and stellar evolution modeling results in M* = 1.285 ± 0.026 M_sun, R* = 1.507 ± 0.012 R_sun, and a stellar age of 3.2 ± 0.3 Gyr. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science......Observations conducted with the Fine Guidance Sensor on the Hubble Space Telescope (HST) providing high cadence and precision time-series photometry were obtained over 10 consecutive days in 2008 December on the host star of the transiting exoplanet HD 17156b. During this time, 1.0 × 10^12 photons...... Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555....

  20. Optimal and Miniaturized Strongly Coupled Magnetic Resonant Systems

    Science.gov (United States)

    Hu, Hao

    Wireless power transfer (WPT) technologies for communication and recharging devices have recently attracted significant research attention. Conventional WPT systems based either on far-field or near-field coupling cannot provide simultaneously high efficiency and long transfer range. The Strongly Coupled Magnetic Resonance (SCMR) method was introduced recently, and it offers the possibility of transferring power with high efficiency over longer distances. Previous SCMR research has only focused on how to improve its efficiency and range through different methods. However, the study of optimal and miniaturized designs has been limited. In addition, no multiband and broadband SCMR WPT systems have been developed and traditional SCMR systems exhibit narrowband efficiency thereby imposing strict limitations on simultaneous wireless transmission of information and power, which is important for battery-less sensors. Therefore, new SCMR systems that are optimally designed and miniaturized in size will significantly enhance various technologies in many applications. The optimal and miniaturized SCMR systems are studied here. First, analytical models of the Conformal SCMR (CSCMR) system and thorough analysis and design methodology have been presented. This analysis specifically leads to the identification of the optimal design parameters, and predicts the performance of the designed CSCMR system. Second, optimal multiband and broadband CSCMR systems are designed. Two-band, three-band, and four-band CSCMR systems are designed and validated using simulations and measurements. Novel broadband CSCMR systems are also analyzed, designed, simulated and measured. The proposed broadband CSCMR system achieved more than 7 times larger bandwidth compared to the traditional SCMR system at the same frequency. Miniaturization methods of SCMR systems are also explored. Specifically, methods that use printable CSCMR with large capacitors, novel topologies including meandered, SRRs, and

  1. Optimizing Technology-Oriented Constructional Parameters of complex dynamic systems

    International Nuclear Information System (INIS)

    Novak, S.M.

    1998-01-01

    Creating optimal vibro systems requires sequentially solving a few problems: selecting the basic pattern of dynamic actions, synthesizing the dynamically active systems, and optimizing the technological, technical, economic and design parameters. This approach is illustrated by the example of a high-efficiency vibro system synthesized for forming building structure components. When only a single source is used to excite oscillations, resonance oscillations are imparted to the product to be formed in both the horizontal and vertical planes. In order to obtain versatile and dynamically optimized parameters, a factor is introduced into the differential equations of motion that accounts for the relationship between the parameters determining the frequency characteristics of the system and the parameter variation range. This results in simple mathematical models of the system under investigation, convenient for optimization as well as for engineering design and calculations

  2. HUBBLE SPACE TELESCOPE DETECTION OF THE DOUBLE PULSAR SYSTEM J0737–3039 IN THE FAR-ULTRAVIOLET

    Energy Technology Data Exchange (ETDEWEB)

    Durant, Martin [Department of Medical Biophysics, Sunnybrook Hospital M6 623, 2075 Bayview Avenue, Toronto M4N 3M5 (Canada); Kargaltsev, Oleg [Department of Physics, The George Washington University, 725 21st Street NW, Washington, DC 20052 (United States); Pavlov, George G., E-mail: mdurant@sri.utoronto.ca, E-mail: kargaltsev@email.gwu.edu, E-mail: pavlov@astro.psu.edu [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States)

    2014-03-01

    We report on the detection of the double pulsar system J0737–3039 in the far-UV with the Advanced Camera for Surveys/Solar-blind Channel detector aboard the Hubble Space Telescope. We measured the energy flux F = (4.6 ± 1.0) × 10^-17 erg cm^-2 s^-1 in the 1250-1550 Å band, which corresponds to the extinction-corrected luminosity L ≈ 1.5 × 10^28 erg s^-1 for the distance d = 1.1 kpc and a plausible reddening E(B – V) = 0.1. If the detected emission comes from the entire surface of one of the neutron stars with a 13 km radius, the surface blackbody temperature is in the range T ≅ (2-5) × 10^5 K for a reasonable range of interstellar extinction. Such a temperature requires an internal heating mechanism to operate in old neutron stars, or, less likely, it might be explained by heating of the surface of the less energetic Pulsar B by the relativistic wind of Pulsar A. If the far-ultraviolet emission is non-thermal (e.g., produced in the magnetosphere of Pulsar A), its spectrum exhibits a break between the UV and X-rays.

  3. Optimization Models and Methods Developed at the Energy Systems Institute

    OpenAIRE

    N.I. Voropai; V.I. Zorkaltsev

    2013-01-01

    The paper briefly presents some optimization models of energy system operation and expansion that have been created at the Energy Systems Institute of the Siberian Branch of the Russian Academy of Sciences. Consideration is given to optimization models of energy development in Russia, a software package intended for the analysis of power system reliability, and a model of flow distribution in hydraulic systems. A general idea of the optimization methods developed at the Energy Systems Institute...

  4. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs. The procedure for sensor configuration is based on simultaneous perturbation stochastic approximation (SPSA). SPSA avoids the need for detailed modeling of the sensor response by simply......Considers the problem of sensor configuration for complex systems. Our approach involves definition of an appropriate optimality criterion or performance measure, and description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor...... relying on observed responses as obtained by limited experimentation with test sensor configurations. We illustrate the approach with the optimal placement of acoustic sensors for signal detection in structures. This includes both a computer simulation study for an aluminum plate, and real...
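
    For reference, the SPSA update mentioned in the abstract needs only two loss evaluations per iteration, regardless of the number of design variables. The sketch below uses standard Spall-style gain sequences and a placeholder loss in place of the experimentally estimated sensor-response criterion.

```python
# Compact SPSA sketch (standard Spall gain sequences).  loss() is a stand-in
# for the sensor-configuration criterion; in the paper it would be estimated
# from limited experimentation with test sensor placements.
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Placeholder: pretend the best sensor coordinates are all at 0.3.
    return float(np.sum((theta - 0.3) ** 2))

theta = rng.uniform(0, 1, size=6)            # e.g. (x, y) for three sensors
a, c, A, alpha, gamma = 0.1, 0.1, 10, 0.602, 0.101

for k in range(200):
    ak = a / (k + 1 + A) ** alpha
    ck = c / (k + 1) ** gamma
    delta = rng.choice([-1.0, 1.0], size=theta.size)     # simultaneous perturbation
    ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
    theta -= ak * ghat

print(np.round(theta, 3))
```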

  5. Economic optimization of photovoltaic water pumping systems for irrigation

    International Nuclear Information System (INIS)

    Campana, P.E.; Li, H.; Zhang, J.; Zhang, R.; Liu, J.; Yan, J.

    2015-01-01

    Highlights: • A novel optimization procedure for photovoltaic water pumping systems for irrigation is proposed. • An hourly simulation model is the basis of the optimization procedure. • The effectiveness of the new optimization approach has been tested on an existing photovoltaic water pumping system. - Abstract: Photovoltaic water pumping technology is considered a sustainable and economical solution to provide water for irrigation, which can halt grassland degradation and promote farmland conservation in China. The appropriate design and operation depend significantly on the available solar irradiation, crop water demand, water resources and the corresponding benefit from the crop sale. In this work, a novel optimization procedure is proposed which takes into consideration not only the availability of groundwater resources and the effect of water supply on crop yield, but also the investment cost of the photovoltaic water pumping system and the revenue from crop sale. A simulation model, which combines the dynamics of the photovoltaic water pumping system, groundwater level, water supply, crop water demand and crop yield, is employed during the optimization. To prove the effectiveness of the new optimization approach, it has been applied to an existing photovoltaic water pumping system. Results show that the optimal configuration can guarantee continuous operation and lead to a substantial reduction of the photovoltaic array size and consequently of the capital investment cost and the payback period. Sensitivity studies have been conducted to investigate the impacts of the prices of photovoltaic modules and forage on the optimization. Results show that the water resource is a determinant factor

  6. Multiobjective hyper heuristic scheme for system design and optimization

    Science.gov (United States)

    Rafique, Amer Farhan

    2012-11-01

    As system design becomes more and more multifaceted, integrated, and complex, the traditional single-objective optimization approach to optimal design is becoming less and less efficient and effective. Single-objective optimization methods produce a unique optimal solution, whereas multiobjective methods produce a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set, independent of the problem instance, through a multiobjective scheme. A further objective of the intended approach is to improve the quality of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to give the system designer the ability to study and analyze a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the (low-level) meta-heuristics and improve the likelihood of reaching a global optimum. Genetic Algorithm, Simulated Annealing and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds population diversity, resulting in accomplishment of the pre-defined goals set in the proposed scheme.
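
    The selection idea behind such a hyper-heuristic can be illustrated generically: a high-level controller chooses among low-level operators with probabilities weighted by their recent success. The sketch below is a simplified single-objective illustration with made-up operators, not the authors' multiobjective scheme.

```python
# Generic illustration of a selection hyper-heuristic (not the authors' exact
# scheme): three low-level move operators compete, and the controller picks
# among them with probabilities proportional to their recent improvement.
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))          # stand-in objective

def small_mutation(x):  return x + rng.normal(0, 0.05, x.size)
def large_mutation(x):  return x + rng.normal(0, 0.5, x.size)
def coordinate_reset(x):
    y = x.copy(); y[rng.integers(x.size)] = rng.uniform(-1, 1); return y

moves = [small_mutation, large_mutation, coordinate_reset]
scores = np.ones(len(moves))                 # credit for each low-level heuristic

x = rng.uniform(-1, 1, 5)
fx = f(x)
for _ in range(2000):
    i = rng.choice(len(moves), p=scores / scores.sum())
    y = moves[i](x)
    fy = f(y)
    if fy < fx:                              # accept improving moves only
        scores[i] += fx - fy                 # reward the heuristic that helped
        x, fx = y, fy
print(round(fx, 6))
```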

  7. Optimal tax depreciation under a progressive tax system

    OpenAIRE

    Wielhouwer, J.L.; De Waegenaere, A.M.B.; Kort, P.M.

    2002-01-01

    The focus of this paper is on the effect of a progressive tax system on optimal tax depreciation. By using dynamic optimization we show that an optimal strategy exists, and we provide an analytical expression for the optimal depreciation charges. Depreciation charges initially decrease over time, and after a number of periods the firm enters a steady state where depreciation is constant and equal to replacement investments. This way, the optimal solution trades off the benefits of accelerated...

  8. Integrated solar energy system optimization

    Science.gov (United States)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  9. System and economic optimization problems of NPPs and its ideology

    International Nuclear Information System (INIS)

    Klimenko, A.V.; Mironovich, V.L.

    2016-01-01

    An iterative scheme for optimizing the system of links of the nuclear fuel and energy complex (NFEC) is presented in the paper. Problems of system optimization of NFEC links, treated as a functional of NPP optimization, are formulated and investigated

  10. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY

    International Nuclear Information System (INIS)

    Dalcanton, Julianne J.; Williams, Benjamin F.; Rosenfield, Philip; Weisz, Daniel R.; Gilbert, Karoline M.; Gogarten, Stephanie M.; Lang, Dustin; Lauer, Tod R.; Dong Hui; Kalirai, Jason S.; Boyer, Martha L.; Gordon, Karl D.; Seth, Anil C.; Dolphin, Andrew; Bell, Eric F.; Bianchi, Luciana C.; Caldwell, Nelson; Dorman, Claire E.; Guhathakurta, Puragra; Girardi, Léo

    2012-01-01

    The Panchromatic Hubble Andromeda Treasury is an ongoing Hubble Space Telescope Multi-Cycle Treasury program to image ∼1/3 of M31's star-forming disk in six filters, spanning from the ultraviolet (UV) to the near-infrared (NIR). We use the Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) to resolve the galaxy into millions of individual stars with projected radii from 0 to 20 kpc. The full survey will cover a contiguous 0.5 deg^2 area in 828 orbits. Imaging is being obtained in the F275W and F336W filters on the WFC3/UVIS camera, F475W and F814W on ACS/WFC, and F110W and F160W on WFC3/IR. The resulting wavelength coverage gives excellent constraints on stellar temperature, bolometric luminosity, and extinction for most spectral types. The data produce photometry with a signal-to-noise ratio of 4 at m_F275W = 25.1, m_F336W = 24.9, m_F475W = 27.9, m_F814W = 27.1, m_F110W = 25.5, and m_F160W = 24.6 for single pointings in the uncrowded outer disk; in the inner disk, however, the optical and NIR data are crowding limited, and the deepest reliable magnitudes are up to 5 mag brighter. Observations are carried out in two orbits per pointing, split between WFC3/UVIS and WFC3/IR cameras in primary mode, with ACS/WFC run in parallel. All pointings are dithered to produce Nyquist-sampled images in F475W, F814W, and F160W. We describe the observing strategy, photometry, astrometry, and data products available for the survey, along with extensive testing of photometric stability, crowding errors, spatially dependent photometric biases, and telescope pointing control. We also report on initial fits to the structure of M31's disk, derived from the density of red giant branch stars, in a way that is independent of assumed mass-to-light ratios and is robust to variations in dust extinction. These fits also show that the 10 kpc ring is not just a region of enhanced recent star formation, but is instead a dynamical structure containing a significant overdensity of

  11. LIFE CYCLE ASSESSMENT IN HEALTHCARE SYSTEM OPTIMIZATION. INTRODUCTION

    Directory of Open Access Journals (Sweden)

    V. Sarancha

    2015-03-01

    Full Text Available This article describes the life cycle assessment (LCA) method and introduces opportunities for applying it in healthcare system settings. LCA draws attention to the careful use of resources and to environmental, human and social responsibility. Modelling of environmental and technological inputs allows the performance of the system to be optimized. Various factors and parameters that may influence the effectiveness of different sectors of a healthcare system are identified. Optimizing these parameters could lead to better system functioning, higher patient safety, economic sustainability and reduced resource consumption.

  12. CONSTRAINING DUST AND COLOR VARIATIONS OF HIGH-z SNe USING NICMOS ON THE HUBBLE SPACE TELESCOPE

    International Nuclear Information System (INIS)

    Nobili, S.; Amanullah, R.; Goobar, A.

    2009-01-01

    We present data from the Supernova Cosmology Project for five high redshift Type Ia supernovae (SNe Ia) that were obtained using the NICMOS infrared camera on the Hubble Space Telescope. We add two SNe from this sample to a rest-frame I-band Hubble diagram, doubling the number of high redshift supernovae on this diagram. This I-band Hubble diagram is consistent with a flat universe (Ω_M, Ω_Λ) = (0.29, 0.71). A homogeneous distribution of large grain dust in the intergalactic medium (replenishing dust) is incompatible with the data and is excluded at the 5σ confidence level, if the SN host galaxy reddening is corrected assuming R_V = 1.75. We use both optical and infrared observations to compare photometric properties of distant SNe Ia with those of nearby objects. We find generally good agreement with the expected color evolution for all SNe except the highest redshift SN in our sample (SN 1997ek at z = 0.863) which shows a peculiar color behavior. We also present spectra obtained from ground-based telescopes for type identification and determination of redshift.

  13. CRM System Optimization

    OpenAIRE

    Fučík, Ivan

    2015-01-01

    This thesis is focused on CRM solutions in small and medium-sized organizations with respect to the quality of their customer relationship. The main goal of this work is to design an optimal CRM solution in the environment of real organization. To achieve this goal it is necessary to understand the theoretical basis of several topics, such as organizations and their relationship with customers, CRM systems, their features and trends. On the basis of these theoretical topics it is possible to ...

  14. Skinner-Rusk unified formalism for optimal control systems and applications

    International Nuclear Information System (INIS)

    Barbero-Liñán, María; Echeverría-Enríquez, Arturo; Diego, David Martín de; Muñoz-Lecanda, Miguel C; Román-Roy, Narciso

    2007-01-01

    A geometric approach to time-dependent optimal control problems is proposed. This formulation is based on the Skinner and Rusk formalism for Lagrangian and Hamiltonian systems. The corresponding unified formalism developed for optimal control systems allows us to formulate geometrically the necessary conditions given by a weak form of Pontryagin's maximum principle, provided that the differentiability with respect to controls is assumed and the space of controls is open. Furthermore, our method is also valid for implicit optimal control systems and, in particular, for the so-called descriptor systems (optimal control problems including both differential and algebraic equations)

  15. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

    Full Text Available Embedded systems have stringent design constraints, which has necessitated much prior research focus on optimizing energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal management hardware mechanisms. The embedded system’s temperature not only affects the system’s reliability, but can also affect the performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing the temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing the execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.

  16. Optimal control of quantum systems: a projection approach

    International Nuclear Information System (INIS)

    Cheng, C.-J.; Hwang, C.-C.; Liao, T.-L.; Chou, G.-L.

    2005-01-01

    This paper considers the optimal control of quantum systems. The controlled quantum systems are described by the probability-density-matrix-based Liouville-von Neumann equation. Using projection operators, the states of the quantum system are decomposed into two sub-spaces, namely the 'main state' space and the 'remaining state' space. Since the control energy is limited, a solution for optimizing the external control force is proposed in which the main state is brought to the desired main state at a certain target time, while the population of the remaining state is simultaneously suppressed in order to diminish its effects on the final population of the main state. The optimization problem is formulated by maximizing a general cost functional of states and control force. An efficient algorithm is developed to solve the optimization problem. Finally, using the hydrogen fluoride (HF) molecular population transfer problem as an illustrative example, the effectiveness of the proposed scheme for a quantum system initially in a mixed state or in a pure state is investigated through numerical simulations

  17. Hubble Space Telescope via the Web

    Science.gov (United States)

    O'Dea, Christopher P.

    The Space Telescope Science Institute (STScI) makes available a wide variety of information concerning the Hubble Space Telescope (HST) via the Space Telescope Electronic Information Service (STEIS). STEIS is accessible via anonymous ftp, gopher, WAIS, and WWW. The information on STEIS includes how to propose for time on the HST, the current status of HST, reports on the scientific instruments, the observing schedule, data reduction software, calibration files, and a set of publicly available images in JPEG, GIF and TIFF format. STEIS serves both the astronomical community as well as the larger Internet community. WWW is currently the most widely used interface to STEIS. Future developments on STEIS are expected to include larger amounts of hypertext, especially HST images and educational material of interest to students, educators, and the general public, and the ability to query proposal status.

  18. Optimized controllers for enhancing dynamic performance of PV interface system

    Directory of Open Access Journals (Sweden)

    Mahmoud A. Attia

    2018-05-01

    Full Text Available The dynamic performance of a PV interface system can be improved by optimizing the gains of its Proportional–Integral (PI) controller. In this work, the gravitational search algorithm and the harmony search algorithm are utilized for optimal tuning of the PI controller gains. A performance comparison of the PV system with PI gains optimized by the different techniques is carried out. Finally, the dynamic behavior of the system is studied under hypothetical sudden variations in irradiance. The examination of the proposed techniques for optimal tuning of the PI gains is conducted using the MATLAB/SIMULINK software package. The main contribution of this work is investigating the dynamic performance of the PV interfacing system with application of the gravitational search algorithm and the harmony search algorithm for optimal PI parameter tuning. Keywords: Photovoltaic power systems, Gravitational search algorithm, Harmony search algorithm, Genetic algorithm, Artificial intelligence
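
    The tuning loop can be sketched with harmony search on an assumed first-order plant, as below; the real study evaluates the gains on a PV interface model in MATLAB/SIMULINK, so the plant, bounds and objective here are only placeholders.

```python
# Hedged sketch: tuning PI gains with harmony search on an assumed first-order
# plant dy/dt = -y + u, simulated by Euler integration; the objective is the
# integral of squared error for a unit step.  Not the paper's PV model.
import numpy as np

rng = np.random.default_rng(2)

def ise(kp, ki, dt=0.01, t_end=10.0):
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u)               # first-order plant
        cost += e * e * dt
    return cost

HMS, HMCR, PAR, bw = 10, 0.9, 0.3, 0.2
bounds = np.array([[0.1, 10.0], [0.01, 5.0]])            # (Kp, Ki) ranges
memory = rng.uniform(bounds[:, 0], bounds[:, 1], (HMS, 2))
fitness = np.array([ise(*h) for h in memory])

for _ in range(500):
    new = np.empty(2)
    for j in range(2):
        if rng.random() < HMCR:
            new[j] = memory[rng.integers(HMS), j]        # memory consideration
            if rng.random() < PAR:
                new[j] += rng.uniform(-bw, bw)           # pitch adjustment
        else:
            new[j] = rng.uniform(*bounds[j])             # random selection
        new[j] = np.clip(new[j], *bounds[j])
    f_new = ise(*new)
    worst = int(np.argmax(fitness))
    if f_new < fitness[worst]:
        memory[worst], fitness[worst] = new, f_new

best = memory[int(np.argmin(fitness))]
print("Kp, Ki =", np.round(best, 3), "ISE =", round(fitness.min(), 4))
```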

  19. Eyes on the Universe: The Legacy of the Hubble Space Telescope and Looking to the Future with the James Webb Space Telescope

    Science.gov (United States)

    Straughn, Amber

    2011-01-01

    Over the past 20 years the Hubble Space Telescope has revolutionized our understanding of the Universe. Most recently, the complete refurbishment of Hubble in 2009 has given new life to the telescope and the new science instruments have already produced groundbreaking science results, revealing some of the most distant galaxy candidates ever discovered. Despite the remarkable advances in astrophysics that Hubble has provided, the new questions that have arisen demand a new space telescope with new technologies and capabilities. I will present the exciting new technology development and science goals of NASA's James Webb Space Telescope, which is currently being built and tested and will be launched this decade.

  20. Design and optimization of flexible multi-generation systems

    DEFF Research Database (Denmark)

    Lythcke-Jørgensen, Christoffer Ernst

    variations and dynamics, and energy system analysis, which fails to consider process integration synergies in local systems. The primary objective of the thesis is to derive a methodology for linking process design practices with energy system analysis for enabling coherent and holistic design optimization...... of flexible multi-generation system. In addition, the case study results emphasize the importance of considering flexible operation, systematic process integration, and systematic assessment of uncertainties in the design optimization. It is recommended that future research focus on assessing system impacts...... from flexible multi-generation systems and performance improvements from storage options....

  1. Worst-case tolerance optimization of antenna systems

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    1980-01-01

    The application of recently developed algorithms to antenna systems design is demonstrated by the worst-case tolerance optimization of linear broadside arrays, using both spacings and excitation coefficients as design parameters. The resulting arrays are optimally immunized against deviations...... of the design parameters from their nominal values....

  2. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    Science.gov (United States)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents optimal production and distribution management for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, the production of heat and electric power, regional heat demand, electric power bidding and sales, and the transport and storage of heat at each regional DHS are taken into account. The optimal management problem is formulated as a mixed integer linear program (MILP) whose objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of the heat units and networks as well as fulfilling the heating demands of consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operation at each district heating branch. Numerical simulations show the increase in energy efficiency achieved by introducing the present optimal management system.
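
    A stripped-down version of such a MILP (convex piecewise-linear production cost, a binary on/off status and a heat storage balance) is sketched below with PuLP; the plant data, demand and costs are illustrative, and the bidding, network and stairwise start-up terms of the full model are omitted.

```python
# Hedged sketch of the MILP flavour described above, written with PuLP.
# All data (demand, segment capacities/costs, storage size) are illustrative.
import pulp

demand = [30, 45, 60, 40]                       # MWh heat per period
seg_cap = [20, 20, 20]                          # MWh per cost segment
seg_cost = [10, 14, 19]                         # increasing marginal cost -> convex

prob = pulp.LpProblem("district_heating", pulp.LpMinimize)
q = {(t, s): pulp.LpVariable(f"q_{t}_{s}", 0, seg_cap[s])
     for t in range(len(demand)) for s in range(len(seg_cap))}
store = {t: pulp.LpVariable(f"soc_{t}", 0, 25) for t in range(len(demand) + 1)}
on = {t: pulp.LpVariable(f"on_{t}", cat="Binary") for t in range(len(demand))}

# Objective: piecewise-linear production cost plus a flat running cost per period.
prob += pulp.lpSum(seg_cost[s] * q[t, s] for t, s in q) \
        + pulp.lpSum(50 * on[t] for t in on)

for t, d in enumerate(demand):
    produced = pulp.lpSum(q[t, s] for s in range(len(seg_cap)))
    prob += produced <= sum(seg_cap) * on[t]             # produce only when on
    prob += store[t + 1] == store[t] + produced - d      # heat storage balance
prob += store[0] == 10                                    # initial stored heat

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```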

  3. The UDF05 Follow-up of the Hubble Ultra Deep Field. III. The Luminosity Function at z ~ 6

    Science.gov (United States)

    Su, Jian; Stiavelli, Massimo; Oesch, Pascal; Trenti, Michele; Bergeron, Eddie; Bradley, Larry; Carollo, Marcella; Dahlen, Tomas; Ferguson, Henry C.; Giavalisco, Mauro; Koekemoer, Anton; Lilly, Simon; Lucas, Ray A.; Mobasher, Bahram; Panagia, Nino; Pavlovsky, Cheryl

    2011-09-01

    In this paper, we present a derivation of the rest-frame 1400 Å luminosity function (LF) at redshift six from a new application of the maximum likelihood method, exploring the five deepest Hubble Space Telescope/Advanced Camera for Surveys (HST/ACS) fields, i.e., the Hubble Ultra Deep Field, two UDF05 fields, and two Great Observatories Origins Deep Survey fields. We work on the latest improved data products, which makes our results more robust than those of previous studies. We use unbinned data and thereby make optimal use of the information contained in the data set. We restrict the analysis to a magnitude limit where the completeness is larger than 50% to avoid possibly large errors in the faint-end slope that are difficult to quantify. We also take into account scattering in and out of the dropout sample due to photometric errors by defining for each object a probability that it belongs to the dropout sample. We find the best-fit Schechter parameters for the z ~ 6 LF are α = -1.87 ± 0.14, M* = -20.25 ± 0.23, and φ* = 1.77 (+0.62/-0.49) × 10^-3 Mpc^-3. Such a steep slope suggests that galaxies, especially the faint ones, are possibly the main sources of ionizing photons in the universe at redshift six. We also combine results from all studies at z ~ 6, reaching agreement at the 95% confidence level that -20.45 < M* ... Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 10632 and 11563.
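
    For reference, the Schechter form being fitted can be evaluated at the quoted best-fit parameters as in the sketch below (the faint-end slope is taken with the usual negative sign convention).

```python
# Minimal sketch: the Schechter luminosity function in magnitudes, evaluated
# at the best-fit z ~ 6 parameters quoted above.
import numpy as np

def schechter(M, phi_star=1.77e-3, M_star=-20.25, alpha=-1.87):
    """Number density per magnitude per Mpc^3."""
    x = 10 ** (0.4 * (M_star - M))           # L / L*
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

for M in (-22.0, -20.25, -18.0):
    print(M, f"{schechter(M):.2e}")
```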

  4. Electricity tariff systems for informatics system design regarding consumption optimization in smart grids

    Directory of Open Access Journals (Sweden)

    Simona Vasilica OPREA

    2016-01-01

    Full Text Available High volumes of data are gathered via sensors and recorded by smart meters. These data are processed at the electricity consumers' and grid operators' side by big data analytics. Electricity consumption optimization offers multiple advantages for both consumers and grid operators. At the electricity customer level, the savings from optimizing electricity consumption are significant, but the main benefits come from indirect aspects such as avoiding onerous grid investments, a higher volume of integrated renewable energy sources, a less polluted environment, etc. In order to optimize electricity consumption, advanced tariff systems are essential due to the financial incentive they provide for changing electricity consumers' behaviour. In this paper several advanced tariff systems are described in detail. These systems are applied in England, Spain, Italy, France, Norway and Germany, and are compared in terms of their characteristics, advantages and disadvantages. Then, different tariff systems applied in Romania are presented. Romanian tariff systems have been designed for various types of electricity consumers. The different tariff systems applied by grid operators or electricity suppliers will be included in the database model that is part of an informatics system for electricity consumption optimization.

  5. Optimal control of complex atomic quantum systems.

    Science.gov (United States)

    van Frank, S; Bonneau, M; Schmiedmayer, J; Hild, S; Gross, C; Cheneau, M; Bloch, I; Pichler, T; Negretti, A; Calarco, T; Montangero, S

    2016-10-11

    Quantum technologies will ultimately require manipulating many-body quantum systems with high precision. Cold atom experiments represent a stepping stone in that direction: a high degree of control has been achieved on systems of increasing complexity. However, this control is still sub-optimal. In many scenarios, achieving a fast transformation is crucial to fight against decoherence and imperfection effects. Optimal control theory is believed to be the ideal candidate to bridge the gap between early stage proof-of-principle demonstrations and experimental protocols suitable for practical applications. Indeed, it can engineer protocols at the quantum speed limit - the fastest achievable timescale of the transformation. Here, we demonstrate such potential by computing theoretically and verifying experimentally the optimal transformations in two very different interacting systems: the coherent manipulation of motional states of an atomic Bose-Einstein condensate and the crossing of a quantum phase transition in small systems of cold atoms in optical lattices. We also show that such processes are robust with respect to perturbations, including temperature and atom number fluctuations.

  6. Optimal design of a maintainable cold-standby system

    Energy Technology Data Exchange (ETDEWEB)

    Yu Haiyang [Universite de technologie de Troyes, ISTIT, Rue Marie Curie, BP 2060, 10010 TROYES (France)]. E-mail: Haiyang.YU@utt.fr; Yalaoui, Farouk [Universite de technologie de Troyes, ISTIT, Rue Marie Curie, BP 2060, 10010 TROYES (France); Chatelet, Eric [Universite de technologie de Troyes, ISTIT, Rue Marie Curie, BP 2060, 10010 TROYES (France); Chu Chengbin [Universite de technologie de Troyes, ISTIT, Rue Marie Curie, BP 2060, 10010 TROYES (France); Management School, Hefei University of Technology, Hefei (China)

    2007-01-15

    This paper considers a framework for the optimal design of a maintainable cold-standby system. Not only is the maintenance policy to be determined, but the reliability characteristics of the components are also taken into account. Hence, the mean time to failure of the components and the policy time of good-as-new maintenance are proposed as decision variables. Following probability analyses, the system cost rate and the system availability are formulated as the optimization objective and the constraint, respectively. This optimization problem is then resolved directly by exploiting its underlying properties. Moreover, the solution procedure is found to be independent of the failure distributions of the components and of the form of the system cost, which is illustrated through a numerical example. In conclusion, an exact method is established to minimize the cost rate of a cold-standby system with the given maintenance facility.

  7. Optimal design of a maintainable cold-standby system

    International Nuclear Information System (INIS)

    Yu Haiyang; Yalaoui, Farouk; Chatelet, Eric; Chu Chengbin

    2007-01-01

    This paper considers a framework for the optimal design of a maintainable cold-standby system. Not only is the maintenance policy to be determined, but the reliability characteristics of the components are also taken into account. Hence, the mean time to failure of the components and the policy time of good-as-new maintenance are proposed as decision variables. Following probability analyses, the system cost rate and the system availability are formulated as the optimization objective and the constraint, respectively. This optimization problem is then resolved directly by exploiting its underlying properties. Moreover, the solution procedure is found to be independent of the failure distributions of the components and of the form of the system cost, which is illustrated through a numerical example. In conclusion, an exact method is established to minimize the cost rate of a cold-standby system with the given maintenance facility

  8. Optimal redundant systems for works with random processing time

    International Nuclear Information System (INIS)

    Chen, M.; Nakagawa, T.

    2013-01-01

    This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems

  9. Optimization and Control of Electric Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lesieutre, Bernard C. [Univ. of Wisconsin, Madison, WI (United States); Molzahn, Daniel K. [Univ. of Wisconsin, Madison, WI (United States)

    2014-10-17

    The analysis and optimization needs for planning and operation of the electric power system are challenging due to the scale and the form of the model representations. The connected network spans the continent and the mathematical models are inherently nonlinear. Traditionally, computational limits have necessitated the use of very simplified models for grid analysis, and this has resulted in less secure operation, less efficient operation, or both. The research conducted in this project advances techniques for power system optimization problems that will enhance reliable and efficient operation. The results of this work appear in numerous publications and address different application problems including optimal power flow (OPF), unit commitment, demand response, reliability margins, planning, and transmission expansion, as well as general tools and algorithms.

  10. Application of Grey Wolf Optimizer Algorithm for Optimal Power Flow of Two-Terminal HVDC Transmission System

    Directory of Open Access Journals (Sweden)

    Heba Ahmed Hassan

    2017-01-01

    Full Text Available This paper applies a relatively new optimization method, the Grey Wolf Optimizer (GWO) algorithm, to the Optimal Power Flow (OPF) of a two-terminal High Voltage Direct Current (HVDC) electrical power system. The OPF problem of pure AC power systems considers the minimization of total costs under equality and inequality constraints. Hence, the OPF problem of integrated AC-DC power systems is extended to incorporate HVDC links, while taking into consideration the power transfer control characteristics, using a GWO algorithm. This algorithm is inspired by the hunting behavior and social leadership of grey wolves in nature. The proposed algorithm is applied to two different case studies: the modified 5-bus and WSCC 9-bus test systems. The validity of the proposed algorithm is demonstrated by comparing the obtained results with those reported in literature using other optimization techniques. Analysis of the obtained results shows that the proposed GWO algorithm is able to achieve shorter CPU time, as well as lower total cost, when compared with existing optimization techniques. This conclusion proves the efficiency of the GWO algorithm.
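
    The alpha/beta/delta update at the core of GWO is compact enough to sketch directly; below it is applied to a stand-in sphere function rather than the AC-DC OPF cost, so it only illustrates the mechanics of the algorithm.

```python
# Bare-bones Grey Wolf Optimizer sketch on a stand-in objective (a sphere
# function); the real application would evaluate the AC-DC optimal power flow
# cost instead.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: float(np.sum(x ** 2))          # placeholder for the OPF cost

dim, n_wolves, iters = 5, 20, 200
X = rng.uniform(-10, 10, (n_wolves, dim))

for t in range(iters):
    fitness = np.array([f(x) for x in X])
    alpha, beta, delta = X[np.argsort(fitness)[:3]]      # three best wolves
    a = 2 - 2 * t / iters                                # decreases from 2 to 0
    for i in range(n_wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - X[i])
            candidates.append(leader - A * D)
        X[i] = np.mean(candidates, axis=0)               # average of the three pulls

best = min(X, key=f)
print(np.round(best, 3), round(f(best), 6))
```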

  11. Open Drainage and Detention Basin Combined System Optimization

    Directory of Open Access Journals (Sweden)

    M. E. Banihabib

    2017-01-01

    Full Text Available Introduction: Flooding causes deaths and economic damage; it is one of the most complex and destructive natural disasters, endangering human lives and property more than almost any other. Floods strike most countries, and each country deals with them differently depending on its policy. The uneven intensity and temporal distribution of rainfall in various parts of Iran (which has an arid and semi-arid climate) cause flash floods and lead to severe economic damage. A detention basin can be used as a flood-control measure: it detains, delays and postpones the flood flow, controlling the flood directly and rapidly by temporarily storing water. If the land topography allows a detention basin of appropriate volume and quarries are close to the dam construction site, this measure can be used because it takes effect faster than other watershed-management measures. Open drains can be used alone or in combination with a detention basin instead of a detention basin alone. Since, in the combined system of open drainage and detention basin, the required dam height trades off against the open drainage capacity, optimization of the system is essential. Investigating the sensitivity of the optimized combined system (open drainage and detention basin) to the effective factors is therefore also useful for the appropriate design of the combined system. Materials and Methods: This research aims to develop an optimization model for a combined system of open drainage and detention basins in a mountainous area and to analyze the sensitivity of the optimized dimensions to the hydrological factors. To select the dam sites for the detention basins, a watershed map at a scale of 1:25,000 is used. In the AutoCAD environment, the locations of the dam sites are assessed to find the proper site which contains enough storage volume of the detention

  12. Optimal control of operation efficiency of belt conveyor systems

    International Nuclear Information System (INIS)

    Zhang, Shirong; Xia, Xiaohua

    2010-01-01

    The improvement of the energy efficiency of belt conveyor systems can be achieved at equipment or operation levels. Switching control and variable speed control are proposed in literature to improve energy efficiency of belt conveyors. The current implementations mostly focus on lower level control loops or an individual belt conveyor without operational considerations at the system level. In this paper, an optimal switching control and a variable speed drive (VSD) based optimal control are proposed to improve the energy efficiency of belt conveyor systems at the operational level, where time-of-use (TOU) tariff, ramp rate of belt speed and other system constraints are considered. A coal conveying system in a coal-fired power plant is taken as a case study, where great saving of energy cost is achieved by the two optimal control strategies. Moreover, considerable energy saving resulting from VSD based optimal control is also proved by the case study.
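
    The operational idea of shifting conveying effort away from expensive time-of-use periods can be sketched as a small linear program: choose hourly feed rates that move a required tonnage at minimum energy cost subject to a ramp limit. The tariff, energy coefficient and limits below are assumptions, not the paper's plant model.

```python
# Toy sketch of TOU-aware load shifting for a conveyor: choose hourly feed
# rates to move a fixed tonnage at minimum energy cost, with a ramp limit on
# how fast the rate may change between hours.  All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

hours = 24
tariff = np.where((np.arange(hours) >= 7) & (np.arange(hours) < 22), 1.2, 0.4)  # $/kWh
k = 2.0                          # assumed kWh per tonne conveyed
total_tonnage, rate_max, ramp = 1200.0, 80.0, 20.0

c = tariff * k                   # hourly energy cost per tonne conveyed

# Ramp constraints |rate[h] - rate[h-1]| <= ramp as two sets of inequalities.
A_ub, b_ub = [], []
for h in range(1, hours):
    row = np.zeros(hours); row[h], row[h - 1] = 1, -1
    A_ub += [row, -row]; b_ub += [ramp, ramp]

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, hours)), b_eq=[total_tonnage],
              bounds=[(0, rate_max)] * hours, method="highs")
print(res.status, np.round(res.x, 1))
```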

  13. Optimal control of operation efficiency of belt conveyor systems

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Shirong [Department of Automation, Wuhan University, Wuhan 430072 (China); Xia, Xiaohua [Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002 (South Africa)

    2010-06-15

    The improvement of the energy efficiency of belt conveyor systems can be achieved at equipment or operation levels. Switching control and variable speed control are proposed in literature to improve energy efficiency of belt conveyors. The current implementations mostly focus on lower level control loops or an individual belt conveyor without operational considerations at the system level. In this paper, an optimal switching control and a variable speed drive (VSD) based optimal control are proposed to improve the energy efficiency of belt conveyor systems at the operational level, where time-of-use (TOU) tariff, ramp rate of belt speed and other system constraints are considered. A coal conveying system in a coal-fired power plant is taken as a case study, where great saving of energy cost is achieved by the two optimal control strategies. Moreover, considerable energy saving resulting from VSD based optimal control is also proved by the case study. (author)

  14. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming to simultaneously consider the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and the ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  15. The Optimization of power reactor control system

    International Nuclear Information System (INIS)

    Danupoyo, S.D.

    1997-01-01

    A power reactor is an important part of a nuclear power plant, and successful control of the reactor underpins the safety of the whole plant. Until now, the power reactor has been controlled by a classical control system designed with an output-feedback method. To meet today's stricter safety requirements, the existing power reactor control system should be modified. This paper describes a power reactor control system designed with a state-feedback method, optimized with the LQG (linear-quadratic-Gaussian) approach and equipped with a state estimator. A pressurized-water reactor, modelled by point kinetics with one group of delayed neutrons, has been used as the model. The simulation results show that the optimized control system can control the power reactor more effectively and efficiently than the classical control system
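
    The LQ-regulator part of such a design (the state estimator is omitted here) can be sketched by solving the continuous-time Riccati equation for a small linearized point-kinetics model with one delayed-neutron group; the numbers below are generic textbook values, not the paper's plant.

```python
# Hedged sketch of the LQ design step only: state-feedback gains from the
# continuous-time Riccati equation for a one-delayed-group point-kinetics
# model linearized about equilibrium.  Parameter values are generic, not the
# plant model used in the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

beta, Lam, lam, n0 = 0.0065, 1e-4, 0.08, 1.0      # delayed fraction, generation time, decay const.

A = np.array([[-beta / Lam, lam],
              [ beta / Lam, -lam]])               # states: [delta n, delta c]
B = np.array([[n0 / Lam],
              [0.0]])                             # input: reactivity perturbation

Q = np.diag([1.0, 0.01])                          # weight the power error most
R = np.array([[100.0]])                           # penalize reactivity effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                   # optimal state feedback u = -K x
print("LQR gain K =", np.round(K, 4))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 4))
```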

  16. Air data system optimization using a genetic algorithm

    Science.gov (United States)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to the results obtained by conventional gradient search method.

  17. Gravitational Contraction and Fusion Plasma Burn. Universal Expansion and the Hubble Law

    International Nuclear Information System (INIS)

    Wilhelmsson, Hans

    2002-01-01

    A dynamic approach is developed for the two principal phases of (i) gravitational condensation, and (ii) burning fusion plasma evolution. Comparison is made with conceptual descriptions of star formation and of the subsequent decay towards red giant stars, white dwarfs, and other condensed-core objects such as neutron stars and black holes. The possibility of treating the expansion of the Universe by means of a similar approach is also discussed. The concept of negative diffusion is introduced for the contraction phase of star formation. The coefficients defining the nonlinear diffusion are determined uniquely by physical conditions and, for the case of the expansion of the universe, by the observed Hubble law. The contraction and evolution of large-scale 3-D stars and 2-D galactic systems can thus be dynamically surveyed. In particular, the time-scales can be determined

  18. Design and Optimization Method of a Two-Disk Rotor System

    Science.gov (United States)

    Huang, Jingjing; Zheng, Longxi; Mei, Qing

    2016-04-01

    An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS was proposed in this paper. Firstly, a two-disk rotor system was established and its modes, harmonic response and transient response under acceleration conditions were analyzed with ANSYS, yielding the dynamic characteristics of the two-disk rotor system. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives and constraints were determined. After that, the multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: the Evolutionary Optimization Algorithm, the Multi-Island Genetic Algorithm and the Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints. Meanwhile, the accuracy and the number of evaluations of the different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached its minimum value and that the design efficiency and quality were improved by the multidisciplinary design optimization while meeting the design requirements, which provides a reference for improving the design efficiency and reliability of aero-engine rotors.

  19. Design, Analysis and Optimization of a Solar Dish/Stirling System

    Directory of Open Access Journals (Sweden)

    Seyyed Danial Nazemi

    2016-02-01

    Full Text Available In this paper, a mathematical model describing the thermal and physical behavior of a solar dish/Stirling system was developed; the system was then designed, analysed and optimized. In this regard, all of the heat losses in a dish/Stirling system were calculated, then the net output work of the Stirling engine was computed, and accordingly the system efficiency was worked out. These heat losses include convection and conduction heat losses, radiation heat losses by emission in the cavity receiver, reflection heat losses of solar energy in the parabolic dish, internal and external conduction heat losses, energy dissipation by pressure drops, and energy losses by the shuttle effect in the displacer piston of the Stirling engine. All of these heat losses in the parabolic dish, cavity receiver and Stirling engine were calculated using mathematical modeling in Matlab software. For validation of the proposed model, a 10 kW solar dish/Stirling system was designed and the simulation results were compared with the Eurodish system data, with a reasonable degree of agreement. This model was used to investigate the effect of geometric and thermodynamic parameters, including the aperture diameters of the parabolic dish and the cavity receiver and the pressure of the compression space of the Stirling engine, on the system performance. Using the PSO method, which is an intelligent optimization technique, the total design was optimized and the optimal values of the decision-making parameters were determined. The optimization was done in two scenarios. In the first scenario, the optimal value of each design parameter was found while the other parameters were kept equal to those of the designed case study. In the second scenario, all parameters were set to their optimal values. By optimizing the modeled dish/Stirling system, the total efficiency of the system improved by up to 0.60% in the first scenario, and it increased from 21.69% to 22.62% in the second

  20. Towards robust optimal design of storm water systems

    Science.gov (United States)

    Marquez Calvo, Oscar; Solomatine, Dimitri

    2015-04-01

    In this study the focus is on the design of a storm water or combined sewer system. Such a system should be able to handle most storm events properly, to minimize the damage caused by flooding when the system lacks the capacity to cope with rain water at peak times. This is a multi-objective optimization problem: we have to take into account the minimization of construction costs, the minimization of damage costs due to flooding, and possibly other criteria. One of the most important factors influencing the design of storm water systems is the expected amount of water to deal with. It is common for this infrastructure to be developed with the capacity to cope with events that occur once in, say, 10 or 20 years - so-called design rainfall events. However, rainfall is a random variable and such uncertainty is typically not taken explicitly into account in optimization. Design rainfall data are based on historical rainfall records, but these records often rest on unreliable measurements or insufficient historical information, and rainfall patterns are changing regardless of the historical record. There are also other sources of uncertainty influencing design, for example leakages in the pipes and accumulation of sediments in pipes. In the context of storm water or combined sewer system design or rehabilitation, a robust optimization technique should be able to find the best design (or rehabilitation plan) within the available budget while taking into account uncertainty in the variables that were used to design the system. In this work we consider various approaches to robust optimization proposed by various authors (Gabrel, Murat, Thiele 2013; Beyer, Sendhoff 2007) and test a novel method ROPAR (Solomatine 2012) to analyze robustness. References Beyer, H.G., & Sendhoff, B. (2007). Robust optimization - A comprehensive survey. Comput. Methods Appl. Mech. Engrg., 3190-3218. Gabrel, V.; Murat, C., Thiele, A. (2014
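
    To make the idea of robustness to rainfall uncertainty concrete, the sketch below scores candidate pipe diameters over a sample of rainfall scenarios rather than a single design storm, and compares the design that minimizes expected cost with the one that minimizes a high-percentile cost. The rainfall distribution, capacity formula and cost coefficients are invented for illustration and the code is not the ROPAR method itself.

      # Scenario-based robustness sketch for storm-water design: each candidate
      # pipe diameter is scored over sampled rainfall intensities instead of a
      # single design storm. Cost and damage models are invented placeholders.
      import numpy as np

      rng = np.random.default_rng(1)
      rain = rng.gumbel(loc=30.0, scale=12.0, size=2000)   # mm/h scenarios (assumed)

      def construction_cost(d):           # grows with pipe diameter d (m)
          return 1000.0 * d**1.5

      def flood_damage(d, intensity):     # damage when runoff exceeds pipe capacity
          capacity = 55.0 * d**2.67       # Manning-like capacity proxy
          excess = np.maximum(intensity - capacity, 0.0)
          return 80.0 * excess

      candidates = np.linspace(0.4, 1.6, 25)
      expected = [construction_cost(d) + flood_damage(d, rain).mean() for d in candidates]
      worst    = [construction_cost(d) + np.percentile(flood_damage(d, rain), 95) for d in candidates]

      print("min expected-cost diameter :", candidates[int(np.argmin(expected))])
      print("min 95th-percentile design :", candidates[int(np.argmin(worst))])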

  1. Parameter optimization via cuckoo optimization algorithm of fuzzy controller for energy management of a hybrid power system

    International Nuclear Information System (INIS)

    Berrazouane, S.; Mohammedi, K.

    2014-01-01

    Highlights: • Optimized fuzzy logic controller (FLC) for operating a standalone hybrid power system, based on the cuckoo search algorithm. • Comparison between the optimized fuzzy logic controllers based on cuckoo search and on swarm intelligence. • Loss of power supply probability and levelized energy cost are introduced. - Abstract: This paper presents the development of an optimized fuzzy logic controller (FLC) for operating a standalone hybrid power system, based on the cuckoo search algorithm. The FLC inputs are the battery state of charge (SOC) and the net power flow; the FLC outputs are the power rates of the batteries, the photovoltaic array and the diesel generator. Data for weekly solar irradiation, ambient temperature and load profile are used to tune the proposed controller with the cuckoo search algorithm. The optimized FLC is able to minimize the loss of power supply probability (LPSP), excess energy (EE) and levelized energy cost (LEC). Moreover, the cuckoo search results are better than those of particle swarm optimization (PSO) for the fuzzy system controller.
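
    The following Python sketch shows the skeleton of a cuckoo search (Lévy flights plus abandonment of the worst nests) applied to tuning two controller parameters, here hypothetical SOC thresholds standing in for the fuzzy membership-function parameters of the paper; the LPSP/LEC objective is a synthetic placeholder.

      # Minimal cuckoo-search sketch (Levy flights + nest abandonment) tuning two
      # controller parameters, e.g. SOC thresholds used by an energy-management
      # rule. The objective mixing "LPSP" and "LEC" terms is a synthetic stand-in.
      import numpy as np
      from math import gamma, sin, pi

      rng = np.random.default_rng(2)
      lb, ub = np.array([0.2, 0.5]), np.array([0.5, 0.9])   # [SOC_low, SOC_high]

      def objective(x):
          soc_lo, soc_hi = x
          lpsp = (0.35 - soc_lo)**2 + 0.1 * max(0.0, 0.75 - soc_hi)   # toy reliability term
          lec  = 0.2 * soc_hi + 0.05 * (soc_hi - soc_lo)              # toy cost term
          return lpsp + lec

      def levy(size, beta=1.5):
          # Mantegna's algorithm for Levy-distributed step lengths
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
          u = rng.normal(0, sigma, size)
          v = rng.normal(0, 1, size)
          return u / np.abs(v)**(1 / beta)

      n, iters, pa = 15, 300, 0.25
      nests = rng.uniform(lb, ub, size=(n, 2))
      fit = np.array([objective(x) for x in nests])

      for _ in range(iters):
          best = nests[np.argmin(fit)]
          # generate new solutions by Levy flight around the current nests
          new = np.clip(nests + 0.01 * levy((n, 2)) * (nests - best), lb, ub)
          new_fit = np.array([objective(x) for x in new])
          better = new_fit < fit
          nests[better], fit[better] = new[better], new_fit[better]
          # abandon a fraction pa of the worst nests and rebuild them at random
          worst = np.argsort(fit)[-int(pa * n):]
          nests[worst] = rng.uniform(lb, ub, size=(len(worst), 2))
          fit[worst] = np.array([objective(x) for x in nests[worst]])

      print("tuned thresholds:", nests[np.argmin(fit)], "objective:", fit.min())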

  2. Optimal Operation of Radial Distribution Systems Using Extended Dynamic Programming

    DEFF Research Database (Denmark)

    Lopez, Juan Camilo; Vergara, Pedro P.; Lyra, Christiano

    2018-01-01

    An extended dynamic programming (EDP) approach is developed to optimize the ac steady-state operation of radial electrical distribution systems (EDS). Based on the optimality principle of the recursive Hamilton-Jacobi-Bellman equations, the proposed EDP approach determines the optimal operation of the EDS by setting the values of the controllable variables at each time period. A suitable definition for the stages of the problem makes it possible to represent the optimal ac power flow of radial EDS as a dynamic programming problem, wherein the 'curse of dimensionality' is a minor concern, since... The approach is illustrated using real-scale systems and comparisons with commercial programming solvers. Finally, generalizations to consider other EDS operation problems are also discussed.
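
    As a rough illustration of the stage-wise idea, the sketch below applies generic backward dynamic programming to a toy problem: choosing a discrete control setting (for example a capacitor-bank step) at each hour to minimize a loss proxy plus a switching penalty. The load profile and cost terms are invented, and the code is not the paper's AC power flow formulation.

      # Generic backward dynamic-programming sketch: choose a discrete control
      # level (e.g. a capacitor-bank step) every hour to minimize stage losses
      # plus a switching penalty. The loss model is a toy stand-in.
      import numpy as np

      T = 24
      levels = np.arange(0, 5)                                   # discrete control settings
      load = 0.6 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, T))    # assumed load profile

      def stage_cost(t, u):
          # toy losses: quadratic mismatch between compensation u and load, per hour
          return (load[t] - 0.2 * u)**2

      switch_cost = 0.05
      INF = float("inf")

      # value[t][i] = minimal cost from stage t to the end, arriving with setting i
      value = np.zeros((T + 1, len(levels)))
      policy = np.zeros((T, len(levels)), dtype=int)

      for t in range(T - 1, -1, -1):
          for i, _ in enumerate(levels):
              best, best_j = INF, 0
              for j, u in enumerate(levels):
                  c = stage_cost(t, u) + switch_cost * abs(j - i) + value[t + 1, j]
                  if c < best:
                      best, best_j = c, j
              value[t, i], policy[t, i] = best, best_j

      # forward pass: recover the optimal schedule starting from setting 0
      u, schedule = 0, []
      for t in range(T):
          u = policy[t, u]
          schedule.append(int(levels[u]))
      print("optimal schedule:", schedule, "cost:", value[0, 0])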

  3. Optimal Mobile Sensing and Actuation Policies in Cyber-physical Systems

    CERN Document Server

    Tricaud, Christophe

    2012-01-01

    A successful cyber-physical system, a complex interweaving of hardware and software in direct interaction with some parts of the physical environment, relies heavily on proper identification of the, often pre-existing, physical elements. Based on information from that process, a bespoke “cyber” part of the system may then be designed for a specific purpose. Optimal Mobile Sensing and Actuation Strategies in Cyber-physical Systems focuses on distributed-parameter systems the dynamics of which can be modelled with partial differential equations. Such systems are very challenging to measure, their states being distributed throughout a spatial domain. Consequently, optimal strategies are needed and systematic approaches to the optimization of sensor locations have to be devised for parameter estimation. The text begins by reviewing the newer field of cyber-physical systems and introducing background notions of distributed parameter systems and optimal observation theory. New research opportunities are then de...

  4. A TYPE Ia SUPERNOVA AT REDSHIFT 1.55 IN HUBBLE SPACE TELESCOPE INFRARED OBSERVATIONS FROM CANDELS

    International Nuclear Information System (INIS)

    Rodney, Steven A.; Riess, Adam G.; Jones, David O.; Dahlen, Tomas; Ferguson, Henry C.; Casertano, Stefano; Grogin, Norman A.; Strolger, Louis-Gregory; Hjorth, Jens; Frederiksen, Teddy F.; Weiner, Benjamin J.; Mobasher, Bahram; Challis, Peter; Kirshner, Robert P.; Faber, S. M.; Filippenko, Alexei V.; Garnavich, Peter; Hayden, Brian; Graur, Or; Jha, Saurabh W.

    2012-01-01

    We report the discovery of a Type Ia supernova (SN Ia) at redshift z = 1.55 with the infrared detector of the Wide Field Camera 3 (WFC3-IR) on the Hubble Space Telescope (HST). This object was discovered in CANDELS imaging data of the Hubble Ultra Deep Field and followed as part of the CANDELS+CLASH Supernova project, comprising the SN search components from those two HST multi-cycle treasury programs. This is the highest redshift SN Ia with direct spectroscopic evidence for classification. It is also the first SN Ia at z > 1 found and followed in the infrared, providing a full light curve in rest-frame optical bands. The classification and redshift are securely defined from a combination of multi-band and multi-epoch photometry of the SN, ground-based spectroscopy of the host galaxy, and WFC3-IR grism spectroscopy of both the SN and host. This object is the first of a projected sample at z > 1.5 that will be discovered by the CANDELS and CLASH programs. The full CANDELS+CLASH SN Ia sample will enable unique tests for evolutionary effects that could arise due to differences in SN Ia progenitor systems as a function of redshift. This high-z sample will also allow measurement of the SN Ia rate out to z ≈ 2, providing a complementary constraint on SN Ia progenitor models.

  5. Existence of optimal controls for systems governed by mean-field ...

    African Journals Online (AJOL)

    In this paper, we study the existence of an optimal control for systems governed by stochastic differential equations of mean-field type. For nonlinear systems, we prove the existence of an optimal relaxed control, by using tightness techniques and the Skorokhod selection theorem. The optimal control is a measure valued process ...

  6. Globally Optimal Segmentation of Permanent-Magnet Systems

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    Permanent-magnet systems are widely used for generation of magnetic fields with specific properties. The reciprocity theorem, an energy-equivalence principle in magnetostatics, can be employed to calculate the optimal remanent flux density of the permanent-magnet system, given any objective... remains unsolved. We show that the problem of optimal segmentation of a two-dimensional permanent-magnet assembly with respect to a linear objective functional can be reduced to the problem of piecewise linear approximation of a plane curve by perimeter maximization. Once the problem has been cast...

  7. Simulation and Optimization of SCR System for Direct-injection Diesel Engine

    Directory of Open Access Journals (Sweden)

    Guanqiang Ruan

    2014-11-01

    Full Text Available The turbo diesel SCR system is researched and analyzed in this paper. Using CATIA, a three-dimensional physical model of the SCR system was established, and with AVL-FIRE the boundary conditions were set and the system was simulated and optimized. In optimizing the SCR system, the spray angle was the main parameter considered; the NO conversion performance of the different configurations was compared to obtain better optimization results. Finally, the optimization results were verified by bench tests, and the experimental results are quite consistent with the simulation.

  8. Hybrid Metaheuristic Approach for Nonlocal Optimization of Molecular Systems.

    Science.gov (United States)

    Dresselhaus, Thomas; Yang, Jack; Kumbhar, Sadhana; Waller, Mark P

    2013-04-09

    Accurate modeling of molecular systems requires a good knowledge of the structure; therefore, conformation searching/optimization is a routine necessity in computational chemistry. Here we present a hybrid metaheuristic optimization (HMO) algorithm, which combines ant colony optimization (ACO) and particle swarm optimization (PSO) for the optimization of molecular systems. The HMO implementation meta-optimizes the parameters of the ACO algorithm on-the-fly by the coupled PSO algorithm. The ACO parameters were optimized on a set of small difluorinated polyenes where the parameters exhibited small variance as the size of the molecule increased. The HMO algorithm was validated by searching for the closed form of around 100 molecular balances. Compared to the gradient-based optimized molecular balance structures, the HMO algorithm was able to find low-energy conformations with an 87% success rate. Finally, the computational effort for generating low-energy conformation(s) for the phenylalanyl-glycyl-glycine tripeptide was approximately 60 CPU hours with the ACO algorithm, in comparison to 4 CPU years required for an exhaustive brute-force calculation.

  9. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Tamer Khatib

    2012-01-01

    Full Text Available This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system in terms of optimal sizing of the PV array and battery storage. A standalone PV system energy flow is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for optimal sizing of the PV array and battery. In deriving the formulas for optimal sizing of the PV array and battery, the data considered are based on five sites in Malaysia, which are Kuala Lumpur, Johor Bharu, Ipoh, Kuching, and Alor Setar. Based on the results of the designed example for a PV system installed in Kuala Lumpur, the proposed method gives satisfactory optimal sizing results.
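
    A minimal sketch of the curve-fitting step described above: simulated sizing points (PV array capacity versus battery capacity at a fixed reliability) are fitted with a quadratic to obtain a closed-form sizing formula. The data pairs are invented for illustration; the paper derives its formulas from Malaysian meteorological data with the MATLAB fitting tool.

      # Sketch of deriving a sizing formula by fitting a curve to simulated
      # sizing points (PV array capacity vs. battery capacity at a fixed
      # reliability). The data pairs below are invented for illustration only.
      import numpy as np

      # hypothetical sizing-curve points: (PV capacity kWp, battery capacity kWh)
      pv  = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
      bat = np.array([9.8, 6.9, 5.3, 4.4, 3.8, 3.4, 3.1])

      # fit battery = a*pv**2 + b*pv + c, analogous to a closed-form sizing formula
      a, b, c = np.polyfit(pv, bat, deg=2)
      print(f"battery ≈ {a:.3f}*PV^2 + {b:.3f}*PV + {c:.3f}")

      # use the fitted formula to size a battery for a 2.2 kWp array
      pv_size = 2.2
      print("suggested battery capacity:", a * pv_size**2 + b * pv_size + c, "kWh")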

  10. Optimization of Simulated Inventory Systems : OptQuest and Alternatives

    OpenAIRE

    Kleijnen, J.P.C.; Wan, J.

    2006-01-01

    This article illustrates simulation optimization through an (s, S) inventory management system. In this system, the goal function to be minimized is the expected value of specific inventory costs. Moreover, specific constraints must be satisfied for some random simulation responses, namely the service or fill rate, and for some deterministic simulation inputs, namely the constraint s < S. The article compares several simulation optimization methods, including the popular OptQuest method. The optimal...
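
    The sketch below illustrates the simulation-optimization setting of the article with a plain Monte Carlo grid search standing in for OptQuest: each (s, S) pair with s < S is simulated, and the cheapest pair meeting a fill-rate constraint is kept. Demand and cost figures are invented placeholders.

      # Simulation-optimization sketch for an (s, S) inventory policy: estimate
      # the expected cost by Monte Carlo for each candidate pair with s < S,
      # then keep the cheapest pair meeting a fill-rate constraint.
      import numpy as np

      rng = np.random.default_rng(3)

      def simulate(s, S, days=2000):
          inv, cost, demand_tot, served = S, 0.0, 0.0, 0.0
          for _ in range(days):
              d = rng.poisson(5)                      # daily demand (assumed)
              sold = min(inv, d)
              served, demand_tot = served + sold, demand_tot + d
              inv -= sold
              cost += 1.0 * inv                       # holding cost
              if inv < s:                             # review: order up to S
                  cost += 30.0 + 2.0 * (S - inv)      # fixed + unit ordering cost
                  inv = S
          return cost / days, served / demand_tot     # avg daily cost, fill rate

      best = None
      for s in range(2, 20):
          for S in range(s + 1, 40):
              c, fill = simulate(s, S)
              if fill >= 0.95 and (best is None or c < best[0]):
                  best = (c, fill, s, S)

      print("best (s, S):", best[2:], "avg cost/day:", round(best[0], 2),
            "fill rate:", round(best[1], 3))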

  11. Optimal sizing method for stand-alone photovoltaic power systems

    Energy Technology Data Exchange (ETDEWEB)

    Groumpos, P P; Papageorgiou, G

    1987-01-01

    The total life-cycle cost of stand-alone photovoltaic (SAPV) power systems is mathematically formulated. A new optimal sizing algorithm for the solar array and battery capacity is developed. The optimum value of a balancing parameter, M, for the optimal sizing of SAPV system components is derived. The proposed optimal sizing algorithm is used in an illustrative example, where a more economical life-cycle cost has been obtained. The question of cost versus reliability is briefly discussed.

  12. Discrete-time inverse optimal control for nonlinear systems

    CERN Document Server

    Sanchez, Edgar N

    2013-01-01

    Discrete-Time Inverse Optimal Control for Nonlinear Systems proposes a novel inverse optimal control scheme for stabilization and trajectory tracking of discrete-time nonlinear systems. This avoids the need to solve the associated Hamilton-Jacobi-Bellman equation and minimizes a cost functional, resulting in a more efficient controller. Design More Efficient Controllers for Stabilization and Trajectory Tracking of Discrete-Time Nonlinear Systems The book presents two approaches for controller synthesis: the first based on passivity theory and the second on a control Lyapunov function (CLF). Th

  13. FREQUENCY OPTIMIZATION FOR SECURITY MONITORING OF COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Bogatyrev V.A.

    2015-03-01

    Full Text Available The subject area of the proposed research is monitoring facilities for the protection of computer systems exposed to destructive attacks of accidental and malicious nature. An interval optimization model of test monitoring for the detection of hazardous security-breach states caused by destructive attacks is proposed. The optimization objective is to maximize the profit from servicing requests under uncertainty and variance in the intensity of the destructive attacks, including penalties when requests are serviced in dangerous conditions. The vector task of maximizing system availability and minimizing the probabilities of its downtime and dangerous states is reduced to a scalar optimization problem based on the criterion of profit maximization from information services (servicing of requests), which integrates these individual criteria. Optimization variants are considered that define an averaged monitoring period and adapt this period to changes in the intensity of the destructive attacks. The efficiency of adapting the monitoring frequency to changes in the activity of the destructive attacks is shown. The proposed solutions can be applied to optimize test monitoring intervals for detecting hazardous security-breach states, which makes it possible to increase system effectiveness and, specifically, to maximize the expected profit from information services.
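
    A schematic one-dimensional version of the interval-optimization idea: the profit rate is written as service revenue minus testing overhead minus an expected penalty for time spent in an undetected dangerous state, and the monitoring period is chosen by a simple scan. All rates and penalties are invented placeholders, not the model of the paper.

      # Sketch of choosing a security-monitoring period: longer intervals waste
      # less capacity on tests but increase the expected time spent in an
      # undetected dangerous state. Every rate and penalty below is invented.
      import numpy as np

      lam_attack = 0.02      # dangerous-state arrivals per minute (assumed)
      test_time  = 0.5       # minutes spent per test
      revenue    = 1.0       # profit per minute of useful service
      penalty    = 40.0      # loss per minute served while in a dangerous state

      def profit_rate(T):
          # fraction of time lost to testing
          overhead = test_time / (T + test_time)
          # a state arising uniformly within an interval stays undetected T/2 on average
          exposed = lam_attack * (T / 2.0)
          return revenue * (1 - overhead) - penalty * exposed * (1 - overhead)

      periods = np.linspace(1.0, 60.0, 600)
      rates = np.array([profit_rate(T) for T in periods])
      print("best monitoring period ≈", round(periods[int(np.argmax(rates))], 1), "min")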

  14. Optimal coherent control of dissipative N-level systems

    International Nuclear Information System (INIS)

    Jirari, H.; Poetz, W.

    2005-01-01

    General optimal coherent control of dissipative N-level systems in the Markovian time regime is formulated within Pontryagin's principle and the Lindblad equation. In the present paper, we study feasibility and limitations of steering of dissipative two-, three-, and four-level systems from a given initial pure or mixed state into a desired final state under the influence of an external electric field. The time evolution of the system is computed within the Lindblad equation and a conjugate gradient method is used to identify optimal control fields. The influence of both field-independent population and polarization decay on achieving the objective is investigated in systematic fashion. It is shown that, for realistic dephasing times, optimum control fields can be identified which drive the system into the target state with very high success rate and in economical fashion, even when starting from a poor initial guess. Furthermore, the optimal fields obtained give insight into the system dynamics. However, if decay rates of the system cannot be subjected to electromagnetic control, the dissipative system cannot be maintained in a specific pure or mixed state, in general
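
    For a flavor of the problem setting, the sketch below propagates a driven, dissipative two-level system under a Lindblad master equation and optimizes a piecewise-constant control amplitude to maximize the final excited-state population; a generic derivative-free optimizer stands in for the conjugate-gradient search of the paper, and all parameter values are illustrative.

      # Sketch of optimal control of a dissipative two-level system: the density
      # matrix evolves under a Lindblad equation with decay rate gamma, the
      # control is a piecewise-constant Rabi amplitude, and scipy's generic
      # optimizer replaces the paper's conjugate-gradient search.
      import numpy as np
      from scipy.optimize import minimize

      sx = np.array([[0, 1], [1, 0]], dtype=complex)
      sz = np.array([[1, 0], [0, -1]], dtype=complex)
      sm = np.array([[0, 1], [0, 0]], dtype=complex)      # lowering operator |g><e|
      gamma, delta = 0.1, 0.5                              # decay rate, detuning (assumed)
      T, nseg = 10.0, 20                                   # horizon, control segments
      dt, substeps = T / nseg, 20

      def lindblad_rhs(rho, u):
          H = 0.5 * delta * sz + u * sx
          dissip = gamma * (sm @ rho @ sm.conj().T
                            - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
          return -1j * (H @ rho - rho @ H) + dissip

      def final_excited_population(u_seq):
          rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the ground state
          h = dt / substeps
          for u in u_seq:                                  # RK4 within each segment
              for _ in range(substeps):
                  k1 = lindblad_rhs(rho, u)
                  k2 = lindblad_rhs(rho + 0.5 * h * k1, u)
                  k3 = lindblad_rhs(rho + 0.5 * h * k2, u)
                  k4 = lindblad_rhs(rho + h * k3, u)
                  rho = rho + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
          return rho[1, 1].real                            # population of excited state |e>

      res = minimize(lambda u: -final_excited_population(u), x0=0.2 * np.ones(nseg),
                     method="Nelder-Mead", options={"maxiter": 2000})
      print("achieved excited-state population:", -res.fun)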

  15. Economic performances optimization of the transcritical Rankine cycle systems in geothermal application

    International Nuclear Information System (INIS)

    Yang, Min-Hsiung; Yeh, Rong-Hua

    2015-01-01

    Highlights: • The optimal economic performance of the TRC system is investigated. • In economic evaluations, R125 performs the most satisfactorily, followed by R41 and CO2. • The TRC system with CO2 has the largest averaged temperature difference. • Economically optimized pressures are always lower than thermodynamically optimized operating pressures. - Abstract: The aim of this study is to investigate the economic optimization of a TRC system for the application of geothermal energy. An economic parameter of net power output index, which is the ratio of net power output to the total cost, is applied to optimize the TRC system using CO2, R41 and R125 as working fluids. The maximum net power output index and the corresponding optimal operating pressures are obtained and evaluated for the TRC system. Furthermore, the analyses of the corresponding averaged temperature differences in the heat exchangers on the optimal economic performances of the TRC system are carried out. The effects of geothermal temperatures on the thermodynamic and economic optimizations are also revealed. In both optimal economic and thermodynamic evaluations, R125 performs the most satisfactorily, followed by R41 and CO2 in the TRC system. In addition, the TRC system operated with CO2 has the largest averaged temperature difference in the heat exchangers and thus has potential in future application for lower-temperature heat resources. The highest working pressures obtained from economic optimization are always lower than those from thermodynamic optimization for CO2, R41, and R125 in the TRC system

  16. Hubble Servicing Challenges Drive Innovation of Shuttle Rendezvous Techniques

    Science.gov (United States)

    Goodman, John L.; Walker, Stephen R.

    2009-01-01

    Hubble Space Telescope (HST) servicing, performed by Space Shuttle crews, has contributed to what is arguably one of the most successful astronomy missions ever flown. Both nominal and contingency proximity operations techniques were developed to enable successful servicing, while lowering the risk of damage to HST systems and improving crew safety. Influencing the development of these techniques were the challenges presented by plume impingement and HST performance anomalies. The design of both the HST and the Space Shuttle was completed before the potential for HST contamination and structural damage by shuttle RCS jet plume impingement was fully understood. Relative navigation during proximity operations has been challenging, as HST was not equipped with relative navigation aids. Since HST reached orbit in 1990, proximity operations design for servicing missions has evolved as insight into plume contamination and dynamic pressure has improved and new relative navigation tools have become available. Servicing missions have provided NASA with opportunities to gain insight into servicing mission design and development of nominal and contingency procedures. The HST servicing experiences and lessons learned are applicable to other programs that perform on-orbit servicing and rendezvous, both human and robotic.

  17. Hubble Diagram Test of Expanding and Static Cosmological Models: The Case for a Slowly Expanding Flat Universe

    Directory of Open Access Journals (Sweden)

    Laszlo A. Marosi

    2013-01-01

    Full Text Available We present a new redshift (RS) versus photon travel time (T) test including 171 supernova RS data points. We extended the Hubble diagram to a range of z = 0.0141–8.1 in the hope that at high RSs, the fitting of the calculated RS/T diagrams to the observed RS data would, as predicted by different cosmological models, set constraints on alternative cosmological models. The Lambda cold dark matter (ΛCDM) model, the static universe model, and the case for a slowly expanding flat universe (SEU) are considered. We show that on the basis of the Hubble diagram test, the static and the slowly expanding models are favored.
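
    For reference, the short sketch below computes the theoretical photon-travel-time (lookback-time) versus redshift relation for a flat ΛCDM model, the kind of curve such a Hubble-diagram test is fitted against; H0 and Omega_m are assumed fiducial values, not results from the paper.

      # Photon travel time (lookback time) vs. redshift for flat LambdaCDM:
      # t(z) = (1/H0) * integral_0^z dz' / ((1+z') * E(z')),  E = sqrt(Om(1+z)^3 + OL)
      import numpy as np
      from scipy.integrate import quad

      H0 = 70.0                                   # km/s/Mpc (assumed fiducial value)
      Om, Ol = 0.3, 0.7
      H0_per_gyr = H0 / 977.8                     # 1 km/s/Mpc ≈ 1/977.8 Gyr^-1

      def E(z):
          return np.sqrt(Om * (1 + z)**3 + Ol)

      def travel_time_gyr(z):
          integral, _ = quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), 0.0, z)
          return integral / H0_per_gyr

      for z in (0.0141, 0.5, 1.0, 2.0, 8.1):
          print(f"z = {z:6.4f}  photon travel time ≈ {travel_time_gyr(z):5.2f} Gyr")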

  18. An updated Type II supernova Hubble diagram

    Science.gov (United States)

    Gall, E. E. E.; Kotak, R.; Leibundgut, B.; Taubenberger, S.; Hillebrandt, W.; Kromer, M.; Burgett, W. S.; Chambers, K.; Flewelling, H.; Huber, M. E.; Kaiser, N.; Kudritzki, R. P.; Magnier, E. A.; Metcalfe, N.; Smith, K.; Tonry, J. L.; Wainscoat, R. J.; Waters, C.

    2018-03-01

    We present photometry and spectroscopy of nine Type II-P/L supernovae (SNe) with redshifts in the 0.045 ≲ z ≲ 0.335 range, with a view to re-examining their utility as distance indicators. Specifically, we apply the expanding photosphere method (EPM) and the standardized candle method (SCM) to each target, and find that both methods yield distances that are in reasonable agreement with each other. The current record-holder for the highest-redshift spectroscopically confirmed supernova (SN) II-P is PS1-13bni (z = 0.335 +0.009/−0.012), and illustrates the promise of Type II SNe as cosmological tools. We updated existing EPM and SCM Hubble diagrams by adding our sample to those previously published. Within the context of Type II SN distance measuring techniques, we investigated two related questions. First, we explored the possibility of utilising spectral lines other than the traditionally used Fe IIλ5169 to infer the photospheric velocity of SN ejecta. Using local well-observed objects, we derive an epoch-dependent relation between the strong Balmer line and Fe IIλ5169 velocities that is applicable 30 to 40 days post-explosion. Motivated in part by the continuum of key observables such as rise time and decline rates exhibited from II-P to II-L SNe, we assessed the possibility of using Hubble-flow Type II-L SNe as distance indicators. These yield similar distances as the Type II-P SNe. Although these initial results are encouraging, a significantly larger sample of SNe II-L would be required to draw definitive conclusions. Tables A.1, A.3, A.5, A.7, A.9, A.11, A.13, A.15 and A.17 are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A25

  19. A multi-objective optimization problem for multi-state series-parallel systems: A two-stage flow-shop manufacturing system

    International Nuclear Information System (INIS)

    Azadeh, A.; Maleki Shoja, B.; Ghanei, S.; Sheikhalishahi, M.

    2015-01-01

    This research investigates a redundancy-scheduling optimization problem for a multi-state series parallel system. The system is a flow shop manufacturing system with multi-state machines. Each manufacturing machine may have different performance rates including perfect performance, decreased performance and complete failure. Moreover, warm standby redundancy is considered for the redundancy allocation problem. Three objectives are considered for the problem: (1) minimizing system purchasing cost, (2) minimizing makespan, and (3) maximizing system reliability. Universal generating function is employed to evaluate system performance and overall reliability of the system. Since the problem is in the NP-hard class of combinatorial problems, genetic algorithm (GA) is used to find optimal/near optimal solutions. Different test problems are generated to evaluate the effectiveness and efficiency of proposed approach and compared to simulated annealing optimization method. The results show the proposed approach is capable of finding optimal/near optimal solution within a very reasonable time. - Highlights: • A redundancy-scheduling optimization problem for a multi-state series parallel system. • A flow shop with multi-state machines and warm standby redundancy. • Objectives are to optimize system purchasing cost, makespan and reliability. • Different test problems are generated and evaluated by a unique genetic algorithm. • It locates optimal/near optimal solution within a very reasonable time

  20. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    International Nuclear Information System (INIS)

    Watkins, E.F.; Greenspan, E.

    1995-01-01

    SWAN is a useful code for optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost for an ICF target chamber shield, and the background radiation for explosive detection systems and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module could handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability

  1. The Great Attractor: At the Limits of Hubble's Law of the Expanding Universe.

    Science.gov (United States)

    Murdin, Paul

    1991-01-01

    Presents the origin and mathematics of Hubble's Law of the expanding universe. Discusses limitations to this law and the related concepts of standard candles, elliptical galaxies, and streaming motions, which are conspicuous deviations from the law. The third of three models proposed as explanations for streaming motions is designated: The Great…

  2. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
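
    A minimal linear-algebra sketch in the spirit of the report: corrector kicks are obtained from a truncated-SVD pseudo-inverse of an orbit response matrix so that the measured orbit at the monitors is cancelled while limiting corrector strength. The response matrix and initial orbit below are random stand-ins, not data from any real machine.

      # Response-matrix orbit correction sketch: corrector strengths are chosen
      # by least squares (truncated-SVD pseudo-inverse) to cancel the measured
      # orbit at the BPMs. R and the initial orbit are random placeholders.
      import numpy as np

      rng = np.random.default_rng(4)
      n_bpm, n_corr = 40, 12
      R = rng.normal(size=(n_bpm, n_corr))        # orbit response to unit kicks
      orbit = rng.normal(scale=1.0, size=n_bpm)   # measured closed-orbit distortion

      # truncated SVD balances correction performance against corrector strength
      U, s, Vt = np.linalg.svd(R, full_matrices=False)
      keep = s > 0.05 * s[0]                      # discard weak singular values
      kicks = -(Vt[keep].T @ ((U[:, keep].T @ orbit) / s[keep]))

      residual = orbit + R @ kicks
      print("rms orbit before:", orbit.std(), "after:", residual.std())
      print("rms corrector strength:", kicks.std())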

  3. Optimal PMU location in power systems using MICA

    Directory of Open Access Journals (Sweden)

    Seyed Abbas Taher

    2016-03-01

    Full Text Available This study presents a modified imperialist competitive algorithm (MICA) for optimal placement of phasor measurement units (PMUs) under normal and contingency conditions of power systems. The optimal PMU placement problem seeks full network observability with the minimum number of PMUs. For this purpose, PMUs are installed at strategic buses. The efficiency of the proposed method is shown by simulation results for the IEEE 14, 30, 57, and 118-bus test systems. Results of the numerical simulations on the IEEE test systems indicate that the proposed technique provides maximum measurement redundancy and the minimum number of PMUs, so that the whole system is topologically observable with PMUs installed on the minimum number of system buses. To verify the proposed method, the results are compared with those of some recently reported methods. When MICA is used for solving the optimal PMU placement (OPP) problem, the number of PMUs is usually equal to or less than that of the other existing methods. The results indicate that MICA is a very fast and accurate algorithm for the OPP solution.
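
    To make the observability constraint concrete, the toy sketch below treats a PMU at a bus as observing that bus and all of its neighbors, and searches exhaustively for the smallest placement that covers every bus of a small made-up 7-bus network; exhaustive enumeration here plays the role that MICA plays on the IEEE test systems.

      # Toy optimal-PMU-placement sketch: a PMU at a bus observes that bus and
      # its neighbors; we search for the smallest placement observing all buses.
      # The 7-bus adjacency list is an invented example network.
      from itertools import combinations

      adj = {1: [2, 5], 2: [1, 3, 6], 3: [2, 4], 4: [3, 7],
             5: [1, 6], 6: [2, 5, 7], 7: [4, 6]}
      buses = list(adj)

      def observed(placement):
          seen = set()
          for b in placement:
              seen.add(b)
              seen.update(adj[b])
          return seen

      for k in range(1, len(buses) + 1):
          solutions = [c for c in combinations(buses, k) if observed(c) == set(buses)]
          if solutions:
              print(f"minimum PMUs: {k}, e.g. placement {solutions[0]}")
              print(f"{len(solutions)} optimal placements found")
              break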

  4. Optimal Trajectories Generation in Robotic Fiber Placement Systems

    Science.gov (United States)

    Gao, Jiuchun; Pashkevich, Anatol; Caro, Stéphane

    2017-06-01

    The paper proposes a methodology for optimal trajectory generation in robotic fiber placement systems. A strategy to tune the parameters of the optimization algorithm at hand is also introduced. The presented technique transforms the original continuous problem into a discrete one where the time-optimal motions are generated by using dynamic programming. The developed tuning strategy allows the computing time to be reduced substantially and yields trajectories satisfying industrial constraints. The feasibility and advantages of the proposed methodology are confirmed by an application example.

  5. New solution to the problem of the tension between the high-redshift and low-redshift measurements of the Hubble constant

    Science.gov (United States)

    Bolejko, Krzysztof

    2018-01-01

    During my talk I will present results suggesting that the phenomenon of emerging spatial curvature could resolve the conflict between Planck's (high-redshift) and Riess et al. (low-redshift) measurements of the Hubble constant. The phenomenon of emerging spatial curvature is absent in the Standard Cosmological Model, which has a flat and fixed spatial curvature (small perturbations are considered in the Standard Cosmological Model but their global average vanishes, leading to spatial flatness at all times). In my talk I will show that with the nonlinear growth of cosmic structures the global average deviates from zero. As a result, the spatial curvature evolves from spatial flatness of the early universe to a negatively curved universe at the present day, with Omega_K ~ 0.1. Consequently, the present day expansion rate, as measured by the Hubble constant, is a few percent higher compared to the high-redshift constraints. This explains why there is a tension between high-redshift (Planck) and low-redshift (Riess et al.) measurements of the Hubble constant. In the presence of emerging spatial curvature these two measurements should in fact be different: high-redshift measurements should be slightly lower than the Hubble constant inferred from the low-redshift data. The presentation will be based on the results described in arXiv:1707.01800 and arXiv:1708.09143 (which discuss the phenomenon of emerging spatial curvature) and on a paper that is still work in progress but is expected to be posted on arXiv by the AAS meeting (this paper uses mock low-redshift data to show that, starting from the Planck cosmological model in the early universe but with the emerging spatial curvature taken into account, the low-redshift Hubble constant should be 72.4 km/s/Mpc).

  6. The Carnegie-Chicago Hubble Program. I. An Independent Approach to the Extragalactic Distance Scale Using Only Population II Distance Indicators

    Science.gov (United States)

    Beaton, Rachael L.; Freedman, Wendy L.; Madore, Barry F.; Bono, Giuseppe; Carlson, Erika K.; Clementini, Gisella; Durbin, Meredith J.; Garofalo, Alessia; Hatt, Dylan; Jang, In Sung; Kollmeier, Juna A.; Lee, Myung Gyoon; Monson, Andrew J.; Rich, Jeffrey A.; Scowcroft, Victoria; Seibert, Mark; Sturch, Laura; Yang, Soung-Chul

    2016-12-01

    We present an overview of the Carnegie-Chicago Hubble Program, an ongoing program to obtain a 3% measurement of the Hubble constant (H 0) using alternative methods to the traditional Cepheid distance scale. We aim to establish a completely independent route to H 0 using RR Lyrae variables, the tip of the red giant branch (TRGB), and Type Ia supernovae (SNe Ia). This alternative distance ladder can be applied to galaxies of any Hubble type, of any inclination, and, using old stars in low-density environments, is robust to the degenerate effects of metallicity and interstellar extinction. Given the relatively small number of SNe Ia host galaxies with independently measured distances, these properties provide a great systematic advantage in the measurement of H 0 via the distance ladder. Initially, the accuracy of our value of H 0 will be set by the five Galactic RR Lyrae calibrators with Hubble Space Telescope Fine-Guidance Sensor parallaxes. With Gaia, both the RR Lyrae zero-point and TRGB method will be independently calibrated, the former with at least an order of magnitude more calibrators and the latter directly through parallax measurement of tip red giants. As the first end-to-end “distance ladder” completely independent of both Cepheid variables and the Large Magellanic Cloud, this path to H 0 will allow for the high-precision comparison at each rung of the traditional distance ladder that is necessary to understand tensions between this and other routes to H 0. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs #13472 and #13691.

  7. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    Energy Technology Data Exchange (ETDEWEB)

    Dall' Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.

  8. The Optimal Steering Control System using Imperialist Competitive Algorithm on Vehicles with Steer-by-Wire System

    Directory of Open Access Journals (Sweden)

    F. Hunaini

    2015-03-01

    Full Text Available Steer-by-wire is an electrical steering system for vehicles; with the development of an optimal control system it is expected to improve the dynamic performance of the vehicle. This paper aims to optimize the control systems, namely Fuzzy Logic Control (FLC) and Proportional, Integral and Derivative (PID) control, of the vehicle steering system using the Imperialist Competitive Algorithm (ICA). The control systems are built in a cascade: the FLC suppresses errors in the lateral motion and the PID control minimizes the error in the yaw motion of the vehicle. The FLC has two inputs (error and delta error) and a single output. Each input and output consists of three Membership Functions (MF): one triangular MF for the linguistic term "zero" and two trapezoidal MFs for the linguistic terms "negative" and "positive". In order to work optimally, each MF is optimized using ICA to obtain the most appropriate position and width. Likewise, in the PID control, the constants of the Proportional, Integral and Derivative terms are also optimized using ICA, so six parameters of the control system are simultaneously optimized by ICA. Simulations were performed on a vehicle model with 10 Degrees Of Freedom (DOF); the plant input uses steering variables expressed as the desired trajectory, and the plant outputs are the lateral and yaw motion. The simulation results showed that the FLC-PID control system optimized using ICA can maintain the movement of the vehicle along the desired trajectory with lower error and at higher speed limits than when optimized with Particle Swarm Optimization (PSO).

  9. Optimal pole shifting controller for interconnected power system

    International Nuclear Information System (INIS)

    Yousef, Ali M.; Kassem, Ahmed M.

    2011-01-01

    Research highlights: → Mathematical model represents a power system which consists of synchronous machine connected to infinite bus through transmission line. → Power system stabilizer was designed based on optimal pole shifting controller. → The system performance was tested through load disturbances at different operating conditions. → The system performance with the proposed optimal pole shifting controller is compared with the conventional pole placement controller. → The digital simulation results indicated that the proposed controller has a superior performance. -- Abstract: Power system stabilizer based on optimal pole shifting is proposed. An approach for shifting the real parts of the open-loop poles to any desired positions while preserving the imaginary parts is presented. In each step of this approach, it is required to solve a first-order or a second-order linear matrix Lyapunov equation for shifting one real pole or two complex conjugate poles, respectively. This presented method yields a solution, which is optimal with respect to a quadratic performance index. The attractive feature of this method is that it enables solutions of the complex problem to be easily found without solving any non-linear algebraic Riccati equation. The present power system stabilizer is based on Riccati equation approach. The control law depends on finding the feedback gain matrix, and then the control signal is synthesized by multiplying the state variables of the power system with determined gain matrix. The gain matrix is calculated one time only, and it works over wide range of operating conditions. To validate the power of the proposed PSS, a linearized model of a simple power system consisting of a single synchronous machine connected to infinite bus bar through transmission line is simulated. The studied power system is subjected to various operating points and power system parameter changes.

  10. Optimal pole shifting controller for interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Yousef, Ali M., E-mail: drali_yousef@yahoo.co [Electrical Eng. Dept., Faculty of Engineering, Assiut University (Egypt); Kassem, Ahmed M., E-mail: kassem_ahmed53@hotmail.co [Control Technology Dep., Industrial Education College, Beni-Suef University (Egypt)

    2011-05-15

    Research highlights: → Mathematical model represents a power system which consists of synchronous machine connected to infinite bus through transmission line. → Power system stabilizer was designed based on optimal pole shifting controller. → The system performance was tested through load disturbances at different operating conditions. → The system performance with the proposed optimal pole shifting controller is compared with the conventional pole placement controller. → The digital simulation results indicated that the proposed controller has a superior performance. -- Abstract: Power system stabilizer based on optimal pole shifting is proposed. An approach for shifting the real parts of the open-loop poles to any desired positions while preserving the imaginary parts is presented. In each step of this approach, it is required to solve a first-order or a second-order linear matrix Lyapunov equation for shifting one real pole or two complex conjugate poles, respectively. This presented method yields a solution, which is optimal with respect to a quadratic performance index. The attractive feature of this method is that it enables solutions of the complex problem to be easily found without solving any non-linear algebraic Riccati equation. The present power system stabilizer is based on Riccati equation approach. The control law depends on finding the feedback gain matrix, and then the control signal is synthesized by multiplying the state variables of the power system with determined gain matrix. The gain matrix is calculated one time only, and it works over wide range of operating conditions. To validate the power of the proposed PSS, a linearized model of a simple power system consisting of a single synchronous machine connected to infinite bus bar through transmission line is simulated. The studied power system is subjected to various operating points and power system parameter changes.

  11. Selections from 2017: Hubble Survey Explores Distant Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2017-12-01

    Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January. CANDELS Multi-Wavelength Catalogs: Source Identification and Photometry in the CANDELS COSMOS Survey Field. Published January 2017. Main takeaway: A publication led by Hooshang Nayyeri (UC Irvine and UC Riverside) early this year details a catalog of sources built using the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), a survey carried out by cameras on board the Hubble Space Telescope. The catalog lists the properties of 38,000 distant galaxies visible within the COSMOS field, a two-square-degree equatorial field explored in depth to answer cosmological questions. Why it's interesting: [Figure caption: Illustration showing the three-dimensional map of the dark matter distribution in the COSMOS field. Adapted from NASA/ESA/R. Massey (California Institute of Technology).] The depth and resolution of the CANDELS observations are useful for addressing several major science goals, including the following: studying the most distant objects in the universe at the epoch of reionization in the cosmic dawn; understanding galaxy formation and evolution during the peak epoch of star formation in the cosmic high noon; and studying star formation from deep ultraviolet observations and cosmology from supernova observations. Why CANDELS is a major endeavor: CANDELS is the largest multi-cycle treasury program ever approved on the Hubble Space Telescope, using over 900 orbits between 2010 and 2013 with two cameras on board the spacecraft to study galaxy formation and evolution throughout cosmic time. The CANDELS images are all publicly available, and the new catalog represents an enormous source of information about distant objects in our universe. Citation: H. Nayyeri et al 2017 ApJS 228 7. doi:10.3847/1538-4365/228/1/7

  12. Optimization of Regenerators for AMRR Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nellis, Gregory [University of Wisconsin, Madison, WI (United States); Klein, Sanford [University of Wisconsin, Madison, WI (United States); Brey, William [University of Wisconsin, Madison, WI (United States); Moine, Alexandra [University of Wisconsin, Madison, WI (United States); Nielson, Kaspar [University of Wisconsin, Madison, WI (United States)

    2015-06-18

    Active Magnetic Regenerative Refrigeration (AMRR) systems have no direct global warming potential or ozone depletion potential and hold the potential for providing refrigeration with efficiencies that are equal to or greater than those of the vapor compression systems used today. The work carried out in this project has developed and improved modeling tools that can be used to optimize and evaluate the magnetocaloric materials and geometric structure of the regenerator beds required for AMRR systems. There has been an explosion in the development of magnetocaloric materials for AMRR systems over the past few decades. The most attractive materials, based on the magnitude of the measured magnetocaloric effect, tend to also have large amounts of hysteresis. This project has provided for the first time a thermodynamically consistent method for evaluating these hysteretic materials in the context of an AMRR cycle. An additional, practical challenge that has been identified for AMRR systems is related to the participation of the regenerator wall in the cyclic process. The impact of housing heat capacity on both passive and active regenerative systems has been studied and clarified within this project. This report is divided into two parts corresponding to these two efforts. Part 1 describes the work related to modeling magnetic hysteresis while Part 2 discusses the modeling of the heat capacity of the housing. A key outcome of this project is the development of a publicly available modeling tool that allows researchers to identify a truly optimal magnetocaloric refrigerant. Typically, the refrigeration potential of a magnetocaloric material is judged entirely on the magnitude of the magnetocaloric effect, while other properties of the material are deemed unimportant. This project has shown that a material with a large magnetocaloric effect (as evidenced, for example, by a large adiabatic temperature change) may not be optimal when it is accompanied by a large hysteresis.

  13. Stochastic network optimization with application to communication and queueing systems

    CERN Document Server

    Neely, Michael

    2010-01-01

    This text presents a modern theory of analysis, control, and optimization for dynamic networks. Mathematical techniques of Lyapunov drift and Lyapunov optimization are developed and shown to enable constrained optimization of time averages in general stochastic systems. The focus is on communication and queueing systems, including wireless networks with time-varying channels, mobility, and randomly arriving traffic. A simple drift-plus-penalty framework is used to optimize time averages such as throughput, throughput-utility, power, and distortion. Explicit performance-delay tradeoffs are prov
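
    The following sketch illustrates the drift-plus-penalty rule described above on a single queue: in each slot the transmit power is chosen to minimize V·power − Q·rate(power), so that a larger V trades queue backlog for lower average power. The arrival rate, power levels and rate curve are invented for illustration.

      # Drift-plus-penalty sketch for a single queue: each slot we pick a power
      # level minimizing V*power - Q*rate(power); larger V favors lower average
      # power at the price of larger backlog. All numbers are illustrative.
      import numpy as np

      rng = np.random.default_rng(5)
      powers = np.array([0.0, 0.5, 1.0, 2.0])
      rate = np.log1p(powers)               # concave rate(power) curve (assumed)

      def run(V, slots=20000, arrival_rate=0.45):
          Q, used_power = 0.0, 0.0
          for _ in range(slots):
              a = rng.poisson(arrival_rate)
              idx = np.argmin(V * powers - Q * rate)   # drift-plus-penalty decision
              Q = max(Q - rate[idx], 0.0) + a
              used_power += powers[idx]
          return used_power / slots, Q

      for V in (1, 10, 100):
          avg_p, backlog = run(V)
          print(f"V={V:4d}  average power={avg_p:.3f}  final backlog={backlog:.1f}")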

  14. Optimal design of integrated CHP systems for housing complexes

    International Nuclear Information System (INIS)

    Fuentes-Cortés, Luis Fabián; Ponce-Ortega, José María; Nápoles-Rivera, Fabricio; Serna-González, Medardo; El-Halwagi, Mahmoud M.

    2015-01-01

    Highlights: • An optimization formulation for designing domestic CHP systems is presented. • The operating scheme, prime mover and thermal storage system are optimized. • Weather conditions and behavior demands are considered. • Simultaneously economic and environmental objectives are considered. • Two case studies from Mexico are presented. - Abstract: This paper presents a multi-objective optimization approach for designing residential cogeneration systems based on a new superstructure that allows satisfying the demands of hot water and electricity at the minimum cost and the minimum environmental impact. The optimization involves the selection of technologies, size of required units and operating modes of equipment. Two residential complexes in different cities of the State of Michoacán in Mexico were considered as case studies. One is located on the west coast and the other one is in the mountainous area. The results show that the implementation of the proposed optimization method yields significant economic and environmental benefits due to the simultaneous reduction in the total annual cost and overall greenhouse gas emissions

  15. Probing the z > 6 Universe with the First Hubble Frontier Fields Cluster A2744

    Science.gov (United States)

    Atek, Hakim; Richard, Johan; Kneib, Jean-Paul; Clement, Benjamin; Egami, Eiichi; Ebeling, Harald; Jauzac, Mathilde; Jullo, Eric; Laporte, Nicolas; Limousin, Marceau; Natarajan, Priyamvada

    2014-05-01

    The Hubble Frontier Fields program combines the capabilities of the Hubble Space Telescope (HST) with the gravitational lensing of massive galaxy clusters to probe the distant universe to an unprecedented depth. Here, we present the results of the first combined HST and Spitzer observations of the cluster A-2744. We combine the full near-infrared data with ancillary optical images to search for gravitationally lensed high-redshift (z >~ 6) galaxies. We report the detection of 15 I814-dropout candidates at z ~ 6-7 and one Y105 dropout at z ~ 8 in a total survey area of 1.43 arcmin² in the source plane. The predictions of our lens model also allow us to identify five multiply imaged systems lying at redshifts between z ~ 6 and z ~ 8. Thanks to constraints from the mass distribution in the cluster, we were able to estimate the effective survey volume corrected for completeness and magnification effects. This was in turn used to estimate the rest-frame ultraviolet luminosity function (LF) at z ~ 6-8. Our LF results are generally in agreement with the most recent blank field estimates, confirming the feasibility of surveys through lensing clusters. Although based on shallower observations than will be achieved in the final data set including the full Advanced Camera for Surveys observations, the LF presented here goes down to M_UV ~ -18.5, corresponding to 0.2 L* at z ~ 7, with one identified object at M_UV ~ -15 thanks to the highly magnified survey areas. This early study forecasts the power of using massive galaxy clusters as cosmic telescopes and its complementarity to blank fields. Based on observations made with the NASA/ESA Hubble Space Telescope (HST), which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 13495 and 11689. Based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory

  16. Optimal relaxed causal sampler using sampled-data system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  17. System for optimizing activation measurements

    International Nuclear Information System (INIS)

    Antonov, V.A.

    1993-01-01

    Optimization procedures make it possible to perform committed activation investigations, reduce the number of experiments, make them less laborious, and increase their productivity. Separate mathematical functions were investigated for given optimization conditions, and these enable numerical optimal parameter values to be established only in the particular cases of specific techniques and mathematical computer programs. In the known mathematical models insufficient account is taken of the variety and complexity of real nuclide mixtures, the influence of background radiation, and the wide diversity of activation measurement conditions, while numerical methods for solving the optimization problem fail to reveal the laws governing the variations of the activation parameters and their functional interdependences. An optimization method was proposed earlier which was mainly used to estimate the time intervals for activation measurements of a mononuclide, binary or ternary nuclide mixture. However, by forming a mathematical model of activation processes it becomes possible to extend the number of nuclides in the mixture and to take account of the influence of background radiation and the diversity of the measurement alternatives. The analytical expressions and nomograms obtained can be used to determine the number of measurements, their minimum errors, their sensitivities when estimating the quantity of the tracer nuclide, the permissible quantity of interfering nuclides, the permissible background radiation intensity, and the flux of activating radiation. In the work described herein these investigations are generalized to include spectrally resolved detection of the activation effect in the presence of the tracer and the interfering nuclides. The analytical expressions are combined into a system from which the optimal activation parameters can be found under different given conditions

  18. Optimal energy management of HEVs with hybrid storage system

    International Nuclear Information System (INIS)

    Vinot, E.; Trigui, R.

    2013-01-01

    Highlights: • A battery and ultra-capacitor system for a parallel hybrid vehicle is considered. • Optimal management using Pontryagin's minimum principle is developed. • Battery stress limitation is taken into account by means of RMS current. • Rule-based management approaching the optimal control is proposed. • A comparison between rule-based and optimal management is proposed using a Pareto front. - Abstract: Energy storage systems are a key point in the design and development of electric and hybrid vehicles. In order to reduce the battery size and its current stress, a hybrid storage system, where a battery is coupled with an electrical double-layer capacitor (EDLC), is considered in this paper. The energy management of such a configuration is not obvious and the optimal operation concerning the energy consumption and battery RMS current has to be identified. Most of the past work on the optimal energy management of HEVs only considered one additional power source. In this paper, the control of a hybrid vehicle with a hybrid storage system (HSS), where two additional power sources are used, is presented. Applying Pontryagin's minimum principle, an optimal energy management strategy is found and compared to a rule-based parameterized control strategy. Simulation results are shown and discussed. Applied to a small compact car, the optimal and rule-based methods show that gains in fuel consumption and/or battery RMS current of more than 15% may be obtained. The paper also proves that a well tuned rule-based algorithm presents rather good performance when compared to the optimal strategy and remains relevant for different driving cycles. This rule-based algorithm may easily be implemented in a vehicle prototype or in an HIL test bench

  19. The Hubble Constant from SN Refsdal

    Science.gov (United States)

    Vega-Ferrero, J.; Diego, J. M.; Miranda, V.; Bernstein, G. M.

    2018-02-01

    Hubble Space Telescope observations from 2015 December 11 detected the expected fifth counter-image of supernova (SN) Refsdal at z = 1.49. In this Letter, we compare the time-delay predictions from numerous models with the measured value derived by Kelly et al. from very early data in the light curve of the SN Refsdal and find a best value of H0 = 64 +9/−11 km/s/Mpc (68% CL), in excellent agreement with predictions from cosmic microwave background and recent weak lensing data + baryon acoustic oscillations + Big Bang nucleosynthesis (from the DES Collaboration). This is the first constraint on H0 derived from time delays between multiple-lensed SN images, and the first with a galaxy cluster lens, subject to systematic effects different from other time-delay H0 estimates. Additional time-delay measurements from new multiply imaged SNe will allow derivation of competitive constraints on H0.

  20. Hubble diagram as a probe of minicharged particles

    International Nuclear Information System (INIS)

    Ahlers, Markus

    2009-01-01

    The luminosity-redshift relation of cosmological standard candles provides information about the relative energy composition of our Universe. In particular, the observation of type Ia supernovae up to a redshift of z∼2 indicates a universe which is dominated today by dark matter and dark energy. The propagation distance of light from these sources is of the order of the Hubble radius and serves as a very sensitive probe of feeble inelastic photon interactions with background matter, radiation, or magnetic fields. In this paper we discuss the limits on minicharged particle models arising from a dimming effect in supernova surveys. We briefly speculate about a strong dimming effect as an alternative to dark energy.

  1. WHITE DWARF-RED DWARF SYSTEMS RESOLVED WITH THE HUBBLE SPACE TELESCOPE. II. FULL SNAPSHOT SURVEY RESULTS

    International Nuclear Information System (INIS)

    Farihi, J.; Hoard, D. W.; Wachter, S.

    2010-01-01

    Results are presented for a Hubble Space Telescope Advanced Camera for Surveys high-resolution imaging campaign of 90 white dwarfs with known or suspected low-mass stellar and substellar companions. Of the 72 targets that remain candidate and confirmed white dwarfs with near-infrared excess, 43 are spatially resolved into two or more components, and a total of 12 systems are potentially triples. For 68 systems where a comparison is possible, 50% have significant photometric distance mismatches between their white dwarf and M dwarf components, suggesting that white dwarf parameters derived spectroscopically are often biased due to the cool companion. Interestingly, 9 of the 30 binaries known to have emission lines are found to be visual pairs and hence widely separated, indicating an intrinsically active cool star and not irradiation from the white dwarf. There is a possible, slight deficit of earlier spectral types (bluer colors) among the spatially unresolved companions, exactly the opposite of expectations if significant mass is transferred to the companion during the common envelope phase. Using the best available distance estimates, the low-mass companions to white dwarfs exhibit a bimodal distribution in projected separation. This result supports the hypothesis that during the giant phases of the white dwarf progenitor, any unevolved companions either migrate inward to short periods of hours to days, or outward to periods of hundreds to thousands of years. No intermediate projected separations of a few to several AU are found among these pairs. However, a few double M dwarfs (within triples) are spatially resolved in this range, empirically demonstrating that such separations were readily detectable among the binaries with white dwarfs. A straightforward and testable prediction emerges: all spatially unresolved, low-mass stellar and substellar companions to white dwarfs should be in short-period orbits. This result has implications for substellar companion and

  2. A New Approach for Optimal Sizing of Standalone Photovoltaic Systems

    OpenAIRE

    Khatib, Tamer; Mohamed, Azah; Sopian, K.; Mahmoud, M.

    2012-01-01

    This paper presents a new method for determining the optimal sizing of a standalone photovoltaic (PV) system in terms of the optimal sizing of the PV array and battery storage. The energy flow of a standalone PV system is first analysed, and the MATLAB fitting tool is used to fit the resultant sizing curves in order to derive general formulas for the optimal sizing of the PV array and battery. In deriving the formulas for the optimal sizing of the PV array and battery, the data considered are based on five sites in Malaysia...

  3. Analysis and optimization of hybrid electric vehicle thermal management systems

    Science.gov (United States)

    Hamut, H. S.; Dincer, I.; Naterer, G. F.

    2014-02-01

    In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, ¢28 h^-1 and 77.3 mPts h^-1, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to the baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.

  4. Optimal robust control strategy of a solid oxide fuel cell system

    Science.gov (United States)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on the system's instantaneous performance. In real SOFC systems, several parameters may vary with the operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model-building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed robust optimal control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.

  5. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  6. HUBBLE'S ULTRAVIOLET VIEWS OF NEARBY GALAXIES YIELD CLUES TO EARLY UNIVERSE

    Science.gov (United States)

    2002-01-01

    Astronomers are using these three NASA Hubble Space Telescope images to help tackle the question of why distant galaxies have such odd shapes, appearing markedly different from the typical elliptical and spiral galaxies seen in the nearby universe. Do faraway galaxies look weird because they are truly weird? Or, are they actually normal galaxies that look like oddballs, because astronomers are getting an incomplete picture of them, seeing only the brightest pieces? Light from these galaxies travels great distances (billions of light-years) to reach Earth. During its journey, the light is 'stretched' due to the expansion of space. As a result, the light is no longer visible, but has been shifted to the infrared where present instruments are less sensitive. About the only light astronomers can see comes from regions where hot, young stars reside. These stars emit mostly ultraviolet light. But this light is stretched, appearing as visible light by the time it reaches Earth. Studying these distant galaxies is like trying to put together a puzzle with some of the pieces missing. What, then, do distant galaxies really look like? Astronomers studied 37 nearby galaxies to find out. By viewing these galaxies in ultraviolet light, astronomers can compare their shapes with those of their distant relatives. These three Hubble telescope pictures, taken with the Wide Field and Planetary Camera 2, represent a sampling from that survey. Astronomers observed the galaxies in ultraviolet and visible light to study all the stars that make up these 'cities of stars.' The results of their survey support the idea that astronomers are detecting the 'tip of the iceberg' of very distant galaxies. Based on these Hubble ultraviolet images, not all the faraway galaxies necessarily possess intrinsically odd shapes. The results are being presented today at the 197th meeting of the American Astronomical Society in San Diego, CA. The central region of the 'star-burst' spiral galaxy at far left

  7. Hubble 3D: A Science and Hollywood Collaboration Made (Nearly) in Heaven

    Science.gov (United States)

    Showstack, Randy

    2010-04-01

    Just 2 days after the 2010 Academy Awards® ceremony in early March bestowed Oscars® for motion picture achievements, NASA deputy administrator Lori Garver touted a new film about the Hubble Space Telescope, Hubble 3D, for best drama, special effects, screenplay, actors and actress, and director and producer. The 43-minute IMAX and Warner Brothers Pictures production, which opened in theaters on 19 March, is an example of the ability of Hollywood and the science community to partner in providing a dynamic educational and entertaining product, according to a number of people associated with the film. Sharing the red carpet at the Smithsonian National Air and Space Museum in Washington, D. C., with astronauts and others to mark the world premiere, Garver said the film shows the drama of the astronauts’ efforts to repair the telescope while traveling 17,000 miles per hour and performing grueling space walks (see Figure 1). “We have literally opened our eyes on the universe through this telescope,” she said. “This is a taxpayer-funded agency, and we are giving back to the public the very story that they paid for.”

  8. Optimization criteria for control and instrumentation systems in nuclear power plants

    International Nuclear Information System (INIS)

    Gonzalez, A.J.

    1978-01-01

    The system of dose limitation recently recommended by the International Commission on Radiation Protection includes, as a base for deciding what is reasonably achievable in dose reduction, the optimization of radioprotection systems. This paper, after compiling relevant points in the new system, discusses the application of optimization to control and instrumentation of radioprotection systems in nuclear power plants. Furthermore, an extension of the optimization criterion to nuclear safety systems is also presented and its application to control and instrumentation is discussed; systems including majority logics are particularly scrutinized. Finally, eventual regulatory implications are described. (author)

  9. Testing the Interacting Dark Energy Model with Cosmic Microwave Background Anisotropy and Observational Hubble Data

    Directory of Open Access Journals (Sweden)

    Weiqiang Yang

    2017-07-01

    Full Text Available The coupling between dark energy and dark matter provides a possible approach to mitigate the coincidence problem of the cosmological standard model. In this paper, we assumed the interacting term was related to the Hubble parameter, the energy density of dark energy, and the equation of state of dark energy. The interaction rate between dark energy and dark matter was governed by a constant parameter, i.e., Q = 3Hξ(1 + w_x)ρ_x. Based on the Markov chain Monte Carlo method, we made a global fit of the interacting dark energy model to Planck 2015 cosmic microwave background anisotropy and observational Hubble data. We found that the observational data sets slightly favored a small interaction rate between dark energy and dark matter; however, there was no obvious evidence of interaction at the 1σ level.
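
    For reference, the background continuity equations that usually accompany this type of coupling take the standard form below (quoted for orientation; the sign convention for Q, i.e., the direction of the energy transfer, differs between papers and is an assumption here):

        \dot{\rho}_x + 3H\,(1 + w_x)\,\rho_x = -Q, \qquad
        \dot{\rho}_c + 3H\,\rho_c = +Q, \qquad
        Q = 3H\,\xi\,(1 + w_x)\,\rho_x ,

    where \rho_x and \rho_c denote the dark energy and dark matter densities, respectively.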

  10. Comments on `A discrete optimal control problem for descriptor systems'

    DEFF Research Database (Denmark)

    Ravn, Hans

    1990-01-01

    In the above-mentioned work (see ibid., vol.34, p.177-81 (1989)), necessary and sufficient optimality conditions are derived for a discrete-time optimal control problem, as well as other specific cases of implicit and explicit dynamic systems. The commenter corrects a mistake and demonstrates that there...

  11. Game-theoretic learning and distributed optimization in memoryless multi-agent systems

    CERN Document Server

    Tatarenko, Tatiana

    2017-01-01

    This book presents new efficient methods for optimization in realistic large-scale, multi-agent systems. These methods do not require the agents to have full information about the system, but instead allow them to make their local decisions based only on local information, possibly obtained during communication with their local neighbors. The book, primarily aimed at researchers in optimization and control, considers three different information settings in multi-agent systems: oracle-based, communication-based, and payoff-based. For each of these information types, an efficient optimization algorithm is developed, which leads the system to an optimal state. The optimization problems are set without such restrictive assumptions as convexity of the objective functions, complicated communication topologies, closed-form expressions for costs and utilities, and finiteness of the system's state space.

  12. Optimal design of power system stabilizer for power systems including doubly fed induction generator wind turbines

    International Nuclear Information System (INIS)

    Derafshian, Mehdi; Amjady, Nima

    2015-01-01

    This paper presents an evolutionary algorithm-based approach for optimal design of power system stabilizer (PSS) for multi-machine power systems that include doubly fed induction generator wind turbines. The proposed evolutionary algorithm is an improved particle swarm optimization named chaotic particle swarm optimization with passive congregation (CPSO-PC) applied for finding the optimal settings of PSS parameters. Two different eigenvalue-based objectives are combined as the objective function for the optimization problem of tuning PSS parameters. The first objective function comprises the damping factor of lightly damped electro-mechanical modes and the second one includes the damping ratio of these modes. The effectiveness of the proposed method to design PSS for the power systems including DFIG (Doubly Fed Induction Generator) is extensively demonstrated through eigenvalue analysis and time-domain simulations and also by comparing its simulation results with the results of other heuristic optimization approaches. - Highlights: • A new optimization model for design of PSS in power systems including DFIG is proposed. • A detailed and realistic modeling of DFIG is presented. • A new evolutionary algorithm is suggested for solving the optimization problem of designing PSS

  13. Optimal boundary control and boundary stabilization of hyperbolic systems

    CERN Document Server

    Gugat, Martin

    2015-01-01

    This brief considers recent results on optimal control and stabilization of systems governed by hyperbolic partial differential equations, specifically those in which the control action takes place at the boundary.  The wave equation is used as a typical example of a linear system, through which the author explores initial boundary value problems, concepts of exact controllability, optimal exact control, and boundary stabilization.  Nonlinear systems are also covered, with the Korteweg-de Vries and Burgers Equations serving as standard examples.  To keep the presentation as accessible as possible, the author uses the case of a system with a state that is defined on a finite space interval, so that there are only two boundary points where the system can be controlled.  Graduate and post-graduate students as well as researchers in the field will find this to be an accessible introduction to problems of optimal control and stabilization.

  14. Simulation-based optimization of sustainable national energy systems

    International Nuclear Information System (INIS)

    Batas Bjelić, Ilija; Rajaković, Nikola

    2015-01-01

    The goals of the EU2030 energy policy should be achieved cost-effectively by employing the optimal mix of supply and demand side technical measures, including energy efficiency, renewable energy and structural measures. In this paper, the achievement of these goals is modeled by introducing an innovative method of soft-linking of EnergyPLAN with the generic optimization program (GenOpt). This soft-link enables simulation-based optimization, guided by the chosen optimization algorithm, rather than manual adjustments of the decision vectors. In order to obtain EnergyPLAN simulations within the optimization loop of GenOpt, the decision vectors should be chosen and explained in GenOpt for scenarios created in EnergyPLAN. The result of the optimization loop is an optimal national energy master plan (as a case study, the energy policy of Serbia was taken), followed by a sensitivity analysis of the exogenous assumptions and with focus on the contribution of the smart electricity grid to the achievement of the EU2030 goals. It is shown that the increase in the policy-induced total costs of less than 3% is not significant. This general method could be further improved and used worldwide in the optimal planning of sustainable national energy systems. - Highlights: • Innovative method of soft-linking of EnergyPLAN with GenOpt has been introduced. • Optimal national energy master plan has been developed (the case study for Serbia). • Sensitivity analysis on the exogenous world energy and emission price development outlook. • Focus on the contribution of smart energy systems to the EU2030 goals. • Innovative soft-linking methodology could be further improved and used worldwide.

  15. Development of nickel hydrogen battery expert system

    Science.gov (United States)

    Shiva, Sajjan G.

    1990-01-01

    The Hubble Telescope Battery Testbed employs the nickel-cadmium battery expert system (NICBES-2), which supports the evaluation of the performance of the Hubble Telescope spacecraft batteries and provides alarm diagnosis and action advice. NICBES-2 also provides a reasoning system along with a battery domain knowledge base to achieve this battery health management function. An effort to modify NICBES-2 to accommodate the nickel-hydrogen battery environment in the testbed is described.

  16. Optimization of a polygeneration system for energy demands of a livestock farm

    Directory of Open Access Journals (Sweden)

    Mančić Marko V.

    2016-01-01

    Full Text Available A polygeneration system is an energy system capable of providing multiple utility outputs to meet local demands by application of process integration. This paper addresses the problem of pinpointing the optimal polygeneration energy supply system for the local energy demands of a livestock farm in terms of optimal system configuration and optimal system capacity. The optimization problem is presented and solved for a case study of a pig farm in the paper. Energy demands of the farm, as well as the super-structure of the polygeneration system, were modelled using TRNSYS software. Based on the locally available resources, the following polygeneration modules were chosen for the case study analysis: a biogas-fired internal combustion engine co-generation module, a gas boiler, a chiller, a ground water source heat pump, solar thermal collectors, photovoltaic collectors, and heat and cold storage. Capacities of the polygeneration modules were used as optimization variables for the TRNSYS-GenOpt optimization, whereas net present value, system primary energy consumption, and CO2 emissions were used as goal functions for the optimization. A hybrid system composed of a biogas-fired internal combustion engine based co-generation system, an adsorption chiller, solar thermal and photovoltaic collectors, and heat storage is found to be the best option. The optimal heating capacity of the biogas co-generation and adsorption units was found to be equal to the design loads, whereas the optimal surface of the solar thermal array is equal to the south office roof area, and the optimal surface of the PV array corresponds to the south-facing animal housing building rooftop area. [Project of the Serbian Ministry of Science, no. III 42006: Research and development of energy and environmentally highly effective polygeneration systems based on using renewable energy sources]

  17. The availability of the step optimization in Monaco planning system

    International Nuclear Information System (INIS)

    Kim, Dae Sup

    2014-01-01

    In the Monaco treatment planning system, the optimization for the inverse calculation of volumetric modulated arc therapy or intensity modulated radiation therapy is carried out in two steps. A gap can appear between the initial treatment plan and a plan produced by re-optimization under the same conditions, and we present a method to reduce this gap and complete the treatment plan. In this study, the initial plan was completed with a full two-step optimization, and the plan was then re-optimized without changing the optimization conditions from step 1 to step 2, i.e., a typical sequential optimization was performed. A pencil beam algorithm and a Monte Carlo algorithm were applied in step 2. We compared the initial plan and the re-optimized plan obtained under the same optimization conditions, and then evaluated the planned dose by measurement. When re-optimization was performed for the initial treatment plan, the second plan applied the step optimization. Even when the usual optimization was carried out again under the same conditions as the completed initial treatment plan, the result was not the same. The dose-volume histograms from the treatment planning system showed similar trends, but the plans exhibited different values that did not satisfy the optimized dose conditions, dose homogeneity and dose limits, and differences of more than 20% appeared in the dosimetric comparison. When the dose algorithms differ, the measured results are not the same either. Treatment planning reaches its optimization goal only after a process of considerable trial and error; if the completed initial treatment plan alone is trusted, a different treatment plan can result, and a similar treatment plan may fail to satisfy the optimization results. When performing the re-optimization process, the step-optimized conditions should be applied while verifying the dose distribution through the optimization.

  18. Hybrid computer optimization of systems with random parameters

    Science.gov (United States)

    White, R. C., Jr.

    1972-01-01

    A hybrid computer Monte Carlo technique for the simulation and optimization of systems with random parameters is presented. The method is applied to the simultaneous optimization of the means and variances of two parameters in the radar-homing missile problem treated by McGhee and Levine.

  19. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    The paper considers the problem of sensor configuration for complex systems with the aim of maximizing the useful information about certain quantities of interest. Our approach involves: 1) definition of an appropriate optimality criterion or performance measure; and 2) description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs, so as to minimize the redundant information being provided by the multiple sensors. The procedure for sensor configuration is based on the simultaneous perturbation stochastic approximation (SPSA) algorithm. SPSA avoids the need for detailed modeling of the sensor response by simply relying on the observed responses obtained by limited experimentation with test sensor configurations.
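
    A minimal sketch of the SPSA update the procedure relies on: the gradient is approximated from just two (possibly noisy) evaluations of the performance measure per iteration, regardless of the number of configuration parameters. The loss function, gain sequences and toy usage below are illustrative assumptions, not the paper's model.

        import numpy as np

        def spsa_minimize(loss, theta0, iterations=200, a=0.1, c=0.1,
                          alpha=0.602, gamma=0.101, rng=None):
            """Minimal SPSA loop: two loss evaluations per iteration, however
            many parameters theta has.  `loss` stands in for a measured
            (negative) information criterion of a test sensor configuration."""
            rng = np.random.default_rng(rng)
            theta = np.asarray(theta0, dtype=float)
            for k in range(1, iterations + 1):
                ak = a / k**alpha                                  # decaying step size
                ck = c / k**gamma                                  # decaying perturbation size
                delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1
                g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
                theta -= ak * g_hat
            return theta

        # usage: place two sensors on a line, trading coverage against redundancy
        loss = lambda x: (x[0] - 1.0)**2 + (x[1] + 2.0)**2 + 0.1 * np.exp(-(x[0] - x[1])**2)
        print(spsa_minimize(loss, theta0=[0.0, 0.0]))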

  20. Reliability optimization using multiobjective ant colony system approaches

    International Nuclear Information System (INIS)

    Zhao Jianhua; Liu Zhaoheng; Dao, M.-T.

    2007-01-01

    The multiobjective ant colony system (ACS) meta-heuristic has been developed to provide solutions for the reliability optimization problem of series-parallel systems. This type of problem involves selection of components with multiple choices and redundancy levels that produce maximum benefits, and is subject to the cost and weight constraints at the system level. These are very common and realistic problems encountered in the conceptual design of many engineering systems. It is becoming increasingly important to develop efficient solutions to these problems because many mechanical and electrical systems are becoming more complex, even as development schedules get shorter and reliability requirements become very stringent. The multiobjective ACS algorithm offers distinct advantages for these problems compared with alternative optimization methods, and can be applied to a more diverse problem domain with respect to the type or size of the problems. Through the combination of probabilistic search, multiobjective formulation of local moves and the dynamic penalty method, the multiobjective ACSRAP allows us to obtain an optimal design solution very frequently and more quickly than with some other heuristic approaches. The proposed algorithm was successfully applied to an engineering design problem of a gearbox with multiple stages.

  1. Biomass based optimal cogeneration system for paper industry

    Energy Technology Data Exchange (ETDEWEB)

    Ashok, S.; Jayaraj, S. [National Inst. of Technology, Calicut (India)

    2008-07-01

    A mathematical model of a biomass supported steam turbine cogeneration system was presented. The multi-time interval non-linear model used genetic algorithms to determine optimal operating costs. The cogeneration system consisted of steam boilers; steam headers at different pressure levels; steam turbines operating at different capacities; and other auxiliary devices. System components were modelled separately to determine constraints and costs. Total costs were obtained by summing up the costs corresponding to all equipment. The cost functions were fuel cost; grid electricity cost; grid electricity export revenues; start-up costs; and shut-down costs. The non-linear optimization model was formulated by considering equal 1-hour time intervals. A case study of a typical paper industry plant system was considered using coal, black liquor, and groundnut shells. Results of the study showed that the use of groundnut shells as a fuel resulted in a savings of 11.1 per cent of the total monthly operating costs while delivering 48.6 MWh daily to the electricity grid after meeting the plant's total energy requirements. It was concluded that the model can be used to optimize cogeneration systems in paper plants. 14 refs., 3 tabs., 3 figs.

  2. A bivariate optimal replacement policy for a multistate repairable system

    International Nuclear Information System (INIS)

    Zhang Yuanlin; Yam, Richard C.M.; Zuo, Ming J.

    2007-01-01

    In this paper, a deteriorating simple repairable system with k+1 states, including k failure states and one working state, is studied. It is assumed that the system after repair is not 'as good as new' and the deterioration of the system is stochastic. We consider a bivariate replacement policy, denoted by (T,N), in which the system is replaced when its working age has reached T or the number of failures it has experienced has reached N, whichever occurs first. The objective is to determine the optimal replacement policy (T,N)* such that the long-run expected profit per unit time is maximized. The explicit expression of the long-run expected profit per unit time is derived and the corresponding optimal replacement policy can be determined analytically or numerically. We prove that the optimal policy (T,N)* is better than the optimal policy N* for a multistate simple repairable system. We also show that a general monotone process model for a multistate simple repairable system is equivalent to a geometric process model for a two-state simple repairable system in the sense that they have the same structure for the long-run expected profit (or cost) per unit time and the same optimal policy. Finally, a numerical example is given to illustrate the theoretical results

  3. Price-based Optimal Control of Electrical Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Jokic, A.

    2007-09-10

    The research presented in this thesis is motivated by the following issue of concern for the operation of future power systems: Future power systems will be characterized by significantly increased uncertainties at all time scales and, consequently, their behavior in time will be difficult to predict. In Chapter 2 we will present a novel explicit, dynamic, distributed feedback control scheme that utilizes nodal prices for real-time optimal power balance and network congestion control. The term explicit means that the controller is not based on solving an optimization problem on-line. Instead, the nodal price updates are based on simple, explicitly defined and easily comprehensible rules. We prove that the developed control scheme, which acts on the measurements from the current state of the system, always provides the correct nodal prices. In Chapter 3 we will develop a novel, robust, hybrid model predictive control (MPC) scheme for power balance control with hard constraints on line power flows and network frequency deviations. The developed MPC controller acts in parallel with the explicit controller from Chapter 2, and its task is to enforce the constraints during the transient periods following suddenly occurring power imbalances in the system. In Chapter 4 the concept of autonomous power networks will be presented as a concise formulation to deal with economic, technical and reliability issues in power systems with a large penetration of distributed generating units. With autonomous power networks as new market entities, we propose a novel operational structure of ancillary service markets. In Chapter 5 we will consider the problem of controlling a general linear time-invariant dynamical system to an economically optimal operating point, which is defined by a multiparametric constrained convex optimization problem related to the steady-state operation of the system. The parameters in the optimization problem are values of the exogenous inputs to

  4. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih

    2014-03-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high communication or computation cost, typically due to structural properties of the input graphs such as large diameters or skew in component sizes. We describe several optimization techniques to address these inefficiencies. Our most general technique is based on the idea of performing some serial computation on a tiny fraction of the input graph, complementing Pregel's vertex-centric parallelism. We base our study on thorough implementations of several fundamental graph algorithms, some of which have, to the best of our knowledge, not been implemented on Pregel-like systems before. The algorithms and optimizations we describe are fully implemented in our open-source Pregel implementation. We present detailed experiments showing that our optimization techniques improve runtime significantly on a variety of very large graph datasets.

  5. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed

  6. Optimizing Storage and Renewable Energy Systems with REopt

    Energy Technology Data Exchange (ETDEWEB)

    Elgqvist, Emma M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, Katherine H. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Cutler, Dylan S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); DiOrio, Nicholas A. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Laws, Nicholas D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Olis, Daniel R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Walker, H. A. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-12-27

    Under the right conditions, behind the meter (BTM) storage combined with renewable energy (RE) technologies can provide both cost savings and resiliency. Storage economics depend not only on technology costs and avoided utility rates, but also on how the technology is operated. REopt, a model developed at NREL, can be used to determine the optimal size and dispatch strategy for BTM or off-grid applications. This poster gives an overview of three applications of REopt: Optimizing BTM Storage and RE to Extend Probability of Surviving Outage, Optimizing Off-Grid Energy System Operation, and Optimizing Residential BTM Solar 'Plus'.

  7. The Hubble law and the spiral structures of galaxies from equations of motion in general relativity

    International Nuclear Information System (INIS)

    Sachs, M.

    1975-01-01

    Fully exploiting the Lie group that characterizes the underlying symmetry of general relativity theory, Einstein's tensor formalism factorizes, yielding a generalized (16-component) quaternion field formalism. The associated generalized geodesic equation, taken as the equation of motion of a star, predicts the Hubble law from one approximation for the generally covariant equations of motion, and the spiral structure of galaxies from another approximation. These results depend on the imposition of appropriate boundary conditions. The Hubble law follows when the boundary conditions derive from the oscillating model cosmology, and not from the other cosmological models. The spiral structures of the galaxies follow from the same boundary conditions, but with a different time scale than for the whole universe. The solutions that imply the spiral motion are Fresnel integrals. These predict the star's motion to be along the 'Cornu Spiral'. The part of this spiral in the first quadrant is the imploding phase of the galaxy, corresponding to a motion with continually decreasing radii, approaching the galactic center as time increases. The part of the Cornu Spiral' in the third quadrant is the exploding phase, corresponding to continually increasing radii, as the star moves out from the hub. The spatial origin in the coordinate system of this curve is the inflection point, where the explosion changes to implosion. The two- (or many-) armed spiral galaxies are explained here in terms of two (or many) distinct explosions occurring at displaced times, in the domain of the rotating, planar galaxy. (author)
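
    A short numerical sketch of the curve referred to above: the Cornu spiral is traced out by the Fresnel integrals C(t) and S(t), with the inflection point at the origin separating the two phases described in the abstract. This only illustrates the curve itself, not the relativistic derivation.

        import numpy as np
        from scipy.special import fresnel   # returns (S(t), C(t))

        # Parametrize the Cornu (Euler) spiral by the Fresnel integrals; in the
        # paper's picture the first quadrant corresponds to the imploding phase
        # and the third quadrant to the exploding phase.
        t = np.linspace(-5.0, 5.0, 2001)
        S, C = fresnel(t)

        # radial distance from the inflection point at the origin
        r = np.hypot(C, S)

        # the curve winds toward the limit points (+/-0.5, +/-0.5)
        print(C[-1], S[-1])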

  8. Optimal maintenance of a multi-unit system under dependencies

    Science.gov (United States)

    Sung, Ho-Joon

    The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, the reliance on past empirical data has been the industry-standard practice to develop maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advances in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies aimed at providing the required system availability at the minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems by aiming to achieve the

  9. Discovery of Hubble's Law as a Series of Type III Errors

    Science.gov (United States)

    Belenkiy, Ari

    2015-01-01

    Recently much attention has been paid to the history of the discovery of Hubble's law--the linear relation between the rate of recession of the remote galaxies and distance to them from Earth. Though historians of cosmology now mention several names associated with this law instead of just one, the motivation of each actor of that remarkable…

  10. Ground Vehicle System Integration (GVSI) and Design Optimization Model

    National Research Council Canada - National Science Library

    Horton, William

    1996-01-01

    This report documents the Ground Vehicle System Integration (GVSI) and Design Optimization Model. GVSI is a top-level analysis tool designed to support engineering tradeoff studies and vehicle design optimization efforts...

  11. Collaborative Systems Driven Aircraft Configuration Design Optimization

    OpenAIRE

    Shiva Prakasha, Prajwal; Ciampa, Pier Davide; Nagel, Björn

    2016-01-01

    A Collaborative, Inside-Out Aircraft Design approach is presented in this paper. The approach uses physics-based analysis to evaluate the correlations between the airframe design and sub-systems integration from the early design process, and to exploit the synergies within a simultaneous optimization process. Further, the disciplinary analysis modules involved in the optimization task are located in different organizations. Hence, the Airframe and Subsystem design tools are integrated ...

  12. Control and System Theory, Optimization, Inverse and Ill-Posed Problems

    Science.gov (United States)

    1988-09-14

    Report for grant AFOSR-87-0350, 1987-1988. The report covers a considerable variety of research investigations within the grant areas (control and system theory, optimization, and ill-posed problems).

  13. Optimal Control of Switching Linear Systems

    Directory of Open Access Journals (Sweden)

    Ali Benmerzouga

    2004-06-01

    Full Text Available A solution to the control of switching linear systems with input constraints was given in Benmerzouga (1997) for both the conventional enumeration approach and the new approach. The solution given there turned out to be not unique. The main objective in this work is to determine the optimal control sequences {U_i(k), i = 1, ..., M; k = 0, 1, ..., N-1} which transfer the system from a given initial state X0 to a specific target state XT (or as close to it as possible) by using the same discrete-time solution obtained in Benmerzouga (1997) and minimizing a running cost-to-go function. By using the dynamic programming technique, the optimal solution is found for both approaches given in Benmerzouga (1997). The computational complexity of the modified algorithm is also given.
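
    The enumeration baseline that the dynamic programming approach is meant to improve upon can be sketched as follows: try every mode sequence over a short horizon and keep the one that ends closest to the target while accumulating a small running cost. The toy dynamics, cost weights and horizon are assumptions for illustration only.

        import itertools
        import numpy as np

        def best_switching_sequence(modes, x0, x_target, horizon, running_weight=0.01):
            """Enumerate mode sequences of a switched linear system x+ = A x + b
            and keep the one ending closest to the target (plus a small running
            cost).  The search grows as M**horizon, which is exactly what the
            dynamic programming formulation is designed to avoid."""
            best_seq, best_cost = None, np.inf
            for seq in itertools.product(range(len(modes)), repeat=horizon):
                x, cost = np.array(x0, dtype=float), 0.0
                for i in seq:
                    A, b = modes[i]
                    x = A @ x + b
                    cost += running_weight * float(x @ x)
                cost += float((x - x_target) @ (x - x_target))
                if cost < best_cost:
                    best_seq, best_cost = seq, cost
            return best_seq, best_cost

        # usage with two toy modes
        modes = [(np.array([[0.9, 0.1], [0.0, 0.8]]), np.array([0.0, 0.1])),
                 (np.array([[1.0, 0.0], [0.2, 0.9]]), np.array([0.1, 0.0]))]
        print(best_switching_sequence(modes, [1.0, 0.0], np.array([0.5, 0.5]), horizon=6))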

  14. Optimal Formation of Multirobot Systems Based on a Recurrent Neural Network.

    Science.gov (United States)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Yu, Junzhi; Tan, Min

    2016-02-01

    The optimal formation problem of multirobot systems is solved by a recurrent neural network in this paper. The desired formation is described by the shape theory. This theory can generate a set of feasible formations that share the same relative relation among robots. An optimal formation means that finding one formation from the feasible formation set, which has the minimum distance to the initial formation of the multirobot system. Then, the formation problem is transformed into an optimization problem. In addition, the orientation, scale, and admissible range of the formation can also be considered as the constraints in the optimization problem. Furthermore, if all robots are identical, their positions in the system are exchangeable. Then, each robot does not necessarily move to one specific position in the formation. In this case, the optimal formation problem becomes a combinational optimization problem, whose optimal solution is very hard to obtain. Inspired by the penalty method, this combinational optimization problem can be approximately transformed into a convex optimization problem. Due to the involvement of the Euclidean norm in the distance, the objective function of these optimization problems are nonsmooth. To solve these nonsmooth optimization problems efficiently, a recurrent neural network approach is employed, owing to its parallel computation ability. Finally, some simulations and experiments are given to validate the effectiveness and efficiency of the proposed optimal formation approach.

  15. A methodology for optimal sizing of autonomous hybrid PV/wind system

    International Nuclear Information System (INIS)

    Diaf, S.; Diaf, D.; Belhamel, M.; Haddadi, M.; Louche, A.

    2007-01-01

    The present paper presents a methodology to perform the optimal sizing of an autonomous hybrid PV/wind system. The methodology aims at finding the configuration, among a set of system components, which meets the desired system reliability requirements with the lowest value of the levelized cost of energy. Modelling the hybrid PV/wind system is considered the first step in the optimal sizing procedure. In this paper, more accurate mathematical models for characterizing the PV module, wind generator and battery are proposed. The second step consists of optimizing the sizing of the system according to the loss of power supply probability (LPSP) and the levelized cost of energy (LCE) concepts. Considering various types and capacities of system devices, the configurations which can meet the desired system reliability are obtained by changing the type and size of the devices. The configuration with the lowest LCE gives the optimal choice. Applying this method to a PV/wind hybrid system assumed to be installed on Corsica Island, the simulation results show that the optimal configuration which meets the desired system reliability requirements (LPSP = 0) with the lowest LCE is obtained for a system comprising a 125 W photovoltaic module, one wind generator (600 W) and storage batteries (using 253 Ah). On the other hand, the choice of the system devices plays an important role in cost reduction as well as in energy production.
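
    A simplified sketch of how an LPSP value is typically computed from hourly series once the component models have produced PV, wind and load profiles: accumulate the energy the battery-backed system fails to serve and divide by the total demand. The energy-balance model, efficiency and limits below are illustrative simplifications of the models used in the paper.

        import numpy as np

        def lpsp(pv_power, wind_power, load, battery_capacity_wh,
                 soc_min=0.2, soc_init=0.8, eff=0.9, dt_h=1.0):
            """Loss of power supply probability over an hourly time series:
            fraction of the demanded energy that the PV/wind/battery system
            fails to serve.  Simplified single-efficiency battery model."""
            soc = soc_init * battery_capacity_wh
            unmet = 0.0
            for pv, wd, ld in zip(pv_power, wind_power, load):
                net = (pv + wd - ld) * dt_h                  # Wh surplus (+) or deficit (-)
                if net >= 0:                                 # charge, limited by capacity
                    soc = min(battery_capacity_wh, soc + eff * net)
                else:                                        # discharge down to soc_min
                    available = (soc - soc_min * battery_capacity_wh) * eff
                    supplied = min(available, -net)
                    soc -= supplied / eff
                    unmet += (-net) - supplied
            return unmet / (np.sum(load) * dt_h)

        # usage with a tiny made-up day: LPSP = 0 means the load is always met
        print(lpsp(pv_power=[0, 300, 500, 0], wind_power=[100, 50, 80, 120],
                   load=[150, 200, 250, 180], battery_capacity_wh=2000.0))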

  16. Optimal reduction of flexible dynamic system

    International Nuclear Information System (INIS)

    Jankovic, J.

    1994-01-01

    Dynamic system reduction is a basic procedure in various problems of active control synthesis for flexible structures. This paper presents a direct method for system reduction by explicit extraction of the modes included in the reduced model. A criterion for the optimal discrete approximation of the system in the synthesis of the reduced dynamic model is also presented. The proposed method of system decomposition is discussed in relation to the Schur method for solving the matrix algebraic Riccati equation as a condition for system reduction. Using the presented method, the procedure for flexible system reduction is illustrated with a corresponding example. The procedure is powerful in problems of active control synthesis for flexible system vibrations.

  17. The Optimal Operation Criteria for a Gas Turbine Cogeneration System

    Directory of Open Access Journals (Sweden)

    Atsushi Akisawa

    2009-04-01

    Full Text Available The study demonstrated the optimal operation criteria of a gas turbine cogeneration system based on the analytical solution of a linear programming model. The optimal operation criteria gave the combination of equipment that supplies electricity and steam with the minimum energy cost, using the energy prices and the performance of the equipment. By comparison with a detailed optimization result for an existing cogeneration plant, it was shown that the optimal operation criteria successfully provided a direction for system operation under the condition where the electric power output of the gas turbine is less than its capacity.
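
    A toy linear program in the spirit of the model described above: choose gas turbine fuel, auxiliary boiler fuel and purchased electricity to meet electricity and steam demands at minimum energy cost. Every efficiency, price and capacity figure is a placeholder, not a value from the study.

        from scipy.optimize import linprog

        # Decision variables: [x_gt, x_boiler, x_grid]
        #   x_gt     gas turbine fuel input (kW), producing electricity and HRSG steam
        #   x_boiler auxiliary boiler fuel input (kW), producing steam only
        #   x_grid   purchased electricity (kW)
        eta_e, eta_s, eta_b = 0.30, 0.45, 0.90      # electric, HRSG, boiler efficiencies
        fuel_price, grid_price = 0.04, 0.15         # cost per kWh of fuel / electricity
        elec_demand, steam_demand = 3000.0, 4000.0  # kW

        c = [fuel_price, fuel_price, grid_price]    # minimize total energy cost
        A_ub = [[-eta_e, 0.0, -1.0],                # electricity supplied >= demand
                [-eta_s, -eta_b, 0.0]]              # steam supplied >= demand
        b_ub = [-elec_demand, -steam_demand]
        bounds = [(0, 12000), (0, 8000), (0, None)] # capacity limits

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print(res.x, res.fun)                       # optimal dispatch and cost rate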

  18. Contribution to the optimal sizing of the hybrid photovoltaic systems

    International Nuclear Information System (INIS)

    Dimitrov, Dimitar

    2009-01-01

    In this thesis, hybrid photovoltaic (HPV) systems are considered, in which the electricity is generated by a photovoltaic generator and, additionally, by a diesel genset. Within this work, a software tool for optimal sizing and design was developed and used for the optimization of HPV systems aimed at supplying a small rural village. For the optimization, genetic algorithms were used, optimizing 10 HPV system parameters (rated power of the components, battery capacity, dispatching strategy parameters, etc.). The optimization objective is to size and design systems that continuously supply the load with the lowest net electricity cost. In order to speed up the optimization process, the most suitable genetic algorithm settings were chosen through a prior in-depth analysis. Using measurements, the characteristics of a PV generator working in real conditions were obtained, and the input values for the PV generator simulation model were adapted accordingly. A quasi-steady battery simulation model is introduced, which avoids the voltage and state-of-charge variation problems that arise when constant-current charging/discharging within a time step interval is used. This model takes into account the influence of the battery temperature on its operational characteristics. Simulation model improvements were also introduced for the other components of the HPV systems. Using long-term measurement records, the validity of the solar radiation and air temperature data was checked. The sensitivity of the obtained optimized HPV systems to variations in the prices of the components and fuel and in the economic rates was also analyzed. Based on multi-decade records for several locations in the Balkan region, the occurrence probability of the solar radiation values was estimated. This was used for analysing the sensitivity of some HPV system performances to the expected stochastic variations of the solar radiation values. (Author)

  19. HUBBLE CAPTURES THE HEART OF STAR BIRTH

    Science.gov (United States)

    2002-01-01

    NASA Hubble Space Telescope's Wide Field and Planetary Camera 2 (WFPC2) has captured a flurry of star birth near the heart of the barred spiral galaxy NGC 1808. On the left are two images, one superimposed over the other. The black-and-white picture is a ground-based view of the entire galaxy. The color inset image, taken with the Hubble telescope's Wide Field and Planetary Camera 2 (WFPC2), provides a close-up view of the galaxy's center, the hotbed of vigorous star formation. The ground-based image shows that the galaxy has an unusual, warped shape. Most spiral galaxies are flat disks, but this one has curls of dust and gas at its outer spiral arms (upper right-hand corner and lower left-hand corner). This peculiar shape is evidence that NGC 1808 may have had a close interaction with another nearby galaxy, NGC 1792, which is not in the picture. Such an interaction could have hurled gas towards the nucleus of NGC 1808, triggering the exceptionally high rate of star birth seen in the WFPC2 inset image. The WFPC2 inset picture is a composite of images using colored filters that isolate red and infrared light as well as light from glowing hydrogen. The red and infrared light (seen as yellow) highlight older stars, while hydrogen (seen as blue) reveals areas of star birth. Colors were assigned to this false-color image to emphasize the vigorous star formation taking place around the galaxy's center. NGC 1808 is called a barred spiral galaxy because of the straight lines of star formation on both sides of the bright nucleus. This star formation may have been triggered by the rotation of the bar, or by matter which is streaming along the bar towards the central region (and feeding the star burst). Filaments of dust are being ejected from the core into a faint halo of stars surrounding the galaxy's disk (towards the upper left corner) by massive stars that have exploded as supernovae in the star burst region. The portion of the galaxy seen in this 'wide-field' image is

  20. Optimization of Multibrid Permanent-Magnet Wind Generator Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Li, H.; Polinder, H.

    2009-01-01

    This paper investigates the cost-effective ranges of gearbox ratios and power ratings of multibrid permanent-magnet (PM) wind generator systems by using a design optimization method. First, the analytical model of a multibrid wind turbine concept consisting of a single-stage gearbox and a three ... and multibrid wind turbine configurations are obtained, and the suitable ranges of gear ratios for different power ratings are investigated. Finally, the detailed comparisons of the most cost-effective multibrid PM generator system and the optimized direct-drive PM generator system are also presented and discussed. The comparative results have shown that the multibrid wind turbine concept appears more cost-effective than the direct-drive concept.

  1. Super-Hubble de Sitter fluctuations and the dynamical RG

    Energy Technology Data Exchange (ETDEWEB)

    Burgess, C.P. [Department of Physics and Astronomy, McMaster University, Hamilton, Ontario (Canada); Leblond, L.; Shandera, S. [Perimeter Institute for Theoretical Physics, Waterloo, Ontario (Canada); Holman, R., E-mail: cburgess@perimeterinstitute.ca, E-mail: lleblond@perimeterinstitute.ca, E-mail: rha@andrew.cmu.edu, E-mail: sshandera@perimeterinstitute.ca [Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 (United States)

    2010-03-01

    Perturbative corrections to correlation functions for interacting theories in de Sitter spacetime often grow secularly with time, due to the properties of fluctuations on super-Hubble scales. This growth can lead to a breakdown of perturbation theory at late times. We argue that Dynamical Renormalization Group (DRG) techniques provide a convenient framework for interpreting and resumming these secularly growing terms. In the case of a massless scalar field in de Sitter with quartic self-interaction, the resummed result is also less singular in the infrared, in precisely the manner expected if a dynamical mass is generated. We compare this improved infrared behavior with large-N expansions when applicable.

  2. Electrostatic Studies for the 2008 Hubble Service Repair Mission

    Science.gov (United States)

    Buhler, C. R.; Clements, J. S.; Calle, C. I.

    2012-01-01

    High vacuum triboelectric testing of space materials was required to identify possible Electrostatic Discharge (ESD) concerns for the astronauts in space during electronics board replacement on the Hubble Space Telescope. Testing under high vacuum conditions with common materials yielded some interesting results. Many materials were able to charge to high levels that did not dissipate quickly even when grounded. Certain materials were able to charge up in contact with grounded metals while others were not. An interesting result was that like materials did not exchange electrostatic charge under high vacuum conditions. The most surprising experimental result is the lack of brush discharges from charged insulators under high vacuum conditions.

  3. Engineering constraints and computer-aided optimization of electrostatic lens systems

    International Nuclear Information System (INIS)

    Steen, H.W.G. van der; Barth, J.E.; Adriaanse, J.P.

    1990-01-01

    An optimization tool for the design of electrostatic lens systems with axial symmetry is presented. This tool is based on the second-order electrode method combined with a multivariable numerical optimization procedure. The second-order electrode method makes a cubic spline approximation to the axial potential for a given electrode shape. With the help of this approximation, a numerical optimization can be done. To demonstrate this optimization tool, a lens system for Auger analyses is optimized. It is shown that variations in the practical constraints imposed on the design, like maximum electrode potential or maximum lens diameter, have strong effects on the obtainable lens quality. It is concluded that a numerical optimization does not take over the lens designer's job, but allows him to thoroughly examine the optical consequences of engineering choices by finding the optimum design for each set of constraints. (orig.)

  4. THE UDF05 FOLLOW-UP OF THE HUBBLE ULTRA DEEP FIELD. III. THE LUMINOSITY FUNCTION AT z ∼ 6

    International Nuclear Information System (INIS)

    Su Jian; Stiavelli, Massimo; Bergeron, Eddie; Bradley, Larry; Dahlen, Tomas; Ferguson, Henry C.; Koekemoer, Anton; Lucas, Ray A.; Panagia, Nino; Pavlovsky, Cheryl; Oesch, Pascal; Carollo, Marcella; Lilly, Simon; Trenti, Michele; Giavalisco, Mauro; Mobasher, Bahram

    2011-01-01

    In this paper, we present a derivation of the rest-frame 1400 Å luminosity function (LF) at redshift six from a new application of the maximum likelihood method by exploring the five deepest Hubble Space Telescope/Advanced Camera for Surveys (HST/ACS) fields, i.e., the Hubble Ultra Deep Field, two UDF05 fields, and two Great Observatories Origins Deep Survey fields. We work on the latest improved data products, which makes our results more robust than those of previous studies. We use unbinned data and thereby make optimal use of the information contained in the data set. We restrict the analysis to a magnitude limit where the completeness is larger than 50% to avoid possibly large errors in the faint-end slope that are difficult to quantify. We also take into account scattering in and out of the dropout sample due to photometric errors by defining for each object a probability that it belongs to the dropout sample. We find the best-fit Schechter parameters of the z ∼ 6 LF to be α = -1.87 ± 0.14, M* = -20.25 ± 0.23, and φ* = (1.77 +0.62/-0.49) × 10^-3 Mpc^-3. Such a steep slope suggests that galaxies, especially the faint ones, are possibly the main sources of ionizing photons in the universe at redshift six. Combining results from all studies at z ∼ 6, we reach agreement at the 95% confidence level that -20.45 < M* < -20.05 and -1.90 < α < -1.55. The luminosity density is found not to evolve significantly between z ∼ 6 and z ∼ 5, but considerable evolution is detected from z ∼ 6 to z ∼ 3.
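
    A minimal sketch of the kind of unbinned maximum-likelihood Schechter fit described above (toy magnitudes, and without the completeness and photometric-scatter corrections that are central to the actual analysis):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

# Hypothetical sketch: fit the Schechter shape parameters (alpha, M_star) to a
# list of absolute UV magnitudes by unbinned maximum likelihood. phi_star would
# follow from the number of objects and the effective survey volume (not shown).

def schechter_mag(M, alpha, M_star):
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * x ** (alpha + 1.0) * np.exp(-x)

def neg_log_like(params, mags, M_faint):
    alpha, M_star = params
    norm, _ = quad(schechter_mag, -24.0, M_faint, args=(alpha, M_star))
    if norm <= 0:
        return np.inf
    return -np.sum(np.log(schechter_mag(mags, alpha, M_star) / norm))

# Toy data standing in for the dropout sample (absolute magnitudes at z ~ 6).
mags = np.random.default_rng(0).uniform(-22.0, -18.0, size=60)
res = minimize(neg_log_like, x0=[-1.7, -20.5], args=(mags, -18.0),
               method="Nelder-Mead")
print("best-fit alpha, M* =", res.x)
```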

  5. Optimal dynamic control of resources in a distributed system

    Science.gov (United States)

    Shin, Kang G.; Krishna, C. M.; Lee, Yann-Hang

    1989-01-01

    The authors quantitatively formulate the problem of controlling resources in a distributed system so as to optimize a reward function and derive optimal control strategies using Markov decision theory. The control variables treated are quite general; they could be control decisions related to system configuration, repair, diagnostics, files, or data. Two algorithms for resource control in distributed systems are derived for time-invariant and periodic environments, respectively. A detailed example to demonstrate the power and usefulness of the approach is provided.
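
    The Markov-decision-theoretic machinery referred to above can be illustrated with a minimal value-iteration sketch; the two-state, two-action model below (roughly, "run" vs. "repair") is hypothetical and far simpler than the resource-control problems treated in the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a, s, s'] are transition probabilities, R[s, a] expected rewards."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("ast,t->sa", P, V)   # expected return per (state, action)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Two states (e.g. "healthy", "degraded"), two actions (e.g. "run", "repair").
P = np.array([[[0.9, 0.1], [0.0, 1.0]],      # action 0: run
              [[1.0, 0.0], [0.8, 0.2]]])     # action 1: repair
R = np.array([[1.0, -0.5],                   # rewards R[s, a]
              [0.2, -0.2]])
V, policy = value_iteration(P, R)
print("optimal action per state:", policy)
```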

  6. Adaptive hybrid optimal quantum control for imprecisely characterized systems.

    Science.gov (United States)

    Egger, D J; Wilhelm, F K

    2014-06-20

    Optimal quantum control theory carries a huge promise for quantum technology. Its experimental application, however, is often hindered by imprecise knowledge of the input variables, the quantum system's parameters. We show how to overcome this by adaptive hybrid optimal control, using a protocol named Ad-HOC. This protocol combines open- and closed-loop optimal control by first performing a gradient search towards a near-optimal control pulse and then an experimental fidelity estimation with a gradient-free method. For typical settings in solid-state quantum information processing, adaptive hybrid optimal control enhances gate fidelities by an order of magnitude, making optimal control theory applicable and useful.
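
    The two-stage structure of Ad-HOC (an open-loop gradient search on a model, followed by gradient-free closed-loop refinement against measured fidelities) can be mimicked on a toy landscape. The quadratic "infidelity", the model error, and the noise level below are invented stand-ins, not a quantum gate model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
model_error = rng.normal(scale=0.05, size=4)   # unknown offsets of the true system

def model_infidelity(x):
    return np.sum(x ** 2)                      # the model believes the optimum is at 0

def experimental_infidelity(x):
    true = np.sum((x - model_error) ** 2)      # the true optimum is shifted
    return true + rng.normal(scale=1e-3)       # measurement (shot) noise

# Stage 1: open-loop gradient search on the model.
stage1 = minimize(model_infidelity, x0=np.ones(4), method="BFGS")

# Stage 2: closed-loop, gradient-free refinement on the "experiment".
stage2 = minimize(experimental_infidelity, x0=stage1.x, method="Nelder-Mead")

print("model optimum:", stage1.x, "-> refined:", stage2.x)
```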

  7. Robust design of decentralized power system stabilizers using meta-heuristic optimization techniques for multimachine systems

    Directory of Open Access Journals (Sweden)

    Jeevanandham Arumugam

    2009-01-01

    In this paper a classical lead-lag power system stabilizer is used for demonstration. The stabilizer parameters are selected so as to damp the rotor oscillations. The problem of selecting the stabilizer parameters is converted into a simple optimization problem with an eigenvalue-based objective function, and simulated annealing and particle swarm optimization are employed to solve it. The objective function allows the stabilizer parameters to be selected so that the closed-loop eigenvalues are optimally placed in the left half of the complex s-plane. A single machine connected to an infinite bus and a 10-machine, 39-bus system are considered for this study. The effectiveness of the stabilizer tuned using the better technique in enhancing power system stability is confirmed through eigenvalue analysis and simulation results, and the suitable heuristic technique is selected for the best performance of the system.
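
    A toy version of the eigenvalue-based tuning described above, using simulated annealing (SciPy's dual_annealing) to choose a single feedback gain that pushes the closed-loop eigenvalues into the left half of the s-plane; the second-order plant below is hypothetical, not one of the machine models used in the paper.

```python
import numpy as np
from scipy.optimize import dual_annealing

A = np.array([[0.0, 1.0],
              [-1.0, 0.05]])        # lightly undamped oscillator (unstable)
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])

def objective(k):
    closed_loop = A - k[0] * B @ C
    # Eigenvalue-based criterion: push the rightmost eigenvalue as far left as possible.
    return np.max(np.real(np.linalg.eigvals(closed_loop)))

result = dual_annealing(objective, bounds=[(0.0, 5.0)], seed=0)
print("gain:", result.x, "max Re(eigenvalue):", result.fun)
```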

  8. Thermally Induced Vibrations of the Hubble Space Telescope's Solar Array 3 in a Test Simulated Space Environment

    Science.gov (United States)

    Early, Derrick A.; Haile, William B.; Turczyn, Mark T.; Griffin, Thomas J. (Technical Monitor)

    2001-01-01

    NASA Goddard Space Flight Center and the European Space Agency (ESA) conducted a disturbance verification test on a flight Solar Array 3 (SA3) for the Hubble Space Telescope using the ESA Large Space Simulator (LSS) in Noordwijk, the Netherlands. The LSS cyclically illuminated the SA3 to simulate orbital temperature changes in a vacuum environment. Data acquisition systems measured signals from force transducers and accelerometers resulting from thermally induced vibrations of the SAI The LSS with its seismic mass boundary provided an excellent background environment for this test. This paper discusses the analysis performed on the measured transient SA3 responses and provides a summary of the results.

  9. System optimization for HVAC energy management using the robust evolutionary algorithm

    International Nuclear Information System (INIS)

    Fong, K.F.; Hanby, V.I.; Chow, T.T.

    2009-01-01

    For an installed centralized heating, ventilating and air conditioning (HVAC) system, appropriate energy management measures can achieve energy conservation targets through optimal control and operation. The performance optimization of conventional HVAC systems may be handled from operating experience, but this may not cover the different optimization scenarios and parameters that arise in response to a variety of load and weather conditions. In this regard, it is common to apply a suitable simulation-optimization technique to model the system and then determine the required operating parameters. The plant simulation models can be built up either using available simulation programs or as a system of mathematical expressions. To handle the simulation models, iterations are involved in the numerical solution methods. Since gradient information is not easily available due to the complex nature of the equations, traditional gradient-based optimization methods are not applicable to this kind of system model. For heuristic optimization methods, a continual search is commonly necessary, and a system function call is required for each search. The frequency of simulation function calls then becomes the time-determining step, and an efficient optimization method is crucial in order to find the solution within a reasonable number of function calls and a reasonable computational period. In this paper, the robust evolutionary algorithm (REA) is presented to tackle this nature of HVAC simulation models. REA is based on evolution strategy, a paradigm of evolutionary algorithms that is a stochastic population-based search technique emphasizing mutation. The REA, which incorporates Cauchy deterministic mutation, tournament selection and arithmetic recombination, provides a synergetic effect for the optimal search. The REA is effective in coping with complex simulation models, as well as those represented by explicit mathematical expressions.
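
    A minimal evolution-strategy sketch with the three ingredients named above (Cauchy mutation, tournament selection, arithmetic recombination); the quadratic objective is a stand-in for an expensive HVAC simulation call, and all settings are illustrative rather than the paper's REA configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulation_call(x):            # hypothetical stand-in for a plant-energy simulation
    return np.sum((x - 1.5) ** 2)  # pretend optimum: all set-points at 1.5

def rea(dim=4, pop_size=20, generations=60, scale=0.3):
    pop = rng.uniform(0.0, 3.0, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([simulation_call(ind) for ind in pop])

        def tournament():
            # Tournament selection: keep the better of two random individuals.
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fitness[i] < fitness[j] else pop[j]

        children = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            w = rng.uniform()
            child = w * p1 + (1.0 - w) * p2            # arithmetic recombination
            child += scale * rng.standard_cauchy(dim)  # heavy-tailed Cauchy mutation
            children.append(np.clip(child, 0.0, 3.0))
        pop = np.array(children)
    best = min(pop, key=simulation_call)
    return best, simulation_call(best)

print(rea())
```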

  10. Today and tomorrow on optimization of structural systems. Kozo system saitekika no genjo to shorai

    Energy Technology Data Exchange (ETDEWEB)

    1992-07-20

    It has been 30 years since the concept of a "structurally optimized design method" was advocated as a new structural design system linking three elements: mathematical programming, the finite element method, and computers. This paper summarizes the current state of optimization technologies in Japan and offers a view of their future, referring mainly to the two symposiums held as activities of the subcommittee for structural system optimization of the Japan Society of Civil Engineers. The summary covers optimization algorithms for structural design, fuzzy theories, practical use of expert systems and AI, maintenance and management systems for structures, vibration control, shock-resistant design, inverse and structure-identification problems, and the design of underground and offshore structures. Examples of bridge design include a minimum-mass design of a pedestrian bridge that incorporates vibration sensitivities into the constraint conditions, economic comparisons of suspension bridges using a multi-stage determination method, and many others. Optimization technologies are expected to advance greatly in the future and to be used as a routine design tool. 145 refs., 1 fig., 3 tabs.

  11. Optimal beamforming in MIMO systems with HPA nonlinearity

    KAUST Repository

    Qi, Jian

    2010-09-01

    In this paper, multiple-input multiple-output (MIMO) transmit beamforming (TB) systems under the consideration of nonlinear high-power amplifiers (HPAs) are investigated. The optimal beamforming scheme, with the optimal beamforming weight vector and combining vector, is proposed for MIMO systems with HPA nonlinearity. The performance of the proposed MIMO beamforming scheme in the presence of HPA nonlinearity is evaluated in terms of average symbol error probability (SEP), outage probability and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, namely, parameters of nonlinear HPA, numbers of transmit and receive antennas, and modulation order of phase-shift keying (PSK), on performance. ©2010 IEEE.

  12. Optimal beamforming in MIMO systems with HPA nonlinearity

    KAUST Repository

    Qi, Jian; Aissa, Sonia

    2010-01-01

    In this paper, multiple-input multiple-output (MIMO) transmit beamforming (TB) systems under the consideration of nonlinear high-power amplifiers (HPAs) are investigated. The optimal beamforming scheme, with the optimal beamforming weight vector and combining vector, is proposed for MIMO systems with HPA nonlinearity. The performance of the proposed MIMO beamforming scheme in the presence of HPA nonlinearity is evaluated in terms of average symbol error probability (SEP), outage probability and system capacity, considering transmission over uncorrelated quasi-static frequency-flat Rayleigh fading channels. Numerical results are provided and show the effects of several system parameters, namely, parameters of nonlinear HPA, numbers of transmit and receive antennas, and modulation order of phase-shift keying (PSK), on performance. ©2010 IEEE.

  13. Adaptive stimulus optimization for sensory systems neuroscience

    OpenAIRE

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system...

  14. Optimizing data access in the LAMPF control system

    International Nuclear Information System (INIS)

    Schaller, S.C.; Corley, J.K.; Rose, P.A.

    1985-01-01

    The LAMPF control system data access software offers considerable power and flexibility to application programs through symbolic device naming and an emphasis on hardware independence. This paper discusses optimizations aimed at improving the performance of the data access software while retaining these capabilities. The only aspects of the optimizations visible to the application programs are "vector devices" and "aggregate devices." A vector device accesses a set of hardware-related data items through a single device name. Aggregate devices allow run-time optimization of references to groups of unrelated devices. Optimizations not visible at the application level include careful handling of network message traffic, the sharing of global resources, and storage allocation.

  15. Discrete optimization in architecture extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  16. Site utility system optimization with operation adjustment under uncertainty

    International Nuclear Information System (INIS)

    Sun, Li; Gai, Limei; Smith, Robin

    2017-01-01

    Highlights: • Uncertainties are classified into time-based and probability-based uncertain factors. • Multi-period operation and recourse actions handle the realization of uncertainties. • Operation scheduling is specified at the design stage to deal with uncertainties. • Steam mains superheating affects steam distribution and power generation in the system. - Abstract: Utility systems must satisfy process energy and power demands under varying conditions. System performance is decided by the system configuration and the operating load of individual equipment items such as boilers, gas turbines, steam turbines, condensers, and let-down valves. Steam mains conditions, in terms of steam pressure and superheating, also play important roles in steam distribution in the system and in power generation by steam expansion in steam turbines, and should be included in the system optimization. Uncertainties such as changes in process steam and power demand and electricity price fluctuations should also be included, to eliminate as far as possible the production loss caused by steam and power deficits. In this paper, uncertain factors are classified into time-based and probability-based uncertain factors, and operation scheduling, comprising multi-period equipment load sharing, redundant equipment start-up, and electricity import to compensate for power deficits, is presented to deal with the occurrence of uncertainties; it is formulated as a multi-period term and a recourse term in the optimization model. There are two case studies in this paper. One illustrates system design, determining the system configuration, equipment selection, and operation scheduling at the design stage to deal with uncertainties. The other provides operational optimization scenarios for an existing system, especially when the steam superheating varies. The proposed method can provide practical guidance for improving system energy efficiency.

  17. Optimization of Wireless Optical Communication System Based on Augmented Lagrange Algorithm

    International Nuclear Information System (INIS)

    He Suxiang; Meng Hongchao; Wang Hui; Zhao Yanli

    2011-01-01

    The optimal model for a wireless optical communication system with a Gaussian pointing loss factor is studied, in which the value of the bit error probability (BEP) is prespecified and the optimal system parameters are to be found. Owing to the superiority of the augmented Lagrange method, the model is solved using a classical quadratic augmented Lagrange algorithm. Detailed numerical results are reported. Accordingly, the optimal system parameters, such as transmitter power, transmitter wavelength, transmitter telescope gain, and receiver telescope gain, can be established, providing a scheme for efficient operation of the wireless optical communication system.
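
    The quadratic augmented Lagrange iteration itself is easy to sketch on a generic equality-constrained problem; the objective and constraint below are placeholders, not the paper's link-budget model with its BEP requirement.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):            # stand-in objective (e.g. a transmitter-power proxy)
    return x[0] ** 2 + 2.0 * x[1] ** 2

def h(x):            # stand-in equality constraint (e.g. a prespecified BEP)
    return x[0] + x[1] - 1.0

lam, rho = 0.0, 10.0
x = np.zeros(2)
for _ in range(20):
    # Minimize the quadratic augmented Lagrangian for the current multiplier.
    aug = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2
    x = minimize(aug, x, method="BFGS").x
    lam += rho * h(x)          # multiplier update
print("solution:", x, "constraint residual:", h(x))
```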

  18. Optimal structure of fault-tolerant software systems

    International Nuclear Information System (INIS)

    Levitin, Gregory

    2005-01-01

    This paper considers software systems consisting of fault-tolerant components. These components are built from functionally equivalent but independently developed versions characterized by different reliability and execution time. Because of hardware resource constraints, the number of versions that can run simultaneously is limited. The expected system execution time and its reliability (defined as probability of obtaining the correct output within a specified time) strictly depend on parameters of software versions and sequence of their execution. The system structure optimization problem is formulated in which one has to choose software versions for each component and find the sequence of their execution in order to achieve the greatest system reliability subject to cost constraints. The versions are to be chosen from a list of available products. Each version is characterized by its reliability, execution time and cost. The suggested optimization procedure is based on an algorithm for determining system execution time distribution that uses the moment generating function approach and on the genetic algorithm. Both N-version programming and the recovery block scheme are considered within a universal model. An illustrative example is presented.

  19. The Hubble Legacy Archive ACS grism data

    Science.gov (United States)

    Kümmel, M.; Rosati, P.; Fosbury, R.; Haase, J.; Hook, R. N.; Kuntschner, H.; Lombardi, M.; Micol, A.; Nilsson, K. K.; Stoehr, F.; Walsh, J. R.

    2011-06-01

    A public release of slitless spectra, obtained with ACS/WFC and the G800L grism, is presented. Spectra were automatically extracted in a uniform way from 153 archival fields (or "associations") distributed across the two Galactic caps, covering all observations to 2008. The ACS G800L grism provides a wavelength range of 0.55-1.00 μm, with a dispersion of 40 Å/pixel and a resolution of ~80 Å for point-like sources. The ACS G800L images and matched direct images were reduced with an automatic pipeline that handles all steps from archive retrieval, alignment and astrometric calibration, direct image combination, catalogue generation, spectral extraction and collection of metadata. The large number of extracted spectra (73,581) demanded automatic methods for quality control and an automated classification algorithm was trained on the visual inspection of several thousand spectra. The final sample of quality controlled spectra includes 47,919 datasets (65% of the total number of extracted spectra) for 32,149 unique objects, with a median i_AB-band magnitude of 23.7, reaching 26.5 AB for the faintest objects. Each released dataset contains science-ready 1D and 2D spectra, as well as multi-band image cutouts of corresponding sources and a useful preview page summarising the direct and slitless data, astrometric and photometric parameters. This release is part of the continuing effort to enhance the content of the Hubble Legacy Archive (HLA) with highly processed data products which significantly facilitate the scientific exploitation of the Hubble data. In order to characterize the slitless spectra, emission-line flux and equivalent width sensitivity of the ACS data were compared with public ground-based spectra in the GOODS-South field. An example list of emission line galaxies with two or more identified lines is also included, covering the redshift range 0.2 - 4.6. Almost all redshift determinations outside of the GOODS fields are new. The scope of science projects

  20. Optimal interface between principal deterrent systems and material accounting

    International Nuclear Information System (INIS)

    Deiermann, P.J.; Opelka, J.H.

    1983-01-01

    The purpose of this study is to find an optimal blend between three safeguards systems for special nuclear material (SNM), the material accounting system and the physical security and material control systems. The latter two are denoted as principal deterrent systems. The optimization methodology employed is a two-stage decision algorithm, first an explicit maximization of expected diverter benefits and subsequently a minimization of expected defender costs for changes in material accounting procedures and incremental improvements in the principal deterrent systems. The probability of diverter success function dependent upon the principal deterrents and material accounting system variables is developed. Within the range of certainty of the model, existing material accounting, material control and physical security practices are justified

  1. An energy systems engineering approach to the optimal design of energy systems in commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Pei; Pistikopoulos, Efstratios N. [Centre for Process Systems Engineering (CPSE), Department of Chemical Engineering, Imperial College London, London SW7 2AZ (United Kingdom); Li, Zheng [Department of Thermal Engineering, Tsinghua University, Beijing 100084 (China)

    2010-08-15

    Energy consumption in commercial buildings accounts for a significant proportion of worldwide energy consumption. Any increase in the energy efficiency of the energy systems for commercial buildings would lead to significant energy savings and emissions reductions. In this work, we introduce an energy systems engineering framework towards the optimal design of such energy systems with improved energy efficiency and environmental performance. The framework features a superstructure representation of the various energy technology alternatives, a mixed-integer optimization formulation of the energy systems design problem, and a multi-objective design optimization solution strategy, where economic and environmental criteria are simultaneously considered and properly traded off. A case study of a supermarket energy systems design is presented to illustrate the key steps and potential of the proposed energy systems engineering approach. (author)
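
    The superstructure/mixed-integer idea can be illustrated with a tiny technology-selection model (install a CHP unit or not, plus continuous boiler, CHP, and grid-import levels) solved with SciPy's milp (requires SciPy >= 1.9); all costs and demands below are invented and unrelated to the paper's supermarket case study.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Decision variables: [heat_from_boiler, heat_from_CHP, grid_electricity, install_CHP]
c = [30.0, 20.0, 60.0, 500.0]     # fuel cost, fuel cost, grid price, CHP fixed cost

A = [[1.0, 1.0, 0.0, 0.0],        # heat balance: boiler + CHP heat = demand
     [0.0, 0.8, 1.0, 0.0],        # electricity: 0.8*CHP heat + grid >= demand
     [0.0, 1.0, 0.0, -200.0]]     # big-M link: CHP heat only if CHP installed
lb = [100.0, 60.0, -np.inf]       # heat demand = 100, electricity demand = 60
ub = [100.0, np.inf, 0.0]

res = milp(c=c,
           constraints=LinearConstraint(A, lb, ub),
           integrality=[0, 0, 0, 1],                     # last variable is binary
           bounds=Bounds(lb=[0, 0, 0, 0], ub=[np.inf, np.inf, np.inf, 1]))
print("optimal [boiler, CHP, grid, install]:", res.x, "cost:", res.fun)
```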

  2. An energy systems engineering approach to the optimal design of energy systems in commercial buildings

    International Nuclear Information System (INIS)

    Liu Pei; Pistikopoulos, Efstratios N.; Li Zheng

    2010-01-01

    Energy consumption in commercial buildings accounts for a significant proportion of worldwide energy consumption. Any increase in the energy efficiency of the energy systems for commercial buildings would lead to significant energy savings and emissions reductions. In this work, we introduce an energy systems engineering framework towards the optimal design of such energy systems with improved energy efficiency and environmental performance. The framework features a superstructure representation of the various energy technology alternatives, a mixed-integer optimization formulation of the energy systems design problem, and a multi-objective design optimization solution strategy, where economic and environmental criteria are simultaneously considered and properly traded off. A case study of a supermarket energy systems design is presented to illustrate the key steps and potential of the proposed energy systems engineering approach.

  3. An energy systems engineering approach to the optimal design of energy systems in commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Liu Pei [Centre for Process Systems Engineering (CPSE), Department of Chemical Engineering, Imperial College London, London SW7 2AZ (United Kingdom); Pistikopoulos, Efstratios N., E-mail: e.pistikopoulos@imperial.ac.u [Centre for Process Systems Engineering (CPSE), Department of Chemical Engineering, Imperial College London, London SW7 2AZ (United Kingdom); Li Zheng [Department of Thermal Engineering, Tsinghua University, Beijing 100084 (China)

    2010-08-15

    Energy consumption in commercial buildings accounts for a significant proportion of worldwide energy consumption. Any increase in the energy efficiency of the energy systems for commercial buildings would lead to significant energy savings and emissions reductions. In this work, we introduce an energy systems engineering framework towards the optimal design of such energy systems with improved energy efficiency and environmental performance. The framework features a superstructure representation of the various energy technology alternatives, a mixed-integer optimization formulation of the energy systems design problem, and a multi-objective design optimization solution strategy, where economic and environmental criteria are simultaneously considered and properly traded off. A case study of a supermarket energy systems design is presented to illustrate the key steps and potential of the proposed energy systems engineering approach.

  4. A global optimization method for evaporative cooling systems based on the entransy theory

    International Nuclear Information System (INIS)

    Yuan, Fang; Chen, Qun

    2012-01-01

    The evaporative cooling technique, one of the most widely used methods, is essential to both energy conservation and environmental protection. This contribution introduces a global optimization method, based on the entransy theory, for indirect evaporative cooling systems with coupled heat and mass transfer processes, to improve their energy efficiency. First, we classify the irreversible processes in the system into the heat transfer process, the coupled heat and mass transfer process, and the mixing process of water streams in different branches, where the irreversibility is evaluated by the entransy dissipation. Then, through the total system entransy dissipation, we establish the theoretical relationship of the user demands with both the geometrical structures of each heat exchanger and the operating parameters of each fluid, and derive two groups of optimization equations focusing on two typical optimization problems. Finally, an indirect evaporative cooling system is taken as an example to illustrate the application of the newly proposed optimization method. It is concluded that there exists an optimal circulating water flow rate with the minimum total thermal conductance of the system. Furthermore, with different user demands and moist air inlet conditions, it is global optimization, rather than parametric analysis, that yields the optimal performance of the system. -- Highlights: ► Introduces a global optimization method for evaporative cooling systems. ► Establishes the direct relation between user demands and the design parameters. ► Obtains two groups of optimization equations for two typical optimization objectives. ► Solving the equations provides the optimal design parameters for the system. ► Provides guidance for the design of coupled heat and mass transfer systems.

  5. Optimal Robust Fault Detection for Linear Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Nike Liu

    2008-01-01

    This paper considers robust fault-detection problems for linear discrete-time systems. It is shown that the optimal robust detection filters for several well-recognized robust fault-detection problems, such as the ℋ−/ℋ∞, ℋ2/ℋ∞, and ℋ∞/ℋ∞ problems, are the same and can be obtained by solving a standard algebraic Riccati equation. Optimal filters are also derived for many other optimization criteria, and it is shown that some well-studied and seemingly sensible optimization criteria for fault-detection filter design can lead to (optimal but) useless fault-detection filters.
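
    The computational core referred to above is a standard algebraic Riccati equation. The sketch below solves a discrete-time ARE with SciPy and forms the corresponding steady-state filter gain for a toy system; it does not reproduce the paper's specific H-/H-infinity weightings.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)      # process-noise covariance (assumed)
R = np.array([[0.1]])     # measurement-noise covariance (assumed)

# Filtering form of the DARE: pass the transposed system matrices.
P = solve_discrete_are(A.T, C.T, Q, R)
L = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # steady-state filter gain
print("filter gain L =\n", L)
```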

  6. Optimal angle reduction - a behavioral approach to linear system approximation

    NARCIS (Netherlands)

    Roorda, B.; Weiland, S.

    2001-01-01

    We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is

  7. Optimizing a mobile robot control system using GPU acceleration

    Science.gov (United States)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  8. Optimized multi area AGC simulation in restructured power systems

    International Nuclear Information System (INIS)

    Bhatt, Praghnesh; Roy, Ranjit; Ghoshal, S.P.

    2010-01-01

    In this paper, the traditional automatic generation control loop, with modifications, is used to simulate automatic generation control (AGC) in a restructured power system. The Federal Energy Regulatory Commission (FERC) encourages an open market system for price-based operation. FERC has issued a notice of proposed rulemaking for various ancillary services. One of these ancillary services is load following with frequency control, which comes broadly under automatic generation control in the deregulated regime. The concept of the DISCO participation matrix is used to simulate bilateral contracts in the three-area and four-area diagrams. Hybrid particle swarm optimization is used to obtain optimal gain parameters for optimal transient performance. (author)

  9. Multi-objective optimization of Stirling engine systems using Front-based Yin-Yang-Pair Optimization

    International Nuclear Information System (INIS)

    Punnathanam, Varun; Kotecha, Prakash

    2017-01-01

    Highlights: • Efficient multi-objective optimization algorithm F-YYPO demonstrated. • Three Stirling engine applications with a total of eight cases. • Improvements in the objective function values of up to 30%. • Superior to the popularly used gamultiobj of MATLAB. • F-YYPO has extremely low time complexity. - Abstract: In this work, we demonstrate the performance of Front-based Yin-Yang-Pair Optimization (F-YYPO) to solve multi-objective problems related to Stirling engine systems. The performance of F-YYPO is compared with that of (i) a recently proposed multi-objective optimization algorithm (Multi-Objective Grey Wolf Optimizer) and (ii) an algorithm popularly employed in literature due to its easy accessibility (MATLAB’s inbuilt multi-objective Genetic Algorithm function: gamultiobj). We consider three Stirling engine based optimization problems: (i) the solar-dish Stirling engine system which considers objectives of output power, thermal efficiency and rate of entropy generation; (ii) Stirling engine thermal model which considers the associated irreversibility of the cycle with objectives of output power, thermal efficiency and pressure drop; and finally (iii) an experimentally validated polytropic finite speed thermodynamics based Stirling engine model also with objectives of output power and pressure drop. We observe F-YYPO to be significantly more effective as compared to its competitors in solving the problems, while requiring only a fraction of the computational time required by the other algorithms.

  10. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    Science.gov (United States)

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  11. Optimization of maintenance periodicity of complex of NPP safety systems

    International Nuclear Information System (INIS)

    Kolykhanov, V.; Skalozubov, V.; Kovrigkin, Y.

    2006-01-01

    The positive and negative aspects of maintaining safety system equipment, which is basically kept in a standby state, are analyzed. Tests of the systems eliminate latent failures and raise reliability, but poor-quality testing can itself be a source of subsequent failures, so an excessive test frequency can reduce the reliability of safety systems. A method for optimizing the maintenance periodicity of the equipment is presented that takes into account its reliability factors and the quality of restoration procedures. The unavailability factor is used as the criterion for optimizing the maintenance periodicity. It is proposed to use the reliability parameters of the equipment and of each NPP safety system obtained during PSA development, and to coordinate the maintenance periodicity of the systems within the NPP maintenance program taking into account a system significance factor based on each system's contribution to CDF. Based on this method, a small computer code has been developed that calculates the reliability factors of an individual safety system and determines the optimum maintenance periodicity of its equipment. Optimization of the maintenance periodicity of a complex of safety systems is also provided for. As an example, results of maintenance periodicity optimization at Zaporizhzhya NPP are presented. (author)

  12. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing could be achieved without resorting to the typical approach by trials
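
    The basic mechanism being optimized, biasing the component failure probabilities and re-weighting each sample by the likelihood ratio, looks as follows for a hypothetical 2-out-of-3 system of independent components (all probabilities invented):

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = np.array([1e-3, 2e-3, 1.5e-3])   # true component failure probabilities
p_bias = np.array([0.05, 0.05, 0.05])     # biased sampling probabilities
n = 100_000

fails = rng.uniform(size=(n, 3)) < p_bias           # sample failures under the bias
system_fails = fails.sum(axis=1) >= 2               # system fails if >= 2 components fail
weights = np.prod(np.where(fails, p_true / p_bias,  # likelihood-ratio weights
                           (1 - p_true) / (1 - p_bias)), axis=1)
estimate = np.mean(system_fails * weights)
print("importance-sampling estimate of system failure probability:", estimate)
```

    Choosing the per-component bias to minimize the estimator variance is exactly the optimization step the paper addresses.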

  13. Thermodynamic framework for discrete optimal control in multiphase flow systems

    Science.gov (United States)

    Sieniutycz, Stanislaw

    1999-08-01

    Bellman's method of dynamic programming is used to synthesize diverse optimization approaches to active (work producing) and inactive (entropy generating) multiphase flow systems. Thermal machines, optimally controlled unit operations, nonlinear heat conduction, spontaneous relaxation processes, and self-propagating wave fronts are all shown to satisfy a discrete Hamilton-Jacobi-Bellman equation and a corresponding discrete optimization algorithm of Pontryagin's type, with the maximum principle for a Hamiltonian. The extremal structures are always canonical. A common unifying criterion is set for all considered systems, which is the criterion of a minimum generated entropy. It is shown that constraints can modify the entropy functionals in a different way for each group of the processes considered; thus the resulting structures of these functionals may differ significantly. Practical conclusions are formulated regarding the energy savings and energy policy in optimally controlled systems.

  14. UV/Visible Telescope with Hubble Disposal

    Science.gov (United States)

    Benford, Dominic J.

    2013-01-01

    Submission Overview: Our primary objective is to convey a sense of the significant advances possible in astrophysics investigations for major Cosmic Origins (COR) program goals with a 2.4m telescope asset outfitted with one or more advanced UV/visible instruments. Several compelling science objectives were identified based on community meetings; these science objectives drove the conceptual design of instruments studied by the COR Program Office during July-September 2012. This RFI submission encapsulates the results of that study, and suggests that a more detailed look into the instrument suite should be conducted to prove viability and affordability to support the demonstrated scientific value. This study was conducted in the context of a larger effort to consider the options available for a mission to dispose safely of Hubble; hence, the overall architecture considered for the mission we studied for the 2.4m telescope asset included resource sharing. This mitigates combined cost and risk and provides naturally for a continued US leadership role in astrophysics with an advanced, general-purpose UV/visible space telescope.

  15. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.

  16. Robust Design Optimization of an Aerospace Vehicle Propulsion System

    Directory of Open Access Journals (Sweden)

    Muhammad Aamir Raza

    2011-01-01

    This paper proposes a robust design optimization methodology under design uncertainties of an aerospace vehicle propulsion system. The approach consists of 3D geometric design coupled with complex internal ballistics, hybrid optimization, worst-case deviation, and efficient statistical approach. The uncertainties are propagated through worst-case deviation using first-order orthogonal design matrices. The robustness assessment is measured using the framework of mean-variance and percentile difference approach. A parametric sensitivity analysis is carried out to analyze the effects of design variables variation on performance parameters. A hybrid simulated annealing and pattern search approach is used as an optimizer. The results show the objective function of optimizing the mean performance and minimizing the variation of performance parameters in terms of thrust ratio and total impulse could be achieved while adhering to the system constraints.

  17. Computational Approach to Profit Optimization of a Loss-Queueing System

    Directory of Open Access Journals (Sweden)

    Dinesh Kumar Yadav

    2010-01-01

    The objective of the paper is to deal with the profit optimization of a loss queueing system with finite capacity. We define and compute the total expected cost (TEC), the total expected revenue (TER), and consequently the total optimal profit (TOP) of the system. To compute the total optimal profit, a computing algorithm has been developed and a fast-converging N-R method has been employed, which requires the least computing time and less memory space compared with other methods. Sensitivity analysis, with observations based on graphics, adds significant value to this model.

  18. HUBBLE STAYS ON TRAIL OF FADING GAMMA-RAY BURST FIREBALL

    Science.gov (United States)

    2002-01-01

    A Hubble Space Telescope image of the fading fireball from one of the universe's most mysterious phenomena, a gamma-ray burst. Though the visible component has faded to 1/500th its brightness (27.7 magnitude) from the time it was first discovered by ground- based telescopes last March (the actual gamma-ray burst took place on February 28), Hubble continues to clearly see the fireball and discriminated a surrounding nebulosity (at 25th magnitude) which is considered a host galaxy. The continued visibility of the burst, and the rate of its fading, support theories that the light from a gamma-ray burst is an expanding relativistic (moving near the speed of light) fireball, possibly produced by the collision of two dense objects, such as an orbiting pair of neutron stars. If the burst happened nearby, within our own galaxy, the resulting fireball should have had only enough energy to propel it into space for a month. The fact that this fireball is still visible after six months means the explosion was truly titanic and, to match the observed brightness, must have happened at the vast distances of galaxies. The energy released in a burst, which can last from a fraction of a second to a few hundred seconds, is equal to all of the Sun's energy generated over its 10 billion year lifetime. The false-color image was taken Sept. 5, 1997 with the Space Telescope Imaging Spectrograph. Credit: Andrew Fruchter (STScI), Elena Pian (ITSRE-CNR), and NASA

  19. Characterization optimization for the National TRU waste system

    International Nuclear Information System (INIS)

    Basabilvazo, George T.; Countiss, S.; Moody, D.C.; Jennings, S.G.; Lott, S.A.

    2002-01-01

    On March 26, 1999, the Waste Isolation Pilot Plant (WIPP) received its first shipment of transuranic (TRU) waste. On November 26, 1999, the Hazardous Waste Facility Permit (HWFP) to receive mixed TRU waste at WIPP became effective. Having achieved these two milestones, facilitating and supporting the characterization, transportation, and disposal of TRU waste became the major challenges for the National TRU Waste Program. Significant challenges still remain in the scientific, engineering, regulatory, and political areas that need to be addressed. The National TRU Waste System Optimization Project has been established to identify, develop, and implement cost-effective system optimization strategies that address those significant challenges. Fundamental to these challenges is the balancing and prioritization of potential regulatory changes with potential technological solutions. This paper describes some of the efforts to optimize (to make as functional as possible) characterization activities for TRU waste.

  20. Use of multilevel modeling for determining optimal parameters of heat supply systems

    Science.gov (United States)

    Stennikov, V. A.; Barakhtenko, E. A.; Sokolov, D. V.

    2017-07-01

    The problem of finding optimal parameters of a heat supply system (HSS) consists in ensuring the required throughput capacity of a heat network by determining pipeline diameters and the characteristics and locations of pumping stations. Effective methods for solving this problem, i.e., the method of stepwise optimization based on the concept of dynamic programming and the method of multicircuit optimization, were proposed in the context of the hydraulic circuit theory developed at Melentiev Energy Systems Institute (Siberian Branch, Russian Academy of Sciences). These methods enable us to determine optimal parameters of various types of piping systems due to flexible adaptability of the calculation procedure to intricate nonlinear mathematical models describing features of used equipment items and methods of their construction and operation. The new and most significant results achieved in developing methodological support and software for finding optimal parameters of complex heat supply systems are presented: a new procedure for solving the problem based on multilevel decomposition of a heat network model that makes it possible to proceed from the initial problem to a set of interrelated, less cumbersome subproblems with reduced dimensionality; a new algorithm implementing the method of multicircuit optimization and focused on the calculation of a hierarchical model of a heat supply system; the SOSNA software system for determining optimum parameters of intricate heat-supply systems and implementing the developed methodological foundation. The proposed procedure and algorithm enable us to solve engineering problems of finding the optimal parameters of multicircuit heat supply systems having large (real) dimensionality, and are applied in solving urgent problems related to the optimal development and reconstruction of these systems. The developed methodological foundation and software can be used for designing heat supply systems in the Central and the Admiralty regions in

  1. Data processing and optimization system to study prospective interstate power interconnections

    Science.gov (United States)

    Podkovalnikov, Sergei; Trofimov, Ivan; Trofimov, Leonid

    2018-01-01

    The paper presents a data processing and optimization system for studying and making rational decisions on the formation of interstate electric power interconnections, with the aim of increasing the effectiveness of their operation and expansion. The technologies for building and integrating the data processing and optimization system, which includes an object-oriented database and the ORIRES predictive mathematical model for optimizing the expansion of electric power systems, are described. The collection and pre-processing of unstructured data gathered from various sources and its loading into the object-oriented database, as well as the processing and presentation of information in a GIS, are also described. One approach to graphical visualization of the optimization model results is illustrated by the example of calculating an expansion option for the South Korean electric power grid.

  2. Charge retention test experiences on Hubble Space Telescope nickel-hydrogen battery cells

    Science.gov (United States)

    Nawrocki, Dave E.; Driscoll, J. R.; Armantrout, J. D.; Baker, R. C.; Wajsgras, H.

    1993-01-01

    The Hubble Space Telescope (HST) nickel-hydrogen battery module was designed by Lockheed Missile & Space Co (LMSC) and manufactured by Eagle-Picher Ind. (EPI) for the Marshall Space Flight Center (MSFC) as an Orbital Replacement Unit (ORU) for the nickel-cadmium batteries originally selected for this low earth orbit mission. The design features of the HST nickel hydrogen battery are described and the results of an extended charge retention test are summarized.

  3. Distributed Robust Optimization in Networked System.

    Science.gov (United States)

    Wang, Shengnan; Li, Chunguang

    2016-10-11

    In this paper, we consider a distributed robust optimization (DRO) problem, where multiple agents in a networked system cooperatively minimize a global convex objective function with respect to a global variable under global constraints. The objective function can be represented by a sum of local objective functions. The global constraints contain some uncertain parameters which are partially known, and can be characterized by some inequality constraints. After problem transformation, we adopt the Lagrangian primal-dual method to solve this problem. We prove that the primal and dual optimal solutions of the problem are restricted to specific sets, and we give a method to construct these sets. Then, we propose a DRO algorithm to find the primal-dual optimal solutions of the Lagrangian function; it consists of a subgradient step, a projection step, and a diffusion step. In the projection step, the optimized variables are projected onto the specific sets to guarantee the boundedness of the subgradients. Convergence analysis and numerical simulations verifying the performance of the proposed algorithm are then provided.
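
    The subgradient/projection/diffusion structure described above can be mimicked on a toy consensus problem; the local quadratic objectives, the box constraint, and the uniform mixing matrix below are illustrative, and the Lagrangian dual variables of the actual DRO algorithm are omitted.

```python
import numpy as np

n_agents = 4
targets = np.array([1.0, 2.0, 4.0, 7.0])            # local objectives f_i(x) = (x - a_i)^2
W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic mixing matrix
x = np.zeros(n_agents)                              # each agent's local copy of the variable
box = (0.0, 5.0)                                    # constraint set used for the projection

for k in range(1, 201):
    grad = 2.0 * (x - targets)                      # local subgradients
    x = W @ x - (1.0 / k) * grad                    # diffusion + subgradient step
    x = np.clip(x, *box)                            # projection onto the constraint set

print("agents agree on x ~", x)                     # optimum of the sum is mean(targets) = 3.5
```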

  4. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.
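
    The GCV criterion used for the regularization parameter is easy to sketch on a generic Tikhonov-regularized linear inversion; the random matrix below stands in for the lensing operator and is not a lens model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 40
A = rng.normal(size=(n, m))            # stand-in for the (linear) lensing operator
x_true = rng.normal(size=m)
y = A @ x_true + 0.5 * rng.normal(size=n)

def gcv(lam):
    # Influence matrix H(lam) = A (A^T A + lam I)^-1 A^T of the regularized solution.
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(m), A.T)
    resid = y - H @ y
    return n * resid @ resid / (n - np.trace(H)) ** 2

lams = np.logspace(-4, 3, 200)
best = lams[np.argmin([gcv(l) for l in lams])]
print("GCV-selected regularization parameter:", best)
```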

  5. Optimization of thermal systems based on finite-time thermodynamics and thermoeconomics

    Energy Technology Data Exchange (ETDEWEB)

    Durmayaz, A. [Istanbul Technical University (Turkey). Department of Mechanical Engineering; Sogut, O.S. [Istanbul Technical University, Maslak (Turkey). Department of Naval Architecture and Ocean Engineering; Sahin, B. [Yildiz Technical University, Besiktas, Istanbul (Turkey). Department of Naval Architecture; Yavuz, H. [Istanbul Technical University, Maslak (Turkey). Institute of Energy

    2004-07-01

    The irreversibilities originating from finite-time and finite-size constraints are important in real thermal system optimization. Since classical thermodynamic analysis, based on thermodynamic equilibrium, does not consider these constraints directly, it is necessary to treat the energy transfer between the system and its surroundings in rate form. Finite-time thermodynamics provides a fundamental starting point for the optimization of real thermal systems by adding the fundamental concepts of heat transfer and fluid mechanics to classical thermodynamics. In this study, optimization studies of thermal systems that consider various objective functions, based on finite-time thermodynamics and thermoeconomics, are reviewed. (author)

  6. The Hubble Constant to 1%: Physics beyond LambdaCDM

    Science.gov (United States)

    Riess, Adam

    2017-08-01

    By steadily advancing the precision and accuracy of the Hubble constant, we now see 3.4-sigma evidence for a deviation from the standard LambdaCDM model and thus the exciting chance of discovering new fundamental physics such as exotic dark energy, a new relativistic particle, dark matter interactions, or a small curvature, to name a few possibilities. We propose a coordinated program to accomplish three goals with one set of observations: (1) improve the precision of the best route to H_0 with HST observations of Cepheids in the hosts of 11 SNe Ia, lowering the uncertainty to 1.3% to reach the discovery threshold of 5-sigma and begin resolving the underlying source of the deviation; (2) continue testing the quality of Cepheid distances, so far the most accurate and reliable indicators in the near Universe, using the tip of the red giant branch (TRGB); and (3) use oxygen-rich Miras to confirm the present tension with the CMB and establish a future route available to JWST. We can achieve all three goals with one dataset and take the penultimate step to reach 1% precision in H_0 after Gaia. With its long-pass filter and NIR capability, we can collect these data with WFC3 many times faster than previously possible while overcoming the extinction and metallicity effects that challenged the first generation of H_0 measurements. Our results will complement the leverage available at high redshift from other cosmological tools such as BAO, the CMB, and SNe Ia, and will provide a 40% improvement on the WFIRST measurements of dark energy. Reaching this precision will be a fitting legacy for the telescope charged to resolve decades of uncertainty regarding the Hubble constant.

  7. An approach for multi-objective optimization of vehicle suspension system

    Science.gov (United States)

    Koulocheris, D.; Papaioannou, G.; Christodoulou, D.

    2017-10-01

    In this paper, a half-car model with nonlinear suspension systems is selected in order to study the vertical vibrations and to optimize the suspension system with respect to ride comfort and road holding. A road bump is used as the road profile. First, the optimization problem is solved using Genetic Algorithms with respect to 6 optimization targets, and the k-ɛ optimality method is then implemented to locate one optimum solution. Furthermore, an alternative approach is presented in this work: the previous optimization targets are separated into main and supplementary ones, depending on their importance in the analysis. The supplementary targets are not crucial to the optimization, but they can enhance the main objectives. Thus, the problem is solved again using Genetic Algorithms with respect to the 3 main targets of the optimization. Having obtained the Pareto set of solutions, the k-ɛ optimality method is applied to the 3 main targets and the supplementary ones, evaluated by simulation of the vehicle model. The results of both cases are presented and discussed in terms of optimization convergence and computational time, and the optimum solutions obtained from both cases are also compared on the basis of performance metrics.

  8. Hybrid Techniques for Optimizing Complex Systems

    Science.gov (United States)

    2009-12-01

    ... relay placement problem, we modeled the network as a mechanical system with springs and a viscous damper, a widely used approach for solving optimization ... fundamental mathematical tools in many branches of physics such as fluid and solid mechanics, and general relativity [108]. More recently, several ...

  9. Maximal imaginary eigenvalues in optimal systems

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    1991-07-01

    In this note we present equations that uniquely determine the maximum possible imaginary value of the closed loop eigenvalues in an LQ-optimal system, irrespective of how the state weight matrix is chosen, provided a real symmetric solution of the algebraic Riccati equation exists. In addition, the corresponding state weight matrix and the solution to the algebraic Riccati equation are derived for a class of linear systems. A fundamental lemma for the existence of a real symmetric solution to the algebraic Riccati equation is derived for this class of linear systems.

  10. Optimized Extreme Learning Machine for Power System Transient Stability Prediction Using Synchrophasors

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2015-01-01

    A new optimized extreme learning machine (ELM) based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model, and the parameters of the model are tuned using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal is that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Finally, based on test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.
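
    For readers unfamiliar with the ELM part of the pipeline, the sketch below trains a bare-bones extreme learning machine on synthetic features: a random, untrained hidden layer followed by output weights fitted by least squares. The data, layer size, and sigmoid activation are placeholders, and the IPSO parameter tuning described above is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)

        def elm_train(X, y, n_hidden=50):
            """Train a basic extreme learning machine: random hidden weights,
            output weights fitted by least squares (Moore-Penrose pseudo-inverse)."""
            n_features = X.shape[1]
            W = rng.normal(size=(n_features, n_hidden))   # random input weights (not trained)
            b = rng.normal(size=n_hidden)                 # random biases
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer outputs
            beta = np.linalg.pinv(H) @ y                  # output weights via pseudo-inverse
            return W, b, beta

        def elm_predict(X, W, b, beta):
            H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
            return H @ beta

        # Toy binary "stable / unstable" labels from synthetic features.
        X = rng.normal(size=(200, 8))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
        W, b, beta = elm_train(X, y)
        pred = (elm_predict(X, W, b, beta) > 0.5).astype(float)
        print("training accuracy:", (pred == y).mean())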

  11. Probing the z > 6 universe with the first Hubble frontier fields cluster A2744

    International Nuclear Information System (INIS)

    Atek, Hakim; Kneib, Jean-Paul; Richard, Johan; Clement, Benjamin; Egami, Eiichi; Ebeling, Harald; Jauzac, Mathilde; Jullo, Eric; Limousin, Marceau; Laporte, Nicolas; Natarajan, Priyamvada

    2014-01-01

    The Hubble Frontier Fields program combines the capabilities of the Hubble Space Telescope (HST) with the gravitational lensing of massive galaxy clusters to probe the distant universe to an unprecedented depth. Here, we present the results of the first combined HST and Spitzer observations of the cluster A2744. We combine the full near-infrared data with ancillary optical images to search for gravitationally lensed high-redshift (z ≳ 6) galaxies. We report the detection of 15 I_814 dropout candidates at z ∼ 6-7 and one Y_105 dropout at z ∼ 8 in a total survey area of 1.43 arcmin^2 in the source plane. The predictions of our lens model also allow us to identify five multiply imaged systems lying at redshifts between z ∼ 6 and z ∼ 8. Thanks to constraints from the mass distribution in the cluster, we were able to estimate the effective survey volume corrected for completeness and magnification effects. This was in turn used to estimate the rest-frame ultraviolet luminosity function (LF) at z ∼ 6-8. Our LF results are generally in agreement with the most recent blank-field estimates, confirming the feasibility of surveys through lensing clusters. Although based on shallower observations than will be achieved in the final data set including the full Advanced Camera for Surveys observations, the LF presented here goes down to M_UV ∼ −18.5, corresponding to 0.2 L* at z ∼ 7, with one identified object at M_UV ∼ −15 thanks to the highly magnified survey areas. This early study forecasts the power of using massive galaxy clusters as cosmic telescopes and its complementarity to blank fields.

  12. Probing the z > 6 universe with the first Hubble frontier fields cluster A2744

    Energy Technology Data Exchange (ETDEWEB)

    Atek, Hakim; Kneib, Jean-Paul [Laboratoire d' Astrophysique, Ecole Polytechnique Fédérale de Lausanne, Observatoire de Sauverny, CH-1290 Versoix (Switzerland); Richard, Johan [CRAL, Observatoire de Lyon, Université Lyon 1, 9 Avenue Ch. André, 69561 Saint Genis Laval Cedex (France); Clement, Benjamin; Egami, Eiichi [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ, 85721 (United States); Ebeling, Harald [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, Hawaii 96822 (United States); Jauzac, Mathilde [Astrophysics and Cosmology Research Unit, School of Mathematical Sciences, University of KwaZulu-Natal, Durban, 4041 South Africa (South Africa); Jullo, Eric; Limousin, Marceau [Aix Marseille Université, CNRS, LAM (Laboratoire d' Astrophysique de Marseille) UMR 7326, 13388, Marseille (France); Laporte, Nicolas [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Natarajan, Priyamvada [Department of Astronomy, Yale University, 260 Whitney Avenue, New Haven, CT 06511 (United States)

    2014-05-01

    The Hubble Frontier Fields program combines the capabilities of the Hubble Space Telescope (HST) with the gravitational lensing of massive galaxy clusters to probe the distant universe to an unprecedented depth. Here, we present the results of the first combined HST and Spitzer observations of the cluster A2744. We combine the full near-infrared data with ancillary optical images to search for gravitationally lensed high-redshift (z ≳ 6) galaxies. We report the detection of 15 I_814 dropout candidates at z ∼ 6-7 and one Y_105 dropout at z ∼ 8 in a total survey area of 1.43 arcmin^2 in the source plane. The predictions of our lens model also allow us to identify five multiply imaged systems lying at redshifts between z ∼ 6 and z ∼ 8. Thanks to constraints from the mass distribution in the cluster, we were able to estimate the effective survey volume corrected for completeness and magnification effects. This was in turn used to estimate the rest-frame ultraviolet luminosity function (LF) at z ∼ 6-8. Our LF results are generally in agreement with the most recent blank-field estimates, confirming the feasibility of surveys through lensing clusters. Although based on shallower observations than will be achieved in the final data set including the full Advanced Camera for Surveys observations, the LF presented here goes down to M_UV ∼ −18.5, corresponding to 0.2 L* at z ∼ 7, with one identified object at M_UV ∼ −15 thanks to the highly magnified survey areas. This early study forecasts the power of using massive galaxy clusters as cosmic telescopes and its complementarity to blank fields.

  13. Initial Hubble Diagram Results from the Nearby Supernova Factory

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, S. [Lab. Nuclear and High-Energy Physics (LPNHE), Paris (France); Aldering, G. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antilogus, P. [Lab. Nuclear and High-Energy Physics (LPNHE), Paris (France); Aragon, C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Baltay, C. [Yale Univ., New Haven, CT (United States); Bongard, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buton, C [Inst. of Nuclear Physics of Lyon (France); Childress, M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Copin, Y. [Inst. of Nuclear Physics of Lyon (France); Gangler, E. [Inst. of Nuclear Physics of Lyon (France); Loken, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Pain, R. [Lab. Nuclear and High-Energy Physics (LPNHE), Paris (France); Pecontal, E. [Center of Research Astrophysics of Lyon (CRAL) (France); Pereira, R. [Lab. Nuclear and High-Energy Physics (LPNHE), Paris (France); Perlmutter, S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rabinowitz, D. [Yale Univ., New Haven, CT (United States); Rigaudier, G. [Center of Research Astrophysics of Lyon (CRAL) (France); Ripoche, P. [Lab. Nuclear and High-Energy Physics (LPNHE), Paris (France); Runge, K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scalzo, R. [Yale Univ., New Haven, CT (United States); Smadja, G. [Inst. of Nuclear Physics of Lyon (France); Tao, C. [Inst. of Nuclear Physics of Lyon (France); Thomas, R. C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, C. [Lab. Nuclear and High-Energy Physics (LPNHE), Paris (France)

    2017-07-06

    The use of Type Ia supernovae as distance indicators led to the discovery of the accelerating expansion of the universe a decade ago. Now that large second generation surveys have significantly increased the size and quality of the high-redshift sample, the cosmological constraints are limited by the currently available sample of ~50 cosmologically useful nearby supernovae. The Nearby Supernova Factory addresses this problem by discovering nearby supernovae and observing their spectrophotometric time development. Our data sample includes over 2400 spectra from spectral timeseries of 185 supernovae. This talk presents results from a portion of this sample including a Hubble diagram (relative distance vs. redshift) and a description of some analyses using this rich dataset.

  14. Fuzzy multiobjective models for optimal operation of a hydropower system

    Science.gov (United States)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed-integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic Algorithms (GAs) are used to (i) solve the optimization formulations to avoid the computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality, and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.

  15. A 2.4% DETERMINATION OF THE LOCAL VALUE OF THE HUBBLE CONSTANT

    Energy Technology Data Exchange (ETDEWEB)

    Riess, Adam G.; Scolnic, Dan; Jones, David O. [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD (United States); Macri, Lucas M.; Hoffmann, Samantha L.; Yuan, Wenlong; Brown, Peter J. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX (United States); Casertano, Stefano [Space Telescope Science Institute, Baltimore, MD (United States); Filippenko, Alexei V.; Tucker, Brad E. [Department of Astronomy, University of California, Berkeley, CA (United States); Reid, Mark J.; Challis, Peter [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA (United States); Silverman, Jeffrey M. [Department of Astronomy, University of Texas, Austin, TX (United States); Chornock, Ryan [Astrophysical Institute, Department of Physics and Astronomy, Ohio University, Athens, OH (United States); Foley, Ryan J., E-mail: ariess@stsci.edu [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL (United States)

    2016-07-20

    We use the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) to reduce the uncertainty in the local value of the Hubble constant from 3.3% to 2.4%. The bulk of this improvement comes from new near-infrared (NIR) observations of Cepheid variables in 11 host galaxies of recent type Ia supernovae (SNe Ia), more than doubling the sample of reliable SNe Ia having a Cepheid-calibrated distance to a total of 19; these in turn leverage the magnitude-redshift relation based on ∼300 SNe Ia at z < 0.15. All 19 hosts as well as the megamaser system NGC 4258 have been observed with WFC3 in the optical and NIR, thus nullifying cross-instrument zeropoint errors in the relative distance estimates from Cepheids. Other noteworthy improvements include a 33% reduction in the systematic uncertainty in the maser distance to NGC 4258, a larger sample of Cepheids in the Large Magellanic Cloud (LMC), a more robust distance to the LMC based on late-type detached eclipsing binaries (DEBs), HST observations of Cepheids in M31, and new HST-based trigonometric parallaxes for Milky Way (MW) Cepheids. We consider four geometric distance calibrations of Cepheids: (i) megamasers in NGC 4258, (ii) 8 DEBs in the LMC, (iii) 15 MW Cepheids with parallaxes measured with HST/FGS, HST/WFC3 spatial scanning and/or Hipparcos, and (iv) 2 DEBs in M31. The Hubble constant from each is 72.25 ± 2.51, 72.04 ± 2.67, 76.18 ± 2.37, and 74.50 ± 3.27 km s^-1 Mpc^-1, respectively. Our best estimate of H_0 = 73.24 ± 1.74 km s^-1 Mpc^-1 combines the anchors NGC 4258, MW, and LMC, yielding a 2.4% determination (all quoted uncertainties include fully propagated statistical and systematic components). This value is 3.4 σ higher than 66.93 ± 0.62 km s^-1 Mpc^-1 predicted by ΛCDM with 3 neutrino flavors having a mass of 0.06 eV and the new Planck data, but the discrepancy reduces to 2.1 σ relative to the prediction of 69.3 ± 0.7 km s^-1 Mpc^-1 based on the ...
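
    The last rung of this ladder can be written compactly as 5 log10(H_0) = M_B + 5 a_B + 25, where M_B is the Cepheid-calibrated peak absolute magnitude of SNe Ia and a_B is the intercept of the SN Ia magnitude-redshift relation. The values below are rounded, illustrative inputs chosen to land near the quoted result, not the exact published fit parameters.

        # Illustrative values close to those reported by the authors:
        # M_B : Cepheid-calibrated peak absolute magnitude of SNe Ia
        # a_B : intercept of the SN Ia magnitude-redshift relation at z < 0.15
        M_B = -19.24
        a_B = 0.71273

        # Final step of the distance ladder: 5 log10(H0) = M_B + 5*a_B + 25.
        H0 = 10 ** ((M_B + 5.0 * a_B + 25.0) / 5.0)
        print(f"H0 ~ {H0:.1f} km/s/Mpc")   # ~73.2 for these inputs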

  16. Multi-objective optimization of GENIE Earth system models.

    Science.gov (United States)

    Price, Andrew R; Myerscough, Richard J; Voutchkov, Ivan I; Marsh, Robert; Cox, Simon J

    2009-07-13

    The tuning of parameters in climate models is essential to provide reliable long-term forecasts of Earth system behaviour. We apply a multi-objective optimization algorithm to the problem of parameter estimation in climate models. This optimization process involves the iterative evaluation of response surface models (RSMs), followed by the execution of multiple Earth system simulations. These computations require an infrastructure that provides high-performance computing for building and searching the RSMs and high-throughput computing for the concurrent evaluation of a large number of models. Grid computing technology is therefore essential to make this algorithm practical for members of the GENIE project.

  17. An Optimal Power Flow (OPF) Method with Improved Power System Stability

    DEFF Research Database (Denmark)

    Su, Chi; Chen, Zhe

    2010-01-01

    This paper proposes an optimal power flow (OPF) method taking small-signal stability into account as an additional constraint. A particle swarm optimization (PSO) algorithm is adopted to realize the OPF process. The method is programmed in MATLAB and applied to a nine-bus test power system with large-scale wind power integration. The results show the ability of the proposed method to find optimal (or near-optimal) operating points in different cases. Based on these results, the impacts of wind power integration on the system's small-signal stability are analyzed.
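
    The PSO machinery referred to above reduces, in its simplest form, to the particle update loop sketched below; the quadratic "generation cost" and the power-balance penalty are placeholders standing in for the full OPF objective and its small-signal stability constraints.

        import numpy as np

        rng = np.random.default_rng(1)

        def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            """Generic particle swarm optimizer for box-constrained minimization."""
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(bounds)
            x = rng.uniform(lo, hi, size=(n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([f(p) for p in x])
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], val[improved]
                g = pbest[pbest_val.argmin()].copy()
            return g, pbest_val.min()

        # Placeholder "generation cost" with a penalty for violating a power-balance
        # constraint x1 + x2 = 10 (a stand-in for the real OPF constraints).
        def cost(x):
            gen_cost = 0.1 * x[0] ** 2 + 0.2 * x[1] ** 2 + 2 * x[0] + x[1]
            penalty = 1e3 * (x[0] + x[1] - 10.0) ** 2
            return gen_cost + penalty

        best_x, best_f = pso_minimize(cost, bounds=[(0, 10), (0, 10)])
        print("best dispatch:", best_x, "cost:", best_f)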

  18. Optimal allocation of resources in systems

    International Nuclear Information System (INIS)

    Derman, C.; Lieberman, G.J.; Ross, S.M.

    1975-01-01

    In the design of a new system, or the maintenance of an old system, allocation of resources is of prime consideration. In allocating resources it is often beneficial to develop a solution that yields an optimal value of the system measure of desirability. In the context of the problems considered in this paper the resources to be allocated are components already produced (assembly problems) and money (allocation in the construction or repair of systems). The measure of desirability for system assembly will usually be maximizing the expected number of systems that perform satisfactorily and the measure in the allocation context will be maximizing the system reliability. Results are presented for these two types of general problems in both a sequential (when appropriate) and non-sequential context

  19. Optimization Design of Multi-Parameters in Rail Launcher System

    Directory of Open Access Journals (Sweden)

    Yujiao Zhang

    2014-05-01

    Today's energy storage systems are still cumbersome, so it is useful to think about optimizing a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering five parameters is proposed to improve the energy conversion efficiency of a simple railgun. To avoid costly trials, the field-circuit method is employed to analyze the operation of railguns with different structures and parameters, and the orthogonal test approach is used to guide the simulations, selecting better parameter combinations while reducing the computational cost. The research shows that the proposed method improves the energy efficiency of the system. To improve the energy conversion efficiency of electromagnetic rail launchers, more parameters must be considered at the design stage, such as the width, height, and length of the rails, the distance between the rail pair, and the pulse-forming inductance. However, the relationship between these parameters and the energy conversion efficiency cannot be described directly by a single mathematical expression, so optimization methods must be applied in the design. In this paper, a rail launcher with five parameters was optimized using the orthogonal test method: following the arrangement of the orthogonal table, good parameter combinations can be identified with far fewer calculations. Field and circuit simulations were carried out for the different parameter values, and the results show that the energy conversion efficiency of the system is increased by 71.9% after parameter optimization.
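
    The core idea of the orthogonal test can be illustrated by constructing a two-level orthogonal array from XOR combinations of three base columns: 8 runs cover up to seven factors such that every pair of columns is balanced, instead of the 2^5 = 32 runs a full factorial over five launcher parameters would need. The construction and the parameter names below are generic placeholders, not the authors' specific table or levels.

        from itertools import product

        # Build an L8(2^7) orthogonal array: 8 runs, up to 7 two-level factors.
        # Columns are the three base bits a, b, c plus all their XOR interactions;
        # any two columns are balanced (each of the 4 level pairs appears twice).
        runs = []
        for a, b, c in product((0, 1), repeat=3):
            runs.append((a, b, c, a ^ b, a ^ c, b ^ c, a ^ b ^ c))

        # Assign five hypothetical launcher parameters to the first five columns
        # (two candidate levels each, e.g. rail width, rail height, rail length,
        # rail separation, pulse-forming inductance).
        factor_names = ["width", "height", "length", "separation", "inductance"]
        for run in runs:
            settings = {name: ("low", "high")[level] for name, level in zip(factor_names, run)}
            print(settings)

        print(f"{len(runs)} runs instead of {2 ** len(factor_names)} full-factorial runs")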

  20. Linear Optimization of Frequency Spectrum Assignments Across System

    Science.gov (United States)

    2016-03-01

    Report keywords: selection tools, frequency allocation, transmission optimization, electromagnetic maneuver warfare, electronic protection, assignment model.

  1. Proper Motions of Dwarf Spheroidal Galaxies from Hubble Space Telescope Imaging. IV. Measurement for Sculptor

    Science.gov (United States)

    Piatek, Slawomir; Pryor, Carlton; Bristow, Paul; Olszewski, Edward W.; Harris, Hugh C.; Mateo, Mario; Minniti, Dante; Tinney, Christopher G.

    2006-03-01

    This article presents a measurement of the proper motion of the Sculptor dwarf spheroidal galaxy determined from images taken with the Hubble Space Telescope using the Space Telescope Imaging Spectrograph in the imaging mode. Each of two distinct fields contains a quasi-stellar object that serves as the "reference point." The measured proper motion of Sculptor, expressed in the equatorial coordinate system, is (μ_α, μ_δ) = (9 ± 13, 2 ± 13) mas century^-1. Removing the contributions from the motion of the Sun and the motion of the local standard of rest produces the proper motion in the Galactic rest frame: (μ_α^Grf, μ_δ^Grf) = (−23 ± 13, 45 ± 13) mas century^-1. The implied space velocity with respect to the Galactic center has a radial component of V_r = 79 ± 6 km s^-1 and a tangential component of V_t = 198 ± 50 km s^-1. Integrating the motion of Sculptor in a realistic potential for the Milky Way produces orbital elements. The perigalacticon and apogalacticon are 68 (31, 83) and 122 (97, 313) kpc, respectively, where the values in the parentheses represent the 95% confidence interval derived from Monte Carlo experiments. The eccentricity of the orbit is 0.29 (0.26, 0.60), and the orbital period is 2.2 (1.5, 4.9) Gyr. Sculptor is on a polar orbit around the Milky Way: the angle of inclination is 86° (83°, 90°). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
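
    The tangential velocity quoted above follows from the standard conversion v_t = 4.74 μ d, with μ in arcsec per year and d in parsecs. The heliocentric distance used below (~84 kpc) is an illustrative value for Sculptor chosen for this sketch, not a number taken from the record.

        import math

        # Galactic-rest-frame proper motion components from the record, in mas/century.
        mu_alpha, mu_delta = -23.0, 45.0
        mu_total = math.hypot(mu_alpha, mu_delta)          # mas per century

        # Convert to arcsec/yr and apply v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc].
        mu_arcsec_per_yr = mu_total / 100.0 / 1000.0
        distance_pc = 84_000.0                             # illustrative Sculptor distance
        v_t = 4.74 * mu_arcsec_per_yr * distance_pc
        print(f"v_t ~ {v_t:.0f} km/s")                     # ~200 km/s, cf. 198 +/- 50 above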

  2. Housing Development Building Management System (HDBMS For Optimized Electricity Bills

    Directory of Open Access Journals (Sweden)

    Weixian Li

    2017-08-01

    A smart building is a modern building that offers residents sustainable comfort with highly efficient electricity usage. These objectives can be achieved by applying appropriate, capable optimization algorithms and techniques. This paper presents a Housing Development Building Management System (HDBMS) strategy, inspired by the Building Energy Management System (BEMS) concept, that integrates with smart buildings using Supply Side Management (SSM) and Demand Side Management (DSM). HDBMS is a Multi-Agent System (MAS) based decentralized decision-making system, as proposed by various authors. The MAS-based HDBMS was created using Java on an IEEE FIPA compliant multi-agent platform named JADE. It allows agents to communicate, interact, and negotiate over the energy supply and demand of the smart buildings to provide optimal energy usage and minimal electricity costs. This reduces the load on the power distribution system in smart buildings, and simulation studies have shown the potential of the proposed HDBMS strategy to provide an optimal solution for smart building energy management.

  3. Optimal Sensor Selection for Health Monitoring Systems

    Science.gov (United States)

    Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.

    2005-01-01

    Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.

  4. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    Science.gov (United States)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.

  5. Multi-objective optimization of linear multi-state multiple sliding window system

    International Nuclear Information System (INIS)

    Konak, Abdullah; Kulturel-Konak, Sadan; Levitin, Gregory

    2012-01-01

    This paper considers the optimal element sequencing in a linear multi-state multiple sliding window system that consists of n linearly ordered multi-state elements. Each multi-state element can have different states, from complete failure up to perfect functioning, and a performance rate is associated with each state. A failure of type i (1 ≤ i ≤ I) occurs in the system if the cumulative performance of any r_i consecutive elements is lower than w_i. The element sequence strongly affects the probability of any type of system failure: the sequence that minimizes the probability of one type of failure can yield a high probability of other failure types. Therefore the optimization problem for the multiple sliding window system is essentially multi-objective. The paper formulates and solves the multi-objective optimization problem for multiple sliding window systems. A multi-objective Genetic Algorithm is used as the optimization engine. Illustrative examples are presented.
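
    The failure criterion defined above (cumulative performance of any r_i consecutive elements falling below w_i) is easy to evaluate by Monte Carlo for a given element sequence, as in the sketch below; the element state distributions and failure types are made up, and the paper's exact evaluation and multi-objective GA are not reproduced.

        import random

        random.seed(0)

        # Hypothetical multi-state elements: each element's performance rate is drawn
        # from its own discrete distribution {rate: probability}.
        elements = [
            {0: 0.1, 5: 0.4, 10: 0.5},
            {0: 0.2, 4: 0.5, 8: 0.3},
            {0: 0.1, 6: 0.6, 12: 0.3},
            {0: 0.1, 5: 0.5, 10: 0.4},
            {0: 0.2, 3: 0.4, 9: 0.4},
        ]

        # Failure types: (r_i, w_i) -- failure i occurs if the summed performance of
        # some r_i consecutive elements (in the chosen sequence) is below w_i.
        failure_types = [(2, 8), (3, 15)]

        def sample_state(dist):
            u, acc = random.random(), 0.0
            for rate, p in dist.items():
                acc += p
                if u <= acc:
                    return rate
            return rate

        def failure_probabilities(sequence, trials=100_000):
            counts = [0] * len(failure_types)
            for _ in range(trials):
                perf = [sample_state(elements[j]) for j in sequence]
                for i, (r, w) in enumerate(failure_types):
                    if any(sum(perf[k:k + r]) < w for k in range(len(perf) - r + 1)):
                        counts[i] += 1
            return [c / trials for c in counts]

        # Two candidate element orderings -- the sequence changes the failure probabilities.
        print(failure_probabilities([0, 1, 2, 3, 4]))
        print(failure_probabilities([1, 4, 0, 2, 3]))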

  6. Joint Transmitter-Receiver Optimization in the Downlink CDMA Systems

    Directory of Open Access Journals (Sweden)

    Mohammad Saquib

    2002-08-01

    To maximize the downlink code-division multiple access (CDMA) system capacity, we propose to minimize the total transmitted power of the system subject to users' signal-to-interference ratio (SIR) requirements via designing optimum transmitter sequences and utilizing linear optimum receivers (minimum mean square error (MMSE) receivers). In our work on joint transmitter-receiver design for downlink CDMA systems with multiple antennas and multipath channels, we develop several optimization algorithms by considering various system constraints and prove their convergence. We empirically observed that under the optimization algorithm with no constraint on the system, the optimum receiver structure matches the received transmitter sequences. A simulation study is performed to see how the different practical system constraints penalize the system with respect to the optimum algorithm with no constraint on the system.
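
    A single-cell, synchronous-CDMA illustration of the linear MMSE receiver named above: the detector for user k is proportional to R^{-1} s_k, with R the covariance matrix of the received vector. The spreading codes, amplitudes, and noise level below are made up, and the transmitter-side sequence optimization from the paper is not included.

        import numpy as np

        rng = np.random.default_rng(2)

        N, K = 16, 4                       # spreading gain, number of users
        S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # random unit-norm codes
        amps = np.array([1.0, 0.8, 1.2, 0.9])                   # received amplitudes
        sigma2 = 0.05                                            # noise variance

        # Covariance of the received vector r = S diag(amps) b + n  (b = +/-1 symbols).
        R = S @ np.diag(amps ** 2) @ S.T + sigma2 * np.eye(N)

        # Linear MMSE detector for user 0 (up to a scalar): w0 = R^{-1} s0.
        w0 = np.linalg.solve(R, S[:, 0])

        # Output SIR for user 0: desired power over interference-plus-noise power.
        desired = (amps[0] * w0 @ S[:, 0]) ** 2
        interference = sum((amps[k] * w0 @ S[:, k]) ** 2 for k in range(1, K))
        noise = sigma2 * (w0 @ w0)
        print("output SIR (dB):", 10 * np.log10(desired / (interference + noise)))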

  7. An Evolutionary Approach for Optimizing Hierarchical Multi-Agent System Organization

    OpenAIRE

    Shen, Zhiqi; Yu, Ling; Yu, Han

    2014-01-01

    It has been widely recognized that the performance of a multi-agent system is highly affected by its organization. A large scale system may have billions of possible ways of organization, which makes it impractical to find an optimal choice of organization using exhaustive search methods. In this paper, we propose a genetic algorithm aided optimization scheme for designing hierarchical structures of multi-agent systems. We introduce a novel algorithm, called the hierarchical genetic algorithm...

  8. Power Consumption in Refrigeration Systems - Modeling for Optimization

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Skovrup, Morten Juel

    2011-01-01

    Refrigeration systems consume a substantial amount of energy; supermarket refrigeration systems, for instance, can account for up to 50-80% of the total energy consumption of a supermarket. Due to the thermal capacity of the refrigerated goods in the system, there is a possibility for optimizing the power consumption by utilizing load-shifting strategies. This paper describes the dynamics and the modeling of a vapor compression refrigeration system needed for sufficiently realistic estimation of the power consumption and its minimization. This leads to a non-convex function with possibly multiple extrema. Such a function cannot be optimized directly by standard methods, and a qualitative analysis of the system's constraints is presented. The description of the power consumption contains nonlinear terms, which are approximated by linear functions in the control variables, and the error ...

  9. Applied optimal control theory of distributed systems

    CERN Document Server

    Lurie, K A

    1993-01-01

    This book represents an extended and substantially revised version of my earlier book, Optimal Control in Problems of Mathematical Physics, originally published in Russian in 1975. About 60% of the text has been completely revised and major additions have been included which have produced a practically new text. My aim was to modernize the presentation but also to preserve the original results, some of which are little known to a Western reader. The idea of composites, which is the core of the modern theory of optimization, was initiated in the early seventies. The reader will find here its implementation in the problem of optimal conductivity distribution in an MHD-generator channel flow. Since then it has emerged into an extensive theory which is undergoing a continuous development. The book does not pretend to be a textbook, neither does it offer a systematic presentation of the theory. Rather, it reflects a concept which I consider as fundamental in the modern approach to optimization of distributed systems. ...

  10. Multi-objective optimization of an underwater compressed air energy storage system using genetic algorithm

    International Nuclear Information System (INIS)

    Cheung, Brian C.; Carriveau, Rupp; Ting, David S.K.

    2014-01-01

    This paper presents the findings from a multi-objective genetic algorithm optimization study on the design parameters of an underwater compressed air energy storage system (UWCAES). A 4 MWh UWCAES system was numerically simulated and its energy, exergy, and exergoeconomics were analysed. Optimal system configurations were determined that maximized the UWCAES system round-trip efficiency and operating profit, and minimized the cost rate of exergy destruction and capital expenditures. The optimal solutions obtained from the multi-objective optimization model formed a Pareto-optimal front, and a single preferred solution was selected using the pseudo-weight vector multi-criteria decision making approach. A sensitivity analysis was performed on interest rates to gauge its impact on preferred system designs. Results showed similar preferred system designs for all interest rates in the studied range. The round-trip efficiency and operating profit of the preferred system designs were approximately 68.5% and $53.5/cycle, respectively. The cost rate of the system increased with interest rates. - Highlights: • UWCAES system configurations were developed using multi-objective optimization. • System was optimized for energy efficiency, exergy, and exergoeconomics • Pareto-optimal solution surfaces were developed at different interest rates. • Similar preferred system configurations were found at all interest rates studied

  11. Semidefinite Relaxation-Based Optimization of Multiple-Input Wireless Power Transfer Systems

    Science.gov (United States)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-11-01

    An optimization procedure for multi-transmitter (MISO) wireless power transfer (WPT) systems based on tight semidefinite relaxation (SDR) is presented. This method ensures physical realizability of MISO WPT systems designed via convex optimization -- a robust, semi-analytical and intuitive route to optimizing such systems. To that end, the nonconvex constraints requiring that power is fed into rather than drawn from the system via all transmitter ports are incorporated in a convex semidefinite relaxation, which is efficiently and reliably solvable by dedicated algorithms. A test of the solution then confirms that this modified problem is equivalent (tight relaxation) to the original (nonconvex) one and that the true global optimum has been found. This is a clear advantage over global optimization methods (e.g. genetic algorithms), where convergence to the true global optimum cannot be ensured or tested. Discussions of numerical results yielded by both the closed-form expressions and the refined technique illustrate the importance and practicability of the new method. It is shown that this technique offers a rigorous optimization framework for a broad range of current and emerging WPT applications.

  12. Optimization Design of Multi-Parameters in Rail Launcher System

    OpenAIRE

    Yujiao Zhang; Weinan Qin; Junpeng Liao; Jiangjun Ruan

    2014-01-01

    Today the energy storage systems are still encumbering, therefore it is useful to think about the optimization of a railgun system in order to achieve the best performance with the lowest energy input. In this paper, an optimal design method considering 5 parameters is proposed to improve the energy conversion efficiency of a simple railgun. In order to avoid costly trials, the field- circuit method is employed to analyze the operations of different structural railguns with different paramete...

  13. Economic Optimization of Component Sizing for Residential Battery Storage Systems

    Directory of Open Access Journals (Sweden)

    Holger C. Hesse

    2017-06-01

    Battery energy storage systems (BESS) coupled with rooftop-mounted residential photovoltaic (PV) generation, designated as PV-BESS, draw increasing attention and market penetration as more and more such systems become available. The manifold BESS deployed to date rely on a variety of different battery technologies, show a great variation of battery size, and power electronics dimensioning. However, given today's high investment costs of BESS, a well-matched design and adequate sizing of the storage systems are prerequisites to allow profitability for the end-user. The economic viability of a PV-BESS depends also on the battery operation, storage technology, and aging of the system. In this paper, a general method for comprehensive PV-BESS techno-economic analysis and optimization is presented and applied to the state-of-art PV-BESS to determine its optimal parameters. Using a linear optimization method, a cost-optimal sizing of the battery and power electronics is derived based on solar energy availability and local demand. At the same time, the power flow optimization reveals the best storage operation patterns considering a trade-off between energy purchase, feed-in remuneration, and battery aging. Using up to date technology-specific aging information and the investment cost of battery and inverter systems, three mature battery chemistries are compared; a lead-acid (PbA) system and two lithium-ion systems, one with lithium-iron-phosphate (LFP) and another with lithium-nickel-manganese-cobalt (NMC) cathode. The results show that different storage technology and component sizing provide the best economic performances, depending on the scenario of load demand and PV generation.
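
    The economics above ultimately rest on simulating how a battery of a given size shifts energy between PV surplus and household demand. The toy greedy self-consumption dispatch below, with made-up hourly profiles, prices, and a 90% round-trip efficiency, only illustrates that bookkeeping; it is far simpler than the paper's linear optimization and aging models.

        # Toy 24-hour self-consumption dispatch for a residential PV-battery system.
        # All profiles, prices and efficiencies below are illustrative placeholders.
        pv =   [0,0,0,0,0,0,0.2,0.8,1.5,2.2,2.8,3.0,3.0,2.7,2.2,1.5,0.8,0.2,0,0,0,0,0,0]  # kW
        load = [0.3]*6 + [0.8,1.0,0.6,0.5,0.5,0.6,0.6,0.5,0.5,0.6,1.0,1.5,1.8,1.5,1.0,0.8,0.5,0.3]  # kW
        price_buy, price_sell = 0.30, 0.10      # EUR/kWh
        capacity_kwh, eff = 6.0, 0.90           # usable capacity, round-trip efficiency

        soc, cost = 0.0, 0.0
        for p, l in zip(pv, load):
            net = p - l                                   # kWh surplus (+) or deficit (-) this hour
            if net >= 0:                                  # charge with surplus, sell the rest
                charge = min(net * eff ** 0.5, capacity_kwh - soc)
                soc += charge
                cost -= (net - charge / eff ** 0.5) * price_sell
            else:                                         # discharge first, buy the rest
                discharge = min(-net / eff ** 0.5, soc)
                soc -= discharge
                cost += (-net - discharge * eff ** 0.5) * price_buy

        print(f"daily electricity cost with battery: {cost:.2f} EUR")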

  14. Multi-objective optimization of the reactor coolant system

    International Nuclear Information System (INIS)

    Chen Lei; Yan Changqi; Wang Jianjun

    2014-01-01

    Background: Weight and size are important criteria in evaluating the performance of a nuclear power plant. It is of great theoretical value and engineering significance to reduce the weight and volume of the components of a nuclear power plant through optimization methodology. Purpose: In order to provide a new method for multi-objective optimization of nuclear power plants, the concept of the non-dominated solution was introduced. Methods: Based on the parameters of the Qinshan I nuclear power plant, mathematical models of the reactor core, the reactor vessel, the main pipe, the pressurizer and the steam generator were built and verified. Sensitivity analyses were carried out to study the influences of the design variables on the objectives. A modified non-dominated sorting genetic algorithm was proposed and employed to optimize the weight and the volume of the reactor coolant system. Results: The results show that the component mathematical models are reliable, the modified non-dominated sorting genetic algorithm is effective, and the reactor inlet temperature is the most important variable influencing the distribution of the non-dominated solutions. Conclusion: The optimization results could provide a reference for the design of such reactor coolant systems. (authors)

  15. Optimal Control of Hybrid Systems in Air Traffic Applications

    Science.gov (United States)

    Kamgarpour, Maryam

    Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient

  16. A new hybrid optimization algorithm CRO-DE for optimal coordination of overcurrent relays in complex power systems

    Directory of Open Access Journals (Sweden)

    Mohamed Zellagui

    2017-09-01

    The paper presents a new hybrid global optimization algorithm based on Chemical Reaction based Optimization (CRO) and Differential Evolution (DE) for nonlinear constrained optimization problems. The approach is proposed for the optimal coordination and setting of directional overcurrent relays in complex power systems. In the protection coordination problem, the objective function to be minimized is the sum of the operating times of all main relays. The optimization problem is subject to a number of constraints, mainly focused on the operation of the backup relay, which should operate if a primary relay fails to respond to a fault near it, the Time Dial Setting (TDS), the Plug Setting (PS), and the minimum operating time of a relay. The proposed hybrid global optimization algorithm aims to minimize the total operating time of the protection relays. Two systems, the IEEE 4-bus and IEEE 6-bus models, are used as case studies to check the efficiency of the optimization algorithm. Results are obtained and presented for the CRO, DE, and hybrid CRO-DE algorithms. The results for the studied cases are compared with those obtained using other optimization algorithms, namely Teaching Learning-Based Optimization (TLBO), the Chaotic Differential Evolution Algorithm (CDEA), the Modified Differential Evolution Algorithm (MDEA), and hybrid algorithms (PSO-DE, IA-PSO, and BFOA-PSO). From the analysis of the obtained results, it is concluded that the hybrid CRO-DE algorithm provides the best solution with the best convergence rate.
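
    A common inverse-time characteristic in such coordination studies is the IEC standard-inverse curve t = TDS * 0.14 / (M^0.02 - 1), with M the ratio of fault current to pickup current. The sketch below evaluates the sum of primary operating times and a backup coordination margin for made-up settings and fault currents; it does not implement the CRO-DE search itself.

        def iec_standard_inverse(tds, i_fault, i_pickup):
            """IEC standard-inverse overcurrent relay operating time (seconds)."""
            m = i_fault / i_pickup                 # plug-setting multiplier
            return tds * 0.14 / (m ** 0.02 - 1.0)

        # Hypothetical primary/backup relay pairs: (TDS_p, PS_p, TDS_b, PS_b, I_fault).
        pairs = [
            (0.10, 1.0, 0.20, 1.2, 8.0),
            (0.12, 1.2, 0.25, 1.5, 10.0),
            (0.08, 1.0, 0.18, 1.0, 6.0),
        ]
        cti = 0.3                                   # coordination time interval (s)

        total_primary_time = 0.0
        for tds_p, ps_p, tds_b, ps_b, i_f in pairs:
            t_p = iec_standard_inverse(tds_p, i_f, ps_p)
            t_b = iec_standard_inverse(tds_b, i_f, ps_b)
            total_primary_time += t_p
            ok = t_b - t_p >= cti                   # backup must wait at least one CTI
            print(f"primary {t_p:.3f}s  backup {t_b:.3f}s  coordinated: {ok}")

        print(f"objective (sum of primary times): {total_primary_time:.3f} s")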

  17. Thermal resistance analysis and optimization of photovoltaic-thermoelectric hybrid system

    International Nuclear Information System (INIS)

    Yin, Ershuai; Li, Qiang; Xuan, Yimin

    2017-01-01

    Highlights: • A detailed thermal resistance analysis of the PV-TE hybrid system is proposed. • c-Si PV and p-Si PV cells are proved to be inapplicable for the PV-TE hybrid system. • Some criteria for selecting coupling devices and optimal design are obtained. • A detailed process of designing the practical PV-TE hybrid system is provided. - Abstract: The thermal resistance theory is introduced into the theoretical model of the photovoltaic-thermoelectric (PV-TE) hybrid system. A detailed thermal resistance analysis is proposed to optimize the design of the coupled system in terms of optimal total conversion efficiency. Systems using four types of photovoltaic cells are investigated, including monocrystalline silicon photovoltaic cell, polycrystalline silicon photovoltaic cell, amorphous silicon photovoltaic cell and polymer photovoltaic cell. Three cooling methods, including natural cooling, forced air cooling and water cooling, are compared, which demonstrates a significant superiority of water cooling for the concentrating photovoltaic-thermoelectric hybrid system. Influences of the optical concentrating ratio and velocity of water are studied together and the optimal values are revealed. The impacts of the thermal resistances of the contact surface, TE generator and the upper heat loss thermal resistance on the property of the coupled system are investigated, respectively. The results indicate that amorphous silicon PV cell and polymer PV cell are more appropriate for the concentrating hybrid system. Enlarging the thermal resistance of the thermoelectric generator can significantly increase the performance of the coupled system using amorphous silicon PV cell or polymer PV cell.

  18. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  19. Investigation of Cost and Energy Optimization of Drinking Water Distribution Systems.

    Science.gov (United States)

    Cherchi, Carla; Badruzzaman, Mohammad; Gordon, Matthew; Bunn, Simon; Jacangelo, Joseph G

    2015-11-17

    Holistic management of water and energy resources through energy and water quality management systems (EWQMSs) have traditionally aimed at energy cost reduction with limited or no emphasis on energy efficiency or greenhouse gas minimization. This study expanded the existing EWQMS framework and determined the impact of different management strategies for energy cost and energy consumption (e.g., carbon footprint) reduction on system performance at two drinking water utilities in California (United States). The results showed that optimizing for cost led to cost reductions of 4% (Utility B, summer) to 48% (Utility A, winter). The energy optimization strategy was successfully able to find the lowest energy use operation and achieved energy usage reductions of 3% (Utility B, summer) to 10% (Utility A, winter). The findings of this study revealed that there may be a trade-off between cost optimization (dollars) and energy use (kilowatt-hours), particularly in the summer, when optimizing the system for the reduction of energy use to a minimum incurred cost increases of 64% and 184% compared with the cost optimization scenario. Water age simulations through hydraulic modeling did not reveal any adverse effects on the water quality in the distribution system or in tanks from pump schedule optimization targeting either cost or energy minimization.

  20. Organization of an optimal adaptive immune system

    Science.gov (United States)

    Walczak, Aleksandra; Mayer, Andreas; Balasubramanian, Vijay; Mora, Thierry

    The repertoire of lymphocyte receptors in the adaptive immune system protects organisms from a diverse set of pathogens. A well-adapted repertoire should be tuned to the pathogenic environment to reduce the cost of infections. I will discuss a general framework for predicting the optimal repertoire that minimizes the cost of infections contracted from a given distribution of pathogens. The theory predicts that the immune system will have more receptors for rare antigens than expected from the frequency of encounters and individuals exposed to the same infections will have sparse repertoires that are largely different, but nevertheless exploit cross-reactivity to provide the same coverage of antigens. I will show that the optimal repertoires can be reached by dynamics that describes the competitive binding of antigens by receptors, and selective amplification of stimulated receptors.

  1. Implementing of the multi-objective particle swarm optimizer and fuzzy decision-maker in exergetic, exergoeconomic and environmental optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn; Babaie, Meisam; Farmani, Mohammad Reza

    2011-01-01

    Multi-objective optimization for the design of a benchmark cogeneration system, the CGAM cogeneration system, is performed. In the optimization approach, exergetic, exergoeconomic, and environmental objectives are considered simultaneously. In this regard, the set of Pareto optimal solutions known as the Pareto frontier is obtained using the MOPSO (multi-objective particle swarm optimizer). The exergetic efficiency, as the exergetic objective, is maximized, while the unit cost of the system product and the cost of the environmental impact, as the exergoeconomic and environmental objectives respectively, are minimized. The economic model utilized in the exergoeconomic analysis is built on both the simple model (used in the original studies of the CGAM system) and the comprehensive TTR (total revenue requirement) method (used in sophisticated exergoeconomic analysis). Finally, a final optimal solution is selected from the Pareto frontier using a fuzzy decision-making process based on the Bellman-Zadeh approach, and the results are compared with the corresponding results obtained with a traditional decision-making process. Further, results are compared with the corresponding performance of the base case CGAM system and optimal designs from previous works and discussed. -- Highlights: → A multi-objective optimization approach has been implemented in the optimization of a benchmark cogeneration system. → Objective functions based on environmental impact evaluation, thermodynamic and economic analysis are obtained and optimized. → A particle swarm optimizer is implemented and its robustness is compared with NSGA-II. → A final optimal configuration is found using various decision-making approaches. → Results are compared with previous works in the field.

  2. Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project

    Science.gov (United States)

    Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)

    2001-01-01

    The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves that by two techniques in the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere combined with response surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code and the results are stored in a database. The optimization code is then executed with reference to this database. Two applications, one to an airborne laser system and one to an aircraft optimization, illustrate the method.

  3. Distributed Optimization for a Class of Nonlinear Multiagent Systems With Disturbance Rejection.

    Science.gov (United States)

    Wang, Xinghu; Hong, Yiguang; Ji, Haibo

    2016-07-01

    The paper studies the distributed optimization problem for a class of nonlinear multiagent systems in the presence of external disturbances. To solve the problem, we need to achieve the optimal multiagent consensus based on local cost function information and neighboring information and meanwhile to reject local disturbance signals modeled by an exogenous system. With convex analysis and the internal model approach, we propose a distributed optimization controller for heterogeneous and nonlinear agents in the form of continuous-time minimum-phase systems with unity relative degree. We prove that the proposed design can solve the exact optimization problem with rejecting disturbances.

  4. The Carnegie–Chicago Hubble Program. III. The Distance to NGC 1365 via the Tip of the Red Giant Branch

    Science.gov (United States)

    Jang, In Sung; Hatt, Dylan; Beaton, Rachael L.; Lee, Myung Gyoon; Freedman, Wendy L.; Madore, Barry F.; Hoyt, Taylor J.; Monson, Andrew J.; Rich, Jeffrey A.; Scowcroft, Victoria; Seibert, Mark

    2018-01-01

    The Carnegie–Chicago Hubble Program (CCHP) seeks to anchor the distance scale of Type Ia supernovae via the Tip of the Red Giant Branch (TRGB) method. Based on deep Hubble Space Telescope ACS/WFC imaging, we present an analysis of the TRGB for the metal-poor halo of NGC 1365, a giant spiral galaxy in the Fornax cluster that was host to the Type Ia supernova SN 2012fr. We have measured the extinction-corrected TRGB magnitude of NGC 1365 to be F814W = 27.34 ± 0.03 (stat) ± 0.04 (sys) mag. In advance of future direct calibration by Gaia, we adopt a provisional I-band TRGB luminosity set at the Large Magellanic Cloud and find a true distance modulus μ_0 = 31.29 ± 0.04 (stat) ± 0.06 (sys) mag, or D = 18.1 ± 0.3 (stat) ± 0.5 (sys) Mpc. This measurement is in excellent agreement with recent Cepheid-based distances to NGC 1365 and reveals no significant difference in the distances derived from stars of Populations I and II for this galaxy. We revisit the error budget for the CCHP path to the Hubble constant based on the analysis presented here, i.e., that for one of the most distant Type Ia supernova hosts within our Program, and find that a 2.5% measurement is feasible with the current sample of galaxies and TRGB absolute calibration. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program #13691.
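
    The step from the measured tip magnitude to a distance is the distance modulus relation; the absolute tip magnitude below (about −3.95 in F814W) is an illustrative calibration value chosen so the sketch reproduces the modulus quoted above, not the paper's adopted zero point.

        # Extinction-corrected TRGB apparent magnitude from the record.
        m_trgb = 27.34
        # Illustrative I-band / F814W absolute magnitude of the tip (calibration value).
        M_trgb = -3.95

        mu = m_trgb - M_trgb                   # distance modulus
        d_mpc = 10 ** ((mu - 25.0) / 5.0)      # mu = 5 log10(D/Mpc) + 25
        print(f"mu = {mu:.2f} mag, D = {d_mpc:.1f} Mpc")   # ~31.29 mag, ~18.1 Mpc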

  5. Long-term optimization of cogeneration systems in a competitive market environment

    International Nuclear Information System (INIS)

    Thorin, E.; Brand, H.; Weber, C.

    2005-01-01

    A tool for long-term optimization of cogeneration systems is developed that is based on mixed-integer linear programming and Lagrangian relaxation. We use a general approach without heuristics to solve the combined unit commitment and load dispatch optimization problem. The possibility to buy and sell electric power at a spot market is considered, as well as the possibility to provide secondary reserve. The tool has been tested on a demonstration system based on an existing combined heat-and-power (CHP) system with extraction-condensing steam turbines, gas turbines, and boilers for heat production and district-heating networks. The key feature of the model for obtaining solutions within reasonable times is a suitable division of the whole optimization period into overlapping sub-periods. Using Lagrangian relaxation, the tool can be applied to large CHP systems. For the demonstration model, almost optimal solutions were found. (author)

  6. Distributed Optimization Design of Continuous-Time Multiagent Systems With Unknown-Frequency Disturbances.

    Science.gov (United States)

    Wang, Xinghu; Hong, Yiguang; Yi, Peng; Ji, Haibo; Kang, Yu

    2017-05-24

    In this paper, a distributed optimization problem is studied for continuous-time multiagent systems with unknown-frequency disturbances. A distributed gradient-based control is proposed for the agents to achieve the optimal consensus with estimating unknown frequencies and rejecting the bounded disturbance in the semi-global sense. Based on convex optimization analysis and adaptive internal model approach, the exact optimization solution can be obtained for the multiagent system disturbed by exogenous disturbances with uncertain parameters.

  7. Optimal Scheduling of Residential Microgrids Considering Virtual Energy Storage System

    Directory of Open Access Journals (Sweden)

    Weiliang Liu

    2018-04-01

    The increasingly complex residential microgrids (r-microgrids), consisting of renewable generation, energy storage systems, and residential buildings, require a more intelligent scheduling method. Firstly, aiming at the radiant floor heating/cooling system widely utilized in residential buildings, the mathematical relationship between the operative temperature and the heating/cooling demand is established based on the equivalent thermodynamic parameters (ETP) model, by which the thermal storage capacity is analyzed. Secondly, the radiant floor heating/cooling system is treated as a virtual energy storage system (VESS), and an optimization model based on mixed-integer nonlinear programming (MINLP) for r-microgrid scheduling is established which takes thermal comfort level and economy as the optimization objectives. Finally, the optimal scheduling results of two typical r-microgrids are analyzed. Case studies demonstrate that the proposed scheduling method can effectively employ the thermal storage capacity of the radiant floor heating/cooling system, thus lowering the operating cost of the r-microgrid effectively while ensuring the thermal comfort level of users.
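
    A one-node caricature of the ETP idea treats the heated space as a single thermal capacitance C behind a resistance R to the outdoor temperature, C dT/dt = (T_out - T)/R + Q. The parameters and the "pre-heat then coast" profile below are made up, and the full VESS scheduling MINLP is not reproduced.

        # One-node ETP-style thermal model: C * dT/dt = (T_out - T) / R + Q_heat.
        R = 2.0          # thermal resistance, degC per kW (illustrative)
        C = 10.0         # thermal capacitance, kWh per degC (illustrative)
        dt = 1.0         # time step, hours

        T = 21.0                                         # initial operative temperature, degC
        T_out = [2, 1, 1, 0, 0, 1, 3, 5, 7, 9, 10, 10]   # outdoor temperature, degC
        Q_heat = [3, 3, 3, 3, 0, 0, 0, 0, 2, 2, 0, 0]    # heating power, kW (pre-heat then coast)

        for t_out, q in zip(T_out, Q_heat):
            T += dt / C * ((t_out - T) / R + q)
            print(f"T_out={t_out:>3} degC  Q={q} kW  T_room={T:.2f} degC")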

  8. Sub-optimal control of fuzzy linear dynamical systems under granular differentiability concept.

    Science.gov (United States)

    Mazandarani, Mehran; Pariz, Naser

    2018-05-01

    This paper deals with sub-optimal control of a fuzzy linear dynamical system. The aim is to keep the state variables of the fuzzy linear dynamical system close to zero in an optimal manner. In the fuzzy dynamical system, the fuzzy derivative is considered as the granular derivative; and all the coefficients and initial conditions can be uncertain. The criterion for assessing the optimality is regarded as a granular integral whose integrand is a quadratic function of the state variables and control inputs. Using the relative-distance-measure (RDM) fuzzy interval arithmetic and calculus of variations, the optimal control law is presented as the fuzzy state variables feedback. Since the optimal feedback gains are obtained as fuzzy functions, they need to be defuzzified. This will result in the sub-optimal control law. This paper also sheds light on the restrictions imposed by the approaches which are based on fuzzy standard interval arithmetic (FSIA), and use strongly generalized Hukuhara and generalized Hukuhara differentiability concepts for obtaining the optimal control law. The granular eigenvalues notion is also defined. Using an RLC circuit mathematical model, it is shown that, due to their unnatural behavior in the modeling phenomenon, the FSIA-based approaches may obtain some eigenvalues sets that might be different from the inherent eigenvalues set of the fuzzy dynamical system. This is, however, not the case with the approach proposed in this study. The notions of granular controllability and granular stabilizability of the fuzzy linear dynamical system are also presented in this paper. Moreover, a sub-optimal control for regulating a Boeing 747 in longitudinal direction with uncertain initial conditions and parameters is gained. In addition, an uncertain suspension system of one of the four wheels of a bus is regulated using the sub-optimal control introduced in this paper. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Optimization of a flow injection analysis system for multiple solvent extraction

    International Nuclear Information System (INIS)

    Rossi, T.M.; Shelly, D.C.; Warner, I.M.

    1982-01-01

    The performance of a multistage flow injection analysis solvent extraction system has been optimized. The effect of solvent segmentation devices, extraction coils, and phase separators on performance characteristics is discussed. Theoretical consideration is given to the effects and determination of dispersion and the extraction dynamics within both glass and Teflon extraction coils. The optimized system has a sample recovery similar to an identical manual procedure and a 1.5% relative standard deviation between injections. Sample throughput time is under 5 min. These characteristics represent significant improvements over the performance of the same system before optimization. 6 figures, 2 tables

  10. Parameter estimation for chaotic systems with a Drift Particle Swarm Optimization method

    International Nuclear Information System (INIS)

    Sun Jun; Zhao Ji; Wu Xiaojun; Fang Wei; Cai Yujie; Xu Wenbo

    2010-01-01

    Inspired by the motion of electrons in metal conductors in an electric field, we propose a variant of Particle Swarm Optimization (PSO), called Drift Particle Swarm Optimization (DPSO) algorithm, and apply it in estimating the unknown parameters of chaotic dynamic systems. The principle and procedure of DPSO are presented, and the algorithm is used to identify Lorenz system and Chen system. The experiment results show that for the given parameter configurations, DPSO can identify the parameters of the systems accurately and effectively, and it may be a promising tool for chaotic system identification as well as other numerical optimization problems in physics.
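
    A compact sketch of the idea, using plain PSO rather than the drift variant, is shown below; it estimates the Lorenz parameters by minimizing the mismatch to a reference trajectory, with swarm settings, bounds, and noise-free data all being illustrative assumptions:

      # Plain PSO (not the drift variant of the paper) estimating Lorenz parameters
      # (sigma, rho, beta) by minimizing squared trajectory mismatch; settings are assumed.
      import numpy as np

      rng = np.random.default_rng(0)
      true_p = np.array([10.0, 28.0, 8.0 / 3.0])
      dt, n_steps = 0.01, 200
      x0 = np.array([1.0, 1.0, 1.0])

      def simulate(p):
          s, r, b = p
          x, traj = x0.copy(), []
          for _ in range(n_steps):                  # explicit Euler, good enough here
              dx = np.array([s * (x[1] - x[0]),
                             x[0] * (r - x[2]) - x[1],
                             x[0] * x[1] - b * x[2]])
              x = x + dt * dx
              traj.append(x.copy())
          return np.array(traj)

      ref = simulate(true_p)
      cost = lambda p: np.mean((simulate(p) - ref) ** 2)

      lo, hi = np.array([0.0, 0.0, 0.0]), np.array([20.0, 50.0, 5.0])   # assumed bounds
      pos = rng.uniform(lo, hi, size=(30, 3))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([cost(p) for p in pos])
      gbest = pbest[pbest_f.argmin()].copy()

      for _ in range(80):
          r1, r2 = rng.random((30, 3)), rng.random((30, 3))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          f = np.array([cost(p) for p in pos])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = pos[improved], f[improved]
          gbest = pbest[pbest_f.argmin()].copy()

      print("estimated (sigma, rho, beta):", np.round(gbest, 3))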

  11. Expert systems and their use in augmenting design optimization

    Science.gov (United States)

    Kidwell, G. H.; Eskey, M. A.

    1985-01-01

    The challenging requirements that are evolving for future aircraft demand that each design be optimally integrated, for the penalties imposed by nonoptimal performance are significant. Classic numerical optimization algorithms have been and will continue to be important tools for aircraft designers. These methods are, however, limited to certain categories of aircraft design variables, leaving the remainder to be determined by the user. A method that makes use of knowledge-based expert systems offers the potential for aiding the conceptual design process in a way that is similar to that of numerical optimization, except that it would address discrete, discontinuous, abstract, or any other unoptimized aspect of vehicle design and integration. Other unique capabilities such as automatic discovery and learning in design may also be achievable in the near term. This paper discusses current practice in conceptual aircraft design and knowledge-based systems, and how knowledge-based systems can be used in conceptual design.

  12. Complex energy system management using optimization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bridgeman, Stuart; Hurdowar-Castro, Diana; Allen, Rick; Olason, Tryggvi; Welt, Francois

    2010-09-15

    Modern energy systems are often very complex with respect to the mix of generation sources, energy storage, transmission, and avenues to market. Historically, power was provided by government organizations to load centers, and pricing was provided in a regulatory manner. In recent years, this process has been displaced by the independent system operator (ISO). This complexity makes the operation of these systems very difficult, since the components of the system are interdependent. Consequently, computer-based large-scale simulation and optimization methods like Decision Support Systems are now being used. This paper discusses the application of a DSS to operations and planning systems.

  13. Dynamic optimization of distributed biological systems using robust and efficient numerical techniques.

    Science.gov (United States)

    Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A

    2012-07-02

    Systems biology allows the analysis of biological systems behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples would include, the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems, etc. Such design problems can be mathematically formulated as dynamic optimization problems which are particularly challenging when the system is described by partial differential equations.This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usual nonlinear and large scale nature of the mathematical models related to this class of systems and the presence of constraints on the optimization problems, impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced order model methodology is proposed. The capabilities of this strategy are illustrated considering the solution of a two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the process of chemotaxis the objective was to efficiently compute the time-varying optimal concentration of chemotractant in one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and it illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of

  14. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    Science.gov (United States)

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that local consensus errors of the two systems and weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.
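
    The NN-based critic-action implementation is not reproduced here, but the underlying idea can be sketched as model-based value iteration on a delay-free discrete-time LQR problem, converging to the Riccati solution; the system matrices below are illustrative assumptions:

      # Model-based value iteration for a delay-free discrete-time LQR problem. This shows
      # only the underlying idea; the critic-action NNs and the multiagent/consensus
      # structure of the paper are not reproduced, and A, B, Q, R are assumed values.
      import numpy as np

      A = np.array([[1.0, 0.1],
                    [0.0, 1.0]])
      B = np.array([[0.0], [0.1]])
      Q, R = np.eye(2), np.array([[1.0]])

      P = np.zeros((2, 2))                      # value-function matrix, V_k(x) = x' P_k x
      for k in range(500):
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # greedy policy u = -Kx
          P_next = Q + A.T @ P @ (A - B @ K)                  # value-iteration update
          if np.max(np.abs(P_next - P)) < 1e-10:
              break
          P = P_next

      print("iterations:", k)
      print("P =\n", np.round(P, 4), "\nK =", np.round(K, 4))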

  15. Dark energy and the quietness of the local Hubble flow

    International Nuclear Information System (INIS)

    Axenides, M.; Perivolaropoulos, L.

    2002-01-01

    The linearity and quietness of the local (… Ω_X(t_0) of dark energy obeying the time-independent equation of state p_X = wρ_X. We find that dark energy can indeed cool the LHF. However, the dark energy parameter values required to make the predicted velocity dispersion consistent with the observed value v_rms ≅ 40 km/s have been ruled out by other observational tests constraining the dark energy parameters w and Ω_X. Therefore, despite the claims of recent qualitative studies, dark energy with a time-independent equation of state cannot by itself explain the quietness and linearity of the local Hubble flow.

  16. Optimal preventive maintenance and repair policies for multi-state systems

    International Nuclear Information System (INIS)

    Sheu, Shey-Huei; Chang, Chin-Chih; Chen, Yen-Luan; George Zhang, Zhe

    2015-01-01

    This paper studies optimal preventive maintenance (PM) policies for multi-state systems. The scheduled PMs can be of either imperfect or perfect type. The improved effective age is utilized to model the effect of an imperfect PM. The system is considered to be in a failure state (unacceptable state) once its performance level falls below a given customer demand level. If the system fails before a scheduled PM, it is repaired and becomes operational again. We consider three types of repair actions: major, minimal, and imperfect repair. The deterioration of the system is assumed to follow a non-homogeneous continuous-time Markov process (NHCTMP) with finite state space. A recursive approach is proposed to efficiently compute the time-dependent distribution of the multi-state system. For each repair type, we find the optimal PM schedule that minimizes the average cost rate. The main implication of our results is that, in determining the optimal scheduled PM, choosing the right repair type will significantly improve the efficiency of the system maintenance. Thus PM and repair decisions must be made jointly to achieve the best performance.
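
    A much-simplified sketch of the cost-rate trade-off (a single Weibull-failing unit under perfect PM, rather than the paper's multi-state NHCTMP model) is given below; the Weibull parameters and cost figures are assumed:

      # Simplified age-replacement sketch: pick the PM interval T minimizing the long-run
      # average cost rate. The paper's multi-state Markov model is not reproduced;
      # Weibull parameters and PM/failure costs are assumed for illustration.
      import numpy as np

      beta, eta = 2.5, 1000.0            # Weibull shape / scale (hours), assumed
      c_pm, c_fail = 1.0, 10.0           # cost of a planned PM vs. an unplanned failure

      survival = lambda t: np.exp(-(t / eta) ** beta)

      def cost_rate(T, n=2000):
          t = np.linspace(0.0, T, n)
          s = survival(t)
          mean_cycle = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(t))   # expected cycle length
          expected_cost = c_pm * s[-1] + c_fail * (1.0 - s[-1])      # cost per renewal cycle
          return expected_cost / mean_cycle

      candidates = np.linspace(50.0, 2000.0, 400)
      rates = np.array([cost_rate(T) for T in candidates])
      best = candidates[rates.argmin()]
      print("optimal PM interval ~ %.0f h, cost rate %.5f per h" % (best, rates.min()))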

  17. Optimization analysis of thermal management system for electric vehicle battery pack

    Science.gov (United States)

    Gong, Huiqi; Zheng, Minxin; Jin, Peng; Feng, Dong

    2018-04-01

    Temperature rise in an electric vehicle battery pack affects the cycle life, chargeability, power, energy, safety, and reliability of the power battery system. Computational Fluid Dynamics simulations and experiments of the charging and discharging process of the battery pack were carried out for the thermal management system of the battery pack under continuous charging of the battery. The simulation results and the experimental data were used to verify the rationality of the Computational Fluid Dynamics calculation model. In view of the large temperature difference across the battery module in a high-temperature environment, three optimization methods for the existing thermal management system of the battery pack are put forward: adjusting the installation position of the fan, optimizing the arrangement of the battery pack, and reducing the fan opening temperature threshold. The feasibility of the optimization methods is proved by simulation and experiment of the thermal management system of the optimized battery pack.

  18. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
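
    How such a constrained operating-point optimization can be posed is sketched below with SciPy; the quadratic efficiency surrogate, bounds, and parasitic-power constraint are made-up stand-ins for the paper's ANN and semi-empirical component models:

      # Sketch of an operating-point optimization: maximize a surrogate system efficiency
      # over (air stoichiometry, cathode pressure, stack temperature) at a fixed load.
      # The surrogate, bounds and parasitic-power limit are illustrative assumptions,
      # not the models or constraints of the paper.
      import numpy as np
      from scipy.optimize import minimize

      def efficiency(x):
          stoich, p_ca, T_st = x                       # air stoichiometry, bar, deg C
          return (0.50
                  - 0.020 * (stoich - 2.0) ** 2        # too lean or too rich air hurts
                  - 0.015 * (p_ca - 1.5) ** 2          # compressor parasitics vs. pressure
                  - 0.0005 * (T_st - 70.0) ** 2)       # membrane hydration vs. temperature

      def parasitic_fraction(x):
          stoich, p_ca, _ = x
          return 0.04 * stoich * p_ca                  # hypothetical blower/compressor share

      res = minimize(lambda x: -efficiency(x),         # maximize efficiency
                     x0=[2.5, 2.0, 65.0],
                     bounds=[(1.5, 3.5), (1.0, 2.5), (55.0, 80.0)],
                     constraints=[{"type": "ineq",     # parasitics must stay below 15 %
                                   "fun": lambda x: 0.15 - parasitic_fraction(x)}],
                     method="SLSQP")
      print("optimal operating point:", np.round(res.x, 3),
            "efficiency:", round(-res.fun, 4))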

  19. General structure and functions of the OPAL optimization system

    International Nuclear Information System (INIS)

    Mikolas, P.; Sustek, J.; Svarny, J.

    2005-01-01

    The presented version of OPAL, the in-core fuel management system, is also being developed for core loading optimization of NPP Temelin (a WWER-1000 type reactor). Descriptions of the algorithms of the separate modules have been presented in several AER papers. The optimization process for NPP Temelin loading patterns comprises problems such as preparation of input data for the NPP software, loading-pattern search, fixing and splitting of fuel enrichments, BP assignment, FA rotation, and fuel cycle economics. In the application for NPP Temelin, the NPP Temelin code system (a spectral code with a macrocode) has been used. The objective of fuel management is to design a fuel-loading scheme that is capable of producing the required energy at minimum cost while satisfying the safety constraints. Usually the objectives are: a) to meet the energy production requirements (the loaded fuel should have sufficient reactivity to cover reactivity defects associated with startup as well as reactivity loss due to fuel depletion); b) to satisfy all safety-related limits (the loaded fuel should preserve adequate power peaking limits (given namely by LOCA), shutdown margins, and no positive Moderator Temperature Coefficient (MTC)); c) to minimize the power generation cost ($/kWh(e)). The flow of the OPAL optimization process is described in detail and its application to NPP Temelin core optimization is presented. (Authors)

  20. Optimization criteria for solar and wind power systems

    Energy Technology Data Exchange (ETDEWEB)

    Salieva, R B

    1976-01-01

    It is shown that the design of solar and wind power systems requires both the specification of the target function and the optimization of the system with respect to two criteria, namely, the system must be economical (minimum cost to the economy) and it must be reliable (the probability of failure-free operation of the system must be not less than a standard value).

  1. A comparison of the economic benefits of centralized and distributed model predictive control strategies for optimal and sub-optimal mine dewatering system designs

    International Nuclear Information System (INIS)

    Romero, Alberto; Millar, Dean; Carvalho, Monica; Maestre, José M.; Camacho, Eduardo F.

    2015-01-01

    Mine dewatering can represent up to 5% of the total energy demand of a mine, and is one of the mine systems that aim to guarantee safe operating conditions. As mines go deeper, dewatering pumping heads become bigger, potentially involving several lift stages. Greater depth does not only mean greater dewatering cost, but more complex systems that require more sophisticated control systems, especially if mine operators wish to gain benefits from demand response incentives that are becoming a routine part of electricity tariffs. This work explores a two stage economic optimization procedure of an underground mine dewatering system, comprising two lifting stages, each one including a pump station and a water reservoir. First, the system design is optimized considering hourly characteristic dewatering demands for twelve days, one day representing each month of the year to account for seasonal dewatering demand variations. This design optimization minimizes the annualized cost of the system, and therefore includes the investment costs in underground reservoirs. Reservoir size, as well as an hourly pumping operation plan are calculated for specific operating environments, defined by characteristic hourly electricity prices and water inflows (seepage and water use from production activities), at best known through historical observations for the previous year. There is no guarantee that the system design will remain optimal when it faces the water inflows and market determined electricity prices of the year ahead, or subsequent years ahead, because these remain unknown at design time. Consequently, the dewatering optimized system design is adopted subsequently as part of a Model Predictive Control (MPC) strategy that adaptively maintains optimality during the operations phase. Centralized, distributed and non-centralized MPC strategies are explored. Results show that the system can be reliably controlled using any of these control strategies proposed. Under the operating
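
    The scheduling core inside such an MPC loop can be sketched as a small linear program for a single reservoir over 24 hours (pump when electricity is cheap, keep the reservoir within limits); the prices, inflows, reservoir size, and pump data below are assumed values:

      # One-reservoir, 24-hour sketch of the price-driven pump-scheduling LP that would sit
      # inside an MPC loop. Prices, inflows, reservoir limits and pump data are assumed.
      import numpy as np
      from scipy.optimize import linprog

      hours = 24
      price = 40 + 25 * np.sin(np.linspace(0, 2 * np.pi, hours))   # $/MWh, assumed profile
      inflow = np.full(hours, 120.0)                                # m3/h of seepage, assumed
      V0, Vmax, q_max = 500.0, 1500.0, 400.0                        # initial volume, capacity, pump cap
      e_per_m3 = 2.5e-3                                             # MWh needed per m3 lifted, assumed

      c = price * e_per_m3                                          # hourly pumping cost coefficients
      Tlow = np.tril(np.ones((hours, hours)))                       # cumulative-sum operator

      # 0 <= V_t = V0 + cum(inflow) - cum(q) <= Vmax, plus "no net accumulation" V_24 <= V0
      A_ub = np.vstack([Tlow, -Tlow, -Tlow[-1:]])
      b_ub = np.concatenate([V0 + Tlow @ inflow,
                             Vmax - V0 - Tlow @ inflow,
                             [-(Tlow @ inflow)[-1]]])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, q_max)] * hours, method="highs")
      print("feasible:", res.success, " daily pumping cost: $%.0f" % res.fun)
      print("pumped m3 per hour:", np.round(res.x, 0))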

  2. Ant system for reliability optimization of a series system with multiple-choice and budget constraints

    International Nuclear Information System (INIS)

    Nahas, Nabil; Nourelfath, Mustapha

    2005-01-01

    Many researchers have shown that insect colonies behavior can be seen as a natural model of collective problem solving. The analogy between the way ants look for food and combinatorial optimization problems has given rise to a new computational paradigm, which is called ant system. This paper presents an application of ant system in a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, to maximize the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and characterized as an NP-hard problem. This problem is solved by developing and demonstrating a problem-specific ant system algorithm. In this algorithm, solutions of the reliability optimization problem are repeatedly constructed by considering the trace factor and the desirability factor. A local search is used to improve the quality of the solutions obtained by each ant. A penalty factor is introduced to deal with the budget constraint. Simulations have shown that the proposed ant system is efficient with respect to the quality of solutions and the computing time
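
    A compact ant-system sketch for this kind of problem (one component version per subsystem of a series system, budget handled by a penalty) is given below; the component data, ACO settings, and penalty weight are assumed, and the paper's local-search step is omitted:

      # Compact ant-system sketch for a 4-subsystem series system: pick one version per
      # subsystem to maximize reliability under a budget. Component data, ACO settings and
      # the penalty weight are assumed; the local-search step of the paper is omitted.
      import numpy as np

      rng = np.random.default_rng(1)
      rel = np.array([[0.90, 0.95, 0.99],        # reliability of 3 candidate versions
                      [0.85, 0.93, 0.98],        # per subsystem (assumed data)
                      [0.92, 0.96, 0.99],
                      [0.88, 0.94, 0.97]])
      cost = np.array([[2, 4, 8],
                       [1, 3, 7],
                       [3, 5, 9],
                       [2, 4, 6]], dtype=float)
      budget = 18.0
      tau = np.ones_like(rel)                    # pheromone ("trace factor")
      eta = rel / cost                           # heuristic ("desirability factor")

      def evaluate(choice):
          r = np.prod(rel[np.arange(4), choice])
          over = max(0.0, cost[np.arange(4), choice].sum() - budget)
          return r - 0.1 * over                  # penalized objective

      best_choice, best_val = None, -np.inf
      for it in range(200):
          probs = (tau ** 1.0) * (eta ** 2.0)
          probs = probs / probs.sum(axis=1, keepdims=True)
          solutions = []
          for ant in range(20):
              choice = np.array([rng.choice(3, p=probs[i]) for i in range(4)])
              val = evaluate(choice)
              solutions.append((val, choice))
              if val > best_val:
                  best_val, best_choice = val, choice
          tau *= 0.9                             # evaporation
          val, choice = max(solutions, key=lambda s: s[0])    # iteration-best reinforcement
          tau[np.arange(4), choice] += max(val, 0.0)

      print("best versions per subsystem:", best_choice,
            "cost:", cost[np.arange(4), best_choice].sum(),
            "reliability: %.4f" % np.prod(rel[np.arange(4), best_choice]))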

  3. Min-max optimal public service system design

    Directory of Open Access Journals (Sweden)

    Marek Kvet

    2015-03-01

    Full Text Available This paper deals with designing a fair public service system. To achieve fairness, various schemes can be applied. The strongest criterion in the process is minimization of the disutility of the worst-situated users, followed by optimization of the disutility of the better-situated users under the condition that the disutility of the worst-situated users does not worsen; this is otherwise called lexicographical minimization. Focusing on the first step, this paper endeavours to find an effective solution to the weighted p-median problem based on a radial formulation. Attempts at solving real instances when using a location-allocation model often fail due to enormous computational time or huge memory demands. The radial formulation can be implemented using commercial optimisation software. The main goal of this study is to show that suitably solving the min-max optimal public service system design can save computational time.

  4. Maintenance resources optimization applied to a manufacturing system

    International Nuclear Information System (INIS)

    Fiori de Castro, Helio; Lucchesi Cavalca, Katia

    2006-01-01

    This paper presents an availability optimization of an engineering system assembled in a series configuration, with redundancy of units and corrective maintenance resources as optimization parameters. The aim is to reach maximum availability, considering as constraints installation and corrective maintenance costs, weight and volume. The optimization method uses a Genetic Algorithm based on biological concepts of species evolution. It is a robust method, as it does not converge to a local optimum. It does not require the use of differential calculus, thus facilitating computational implementation. Results indicate that the methodology is suitable to solve a wide range of engineering design problems involving allocation of redundancies and maintenance resources

  5. Design and Optimization of Thermophotovoltaic System Cavity with Mirrors

    Directory of Open Access Journals (Sweden)

    Tian Zhou

    2016-09-01

    Full Text Available Thermophotovoltaic (TPV) systems can convert radiant energy into electrical power. Here we explore the design of the TPV system cavity, which houses the emitter and the photovoltaic (PV) cells. Mirrors are utilized in the cavity to modify the spatial and spectral distribution within it. After discussing the basic concentric tubular design, two novel cavity configurations are put forward and parametrically studied. The investigated variables include the shape, number, and placement of the mirrors. The optimization objectives are improved efficiency and an extended range of application of the TPV system. Through numerical simulations, the relationship between the design parameters and the objectives is revealed. The results show that careful design of the cavity configuration can markedly enhance the performance of the TPV system.

  6. Genetic algorithm based optimization on modeling and design of hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Ismail, M.S.; Moghavvemi, M.; Mahlia, T.M.I.

    2014-01-01

    Highlights: • Solar data was analyzed for the location under consideration. • A program was developed to simulate operation of the PV hybrid system. • A genetic algorithm was used to optimize the sizes of the hybrid system components. • The costs of the pollutant emissions were considered in the optimization. • It is cost effective to power houses in remote areas with such hybrid systems. - Abstract: A sizing optimization of a hybrid system consisting of photovoltaic (PV) panels, a backup source (microturbine or diesel), and a battery system that minimizes the cost of energy production (COE), together with a complete design of this optimized system supplying a small community in the Palestinian Territories with power, is presented in this paper. A scenario that depends on a standalone PV system, and another that depends on a backup source alone, were also analyzed in this study. The optimization was achieved using a genetic algorithm. The objective function minimizes the COE while covering the load demand with a specified value of the loss of load probability (LLP). The global warming emission costs have been taken into account in this optimization analysis. The solar radiation data is first analyzed, and the tilt angle of the PV panels is then optimized. It was found that powering a small rural community using this hybrid system is cost-effective and extremely beneficial when compared to extending the utility grid to supply these remote areas, or to using conventional sources alone for this purpose. This hybrid system decreases both operating costs and the emission of pollutants. The hybrid system that realizes these optimization purposes is the one constructed from a combination of these sources.
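
    A toy genetic-algorithm sketch of the sizing step is shown below: the decision vector is (PV kW, battery kWh, backup kW), a simplified dispatch yields a cost of energy, and a penalty enforces a loss-of-load limit; all profiles, cost figures, and GA settings are illustrative assumptions, and the emission costs of the paper are omitted:

      # Toy GA for hybrid-system sizing. Load/solar profiles, unit costs, the LLP limit and
      # all GA settings are assumed values; dispatch rules are deliberately simplified.
      import numpy as np

      rng = np.random.default_rng(3)
      h = np.arange(72)                                              # three representative days
      solar = np.clip(np.sin((h % 24 - 6) / 12 * np.pi), 0, None)    # per-kW PV output profile
      load = 20 + 10 * np.sin((h % 24 - 14) / 24 * 2 * np.pi)        # kW demand profile

      def cost_of_energy(x):
          pv_kw, batt_kwh, backup_kw = x
          soc, unmet, fuel = 0.5 * batt_kwh, 0.0, 0.0
          for t in range(len(h)):
              balance = pv_kw * solar[t] - load[t]
              if balance >= 0:                                       # surplus charges the battery
                  soc = min(batt_kwh, soc + balance)
              else:                                                  # deficit: battery, then backup
                  deficit = -balance
                  discharge = min(soc, deficit)
                  soc -= discharge
                  gen = min(backup_kw, deficit - discharge)
                  fuel += gen
                  unmet += deficit - discharge - gen
          llp = unmet / load.sum()
          capital = 800 * pv_kw + 300 * batt_kwh + 400 * backup_kw   # $, assumed unit costs
          operating = 0.3 * fuel * 365 / 3                           # scale 3 days to a year
          energy = load.sum() * 365 / 3
          coe = (capital / 20 + operating) / energy                  # $/kWh over a 20-year life
          return coe + 10.0 * max(0.0, llp - 0.02)                   # penalty if LLP > 2 %

      lo, hi = np.array([0.0, 0.0, 0.0]), np.array([200.0, 500.0, 50.0])
      pop = rng.uniform(lo, hi, size=(20, 3))
      for gen in range(40):
          fit = np.array([cost_of_energy(x) for x in pop])
          parents = pop[fit.argsort()[:10]]                          # truncation selection
          children = 0.5 * (parents[rng.integers(0, 10, 10)] + parents[rng.integers(0, 10, 10)])
          children += rng.normal(0, 0.05, children.shape) * (hi - lo)    # Gaussian mutation
          pop = np.vstack([parents, np.clip(children, lo, hi)])
      print("best sizing (PV kW, battery kWh, backup kW):", np.round(pop[0], 1))
      print("COE estimate ($/kWh): %.3f" % cost_of_energy(pop[0]))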

  7. Optimal Planning and Operation Management of a Ship Electrical Power System with Energy Storage System

    DEFF Research Database (Denmark)

    Anvari-Moghaddam, Amjad; Dragicevic, Tomislav; Meng, Lexuan

    2016-01-01

    Next generation power management at all scales is highly relying on the efficient scheduling and operation of different energy sources to maximize efficiency and utility. The ability to schedule and modulate the energy storage options within energy systems can also lead to more efficient use...... of the generating units. This optimal planning and operation management strategy becomes increasingly important for off-grid systems that operate independently of the main utility, such as microgrids or power systems on marine vessels. This work extends the principles of optimal planning and economic dispatch...... for the proposed plan is derived based on the solution from a mixed-integer nonlinear programming (MINLP) problem. Simulation results showed that including well-sized energy storage options together with optimal operation management of generating units can improve the economic operation of the test system while...

  8. Design and Optimization of Annular Flow Electromagnetic Measurement System for Drilling Engineering

    Directory of Open Access Journals (Sweden)

    Liang Ge

    2018-01-01

    Full Text Available Using the downhole annular flow measurement system to obtain real-time information on the downhole annular flow is the core and foundation of downhole microflux control drilling technology. Research on electromagnetic flowmeters in recent years poses a challenge for the design of downhole annular flow measurement. This paper proposes a design and optimization of an annular flow electromagnetic measurement system for drilling engineering based on the finite element method. Firstly, the annular flow measuring and optimization principles are described. Secondly, a simulation model of an annular flow electromagnetic measurement system with two pairs of coils is built in COMSOL, based on the fundamental equation of the electromagnetic flowmeter. Thirdly, simulations of the structure of the excitation system of the measurement system are carried out, simulations of the electrode radius are carried out based on the optimized structure, and the simulation results are analyzed to evaluate the optimization effect based on the evaluation indexes. The simulation results show that optimized shapes of the excitation system and electrode size can yield a better performance in the annular flow measurement.

  9. Optimization of biomass fuelled systems for distributed power generation using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Lopez, P. Reche; Reyes, N. Ruiz; Gonzalez, M. Gomez; Jurado, F.

    2008-01-01

    With sufficient territory and abundant biomass resources Spain appears to have suitable conditions to develop biomass utilization technologies. As an important decentralized power technology, biomass gasification and power generation has a potential market in making use of biomass wastes. This paper addresses biomass fuelled generation of electricity in the specific aspect of finding the best location and the supply area of the electric generation plant for three alternative technologies (gas motor, gas turbine and fuel cell-microturbine hybrid power cycle), taking into account the variables involved in the problem, such as the local distribution of biomass resources, transportation costs, distance to existing electric lines, etc. For each technology, not only optimal location and supply area of the biomass plant, but also net present value and generated electric power are determined by an own binary variant of Particle Swarm Optimization (PSO). According to the values derived from the optimization algorithm, the most profitable technology can be chosen. Computer simulations show the good performance of the proposed binary PSO algorithm to optimize biomass fuelled systems for distributed power generation. (author)

  10. Optimal Acquisition and Inventory Control for a Remanufacturing System

    Directory of Open Access Journals (Sweden)

    Zhigang Jiang

    2013-01-01

    Full Text Available Optimal acquisition and inventory control can often make the difference between successful and unsuccessful remanufacturing. However, there is a greater degree of uncertainty and complexity in a remanufacturing system, which leads to a critical need for planning and control models designed to deal with this added uncertainty and complexity. In this paper, a method for optimal acquisition and inventory control of a remanufacturing system is presented. The method considers three inventories: one for returned items, one for serviceable items, and one for recoverable items. Taking into account the holding costs for returned, recoverable, and remanufactured products, the remanufacturing cost, the disposal cost, and the loss caused by backlog, an optimal inventory control model is established to minimize the total cost. Finally, a numerical example is provided to illustrate the proposed method.

  11. Modelling and optimization of reforming systems for use in PEM fuel cell systems

    International Nuclear Information System (INIS)

    Berry, M.; Korsgaard, A.R.; Nielsen, M.P.

    2004-01-01

    Three different reforming methods for the conversion of natural gas to hydrogen are studied and compared: Steam Reforming (SR), Auto-thermal Reforming (ATR), and Catalytic Partial Oxidation (CPOX). Thermodynamic and kinetic models are developed for the reforming reactors as well as the subsequent reactors needed for CO removal to make the synthesis gas suitable for use in a PEM fuel cell. The systems are optimized to minimize the total volume, and must supply adequate hydrogen to a fuel cell with a 100kW load. The resultant system efficiencies are calculated. The CPOX system is the smallest and exhibits a comparable efficiency to the SR system. The SR system had the best relation between efficiency and volume increase. Optimal temperature profiles within each reactor were found. It was shown that temperature control can significantly reduce reactor volume and increase conversion capabilities. (author)

  12. Optimal Investment Control of Macroeconomic Systems

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ke-jie; LIU Chuan-zhe

    2006-01-01

    Economic growth is always accompanied by economic fluctuation. The target of macroeconomic control is to keep a basic balance of economic growth, accelerate the optimization of economic structures, and lead a rapid, sustainable, and healthy development of national economies, in order to propel society forward. To realize this goal, investment control must be regarded as the most important policy for economic stability. Readjustment and control of investment includes not only control of aggregate investment, but also structural control, which depends on the economic-technology relationships between the various industries of a national economy. On the basis of the theory of generalized systems, an optimal investment control model for government has been developed. In order to provide a scientific basis for government to formulate macroeconomic control policy, the model investigates the balance of total supply and aggregate demand and, through adjustment of investment decisions, realizes sustainable and stable growth of the national economy. The optimal investment decision function proposed by this study has a unique and specific expression, high regulating precision, and computable characteristics.

  13. VizieR Online Data Catalog: Hubble Source Catalog (V1 and V2) (Whitmore+, 2016)

    Science.gov (United States)

    Whitmore, B. C.; Allam, S. S.; Budavari, T.; Casertano, S.; Downes, R. A.; Donaldson, T.; Fall, S. M.; Lubow, S. H.; Quick, L.; Strolger, L.-G.; Wallace, G.; White, R. L.

    2016-10-01

    The HSC v1 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR8 (data release 8). The crossmatching process involves adjusting the relative astrometry of overlapping images so as to minimize positional offsets between closely aligned sources in different images. After correction, the astrometric residuals of crossmatched sources are significantly reduced, to typically less than 10mas. The relative astrometry is supported by using Pan-STARRS, SDSS, and 2MASS as the astrometric backbone for initial corrections. In addition, the catalog includes source nondetections. The crossmatching algorithms and the properties of the initial (Beta 0.1) catalog are described in Budavari & Lubow (2012ApJ...761..188B). The HSC v2 contains members of the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR Source Extractor source lists from HLA version DR9.1 (data release 9.1); the crossmatching process and astrometric corrections are the same as described above for v1. Hubble Source Catalog Acknowledgement: Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESAC/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). (2 data files).

  14. Fuzzy Adaptive Particle Swarm Optimization for Power Loss Minimisation in Distribution Systems Using Optimal Load Response

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte

    2014-01-01

    Consumers may decide to modify the profile of their demand from high price periods to low price periods in order to reduce their electricity costs. This optimal load response to electricity prices for demand side management generates different load profiles and provides an opportunity to achieve...... power loss minimization in distribution systems. In this paper, a new method to achieve power loss minimization in distribution systems by using a price signal to guide the demand side management is proposed. A fuzzy adaptive particle swarm optimization (FAPSO) is used as a tool for the power loss...

  15. Computer-Aided Communication Satellite System Analysis and Optimization.

    Science.gov (United States)

    Stagl, Thomas W.; And Others

    Various published computer programs for fixed/broadcast communication satellite system synthesis and optimization are discussed. The rationale for selecting General Dynamics/Convair's Satellite Telecommunication Analysis and Modeling Program (STAMP) in modified form to aid in the system costing and sensitivity analysis work in the Program on…

  16. Optimal Parameter Selection of Power System Stabilizer using Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyeng Hwan; Chung, Dong Il; Chung, Mun Kyu [Dong-AUniversity (Korea); Wang, Yong Peel [Canterbury Univeristy (New Zealand)

    1999-06-01

    In this paper, a method is suggested for selecting the optimal parameters of a power system stabilizer (PSS) with robustness against low-frequency oscillation in power systems, using a real-variable elitism genetic algorithm (RVEGA). The optimal parameters were selected for a power system stabilizer with one lead compensator and with two lead compensators. Also, the frequency response characteristics of the PSS, the system eigenvalue criterion, and the dynamic characteristics were considered under normal load and heavy load, which proved the usefulness of the RVEGA compared with Yu's compensator design theory. (author). 20 refs., 15 figs., 8 tabs.

  17. Optimal control systems in hydro power plants

    International Nuclear Information System (INIS)

    Babunski, Darko L.

    2012-01-01

    The aim of the research done in this work is focused on obtaining optimal models of the hydro turbine including auxiliary equipment, analysis of governors for hydro power plants, and analysis and design of optimal control laws that can be easily applied in real hydro power plants. The methodology of the research and realization of the set goals consists of the following steps: scope of hydro turbine models and their modification using experimental data; verification of the analyzed models and comparison of their advantages and disadvantages, with a proposal of a turbine model for the design of the control law; analysis of proportional-integral-derivative control with fixed parameters, gain scheduling, and nonlinear control; analysis of the dynamic characteristics of the turbine model including control, and comparison of the parameters of the simulated system with experimental data; design of optimal control of the hydro power plant considering the proposed cost function, and verification of the optimal control law with load-rejection measured data. The hydro power plant models, including a model of the power grid, are simulated in cases of islanding, restoration after breakup, and load rejection, with consideration of real loading and unloading of the hydro power plant. Finally, the simulations provide optimal values of control parameters, stability boundaries, and results easily applicable to real hydro power plants. (author)

  18. Importance measures and genetic algorithms for designing a risk-informed optimally balanced system

    International Nuclear Information System (INIS)

    Zio, Enrico; Podofillini, Luca

    2007-01-01

    This paper deals with the use of importance measures for the risk-informed optimization of system design and management. An optimization approach is presented in which the information provided by the importance measures is incorporated in the formulation of a multi-objective optimization problem to drive the design towards a solution which, besides being optimal from the points of view of economics and safety, is also 'balanced' in the sense that all components have similar importance values. The approach allows identifying system designs without bottlenecks or unnecessarily high-performing components and with test/maintenance activities calibrated according to the components' importance ranking. The approach is tested at first against a multi-state system design optimization problem in which off-the-shelf components have to be properly allocated. Then, the more realistic problem of risk-informed optimization of the technical specifications of a safety system of a nuclear power plant is addressed.

  19. Optimized application of systems engineering to nuclear waste repository projects

    International Nuclear Information System (INIS)

    Miskimin, P.A.; Shepard, M.

    1986-01-01

    The purpose of this presentation is to describe a fully optimized application of systems engineering methods and philosophy to the management of a large nuclear waste repository project. Knowledge gained from actual experience with the use of the systems approach on two repository projects is incorporated in the material presented. The projects are currently evaluating the isolation performance of different geologic settings and are in different phases of maturity. Systems engineering methods were applied by the principal author at the Waste Isolation Pilot Plant (WIPP) in the form of a functional analysis. At the Basalt Waste Isolation Project (BWIP), the authors assisted the intergrating contractor with the development and application of systems engineering methods. Based on this experience and that acquired from other waste management projects, an optimized plan for applying systems engineering techniques was developed. The plan encompasses the following aspects: project organization, developing and defining requirements, assigning work responsibilities, evaluating system performance, quality assurance, controlling changes, enhancing licensability, optimizing project performance, and addressing regulatory issues. This information is presented in the form of a roadmap for the practical application of system engineering principles to a nuclear waste repository project

  20. Reexploration of interacting holographic dark energy model. Cases of interaction term excluding the Hubble parameter

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hai-Li; Zhang, Jing-Fei; Feng, Lu [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Zhang, Xin [Northeastern University, Department of Physics, College of Sciences, Shenyang (China); Peking University, Center for High Energy Physics, Beijing (China)

    2017-12-15

    In this paper, we make a deep analysis of five typical interacting holographic dark energy models with the interaction terms Q = 3βH_0 ρ_de, Q = 3βH_0 ρ_c, Q = 3βH_0 (ρ_de + ρ_c), Q = 3βH_0 √(ρ_de ρ_c), and Q = 3βH_0 ρ_de ρ_c/(ρ_de + ρ_c), respectively. We obtain observational constraints on these models by using the type Ia supernova data (the Joint Light-Curve Analysis sample), the cosmic microwave background data (Planck 2015 distance priors), the baryon acoustic oscillation data, and the direct measurement of the Hubble constant. We find that the values of χ²_min for all five models are almost equal (around 699), indicating that the current observational data equally favor these IHDE models. In addition, a comparison with the cases of an interaction term involving the Hubble parameter H is also made. (orig.)

  1. Automated Morphological Classification in Deep Hubble Space Telescope UBVI Fields: Rapidly and Passively Evolving Faint Galaxy Populations

    Science.gov (United States)

    Odewahn, Stephen C.; Windhorst, Rogier A.; Driver, Simon P.; Keel, William C.

    1996-11-01

    We analyze deep Hubble Space Telescope Wide Field Planetary Camera 2 (WFPC2) images in U, B, V, I using artificial neural network (ANN) classifiers, which are based on galaxy surface brightness and light profile (but not on color nor on scale length, r_hl). The ANN distinguishes quite well between E/S0, Sabc, and Sd/Irr+M galaxies (M for merging systems) for B_J ~ 24 mag. The faint blue galaxy counts in the B band are dominated by Sd/Irr+M galaxies and can be explained by a moderately steep local luminosity function (LF) undergoing strong luminosity evolution. We suggest that these faint late-type objects (24 mag ≲ B_J ≲ 28 mag) are a combination of low-luminosity lower redshift dwarf galaxies, plus compact star-forming galaxies and merging systems at z ≈ 1-3, possibly the building blocks of the luminous early-type galaxies seen today.

  2. Multi-objective approach in thermoenvironomic optimization of a benchmark cogeneration system

    International Nuclear Information System (INIS)

    Sayyaadi, Hoseyn

    2009-01-01

    Multi-objective optimization for designing of a benchmark cogeneration system known as CGAM cogeneration system has been performed. In optimization approach, the exergetic, economic and environmental aspects have been considered, simultaneously. The thermodynamic modeling has been implemented comprehensively while economic analysis conducted in accordance with the total revenue requirement (TRR) method. The results for the single objective thermoeconomic optimization have been compared with the previous studies in optimization of CGAM problem. In multi-objective optimization of the CGAM problem, the three objective functions including the exergetic efficiency, total levelized cost rate of the system product and the cost rate of environmental impact have been considered. The environmental impact objective function has been defined and expressed in cost terms. This objective has been integrated with the thermoeconomic objective to form a new unique objective function known as a thermoenvironomic objective function. The thermoenvironomic objective has been minimized while the exergetic objective has been maximized. One of the most suitable optimization techniques developed using a particular class of search algorithms known as multi-objective evolutionary algorithms (MOEAs) has been considered here. This approach which is developed based on the genetic algorithm has been applied to find the set of Pareto optimal solutions with respect to the aforementioned objective functions. An example of decision-making has been presented and a final optimal solution has been introduced. The sensitivity of the solutions to the interest rate and the fuel cost has been studied
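
    The evolutionary machinery itself (an NSGA-style MOEA) is not reproduced here, but the core operation of any such method is the Pareto-dominance filter sketched below for two minimized objectives; the sample points are arbitrary:

      # Extract the Pareto (nondominated) front from candidate designs evaluated on two
      # minimized objectives, e.g. thermoenvironomic cost rate and negative exergetic
      # efficiency. The sample points are arbitrary; this is not the paper's MOEA itself.
      import numpy as np

      rng = np.random.default_rng(7)
      objs = rng.uniform(0.0, 1.0, size=(50, 2))    # columns: the two minimized objectives

      def pareto_front(points):
          """Indices of nondominated points (minimization in every objective)."""
          nondominated = []
          for i, p in enumerate(points):
              dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
              if not dominated:
                  nondominated.append(i)
          return np.array(nondominated)

      front = pareto_front(objs)
      print("nondominated designs:", len(front), "of", len(objs))
      print(np.round(objs[front][objs[front][:, 0].argsort()], 3))   # sorted along the trade-off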

  3. Parametric Optimization of Some Critical Operating System Functions--An Alternative Approach to the Study of Operating Systems Design

    Science.gov (United States)

    Sobh, Tarek M.; Tibrewal, Abhilasha

    2006-01-01

    Operating systems theory primarily concentrates on the optimal use of computing resources. This paper presents an alternative approach to teaching and studying operating systems design and concepts by way of parametrically optimizing critical operating system functions. Detailed examples of two critical operating systems functions using the…

  4. Directing orbits of chaotic systems by particle swarm optimization

    International Nuclear Information System (INIS)

    Liu Bo; Wang Ling; Jin Yihui; Tang Fang; Huang Dexian

    2006-01-01

    This paper applies a novel evolutionary computation algorithm named particle swarm optimization (PSO) to direct the orbits of discrete chaotic dynamical systems towards desired target region within a short time by adding only small bounded perturbations, which could be formulated as a multi-modal numerical optimization problem with high dimension. Moreover, the synchronization of chaotic systems is also studied, which can be dealt with as an online problem of directing orbits. Numerical simulations based on Henon Map demonstrate the effectiveness and efficiency of PSO, and the effects of some parameters are also investigated

  5. Optimizing the Sustainment of U.S. Army Weapon Systems

    Science.gov (United States)

    2016-03-17


  6. Vehicle Propulsion Systems Introduction to Modeling and Optimization

    CERN Document Server

    Guzzella, Lino

    2013-01-01

    This text provides an introduction to the mathematical modeling and subsequent optimization of vehicle propulsion systems and their supervisory control algorithms. Automobiles are responsible for a substantial part of the world's consumption of primary energy, mostly fossil liquid hydrocarbons and the reduction of the fuel consumption of these vehicles has become a top priority. Increasing concerns over fossil fuel consumption and the associated environmental impacts have motivated many groups in industry and academia to propose new propulsion systems and to explore new optimization methodologies. This third edition has been prepared to include many of these developments. In the third edition, exercises are included at the end of each chapter and the solutions are available on the web.

  7. Cost-optimal power system extension under flow-based market coupling

    Energy Technology Data Exchange (ETDEWEB)

    Hagspiel, Simeon; Jaegemann, Cosima; Lindenberger, Dietmar [Koeln Univ. (Germany). Energiewirtschaftliches Inst.; Brown, Tom; Cherevatskiy, Stanislav; Troester, Eckehard [Energynautics GmbH, Langen (Germany)

    2013-05-15

    Electricity market models, implemented as dynamic programming problems, have been applied widely to identify possible pathways towards a cost-optimal and low-carbon electricity system. However, the joint optimization of generation and transmission remains challenging, mainly due to the fact that different characteristics and rules apply to commercial and physical exchanges of electricity in meshed networks. This paper presents a methodology that allows power generation and transmission infrastructures to be optimized jointly through an iterative approach based on power transfer distribution factors (PTDFs). As PTDFs are linear representations of the physical load flow equations, they can be implemented in a linear programming environment suitable for large-scale problems. The algorithm iteratively updates the PTDFs when grid infrastructures are modified due to cost-optimal extension and thus yields an optimal solution with a consistent representation of physical load flows. The method is first demonstrated on a simplified three-node model, where it is found to be robust and convergent. It is then applied to the European power system in order to find its cost-optimal development under the prescription of strongly decreasing CO2 emissions until 2050.
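
    A minimal sketch of the PTDF building block is shown below for a three-node ring network under the DC power-flow approximation; the line reactances and the choice of slack bus are assumptions, and the paper's iterative generation/transmission optimization is not reproduced:

      # Minimal DC-power-flow PTDF sketch for a 3-node ring network. Line reactances and
      # the choice of node 0 as slack are illustrative assumptions.
      import numpy as np

      lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]   # (from bus, to bus, reactance p.u.)
      n_bus, slack = 3, 0

      A = np.zeros((len(lines), n_bus))            # line-bus incidence matrix
      b = np.zeros(len(lines))                     # line susceptances 1/x
      for l, (i, j, x) in enumerate(lines):
          A[l, i], A[l, j], b[l] = 1.0, -1.0, 1.0 / x

      Bd = np.diag(b)
      keep = [k for k in range(n_bus) if k != slack]
      B_bus = (A.T @ Bd @ A)[np.ix_(keep, keep)]   # reduced nodal susceptance matrix
      PTDF = np.zeros((len(lines), n_bus))
      PTDF[:, keep] = Bd @ A[:, keep] @ np.linalg.inv(B_bus)

      # Row l, column n: flow on line l per unit injected at node n (withdrawn at the slack).
      print(np.round(PTDF, 3))
      flows = PTDF @ np.array([0.0, 1.0, -1.0])    # example: 1 p.u. transfer from node 1 to node 2
      print("line flows:", np.round(flows, 3))

    By construction the slack column of the PTDF matrix is zero, and for the assumed reactances the 1 p.u. transfer from node 1 to node 2 splits 0.75/0.25 between the direct line and the two-line detour, as expected from the inverse-reactance rule.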

  8. Optimal inspection and replacement periods of the safety system in Wolsung Nuclear Power Plant Unit 1 with an optimized cost perspective

    International Nuclear Information System (INIS)

    Jinil Mok; Poong Hyun Seong

    1996-01-01

    In this work, a model for determining the optimal inspection and replacement periods of the safety system in Wolsung Nuclear Power Plant Unit 1 is developed, which is to minimize economic loss caused by inadvertent trip and the system failure. This model uses cost benefit analysis method and the part for optimal inspection period considers the human error. The model is based on three factors as follows: (i) The cumulative failure distribution function of the safety system, (ii) The probability that the safety system does not operate due to failure of the system or human error when the safety system is needed at an emergency condition and (iii) The average probability that the reactor is tripped due to the failure of system components or human error. The model then is applied to evaluate the safety system in Wolsung Nuclear Power Plant Unit 1. The optimal replacement periods which are calculated with proposed model differ from those used in Wolsung NPP Unit 1 by about a few days or months, whereas the optimal inspection periods are in about the same range. (author)

  9. Linear systems optimal and robust control

    CERN Document Server

    Sinha, Alok

    2007-01-01

    Contents: Introduction; Overview; Contents of the Book; State Space Description of a Linear System; Transfer Function of a Single Input/Single Output (SISO) System; State Space Realizations of a SISO System; SISO Transfer Function from a State Space Realization; Solution of State Space Equations; Observability and Controllability of a SISO System; Some Important Similarity Transformations; Simultaneous Controllability and Observability; Multiinput/Multioutput (MIMO) Systems; State Space Realizations of a Transfer Function Matrix; Controllability and Observability of a MIMO System; Matrix-Fraction Description (MFD); MFD of a Transfer Function Matrix for the Minimal Order of a State Space Realization; Controller Form Realization from a Right MFD; Poles and Zeros of a MIMO Transfer Function Matrix; Stability Analysis; State Feedback Control and Optimization; State Variable Feedback for a Single Input System; Computation of State Feedback Gain Matrix for a Multiinput System; State Feedback Gain Matrix for a Multi...

  10. Revisiting the stellar velocity ellipsoid-Hubble-type relation: observations versus simulations

    Science.gov (United States)

    Pinna, F.; Falcón-Barroso, J.; Martig, M.; Martínez-Valpuesta, I.; Méndez-Abreu, J.; van de Ven, G.; Leaman, R.; Lyubenova, M.

    2018-04-01

    The stellar velocity ellipsoid (SVE) in galaxies can provide important information on the processes that participate in the dynamical heating of their disc components (e.g. giant molecular clouds, mergers, spiral density waves, and bars). Earlier findings suggested a strong relation between the shape of the disc SVE and Hubble type, with later-type galaxies displaying more anisotropic ellipsoids and early types being more isotropic. In this paper, we revisit the strength of this relation using an exhaustive compilation of observational results from the literature on this issue. We find no clear correlation between the shape of the disc SVE and morphological type, and show that galaxies with the same Hubble type display a wide range of vertical-to-radial velocity dispersion ratios. The points are distributed around a mean value and scatter of σz/σR = 0.7 ± 0.2. With the aid of numerical simulations, we argue that different mechanisms might influence the shape of the SVE in the same manner and that the same process (e.g. mergers) does not have the same impact in all the galaxies. The complexity of the observational picture is confirmed by these simulations, which suggest that the vertical-to-radial axis ratio of the SVE is not a good indicator of the main source of disc heating. Our analysis of those simulations also indicates that the observed shape of the disc SVE may be affected by several processes simultaneously and that the signatures of some of them (e.g. mergers) fade over time.

  11. Multiphysics simulation electromechanical system applications and optimization

    CERN Document Server

    Dede, Ercan M; Nomura, Tsuyoshi

    2014-01-01

    This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma

  12. Online optimization of a multi-conversion-level DC home microgrid for system efficiency enhancement

    DEFF Research Database (Denmark)

    Boscaino, V.; Guerrero, J. M.; Ciornei, I.

    2017-01-01

    In this paper, an on-line management system for the optimal efficiency operation of a multi-bus DC home distribution system is proposed. The operation of the system is discussed with reference to a distribution system with two conversion stages and three voltage levels. In each of the conversion stages, three paralleled DC/DC converters are implemented. A Genetic Algorithm performs the on-line optimization of the DC network's global efficiency, generating the optimal current sharing ratios of the concurrent power converters. The overall DC/DC conversion system including the optimization section...

  13. Graph-related optimization and decision support systems

    CERN Document Server

    Krichen, Saoussen

    2014-01-01

    Constrained optimization is a challenging branch of operations research that aims to create a model which has a wide range of applications in the supply chain, telecommunications and medical fields. As the problem structure is split into two main components, the objective is to accomplish the feasible set framed by the system constraints. The aim of this book is to expose optimization problems that can be expressed as graphs, by detailing, for each studied problem, the set of nodes and the set of edges. This graph modeling is an incentive for designing a platform that integrates all optimizatio

  14. Reliability optimization of a redundant system with failure dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Yu Haiyang [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)]. E-mail: Haiyang.YU@utt.fr; Chu Chengbin [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Management School, Hefei University of Technology, 193 Tunxi Road, Hefei (China); Chatelet, Eric [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Yalaoui, Farouk [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)

    2007-12-15

    In a multi-component system, the failure of one component can reduce the system reliability in two aspects: loss of the reliability contribution of this failed component, and the reconfiguration of the system, e.g., the redistribution of the system loading. The system reconfiguration can be triggered by the component failures as well as by adding redundancies. Hence, dependency is essential for the design of a multi-component system. In this paper, we study the design of a redundant system with the consideration of a specific kind of failure dependency, i.e., the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, the redundant dependencies are further classified as independence, weak, linear, and strong dependencies. In addition, this classification is useful in that it facilitates the optimization resolution of the system design. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. This paper thus conveys the significance of failure dependencies in the reliability optimization of systems.
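
    The paper's dependence function and its classification are not reproduced here; the sketch below only illustrates the general idea of a failure dependency, using an assumed load-sharing model in which the surviving unit's failure rate is multiplied by a dependence factor D after the first failure.

```python
import random

# Hedged sketch (not the paper's model): Monte Carlo reliability of a two-unit
# redundant system with a load-sharing failure dependency. While both units
# work, each fails at rate LAMBDA; after the first failure the survivor carries
# the full load and its rate is multiplied by an assumed dependence factor D.
LAMBDA = 1e-3      # failures per hour (assumed)
D = 2.0            # dependence factor: survivor's rate doubles (assumed)
MISSION = 1000.0   # mission length in hours
TRIALS = 200_000

def survives(rng):
    t_first = rng.expovariate(2 * LAMBDA)      # first failure among two units
    if t_first >= MISSION:
        return True
    t_second = rng.expovariate(D * LAMBDA)     # survivor under increased load
    return t_first + t_second >= MISSION

rng = random.Random(1)
reliability = sum(survives(rng) for _ in range(TRIALS)) / TRIALS
print(f"estimated mission reliability: {reliability:.4f}")
```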

  15. Reliability optimization of a redundant system with failure dependencies

    International Nuclear Information System (INIS)

    Yu Haiyang; Chu Chengbin; Chatelet, Eric; Yalaoui, Farouk

    2007-01-01

    In a multi-component system, the failure of one component can reduce the system reliability in two aspects: loss of the reliability contribution of this failed component, and the reconfiguration of the system, e.g., the redistribution of the system loading. The system reconfiguration can be triggered by the component failures as well as by adding redundancies. Hence, dependency is essential for the design of a multi-component system. In this paper, we study the design of a redundant system with the consideration of a specific kind of failure dependency, i.e., the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, the redundant dependencies are further classified as independence, weak, linear, and strong dependencies. In addition, this classification is useful in that it facilitates the optimization resolution of the system design. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. This paper thus conveys the significance of failure dependencies in the reliability optimization of systems.

  16. Mach's Principle to Hubble's Law and Light Relativity

    Science.gov (United States)

    Zhang, Tianxi

    2018-01-01

    The discovery at the end of the 1920s that the redshift-distance relation for galaxies is linear (i.e. Hubble's law) led to wide acceptance of an expanding universe, originating from a big bang around 14 billion years ago. The finding nearly two decades ago that the redshift-distance relation for distant type Ia supernovae is weaker than linear further led to broad agreement on a recent acceleration of the universe, driven by the mysterious dark energy. The time dilation measured for supernovae has been claimed as direct evidence for the expansion of the universe, but scientists could not explain why quasars and gamma-ray bursts did not show similar time dilations. Recently, an anomaly was found in the standard template for the width of supernova light curves, which is proportional to the wavelength; this anomaly exactly removed the time dilation of supernovae and hence was strongly inconsistent with the conventional redshift mechanism. In this study, we have derived a new redshift-distance relation from Mach's principle with light relativity, which describes the effect of light on spacetime as well as the influence of the disturbed spacetime on the light's inertia or frequency. A moving object or photon, because of its continuous displacement, disturbs the rest of the entire universe, i.e. distorts or curves the spacetime. The distorted or curved spacetime then generates an effective gravitational force that acts back on the moving object or photon, reducing the object's inertia or the photon's frequency. Considering that the disturbance of spacetime by a photon is extremely weak, we have modelled the effective gravitational force as Newtonian and derived a new redshift-distance relation that can not only perfectly explain the redshift-distance measurements of distant type Ia supernovae but also inherently recover Hubble's law as an approximation at small redshift. Therefore, the result obtained from this study supports neither the acceleration of the universe nor the
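
    For reference, the linear Hubble law that the abstract says is recovered at small redshift can be checked numerically as follows; the value of H0 is an assumed round number, and the low-redshift approximation z ≈ v/c is used.

```python
# Minimal numerical illustration (not from the paper): the linear Hubble law
# cz ≈ H0 d that any proposed redshift-distance relation must recover at small z.
C = 299_792.458   # speed of light, km/s
H0 = 70.0         # assumed Hubble constant, km/s/Mpc

for d_mpc in (10, 100, 500):      # distances in Mpc
    v = H0 * d_mpc                # recession velocity from Hubble's law, km/s
    z = v / C                     # low-redshift approximation of the redshift
    print(f"d = {d_mpc:4d} Mpc  ->  v = {v:8.1f} km/s,  z = {z:.4f}")
```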

  17. The Realization of the Most Economical and Optimized Control System

    Institute of Scientific and Technical Information of China (English)

    WU Bin

    2002-01-01

    In order to open up access to low-cost automation, the method of setting up the most economical and optimized control system is studied. Such a system is achieved by adopting field-bus technologies based on network connection to form a hierarchical architecture, and by employing a genetic algorithm to intelligently optimize the parameters of the topology structure at the field execution level and the parameters of a local controller. Practice has proved that this realization can shorten the system development cycle, improve the system's reliability, and achieve conspicuous social and economic benefits.

  18. Multicriterial optimization of preventive maintenance of informational/technical stochastic system

    Directory of Open Access Journals (Sweden)

    Kovalenko Anna

    2016-01-01

    Full Text Available The problem of optimizing reliability and economic indexes is solved for an informational/technical system with periodic diagnostics and restoration. The system considered is one with user accesses and occurring errors. The probability that an error occurs increases with time; therefore, the longer the restoration period, the greater the risk of operating the system with an error, while overly frequent restoration can be too expensive. To take all these factors into account, a two-criteria optimization is carried out with the restriction that the reliability characteristic remains at a given level.
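
    The paper's model is not reproduced here; the sketch below illustrates the same trade-off under assumptions: error times follow a Weibull law with increasing hazard, and the restoration and error costs are placeholders. The restoration period is chosen to minimize the long-run cost rate subject to a reliability requirement.

```python
import numpy as np

# Hedged sketch (not the paper's model): pick a restoration period T that
# minimizes the long-run cost rate while keeping the probability of an
# error-free period above a required level. All parameters are assumptions.
SHAPE, SCALE = 2.0, 500.0          # Weibull parameters for error times, hours
C_RESTORE, C_ERROR = 100.0, 2000.0 # cost of restoration / cost of an error
R_MIN = 0.80                       # required probability of an error-free period

def reliability(t):
    return np.exp(-(t / SCALE) ** SHAPE)

def cost_rate(T):
    # Expected cost per cycle divided by the cycle length: restoration is
    # always paid, the error penalty is paid with probability 1 - R(T).
    return (C_RESTORE + C_ERROR * (1.0 - reliability(T))) / T

periods = np.linspace(10.0, 1000.0, 1000)
feasible = periods[reliability(periods) >= R_MIN]
best = min(feasible, key=cost_rate)
print(f"optimal restoration period = {best:.0f} h, "
      f"cost rate = {cost_rate(best):.3f} per hour, "
      f"reliability = {reliability(best):.3f}")
```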

  19. Optimal Resources Planning of Residential Complex Energy System in a Day-ahead Market Based on Invasive Weed Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    P. Αhmadi

    2017-10-01

    Full Text Available This paper deals with optimal resource planning in a residential complex energy system, including an FC (fuel cell), PV (photovoltaic) panels, and a battery. A day-ahead energy management system (EMS) based on the invasive weed optimization (IWO) algorithm is defined for managing the different resources, determining an optimal operation schedule for the energy resources at each time interval so as to minimize the operation cost of a smart residential complex energy system. The impacts of selling to the grid and purchasing from the grid are also considered, and all practical constraints of each energy resource and the utility policies are taken into account. Moreover, sensitivity analyses are conducted on electricity prices and the sell-to-grid factor (SGF) in order to improve understanding of the impact of key parameters on the economy of residential CHP systems. It is shown that the proposed system can meet all electrical and thermal demands from an economic point of view, and that an increase in the electricity price leads to substantial growth in the utilization of the proposed CHP system.
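
    As a hedged illustration of the optimization engine named above, the following is a compact invasive weed optimization loop applied to a placeholder day-ahead cost function; the decision vector (24 hourly grid-exchange values), the tariff, the demand profile, and all IWO settings are invented and do not come from the paper.

```python
import numpy as np

# Hedged sketch: a compact invasive weed optimization (IWO) loop minimizing a
# toy day-ahead operation-cost function. The real EMS schedules FC/PV/battery
# set-points under utility constraints; here the decision vector is simply 24
# hourly grid-purchase values (an assumption for illustration).
rng = np.random.default_rng(7)
DIM, ITERS = 24, 300
POP0, POP_MAX = 10, 30
S_MIN, S_MAX = 1, 5
SIGMA_INIT, SIGMA_FINAL, N_EXP = 2.0, 0.01, 3

price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, DIM))       # toy tariff
demand = 3.0 + 1.5 * np.sin(np.linspace(0, 2 * np.pi, DIM) - 1.0)  # toy demand

def cost(x):
    """Placeholder cost: energy bought at the tariff plus a penalty for unmet demand."""
    unmet = np.maximum(demand - x, 0.0)
    return float(np.sum(price * np.maximum(x, 0.0)) + 10.0 * np.sum(unmet))

pop = rng.uniform(0.0, 5.0, (POP0, DIM))
for it in range(ITERS):
    fit = np.array([cost(p) for p in pop])
    # Spread of new seeds shrinks over the iterations (standard IWO schedule).
    sigma = SIGMA_FINAL + (SIGMA_INIT - SIGMA_FINAL) * ((ITERS - it) / ITERS) ** N_EXP
    worst, best = fit.max(), fit.min()
    seeds = []
    for plant, f in zip(pop, fit):
        frac = (worst - f) / (worst - best + 1e-12)       # fitter plants seed more
        n_seeds = int(S_MIN + frac * (S_MAX - S_MIN))
        for _ in range(n_seeds):
            seeds.append(plant + rng.normal(0.0, sigma, DIM))
    pop = np.vstack([pop, np.array(seeds)])
    if len(pop) > POP_MAX:                                 # competitive exclusion
        pop = pop[np.argsort([cost(p) for p in pop])[:POP_MAX]]

best_plan = min(pop, key=cost)
print("minimum daily cost:", round(cost(best_plan), 2))
```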

  20. STS 31 PAYLOAD HUBBLE SPACE TELESCOPE ENCLOSED IN AN AIR-TIGHT PLASTIC BAG FOR PROTECTION IN VERTICA

    Science.gov (United States)

    1989-01-01

    Preparations are made to enclose the Hubble Space Telescope [HST] inside an air-tight plastic bag in the VPF. Processing of the 94-inch primary mirror telescope for launch aboard Discovery in March 1990 involves working within strict controls to prevent contamination.